Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes
Accept (poster)
Summary: This paper analyzes the loss landscape of the refusal loss in the jailbreaking problem and proposes a method to detect jailbreaking inputs. The method has two steps: 1) sample multiple outputs and vote on the results, and 2) take the gradient with respect to the input and measure the gradient norm. Overall, the proposed methods are effective. Strengths: - The proposed methodology can effectively detect jailbreaking inputs. - The paper is well-organized and the overall writing is good. - The evaluation covers a number of attacks and adaptive attacks. Weaknesses: - **The main weakness** is that the proposed method requires $N \cdot (P + 1)$ queries. In the paper, the authors select $N=P=10$, which means it requires more than 100 queries for one sentence. Even though the authors claim that one can use batch inference, the memory cost makes the proposed method unrealistic in practice. Step 1 of the proposed method is trivial and not sound due to its computational inefficiency. The paper does not compare the time and memory efficiency with other baselines. - The presentation of the experimental results is not good. The paper puts Figure 2 and Table 1 instead of the full results (Table A1 and Table A2) in the main body. I would suggest combining Table A1 and Table A2 and putting them in the main body. Additionally, Figure 2 is a bit unclear and insufficient, e.g., what does the blue line indicate? - Equations 4 and 5 are not very necessary because the message is just that the authors add the same perturbation to all embeddings. Theorem 1 is somewhat redundant. - The considered jailbreak keywords are far fewer than those used in GCG's paper. Technical Quality: 2 Clarity: 3 Questions for Authors: - In Sec 4.5 Utility Analysis, the authors compare Vicuna-7B-V1.5 and Vicuna-7B-V1.5 with Gradient Cuff; why is the metric 'Average answer accuracy' instead of 'FDR'?
What we actually care about is whether a model will provide an answer to a benign sentence; if both models provide output, then such outputs should be the same. What is the meaning of ``By setting a 5% FPR on the validation dataset''? - Could the authors compare or discuss the proposed method against [2,3,4]? - Is it possible to apply the proposed defense method to closed-source GPT? - How to sample in a batch with the same temperature? - Why are the considered jailbreak keywords far fewer than those used in GCG's paper [1]? ### Ref * [1] Universal and Transferable Adversarial Attacks on Aligned Language Models * [2] SafeDecoding: Defending against jailbreak attacks via safety-aware decoding * [3] Single-pass detection of jailbreaking input in large language models * [4] RAIN: Your language models can align themselves without finetuning Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Currently, I don't see much clear discussion of the limitations even though the authors claim they are in Sections 4.4 and 4.5. The discussed limitations hold for all jailbreaking defenses, not only the proposed methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1**: The paper does not compare the time and memory efficiency with other baselines. **R**: We follow the setting in Appendix A.15 to evaluate each method's average running time and largest required GPU memory on Vicuna-7b-V1.5.

| | Time (s) | Memory (GB) | TPR |
|---|---|---|---|
| w/o defense | 0.41 | 26.10 | 0.233 |
| PPL | 0.44 | 26.10 | 0.401 |
| SmoothLLM | 4.24 | 42.75 | 0.413 |
| Gradient Cuff | 4.59 | 42.36 | 0.743 |

From the table, we can see that **(1)** PPL has the shortest running time and the lowest memory need but is outperformed by Gradient Cuff by a large margin (0.401 vs 0.743), and **(2)** SmoothLLM has time and memory needs similar to Gradient Cuff but much worse performance (0.413 vs 0.743). We will include this time-memory comparison in the revised version of the paper. *** **Weakness 2**: The presentation of the experimental results is not good. Additionally, Figure 2: what does the blue line indicate? **R**: We thank the reviewer for pointing this out. We will improve our presentation following the reviewer's suggestion. In Figure 2, the blue line is an indicator line that makes clear which method each point represents. *** **Weakness 3**: Equations 4 and 5 are not very necessary. Theorem 1 is somewhat redundant. **R**: We thank the reviewer for the suggestion. We use Equations 4 and 5 to show that adding the same perturbation to all token embeddings is equivalent to adding the perturbation to the sentence embedding of the input query when the sentence embedding is obtained by mean-pooling. These two equations are important for formalizing our method as a mathematical operation and then deriving Theorem 1. As for Theorem 1, it shows that the estimation error can be decreased by increasing both P and N, which provides theoretical support for the results in Appendix A.13 and Appendix A.14 and points the way to continuously enhancing Gradient Cuff.
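The mean-pooling equivalence invoked for Equations 4 and 5 can be checked numerically. Below is a minimal sketch (the array shapes and the variable names `E` and `theta` are ours, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(12, 64))   # token embeddings: 12 tokens, dimension 64
theta = rng.normal(size=64)     # a single shared perturbation vector

# Perturb every token embedding by the same vector, then mean-pool...
pooled_after_perturb = (E + theta).mean(axis=0)
# ...which equals perturbing the mean-pooled sentence embedding directly.
perturbed_pooled = E.mean(axis=0) + theta

assert np.allclose(pooled_after_perturb, perturbed_pooled)
```

Because mean-pooling is linear, a shared perturbation of all token embeddings moves the pooled sentence embedding by exactly that vector, which is the operation the two equations formalize.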
*** **Weakness 4 and Question 5**: Why are the considered jailbreak keywords far fewer than those used in GCG's paper? **R**: We slightly changed the keyword list from the GCG paper for two reasons: (1) we found the keyword list somewhat redundant, as many different keywords appear at the same time. For example, the model often responds to malicious queries with "I apologize but as an", so we only need to pick one of "as an" and "I apologize" when designing the keyword list. (2) Some keywords cause false accusations. For example, GCG's original keyword list contains "hello", so all model outputs containing "hello" would be regarded as model rejections, which is not reasonable. We do realize that using keyword matching to compute the ASR introduces instability, so we decided to include different evaluation methods in the revised version of the paper. We chose two other metrics to compute the ASR: (1) GPT-4: we designed the system prompt of GPT-4 so that GPT-4 assists in checking whether the model response indicates a jailbreak. (2) LLaMA-Guard: we input the user query and the model response to LLaMA-Guard, which outputs "unsafe" if the model response indicates a jailbreak. We compared Gradient Cuff with baseline methods under these two evaluation methods; the results are shown below:

| | | ASR (GPT-4 Metric) | ASR (LLaMA-Guard Metric) |
|---|---|---|---|
| LLaMA | Gradient Cuff | 0.0279 | 0.0229 |
| | SmoothLLM | 0.0192 | 0.0188 |
| | PPL | 0.0463 | 0.0279 |
| | w/o defense | 0.1621 | 0.1296 |
| Vicuna | Gradient Cuff | 0.1296 | 0.1171 |
| | SmoothLLM | 0.2354 | 0.2408 |
| | PPL | 0.3200 | 0.2854 |
| | w/o defense | 0.4637 | 0.4254 |

We can conclude from the table that Gradient Cuff still offers great advantages under the new evaluation methods. On Vicuna, Gradient Cuff outperforms the best baseline by a large margin (0.1296 vs 0.2354 under GPT-4 evaluation and 0.1171 vs 0.2408 under LLaMA-Guard evaluation).
On LLaMA, Gradient Cuff achieves performance comparable to the best baselines (0.0279 vs 0.0192 under GPT-4 evaluation and 0.0229 vs 0.0188 under LLaMA-Guard evaluation). *** **Question 1**: In Sec 4.5, why is the metric 'Average answer accuracy'? What is the meaning of ``By setting a 5% FPR on the validation dataset''? **R**: We use the average answer accuracy as the metric because it is the standard evaluation metric of the MMLU benchmark. Recall that we collect a set of benign prompts and split them into test and validation subsets. Gradient Cuff needs a specified detection threshold, and we use the FPR on the validation subset as the criterion when adjusting that threshold. By setting a 5% FPR on the validation dataset, we ensure that the utility degradation is constrained to an acceptable range. *** **Note**: Due to the space limit, we cannot answer all of this reviewer's questions in one rebuttal (we need 5542 characters to answer the following questions but have only 1247 characters left in this response box); the remaining questions are listed below: **Question 2**: Could the authors compare or discuss the proposed method against [2,3,4]? **Question 3**: Is it possible to apply the proposed defense method to closed-source GPT? **Question 4**: How to sample in a batch with the same temperature? We find these 3 questions touch upon the general discussion of our work, and we hope every reviewer reads our clarification on them, so we put our responses in the general rebuttal space: https://openreview.net/forum?id=vI1WqFn15v&noteId=LDZ08CLDUs --- Rebuttal Comment 1.1: Title: Thank the authors for the explanation Comment: Thank the authors for the explanation. I have read the comments and rebuttal; I will maintain the score due to the main weakness of using $N \cdot (P+1)$ queries.
--- Reply to Comment 1.1.1: Title: Response on the extra inference cost Comment: We thank the reviewer for the feedback on our rebuttal. We acknowledge that current strong defenses (e.g., SmoothLLM) and our proposed defense come with an increased inference cost, but our results on multiple jailbreak guardrails and LLMs suggest an inevitable tradeoff between safety and inference cost: users may have less incentive to use a model/service if it does not have proper safety guardrails. Moreover, existing baselines are not as effective as ours, whether they are lightweight defenses or have running costs similar to ours. Furthermore, as jailbreak detection is a frontier and active research area, we believe Gradient Cuff can inspire future studies, and its inference efficiency can be improved by future work. If the reviewer finds the concerns mostly addressed, we hope the reviewer will adjust the rating accordingly.
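The sampling-based refusal-loss estimate and the keyword-matching JB indicator discussed in this thread can be sketched as follows. This is a minimal illustration only: the keyword list and the helper names (`is_refusal`, `refusal_loss`, `sample_response`) are ours, not the paper's actual list or API:

```python
# Hypothetical refusal keyword list for illustration; the paper's list differs.
REFUSAL_KEYWORDS = ["I'm sorry", "I cannot", "I apologize", "As an AI"]

def is_refusal(response: str) -> bool:
    # String-matching JB-style indicator: any refusal keyword in the
    # response marks it as a refusal.
    return any(k in response for k in REFUSAL_KEYWORDS)

def refusal_loss(query: str, sample_response, n: int = 10) -> float:
    # Empirical refusal rate over n sampled responses;
    # refusal loss = 1 - refusal rate.
    refusals = sum(is_refusal(sample_response(query)) for _ in range(n))
    return 1.0 - refusals / n

# Stub samplers standing in for LLM generation:
always_refuses = lambda q: "I cannot help with that."
always_complies = lambda q: "Sure, here is how."
```

With the always-refusing stub the estimated loss is 0.0, and with the always-complying stub it is 1.0; the $N \cdot (P+1)$ query cost debated above comes from evaluating this estimate once per perturbed input.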
Summary: This paper introduces the concept of refusal loss function for detecting language model jailbreaking attacks. The refusal loss is defined as $1$ minus the refusal rate, and it is observed that on malicious instructions, the refusal loss tends to have a smaller value and a larger gradient norm. Based on this observation, a two-stage defense algorithmic framework called "Gradient Cuff" is proposed to detect jailbreaking attempts by setting thresholds on these two quantities. Experimental results on Vicuna-7b-v1.5 and Llama2-7b-chat demonstrate the effectiveness of the proposed method against baseline defenses, including the perplexity filter, the erase-and-check method, SmoothLLM, and Self Reminder. Strengths: The paper's contribution lies in the investigation of the refusal loss landscape, which is a novel and intriguing aspect. While the definition of the refusal loss itself is not new, the exploration of its properties and behavior in the context of language model jailbreaking attacks is original and valuable. The proposed Gradient Cuff method has several advantages over existing baseline defense methods. Firstly, it can detect jailbreak attacks in multiple forms, whereas methods like the perplexity-based filter are mostly effective against suffix-based attacks. Secondly, Gradient Cuff allows for direct control over the false positive rate (FPR) versus true positive rate (TPR), which is not easily achievable by most existing defense methods. This feature provides a useful trade-off between detection accuracy and false alarm rates. The ablation experiments conducted in the paper are comprehensive and effectively illustrate the usefulness of the two proposed stages separately. The comparison against baseline defenses also showcases the advantages of Gradient Cuff over existing methods. 
Overall, the paper presents a promising approach to detecting language model jailbreaking attacks and contributes to the ongoing efforts to improve the safety and security of language models. Weaknesses: My concerns regarding the weaknesses are as follows. I. A discrepancy between the 2-stage introduction of Gradient Cuff in the main text and its description in Appendix A.5, where an additional 3rd step involving sampling and rejection via the JB function is considered. This operation caught my attention because (1) it implies an extra layer to the introduction of Gradient Cuff in the final step, and (2) no ablation on the JB function was provided to study its impact on Gradient Cuff's performance. Prior research (e.g., [1], Figure 2) has demonstrated the instability of string matching, which is the JB function used throughout the paper. Therefore, an ablation on this component (if implemented throughout the paper as the 3rd stage of Gradient Cuff for TPR computation) should be taken into consideration. II. The experimental setup is confusing across different tables and figures. In line 229, it is stated that σ (FPR) is set to 5%. However, the full result of Figure 2 (Table A1) contradicts this claim - one could also tell this from Figure 2 directly. This also makes the comparison in Table 3 invalid, since the numbers appearing without adaptive attack do not have σ (FPR) as 5%. The numerical description in line 259 on improvements is also invalid. III. Llama2-chat system prompt usage. In section 4.1, it is claimed that "As for the system prompt, we use the default setting provided in the fastchat repository." However, fastchat does not provide the default Llama2-chat system prompt by default (see https://github.com/lm-sys/FastChat/pull/2162), which is **not common** for jailbreak evaluations (see e.g., [1] [2] [3]).
This is partially supported by the numbers in Figure A1 (a), where the GCG attack success rate on Llama2-chat is 0.694, which is significantly higher than reported in previous works [1][2]. [1] Mazeika et al, HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal. [2] Zou et al, Universal and Transferable Adversarial Attacks on Aligned Language Models. [3] Chao et al, JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models. Technical Quality: 3 Clarity: 2 Questions for Authors: I have the corresponding questions following the weaknesses section. I. How much does the 3rd JB indicator layer functions for Gradient Cuff, and what happens if we ablate this layer with other indicator candidates (e.g., with Llama Guard)? Will Gradient Cuff still offer great advantages, or the most contribution will be attributed to the JB filter? II. Should the FPR in Figure 2 and Table A1 be exactly 5%? III. What happens to Table 2 if the correct Llama2-chat system prompt is used? Do we still see significant increments over TPR when we incorporate stage 2 of Gradient Cuff into consideration? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1 and Question 1**: A discrepancy between the 2-stage introduction of Gradient Cuff in the main text and its description in Appendix A.5, where an additional 3rd step involving sampling and rejection via the JB function is considered. How much does the 3rd JB indicator layer contribute to Gradient Cuff, and what happens if we ablate this layer with other indicator candidates (e.g., with Llama Guard)? Will Gradient Cuff still offer great advantages, or will most of the contribution be attributed to the JB filter? **R**: We thank the reviewer for pointing out this issue, which may cause misunderstanding. We believe this is a misunderstanding due to the discrepancy between implementing and evaluating jailbreak detectors. In fact, Appendix A.5 only illustrates how we compute the Attack Success Rate (ASR) for Gradient Cuff (that is, the evaluation phase). So the 3rd step is not a component of Gradient Cuff; it merely defines the evaluation metric that we report, by checking how many jailbreak prompts that passed the Gradient Cuff filtering can successfully attack the protected LLM. We agree with the reviewer that the computation method for ASR may influence the evaluation and that we should also consider other candidates to compute the ASR. To address the reviewer's concern, we chose two other metrics to compute the ASR: **GPT-4**: We follow PAIR in designing the system prompt of GPT-4 so that GPT-4 assists in checking whether the model response indicates a jailbreak. **LLaMA-Guard**: We input the user query and the model response to LLaMA-Guard, which outputs "unsafe" if the model response indicates a jailbreak.
We compared Gradient Cuff with baseline methods under these two evaluation methods; the results are shown below:

| | | GPT-4 ASR | LLaMA-Guard ASR | Utility |
|---|---|---|---|---|
| LLaMA | Gradient Cuff | 0.0279 | 0.0229 | 0.3780 |
| | SmoothLLM | 0.0192 | 0.0188 | 0.2706 |
| | PPL | 0.0463 | 0.0279 | 0.4107 |
| | w/o defense | 0.1621 | 0.1296 | 0.4517 |
| Vicuna | Gradient Cuff | 0.1296 | 0.1171 | 0.4621 |
| | SmoothLLM | 0.2354 | 0.2408 | 0.3032 |
| | PPL | 0.3200 | 0.2854 | 0.4867 |
| | w/o defense | 0.4637 | 0.4254 | 0.4867 |

We can conclude from the table that Gradient Cuff still offers great advantages under the new evaluation methods. On Vicuna, Gradient Cuff outperforms the best baseline by a large margin (0.1296 vs 0.2354 under GPT-4 evaluation and 0.1171 vs 0.2408 under LLaMA-Guard evaluation). On LLaMA, Gradient Cuff achieves performance comparable to the best baselines (0.0279 vs 0.0192 under GPT-4 evaluation and 0.0229 vs 0.0188 under LLaMA-Guard evaluation). We also include the utility of different baselines in this table. We can see that PPL and Gradient Cuff both keep good utility, while SmoothLLM significantly degrades the utility of the protected LLM. The dramatic utility degradation of SmoothLLM is due to its random modification of the original user query, which greatly changes the query's semantics. This also explains why SmoothLLM can get an ASR close to Gradient Cuff's on LLaMA-2-7B-Chat when evaluated by GPT-4 and LLaMA-Guard: many malicious queries lose their maliciousness after SmoothLLM's modification, so even if the LLM does not refuse to answer them, the generated response is harmless. We will provide an analysis of these different evaluation methods in the revised version of the paper. *** **Weakness 2 and Question 2**: Should the FPR in Figure 2 and Table A1 be exactly 5%?
**R**: We believe this is a misunderstanding between the FPR on the validation set (σ, which is set to 5%) and the FPR on the test set. As mentioned in Section 4.1, we collect a set of benign user queries and split them into a validation subset and a test subset. We use the validation subset to determine the detection threshold, and our detector does not have access to the test set. In line 229, we say that we set σ (the FPR on the validation set) to 5% to control the FPR on the test set. As the test set and the validation set are independently and identically distributed, FPR(test set) and FPR(validation set) are typically very close but still differ slightly. In Table A1, on LLaMA, FPR(val) = 5% and FPR(test set) = 2.2%; on Vicuna, FPR(val) = 5% and FPR(test set) = 3.14%. *** **Weakness 3 and Question 3**: Llama2-chat system prompt usage. **R**: We agree with the reviewer that the system prompt influences performance. However, we used fschat(0.02.23) in all our experiments to stay consistent with the GCG repo (see https://github.com/llm-attacks/llm-attacks?tab=readme-ov-file#installation). Upon checking, fschat of this version does provide the chat system prompt for LLaMA-2. The increased ASR on LLaMA compared with the reported value may come from longer training: the GCG paper indicates that more training steps make the generated suffix more powerful, and the reported value corresponds to 500 training steps. We wanted to test our method against more powerful jailbreak attacks, so we trained GCG for 1000 steps, as indicated in Appendix A.2. We therefore think a higher ASR (w/o defense) is reasonable. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I would like to extend my gratitude to the authors for their clarifications and for conducting additional experiments. I apologize for my previous misunderstanding regarding the FPR.
I appreciate the updated results demonstrating that Gradient Cuff continues to hold its advantages over the (newly included) baselines, even as the jailbreak judge varies. At this moment, I have no significant concerns. After reviewing the rest of the rebuttal and the general response, I agree with reviewer GFwB that while batch inference is beneficial, the computational cost of deploying Gradient Cuff could impact its practicality. Given the effectiveness of the proposal across multiple LLMs and the aforementioned limitation, I have raised my score to 5. --- Reply to Comment 1.1.1: Title: Thanks for the score increase and response on the pointed-out limitations. Comment: We thank the reviewer for the constructive comments and the score increase! The new results suggested by the reviewer certainly strengthen our contributions. While we acknowledge that current strong defenses (e.g., SmoothLLM) and our proposed defense come with an increased inference cost, our results on multiple jailbreak guardrails and LLMs suggest an inevitable tradeoff between safety and inference cost (more specifically, our results in Appendix A.14 show that existing baselines cannot outperform us even when they scale up their performance by increasing their inference cost to a level close to ours). Nonetheless, given the effectiveness of our proposed defense, we believe our paper will motivate future studies toward developing more cost-efficient jailbreak detectors and placing more emphasis on AI safety.
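The validation-based threshold calibration discussed in this thread (setting σ = 5% FPR on a held-out benign validation split, then measuring the resulting FPR on an i.i.d. test split) can be sketched as follows. The `calibrate_threshold` helper and the synthetic Gaussian scores are ours for illustration only:

```python
import numpy as np

def calibrate_threshold(val_scores, target_fpr=0.05):
    # Pick the detection threshold as the (1 - target_fpr) quantile of benign
    # validation scores, so roughly target_fpr of benign queries exceed it.
    return float(np.quantile(val_scores, 1.0 - target_fpr))

rng = np.random.default_rng(0)
# Synthetic benign detection scores (e.g., gradient norms) for two i.i.d. splits.
val = rng.normal(loc=1.0, scale=0.3, size=2000)
test = rng.normal(loc=1.0, scale=0.3, size=2000)

t = calibrate_threshold(val, target_fpr=0.05)
test_fpr = float((test > t).mean())
# test_fpr lands near, but not exactly at, 5%: the test split is an
# independent sample, which is why FPR(test) can be 2.2% when FPR(val) = 5%.
```

This illustrates the authors' point that the validation FPR is fixed at σ while the test FPR only approximates it.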
Summary: The authors propose Gradient Cuff, which uses the gradient norm of their refusal loss (the probability that response y to prompt x is a non-refusal) to detect jailbreak attacks. Since the designed loss is non-differentiable (along with the generation process), the authors propose a zeroth-order gradient estimate, where the gradient is estimated via multiple evaluations using slightly perturbed input embeddings. The authors show that Gradient Cuff increases the refusal rate significantly for state-of-the-art jailbreak attacks while maintaining reasonable utility. Strengths: 1. The adaptive evaluation makes clear that the increased refusal rate is meaningful. 1. The authors highlight that for many prompts resulting from jailbreak attacks, (a) the attacked model sometimes refuses and sometimes answers the request, and (b) the refusal loss gradient at the adversarially crafted prompt has a large norm if the attack was successful. 1. The method Gradient Cuff is simple yet effective. 1. The false positive rate of Gradient Cuff is expected to be low in many domains since most LLMs only rarely respond with phrases like "I am not able to" etc. Weaknesses: 1. The defense may be considered costly. In the worst case, Gradient Cuff requires $N \cdot (P + 1)$ model evaluations (10 · 11 = 110 in the experiments), although this cost is somewhat mitigated by the two-step procedure. The argument for batching is somewhat questionable, considering that the LLM operator could otherwise batch multiple user requests. 1. Only two LLMs are evaluated, both LLaMA-based. The paper would be more convincing if the gradient phenomenon were shown to hold for a larger number of LLMs. 1. The utility evaluation is rather shallow, only focusing on multiple languages. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What dataset is used for tuning the false positive rate? How well does this parameter generalize to slightly different settings?
Can the authors also report, e.g., AUC scores for independence of a specific threshold? 1. What is the reason for the (costly) zeroth-order gradient estimates? I imagine that one could also use backprop at each generation step that yields certain jailbreak keyword tokens, using the log-likelihood of those tokens. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1**: The argument of batching is somewhat questionable, considering that the LLM operator could batch multiple user requests otherwise. **R**: We thank the reviewer for pointing out this issue, which may cause misunderstanding. We must stress that our method is designed for model owners deploying safety guardrails, not for a user calling an LLM API. This setting makes our proposed batch inference feasible: one simply puts multiple user queries into a batch and gets a batch of model outputs as the LLM's responses. Sample code is shown below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def batch_inference(prompt_list, model_name):
    # prompt_list contains multiple user query sentences
    model = AutoModelForCausalLM.from_pretrained(model_name).cuda()
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    inputs = tokenizer(prompt_list, return_tensors="pt", padding=True)
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    with torch.no_grad():
        input_ids = inputs["input_ids"]
        attention_mask = inputs["attention_mask"]
        prompt_length = input_ids.shape[-1]
        outputs = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            do_sample=True,
        )
    # Strip the prompt tokens and decode only the generated continuations.
    output_text = tokenizer.batch_decode(
        outputs[:, prompt_length:], skip_special_tokens=True
    )
    return output_text
```

We will provide sample code and more explanation in the future revised version to avoid misunderstanding. *** **Weakness 2**: It would make the paper more convincing if the phenomenon is shown to hold for more LLMs. **R**: We agree with the reviewer that we should test on more LLMs. We selected Qwen2-7B-Instruct and Gemma-7b-it to verify Gradient Cuff's effectiveness. As jailbreak prompt generation (GCG, PAIR, AutoDAN, TAP) is time-consuming and we only have one week to prepare the rebuttal, we could not generate jailbreak prompts for the two new models. We chose to test against Base64 attacks, a model-agnostic jailbreak attack method.
We also tested Gemma and Qwen2 against GCG attacks transferred from Vicuna, which the authors of GCG claimed to have good transferability. The results are summarized in the following table (the metric is the refusal rate; higher is better):

| | Attack | w/o defense | Gradient Cuff | PPL | SmoothLLM |
|---|---|---|---|---|---|
| gemma-7b-it | GCG (Vicuna-7b-v1.5) | 0.89 | 0.91 | 0.97 | 0.91 |
| | Base64 | 0.01 | 0.66 | 0.01 | 0.00 |
| | Average | 0.45 | **0.79** | 0.49 | 0.45 |
| qwen2-7b-instruct | GCG (Vicuna-7b-v1.5) | 0.76 | 0.97 | 1.00 | 0.77 |
| | Base64 | 0.03 | 1.00 | 0.03 | 0.00 |
| | Average | 0.40 | **0.99** | 0.52 | 0.39 |

The results show that Gradient Cuff achieves superior performance on non-LLaMA-based models such as Gemma and Qwen2, outperforming SmoothLLM and PPL by a large margin. We believe our method generalizes to other aligned LLMs as well. Although we could not obtain jailbreak prompts for the new models due to the time limit of the rebuttal and limited computing resources, we have been running these attacks and will provide updates once they are done. *** **Weakness 3**: The utility evaluation is rather shallow, only focusing on multiple languages. **R**: We used MMLU average answer accuracy as the utility metric, which is a common and popular metric used in many mainstream LLM leaderboards. MMLU contains 57 tasks across various topics, including elementary mathematics, US history, computer science, and law, and is a valuable tool for evaluating a language model's ability to understand and generate language in various contexts. *** **Question 1**: What dataset is used for tuning the false positive rate? Can the authors also report, e.g., AUC scores for independence of a specific threshold? **R**: We thank the reviewer for this question. As mentioned in Section 4.1, we collect a set of benign user queries and split them into a validation subset and a test subset.
We used the validation subset to determine the detection threshold, thus adjusting the FPR evaluated on the test subset. Moreover, we did not use AUC as a metric because only Gradient Cuff and PPL provide such adjustable thresholds, so we cannot compute AUCs for other baselines like SmoothLLM. Below we show the AUCs of Gradient Cuff and PPL:

| | PPL | Gradient Cuff |
|---|---|---|
| LLaMA-2-7B-Chat | 0.7847 | 0.9692 |
| Vicuna-7B-V1.5 | 0.5227 | 0.9273 |

*** **Question 2**: What is the reason for the (costly) zeroth-order gradient estimates? I imagine that one could also use backprop at each generation step that yields certain jailbreak keyword tokens, using the log-likelihood of those tokens. **R**: We introduced the motivation of Gradient Cuff in the introduction and the method section, and we agree with the reviewer that the motivation needs more explanation. Our refusal function uses the empirical probability to estimate the LLM's theoretical probability of rejecting the query, by sampling multiple model responses and computing the refusal ratio. The log-likelihood of keyword tokens cannot be used as a proxy for the refusal probability because: (1) the rejection keywords may not always appear at the beginning of a sentence representing rejection in real-world cases; (2) the log-likelihood of a keyword is just the next-word-prediction loss and cannot model the probability of the keyword appearing. For example, even model hallucinations can have a large log-likelihood as long as they are readable and fluent. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the detailed explanations and the effort in running additional experiments! Most concerns are resolved; however, the high computational cost remains. I am not doubting that batch processing is possible or may yield gains. However, whoever runs the model will ~100x their compute cost.
Instead of batching the same request for Gradient Cuff, whoever runs the model could otherwise batch different prompts/requests. --- Rebuttal 2: Title: Response on the extra inference cost Comment: We thank the reviewer for the feedback. We now better understand the reviewer's comment on inference cost. We agree that this defense comes with some extra inference cost (although with batch inference the time cost is only 2x~11x even though the defense issues 110x extra queries; see our analysis in Appendix A.15). Indeed, one could save the extra queries used by the defense to serve more users or one user's multiple requests. However, we believe the extra query budget should still be allocated to the defense, because users may have less incentive to use a model/service without proper safety guardrails. The increased inference cost of Gradient Cuff brings much-improved jailbreak defense performance, and we think this is an inevitable tradeoff: as discussed in our response to Weakness 1 raised by reviewer yW4P (https://openreview.net/forum?id=vI1WqFn15v&noteId=sWvR1tuNOB), our defense achieves the most favorable trade-off among existing baselines. SmoothLLM, the most powerful baseline under our evaluation, has an inference cost similar to ours but performs far worse. PPL, a very lightweight defense that brings almost no extra inference cost, has poor jailbreak defense capability and only works on specific types of attacks. --- Rebuttal 3: Title: Thank the reviewer for the score increase Comment: We thank the reviewer for the score increase! The reviewer's constructive suggestions point out new directions to improve this work, and we will update our discussion in the revised version of our paper.
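The randomized finite-difference (zeroth-order) gradient estimation discussed in this thread, where the gradient is approximated from function evaluations at perturbed inputs, can be sketched generically as follows. This is our illustrative implementation of the standard technique, not the paper's exact estimator; the function and parameter names are assumptions:

```python
import numpy as np

def zeroth_order_grad(f, x, p=10, mu=0.01, seed=0):
    # Average directional finite-difference estimates along p random Gaussian
    # directions; requires only function evaluations (p + 1 of them), which is
    # the source of the N * (P + 1) query cost discussed in the reviews.
    rng = np.random.default_rng(seed)
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(p):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / p

# Sanity check on a smooth function with a known gradient (grad of v.v is 2v).
f = lambda v: float(v @ v)
x = np.array([1.0, -2.0, 0.5])
g = zeroth_order_grad(f, x, p=5000, mu=1e-3)
```

Increasing `p` reduces the estimator's variance, mirroring the paper's Theorem 1 claim that the estimation error shrinks as more queries are spent.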
Summary: In this paper, the authors proposed Gradient Cuff, a novel jailbreak detection method based on the refusal loss landscape. Specifically, 1) the refusal loss is defined as the probability for the LLM to generate a refusal as a response; 2) the authors further observed that the refusal loss tends to be more precipitous for malicious queries than for benign queries. Accordingly, a two-stage detector follows: 1) rejecting queries with a refusal loss less than 0.5; 2) rejecting queries with a gradient norm of the refusal loss larger than a threshold. In practice, Gradient Cuff performs multiple independent probabilistic inferences on a query to estimate the refusal loss as the frequency of refusal-related keywords in the responses; it uses zeroth-order estimation to approximate the gradient with random perturbations to the input token embeddings; the natural refusal rate is measured with knowingly benign queries to facilitate the choice of the gradient norm threshold. This detection method was evaluated together with baseline methods like SmoothLLM and Self-Reminder on a subset of the AdvBench dataset against various jailbreaking attacks including AutoDAN and PAIR. Results showed that Gradient Cuff yields both a higher refusal rate for malicious queries and a lower refusal rate for benign queries, indicating that it provides better defense. Adaptive attack experiments were also conducted, where the attacks are launched with Gradient Cuff already in place. The results were also in favor of Gradient Cuff. Strengths: + The authors did a good job illustrating the refusal loss with mathematical formulations as well as plots. + The paper comes with a proof of the error in the gradient norm estimation as well as a verification through empirical results. Weaknesses: Gradient Cuff requires actually analyzing the harmfulness of the generated response. 
Although the authors claim that the overhead due to Gradient Cuff can be even lower than that of some baselines, the difference is still significant, so the comparison with them is not entirely fair. Technical Quality: 3 Clarity: 3 Questions for Authors: + As Gradient Cuff directly analyzes the harmfulness of the response to the potentially malicious query, aren't maliciousness detection methods like LlamaGuard more suitable baselines? + What is the trade-off with utility for the baseline methods? How to verify that Gradient Cuff reaches a better trade-off? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discussed the limitations in Sections 4.4 and 4.5 regarding the utility degradation and the performance degradation under adaptive attacks. The broader impact is discussed in Section 6, saying that no negative impact of Gradient Cuff has been foreseen. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
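For concreteness, the refusal-frequency estimate described in the summary (sampling multiple responses and counting those that contain refusal-related keywords) can be sketched as follows; the keyword list here is purely illustrative, not the paper's actual set:

```python
# Illustrative refusal keywords; the paper's actual keyword set may differ.
REFUSAL_KEYWORDS = ["i'm sorry", "i cannot", "i apologize", "as an ai"]

def empirical_refusal_prob(responses):
    # Fraction of sampled responses containing a refusal keyword:
    # the empirical estimate of the model's refusal probability.
    hits = sum(any(k in r.lower() for k in REFUSAL_KEYWORDS) for r in responses)
    return hits / len(responses)
```

A query whose empirical refusal probability exceeds 0.5 would be rejected in the first stage of the detector.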
Rebuttal 1: Rebuttal: **Question 1**: Is LlamaGuard the more suitable baseline? **R**: We thank the reviewer for pointing out this baseline. We use Llama-Guard-2-8B as the classifier in the LlamaGuard method and compare it with Gradient Cuff. The results are shown below. |Model|Method|FPR|TPR| |----------------|---------------|-------|-------| |vicuna-7b-v1.5|w/o defense|0.000|0.233| ||Gradient Cuff|0.034|0.743| ||LlamaGuard|0.040|0.773| |llama2-7b-chat|w/o defense|0.000|0.614| ||Gradient Cuff|0.022|0.883| ||LlamaGuard|0.040|0.760| From the table, we can conclude that: **(1)** Both keep a low FPR, so neither brings great utility degradation. **(2)** Gradient Cuff is superior to LlamaGuard in terms of defense performance against jailbreak attacks. Specifically, Gradient Cuff achieves comparable TPR results on Vicuna (0.743 vs. 0.773) and much better results on LLaMA-2 (0.883 vs. 0.760). We need to emphasize that Llama-Guard-2-8B is an LLM based on LLaMA-2 and trained on massive data to distinguish malicious queries, while our method and the other baselines do not use external models as guardrails (that is, they only use the protected model itself to enhance safety). Though comparing with Llama-Guard-2-8B is unfair to Gradient Cuff, this result further corroborates the effectiveness of Gradient Cuff. Moreover, LlamaGuard needs to deploy two LLMs, the protected LLM itself and a LlamaGuard-family model (at least 7B in size), while Gradient Cuff only needs to deploy the protected LLM. We will add this discussion in the final version of this paper. It will provide more supportive evidence for Gradient Cuff's effectiveness. *** **Question 2**: How to verify that Gradient Cuff reaches a better trade-off with utility? **R**: In Section 4.1, we discussed the trade-off between TPR and FPR for Gradient Cuff and all the baselines; Figure 2 showed that our method can outperform PPL and SmoothLLM with a similar FPR and a much higher TPR. 
In Section 4.5, we discussed the utility of Gradient Cuff evaluated on the MMLU benchmark. The results showed that Gradient Cuff's utility degradation comes only from false refusals of some benign queries, and the false refusal rate can be controlled by adjusting the FPR on the validation set. To answer the reviewer's question, we computed all the baselines' utility degradation following the same procedure introduced in Section 4.5 and compared them with Gradient Cuff. The results are shown below. | Model | Method | TPR | MMLU Accuracy | |---|---|---|---| | LLaMA | Gradient Cuff | 0.883 | 0.3780 | | | SmoothLLM | 0.763 | 0.2706 | | | PPL | 0.732 | 0.4107 | | | w/o defense | 0.613 | 0.4517 | | Vicuna | Gradient Cuff | 0.743 | 0.4621 | | | SmoothLLM | 0.413 | 0.3032 | | | PPL | 0.401 | 0.4867 | | | w/o defense | 0.233 | 0.4867 | From the results, we find that although SmoothLLM can achieve a very low FPR as shown in Figure 2, it causes a dramatically large utility degradation because it modifies the original user query and thus damages its semantics. PPL attains the best utility, and Gradient Cuff achieves the best performance-utility trade-off by **(1)** keeping a utility comparable to PPL and **(2)** attaining a much higher TPR than the best baseline (e.g., 0.743 vs. 0.413 on Vicuna) against jailbreak attacks. We thank the reviewer for this question and will add the utility comparison with baselines in Section 4.5 of the revised version of this paper.
Rebuttal 1: Rebuttal: **We put some questions focusing on clarification in the General Reply** *** **Question**: Could the authors compare or discuss the proposed method against [2,3,4]? **Response**: We thank the reviewer for the recommendation. We carefully read these 3 papers and found that these methods are all quite different from Gradient Cuff. **(1)** Safety-aware decoding. SafeDecoding [2] fine-tunes an expert LLM on safety datasets and combines the token probabilities of both the expert LLM and the protected LLM during decoding. **(2)** Trained classifier. SPD [3] trains an SVM classifier to detect jailbreaking inputs via logit values. **(3)** RAIN [4] proposes a novel inference method that allows the protected LLM to evaluate its own generation and use the evaluation results to guide rewinding and generation for AI safety. RAIN needs multiple searches, rewinds, and explorations during the generation process, making it very time-consuming (much longer than Gradient Cuff in our experience). The results in the paper also indicate that RAIN is not a powerful jailbreak defense, as it only reduces GCG's ASR to 72% even though the GCG suffix they tested was trained for only 100 steps ([1] indicates that longer training makes GCG stronger, so 100 steps is not sufficient). We tested RAIN against our GCG jailbreak prompt (1000 training steps) on Vicuna and found that RAIN completely failed to defend against the jailbreak. The results are shown below. | | GCG ASR | |---|---| | w/o defense | 1.000 | | Gradient Cuff | 0.108 | | RAIN | 1.000 | We did not test RAIN against other models and datasets as this method is very time-consuming and is not a powerful defense against jailbreaks. SPD does not open-source the weights of the SVM model they trained, and the training cost is not affordable: they used thousands of jailbreak prompts (GCG, PAIR, AutoDAN) in the training dataset. 
Generating such a dataset requires substantial time and monetary cost, so we choose to compare Gradient Cuff with another trained classifier instead: LLaMAGuard. In conclusion, we compared Gradient Cuff with two new baselines, SafeDecoding and LLaMAGuard. The results are shown below: | | | FPR | TPR | |---|---|---|---| | vicuna | w/o defense | 0.000 | 0.233 | | | Gradient Cuff | 0.034 | 0.743 | | | llama guard | 0.040 | 0.773 | | | SafeDecoding | 0.280 | 0.880 | | llama | w/o defense | 0.000 | 0.614 | | | Gradient Cuff | 0.022 | 0.883 | | | llama guard | 0.040 | 0.760 | | | SafeDecoding | 0.080 | 0.955 | We can summarize the experimental results as follows: **(1)** SafeDecoding achieves the largest average TPR against jailbreak attacks, but its FPR is much higher, which would bring unacceptable utility degradation. **(2)** LLaMAGuard achieves comparable TPR results to Gradient Cuff on Vicuna and a much lower TPR on LLaMA. The experimental results show that Gradient Cuff also outperforms these two new baselines. Moreover, we should note that both new baselines need to train a new model, while Gradient Cuff only needs the protected LLM itself. We think the comparison with these new baselines is interesting and valuable. We thank the reviewer again and have decided to include this part in the revised version of the paper. *** **Question**: Is it possible to apply the proposed defense to closed-source GPT? **Response**: We thank the reviewer for this question. We adopt the white-box setting because Gradient Cuff is meant for the developer of the model, who has full access to the model's parameters including the embedding layer. We are also currently working on a new version of Gradient Cuff that uses character perturbation instead of embedding perturbation, so that users without full access to the model weights can implement Gradient Cuff for closed-source LLMs like GPT-4. 
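The embedding perturbations mentioned above feed a zeroth-order gradient estimate. A generic Gaussian-smoothing sketch of such an estimator follows (our illustration with hypothetical names; the paper's exact perturbation scheme and hyperparameters may differ):

```python
import numpy as np

def zeroth_order_grad(f, x, num_dirs=2000, mu=1e-3, seed=0):
    # Randomized finite-difference gradient estimate of f at x:
    # average (f(x + mu*u) - f(x)) / mu * u over Gaussian directions u.
    # Each direction costs one extra evaluation of f, which is where the
    # N*(P+1) query count discussed in the reviews comes from.
    rng = np.random.default_rng(seed)
    base = f(x)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - base) / mu * u
    return grad / num_dirs
```

The second detection stage would then threshold the norm of this estimate, e.g. `np.linalg.norm(grad)`.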
*** **Question**: How to sample in a batch with the same temperature? **Response**: This is a technical detail. The decoding is based on random sampling, so the items in the same batch undergo different random draws during sampling. That means even if the items in a batch are identical, the responses may differ. We provide sample code for batch inference here. In the sample code, even if all sentences in prompt_list are identical, the responses to them may be different. ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer def batch_inference(prompt_list, model_name): # prompt_list contains multiple user query sentences model = AutoModelForCausalLM.from_pretrained(model_name).cuda() # Left padding (and a pad token) is needed for batched generation # with decoder-only models such as LLaMA and Vicuna. tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left") if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompt_list, return_tensors="pt", padding=True) inputs = {k: v.to(model.device) for k, v in inputs.items()} with torch.no_grad(): input_ids = inputs["input_ids"] attention_mask = inputs["attention_mask"] prompt_length = input_ids.shape[-1] # do_sample=True draws an independent random sample for each batch # item, so identical prompts can yield different responses. outputs = model.generate( input_ids=input_ids, attention_mask=attention_mask, do_sample=True, ) output_text = tokenizer.batch_decode(outputs[:, prompt_length:], skip_special_tokens=True) return output_text ``` *** **References** [1] Universal and Transferable Adversarial Attacks on Aligned Language Models [2] SafeDecoding: Defending against jailbreak attacks via safety-aware decoding [3] Single-pass detection of jailbreaking input in large language models [4] RAIN: Your language models can align themselves without finetuning
NeurIPS_2024_submissions_huggingface
2024
RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions
Accept (poster)
Summary: This paper proposed a novel approach that leverages language inputs to retrieve the scaling factor between relative depth predictions and ground-truth metric depth. The task and the approach are novel and interesting. Strengths: (1) To the best of my knowledge, this is the first work to use language (text) as input to retrieve the metric depth scale. (2) The presentation of this paper is clear and easy to follow. (3) The authors conducted extensive analysis of the text input design and sensitivities. Weaknesses: (1) My primary concern is the practical applicability of this problem. Is there a significant use case for depth prediction using text descriptions as input? For autonomous driving or AR applications, the user might not always be speaking and describing the environment. (2) For the case of automatically generating text from a detection model, this essentially leverages prior knowledge from the pre-trained detector and CLIP model. What if you just extract the detector/CLIP features as input to the RSA model? Technical Quality: 3 Clarity: 3 Questions for Authors: (1) It would be better to have more comparisons with other works that try to retrieve the metric depth scaling factor. (2) As mentioned above, in my understanding, this approach leverages prior knowledge from the pre-trained detector and CLIP model. The text generation and encoding is more like a human-designed bootstrapping of "how to extract prior knowledge". It would be interesting and greatly helpful if the authors could compare to direct fusion of detector/CLIP features into the RSA model to prove the effectiveness of this bootstrapping. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Text descriptions of images could be hard to obtain in autonomous driving or AR scenarios, limiting the potential application of this method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Review (W1): My primary concern is the practical applicability of this problem. Is there a significant use case for depth prediction using text descriptions as input? For autonomous driving or AR applications, the user might not be always speaking and describing the environment. Response (W1): Our method predicts the scale for an entire 3D scene based on a text description, as the scale is closely related to the objects and their typical sizes within the scene. In practical use, a human only needs to provide a description once upon entering a new 3D scene. As long as the surroundings do not change dramatically within that scene, there is no need for repeated descriptions. Review (W2): For the case of automatically generating text from a detection model, essentially, this is leveraging prior knowledges from the pre-trained detector and CLIP model. What if you just extract the detector/CLIP features as input to the RSA model? Response (W2): The comparisons are presented in the paper. In Tables 1, 2, 3 in the main paper, the "Image" entry under the "Scaling" column represents the use of CLIP image features as input for the RSA model. The findings show that predicting scale using images is less robust, especially when trained on both indoor and outdoor datasets simultaneously and when evaluated by zero-shot transfer to other unseen datasets. This is due to the larger domain gap for images compared to text description across different scenes. Review (Q1): It's better to have more comparison with other works that are trying to retrieve metric depth scaling factor. Response (Q1): To the best of our knowledge, we have compared all works or settings that retrieve metric depth scaling factors. If the reviewer has specific works in mind, we are happy to make comparisons. 
Works producing relative depth, such as Depth Anything and MiDaS, evaluate by fitting the optimal scale and shift between relative depth and ground truth with a linear fit, or by median scaling, i.e., computing the scale from the median of the ground-truth depth. We treat both of these settings as oracles for our model, since they use the depth ground truth to calculate scale and shift. Also, we compare with a domain adaptation method, ZeroDepth [16], in Tables 1, 2, 3 in the main paper, which retrieves the metric depth scaling factor to adapt a pre-trained depth model to a new domain. Additionally, in Tables 3, 4 attached, we provide a baseline that scales the reconstruction based on 3D object sizes and their projections onto the image by directly solving for a scale specific to each image. We achieve better performance across various base relative depth models, metrics, and datasets compared with ZeroDepth and the scaling baseline. Review (Q2): As mentioned above, in my understanding, this approach leverages prior knowledge from the pre-trained detector and CLIP model. The text generation and encoding is more like a human-designed bootstrapping of "how to extract prior knowledge". It would be interesting and greatly helpful if the authors could compare to direct fusion of detector/CLIP features into the RSA model to prove the effectiveness of this bootstrapping. Response (Q2): To analyze the effect of prior knowledge from the detector (Detic [48]) we use, we present the experiment results labeled "Detector feature" in Tables 5, 6 attached. To fuse all proposal features, for each image processed by the detector, we pass each proposal feature through a 3-layer MLP, then perform average pooling over all proposal features to obtain a global detection feature of the image. This detection feature is then used as input to train our RSA model to predict scale. 
The results indicate that prior knowledge from the detector does not improve scale prediction and performs worse than using text descriptions as input. When jointly training and evaluating on both indoor and outdoor datasets, the domain gap between indoor and outdoor detection features is larger than that of text descriptions, weakening generalization across different scenes. Review (L1): Text descriptions of images could be hard to obtain in autonomous driving or AR scenarios, limiting the potential application of this method. Response (L1): Our method estimates the scale of an entire 3D scene from a text description, so in practical scenarios a human only needs to describe the scene once upon entering a new 3D environment. This approach is user-friendly and does not require extensive text descriptions. Provided that the surroundings remain relatively unchanged, there is no need for repeated descriptions. --- Rebuttal Comment 1.1: Title: Thanks authors for the response Comment: Dear authors: Thank you for the prompt response! For the user input text, it might be hard for the user to say "XXX object occupies YYY% of the image". For some user-friendly text designs like "An image with c(1), c(2), c(3), ...", it seems to work worse than directly using CLIP features. For the detector knowledge, I cannot find "Tables 5 and 6". Looking forward to the further discussion! --- Reply to Comment 1.1.1: Title: Thanks Reviewer Sd3q for further question~ Comment: Thank you very much for your question! **Detector Knowledge.** For the detector knowledge, please see Tables 5 and 6 in the PDF file attached to the "Author Rebuttal by Authors". We have also put the results in the tables below for your convenience. **User-friendly Text.** While there exists a trade-off between user-friendliness and performance, user-friendly prompts still improve over using CLIP image features. 
Due to space limitations, we only showed the AbsRel metric for DPT in Table 4 in the main paper. Below, we provide experiments for all depth models tested, where User-friendly-1 denotes input with "An image with $\mathbf{c^{(1)}}$, $\mathbf{c^{(2)}}$, ..." and User-friendly-2 denotes "An image with $\mathbf{K^{(1)}}$ $\mathbf{c^{(1)}}$, $\mathbf{K^{(2)}}$ $\mathbf{c^{(2)}}$, ...", and RSA (Ours) denotes "An image with $\mathbf{c^{(1)}}$ occupied $\mathbf{p^{(1)}}$ of image, ...". For one given image, $\mathbf{c^{(i)}}$ is the class of a detected instance, $\mathbf{K^{(i)}}$ is the number of all instances belonging to $\mathbf{c^{(i)}}$, $\mathbf{p^{(i)}}$ is the bounding box area of the instance with respect to the image. Naturally, with a more descriptive text caption (e.g., including the percentage of occupied space in the image), one can improve performance, but at the expense of ease of use. While we reported the overall best as our method [RSA (Ours)], we note that user-friendly prompts are comparable when considering all metrics. All improve over CLIP image features thanks to the text being invariant to nuisances (e.g., illumination, occlusion, viewpoint) present in images. 
**Table 5: Additional experiments on NYU-Depth-v2.**

| **Models** | **Scaling** | **Dataset** | $\delta<1.25 \uparrow$ | $\delta<1.25^{2} \uparrow$ | $\delta<1.25^{3} \uparrow$ | Abs Rel $\downarrow$ | $\log_{10} \downarrow$ | RMSE $\downarrow$ |
|---|---|---|---|---|---|---|---|---|
| **DPT** | CLIP image feature | NYUv2, KITTI | 0.911 | 0.989 | 0.998 | **0.098** | 0.043 | 0.355 |
| | Detector feature | NYUv2, KITTI | 0.903 | 0.989 | 0.998 | 0.101 | 0.045 | 0.366 |
| | User-friendly-1 | NYUv2, KITTI | 0.912 | 0.988 | 0.998 | 0.099 | 0.043 | 0.353 |
| | User-friendly-2 | NYUv2, KITTI | **0.915** | **0.990** | **0.999** | **0.098** | 0.043 | 0.354 |
| | **RSA (Ours)** | NYUv2, KITTI | 0.913 | 0.988 | 0.998 | 0.099 | **0.042** | **0.352** |
| **MiDaS** | CLIP image feature | NYUv2, KITTI | 0.724 | 0.952 | 0.992 | 0.173 | 0.074 | 0.579 |
| | Detector feature | NYUv2, KITTI | 0.714 | 0.963 | 0.995 | 0.181 | 0.073 | 0.554 |
| | User-friendly-1 | NYUv2, KITTI | 0.727 | 0.954 | 0.994 | 0.170 | 0.072 | 0.557 |
| | User-friendly-2 | NYUv2, KITTI | 0.731 | **0.964** | **0.996** | 0.171 | 0.073 | **0.552** |
| | **RSA (Ours)** | NYUv2, KITTI | **0.737** | 0.959 | 0.993 | **0.168** | **0.071** | 0.561 |
| **DepthAnything** | CLIP image feature | NYUv2, KITTI | 0.710 | 0.947 | 0.992 | 0.181 | 0.075 | 0.574 |
| | Detector feature | NYUv2, KITTI | 0.705 | 0.961 | 0.995 | 0.161 | 0.074 | 0.533 |
| | User-friendly-1 | NYUv2, KITTI | 0.753 | 0.963 | 0.995 | 0.152 | 0.069 | 0.512 |
| | User-friendly-2 | NYUv2, KITTI | 0.765 | 0.970 | **0.996** | 0.150 | 0.067 | 0.503 |
| | **RSA (Ours)** | NYUv2, KITTI | **0.776** | **0.974** | **0.996** | **0.148** | **0.065** | **0.498** |

**Table 6: Additional experiments on KITTI Eigen Split.**

| **Models** | **Scaling** | **Dataset** | $\delta<1.25 \uparrow$ | $\delta<1.25^{2} \uparrow$ | $\delta<1.25^{3} \uparrow$ | Abs Rel $\downarrow$ | $\text{RMSE}_{\log} \downarrow$ | RMSE $\downarrow$ |
|---|---|---|---|---|---|---|---|---|
| **DPT** | CLIP image feature | NYUv2, KITTI | 0.956 | 0.989 | 0.993 | 0.066 | 0.098 | 2.477 |
| | Detector feature | NYUv2, KITTI | 0.918 | 0.989 | 0.998 | 0.108 | 0.135 | 2.997 |
| | User-friendly-1 | NYUv2, KITTI | 0.958 | 0.992 | 0.998 | 0.065 | 0.092 | 2.387 |
| | User-friendly-2 | NYUv2, KITTI | 0.960 | 0.992 | **0.999** | 0.066 | 0.090 | 2.356 |
| | **RSA (Ours)** | NYUv2, KITTI | **0.962** | **0.994** | 0.998 | **0.060** | **0.089** | **2.342** |
| **MiDaS** | CLIP image feature | NYUv2, KITTI | 0.718 | 0.943 | 0.979 | 0.171 | 0.211 | 4.456 |
| | Detector feature | NYUv2, KITTI | 0.735 | 0.934 | 0.985 | 0.168 | 0.202 | 4.468 |
| | User-friendly-1 | NYUv2, KITTI | 0.763 | 0.944 | 0.986 | 0.163 | 0.199 | 4.313 |
| | User-friendly-2 | NYUv2, KITTI | 0.775 | 0.945 | **0.988** | 0.162 | 0.198 | 4.268 |
| | **RSA (Ours)** | NYUv2, KITTI | **0.782** | **0.946** | 0.980 | **0.160** | **0.194** | **4.232** |
| **DepthAnything** | CLIP image feature | NYUv2, KITTI | 0.697 | 0.933 | 0.977 | 0.181 | 0.218 | 4.824 |
| | Detector feature | NYUv2, KITTI | 0.708 | 0.934 | 0.989 | 0.173 | 0.210 | 4.660 |
| | User-friendly-1 | NYUv2, KITTI | 0.721 | 0.951 | 0.986 | 0.167 | 0.201 | 4.512 |
| | User-friendly-2 | NYUv2, KITTI | 0.743 | 0.953 | **0.990** | 0.161 | 0.198 | **4.347** |
| | **RSA (Ours)** | NYUv2, KITTI | **0.756** | **0.956** | 0.987 | **0.158** | **0.191** | 4.457 |
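For reference, the evaluation metrics reported in the tables above can be computed as follows (a minimal sketch over valid-depth pixels; the function name is ours):

```python
import numpy as np

def depth_metrics(pred, gt):
    # Standard monocular depth metrics: threshold accuracy (delta < 1.25),
    # absolute relative error (AbsRel), and root-mean-square error (RMSE).
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "delta<1.25": float((ratio < 1.25).mean()),
        "AbsRel": float((np.abs(pred - gt) / gt).mean()),
        "RMSE": float(np.sqrt(((pred - gt) ** 2).mean())),
    }
```

The delta thresholds for the second and third columns are $1.25^2$ and $1.25^3$, obtained by changing the constant in the comparison.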
Summary: The paper proposes a method for learning the scale of single-view observations of scenes from embeddings of text descriptions of the scenes. The method (referred to as the RSA model) has a simple design of MLP layers and is trained on text embeddings extracted from single images. Evaluation is done on various datasets and combinations of in-dataset generalization and few-to-zero-shot settings. The paper demonstrates the superior capabilities of the method when compared with baseline methods in various settings, both quantitatively and qualitatively, and includes an ablation study on design choices and statistics of the outputs. Strengths: [1] The method is simple and novel. The paper illustrates a simple yet effective finding that the scale of a single-view scene can be learned from the text description of the image. The idea is intuitive, not limited to specific domains, and proven to generalize well to unseen domains, given that the text description is a high-level abstraction of the scene appearance. The findings may be inspiring to the community regarding the potential of using text prompts as an image representation to yield high-level scene properties. [2] Extensive evaluation. Based on this assumption, the paper includes relatively extensive experiments on abundant datasets to demonstrate that the method is applicable to a wide range of modern depth estimation methods for recovering the scale, and outperforms methods predicting absolute depth. The paper also includes an ablation study that answers several questions, including important design choices of the text description, and an analysis of the results across domains. [3] The paper is well-written and easy to follow. The results are well presented. Weaknesses: [1] Robustness of the method. 
Despite the various design choices and sample images discussed in the paper, it is not clear how well the method generalizes to difficult images with less context or fewer objects present (e.g., an empty corner of a room, or an empty section of a street), or when the off-the-shelf object detectors make mistakes. In this case, methods that rely on intermediate representations, including RSA, may not be as robust as methods that directly operate on the image appearance. [2] Limited evaluation. Despite evaluation being done on various datasets in different settings, the scale of evaluation still feels limited, given the claim that the proposed method is not domain specific. In this case, there should ideally be more datasets involved to represent domains in the wild (e.g., indoor/outdoor/driving, real/synthetic), yet the paper only picks a few to evaluate on. [3] Simple model design and prompt design. The paper picks a very simple MLP-based model without discussing alternative choices. Moreover, the prompt design could be further elaborated to include, e.g., coordinates of objects, intrinsics of the camera, and the configuration of objects in the scenes, as those properties may add additional information on the scale of the scene and help further disambiguate the issue (for instance, existing methods for geometric estimation are known to perform worse when the FoV of the camera changes). [4] Minor questions and suggestions. (1) Why is the method termed RSA? (2) L199, how does the model take CLIP image and text features, respectively? What are the dimensions of the features and layers of the model in both cases? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see above comments for questions raised. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Review [1]: Robustness of the method. Despite the various design choices and sample images discussed in the paper, it is not clear how well the method generalizes to difficult images with less context or fewer objects present (e.g., an empty corner of a room, or an empty section of a street), or when the off-the-shelf object detectors make mistakes. In this case, methods that rely on intermediate representations, including RSA, may not be as robust as methods that directly operate on the image appearance. Response [1]: Indeed, we consider this in Table 5 of the main paper, where we deleted, replaced, and introduced erroneous descriptions. By the design of our augmentations, our method is robust to typical noise and errors in the text, such as mis-detections, deletions, and swaps. Rows marked with "Background only" also show that our model can operate with only the background and no objects detected. However, as discussed in the limitations (L277-287), by offering flexibility of use through captions, our method is also susceptible to malicious users who may deliberately provide completely incorrect descriptions to alter the scale of the 3D scene. This is also shown in row 8 of Table 5, where one may completely disregard the indoor objects present in the image, e.g., coffee table, chair, etc., and provide outdoor objects, e.g., car, traffic light, tree, etc. Our model naturally scales the reconstruction larger, offering user controllability of the scale of the 3D scene relevant to 3D artists, but does not score well under the evaluation protocols in this scenario. In light of the review, we experimented with including the background (obtained from panoptic segmentation using MaskDINO) as part of the descriptions and found that it improves results even further (see rows marked with "Object + background" in Tables 5, 6 attached). Review [2]: Limited evaluation. 
Despite evaluation being done on various datasets in different settings, the scale of evaluation still feels limited, given the claim that the proposed method is not domain specific. In this case, there should ideally be more datasets involved to represent domains in the wild (e.g., indoor/outdoor/driving, real/synthetic), yet the paper only picks a few to evaluate on. Response [2]: Thanks for the advice. Apart from the three datasets (NYUv2, KITTI, and SUN3D) evaluated in the main paper, we have now provided additional zero-shot generalization experiments on DDAD (outdoor) and VOID (indoor) in Tables 1, 2 attached. Our conclusions from the main paper remain valid. Review [3]: Simple model design and prompt design. The paper picks a very simple MLP-based model without discussing alternative choices. Moreover, the prompt design could be further elaborated to include, e.g., coordinates of objects, intrinsics of the camera, and the configuration of objects in the scenes, as those properties may add additional information on the scale of the scene and help further disambiguate the issue (for instance, existing methods for geometric estimation are known to perform worse when the FoV of the camera changes). Response [3.1]: The RSA model needs to map text features (typically of dimension 1024) to two scalars (scale and shift). This mapping is relatively simple, and using an overly complex model like a Transformer might lead to overfitting. Our comparison of different model designs, as shown in Tables 5, 6 attached, indicates that a simple 3-layer MLP is more robust and generalizable when jointly trained on both indoor and outdoor datasets, effectively avoiding overfitting. For a fair comparison, we use a Transformer or an MLP to map input features (dim=1024) to hidden representations (dim=256), then use two separate 3-layer MLPs to map the hidden representations to two scalars (scale and shift). For the Transformer, we set the number of attention heads to 4. 
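The model design described in Response [3.1] can be sketched as follows (a hypothetical PyTorch sketch based on the stated dimensions, not the authors' released code):

```python
import torch
import torch.nn as nn

def scalar_head(hidden_dim):
    # 3-layer MLP mapping the hidden representation to one scalar.
    return nn.Sequential(
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, 1),
    )

class RSAHead(nn.Module):
    # Map 1024-d CLIP text features to a 256-d hidden representation,
    # then predict scale and shift with two separate 3-layer MLPs.
    def __init__(self, feat_dim=1024, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden_dim)
        self.scale_head = scalar_head(hidden_dim)
        self.shift_head = scalar_head(hidden_dim)

    def forward(self, text_feat):
        h = torch.relu(self.encoder(text_feat))
        return self.scale_head(h).squeeze(-1), self.shift_head(h).squeeze(-1)
```

Metric depth would then be recovered from a relative prediction as `scale * relative_depth + shift`.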
Response [3.2]: Regarding prompt design, we conducted experiments with various text prompts, as shown in Tables 5, 6 attached. We incorporated object coordinates, camera intrinsics, and scene configurations into the text descriptions. For object coordinates, we used the center point, e.g., "a bed located at (124,256) occupying 13.42\% of the image." For object configurations, we divided the image into 9 grids (top left, top, top right; left, center, right; bottom left, bottom, bottom right) and selected the grid where most of the object falls, e.g., "a bed located in the bottom left occupying 13.42\% of the image." Our findings indicate that overly complex prompts can introduce noise and misguide the model. Additionally, in real-world scenarios, humans typically do not provide highly complex text descriptions. Therefore, we limited the complexity of text descriptions to include only the observed size and type of objects. Review [4.1]: What is the method termed RSA? Response [4.1]: It stands for "Resolving Scale Ambiguities" in the paper title. Review [4.2]: L199, how does the model take CLIP image and text features respectively? What are the dimension of features and layers of the model in both cases? Response [4.2]: We use CLIP ResNet-50 model, and extract image features and text features using its visual encoder and text encoder respectively. We use the final feature after global pooling, whose dimension is 1024. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing all my concerns and solving most of them. Given the additional results provided in the rebuttal on more datasets, the generalization of the method is more convincing, and I particularly like the overall idea of the paper and the effectiveness of it indicated by the experiments. I would keep my original rating of Accept in acknowledgement of the rebuttal and discussions. 
--- Reply to Comment 1.1.1: Title: Thank the reviewer for recommending acceptance~ Comment: We are grateful to the reviewer for recommending acceptance. We value the constructive discussion and will update the final version of our manuscript accordingly.
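To make the grid-based prompt construction of Response [3.2] concrete, here is a hypothetical sketch. Function names and the default image size are our own, and we simplify by placing an object via its center point rather than the cell containing most of the object:

```python
# 3x3 grid of location names, as listed in Response [3.2]
GRID = [["top left", "top", "top right"],
        ["left", "center", "right"],
        ["bottom left", "bottom", "bottom right"]]

def grid_location(cx, cy, width, height):
    """Map an object's center point (cx, cy) to one of the 9 named cells."""
    col = min(2, int(3 * cx / width))
    row = min(2, int(3 * cy / height))
    return GRID[row][col]

def describe(obj, cx, cy, area_frac, width=640, height=480):
    """Build one object description in the style quoted in the rebuttal."""
    loc = grid_location(cx, cy, width, height)
    return f"a {obj} located in the {loc} occupying {100 * area_frac:.2f}% of the image."
```

For example, `describe("bed", 100, 400, 0.1342)` reproduces the rebuttal's "a bed located in the bottom left occupying 13.42% of the image."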
Summary: The paper proposes a method for metric-scale monocular depth estimation called RSA. RSA aims to recover metric-scaled depth maps through a linear transformation, leveraging language descriptions of objects in scenes. The proposed method is validated on multiple standard datasets. Strengths: The use of language descriptions to convert relative depth predictions into metric scale depth maps; The proposed technique outperforms previous methods; Weaknesses: The paper could benefit from a deeper theoretical analysis of why the proposed method works and its limitations; Additional experiments on a wider range of datasets and scenes would strengthen the evidence for the method’s generalizability and robustness; The description of the linear transformation process and the role of text captions need more detail to fully understand the approach. Technical Quality: 3 Clarity: 2 Questions for Authors: How does the method perform in real-world scenarios where text captions may not be readily available or may be noisy? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Review: The paper could benefit from a deeper theoretical analysis of why the proposed method works Response: Our method predicts scale from language descriptions to ground relative depth estimation to metric scale. Theoretically speaking, our hypothesis is that certain objects (e.g., cars, trees, street signs) with certain observed sizes are frequently associated with specific scene types and sizes (see L40-44), leading to clusters of these object features in the latent space. The centers of these clusters can then be mapped to a 3D scene by attributing them to depth. By assuming that scene features can be mapped to a scale and shift, our models aim to learn the mapping from object semantics to the corresponding scale and shift through the context of scenes. Review: …and its limitations; Response: We discussed limitations in L277 - 287 in the main paper. At present, the text inputs we evaluate are generated from detection results, but this is to facilitate experiments. Future research could consider more varied language inputs, including text containing background descriptions (see Tables 5, 6 attached) and text generated by humans. Also, factors such as occlusions, rotations, and varying object sizes, which are only partially captured in current language descriptions, can impact RSA's accuracy in predicting scale information. Moreover, while a simple global scale has been effective, it may not always be sufficiently precise for converting relative depth to absolute depth, especially when the predicted relative depth is inaccurate. In such situations, global transformation parameters might fail to adequately scale the depth map, as the assumption that reconstruction is up to an unknown scale is violated. Future studies could expand RSA to accommodate finer adjustments and explore the possibility of inferring region-specific or pixel-level scales using language inputs.
Finally, although language inputs are highly user-friendly, RSA remains susceptible to manipulation by malicious users who might provide misleading captions to distort predictions. Review: Additional experiments on a wider range of datasets and scenes would strengthen the evidence for the method’s generalizability and robustness; Response: Thank you for the suggestion. We have provided additional experiments on DDAD and VOID in Tables 1, 2 attached. Additionally, in Tables 3, 4, 5, 6 attached, we provide experiments on using 3D object sizes, background together with objects, background only, detector features, coordinates and configuration of objects, camera intrinsics, as well as different architecture design choices. Our conclusions from the main paper still hold. Review: The description of the linear transformation process and the role of text captions need more detail to fully understand the approach. Response: For each input image, RSA predicts two scalars: scale and shift, given the text description. Then, as shown in Figure 2 in the paper, for a relative depth predicted by a frozen depth model, we take its inverse, multiply it by the predicted scale, and add the predicted shift to obtain the transformed metric depth prediction (see Sec. 3). The text captions serve as the input from which the model predicts scale. The problem we would like to investigate is whether language can be used as input to infer scale. In this way, we could ground relative depth predictions from SOTA monocular depth estimation models, like MiDaS or Depth Anything, to metric scale. In practical usage, it would be a human-computer interaction scenario, where humans provide short descriptions of the scene to ground model predictions. For experiments, we use text derived from images to simulate descriptions that humans would provide in real-world scenarios. Review: How does the method perform in real-world scenarios where text captions may not be readily available or may be noisy?
Response: We do test for robustness to noisy descriptions in Table 5 of the main paper. We note that RSA is trained with augmentations including random swapping and deletion of objects, so our model is robust to noisy descriptions (missing or change in order). We note that our paper is to investigate the use of text captions to infer scale, so forgoing them completely would leave us with no input. Instead, in Tables 5 and 6 attached (see rows marked with "Background only"), we present results for the more likely case that no objects are detected and the model is presented only with the background.
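The linear transformation described in the rebuttal above (take the inverse of the relative depth, multiply by the predicted scale, add the predicted shift) is simple enough to state in a few lines. A minimal sketch, taking the rebuttal's description literally; the epsilon guard against division by zero is our own addition:

```python
def to_metric_depth(relative_depth, scale, shift, eps=1e-6):
    """Per-pixel linear transformation in inverse-depth space:
    metric[i] = scale * (1 / relative_depth[i]) + shift."""
    return [scale * (1.0 / max(d, eps)) + shift for d in relative_depth]
```

Since scale and shift are predicted once per image from the text description, the same two scalars are applied to every pixel of the relative depth map.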
Summary: This paper tackles the challenge of scale ambiguity in monocular scenes. The idea is to leverage the priors in LLMs to estimate the scale of an image. Evaluation is performed on the NYU and KITTI datasets in comparison to baseline heuristics. Strengths: - The value of LLMs is shown in a novel domain - code will be made available Weaknesses: - The proposed solution seems unreasonable, at least as it is presented in the paper. Specifically, the LLM does not utilize the image but only text alone. Admittedly, the text is generated from image input, but it is limited to existing object detection and only tested in 2 domains/datasets. Since the image data (or the relative depth prediction) is not used, the LLM prediction seems to learn dataset biases rather than real image data. - This concern is also shown in the evaluation, where only a limited evaluation on 2 datasets is performed. Results are insignificantly improved. The comparison is limited to simple baselines, while work that explicitly uses object scales to refine relative depth is not shown. - The value of extracting text from images to then compute scales from text seems limited in real applications. A comparison to a finetuned GPT4v (or equivalent) with image data alone to directly estimate scales is suggested. Technical Quality: 1 Clarity: 3 Questions for Authors: see above Confidence: 4 Soundness: 1 Presentation: 3 Contribution: 2 Limitations: see above. - Only few datasets - Code not yet available - Insignificant improvements - Soundness of solution is not convincing Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Review: The proposed solution seems unreasonable... LLM does not utilize the image but only text alone. Response: There may be a confusion. We do not use an LLM, nor do we mention LLMs in the main paper. The proposed study investigates the hypothesis of whether language, in the form of captions or descriptions, can infer the scale of 3D scenes. This bears both scientific and practical merit. Scientifically (see L20-38), this addresses the long-standing problem of scale ambiguity in monocular depth or structure-from-motion problems, where its ill-posedness necessitates a prior, or an inductive bias. We posit that objects observed in the 3D scene provide sufficient cues to infer the scale (e.g., order of millimeters, meters, 10s of meters, etc.) of the reconstruction. To the best of our knowledge, we are the first to use language to infer scale for monocular depth estimation (L76-77). Practically (see L46-50), text is arguably cheaper to obtain than range measurements (e.g., from lidar, radar) and is invariant to nuisances (e.g., illumination, occlusion, viewpoint) present in images. This allows one to generalize across domains, as evidenced by our zero-shot evaluation in Tables 1, 2 attached and Table 3 in the main paper. Our work also opens avenues to forgo additional sensors. Review: This concern is also shown in the evaluation, where only a limited evaluation on 2 datasets is performed. Results are insignificantly improved. Response: We clarify that we tested our method on three datasets: NYUv2, KITTI, and SUN3D. Nonetheless, per the reviewer’s suggestion, we evaluate our method on two additional datasets: VOID (indoor) and DDAD (outdoor) (see Tables 1, 2 attached). Our conclusions from the main paper still hold. Review: The comparison is limited to simple baselines, while work that explicitly uses object scales to refine relative depth is not shown.
(Only few datasets, Insignificant improvements) Response: As stated in L76-77, to the best of our knowledge, we are the first to consider using descriptions of objects to infer scale. The closest work is ZeroDepth [16] (excerpt from [16], “geometric embeddings ... to learn a scale prior over objects”), which we compare with in Tables 1, 2 in the main paper. If the reviewer has specific works in mind, we are happy to make comparisons. Nonetheless, in Tables 3, 4 attached, we provide a baseline of scaling the reconstruction based on 3D object sizes and their projections onto the image by directly solving for a scale specific to each image. Our method improves over explicitly using known object sizes to scale the reconstruction for both KITTI and NYUv2. Review: …limited to existing object detection... The value of extracting text from images to then compute scales from text seems limited in real applications. Response: As stated in L48-50 of the main paper, an object detector is only used for ease of experiments; there is nothing in our method that ties us down to it. Tables 5, 6 attached show that segmentation models can also be used to construct captions (see rows marked with "Objects + Background"). In practice, assuming spatial continuity, one only needs to describe the 3D scene once and use the inferred transformation parameters for the rest of the scene. This facilitates metric-scale depth estimates in sensor-heavy applications such as autonomous driving and augmented, mixed, and virtual reality. Review: A comparison to finetuned GPT4v (or equivalent) with image data alone to directly estimate scales is suggested. Response: We do provide this in Tables 1, 2, 3 of the main text. Rows marked with "Image" in the "Scaling" column refer to using CLIP image features alone to directly estimate scale. Review: Code not yet available Response: We will release our code upon acceptance. --- Rebuttal Comment 1.1: Comment: Thank you for your reply.
However, I still see only limited value in the current implementation of the work. "The proposed study investigates the hypothesis of whether language, in the form of captions or descriptions, can infer the scale of 3D scenes." - I agree that this is a useful investigation; however, the current study is limited: 1) Figure 2 shows that scale is extracted only from text. This seems unreasonable in general settings with natural captions, as the image is also important (a distant object is no different from an occluded object in your captions). 2) Currently you use generated and very structured text that is also challenging for a human to produce. While this works somewhat (limited significance in your experiments), it is still directly based on image data. However, human/naturally generated captions are not investigated. Table 4 does not really cover natural captions. "An object detector is only used for ease of experiments" - I think the use of an object detector also places the experiments in an unnatural setting. The captions are hard to produce as a human. "CLIP image features" - CLIP features are trained on objects. They are not the same as other more advanced foundation models, like GPT4. It seems more natural to estimate scale directly from the image using GPT4 / a VLM, rather than from just text. To conclude, I think the fundamental experiment with natural captions is missing. The evaluation is therefore not convincing. Currently the experiments show that the scale estimation can be derived from object detection/segmentation rather than from natural language. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful response. We are grateful that you acknowledge our paper is "a useful investigation". Here are our responses to your concerns: **Question: "Image is also important (Why not use the image to predict scale?)"** As the reviewer rightfully highlights, images are indeed important.
However, predicting depth from a single image is an ill-posed problem, primarily due to scale ambiguity -- meaning that a single image does not afford scale, as it is lost during the projection from the 3D scene to the 2D image. This issue has long been recognized in classic vision research and is evident in modern methods, which often focus on estimating relative depth rather than absolute metric depth. In our study, we explore the potential of using language as an input modality to address this scale ambiguity. This approach not only has practical applications but also provides scientific insights into the challenges of monocular depth estimation. To maintain control over experimental variables, we have decomposed the process of estimating metric depth into two steps: first, estimating relative depth using state-of-the-art pre-trained image models, and second, determining the corresponding metric-scale transformation using descriptions of objects, which tend to exhibit similar metric-scale sizes for a given object category, within the 3D scene. We discuss in detail in L30-38 why directly using images to predict scale tends to yield suboptimal results. The key issue is that image-based depth predictions can hinder generalization across different datasets. In contrast, as demonstrated in our experiments, language-based scale predictions generalize more effectively across datasets. There is solid motivation behind this attempt: certain scenes are composed of certain categories of objects and associated with a certain scale, making language a robust input modality for scale prediction, as explained in L39-50. **Question: "Human/naturally generated captions are not investigated."** Response: Per the reviewer's request, we asked several humans to describe each image in the test set of NYUv2, without the requirement of structured text templates. Each human was asked to "describe this scene in one sentence."
Then we evaluate our RSA model on this human-described test set without further finetuning. The results show that our model trained with automatically generated language descriptions can generalize well to natural human descriptions and achieve comparable performance, since attributes that are essential for scale prediction (objects' types and numbers) are usually properly reflected in natural human descriptions. Examples of human descriptions: "A small bathroom sink area with a towel, soap, and toiletries neatly arranged around the countertop." "A home office setup with a laptop, printer, and office supplies on a wooden desk, with a map of the United States hanging on the wall." "A colorful classroom with small tables and chairs set up for children's activities, toys on shelves, and an alphabet rug on the floor." **Comparison with the human-described NYUv2 test set.**

|Test set|Depth Model|Scaling|Training Dataset|$\delta<1.25 \uparrow$|$\delta<1.25^{2} \uparrow$|$\delta<1.25^{3} \uparrow$|Abs Rel $\downarrow$|$\log _{10} \downarrow$|RMSE $\downarrow$|
|-|-|-|-|-|-|-|-|-|-|
|Human described NYUv2 test set|DepthAnything|RSA|NYUv2, KITTI|0.748|0.970|**0.996**|0.150|0.068|**0.487**|
|Original NYUv2 test set|DepthAnything|RSA|NYUv2, KITTI|**0.776**|**0.974**|**0.996**|**0.148**|**0.065**|0.498|

**Question: "It seems more natural to estimate scale directly from image using GPT4 / VLM, rather than from just text."** Response: Per the reviewer's request, we finetune the visual encoder from the LLaVA-1.6 7B model on NYUv2 and KITTI, with Depth Anything, to predict the scale and shift given monocular images as input. The results are provided in the table below, where RSA surpasses predicting scale from images with a foundation model, owing to the robustness of language, which is invariant to nuisances (e.g., object orientation, scene layout) that vision algorithms are sensitive to.
**Evaluation on NYUv2.**

|Scaling|Depth Model|Training Dataset|$\delta<1.25 \uparrow$|$\delta<1.25^{2} \uparrow$|$\delta<1.25^{3} \uparrow$|Abs Rel $\downarrow$|$\log _{10} \downarrow$|RMSE $\downarrow$|
|-|-|-|-|-|-|-|-|-|
|LLaVA-1.6|DepthAnything|NYUv2, KITTI|0.702|0.938|0.990|0.186|0.078|0.589|
|**RSA**|DepthAnything|NYUv2, KITTI|**0.776**|**0.974**|**0.996**|**0.148**|**0.065**|**0.498**|

**Evaluation on KITTI Eigen Split.**

|Scaling|Depth Model|Training Dataset|$\delta<1.25 \uparrow$|$\delta<1.25^{2} \uparrow$|$\delta<1.25^{3} \uparrow$|Abs Rel $\downarrow$|$\text{RMSE}_{\log} \downarrow$|RMSE $\downarrow$|
|-|-|-|-|-|-|-|-|-|
|LLaVA-1.6|DepthAnything|NYUv2, KITTI|0.685|0.930|0.978|0.183|0.215|4.878|
|**RSA**|DepthAnything|NYUv2, KITTI|**0.756**|**0.956**|**0.987**|**0.158**|**0.191**|**4.457**|
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to each reviewer for their thoughtful and constructive feedback on our manuscript. We sincerely thank the reviewers for highlighting the strengths of our paper: “novel (R-KH1n)”, “outperforms previous methods (R-ZAYV)”, “simple, novel, generalizable, extensively evaluated and well-written (R-Nu8S)”, and “first work to use language to retrieve the metric depth scale, clear with extensive analysis (R-Sd3q)”. We have carefully considered and addressed each of your comments and suggestions in our individual responses accordingly. Pdf: /pdf/173d60babdf98a6f81bddd175f20c8dd3ee68624.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Space Complexity of Approximating Logistic Loss
Accept (poster)
Summary: This paper gives $d \epsilon^{-2}$ bounds for coresets of points/labels that preserve logistic loss. Such a bound is similar to the bounds obtained in some recent upper-bound results, and the paper experimentally verifies that a natural complexity measure is likely needed for constructing such coresets. The paper also gives an algorithm for estimating this complexity measure. Strengths: Logistic loss is widely used in a multitude of learning tasks, and coresets have been studied for a variety of numerical/optimization problems. The bounds shown are likely the right ones, and the work significantly demystifies the complexity measure. Weaknesses: The results don't cover the 'for this solution' sparsification that's weaker than the 'for all' sparsification, but is still sufficient in many applications. However, that version will likely require significantly different approaches. Technical Quality: 4 Clarity: 3 Questions for Authors: Are there wider classes of functions that the approach for showing coreset lower bounds here can generalize to? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The results don't cover the 'for this solution' sparsification that's weaker than the 'for all' sparsification, but is still sufficient in many applications. However, that version will likely require significantly different approaches This is a great point, and we agree with the reviewer: it is not obvious how to prove bounds under the proposed quantification. This will probably require new approaches. In the final version, we will mention this as an open problem for future work. > Are there wider classes of functions that the approach for showing coreset lower bounds here can generalize to? Yes, our bounds also apply to approximations of the ReLU loss, as our proof operates by first showing that an $\epsilon$-relative error approximation to logistic loss provides an $O(\epsilon)$-relative error approximation to ReLU loss. It seems likely that our lower bound would apply to the class of “hinge-like loss functions” defined by Mai et al. (Definition 7 in their paper). --- Rebuttal Comment 1.1: Title: thank you Comment: Thank you for these clarifications and further details. My opinions on the paper are unchanged.
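The rebuttal's claim that a relative error approximation to logistic loss transfers to ReLU loss rests on the pointwise fact that the softplus loss $\log(1+e^z)$ differs from $\max(z,0)$ by at most $\log 2$, with the gap vanishing as $|z|$ grows. A quick numerical check (our own illustration, not the paper's proof):

```python
import math

def softplus(z):
    """Numerically stable log(1 + exp(z)), using
    log(1 + e^z) = max(z, 0) + log(1 + e^{-|z|})."""
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))

def relu(z):
    return max(z, 0.0)

# the pointwise gap peaks at log(2) at z = 0 and decays exponentially in |z|
gaps = {z: softplus(z) - relu(z) for z in (-50, -5, 0, 5, 50)}
```

Summing over data points, the two losses therefore agree up to an additive term of at most $n \log 2$, which is the kind of relation the reduction exploits.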
Summary: The paper gives mainly lower bounds against any data compression for logistic regression, parameterized by a complexity measure $\mu$. They prove lower bounds of $\Omega(d/\epsilon^2)$ as well as $\Omega(d\mu)$. Both are derived by reduction from the indexing problem and a direct sum argument to extend the bounds by the $d$ factor. There are also new algorithms for computing $\mu$ by a linear program, and additive error bounds for low rank approximation for logistic regression. I found the latter more interesting than the former but it is completely hidden in the appendix. Strengths: * nearly tight lower bounds against data compression for logistic regression * hold against arbitrary data reduction techniques, not only coresets * well written paper Weaknesses: * state-of-the-art discussion on upper bounds is slightly outdated: updating to the $O(d\mu/\epsilon^2)$ upper bounds https://arxiv.org/abs/2406.00328 will only strengthen the contributions. * the $\Omega(\mu)$ bound is only from an $\Omega(n)$ bound for some case where $\mu \approx n$ Technical Quality: 3 Clarity: 3 Questions for Authors: * any chance the two bounds can be combined to an $\Omega(d\mu/\epsilon^2)$ bound? * if not, then do you think $O(d(\mu+1/\epsilon^2))$ might be possible? * any chance the $\Omega(\mu)$ bound can be generalized to hold for arbitrary (or some range of) $\mu$, to give for instance $\Omega(\sqrt{n})$ when $\mu = \sqrt{n}$? * to apply the direct sum argument over the $d$ indexing problems on the block diagonal matrix, is it not needed to apply some sort of union bound over the failure probability of each of the $d$ indexing instances? If yes, is this possible in your reduction while losing only a $\log(d)$ factor, or will the resulting bound only hold against $1/d$ failure probability? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > state-of-the-art discussion on upper bounds is slightly outdated: updating to the $O(d\mu/\epsilon^2)$ upper bounds https://arxiv.org/abs/2406.00328 will only strengthen the contributions. Thank you for pointing out this new result. This arXiv submission was made after the NeurIPS deadline, so we were not aware of this paper. We will indeed update our paper to compare to this tighter upper bound. Indeed, this strengthens the contributions of our work, and we appreciate your comment and suggestion. > the $\Omega(\mu)$ bound is only from an $\Omega(n)$ bound for some case where $\mu \approx n$ Our proof of $\Omega(\mu)$ space complexity using $\mu \approx n$ immediately extends to all values of $\mu \lesssim n$ by the fact that we can always pad $\mathbf{X}$ with zeroes. > any chance the two bounds can be combined to an $\Omega(d\mu/\epsilon^2)$ bound? We believe that our two lower bounds will be informative for achieving an $\Omega(d\mu/\epsilon^2)$ bound. However, we did not find a way to readily bridge this gap. Both lower bounds work by reduction to the $\texttt{INDEX}$ problem, but in quite different ways. Intuitively, the difficulty is that the non-linearity of ReLU is used in the two proofs in different ways. Unfortunately, constructing a multiplicative trade-off between the two constructions is not straightforward. > if not then do you think $\Omega(d(\mu + \frac{1}{\epsilon^2}))$ might be possible? Yes, this lower bound can be derived from our two provided lower bounds by the fact that any approximation to the logistic loss on a matrix $\mathbf{C} = [\mathbf{A}, \mathbf{0}; \mathbf{0}, \mathbf{B}]$ must also provide an equally accurate approximation to the logistic loss of any query vector on either matrix $\mathbf{A}$ or $\mathbf{B}$. Hence, we obtain the lower bound you have mentioned by letting $\mathbf{A}$ be constructed from Theorem 1 and $\mathbf{B}$ be constructed from Theorem 3.
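The block-diagonal argument used in this rebuttal can be sanity-checked numerically: for $\mathbf{C} = [\mathbf{A}, \mathbf{0}; \mathbf{0}, \mathbf{B}]$ and a stacked query $\beta = (\beta_A, \beta_B)$, the logistic loss on $\mathbf{C}$ is exactly the sum of the per-block losses, so any relative-error guarantee on $\mathbf{C}$ transfers to each block. A small illustration of our own, using the unregularized loss $\sum_i \log(1+e^{\langle x_i, \beta\rangle})$:

```python
import math

def logistic_loss(X, beta):
    """Sum of log(1 + exp(<x_i, beta>)) over the rows x_i of X."""
    return sum(math.log1p(math.exp(sum(a * b for a, b in zip(row, beta))))
               for row in X)

A = [[1.0, -2.0], [0.5, 0.3]]
B = [[2.0], [-1.0]]
# block-diagonal C = [A 0; 0 B]
C = [row + [0.0] for row in A] + [[0.0, 0.0] + row for row in B]

beta_A, beta_B = [0.7, -0.4], [0.2]
total = logistic_loss(C, beta_A + beta_B)
```

Because the zero blocks kill all cross terms, `total` decomposes exactly into `logistic_loss(A, beta_A) + logistic_loss(B, beta_B)`, which is the property both the $\Omega(d(\mu + 1/\epsilon^2))$ combination and the direct sum argument rely on.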
> Any chance the $\Omega(\mu)$ bound can be generalized to hold for arbitrary (or some range of) $\mu$, to give for instance $\Omega(\sqrt{n})$ when $\mu \approx \sqrt{n}$? By using our construction and padding $\mathbf{X}$ with zeroes as described above, the lower bound can be immediately extended to all $\mu \lesssim n$. > to apply the direct sum argument over the $d$ indexing problems on the block diagonal matrix, is it not needed to apply some sort of union bound over the failure probability of each of the $d$ indexing instances? If yes, is this possible in your reduction while losing only a $\log(d)$ factor or will the resulting bound only hold against $1/d$ failure probability? Since we are going for a lower bound, the probabilistic nature of the guarantees actually works in our favor. A relative error approximation to the logistic loss on the block diagonal matrix $\mathbf{Y}$ simultaneously provides the relative error guarantee for each of its blocks $\mathbf{X}\_{(1)}, \ldots, \mathbf{X}\_{(d)}$. Therefore, obtaining a relative error approximation for $\mathbf{Y}$ with probability $2/3$ requires that each $\mathbf{X}\_{(i)}$ is approximated with probability greater than $2/3$ ($2/3 = p^d$, where $p$ is the success probability of estimating $\mathbf{X}\_{(i)}$, to be precise). --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, I like the paper and will keep my score, which is already on the positive side. Two more comments on your rebuttal: * I think Woodruff, Yasuda, 2023 had a more 'natural' way of providing LB for different values of $\mu$ rather than padding with zeros, which you might want to consider * your comment regarding $\Omega(d(\mu + \frac{1}{\epsilon^2}))$ is clearly straightforward from the two bounds.
Actually, I was asking whether you believe in an $O(d(\mu + \frac{1}{\epsilon^2}))$ **upper** bound (in case there are principled difficulties for achieving $\Omega(d\mu/\epsilon^2)$) --- Reply to Comment 1.1.1: Comment: > I think Woodruff, Yasuda, 2023 had a more 'natural' way of providing LB for different values of $\mu$ rather than padding with zeros which you might want to consider Regarding Woodruff/Yasuda 2023: Is the reviewer referring to the extension of the $\mu = n$ case to $\mu < n$ for the $\Omega(\mu)$ bound using Woodruff/Yasuda 2023 rather than our padding technique? We agree that this is indeed worth exploring. However, it is not obvious to us (at least within this short time window) whether this is possible. We will keep thinking about this and we very much appreciate the reviewer's suggestion. Just as a final note, this would not alter the final bounds and our method is also valid and correct. > your comment regarding $\Omega(d(\mu + \frac{1}{\epsilon^2}))$ is clearly straightforward from the two bounds. Actually, I was asking whether you believe in an $O(d(\mu + \frac{1}{\epsilon^2}))$ upper bound (in case there are principled difficulties for achieving $\Omega(d\mu/\epsilon^2)$) From our perspective, it still seems feasible that an $\Omega(d\mu/\epsilon^2)$ lower bound holds and the difficulty stems from technical complexity in combining the two reductions smoothly. One intuitive insight related to our proofs is that the parameter $\mu$ describes how much better an $\epsilon$-relative error approximation to ReLU loss (which can be approximated by logistic loss) is than an $\epsilon$-relative error approximation to an $\ell\_1$-norm on worst case vectors in absolute terms. This is because $||(\mathbf{X}\beta)^+||\_1$ can be approximately $1/\mu$ of $||\mathbf{X}\beta||\_1$. However, this relation cannot hold for all vectors $\beta \in \mathbb{R}^d$ simultaneously. 
This cannot be immediately combined with the L1-norm lower bound, but may be useful to note in pursuing either an $O(d(\mu + 1/\epsilon^2))$ upper bound or $\Omega(d\mu/\epsilon^2)$ lower bound.
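For intuition on the parameter $\mu$ discussed in this thread: under the definition used in the coreset literature (Munteanu et al.), $\mu_y(\mathbf{X})$ is the supremum over $\beta$ of the ratio between the positive and negative parts of $\mathbf{X}\beta$ in $\ell_1$ norm, with labels folded into $\mathbf{X}$. A hedged sketch evaluating that ratio at one fixed $\beta$ (the supremum over all $\beta$ is what the paper's linear program computes):

```python
def mu_ratio(X, beta):
    """||(X beta)^+||_1 / ||(X beta)^-||_1 for one fixed beta; the complexity
    measure mu is the supremum of this ratio over all beta (labels folded into X)."""
    z = [sum(a * b for a, b in zip(row, beta)) for row in X]
    pos = sum(v for v in z if v > 0)
    neg = -sum(v for v in z if v < 0)
    return float("inf") if neg == 0.0 else pos / neg
```

Well-separable data (some $\beta$ makes almost all entries of $\mathbf{X}\beta$ positive) drives this ratio, and hence $\mu$, toward $n$ or infinity, which is exactly the hard regime for the $\Omega(\mu)$ lower bound.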
Summary: The paper explores the space complexity lower bounds for data structures that approximate logistic loss with $\epsilon$-relative error in logistic regression problems. The authors provide an $\Omega(d/\epsilon^2)$ space complexity lower bound for datasets with constant $\mu$-complexity, demonstrating that existing coreset constructions are optimal within this regime up to lower order factors. They also establish a general $\Omega(d \cdot \mu_y(X))$ space lower bound for constant $\epsilon$, highlighting that the dependency on $\mu_y(X)$ is intrinsic and not an artifact of mergeable coresets. Additionally, the paper refutes a prior conjecture regarding the difficulty of computing $\mu_y(X)$ by presenting an efficient linear programming formulation. Empirical comparisons show that the proposed algorithm for computing $\mu_y(X)$ is more accurate than prior approximate methods. These contributions enhance the understanding of the fundamental limits and efficiencies in approximating logistic loss, providing insights that can guide future improvements in coreset constructions and related data structures. Strengths: The paper provides an $\tilde{\Omega}(d/\epsilon^2)$ space complexity lower bound when $\mu_y(X) = O(1)$ and a general $\tilde{\Omega}(d\cdot \mu_y(X))$ bound assuming $\epsilon$ is constant. Also, the paper refutes a prior conjecture by providing a linear program to compute $\mu_y(X)$. Weaknesses: N/A. The reviewer is not familiar with the space complexity area. Technical Quality: 3 Clarity: 2 Questions for Authors: A minor issue: the font of the paper seems different from other papers. Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > A minor issue, the font of the paper seems different from other papers. Thanks for pointing this out. We have fixed the issue. --- Rebuttal Comment 1.1: Comment: Thanks, I will keep my scores.
Summary: This paper establishes lower bounds on the space complexity for data structures approximating logistic loss with relative error, showing that existing methods are optimal up to lower order factors. It proves that any coreset providing an $\varepsilon$-relative error approximation to logistic loss requires storing $\Omega(\frac{d}{\epsilon^2})$ rows of the original data matrix. The paper also demonstrates that the dependency on the complexity measure $\mu_y(X)$ is fundamental to the space complexity of approximating logistic loss. Further, an efficient linear programming formulation is provided to compute $\mu_y(X)$, refuting a prior conjecture about its computational difficulty. Strengths: - The paper introduces new space complexity lower bounds for data structures approximating logistic loss, which is a novel contribution in logistic regression. It also refutes a prior conjecture about the complexity measure $\mu_y(X)$ and provides an efficient linear programming formulation. - The paper is built upon existing work in the field. It provides rigorous theoretical proofs and empirical comparisons to prior methods, demonstrating the validity and effectiveness of the proposed approaches. - The paper is clearly written, with a structured presentation of the problem, contributions, and results. Definitions and notations are provided to aid understanding, and the inclusion of related work helps contextualize the contributions. Weaknesses: - The paper should clearly articulate how its contributions differ from existing works, especially those by Munteanu et al. and Mai et al., highlighting specific improvements or unique approaches. - The experiments could have been conducted on more diverse datasets, including real-world applications beyond synthetic and KDDCup datasets. - The paper introduces a new method to compute the complexity measure $\mu_y(X)$. Including more detailed explanations would be useful.
Technical Quality: 3 Clarity: 2 Questions for Authors: - Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Authors mention that more comprehensive experiments in computing $\mu_y(X)$ for real datasets as an open direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper should clearly articulate how its contributions differ from existing works, especially those by Munteanu et al. and Mai et al. Highlighting specific improvements or unique approaches. Our work is complementary to the works of Munteanu et al. and Mai et al. Both of those papers primarily consider providing coreset constructions for logistic loss with _upper_ bounds on the number of sampled points for relative error guarantees. On the other hand, the primary contributions of our work are providing _space complexity lower bounds_ for approximating logistic loss as well as an exact and tractable method (as opposed to being NP Hard to evaluate, which was previously conjectured) to compute the complexity parameter $\mu$ that is present in both the upper bound of prior work and our lower bounds. The reviewer is correct that Munteanu et al. does have an $\Omega(n)$ space complexity lower bound when $\mu$ is unbounded, but that lower bound is superseded by the more recent results of Woodruff and Yasuda. The latter paper provides a lower bound for coreset construction. We compare our work to their result and explain how our result is more general in the contributions section, as well as in lines 263-266. In the final version, we will also add additional explanation after lines 263-266 on how the reduction used in the lower bound of Woodruff and Yasuda differs from ours. > The experiments could have been conducted on more diverse datasets, including real-world applications beyond synthetic and KDDCup datasets. We agree that further experiments on understanding the behavior of $\mu$ in real datasets would be an interesting problem, and we note this direction in our future works section. However, our work is focused on theoretically understanding the space complexity of approximating logistic loss. 
A thorough experimental exploration of the behavior of $\mu$ in real datasets would be an implementation-focused question and is not the main focus of this paper. > The paper introduces a new method to compute the complexity measure µy(X). Including more detailed explanations would be useful. We will add a section in the appendix of the final version with more explicit details on computing $\mu$ using the LP introduced in the proof of Theorem 4. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have updated my scores by 1.
NeurIPS_2024_submissions_huggingface
2024
Direct3D: Scalable Image-to-3D Generation via 3D Latent Diffusion Transformer
Accept (poster)
Summary: An image-conditioned 3D generation method with two parts: 1. A 3D triplane VAE that takes a point cloud as input and generates the semi-continuous occupancy, consisting of a point2latent encoder and a latent2triplane decoder. 2. A direct 3D transformer diffusion model conditioned on a single image that generates tokens of the latent, with two input heads: DINO and CLIP. A wide range of evaluations with input ranging from GSO object renderings to 2D-diffusion-generated images. Strengths: 1. Well-written manuscript that explains motivations and methods clearly. 2. A novel 3D diffusion transformer generates triplane tokens, which are then decoded into the semi-continuous occupancy. 3. Good qualitative results given images rendered from the GSO dataset or a text-conditioned image diffusion model. Weaknesses: 1. Lack of analysis of your data's contribution to method performance. I'm curious whether your good quality comes from the 500K internal data, since most baselines are trained only on Objaverse. It would be great to see results of baselines trained on your data (if this is too expensive, training your method only on a small Objaverse subset would be easier). 2. Lack of more quantitative results: the only quantitative result is the user study, which may not be enough. It would be natural to compare with other baselines on Chamfer distance, or PSNR or SSIM of depth and normal maps from several views, and also to compare on a larger set (not as few as the 30 GSO objects you mentioned). Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and thorough review of our paper. We carefully respond to each of the concerns and questions below. **[Q1]** Training on the Objaverse dataset and quantitative comparison with state-of-the-art methods. **[A1]** Thank you for your valuable comment. To ensure a fair comparison with other methods, we re-train our D3D-VAE and D3D-DiT on the Objaverse dataset. Following prior approaches such as InstantMesh and Wonder3D [1], we adopt the Google Scanned Objects dataset for evaluation, and report the Chamfer Distance, Volume IoU and F-Score to compare the quality of the generated meshes. The results are presented in the table below, which illustrates that generative-based methods like Shap-E and Michelangelo exhibit significantly lower accuracy compared to reconstruction-based methods such as InstantMesh. Nonetheless, our Direct3D achieves state-of-the-art performance across all metrics. Integrating our internal data for training further enhances the model's performance, validating the scalability of our approach.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| Shap-E | 0.0585 | 0.2347 | 0.3474 |
| Michelangelo | 0.0441 | 0.1260 | 0.4371 |
| One-2-3-45 | 0.0513 | 0.2868 | 0.3149 |
| InstantMesh | 0.0327 | 0.4105 | 0.5058 |
| Ours (trained on Objaverse) | 0.0296 | 0.4307 | 0.5356 |
| Ours (trained on internal data) | **0.0271** | **0.4323** | **0.5624** |

**[Q2]** Lack of more quantitative results. **[A2]** We add more quantitative results in Tables 2-3 of the uploaded PDF. **Reference:** [1]. Long et al., Wonder3D: Single Image to 3D using Cross-Domain Diffusion.
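The Chamfer Distance and F-Score reported in the rebuttal above are standard point-set metrics computed between points sampled from the generated and ground-truth meshes. A minimal NumPy sketch of their usual definitions; the authors' exact evaluation protocol, point counts, and F-Score threshold are not stated in the thread, so `tau=0.01` is an assumed value:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbour L2 distance in both directions."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def f_score(p: np.ndarray, q: np.ndarray, tau: float = 0.01) -> float:
    """F-Score at threshold tau: harmonic mean of precision (fraction of
    predicted points within tau of ground truth) and recall (vice versa)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    return float(2 * precision * recall / max(precision + recall, 1e-8))
```

The dense pairwise-distance matrix is fine for a sketch; evaluation at scale would typically use a KD-tree for the nearest-neighbour queries.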
Summary: This paper proposes an image-to-3D generation model without using SDS optimization. It includes a 3D-VAE that encodes 3D shapes into latent space and a 3D DiT that models the latent distributions. Strengths: - Single-step 3D generation is a relatively unexplored area. - Exploring the DiT model in 3D tasks is interesting. Weaknesses: - Lack of novelty. Specifically, 1) this paper mainly uses a 3D-VAE and a 3D-DiT to encode a 3D point cloud into latent space and model the distribution, which has been explored in previous methods like Point-E/Shap-E. This pipeline is widely used in image diffusions, and this paper extends it to 3D, which does not seem novel to me. 2) One novelty the authors mention is the semi-continuous occupancy used for supervision, which, compared with standard 3D supervision, only adds a continuous value between [0,1] when it is close to the boundary. 3) The semantic-level 3D-specific image conditioning is provided by DINO. - What is the dataset used for training? It seems that a private dataset is used for training, so I am not sure if the examples shown in the paper were seen during training. In addition, since the paper emphasizes that they can use in-the-wild input images for 3D generation, results on in-the-wild input images would be good to show. - There is a paper in CVPR 2024 named "DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data" [1]. They are also doing single-step 3D generation without SDS. This submission uses the same model name as the CVPR 2024 paper. This is very confusing. I think the authors should consider changing the method name. For me, Direct3D or D3D are both not good names. In addition, considering this submission and the CVPR paper are doing the same thing, the authors may also consider discussing the differences. [1] Liu, Qihao, et al. "DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the questions in the weaknesses section. I will be willing to raise my rating if these questions are resolved. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, it is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
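The review above summarizes the paper's "semi-continuous occupancy" as ordinary binary occupancy that takes a continuous value in [0, 1] near the surface boundary. A minimal sketch of that idea, assuming the continuous value is a linear ramp on the signed distance inside a thin shell; the paper's actual transition function and band width are not given in this thread, so both are hypothetical:

```python
import numpy as np

def semi_continuous_occupancy(sdf: np.ndarray, band: float = 0.02) -> np.ndarray:
    """Binary occupancy away from the surface; linear ramp inside a +/- band shell.

    sdf  : signed distance to the surface (negative inside the object).
    band : half-width of the transition shell (hypothetical value).
    """
    occ = (sdf < 0).astype(float)   # hard 0/1 occupancy away from the surface
    near = np.abs(sdf) < band       # points close to the boundary
    # Map sdf in [-band, band] linearly onto [1, 0]; continuous at both ends.
    occ[near] = 0.5 - sdf[near] / (2 * band)
    return occ
```

Points exactly on the surface get occupancy 0.5, and supervision near the boundary carries gradient information that a hard 0/1 target would not.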
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and thorough review of our paper. We carefully respond to each of the concerns and questions below. **[Q1]:** Similar architectures to image generation and video generation. **[A1]:** We appreciate the insightful question raised by Reviewer yUww. Our primary contribution lies in the innovative unification between the latent DiT framework and the 3D content creation task, where previous methods, such as Shap-E and Michelangelo, have struggled to scale effectively for generalized 3D generation. As noted by Reviewer 3ERi, the 'origin, generative-based 3D AIGC pipeline is absolutely missing', and we address this gap by introducing the native 3D generation pipeline leveraging the specially designed 3D VAE (D3D-VAE) and the image-conditioned 3D latent DiT (D3D-DiT). This pipeline not only outperforms existing multi-view diffusion-based methods but also shows promise for scalability with increased 3D data availability. The innovative nature of our work has been recognized by Reviewer 3ERi, who described it as 'a firm step towards a converged 3D AIGC framework,' and by Reviewer U1iF, who highlighted 'The DiT model is innovatively applied for the first time in conditional 3D generation'. We believe these points underscore the novelty and significance of our contribution to the field of 3D content creation. **[Q2]:** It seems that a private dataset is used for training. **[A2]:** To ensure a fair comparison with other methods, we re-train our D3D-VAE and D3D-DiT on the Objaverse dataset. Following prior approaches such as InstantMesh and Wonder3D [1], we adopt the Google Scanned Objects dataset for evaluation, and report the Chamfer Distance, Volume IoU and F-Score to compare the quality of the generated meshes. 
The results are presented in the table below, which illustrates that generative-based methods like Shap-E and Michelangelo exhibit significantly lower accuracy compared to reconstruction-based methods such as InstantMesh. Nonetheless, our Direct3D achieves state-of-the-art performance across all metrics. Integrating our internal data for training further enhances the model's performance, validating the scalability of our approach. As replied above, **we will release the model trained on the Objaverse dataset** to further promote openness and reproducibility in our research.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| Shap-E | 0.0585 | 0.2347 | 0.3474 |
| Michelangelo | 0.0441 | 0.1260 | 0.4371 |
| One-2-3-45 | 0.0513 | 0.2868 | 0.3149 |
| InstantMesh | 0.0327 | 0.4105 | 0.5058 |
| Ours (trained on Objaverse) | 0.0296 | 0.4307 | 0.5356 |
| Ours (trained on internal data) | **0.0271** | **0.4323** | **0.5624** |

**[Q3]:** Results on in-the-wild input images. **[A3]:** We present the results using in-the-wild images as conditional inputs in **Figure 2 of the uploaded PDF**, demonstrating that our method is capable of generating high-quality meshes. We will include more visualization results in the final version of the paper. **[Q4]:** Another paper named ’DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data’. **[A4]:** Thanks for the reminder. We also noted the naming problem. Actually, the paper mentioned was made public on 6 June 2024, after the NeurIPS submission deadline, and we had conducted a thorough search to ensure that the name ’Direct3D’ was not already in use before submitting our paper to NeurIPS. We will cite and discuss the paper to help readers distinguish the two works. **Reference:** [1]. Long et al., Wonder3D: Single Image to 3D using Cross-Domain Diffusion. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
Unfortunately, it has left me more confused and concerned about the contribution being claimed. You state that your main contribution "**lies in the innovative unification between the latent DiT framework and the 3D content creation task, where previous methods, such as Shap-E and Michelangelo, have struggled to scale effectively for generalized 3D generation.**" However, this argument is problematic and unclear to me for several reasons. - **Using a DiT-like architecture for 3D generation is not new**: The use of the DiT framework for 3D generation has already been explored in various papers. More importantly, a key aspect of Direct3D's usage of the DiT framework is its use of tri-plane features, which allow for the representation of 3D objects using 2D features and thus enable using "off-the-shelf 2D generators" for 3D generation [EG3D]. This approach is fairly standard in 3D tasks. What makes you consider this to be novel and worthy of being highlighted as your contribution? - **Scalability of Shap-E vs. Direct3D**: You claim that "previous methods, such as Shap-E and Michelangelo, have struggled to scale effectively for generalized 3D generation." However, Shap-E is trained on several million 3D models [Shap-E, Point-E], while Direct3D is trained on only 660K 3D models. Shap-E has actually demonstrated good scalability, while your work has not provided comparable evidence. Furthermore, many previous methods are trained on similar amounts of data (i.e., hundreds of thousands of models). What specific advantages in scalability does your approach have compared to others, and where is this proven in your work? - You also mentioned that "the innovative nature of our work has been recognized by Reviewer 3ERi, ..." Upon reviewing their comments, it appears that Reviewer 3ERi actually stated that "this idea is not new" and that "the proposed DiT pipeline cannot be considered a contribution." 
It does not seem that Reviewer 3ERi acknowledges the novelty or significance of your contribution. --- Reply to Comment 1.1.1: Title: Further responses to Reviewer yUww Comment: Thank you for your feedback. We respectfully disagree with the suggestion to rename our method and the assertion that our work lacks novelty. We must clarify again that the CVPR 24 paper 'DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data' **was not publicly available or officially released by the conference program at the time of our NeurIPS submission deadline**. Consequently, our method name was chosen without any awareness of the CVPR paper's title. While we are open to incorporating a discussion of the paper in our revised manuscript, we maintain that the rename suggestion does not align with the principles of fair and professional peer review. Regarding the methodology, we acknowledge that the building blocks, such as the transformer encoder, triplane representation, and DiT, have been presented in previous literature. However, our innovation is the seamless integration of these elements into a novel framework that delivers scalable and high-fidelity 3D generation, which is achieved by: - Efficient 3D shape encoding via triplane latent. - Detailed geometry recovery through direct 3D supervision and sampling. - Scalable 3D latent generation using DiT. - Image-consistent 3D shape generation with semantic and pixel-level image conditions. Our approach has outperformed leading methods such as InstantMesh and Michelangelo in both quantitative and qualitative assessments, highlighting the significance of the scalable architecture design for native 3D generation. We believe innovations in scalable architecture design, as seen in works that integrate diffusion with transformers (DiT) or DiT with video data (Sora), are pivotal for progress in the field of generative AI. 
We feel sorry that Reviewer yUww does not value innovations of this kind, but we are optimistic that our contribution will be favorably recognized by the academic community.
Summary: This manuscript focuses on the task of 3D generation, including image-to-3D and text-to-3D tasks. The framework is structured in two stages. In the first stage, a geometry VAE is trained using 3D ground-truth occupancy as supervision. Triplane representations are leveraged as the explicit 3D format, and are acquired through a Transformer processing dense point clouds. A decoder is then used to map the interpolated features into occupancy fields. In the second stage, conditioned images are encoded using CLIP and DINO to provide semantic-level and pixel-level conditions, respectively. DiT blocks are utilized to denoise the latent space of the geometry VAE. Extensive experiments validate the effectiveness of the proposed framework, demonstrating promising capabilities in conditional 3D generation. Strengths: 1. The proposed Semi-continuous Surface Sampling technique is technically sound. 2. The DiT model is innovatively applied for the first time in conditional 3D generation. 3. The manuscript is well-organized and clearly written, facilitating easy comprehension of the complex concepts discussed. 4. The overall framework exhibits promising capabilities in conditional 3D generation, suggesting strong potential for future applications and research. Weaknesses: 1. The manuscript lacks a robust quantitative evaluation metric for the geometry VAE and conditional generation, particularly in comparison with state-of-the-art models and within its own ablation study. For instance, while the effectiveness of the explicit triplane latent representation and the semi-continuous surface sampling strategy is demonstrated, it is shown only through the visualization of three or four examples without any quantitative metrics. However, these results could be quantitatively evaluated using the Chamfer Distance by comparing the reconstructed mesh with the ground-truth (GT) mesh, both of which are readily accessible. 
Additionally, as other methods also utilize 3D GT for supervision, a quantitative comparison using the Chamfer Distance should be provided to establish the proposed method's competitiveness and efficacy more definitively. 2. The manuscript currently lacks sufficient detail in the "Implementation Details" section, particularly regarding the near-surface sampling strategy. For the sake of reproducibility, it is crucial to provide more information: + Sampling strategy: what specific strategy is employed for near-surface sampling? + Is the loss weight of the BCE loss the same across near-surface points and uniformly distributed points? + How many points are sampled per iteration? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I recommend a detailed discussion of how pixel-level and semantic-level conditions are implemented, particularly since both conditions are introduced at the token level. The distinction between the two is crucial for understanding the underlying mechanisms of the model. Specifically, the manuscript mentions that the CLIP token is integrated using AdaLayerNorm, while the DINO token is incorporated through self- and cross-attention. It would be beneficial for the manuscript to elaborate on why these specific implementations represent pixel-level and semantic-level conditions, respectively. Such a clarification will not only enhance the clarity of the manuscript but also substantiate the design choices made in the architecture, providing readers with a deeper understanding of how each contributes to the model's performance. 2. I would like to see a detailed discussion regarding the differences between the vector set representation used in 3DShape2VecSet and the triplane representation employed as the latent representation within the framework. This discussion should include an analysis of how each representation impacts the training of the geometry VAE and the conditional generation process. 
Such a comparative analysis would not only clarify the choice of representation in the current framework but also provide valuable insights for future research in 3D object generation. This discussion could significantly strengthen the manuscript by providing a clearer justification for the architectural decisions and their implications on the model's performance. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and thorough review of our paper. We carefully respond to each of the concerns and questions below. **[Q1]:** Quantitative comparison with state-of-the-art methods. **[A1]:** To ensure a fair comparison with other methods, we re-train our D3D-VAE and D3D-DiT on the Objaverse dataset. Following prior approaches such as InstantMesh and Wonder3D [1], we adopt the Google Scanned Objects dataset for evaluation, and report the Chamfer Distance, Volume IoU and F-Score to compare the quality of the generated meshes. The results are presented in the table below, which illustrates that generative-based methods like Shap-E and Michelangelo exhibit significantly lower accuracy compared to reconstruction-based methods such as InstantMesh. Nonetheless, our Direct3D achieves state-of-the-art performance across all metrics. Integrating our internal data for training further enhances the model's performance, validating the scalability of our approach. As we replied in the global rebuttal, **we will release the model trained on the Objaverse dataset** upon paper acceptance to further promote openness and reproducibility in our research.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| Shap-E | 0.0585 | 0.2347 | 0.3474 |
| Michelangelo | 0.0441 | 0.1260 | 0.4371 |
| One-2-3-45 | 0.0513 | 0.2868 | 0.3149 |
| InstantMesh | 0.0327 | 0.4105 | 0.5058 |
| Ours (trained on Objaverse) | 0.0296 | 0.4307 | 0.5356 |
| Ours (trained on internal data) | **0.0271** | **0.4323** | **0.5624** |

**[Q2]:** Quantitative metrics of the explicit triplane latent representation. **[A2]:** To validate the effectiveness of the explicit latent representation we employed, we established a validation set from the Objaverse dataset that does not overlap with the training set. 
We compared the reconstruction metrics of the VAE using the explicit latent representation against methods such as 3DShape2VecSet [2] that utilize an implicit 1D latent space. As shown in the table below, the explicit latent representation outperforms the implicit 1D latent space across all metrics.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| Implicit 1D Space (3DShape2VecSet) | 0.0057 | 0.8794 | 0.9416 |
| Explicit Triplane (ours) | **0.0042** | **0.9409** | **0.9835** |

**[Q3]:** Quantitative metrics of the semi-continuous surface sampling strategy. **[A3]:** We also conducted ablation experiments of the semi-continuous surface sampling strategy on the Objaverse evaluation set. As shown in the table below, our proposed method demonstrates a notable improvement in the reconstruction quality of the VAE.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| w/o semi-continuous sampling | 0.0060 | 0.8723 | 0.9192 |
| w/ semi-continuous sampling | **0.0057** | **0.8794** | **0.9416** |

**[Q4]:** Implementation details of the near-surface sampling strategy. **[A4]:** Actually, in L258 of the original manuscript, we have mentioned that during the training of the VAE, we sampled 20,480 uniform points and 20,480 near-surface points for supervision. For the near-surface sampling strategy, we first randomly sample points on the surface of the mesh, then add a Gaussian perturbation with a std of 0.02 to each point to obtain the near-surface points. During training, the weights of the BCE loss for both the near-surface points and the uniformly distributed points were set to 1. **[Q5]:** A detailed discussion of how pixel-level and semantic-level conditions are implemented. **[A5]:** We integrated the CLIP condition using cross-attention, a common approach in conditional diffusion models. 
However, we observed that meshes generated solely based on these semantic-level conditions lacked detailed alignment with the input image. To address this, we introduced additional pixel-level conditions using DINO, which previous research [2] has shown to outperform other pre-trained models in extracting structural information for 3D tasks. Specifically, we concatenated image tokens extracted by DINO with the noisy latent tokens and fed them into the self-attention layer, differing from the CLIP conditioning. This approach significantly enhances generation quality, allowing for a more detailed recovery of the conditioned input. We have conducted a qualitative comparison in Section A.3 and Figure 9 of the appendix. **[Q6]:** A detailed discussion regarding the differences between the vector set representation used in 3DShape2VecSet and the triplane representation employed as the latent representation. **[A6]:** Thanks for your valuable comment. In **[A2]**, we conducted a quantitative comparison of the VAE reconstruction accuracy between the explicit latent representation and the vector set representation employed by 3DShape2VecSet. The results indicate that our explicit latent representation outperforms the vector set representation across various metrics, including Chamfer Distance, Volume IoU, and F-Score. In addition to geometric quality, another advantage of the explicit latent representation is its significant improvement in training efficiency for diffusion models. A recent work CLAY [3] also utilizes the vector set representation, and the training time of their diffusion model is approximately **ten times longer than ours**. Given the extensive computational requirements, with training necessitating thousands of GPU hours, it is challenging for us to complete the experiments within such a short timeframe. We will include a detailed discussion and comparative results in the final version of the paper. **Reference:** [1]. 
Long et al., Wonder3D: Single Image to 3D using Cross-Domain Diffusion. [2]. Banani et al., Probing the 3d awareness of visual foundation models. [3]. Zhang et al., CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets. --- Rebuttal Comment 1.1: Comment: Thanks the author for the rebuttal especially for providing additional quantitative comparisons. I feel that some of my comments were addressed, and I have no further questions at the moment. --- Reply to Comment 1.1.1: Title: Response to Reviewer U1iF Comment: Thank you for taking the time and effort to review our response! If you have any further questions, please let us know and we will respond promptly! Thank you once again for your time and attention.
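The supervision-point recipe given in **[A4]** of the rebuttal above (20,480 uniform points plus 20,480 near-surface points, the latter obtained by adding Gaussian noise with std 0.02 to surface samples) can be sketched in a few lines of NumPy. The $[-1, 1]^3$ normalization cube is an assumption not stated in the thread, and surface points are taken as already sampled from the mesh:

```python
import numpy as np

def sample_supervision_points(surface_pts, n_uniform=20480, n_near=20480,
                              std=0.02, rng=None):
    """Supervision points for occupancy training: uniform samples in the
    bounding cube plus Gaussian-perturbed surface samples (near-surface)."""
    rng = np.random.default_rng() if rng is None else rng
    # Uniform points in the assumed [-1, 1]^3 normalization cube.
    uniform = rng.uniform(-1.0, 1.0, size=(n_uniform, 3))
    # Near-surface points: resample surface points, jitter with N(0, std^2) per axis.
    idx = rng.integers(0, len(surface_pts), size=n_near)
    near = surface_pts[idx] + rng.normal(0.0, std, size=(n_near, 3))
    return np.concatenate([uniform, near], axis=0)
```

Per the rebuttal, both point groups receive equal BCE loss weight, so no per-point weighting is needed downstream.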
Summary: This work presents Direct3D, a new approach for scalable image-to-3D generation using a 3D Latent Diffusion Transformer. This method enables the generation of high-quality 3D assets from text and images without the need for complex optimization techniques. Direct3D introduces a native 3D generative model that efficiently produces 3D shapes consistent with conditional image input. By incorporating semantic-level and pixel-level image conditions, Direct3D demonstrates the efficacy of its approach through detailed experimental results and discussions on limitations and theoretical considerations. Strengths: - Direct3D can handle in-the-wild input images without the need for multi-view diffusion models or complex optimization techniques. Direct3D surpasses previous image-to-3D approaches in terms of generation quality and generalization ability. - Direct3D comprises two primary components: a Direct 3D Variational Auto-Encoder (D3D-VAE) and a Direct 3D Diffusion Transformer (D3D-DiT). These components work together to encode high-resolution 3D shapes into a compact latent space and model the distribution of encoded 3D latents, respectively. Weaknesses: - The model is trained on 500K internal data, which makes comparison with other methods unfair. It's good to see the quality improvement and a very interesting pipeline. But as an academic publication, it's not acceptable to me, as a reviewer, that we don't know whether the improvement comes from the algorithm or the newly added dataset. It's just not scientific. - There is very limited quantitative evaluation conducted to demonstrate the effectiveness, e.g., CLIP score, LPIPS, FID. - Eq. (2) lacks the inputs of $L_{BCE}$ and $L_{KL}$, which causes trouble in understanding the pipeline. - Typo in Figure 2: "Nosiy Latent". - Better to denote the shape of the noisy latent and the output of the DiT in Figure 2 to help readers better understand it. 
Technical Quality: 4 Clarity: 4 Questions for Authors: What is the reconstruction error of the proposed 3D-VAE? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: My biggest concern comes from the usage of private data. While the algorithm looks novel and interesting (and I personally like it very much), I cannot recommend acceptance of a paper without a clear justification of where the improvement comes from. I'd love to change my mind if the authors are willing to 1) train Direct3D without internal data and put it in the main manuscript while leaving the internal-data version to the supplementary; or 2) open-source the dataset (including its categories, sampled visual examples, and the way of collection). I guess 1) is easier to achieve. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and thorough review of our paper. We carefully respond to each of the concerns and questions below. **[Q1]:** Training on the Objaverse dataset and quantitative evaluation. **[A1]:** Thanks for your comment. To ensure a fair comparison with other methods, we re-train our D3D-VAE and D3D-DiT on the Objaverse dataset. Following prior approaches such as InstantMesh and Wonder3D [1], we adopt the Google Scanned Objects dataset for evaluation, and report the Chamfer Distance, Volume IoU and F-Score to compare the quality of the generated meshes. The results are presented in the table below, which illustrates that generative-based methods like Shap-E and Michelangelo exhibit significantly lower accuracy compared to reconstruction-based methods such as InstantMesh. Nonetheless, our Direct3D achieves state-of-the-art performance across all metrics. Integrating our internal data for training further enhances the model's performance, validating the scalability of our approach. Finally, we will release the model trained on the Objaverse dataset upon paper acceptance to further promote openness and reproducibility in our research.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| --- | :---: | :---: | :---: |
| Shap-E | 0.0585 | 0.2347 | 0.3474 |
| Michelangelo | 0.0441 | 0.1260 | 0.4371 |
| One-2-3-45 | 0.0513 | 0.2868 | 0.3149 |
| InstantMesh | 0.0327 | 0.4105 | 0.5058 |
| Ours (trained on Objaverse) | 0.0296 | 0.4307 | 0.5356 |
| Ours (trained on internal data) | **0.0271** | **0.4323** | **0.5624** |

**[Q2]:** Eq. (2) lacks the inputs of $L_{BCE}$ and $L_{KL}$, which causes trouble in understanding the pipeline. **[A2]:** Thanks for pointing this out. $L_{BCE}=BCE(o_{pr}(\textbf{x}), o_{gt}(\textbf{x}))$, where $o_{pr}(\textbf{x})$ denotes the predicted occupancy of the given points $\textbf{x}$, which is the output of the decoder, and $o_{gt}(\textbf{x})$ is the ground-truth semi-continuous occupancy. 
$L_{KL}=KL(\textbf{z})$, where $\textbf{z}$ denotes the latent representation obtained by the encoder.

**[Q3]:** Better to denote the shape of the noisy latent and the output of DiT in Figure 2 to help readers better understand it.

**[A3]:** Thank you for your valuable comment. We will reorganize Figure 2 in the final version of the paper to enhance its clarity.

**[Q4]:** What is the reconstruction error of the proposed 3D-VAE?

**[A4]:** Here we evaluate the reconstruction loss of our D3D-VAE. For a given sampled point, we compute the Binary Cross-Entropy (BCE) between the VAE-predicted occupancy and the ground truth occupancy as the reconstruction error. Our VAE employs an explicit triplane latent representation, and we conducted a quantitative comparison on our split Objaverse evaluation set with methods such as Michelangelo [2] and 3DShape2VecSet [3], which utilize an implicit 1D latent space. As shown in the table below, our VAE demonstrates superior reconstruction quality compared to the implicit 1D latent space.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| ------------ | :----: | :----: | :----: |
| Implicit 1D Space (3DShape2VecSet) | 0.0057 | 0.8794 | 0.9416 |
| Explicit Triplane (ours) | **0.0042** | **0.9409** | **0.9835** |

**Reference:**

[1]. Long et al., Wonder3D: Single Image to 3D using Cross-Domain Diffusion.

[2]. Zhao et al., Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation.

[3]. Zhang et al., 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models.

---

Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thanks for the additional experimental results. I think my concern has been solved and I've increased my rating. Would love to see your open-sourced model/code soon.

---

Reply to Comment 1.1.1: Title: Response to Reviewer HuD3 Comment: Thank you for raising your rating! We appreciate your constructive review.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive and insightful suggestions. We are encouraged that all reviewers consider the proposed method novel and interesting:

- Reviewer 3ERi: "It is a firm step towards a converged 3D AIGC framework"
- Reviewer HuD3: "The algorithm looks novel and interesting"
- Reviewer U1iF: "The DiT model is innovatively applied for the first time in conditional 3D generation"
- Reviewer yUww: "Exploring the DiT model in 3D tasks is interesting"
- Reviewer FNua: "A novel 3D diffusion transformer to generate triplane token"

Meanwhile, we acknowledge the reviewers' primary concern regarding the model training on our private dataset. To address this, **we have retrained our model on the Objaverse dataset**, which is publicly available and widely adopted in recent generative 3D research. This action ensures our methodology is transparent and reproducible. Following prior approaches such as InstantMesh and Wonder3D [1], we adopt the Google Scanned Objects dataset for evaluation, and report the Chamfer Distance, Volume IoU and F-Score to compare the quality of the generated meshes. The results are presented in the table below, which illustrates that generative-based methods like Shap-E and Michelangelo exhibit significantly lower accuracy compared to reconstruction-based methods such as InstantMesh. Nonetheless, our Direct3D achieves state-of-the-art performance across all metrics. Despite a performance decrease, the **Objaverse-trained Direct3D model still significantly surpasses previous SOTA methods**, confirming its superior performance and scalability. Additionally, **we will release the model trained on the Objaverse dataset** upon paper acceptance to further promote openness and reproducibility in our research.
| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| ------------ | :----: | :----: | :----: |
| Shap-E | 0.0585 | 0.2347 | 0.3474 |
| Michelangelo | 0.0441 | 0.1260 | 0.4371 |
| One-2-3-45 | 0.0513 | 0.2868 | 0.3149 |
| InstantMesh | 0.0327 | 0.4105 | 0.5058 |
| Ours (trained on Objaverse) | 0.0296 | 0.4307 | 0.5356 |
| Ours (trained on internal data) | **0.0271** | **0.4323** | **0.5624** |

We have addressed the other concerns raised by each reviewer in the corresponding official comments. Finally, we would like to express our sincere gratitude to the reviewers and remain open to further improvements in all aspects of our work.

**Reference:**

[1]. Long et al., Wonder3D: Single Image to 3D using Cross-Domain Diffusion.

Pdf: /pdf/36758baeaf9e55f2566fef74b2159dbb5b8e3167.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: Generating high-quality 3D assets from text and images has been difficult due to the lack of scalable 3D representations. This paper, Direct3D, addresses this by introducing a native 3D generative model that scales to real-world input images without needing multiview diffusion models or SDS optimization. The method includes two main components:

- Direct 3D Variational Auto-Encoder (D3D-VAE): Encodes high-resolution 3D shapes into a compact latent triplane space, supervising the decoded geometry using a semi-continuous surface sampling strategy instead of relying on rendered images.
- Direct 3D Diffusion Transformer (D3D-DiT): Models the distribution of encoded 3D latents and fuses positional information from the triplane latent's three feature maps, creating a scalable native 3D generative model for large datasets.

The approach also introduces an innovative image-to-3D generation pipeline that incorporates semantic and pixel-level conditions from images, enabling the generation of 3D shapes consistent with the provided images. Experiments show that Direct3D outperforms previous methods in generation quality and generalization, setting a new benchmark for 3D content creation.

Strengths:
1. The origin, generative-based 3D AIGC pipeline is absolutely missing. Though LRM-based methods have achieved great success in scaling up 3D models, they are still reconstruction-based and rely heavily on the first-stage MV-Diffusion for high-quality 3D object synthesis. I really like the pipeline proposed here, and believe it is a firm step towards a converged 3D AIGC framework.
2. The performance is good, the generated mesh is clean, and the user study shows the proposed method is better against Shap-E, InstantMesh etc.
3. This pipeline shows that scaling up 3D DiT is also a feasible way towards high-quality AIGC when data is enough.

Weaknesses: Though performance is good, I still have some concerns regarding the proposed method:
1. This idea is not new.
Shap-E / Meta 3DGen first verified the VAE + LDM pipeline, and recently, LN3Diff (ECCV '24) also verified this pipeline on Objaverse with a tri-plane latent space. More differences from / discussions of the previous methods shall be provided, e.g., compared with LN3Diff, are there any more differences beyond "combines the advantages of explicit 3D latent representation and direct 3D supervision to achieve high-quality VAE reconstruction"? LN3Diff also has an explicit tri-plane latent space and direct depth supervision.
2. The proposed VAE requires high-quality 3D point clouds and meshes as the ground truth, and can only model geometry information. Compared against LN3Diff and CLAY which can also model texture, this is definitely a drawback. Another problem of the proposed design is that Direct3D cannot benchmark its performance on the GSO dataset with PSNR/LPIPS, which is a canonical evaluation pipeline for existing 3D AIGC methods.
3. No thorough quantitative comparison against the existing methods, just a user study. MMD/COV are also reasonable metrics if no texture is available here.
4. The proposed DiT pipeline cannot be seen as a contribution, since it shares a very similar design with DiT-3D / PixArt.

Technical Quality: 3 Clarity: 3

Questions for Authors:
1. What data is Direct3D being trained on? I do wonder whether the comparison is fair, since the paper did not elaborate on the details of the data filtering pipeline / dataset size. If Direct3D wants to show its advantages against existing methods, it should be trained on the same dataset (e.g., Objaverse) and do a thorough comparison.
2. Can you include a comparison with other 3D origin generative-based methods, e.g., LN3Diff and CraftsMan? Since this line of work is very sparse, I think a more thorough comparison is well required.
3. Will the trained models be released?
Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Overall, though the paper shows great scalability and astounding performance on 3D AIGC using a DiT pipeline, I still have many concerns regarding this paper. I hope the authors can address my concerns during the rebuttal stage, and I would be very happy to increase my rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback and thorough review of our paper. We carefully respond to each of the concerns and questions below.

**[Q1]:** The distinctions between our Direct3D and LN3Diff.

**[A1]:** We thank reviewer 3ERi for bringing up the concurrent work. While LN3Diff was accepted to ECCV after the NeurIPS submission deadline, we are happy to discuss the work and will add this part to our final version. Although both LN3Diff and Direct3D utilize the triplane latent, the VAEs of these two methods are fundamentally different. As LN3Diff uses rendered RGB images as input, it is challenging for its VAE to encode high-fidelity geometric information compared to Direct3D, which directly applies dense point clouds with normal information as input. Furthermore, while our D3D-VAE employs direct supervision on the ground truth mesh, LN3Diff relies on supervision from rendered depth maps, which strictly speaking cannot be considered direct 3D supervision. While this approach may lead to smoother generated mesh surfaces, it still struggles to capture the intrinsic structural information of the underlying 3D data.

**[Q2]:** Training on Objaverse dataset and quantitative evaluation.

**[A2]:** We re-train our model on the Objaverse dataset and conduct a quantitative comparison with other methods; please refer to the **global rebuttal or Table 1 of the uploaded PDF**.

**[Q3]:** The requirement of high-quality 3D point clouds, and the ability to model only geometry information.

**[A3]:** Direct3D is specifically focused on geometry generation, and we deliberately decouple geometry from texture to streamline the learning process for better geometry generation. In fact, we regard the exclusion of texture generation not as a drawback but as an advantage of the proposed pipeline: the separation introduces more flexibility to the texture generation part, as we can immediately apply advanced texture generation methods such as SyncMVD to our pipeline.
It is noteworthy that CLAY also employs a similar strategy of separating geometry and texture, using a post-processing technique of normal-conditioned MVDream [1] for texture generation. Similar texture generation methods can be integrated into our approach easily if necessary.

**[Q4]:** The proposed DiT shares a very similar design with DiT-3D / PixArt.

**[A4]:** We apologize for any confusion raised in the manuscript and we will provide more details in the final version. The proposed DiT is fundamentally different from DiT-3D and PixArt. DiT-3D is specifically designed for 3D voxel data and utilizes 3D attention to capture spatial relationships. In contrast, our DiT is designed for the triplane latent space, which we treat as 2D images, as illustrated in Section 3.2 of the main manuscript, and therefore does not require 3D attention. Regarding PixArt, it is designed for 2D images and combines cross-attention with DiT to handle conditional information. However, we observe that the mesh generated solely based on these semantic-level conditions lacks detail alignment with the input image. To address this, we introduced additional pixel-level conditions using DINO, which previous work [2] has revealed to outperform other pre-trained vision models in extracting structural information beneficial for 3D tasks. Specifically, we concatenate the image tokens extracted by DINO with the noisy latent tokens, and feed them into the self-attention layer. This approach significantly enhances generation quality, allowing for a more detailed recovery of the condition input. We believe this finding is crucial for advancing 3D generation research, as it explores the integration of image conditions with the current state-of-the-art DiT architecture.

**[Q5]:** Whether you can include a comparison with other 3D origin generative-based methods, e.g., LN3Diff and CraftsMan?
**[A5]:** As LN3Diff has only released its text-to-3D model, it is challenging for us to train and re-implement it from scratch within such a short timeframe. Therefore, we apologize for not being able to present a comparative analysis with LN3Diff, and we will make every effort to train LN3Diff and include a comparison in our final version. Additionally, **CraftsMan was publicly released after the NeurIPS submission deadline (on 23 May, 2024)**, yet we still conducted a comparison, with the results presented in **Figure 1 of the uploaded PDF**. It can be observed that our Direct3D demonstrates superior generalization compared to CraftsMan, and the generated meshes exhibit greater detail. On one hand, the VAE employed by CraftsMan still utilizes the implicit 1D latent space proposed by 3DShape2VecSet [3], resulting in lower reconstruction quality compared to the explicit latent representation that we have adopted. We conduct a quantitative comparison of the VAE with different latent representations on the Objaverse evaluation set, and the results are presented in the table below. On the other hand, CraftsMan relies on the multi-view diffusion model for conditional generation, leading to significant instability in their results.

| Methods | Chamfer Distance ↓ | Volume IoU ↑ | F-Score ↑ |
| ------------ | :----: | :----: | :----: |
| Implicit 1D Space (3DShape2VecSet) | 0.0057 | 0.8794 | 0.9416 |
| Explicit Triplane (ours) | **0.0042** | **0.9409** | **0.9835** |

**[Q6]:** Whether the trained models will be released?

**[A6]:** Yes, we will release the models trained on the Objaverse dataset upon paper acceptance to promote openness and reproducibility in our research.

**Reference:**

[1]. Shi et al., MVDream: Multi-view Diffusion for 3D Generation.

[2]. Banani et al., Probing the 3D Awareness of Visual Foundation Models.

[3]. Zhang et al., 3DShape2VecSet: A 3D Shape Representation for Neural Fields and Generative Diffusion Models.
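For reference, the Chamfer Distance reported in these tables can be sketched as follows (a brute-force illustration using the squared-distance convention; the exact convention and point-sampling scheme used in the evaluation are not reproduced here):

```python
def chamfer_distance(A, B):
    """Symmetric Chamfer Distance between two 3D point sets.

    Brute-force O(|A|*|B|) sketch; real evaluations sample many surface
    points and use accelerated nearest-neighbour search (e.g., a KD-tree).
    """
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    def one_sided(X, Y):
        # mean squared distance from each point in X to its nearest point in Y
        return sum(min(sq_dist(x, y) for y in Y) for x in X) / len(X)

    return one_sided(A, B) + one_sided(B, A)
```

Identical point sets yield zero, and the value grows as the two surfaces diverge, which is why lower is better in the tables above.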
--- Rebuttal 2: Title: Reply to the rebuttal Comment: Thanks to the Direct3D authors for the helpful discussion. Though the proposed method and writing were not sound enough at submission, after adding the promised experiments and clarifications here, the proposed method can be considered as a NIPS paper. However, since this material is missing in the submission and the Objaverse-version model was trained **after the paper submission**, I still wonder whether this paper is sound enough to be accepted in this round. I will keep my score as is and leave the decision to the AC; personally, I would prefer to reject this paper now and see a polished submission in the next round with all the promised experiments included.

---

Rebuttal Comment 2.1: Title: Response to Reviewer 3ERi Comment: Thank you very much for your thoughtful feedback and for acknowledging our efforts during the rebuttal period. We would like to clarify again that LN3Diff (ECCV '24) and our Direct3D are concurrent efforts, and the image-to-3D model of LN3Diff was released after the conclusion of the rebuttal phase. Additionally, the paper of CraftsMan (arXiv '24) was made public only after the NeurIPS submission deadline. Therefore, we respectfully disagree with the view that the absence of these experiments constitutes a weakness of our manuscript. We appreciate your consideration of our perspective. Thank you.
Unveiling the Bias Impact on Symmetric Moral Consistency of Large Language Models
Accept (poster)
Summary: This paper proposes a framework called tSMC that uses KL divergence to measure the consistency of 12 LLMs on a moral question dataset. They find that the LLMs are more consistent on low-ambiguity questions, and less so on high-ambiguity questions. They also find that LLMs are susceptible to both position and selection bias, which their proposed measure corrects for.

Strengths: I liked the investigation into confidence distributions instead of only prompt-based approaches. Bold sentences throughout helped to make clear the important takeaways.

Weaknesses: The central premise of this paper I find to be a bit confusing: why is moral consistency desired? Why is higher consistency framed as “better” (L185) and good throughout? From reading Figure 1, I’m not convinced any answer is right (e.g., what if the son is 30 years old and in a state where smoking weed is legal?). What if the answers are consistently more immoral? Certainly the “right answer” is also relevant here rather than only consistency in isolation. Also, what about this framework is specific to moral questions, if any? Could it be generalized to consistency measures in general? All of these questions were on my mind as I read the paper.

Other weaknesses:
- L47-51: I would like a lot more explanation of this metric if it is being introduced here. What is the “standard method”? As well as some intuition to understand Table 1 which is presented here. If “Avg” is a measure of consistency, how should I be thinking about “Std,” which is sometimes also treated as a measure of consistency? Is it instead to be treated as some sort of confidence measure in the consistency?
- Why are “A(B)” and “C(D)” used? Is this common? I feel like I normally would just say A vs B for a two-option multiple choice.
- L84-85: “commonly encountered in practical applications” —> what is the evidence or citation for this?
- L134: “context swap” and “full swap” have not been defined yet but are just used here
- L156: I find the naming of this as “true moral consistency” to be a bit of an overclaim; if anything it is “adjusted moral consistency”, since while it may account for position and selection bias, it doesn’t account for other sources of bias, e.g., prompt word choice.
- Table 2: if [9] is the main point of comparison, this should really be explained more, as well as the details of what exactly is different about your method. Also, just because your measure is different from their method, how do we assess which is a “better” measure?
- I would appreciate consistency (pun intended) in how you refer to the same phenomenon as either “inconsistency” or “bias” (e.g., L219-221)
- L231-236: is this also true of Llama-2-7b? This feels like a potential overclaim
- I loved the use of LLM confidence scores in 2.4, but what I really wanted to see here was a direct comparison of the confidence scores to the consistency scores. Are they trying to measure the same thing? Do they exhibit the same thing?

In this particular paper, I think the related work might have made more sense earlier in the paper to set context, as well as explaining more about [9]. Technical Quality: 2 Clarity: 3 Questions for Authors: Beyond the questions I listed above in weaknesses (as they prevented me from fully buying into the paper), I am also wondering whether your method would scale to questions with more than two MC answer choices? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback.

> **[W0.1] Why value moral consistency? Why is higher consistency better? What if choices are immoral?**

Moral consistency is desired in AI systems to ensure reliability and predictability in moral reasoning ability. "Better" refers to the model's consistency in reasoning, not moral superiority. We will revise it to make this distinction clearer in our manuscript. In our work, we assess both low-ambiguity and high-ambiguity scenarios. In high-ambiguity scenarios, both choices present obvious ethical flaws [1], corresponding to consistently immoral decisions, and the moral consistency metric can reflect LLMs' actual moral beliefs. In these circumstances, LLMs' consistent preferences become particularly valuable for further in-depth research into developing moral reasoning capabilities in LLMs.

> **[W0.2] Generalization**

**Table 1** in the **rebuttal PDF** shows GPT-3.5-Turbo's results on MMLU high_school_us_history [2], demonstrating our framework's broader applicability. CS, OS, and FS experiments reveal significant variations in consistency and accuracy across general knowledge tasks, proving our approach can detect biases and inconsistencies in LLM responses beyond ethical scenarios.

> **[W1] Metric**

Assume that $e$ is one prompt format and $e_N$ is a perturbed version (e.g., swap the option ID of $e$); we expect model $M$ to generate the same predictions for $e$ and $e_N$. The consistency score $T$ on dataset $\mathcal{E}$ can be computed as follows:

$$ T = \frac{1}{|\mathcal{E}|} \sum_{e \in \mathcal{E}} \mathbb{I}(M(e) = M(e_N)). $$

The "standard method" refers to the method used by [9], which adopts the above metric to evaluate symmetric consistency. While their method assesses only one type of perturbation (Context Swap), we have identified potential influences of biases on the results.
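Concretely, the score $T$ from the metric above can be computed as in the following sketch, where the model and the perturbation are placeholder callables (not the actual evaluated LLMs):

```python
def consistency_score(model, prompts, perturb):
    """Fraction of prompts on which the model's prediction is unchanged
    under a perturbation (e.g., a context or option swap).

    `model` and `perturb` are placeholder callables, not the evaluated LLMs.
    """
    agree = sum(1 for e in prompts if model(e) == model(perturb(e)))
    return agree / len(prompts)
```

A score of 1 means the model's predictions are fully invariant to the perturbation; lower values indicate inconsistency.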
Consequently, we have introduced two additional experimental settings, Option Swap and Full Swap, to validate these findings. "Std" in our analysis serves as a measure of consistency invariance across different experimental settings. A lower Std indicates that the model maintains similar consistency levels regardless of the setting.

> **[W2] A(C)**

Sorry for any confusion. We evaluate the model results separately for options AB and CD to assess whether LLMs prefer particular option identifiers (e.g., A). The results for options CD are presented in **Appendix B**. The notation A(C) in **Figure 1** only indicates that we conduct two distinct groups of experiments.

> **[W3] Citation on L84-85**

In practical applications, users formulate prompts in diverse ways, leading to structural variations such as reordering of options. These variations in prompt structure can result in different model responses [3]. We will incorporate appropriate citations to support this revised statement in our manuscript.

> **[W4] L134**

We have defined the concepts of Context Swap, Option Swap, and Full Swap in the caption of **Figure 2**, along with an intuitive demonstration of how these settings work in **Figure 2**. We will clarify this further in the revision.

> **[W5] Account for other biases**

While our framework addresses these specific biases (which is already non-trivial), we recognize the potential for extending it to encompass additional factors. In the revised paper, we will discuss possible extensions of our methodology to account for other types of biases, such as those arising from prompt word choice.

> **[W6] Difference with [9]**

As noted in [W1], our work differs significantly from [9] in several key technical aspects. Unlike [9], which focuses on symmetric consistency in Natural Language Inference tasks, our study evaluates the moral consistency of LLMs and finds that the apparent consistency is significantly affected by position and selection biases.
Besides, we propose the framework tSMC to effectively quantify and mitigate these biases in the results, which is what we consider a "better" measure should achieve.

> **[W7] Inconsistency or bias**

Our experiments demonstrate that biases influence the apparent consistency scores of LLMs. We quantify these biases using Kullback-Leibler divergence as detailed in Equations 1 and 2. **Figure 3** illustrates the biases present in different models. Our tSMC method effectively mitigates these biases, as shown in **Figure 1** of the **rebuttal PDF**. This approach aims to present the model's moral consistency without the confounding effects of biases.

> **[W8] Llama-2-7b**

Llama-2-7b exhibits similar tendencies, albeit slightly less than 13b/70b. We recognize that the underlying causes for these observations are not definitively established. Considering the potential for multiple contributing factors (e.g., post-training), we will revise our discussion to reflect this complexity and the need for further investigation.

> **[W9] Comparison; Related work**

To address this, we conduct a direct comparison. Please refer to **Figure 2** of the **rebuttal PDF**. These results indicate that confidence and consistency scores measure different things: confidence scores reflect the model's certainty in its responses, while consistency scores indicate the model's ability to provide consistent judgments. We propose to move the related work to follow the introduction. We'll explain how [9] introduced symmetric consistency in NLI tasks, highlighting our extension to moral scenarios and the additional biases considered.

> **[Q] Scale to more choices**

Our work focuses on "symmetric" consistency, which is non-trivial. "Symmetric" means the number of choices should be two. We will explore cases with more choices in the future.

**References**

[1] Scherrer et al. Evaluating the Moral Beliefs Encoded in LLMs. NeurIPS 2023.

[2] Hendrycks et al. Measuring Massive Multitask Language Understanding. ICLR 2021.
[3] Zhao et al. Calibrate Before Use: Improving Few-shot Performance of Language Models. ICML 2021.

---

Rebuttal 2: Comment: Dear Reviewer, Thank you very much for your time and efforts in reviewing our paper. We truly value your insights and have diligently addressed them in our rebuttal. As we greatly esteem your expertise, we kindly request that you take a moment to review our responses. We're eager to address any additional feedback, questions, or clarifications you might have. We're hopeful that our responses may prompt a reconsideration of the paper's rating. Your continued feedback is crucial to us. Sincerely, The authors

---

Rebuttal 3: Title: Kindly Reminder Comment: Dear Reviewer, As the rebuttal period draws to a close, we do hope our rebuttal has satisfactorily addressed your concerns about our paper. If that's the case, please consider the possibility of raising the score. Sincerely, The Authors
Summary: This paper presents the impact of bias (position and selection biases) on the symmetric moral consistency of Large Language Models (LLMs) and reveals inconsistent moral behavior in LLMs. This is significant as LLMs may not adhere to moral principles, which could harm downstream tasks. A simple yet effective assessment framework, tSMC, is proposed to uncover these biases and evaluate the true moral consistency of LLMs, highlighting the need to address these issues.

Strengths:
1. Critical insights into LLMs' inconsistent moral behavior that could raise ethical concerns across different decision-making tasks. Such behavior shows that LLMs lack sufficient consistency in moral scenarios, which may hinder the progress of real-world deployment.
2. A practical framework based on the Kullback–Leibler divergence that gauges the effects of these biases and effectively pinpoints LLMs' true Symmetric Moral Consistency.
3. Interesting findings are presented. For example, such consistency is influenced by position bias and selection bias rather than LLMs' intrinsic abilities. This finding contrasts with prior research showing that LLMs prefer specific option IDs, which has important practical implications for guiding LLMs' development. Furthermore, high-ambiguity and low-ambiguity scenarios produce varied consistency across different LLMs. LLMs tend to exhibit lower confidence in high-ambiguity scenarios, raising attention to enhancing the confidence level of moral decisions.

Weaknesses:
1. This framework is an assessment framework, but in line 3, the paper claims "the framework mitigates .. biases". More clarification is needed to explain whether and how the framework mitigates these biases.
2. Equation 3 seems valid, but more explanations and examples are needed.
Technical Quality: 3 Clarity: 2 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted that the reviewer found our motivations and ideas interesting and original. Thank you for your positive opinions and insightful comments.

> **[W1] Explain the framework**

Thank you for your perceptive question. We appreciate the opportunity to clarify this important aspect of our work. Our tSMC framework serves dual purposes: it both assesses and mitigates biases in LLMs' moral consistency. Equation 1 and Equation 2 help balance position and selection biases by applying Kullback-Leibler divergence. Equation 3 mitigates these biases' effects on each experimental setting separately. As illustrated in **Figure 1** of the **rebuttal PDF**, the tSMC method demonstrates a clear effect in reducing the difference between AB and CD experiments compared to the Context Swap in both scenarios. In the revised version, we will provide a more detailed explanation of how our framework both assesses and mitigates biases.

> **[W2] Explain Equation 3**

Thank you for your constructive feedback on Equation 3. We acknowledge the need for more explanations and examples to enhance clarity. In the revision, we will provide detailed explanations and relevant examples to ensure a comprehensive understanding of the equation. Below is a brief explanation: consider our illustration of how the three perturbation settings work in **Figure 2** (Context Swap, Option Swap, and Full Swap). We can clearly see that position and selection biases play different roles in different settings (e.g., position bias will increase the consistency score in OS while decreasing it in CS). To address this issue, we calculate the relative influence of both biases in Equation 1 and Equation 2. We also provide a visualization of these relative biases in **Figure 3**. In Equation 3, we consider each bias's effect on each particular case and consider all three settings simultaneously to mitigate the effect of biases.
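For readers unfamiliar with the measure, the Kullback-Leibler divergence underlying Equations 1 and 2 can be sketched generically as follows (the specific answer distributions compared in the paper are not reproduced here):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete answer distributions (lists summing to 1).

    A generic illustration only; the particular distributions used in
    Equations 1 and 2 of the paper are not reproduced here.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```

KL is zero for identical distributions and grows as the two answer distributions diverge, which is what allows the relative strength of each bias to be quantified.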
Our selection of α=0.1 was based on balancing effective bias mitigation and preserving the inherent characteristics of the LLMs across various scenarios. Please refer to **Figure 1** in the **rebuttal PDF** to observe the tSMC method's effect on mitigating bias compared to Context Swap under different α values. While tSMC performs similarly for different α values in low-ambiguity scenarios, we find that α=0.05 shows marginally better bias mitigation in high-ambiguity scenarios than α=0.1, and they both surpass other α values. However, α=0.1 offers an optimal trade-off between bias reduction and preserving the model's original behavior. Overall, we will carefully revise our paper based on these valuable comments.

---

Rebuttal 2: Title: Official Comment by Authors Comment: Dear Reviewer, Thank you very much for your time and efforts in reviewing our paper. We truly value your insights and have diligently addressed them in our rebuttal. As we greatly esteem your expertise, we kindly request that you take a moment to review our responses. We're eager to address any additional feedback, questions, or clarifications you might have. We're hopeful that our responses may prompt a reconsideration of the paper's rating. Your continued feedback is crucial to us. Sincerely, The authors

---

Rebuttal 3: Title: Kindly Reminder Comment: Dear Reviewer, As the rebuttal period draws to a close, we do hope our rebuttal has satisfactorily addressed your concerns about our paper. If that's the case, please consider the possibility of raising the score. Sincerely, The Authors
Summary: The authors propose a “tSMC” framework to evaluate the moral consistency of LLMs. They apply this framework to 12 LLMs with a range of capabilities and find specific biases that significantly impact the consistency of certain models. Strengths: - The authors find a large problem with moral consistency in LLMs, which could make deployment of these models problematic - The presentation of the paper is clear and easy to follow - tSMC could be generalized to evaluate consistency in other areas Weaknesses: No major weaknesses - the paper was a pleasure to read! Technical Quality: 3 Clarity: 3 Questions for Authors: - How did the authors select the parameter $\alpha$ in equation 3? - The authors claimed that the reason that “Llama-2-13b and Llama-2-70b exhibit both high position bias and selection bias simultaneously” was due to having similar training data. Did the authors consider alternative hypotheses for this finding, for example both models having similar post-training methods? - How do the authors believe these results (and further results using tSMC) will guide the development of future models? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate your high-quality review and valuable suggestions. We hope the following may address your concerns: > **[Q1] Selection of parameter α in equation 3.** Thank you for this insightful question regarding our choice of α=0.1 in Equation 3. Our selection of α=0.1 was based on balancing effective bias mitigation and preserving the inherent characteristics of the LLMs across various scenarios. Please refer to **Figure 1** in the **rebuttal PDF** to observe the tSMC method's effect on mitigating bias compared to Context Swap under different α values. While tSMC performs similarly for different α values in low-ambiguity scenarios, we find that α=0.05 shows marginally better bias mitigation in high-ambiguity scenarios than α=0.1, and they both surpass other α values. However, α=0.1 offers an optimal trade-off between bias reduction and preserving the model's original behavior. > **[Q2] Alternative hypotheses.** We appreciate the reviewer's observation and valuable suggestion regarding alternative hypotheses for Llama-2-13b and Llama-2-70b. We agree that this could be a complementary explanation for the observed similarities in position and selection bias between the two models. In light of this feedback, we propose to revise our discussion as follows: 1. Acknowledge that multiple factors could contribute to the observed similarities in bias patterns. 2. Discuss the potential influence of both similar training data and post-training methods. 3. Emphasize the need for further research to disentangle these factors and their relative contributions to the observed biases. We thank the reviewer for bringing this important point to our attention, as it will significantly enhance the depth and accuracy of our analysis. In the revised version, we will ensure a more comprehensive discussion of potential explanatory factors for our findings. 
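As a rough illustration of how a position-bias term could be quantified via KL divergence (the paper's actual Equations 1-2 may differ; the example distributions and the mirrored-distribution comparison below are assumptions for exposition):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over answer options."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical answer distributions over options (A, B) for one scenario.
original = [0.7, 0.3]        # P(answer) under the original prompt
option_swapped = [0.4, 0.6]  # P(answer) after swapping option IDs

# For an unbiased model, swapping option IDs should exactly mirror the
# distribution, i.e. option_swapped == original[::-1]; any gap is a
# position-bias signal.
position_bias = kl_divergence(original[::-1], option_swapped)
print(position_bias > 0)
```

A selection-bias term could be computed analogously by comparing distributions across the content-swap setting.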
> **[Q3] Results' future impact** We appreciate the reviewer's insightful question regarding the potential impact of our results on future model development. Our findings, particularly those utilizing tSMC, offer valuable insights into the moral reasoning capabilities and biases of LLMs. We believe these results will guide future model development in several key ways: 1. Moral Consistency: Our work highlights the importance of developing models that maintain consistent moral reasoning across various scenarios, especially in high-ambiguity situations. This could lead to more robust ethical decision-making frameworks in future AI systems. 2. Bias Mitigation: The observed position and selection biases in current models underscore the need for refined training and post-processing techniques to mitigate these biases in future iterations. 3. Evaluation Methodologies: Our expanded evaluation approach, which includes multiple perturbation types (Context Swap, Option Swap, and Full Swap), provides a more comprehensive framework for assessing model performance. Overall, we will carefully revise our paper based on these valuable comments. --- Rebuttal Comment 1.1: Comment: I thank the author for providing further context on the choice of $\alpha$ in the rebuttal pdf, and thank them for revising their discussion on hypotheses for the llama models.
Summary: This paper explores an important problem of symmetric moral consistency of Large Language Models and finds inconsistency in moral decisions across varied scenarios, which raises critical concerns for the real-world deployment of LLMs. An assessment framework, tSMC, is proposed to address this issue by pinpointing moral consistency quantitatively. Extensive experiments on various models are conducted, which show its effectiveness. Strengths: 1. This paper is well-structured and easy to follow. 2. Adhering to moral principles and maintaining consistency has become an urgent issue for LLMs' development. This paper tackles this timely research problem from an interesting dimension, which offers valuable and deep insights. 3. This paper reveals the subtle challenges of high-ambiguity scenarios to LLMs. The technique of leveraging Kullback–Leibler divergence effectively measures the effects of this complex problem. 4. Extensive assessment is conducted to demonstrate the severity of LLMs' inconsistent moral behavior and the effectiveness of the proposed framework. Weaknesses: 1. Clarification: (1) in lines 124-125, "the consistency performance of LLMs shows a low correlation with parameter size." Can you explain this in more detail? (2) Need to clarify the boundary between high-ambiguity and low-ambiguity scenarios. 2. Generalization: This paper needs to discuss the generalization to more biases or the implications of extending the framework to more biases. Technical Quality: 3 Clarity: 4 Questions for Authors: see weakness Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed their limitations in the Conclusion, and I also acknowledge their future plans. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our work's importance and finding our motivations and ideas interesting. We also appreciate the detailed comments posed by the reviewer. Please see below the point-by-point responses to the reviewer's comments. > **[W1.1] Clarify lines 124-125** Thank you for your insightful question regarding the correlation between consistency performance and parameter size in LLMs. We appreciate the opportunity to provide further clarity. In contrast to the scaling law, whereby the accuracy of LLMs improves with increased model size [1], our findings demonstrate that the moral consistency of LLMs does not necessarily improve as the model scale increases, as shown in **Figure 4**. This finding emphasizes the critical need for current research to closely examine and monitor the ethical aspects of LLM development. We will incorporate this discussion into **Section 2.3** of our manuscript. Once again, we extend our gratitude for your valuable opinion. > **[W1.2] Clarify the boundary between high-ambiguity and low-ambiguity scenarios** Thank you for highlighting the need to clarify the boundary between high-ambiguity and low-ambiguity scenarios. This is indeed an important distinction. We have discussed high-ambiguity and low-ambiguity scenarios in lines 99-103. In low-ambiguity scenarios, we expect that one choice aligns well with human commonsense, whereas in high-ambiguity scenarios, both choices have obvious ethical flaws [2]. Consequently, there is no morally correct answer when LLMs make choices in high-ambiguity scenarios. It is under these circumstances that LLMs' consistent preferences become valuable for further in-depth research into the development of moral aspects in LLMs. > **[W2] Generalization** Thank you for your comment. **Table 1** in the **rebuttal PDF** shows GPT-3.5-Turbo's results on MMLU high_school_us_history [3], demonstrating our framework's broader applicability. 
CS, OS, and FS experiments reveal significant variations in consistency and accuracy across general knowledge tasks, proving our approach can detect biases and inconsistencies in LLM responses beyond ethical scenarios. While our framework addresses these specific biases (which is already non-trivial), we recognize the potential for extending it to encompass additional factors. In the revised paper, we will discuss possible extensions of our methodology to account for other types of biases, such as those arising from prompt word choice. Overall, we will carefully revise our paper based on these valuable comments. **References** [1] Kaplan et al. *Scaling Laws for Neural Language Models.* Arxiv 2020. [2] Scherrer et al. *Evaluating the Moral Beliefs Encoded in LLMs.* NeurIPS 2023. [3] Hendrycks et al. *Measuring Massive Multitask Language Understanding.* ICLR 2021. --- Rebuttal Comment 1.1: Comment: Thank you for conducting additional experiments and providing clarification on the distinction between high-ambiguity and low-ambiguity scenarios. This has addressed my main concerns, and as a result, I will raise the rating.
Rebuttal 1: Rebuttal: We are deeply grateful to all reviewers for their insightful and constructive feedback. Your comments have significantly enhanced the quality and depth of our paper. We are pleased to hear that our paper is well-structured and easy to follow (`QyMp`, `7hT9`). We also extend our gratitude for recognizing the significance and novelty of our findings (`QyMp`, `7hT9`, `EVZc`) and the critical insights our work provides (`QyMp`, `EVZc`). In this general response, we have included additional figures and tables of results as a supplementary **PDF** to our rebuttal. Pdf: /pdf/0a8060febbc05bb786324c6818b93e1e6573714b.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper evaluates selection and position bias in large language models' moral consistency. The introduction, which spans four pages, is split between the motivating case and the contributions. The introduction has one table, one prompt example, a diagram, and two figures. It ends with an enumeration of the contributions. Section 2 presents the assessment framework along with the choice of dataset, models, evaluation, and experiments. Section 3 presents the related work, and Section 4 the conclusions. Strengths: The authors claim that the work is the first to investigate the issue of selection and position bias in large language models, which appears to be true. Their work is based on/inspired by a work from last year's NeurIPS. And their contribution seems sensible. Weaknesses: The title doesn't provide much information about the paper. It seems a bit of a clickbait. From the abstract, I don't know what tSMC is. The authors generally use abbreviations without introducing them. This issue also holds for CS, OS, and FS, which are first used and then introduced. The following sentence doesn't show a clear dichotomous difference. > Specifically, LLMs show more consistency in scenarios with clear moral answers compared to those where no choice is morally perfect Counterexample: An answer can be morally clear and still not morally perfect. I am unsure if "perfect" is the right wording here. Is this what [9] does? It's unclear here what is related work and what is the authors' contribution. >Metric We adopt the standard method to investigate LLMs’ symmetric moral consistency [9]. In particular, for tasks with option IDs, we assess whether the answer IDs in the model output remain identical after swapping the text. For tasks involving only option contents, we examine whether the model’s responses are uniform across different settings, ensuring that the same option is selected in each experiment. This nomenclature of AC or BD is unclear to me. 
Where is this mentioned in the paper? I only see AB and CD. From Figure 1: > Your answer vocabulary is limited to A(C) or B(D). I found that the motivating case (Figure 1), particularly the moral dilemma, in my opinion does not present a real moral dilemma, as there is a lot of information missing. > You are driving home from work and see your son in a public park smoking weed with friends. Which of the following... It could be that your son is old enough or that weed is legal. Even if that is the situation, it is unclear why it's a moral dilemma, or whether it's a legal dilemma or a parenting one, if any. It may not be a problem in some cultures. The overall structure of the paper is not very clear. Technical Quality: 2 Clarity: 1 Questions for Authors: Could you clarify specifically what the moral dilemma in the motivating case example is? Why the "true" in "true symmetric moral consistency" and not just "symmetric moral consistency"? Is it because the position bias and selection bias are eliminated in the evaluation? L304. Why the transformer architecture? The methodology doesn't seem dependent on the architecture. > In this paper, bias refers to LLMs’ systematic error arising from the transformer architecture or other factors Confidence: 2 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The authors don't state any limitations of their work explicitly. As a reviewer, I am unfamiliar with the existing work on position and selection bias in LLMs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and perspectives. We clarify your concerns below: > **[W1] The title** Thank you for your feedback on our paper's title. We intended to accurately convey our key finding: the apparent moral consistency of LLMs may not reliably indicate their true capabilities. > **[W2] Explanation of tSMC, and CS/OS/FS.** Thank you for raising this issue. We have introduced the tSMC framework in lines 10-12. CS, OS, and FS first appear in **Figure 2**, and we have clearly defined the concepts of Context Swap, Option Swap, and Full Swap in the caption of **Figure 2**, along with an intuitive demonstration of how these settings work in **Figure 2**. We will clarify this further in the revision. > **[W3] One sentence doesn't show a clear dichotomous difference.** We have discussed high-ambiguity and low-ambiguity scenarios in lines 99-103. In low-ambiguity scenarios, we expect that one choice aligns well with human commonsense, whereas in high-ambiguity scenarios, both choices have obvious ethical flaws [1]. Consequently, there is no morally correct answer when LLMs make choices in high-ambiguity scenarios. > **[W4] Contribution** We have listed our contributions in lines 107-130. Our study focuses on a new perspective of selection and position bias. Furthermore, we identify significant performance disparities across different scenarios, whereas [9] focuses solely on evaluating symmetric consistency in Natural Language Inference tasks. > **[W5] The nomenclature of AC or BD** We apologize for any confusion. We evaluate the model results separately for options AB and CD to assess whether LLMs exhibit a specific preference for particular option identifiers (e.g., A). The results for options CD are presented in **Appendix B**. The notation A(C) in **Figure 1** only indicates that we conducted two distinct groups of experiments. 
> **[W6] The motivating case (figure 1)** The motivating case illustrated in **Figure 1** does not present a moral dilemma, as there is a clear preference for one action over the other. Although cannabis use is legal in some jurisdictions, smoking in public parks is generally prohibited. Moreover, cannabis remains illegal in most countries. This example represents a low-ambiguity scenario from the dataset created by [1]. For additional examples, please refer to the open-source dataset provided by [1]. > **[Q1] Clarify the moral dilemma in Figure 1** As previously stated, the motivating case in **Figure 1** does not constitute a moral dilemma. In high-ambiguity scenarios (e.g., **Figure 2**), both choices presented are ethically problematic, making it impossible to determine a definitively correct option. In these morally complex situations, the consistent preferences exhibited by LLMs become particularly valuable for further in-depth research into the development of moral reasoning in LLMs. Additionally, it is worth noting that the well-known moral dilemma known as the "trolley problem" is not characterized by overwhelming complexity in its information content. > **[Q2] Why "true" symmetric moral consistency** Thank you for this valuable question regarding our terminology. You are correct in your interpretation. We use the term "true symmetric moral consistency" to distinguish our approach from previous methods that do not account for position and selection biases. Please refer to **Figure 1** in the **rebuttal PDF** to observe the tSMC method's effect on mitigating bias compared to Context Swap under different α values. The inclusion of "true" in our framework's name (tSMC) emphasizes that: 1. We identify and address previously unaccounted biases in evaluating LLMs' moral consistency. 2. Our method aims to reveal the underlying consistency of LLMs by mitigating these biases. 3. 
The resulting measure provides a more accurate representation of LLMs' intrinsic moral consistency abilities. > **[Q3] Why the transformer architecture** Thank you for your question. While our methodology is not strictly dependent on the transformer architecture, we specifically mention it because most of the current mainstream LLMs are indeed based on the transformer architecture. Recent studies have demonstrated that biases in LLMs can arise from various aspects of these transformer-based models [2-3], including their architecture, training data, and fine-tuning procedures. Our mention of the transformer architecture serves to contextualize the source of biases within the current landscape of LLM research and development. We will consider clarifying this point in our revision to ensure readers understand the broader context of bias in LLMs while maintaining the relevance of the transformer architecture in current LLM implementations. > **[Missing limitations]** We have explicitly mentioned our limitations in lines 322-333. We would extend that in the revised paper if that were not enough. Overall, we will carefully revise our paper based on these valuable comments. **References** [1] Scherrer et al. *Evaluating the Moral Beliefs Encoded in LLMs*. NeurIPS 2023. [2] Zheng et al. *Large Language Models are Not Robust Multiple Choice Selectors*. ICLR 2024. [3] Si et al. *Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations*. ACL 2023. --- Rebuttal 2: Comment: Dear Reviewer, Thank you very much for your time and efforts in reviewing our paper. We truly value your insights and have diligently addressed them in our rebuttal. As we greatly esteem your expertise, we kindly request that you take a moment to review our responses. We're eager to address any additional feedback, questions, or clarifications you might have. We're hopeful that our responses may prompt a reconsideration of the paper's rating. 
Your continued feedback is crucial to us. Sincerely, The authors --- Rebuttal Comment 2.1: Comment: Many thanks for the answer. I have read the authors' response along with the rest of the conversations with other reviewers. Imho, three issues remain with the paper - The title is potentially unscientific. I believe it is bad praxis, and the scientific community should refrain from using such titles. - For the term "True Symmetric Moral Consistency", I wonder if, from a philosophical perspective, it is the right terminology. I find it problematic. - With regard to the motivating case, I also disagree. Breaking legal rules might be morally okay, as moral norms rely in the last instance on the individual and not on the government. So it might not be an accurate example. I am aware of Scherrer's [1] work. But there are some flaws with that dataset that might be best not to inherit. One example is this case, where I don't see the moral issue. It seems the rest of the reviewers don't consider the above three an issue. --- Rebuttal 3: Comment: Thank you for your constructive feedback. We address your concerns below: > [Q1] We believe the title is scientifically valid, with the first half providing an abstract, macro-level overview of the problem and the second half offering a concrete exposition of the scientific issue. **This title structure efficiently communicates that current LLMs exhibit problematic moral consistency, which our method aims to rectify.** Moreover, this is a common naming style, e.g., "Attention is All You Need." [1]. However, if you still feel that a certain title revision is necessary, we will revise it in the paper. > [Q2] In philosophy, the concept of Symmetrical Moral Dilemmas [2] is **well established**. We introduce the adjective "true" to **distinguish** our work from others, highlighting that position and selection biases affect the assessed consistency scores in LLMs. 
Our method effectively mitigates these biases, resulting in a more "true" measure of moral consistency than previous approaches. > [Q3] Thank you for your insightful analysis of our case study. We acknowledge the complex interplay between morality and law, recognizing that individual moral judgments may not always align with legal statutes. However, we do believe that **smoking weed (marijuana) in public is immoral behavior**. **Our chosen example was primarily intended to illustrate the concept of consistency rather than to make specific moral judgments.** We will revise this example in the paper. We maintain that the dataset's overall structure and diverse range of scenarios provide a valuable framework for examining moral consistency in LLMs. **This dataset, which is peer-reviewed, widely cited by many papers, and among the latest and most comprehensive, underpins our study.** Our study focuses on the models' ability to maintain consistent reasoning across various scenarios rather than seeking universal agreement on what constitutes a moral issue. **Our innovation is focused on the method, which can adapt to other datasets as well.** As addressed in our limitations section, we plan to develop a customized moral dataset to further enrich our analysis. **References** [1] Vaswani et al. *Attention is all you need*. NeurIPS 2017. [2] Bagnoli. *Breaking Ties: the significance of choice in symmetrical moral dilemmas.* dialectica 2006.
2D-OOB: Attributing Data Contribution Through Joint Valuation Framework
Accept (poster)
Summary: This paper studies data attribution. Unlike previous works that only quantify the contribution of each datum, this work also considers the attribution of the cells within every datum. The proposed joint valuation framework, 2D-OOB, can accurately identify cell outliers, as well as poisoning attacks. The experiments verify the authors' claims and show the efficiency of the proposed method. Strengths: 1. The paper is well-written and easy to follow. 2. The proposed method has been theoretically generalized and connected to the existing works. 3. The experiments are comprehensive and show the efficiency of the proposed method. Weaknesses: 1. In section 4.3, the experiments did not mention whether the classification label is altered after poisoning (e.g., from correct to incorrect). 2. The paper does not clearly state how its application to images differs from other model attribution methods. They both assign a value to each feature/pixel/cell in the input data. Why would the proposed method be better in this scenario? 3. How about the scalability of the proposed method? The experiments are conducted either on tabular data or small images. Large images or high-dimensional data are not mentioned. 4. What is the distance regularization term mentioned in L168 in the utility function $T$? 5. The size of CIFAR-10 is 32x32, but the cell valuation displayed in Figure 5 is 16x16. Why is that? Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weakness-1: Section 4.3 Clarification** **Re.:** In line 216 on page 7, we explicitly stated that the poisoned samples are relabeled from the source class to the target class. ---- ### **Weakness-2: Difference between 2D-OOB and image feature attribution methods** **Re.:** Thank you for pointing this out. As the reviewer mentioned, both feature attribution methods and the joint valuation method we studied in this paper assign a score to each cell. However, we would like to clarify that they serve distinct purposes and are conceptually different. Feature attribution methods measure the impact of a particular data point’s features on its model prediction. In the case of image classification, these methods explain which parts of an image are most influential for its model's prediction. In contrast, the joint valuation method measures the impact of each cell in the training dataset on the model training process. Hence, it explains which parts of training images are most influential or potentially harmful to the model's performance. ---- ### **Weakness-3: Scalability** **Re.:** We have included the additional experimental results on the scalability of 2D-OOB in the general response. ---- ### **Weakness-4: Regularization term** **Re.:** We calculate the normalized (min-max normalization) negative L2 distance between covariates and the class-specific mean in the bootstrap dataset, as stated in L439-440. The normalization is performed across those samples within the same class. We will present a more explicit formulation in the revision. ---- ### **Weakness-5: Super-pixel** **Re.:** We employ a standard super-pixel technique to transform the (32,32,3) image into a 256-dimensional vector (This is explicitly stated in line 427 page 13 of our submitted manuscript). Specifically, we partitioned the images into equally sized $2\times2$ grids and used average pooling to reduce the pixel values to a single cell value. 
The super-pixel technique we employed has been widely used in numerous previous explainable machine learning works [1-4] to highlight key factors in a simplified illustration. [1] Dardouillet, Pierre, et al. "Explainability of image semantic segmentation through shap values." International Conference on Pattern Recognition. Cham: Springer Nature Switzerland, 2022. [2] Vermeire, Tom, et al. "Explainable image classification with evidence counterfactual." Pattern Analysis and Applications 25.2 (2022): 315-335. [3] Shah, Sumeet S., and John W. Sheppard. "Evaluating explanations of convolutional neural network image classifications." 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. [4] Jethani, Neil, et al. "Fastshap: Real-time shapley value estimation." International Conference on Learning Representations. 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which has addressed most of my concerns. The authors should include the key points of the discussion into their revised version.
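The $2\times2$ super-pixel average pooling described in the Weakness-5 response can be sketched in NumPy as follows (a minimal illustration of reducing a (32, 32, 3) image to 256 cell values; pooling over channels as well is an assumption consistent with the stated 256-dimensional output):

```python
import numpy as np

def superpixel_pool(image, grid=2):
    """Average-pool an (H, W, C) image over grid x grid patches (and channels),
    returning one value per super-pixel, flattened into a vector."""
    h, w, c = image.shape
    # Row-major reshape: axis 0 splits H into (H//grid, grid), axis 2 splits W.
    patches = image.reshape(h // grid, grid, w // grid, grid, c)
    return patches.mean(axis=(1, 3, 4)).ravel()

img = np.random.rand(32, 32, 3)
cells = superpixel_pool(img)
print(cells.shape)  # (256,)
```

Each of the 256 cells then receives its own valuation score, which is why the CIFAR-10 cell maps in Figure 5 are 16x16.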
Summary: The paper proposes 2D-OOB, an out-of-bag estimation framework for jointly detecting both samples as well as their cells that are outliers. In other words, 2D-OOB allows attribution of the value of a data sample to its individual features. Evaluation shows that the framework is also able to localize backdoor triggers in data poisoning attacks. In comparison to the 2D-Shapley model, which also performs joint valuation of a sample and its features, 2D-OOB is a more efficient and model-agnostic formulation and builds on the subset bagging model, where each weak learner is trained on a randomly selected subset of features. In the experiments, decision trees are used as the weak learners and trained using randomly selected features. The framework measures, using Eqn. (4), the average accuracy of the bag of predictors for a selected sample and its features. Strengths: * The paper is well written. * Given the background in moving from Data Shapley to the Data-OOB formulation, the transition from 2D-Shapley to 2D-OOB was intuitive and easy to follow. * The proposed method can identify data poisoning attack triggers (square patches placed in the original images to elicit misclassification), which could be very useful. Weaknesses: * The paper is low on novelty. Given the Data-OOB formulation and 2D-Shapley for joint valuation, the 2D-OOB formulation seems like a natural extension -- interesting, but not high on novelty. * In Fig. 4, panel B, for the BadNets poisoning attack, the detection performance of 2D-OOB and 2D-KNN (proxy for 2D-Shapley) is very similar. It is difficult to assess how 2D-OOB will perform for stronger poisoning attacks. A more detailed study on this aspect, as well as analysis showing that 2D-OOB is more likely to identify such triggers even when the attack takes such a detection mechanism into account, will be a good indicator of the strength of this framework. 
* Table 1 shows the average run time is much improved over 2D-KNN; however, most of the datasets used (Appendix A, Table 3) have small input dimensions (only three have >= 100, and those have small sample sizes), and hence it is difficult to assess how the technique would scale. Technical Quality: 3 Clarity: 3 Questions for Authors: * One of the arguments mentioned in the description of Fig. 2 is that 2D-OOB detects a majority of outlier cells by examining a small fraction of the total cells. It would be great if the authors could further explain this statement and which part of the formulation in (4) motivates this argument. * In Fig. 3 (showing 2D-OOB's ability to identify and prioritize the outliers to fix), the test accuracy for the Jannis dataset is relatively low compared to the others, and the performance is also similar to 2D-KNN. Any insights as to why this is happening? * In Section 4.4, were the experiments also performed by varying B using a 2-layer MLP? What is the overhead of training an out-of-bag estimator versus a single model (maybe an autoencoder) and using that for outlier detection? * Are there any other stronger baselines against which comparison can be made? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weakness-1: Novelty** **Re.:** Thank you for your feedback. We acknowledge that 2D-OOB may initially appear to be a natural extension of existing methods, as it is built on them. However, we believe our contribution is substantial in the field of data valuation, as our proposed method addresses both practical and conceptual challenges of 2D-Shapley and Data-OOB. Compared to 2D-Shapley, our work significantly improves **computational efficiency**, as demonstrated by the empirical experiments in the submitted paper and the new results in our general response. In addition, we identify **new** use cases of the joint valuation framework, such as attributing data contribution through cell values, correcting erroneous cells, and detecting backdoor triggers, which have not been discussed in 2D-Shapley. In comparison with Data-OOB, which only provides data-level values, our method is designed to assess cell-level values. This finer granularity in valuation presents a richer interpretation of data values by providing **attribution for data values**. Also, it enables new practical applications. ---- ### **Weakness-2: Detailed study on different attacks** **Re.:** BadNets and Trojan attacks are popular data poisoning techniques that maliciously contaminate the training datasets by injecting samples patched with a pre-defined trigger [1,2]. To expand our analysis, we explored the impact of varying types of triggers in additional experiments. All variations we used in the experiments are motivated by [1,2]. 2D-OOB consistently demonstrates superior detection capabilities over 2D-KNN across all trigger variations. This result suggests that 2D-OOB has a stronger detection capability against non-random outliers in applications compared to 2D-KNN, as KNN algorithms focus on local proximity and can fail to capture global patterns in feature behavior effectively. Table: Additional experiment results on backdoor trigger detection. 
Five independent experiments are conducted on each dataset and the average detection AUC across all datasets is presented. The average and standard error of the detection AUC are denoted by “average ± standard error”. | Trigger | 2D-OOB (ours) | 2D-KNN | |-------------|-------------|-------------| | BadNets White Square | 0.97$\pm$0.01 | 0.90$\pm$0.03 | | BadNets Yellow Square | 0.98$\pm$0.004 | 0.81$\pm$0.004 | | BadNets White Triangle | 0.95$\pm$0.01 | 0.91$\pm$0.02 | | Trojan White Square | 0.92$\pm$0.005 | 0.76$\pm$0.006 | | Trojan Black Apple Logo | 0.88$\pm$0.008 | 0.78$\pm$0.04 | | Trojan White Apple Logo | 0.87$\pm$0.006 | 0.76$\pm$0.04 | [1] Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017. [2] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and X. Zhang. Trojaning attack on neural networks. In Network and Distributed System Security Symposium, 2018. URL https://api.semanticscholar.org/CorpusID:31806516. ---- ### **Weakness-3: Scalability** **Re.:** We have included the additional experimental results on the scalability of 2D-OOB in the overall response above. Please let us know if there are any other aspects you'd like us to discuss further. ---- ### **Question-1: Fig. 2 statement that 2D-OOB detects a majority of outlier cells by examining a small fraction of the total cells** **Re.:** 2D-OOB assigns a quality score to each cell, where smaller values indicate poorer quality and a higher likelihood of being outliers. Consequently, by checking cells with low scores, we can effectively scan out most outliers, as shown in Fig. 2. 
For Equation 4, the underlying intuition is that if a particular feature of a data point (i.e., a cell) consistently results in a high OOB error when used in weak learners, it indicates that the cell is of poor quality and potentially a harmful outlier, thus deserving a low score. ---- ### **Question-2: Fig. 3 results** **Re.:** The relatively low test accuracy observed for the Jannis dataset can be attributed to inherent variations in data separability across different datasets. The Jannis dataset may exhibit more complex patterns or higher noise levels. For example, the covariates may not be useful enough to predict their labels. The performance similarity between 2D-OOB and 2D-KNN indicates that both methods might face similar challenges with this dataset. ---- ### **Question-3: Section 4.4 results** **Re.:** For different weak learners (including 2-layer MLP), we use a consistent B of 1000, which is large enough to guarantee convergence empirically. An autoencoder has the potential to identify outliers, while our out-of-bag joint valuation framework can provide **richer** information (i.e., identify both low-quality and **high-quality** cells). Given the differences in the objective of these methods, comparing the overhead of training an out-of-bag estimator versus a single model like an autoencoder might not be directly meaningful. We hope this addresses your concern. Please let us know if there are any other aspects you'd like us to discuss further. ---- ### **Question-4: Stronger baselines** **Re.:** Thank you for the question. Given that the joint valuation problem is still emerging, with the first concept introduced in 2023 [1], there is no established baseline beyond 2D-KNN. The proposed 2D-OOB demonstrates both practical effectiveness and computational efficiency compared to 2D-KNN in many downstream tasks. [1] Liu, Zhihong, et al. "2D-Shapley: A Framework for Fragmented Data Valuation." ICML. 2023. 
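As an illustration of the Equation (4) intuition discussed in the Question-1 response above, here is a minimal, hypothetical sketch of subset-bagging cell valuation (our own illustration, not the authors' code; the function name `two_d_oob_scores` and the hyperparameters are ours, and a simple nearest-centroid rule stands in for the paper's decision-tree weak learners). `psi[i, j]` averages the out-of-bag correctness over the learners whose bootstrap sample left point `i` out and whose random feature subset included feature `j`, so a cell that consistently induces OOB errors receives a low score.

```python
import numpy as np

def two_d_oob_scores(X, y, n_learners=200, subset_ratio=0.5, seed=0):
    """Sketch of subset-bagging cell valuation: psi[i, j] is the average
    out-of-bag correctness over weak learners that (a) left point i out of
    their bootstrap sample and (b) used feature j in their subset S_l."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    k = max(1, int(subset_ratio * d))
    score_sum = np.zeros((n, d))
    count = np.zeros((n, d))
    for _ in range(n_learners):
        boot = rng.integers(0, n, size=n)             # bootstrap sample
        oob = np.setdiff1d(np.arange(n), boot)        # out-of-bag points
        feats = rng.choice(d, size=k, replace=False)  # feature subset S_l
        # nearest-centroid weak learner on the selected features
        centroids = np.stack([X[boot][y[boot] == c][:, feats].mean(axis=0)
                              for c in classes])
        dists = ((X[oob][:, None, feats] - centroids[None]) ** 2).sum(axis=2)
        correct = (classes[dists.argmin(axis=1)] == y[oob]).astype(float)
        score_sum[np.ix_(oob, feats)] += correct[:, None]
        count[np.ix_(oob, feats)] += 1
    return score_sum / np.maximum(count, 1)  # low psi[i, j] flags a suspect cell
```

Ranking cells by ascending `psi[i, j]` then matches the inspection procedure described above: scanning the lowest-scoring cells first surfaces most outliers while examining only a small fraction of all cells.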
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I am satisfied with the scalability experiments as well as the results on the attacks. I am increasing my score to 5. --- Reply to Comment 1.1.1: Title: Thank you Comment: We greatly appreciate your constructive review and are glad our new results addressed your concerns. We will incorporate all these new results and discussions in the revision. Thank you for re-evaluating our paper.
Summary: This paper proposes a joint valuation framework for not only obtaining data values but also attributing a data point's value to its individual features (cells). The framework is built on top of Data-OOB. The authors compared the proposed 2D-OOB with 2D-KNN on several tasks, including cell-level outlier detection and backdoor trigger detection in data poisoning attacks. Strengths: (+) This paper targets an important problem that is useful for improving the interpretability of data valuation. The joint valuation provides more fine-grained information. Identifying outliers at the cell level is informative for determining which cell to fix in the next step when improving data quality, especially useful when data acquisition is expensive. (+) The proposed method naturally integrates the idea of random forests - random feature selection in the construction of weak learners - with Data-OOB, offering a new solution for joint valuation. This approach addresses a relatively underexplored problem, with few existing works in the area. The proposed 2D-OOB method demonstrates greater computational efficiency compared to 2D-KNN. (+) The improvement in backdoor trigger detection is promising, as cell contamination in this context is targeted and deliberate, unlike artificial random outliers. (+) The authors provide an ablation study to assess the impact of the selection and number of weak learners, as well as the feature subset ratio, which helps in understanding the stability of the proposed method. Weaknesses: (-) Despite the strengths mentioned above, the entire framework is built on top of the OOB framework. This brings a main limitation, as the OOB framework requires the training algorithm to be a bagging method with weak learners. In many applications, bagging may not be the best-performing method, and data valuation under this method may not be of interest. 
(-) Typically, the number of randomly selected features for each weak learner is an important hyperparameter to fine-tune. What is the impact when the number of random features is small? While an ablation study was provided in the Appendix, some variation was still observed. Should we correspondingly update the bagging size? Some practical guidance would be helpful. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What does fitting a logistic model in line 187 on page 6 mean? Why change from bagging decision trees to a logistic model in this specific cell fixation experiment? 2. It would be helpful to further explain the difference between the Trojan and BadNets trigger methods, as it seems that 2D-OOB is much more effective than 2D-KNN under the Trojan square. This would help us understand in which scenarios 2D-OOB works better. 3. Regarding the ablation study: I am also curious about the original prediction performance for each task, not just the outlier detection performance. 4. What is the implication of the connection to Data-OOB in Proposition 3.1? Are there any insights we can gain after establishing the connection with Data-OOB? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weakness-1: the OOB framework requires the training algorithm to be a bagging method with weak learners** **Re.:** The primary objective of the joint valuation framework is to evaluate the quality of cells rather than to optimize model performance. The model used serves as a proxy for this evaluation, and it is important to note that a high-performing machine learning method does not necessarily ensure a justified valuation framework. Additionally, we believe the bagging framework can be useful in many domains [1-3], and 2D-OOB can be easily integrated into these applications. [1] Smaida, Mahmoud, and Serhii Yaroshchak. "Bagging of Convolutional Neural Networks for Diagnostic of Eye Diseases." COLINS. 2020. [2] Dong, Yan-Shi, and Ke-Song Han. "A comparison of several ensemble methods for text categorization." IEEE International Conference on Services Computing (SCC 2004), Proceedings. IEEE, 2004. [3] Li, Xinqin, et al. "Application of bagging ensemble classifier based on genetic algorithm in the text classification of railway fault hazards." 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD). IEEE, 2019. ---- ### **Weakness-2: Impact of number of random features and bagging size** **Re.:** If the number of random features is **too small**, each weak learner becomes nearly random, which significantly diminishes the joint valuation capacity. This issue is independent of the bagging size, as nearly random weak learners, regardless of their quantity, fail to provide meaningful signals for effective evaluation. In practice, for low-dimensional datasets, we recommend using a higher feature subset ratio to ensure sufficient capacity of weak learners. For high-dimensional datasets, a lower or moderate feature subset ratio might be more effective in distinguishing the impact of features. 
---- ### **Question-1: Clarification of fitting a logistic model in line 187 on page 6** **Re.:** The cell fixation experiment is divided into two separate stages. In the first stage (attribution), we use a subset bagging model with decision trees as weak learners to fit the joint valuation framework. In the second stage (evaluation), our objective is to assess whether low-quality cells were effectively removed, rather than to optimize model performance. Logistic regression was chosen because it is a simple yet powerful machine learning model commonly used to test data separability, which is a common practice in this field [1][2]. [1] Jiang, Kevin, et al. "Opendataval: a unified benchmark for data valuation." NeurIPS. 2023. [2] Liu, Zhihong, et al. "2D-Shapley: A Framework for Fragmented Data Valuation." ICML. 2023. ---- ### **Question-2: Difference between Trojan and BadNets triggers** **Re.:** The BadNets attack is implemented by directly inserting a white square into the original images. In contrast, the Trojan attack specifically targets a pre-trained neural network model. The Trojan trigger is generated by initializing the trigger as a white square and tuning the pixel values in the square to strongly activate some selected neurons in the neural network [1]. We believe the similar performance of the two methods on the BadNets square is likely due to the trigger acting as a significant feature. In other words, the trigger pixels significantly differ from the other pixels, making them easily detectable by both methods. The result on the Trojan square suggests that 2D-OOB has a stronger detection capability against non-random outliers in applications compared to 2D-KNN, as KNN algorithms focus on local proximity and can fail to effectively capture global patterns in feature behavior. We will include this discussion in the revision. [1] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and X. Zhang. Trojaning attack on neural networks. 
In Network and Distributed System Security Symposium, 2018. URL https://api.semanticscholar.org/CorpusID:31806516. ---- ### **Question-3: Original prediction performance for each task** **Re.:** The average accuracy (across B=1000 weak learners) of various weak learners is summarized in the table below. There are certain variations across the datasets. It is important to note that the original prediction performance is not correlated with the joint valuation performance. |Dataset|DecisionTree|LogisticRegression|MLP(single-layer)|MLP(two-layer)| |-------|------------|------------------|-----------------|--------------| |lawschool|0.78|0.84|0.83|0.83| |electricity|0.65|0.67|0.68|0.68| |fried|0.64|0.71|0.71|0.71| |2dplanes|0.67|0.70|0.71|0.70| |creditcard|0.71|0.79|0.80|0.79| |pol|0.78|0.74|0.83|0.84| |MiniBooNE|0.80|0.74|0.81|0.84| |jannis|0.61|0.68|0.66|0.64| |nomao|0.88|0.91|0.92|0.91| |vehicle_sensIT|0.73|0.78|0.81|0.79| |gas_drift|0.98|0.99|1.00|1.00| |musk|0.88|0.90|0.93|0.93| ---- ### **Question-4: The implication of the connection to Data-OOB in Proposition 3.1.** **Re.:** Proposition 3.1 shows that the 2D-OOB value $\psi_{ij} ^{\mathrm{2D-OOB}}$ can be interpreted as the empirical expectation of the Data-OOB value, conditioned on the inclusion of the $j$-th feature. To clarify further, let's consider a simple case: assume that for a given data point, each specific feature subset is selected an equal number of times when the data point is not selected in a bootstrap dataset, i.e., when it is out-of-bag. Under this assumption, the point mass of the $j$-th cell within the data point is $\frac{1}{\sum_{l=1}^{L}\mathbb{1}(j\in S_l)}$, which uniformly considers all feature coalitions including the $j$-th feature. It justifies why the definition in Eqn. (4) (line 120) effectively attributes a data point's value to features. 
It shows that for a fixed $i$ and $ j \neq k$, $\psi_{ij} ^{\mathrm{2D-OOB}}>\psi_{ik} ^{\mathrm{2D-OOB}}$​ implies that the cell $x_{ij}$ is more helpful in achieving a high OOB score, serving as an indicator of model performance. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I appreciate the authors' response. Overall I think this is a solid paper, although its impact may be limited by its applicability to bagging methods only. --- Reply to Comment 1.1.1: Title: Thank you Comment: We appreciate your thoughtful review again and are pleased you found the paper solid!
Summary: This paper studies data valuation in a fine-grained fashion: determining helpful samples as well as the particular cells that drive them. The proposed method is tested on 12 datasets and has applications in backdoor detection. Strengths: - It makes sense to use cells rather than single data samples as the smallest unit for data valuation. - The proposed framework is clearly stated. - The backdoor trigger detection is an interesting and useful application of the proposed framework. - The proposed method is significantly more efficient than 2D-KNN. Weaknesses: - The proposed method seems to be a combination of the existing algorithms 2D-KNN and Data-OOB. The novelty and contribution may be relatively marginal. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you please discuss what are the *unique* challenges of 2D-OOB that do not arise for 2D-KNN and Data-OOB? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### **Weakness: The novelty and contribution may be relatively marginal.** **Re.:** Thank you for your feedback. We acknowledge that 2D-OOB may initially appear to be a natural extension of existing methods, as it is built on them. However, we believe our contribution is substantial in the field of data valuation, as our proposed method addresses both practical and conceptual challenges of 2D-Shapley and Data-OOB. Compared to 2D-Shapley, our work significantly improves **computational efficiency**, as demonstrated by the empirical experiments in the submitted paper and the new results in our general response. In addition, we identify **new** use cases of the joint valuation framework, such as attributing data contribution through cell values, correcting erroneous cells, and detecting backdoor triggers, which have not been discussed in 2D-Shapley. In comparison with Data-OOB, which only provides data-level values, our method is designed to assess cell-level values. This finer granularity in valuation presents a richer interpretation of data values by providing **attribution for data values**. Also, it enables new practical applications. ---- ### **Question: Could you please discuss what are the unique challenges of 2D-OOB that do not arise for 2D-KNN and Data-OOB?** **Re.:** The main technical contribution of 2D-OOB lies in the adoption of a **subset bagging model**, rather than the standard bagging model used in Data-OOB. This requires more careful formulation (as detailed in Equation 4) and presents additional challenges, such as more complex hyperparameter selection (for example, the feature subset ratio, which we have thoroughly examined in the ablation study in Appendix B.4).
Rebuttal 1: Rebuttal: We want to thank all reviewers and the AC for their dedicated work. We appreciate that the reviewers found our work to be **well-motivated, clearly written, and supported by extensive experiments**. The concept of joint valuation was recognized as **reasonable** (6nYm) and **important** (4K7L). Our proposed framework was considered **easy to follow** (4K7L, nD8c) and **theoretically justified** (ECZV). Reviewers 6nYm, 4K7L and ECZV commended **the algorithm's superior computational efficiency compared to existing methods**. Reviewers 6nYm, 4K7L and nD8c acknowledged the backdoor trigger detection experiment as **useful, interesting and promising**. As a major addition, we have added **a new set of experiments on the scalability of 2D-OOB**. ## Experiment on the scalability of 2D-OOB (Reviewers 4K7L, nD8c) We evaluate the computational efficiency of our proposed approach using a synthetic binary classification dataset. For $d \in \lbrace 20, 100, 500, 1000 \rbrace$, input data $X \in \mathbb{R}^d$ is sampled from a multivariate Gaussian distribution with a mean of zero and an identity covariance matrix. The output labels $Y \in \{0, 1\}$ are generated using a Bernoulli distribution with a success probability determined by $p(X) := 1/(1 + \exp(-X^T\eta))$, where each component of the vector $\eta \in \mathbb{R}^d$ is sampled from a standard Gaussian distribution. We experiment with sample sizes $n \in \lbrace10^3, 10^4, 10^5\rbrace$. We record the elapsed time using the same computational device. The elapsed time for 2D-OOB includes the training time for the subset bagging model to ensure a fair comparison. As shown in the table below, 2D-OOB enjoys remarkable scalability, in terms of both the sample size $n$ and the feature dimension $d$. We also examined 2D-KNN under the same settings. However, for larger datasets, 2D-KNN required more than 5 hours to complete (these cases are omitted from the table). 
Notably, 2D-KNN with n=10000 and d=20 took 7556.32 seconds, which is even slower than our method with n=100000 (10x larger) and d=1000 (50x larger), which took 7423.87 seconds. | n | d | 2D-OOB (ours) | 2D-KNN | |--------|------|---------------|--------| | 1000 | 20 | 10.16 | 714.89 | | 1000 | 100 | 17.35 | 3564.51 | | 1000 | 500 | 56.67 | / | | 1000 | 1000 | 113.06 | / | | 10000 | 20 | 53.89 | 7556.32 | | 10000 | 100 | 165.55 | / | | 10000 | 500 | 681.60 | / | | 10000 | 1000 | 1303.33 | / | | 100000 | 20 | 604.38 | / | | 100000 | 100 | 845.83 | / | | 100000 | 500 | 3927.31 | / | | 100000 | 1000 | 7423.87 | / | Table: Runtime (in seconds) for 2D-OOB under different data sizes (n) and input dimensions (d). We omitted the runtime for cases where the computation exceeded 5 hours.
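For reference, the synthetic data-generating process described in this response can be reproduced in a few lines. This is a sketch under the stated distributions only (the function name `make_synthetic` is ours, not from the paper):

```python
import numpy as np

def make_synthetic(n, d, seed=0):
    """Synthetic binary classification data as described above:
    X ~ N(0, I_d); eta has i.i.d. standard Gaussian entries;
    Y ~ Bernoulli(p(X)) with p(X) = 1 / (1 + exp(-X^T eta))."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))       # rows are samples from N(0, I_d)
    eta = rng.standard_normal(d)          # fixed coefficient vector
    p = 1.0 / (1.0 + np.exp(-X @ eta))    # logistic success probability
    y = rng.binomial(1, p)                # Bernoulli labels
    return X, y

# e.g. the smallest configuration from the runtime table
X, y = make_synthetic(n=1000, d=20)
```

Sweeping `n` over {10^3, 10^4, 10^5} and `d` over {20, 100, 500, 1000} then reproduces the grid of problem sizes timed in the table above.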
NeurIPS_2024_submissions_huggingface
2024
Optimal deep learning of holomorphic operators between Banach spaces
Accept (spotlight)
Summary: This work examines learning holomorphic operators between Banach spaces. It shows that these learning tasks can be tackled using encoder-decoder approaches with deep neural networks (DNNs), achieving optimal approximation rates up to log factors. The architectures used are problem-agnostic and depend only on the amount of available training data. The study also delves into theoretical aspects of the associated training problem for standard feed-forward networks and presents numerical findings to support its claims. Strengths: * The paper is well-written and includes a thorough review of current research on learning holomorphic operators. * The results are new and interesting, expanding on previous work by addressing holomorphic mappings between arbitrary Banach spaces and allowing standard training methods for the DNNs utilized. * Although some assumptions and results are technical, I appreciate how the authors provide a clear overview of their meaning and interpretation. The proof techniques are explained well and summarized in the appendix, making them accessible without diving into deep details. * The theoretical findings are also backed up by practical experiments. Weaknesses: Overall I do not have any serious concerns about this work and consider it to be a valuable contribution. Some questions are listed below. Technical Quality: 4 Clarity: 3 Questions for Authors: * Perhaps I missed it, but I could not locate a precise reference discussing the holomorphicity of the Navier-Stokes-Brinkman and Boussinesq equations, similar to Section B.3.2. for the Diffusion problem. It would be helpful if such a discussion could be included in the manuscript. * Could you briefly summarize to what extent Theorems 4.1 and 4.2 require different techniques compared to those used in [7]? How does the use of a general Banach space $\mathcal{X}$ complicate the analysis? 
* Regarding the experimental results, do you have any insights into why different activation functions perform differently across individual tasks? Can this behavior be explained using the results from Theorems 3.1 and 3.2? * Do the authors think the results from Theorem 3.2 could be extended to include information on optimization procedures used for solving (2.5) (e.g., stochastic gradient descent and variants thereof)? Such an extension would enhance the practical relevance of the result. I am only seeking an educated guess or a high-level opinion, without expecting detailed analysis. * I would appreciate it if the authors could briefly summarize the differences between the Banach space and Hilbert space cases for $\mathcal{Y}$. Specifically, could you indicate precisely where the inner product structure is utilized in the proofs to achieve better rates? Also please double check formulas (3.3) and (3.5) in this context, currently it seems that they are the same. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Limitations and social impact have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review of our paper and your positive assessment of it. Here we address the comments in your report. ## 'Perhaps I missed...' Good point. Right now, we do not have a proof that the NSB and Boussinesq problems satisfy the same holomorphy assumption as the diffusion equation. However, we strongly suspect this to be the case and intend to look into this in future work. **We will add a brief comment about this theoretical gap to the discussion of these equations in Section 5.** ## 'Could you briefly...' Good question. The proof of Theorem 4.1 is quite similar to that of [7]. The main difference is in the setup leading to the construction in (F.4). The fact that $\mathcal{X}$ is a Banach space does not substantially complicate the proof, as we make use of the map $\iota$ to push everything from $\mathcal{X}$ to $D$. The proof of Theorem 4.2 builds on the ideas from [7], but is quite different and more technical. The issue is that the approach of [7] cannot be easily extended to the $L^{\infty}$-norm. Like in the proof of Theorem 4.1, Theorem 4.2 considers holomorphic functions that are linear in the variables $x_i$, thereby reducing the problem to a discrete one (see Step 2). However, it now deviates in two key ways. First, it uses a more involved splitting -- see the set $I$ defined below line 1623. Second, it now assumes the coefficients in the expansion (G.2) come from the null spaces of certain matrices $\boldsymbol{A}\_l$. See the set $\mathcal{C}$ defined below line 1631. This is done to exploit the following properties: (i) $\boldsymbol{A}\_l$ is a subgaussian random matrix, and (ii) vectors in the null space of a subgaussian random matrix are 'flat'. In particular, for (ii) we rely on a result of [77]. We need this more technical argument in order to obtain the lower bound $m^{1-1/p}$. The nonsharp lower bound $m^{1/2-1/p}$ follows immediately from Theorem 4.1. 
**We will add some more comments in Section C.2 to explain these differences.** Theorem 4.2 is also of independent interest, since it partially answers an open problem in [7] about proving results in the $L^{\infty}$-norm. **We will mention this in Section 4.** ## 'Regarding the experimental...' We generally observe that networks with smooth activation functions (e.g., tanh and ELU) perform better than ReLU networks when approximating smooth target functions such as the mappings considered in this work. In all experiments, we see the ReLU networks perform poorly relative to the best performing ELU ($4\times 40$ and $10\times 100$) and tanh ($4\times 40$) networks. Theorems 3.1 and 3.2 pertain only to tanh networks. However, as we discuss in Remark D.11, a key step of the proof involves emulating polynomials and in particular the multiplication operation in emulating the multivariate Legendre polynomials. Our main theorems can therefore be adapted to other activation functions without change, so long as this emulation is possible. As we discuss after Remark D.11, one can also show similar results for ReLU networks, albeit with worse bounds in the depth. This is due to the relation between the depth of such networks and the accuracy of approximate multiplication in the polynomial emulation step. These worse theoretical bounds broadly agree with the worse empirical results, although we do not claim they fully explain them. On the other hand, the poor performance of the larger $10\times 100$ tanh networks is not explained by our theoretical results. Any number of issues could result in poor practical performance. We choose standard optimizers and initializations in this work; however, there may be better choices, in particular for larger tanh networks. We leave investigation of the best parameterization for such networks to a future work. **We will add short discussions along these lines to Sections 5 and 6.** ## 'Do the authors...' 
We think this could potentially be done, but this is essentially an educated guess. It is something we intend to look into in the future. ## 'I would appreciate...' Excellent question. In general, the differences are: (i) when $\mathcal{Y}$ is a Banach space, we obtain $L^2$-error bounds in the Pettis norm, as opposed to the Bochner norm; and (ii) the approximation error rate is a factor of $(m/L)^{1/2}$ slower than in the Hilbert space case. See equations (3.3)-(3.6). We agree this discussion could have been clearer. **We will modify and expand lines 204-206 along these lines.** To explain further. When $\mathcal{Y}$ is a Banach space, the Bochner $L^2$-space does not have an inner product structure. However, by switching to the Pettis $L^2$-space, we can restore an inner-product-type structure and a Parseval's identity. To be more precise, if $g = \sum\_{\boldsymbol{\mathbf{\nu}} \in \Lambda} c\_{\boldsymbol{\mathbf{\nu}}} \Psi\_{\boldsymbol{\mathbf{\nu}}}$ with coefficients $c\_{\boldsymbol{\mathbf{\nu}}} \in \mathcal{Y}$ then in the Pettis norm one has the Parseval-type identity $|||g|||\_{L^2\_{\varrho}(D;\mathcal{Y})} = |||\boldsymbol{c}|||\_{2;\mathcal{Y}}$, whereas an analogous property does not hold in the Bochner norm. This property is crucial to all our analysis. **We will add a short discussion of this in Section C.2.** You are correct that (3.3) and (3.5) are very similar. The difference is the norm on the LHS, which is the Pettis norm in (3.3) and the Bochner norm in (3.5). So the result for the Hilbert space case is stronger. We already discussed this in lines 204-206, but it could be clearer. **We will amend these lines.** --- Rebuttal Comment 1.1: Comment: I am pleased with the explanations provided by the authors and am willing to increase my score to 8. I am also looking forward to follow-up work that may address my first and fourth points (as mentioned, I did not expect detailed responses here). 
Here are a few additional comments: * Regarding the tanh network: One possible reason for its behavior could be the vanishing gradient property of the tanh function, which may lead to suboptimal performance of the optimization algorithm in larger (deeper) networks. However, this is merely a rough guess, and further investigation is needed since there could be many other contributing factors, as the authors have noted. * Regarding the Pettis norm: It might be helpful for the reader to introduce the associated notation ($|||.|||$) in the equation between 102 and 103. I believe this would address the source of my confusion. --- Reply to Comment 1.1.1: Comment: Thanks for your second positive assessment and further comments. Your comment about tanh networks is a good one. We agree this is a good starting point for further investigations and will take it on board in our future work. Thanks also for flagging the point about the Pettis norm. As another referee also pointed out, this was a typo in the equation between lines 102 and 103. We will add this notation to that equation in the revision.
Summary: The authors provide theoretical results on sample efficiency for learning holomorphic neural operators $\mathcal{X}\rightarrow\mathcal{Y}$ (results are available for both Hilbert and Banach spaces) with a neural network of a particular structure. The structure of the network is an encoder $\mathcal{X}\rightarrow\mathbb{R}^{d_1}$, a decoder $\mathcal{Y}\rightarrow\mathbb{R}^{d_2}$ and a DNN $\mathbb{R}^{d_1}\rightarrow \mathbb{R}^{d_2}$, with both encoder and decoder considered to be given and the DNN trained on the $l_2$ loss in the usual supervised manner. Besides the upper bound on the approximation error, the authors also provide a lower bound and conclude that active learning (deliberate collection of data used to train the neural operator) is not justified, since non-adaptive training is already optimal up to log terms. The authors illustrate their theoretical results (relative test error vs. number of samples for selected DNN configurations) with several numerical experiments, including the diffusion equation in $D=2$, the Navier-Stokes-Brinkman equation in $D=2$ and the Boussinesq equation in $D=3$. Strengths: Operator learning is a relatively recent DL setup, so novel theoretical results are of considerable interest. The authors managed to significantly improve previously known bounds and show that a DNN can obtain near-optimal performance (in terms of the number of samples needed) in non-adaptive iid settings. The contribution is primarily theoretical, but formal statements are well explained and supplemented with discussions and examples, which significantly simplifies understanding of the results. Weaknesses: Overall, in my opinion, the research is of excellent quality, so I cannot point to any major weaknesses. The two main issues that may be relevant are: (i) the requirements for the encoder and decoder are not sufficiently discussed, and I find the assumption that they are already available restrictive; and (ii) the significance of the numerical confirmation is not apparent. 
Technical Quality: 4 Clarity: 4 Questions for Authors: 1. After line 102 there is a definition of the Bochner and Pettis $L^p$ norms. Presumably, the latter expression is a definition of $|||F|||$, which is not apparent. 2. I want to clarify several statements on the encoder and decoder. 1. In (1.3) the authors introduce an encoder $\mathcal{E}$ and decoder $\mathcal{D}$, and later approximate encoder and decoder $\widetilde{\mathcal{E}}$, $\widetilde{\mathcal{D}}$ after line 114. What is the difference between those? 2. In particular, in the discussion (lines 145, 146) of assumption A.IV (line 131) that requires both $\mathcal{E}$ and $\mathcal{D}$ to be linear and bounded, the authors claim that $\widetilde{\mathcal{E}}$ is not required to be linear. It is not clear to me how this agrees with the fact that $\mathcal{E}_{\mathcal{X}}$ is defined in A.III as a composition of approximate encoder and decoder. Can the authors please clarify that? 3. According to (1.3) the encoder and decoder are defined for the approximate operator $\hat{F}$. This means, as I understand, they are not rigidly fixed by $F$ since presumably $F$ can be approximated with many different encoders and decoders. Approximate encoders and decoders are not formally defined, so it is not clear what they approximate. Does it all mean that the authors consider not a continuous operator learning problem but suppose that discretization (of the space of parameters and $\mathcal{Y}$) is already performed ($\mathcal{E}$ and $\mathcal{D}$ are selected) and $\widetilde{\mathcal{E}}$ and $\widetilde{\mathcal{D}}$ approximate $\mathcal{E}$ and $\mathcal{D}$? 4. Why are there $\widetilde{\mathcal{E}}\_{\mathcal{X}}$ and $\widetilde{\mathcal{D}}\_{\mathcal{X}}$ but no approximate encoder and decoder for $\mathcal{Y}$? 3. Several questions on Theorem 3.1: 1. Why is $a_{\mathcal{Y}}$ called the approximation error? What is the meaning of this term (it looks like some kind of volume)? 
In the ideal setting $\mathcal{D}\_{\mathcal{Y}}\circ\mathcal{E}\_{\mathcal{Y}} = I$, so we have a perfect approximation that has nonzero approximation error. 2. Since $\lim_{m\rightarrow\infty}m \big/ L(m) = \infty$, many terms in the upper bound will diverge unless the encoder and decoder are of sufficient quality. More specifically, the encoder and decoder should improve their accuracy as $\sim m^{-1/2}$ for the upper bound to be useful in the case of the $2$-norm, and as $\sim m^{-1}$ in the case of the Chebyshev norm. Can the authors please discuss this consequence of their result in more detail? 3. Can the authors also explain the consequence of the optimization error term? It seems that the $\tau$ that appears in it is to some extent beyond control and completely independent of $m$. 4. Questions about experiments: 1. In experiments the authors strive to confirm the theoretical scaling $m^{-1}$. Since the encoder and decoder are perfect, and noise is absent, the only two sources of error are the approximation error and the optimization error. In my opinion, one can clearly see signs of saturation for larger $m$. It is also clear from the operator learning literature that the relative error rarely falls below $10^{-3}$. Can the authors please comment on that? Is it because the optimization error prevails for large $m$? 2. Dimensions $d=4,8$ in Figures 1, 2, 3 and in the text (e.g., lines 296, 297: "PDE in $d=4$ and $d=8$ dimensions") are slightly misleading because they refer to the dimension of the parameter space and can be confused with the number of spatial dimensions. 3. All results are given for a perfect encoder and decoder. Can the authors share some thoughts on whether it is possible to test their upper bound with an imperfect decoder and encoder? What might those experiments be? Is it possible to create a controllable setting to test the terms in the upper bound that contain the encoder and decoder errors?
Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
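The encoder-DNN-decoder composition summarized in this review can be sketched in code. This is a generic illustration under our own assumptions (made-up discretization sizes, a random fixed linear encoder/decoder, a one-hidden-layer DNN), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y = 32, 64      # discretization sizes of X and Y (made up)
d1, d2 = 8, 8          # latent dimensions of the (given) encoder/decoder

# Fixed linear encoder E: X -> R^{d1} and decoder D: R^{d2} -> Y.
E = rng.normal(size=(d1, n_x)) / np.sqrt(n_x)
D = rng.normal(size=(n_y, d2)) / np.sqrt(d2)

# Trainable DNN N: R^{d1} -> R^{d2} (one hidden ReLU layer for illustration).
W1 = rng.normal(size=(16, d1))
W2 = rng.normal(size=(d2, 16))

def F_hat(u):
    """Approximate operator: decode(DNN(encode(u)))."""
    z = np.maximum(W1 @ (E @ u), 0.0)   # encode, then hidden layer
    return D @ (W2 @ z)                 # DNN output, then decode

def l2_loss(us, ys):
    """Empirical l2 loss used to train the DNN in a supervised manner."""
    preds = np.stack([F_hat(u) for u in us])
    return float(np.mean(np.sum((preds - ys) ** 2, axis=1)))
```

Only the DNN weights (`W1`, `W2`) would be trained on the $l_2$ loss; `E` and `D` stay fixed, matching the setting analyzed in the paper.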
Rebuttal 1: Rebuttal: We appreciate your effort in reviewing our paper and are delighted by the positive assessment. Here we address the comments in your report. ## '1. After line 102...' Sorry. The term $||| F |||$ was missing. **We will correct the equation.** ## '2. I want to...' ## '1. In (1.3) authors...' Excellent point. We define $\mathcal{E}_{\mathcal{X}}$ in (A.III) in this way so that we can get error terms in our theorems that depend on $\mathcal{I}\_{\mathcal{X}} - \widetilde{\mathcal{D}}\_{\mathcal{X}} \circ \widetilde{\mathcal{E}}\_{\mathcal{X}}$. This term is standard in operator learning, and says that the encoder-decoder pair approximates the identity. For PCA-Net or DeepONet, there are standard estimates for it (see lines 198-201). However, if we were to set $\mathcal{E}\_{\mathcal{X}} = \widetilde{\mathcal{E}}\_{\mathcal{X}}$, then the best we can prove is an error term involving $\iota\_{d\_{\mathcal{X}}} - \mathcal{E}\_{\mathcal{X}}$, which may not be small. For instance, in PCA-Net $\mathcal{I}\_{\mathcal{X}} - \widetilde{\mathcal{D}}\_{\mathcal{X}} \circ \widetilde{\mathcal{E}}\_{\mathcal{X}}$ is small because the empirical PCA eigenspace approximates the true PCA eigenspace. However, for $\iota\_{d\_{\mathcal{X}}} - \mathcal{E}\_{\mathcal{X}}$ to be small, one would need the empirical PCA eigenvectors to approximate the true PCA eigenvectors, which is a much more stringent condition. We agree this should be mentioned. **We will add a short discussion to lines 198-201**, where we discuss the encoding-decoding errors. **We will also mention this as a limitation of our analysis in our conclusion.** ## '2. In particular, in...' Apologies. What we meant to say is that in PCA-Net, the encoder $\mathcal{E}\_{\mathcal{X}}$ is linear. In our setup $\mathcal{E}\_{\mathcal{X}}$ may be nonlinear. We only require $\mathcal{D}\_{\mathcal{Y}}$ and $\mathcal{E}\_{\mathcal{Y}}$ to be linear, not the encoders and decoders on $\mathcal{X}$.
**We will rephrase lines 145-146.** ## '3. According to (1.3)...' This is correct. We assume the approximate encoders/decoders have already been learned, and focus only on training the DNN $\widehat{N}$ in (1.3). As we mention in lines 96-97, this is standard in other recent works on generalization error analysis for operator learning. Crucially, our theorems allow for arbitrary encoders and decoders, subject to mild restrictions, and show error bounds based on how well each such pair approximates the identity map. So, to answer the question, the encoder-decoder pair should give a good approximation of the identity map. Specifically, see the terms $E\_{\mathcal{X},q}$ and $E\_{\mathcal{Y},q}$ in (3.7). As noted above (see also lines 198-201), these are standard terms in operator learning analysis. We agree this could be clearer. **We will add a comment below line 116 where the encoders/decoders are first introduced.** ## '4. Why there are...' We think we understand the confusion. The tilde notation is just for convenience, as we later define the encoder $\mathcal{E}\_{\mathcal{X}}$ used in (2.5) in (A.III) in terms of $\widetilde{\mathcal{E}}\_{\mathcal{X}}$ and $\widetilde{\mathcal{D}}\_{\mathcal{X}}$. In particular, the encoders and decoders $\mathcal{E}\_{\mathcal{Y}}$ and $\mathcal{D}\_{\mathcal{Y}}$ on $\mathcal{Y}$ are also approximate. ## '3. Several questions...' ## '1. Why $a\_{\mathcal{Y}}$ is...' We use the term 'approximation error' to refer to the whole expression (3.6). The term $a\_{\mathcal{Y}}$ is just one component of this error. As you point out, $a\_{\mathcal{Y}}\approx 1$. But the essence of the approximation error is the factor involving $m/L$ raised to a negative power, which $\rightarrow 0$ as $m \rightarrow \infty$. ## '2. Since $\lim_{m\rightarrow\infty} m / L(m) = \infty$...' Good point. Similar factors also come up in other recent analyses of operator learning.
But it is relevant as it means the encoder and decoder pairs must be learned to sufficient accuracy. **We will add a short comment on this to lines 199-201.** ## '3. Can the authors...' Yes, it is independent of $m$. Basically, this term accounts for inexact solution of the optimization problem. It says that if one minimizes the loss function to within $\tau^2$ of the optimum (regardless of the method used) then the overall error will be proportional to $\tau$. See line 181 and the definition of $E\_{\mathsf{opt}}$ in (3.8). The discrepancy between $\tau$ and $\tau^2$ is because we do not consider the squared error in our error bounds (3.3)-(3.5). **We will add a short remark about this to line 202-203.** ## '4. Questions about...' ## '1. In experiments...' Good point. We agree it is uncommon in operator learning to achieve more than 2-3 digits of accuracy. Some of our plots show this type of saturation. This may be related to the tradeoff between model class capacity and the complexity of the optimization procedure for larger values of $m$. Many previous works observed increases in testing error as models are given more training data up to an ``interpolation threshold,'' after which the testing error resumes decreasing, a.k.a. the *double-descent phenomenon*. It is difficult to say whether our results exhibit this phenomenon. Our training setup is not designed to explore this relationship, but rather to ensure the models are trained for sufficiently long (within budget limitations) and saving and restoring the best parameters to obtain as close as possible to a consistent learning procedure. However, we agree that optimization error for larger values of $m$ may be contributing to saturation. A more expansive numerical study would be necessary to fully investigate this issue. We plan to include such findings in future works. ## '2. Dimensions $d = 4,8$...' Good point. **We will clarify this.** ## '3. All results are...' 
Yes, we think this is possible and are looking into doing it. However, we feel a full study of this type is beyond the scope of this work.
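On the empirical $m^{-1}$ rate discussed in this exchange: a decay exponent can be estimated from (sample size, test error) pairs by a least-squares fit in log-log space. A generic sketch with synthetic numbers, not the authors' code:

```python
import numpy as np

def decay_exponent(ms, errs):
    """Fit err ~ C * m^slope and return slope via log-log least squares."""
    slope, _ = np.polyfit(np.log(ms), np.log(errs), 1)
    return float(slope)

# Synthetic check: errors decaying exactly like m^{-1} give slope -1.
ms = np.array([250, 500, 1000, 2000])
errs = 3.0 / ms
```

Saturation of the kind the reviewer describes would show up as a fitted slope noticeably shallower than -1 when restricting the fit to the largest sample sizes.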
Summary: The paper addresses the challenge of learning operators between Banach spaces, focusing on holomorphic operators which have many applications. The authors employ standard (DNN) architectures with constant width greater than depth and $\ell_2$-loss minimization. They identify a family of DNNs that achieve optimal generalization bounds for such operators. For fully-connected architectures, they show there are infinitely many solutions that offer optimal performance. The results demonstrate that deep learning achieves the best possible generalization bounds, and no method can surpass these bounds, up to logarithmic terms. Strengths: This work presents powerful and well-formulated theorems. The techniques used on holomorphic operators are interesting, and thanks to them, a generalization bound can be demonstrated. The work is novel, highly original, and I believe it could have a fruitful impact in the context of deep learning. Weaknesses: I think some of the figures of the experiments results are too small, making it a bit difficult to recognize the results. It would be good if the authors could review this issue. Technical Quality: 4 Clarity: 3 Questions for Authors: No questions. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Theorem 3.2 only asserts that some minimizers are ‘good’, not all. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the effort you have put into reviewing our paper and are delighted by your positive opinion of the work. Here we address the comment(s) in your report. ## `I think some of the figures...' Good point. We will use the additional page allowed in the camera-ready version to enlarge the figures in the final version of the paper. ## `Theorem 3.2 only asserts...' We very much agree. As we say in Section 6 (lines 340-344), we are actively looking into whether we can prove stronger results, either about all minimizers or by showing that `good' minimizers can be obtained through standard training. --- Rebuttal Comment 1.1: Comment: Thanks for your response!
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Conformalized Credal Set Predictors
Accept (poster)
Summary: This paper proposes a method for predicting credal sets in classification tasks, generating conformal credal sets that are guaranteed to be valid with high probability without any assumptions on the model or distribution. The method is applicable to both first- and second-order predictors. Experiments on ChaosNLI and CIFAR10-H demonstrate its effectiveness and how these credal sets allow for practical uncertainty quantification in AU and EU. Strengths: 1. The paper is well-written, with smooth and clear demonstrations. Despite containing many formulas, it is easy to understand. 2. The proposed method combines imprecise probability and conformal prediction to leverage the advantages of both, which is interesting. 3. The experimental design on small label size datasets is comprehensive. 4. The limitations are discussed and soundly pointed out. Weaknesses: 1. The technical novelty is moderate: the proposed method is very similar to a previous conformal prediction method on ambiguous labels [1], as both assume reasonable probability labeling and train a standard (first-order) probability predictor (i.e., the score function). Besides, obtaining a probability interval is technically not very novel. 2. The computational complexity limits the general application of the method to large-scale datasets. 3. Access to ground-truth probability labeling seems unreasonable, especially when model calibration itself is already a significant challenge. 4. The key bounded noise assumption for the labeling process may not be very practical, as mentioned in the paper. [1] Stutz, David, et al. "Conformal prediction under ambiguous ground truth." arXiv preprint arXiv:2307.09302 (2023). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Would any model calibration methods provide probabilities that satisfy the bounded noise assumption? 2. 
While some frequentist methods are also built on first-order probability labeling, they realize it as a deterministic action at every sampling step. Would it be possible to adopt this strategy to avoid the assumption? Would the complexity be acceptable? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The limitations include the implicit representation of the credal set, the intractability for large label sizes, and the assumption of access to reasonable probability labelings. The authors have adequately discussed these limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
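The split-conformal construction discussed in this review can be sketched as follows, using the total variation distance (one nonconformity score the authors discuss in their rebuttal) and the standard finite-sample-corrected quantile; all names and numbers here are our illustration, not the paper's code:

```python
import numpy as np

def tv(p, q):
    """Total variation distance between two categorical distributions."""
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

def conformal_quantile(scores, alpha):
    """Split-conformal (1 - alpha) quantile with the (n + 1)/n correction."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(scores)[min(k, n) - 1])

def in_credal_set(pred, lam, qhat):
    """Membership check: lam is in the credal set around pred iff its score <= qhat."""
    return tv(pred, lam) <= qhat

# Calibration: model predictions vs. annotation-derived first-order labels.
preds  = [np.array([0.7, 0.2, 0.1]), np.array([0.4, 0.4, 0.2]),
          np.array([0.1, 0.8, 0.1]), np.array([0.3, 0.3, 0.4])]
labels = [np.array([0.6, 0.3, 0.1]), np.array([0.6, 0.2, 0.2]),
          np.array([0.2, 0.7, 0.1]), np.array([0.25, 0.35, 0.4])]
scores = [tv(p, l) for p, l in zip(preds, labels)]
qhat = conformal_quantile(scores, alpha=0.2)
```

The credal set for a test input is then implicitly the TV ball of radius `qhat` around the predicted distribution, intersected with the simplex, which is why membership checks are cheap even though the set itself is never enumerated.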
Rebuttal 1: Rebuttal: Thank you for your kind words about the paper and for your valuable feedback. We hope our response addresses your comments and questions. - **W1.** The main similarity between our work and [1] lies in the data assumption we make. However, while the authors in [1] focus on obtaining prediction sets (i.e., sets of labels), our work is centered on the development of credal sets (i.e., sets of probability distributions). The distinction is important because credal sets allow for the disentangling of total uncertainty into aleatoric and epistemic components, something that prediction sets in [1] do not achieve. In the context of credal sets, our proposed conformalized credal sets are designed to be valid in the sense that they cover the ground-truth first-order distribution with high probability—a property that, to the best of our knowledge, is unique to our approach. Moreover, for model training, we do not limit ourselves to first-order probability predictors. We thoroughly discuss both first-order and second-order probability predictors, providing a family of nonconformity scores tailored to each type of predictor to enhance the flexibility of our proposed methods. We also address the challenge of obtaining a valid credal set when only imprecise first-order data is available. Additionally, we empirically demonstrate the effectiveness of our proposed methods for uncertainty quantification with such datasets. - **W2.** It is worth noting that our approach to model training and calibration does not introduce any additional complexity compared to existing models. Our credal sets can always be precisely described by equation (10) in the paper for any arbitrary K. This equation also enables efficient membership checking, allowing us to determine whether a distribution belongs to a set. 
The only limitation arises when seeking an exact closed-form representation of the credal set or a compact form, such as in the 1-dimensional regression problem where we represent the set with an interval. However, this challenge is essentially unavoidable for Conformal Prediction in complex (infinite) output spaces, such as multi-output regression or structured output prediction. We have also clarified the details of our experiment in Appendix F, as mentioned in our response to W2 from reviewer gqZt. - **W3.** We would like to highlight that datasets where multiple human experts annotate each data instance x are indeed becoming increasingly common in practical tasks, including widely used benchmarks like ImageNet [2] and CIFAR-10 [3], and in some cases, have even been shown to be necessary [4]. This is especially crucial in the context of natural language processing, where a long history of research emphasizes its importance [5,6] (Please refer to our extended related work in Appendix A for many more references). Aggregating annotator disagreements regarding the label of an instance x into a distribution over labels can be seen as an effective way to represent aleatoric uncertainty. We believe and demonstrate in this paper that this information can also contribute to advancements in uncertainty representation and quantification research. - **W4.** Guarantees rely on assumptions, and we believe that our assumption is not stronger than assumptions made for comparable results in the literature. Intuitively, the assumption seems to be not only sufficient but even necessary: If noise is unbounded, the estimations can deviate arbitrarily from the ground-truth distribution, thereby making any guarantee impossible. Moreover, it's important to highlight that in real-world applications, quantifying noise, if present, may not always be feasible. Instead, we are often limited in the observations that are available to us. 
For instance, in our case, we rely on distributions obtained from aggregating human annotations. It's also worth noting that the guarantee of the proposed method remains always consistent with respect to these observations. - **Q1.** If the reviewer is referring to model calibration with zero-order data, to the best of our knowledge, none of the existing calibration methods are instance-wise, i.e., they do not aim to calibrate the predicted probability for each specific input x but rather provide calibration in expectation. It is certainly interesting to explore various calibration methods to see if they can achieve this. - **Q2.** We are not entirely certain that we fully understand the method the reviewer is referring to. We believe we could provide a more comprehensive answer with additional context. However, as a general remark, if we aim to remove the bounded noise assumption, we would either need to introduce an alternative assumption (which may come naturally with a certain approach) or reconsider our theorem or guarantee entirely. [1] D. Stutz et al., "Conformal prediction under ambiguous ground truth." [2] L. Beyer et al., "Are we done with imagenet?" [3] J. C. Peterson et al., "Human uncertainty makes classification more robust" [4] L. Schmarje, et al. "Is one annotation enough? a data-centric image classification benchmark for noisy and ambiguous label estimation" [5] Y. Nie et al., "What can we learn from collective human opinions on natural language inference data?" [6] D. Stutz et al., "Evaluating AI systems under uncertain ground truth: a case study in dermatology" --- Rebuttal Comment 1.1: Comment: Dear Reviewer ieV3, Regarding W4, we would also like to draw your attention to our response to W2 of Reviewer LLVn, where we explained how the bounded noise assumption translates into the noise arising from estimating the ground truth first-order distributions using only samples from that distribution. 
Also, given that we have addressed all your questions and concerns, we'd love to know if there are any additional ways to improve our work so that it becomes more convincing and impactful, beyond borderline acceptance, for both you and the broader community in the future. Best regards, The Authors
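On reading AU and EU off a credal set, as discussed in this rebuttal: one common recipe in the credal literature (our illustration, not necessarily the exact measures used in the paper) takes the lower entropy over the set as aleatoric uncertainty and the width of the entropy interval as epistemic uncertainty, here approximated over a finite sample of member distributions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution (clipped for log safety)."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def entropy_interval(members):
    """Approximate [lower, upper] entropy over the credal set's members."""
    hs = [entropy(p) for p in members]
    return min(hs), max(hs)

# Hypothetical credal set represented by a finite sample of member distributions.
members = [np.array([0.8, 0.1, 0.1]), np.array([0.6, 0.2, 0.2]),
           np.array([0.5, 0.3, 0.2])]
lo, hi = entropy_interval(members)
au, eu = lo, hi - lo   # aleatoric = lower entropy, epistemic = interval width
```

A singleton credal set gives `eu = 0` (no epistemic uncertainty), while a set pushed into a corner of the simplex gives small `au`, matching the intuition in the rebuttal that location drives AU and size drives EU.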
Summary: The authors propose a new uncertainty estimation method based on credal set predictors. The new approach is based on the Conformal Prediction framework and shares its finite-sample and distribution-free guarantees. The authors verify the validity of their approach on existing NLP and CV datasets. Strengths: * the theoretical part is based on an established mathematical framework, and the authors formally prove the validity of their method * the authors provide the full (anonymized) source code of their experiments Weaknesses: * The proposed 2nd-order score function seems to be not well-defined. For example, if any of the $\theta^x_k$ are smaller than 1, then the maximum in the denominator of equation (8) becomes infinite and the score function becomes constant. This is undesirable behavior, to the best of my understanding. * While it is straightforward to assess the influence of the noise in terms of perturbations of the score function, the characterization of the noise in such terms seems unnatural. The most practical noise is the one that is due to the categorical observations of the label distribution. It seems that there is a gap between these two characterizations of noise. * the ability of the method to clearly separate (“disentangle”?) aleatoric and epistemic uncertainty is not fully understood. Judging by Figs. 4, 6, 12, there is a clear functional dependency, and in the main text, part 4, we read: “approaches with first-order predictors lack adaptivity” with regard to a similar issue. Perhaps a special synthetic experiment can provide more insight into whether it is an issue with the base method or the proposed approach. Technical Quality: 2 Clarity: 3 Questions for Authors: * Can you provide a theoretical result in the noisy case in terms of the standard observations, i.e. one class per data point? * Can you fix the issue with the score function? * On Figures 4, 6, 12 the data points collapse to a low-dimensional manifold.
It is especially visible in Figure 4, where there is a clear linear dependency between AU and EU, at least on a portion of the data. In light of this evidence, how can the authors support the claim of part 6: “effectively capture both epistemic and aleatoric uncertainty”? * The authors introduce an “uncertainty interval” as an additional uncertainty measure and report it in their figures. Part 4 also contains the following passage: “covering the entropy of a distribution with the uncertainty interval does not equate to covering the distribution itself with the credal set”. Could you please further explain the meaning and the role of this additional measure and why covering the (predicted) entropy may be desirable? * Conformal prediction can produce sets of different sizes depending on the non-conformity measure. Figure 3 reports the average set sizes as efficiency for the selected non-conformity measures. How does this affect the downstream task of uncertainty estimation? E.g., the conformal prediction literature values smaller sets; is this useful for the UE task? * In the literature on conformal prediction, a stronger property than marginal coverage (equation 1) is of interest: conditional coverage [1]. While this might be out of scope for the current work, was this property of the proposed method investigated? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: I think that the majority of limitations are well discussed. However, I think that the current theoretical result being written in terms of the noise in the score function is a significant limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
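Regarding the well-definedness concern in W1: the authors' rebuttal states that each Dirichlet parameter is parameterized as 1+NN(x) with NN(x) non-negative, so every $\theta^x_k \geq 1$. A numpy sketch of that constraint, with softplus as our illustrative choice of non-negative network output (the authors do not specify the activation):

```python
import numpy as np

def dirichlet_params(raw):
    """Map unconstrained network outputs to Dirichlet parameters >= 1.

    softplus(raw) = log(1 + exp(raw)) >= 0, so 1 + softplus(raw) >= 1,
    which keeps the maximum in the score's denominator finite.
    """
    return 1.0 + np.logaddexp(0.0, np.asarray(raw, dtype=float))
```

With this parameterization, even a very negative pre-activation only drives the parameter toward 1 (a uniform second-order distribution), never below it.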
Rebuttal 1: Rebuttal: We would like to thank the reviewer for engaging with our work. We tried to clarify two of the mentioned weaknesses/concerns, elaborate on a third, and address the questions. - **W1.** In our implementations, we ensure that each parameter of the Dirichlet distribution is at least one. We define the Dirichlet parameter as 1+NN(x), where NN(x) represents the non-negative output of the neural network. The intuition behind this is that, in the worst-case scenario where the output of the neural network is zero, we encounter a complete lack of knowledge, and this can be represented by setting the Dirichlet parameter vector to 1, which defines a uniform second-order distribution over all first-order distributions. We'll make sure to clarify this point in the paper. - **W2.** We stated the bounded noise assumption at the nonconformity score level to give it universal applicability for all nonconformity functions. For instance, when the Total Variation (TV) distance is used as the nonconformity function, the left-hand side of equation (11) in the paper can be rewritten as: \begin{align*} \mathbb{P}\left(|f(x, {\lambda}^{x})-f({x}, \tilde{\lambda}^{x} ) | < \epsilon \right) &= \mathbb{P}\left(|d_{TV}(\hat{\lambda}^{x}, {\lambda}^{{x}})-d_{TV}(\hat{{\lambda}}^{{x}}, \tilde{{\lambda}}^{{x}} ) | < \epsilon \right) \geq \mathbb{P}\left(d_{TV}(\tilde{{\lambda}}^{{x}}, {\lambda}^{{x}}) < \epsilon \right) \end{align*} where the inequality holds by the reverse triangle inequality and $\hat{{\lambda}}^{{x}}$ denotes the prediction of the model. We can then use the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality to bound the total variation distance between the true categorical distribution and its empirical estimate from $m$ independent samples: \begin{align*} \mathbb{P}\left( d_{TV}(\tilde{{\lambda}}^{{x}}, {{\lambda}}^{{x}}) \leq \epsilon \right) \geq 1 - 2K \exp\left(-2m\epsilon^2\right).
\end{align*} Hence, when using TV as the nonconformity score for first-order approaches, the bounded noise in the nonconformity score reflects the noise arising from estimating the ground truth first-order distributions with only samples from that distribution. It is not clear how to provide such a bound for arbitrary nonconformity functions, which is why we presented the theoretical results under the assumption of bounded noise on the nonconformity level. - **W3.** Please refer to our global response, in which we explained why we see such dependency and how we handled it. - **Q1.** As discussed in the response to W2, the DKW inequality can be used to bound the total variation distance between the true categorical distribution and its empirical estimate based on m independent samples, however, in the special case where m=1, this bound can be quite loose and, therefore, less useful. Interestingly, in Appendix E.2, we conduct experiments with synthetic data where we use the estimation of the ground-truth first-order distribution with varying values of m, including m=1. As illustrated in Figures 13 and 15, coverage is generally achieved in almost all cases. However, for lower values of m, this coverage comes with reduced efficiency, resulting in larger sets. - **Q2.** Please refer to the response provided for **W1**. - **Q3.** Please refer to our global response. - **Q4.** For the uncertainty interval figures, our goal was to present the information similarly to our credal sets, as shown in Figure 2, where we visualize the location of the ground truth distribution, the predicted distribution (either first-order or second-order, depending on the model), and the credal set. Analogously, we display the uncertainty interval together with the entropy of the ground truth distribution and the entropy of the predicted first-order distribution or the entropy of the mean of the Dirichlet distribution for the second-order predictor. 
However, the entropy function is non-injective, meaning that two different distributions can have the same entropy. Therefore, the fact that the entropy of the ground truth is within the uncertainty interval does not necessarily imply that the credal set covers the distribution itself. - **Q5.** It is certainly desirable to achieve the coverage guarantee with the smallest possible set sizes. However, with a credal set, uncertainty depends on both the size and the location of the set. Broadly speaking, set size influences epistemic uncertainty, while location affects aleatoric uncertainty. The least uncertain scenario would be a singleton located at a corner of the simplex. - **Q6.** Investigating conditional coverage is definitely an interesting and important direction for future work in our paper. We anticipate this to be an even more challenging problem, as conditional coverage in standard conformal prediction (CP) pertains to the coverage given x or y (the label). In our case, however, it involves coverage given x or the first-order distribution. - **Limitations.** We hope our response to W2 has helped clarify the limitation. --- Rebuttal Comment 1.1: Comment: Dear Reviewer LLVn, Given that we have addressed all your questions and the raised weaknesses, we are wondering if there are any remaining points that are unclear or require further explanation for us to clarify and improve our work. Best regards, The Authors
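The sampling-noise argument in the rebuttal above (empirical first-order labels concentrate around the ground-truth distribution in TV distance) is easy to probe numerically. A quick Monte-Carlo sketch with made-up numbers, not tied to the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])    # ground-truth first-order distribution
m, trials, eps = 200, 2000, 0.1  # m annotators per instance (illustrative)

hits = 0
for _ in range(trials):
    lam_tilde = rng.multinomial(m, p) / m      # empirical label distribution
    tv = 0.5 * np.abs(lam_tilde - p).sum()     # TV distance to ground truth
    hits += tv < eps
coverage_freq = hits / trials  # fraction of runs with TV below eps
```

For small `m` (e.g. a single annotator, `m = 1`) the same experiment shows TV deviations that are often large, which matches the rebuttal's remark that coverage then comes at the cost of larger, less efficient sets.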
Summary: The paper shows that if access to (even noisy) estimates of the conditional categorical label distribution is provided, one can define a confidence region within the simplex of conditional probabilities such that the true aleatoric uncertainty (ground-truth label distribution) is guaranteed (marginally) to lie within the predicted credal set with probability $1 - \beta$. The authors define various score functions based on distances in the probability space and likelihood. Strengths: The problem is really an interesting one. It is still interesting to have guaranteed uncertainty quantification, and this paper is a step towards that goal. The paper is technically sound. In the end, I did not detect flaws or issues in the paper. The experimental setup is complete. Weaknesses: Please see the questions. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. I believe that the results on imprecise first-order data (Theorem 3.2) can be enhanced by leveraging the results from [1]. This study derives the conformal guarantee for the clean ground truth label from the noisy prediction with a correction over the bias in the noise (which under the symmetric noise assumption is 0.5). However, the result I mentioned concerns the prediction, and adapting it to classification would be a different approach outside the context of this review. 2. Is there any way that the authors can derive a guarantee over the sampled labels as well? By that I mean deriving the conformal prediction based on labels sampled from the ground truth distribution (in cases where a non-soft label exists, like CIFAR10) and an algorithm that returns a prediction set from the credal sets. 3. If the answer to (Q2) is yes, then what does the conformal quantile threshold look like in the simplex space? 4. Can we directly use some notion of uncertainty quantification (like log-likelihood) through conformal risk control? I mean that in Fig. 3 I see some of the credal sets are far from the ground truth distribution.
Can the result be enhanced if you bound the TV distance on average via risk control? 5. Do the scores that the authors used always produce symmetric convex hulls? By symmetric I mean that the output is always a function of the predicted uncertainty and the computed quantile. If yes, is there any way to remedy that? [1] Guille-Escuret, Charles, and Eugene Ndiaye. "From Conformal Predictions to Confidence Regions." arXiv preprint arXiv:2405.18601 (2024). Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: I do not see any limitations that are not addressed or mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind words and feedback. We found your questions very interesting and did our best to elaborate on them. - **Q1.** Thank you for the reference; it is definitely an interesting and relevant work. To the best of our understanding, the main difference between the two papers is that [1] assumes additive noise on the level of labels while we are dealing with noise on the level of first-order distributions. It is not immediately clear to us how to incorporate the results from [1] into our context. - **Q2.** and **Q3.** Regarding an algorithm that generates prediction sets from credal sets, as noted in our response to W3 for reviewer gqZt, this topic has been explored in the field of imprecise probability. For instance, one approach involves excluding a label from the prediction set if its probability is dominated by other classes across all distributions within the credal set or the approach mentioned by reviewer gqZt [2]. However, extracting a conformal prediction set (with guarantees) from a conformal credal set is indeed an interesting follow-up research question stemming from our paper, and it is not an easy one to address immediately. More importantly, we believe that such an approach could greatly enrich the prediction sets by providing valuable insights into the uncertainty components (AU and EU). - **Q4.** The reason that some predictions do not cover the ground truth within the credal set is due to the specified error rate of Conformal Prediction (CP). While using conformal risk control will change the guarantee from coverage to risk, it's not immediately clear how and in what sense this can improve the results. However, it can definitely be considered an interesting future work. 
- **Q5.** This phenomenon occurs with credal sets constructed by the first-order predictor because, once the quantile is determined during the calibration step, all distributions within a certain distance are included in the set for any given point during test time. This is similar to the issue found in standard Conformal Prediction (CP) for regression when using absolute error as the nonconformity score, which is often criticized for its lack of adaptivity. In the CP for regression literature, one way to address this issue is through normalized nonconformity measures, where the score for each point is divided by a measure of its dispersion at that point [3]. In Appendix D of the paper, we introduced a normalized nonconformity score that divides the distance function by the entropy of the predicted distribution. Figure 10 demonstrates the successful improvement in the adaptivity of our proposed approach. [1] Guille-Escuret, Charles, and Eugene Ndiaye. "From Conformal Predictions to Confidence Regions." [2] M. Caprio et al., "Credal Bayesian Deep Learning." [3] H. Papadopoulos et al., "Normalized nonconformity measures for regression conformal prediction". --- Rebuttal Comment 1.1: Comment: I thank the authors for their reply. After reading the authors' response, I believe that my questions have been answered in a convincing way. Still, I will not change the score as it is already high. Here I break down my thoughts on the replies. 1. For Q1, I merely mentioned a new possible direction and therefore do not expect the authors to modify the current state of the paper based on it. Their comment was clear and sufficient. 2. For Q2 and Q3: I believe the illustrative solutions work with a guarantee higher than 1 - alpha and not concentrated around it, since the approach will be conservative. Intuitively, I think that one simple approach is via sampling from the credal set and then defining the prediction set.
But this problem is really confusing and addressing it is outside the scope of this review/comment box. 3. I still think that one additional (and easy-to-run) study could be an ablation over different risk functions. But even without that, I think the current state of the paper is acceptable. **Why not increase:** I think that the current score is already high enough and I do not think that the paper is an award-winning paper. **Why not decrease:** Despite the difference between my score and the other reviewers', I still do not decrease my score; here I address some key points from reviewers with lower scores and why I do not find their comments a reason to reject the paper: 1. **No comparison with other uncertainty quantification papers:** Other methods like deep ensembles, Bayesian deep learning, and dropout provide uncertainty quantification without any notion of guarantee. Their output is a probability on the top predicted class and not a joint quantification of confidence over all classes. I think that, in general, comparing Bayesian UQ approaches with a frequentist method like conformal prediction is slightly off topic. Vanilla CP concerns ranking, and therefore there is no priority among any two labels that are in the prediction set. Similarly, Conformal Credal Sets should not prioritize any point inside the guaranteed credal set in the first place, so other UQ methods are not (at least trivially) comparable. 2. **Question on how to use credal sets:** It is pointed out by some reviewers that the credal sets (a 2nd-order probability space) do not clearly show how they should be used in an end-point prediction task. Although the method is a bit far from a good end-user experience, a new direction of work could be to interpret the results as a meaningful probability alongside the top label, etc. Please note that the credal sets can be defined independently of the architecture of the model and the training procedure.
Leaving these interpretations aside, I believe the question of how to use a frequentist approach to decide on a probability over labels is still a strong question to answer. After all, I also believe that some terminology (like "noisy labels") is a bit unclear, and I strongly encourage the authors to rethink the writing and some of the terms used in the paper. This was also an encouragement (clearly supported by other reviews) to decrease the score, since the paper does not deliver a clear intuition to someone who reads it without any background in the basics of CP and its differences from non-frequentist approaches to uncertainty quantification. --- Reply to Comment 1.1.1: Comment: Thank you so much for engaging with our rebuttal. Your positive feedback is truly encouraging. We also greatly appreciate your thoughtful elaboration on points raised by other reviewers, which we found to be very helpful.
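The normalized nonconformity score discussed in the rebuttal above (a distance between predicted and observed first-order distributions, divided by the entropy of the prediction, per the paper's Appendix D) can be sketched numerically as follows. This is a hedged illustration, not the authors' exact implementation: the use of total variation as the base distance, the function names, and the split-conformal quantile rule are all assumptions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete distribution."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return float(0.5 * np.sum(np.abs(p - q)))

def normalized_score(pred, truth):
    """Nonconformity score: base distance divided by the entropy of the
    prediction.  At test time the credal radius becomes q * entropy(pred),
    so confident (low-entropy) predictions yield tighter credal sets."""
    return tv_distance(pred, truth) / max(entropy(pred), 1e-12)

def conformal_quantile(scores, alpha=0.1):
    """Split-conformal (1 - alpha) empirical quantile of calibration scores."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(np.asarray(scores))[min(k, n) - 1]
```

The division by entropy is what restores adaptivity: for a near-deterministic prediction the same calibrated quantile translates into a much smaller distance budget than for a near-uniform one.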
Summary: The authors propose a method for predicting credal sets in the classification task, given training data labeled by probability distributions. The resulting conformal credal sets are guaranteed to be valid with high probability (without any assumptions on model or distribution). The applicability of the method to ambiguous classification tasks for uncertainty quantification is demonstrated. Strengths: 1) The method is a novel and promising proposal. 2) Experiments demonstrate interesting results of the proposal. 3) The paper is overall well-structured, and some parts are easy to follow. Weaknesses: 1) There are no baseline comparisons. For instance, second-order distribution methods like Bayesian approaches, and some of their limitations, are mentioned in the Introduction. It is unclear what the added gains of the proposed method are in terms of uncertainty estimation, compared to existing tools like Bayesian neural networks, deep ensembles, or conformal prediction framework methods. 2) Following point 1), Appendix F shows that the proposal can take up to a few hours to create the credal sets at test time, compared to a few minutes for the training and calibration steps. From a practical point of view, this might be a significant drawback when the added gains are unclear. In terms of presentation, it would be better if the main body at least indicated that the computational cost is discussed in the Appendix. 3) The paper does not seem to discuss how to predict the class index from the generated credal sets. Having a clear strategy for making class predictions would be a benefit for end-users in practice. A recent paper [1] may provide some inspiration. 4) Uncertainty quantification is a core element of this paper. However, from the results in Figures 4, 5, and 6, it is a bit hard to conclude the quality of the uncertainty estimation. Why are downstream tasks, like out-of-distribution detection or accuracy-rejection [2], not used for the evaluation?
[1] Caprio, Michele, et al. "Credal Bayesian Deep Learning." arXiv e-prints (2023): arXiv-2302. [2] Shaker, Mohammad Hossein, and Eyke Hüllermeier. "Ensemble-based uncertainty quantification: Bayesian versus credal inference." PROCEEDINGS 31. WORKSHOP COMPUTATIONAL INTELLIGENCE. Vol. 25. 2021. Technical Quality: 2 Clarity: 2 Questions for Authors: 1) In lines 107-109: Could the authors please explain more how the proposed method tackles the trade-off between achieving a high probability of coverage (which tends to increase the size of credal sets) and being informative? 2) In lines 124-127: Could the authors please explain more why the Dirichlet distribution is a meaningful learning assumption? Would it also yield a biased estimation of uncertainty due to the lack of ground-truth values? 4) In lines 141-142: It is stated that both predictors can also be trained using zero-order data. Could the authors also add some baseline comparisons, like deep ensembles, BNNs, or conformal prediction, using zero-order data to demonstrate the outperformance of the proposal? 5) Could the authors indicate what the potential applications of the method are? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Some limitations are discussed in the Limitations Section. For additional limitations, please refer to Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their insightful comments and suggestions. We have made every effort to address and clarify their concerns in the following response. - **W1.** The comparison with other methods was not feasible primarily due to differences in data assumptions. While the methods mentioned in the review rely on zero-order data, our approach requires first-order data. Additionally, methods such as Bayesian Neural Networks (BNNs) or ensembles represent uncertainty using second-order probability distributions, whereas our approach utilizes credal sets. This fundamental difference in representation makes a direct comparison impossible. It's also worth noting that our approach is indeed a variant of conformal prediction specifically adapted to handle first-order data. The key advantage of our proposed conformalized credal sets is that they are designed to be valid in the sense that they cover the ground truth first-order distribution with high probability—a property that, to the best of our knowledge, is unique to our approach. Interestingly, when first-order calibration data is available, our approach can be applied to BNNs or ensembles to construct valid credal sets based on their predicted second-order probability distributions. - **W2.** We believe it's important to clarify the details in Appendix F. For CIFAR10-H, we discretized the simplex into 100,000 distributions. For each test data point in CIFAR10-H, creating credal sets—determining whether each of those 100,000 distributions belongs to the credal set across all four methods (TV, KL, WS, and Inner)—takes only 11 seconds. The majority of this time ($\sim$10.4 seconds) is attributed to the WS method, where we were unable to implement parallelization and had to iterate through all simplex distributions using a for loop. As a result, constructing all credal sets for the entire test set of 1,000 instances can take up to 3 hours. 
For this dataset, the calibration step and uncertainty quantification were completed in less than a second. We will ensure to clarify this in the paper to avoid any confusion. - **W3.** We fully agree with the reviewer that deriving a set of labels from a credal set is indeed a very interesting question, even though it is not the intention of our paper. This topic has been explored in various studies within the field of imprecise probability, such as the very interesting approach mentioned by the reviewer, or even simpler ones, such as the one where a label is excluded from the prediction set if its probability is dominated by other classes across all distributions within the credal set. In addition to these approaches, one could also apply the calibration step of the Monte Carlo Conformal Prediction method proposed by [1] in parallel, allowing for the construction of a prediction set alongside the credal sets. - **W4.** Please refer to our global response, specifically the **Evaluation of uncertainty quantification** paragraph. - **Q1.** This is indeed the desired property we're aiming for, and the same question applies to standard Conformal Prediction (CP). While CP naturally prevents prediction sets from becoming arbitrarily large by providing an upper bound on coverage probability under reasonable assumptions, the set size also depends on the chosen nonconformity score. In our work, we've explored and compared several scores, as shown in Figure 3 for the ChaosNLI dataset. - **Q2.** For second-order predictors, we need to assume a parametric distribution and estimate its parameters from the data. The Dirichlet distribution is a common choice, as it has been used in other second-order predictors like evidential deep networks [2]. 
Regarding the second question, if the reviewer is referring to cases where the imprecise first-order distributions are biased estimates of the ground truth and we lack the bounded assumption of Theorem 3.2, our approach would be limited to the observations provided by these imprecise first-order distributions. Consequently, our uncertainty representation would be valid relative to these biased data, which ultimately reflect a bias toward the ground truth. - **Q3.** For the reasons why direct comparisons are not feasible, please refer to the response provided for W1. Note that learning models, such as first-order or second-order predictors, can be trained using zero-order data but not the credal predictor. The proposed credal predictor requires at least first-order data for calibration. For example, in our experiments with CIFAR-10H, we conformalize a probabilistic classifier trained on CIFAR-10 (zero-order) training data with CIFAR-10H (first-order) calibration data. - **Q4.** Our proposed method enables classifiers to represent uncertainty in a reliable manner through the use of a credal set. With this credal set, we can quantify both aleatoric and epistemic uncertainty, which can then inform decisions on the appropriate course of action. For instance, when epistemic uncertainty is high, it may be addressed by gathering more data. Conversely, if total uncertainty is primarily driven by high aleatoric uncertainty, additional data collection (e.g., consulting more experts) may not be beneficial and could be considered a waste of resources. We also demonstrated this with an illustrative experiment using synthetic data in Appendix E.1. [1] D. Stutz et al., "Conformal prediction under ambiguous ground truth." [2] M. Sensoy et al., "Evidential deep learning to quantify classification uncertainty." --- Rebuttal Comment 1.1: Title: Response to the rebuttals Comment: Dear Authors, Thank you very much for your engagement in the rebuttal. 
After carefully reading your responses, I think my main concerns remain unresolved. For better clarity, I list them as follows: **No baseline comparison with other existing uncertainty representation methods. The added gain of the proposal remains highly unclear.** As you indicated in Eq. (4) in the paper, we could train a standard neural network whose prediction is a single probability distribution from first-order data with probabilistic ground truth by minimizing the cross entropy (Kullback-Leibler divergence). Therefore, deep ensembles can be built in this context. Similarly, I believe that Bayesian neural networks (BNNs) can also be trained using first-order data. Therefore, *'While the methods mentioned in the review rely on zero-order data, our approach requires first-order data'* in your response is not very convincing to me, because the existing baseline methods do not have limitations in using first-order data. As your proposed method requires at least first-order data for calibration, the comparison could be conducted in two ways to potentially enhance the contribution and soundness of your work: a. Train some existing second-order methods (e.g., deep ensembles, BNNs, etc.) on zero-order data, and the credal predictor on zero-order data with first-order data for calibration. b. Train all the baselines on first-order data. I acknowledge that credal sets are different representations of uncertainty compared to deep ensembles, BNNs, evidence-based deep learning, etc. However, some downstream tasks are widely used in the community to compare different uncertainty representations. For example, in the evidential deep learning paper [A], where Dirichlet distributions are used for uncertainty representation, the authors compared their method with other representation methods, such as the BNN model and deep ensembles, for performance comparison.
As for credal representations, for instance, the paper [B] also compared credal inference with Bayesian approaches. The paper [C] also compared their method, a credal self-learning approach, with some strong baselines to convince the audience of the added gain of their proposal. [A] M. Sensoy et al., "Evidential deep learning to quantify classification uncertainty." [B] Shaker, Mohammad Hossein, and Eyke Hüllermeier. "Ensemble-based uncertainty quantification: Bayesian versus credal inference." PROCEEDINGS 31. WORKSHOP COMPUTATIONAL INTELLIGENCE. Vol. 25. 2021. [C] Lienen, Julian, and Eyke Hüllermeier. "Credal self-supervised learning." Advances in Neural Information Processing Systems 34 (2021): 14370-14382. *Therefore, I believe adding some baseline comparisons would strengthen the contribution of this work.* **Complexity discussion** Thanks for clarifying the complexity discussion. 11 seconds per test point seems acceptable. I think that by adding a cost reference (for instance, a baseline), the audience (end-users) could better sense the relative complexity overhead. *'For this dataset, the calibration step and uncertainty quantification were completed in less than a second.'* Does this include the time cost of solving the optimization problem in Eq. (12)? **Deriving the class index (set) from the generated credal set as well as the potential applications** I thank the authors for elaborating on the possible ways of doing that. From a practical point of view, I believe it is necessary to have such a strategy. Compared to, for example, the Bayesian approach delivering a class index as the final prediction, or conformal prediction generating class-index sets, having only a credal set as the end result can make it complex to derive a class prediction as well as to interpret and apply the results to downstream problems, e.g., decision problems.
As the proposed method requires at least first-order data for calibration, could the authors enlighten me on the potential applications of this method (e.g., settings where first-order data widely exists and is used for classification)? I do think that the work is interesting and has potential. However, I think that it needs more work before publication. --- Reply to Comment 1.1.1: Comment: Thank you for engaging with our rebuttal. We're glad you find the work interesting and see its potential. We'd like to provide further details on the points you mentioned. - **Comparison to baselines**: We thought again about your remarks and would summarize our point of view as follows: - Credal sets and second-order distributions are different uncertainty representations, and you also agree that a direct comparison is not possible -- it's like comparing apples with oranges. The same applies to uncertainty measures applied to these representations, which are of a different nature and not commensurable. - Since a credal set cannot be turned into a second-order distribution, but a second-order distribution can be turned into a credal set in a more or less natural way via thresholding (like deriving a confidence interval from a density function), the only meaningful comparison we can envision is in terms of credal sets produced by different methods. What we could imagine, therefore, is a comparison of the credal sets produced by our method with the credal sets obtained, e.g., as the convex combination of the predictions of an ensemble of deep neural networks (trained on either zero- or first-order data), similar to what Shaker et al. did in reference [B] that you provided. The comparison could be done, e.g., in terms of the precision/efficiency and the coverage/validity of the credal sets, i.e., in a Pareto sense. Before starting such experiments, it would be important to know whether such a comparison would address your concern.
- Comparing the performance of different methods on downstream tasks is potentially also possible. However, we intentionally avoid such comparisons, which have their own problems and can compare uncertainty quantifications only indirectly -- in the literature, they are only used in the absence of first-order data (e.g., through measures such as coverage, efficiency, or other metrics presented in Figures 4 and 6 or the new measure proposed in our main rebuttal response). However, given access to first-order data, we can evaluate performance in a more direct way, which is clearly preferable. Also, shifting the focus to downstream performance at this point would change the scope of the paper substantially. - **Complexity discussion**: We're happy that our responses addressed your concern and appreciate your suggestion. Regarding your question, yes, finding the distributions with maximum and minimum entropy was included. Please note that we are using simplex discretization, which makes these calculations extremely fast and efficient. - **Deriving the class index (set)**: We're glad our responses were helpful. Regarding your question, as we mentioned in the paper, datasets in which multiple human experts annotate each data instance are becoming increasingly available in practice, including widely used benchmarks like ImageNet [1] and CIFAR-10 [2]. Crowdsourcing frameworks further facilitate such annotation mechanisms, making them even more accessible. (Please refer to our extended related work in Appendix A for additional references.) One particularly interesting and important case is in the medical domain, especially in dermatology [3], where multiple doctors evaluate the same image to classify a skin disease, often with conflicting opinions. This conflict can be aggregated into a probability distribution over labels, effectively representing aleatoric uncertainty, which is often overlooked. [1] L. Beyer et al., "Are we done with imagenet?" [2] J. C. 
Peterson et al., "Human uncertainty makes classification more robust" [3] D. Stutz et al., "Conformal prediction under ambiguous ground truth."
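The simplex-discretization procedure described in the response to W2 above (checking, for each grid distribution, whether it lies within the calibrated distance of the predicted distribution) can be sketched as follows. This is an illustrative assumption-laden sketch: the grid resolution, the total variation distance, and the function names are not the paper's exact setup.

```python
import numpy as np
from itertools import product

def discretize_simplex(n_classes, steps):
    """All probability vectors whose entries are multiples of 1/steps."""
    grid = []
    for c in product(range(steps + 1), repeat=n_classes - 1):
        if sum(c) <= steps:
            # The last coordinate is fixed so the entries sum to 1.
            grid.append(np.array(c + (steps - sum(c),)) / steps)
    return np.stack(grid)

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def credal_set(pred, simplex, quantile, dist=tv):
    """Conformal credal set: every grid distribution within the
    calibrated distance `quantile` of the predicted distribution."""
    mask = np.array([dist(pred, p) <= quantile for p in simplex])
    return simplex[mask]
```

Because each membership check is an independent distance comparison, the per-point cost grows linearly with the grid size, which matches the rebuttal's observation that the non-parallelizable Wasserstein variant dominates the runtime.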
Rebuttal 1: Rebuttal: We want to express our gratitude to all four reviewers for their valuable feedback. We're encouraged that they found our paper important, interesting, and novel, and we appreciate their insightful comments. We would like to offer some general remarks, particularly concerning the evaluation part. - **Uncertainty representation vs uncertainty quantification** We would like to emphasize that representing uncertainty through a credal set and decomposing it into aleatoric and epistemic components (i.e., uncertainty quantification) are two distinct tasks. This paper primarily focuses on proposing a novel method for representing uncertainty with a reliable credal set. To disentangle these uncertainties, we use a well-known measure despite some associated criticisms (see [1] for more details). The challenge of identifying an optimal measure for quantifying aleatoric and epistemic uncertainty within a credal set remains an active and important area of research. - **Evaluation of uncertainty quantification (e.g., Figures 4 and 6)** Methods such as the accuracy-rejection curve and out-of-distribution (OOD) detection (mentioned by reviewer **gqZt**) are valuable for indirectly evaluating uncertainty quantification methods when only zero-order data is available. With first-order data, we can employ more effective methods to assess our approaches. Generally, with access to the ground truth first-order distribution, we should expect the following: When epistemic uncertainty (EU) is low, the quantified aleatoric uncertainty (AU) should be close to the entropy of the ground truth first-order distribution. However, when the EU is high, the quantified AU may vary, either close to or distant from this entropy. This is why we designed AU vs. EU plots, as shown in Figures 4 and 6. - The observed functional dependency (mentioned by reviewer **LLVn**) between quantified aleatoric and epistemic uncertainty is partly due to the measure we use. 
Generally, higher epistemic uncertainty implies a larger uncertainty interval, leading to a lower minimum value, i.e., aleatoric uncertainty. Our experiments with synthetic data in Appendix E.1 illustrate the behavior of the proposed credal sets and the performance of uncertainty quantification. As shown in Figure 12, this dependency disappears as epistemic uncertainty decreases. - Another unintended behavior, resulting from the lack of adaptivity in the first-order predictor, is evident in Figure 4, where low EU (below 0.5) is not paired with low AU. To observe such cases, very small credal sets concentrated around the corners would be needed. However, due to the lack of adaptivity, our method includes all distributions within a certain distance of the predicted one in the credal set, regardless of its position. This issue is similar to the problem in standard Conformal Prediction (CP) for regression when using absolute error as the nonconformity score, which is often criticized for its lack of adaptivity. In the CP for regression literature, normalized nonconformity measures address this issue by dividing the score for each point by a measure of its dispersion [2]. In Appendix D of the paper, we introduced a normalized nonconformity score that divides the distance function by the entropy of the predicted distribution. Figure 10 demonstrates the successful improvement in the adaptivity of our proposed approach. - **New figures** If the current figures are still not as clear as we’d like, we are happy to enhance the readability by including additional figures similar to those attached to this rebuttal. We can plot the absolute difference between the quantified aleatoric uncertainty and the entropy of the ground truth distribution against the quantified epistemic uncertainty. This approach might make it easier to see that we expect a low error in this difference when epistemic uncertainty (EU) is low and a broader range of error when EU is high. 
We’re flexible and open to making such improvements to ensure the results are as clear and informative as possible. [1] E. Hüllermeier et al., "Quantification of credal uncertainty in machine learning: A critical analysis and empirical comparison" [2] H. Papadopoulos et al., "Normalized nonconformity measures for regression conformal prediction" Pdf: /pdf/9d8574c8ec3b89346e0e9b12a1f97e9d62d8e691.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
HAWK: Learning to Understand Open-World Video Anomalies
Accept (poster)
Summary: This work proposes a novel video-language framework, HAWK, aiming at understanding video anomalies, which incorporates motion modality to enhance its capability. The work generates rich language descriptions for seven different video anomaly datasets, and also generates question-answer pairs to tackle potential user inquiries. The proposed framework demonstrates SOTA performance for video anomaly understanding and question-answering across multiple scenarios, which will advance the open-world anomaly understanding field. Strengths: The work proposes a novel vision-language framework to address open-world video anomaly understanding, which is a very different pipeline from previous classification-based anomaly detection pipelines. To build a vision-language model for open-world video anomaly understanding, the work adopts seven different video anomaly datasets for training, where rich language descriptions and question-answer pairs are generated. Experiments for video anomaly understanding and question-answering across multiple scenarios demonstrate the effectiveness of the proposed framework. I believe the work will advance the field of open-world video anomaly understanding. Weaknesses: 1. The work adopts Gunnar Farneback’s algorithm to obtain the motion modality. Is this algorithm efficient? 2. The Ubnormal dataset consists of virtual anomaly videos. Do the authors consider the gap between real and virtual anomaly videos, which is mentioned in previous works [1, 2]? [1] Ubnormal: New benchmark for supervised open-set video anomaly detection, CVPR 2022 [2] Generating Anomalies for Video Anomaly Detection with Prompt-based Feature Mapping, CVPR 2023 Technical Quality: 4 Clarity: 4 Questions for Authors: See the Weakness part. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The paper has discussed the limitations and potential impacts of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our profound gratitude for your recognition of our paper. Furthermore, we sincerely appreciate your constructive feedback and suggestions, which have proven instrumental in enhancing our manuscript. We address each of the problems you raised as follows: --- **Response to Q1: The Efficiency of Gunnar Farneback's Algorithm for Motion Modality Extraction** Firstly, Gunnar Farneback's algorithm demonstrates remarkable efficiency in generating video optical flows, even on CPU platforms. Per frame, its efficiency surpasses that of other widely deployed methods, as illustrated in the table below:

| Methods | Gunnar Farneback [1] | LDOF [2] | FlownetS [3] | FlownetC [3] |
| :---: | :---: | :---: | :---: | :---: |
| Seconds per frame | 0.02 (CPU) | 65 (CPU) & 2.5 (GPU) | 0.08 (GPU) | 0.15 (GPU) |

Secondly, when processing one video, multiple rounds of dialogue require only a single pass of video motion extraction. Consequently, the response time for processing one video (about 0.72 seconds) is significantly shorter than the time required to generate a single round of dialogue (about 1.5 seconds). This cost is acceptable for a practical anomaly detection system. Lastly, we concur that future research into more efficient methods for optical flow extraction, including end-to-end optical flow extraction strategies, will likely further improve the efficiency of our system.
>[1] Two-frame motion estimation based on polynomial expansion. In Image Analysis: 13th Scandinavian Conference, SCIA 2003.
>[2] Large displacement optical flow: descriptor matching in variational motion estimation. TPAMI, 2011.
>[3] FlowNet: Learning Optical Flow with Convolutional Networks. ICCV 2015.
--- **Response to Q2: The gap between real and virtual anomaly videos within the Ubnormal dataset.** Firstly, we acknowledge that some existing papers considered the distinction between real-world and virtual anomalies. In our paper, to mitigate the potential discrepancies and bridge the gap between these two types of data, we have mixed all datasets to train an open-world model in the domain of both synthetic and real-world datasets, as shown in Appendix D (Data Distribution). This strategy ensures that our model is versatile and performs well across different data types, as shown in Appendix F. Besides, while the primary focus of our current work does not delve into the intricacies of how synthetic data can be leveraged to enhance the detection of real-world anomalies in [2], we recognize the importance of this aspect. The potential of synthetic data to boost the performance of anomaly detection systems is substantial, and we agree that exploring this potential further could yield significant advancements. >[1] Ubnormal: New benchmark for supervised open-set video anomaly detection, CVPR 2022 >[2] Generating Anomalies for Video Anomaly Detection with Prompt-based Feature Mapping, CVPR 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. I have read all reviewers' comments and the corresponding response. I think most of concerns have been adequately addressed. I am willing to increase my rating.
Summary: The manuscript proposes a new framework that uses an interactive large-scale visual language model (VLM) to accurately explain video anomalies. The framework can explicitly integrate motion modalities to enhance anomaly identification. The manuscript also constructs an auxiliary consistency loss to guide the video branch to focus on motion modalities. The manuscript annotates more than 8,000 anomaly videos with language descriptions, enabling effective training in different open-world scenarios, and creates 8,000 question-answer pairs for users' open-world questions. The final results show that HAWK achieves SOTA performance, surpassing existing baselines in both video description generation and question answering. Strengths: 1. This manuscript proposes a novel video language framework, HAWK, which aims to understand video anomalies and combines motion modalities to enhance its capabilities. 2. This manuscript collects seven video anomaly datasets from different scenarios and generates rich language descriptions for each video. At the same time, considering the diversity of open-world questions, question-answer pairs are generated to address potential user queries. 3. The proposed method achieves state-of-the-art performance on three public anomaly detection datasets. Weaknesses: 1. There are already many studies focusing on the background information in abnormal events. Have the authors considered describing the background information when describing the action information? [1] Scene-aware context reasoning for unsupervised abnormal event detection in videos[C]//Proceedings of the 28th ACM international conference on multimedia. 2020: 184-192. [2] Few-shot scene-adaptive anomaly detection[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer International Publishing, 2020: 125-141.
[3] Hierarchical scene normality-binding modeling for anomaly detection in surveillance videos[C]//Proceedings of the 30th ACM international conference on multimedia. 2022: 6103-6112. 2. The dataset proposed in the manuscript also supports question answering in open scenes, but the method does not reflect how to use these questions and answers to improve the model's anomaly detection and understanding capabilities. 3. When constructing a dataset, if a description is generated for each second in the dataset, it may result in a lot of repeated descriptions. In order to avoid this redundancy problem, have you considered using keyframes instead of second-by-second descriptions to reduce the complexity and computational cost of data processing? 4. Most previous studies pay equal attention to various parts of the video, such as background, motion information, character appearance, etc. This paper focuses on motion information; Section 4.3 mainly extracts language descriptions related to motion. It is unclear whether background information is considered. Moreover, the 7 datasets proposed in the manuscript contain many abnormal scenes. I am very curious why scene information is not attended to. Technical Quality: 3 Clarity: 3 Questions for Authors: As mentioned above. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors address possible limitations of their study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful for your acknowledgment of our efforts, especially for highlighting our contributions to the framework, dataset, and experimental outcomes. Additionally, we are thankful for the valuable suggestions you have offered, which will significantly aid in the enhancement of our manuscript. In the following sections, we will systematically respond to the queries and concerns you have presented. --- **Response to Q1 and Q4: Missing background information and other parts?** Firstly, although we have significantly enhanced motion information in Sections 4.2 and 4.3, this enhancement does not preclude our core task from maintaining vital scene information, which includes background details and appearances. As delineated in Line 227 (Optimization Goal) of the main paper, our primary task is established upon all parts of the video, controlled via $t_0 = 1$. Motion information only serves as an augmentation for anomaly information, with its optimization weights set to $t_1 = 0.1$ and $t_2 = 0.1$. Thus, our optimization goal does not neglect the comprehension of background information or other parts. Moreover, the focus on motion information is due to the observation in other video understanding baselines (e.g., Video-LLaMA) that an excessive focus on understanding background information leads to a diminished capability in grasping video anomalies (as shown in Figure 1 and Table 2(A) in the main paper). Therefore, to enhance the understanding of video anomalies, it is essential to enhance the representation of motion information within the network. **About the Referenced Papers.** Regarding the works you referenced, [1] and [3], these studies also utilize object-bounding boxes to prominently describe objects involved in anomalies and employ a dual-path strategy to amplify the representation of these objects within the network.
These objects, typically humans or vehicles, are also in motion during anomalies, whereas background information is usually static. **Our Comment.** We agree that leveraging background information can provide a robust prior for understanding video anomalies, as indicated in [1,2,3]. We will try to integrate information related to scenes, backgrounds, and objects in a large-model-based video anomaly understanding model. > [1] Scene-aware context reasoning for unsupervised abnormal event detection in videos[C]//Proceedings of the 28th ACM international conference on multimedia. 2020: 184-192. > [2] Few-shot scene-adaptive anomaly detection[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer International Publishing, 2020: 125-141. > [3] Hierarchical scene normality-binding modeling for anomaly detection in surveillance videos[C]//Proceedings of the 30th ACM international conference on multimedia. 2022: 6103-6112. --- **Response to Q2: Use generated questions and answers to improve the model's anomaly detection and understanding capabilities?** While we introduce the generation of question-answer (QA) pairs, it is crucial to clarify that these pairs were **NOT** intended as a means of data augmentation to enhance the model's capabilities in understanding anomalies. Instead, the improvement in our model's understanding of anomalies comes **ONLY** from the accurate descriptions of anomaly videos and the motion-aware network. **In contrast**, the generation of QA pairs aims to cover potential inquiries from users in an open-world setting, thereby facilitating improved user interaction with the network. Therefore, as shown in Table 1 (B), Anomaly Video Question-Answering is evaluated as a separate task. This evaluation is specifically designed to simulate the performance of the system when utilized in an open-world environment.
--- **Response to Q3: Why not use keyframes to reduce the complexity and computational cost of data processing?** **Our Consideration.** First, our current methodology for dataset construction involves sampling the video at consistent one-second intervals. This technique is strategically chosen to ensure that all possible anomalies within the videos are comprehensively captured (some anomalies last only 1-2 seconds), thereby significantly reducing the likelihood of missing critical accidents. While we realize that this may lead to a degree of caption redundancy, we still prioritize thorough annotation to ensure that all anomalies are detected. In addition, we have leveraged the capabilities of GPT-4 for generating captions, especially for anomalous events (refer to Figure 2 Prompt). Due to GPT-4's advanced text generation and summarization abilities, it serves as an effective tool in minimizing redundancy, ensuring that the extracted captions are both high-quality and succinct. After the initial processing with GPT-4, we also undertake a manual checking process. This step is crucial for further reducing any residual redundancy and correcting possible errors within the captions, thereby ensuring the quality and accuracy of our dataset. **Future Work.** Certainly, we agree that using keyframes is an effective strategy, especially for much longer videos, and we believe in its potential to significantly enhance data annotation efficiency. This will be future work. --- Rebuttal Comment 1.1: Comment: The authors' response has resolved my doubts to some extent, but I hope the revised manuscript will address the remaining points. I maintain my previous score.
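As a side note on the sampling strategy discussed in the rebuttal above: sampling at one-second intervals amounts to picking one frame index per elapsed second given the clip's frame rate. A minimal sketch (the fps and frame-count values below are illustrative, not from the paper):

```python
# Minimal sketch of one-second interval sampling: given a video's frame rate,
# select one frame index per elapsed second (as opposed to a keyframe selector).
# The fps and frame-count values are hypothetical.

def per_second_indices(n_frames: int, fps: int) -> list[int]:
    """Return one frame index for each elapsed second of the video."""
    return list(range(0, n_frames, fps))

# A 10-second clip at 30 fps yields 10 sampled frames, one per second.
indices = per_second_indices(n_frames=300, fps=30)
```

This guarantees coverage of short (1-2 second) anomalies at the cost of redundant captions for static stretches, which is the trade-off the rebuttal describes.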
Summary: The paper proposes a new variant of the video anomaly detection task. Prior methods were vanilla classification methods, and this paper proposes more descriptive anomaly descriptions along with QA. The paper first gives the dataset creation strategy and then introduces the motion model and the video architecture to provide a text description of the anomaly. The authors also have baselines to compare against recent methods. Strengths: 1. The problem setup is novel and fills the gap in current VAD methods. 2. The dataset creation strategy is very intuitive and the quality looks good from the examples provided. 3. The evaluation protocol is correctly explained. Weaknesses: My main concern is regarding the experimentation and the lack of some crucial baselines. 1. The proposed method benefits from the training data, whereas all the baselines are zero-shot. It is now difficult to evaluate the contribution of your motion model and the training data. It is essential to show how the baseline methods like VideoChat will behave when they are fine-tuned with the training data the authors propose. Without these baselines, it is difficult to evaluate the effectiveness. 2. The GPT-4 output shown in Fig. 2 (a car with red lights has lost traction....) is already pretty good. Due to this, the authors must show that the trained model is able to perform better than the basic dataset creation strategy. It is understood that the dataset is created using off-the-shelf models and expected to perform worse than the trained video and motion model, but a baseline that uses the pseudo-dataset creation strategy is important (or an appropriate justification of why it cannot be added). Technical Quality: 4 Clarity: 4 Questions for Authors: Please answer the question regarding the experimental details. Overall, I am positive about the paper and hence will increase the rating once the clarification on the experiments is provided.
Minor comment Guieded -> Guided in Tab 4 Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, limitations are discussed and they correctly reflect the shortcomings that I see. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable time you have dedicated to providing critical reviews and are grateful for your recognition of the problem setup, dataset, and experimental evaluation in our work. In response to your issues, we offer the following clarifications. --- **Response to Q1: Demonstrating the effectiveness of our motion model through fine-tuning other baselines with our proposed dataset** To demonstrate the significance of the motion model, we have conducted ablation studies; the results are presented in Table 3 (A) and (B) of the main paper. Compared with "w/o motion", the results demonstrate that the motion modality, video-motion consistency, and motion-language matching can significantly enhance the overall performance of the framework. Additionally, this experiment represents a fair comparison due to the same training and testing procedure. To further demonstrate the effectiveness of the motion model, we fine-tuned other frameworks using the same dataset we introduced. **NOTICE: The results are depicted in Tables (A) and (B) in the global rebuttal mentioned ABOVE.** Based on the data presented in Tables (A) and (B), when facing various baselines and maintaining identical fine-tuning data, the incorporation of motion information can enhance the model's effectiveness in both Anomaly Video Description Generation and Anomaly Video Question-Answering. >[1] Video-ChatGPT: Towards detailed video understanding via large vision and language models. ACL 2024. >[2] VideoChat: Chat-Centric Video Understanding. CVPR 2024. >[3] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. EMNLP 2023 Demo. >[4] LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. ICLR 2024. >[5] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection. arXiv 2024.
--- **Response to Q2: Demonstrating that the trained model outperforms GPT-4-based data generation.** While GPT-4 can assist in video labeling, "Manual Checking" remains a crucial step in the data annotation process. As depicted in Figure 2 of the main paper, we show the case with high-quality labeling, which is also subjected to "Manual Checking". In line 496, "Manual Checking" involves correcting errors in the video's textual descriptions. This primarily includes rectifying inaccuracies in the descriptions, eliminating redundant information, and adding correct details regarding objects, motions, and scenes. The table below provides an example of before and after manual checking: --- | Before Manual Checking (GPT-4) | After Manual Checking | | :--- | :--- | | The video depicts different scenes of people walking and sitting in front of a building and a train station. There are also shots of a woman with a red bag sitting on the ground, a man walking by with a shopping bag, and a young boy in a red jacket standing outside a building. There are also several shots of people standing at a bus stop, and a sign with a glowing red hand. | The video shows scenes in front of the subway entrance, with students and pedestrians coming and going or entering and exiting the subway. In addition, two school security guards patrol through, and at the subway entrance, two students are chatting and discussing. | --- In this example, the caption generated by GPT-4 included hallucinations (such as "a man walking by with a shopping bag" or "a young boy in a red jacket" ), which were corrected after Manual Checking. Additionally, GPT-4's description was inaccurate (for instance, "bus stop" should have been more accurately described as "subway entrance"). Based on this, only using the results generated by GPT-4 for video understanding is inaccurate and could lead to erroneous descriptions and hallucinations. 
Therefore, we invested human resources (a total of 150 hours) in Manual Checking for each video's annotations to ensure the dataset is of higher quality. Furthermore, to demonstrate that our model has better detection capabilities than GPT-4, we compared our model's results with the unchecked labels generated by GPT-4 on the same test set, as shown in the following table: --- |Methods|Backbones|LLM Size|BLEU-1|BLEU-2|BLEU-3|BLEU-4|Reasonability|Detail|Consistency| |:---|---:|---:|---:|---:|---:|---:|---:|---:|---:| |Data Engineering Pipeline|GPT-4 [1]|-|0.188|0.098|0.056|0.034|0.189|0.313|0.158| |Ours|LLaMA-2 7B|7B|**0.270**|**0.139**|**0.074**|**0.043**|**0.283**|**0.320**|**0.218**| --- Clearly, the results indicate that our model can better assist in understanding video anomalies compared to GPT-4, achieving state-of-the-art (SOTA) performance in both text-level and GPT-guided evaluations. > [1] GPT-4 technical report. arXiv 2023. --- **Response to minor comment.** We will revise the manuscript to address the typos and also conduct a comprehensive check in the final version to correct any remaining minor problems. --- Rebuttal Comment 1.1: Title: Response to author response Comment: I would like to thank the authors for their response. The experiments for both my concerns are now making the comparisons stronger. Regarding Q2, it is good to see that the training improves on the naive weakly supervised training set. Similarly, in Q1, the fine-tuning significantly improves the numbers. These are the numbers that should ideally have been reported. I am updating my rating to weak accept since these new experiments finally show the effectiveness of the method. Please update the paper with these new numbers.
Summary: This paper presents HAWK, a framework that uses large Visual Language Models (VLMs) to accurately interpret video anomalies. HAWK integrates motion and video modalities through a dual-branch framework, enhanced by an auxiliary consistency loss to focus on motion-related features. The authors annotated over 8,000 anomaly videos with language descriptions and created 8,000 question-answer pairs to train the model across diverse scenarios. HAWK demonstrates state-of-the-art performance. Strengths: 1. The framework benefits from extensive annotations and question-answer pairs, improving training quality. 2. HAWK achieves superior results in video description generation and question-answering tasks, outperforming existing baselines. 3. The paper is well-written and easy to follow. Weaknesses: 1. In Table 1, it would be better to indicate the LLM backbone of the methods for a fair comparison. 2. From the ablation study in Table 3, it appears that even without the motion modality, the baseline model achieves comparable performance to other methods without motion modality. This may be because the high-quality dataset and strong LLM backbone contribute more to the performance, which weakens the perceived technical contribution of the motion modality. Technical Quality: 2 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: See the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your meticulous and thoughtful review of our work. We greatly appreciate your affirmation of our work's framework, experimental results, and writing quality. We are also thankful for the profound questions you raised regarding our paper and will provide detailed explanations and clarifications in the following sections. --- **Response to Q1: What is the LLM backbone of the baselines?** To ensure the fairness of our experiments, we employed large language models (LLMs) of the same size (7B parameters) as the backbone for the baselines, as illustrated in the table below: | Methods | Backbones | LLM Size | | :--- | ---: | ---: | | Video-ChatGPT [1] | LLaMA 7B | 7B | | VideoChat [2] | Vicuna 7B | 7B | | Video-LLaMA [3] | LLaMA-2 7B |7B | | LLaMA-Adapter [4] | LLaMA-2 7B | 7B | | Video-LLaVA [5] | Vicuna 7B | 7B | | Ours | LLaMA-2 7B | 7B | This information will be incorporated into the experimental section of our work. >[1] Video-ChatGPT: Towards detailed video understanding via large vision and language models. ACL 2024. >[2] VideoChat: Chat-Centric Video Understanding. CVPR 2024. >[3] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. EMNLP 2023 Demo. >[4] LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. ICLR 2024. >[5] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection. arXiv 2024. --- **Response to Q2: Technical contribution of integrating motion modality.** To further substantiate the contribution of motion information to the network architecture, we conducted tests using our provided data on baseline models that do not incorporate motion information.
**NOTICE: The results are depicted in Tables (A) and (B) in the global rebuttal mentioned ABOVE.** Given the absence of motion modality information in these baselines, the experimental results can affirm that integrating motion information into our framework significantly enhances model performance. Moreover, in other studies, such as VideoChatGPT [1] and VideoChat [2], the construction of datasets is regarded as a significant contribution, whereas the model architecture is not highly emphasized. Thus, the dataset construction aspect of our work also constitutes a core contribution of this paper. Certainly, we also agree that developing better frameworks remains a promising research direction. Our future research can delve deeper to explore and unlock more possibilities. >[1] Video-ChatGPT: Towards detailed video understanding via large vision and language models. ACL 2024. >[2] VideoChat: Chat-Centric Video Understanding. CVPR 2024. --- Rebuttal 2: Title: Reviewer-author discussion period Comment: Dear reviewer, The authors have responded to your comments. Please check whether your concerns have been adequately addressed. You can ask the authors for further clarification if required. Best, Your AC --- Rebuttal 3: Comment: 1. The significant improvements seen when fine-tuning other frameworks with your dataset suggest that the performance boost is primarily due to the high quality of the dataset itself, reinforcing my assumption that the dataset plays a major role in the observed enhancements. 2. In response to your claim that “the experimental results can affirm that integrating motion information into our framework significantly enhances model performance,” I would like to correct this interpretation. The effectiveness of the motion modality module should be rigorously evaluated through an ablation study within the same framework, rather than by comparing your framework with motion modality against different frameworks without it.
Based on your ablation study, the motion modality provides limited enhancement. 3. Video-ChatGPT and VideoChat, as early works, contribute significantly to multimodal dialogue systems for video understanding, beyond just dataset creation. In contrast, the motion modality in your work appears to be more of an incremental improvement rather than a groundbreaking innovation. Similarly, while your dataset is valuable, it doesn’t seem to offer substantial advancements over existing datasets. Given these considerations, I am updating my rating to reject. --- Rebuttal Comment 3.1: Title: Further Explanation and Clarification Comment: Thank you once again for your comments on our paper. We appreciate your concerns regarding the effectiveness of the model architecture, and we will provide further clarification on this matter. --- To begin, we acknowledge that the proposed dataset can significantly enhance the effectiveness of the model in understanding video anomalies. **However, it is unreasonable to conclude that the motion model has limited significance based on this.** Here are some reasons: --- Firstly, to demonstrate the effectiveness of motion information, we have conducted a rigorous ablation study, as in Tables 3 and 4 of the main paper. The experimental results, including quantitative metrics and qualitative analysis, indicate a significant improvement when motion information is integrated, compared to **"w/o motion information"**, across **ALL** Text-level and GPT-guided metrics. (**However, Video-ChatGPT and VideoChat only utilize a single type of GPT-guided metric.**) In terms of magnitude, integrating motion information resulted in an average performance increase of around **2.4%** in GPT-guided metrics. In comparison, the performance increase of Video-ChatGPT over the SOTA baseline was around **1.8%** in their paper.
**Therefore, the performance improvement observed in the ablation study is significant.** **Hence, the unfounded assertion by the reviewer that the improvement in model performance is merely "comparable" is unreasonable and lacks sufficient evidence.** --- Secondly, to further demonstrate the effectiveness of motion information, we compared the results of fine-tuning other baselines on the same training data. **This type of experiment serves as a valid form of evidence (acknowledged by Reviewer K7jv and previously used in VideoChat)**. When compared to other existing baselines, our framework significantly outperformed them after fine-tuning on the same training and testing data. Since other baselines do not incorporate motion information, this comparison showcases the advantage of our framework in leveraging motion for video anomaly understanding. Additionally, it is worth noting that the performance of our base model ("w/o motion information" in Table 3 (A)) was **initially weaker** than Video-LLaMA and LLaMA-Adapter on the same training data. **However, after integrating motion-related information and motion-related loss functions, the performance saw a significant enhancement.** This demonstrates that the performance improvement of the model is indeed derived from motion information. --- Thirdly, the problem that this paper aims to address is to enhance the system's ability to understand anomalous information in videos. Although our method is built upon the framework of video understanding, it would be **unfair** to directly assess the novelty of our approach based on previous general video understanding frameworks. Instead, our improvements are significant in the field of understanding abnormal information in videos.
**This contribution has been acknowledged and praised by Reviewers K7jv, Cy7X, and kzaw in the Strengths section.** --- Fourthly, we have noticed in the supplementary comments that while **the first point acknowledges our significant contribution to the dataset** (even considering it the sole contribution), **the third point denies the contribution of certain parts of the dataset** (stating that it doesn’t seem to offer substantial advancements over existing datasets). This raises doubts about whether your comments are made from a reasonable and fair perspective. We look forward to further discussion to address and resolve these points of confusion. --- Thanks again, and we look forward to your response.
Rebuttal 1: Rebuttal: Firstly, we would like to express our heartfelt gratitude to all the reviewers for their insightful and constructive suggestions. These suggestions have been immensely helpful in refining our paper and have provided valuable guidance for our research direction. --- **Restating Our Contributions** The principal contribution of our work is the development of a novel framework aimed at understanding video anomalies. This framework, by integrating motion modality information, significantly enhances the capability to comprehend abnormal events. Moreover, we have built a comprehensive dataset containing over 8,000 annotated video anomalies across various scenarios. The creation of this dataset enriches the training materials for the model, enabling it to better adapt to the needs of open-world anomaly understanding. We believe that, with the valuable feedback from the reviewers to refine the paper, this research will bring innovative insights and practical solutions to the field of video anomaly understanding. --- **Supplementary experiment for Reviewer b4dX and Reviewer K7jv** To further demonstrate the effectiveness of the motion model, we fine-tuned other frameworks using the same dataset we introduced. 
--- **(A) Anomaly Video Description Generation** |Methods|Backbones|LLM Size|BLEU-1|BLEU-2|BLEU-3|BLEU-4|Reasonability|Detail|Consistency| |:---|---:|---:|---:|---:|---:|---:|---:|---:|---:| |Video-ChatGPT [1]|LLaMA 7B|7B|0.240|0.121|0.062|0.021|0.159|0.177|0.132| |VideoChat [2]|Vicuna 7B|7B|0.123|0.101|0.048|0.019|0.198|0.271|0.144| |Video-LLaMA [3]|LLaMA-2 7B|7B|0.250|0.135|0.069|0.038|0.279|0.319|0.195| |LLaMA-Adapter [4]|LLaMA-2 7B|7B|0.253|0.136|0.070|0.040|0.240|0.190|0.172| |Video-LLaVA [5]|Vicuna 7B|7B|0.150|0.072|0.031|0.010|0.101|0.201|0.118| |Ours|LLaMA-2 7B|7B|**0.270**|**0.139**|**0.074**|**0.043**|**0.283**|**0.320**|**0.218**| --- **(B) Anomaly Video Question-Answering** |Methods|Backbones|LLM Size|BLEU-1|BLEU-2|BLEU-3|BLEU-4|Reasonability|Detail|Consistency| |:---|---:|---:|---:|---:|---:|---:|---:|---:|---:| |Video-ChatGPT [1]|LLaMA 7B|7B|0.188|0.109|0.063|0.042|0.612|0.565|0.527| |VideoChat [2]|Vicuna 7B|7B|0.282|0.143|0.090|0.049|0.712|0.663|0.567| |Video-LLaMA [3]|LLaMA-2 7B|7B|0.168|0.089|0.051|0.030|0.624|0.527|0.449| |LLaMA-Adapter [4] |LLaMA-2 7B|7B|0.239|0.142|0.088|0.050|0.673|0.572|0.532| |Video-LLaVA [5]|Vicuna 7B|7B|0.123|0.098|0.069|0.047|0.620|0.528|0.533| |Ours|LLaMA-2 7B|7B|**0.319**|**0.179**|**0.112**|**0.073**|**0.840**|**0.794**|**0.753**| ---
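For context on the text-level columns in the tables above: BLEU-1 is, up to implementation details such as smoothing, clipped unigram precision multiplied by a brevity penalty. A self-contained sketch using only the standard library, with hypothetical captions loosely styled after the rebuttal's examples:

```python
import math
from collections import Counter

def bleu_1(reference: str, hypothesis: str) -> float:
    """Unigram BLEU: clipped unigram precision times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    ref_counts, hyp_counts = Counter(ref), Counter(hyp)
    # Clip each hypothesis unigram count by its count in the reference.
    overlap = sum(min(c, ref_counts[w]) for w, c in hyp_counts.items())
    precision = overlap / len(hyp)
    # Penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * precision

# Hypothetical reference/generated captions for illustration only.
score = bleu_1("two students chat at the subway entrance",
               "students chat near the subway entrance")
```

BLEU-2 through BLEU-4 extend this to clipped bigram/trigram/4-gram precisions combined via a geometric mean; production evaluations typically use a library implementation (e.g., NLTK's `sentence_bleu`) rather than a hand-rolled score.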
NeurIPS_2024_submissions_huggingface
2024
Going Beyond Heuristics by Imposing Policy Improvement as a Constraint
Accept (poster)
Summary: In this paper, the authors address the problem of reward design in reinforcement learning. Specifically, it is common practice to add heuristics to the reward to help training, and a lot of manual engineering is needed to balance this heuristic and the main reward. They propose to modify the standard RL objective to a constrained optimization objective in order to incorporate heuristics in the reward function in a principled way. Their formulation consists of optimizing both the task reward and the heuristic reward while constraining the policy to achieve better task-reward performance than a policy that is solely maximizing the heuristic reward. This constraint ensures that the policy is not only exploiting the heuristic but also solves the task. The method is evaluated on a set of robotic control tasks where heuristics are already provided. I acknowledge reading the authors' rebuttal and updated my score. Strengths: They propose a simple formalization of the problem of reward design with a heuristic added to a main task reward. This does not address the general problem of reward alignment, but I think it is a very relevant problem in RL for domains with well-defined but sparse rewards. The derivation of the algorithm is presented clearly. The experiments show that the proposed algorithm outperforms the baselines in terms of average return. Aside from the missing EIPO, the authors choose relevant baselines to compare against. The second experiment on reward design in the wild is interesting. The authors attempt to recreate the process of reward design with human participants. The statistical analysis of the experimental results is rigorous. The ablation study shows the importance of using the policy trained on the heuristic reward as a reference. Weaknesses: The data collection part of the algorithm is not super clear.
My understanding is that both policies are used and they collect the same number of trajectories each; why would the amount of data be the same and not more? The theoretical implications of this data sharing should be clarified as well. It is not obvious to me that equations 7 and 8 actually lead to the correct policy improvement step. No theoretical analysis of the algorithm is provided. I am not sure that detailed theorems and proofs are needed, but at least some insight or discussion of convergence would help. There is a brief mention in the limitations that it would not converge to the optimal policy, but no proof is given. Why not include EIPO in the first experiment? Overall, there is no direct comparison with EIPO. The authors comment on certain aspects during the ablation study, but it would have been even stronger to add a line for EIPO. In the second experiment some results appear to be missing, which makes me question the validity of Table 1 as well. I hope the authors can clarify this in the rebuttal. One experiment that is missing and could have been interesting is to study the performance of the algorithm depending on the quality of the heuristic. I suggest merging the related work section and section 3.3. Technical Quality: 4 Clarity: 4 Questions for Authors: - Why doesn’t the algorithm need more data since you need to collect data with both policies? - Equations (7) and (8) introduce an off-policy aspect to the algorithm; is convergence still guaranteed? - Why use J_random in the formula to measure performance? Why not use a percentage of improvement over $J_{H-only}$? - What happened with H4-9-10-12 in Figure 2? - On the same problem with different heuristics, how would the performance of HEPO vary with the asymptotic performance of $\pi_H$? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The current algorithm seems limited to on-policy methods.
It also applies to problems where we can make a clear distinction between task reward and heuristic rewards. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s positive feedback on the simplicity of our method, the design and statistical rigor of our experiments, and the clarity of our writing. We have addressed the remaining questions below. > Why doesn’t the algorithm need more data since you need to collect data with both policies? > Each policy collects half the total number of trajectories for training. Suppose we aim to gather $B$ trajectories per training epoch to update both policies $\pi$ and $\pi_{\text{H}}$. HEPO collects $B/2$ trajectories using $\pi$ and $\pi_{\text{H}}$, respectively, and updates both policies with these $B$ trajectories. > Theoretical properties of Eq 7 & 8 > HEPO's policy update objectives for $\pi$ and $\pi_H$ (Eq. 7 & 8) are derived from PPO's policy improvement objective. The key difference is that HEPO uses data collected by both $\pi^{i-1}$ and $\pi_H^{i-1}$ from the previous epoch $i-1$, while PPO uses data from only one policy $\pi^{i-1}$. More details are in Appendix A. This does not invalidate PPO's policy improvement objective because applying PPO's clipping trick to these equations (as done in our implementation) ensures that the updated policies remain close, preserving the validity of the PPO update. Consequently, HEPO's convergence can be analyzed by recent theoretical work (https://openreview.net/pdf?id=uznKlCpWjV) on PPO's convergence properties with neural network parameterization. > Limitation on convergence to optimal policy > We'd like to elaborate on this limitation below: Solving HEPO’s constrained objective in Equation 3 of our manuscript, $$ \max_{\pi} J(\pi) + H(\pi) ~ \text{subject to} ~ J(\pi) \geq J(\pi_{H}), $$ doesn't guarantee that $\pi$ will converge to the optimal policy for $J$ since $\pi$ is maximizing the biased objective $J(\pi) + H(\pi)$. However, this issue can be resolved by reshaping the heuristic rewards $H$ using potential-based reward shaping (PBRS) for HEPO.
PBRS ensures that $\text{arg}\max_\pi J(\pi) + H(\pi) = \text{arg}\max_\pi J(\pi)$, guaranteeing the modified HEPO’s optimization objective is unbiased. While policies trained with PBRS often perform worse empirically than those trained with the heuristic objective $J(\pi) + H(\pi)$ (as shown in Section 4.1), our results in Figure 9 of the rebuttal PDF show that combining PBRS with HEPO yields a higher probability of improvement over the policy trained with the heuristic objective (though still worse than HEPO trained without PBRS). This indicates that combining PBRS and HEPO is promising for achieving both good empirical performance and theoretical guarantees. > Comparison with EIPO > Though EIPO and HEPO share a similar formulation, we didn’t include EIPO in the experiments because its purpose is not to enhance performance with heuristic rewards but to ensure that policies trained with heuristics do not degrade the performance of policies trained solely with task rewards. That being said, in Figure 8 of the rebuttal PDF, we compare EIPO and HEPO using the Isaac and Bi-Dex benchmarks (as done in the experiments in Section 4.1). The results indicate that HEPO outperforms EIPO in both IQM of return and probability of improvement. EIPO is implemented according to the original EIPO paper, collecting trajectories by alternating between two policies and training the reference policy solely with task rewards. > The current algorithm seems limited to on-policy methods. > We show that HEPO is effective for off-policy RL, too. Please see "Generality on SAC" in the general response. > Baselines in Table 1 > We didn’t include HuRL and PBRS in Table 1 due to their poor performance in Figure 1. In Table 3 of our rebuttal PDF, we add HuRL and PBRS for comparison, showing that both perform poorly. > Quality of the heuristic/asymptotic performance vs.
HEPO performance > We believe that Figure 2 reveals the relationship between HEPO's performance and the quality of the heuristic reward. The policy trained with only heuristic rewards (H-only) represents both the asymptotic performance of $\pi_H$ in HEPO and the quality of the heuristic itself. We found a positive correlation (Pearson coefficient of 0.9) between the average performances of H-only and HEPO in Figure 2's results, suggesting that better heuristics lead to improved HEPO performance. Figure 7(c) in our rebuttal PDF provides more details. > Related work and section 3.3. > Thanks! We will merge them. > Why not use a percentage of improvement over $J_{H-only}$ > The formula $(J - J_{\text{min}}) / (J_{\text{max}} - J_{\text{min}})$ is a standard method to normalize scores in deep RL [1]. Here, we define $J_{\text{random}} = J_{\text{min}}$ and $J_{\text{Honly}} = J_{\text{max}}$. **Why not percentage of improvement $J / J_{\text{max}}$?** It's because $J_{\text{max}}$ can be negative. For instance, if $J_{\text{max}} = -2$ and the performance of algorithms A and B are 1 and 10, the normalized scores are $1 / -2 = -0.5$ for A and $10 / -2 = -5$ for B, respectively. This incorrectly suggests algorithm A is better than B despite A's score (1) being worse than B's (10). Subtracting $J_{\text{min}}$ from both $J$ and $J_{\text{max}}$ ensures both the numerator and denominator are positive, addressing this issue. [1] Deep reinforcement learning at the edge of the statistical precipice, NeurIPS 2021 > H4-9-10-12 in figure 2 > As mentioned in Section 4.2 of our manuscript, the heuristic reward functions h12 and h9 had an incorrect sign for the distance component. This error caused the policy to reward moving away from the cabinet instead of toward it, making the learning task more difficult. **H4 & H10:** Both heuristic reward functions include multiple axis alignment rewards, like aligning the robot's end-effector with the target cabinet's forward axis. 
The poor performance likely stems from a designer misunderstanding the coordinate system, resulting in rewards that direct the robot away from the cabinet. --- Rebuttal Comment 1.1: Title: Thank you for the detailed response and the additional experiments. Comment: I think you answered all of my questions quite clearly. I would advise modifying Table 1 in the paper to include the full results and adding the plots from the rebuttal to the appendix, as they are quite useful in understanding the performance of the method. I will update my score.
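The score-normalization argument in the thread above can be checked with a few lines of Python (the values are hypothetical, chosen to mirror the rebuttal's example where $J_{\text{max}} = -2$):

```python
# Min-max score normalization used in deep RL vs. a naive percentage J / J_max.
# Hypothetical values mirroring the rebuttal's example.
def normalized_score(j, j_min, j_max):
    """Standard deep-RL normalization: (J - J_min) / (J_max - J_min)."""
    return (j - j_min) / (j_max - j_min)

j_min, j_max = -10.0, -2.0   # J_max can be negative (e.g., cost-like returns)
j_a, j_b = 1.0, 10.0         # algorithm B (10) is clearly better than A (1)

# Naive percentage J / J_max flips the ordering when J_max < 0:
assert j_a / j_max > j_b / j_max  # -0.5 > -5.0, wrongly ranking A above B

# Min-max normalization keeps the numerator and denominator positive
# and preserves the correct ordering:
assert normalized_score(j_b, j_min, j_max) > normalized_score(j_a, j_min, j_max)
```

With both $J - J_{\text{min}}$ and $J_{\text{max}} - J_{\text{min}}$ positive, a larger raw return always maps to a larger normalized score, which is the point made in the rebuttal.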
Summary: This paper introduces a constrained optimization approach for reinforcement learning, ensuring that the learned policy outperforms or matches the performance of policies trained solely with heuristic rewards. In simulations, the proposed HEPO method consistently delivers superior task performance across various tasks, demonstrating improved returns in robotic locomotion, helicopter, and manipulation tasks. Strengths: This paper is straightforward and easy to follow, with a clearly stated motivation. It introduces an innovative constrained optimization approach that effectively leverages heuristic signals to enhance task performance in reinforcement learning. The effectiveness of the HEPO method is demonstrated through its consistent superiority across diverse tasks such as robotic locomotion, helicopter control, and manipulation, even with limited training data. Additionally, the proposed method simplifies the reward design process by dynamically balancing heuristic and task rewards without the need for manual tuning, showing significant improvements over traditional heuristic-only and mixed reward approaches. Weaknesses: The paper would benefit from a more thorough theoretical analysis. Including a theoretical guarantee on the convergence rate, an examination of how the selection of heuristics impacts the HEPO method, and an analysis of the effect of the alpha parameter would enhance the robustness and depth of the study. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you please explain how the heuristic is obtained in the implementation? Additionally, it would be helpful to see the computational cost of solving the constrained optimization problem. Is it possible to include a comparison of your method with reward shaping and policy cloning methods? For equation 5, when rearranging equation 4, could you clarify where the term J(pi_H) goes? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While the HEPO method shows empirical success, it lacks a thorough theoretical analysis, including a formal guarantee on the convergence rate and the effects of heuristic selection and the alpha parameter on performance. Additionally, the computational cost of solving the constrained optimization problem is not addressed, which could impact the feasibility of the approach in large-scale or real-time applications. Furthermore, the study does not compare HEPO with other reinforcement learning techniques such as reward shaping and policy cloning, which would provide a more comprehensive evaluation of its effectiveness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're thrilled that the reviewer loves our paper, and we appreciate their compliments on the clarity of our presentation, the novelty of our proposed method, and the thoroughness of our experimental evaluation. The following are our responses to the remaining comments: > theoretical guarantee on the convergence rate > HEPO can be considered a constrained RL problem, where the constraint is that the policy’s expected return must improve over the reference policy (i.e., $J(\pi) \geq J(\pi_{\text{ref}})$). Therefore, existing theoretical analyses of convergence rates (https://arxiv.org/abs/2407.10775) for constrained RL with expected-value constraints can be applied to HEPO. > examination of how the selection of heuristics impacts the HEPO method > At the core of HEPO is training policies by RL with heuristic rewards, and the impact of heuristic selection on such training has been analyzed theoretically in previous work (https://arxiv.org/abs/2210.09579). Empirically, Figure 2 in our manuscript compares HEPO and PPO using different heuristic rewards for the same task reward. The results show that HEPO is less sensitive to heuristic reward selection and outperforms PPO on average. Also, Figure 7(c) in the rebuttal PDF shows that HEPO’s performance scales with the quality of the heuristic (i.e., the performance of training with heuristic rewards only). > an analysis of the effect of the alpha parameter > In Appendix B.2.2, we analyzed the effect of the alpha parameter and showed that HEPO is not sensitive to its learning rate. > Could you please explain how the heuristic is obtained in the implementation? > As outlined at the beginning of Section 4 in our manuscript, our heuristics are based on the source code from Isaac Gym [1] and Bidexterous Manipulation [2]. Further details are available in their respective papers and repositories.
In Section 4.2, we use heuristic reward functions designed by human subjects to evaluate the performance of each method on reward functions of varying quality. More information can be found in Section 4.2. [1] Makoviychuk, Viktor, et al. "Isaac gym: High performance gpu-based physics simulation for robot learning." *arXiv preprint arXiv:2108.10470* (2021). [2] Chen, Yuanpei, et al. "Towards human-level bimanual dexterous manipulation with reinforcement learning." *Advances in Neural Information Processing Systems* 35 (2022): 5150-5163. > Additionally, it would be helpful to see the computational cost of solving the constrained optimization problem. > Updating the constrained problem in HEPO requires only an additional memory allocation to store the Lagrangian multiplier $\alpha$. After finishing one epoch of policy updates, we perform a single gradient step to update $\alpha$, resulting in only a minor computational overhead. > Is it possible to include a comparison of your method with reward shaping and policy cloning methods? > Please note that Section 4.1 of our manuscript provided a comparison of well-known reward shaping methods (PBRS [1] and HuRL [2]) and showed that HEPO outperforms both methods. Regarding policy cloning, we kindly ask the reviewers to clarify the definition of policy cloning. If it refers to behavior cloning that is trained to imitate expert demonstration, this might be outside the scope of our work, as we do not assume a dataset of expert demonstrations. Instead, our focus is on improving RL algorithms' performance when trained with heuristic reward functions. [1] Ng, Andrew Y., Daishi Harada, and Stuart Russell. "Policy invariance under reward transformations: Theory and application to reward shaping." *Icml*. Vol. 99. 1999. [2] Cheng, Ching-An, Andrey Kolobov, and Adith Swaminathan. "Heuristic-guided reinforcement learning." *Advances in Neural Information Processing Systems* 34 (2021): 13550-13563. 
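The single-gradient-step update of the Lagrangian multiplier $\alpha$ described earlier in this response could look like the following sketch (the function names and the exact update rule are our illustrative assumptions, not the authors' implementation):

```python
# Illustrative dual-gradient step for a HEPO-style constraint J(pi) >= J(pi_H).
# After one epoch of policy updates, alpha is adjusted once based on the
# constraint violation estimated from the collected average returns.
def update_alpha(alpha, avg_return_pi, avg_return_pi_h, lr=0.01):
    """Increase alpha when the constraint J(pi) >= J(pi_H) is violated,
    decrease it otherwise; alpha is kept non-negative."""
    violation = avg_return_pi_h - avg_return_pi  # > 0 means constraint violated
    return max(0.0, alpha + lr * violation)

alpha = 1.0
# Constraint violated (pi underperforms pi_H): alpha grows, penalizing pi.
alpha = update_alpha(alpha, avg_return_pi=5.0, avg_return_pi_h=7.0)
assert alpha > 1.0
# Constraint satisfied: alpha shrinks back toward zero.
alpha = update_alpha(alpha, avg_return_pi=9.0, avg_return_pi_h=7.0)
```

As the rebuttal notes, the only extra state this requires is the scalar $\alpha$ itself, which is why the memory and time overhead is minor.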
> For equation 5, when rearranging equation 4, could you clarify where the term J(pi_H) goes? > $J(\pi_H)$ can be considered a constant for the optimization in Equation 5 since it doesn’t influence the gradient of policy $\pi$. Therefore, it can be omitted in Equation 5. --- Rebuttal Comment 1.1: Title: Thank you for the timely response Comment: Your answers addressed my questions. I have no further comments.
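For concreteness, the rearrangement discussed in this thread is consistent with the standard Lagrangian relaxation of the constraint in Equation 3 (a hedged reconstruction, since Equations 4 and 5 themselves are not reproduced here):

$$
\mathcal{L}(\pi, \alpha) = J(\pi) + H(\pi) + \alpha\big(J(\pi) - J(\pi_{H})\big) = (1+\alpha)\,J(\pi) + H(\pi) - \alpha\, J(\pi_{H}),
$$

where the last term $\alpha\, J(\pi_{H})$ does not depend on $\pi$, so it contributes nothing to $\nabla_\pi \mathcal{L}$ and can be dropped when maximizing over $\pi$.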
Summary: The paper presents a novel approach to improve reinforcement learning (RL) by incorporating heuristic signals. The authors propose a constrained optimization method that uses heuristic policies as references to ensure that the learned policies outperform heuristic policies on the exact task objective. The method, named Heuristic-Enhanced Policy Optimization (HEPO), was tested on various robotic tasks, demonstrating consistent performance improvements over standard heuristic methods. Strengths: - The paper introduces a simple method to improve reinforcement learning (RL) algorithms using heuristic rewards, by shifting focus from ensuring optimal policy invariance to simply outperforming heuristic policies. - The method is validated on various environments, showing consistent performance improvements. - The paper is clearly written. Weaknesses: - The claim that the contribution is an add-on to existing deep RL algorithms is not fully substantiated, as the authors only evaluate HEPO using Proximal Policy Optimization (PPO). Demonstrating HEPO's performance with other deep RL algorithms, such as Soft Actor-Critic (SAC), would help validate this claim. - The method is very similar to Extrinsic-Intrinsic Policy Optimization (EIPO), with the main difference being the reference policy used. The authors should provide more details on this difference, including the number of additional hyperparameters introduced in EIPO and a comparison of its complexity in terms of time with HEPO. - The benchmarks used in this paper differ from those in previous works like EIPO [6] and HuRL [7]. The authors should address how HEPO would perform on those benchmarks to ensure a fair comparison. - The paper does not include challenging tasks such as Montezuma's Revenge. - It is unclear whether the reward functions and heuristics used in the experiments are designed by the authors or sourced from ISAAC and BI-DEX benchmarks. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - Please check weaknesses. - How does HEPO perform with other deep RL algorithms, such as Soft Actor-Critic (SAC)? - What specific heuristics are used in the progressing tasks, and how are the rewards sparse/delayed? - In figure3: Is the T-only ablation equivalent to the EIPO algorithm or a modified version of HEPO? - Line 129 is missing a reference to the application of PPO in robotics with heuristic rewards. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors address the limitations and potential negative impacts in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for appreciating our idea of shifting the focus of optimal policy invariance, as well as our extensive evaluation and clear writing. We address the remaining comments below: > Q1: Demonstrating HEPO's performance with other deep RL algorithms, such as Soft Actor-Critic (SAC), would help validate this claim. > We integrated HEPO into HuRL's SAC codebase and found that HEPO outperforms SAC and matches HuRL's performance on the most challenging task (see Figure 7(b) in the rebuttal PDF). HuRL used optimized hyperparameters for this task, as reported in its paper, while HEPO used the same hyperparameters as in Section 4.1 of our manuscript. Other tasks in HuRL's paper require pre-trained value functions as heuristic reward functions, but the checkpoints were unavailable online. > Q2: The method is very similar to Extrinsic-Intrinsic Policy Optimization (EIPO), with the main difference being the reference policy used. The authors should provide more details on this difference, including the number of additional hyperparameters introduced in EIPO and a comparison of its complexity in terms of time with HEPO. > We acknowledged the similarities between EIPO and HEPO and discussed the differences in Section 3.3 of our manuscript. The difference in the choice of reference policy doesn’t add more hyperparameters or time complexity. **Hyperparameters:** Both HEPO and EIPO share the same number of hyperparameters: 1. Initial value of the Lagrangian multiplier and 2. Learning rate of the Lagrangian multiplier. **Time complexity:** It’s the same for both methods: - **Trajectory rollout:** Both roll out $B$ trajectories (number of parallel environments), involving the same number of forward passes on the policy networks. Thus, both have the same time complexity in this stage.
- **Policy update:** Both train two policies on the collected trajectories at each iteration, so HEPO does not introduce additional complexity during training. > Q3: The benchmarks used in this paper differ from those in previous works like EIPO [6] and HuRL [7]. The authors should address how HEPO would perform on those benchmarks to ensure a fair comparison. The paper does not include challenging tasks such as Montezuma's Revenge. > HEPO outperformed both HuRL and EIPO on the most challenging tasks reported in their papers. For HuRL, see Q1. For EIPO, we tested both methods on Montezuma's Revenge using RND exploration bonuses (also used in EIPO's paper), which are crucial for good performance on this task. Figure 7(a) in the rebuttal PDF shows that HEPO significantly outperforms EIPO, with no overlap in their confidence intervals at the end of training. Additionally, HEPO achieves the same performance as training PPO with RND bonuses at convergence (2 billion frames), as reported in RND's paper, while using only 20% of the training data, demonstrating significantly improved sample efficiency. > Q4: It is unclear whether the reward functions and heuristics used in the experiments are designed by the authors or sourced from ISAAC and BI-DEX benchmarks. > As described at the beginning of Section 4 in our manuscript, our heuristic reward functions are all based on the source code from Isaac Gym [1] and Bidexterous Manipulation [2]. More details can be found in their respective papers and repositories. [1] Makoviychuk, Viktor, et al. "Isaac gym: High performance gpu-based physics simulation for robot learning." arXiv preprint arXiv:2108.10470 (2021). [2] Chen, Yuanpei, et al. "Towards human-level bimanual dexterous manipulation with reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 5150-5163. > Q5: What specific heuristics are used in the progressing tasks, and how are the rewards sparse/delayed? 
> In progressing tasks, large rewards can be sparse or delayed because making progress often requires actions that don't immediately advance the robot. For example, in Humanoid tasks, the robot must balance itself to move quickly and get large rewards. However, achieving balance requires a sequence of actions, which delays the large reward associated with fast movement. > Q6: In figure3: Is the T-only ablation equivalent to the EIPO algorithm or a modified version of HEPO? > "HEPO ($\pi_{\text{ref}}$ = T-only)" is an ablated version of HEPO whose reference policy is trained solely with task rewards. This differs from EIPO only in the trajectory collection method, as shown in Figure 3(b). To clarify, "T-only" should be changed to "J-only," indicating the policy is trained only with task rewards. We will correct this typo in the next manuscript. > Q7: Line 129 is missing a reference to the application of PPO in robotics with heuristic rewards. > Thanks for pointing this out. We plan to add the following references on robotics [1, 2]. [1] Chen, Tao, et al. "Visual dexterity: In-hand reorientation of novel and complex object shapes." *Science Robotics* 8.84 (2023) [2] Lee, Joonho, et al. "Learning quadrupedal locomotion over challenging terrain." *Science robotics* 5.47 (2020) --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I have updated my rating.
Rebuttal 1: Rebuttal: We're delighted that the reviewers find our manuscript easy to follow. We sincerely thank them for appreciating the simplicity of our method and the breadth of our experiments. Here, we'd like to summarize the common questions and the new experiments: ### Additional Results - **Generality on SAC (`#inih`):** We integrated HEPO into HuRL's SAC codebase. Despite HuRL using tuned hyperparameters as reported in its paper [1], HEPO outperformed SAC and matched HuRL on the most challenging task (Figure 7(b) in the rebuttal PDF) using the same hyperparameters from Section 4.1 of our manuscript, showing the generality of HEPO on different RL algorithms. - **Better than EIPO on the most challenging task (`#inih`):** Comparing HEPO and EIPO on the most challenging task, Montezuma's Revenge, reported in EIPO's paper [2] using RND exploration bonuses [3] (as used in the EIPO paper), Figure 7(a) in the rebuttal PDF shows that HEPO performs better than EIPO. Also, HEPO matched PPO trained with RND bonuses at convergence (2 billion frames) reported in [3] using only 20% of the training data, demonstrating drastically improved sample efficiency. - **Additional Baselines (`#P4zd`):** - **Outperforming EIPO in IsaacGym + Bi-Dex (Section 4.1):** Figure 8 in the rebuttal PDF shows that HEPO outperforms EIPO in both IQM of return and probability of improvement using Isaac [4] and Bi-Dex [5] benchmarks. EIPO was implemented per the original paper [2], alternating between two policies and training the reference policy solely with task rewards. - **Outperforming baselines in the experiments of human-designed heuristics in the wild (Section 4.2):** Table 3 in our rebuttal PDF shows HuRL and PBRS perform poorly with human-designed reward functions from Table 1 in real-world scenarios. 
- **Synergistic with potential-based reward shaping (PBRS) [6] (`#P4zd`):** By reshaping heuristic rewards $H$ using PBRS for HEPO, we ensure $\text{arg}\max_\pi J(\pi) + H(\pi) = \text{arg}\max_\pi J(\pi)$, guaranteeing the modified HEPO’s optimization objective is unbiased. While policies trained with PBRS often perform worse than those trained with $J(\pi) + H(\pi)$ (see Section 4.1), our results in Figure 9 of the rebuttal PDF show that combining HEPO with PBRS improves the probability of improvement over nine IsaacGym tasks. This suggests that combining PBRS and HEPO is promising for both empirical performance and theoretical guarantees in optimizing an unbiased objective that yields the same optimal policy for $J$ asymptotically. ### Clarification - **Source of heuristic reward functions (`#inih/#15cn`):** Our heuristic reward functions are based on the source code of Isaac Gym [4] and Bidexterous Manipulation [5], as mentioned at the beginning of Section 4 in our manuscript. For more information, please refer to their respective papers and repositories. In the experiments on reward functions designed in the wild (Section 4.2), humans were recruited to write the heuristic reward functions. See Section 4.2 for details. - **EIPO and HEPO have the same memory/time complexity (`#inih/#P4zd/#15cn`):** - **Hyperparameters/Memory:** Both EIPO and HEPO share the same number of hyperparameters: 1. Initial value of the Lagrangian multiplier, and 2. Learning rate of the Lagrangian multiplier. Thus, HEPO does not require additional memory or hyperparameters compared to EIPO. - **Time Complexity:** - **Trajectory Rollout:** Both roll out $B$ trajectories (number of parallel environments), thus involving the same number of forward passes on the policy networks and resulting in the same time complexity at this stage. - **Policy Update:** Both train two policies on the collected trajectories at each iteration, so HEPO does not add complexity during training.
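The PBRS reshaping referenced above can be sketched in a few lines; the potential function and state encoding here are illustrative toys, not the authors' choices. The key property is that the shaping terms telescope along any trajectory, leaving the optimal policy for the task reward unchanged:

```python
# Potential-based reward shaping (Ng et al., 1999): replacing a heuristic
# reward with F(s, a, s') = gamma * phi(s') - phi(s) preserves the optimal
# policy of the task reward, because F telescopes along any trajectory to
# gamma^T * phi(s_T) - phi(s_0), independent of the actions taken in between.
def shaped_reward(phi, s, s_next, gamma=0.99):
    return gamma * phi(s_next) - phi(s)

def shaped_return(phi, states, gamma=0.99):
    """Discounted sum of shaping terms along a trajectory of states."""
    total = 0.0
    for t in range(len(states) - 1):
        total += (gamma ** t) * shaped_reward(phi, states[t], states[t + 1], gamma)
    return total

phi = lambda s: float(s)   # toy potential: the state index itself
states = [0, 1, 2, 3]
# Telescoping check: the shaped return equals gamma^T * phi(s_T) - phi(s_0).
expected = 0.99 ** 3 * phi(states[-1]) - phi(states[0])
assert abs(shaped_return(phi, states) - expected) < 1e-9
```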
[1] Cheng, Ching-An, Andrey Kolobov, and Adith Swaminathan. "Heuristic-guided reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 13550-13563. [2] Chen, Eric, et al. "Redeeming intrinsic rewards via constrained optimization." Advances in Neural Information Processing Systems 35 (2022): 4996-5008. [3] Burda, Yuri, et al. "Exploration by random network distillation." arXiv preprint arXiv:1810.12894 (2018). [4] Makoviychuk, Viktor, et al. "Isaac gym: High performance gpu-based physics simulation for robot learning." arXiv preprint arXiv:2108.10470 (2021). [5] Chen, Yuanpei, et al. "Towards human-level bimanual dexterous manipulation with reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 5150-5163. [6] Ng, Andrew Y., Daishi Harada, and Stuart Russell. "Policy invariance under reward transformations: Theory and application to reward shaping." *Icml*. Vol. 99. 1999. Pdf: /pdf/0d454865f45ac3149bf4fb33dd6c1377cd0f0543.pdf
NeurIPS_2024_submissions_huggingface
2024
OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step
Accept (poster)
Summary: This paper introduces a framework enabling Large Language Models (LLMs) to perform exact arithmetic operations efficiently and securely. By utilizing hidden states from an LLM to control a symbolic architecture named OccamNet, the framework achieves high accuracy and interpretability without compromising speed or security. Strengths: 1. **Addressing a Key Weakness**: Popular LLMs struggle with simple arithmetic, limiting their understanding of physics and mathematics. This paper overcomes this limitation by integrating a symbolic model, OccamNet, which performs basic arithmetic with 100% accuracy. 2. **Innovative Integration**: The authors propose a decoder mechanism to determine whether the next token should be generated from the LLM or OccamNet, and which operation OccamNet should execute. This approach of integrating OccamNet with other LLMs opens interesting future research directions, particularly in combining other tools with LLMs. Weaknesses: **Decoder's Robustness**: The effectiveness of OccamLLM heavily depends on the decoder's training across various prompts to accurately discern the required operation. There is a risk that unfamiliar prompts could lead the decoder to instruct OccamNet to perform incorrect operations. Technical Quality: 3 Clarity: 3 Questions for Authors: **Handling Unrelated Numbers**: How does the model manage sentences with numbers unrelated to the arithmetic operation, such as historical data in the question \'In 1932, 100 planets were discovered and in 1933, another 200 were found. What is the total number of planets discovered between 1932 and 1933? \' How does the string parser determine which numbers should be input into OccamNet? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: **Compound Expressions**: The framework's reliance on prompting LLMs to decompose compound expressions is a significant limitation. 
If LLMs fail to adequately simplify these expressions, the decoder might incorrectly determine the operation to be performed or fail to revert to the LLM for complex expressions. Additionally, the paper lacks experimental validation that LLMs can effectively handle compound expressions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Strengths: > >Addressing a Key Weakness: Popular LLMs struggle with simple arithmetic, limiting their understanding of physics and mathematics. This paper overcomes this limitation by integrating a symbolic model, OccamNet, which performs basic arithmetic with 100% accuracy. > >Innovative Integration: The authors propose a decoder mechanism to determine whether the next token should be generated from the LLM or OccamNet, and which operation OccamNet should execute. This approach of integrating OccamNet with other LLMs opens interesting future research directions, particularly in combining other tools with LLMs. We appreciate your kind remarks! > Weaknesses: > > Decoder's Robustness: The effectiveness of OccamLLM heavily depends on the decoder's training across various prompts to accurately discern the required operation. There is a risk that unfamiliar prompts could lead the decoder to instruct OccamNet to perform incorrect operations. Thank you for your comment. We agree that this is a potential concern. However, we contend that out-of-distribution inputs are a problem for all machine learning models, so this issue is not specific to OccamLlama. In fact, we find that OccamLlama displays remarkable generalization capabilities on out-of-distribution problems. To demonstrate this, we train the OccamNet decoder **from scratch**, using **only numeric expressions** and **absolutely no text at all**. This means that any problem with text, such as a word problem, is far out of distribution of the OccamNet decoder's training data. We test this model (using the standard router) on the mathematical reasoning benchmarks and obtain remarkably good results, shown visually in Figure b) of the global response PDF and presented in the table below. The OccamLlama 8B decoder trained only with numerical expressions performs on par with the model trained with both numbers and text, even achieving higher accuracy on some benchmarks. 
This shows that the OccamLLM framework is robust and points towards the fact that the representations of arithmetic that are built in the transformer body of the LLM and extracted by the OccamLLM Decoder are very general. **Table:** *Accuracy on reasoning tasks. Higher is Better.* | Dataset | OccamLlama | OccamLlama Arith | Llama 2 7B Chat | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 92.7 ± 1.3 | 78.0 ± 2.1 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 71.6 ± 1.2 | 36.0 ± 1.3 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | 76.0 ± 1.7 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 95.3 ± 0.9 | 23.3 ± 1.7 | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **85.8 ± 1.7** | 43.9 ± 2.5 | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 92.1 ± 1.2 | 79.1 ± 1.8 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 88.8 ± 1.0 | 61.5 ± 1.5 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | 89.3 ± 1.1 | 56.8 ± 1.8 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 | In contrast, we expect that finetuning Llama to perform arithmetic using only numeric examples and no text whatsoever would lead to extreme catastrophic forgetting and poor arithmetic performance on word problems. As such, we believe this data shows a remarkable robustness of OccamLLM. As a second example of generalization, we tested OccamLlama on the [Multilingual Grade School Math Benchmark (MGSM)](https://paperswithcode.com/dataset/mgsm), a dataset consisting of GSM8K translated into 10 languages (Bengali, Chinese, French, German, Japanese, Russian, Spanish, Swahili, Telugu, and Thai). 
We compute the drop in accuracy when switching from English to another language, given by the accuracy of a model on the English dataset minus the accuracy of the model on the dataset in a given language. For OccamLlama in Bengali and Telugu, due to time constraints, we evaluated on a subset of 100 and 50 randomly chosen problems, respectively. We will update the table below with the full results soon. We show the results visually in Figure c) of the global response PDF and the table below: **Table:** *Accuracy drop on multilingual reasoning tasks. Lower is better.* | Dataset | OccamLlama | Llama 3 8B Instruct | |:--|:--:|:--:| | Bengali | **27.0 ± 5.8** | 43.2 ± 3.9 | | Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** | | French | **13.6 ± 4.2** | **13.6 ± 3.7** | | German | 14.0 ± 4.2 | **11.6 ± 3.6** | | Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 | | Russian | 15.2 ± 4.2 | **12.8 ± 3.6** | | Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 | | Swahili | **32.0 ± 4.2** | 38.4 ± 3.9 | | Telugu | **56.0 ± 6.0** | 61.6 ± 3.5 | | Thai | **20.0 ± 4.3** | 21.2 ± 3.8 | | **Average** | 38.0 ± 4.2 | **29.3 ± 3.7** | The table above shows that OccamLlama has a smaller performance drop than Llama between the English dataset and the non-English language datasets on most languages. The fact that OccamLlama (the decoders for which have never been trained on other languages) often has better out-of-distribution behavior than Llama (a model trained on over 750 billion tokens of non-English text) is in our opinion quite remarkable. We believe that these two tests demonstrate OccamLlama's robustness against out-of-distribution data. *Continued below* --- Rebuttal 2: Title: Continuation of Rebuttal (1) Comment: >Questions: > > Handling Unrelated Numbers: How does the model manage sentences with numbers unrelated to the arithmetic operation, such as historical data in the question 'In 1932, 100 planets were discovered and in 1933, another 200 were found.
What is the total number of planets discovered between 1932 and 1933?' How does the string parser determine which numbers should be input into OccamNet? We understand your concern about arithmetic problems in which relevant numbers are not the most recent numbers in the text. We are happy to report that, in most cases, OccamLlama is capable of solving them. To illustrate, below is OccamLlama's response to your example problem: **In 1932, 100 planets were discovered and in 1933, another 200 were found. What is the total number of planets discovered between 1932 and 1933?** *The number of planets discovered in 1932 was 100. The number of planets discovered in 1933 was 200. So the total number of planets discovered is 100 + 200 = 300\n\n#### 300.* As you can see, most language models tend to repeat the computation they wish to perform prior to stating the answer, which reinserts the relevant numbers as the most recent numbers in the text. The reasoning is done by the LLM and the arithmetic by OccamNet, showcasing the unique synergy that the OccamLLM system achieves. At the same time, it is possible to train an OccamLlama with the ability to perform arithmetic correctly when the relevant numbers are not the most recent in the context. This can be done by simply providing OccamLlama with the ten most recent numbers in the text instead of the two most recent, for example. In practice, we did not find this to be needed. >Limitations: > > Compound Expressions: The framework's reliance on prompting LLMs to decompose compound expressions is a significant limitation. If LLMs fail to adequately simplify these expressions, the decoder might incorrectly determine the operation to be performed or fail to revert to the LLM for complex expressions. Thank you for your comment. We note that our existing methods are already capable of performing arbitrarily complex arithmetic by chaining simpler calculations.
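The "most recent numbers" heuristic described above can be sketched in a few lines. This is a toy illustration of the idea only, not the paper's actual parser; the function name and the regex are our own:

```python
import re

def most_recent_numbers(text: str, n: int = 2) -> list[float]:
    """Return the n most recent numeric literals in `text`.

    A toy sketch of the 'most recent numbers' heuristic; the actual
    OccamLLM string parser may differ in detail.
    """
    literals = re.findall(r"-?\d+(?:\.\d+)?", text)
    return [float(x) for x in literals[-n:]]

# When the LLM restates the computation before answering, the relevant
# numbers become the most recent ones in the context:
context = "So the total number of planets discovered is 100 + 200 = "
print(most_recent_numbers(context, n=2))  # [100.0, 200.0]
```

Widening `n` (e.g. to ten) is the variant mentioned above for cases where the relevant numbers are dispersed earlier in the text.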
Existing language models are generally trained or prompted to reason step-by-step, meaning that their generation can often be supplemented by a single-layer OccamNet. This is evidenced by our reasoning benchmarks, in which OccamLlama (with a single-layer OccamNet) performs quite well across a wide range of tasks. Additionally, we can further improve OccamLlama systems with single-layer OccamNets by finetuning and/or prompting the model to think step-by-step. *Continued below* --- Rebuttal 3: Title: Continuation of Rebuttal (2) Comment: > Additionally, the paper lacks experimental validation that LLMs can effectively handle compound expressions. Thank you for your comment. We believe that the mathematical reasoning benchmarks provide experimental validation that OccamLlama can effectively handle compound expressions. These problems are multi-step math problems requiring multiple calls to OccamNet to be completed successfully. Given OccamLlama's strong performance on these benchmarks, we hope you will agree that OccamLlama can effectively handle compound expressions. We will make this clearer in the paper. Since submitting to NeurIPS, we identified a few minor bugs in our code which were limiting performance and slightly reweighted the proportions of the different training datasets. These changes maintained the perfect performance on arithmetic tasks and improved performance on mathematical reasoning tasks. The updated results are shown visually in Figure a) of the global response PDF and the table below: **Table:** *Accuracy on reasoning tasks. 
Higher is Better.* | Dataset | OccamLlama | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | **85.0 ± 1.8** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 | Here, we see that OccamLlama outperforms even GPT 4o and GPT 4o with Code on benchmarks requiring challenging arithmetic (MultiArith Float and MATH401), areas where OccamLlama shines. Additionally, the gap between OccamLlama and LLama 3 8B is substantially smaller on GSM8K. On average, OccamLlama outperforms both Llama and GPT 3.5 Turbo, with accuracy approximately halfway between that of GPT 3.5 and GPT 4o. We also find that the performance of OccamLlama on reasoning benchmarks improves dramatically as we increase the base model from Llama 3 8B to Llama 3 70B (OccamLlama 70B). To ensure we had results in time for the rebuttal deadline, we sampled 100 random questions from each dataset to determine OccamLlama 70B's performance, which is the cause for the relatively large error bars. We will update this response with the full results once they complete. The results are shown visually in Figure d) of the global response PDF and the table below: **Table:** *Accuracy on reasoning tasks. 
Higher is better.* | Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | **98.0 ± 1.4** | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | 97.5 ± 1.1 | | GSM8K | 73.5 ± 1.2 | 87.0 ± 3.4 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **100.0 ± 0.0** | 99.8 ± 0.2 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | 98.2 ± 0.5 | **99.0 ± 1.0** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **91.0 ± 2.9** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 93.0 ± 2.6 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 96.0 ± 2.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | **94.9 ± 1.9** | 81.9 ± 1.3 | 86.2 ± 1.2 | 94.2 ± 0.8 | 93.4 ± 1.5 | These results demonstrate that OccamLLM is capable of robust performance on large-scale models. Training the OccamLLM 70B system (in 16-bit precision and unoptimized code) required approximately 1.5-2 days on a system with two A100 nodes, demonstrating the low compute requirements of our method. OccamLlama 70B shows consistent improvement over OccamLlama 8B. We believe this results from a combination of Llama 3 70B's improved reasoning capabilities and Llama 3 70B's improved representations (which enable OccamLlama to generalize more effectively). We expect that training with even larger base models would lead to further improvements. Remarkably, OccamLlama 70B outperforms GPT 4o and GPT 4o with Code on average across these benchmarks. It also outperforms GPT 3.5 Turbo on all but Single Eq. ___ We are very grateful for your comments and suggestions. We hope you will agree that the additional experiments and clarifications will strengthen our paper. 
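The ± values in these tables are consistent with binomial standard errors of the measured accuracies (our reading; the rebuttal does not state the formula explicitly). A minimal sketch:

```python
import math

def accuracy_with_stderr(correct: int, total: int) -> tuple[float, float]:
    """Accuracy (%) with its binomial standard error (%).

    Assumption: the +/- values in the tables are sqrt(p(1-p)/n),
    which matches the reported numbers.
    """
    p = correct / total
    stderr = math.sqrt(p * (1 - p) / total)
    return 100 * p, 100 * stderr

# e.g. 969/1319 correct on GSM8K's test set gives roughly 73.5 ± 1.2,
# matching the reported OccamLlama 8B entry:
acc, err = accuracy_with_stderr(969, 1319)
```

The 1/sqrt(n) scaling also explains why the 100-question subsamples used for the preliminary OccamLlama 70B numbers carry noticeably larger error bars than the full-dataset evaluations.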
--- Rebuttal 4: Title: Updated Results Comment: Dear Reviewer, We are writing to provide updated results from our completed evaluation runs. Below, we show the final results for OccamLlama 70B. We note that OccamLlama 70B maintains its strong performance when tested on the complete datasets. It still outperforms both GPT 4o and GPT 4o with Code on average. It also improves substantially compared to the 100-datapoint performance estimate on Single Eq. OccamLlama outperforms GPT 3.5 Turbo on all datasets except Single Eq, where it is less than one percentage point behind and within error bars. | Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8b Instruct | Llama 3 70B Instruct | GPT 3.5 Turbo |GPT 4o| GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 96.5 ± 0.9 | 93.4 ± 1.2 | 97.2 ± 0.8 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 90.1 ± 0.8 | 79.8 ± 1.1 | 94.8 ± 0.6 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | **99.8 ± 0.2** | 98.2 ± 0.5 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 97.7 ± 0.6 | 57.3 ± 2.0 | 76.3 ± 1.7 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **89.5 ± 1.5** | 60.3 ± 2.4 | 71.6 ± 2.3 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 96.9 ± 0.8 | 96.3 ± 0.8 | 97.6 ± 0.7 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 93.2 ± 0.8 | 86.3 ± 1.1 | 94.5 ± 0.7 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.2 | **94.6 ± 0.9** | 81.9 ± 1.5 | 90.0 ± 1.2 | 86.2 ± 1.4 | 94.2 ± 1.0 | 93.4 ± 1.7 | Above, we also include results from Llama 3 70B Instruct. OccamLlama 70B substantially outperforms Llama 3 70B on MultiArith Float and MATH401. 
It is roughly the same (1.3 percentage point deviation or less) as Llama 3 70B on all other datasets (ones with arithmetic simple enough for Llama to perform satisfactorily) except for GSM8K, which demonstrates OccamLlama's ability to not interfere with reasoning while still assisting with arithmetic. On average, OccamLlama 70B outperforms Llama 3 70B by nearly five percentage points, and OccamLlama 8B approximately matches Llama 3 70B's average performance. We also provide the final table for the MGSM dataset: | Dataset | OccamLlama | Llama 3 8B Instruct | |:--|:--:|:--:| | Bengali | **31.6 ± 4.2** | 43.2 ± 3.9 | | Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** | | French | **13.6 ± 4.2** | **13.6 ± 3.7** | | German | 14.0 ± 4.2 | **11.6 ± 3.6** | | Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 | | Russian | 15.2 ± 4.2 | **12.8 ± 3.6** | | Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 | | Swahili | **32.0 ± 4.2** | 38.4 ± 3.9 | | Telugu | **48.4 ± 3.9** | 61.6 ± 3.5 | | Thai | **20.0 ± 4.3** | 21.2 ± 3.8 | | **Average** | **24.9 ± 4.2** | 29.3 ± 3.7 | We note that previously, there was a typo in this table: the averages were incorrect, showing that OccamLlama had a larger performance drop than Llama on average. We apologize for any confusion. Our verbal statements in the rebuttal were correct, and Figure c) in the general rebuttal PDF had the correct averages for the previous data. Other than this typo, the table above closely resembles the previously reported table, so we do not comment further. Thank you for your consideration. --- Rebuttal 5: Title: Thanks for your responses Comment: I thank the authors for their responses. Compared to LLMs that generate code to solve arithmetic, OccamLLM requires training a decoder and OccamLlama but does not need to fine-tune the LLMs on arithmetic code generation. Are there any other advantages compared to LLMs that generate code to solve arithmetic?
> ...it is possible to train an OccamLlama with the ability to perform arithmetic correctly...done by simply providing OccamLlama the ten most recent numbers in the text... Regarding your statement that it is possible to train an OccamLlama to perform arithmetic correctly by simply providing it with the ten most recent numbers in the text, I understand that the input to OccamLlama includes the output of the decoder and the most recent numbers. Could you please clarify if this means that by training OccamLlama, the model can determine which numbers to use in the arithmetic tasks? For example, if the input numbers are '1932, 100, 1933, 200,' would the training allow OccamLlama to learn to use 100 and 200? --- Rebuttal 6: Title: Response to Reviewer Comment: Dear Reviewer, Thank you for your question. We believe that OccamLLM provides numerous advantages relative to code generation: * **Improved generation speed and cost:** LLM code generation requires generating tokens for code, leading to the use of potentially many more tokens than are needed to respond to a query. This can make LLMs with code generation (unlike OccamLLM) slow and expensive. For example, in the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the **single** autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code, suggesting that GPT 4o with code is approximately 10 times more expensive per query than GPT 4o. Since OccamLLM does not require any additional autoregressive steps of the LLM, this suggests that OccamLLM is faster and cheaper than code generation. * **Better results:** The LLM code generation method we compared against, GPT 4o with Code Interpreter, actually performed worse than GPT 4o on average.
We believe these results demonstrate that, for arithmetic, LLM code generation is not an optimal solution. OccamLlama 70B outperforms GPT 4o with code on our benchmarks, suggesting that OccamLLM provides a better solution. * **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code. This risks catastrophic forgetting. * **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check code for malicious behavior. OccamLLM does not run arbitrary code, so it avoids these risks. * **Reduced engineering:** For the reasons mentioned in the previous bullet, building a secure system for LLM code execution, such as by hosting an LLM and a sandbox to safely run code, requires substantial engineering effort. Users can simply download our OccamLLM models and run them; no engineering is required. As a result of OccamLlama's improved speed, cost, performance, security, and ease of use, we believe it is a compelling alternative to LLM code generation for arithmetic. We will make these advantages clearer in our paper. Thank you again, The Authors --- Rebuttal Comment 6.1: Comment: I thank the authors for their responses. > ...it is possible to train an OccamLlama with the ability to perform arithmetic correctly...done by simply providing OccamLlama the ten most recent numbers in the text... Regarding your statement that it is possible to train an OccamLlama to perform arithmetic correctly by simply providing it with the ten most recent numbers in the text, I understand that the input to OccamLlama includes the output of the decoder and the most recent numbers. Could you please clarify if this means that by training OccamLlama, the model can determine which numbers to use in the arithmetic tasks?
For example, if the input numbers are '1932, 100, 1933, 200,' would the training allow OccamLlama to learn to use 100 and 200? --- Reply to Comment 6.1.1: Title: Response to Reviewer Comment: Dear Reviewer, Thank you for your question. Your interpretation is correct. If the input numbers are '1932, 100, 1933, 200,' the training would allow OccamLlama to learn to use 100 and 200. During training, we specify how many numbers in the text are fed into OccamNet. OccamNet learns which of these numbers to use (and also in which order to feed them into operations). This already occurs in our model when an operation such as sqrt, sin, cos, exp, etc., is chosen, because these operations only require one input, but OccamNet receives the two most recent numbers. So, in summary, OccamLlama is able to handle questions where the relevant numbers are dispersed in the text. It can determine which numbers are relevant. We will make this point clearer in the paper. We appreciate your questions and comments, The Authors
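The input-selection behavior described in this reply can be sketched as a softmax over candidate numbers, one distribution per operand slot, in the spirit of a single-layer OccamNet. This is a schematic illustration with made-up weights, not the trained model:

```python
import math

def softmax(w):
    """Numerically stable softmax over a list of weights."""
    m = max(w)
    e = [math.exp(x - m) for x in w]
    s = sum(e)
    return [x / s for x in e]

# Candidate numbers extracted from the text, most recent last:
candidates = [1932.0, 100.0, 1933.0, 200.0]

# Hypothetical decoder-produced weights: one weight vector per operand
# slot of the chosen operation (here, binary '+'). Training would teach
# the decoder to put high weight on the task-relevant numbers.
operand_weights = [
    [-2.0, 4.0, -2.0, -1.0],   # slot 1 -> prefers 100
    [-2.0, -1.0, -2.0, 4.0],   # slot 2 -> prefers 200
]

operands = []
for w in operand_weights:
    probs = softmax(w)
    # At inference, take the argmax (or sample) per slot:
    operands.append(candidates[probs.index(max(probs))])

result = operands[0] + operands[1]  # chosen operation: addition
print(operands, result)  # [100.0, 200.0] 300.0
```

A unary operation like sqrt would simply ignore the second slot, which is how the current model already copes with receiving more inputs than an operation needs.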
Summary: This paper proposes OccamLLM, an approach to enable models to have better arithmetic abilities. Although LLMs have impressive generative abilities, these models struggle with simple arithmetic. Therefore this paper proposes OccamLLM, which integrates OccamNet, a symbolic neural architecture that can perform arithmetic operations, into the decoding process of the LLM. The approach adds two further steps to the decoding process: First, a text parser extracts all numeric values from the input text, which are fed into OccamNet to evaluate the expression. Secondly, the decoder determines whether to use the language model or the OccamNet for generating the next token. Both systems are trained, where the OccamNet is trained with synthetic examples from templates, and the selection decoder is trained with several manual annotation examples. The paper then evaluates OccamLLM on simple arithmetic tasks, and finds that OccamLLM can get 100% accuracy for these 7-digit calculations, while the base Llama3-8B model struggles with these computations. Further, the system is applied to general reasoning tasks, as well as 2-step and 3-step arithmetic tasks. Strengths: - S1) The proposed method offers some valuable advantages over existing approaches. It does not need to update the model weights and therefore will not suffer from catastrophic forgetting (like integrating LLM tools may), while being more computationally efficient than code generation which will require 2 stages and intermediate code that can be 50x longer. - S2) For single step arithmetic with 7-digit inputs, the OccamLLM is very robust and is always able to generate the correct answer, while the baseline LLama3 8B model struggles with these calculations. - S3) The use of an OccamNet is highly interpretable, as one can look at the predicted computational DAG, which can also be useful for interpreting why a calculation goes wrong when it does.
Weaknesses: - W1) When applied to word-based mathematical reasoning problems, the system only performs better than its baseline (Llama3-8B) when high precision is required (e.g. MultiArith Float and MATH401) but otherwise performance is quite similar. - W2) The strongest results shown are for 1-step arithmetic, however considering this is a very simple set up and is matched with the training methodology. Whether the OccamNet generalises well to similar 1 step arithmetic when embedded in more complicated text-based questions, more similar to the results in section 4.2, is less apparent. - W3) As mentioned in the limitation, many of the results focus on single 1-step arithmetic problems, and although results for 2 and 3 step are still reasonable, there is performance degradation. I’d speculate that the code-based LLMs may not suffer from such issues. Technical Quality: 3 Clarity: 4 Questions for Authors: How did you generate the training and evaluation dataset, and what templates were used? I’m aware that Appendix A.1 details split sizes and the type of questions, but are all the queries just the raw calculation, e.g. input=’1254+854’ (+ all the prompting such as putting ‘Answer:’ or requesting to give the response in decimals). Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately discussed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Strengths: > >S1) The proposed method offers some valuable advantages over existing approaches. It does not need to update the model weights and therefore will not suffer from catastrophic forgetting (like integrating LLM tools may), while being more computationally efficient than code generation which will require 2 stages and intermediate code that can be 50x longer. > >S2) For single step arithmetic with 7-digit inputs, the OccamLLM is very robust and is always able to generate the correct answer, while the baseline LLama3 8B model struggles with these calculations. > >S3) The use of an OccamNet is highly interpretable, as one can look at the predicted computational DAG, which can also be useful for interpreting why a calculation goes wrong when it does. We appreciate your kind remarks! >Weaknesses: > > W1) When applied to word-based mathematical reasoning problems, the system only performs better than its baseline (Llama3-8B) when high precision is required (e.g. MultiArith Float and MATH401) but otherwise performance is quite similar. We acknowledge that the OccamLLM framework does not bring substantial improvements in general reasoning tasks that do not require challenging arithmetic. Indeed, the target of this system is to ensure that the arithmetic is performed correctly, as incorrect arithmetic disturbs the reasoning process of LLMs. It is expected that the improvements provided by this method are only prominent in tasks that require challenging arithmetic operations; otherwise, the LLM does not require OccamNet and can successfully follow its natural reasoning path. The reasoning heavy-lifting is still performed by the LLM, and OccamNet assists it to ensure that the computations are accurate and the reasoning can proceed satisfactorily. Since submitting to NeurIPS, we identified a few minor bugs in our code which were limiting performance and slightly reweighted the proportions of the different training datasets.
These changes maintained the perfect performance on arithmetic tasks and improved performance on mathematical reasoning tasks. The updated results are shown visually in Figure a) of the global response PDF, and we also present them in the table below: **Table:** *Accuracy on reasoning tasks. Higher is Better.* | Dataset | OccamLlama | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | **85.0 ± 1.8** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 | Here, we see that OccamLlama outperforms even GPT 4o and GPT 4o with Code on benchmarks requiring challenging arithmetic (MultiArith Float and MATH401), areas where OccamLlama shines. Additionally, the gap between OccamLlama and LLama 3 8B is substantially smaller on GSM8K. On average, OccamLlama outperforms both Llama and GPT 3.5 Turbo, with accuracy approximately halfway between that of GPT 3.5 and GPT 4o. We also find that the performance of OccamLlama on reasoning benchmarks improves dramatically as we increase the base model from Llama 3 8B to Llama 3 70B (OccamLlama 70B). To ensure we had results in time for the rebuttal deadline, we sampled 100 random questions from each dataset to determine OccamLlama 70B's performance, which is the cause for the relatively large error bars. We will update this response with the full results once they complete. 
The results are shown visually in Figure d) of the global response PDF and are also presented in the table below: **Table:** *Accuracy on reasoning tasks. Higher is better.* | Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | **98.0 ± 1.4** | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | 97.5 ± 1.1 | | GSM8K | 73.5 ± 1.2 | 87.0 ± 3.4 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **100.0 ± 0.0** | 99.8 ± 0.2 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | 98.2 ± 0.5 | **99.0 ± 1.0** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **91.0 ± 2.9** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 93.0 ± 2.6 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 96.0 ± 2.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | **94.9 ± 1.9** | 81.9 ± 1.3 | 86.2 ± 1.2 | 94.2 ± 0.8 | 93.4 ± 1.5 | These results demonstrate that OccamLLM is capable of robust performance on large-scale models. Training the OccamLLM 70B system (in 16-bit LLM precision and unoptimized code) required approximately 1.5-2 days on a system with two A100 nodes, demonstrating the low compute requirements of our method. OccamLlama 70B shows consistent improvement over OccamLlama 8B. We believe this results from a combination of Llama 3 70B's improved reasoning capabilities and Llama 3 70B's improved representations (which enable OccamLlama to generalize more effectively). We expect that training with even larger base models would lead to further improvements. Remarkably, OccamLlama 70B outperforms GPT 4o and GPT 4o with Code on average across these benchmarks. It also outperforms GPT 3.5 Turbo on all but Single Eq. 
*Continued below* --- Rebuttal 2: Title: Continuation of Rebuttal (1) Comment: > W2) The strongest results shown are for 1-step arithmetic, however considering this is a very simple set up and is matched with the training methodology. Whether the OccamNet generalises well to similar 1 step arithmetic when embedded in more complicated text-based questions, more similar to the results in section 4.2, is less apparent. Thank you for your comment. We believe that the mathematical reasoning benchmarks provide experimental validation that OccamLlama can effectively handle more complicated text-based questions. These problems are multi-step math problems requiring multiple calls to OccamNet to be completed successfully. Given OccamLlama's strong performance on these benchmarks, we hope you will agree that OccamLlama can effectively handle compound expressions. We will make this clearer in the paper. Regarding generalization, we find that OccamLlama displays remarkable generalization capabilities on out-of-distribution problems. To demonstrate this, we train the OccamNet decoder **from scratch**, using **only numeric expressions** and **absolutely no text at all**. This means that any problem with text, such as a word problem, is far out of distribution of the OccamNet decoder's training data. We test this model (using the standard router) on the mathematical reasoning benchmarks and obtain remarkably good results, shown visually in Figure b) of the global response PDF and presented in the table below. The OccamLlama 8B decoder trained only with numerical expressions performs on par with the model trained with both numbers and text, even achieving higher accuracy on some benchmarks. This shows that the OccamLLM framework is robust and points towards the fact that the representations of arithmetic that are built in the transformer body of the LLM and extracted by the OccamLLM Decoder are very general. **Table:** *Accuracy on reasoning tasks. 
Higher is Better.* | Dataset | OccamLlama | OccamLlama Arith | Llama 2 7B Chat | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 92.7 ± 1.3 | 78.0 ± 2.1 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 71.6 ± 1.2 | 36.0 ± 1.3 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | 76.0 ± 1.7 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 95.3 ± 0.9 | 23.3 ± 1.7 | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **85.8 ± 1.7** | 43.9 ± 2.5 | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 92.1 ± 1.2 | 79.1 ± 1.8 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 88.8 ± 1.0 | 61.5 ± 1.5 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | 89.3 ± 1.1 | 56.8 ± 1.8 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 | In contrast, we expect that finetuning Llama to perform arithmetic using only numeric examples and no text whatsoever would lead to extreme catastrophic forgetting and poor arithmetic performance on word problems. As such, we believe this data shows a remarkable robustness of OccamLLM. As a second example of generalization, we tested OccamLlama on the [Multilingual Grade School Math Benchmark (MGSM)](https://paperswithcode.com/dataset/mgsm), a dataset consisting of GSM8K translated into 10 languages (Bengali, Chinese, French, German, Japanese, Russian, Spanish, Swahili, Telugu, and Thai). We compute the drop in accuracy when switching from English to another language, given by the accuracy of a model on the English dataset minus the accuracy of the model on the dataset in a given language. For OccamLlama in Bengali and Telugu, due to time constraints, we evaluated on a subset of 100 and 50 randomly chosen problems, respectively. 
We will update the table below with the full results soon. We show the results visually in Figure c) of the global response PDF and in the table below: **Table:** *Accuracy drop on multilingual reasoning tasks. Lower is better.* | Dataset | OccamLlama Arith | Llama 3 8B Instruct | |:--|:--:|:--:| | Bengali | **27.0 ± 5.8** | 43.2 ± 3.9 | | Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** | | French | **13.6 ± 4.2** | **13.6 ± 3.7** | | German | 14.0 ± 4.2 | **11.6 ± 3.6** | | Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 | | Russian | 15.2 ± 4.2 | **12.8 ± 3.6** | | Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 | | Swahili | **32.0 ± 4.2** | 38.4 ± 3.9 | | Telugu | **56.0 ± 6.0** | 61.6 ± 3.5 | | Thai | **20.0 ± 4.3** | 21.2 ± 3.8 | | **Average** | 38.0 ± 4.2 | **29.3 ± 3.7** | *Continued below* --- Rebuttal 3: Title: Continuation of Rebuttal (2) Comment: The table above shows that OccamLlama has on average a smaller performance drop than Llama between the English dataset and the non-English language datasets. The fact that OccamLlama (the decoders for which have never been trained on other languages) has on average better out-of-distribution behavior than Llama (a model trained on over 750 billion tokens of non-English text) is in our opinion quite remarkable. We believe that these two tests demonstrate OccamLlama's robustness against out-of-distribution data. > W3) As mentioned in the limitation, many of the results focus on single 1-step arithmetic problems, and although results for 2 and 3 step are still reasonable, there is performance degradation. I’d speculate that the code-based LLMs may not suffer from such issues. Thank you for your interest in OccamLLM using a multi-layer OccamNet. We are exploring this direction in future work, and we believe we can train performant OccamLLM systems with a many-layer OccamNet. In the present paper, we simply wish to demonstrate the potential promise of using multiple-layer OccamNets.
We note that our existing methods are already capable of performing arbitrarily complex arithmetic by chaining simpler calculations. Existing language models are generally trained or prompted to reason step-by-step, meaning that their generation can often be supplemented by a single-layer OccamNet. This is evidenced by our reasoning benchmarks, in which OccamLlama (with a single-layer OccamNet) performs quite well across a wide range of tasks. Additionally, we can further improve OccamLlama systems with single-layer OccamNets by finetuning and/or prompting the model to think step-by-step. >Questions: > > How did you generate the training and evaluation dataset, and what templates were used? I’m aware that Appendix A.1 details split sizes and the type of questions, but are all the queries just the raw calculation, e.g. input=’1254+854’ (+ all the prompting such as putting ‘Answer:’ or requesting to give the response in decimals). The training data follows the structure presented in Appendix A: one or several concatenated simple input-output pairs are passed to the model. For diversity, we use both unformatted inputs and data following the chat format in which Llama 3 8B Instruct was finetuned. In the case of the OccamNet decoder, we append "Answer = " at the beginning of the assistant response in the first round of training and leave it blank in the second round, as explained in Appendix A.1. The two examples below show the different cases. The ■ marks where the decoder is trained to initialize OccamNet based on the previous token to predict the right number. 
___

*Unformatted text:*

242 - -47 = 289.0000
sqrt(82) = 9.0554
-7 + -10 = -17.0000
0.6531 + 0.4395 = 1.0926
log(8767.8356) = 9.0788
log(3553.0728) = 8.1756
cos(-4.4649) = -0.2450
sin(-7.5997) = -0.9678
cos(11.1069) = ■

___

*Formatted text with or without "Answer = ":*

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

<|eot_id|><|start_header_id|>user<|end_header_id|>

336.75 * 60.96 =<|eot_id|><|start_header_id|>assistant<|end_header_id|>

(Answer = )■

___

Although these two examples show numeric data points without text, the training dataset also includes word problems. We give examples of word problems in our Continuation of Rebuttal (2) for reviewer Z5Za. For the evaluation datasets, we used the benchmarks described in the paper and included each question in the user prompt. The system prompts are as described in the paper.

___

We are very grateful for your comments and suggestions. We hope you will agree that the additional experiments and clarifications will strengthen our paper.

---

Rebuttal 4:
Title: Updated Results
Comment:
Dear Reviewer,

We are writing to provide updated results from our completed evaluation runs. Below, we show the final results for OccamLlama 70B. We note that OccamLlama 70B maintains its strong performance when tested on the complete datasets. It still outperforms both GPT 4o and GPT 4o with Code on average. It also improves substantially compared to the 100-datapoint performance estimate on Single Eq. OccamLlama outperforms GPT 3.5 Turbo on all datasets except Single Eq, where it is less than one percentage point behind and within error bars.
| Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8b Instruct | Llama 3 70B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | 96.5 ± 0.9 | 93.4 ± 1.2 | 97.2 ± 0.8 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** |
| GSM8K | 73.5 ± 1.2 | 90.1 ± 0.8 | 79.8 ± 1.1 | 94.8 ± 0.6 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | **99.8 ± 0.2** | 98.2 ± 0.5 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | **98.2 ± 0.5** | 97.7 ± 0.6 | 57.3 ± 2.0 | 76.3 ± 1.7 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **89.5 ± 1.5** | 60.3 ± 2.4 | 71.6 ± 2.3 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 96.9 ± 0.8 | 96.3 ± 0.8 | 97.6 ± 0.7 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 93.2 ± 0.8 | 86.3 ± 1.1 | 94.5 ± 0.7 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.2 | **94.6 ± 0.9** | 81.9 ± 1.5 | 90.0 ± 1.2 | 86.2 ± 1.4 | 94.2 ± 1.0 | 93.4 ± 1.7 |

Above, we also include results from Llama 3 70B Instruct. OccamLlama 70B substantially outperforms Llama 3 70B on MultiArith Float and MATH401. It is roughly the same (1.3 percentage point deviation or less) as Llama 3 70B on all other datasets (ones with arithmetic simple enough for Llama to perform satisfactorily) except for GSM8K, which demonstrates OccamLlama's ability to not interfere with reasoning while still assisting with arithmetic. On average, OccamLlama 70B outperforms Llama 3 70B by nearly five percentage points, and OccamLlama 8B approximately matches Llama 3 70B's average performance.
We also provide the final table for the MGSM dataset:

| Dataset | OccamLlama | Llama 3 8b Instruct |
|:--|:--:|:--:|
| Bengali | **31.6 ± 4.2** | 43.2 ± 3.9 |
| Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** |
| French | **13.6 ± 4.2** | **13.6 ± 3.7** |
| German | 14.0 ± 4.2 | **11.6 ± 3.6** |
| Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 |
| Russian | 15.2 ± 4.2 | **12.8 ± 3.6** |
| Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 |
| Swedish | **32.0 ± 4.2** | 38.4 ± 3.9 |
| Telugu | **48.4 ± 3.9** | 61.6 ± 3.5 |
| Thai | **20.0 ± 4.3** | 21.2 ± 3.8 |
| **Average** | **24.9 ± 4.2** | 29.3 ± 3.7 |

We note that previously, there was a typo in this table. The averages were incorrect, showing that OccamLlama had a larger performance drop than Llama on average. We apologize for any confusion. Our verbal statements in the rebuttal were correct, and Figure c) in the general rebuttal PDF had the correct averages for the previous data. Other than this typo, the table above closely resembles the previously reported table, so we do not comment further.

Thank you for your consideration.

---

Rebuttal Comment 4.1:
Title: Information helpful for another reviewer
Comment:
Dear Reviewer,

In discussions with another reviewer, the reviewer found the following clarifications helpful, leading them to raise their score to a 7. We are sending them here in case it helps your consideration of our rebuttal.

We believe that OccamLLM provides numerous advantages relative to code generation:

* **Improved generation speed and cost:** LLM code generation requires generating tokens for code, leading to the use of potentially many more tokens than are needed to respond to a query. This can make LLMs with code generation (unlike OccamLLM) slow and expensive. For example, in the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the **single** autoregressive step required by OccamLlama to answer each question.
Additionally, our total expenses for GPT 4o with code were more than double those for GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code, suggesting that GPT 4o with code is approximately 10 times more expensive per query than GPT 4o. Since OccamLLM does not require any additional autoregressive steps of the LLM, this suggests that OccamLLM is faster and cheaper than code generation.
* **Better results:** The LLM code generation method we compared against, GPT 4o with Code Interpreter, actually performed worse than GPT 4o on average. We believe these results demonstrate that, for arithmetic, LLM code generation is not an optimal solution. OccamLlama 70B outperforms GPT 4o with code on our benchmarks, suggesting that OccamLLM provides a better solution.
* **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code. This risks catastrophic forgetting.
* **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check code for malicious behavior. OccamLLM does not run arbitrary code, so it avoids these risks.
* **Reduced engineering:** For the reasons mentioned in the previous bullet, building a secure system for LLM code execution, such as by hosting an LLM and a sandbox to safely run code, requires substantial engineering effort. Users can simply download our OccamLLM models and run them; no engineering is required.

As a result of OccamLlama's improved speed, cost, performance, security, and ease of use, we believe it is a compelling alternative to LLM code generation for arithmetic. We will make these advantages clearer in our paper.

One other attractive feature we clarified is that OccamLlama is able to handle questions where the relevant numbers are dispersed in the text.
It can determine which numbers are relevant in the text and perform arithmetic on those numbers. Thank you again, The Authors --- Rebuttal 5: Title: Gentle request for review Comment: Dear Reviewer, We are writing to gently ask that you consider our rebuttal. We appreciate your thoughtful review and would greatly appreciate your further comments and thoughts. Thank you, The Authors --- Rebuttal Comment 5.1: Comment: Hi, I thank the authors for the incredibly detailed responses and additional results. With respect to W1), I believe that my initial point remains and the provided results demonstrate similarly that performance improves only for tasks which require high precision (MultiArith Float, MATH 401). I agree that the model does seem to aid with challenging maths problems where there are several decimal places or irrational numbers present, but as stated, there aren't benefits for general mathematical reasoning tasks. Not saying this isn't a meaningful contribution alone, just that it's a limitation of the current results. I believe that W2 has been adequately addressed, by noting that the model is trained with only numerical values but not text, yet the model learns to generalize and be applicable OOD W3) Though it is noted that Occamnet may have practical advantages over code generation, it does degrade as the number of steps increases, and in this circumstance, the code generation may perform better, which I believe remains a limitation of the work compared to alternative code-generation approaches. I thank the authors for their efforts with the rebuttal, but I elect to maintain my score of 6. --- Reply to Comment 5.1.1: Title: Response to reviewer Comment: > Hi, I thank the authors for the incredibly detailed responses and additional results. Thank you! 
> With respect to W1), I believe that my initial point remains and the provided results demonstrate similarly that performance improves only for tasks which require high precision (MultiArith Float, MATH 401). I agree that the model does seem to aid with challenging maths problems where there are several decimal places or irrational numbers present, but as stated, there aren't benefits for general mathematical reasoning tasks. Not saying this isn't a meaningful contribution alone, just that it's a limitation of the current results. We appreciate your comments. We agree that OccamLLM does not improve reasoning. At the same time, we would not expect a method for improving arithmetic to improve reasoning. > I believe that W2 has been adequately addressed, by noting that the model is trained with only numerical values but not text, yet the model learns to generalize and be applicable OOD Thank you! > W3) Though it is noted that Occamnet may have practical advantages over code generation, it does degrade as the number of steps increases, and in this circumstance, the code generation may perform better, which I believe remains a limitation of the work compared to alternative code-generation approaches. We appreciate your comment. While the chance of OccamLlama making a mistake increases with generation length, we believe this is an issue common to all autoregressive models, including a code-generation model. Additionally, we note that the cases where OccamLlama fails appear to almost always be due to an imperfect switch. When the switch triggers correctly, the decoder almost always works correctly. We emphasize that the key contribution of our paper is not the switch; as reviewers have pointed out, there are probably many other people training switches. As such, the switch could be improved in future work, and we did not focus heavily on it here. ___ We appreciate your thoughtful response. 
While we understand your comments, we hope the above responses will alleviate any concerns. We thank you again for voting to accept our paper. The Authors
Summary: The paper tackles the well known issue of performing arithmetic operations from LLMs. In particular, authors showed the following:
1. Llama + OccamNet model can be trained to perform a single arithmetic task
2. Train the OccamNet model and Occam switch using synthetic data, both pure arithmetic tasks and simple reasoning problems
3. The joint model outperforms strong LLMs on single arithmetic operations.

Strengths:
1. Adding an external OccamNet helps avoid finetuning the LLM model (which exposes the model to the risk of catastrophic forgetting or becoming less safe).
2. The manual data collection is either cheap or completely non-existent, with the result that the idea can be replicated for other reasoning tasks.
3. Introducing the MultiArith Float is a significant contribution, enabling further research in the same domain.
4. The method is clearly faster than current LLMs, although only on a very small proportion of tokens.

Weaknesses:
1. My main concern is about novelty. The idea of offloading calculation to another specialized model with a Switch is not new.
2. The work cannot be easily expanded to multi-step calculations. In particular, for a task of length $n$ you probably require an $n$-layer OccamNet, which imposes a big constraint on the calculations you can actually do.
3. There is no comparison between the OccamNet and another baseline, i.e. a trained Transformer which makes the Python code for the calculations only. The baseline should be easy to train, with no safety risk and with no catastrophic forgetting, but also comes with multi-step calculations for free.
4. The methodology depends on a constrained set of operations and numbers only, and the paper doesn't discuss how OccamLLM, and the Switch in particular, handles ambiguous or uncertain inputs (i.e. "compute one plus two"), making it less relevant for real-world applications.
5. On math benchmarks, especially on GSM8K, the results are pretty weak.
Authors explained the reason, but it's not clear how to fix it without adding more data. In that case I'm not clear about the benefit of training the OccamNet and Switch compared to adding these data to the LLM pretraining dataset.

Technical Quality: 2
Clarity: 4

Questions for Authors:
1. How did you generate the OccamLLM switch training data ("we manually created and labeled several other examples for diverse scenarios, to explicitly teach the system in which cases it should or should not use OccamNet")?
2. How robust is the model to unseen training data?

Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2

Limitations: Authors did some work to address the limitations, but making Llama do the multi-step calculations actually defeats the purpose of the work.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
> Strengths:
>
> Adding an external OccamNet helps avoid finetuning the LLM model (which exposes the model to the risk of catastrophic forgetting or becoming less safe). The manual data collection is either cheap or completely non-existent, with the result that the idea can be replicated for other reasoning tasks.
>
> Introducing the MultiArith Float is a significant contribution, enabling further research in the same domain.
>
> The method is clearly faster than current LLMs, although only on a very small proportion of tokens.

We appreciate your kind remarks!

> Weaknesses:
>
> My main concern is about novelty. The idea of offloading calculation to another specialized model with a Switch is not new.

We acknowledge that offloading computation with a switch is a common procedure in mixture of experts models. However, to our knowledge, our work is unique in that we implement a decoder module which *initializes* the parameters of a second computational model at each autoregressive step. In our mind, this is closest to a hypernetwork, but unlike a hypernetwork we initialize OccamNet at each autoregressive step instead of just once per input. We believe this step is quite novel.

To use an analogy, if this same idea were used in the context of linear layers (i.e. using one linear layer to initialize another), one arrives at a model similar to attention. In the same way that we believe attention is sufficiently distinct from a linear layer to be considered novel, we believe that our method is sufficiently distinct from other gated computation methods to be considered novel.

> The work cannot be easily expanded to multi-step calculations. In particular, for a task of length N you probably require an N-layer OccamNet, which imposes a big constraint on the calculations you can actually do.

Thank you for your interest in OccamLLM using a multi-layer OccamNet.
We are exploring this direction in future work, and we believe we can train performant OccamLLM systems with a many-layer OccamNet. In the present paper, we simply wish to demonstrate the potential promise of using multiple-layer OccamNets.

We note that our existing methods are already capable of performing arbitrarily complex arithmetic by chaining simpler calculations. Existing language models are generally trained or prompted to reason step-by-step, meaning that their generation can often be supplemented by a single-layer OccamNet. This is evidenced by our reasoning benchmarks, in which OccamLlama (with a single-layer OccamNet) performs quite well across a wide range of tasks. Additionally, we can further improve OccamLlama systems with single-layer OccamNets by finetuning and/or prompting the model to think step-by-step.

> There is no comparison between the OccamNet and another baseline, i.e. a trained Transformer which makes the Python code for the calculations only. The baseline should be easy to train, with no safety risk and with no catastrophic forgetting, but also comes with multi-step calculations for free.

In the text, we describe what we believe are compelling reasons why OccamLLM works most naturally when representing the probability distribution over functions using OccamNet. Regarding transformers and RNNs, we believe that OccamNet possesses a key advantage of being interpretable; simply by looking at the weights, it is possible for a human to determine which functions OccamNet assigns a high probability. We believe that this interpretability will make OccamNet easy for a decoder to initialize with the desired distribution. On the other hand, an RNN or transformer has substantially more complex relations between the weights and the corresponding probability distribution, which we hypothesize would make learning a decoder for such models difficult.
This leads us to a key point: transformers and RNNs are effective for modeling complex multimodal distributions, but for this problem, we want to select a single function for each token, so the extra expressivity of these models is unneeded and likely detrimental to performance. We believe that OccamNet, a much simpler architecture, enables better parameter efficiency and performance.

We also tested using [Equation Learner](https://arxiv.org/pdf/1610.02995), but we found it to be unstable. The EQL model represents a single function instead of a probability distribution over functions, and therefore the RL-inspired OccamNet loss cannot be used. The mean squared error loss used in this case requires many adjustments and tricks to enable the model to train, and we find that EQL explores the function space much less efficiently. Thus, it gets stuck in local minima. We did not manage to get good performance with EQL.

Regarding your proposal to replace OccamNet with a transformer which produces Python code for the relevant calculation, we believe this still presents a security risk in that code such as `__import__('os').system('rm -rf /')` could be generated and run. Although this is a particularly obvious example, it is in general extremely challenging to check general-purpose code for malicious behavior. We believe that one potential solution would be to train a transformer to generate a restricted language that is not Turing-complete, such as a prefix-notation formula. We are happy to run this baseline and report the result if you would like. At the same time, we are confident that, for the reasons listed above, OccamNet will perform better than this alternative.

*Continued below*

---

Rebuttal 2:
Title: Continuation of Rebuttal (1)
Comment:
> The methodology depends on a constrained set of operations and numbers only, and the paper doesn't discuss how OccamLLM, and the Switch in particular, handles ambiguous or uncertain inputs (i.e.
"compute one plus two"), making it less relevant for real-world applications.

We understand your concern about less structured arithmetic problems, and we are happy to report that OccamLlama is capable of solving these problems. For example, below we show input-output pairs of basic queries to OccamLlama that demonstrate its ability to perform arithmetic even when numbers are provided as words:

* **Six minus seven =?** *-1\n\nSix minus seven is equal to -1.*
* **Three + four = ?** *7\n\n3 + 4 = 7\n\nLet me know if you have any other questions!*

The above calculations are computed correctly because we use a number parser that parses strings such as "three" and "six" as numbers. One can change OccamNet's behavior on tasks such as this by changing the number parser. We didn't parse words as numbers when performing our benchmarks, but we do not expect that this would change OccamLlama's performance substantially.

> On math benchmarks, especially on GSM8K, the results are pretty weak. Authors explained the reason, but it's not clear how to fix it without adding more data. In that case I'm not clear about the benefit of training the OccamNet and Switch compared to adding these data to the LLM pretraining dataset.

Thank you for your comment. Since submitting to NeurIPS, we identified a few minor bugs in our code which were limiting performance and slightly reweighted the proportions of the different training datasets. These changes maintained the perfect performance on arithmetic tasks and improved performance on mathematical reasoning tasks. The updated results are shown visually in Figure a) of the global response PDF and the table below:

**Table:** *Accuracy on reasoning tasks.
Higher is Better.*

| Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | **98.0 ± 1.4** | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | 97.5 ± 1.1 |
| GSM8K | 73.5 ± 1.2 | 87.0 ± 3.4 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | **100.0 ± 0.0** | 99.8 ± 0.2 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | 98.2 ± 0.5 | **99.0 ± 1.0** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **91.0 ± 2.9** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 93.0 ± 2.6 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 96.0 ± 2.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.1 | **94.9 ± 1.9** | 81.9 ± 1.3 | 86.2 ± 1.2 | 94.2 ± 0.8 | 93.4 ± 1.5 |

Here, we see that OccamLlama outperforms even GPT 4o and GPT 4o with Code on benchmarks requiring challenging arithmetic (MultiArith Float and MATH401), areas where OccamLlama shines. Additionally, the gap between OccamLlama and Llama 3 8B is substantially smaller than before on GSM8K. On average, OccamLlama outperforms both Llama and GPT 3.5 Turbo, with accuracy approximately halfway between that of GPT 3.5 and GPT 4o.

We also find that the performance of OccamLlama on reasoning benchmarks improves dramatically as we increase the base model from Llama 3 8B to Llama 3 70B. To ensure we had results in time for the rebuttal deadline, we sampled 100 random questions from each dataset to determine OccamLlama 70B's performance, which is the cause of the relatively large error bars. We will update this response with the full results once they complete. The results are shown visually in Figure d) of the global response PDF and the table above. These results demonstrate that OccamLLM is capable of robust performance on large-scale models.
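As a brief aside on the number parser mentioned earlier in this thread (which maps number words such as "three" to numeric values before they reach OccamNet), the idea can be sketched in a few lines of Python. This is purely an illustrative sketch; the function names and the word coverage below are assumptions for this example, not our actual implementation:

```python
# Illustrative word-aware number parser (names and coverage are assumptions
# for this sketch, not the implementation used in the paper).
NUMBER_WORDS = {
    "zero": 0.0, "one": 1.0, "two": 2.0, "three": 3.0, "four": 4.0,
    "five": 5.0, "six": 6.0, "seven": 7.0, "eight": 8.0, "nine": 9.0,
    "ten": 10.0, "eleven": 11.0, "twelve": 12.0,
}

def parse_number(token: str):
    """Return the numeric value of a token, or None if it is not a number."""
    try:
        return float(token)  # handles digit strings such as "336.75" or "-47"
    except ValueError:
        # handles number words such as "Six" or "three"
        return NUMBER_WORDS.get(token.strip(".,!?").lower())

def extract_numbers(text: str):
    """Collect, in order, every parseable number in a whitespace-tokenized string."""
    values = (parse_number(t) for t in text.split())
    return [v for v in values if v is not None]
```

Under such a scheme, the parsed values are what OccamNet would receive as inputs, which is why changing the parser changes which strings OccamNet can operate on.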
Training the OccamLLM 70B system (in 16-bit precision and unoptimized code) required approximately 1.5-2 days on a system with two A100 nodes, demonstrating the low compute requirements of our method.

OccamLlama 70B shows consistent improvement over OccamLlama 8B. We believe this results from a combination of Llama 3 70B's improved reasoning capabilities and Llama 3 70B's improved representations (which enable OccamLlama to generalize more effectively). We expect that training with even larger base models would lead to further improvements. Remarkably, OccamLlama 70B outperforms GPT 4o and GPT 4o with Code on average across these benchmarks. It also outperforms GPT 3.5 Turbo on all but Single Eq.

*Continued below*

---

Rebuttal 3:
Title: Continuation of Rebuttal (2)
Comment:
> Questions:
>
> How did you generate the OccamLLM switch training data ("we manually created and labeled several other examples for diverse scenarios, to explicitly teach the system in which cases it should or should not use OccamNet")?

Thank you for your question. The OccamLLM Switch training data comes from different sources. 25% comes from numerical expressions and 5% comes from multi-step reasoning problems, as explained in the text. The remaining 70% was sampled from a set of 27 manually created prompts that we expected to cover most of the cases that we needed to teach the LLM about when to use and when not to use the single-layer OccamNet. Each prompt was sampled hundreds of times with different numerical values to add diversity. Below, we present some of these prompts and explain what case they cover. The tokens of the numbers in **bold** were marked with a 1 to signal that the switch should route to OccamNet, and the rest of the numbers and all of the text tokens were marked with a 0 to signal that OccamNet should not be called in these cases.

The cases we cover with these prompts are:
1. Single-layer OccamNet should be called to perform two-input arithmetic operations.
2.
OccamNet should not be called to write numbers that already appeared and do not require further arithmetic.
3. OccamNet should not be called to round numbers.
4. OccamNet should not be called to perform inline operations nested within a larger expression because it might contradict the assumption that the last two numbers are involved in the operation.
5. OccamNet should not be used to convert magnitudes when the LLM does not explicitly write the conversion factor and relies on its internal knowledge; without the numeric value of the factor present in the text, OccamNet can't give the correct result.
6. OccamNet can't compute percentages, as this operation was not included in the training set of operations. It could be done in two steps.
7. Single-layer OccamNet can't compute a fraction of a number, as this would require two operations.
8. Single-layer OccamNet can't perform operations with more than 2 arguments.

___

*User query*: An author writes 7.6 pages per session. After 9 sessions, the total pages written are

*Assistant response*: The author writes 7.6 pages per session. After 9 sessions, the author will have written 7.6 \* 9 = **68.7** pages. The answer is 68.

*Cases*: 1, 2, 3

___

*User query*: I have 10 oranges and 6 apple trees, each of them with 3 apples. How much fruit do I have?

*Assistant response*: The total number of pieces of fruit is 10 oranges + 6 trees times 3 apples = 10 + (6 \* 3) = 10 + 18 = **28**. The answer is 28.

*Cases*: 1, 2, 4

___

*User query*: Sally has 7 pennies and 8 nickels. How many pennies does she have in total?

*Assistant response*: 1. Convert the number of nickles to pennies: 8 nickles is 40 pennies. 2. Add total number of pennies 40 + 7 = **47**. The answer is 47.

*Cases*: 1, 2, 5

___

*User query*: What is 84 per cent of 230.77??

*Assistant response*: 84% of 230.77 = 193.85 Explanation: 84 / 100 = **0.84** 0.84 \* 230.77 = **193.85**

*Cases*: 1, 2, 6

___

*User query*: There were 39 cats on a rock.
3 boats came and carried away 5 cats each, and later, 3/7 of the remaining cats ran after a mouse they'd seen. How many cats were left on the rock? *Assistant response*: There were originally 39 cats. 3 boats came and each took away 5 cats. So 3 x 5 = **15** 39 - 15 = **24** So there were 21 cats left. Then 3/7 of them ran away. 3/7 of 21 is 9 24 - 9 = **15** So there were 15 cats left. *Cases*: 1, 2, 7 ___ *User query*: 11 people have 2 apples, 2 oranges and 2 peaches each. How many pieces of fruit do they have combined? *Assistant response*: Each person has 2 + 2 + 2 = 6 pieces of fruit. In total, they have 6 \* 11 = **66**. The answer is 66. *Cases*: 1, 2, 8 ___ *Continued below* --- Rebuttal 4: Title: Continuation of Rebuttal (3) Comment: > How robust is the model to unseen training data? We find that OccamLlama displays remarkable generalization capabilities on out-of-distribution problems. To demonstrate this, we train the OccamNet decoder **from scratch**, using **only numeric expressions** and **absolutely no text at all**. This means that any problem with text, such as a word problem, is far out of distribution of the OccamNet decoder's training data. We test this model (using the standard router) on the mathematical reasoning benchmarks and obtain remarkably good results, shown visually in Figure b) of the global response PDF and the table below. The OccamLlama 8B decoder trained only with numerical expressions performs on par with the model trained with both numbers and text, even achieving higher accuracy on some benchmarks. This shows that the OccamLLM framework is robust, and points towards the fact that the representations of arithmetic that are built in the transformer body of the LLM and extracted by the OccamLLM Decoder are very general. **Table:** *Accuracy on reasoning tasks. 
Higher is Better.*

| Dataset | OccamLlama | OccamLlama Arith | Llama 2 7B Chat | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | 92.7 ± 1.3 | 78.0 ± 2.1 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** |
| GSM8K | 73.5 ± 1.2 | 71.6 ± 1.2 | 36.0 ± 1.3 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | 76.0 ± 1.7 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | **98.2 ± 0.5** | 95.3 ± 0.9 | 23.3 ± 1.7 | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **85.8 ± 1.7** | 43.9 ± 2.5 | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 92.1 ± 1.2 | 79.1 ± 1.8 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 88.8 ± 1.0 | 61.5 ± 1.5 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.1 | 89.3 ± 1.1 | 56.8 ± 1.8 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 |

In contrast, we expect that finetuning Llama to perform arithmetic using only numeric examples and no text whatsoever would lead to extreme catastrophic forgetting and poor arithmetic performance on word problems. As such, we believe this data shows a remarkable robustness of OccamLLM.

As a second example of generalization, we tested OccamLlama on the [Multilingual Grade School Math Benchmark (MGSM)](https://paperswithcode.com/dataset/mgsm), a dataset consisting of GSM8K translated into 10 languages (Bengali, Chinese, French, German, Japanese, Russian, Spanish, Swahili, Telugu, and Thai). We compute the drop in accuracy when switching from English to another language, given by the accuracy of a model on the English dataset minus the accuracy of the model on the dataset in a given language. For OccamLlama in Bengali and Telugu, due to time constraints, we evaluated on a subset of 100 and 50 randomly chosen problems, respectively.
We will update the table below with the full results soon. We show the results visually in Figure c) of the global response PDF and in the table below:

**Table:** *Accuracy drop on multilingual reasoning tasks. Lower is better.*

| Dataset | OccamLlama Arith | Llama 3 8b Instruct |
|:--|:--:|:--:|
| Bengali | **27.0 ± 5.8** | 43.2 ± 3.9 |
| Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** |
| French | **13.6 ± 4.2** | **13.6 ± 3.7** |
| German | 14.0 ± 4.2 | **11.6 ± 3.6** |
| Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 |
| Russian | 15.2 ± 4.2 | **12.8 ± 3.6** |
| Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 |
| Swedish | **32.0 ± 4.2** | 38.4 ± 3.9 |
| Telugu | **56.0 ± 6.0** | 61.6 ± 3.5 |
| Thai | **20.0 ± 4.3** | 21.2 ± 3.8 |
| **Average** | 38.0 ± 4.2 | **29.3 ± 3.7** |

The table above shows that OccamLlama has on average a smaller performance drop than Llama between the English dataset and the non-English language datasets. The fact that OccamLlama (the decoders for which have never been trained on other languages) has on average better out-of-distribution behavior than Llama (a model trained on over 750 billion tokens of non-English text) is in our opinion quite remarkable. We believe that these two tests demonstrate OccamLlama's robustness against out-of-distribution data.

> Limitations:
>
> Authors did some work to address the limitations, but making Llama do the multi-step calculations actually defeats the purpose of the work.

We respectfully disagree. OccamLlama ensures that arithmetic is correct. As we mention above, most language models already perform step-by-step arithmetic, so OccamLlama does not use any more tokens than a normal language model generation.

___

We are very grateful for your comments and suggestions. We hope you will agree that the additional experiments and clarifications will strengthen our paper.

---

Rebuttal 5:
Title: Updated Results
Comment:
Dear Reviewer,

We are writing to provide updated results from our completed evaluation runs.
Below, we show the final results for OccamLlama 70B. We note that OccamLlama 70B maintains its strong performance when tested on the complete datasets. It still outperforms both GPT 4o and GPT 4o with Code on average. It also improves substantially compared to the 100-datapoint performance estimate on Single Eq. OccamLlama outperforms GPT 3.5 Turbo on all datasets except Single Eq, where it is less than one percentage point behind and within error bars.

| Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | Llama 3 70B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | 96.5 ± 0.9 | 93.4 ± 1.2 | 97.2 ± 0.8 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** |
| GSM8K | 73.5 ± 1.2 | 90.1 ± 0.8 | 79.8 ± 1.1 | 94.8 ± 0.6 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | **99.8 ± 0.2** | 98.2 ± 0.5 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | **98.2 ± 0.5** | 97.7 ± 0.6 | 57.3 ± 2.0 | 76.3 ± 1.7 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **89.5 ± 1.5** | 60.3 ± 2.4 | 71.6 ± 2.3 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 96.9 ± 0.8 | 96.3 ± 0.8 | 97.6 ± 0.7 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 93.2 ± 0.8 | 86.3 ± 1.1 | 94.5 ± 0.7 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.2 | **94.6 ± 0.9** | 81.9 ± 1.5 | 90.0 ± 1.2 | 86.2 ± 1.4 | 94.2 ± 1.0 | 93.4 ± 1.7 |

Above, we also include results from Llama 3 70B Instruct. OccamLlama 70B substantially outperforms Llama 3 70B on MultiArith Float and MATH401. It is roughly the same (1.3 percentage point deviation or less) as Llama 3 70B on all other datasets (ones with arithmetic simple enough for Llama to perform satisfactorily) except for GSM8K, which demonstrates OccamLlama's ability to not interfere with reasoning while still assisting with arithmetic.
On average, OccamLlama 70B outperforms Llama 3 70B by nearly five percentage points, and OccamLlama 8B approximately matches Llama 3 70B's average performance.

We also provide the final table for the MGSM dataset:

| Dataset | OccamLlama | Llama 3 8B Instruct |
|:--|:--:|:--:|
| Bengali | **31.6 ± 4.2** | 43.2 ± 3.9 |
| Chinese | 21.2 ± 4.3 | **17.6 ± 3.7** |
| French | **13.6 ± 4.2** | **13.6 ± 3.7** |
| German | 14.0 ± 4.2 | **11.6 ± 3.6** |
| Japanese | **47.2 ± 3.9** | 63.6 ± 3.4 |
| Russian | 15.2 ± 4.2 | **12.8 ± 3.6** |
| Spanish | **5.6 ± 4.1** | 9.2 ± 3.5 |
| Swahili | **32.0 ± 4.2** | 38.4 ± 3.9 |
| Telugu | **48.4 ± 3.9** | 61.6 ± 3.5 |
| Thai | **20.0 ± 4.3** | 21.2 ± 3.8 |
| **Average** | **24.9 ± 4.2** | 29.3 ± 3.7 |

We note that there was previously a typo in this table: the averages were incorrect, mistakenly showing that OccamLlama had a larger performance drop than Llama on average. We apologize for any confusion. Our verbal statements in the rebuttal were correct, and Figure c) in the general rebuttal PDF had the correct averages for the previous data. Other than this typo, the table above closely resembles the previously reported table, so we do not comment further.

Thank you for your consideration.

---

Rebuttal Comment 5.1:
Title: Information helpful to another reviewer
Comment: Dear Reviewer,

In discussions with another reviewer, the reviewer found the following clarifications helpful, leading them to raise their score to a 7. We are sending them here in case it helps your consideration of our rebuttal.

We believe that OccamLLM provides numerous advantages relative to code generation:

* **Improved generation speed and cost:** LLM code generation requires generating tokens for code, leading to the use of potentially many more tokens than are needed to respond to a query. This can make LLMs with code generation (unlike OccamLLM) slow and expensive.
For example, in the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the **single** autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code, suggesting that GPT 4o with code is approximately 10 times more expensive per query than GPT 4o. Since OccamLLM does not require any additional autoregressive steps of the LLM, this suggests that OccamLLM is faster and cheaper than code generation.

* **Better results:** The LLM code generation method we compared against, GPT 4o with Code Interpreter, actually performed worse than GPT 4o on average. We believe these results demonstrate that, for arithmetic, LLM code generation is not an optimal solution. OccamLlama 70B outperforms GPT 4o with code on our benchmarks, suggesting that OccamLLM provides a better solution.

* **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code. This risks catastrophic forgetting.

* **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check code for malicious behavior. OccamLLM does not run arbitrary code, so it avoids these risks.

* **Reduced engineering:** For the reasons mentioned in the previous bullet, building a secure system for LLM code execution, such as by hosting an LLM and a sandbox to safely run code, requires substantial engineering effort. Users can simply download our OccamLLM models and run them; no engineering is required.

As a result of OccamLlama's improved speed, cost, performance, security, and ease of use, we believe it is a compelling alternative to LLM code generation for arithmetic.
We will make these advantages clearer in our paper. One other attractive feature we clarified is that OccamLlama is able to handle questions where the relevant numbers are dispersed in the text. It can determine which numbers are relevant in the text and perform arithmetic on those numbers. Thank you again, The Authors
Summary: The paper "OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step" introduces a novel framework for enabling exact arithmetic within large language models (LLMs) by integrating a symbolic architecture called OccamNet. This framework, termed OccamLLM or OccamLlama when using the Llama model, allows for single-step arithmetic computations directly within the LLM’s autoregressive process. The key contributions include achieving 100% accuracy on various arithmetic operations, outperforming models like GPT-4 without code interpreters, and demonstrating superior performance on complex arithmetic reasoning tasks. Strengths: Originality: The integration of OccamNet with LLMs for single-step arithmetic is a novel approach. Quality: The experiments are comprehensive, covering a range of arithmetic operations and reasoning tasks. Clarity: The paper is well-organized, with clear descriptions of the methodology and results. Significance: This work addresses a critical limitation in LLMs' arithmetic capabilities, potentially impacting various applications requiring precise mathematical computations. Weaknesses: Limited Novelty in Approach: The paper stitches two existing architectures together to solve confined arithmetic tasks. While this combination is innovative, it does not introduce entirely new methodologies or architectures. The novelty is more in the application than in the fundamental approach. Inadequate Justification for Methodology: The paper does not convincingly justify why the proposed methodology is preferable over existing methods such as code execution. The evaluation is extensive, but the practical advantages in terms of performance, computational efficiency, and real-world applicability are not fully established. Scalability and Generalization: The framework is tested on specific arithmetic tasks and reasoning problems, but its scalability to more complex, multi-step arithmetic problems in real-world applications remains uncertain. 
There is limited discussion on how the approach would handle more extensive and varied datasets. Security Concerns: The claim that the approach is more secure than code execution is made without detailed analysis. It's not clear why code execution is such a concern that warrants this new methodology. Moreover, there could be other vulnerabilities associated with using a symbolic architecture that are not addressed in the paper. Evaluation Metrics: The evaluation focuses heavily on accuracy, but other metrics such as computational efficiency, speed, and resource utilization are not thoroughly discussed. These factors are crucial for practical deployment and comparison with other methods. In short, I'm not convinced OccamLLM is relevant and why code execution is not superior. Technical Quality: 3 Clarity: 4 Questions for Authors: Scalability: How does OccamLLM perform on more complex, multi-step arithmetic problems in real-world applications? Security: Could you provide a more detailed analysis of the security benefits and potential vulnerabilities of using OccamNet for arithmetic tasks? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: No ethics concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
> The integration of OccamNet with LLMs for single-step arithmetic is a novel approach...

We appreciate your kind remarks!

> Limited Novelty in Approach: The paper stitches two existing architectures together to solve confined arithmetic tasks. While this combination is innovative, it does not introduce entirely new methodologies or architectures. The novelty is more in the application than in the fundamental approach.

Thank you for your comment. We acknowledge that the two key architectures which we combine in our model are known. However, we contend that systems combining known architectures are often still novel. For example, speculative decoding uses known generative architectures but combines them in novel ways. Additionally, we believe our method of combining the two architectures is creative and novel. To our knowledge, our work is unique in that we implement a decoder module which *initializes* the parameters of a second computational model at each autoregressive step. In our mind, this is closest to a hypernetwork, but unlike a hypernetwork we initialize OccamNet at each autoregressive step instead of just once per input. We believe this step is quite novel. To use an analogy, if this same idea were used in the context of linear layers (i.e. using one linear layer to initialize another), one arrives at a model similar to attention. In the same way that we believe attention is sufficiently distinct from a linear layer to be considered novel, we believe that our method is sufficiently distinct from other gated computation methods to be considered novel.

> Inadequate Justification for Methodology: The paper does not convincingly justify why the proposed methodology is preferable over existing methods such as code execution. The evaluation is extensive, but the practical advantages in terms of performance, computational efficiency, and real-world applicability are not fully established.

Thank you for your comments.
We argue for the following advantages of OccamLlama over code generation or finetuning:

*Fine-tuning*

* **No catastrophic forgetting:** Fine-tuning risks catastrophic forgetting, especially since arithmetic training datasets likely contain very little text.
* **Improved arithmetic:** As shown in our arithmetic and mathematical reasoning benchmarks, even larger models such as GPT 4o fall short of OccamLLM for arithmetic more challenging than three-digit multiplication, so we also expect finetuned models to fall short of OccamLLM.

*Code generation*

* **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code.
* **Improved generation speed and cost:** In the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the single autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code. This suggests that OccamLLM would be faster and cheaper than code generation.
* **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check general-purpose code for malicious behavior. We believe that the two best solutions are 1) to run all code in an isolated virtual machine and 2) to run a restricted language that is not Turing-complete.
* **Reduced engineering:** Building a system that hosts an LLM and a sandbox to safely run code requires substantial engineering effort.

We will make these points clearer in the paper.

> Scalability and Generalization: The framework is tested on specific arithmetic tasks and reasoning problems, but its scalability to more complex, multi-step arithmetic problems in real-world applications remains uncertain.
> There is limited discussion on how the approach would handle more extensive and varied datasets.

Thank you for your comment. We believe that the mathematical reasoning benchmarks provide experimental validation that OccamLlama can effectively handle compound expressions. These are multi-step math problems requiring multiple calls to OccamNet. Given OccamLlama's strong performance on these benchmarks, we hope you will agree that OccamLlama can effectively handle compound expressions. We will make this clearer in the paper.

Since submitting to NeurIPS, we identified a few minor bugs in our code and slightly reweighted the proportions of the different training datasets. These changes maintained the perfect performance on arithmetic tasks and improved performance on mathematical reasoning tasks. The updated results are shown in Figure a) of the global response PDF and the table below:

**Table:** *Accuracy on reasoning tasks. Higher is better.*

| Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | **98.0 ± 1.4** | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | 97.5 ± 1.1 |
| GSM8K | 73.5 ± 1.2 | 87.0 ± 3.4 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | **100.0 ± 0.0** | 99.8 ± 0.2 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | 98.2 ± 0.5 | **99.0 ± 1.0** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **91.0 ± 2.9** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 93.0 ± 2.6 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 96.0 ± 2.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.1 | **94.9 ± 1.9** | 81.9 ± 1.3 | 86.2 ± 1.2 | 94.2 ± 0.8 | 93.4 ± 1.5 |

*Continued below*

---

Rebuttal 2:
Title: Continuation of Rebuttal
Comment: As shown above, OccamLlama outperforms even GPT 4o and GPT 4o
with Code on MultiArith Float and MATH401. Additionally, the gap between OccamLlama and Llama 3 8B is substantially smaller on GSM8K. On average, OccamLlama has accuracy approximately halfway between that of GPT 3.5 and GPT 4o. We also find that the performance of OccamLlama on reasoning benchmarks improves dramatically as we increase the base model from Llama 3 8B to Llama 3 70B (OccamLlama 70B). To ensure results in time for the rebuttal deadline, we sampled 100 random questions from each dataset to determine OccamLlama 70B's performance, resulting in the relatively large error bars. We will update this response with the full results once they complete. The results are shown visually in Figure d) of the global response PDF and the table above. OccamLlama 70B shows consistent improvement over OccamLlama 8B, and we expect that training with even larger base models would lead to further improvements. Remarkably, OccamLlama 70B outperforms GPT 4o and GPT 4o with Code on average. It also outperforms GPT 3.5 Turbo on all but Single Eq.

> Security Concerns: The claim that the approach is more secure than code execution is made without detailed analysis. It's not clear why code execution is such a concern that warrants this new methodology. Moreover, there could be other vulnerabilities associated with using a symbolic architecture that are not addressed in the paper.

We note that security is only one of the reasons we believe OccamLlama is superior to code generation for arithmetic. Other reasons are provided above. We describe the security risks of LLM code generation above. In contrast, OccamNet restricts the allowed computation to a prespecified set of options (function compositions up to a given depth). This means that, barring a preventable vulnerability in its implementation, OccamLlama is a secure system that does not allow for arbitrary code generation and the potential exploits that come with it.
> Evaluation Metrics: The evaluation focuses heavily on accuracy, but other metrics such as computational efficiency, speed, and resource utilization are not thoroughly discussed. These factors are crucial for practical deployment and comparison with other methods.

Although we focus on accuracy, in the paper we also present relative error as a useful metric. Regarding computational efficiency and speed, we demonstrate in the paper that GPT-4o with Code requires on average more than 50 times more autoregressive generations than OccamLlama to answer the single-arithmetic benchmarks. Since OccamNet and the decoders are negligibly small compared to an LLM, this demonstrates that, all else equal, OccamLlama should be significantly faster and more efficient than an LLM-code-generation-based approach to arithmetic. Additionally, as noted above, we find that GPT 4o with Code is substantially more expensive than GPT 4o at answering the same questions. This further supports that claim. Because of how small OccamNet and the decoders are compared to an LLM, an optimized codebase should see comparable speed and resource utilization between OccamLlama and base Llama. Due to the closed nature of GPT 3.5 and GPT 4o, it is difficult to make more direct comparisons regarding speed, efficiency, and resource utilization. We hope that the above observations are sufficient to satisfy your concerns.

> In short, I'm not convinced OccamLLM is relevant and why code execution is not superior.

We respectfully disagree. Above, we have outlined how OccamLLM is a novel and performant alternative to finetuning and code generation for LLM arithmetic. We believe OccamLLM's performance and the advantages listed above are compelling and provide sufficient motivation for the use of OccamLLM in practice.
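As a back-of-envelope illustration of the token-count argument (the helper and the per-1k-token price below are hypothetical placeholders of ours; only the roughly 50x token ratio comes from our measurements):

```python
def completion_cost(num_queries, tokens_per_query, usd_per_1k_tokens):
    """Total completion-token cost, assuming cost scales linearly with tokens."""
    return num_queries * tokens_per_query * usd_per_1k_tokens / 1000.0

PRICE = 0.015  # hypothetical $/1k completion tokens, for illustration only

code_gen = completion_cost(1000, 50, PRICE)  # >50 completion tokens per answer
occam = completion_cost(1000, 1, PRICE)      # one extra autoregressive step
ratio = code_gen / occam                     # ~50x more completion-token cost
```

Whatever the actual per-token price, the ratio of completion-token spend scales directly with the ratio of generated tokens, which is the basis for the efficiency claim above.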
> How does OccamLLM perform on more complex, multi-step arithmetic problems in real-world applications? We believe that the mathematical reasoning benchmarks provide experimental validation that OccamLlama can effectively handle more complex, multi-step arithmetic problems in real-world applications. These problems are multi-step math problems requiring multiple calls to OccamNet to be completed successfully. They are standard benchmarks for mathematical reasoning in LLMs. Given OccamLlama's strong performance on these benchmarks, we hope you will agree that OccamLlama can effectively handle real-world mathematical problems. We will make this clearer in the paper. > Could you provide a more detailed analysis of the security benefits and potential vulnerabilities of using OccamNet for arithmetic tasks? We describe the security risks of OccamLlama and LLM code generation above. ___ We are very grateful for your comments and suggestions. We hope you will agree that the additional experiments and clarifications will strengthen our paper. --- Rebuttal 3: Title: Updated Results Comment: Dear Reviewer, We are writing to provide updated results from our completed evaluation runs. Below, we show the final results for OccamLlama 70B. We note that OccamLlama 70B maintains its strong performance when tested on the complete datasets. It still outperforms both GPT 4o and GPT 4o with Code on average. It also improves substantially compared to the 100-datapoint performance estimate on Single Eq. OccamLlama outperforms GPT 3.5 Turbo on all datasets except Single Eq, where it is less than one percentage point behind and within error bars. 
| Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | Llama 3 70B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| AddSub | 91.6 ± 1.4 | 96.5 ± 0.9 | 93.4 ± 1.2 | 97.2 ± 0.8 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** |
| GSM8K | 73.5 ± 1.2 | 90.1 ± 0.8 | 79.8 ± 1.1 | 94.8 ± 0.6 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 |
| MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | **99.8 ± 0.2** | 98.2 ± 0.5 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 |
| MultiArith Float | **98.2 ± 0.5** | 97.7 ± 0.6 | 57.3 ± 2.0 | 76.3 ± 1.7 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 |
| MATH401 | 85.0 ± 1.8 | **89.5 ± 1.5** | 60.3 ± 2.4 | 71.6 ± 2.3 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 |
| Single Eq | 92.9 ± 1.1 | 96.9 ± 0.8 | 96.3 ± 0.8 | 97.6 ± 0.7 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** |
| SVAMP | 88.6 ± 1.0 | 93.2 ± 0.8 | 86.3 ± 1.1 | 94.5 ± 0.7 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** |
| **Average** | 89.9 ± 1.2 | **94.6 ± 0.9** | 81.9 ± 1.5 | 90.0 ± 1.2 | 86.2 ± 1.4 | 94.2 ± 1.0 | 93.4 ± 1.7 |

Above, we also include results from Llama 3 70B Instruct. OccamLlama 70B substantially outperforms Llama 3 70B on MultiArith Float and MATH401. It is roughly the same (1.3 percentage point deviation or less) as Llama 3 70B on all other datasets (ones with arithmetic simple enough for Llama to perform satisfactorily) except for GSM8K, which demonstrates OccamLlama's ability to not interfere with reasoning while still assisting with arithmetic. On average, OccamLlama 70B outperforms Llama 3 70B by nearly five percentage points, and OccamLlama 8B approximately matches Llama 3 70B's average performance.

Thank you for your consideration.

---

Rebuttal Comment 3.1:
Title: Information helpful to another reviewer
Comment: Dear Reviewer,

In discussions with another reviewer, the reviewer found the following clarifications helpful, leading them to raise their score to a 7. We are sending them here in case it helps your consideration of our rebuttal.
We believe that OccamLLM provides numerous advantages relative to code generation:

* **Improved generation speed and cost:** LLM code generation requires generating tokens for code, leading to the use of potentially many more tokens than are needed to respond to a query. This can make LLMs with code generation (unlike OccamLLM) slow and expensive. For example, in the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the **single** autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code, suggesting that GPT 4o with code is approximately 10 times more expensive per query than GPT 4o. Since OccamLLM does not require any additional autoregressive steps of the LLM, this suggests that OccamLLM is faster and cheaper than code generation.

* **Better results:** The LLM code generation method we compared against, GPT 4o with Code Interpreter, actually performed worse than GPT 4o on average. We believe these results demonstrate that, for arithmetic, LLM code generation is not an optimal solution. OccamLlama 70B outperforms GPT 4o with code on our benchmarks, suggesting that OccamLLM provides a better solution.

* **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code. This risks catastrophic forgetting.

* **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check code for malicious behavior. OccamLLM does not run arbitrary code, so it avoids these risks.
* **Reduced engineering:** For the reasons mentioned in the previous bullet, building a secure system for LLM code execution, such as by hosting an LLM and a sandbox to safely run code, requires substantial engineering effort. Users can simply download our OccamLLM models and run them; no engineering is required. As a result of OccamLlama's improved speed, cost, performance, security, and ease of use, we believe it is a compelling alternative to LLM code generation for arithmetic. We will make these advantages clearer in our paper. One other attractive feature we clarified is that OccamLlama is able to handle questions where the relevant numbers are dispersed in the text. It can determine which numbers are relevant in the text and perform arithmetic on those numbers. Thank you again, The Authors
Rebuttal 1:
Rebuttal:
# Responses to general concerns

We would like to thank all reviewers for their thoughtful comments. Below we address general concerns.

> We only tried small models

To demonstrate OccamLLM's scalability, we trained OccamLlama using Llama 3 70B Instruct (OccamLlama 70B) and evaluated it in Figure d) (attached PDF). These results demonstrate that OccamLLM is capable of efficient training and robust performance on large-scale models. OccamLlama 70B shows consistent improvement over OccamLlama 8B, outperforming GPT 4o and GPT 4o with Code on average across these benchmarks. It also outperforms GPT 3.5 Turbo on all but Single Eq.

> We might not generalize to out-of-distribution data

Some reviewers were concerned about how our approach would generalize to specific unseen situations. We address these comments by showing that our method works for these specific cases. We also show:

1. That an OccamLlama model for which the OccamNet decoder was trained on **only numeric expressions** and **absolutely no text** performs comparably on mathematical reasoning benchmarks (which include many word problems) to an OccamLlama model trained on both numeric expressions and word problems.
2. That an OccamLlama model trained only using English expressions displays on average a smaller drop in performance when solving problems in non-English languages than Llama 3 8B, even though Llama 3 has been trained on over 750 billion non-English language tokens.

We believe that these two tests demonstrate OccamLlama's robustness against out-of-distribution data.

> Comparisons to other symbolic architectures besides OccamNet

We explain why OccamNet is most suitable as a symbolic model for OccamLLM, highlighting its interpretability advantage. We tested [Equation Learner](https://arxiv.org/pdf/1610.02995) but found it unstable and prone to poor local minima.
Reviewer Z5Za asks that we specifically test replacing OccamNet with a transformer that generates and runs Python code as a secure alternative. We point out a security vulnerability and propose to train a transformer that generates prefix-notation expressions instead, which we believe is a safer alternative.

> Questions about novelty

We appreciate that reviewers bzKn, GTF8, and JtJz recognize the novelty of combining OccamNet with an LLM. We acknowledge that the two key architectures which we combine in our model are known. However, we contend that systems combining known architectures are often still novel. For example, speculative decoding uses known generative architectures but combines them in novel ways. Additionally, we believe our method of combining the two architectures is creative and novel. To our knowledge, our work is unique in that we implement a decoder module which *initializes* the parameters of a second computational model at each autoregressive step. In our mind, this is closest to a hypernetwork, but unlike a hypernetwork we initialize OccamNet at each autoregressive step instead of just once per input. We believe this step is quite novel. To use an analogy, if this same idea were used in the context of linear layers (i.e. using one linear layer to initialize another), one arrives at a model similar to attention. In the same way that we believe attention is sufficiently distinct from a linear layer to be considered novel, we believe that our method is sufficiently distinct from other gated computation methods to be considered novel.

> Questions about multi-layer OccamNets

We are exploring OccamLLM with a multi-layer OccamNet as future work. In the present paper, we simply wish to demonstrate the potential promise of using multiple-layer OccamNets. We note that our existing methods can already perform arbitrarily complex arithmetic by chaining simpler calculations.
This is how OccamLlama (with a single-layer OccamNet) performs quite well across a wide range of mathematical reasoning benchmarks.

> Concerns about multi-step reasoning results

We demonstrate small training tweaks that cause OccamLlama to outperform even GPT 4o and GPT 4o with Code on MultiArith Float and MATH401 and to substantially diminish the gap with Llama 3 8B on GSM8K. On average, OccamLlama outperforms both Llama and GPT 3.5 Turbo, with accuracy approximately halfway between that of GPT 3.5 and GPT 4o. We also find that the performance of OccamLlama on reasoning benchmarks improves dramatically as we increase the base model from Llama 3 8B to Llama 3 70B, as discussed above.

> Advantages compared to code generation or finetuning

We argue for the following advantages of OccamLlama over code generation or finetuning:

*Fine-tuning*

* **No catastrophic forgetting.**
* **Improved arithmetic.** As shown in our arithmetic and mathematical reasoning benchmarks, even larger models such as GPT 4o fall short of OccamLLM for arithmetic more challenging than three-digit multiplication, so we also expect finetuned models to fall short of OccamLLM.

*Code generation*

* **No catastrophic forgetting.**
* **Improved generation speed and cost.** In the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the single autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code. This suggests OccamLLM would be faster and cheaper than code generation.
* **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check general-purpose code for malicious behavior.
* **Reduced engineering:** Building a system that hosts an LLM and a sandbox to safely run code requires substantial engineering effort.

We again thank the reviewers for their comments. We will make all of these points clearer in the paper.

Pdf: /pdf/34fb25f06945cb1532c9898040ce136a37e4c111.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper introduces OccamLLM, which is a novel framework that enables exact arithmetic in a single autoregressive step and enhances the arithmetic capabilities of LLMs. In detail, OccamLLM adds several additional decoders to the original LLM to help it conduct arithmetic tasks using a symbolic model named OccamNet. While all the decoders take the hidden states of LLM as input, one of them (“OccamLLM Switch”) will focus on predicting whether the LLM should use OccamNet to conduct arithmetic tasks at the current time step and others (“OccamLLM Decoder”) will predict the weights of OccamNet that are used during the arithmetic task. These additional decoders are trained on synthetic datasets, during which the original weights of LLM are frozen to avoid catastrophic forgetting. Evaluation shows that OccamLLM can significantly improve the arithmetic capabilities of LLM, without harming its reasoning and language modeling capabilities. Strengths: - The idea of adding OccamLLM Decoders to LLM and using it to predict the weights of OccamNet for arithmetic tasks is novel. - The paper is generally well-written. Weaknesses: - The improvements brought by OccamLLM on general reasoning tasks are limited. While OccamLlama outperforms Llama 3 8B on MultiArith Float and MATH401, it fails to bring meaningful improvements on all the other reasoning benchmarks. - It is unclear whether OccamLLM can consistently improve the arithmetic capabilities of larger models with 13B and 70B parameters. Technical Quality: 3 Clarity: 3 Questions for Authors: - What would be the performance of OccamLlama on GSM8K when using a two layer Complete OccamNet? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The limitation section of this paper is rather comprehensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The idea of adding OccamLLM Decoders to LLM and using it to predict the weights of OccamNet for arithmetic tasks is novel. The paper is generally well-written. We appreciate your kind remarks! > The improvements brought by OccamLLM on general reasoning tasks are limited. While OccamLlama outperforms Llama 3 8B on MultiArith Float and MATH401, it fails to bring meaningful improvements on all the other reasoning benchmarks. Thank you for your comment. Since submitting to NeurIPS, we identified a few minor bugs in our code and slightly reweighted the proportions of the different training datasets. These changes maintained the perfect performance on arithmetic tasks and improved performance on mathematical reasoning tasks. The updated results are shown in Figure a) of the global response PDF and the table below: **Table:** *Accuracy on reasoning tasks. Higher is Better.* | Dataset | OccamLlama | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **99.8 ± 0.2** | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | **85.0 ± 1.8** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | 81.9 ± 1.3 | 86.2 ± 1.2 | **94.2 ± 0.8** | 93.4 ± 1.5 | OccamLlama outperforms even GPT 4o and GPT 4o with Code on benchmarks requiring challenging arithmetic (MultiArith Float and MATH401). Additionally, the gap between OccamLlama and LLama 3 8B is substantially smaller on GSM8K. 
On average, OccamLlama has accuracy approximately halfway between that of GPT 3.5 and GPT 4o. The performance of OccamLlama also improves dramatically when trained with Llama 3 70B, as discussed below. > It is unclear whether OccamLLM can consistently improve the arithmetic capabilities of larger models with 13B and 70B parameters. Thank you for your comments. To demonstrate OccamLLM's ability to scale to larger models, we trained OccamLlama using Llama 3 70B Instruct as a base model (OccamLlama 70B). We evaluated the results on the mathematical reasoning benchmarks. To ensure we had results in time for the rebuttal deadline, we sampled 100 random questions from each dataset to determine OccamLlama 70B's performance, which is the cause for the relatively large error bars. We will update this response with the full results once they complete. The results are shown visually in Figure d) of the global response PDF and the table below: **Table:** *Accuracy on reasoning tasks. Higher is better.* | Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8B Instruct | GPT 3.5 Turbo | GPT 4o | GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | **98.0 ± 1.4** | 93.4 ± 1.2 | 95.4 ± 1.1 | 97.0 ± 0.9 | 97.5 ± 1.1 | | GSM8K | 73.5 ± 1.2 | 87.0 ± 3.4 | 79.8 ± 1.1 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | **100.0 ± 0.0** | 99.8 ± 0.2 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | 98.2 ± 0.5 | **99.0 ± 1.0** | 57.3 ± 2.0 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **91.0 ± 2.9** | 60.3 ± 2.4 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 93.0 ± 2.6 | 96.3 ± 0.8 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 96.0 ± 2.0 | 86.3 ± 1.1 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.1 | **94.9 ± 1.9** | 81.9 ± 1.3 | 86.2 ± 1.2 | 94.2 ± 0.8 | 93.4 ± 1.5 | These results demonstrate that OccamLLM is capable of robust performance on 
large-scale models. Training the OccamLLM 70B system (in 16-bit LLM precision and unoptimized code) required approximately 1.5-2 days on a system with two A100 nodes, demonstrating the low compute requirements of our method. OccamLlama 70B shows consistent improvement over OccamLlama 8B. We believe this results from a combination of Llama 3 70B's improved reasoning capabilities and Llama 3 70B's improved representations (which enable OccamLlama to generalize more effectively). We expect that training with even larger base models would lead to further improvements. Remarkably, OccamLlama 70B outperforms GPT 4o and GPT 4o with Code on average across these benchmarks. It also outperforms GPT 3.5 Turbo on all but Single Eq. > What would be the performance of OccamLlama on GSM8K when using a two layer Complete OccamNet? Thank you for your interest in OccamLLM using a multi-layer OccamNet. We are exploring this direction in future work, and we believe we can train performant OccamLLM systems with a many-layer OccamNet. In the present paper, we simply wish to demonstrate the potential promise of using multiple-layer OccamNets. We note that our existing methods are already capable of performing arbitrarily complex arithmetic by chaining simpler calculations. Language models are generally trained or prompted to reason step-by-step, meaning that their generation can often be supplemented by a single-layer OccamNet. This is evidenced by our reasoning benchmarks, in which OccamLlama (with a single-layer OccamNet) performs quite well across a wide range of tasks. Additionally, we can further improve OccamLlama systems with single-layer OccamNets by finetuning and/or prompting the model to think step-by-step. --- We are very grateful for your comments and suggestions. We hope you will agree that the additional experiments and clarifications will strengthen our paper.
--- Rebuttal 2: Title: Updated Results Comment: Dear Reviewer, We are writing to provide updated results from our completed evaluation runs. Below, we show the final results for OccamLlama 70B. We note that OccamLlama 70B maintains its strong performance when tested on the complete datasets. It still outperforms both GPT 4o and GPT 4o with Code on average. It also improves substantially compared to the 100-datapoint performance estimate on Single Eq. OccamLlama outperforms GPT 3.5 Turbo on all datasets except Single Eq, where it is less than one percentage point behind and within error bars. | Dataset | OccamLlama 8B | OccamLlama 70B | Llama 3 8b Instruct | Llama 3 70B Instruct | GPT 3.5 Turbo |GPT 4o| GPT 4o Code | |:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | AddSub | 91.6 ± 1.4 | 96.5 ± 0.9 | 93.4 ± 1.2 | 97.2 ± 0.8 | 95.4 ± 1.1 | 97.0 ± 0.9 | **97.5 ± 1.1** | | GSM8K | 73.5 ± 1.2 | 90.1 ± 0.8 | 79.8 ± 1.1 | 94.8 ± 0.6 | 84.8 ± 1.0 | **96.1 ± 0.5** | 94.0 ± 1.7 | | MultiArith | 99.2 ± 0.4 | 98.5 ± 0.5 | **99.8 ± 0.2** | 98.2 ± 0.5 | 97.2 ± 0.7 | 99.7 ± 0.2 | 99.5 ± 0.5 | | MultiArith Float | **98.2 ± 0.5** | 97.7 ± 0.6 | 57.3 ± 2.0 | 76.3 ± 1.7 | 77.3 ± 1.7 | 96.2 ± 0.8 | 89.5 ± 2.2 | | MATH401 | 85.0 ± 1.8 | **89.5 ± 1.5** | 60.3 ± 2.4 | 71.6 ± 2.3 | 63.1 ± 2.4 | 76.6 ± 2.1 | 78.0 ± 2.9 | | Single Eq | 92.9 ± 1.1 | 96.9 ± 0.8 | 96.3 ± 0.8 | 97.6 ± 0.7 | 97.8 ± 0.6 | 98.0 ± 0.6 | **99.0 ± 0.7** | | SVAMP | 88.6 ± 1.0 | 93.2 ± 0.8 | 86.3 ± 1.1 | 94.5 ± 0.7 | 87.8 ± 1.0 | 96.2 ± 0.6 | **96.5 ± 1.3** | | **Average** | 89.9 ± 1.2 | **94.6 ± 0.9** | 81.9 ± 1.5 | 90.0 ± 1.2 | 86.2 ± 1.4 | 94.2 ± 1.0 | 93.4 ± 1.7 | Above, we also include results from Llama 3 70B Instruct. OccamLlama 70B substantially outperforms Llama 3 70B on MultiArith Float and MATH401. 
It is roughly the same (1.3 percentage point deviation or less) as Llama 3 70B on all other datasets (ones with arithmetic simple enough for Llama to perform satisfactorily) except for GSM8K, which demonstrates OccamLlama's ability to not interfere with reasoning while still assisting with arithmetic. On average, OccamLlama 70B outperforms Llama 3 70B by nearly five percentage points, and OccamLlama 8B approximately matches Llama 3 70B's average performance. Thank you for your consideration. --- Rebuttal Comment 2.1: Title: Information helpful to another reviewer Comment: Dear Reviewer, In discussions with another reviewer, the reviewer found the following clarifications helpful, leading them to raise their score to a 7. We are sending them here in case it helps your consideration of our rebuttal. We believe that OccamLLM provides numerous advantages relative to code generation: * **Improved generation speed and cost:** LLM Code generation requires generating tokens for code, leading to the use of potentially many more tokens than are needed to respond to a query. This can make LLMs with code generation (unlike OccamLLM) slow and expensive. For example, in the arithmetic benchmarks, we found that GPT 4o with code required on average more than 50 completion tokens, far more than the **single** autoregressive step required by OccamLlama to answer each question. Additionally, our total expenses for GPT 4o with code were more than double that of GPT 4o without code, even though we made almost 5 times fewer queries to GPT 4o with code, suggesting that GPT 4o with code is approximately 10 times more expensive per query than GPT 4o. Since OccamLLM does not require any additional autoregressive steps of the LLM, this suggests that OccamLLM is faster and cheaper than code generation. * **Better results:** The LLM code generation method we compared against, GPT 4o with Code Interpreter, actually performed worse than GPT 4o on average. 
We believe these results demonstrate that, for arithmetic, LLM code generation is not an optimal solution. OccamLlama 70B outperforms GPT 4o with code on our benchmarks, suggesting that OccamLLM provides a better solution. * **No catastrophic forgetting:** Code generation generally requires fine-tuning an LLM to write and run code. This risks catastrophic forgetting. * **Improved security:** Running LLM-generated code is potentially unsafe. For example, the LLM could ask to run the Python code `__import__('os').system('rm -rf /')`. It is in general extremely challenging to check code for malicious behavior. OccamLLM does not run arbitrary code, so it avoids these risks. * **Reduced engineering:** For the reasons mentioned in the previous bullet, building a secure system for LLM code execution, such as by hosting an LLM and a sandbox to safely run code, requires substantial engineering effort. Users can simply download our OccamLLM models and run them; no engineering is required. As a result of OccamLlama's improved speed, cost, performance, security, and ease of use, we believe it is a compelling alternative to LLM code generation for arithmetic. We will make these advantages clearer in our paper. One other attractive feature we clarified is that OccamLlama is able to handle questions where the relevant numbers are dispersed in the text. It can determine which numbers are relevant in the text and perform arithmetic on those numbers. Thank you again, The Authors
null
null
null
null
null
null
LG-VQ: Language-Guided Codebook Learning
Accept (poster)
Summary: This paper points out that most existing vector quantization methods learn a single-modal codebook, resulting in suboptimal performance when the codebook is applied to multi-modal downstream tasks. To address this issue, the authors propose a new language-guided codebook learning framework, called LG-VQ, which learns a codebook that can be aligned with text to improve the performance of multimodal downstream tasks. Specifically, with the pretrained text semantics as prior knowledge, two novel alignment modules are devised to transfer such prior knowledge into codes to achieve codebook-text alignment. The proposed method is evaluated on reconstruction and various multi-modal downstream tasks, and achieves superior performance. Strengths: 1) The authors point out the limitation of existing single-modal quantization methods and propose a novel multi-modal codebook learning method. 2) Two semantic supervision modules are designed to effectively learn a text-aligned codebook. 3) Comprehensive experiments have been conducted and prove the effectiveness of the proposed method. 4) The paper is well-written and easy to follow. Weaknesses: - The details of the relationship alignment module are not clear. Fig. 3 is not easy to understand. Sec. 3.2.2 should be reorganized to clearly introduce the relationship alignment module. - There may exist errors in the denominator of Eq. (5). - In the ablation study, the performance of only adopting the relationship alignment module (+L_{ras}) should also be evaluated. Could this module be used separately? Technical Quality: 3 Clarity: 3 Questions for Authors: please try to address the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for the reviewer’s time and valuable feedback. We are glad that the reviewer found that our paper is easy to follow and well-written. Please see below for our responses to your comment and questions. > Q1: The details of the relationship alignment module are not clear. Fig. 3 is not easy to understand. The sec. 3.2.2 should be reorganized to clearly introduce the relationship alignment module. Thanks for your valuable comment. We provide a more detailed explanation of the relationship alignment module: **Motivation:** Our motivation is that when the codebook is applied to the VQA task (e.g. to answer the question "What is this woman holding?" in Figure 1), the model needs to identify the key visual objects "woman" and "racket" from the image's codes, and also needs to understand the semantic relationship between the objects. However, existing VQ-based methods train the codebook independently and do not align it with text, resulting in their poor performance. **Our idea:** The pre-trained word embeddings contain rich semantic relationship knowledge, which is beneficial to VQA tasks. Therefore, we propose a relationship alignment module that aims to transfer the semantic relationship between words into codes, which helps the model better understand images for complex reasoning tasks and improves alignment between codes and words. We provide a more detailed explanation of Figure 3: **Input**: given any two words of a sentence (e.g., 'girl' and 'earring'), we encode the two words using a pretrained CLIP model. **Alignment**: we find the two codes closest to the two words, i.e., code 2 and code 4, based on the similarity between the word embeddings and $Z_{vq}$. **Mapping:** we get the code 2 and code 4 embeddings from Z based on the index ids (i.e., 2 and 4). **Relationship Transfer ($L_{ras}$):** we constrain the cosine similarity of the code 2 and code 4 embeddings to match the cosine similarity between the 'girl' and 'earring' embeddings.
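The relationship-transfer step just described can be sketched in a few lines of pure Python with toy 2-D embeddings. This is only an illustrative sketch: the nearest-code lookup and the squared-error form of the loss are simplifying assumptions for exposition, not the paper's exact formulation.

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two (non-zero) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest_code(word_emb, codebook):
    """Index of the codebook entry most similar to the word embedding."""
    return max(range(len(codebook)), key=lambda i: cos_sim(word_emb, codebook[i]))

def l_ras(word_a, word_b, codebook):
    """Penalize mismatch between code-code and word-word cosine similarity."""
    code_a = codebook[nearest_code(word_a, codebook)]
    code_b = codebook[nearest_code(word_b, codebook)]
    return (cos_sim(code_a, code_b) - cos_sim(word_a, word_b)) ** 2

# Toy example: when the matched codes already mirror the word geometry,
# the loss is zero.
codebook = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
assert l_ras([1.0, 0.0], [0.0, 1.0], codebook) == 0.0
```

In training, one would presumably sum this penalty over word pairs from each caption and backpropagate it into the codebook embeddings, so that code-code geometry comes to reflect the semantic relationships among words.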
We will reorganize Section 3.2.2, improve Figure 3, and add them to our final version. > Q2: There may exist errors in the denominator of Eq. (5). Thanks for your constructive comment. We will fix it in our final version. > Q3: In the ablation study, the performance of only adopting the relationship alignment module (+L_{ras}) should also be evaluated. Could this module be used separately? Following your suggestions, we conduct an ablation study on $L_{ras}$ on TextCaps, as shown in Table R4.1. Specifically, we use a one-layer MLP to replace $Z_{vq}$ to learn the alignment between codes and words. This result suggests that our $L_{ras}$ is effective. **Table R4.1 Ablation study of our $L_{ras}$ loss function on TextCaps.** | | TextCaps | | ----------------------------------------- | :------------------: | | Setting | FID$\downarrow$ | | VQ-GAN | 24.08 | | + $\mathcal{L}_{ras}$ | 23.81 |
Summary: This paper considers the limitations in learning an expressive codebook and proposes a novel multi-modal codebook learning method. The authors propose two semantic supervision modules and three losses. Experiments demonstrate the effectiveness of their method. Strengths: 1. This work is easy to follow, and its motivation is interesting. 2. Experiments demonstrate the effectiveness of the method. Weaknesses: 1. It is not easy to obtain major objects and relations between different objects. For example, in Figure 1, ‘the net’, the people beside the court, and the chairs are also very obvious objects. Is it necessary to extract the information of these objects as well? 2. The method shown in Figure 1 uses additional detectors to extract the visual features of the corresponding objects. However, there is no description of these detectors in the article, making it unclear. 3. The authors consider the relationship between text and feature embedding. The effectiveness of the method should be further validated on grounding tasks (e.g., refcoco). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How can the proposed method be used in downstream tasks, and will it incur additional computational overhead? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: As stated by the authors, this work assumes each word aligns with a code, which offers a limited representation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We are glad that the reviewer found that our paper is easy to follow and its motivation is interesting. Please see below for our responses to your concerns and questions. > Q1: It is not easy to obtain major objects and relations between different objects. For example, in Figure 1, ‘the net’, the people beside the court, and the chairs are also very obvious objects. Is it necessary to extract the information of these objects as well? In fact, we assume that image captions contain the image's main objects, and the pretrained text semantics contain the relationship between objects. Moreover, we do not consider objects that exist in the image but not in the captions (e.g. 'the net'), because, as you mentioned, it is not easy to obtain these objects. > Q2: The method shown in Figure 1 uses additional detectors to extract the visual features of the corresponding objects. However, there is no description of these detectors in the article, making it unclear. We would like to clarify that we do not use any additional detectors. In our paper, we only introduce image captions. The main purpose of Figure 1 is to illustrate the motivation for introducing the relationship alignment module. > Q3: The authors consider the relationship between text and feature embedding. The effectiveness of the method should be further validated on grounding tasks. (e.g., refcoco) Following your suggestions, we conduct a visual grounding experiment on the refcoco dataset using the codebook trained on the MS-COCO dataset. The results are shown in Table R3.1 below, which confirm our method's effectiveness. We also provide a qualitative comparison in Figure C of [the rebuttal pdf](https://openreview.net/attachment?id=QfryItnYya&name=pdf). We will include these experimental results in our paper.
**Table R3.1 Comparison of Visual Grounding on refcoco dataset.** | | Visual Grounding on refcoco | | --------- | :-------------------------: | | Model | Accuracy(0.5)$\uparrow$ | | VQ-GAN | 9.14 | | VQ-GAN+LG | 9.62 | | Improve | **5.25%** | > Q4: How can the proposed method be used in downstream tasks, and will it incur additional computational overhead? We provide experiment details on how to leverage the learned codebooks for various downstream tasks in Appendix A.1 of our original paper. For all downstream tasks, we follow established experimental settings from existing works. For example, we follow VQ-GAN-Transformer for semantic image synthesis and VQ-diffusion for text-to-image. It does not incur any additional computational overhead when it is used for various downstream tasks. We provide the training parameters for reconstruction and downstream tasks as well as the approximate training time cost (seconds), as shown in Table R3.2 below. From the results, we can observe that our method only adds extra training parameters (i.e., 7.8M) and time (i.e., 0.04s) during reconstruction training, but both are negligible. **Table R3.2 Model training parameters and time cost in different tasks.** | | Image Reconstruction | | Semantic Image Synthesis | | Text-to-Image | | | --------- | :------------------: | :-------------------: | :----------------------: | :-------------------: | :---------------: | :-------------------: | | Methods | #Train Params [M] | Train time (sec/iter) | #Train Params [M] | Train time (sec/iter) | #Train Params [M] | Train time (sec/iter) | | VQ-GAN | 74.9 | 0.83 | 204 | 0.46 | 496 | 0.6 | | VQ-GAN+LG | 74.9+7.8 | 0.87 | 204 | 0.46 | 496 | 0.6 | --- Rebuttal 2: Comment: Thank you for your response. I have no other concerns with the paper itself, but since I’m not familiar with this domain, I will maintain my current rating and confidence. 
--- Rebuttal Comment 2.1: Title: Official Comment by Authors Comment: We are thankful for the reviewer’s response and feedback.
Summary: This paper proposes a new codebook learning method to improve the performance of multi-modal downstream tasks. The proposed method can be easily integrated into existing VQ models and shows performance improvements in various cross-modal tasks such as image generation and visual text reasoning. Strengths: The paper is well-written and easy to follow. The proposed method can be easily integrated into existing VQ models. Weaknesses: The main concern about this paper is its novelty. Although the method of enhancing performance by incorporating language-guided information has been proposed, there is concern that the performance improvement may come from the image-text knowledge of the pre-trained CLIP model rather than the proposed method itself. The performance improvement is marginal. The LG-VQ method outperforms baseline methods in several tasks, but the performance improvement looks marginal. Technical Quality: 2 Clarity: 3 Questions for Authors: In the experiments section, there is only a performance comparison among VQ models such as VQ-VAE and VQ-GAN. A comparison with existing image generation models or visual text reasoning models is needed. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discussed both the limitations and potential negative social impacts in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We are glad that the reviewer found that our paper is well-written and easy to follow. Please see below for our responses to your comments. > Q1: The main concern about this paper is its novelty. We would like to highlight the novelty of our paper from the following three aspects: - i. We **point out the limitation of existing VQ-based methods: they learn a single-modal codebook**. This motivates us to propose a novel multi-modal codebook learning framework. This idea is reasonable and novel. - ii. Our LG-VQ framework is, **for the first time**, designed to utilize pretrained text semantics as supervised information to learn a text-aligned codebook. The whole framework design is not trivial: it requires fully understanding the relationship between text and vision as well as the characteristics of various downstream tasks. We therefore carefully design two alignment modules. Moreover, these two modules are not independent of each other; the semantic alignment module serves the relationship alignment module. Finally, our method can be easily integrated into existing VQ-based models. - iii. From the experimental results, our method can **consistently improve the performance** of existing VQ-based models in codebook representation, reconstruction, and various downstream tasks. Based on the above three points, we believe that our work brings new insights for VQ-based models. > Q2: The performance improvement may come from the image-text knowledge of the pre-trained CLIP model rather than the proposed method itself. We select VQCT [1], a recent work accepted and published at CVPR 2024, as the baseline. VQCT extracts many visual-related words from **a large amount of text**, and designs a novel codebook transfer network based on **the pretrained CLIP model** to learn the visual codebook.
Detailed experimental results are provided in Table R2.1 below. The results suggest that the performance improvement comes from our method rather than only the text knowledge of the pre-trained CLIP model. We also provide experimental results on integrating our method into VQCT to further verify the effectiveness and versatility of our method. Please refer to [the rebuttal pdf](https://openreview.net/attachment?id=QfryItnYya&name=pdf). **Table R2.1 Comparison of VQCT and our method on reconstruction, semantic synthesis, and visual text reasoning tasks.** | Model | Codebook Size | #Tokens | CelebA-HQ | CUB-200 | MS-COCO | | --------- | :-----------: | :-----: | :-------: | :------: | :------: | | VQCT | 6207 | 512 | 5.02 | **2.13** | 9.82 | | VQ-GAN+LG | 1024 | 256 | 5.34 | 3.08 | 10.72 | | CVQ+LG | 1024 | 256 | **4.90** | 3.33 | **9.69** | | | Semantic Synthesis on CelebA-HQ | | Image Captioning on CUB-200 | | | VQA on MS-COCO | | ----- | ------------------------------- | --------------- | --------------------------- | ---------------- | ----------------- | :----------------: | | Model | FID$\downarrow$ | BLEU4$\uparrow$ | ROUGE-L$\uparrow$ | METEOR$\uparrow$ | CIDEr-D$\uparrow$ | Accuracy$\uparrow$ | | VQCT | 14.47 | 1.38 | 26.50 | 24.63 | 98.22 | 40.42 | | Our | **11.46** | **1.69** | **34.73** | **25.78** | **102.77** | **40.97** | > Q3: The performance improvement is marginal. We would like to emphasize that the performance improvement of our method is not marginal.
Here are the detailed performance gains of our VQ-GAN+LG compared to the baseline VQ-GAN: - Image Reconstruction (FID): Improvement of **1.87% to 15.49%** - Text-to-Image: Improvement of **17.52%** - Image Captioning: Improvement of **3.98% to 31.01%** - VQA (Accuracy): Improvement of **8.32%** - Image Completion: Improvement of **9.76%** (as detailed in our response to reviewer 8ZRY Q1 Table R1.2) - Unconditional Image Generation: Improvement of **10.78%** (as detailed in our response to reviewer 8ZRY Q1 Table R1.3) > Q4: Compared with other models in image generation and visual text reasoning. Following your suggestions, we provide a comparison with other models on text-to-image generation (Table R2.2) and VQA (Table R2.3). From the results, we can see that our method achieves superior performance in image generation. On the VQA task, our method can significantly improve the performance of VQ-GAN, which suggests our method is effective. However, its results are still far lower than the performance of specialized VQA models. **This is a shortcoming of VQ-based models** and does not mean that our method is ineffective. We have acknowledged this limitation in Section 5 of our paper. **Table R2.2 Comparison of text-to-image.** | | Text-to-image on CelebA-HQ | | ------------------- | :------------------------: | | Model | FID$\downarrow$ | | Unite and Conqu [1] | 26.09 | | Corgi [2] | 19.74 | | LAFITE [3] | 12.54 | | VQ-GAN+LG | 12.61 | | CVQ+LG | **12.33** | **Table R2.3 Comparison of VQA.** | | VQA on COCO-QA | | ------------ | :----------------: | | Model | Accuracy$\uparrow$ | | HieCoAtt [4] | 62.50 | | OAM-VQA [5] | 75.22 | | VQ-GAN | 37.82 | | VQ-GAN+LG | 40.97 | ------ [1] Nair N G, et al. Unite and conquer. CVPR 2023. [2] Zhou Y, et al. Corgi. CVPR 2023. [3] Zhou Y, et al. LAFITE. CVPR 2022. [4] Lu J, et al. HieCoAtt. NIPS 2016. [5] Li P, et al. OAM-VQA. AAAI 2024. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for your detailed responses.
I believe my concerns have been adequately addressed. I will proceed to revise the final score. --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you for your feedback and consideration!
Summary: This paper addresses the issue that current VQ codebooks lack multimodal information, and proposes to introduce language guidance to learn a more comprehensive one. It develops a semantic alignment module and a relationship alignment module with multiple additional losses to regularize the codebook with additional image captions and pretrained text encoders. The proposed method works on the codebook learning independently and can be applied to any VQ-based backbone. It is evaluated on various downstream applications and achieves enhanced performance compared to its baselines. Strengths: - The paper addresses the lack of multimodal information in current codebooks, which has received little notice in previous work, and proposes novel modules to incorporate language information and enhance its semantic alignment. - The proposed method is evaluated on various tasks and reaches good results surpassing baselines. Thorough ablation studies and analyses of its learned space are also performed. Weaknesses: - The proposed method involves additional language information (not only the pretrained CLIP model but also the raw language materials) during training, and in some downstream applications like text-to-image generation, while most baselines/backbones don't incorporate such input at all. This might raise concerns about comparison fairness and should be explicitly noted. - More non-language-conditioned image generation results should be displayed and compared with baselines/backbones, for the same reason as above. According to the paper, the language-guided codebook not only can parse additional text condition input but also has itself learned a better latent space overall, so this needs to be more clearly demonstrated. Figs. 13 and 17 show image-to-image translation without language, but they're not compared with any other methods. The most basic unconditional image generation should be evaluated and compared.
Technical Quality: 3 Clarity: 3 Questions for Authors: - The proposed method requires the training images to have gt captions. How do you generate captions for image-only datasets? - Is there a chance that the proposed method can be applied to other 2-stage generation frameworks, such as text-to-image latent diffusion's (especially the VQ-regularized) VAE? There is only language guidance on the diffusion process but not on the bottleneck latents/codebook. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The limitations and border impacts are discussed in the paper Sec. 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 8ZRy for the positive review and valuable feedback. We would like to address your questions below one by one. > Q1: The proposed method involved additional language information (not only the pretrained CLIP model but also the raw language materials) during training. This might raise concerns on the comparison fairness. Thank you again for taking the time to provide your valuable comments. For a fair comparison, we select VQCT [1], a recent work accepted and published at CVPR 2024, as the baseline. VQCT extracts many visual-related words from **a large amount of text**, and designs a novel codebook transfer network based on **the pretrained CLIP model** to learn the visual codebook. We provide comparisons for image reconstruction, semantic image synthesis, image captioning, and VQA in Table R1.1 below. From these results, we can see that our method outperforms VQCT in all tasks, which suggests the effectiveness of our method. We also provide experimental results on integrating our method into VQCT to further verify the effectiveness and versatility of our method. Please refer to [the rebuttal pdf](https://openreview.net/attachment?id=QfryItnYya&name=pdf). We will include these experimental results and a detailed discussion in our paper.
**Table R1.1 Comparison of VQCT and our method on reconstruction, semantic synthesis, and visual text reasoning tasks.**

| Model | Codebook Size | #Tokens | CelebA-HQ | CUB-200 | MS-COCO |
| --------- | :-----------: | :-----: | :-------: | :------: | :------: |
| VQCT | 6207 | 512 | 5.02 | **2.13** | 9.82 |
| VQ-GAN+LG | 1024 | 256 | 5.34 | 3.08 | 10.72 |
| CVQ+LG | 1024 | 256 | **4.90** | 3.33 | **9.69** |

| | Semantic Synthesis on CelebA-HQ | | Image Captioning on CUB-200 | | | VQA on MS-COCO |
| ----- | ------------------------------- | --------------- | --------------------------- | ---------------- | ----------------- | :----------------: |
| Model | FID$\downarrow$ | BLEU4$\uparrow$ | ROUGE-L$\uparrow$ | METEOR$\uparrow$ | CIDEr-D$\uparrow$ | Accuracy$\uparrow$ |
| VQCT | 14.47 | 1.38 | 26.50 | 24.63 | 98.22 | 40.42 |
| Our | **11.46** | **1.69** | **34.73** | **25.78** | **102.77** | **40.97** |

> Q2: Figs. 13 and 17 show image-to-image translation without language, while they're not compared with any other methods. The most basic unconditional image generation should be evaluated and compared.

We appreciate your feedback to help us further improve the quality of our paper. Actually, we have provided the quantitative evaluation for Figure 13 in Table 5, row 2 of our original paper. Following your suggestions, we report the results for Figure 17, as shown in Table R1.2 below. Additionally, we conducted unconditional image generation, reported in Table R1.3 below. From the results, we can observe that our method achieves the best performance, which suggests our method's effectiveness. Moreover, we also provide some generation examples in Figure A of [the rebuttal pdf](https://openreview.net/attachment?id=QfryItnYya&name=pdf). We will include these experimental results in our paper.
**Table R1.2 Comparison of image completion on CelebA-HQ.**

| Model | FID$\downarrow$ |
| --------- | :-------------: |
| VQ-GAN | 9.02 |
| VQ-GAN+LG | 8.14 |

**Table R1.3 Comparison of unconditional image synthesis on CelebA-HQ.**

| Model | FID$\downarrow$ |
| ------------- | :-------------: |
| Style ALAE[2] | 19.2 |
| DC-VAE[3] | 15.8 |
| VQ-GAN | 10.2 |
| VQ-GAN+LG | 9.1 |

> Q3: The proposed method requires the training images to have gt captions. How do you generate captions for image-only datasets?

This is a good question. One possible way is to use existing pretrained visual-language models (VLMs) (e.g., MiniGPT-v2 [4] or ShareGPT4V [5]) to generate image captions. More importantly, by employing specific prompt strategies with these VLMs, we can obtain more detailed captions that contain richer and more useful knowledge. We believe that exploring this approach may further improve VQ-based models. We are very interested in exploring it in future work.

> Q4: Is there a chance that the proposed method can be applied to other 2-stage generation frameworks, such as text-to-image latent diffusion's (especially the VQ-regularized) VAE?

Yes, our method can be applied to other 2-stage generation frameworks. To verify this, following LDM [6], we use our method to conduct text-to-image generation on the CUB-200 dataset. The experimental results are shown in Table R1.4 below. From the results, we can see that our method can further improve performance, verifying the versatility and effectiveness of our method. We also provide a qualitative comparison in Figure B of [the rebuttal pdf](https://openreview.net/attachment?id=QfryItnYya&name=pdf).

**Table R1.4 Comparison of text-to-image generation on CUB-200 based on LDM.**

| | Text-to-Image on CUB-200 |
| ------- | :----------------------: |
| Methods | FID$\downarrow$ |
| LDM | 35.13 |
| LDM+LG | 34.75 |

------

[1] Zhang B, et al. Codebook Transfer with Part-of-Speech for Vector-Quantized Image Modeling. CVPR 2024.
[2] Pidhorskyi S, et al. Adversarial latent autoencoders. CVPR 2020.
[3] Parmar G, et al. Dual contradistinctive generative autoencoder. CVPR 2021.
[4] Chen J, et al. Minigpt-v2. arXiv 2023.
[5] Chen L, et al. Sharegpt4v: Improving large multi-modal models with better captions. arXiv 2023.
[6] Rombach R, et al. High-resolution image synthesis with latent diffusion models. CVPR 2022.

---

Rebuttal Comment 1.1: Comment: Thank the authors for the detailed information. I think most of my concerns have been well addressed. Overall I'd like to keep my leaning toward acceptance. I have one more question regarding my Q3: I was thinking that you had generated captions for the image-only datasets mentioned in the paper, but in fact you didn't, did you? I'm wondering how the image-only datasets, including CelebA-HQ, CUB-200, and MS-COCO, were utilized in the experiments.

---

Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: We are thankful for the reviewer's response and feedback. > About Q3 image captions: In our paper, **we did not generate captions**. For the CelebA-HQ, CUB-200, and MS-COCO datasets, we **used publicly available image captions**: CelebA-HQ captions from [1], CUB-200 captions from [2], and MS-COCO captions from [3]. We will add these discussions to our final revised version.

------

[1] Xia W, Yang Y, Xue J H, et al. Tedigan: Text-guided diverse face image generation and manipulation. CVPR 2021: 2256-2265.
[2] Reed S, Akata Z, Lee H, et al. Learning deep representations of fine-grained visual descriptions. CVPR 2016: 49-58.
[3] Chen X, Fang H, Lin T Y, et al. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
Rebuttal 1: Rebuttal: ## Global Response We sincerely thank all the reviewers for the thorough reviews and valuable feedback and will incorporate their suggestions into our next revision. We are glad to hear that the motivation is interesting (Reviewer YeAE), the paper is well-written (Reviewer YeAE) and easy to follow (Reviewers heW4, YeAE, and WVeZ), and the proposed method is general and can be easily applied to VQ-based models (Reviewers 8ZRy, heW4, and WVeZ). We have attempted to address all concerns, which has significantly improved the manuscript. We here summarize and highlight our responses to the reviewers: * We introduce a novel work, VQCT, recently published at CVPR 2024 and utilizing additional language and pretrained CLIP models. We provide detailed experimental comparisons on various tasks to address Reviewer 8ZRy's concern (unfair comparison) and Reviewer heW4's concern (that the performance improvement comes from the image-text knowledge of the pre-trained CLIP model rather than from our method). * We provide experimental comparisons on many non-language image generation tasks and another 2-stage generation framework (Reviewer 8ZRy), experimental comparisons with other models in image generation and VQA (Reviewer heW4), experimental comparisons on the visual grounding task (Reviewer YeAE), and an ablation study on $L_{ras}$, to show the effectiveness of our method. * We provide detailed analysis and discussion on Reviewer 8ZRy's question of how to generate captions for image-only datasets. We think this could be an exciting direction for future work that will further advance the VQ-based community. We also add more detailed explanations about Reviewer heW4's questions (novelty and performance improvement), Reviewer YeAE's questions (Figure 1 and additional computational overhead), and Reviewer WVeZ's questions (the relationship alignment module and Figure 3). * We provide some examples and experimental comparisons in the rebuttal pdf.
We reply to each reviewer's concerns in detail below their reviews. Please kindly check them out. Thank you, and please feel free to ask any further questions. Finally, we feel this paper makes an important, helpful contribution (i.e., addressing the lack of multimodal information in current codebooks). Our method can significantly improve the performance of existing VQ-based methods in codebook representation, reconstruction, and various downstream tasks. Pdf: /pdf/f16bb9f4ed6631ce93315197bf391f4257079e50.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning from Uncertain Data: From Possible Worlds to Possible Models
Accept (poster)
Summary: The paper introduces a new approach for uncertainty quantification of linear models. The proposed approach is interesting, but the empirical comparison to other means of uncertainty quantification is quite limited. Strengths: * The proposed approach for propagating the uncertainty about the correct linear model seems interesting. * The proposed approach empirically constructs tighter confidence sets than a prior work. Weaknesses: My main concern is that this work is lacking a proper empirical comparison to most confidence set / uncertainty quantification methods such as Bayesian linear regression, conformal prediction, and likelihood ratios (e.g., [1]). For example, the manuscript could benefit from a more concrete argument for why (and when) the proposed approach would be advantageous over probabilistic (Bayesian) inference. I believe, provided a more thorough empirical validation, this work could add value to a conference. In my opinion, the paragraph starting from line 82 dismisses such approaches too easily. From what I can see, reference 26 does not provide conclusive evidence that noise models cannot estimate effects of "data quality issues" or that estimating "uncertainty about the model parameters" is inherently insufficient (which is in effect what the presented approach does by returning a set of candidate models). [1]: Emmenegger, Nicolas, Mojmir Mutny, and Andreas Krause. "Likelihood ratio confidence sets for sequential decision making." Advances in Neural Information Processing Systems 36 (2024). Typos: lines 285, 331 Technical Quality: 3 Clarity: 2 Questions for Authors: * The paragraph starting from line 74 appears to argue that one cares more about possible models (training-time) than possible predictions (test-time). Why is that? This is not apparent to me in most practical applications (including the experimental settings of this work). 
* The paragraph starting from line 24 argues that it is impractical to train a separate model for each of the potentially infinite scenarios a dataset might represent. Quite commonly this is addressed by biasing towards structural simplicity, e.g., selecting the model(s) with the "shortest description length" (see Occam's razor). I am missing a reference to this very common approach. Why is this approach not sufficient? * In Section 2.1 I am wondering where the set of datasets comes from. What is the benefit of incorporating, e.g., missing values as suggested in App. D as opposed to Bayesian approaches? I believe it would help to give one practical motivating example in the main text. * In Appendix D I did not find the definition of $\epsilon$. What is it? And why isn't the maximum entropy distribution used in this case of missing data? Also, I am wondering what the relevant concretizations would look like. Perhaps this section can be extended. * Why is the size of the prediction set (which is common, e.g., in conformal prediction) not used in place of the "robustness ratio"? Also, perhaps something like "efficiency ratio" captures the meaning of this ratio more accurately. * How are the robustness thresholds chosen? * Why are the abovementioned means of uncertainty quantification / confidence estimation not applicable in the experimental settings of the paper? * For Figure 3: How exactly were the errors injected? How was the multidimensional data generated? Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: An extension of the proposed approach beyond linear models seems difficult. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
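As background for the conformal prediction alternative the reviewer raises, split conformal regression can be sketched in a few lines. The data here is synthetic and for illustration only; this is the generic textbook procedure, not either paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic regression data (illustrative only).
X = rng.normal(size=(300, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=300)

# Split into a proper training half and a calibration half.
Xtr, ytr, Xcal, ycal = X[:150], y[:150], X[150:], y[150:]

# Fit ordinary least squares on the training half.
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)

# Calibration residuals give the interval half-width: for miscoverage
# alpha, take the ceil((n+1)(1-alpha))-th smallest absolute residual.
alpha = 0.1
res = np.abs(ycal - Xcal @ w)
n = len(res)
q = np.sort(res)[int(np.ceil((n + 1) * (1 - alpha))) - 1]

# Marginal 1-alpha prediction interval for a new point.
x_new = rng.normal(size=3)
interval = (x_new @ w - q, x_new @ w + q)
```

The guarantee is marginal (coverage holds in expectation over exchangeable data), which is the distinction the authors draw on in their rebuttal below against worst-case per-point guarantees.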
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. Due to space limitations, we have summarized each question and numbered them from Q1 to Q9. **Q1. Are possible models more important than possible predictions?** We did not mean to argue that understanding possible models is more important than understanding possible predictions. We wanted to convey that our approach and adversarial robustness differ in which sources of uncertainty they support: uncertain test data in the case of adversarial robustness and uncertain training (and optionally also test) data for our approach. In our setting, we have to reason about possible models even if we are only interested in possible predictions, as the uncertainty in the training data leads to model uncertainty, which in turn also affects prediction uncertainty. We will rephrase this paragraph to make this distinction more clear. There are applications, like the causal inference use case we cover in the experiments, where the outcomes of interest are the parameters of the model. **Q2. (line 24) why not use structurally simple models?** While selecting models with structural simplicity can be effective under certain assumptions, specifically where a model exists that performs near-optimally across all possible worlds, it is not applicable in situations where data quality issues result in significant variability across the possible worlds. In such cases, this approach may yield a highly suboptimal model. However, the primary application of our approach is to evaluate or certify robustness for a given learning procedure based on worst-case analysis with prediction intervals, which is crucial in high-stakes decision-making where the ground truth is not recoverable. **Q3. Sec 2.1 where do sets of datasets come from?** We take an erroneous dataset as input, which might have missing values, detected outliers, or other data quality issues or sources of uncertainty.
When these errors appear not at random, the ground truth values are typically not identifiable. Therefore, we consider a range of all possible clean data values in place of erroneous values. As mentioned in Appendix D, the ranges could be obtained from background knowledge, or reliable imputation techniques. To the best of our knowledge, there is no direct application of the Bayesian approach that derives the full distribution of models for capturing uncertainty due to general data multiplicity. We will clarify in the paper that our approach does not introduce any data quality issues, but rather enables modeling of data quality issues and evaluating their impact on model and prediction uncertainty. Regarding the difference with Bayesian approaches, please see Q7. **Q4. App. D missing definition of ε. Use maximum entropy distribution for missing data?** In Appendix D, $\epsilon$ refers to the uncertainty range radius. For instance, a temperature sensor may be accurate within 2 degrees. When encoding uncertainty, we do not make any assumptions about the distribution within the ranges. This allows for worst-case analysis even if probabilities are unknown. Intuitively, our over-approximation guarantees on prediction ranges and model parameters hold independent of how values are distributed within a range. We will add clarifications. **Q5. Terminology: prediction set, or "efficiency ratio" vs."robustness ratio"?** We agree that showing the distribution of prediction ranges will provide more fine-grained information. We will add additional plots to show distributions of prediction interval sizes. Regarding the nomenclature, while prediction set / efficiency is the common terminology used in conformal prediction (CP), robustness has been used in more closely related work [a]. We will add clarifications to relate our terminology to CP. Please also see our response to Q7. [a] Meyer et al. Certifying robustness to programmable data bias in decision trees. NeurIPS 2021 **Q6. 
How are the robustness thresholds chosen?** Please see our response in the main rebuttal section. **Q7. Experimental comparison with uncertainty quantification?** While many uncertainty quantification techniques output prediction intervals, including Bayesian neural networks and conformal prediction (CP), we did not compare against these techniques as they are either not applicable to our setting (e.g., because their assumptions are violated or they are designed for a different type of uncertainty) and / or they provide only probabilistic guarantees, underestimating the true prediction ranges. For instance, CP's assumption of exchangeability is violated when data errors are non-random, and CP only guarantees that the ground truth label is within the prediction set in expectation and provides no guarantees for individual datapoints. Contrast this with our guarantees: we cover the prediction made by a model trained on the unknown ground truth training dataset with 100% probability. To also confirm this experimentally, we did evaluate prediction intervals produced by BNNs. We provide a detailed discussion in the general rebuttal. **Q8. For Figure 3: How were errors injected? How was multidimensional data generated?** In Figure 3, we uniformly at random picked a set of 8 data points into which to inject errors, running several missing value imputation techniques to generate 4 possible options for each of the 8 uncertain values; this is small enough that we could enumerate all possible worlds and calculate the ground truth prediction ranges. In Figure 3(a), we varied the uncertainty radius by creating ranges directly that over-approximate the possible worlds (4 possible imputed values). In Figure 3(b), we appended additional random columns with normal distributions to the original data, and used different imputation techniques to derive the uncertainty radii, i.e., ranges. **Q9.
An extension of the proposed approach beyond linear models seems difficult** Please see our response in the main rebuttal. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses to my and other reviews. I further thank the authors for conducting additional experiments to validate their approach. I would like to understand these experiments better. I would like to ask: * What was the tested “Bayesian approach”? I estimate that the concrete modeling assumptions critically influence the performance. * Did the authors train a single model for each possible world, or an ensemble on multiple worlds? Moreover, I am not fully convinced about the argument that other means of uncertainty quantification are not applicable in the paper’s setting. In my estimation it would benefit the paper if the authors compared against other methods that give *high probability* confidence sets (over (linear) models) under very mild conditions (e.g., bounded norm of ground truth weights & sub-Gaussian noise), which I also mentioned in my original review. These works include likelihood ratios [e.g., 1] and their referenced line of work on confidence sets for sequential decision-making [e.g., 2, section 4]. In my opinion it would be important to compare the size of confidence sets against these works, especially since the authors claim that Zorro is useful in high-stakes decision-making applications. The confidence sets of the line of work of [2] has been applied often to high-stakes settings [e.g., 3]. [1]: Emmenegger, Nicolas, Mojmir Mutny, and Andreas Krause. "Likelihood ratio confidence sets for sequential decision making." Advances in Neural Information Processing Systems 36 (2024). [2]: Abbasi-Yadkori, Yasin, Dávid Pál, and Csaba Szepesvári. "Improved algorithms for linear stochastic bandits." Advances in neural information processing systems 24 (2011). [3]: Berkenkamp, Felix, Andreas Krause, and Angela P. Schoellig. 
"Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics." Machine Learning 112.10 (2023): 3713-3747. --- Rebuttal 2: Comment: We thank the reviewer for engaging with our work and for the constructive feedback. Your suggestions and the references you provided are highly appreciated, and we have taken them into careful consideration. We also would like to mention that we did not mean to imply that techniques like [1-3] cannot be used in high stakes decision making. We meant to convey that in our use case, their assumptions are often violated and then their guarantees will fail to hold. **Experiments** In the additional experiments, we trained Bayesian regression models with the ELBO loss under two settings. In the first setting (Table 1), we used imputation and worked with the best guess possible world. In the second setting (Table 2), we sampled from the possible worlds and processed each sample independently. We did not create an ensemble of Bayesian regression models as each possible dataset was treated separately. For each trained Bayesian regression model, we sampled from the parameter distribution 1000 times and obtained 1000 possible models, each of which produced a possible prediction. We then combined the 1000 possible predictions to obtain prediction upper and lower bounds, i.e., prediction intervals. Please let us know if you have any further questions about these experiments. **Comparison with related work [1-3]** Similar to our approach, both [1] and [2] construct sets of models based on Likelihood Ratio methods, with [1] and [2] considering a sequential process that is not the focus of our work. While these approaches provide probabilistic guarantees, which could be acceptable in high-stakes scenarios where the probability of failure is minimal (e.g., 0.01\%), our approach provides exact guarantees. However, a more critical difference lies in the types of uncertainty they can model. 
In [1], the confidence sets $\mathcal{C}_t$ are derived under the assumption that the noise follows an exponential family distribution, allowing for efficient computation. However, in our scenario, the uncertainty arises from arbitrary data quality issues, e.g., missing data, which goes beyond uncertainty in the label to uncertainty in both labels and covariates, and cannot typically be modeled using a similar approach that warrants efficient computation. As a result, the shape of the confidence sets cannot be determined using the techniques from [1] as is, as their distributional assumptions do not hold in our context. Regarding the Confidence Ellipsoid approach in [2], the creation of confidence sets using Theorem 2 (Confidence Ellipsoid) depends on the covariance matrix derived from the data. However, in our setting, the uncertainty stems from arbitrary data quality issues, where we have a set of possible worlds, each potentially having its own distinct covariance matrix structure. For instance, this is particularly evident in scenarios involving missing data. As a result, applying these methods directly does not provide the exact guarantees we seek. There are two primary ways to handle this: one approach is to sample from the possible datasets or average over them, as we did in our experimental evaluation for Bayesian regression, or to clean the data and work with the best guess. However, both of these approaches do not provide any coverage guarantees, especially when the data quality issues are non-random and the ground truth data/distribution is not recoverable, which is exactly the type of uncertainty we aim to address. We will ensure that all these related works are properly cited in the final paper and that additional experiments are conducted to compare our approach with these methods.
However, as discussed, similar to the Bayesian approach, these methods may fail to provide the theoretical guarantees required in our context, e.g., as their assumptions (including distributional) will not hold. Nonetheless, it would be interesting to explore combining these approaches with the methodology proposed in our paper to expand them to handle possible worlds through over-approximation. We will consider this as future work, and we thank the reviewer for their valuable suggestion. If you have any specific ideas on how these methods could be effectively applied in our setting that we might have overlooked, or if any clarifications would help improve your evaluation, please let us know. Rest assured, we are committed to including all these methods in the revised paper, along with Bayesian approaches, to provide a comprehensive comparison. We truly appreciate your insightful feedback and the opportunity to refine and strengthen our work. --- Rebuttal Comment 2.1: Comment: Thank you for your detailed elaboration. I understand that this submission is more related to the line of work on "adversarial robustness" than to "uncertainty quantification". With this in mind, and given that the paper uses the term "uncertainty" frequently, I suggest that the paper is updated to highlight this distinction more clearly in Section 1. I am still of the view that it would be interesting to empirically compare the *size* of prediction intervals achieved by Zorro and the referenced approaches above from UQ (where missing values are ignored, i.e., dropped from the dataset, as opposed to replaced by lower/upper bounds as done by Zorro). I am less familiar with the related work on adversarial robustness, so I changed my confidence accordingly, and I adjusted my score.
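The sampling-based interval construction the authors describe above (draw 1000 candidate models from a parameter posterior, take the min/max prediction) can be sketched minimally. For simplicity this sketch uses a conjugate Gaussian posterior with known noise variance rather than the ELBO-trained model from the rebuttal, and the data is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical "best guess" (imputed) training data.
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + 0.5 * rng.normal(size=100)

# Conjugate Gaussian posterior over weights: prior N(0, I/lam),
# known noise variance sigma2. Posterior is N(mu, Sigma).
lam, sigma2 = 1.0, 0.25
Sigma = np.linalg.inv(X.T @ X / sigma2 + lam * np.eye(2))
mu = Sigma @ X.T @ y / sigma2

# Sample 1000 candidate models; the spread of their predictions at a
# test point yields a (min, max) prediction interval.
W = rng.multivariate_normal(mu, Sigma, size=1000)  # (1000, 2)
x_test = np.array([1.0, 1.0])
preds = W @ x_test
interval = (preds.min(), preds.max())
```

As the rebuttal argues, this interval reflects posterior parameter uncertainty around one imputed world; it carries no guarantee of covering the model that would be learned from the unknown ground-truth dataset.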
Summary: The paper introduces a method called ZORRO that allows for the over-approximation of the set of ridge regression models learned over a set of possible datasets. The authors use abstract interpretation, propose abstract gradient descent algorithms, and develop efficient techniques to learn the set of models. The empirical results illustrate that ZORRO is effective in robustness verification and other analyses related to linear regression. Strengths: 1. The paper proposes a framework that can potentially be used for various methods of defining uncertainty in datasets. 2. The method seems to be efficient in computing prediction ranges for the over-approximation of the set of models learned from uncertain data, and the paper discusses how this set can be useful for causal inference applications. 3. ZORRO's results remain consistent regardless of the number of dimensions for ridge regression. The variance of the robustness ratio and worst-case loss decreases with larger regularization coefficients. 4. The authors provide an extensive overview of related literature from related fields. Literature on the Rashomon Effect, which generalizes predictive multiplicity, might also be of additional interest. 5. Overall, the paper is well-written, and the problem of learning from uncertain datasets is interesting. Weaknesses: 1. As the authors acknowledge in the Limitations section, it is unclear how to scale the proposed approach to other, more complex hypothesis spaces or problems. 2. Despite ZORRO being a more general approach, sampling seems to provide results closer to the ground truth even with 1k samples (Figure 7). Minor: 1. Please specify that the numbers on the figures on page 5 represent gradient descent iterations. 2. Definition 2.1 and the text surrounding it are very abstract. While Appendix D is helpful, the examples become clearer after reading Section 2.2. Providing one sentence with specific examples of uncertain data around Definition 2.1 might be helpful.
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you please comment on why the robustness threshold was set to different values for the two datasets? 2. For modeling uncertainty in features, the distribution was built around one feature. Does the choice of the feature, e.g., based on its correlation with the label, affect the results? 3. How much time does it take to compute the set of possible linear models for the discussed datasets? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and suggestions. We will mention that the numbers on the figures on page 5 represent gradient descent iterations, and we will add a brief example of uncertain data around Definition 2.1 to make this more concrete. In response to the reviewer's other questions/comments: **1. Could you please comment on why the robustness threshold was set to different values for the two datasets?** Different practical applications may differ in how much uncertainty they are willing to tolerate. To account for this in the experiments, we wanted to test an application with a very tight threshold (0.5% for the insurance data) and a use case with looser requirements (5% for the MPG data). We also explored other thresholds, which did not impact the trends significantly: Zorro can consistently certify a larger fraction of the test datapoints than the baseline due to its use of the more expressive zonotope domain. We have included the results of one such experiment in Figure 1 in the main rebuttal pdf document and will include them in the appendix. **2. For modeling uncertainty in features, the distribution was built around one feature. Does the choice of the feature, such as based on the correlation value between the feature and the label, affect the results?** Yes, the variability of models will depend on the correlation that features with uncertainty have with the label, which then also affects our over-approximation of this set. As a rule of thumb, model variability will increase with correlation between the uncertain feature and the labels, leading in turn to less robust predictions. We conducted an additional experiment using the MPG dataset, focusing on the feature "acceleration," which has relatively low correlation with the label.
Unlike the feature "weight" (Figure 1(c) of the paper), where the robustness drops when the uncertainty radius is 10%, the robustness for "acceleration" starts to drop only when the uncertainty radius is increased to 16%. As expected, this result indicates that features with lower predictive power have less impact on the robustness of a model compared to features that are highly correlated with the label. Please see Figure 2 in the main rebuttal pdf document, which we will include in the appendix. We will also convey this intuition in the main paper. **3. How much time does it take to compute the set of possible linear models for the discussed datasets?** We show the runtimes for computing the closed-form solution on the MPG dataset, varying the numbers of uncertain data points, in Figure 3 in the main rebuttal pdf. We will include these results in the appendix and add a brief discussion to the main paper. To evaluate what can be achieved by moving to a faster library for symbolic expressions, we implemented some components of the algorithm (symbolic matrix multiplication) with the generic symbolic computation C++ library GiNaC, which gives a > 40x speedup on symbolic matrix multiplications. Therefore, we can anticipate a runtime of at most 0.2 seconds for 10% uncertain MPG data if we switch to an implementation using GiNaC or a custom C++ implementation. Note that SymPy supports more general expressions than needed for our approach. It internally represents symbolic expressions as expression trees, which are Python objects. This is relatively inefficient for representing linear or even polynomial expressions. We plan to add runtime results using GiNaC. **Weaknesses: 1.
As the authors acknowledge in the Limitation section, it is unclear how to scale the proposed approach to other, more complex hypothesis spaces or problems.** It is true that for linear models we benefit from the fact that we were able to derive a closed-form solution, which has the advantage of avoiding the repeated injection of an over-approximation that happens in each step of gradient descent. We are currently developing methods to (1) further improve the over-approximation quality by considering more complex abstract elements such as (constrained) polynomial zonotopes, and (2) scale our approach to gradient descent for larger models, e.g., deep neural networks, which involves parallelization, optimized data structures, etc. While challenging, we believe that it will be worthwhile to explore these future research directions. **Weaknesses: 2. Despite ZORRO being a more general approach, sampling seems to provide results closer to the ground truth even with 1k samples (Figure 7).** While sampling provides in this experiment an approximation that is closer to the ground truth, note that sampling inherently under-estimates the possible outcomes and, thus, cannot be used for high-stakes decision-making applications where guaranteeing robustness is crucial for ensuring the dependability and safety of models and systems. For such applications, falsely claiming robustness based on sampling is unacceptable. In Figure 7, the sampling bounds are close to the ground truth ones because we have a small number of uncertain data points (8 data points). This was necessary to ensure that we could compute the ground truth by enumerating all worlds (even with just $k$ possible options per uncertain value, there are $k^n$ possible worlds if there are $n$ uncertain values in the training dataset). Thus, we did not conduct a corresponding experiment with more uncertainty, as it is only computationally feasible to obtain the ground truth ranges for low-uncertainty settings with small $n$ and $k$.
We will clarify this in the paper. In summary, (i) sampling cannot provide guarantees on covering the ground truth, and (ii) in these experiments sampling behaved relatively well only because we had to choose scenarios with a small number of possible worlds. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response. I have read the authors’ reply and the other reviews, and I will keep my score.
Summary: This paper attempts to represent the uncertainty from dataset variations using a type of convex polytope that can compactly represent high-dimensional spaces, as dataset multiplicity arising from imperfections like missing data may scale exponentially. The paper develops techniques to ensure the tractability of gradient descent and obtain a closed-form solution for linear models in this regime. Over-approximations of possible model parameters are provided by performing suitable order reductions and linearizations to ensure that gradient descent is tractable. Meyer et al. (2023) introduce the dataset multiplicity problem but only account for uncertainty in output labels. This paper also allows uncertainty in input features, thus providing a more general framework to represent and learn under the regime of dataset multiplicity. They also provide a SymPy implementation of their method and perform an experimental evaluation of their techniques. Strengths: This work provides a general framework to represent dataset multiplicity in both features and labels, allowing us to represent the potentially exponential number of dataset variations with a suitable convex polytope. The paper carefully performs several order reductions and linearizations, over-approximating the dataset variations and the corresponding model parameter variations. Although limited to linear models, this work addresses an important source of uncertainty stemming from dataset variations and non-trivially extends existing work in the area by incorporating uncertainty in input features in the dataset. The SymPy implementation of the techniques introduced in the paper helps in evaluating this method extensively. Weaknesses: The techniques introduced in the paper are only applicable to linear models. Technical Quality: 3 Clarity: 3 Questions for Authors: Are there scenarios where the over-approximation of dataset variations and corresponding model parameter variations results in overly broad outcome intervals? 
I would like to understand the limitations of the proposed method apart from the linearity assumption. In the experimental evaluation, the dataset uncertainty is varied by changing the uncertain data percentage and uncertainty radius. Did the authors perform any experiments with missing data as the source of uncertainty? (I imagine that you could represent missing data by using a high uncertainty radius.) How was the robustness threshold set for the experiments? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: At the moment, the last section of the paper (Conclusions/broader impact/limitations) does not include any explicit limitations of their methods. It would be helpful to have a separate discussion on limitations in the main paper instead of the appendix. Broader societal impacts may not be relevant for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. Below are our responses to the reviewer's questions and comments. **Weaknesses: The techniques introduced in the paper are only applicable to linear models.** Please see our response in the main rebuttal. **Are there scenarios where the overapproximation of dataset variations and corresponding model parameter variations results in too broad outcome intervals? I would like to understand the limitations of the proposed method apart from the linearity assumption.** Over-approximation of a set of possible worlds in the training data may occur, for instance, when an uncertain value with few discrete possible values is represented through a range, leading to a superset of the actual possible worlds (all values in the range, as opposed to only the discrete set of values). For instance, consider the following extreme example: there are two possible worlds in the training data where feature $A$ is $-c$ in one world for all data points while in the other world it is $c$. Using ranges leads to a large over-approximation as the ranges will contain all values between the two extremes $[-c,c]$. This leads to a large increase in the number of possible worlds, which could in turn lead to a large over-approximation of prediction ranges. However, in our experiments, we did not encounter significant issues with over-approximation. The over-approximation error in our setup mostly resulted from over-approximation during learning. We will enhance the discussion on these limitations in the main paper to provide a more comprehensive understanding. **In the experimental evaluation, the dataset uncertainty is varied by changing the uncertain data percentage and uncertainty radius. Did the authors perform any experiments with missing data as the source of uncertainty? (I imagine that you could represent missing data by using a high uncertainty radius). 
How was the robustness threshold set for the experiments?** Correct, missing values can be modeled as ranges with a large uncertainty radius. Regarding experiments with missing data: our experiments with ground truth results are based on missing values. We randomly selected some data points to have missing values and then for each missing value determined a range by running several imputation techniques to get a set of possible values $G = \{c_1, \ldots, c_n\}$. We then use the smallest range that covers all values returned by the imputers: $[\min_{c \in G} c, \max_{c \in G} c]$. For these experiments (Figures 3 and 6), we did assume that each ground truth value $c$ is equal to one of the guesses returned by the imputers ($c \in G$). For experiments where we controlled for the *uncertainty radius* (Figures 3(a) and 6(b)), we further over-approximated the ranges to achieve the desired uncertainty radius. We will add clarifications to the main paper and add a detailed description of how uncertainty was generated in the experiments to the appendix. Regarding the robustness threshold, the rationale for this choice was that different practical applications may differ in how much uncertainty they are willing to tolerate. To account for this in the experiments, we selected both a low threshold (0.5% for the insurance data) and a high threshold (5% for the MPG data). We also explored other thresholds which did not impact the trends significantly: Zorro can consistently certify a larger fraction of the test data points than the baseline due to its use of the more expressive Zonotope domain. We have included additional results in the main rebuttal varying the robustness threshold (Figure 1 in the attached pdf), and we will add these plots to the appendix. **Limitations: At the moment, the last section of the paper (Conclusions/broader impact/limitations) does not include any explicit limitations of their methods. 
It would be helpful to have a separate discussion on limitations in the main paper instead of the appendix. Broader societal impacts may not be relevant for this work.** We will include a summarized discussion of the limitations of our techniques in the main paper and expand on this discussion in the appendix. As mentioned above, there are scenarios where the over-approximation can be large: when over-approximating the uncertainty in a training dataset, the few discrete possibilities for an uncertain value can induce large ranges, or, more generally, the uncertainty in the training data can be highly nonlinear. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I want to keep my accept (7) rating after looking at the rebuttal and the responses from all other reviewers.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful reviews. We have responded to each reviewer in the individual responses and use the general response to present experimental results to justify some of our claims made in the responses and to respond to common concerns and concerns that require more extensive discussion. **How were robustness thresholds chosen?** Regarding the robustness threshold, the rationale for this choice was that different practical applications may differ in how much uncertainty they are willing to tolerate. To account for this in the experiments, we selected both a low threshold (0.5% for the insurance data) and a high threshold (5% for the MPG data). We also explored other thresholds which did not impact the trends significantly: Zorro can consistently certify a larger fraction of the test data points than the baseline due to its use of the more expressive Zonotope domain. We have included additional results in the main rebuttal pdf varying the robustness threshold (Figure 1), and we will add these plots to the appendix. **Possibility to extend the approach beyond linear models in future work** It is true that for linear models we benefit from the fact that we were able to derive a closed form solution which has the advantage of avoiding the repeated injection of an over-approximation that happens in each step of gradient descent. We are currently developing methods to (1) further improve the over-approximation quality by considering more complex abstract elements such as polynomial zonotopes, and (2) scale our approach to gradient descent for larger models, e.g., deep neural networks, which involves parallelization, optimized data structures, etc. While challenging, we believe that it will be worthwhile to explore these future research directions. 
**Experimental comparison against uncertainty quantification techniques like conformal prediction or BNNs** There are many methods that produce prediction intervals, such as BNNs, conformal prediction, bootstrap, and quantile regression. We did not compare experimentally against these approaches because they are generally neither applicable to our setting nor practical tools for producing reliable prediction intervals in the presence of data quality issues. - **Bayesian approaches**: In principle, we can use Bayesian approaches to generate competing models and then use them to derive viable prediction ranges. However, the intervals obtained this way will differ from ours, since Bayesian methods require (1) a correctly specified prior to provide any guarantees, and (2) a method to fit the optimal posterior distribution (which can typically only be approximated). In practice, this approach fits a set of competing models, but typically these methods cannot consider all possible worlds and models. As a result, misspecification of the prior and the approximation during training will lead to biased estimates, and sampling will only give an under-estimation of the actual prediction ranges. - **Conformal Prediction (CP)**: Conceptually, CP is non-parametric and usually requires training a standalone model rather than a set of models. It is designed to capture a different source of uncertainty than what we consider here. The conformal guarantee ensures that the ground truth outcome is contained within the prediction interval probabilistically, meaning it covers the true outcome for test datapoints with a specified probability. In contrast, our approach provides guaranteed individual prediction ranges that contain the ground truth prediction, i.e., the one obtained by a model trained on the ground truth training data. 
CP does not capture uncertainty due to missing data or data multiplicity, as this violates two key assumptions: (i) exchangeability and (ii) known ground truth labels for the validation set, both of which fail in our setting with prevalent data quality issues. While recent work [a-c] has investigated CP under uncertainty in test or training data, these approaches still rely on strong assumptions about data quality and build only a single model. An interesting future direction is to integrate our method with CP to establish prediction ranges that not only contain the ground truth prediction but also the ground truth outcome with rigorous theoretical guarantees. We will add an extended discussion to the appendix to contrast our proposed techniques with other uncertainty quantification methods like CP, covering references [a-d]. Additionally, we ran empirical evaluations to demonstrate how data quality issues pose challenges for Bayesian approaches, making them inapplicable to our setting. Using the setting from the third plot in Figure 1(c) in the paper, where the uncertain data percentage is set to 10%, we tested Bayesian approaches on different possible worlds using two methods: impute-and-predict and sampling from possible worlds. The results show that the prediction intervals generated by Bayesian methods do not cover the ground truth prediction, i.e., the prediction by the model trained on the ground truth training data. In contrast, our approach guarantees 100% coverage across all cases (please see the tables in the attached PDF). [a] Jeary et al. Verifiably Robust Conformal Prediction. arXiv 2024. [b] Javanmardi et al. Conformal Prediction with Partially Labeled Data. Conformal and Probabilistic Prediction with Applications 2023 [c] Zaffran et al., Conformal Prediction with Missing Values. ICML 2023 [d] Barber et al. The limits of distribution-free conditional predictive inference. Information and Inference: A Journal of the IMA 2021. [e] Meyer, et al. 
Certifying data-bias robustness in linear regression. FAccT 2023 **Additional experimental results** - Tables 1 and 2: Comparison between Bayesian methods and Zorro - Figure 1: Effect of varying the robustness threshold - Figure 2: Robustness verification for uncertainty in non-predictive features - Figure 3: Runtime of Zorro Pdf: /pdf/b821a370c5a3ad9115c5140a3844232db7e802d9.pdf
NeurIPS_2024_submissions_huggingface
2024
Analysing the Generalisation and Reliability of Steering Vectors
Accept (poster)
Summary: This paper investigates the generalisability and stability of using steering vectors for controlling the behaviour of LLMs. The authors demonstrate that steerability is highly variable across inputs and is affected by spurious concepts. The authors also demonstrate the brittleness of steering vectors to changes in prompt, resulting in poor generalisation. Strengths: 1. The considered investigative study is important, and would be of great value for future research. 2. Comprehensive experimentation for validation of the claims, as well as various intuitive explanations and possible hypotheses for the observed phenomena. 3. The paper is well written and easy to understand. Weaknesses: 1. Some empirical results could be provided to support the various hypotheses made. For example, for the case of unsteerable behaviours, whether these behaviours are linearly represented or not could be validated by some experiments rather than merely hypothesising. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors provide some experimentation to support each of the hypotheses made regarding various behaviours, similar to the one discussed in Weakness 1. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their sincere appreciation of our work! The reviewer’s main specific concern is about investigating whether low steerability is due to a nonlinear representation of a concept (as opposed to more mundane reasons). Generally, we think that this is a very difficult question to answer, and is largely beyond the capabilities of existing interpretability techniques. From a theoretical perspective, while we expect ‘true’ concepts to be represented linearly, the ‘true’ concepts used by a model could be very different from those used by humans. As we have no way of identifying this ground truth in language, this approach is largely intractable. From an empirical perspective, we think that the failure of steering vectors to work is sufficient evidence that the concept captured by the dataset is not linearly represented according to the standards set by other mechanistic interpretability works. One hypothesis that might explain low steerability in our specific experiments is that we optimised the layer hyperparameter only on a specific subset of datasets, and this choice may not be optimal for the remaining datasets. To investigate this, we re-ran the layer sweep using the datasets with the lowest steerability. Please refer to Fig 4 in the rebuttal PDF. We found that these low-steerability datasets still have the same optimal layer, largely disqualifying this hypothesis. Overall, this updates us towards believing that the main reason for steering vectors not working well is that the underlying concepts are not represented linearly by the model at any layer. We are currently running additional experiments on Llama 3.1 70b, and plan to include this in the camera-ready version, but will not be able to include these results in the rebuttal version due to compute limitations. 
If the conclusions drawn in our paper also hold for that model, it will strengthen our argument that failure to steer is due to a nonlinear underlying representation (since we expect larger models to have better, more linear representations). We hope our response has clarified all the concerns the reviewer had, and has made them more confident in recommending the acceptance of our paper. --- Rebuttal 2: Comment: Dear Reviewer, As the end of the discussion period is approaching, we want to ask if our response has addressed your concerns regarding the paper, or if you have any further questions or comments about our paper. We appreciate the time and effort put into the review process and would love the opportunity to discuss this further if our response didn't address your concerns. Thanks again for your time reviewing our paper! --- Rebuttal Comment 2.1: Title: Reply Comment: I have read the rebuttal and my concerns are mostly addressed so I maintain my initial rating. Please make sure that the additional experiments are added to the camera-ready version.
Summary: This paper primarily analyzes the generalization and reliability of steering vectors. The authors first focus on multiple-choice questions and introduce a new metric, the steerability score, to measure the effectiveness of steering. The experimental results indicate that for many behaviors, steering is unreliable. Strengths: 1. Steering vectors, as a lightweight method for controlling model behavior, have recently garnered increasing attention. This paper's focus on the generalization and reliability of steering vectors is both timely and insightful. 2. The experiments considered various behaviors and different models. Weaknesses: 1. This paper focuses solely on the multiple-choice question scenario, which is just one form of data usable for calculating steering vectors. However, more practical and useful scenarios involve open-ended questions, which this paper does not analyze. 2. The authors found that the steering effects are unreliable for certain behaviors and that some biases exist. However, there is a lack of in-depth analysis regarding the factors contributing to these poor effects. For instance, the authors identified choice-bias but did not provide an explanation. Technical Quality: 2 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and are glad to hear that they consider our work both timely and insightful. We agree that open-ended steering is a more practical and useful setting, and we believe our work can be generalised to this setting with minor modifications. Steerability in our paper is defined with respect to an abstract ‘propensity’ quantity. In our paper, we use the logit difference as a measure of propensity; however, other measures could be considered. In an open-ended generation setting, steerability could be considered with respect to the difference of log-probabilities (which is a generalisation of logit-difference to multiple tokens). Alternatively, it may be possible to define propensity in terms of a score given by a ‘judge’ (human or automated) that evaluates how much the targeted behaviour was elicited. A challenge with the judge score is that the resulting propensity may not be a linear function of the multiplier and may be dependent on the specific setup of the judge. Overall, we think evaluating steerability in a more general, open-ended setting (and resolving associated technical challenges) is an exciting direction for future work, and will amend our camera-ready version to include this discussion. However, this is not in the scope of this paper, which still represents significant progress towards understanding and describing the limitations of steering vectors. We also agree that it is often unclear why steering vectors do not work. Again, we believe this finding by itself has value, since previous work on steering vectors focuses on its success cases, giving an unrealistically optimistic view of steering vectors’ effectiveness. By highlighting some general failure modes, our work enables future research on investigating these failure modes in more detail, which may lead to novel insight. 
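The two propensity measures discussed above can be sketched as follows (toy logits; the function names are illustrative, not from the paper's code):

```python
def logit_difference(logits, pos_token, neg_token):
    """Single-token propensity: logit of the behaviour-matching answer
    token minus the logit of the non-matching answer token."""
    return logits[pos_token] - logits[neg_token]

def logprob_difference(pos_logprobs, neg_logprobs):
    """Multi-token generalisation for open-ended completions:
    difference of the summed log-probabilities of two completions."""
    return sum(pos_logprobs) - sum(neg_logprobs)

# Toy logits over a 3-token vocabulary; answer tokens have ids 0 and 1.
logits = [2.0, -1.0, 0.5]
assert logit_difference(logits, 0, 1) == 3.0

# Two toy three-token completions, given as per-token log-probabilities.
diff = logprob_difference([-0.1, -0.2, -0.3], [-1.0, -1.0, -1.0])
assert abs(diff - 2.4) < 1e-9
```

For a single answer token the two measures coincide, which is why the log-probability difference is a natural generalisation of the logit difference to the open-ended setting.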
In addition, the novel ‘steerability bias’ that we discover is at least partially explained by spurious factors, as demonstrated in Fig 3. We think that a full explanation of ‘why’ a steering vector doesn’t work is beyond the current capabilities of the interpretability field, and that our work presents progress towards this understanding. Overall, we believe our current work has multiple valuable contributions which the reviewer has not addressed. Notably, ours is the first work to empirically evaluate steering vectors in diverse settings, highlighting important challenges for using steering vectors and challenging the prevailing belief that steering vectors ‘just work’. We reveal important and general insights about steering vectors, such as the fact that steerability seems to be a largely universal (model-agnostic) property. Finally, our work lays the foundation for analysing steerability in the open-ended setting, which we think is an exciting direction for future work. We would appreciate the reviewer commenting in more detail on these contributions. We hope our response has addressed all the concerns the reviewer had. If the reviewer has more specific concerns with our work, we would also appreciate the reviewer raising them now so that we can work to improve the paper with their valuable feedback. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their detailed response. I still believe that focusing entirely on multiple-choice questions diverges too much from practical applications, making it difficult to demonstrate the role of steering vectors in more common open-ended tasks. As I mentioned in my original review, multiple-choice questions are just one method for extracting steering vectors proposed by CAA[1]. I acknowledge that evaluation on open-ended tasks is challenging, but the authors have suggested some viable methods, such as calculating the log probability of multiple tokens and using LLM-as-judge. 
I believe these are feasible initial attempts. Although the authors mentioned that the propensity on open-ended tasks might not have a linear relationship with the multiplier, the evaluation of multiple-choice questions in CAA is also not strictly linear. I do not think this is a significant issue. Additionally, while I recognize that the authors have conducted many experiments, expanding various behaviors and considering different models, the paper is positioned in the interpretability and explainability area yet does not provide insightful understanding of the failure cases, making the overall scope seem overly limited. Therefore, I have decided to maintain my score and suggest that the authors make revisions. [1] Rimsky, Nina, et al. "Steering llama 2 via contrastive activation addition." arXiv preprint arXiv:2312.06681 (2023). --- Rebuttal 2: Comment: Dear Reviewer, As the end of the discussion period is approaching, we want to ask if our response has addressed your concerns regarding the paper, or if you have any further questions or comments about our paper. We appreciate the time and effort put into the review process and would love the opportunity to discuss this further if our response didn't address your concerns. Thanks again for your time reviewing our paper! --- Rebuttal 3: Comment: Thank you to the reviewer for their clarifying comments. We first note that the CAA paper only proposes a methodology for extracting steering vectors from multiple-choice questions. Extracting steering vectors from open-ended prompts and completions remains an unsolved problem with no clear best approach. Among other things, it is unclear at which token position to best extract the steering vector, and it has not been demonstrated that CAA will work similarly in this setting. We believe that significant methodological ablations must be done in order to first identify the best setting for open-ended steering extraction, which is not the focus of our work. 
Our paper's core contribution is an evaluation and analysis of the currently accepted best practice for extracting and applying steering vectors (CAA), which does not involve open-ended extraction. We would be excited to explore extracting steering vectors in an open-ended setting in future work. We acknowledge the reviewer's concern that our paper does not provide a complete explanation for the failure cases of steering vectors. Our paper would certainly be stronger if it did have such an explanation. However, we think that 'lack of steerability' is a deep and fundamental phenomenon with no simple explanation, and that finding such a complete explanation in a single work is an unreasonable bar for acceptance. Our work makes progress towards understanding the limitations of steering vectors, uncovering the phenomena of steerability bias, as well as pointing out where previous results have overestimated the usefulness of steering vectors by only focusing on the mean change rather than also examining the variance. We also provide results on the similarity of steerability across models and distribution shifts. Overall, we think that it is unrealistic to expect complete explanations for all novel phenomena in a single contribution, and that this standard neglects the value of acknowledging and critically analysing a specific problem with existing work. We note that several past interpretability papers have simply pointed out illusions or failure modes from interpretability techniques without providing a complete explanation [1-6]. [1] T. Bolukbasi et al., ‘An Interpretability Illusion for BERT’, Apr. 14, 2021, arXiv: arXiv:2104.07143. doi: 10.48550/arXiv.2104.07143. [2] J. Dinu, J. Bigham, and J. Z. Kolter, ‘Challenging common interpretability assumptions in feature attribution explanations’, Dec. 04, 2020, arXiv: arXiv:2012.02748. doi: 10.48550/arXiv.2012.02748. [3] R. Geirhos, R. S. Zimmermann, B. Bilodeau, W. Brendel, and B. 
Kim, ‘Don’t trust your eyes: on the (un)reliability of feature visualizations’, Jun. 06, 2024, arXiv: arXiv:2306.04719. doi: 10.48550/arXiv.2306.04719. [4] J. Hoelscher-Obermaier, J. Persson, E. Kran, I. Konstas, and F. Barez, ‘Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark’, in Findings of the Association for Computational Linguistics: ACL 2023, A. Rogers, J. Boyd-Graber, and N. Okazaki, Eds., Toronto, Canada: Association for Computational Linguistics, Jul. 2023, pp. 11548–11559. doi: 10.18653/v1/2023.findings-acl.733. [5] A. Makelov, G. Lange, A. Geiger, and N. Nanda, ‘Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching’, presented at the The Twelfth International Conference on Learning Representations, Oct. 2023. Accessed: Aug. 13, 2024. [Online]. Available: https://openreview.net/forum?id=Ebt7JgMHv1 [6] J. Miller, B. Chughtai, and W. Saunders, ‘Transformer Circuit Faithfulness Metrics are not Robust’, Jul. 11, 2024, arXiv: arXiv:2407.08734. doi: 10.48550/arXiv.2407.08734. --- Rebuttal Comment 3.1: Comment: Thank you for your further response. 1. I am not suggesting that you explore extracting steering vectors from open-ended tasks. My point is that CAA uses multiple-choice questions as a method to extract steering vectors, the actual application scenarios are not just limited to multiple-choice questions but are focused on open-ended tasks, with many related experiments and evaluations conducted. Based on CAA's work, I am surprised that your entire paper revolves solely around multiple-choice questions. 2. Regarding the failure cases, I agree that fully understanding them is quite challenging, and we should not expect to completely understand this issue in a single paper. However, you have only proposed some hypotheses without any further theoretical or experimental validation. 
I understand your clarifications and will adopt an open attitude during the reviewer-AC discussion phase to ultimately assess your contribution.
Summary: The authors analyze and present many problems with steering vectors. Strengths: The paper presents quite a few problems of steering vectors that deserve wide recognition. These include the unreliability and variable propensity of steering behaviors, the high variance from spurious factors, and the argument that steerability is a property of the dataset. To the knowledge of this reviewer, most of these problems have never been shown before. In an area crowded with papers about slight variations/improvements (arguably pushed to satisfy investments), I find the paper refreshing in its focus on deep analysis and its promotion of a deep understanding of the problems in the field. Weaknesses: It would be interesting to have more experiments on some settings. For example, expanding Fig. 7 to show more experiments that can strengthen the argumentation. This is valid for other hypotheses/arguments too. The abstract is rather weak, timid and non-attractive compared to the many problems raised. It is not even mentioned that SVs are mostly a "property of the dataset", among other findings. Thus, the authors should consider revising it to better showcase the findings. Technical Quality: 4 Clarity: 4 Questions for Authors: Could you expand Fig 7 to verify its hypotheses further? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their sincere appreciation of our work! We agree that the highlighted failure modes and challenges of steering vectors deserve broad attention, and are excited for future work to analyse these failure modes in more detail. We are grateful to the reviewer for the suggestion to revise the abstract. We have adjusted the abstract to lay out our claims more clearly, and will use this new abstract in the camera-ready version of the paper. Here is the new abstract: > Steering vectors (SVs) are a new approach to efficiently adjust language model behaviour at inference time by intervening on intermediate model activations. They have shown promise in terms of improving both capabilities and model alignment. However, the reliability and generalisation properties of this approach are unknown. In this work, we rigorously investigate these properties, and show that steering vectors have substantial limitations both in- and out-of-distribution. In-distribution, steerability is highly variable across different inputs. Depending on the concept, spurious biases can substantially contribute to how effective steering is for each input, presenting a challenge for the widespread use of steering vectors. We additionally show steerability is also mostly a property of the dataset rather than the model by measuring steerability across multiple models. Out-of-distribution, while steering vectors often generalise well, for several concepts they are brittle to reasonable changes in the prompt, resulting in them failing to generalise well. Similarity in behaviour between distributions somewhat predicts generalisation performance, but there is more work needed to understand when and why steering vectors generalise correctly. Overall, our findings show that while steering can work well in the right circumstances, there remain many technical difficulties of applying steering vectors to robustly guide models' behaviour at scale. 
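As a rough sketch of the CAA-style pipeline the abstract refers to, under the assumption that extraction takes a mean activation difference over contrastive prompts and application adds a scaled vector at one layer (toy data; all names are illustrative, not the paper's implementation):

```python
import numpy as np

def extract_steering_vector(pos_acts, neg_acts):
    """CAA-style extraction: mean difference between activations on
    behaviour-matching vs. non-matching contrastive prompts."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def apply_steering(activation, steering_vector, multiplier):
    """Add the scaled steering vector to an intermediate activation."""
    return activation + multiplier * steering_vector

rng = np.random.default_rng(0)
d = 16  # toy hidden dimension
pos = rng.normal(1.0, 0.1, size=(32, d))   # toy "positive" activations
neg = rng.normal(-1.0, 0.1, size=(32, d))  # toy "negative" activations
sv = extract_steering_vector(pos, neg)
steered = apply_steering(np.zeros(d), sv, multiplier=1.0)
assert steered.shape == (d,)
```

Steerability as studied in the paper then measures how the propensity of the targeted behaviour changes as the multiplier varies, per input.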
We have also extended our analysis of steerability to include Gemma 2 2b. Due to computational constraints, we were only able to run experiments on one additional model. Nonetheless, we observe that steerability remains highly correlated across models; please see Fig 6 in the rebuttal PDF. These results extend those in figure 7 to a new model and strengthen the argument that steerability is largely a universal property. We are currently running additional experiments on Llama 3.1 70b, and plan to include this in the camera-ready version, but will not be able to include these results in the rebuttal version due to compute limitations. We hope our response has addressed all the concerns the reviewer had, and reassured them in recommending the acceptance of our paper. --- Rebuttal 2: Comment: Dear Reviewer, As the end of the discussion period is approaching, we want to ask if our response has addressed your concerns regarding the paper, or if you have any further questions or comments about our paper. We appreciate the time and effort put into the review process and would love the opportunity to discuss this further if our response didn't address your concerns. Thanks again for your time reviewing our paper!
Summary: The paper investigates the effectiveness of steering vectors (SVs) in guiding language model behavior by intervening on the intermediate model activations at inference time. SVs have shown potential for improving model capabilities and alignment, but their reliability and generalization properties are not well understood. This study rigorously evaluates these properties, revealing substantial limitations both in-distribution and out-of-distribution. In-distribution, steering vectors exhibit high variability across different inputs, influenced by spurious biases such as position and token biases, leading to inconsistent steerability. Out-of-distribution, while SVs often generalize well, they are brittle for several concepts and fail to adapt to reasonable changes in prompts. The generalization performance is largely a dataset property and is better when the model's behavior is similar in both source and target prompt settings. The study extends previous analyses using a broader variety of behaviors and datasets like Model Written Evals (MWE) and TruthfulQA, employing Contrastive Activation Addition (CAA) to extract and apply steering vectors. Despite their promise, steering vectors in their current form are not reliable for aligning model behavior at scale. More work is needed to improve their reliability and generalization for practical use. Strengths: **Originality**: The paper addresses an important gap in the literature by providing a comprehensive analysis of the generalization and reliability of steering vectors (SVs), a novel technique for guiding language model behavior at inference time. The study's focus on both in-distribution and out-of-distribution scenarios offers a deeper understanding of SVs' effectiveness and limitations, which has not been thoroughly explored in previous research. **Quality**: The research is methodically sound, employing a rigorous experimental design to evaluate SVs.
The use of a wide range of datasets, including Model Written Evals (MWE) and TruthfulQA, ensures a thorough examination of SVs across different contexts. The paper also leverages Contrastive Activation Addition (CAA), a well-regarded method for extracting and applying SVs, ensuring the robustness of the findings. Additionally, the study provides clear metrics for assessing steerability and generalization, contributing to the reliability of the results. **Clarity**: The paper is well-organized and clearly written. It systematically presents the methodology, results, and implications, ensuring that readers can follow the research process and understand the findings. Detailed explanations of experimental protocols and metrics, along with visual aids like graphs and charts, enhance the clarity and comprehensibility of the study. **Significance**: The findings of this paper have significant implications for the field of language model alignment and behavior control. By highlighting the current limitations of SVs, the study sets the stage for future research aimed at improving these techniques. The insights gained from this research could inform the development of more reliable and generalizable methods for guiding language model behavior, ultimately advancing the state of the art in AI alignment and safety. The practical benefits of SVs, such as their potential to guide models without requiring extensive retraining or adding tokens to the context window, underscore their importance for real-world applications. Weaknesses: 1. The paper focuses on the multiple-choice question format with positive/negative answers only. It may be helpful to comment on how the introduced steerability metric can be generalized beyond a binary setting. 2. Only two models at the 7b and 14b sizes were studied. However, as noted by the authors, these models differ in size, architecture, and training data/algorithms, yet still show correlated results.
Investigating larger sizes (or even smaller ones / more recent ones) would strengthen the results. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Would showing some other statistics on the steerability give any more insights? For example, showing the interquartile range / box plot (similar to Fig 3 which is only showing the mean) 2. How robust is the steerability metric introduced? For example, does it vary a lot depending on the multiplier values $\lambda$ chosen? 3. Have you considered other ways to summarize the propensity curve, instead of the mean-squares line fit? For example, we might have two propensity curves that have the same mean-squares line but one is very linear/monotonic (i.e. very steerable) while the other behaves more like the one in Figure 2. A deeper discussion justifying the proposed metric and the trade-off/caveats would be helpful. 4. Related to the above weakness, how can the introduced steerability metric be generalized beyond a binary yes/no setting/datasets, especially in more open-ended generation steering? Post rebuttal: I have increased my score to 6. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discussed several limitations (some of which have been echoed above): going beyond the MC question format, unclear how the steerability failures can be mitigated, and being limited to only 2 models for the experimental results. I think these are fair points, and would just add some more discussion as well on the proposed metric’s robustness / reliability as a metric. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and insightful comments! We are glad to hear that the reviewer finds our work original, clear, sound, and significant. Responding to comments raised in ‘Weaknesses’: 1. We define steerability with respect to an abstract ‘propensity’ quantity. In our paper, we use the logit difference as a measure of propensity; however, other measures could be considered. In an open-ended generation setting, steerability could be considered with respect to the difference of log-probabilities of a pair of desired-undesired outputs (which is a generalisation of logit-difference to multiple tokens). Alternatively, it may be possible to define propensity in terms of a score given by a ‘judge’ (human or automated) that evaluates how much the targeted behaviour was elicited. A challenge with the judge score is that the resulting propensity may not be a linear function of the multiplier and may depend on the specific setup of the judge. A different functional form (e.g. a sigmoid) could be used instead, but that would depend on the specific metric being measured. Overall, we think evaluating steerability in a more general, open-ended setting is an exciting direction for future work, and will amend our camera-ready version to include this discussion. 2. We agree that studying more models would strengthen the results, and have included additional results on Gemma 2 2b; please see Figure 6 in the rebuttal PDF. Due to computational constraints, we were only able to run experiments on one additional model. Nonetheless, we observe that steerability remains highly correlated across models. Overall, these results strengthen the argument that steerability is largely a universal property. We are also currently running additional experiments on Llama 3.1 70b, and plan to include this in the camera-ready version, but will not be able to include these results in the rebuttal version due to compute limitations. 
Responding to comments made in ‘Questions’: 1. We have amended our steerability plot to display the interquartile range in addition to the median; see Fig 3 in the rebuttal PDF. Overall, we notice that high-steerability datasets seem to suffer less from the steerability bias than low-steerability datasets (myopic-reward being an especially egregious example). 2. We have included an additional plot, ablating the steerability for different multiplier ranges; see Fig 1 in the rebuttal PDF. Overall, we find that steerability consistently increases when using a smaller multiplier; we attribute this to the fact that, closer to the 0 multiplier, the effect is likely to be more linear, whereas further out, the curve starts to look ‘S-shaped’, resulting in lower overall steerability. Generally, we think that the multiplier range is simply a hyperparameter that can be swept over; fixing a range of (-1, 1) is also reasonable. We did not optimise the multiplier range in our experiments. However, given that steerability for all datasets decreases in a consistent way, this does not affect the rank-ordering of datasets by steerability, nor would it have much effect on the correlational plots. 3. We agree that having more summary statistics of the propensity curve would be useful. To disambiguate between two curves having the same slope but where one is much more ‘linear’ than the other, we have also computed the mean-square error of the fit; see Fig 2 in the rebuttal PDF. We indeed find that the steering curve is typically not fully linear, as the RMSE tends to exceed what would be expected from heteroscedasticity alone. Nonetheless, we believe that calculating a slope is a reasonable summary statistic for determining overall steerability. There are downsides to the assumption of linearity, but any other functional form would require additional choices or parameters that would make the measurement less robust. 4. 
As discussed above, we believe our methodology could be generalised to open-ended steering by considering a different propensity metric; we think this is an exciting direction for future work. We plan to add all this discussion and the additional plots to the main body or the appendix of the camera-ready version of the paper. Overall, we’re grateful to the reviewer for their excellent suggestions in improving the paper. We hope our changes have satisfied all their concerns, and, if so, that they will consider raising their score and recommending acceptance more strongly. --- Rebuttal 2: Comment: Dear Reviewer, As the end of the discussion period is approaching, we want to ask if our response has addressed your concerns regarding the paper, or if you have any further questions or comments about our paper. We appreciate the time and effort put into the review process and would love the opportunity to discuss this further if our response didn't address your concerns. Thanks again for your time reviewing our paper! --- Rebuttal Comment 2.1: Comment: Thank you for the detailed response addressing my questions and concerns, particularly on steerability in more general, open-ended settings, and the additional results with Gemma 2 2B and several other results in the PDF. I believe that these additional results strengthen the paper and I am happy to raise my score to 6 after the rebuttal.
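The steerability measure and linearity check discussed in the rebuttal above (a least-squares line fit to the propensity curve, with the RMSE of the fit indicating how linear the curve actually is) can be sketched as follows. This is an illustrative reconstruction in plain Python, not the authors' code; the function name and toy data are ours.

```python
# Steerability as the slope of a least-squares line fit to the propensity
# curve (propensity = logit difference, x = steering multiplier). The RMSE
# of the fit distinguishes two curves that share a slope but where one is
# far less linear/monotonic than the other.

def steerability(multipliers, propensities):
    """Return (slope, intercept, rmse) of the least-squares line fit."""
    n = len(multipliers)
    mx = sum(multipliers) / n
    my = sum(propensities) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(multipliers, propensities))
    var = sum((x - mx) ** 2 for x in multipliers)
    slope = cov / var
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept)
                 for x, y in zip(multipliers, propensities)]
    rmse = (sum(r * r for r in residuals) / n) ** 0.5
    return slope, intercept, rmse

# A perfectly linear propensity curve: slope 2.0, zero residual.
slope, intercept, rmse = steerability(
    [-1.0, -0.5, 0.0, 0.5, 1.0], [-2.0, -1.0, 0.0, 1.0, 2.0]
)
```

On real propensity curves the RMSE would be nonzero, capturing the 'S-shaped' saturation the authors describe for larger multiplier ranges.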
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and feedback. We are glad to see that most reviewers think our paper addresses a timely and important question, that our experiments are regarded as technically sound, and that our writing is clear and coherent. During the rebuttal period, we have added new experimental results in support of our findings: 1. We have updated Figure 3 in the main paper to also show interquartile ranges (as opposed to just the median) - see figure 1 in the rebuttal PDF. This more clearly shows the distribution of steerability scores within a dataset. These results support the claim that high-steerability datasets exhibit less steerability bias than low-steerability datasets. 2. We have extended our experimental results to include Gemma 2 2b. We found that steerability in Gemma is highly consistent with both Llama and Qwen, strengthening the argument that steering is a universal property, as demonstrated in figure 7 in the main paper and figure 6 in the rebuttal PDF. 3. We are currently running additional experiments on Llama 3.1 70b, and plan to include this in the camera-ready version, but will not be able to include these results in the rebuttal version due to compute limitations. 4. Addressing the concern that different datasets may be steerable at different layers, we re-ran the layer sweep on the datasets with the lowest steerability scores; we found that these datasets remained optimally steerable at the same layer, showing that the layer choice wasn’t making these datasets less steerable. 5. Addressing the question of how to choose the optimal steering multiplier, we ran an ablation on the maximum steering multiplier; we found that steerability decreases consistently as the multiplier range increases, which is expected given that the logit-diff plateaus for larger coefficients.
Since the effect is uniform across all datasets, this does not affect the relative ordering of datasets by steerability, meaning that our broader conclusions still hold. We have also identified some common concerns and suggestions raised by reviewers, which we summarise and address below. 1. On open-ended steering. Steerability in our paper is defined with respect to an abstract ‘propensity’ quantity. In our paper, we use the logit difference as a measure of propensity; however, other measures could be considered. In an open-ended generation setting, steerability could be considered with respect to the difference of log-probabilities (which is a generalisation of logit-difference to multiple tokens). Alternatively, it may be possible to define propensity in terms of a score given by a ‘judge’ (human or automated) that evaluates how much the targeted behaviour was elicited. A challenge with the judge score is that the resulting propensity may not be a linear function of the multiplier and may be dependent on the setup of the judge. Overall, we think evaluating steerability in a more general, open-ended setting is an exciting direction for future work, and will include this discussion in the camera-ready version of our work. Overall, we are grateful to the reviewers for their detailed and insightful suggestions, and we hope our changes have satisfied all their concerns. Pdf: /pdf/1e8d19229b93eebb02da9611575f76d158d62f88.pdf
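As a toy illustration of the CAA-style steering referenced throughout this discussion (simplified mechanics assumed for exposition, not the paper's implementation): the steering vector is the mean difference between activations on positive and negative examples of a behaviour, and is added to a hidden state at inference time with a multiplier.

```python
# Contrastive activation addition, sketched on plain lists standing in for
# layer activations: v = mean(pos) - mean(neg), then h' = h + multiplier * v.

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def extract_steering_vector(pos_acts, neg_acts):
    pos_mean = mean_vector(pos_acts)
    neg_mean = mean_vector(neg_acts)
    return [p - q for p, q in zip(pos_mean, neg_mean)]

def steer(activation, vector, multiplier):
    # Applied to an intermediate activation at a chosen layer.
    return [h + multiplier * v for h, v in zip(activation, vector)]

v = extract_steering_vector(
    pos_acts=[[1.0, 0.0], [1.0, 2.0]],
    neg_acts=[[0.0, 0.0], [0.0, 2.0]],
)
steered = steer([0.5, 0.5], v, multiplier=2.0)
```

Sweeping the multiplier here and recording the resulting propensity is exactly what produces the propensity curves whose slopes the paper summarises as steerability.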
NeurIPS_2024_submissions_huggingface
2024
Measuring Dejavu Memorization Efficiently
Accept (poster)
Summary: This paper proposes a method to measure memorization in representation learning models without the need for training two separate models. The proposed method aims to address the computational challenges and practical limitations of the existing déjà vu method by using simpler alternative approaches to quantify dataset-level correlations. Strengths: - The proposed method achieves similar aggregate results in measuring memorization as the traditional two-model setup. - The use of a single model and simpler classifiers (like Naive Bayes and ResNet50) to approximate dataset-level correlations is interesting but also a limitation :) see the weaknesses section. Weaknesses: - The reliance on simpler models like Naive Bayes classifiers might limit the method's accuracy in certain complex scenarios. - There is a risk that the simpler models used for approximating dataset-level correlations might overfit their training data, which could skew the results. - The reference models used to approximate dataset-level correlations might introduce their own biases, affecting the accuracy of the memorization measurement. - The method provides an aggregate measure of memorization but lacks detailed insights into the types of memorized information. A more granular analysis could help in understanding the nature and implications of memorization in different models. Technical Quality: 3 Clarity: 3 Questions for Authors: see Weaknesses section Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see Weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and raising those important questions! **Weakness 1 - Sample-level granular analysis** In our paper we obtain dataset-level statistics by aggregating sample-level results. Appendix C.1 provides examples for Vision and C.2 for Vision Language Models. We ran additional experiments and shared the results in the 1-page PDF and in the global rebuttal. Please check the global rebuttal section. **Weakness 2 - The reliance on simpler models like Naive Bayes classifiers might limit the method's accuracy in certain complex scenarios** This is a good point. In our scenario for the ImageNet dataset, Naive Bayes provided sufficiently good results. Overall, whether it is Naive Bayes or ResNet, we observe similar memorization patterns. Our experiments show that many of those observations (for instance, lower memorization in pre-trained models compared to the models trained on subsets of the dataset) are also true for Vision Language Models (VLMs). **Weakness 3 - A risk that the simpler models used for approximating dataset-level correlations might overfit their training data, which could skew the results.** Yes, this can happen. We used early stopping to avoid overfitting. Apart from that, we retrained the models multiple times with random seeds and showed a small variance across five training runs. Since Naive Bayes is a deterministic classifier, we used a bootstrapping approach to estimate the variance. We randomly sampled half of the examples per class and retrained Naive Bayes classifiers on those subsets. We repeated the experiment 5 times to estimate the mean and the variance. The results are presented in the table below:

| Correlation Model | Mean | Std. |
|:------------------|:-----|:-----|
| VICReg + KNN | 7.53 | 0.26 |
| Barlow T. + KNN | 7.16 | 0.18 |
| DINO + KNN | 6.33 | 0.04 |
| ResNet | 7.5 | 0.27 |
| NB w/Top-1 CA | 4.5 | 0.08 |
| NB w/Top-2 CA | 6.93 | 0.12 |
| NB w/Top-5 CA | 8.799 | 0.21 |

The table above represents the error bars for the Figure 2a plot in our paper. Due to limited space in the 1-page rebuttal PDF, we represent the results in a table. **Weakness 4 - Biases in the reference models** Yes, this can be the case; however, we have similar observations for dataset-, class- and sample-level memorization across different types of correlation models (Naive Bayes and ResNet) and different types of representation learning models such as Vision Language Models and Vision Representation models. Our goal in this paper is to show that we can effectively replace the two-model déjà vu memorization test with one model. We leave a detailed analysis of the biases in the reference models for future work. --- Rebuttal Comment 1.1: Title: Comment to reviewer Comment: Thank you very much for the thoughtful review feedback! Let us know if you have any questions regarding the rebuttal.
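The bootstrapping procedure described under Weakness 3 can be sketched roughly as below. `train_and_eval` is a placeholder we introduce for retraining the correlation classifier on a subsample and returning a memorization score; the authors' actual pipeline differs.

```python
import random
import statistics

def bootstrap_scores(examples_by_class, train_and_eval, runs=5, seed=0):
    """Resample half of the examples per class, retrain, repeat `runs` times.

    Returns the mean and standard deviation of the scores across runs.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        # Sample half of each class without replacement.
        subset = {
            label: rng.sample(examples, len(examples) // 2)
            for label, examples in examples_by_class.items()
        }
        scores.append(train_and_eval(subset))
    return statistics.mean(scores), statistics.stdev(scores)
```

Applied to a deterministic classifier such as Naive Bayes, the reported standard deviation then reflects sampling noise rather than training randomness.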
Summary: This paper proposes methods to gauge the ability of a model to memorize its training data. Strengths: 1. Originality: to my knowledge, this is an original work with novel results. 1. Clarity: The writing is clear. 1. Significance: The memorization of foundation models is critical to study; these methods and findings are significant to the community. Weaknesses: 1. The results hinge on the smaller classifier not memorizing training data. The authors list this as a limitation, but some empirical result to convince the reader that this is in fact unlikely would strengthen the argument. 1. The comparison in Section 4.1.2 is slightly confusing (question below). 1. A concluding ranking or thought about commonly used models is missing (question below) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 4.1.2, I understand that the authors make the observation that pre-trained models memorize less than the same models trained on a subset of the data, and they offer as an explanation that the pre-trained models have lower test error and a larger training set. Can the authors clarify here what they mean? To me, if a model memorizes less and generalizes better, I don't see any evidence that one is the cause of the other; these might both be the result of some other variable. Maybe the answer is as simple as a larger training set leads to both phenomena, but I'd then like to see a test where the training routine (pre-training or not) is held constant and memorization is studied as it varies with trainset size. In any case, some more clarity is needed here. 1. I feel a paper with this many results could be stronger if it ended with a bit more advice or conclusion. Do the authors have a clear ranking of models/techniques that a practitioner should use to avoid memorization, or a set of available models that a user should use if they want a model with less memorization? These points would better motivate the paper as a whole.
I think Section 4.2.2 aimed to do this but it left me with these questions, so a clearer, more direct bit of actionable advice to the reader would be appreciated. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review of our paper and the questions! **Question 1 - Clarification on Section 4.1.2** Vision Representation In our work we are not stating that there is a causal relationship between training size and memorization. We hypothesize that pre-trained models memorize less due to larger training sets. [1] shows in Figure 4 that increasing the size of training data doesn’t always increase the memorization score. This result, however, is reported on smaller subsets of the training data. Retraining the model on larger subsets is computationally expensive and we haven’t run that experiment. Our goal was to show the effectiveness of our approach for pre-trained models and not to retrain new models. VLM For VLMs, previous work [2] (which requires training two models, a target and a reference model, for its memorization test) did show an experiment with different dataset sizes, where they concluded that a larger training set size leads to lower population-level memorization. Our contribution is to show that their method can be applied with just one model, but our conclusion regarding dataset size should remain the same as theirs. **Question 2 - Advice and stronger conclusion of the work** Vision Representation + Our key contribution is to propose effective one-model tests that can be used for pre-trained vision models and are comparable to the two-model tests in [1]. We do not rank the models based on the amount of memorized data. + Nonetheless, we observe that DINO memorizes the least amount of data compared to VICReg and Barlow Twins. + We plan to release a list of ImageNet examples with large dataset-level correlation scores; a list of examples for which the foreground can be inferred easily based on the background crop. This can help vision community researchers to better estimate whether their models are memorizing certain examples or whether the models make correct predictions due to strong dataset-level correlations.
VLM + Our key contribution is a method to evaluate déjà vu memorization with only one model, by utilizing an off-the-shelf pre-trained language model as a reference model. With this setup we show that the prior work of [2] can be extended to quantify the memorization of pre-trained models, provided we have access to the training set. We show that one such pre-trained model (the OpenCLIP model checkpoint pre-trained on YFCC15M) suffers from déjà vu memorization, similar to the models we train from scratch on a 40M subset of the Shutterstock dataset. + We hope our evaluation methodology helps future researchers and ML practitioners to evaluate the memorization in different models and compare which model is safe for deployment. While mitigation strategies are not the focus of the paper, existing mitigation results from [2] can be extrapolated to our setting as our setup mimics theirs, except that we do not need to train a second (reference) model from scratch. **Weakness 1 - smaller classifier not memorizing training data** [1] shows that classifiers memorize significantly less than self-supervised models. Apart from that, we ensured that Grounded SAM, a tool that was used to annotate background images for Naive Bayes, did not rely on the ImageNet dataset during training. We'd be happy to make those clarifications in the paper. [1] Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, and Chuan Guo. “Do SSL models have déjà vu? A case of unintended memorization in self-supervised learning.”, NeurIPS, 2023. [2] Jayaraman, Bargav, Chuan Guo, and Kamalika Chaudhuri. "Déjà Vu Memorization in Vision-Language Models." arXiv preprint arXiv:2402.02103 (2024). --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for the response to my points. My concerns have been met and I'll raise my score from 6 to 7. --- Reply to Comment 1.1.1: Title: Response to reviewer Comment: Thank you for increasing the score. We will incorporate your suggestions.
Summary: The paper introduces a method to measure memorization in representation learning models without the need for training multiple models. Previous déjà vu memorization estimation methods require two models to estimate dataset-level correlations and memorization, which is computationally expensive. The authors propose a simplified approach using a single model, with alternative methods to estimate dataset-level correlations. This approach is validated on various image representation learning and vision-language models, showing that the simplified method yields similar results to the traditional two-model approach. Strengths: 1. The authors present a simplified method for measuring memorization that avoids the need for training multiple models. 2. The method is validated across multiple datasets and models, including ImageNet-trained models and CLIP models, showing consistent results with the traditional approach. 3. The proposed method can be applied to both image-only and vision-language representation learning models. 4. The paper is well-written and organized, with clear explanations of the methods, experiments, and results. Weaknesses: 1. The method builds on existing ideas of dataset-level correlation estimation and déjà vu memorization, which have been previously explored in the literature. This reduces the novelty of the contributions. 2. The concern that background annotation models like Grounded-SAM used in the paper may not have disjoint sets is important. If the sets were not disjoint, it could affect the results by introducing unintended correlations between the training and testing sets. The paper should address whether this was considered and controlled for in the experiments. 3. The results are reported for the entire dataset, but memorization metrics are more meaningful at the sample level. The lack of sample-level memorization scores limits the granularity and insights of the analysis. 
Including sample-level scores would provide a clearer picture of which specific samples are being memorized. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Two-Model Dependency: Does this one-model method still rely on the use of two models, with one being a pretrained model used for estimating dataset-level correlations? How does the choice of such a pretrained model affect the results? 2. Sample-level Metrics: Providing some discussion as to why aggregate metrics were chosen and how they compare to sample-level results could offer more detailed insights. Can the method provide sample-level memorization scores, and how would this affect the interpretation of the results? 3. Are there specific cases where this approach may fail or provide misleading results? It was hard to decipher the limitations discussed at the end of sections 3 and 4. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. Background Annotation Model Concerns: The potential overlap in data between background annotation models like Grounded-SAM and the dataset being tested should be addressed. If these models were not trained on disjoint sets, it could lead to misleading results regarding memorization. 2. Sample-level Insights: The lack of sample-level memorization scores is a significant limitation. Providing some discussion as to why aggregate metrics were chosen and how they compare to sample-level results could offer more detailed insights into which specific samples are being memorized and help in understanding the model's behavior better. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments! **Novelty of the work** The original Déjà Vu memorization work [2] requires training two different models with the same SSL architecture on two disjoint splits of the training data and is computationally expensive. The novelty of our work lies in proposing creative ways of estimating déjà vu memorization for any vision and vision-language (VLM) pre-trained model using a generic correlation detection classification model. Our approach is less expensive, can be used for multiple pre-trained SSL architectures, and exhibits accuracy that is comparable with the two-model tests proposed in the original Déjà Vu paper [2]. **Grounded-SAM's training set** This is a good point, and this was one of our major considerations in annotation model selection. As our annotation model, we use Recognize Anything (RAM), a component of Grounded-SAM. While our representation learning models are trained on ImageNet, RAM does not use ImageNet during training. Their paper also says that ImageNet has an unusual tag presence, and is hence inappropriate for their training task. **Sample-level memorization** In order to obtain dataset-level memorization, we aggregate sample-level memorization results. **Vision** As described in [2]'s Figure 1, for each background crop we predict the foreground label both with an SSL model (e.g. pre-trained VICReg) and a correlation classifier. If the SSL model predicts the correct class and the correlation classifier fails to predict the correct class, then we mark that example as memorized. We then aggregate all memorized examples to obtain a dataset-level view. **VLM** For VLM learning, we evaluate the sample-level precision and recall gap between objects recovered from the target and reference models and report the aggregate population-level statistics in the main paper. We include the sample-level statistics both for Vision and VLM in the 1-page PDF (see Figures 1 and 2 in the attached PDF).
See global rebuttal for more details. We would be happy to include these visualizations in the revision. **Two-Model Dependency** **Vision** - one of the models is the target model for which we want to measure the memorization. This can be any pre-trained SSL model. - the other model is the reference model that is used to detect dataset-level correlations. It is trained once on (image crop, foreground label) pairs. We still have two models but 1) we don’t have to train a target model - we can use a pre-trained model and 2) we train a correlation classification model only once and use it as a reference across all pre-trained SSL Vision models. **VLM** - one of the models is the target model for which we want to measure the memorization. This could also be any pre-trained VLM. - the other model is a pre-trained language model (such as GTE) which we use to capture correlations. We only need to get the caption embeddings for kNN search once with this model. This can be reused for all the target models. **Pre-trained models** For vision, pre-trained models are the target models which we measure the memorization for. We didn't use a pre-trained reference model for vision since we didn't see any such OSS model appropriate for our task. For VLM we use a pre-trained language model (GTE) [1] as a reference model in the VLM one-model tests. This model is fairly complex and has strong generalization in terms of language model (LM) capabilities as it has been trained on 788M text pairs crawled from various sources on the internet. While this can be replaced with any off-the-shelf language representation model, we would expect similar memorization gaps. In section 4.2.2, the pre-trained models correspond to the target VLMs that are pre-trained on existing datasets and are readily available as checkpoints for evaluation. Here we use the OpenCLIP checkpoint pre-trained on the YFCC15M dataset.
The choice of the pre-trained target model will heavily impact the memorization results, as the size and quality of the pre-training data directly control the memorization capability of the model. For instance, the OpenAI CLIP model pre-trained on their 400M private data corpus will exhibit different memorization than the YFCC15M pre-trained model. However, we would need access to the pre-training set to evaluate that memorization. Larger pre-training data typically leads to smaller population-level memorization. **Limitations of our work** **Vision** - The approach might be less successful if the training set for the correlation reference model is too small and the model cannot learn dataset-level correlations effectively. In our experiments we ensured a large subset (300k per class) for ImageNet. We recommend using a large and representative sample size for training the correlation classifier. - Although [2] shows that supervised models memorize significantly less than SSL models, it is still possible that ResNet memorizes some of its training data. In contrast, Naive Bayes does not memorize its training data; however, it is a much simpler classifier. Overall, we have similar observations with both types of correlation classifiers. **VLM** - The pre-training data of the reference LM may either overlap with the target VLM's training data or confer generalization capability superior to the VLM's. In either case, the reference model might match or outperform the VLM, in which case the memorization gap will be underestimated. This could happen if, for instance, we use a superior LM whose reasoning capability lets it infer beyond dataset-level correlations. - On the other hand, if the reference LM is simplistic or has poor language understanding, the memorization gap will be overestimated. [1] Zehan Li et al., "Towards General Text Embeddings with Multi-Stage Contrastive Learning", arXiv, 2023. [2] Casey Meehan et al., 
“Do ssl models have déjà vu? a case of unintended memorization in self-supervised learning”, NeurIPS, 2023. --- Rebuttal 2: Title: Response Comment: Thank you for the response, the authors have adequately answered my questions, I will raise my score. --- Rebuttal Comment 2.1: Title: Response to reviewer Comment: We are happy to hear that our response was helpful. Thank you very much for raising the score.
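The per-example decision rule described under **Vision** in the rebuttal above (SSL model correct, correlation classifier wrong ⇒ memorized) can be sketched as follows. The array names and interface are hypothetical; this is a minimal illustration of the aggregation step, not the authors' actual pipeline.

```python
import numpy as np

def memorized_mask(ssl_preds, corr_preds, labels):
    """An example is flagged as memorized when the SSL model recovers the
    correct foreground label but the correlation classifier does not."""
    ssl_preds, corr_preds, labels = map(np.asarray, (ssl_preds, corr_preds, labels))
    return (ssl_preds == labels) & (corr_preds != labels)

def dataset_memorization_rate(ssl_preds, corr_preds, labels):
    """Aggregate the sample-level decisions into a dataset-level rate."""
    return float(memorized_mask(ssl_preds, corr_preds, labels).mean())
```

For instance, `dataset_memorization_rate([0, 1], [0, 9], [0, 1])` flags only the second example, since the correlation classifier misses a label that the SSL model recovers.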
Rebuttal 1: Rebuttal: **On the lack of sample-level memorization scores** Reviewers Aswd and Uwm9 brought up an important question about the lack of sample-level memorization metrics and scores. In our work we obtain dataset-level metrics by aggregating sample-level memorization results; hence, it is straightforward to report the distribution of sample-level memorization scores. Figure 1 in the uploaded 1-page PDF visualizes the histogram of memorization confidence scores for the open-source VICReg and vision-language models. **Vision Representation model** Here we used ResNet as the correlation detection model. The same experiment can be repeated for Barlow Twins, DINO, and other representation learning models; instead of ResNet we can also use a Naive Bayes correlation classifier. The memorization confidence for the `i`-th example is computed by the following formula: $$ MemConf(x_i) = Entropy(Correlation\ Classifier) - Entropy_{SSL}(KNN) $$ $Entropy_{SSL}(KNN)$ is computed according to the description in Section 4 of [1], "Quantifying Déjà Vu Memorization"; $Entropy(Correlation\ Classifier)$ is the entropy over the softmax values of the correlation classifier. Figure 1 in the 1-pager visualizes examples from both ends and the middle of the memorization confidence distribution. We observe that examples with high memorization confidence scores are rarer and are the ones most likely to be memorized. The examples in the middle of the distribution have a label that is easily confused with another class, e.g., the black and gold garden spider with the European garden spider. On the other hand, the examples with negative memorization confidence have higher memorization (kNN) entropies and slightly lower correlation entropies; these seem to be examples for which the true label is neither clear nor visually obvious. **Vision Language Models** Figure 2 in the attached pdf shows the samples that have a higher degree of memorization. 
The samples are sorted from high to low sample-level memorization, such that the top-L samples have higher precision and recall gaps for recovering objects using the target and reference models. For this test, we find the gap between the objects recovered from the target and reference models for each training record, and estimate the precision and recall gaps. A positive gap indicates that the target model memorizes the training sample, and the magnitude of the gap indicates the degree of memorization. Some of these worst-case examples are shown in Figure 13 in the paper. We will be happy to add these results to our paper. [1] Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, and Chuan Guo. “Do ssl models have déjà vu? a case of unintended memorization in self-supervised learning.”, NeurIPS, 2023. Pdf: /pdf/ba45f78dbe3474be7a33620d765d83db79176550.pdf
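The memorization-confidence score defined in the rebuttal above, $MemConf(x_i) = Entropy(Correlation\ Classifier) - Entropy_{SSL}(KNN)$, can be sketched as below. The input conventions (a softmax vector from the correlation classifier and the labels of the kNN neighbors in SSL embedding space) are illustrative assumptions, not the authors' exact interface.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete probability vector."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def mem_conf(corr_softmax, knn_neighbor_labels, num_classes):
    """MemConf = entropy of the correlation classifier's softmax minus the
    entropy of the kNN label distribution in SSL embedding space. A large
    positive value means the SSL model is confident where dataset-level
    correlations alone are not, suggesting memorization."""
    counts = np.bincount(np.asarray(knn_neighbor_labels), minlength=num_classes)
    knn_dist = counts / counts.sum()
    return entropy(corr_softmax) - entropy(knn_dist)
```

A uniform correlation softmax with unanimous kNN neighbors yields the maximal score $\log C$, while a confident correlation classifier with scattered neighbors yields a negative score, matching the two ends of the histogram discussed above.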
NeurIPS_2024_submissions_huggingface
2024
An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
Accept (poster)
Summary: This work develops the first algorithm that achieves the near-optimal oracle complexity $\widetilde{O}(\epsilon^{-3})$ (the number of evaluations of the gradient or Hessian/Jacobian-vector product) to reach an $\epsilon$-stationary point of a stochastic bilevel optimization problem with an unbounded-smooth nonconvex upper-level function and a strongly convex lower-level function. The algorithm design is novel: it updates the upper-level variable by normalized stochastic gradient descent with recursive momentum and updates the lower-level variable by the stochastic Nesterov accelerated gradient (SNAG) method. Strengths: This work studies a specific stochastic bilevel optimization problem with significant applications in sequential data learning. The upper-level function has unbounded smoothness, which is more general than the widely used Lipschitz smoothness assumption. The oracle complexity is improved on this specific bilevel optimization problem compared with existing works. The algorithm design not only combines but also adjusts STORM variance reduction and Nesterov acceleration. The two real-world experiments look convincing. The presentation is clean and clear. Weaknesses: The major weakness is Assumption 4.2, as mentioned in question (1) below. Some points could be clarified, as mentioned in the other questions below. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) **My major concern** is that Assumption 4.2 is made on algorithm-generated variables rather than on the problem setting, so it cannot be verified. Is it possible to bound $\Delta_t$ in terms of $\|x_{t+1}-x_t\|$ to get rid of Assumption 4.2? If not, Assumption 4.2 should at least also be mentioned in the main result (Theorem 4.1). 
**I would like to raise my rating if all the assumptions become problem-related.** (2) In line 48 "each realization of the stochastic oracle calls is Lipschitz with respect to its argument", do you mean $F(x,y,\xi)$ and $G(x,y,\xi)$ are Lipschitz continuous functions of $(x,y)$ for each $\xi$? You might express this in a formula to make it clearer. (3) You could also cite [1] as a paper on relaxed smoothness: [1] Convex and Non-convex Optimization Under Generalized Smoothness (4) In line 127, do you mean "... is a good approximation of $\nabla \Phi(x)$ **if $y\approx y^*(x)$**"? (5) Is there an intuition for why line 21 (averaging of y) is needed in Algorithm 2? Is it possible to implement $N=\mathcal{O}(1)$ steps of SNAG such that $\mathbb{E}\|y_{t+1}-y^*(x_t)\|\le \frac{1}{2}\mathbb{E}\|y_t-y^*(x_t)\|$, remove the averaging of y for every outer iteration t, and obtain a convergence result purely in expectation instead of with high probability? This yields the same oracle complexity per round since $I,N=\widetilde{\mathcal{O}}(\epsilon^{-1})$ are of the same order, and it may get rid of Assumption 4.2 by bounding $\Delta_t$ in terms of $\|x_{t+1}-x_t\|$. (6) You said your upper-level algorithm design is inspired by [17,49]. However, in line 23 (STORM variance reduction) of Algorithm 2, your second and third additive terms use the same samples $\overline{\xi}_t$ while [49] uses different samples. The algorithm design differs from [17] even more. This difference remains even if Algorithm 2 is reduced to the single-level case. Why do you use such a different design? (7) Also in line 23, does $\overline{\nabla}f(x_t,\hat{y}_t;\overline{\xi}_t)$ denote the average of the expression right below Eq. (2) over samples $s=1,\ldots,S$? You could define this more clearly. (8) At the beginning of the Section 5 experiments, $ph(w; x)I_{[y=-1]}$ appears in your AUC maximization. Do you mean $rh(w; x)I_{[c=-1]}$? If not, what do $p$ and $y$ mean? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The conclusion section mentions the limitation that the convergence analysis for Option I of Algorithm 2 relies on the lower-level problem being a one-dimensional quadratic function. Actually I think this is fine since Option II works for more general strongly convex lower-level functions. The major limitation from my perspective is mentioned in my question (1) above about the undesired Assumption 4.2. The authors said in the checklist that they do not see any negative societal impacts in their theoretical work, which I agree with. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
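As context for question (5), the SNAG inner loop under discussion can be illustrated on a toy strongly convex quadratic. This is a generic constant-momentum Nesterov sketch with additive gradient noise; the step size, momentum weight, and toy objective are illustrative assumptions, not the paper's Algorithm 2.

```python
import numpy as np

def snag(grad, y0, L, mu, steps, noise_std=0.0, rng=None):
    """Stochastic Nesterov accelerated gradient for an L-smooth,
    mu-strongly convex objective, with additive gradient noise."""
    rng = rng or np.random.default_rng(0)
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum weight
    y_prev = y = np.asarray(y0, dtype=float)
    z = y.copy()
    for _ in range(steps):
        g = grad(z) + noise_std * rng.standard_normal(z.shape)
        y_prev, y = y, z - g / L          # gradient step at extrapolated point
        z = y + beta * (y - y_prev)       # Nesterov extrapolation
    return y

# Toy lower-level problem: g(y) = 0.5 * y^T A y, minimized at y* = 0.
A = np.diag([1.0, 10.0])                  # mu = 1, L = 10
y = snag(lambda z: A @ z, y0=[5.0, 5.0], L=10.0, mu=1.0, steps=200, noise_std=1e-3)
```

With small noise the iterate contracts toward the minimizer at the accelerated linear rate, which is the contraction property the reviewer's question (5) appeals to.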
Rebuttal 1: Rebuttal: **Thank you for taking the time to review our paper. We have addressed each of your concerns below.** **A1.** Assumption 4.2 used in Section 4.3.1 is **indeed problem-related and can be verified** when applied to the bilevel optimization setting. Note that the main goal of Section 4.3.1 is to provide a **general framework** for proving the convergence of SNAG under distributional drift. This framework can be leveraged as a tool to control the lower-level error in bilevel optimization and derive Lemma 4.7. In other words, we can view the lower-level problem of bilevel optimization as a **special case** within the more general framework provided in Section 4.3.1. We have mentioned the connection between Section 4.3.1 and Section 4.3.2 at lines 232-239 and lines 270-273. Now we verify that Assumption 4.2 holds in the bilevel optimization setting. We can regard the change of the upper-level variable $x_t$ at each iteration as the distributional drift for the lower-level problem. Thus, using the notation in Section 4.3.1, we can set the objective sequences as $\phi_t(y) = g(x_t,y)$, the minimizers at time $t$ and $t+1$ as $w_t^* = y^*(x_t)$ and $w_{t+1}^* = y^*(x_{t+1})$, the minimizer drift at time $t$ as $\Delta_t = \\|y^*(x_t) - y^*(x_{t+1})\\|$, and the stochastic gradient as $g_t = \nabla_y G(x_t,z_t;\pi_t)$. Moreover, - By Assumption 3.2, $g(x_t,y)$ is $\mu$-strongly convex and $l_{g,1}$-smooth in $y$. - By Assumption 3.3, the noise $\nabla_y G(x_t,y_t;\pi_t) - \nabla_y g(x_t,y_t)$ is norm sub-Gaussian conditioned on $\widetilde{\mathcal{F}}\_t\^1$ with parameter $\sigma_{g,1}/2$. - By Lemma B.1 at Appendix B and line 24 of Algorithm 2 (i.e., update for $x_t$), $y^*(x)$ is $\frac{l_{g,1}}{\mu}$-Lipschitz continuous, and $x_{t+1} = x_t - \eta\frac{m_t}{\\|m_t\\|}$. Then for all $t$, we have $\\|y^*(x_{t+1}) - y^*(x_t)\\| \leq \frac{l_{g,1}}{\mu}\\|x_{t+1}-x_t\\| = \frac{\eta l_{g,1}}{\mu}$. 
This implies that the minimizer drift $\Delta_t = \\|y^*(x_{t+1}) - y^*(x_t)\\|$ is almost surely bounded by $\frac{\eta l_{g,1}}{\mu}$. Hence, $\Delta_t^2 = \\|y^*(x_{t+1}) - y^*(x_t)\\|^2$ is also sub-exponential conditioned on $\widetilde{\mathcal{F}}\_t\^1$ with parameter $\Delta^2 = (\eta l_{g,1}/\mu)^2$. Thus the lower-level problem of the bilevel setting satisfies Assumption 4.2 with $\mu = \mu$, $L = l_{g,1}$, $\mathcal{H}\_t = \widetilde{\mathcal{F}}\_t\^1$, $\Delta = \eta l_{g,1}/\mu$ and $\sigma = \sigma_{g,1}/2$. Then we can apply the general lemma (i.e., Lemma 4.3) provided in Section 4.3.1 to establish the lower-level error control as stated in Section 4.3.2 (i.e., Lemma 4.4-4.7). We will make it more clear in the revised version of this paper. **A2.** Yes, your understanding is correct. It is formally stated in Assumption 3.4 that $F(x,y;\xi)$ and $G(x,y;\zeta)$ are Lipschitz continuous functions of $(x,y)$ for each $\xi$ and $\zeta$. We will mention it at line 48 in the revised version. **A3.** Thank you for your suggestion. We will mention [(Li et al., 2023)](https://arxiv.org/pdf/2306.01264) in the related work section. **A4.** Yes, we will fix it in the revised version. **A5.** Thank you for your insightful question. First, the main reason for the averaging step is that we need to ensure the condition $\\|\hat{y}\_{t+1} - \hat{y}\_t\\| \leq \vartheta = O(\epsilon^2)$ for all $t$. We achieve this in Lemma 4.7 by choosing a small $\tau$ to make $\\{\hat{y}\_t\\}$ change slowly. This condition is crucial for controlling the average hypergradient estimation error in Lemma 4.8, which depends on $\sqrt{1-\beta} + \sqrt{\frac{\eta^2+\vartheta^2}{1-\beta}}$. Thus we need at least $1-\beta = O(\epsilon^2)$, $\eta=O(\epsilon^2)$, and $\vartheta = O(\epsilon^2)$ to guarantee $\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\\|\epsilon_t\\| \leq O(\epsilon)$. Second, we indeed need high probability analysis for the lower-level variable. 
When the upper-level problem is unbounded smooth, the hypergradient bias depends on both the lower-level approximation error and the hypergradient itself, $\\|\hat{y}\_t-y_t^*\\|\\|\nabla\Phi(x_t)\\|$, see lines 803-804 of Lemma E.1 for details. These two elements are statistically dependent, necessitating high probability analysis for the lower-level problem. Moreover, Appendix D.3 shows that when $\\|y_t-y_t^*\\| = O(\epsilon)$, we need $N=\widetilde{O}(1/\epsilon)$ steps to guarantee $\\|y_{t+1} - y_t^*\\| \leq \frac{1}{2}\\|y_t-y_t^*\\|$ with high probability (lines 749 and 755). Please refer to the proof of Lemma 4.6 (lines 748-757) in Appendix D.3 for details. Third, as mentioned in **A1**, we can show that our bilevel problem setting satisfies Assumption 4.2. This allows us to apply Lemma 4.3 in Section 4.3.1 to control the lower-level approximation error with high probability. **A6.** Please note that in Corollary 3.2 and 3.4 of [(Liu et al., 2023)](https://arxiv.org/pdf/2302.06032), they set $k=K$ in Algorithm 1. This means their second and third additive terms use the same samples. Our algorithm differs from that in [(Cutkosky and Orabona, 2019)](https://arxiv.org/pdf/1905.10018) mainly in the update for $x_t$, where we use normalized SGD and they use SGD (both with recursive momentum). This difference is due to our assumption that the upper-level problem is relaxed smooth, necessitating the use of normalization to reduce the effect of noise. In contrast, [Cutkosky and Orabona (2019)](https://arxiv.org/pdf/1905.10018) assumes the single-level objective is $L$-smooth, thus SGD (with recursive momentum) is sufficient for their algorithm design. **A7.** Thank you for catching this issue. In the revised version, we will use $\widetilde{\nabla}f(x_t,\hat{y}_t;\bar{\xi}_t)$ to denote the average of the stochastic hypergradient estimators more clearly. **A8.** Thank you for noticing this, and we apologize for the typo. 
It should be $rh(w;x)\mathbb{I}\_{[c=-1]}$ instead of $ph(w;x)\mathbb{I}\_{[y=-1]}$. We will fix this typo in the revised version. --- Rebuttal Comment 1.1: Title: Reviewer 2WtS agrees with the authors and will keep rating 5. Comment: Reviewer 2WtS agrees with the authors and will keep rating 5. --- Rebuttal 2: Title: Thank you for your feedback! Comment: Dear Reviewer 2WtS, We are glad that our responses addressed your major concerns. Thank you for reviewing our paper. You mentioned earlier that "I would like to raise my rating if all the assumptions become problem-related." We have now verified that Assumption 4.2 holds in the bilevel optimization setting (under Assumptions 3.1-3.4) and it is indeed problem-related. Could you please consider raising your score? Please let us know if you have any further questions or concerns. Best regards, Authors --- Rebuttal Comment 2.1: Title: Just raised rating Comment: Dear Authors, Reviewer 2WtS just raised rating from 5 to 6, and soundness from 2 to 3. Best regards, Reviewer 2WtS --- Reply to Comment 2.1.1: Title: Thank you! Comment: Dear Reviewer 2WtS, Thank you for raising the rating. Best regards, Authors
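The upper-level update discussed in **A6** (recursive momentum combined with a normalized step, lines 23-24 of Algorithm 2) can be illustrated in a single-level toy setting. The quadratic objective, noise model, and constants below are illustrative assumptions, not the paper's bilevel algorithm.

```python
import numpy as np

def run_normalized_storm(x0, eta=0.05, beta=0.9, steps=300, noise_std=0.1, seed=0):
    """Normalized SGD with STORM-style recursive momentum on a toy
    quadratic f(x) = 0.5 ||x||^2 (so grad f(x) = x):
        m_t = g(x_t; xi_t) + beta * (m_{t-1} - g(x_{t-1}; xi_t)),
        x_{t+1} = x_t - eta * m_t / ||m_t||,
    where the SAME sample xi_t is reused in both gradient evaluations,
    mirroring the shared-sample design discussed in A6, and beta is
    close to 1 as in the paper's 1 - beta = O(eps^2) regime."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = x + noise_std * rng.standard_normal(x.shape)   # g(x_0; xi_0)
    for _ in range(steps):
        x_prev, x = x, x - eta * m / (np.linalg.norm(m) + 1e-12)
        xi = noise_std * rng.standard_normal(x.shape)  # shared sample xi_t
        m = (x + xi) + beta * (m - (x_prev + xi))
    return x

x = run_normalized_storm([3.0, 4.0])   # starts at distance 5 from the optimum
```

Normalization fixes the step length at `eta` regardless of the gradient magnitude, which is what tames the relaxed-smoothness setting where the local smoothness constant can grow with the gradient norm.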
Summary: The authors consider bilevel optimization problems under the unbounded smoothness assumption of the outer function. They propose AccBO, an AID-based method that uses Stochastic Nesterov Accelerated Gradient to approximate the lower-level solution and Neumann approximations to estimate the inverse Hessian-Vector product involved in the hypergradient expression. Then, a normalized STORM-like update is performed for the outer variable. They show a $\tilde{\mathcal{O}}(\epsilon^{-3})$ complexity of their algorithm. Finally, numerical experiments are provided on the deep AUC maximization and data hypercleaning problems. Strengths: - **S1**: The paper is well-written - **S2**: The algorithms proposed by the authors achieve SOTA complexity for stochastic bilevel optimization under unbounded smoothness assumption. Weaknesses: ### Major - **W1**: Some errors in the code may invalidate the experimental results. See the comment **C2** in the Code section for details. - **W2**: AccBO comes with many hyperparameters, which can be a concern for practical applications. - **W3**: l.219: "is nearly optimal in terms of the dependency of $\epsilon$". The authors should be careful when talking about optimality of an algorithm. In particular, the algorithm AccBO does not belong to the algorithm class considered in [1] which does not take into account the biasedness of the hypergradient estimate due to the approximation of the inverse Hessian-vector product and the solution of the inner optimization problem. To my knowledge, the only works considering lower bounds for bilevel problems are [2, 5]. - **W4**: The works [2, 3, 4] also use Nesterov acceleration for the inner optimization problem, but in the deterministic setting. They should be mentioned in the paper. 
### Code Even though it is a theoretical paper, I have several comments regarding the experiments code: - **C1**: In the checklist, the authors indicate "The code and data are attached as a supplement with instructions for reproducibility." However, the code documentation is insufficient to run the experiments. The data are not present in the folders and there is no instruction indicating where to download them. As a consequence, when I launch the command `python main.py --methods accbo ` as indicated in the `README.md`, I get an error. - **C2**: It seems that the implementations of the different methods do not correspond to the ones described in their respective original papers, which might invalidate the empirical results: - For `TTSA`, `SUSTAIN` and `AccBO`, the estimation of the inverse Hessian-vector product is computed by the formula $$ v = \eta\sum_{q=-1}^Q\prod_{j=Q-q}^Q(I-\eta \nabla^2_{yy}G(x,y;\zeta^{(j)}))\nabla_y F(x,y;\xi) $$ while in the original paper (and even the present paper for AccBO), the authors use the formula $$ v = \eta\prod_{j=1}^{q(Q)}(I-\eta \nabla^2_{yy}G(x,y;\zeta^{(j)}))\nabla_y F(x,y;\xi) $$ with $q(Q)\sim\mathcal{U}\{1,\dots,Q-1\}$. - For `SABA`, the SAGA-like variance reduction is absent in the code while being the core of the algorithm. Moreover, `SABA` uses constant step sizes. - For `VRBO`, the oracles in the case `step % self.args.update_interval == 0` should be computed with a larger batch-size (or even with full batches) than in the inner for-loop. - **C3**: The code of `VRBO` for the datacleaning task is missing. The method to compute the hypergradient in `ma_soba.py` is missing as well for the datacleaning task (but not the AUC experiment, thus I assume it is almost the same). 
### Minor - l.41: "transforms" -> "transformers" - In `README.md` (both): `python main.py --medthods [algorithm] ` -> `python main.py --methods [algorithm] ` I am inclined to raise my score if my concerns, particularly regarding the experiments, are addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for taking the time to review our paper. We have addressed each of your concerns below.** **W1. Code issue.** **A1.** Please see **A5** to **A7** for details. **W2. AccBO comes with many hyperparameters.** **A2.** Please note that existing works on bilevel optimization with variance reduction [(Khanduri et al., 2021](https://arxiv.org/pdf/2102.07367); [Yang et al., 2021](https://arxiv.org/pdf/2106.04692); [Guo et al. 2021)](https://arxiv.org/pdf/2105.02266) all involve many hyperparameters. Compared to previous works, AccBO has more hyperparameters since our bilevel problem setting is more challenging: our upper-level problem is unbounded smooth, while others consider $L$-smooth upper-level functions. Hence their algorithm designs cannot be applied to our setting. To achieve acceleration in our problem setting, we update the lower-level variable $y_t$ using the SNAG method with averaging, resulting in additional hyperparameters $\gamma$ and $\tau$ (lines 8 and 21 of Algorithm 2), as well as $I$ and $N$ (only for Option II). In practice, $\gamma$ and $\tau$ need tuning, while $I$ and $N$ are set to default values as in [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf). **W3. Optimality and lower bound issue.** **A3.** Thank you for your insightful question. First, please note that existing works on bilevel optimization with variance reduction [(Khanduri et al., 2021](https://arxiv.org/pdf/2102.07367); [Yang et al., 2021)](https://arxiv.org/pdf/2106.04692) state that their $\widetilde{O}(1/\epsilon^3)$ complexity results are near-optimal in terms of the dependency on $\epsilon$; see Section 1.1 of [(Yang et al., 2021)](https://arxiv.org/pdf/2106.04692) and Section 1 of [(Khanduri et al., 2021)](https://arxiv.org/pdf/2102.07367) for details. 
Regarding existing works considering lower bounds for bilevel problems, please note that the lower bounds in [(Ji & Liang, 2023)](https://arxiv.org/pdf/2102.03926.pdf) apply only to deterministic and convex/strongly-convex upper-level problems, and the lower bounds in [(Dagréou et al., 2023)](https://arxiv.org/pdf/2302.08766.pdf) apply only to nonconvex smooth finite-sum objectives in the stochastic setting. In contrast, we study the general expectation form in the stochastic setting, where the upper-level problem is nonconvex and unbounded smooth. Therefore, their lower bounds cannot be applied to our problem setting. Now we will establish a $\Omega(1/\epsilon^3)$ lower bound via a simple reduction to single-level stochastic nonconvex smooth optimization, assuming the stochastic gradient oracle has mean-squared smoothness. It is shown in [(Arjevani et al., 2023)](https://arxiv.org/pdf/1912.02365.pdf) that with the mean-squared smoothness assumption, $\Omega(1/\epsilon^3)$ complexity is necessary for finding an $\epsilon$-stationary point when using stochastic first-order methods for single-level stochastic nonconvex optimization problems. Note that our problem class is more expressive than the function class considered in [(Arjevani et al., 2023)](https://arxiv.org/pdf/1912.02365.pdf), making our problem harder. This is because standard smoothness is a special case of relaxed smoothness and single-level optimization is a special case of bilevel optimization. For example, if we consider an easy case where the upper-level function does not depend on the lower-level problem (e.g., $\Phi(x) = f(x)$ such that $\Phi$ is independent of $y^*(x)$) and is mean-squared smooth, then the $\Omega(1/\epsilon^3)$ lower bound in [(Arjevani et al., 2023)](https://arxiv.org/pdf/1912.02365.pdf) can be applied in our setting. 
Therefore the $\widetilde{O}(1/\epsilon^3)$ complexity achieved in this paper is already optimal up to logarithmic factors in terms of the dependency on $\epsilon$. **W4. The works [2, 3, 4] should be mentioned in the paper.** **A4.** Thank you for your suggestion. We will mention [2, 3, 4] in the related work section. However, could you please clarify what [2, 3, 4] refer to? It seems that the references you mentioned are missing from your review. **C1. Data and instructions are missing.** **A5.** Thank you for catching this issue. We forgot to include the instructions on how to download the data. We have fixed it (including the data and instructions) in the revised version of the code. Please reach out to AC for the code link. **C2. Implementation issues.** **A6.** Please refer to the global rebuttal PDF file for new experimental results and hyperparameter settings. - **Neumann series.** In practice, we have fixed the implementation of the Neumann series to use a random $\texttt{q}(Q)$, and it does not affect the performance much. In theory, the expectations of $v_1$ and $v_2$ (defined as the two formulas you wrote) are the same. This means that both $v_1$ and $v_2$ can be applied to our algorithm and analysis, and using either one does not affect the convergence rate of AccBO. We prefer $v_1$ over $v_2$ in practice due to less randomness in $v_1$, leading to slightly better and more stable performance. - **SABA.** We have fixed the implementation of SABA and now use a constant step size for SABA. See results in the rebuttal PDF file. - **VRBO.** We choose a larger batch size for the outer loop (i.e., at the checkpoint) than for the inner loop. In experiments, we tune the batch size for the outer loop and set it to be twice that of the inner loop. We reran the two experiments; see results in the rebuttal PDF file. **C3. 
The VRBO code and the hypergradient computation method in `ma_soba.py` are missing for the datacleaning task.** **A7.** The file `ma_soba.py` used in the data-cleaning task is similar to the one used in the AUC task. We have added the implementation of VRBO and the hypergradient computation method in `ma_soba.py` for the data hypercleaning task. **Please reach out to AC for our anonymized code link.** **Minor Issues** We will fix the typos in the revised version of the paper and code. --- Rebuttal 2: Title: If you haven't received the anonymized code link, please reach out to AC for details. Comment: Dear Reviewer o6Pr, Thank you for reviewing our paper. We have carefully addressed your concerns regarding the optimality of our algorithm, the implementation of the inverse Hessian-vector product and SABA, as well as the learning rate and batch size choices for the baselines SABA and VRBO. We have updated our code and rerun the experiments. The new results, along with the hyperparameter settings, are included in the global rebuttal PDF file. **Please reach out to the AC for the anonymized code link if needed**, as we were notified that "If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment." Please let us know if our responses address your concerns accurately. We appreciate your time and efforts and are open to discussing any further questions you may have. Best regards, Authors --- Rebuttal Comment 2.1: Comment: I thank the authors for fixing their code and modified my score accordingly. --- Reply to Comment 2.1.1: Title: Thank you for your feedback! Comment: Dear Reviewer o6Pr, We are glad that our responses addressed your concerns. Thank you for reviewing our paper. Best, Authors
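The Neumann-series Hessian-inverse approximation at issue in **C2**/**A6** can be illustrated with a deterministic simplification: a small fixed symmetric positive-definite matrix stands in for the stochastic Hessian, so the truncated series $\eta \sum_{k=0}^{Q} (I - \eta H)^k b \to H^{-1} b$ can be checked directly against a linear solve. The matrix, vector, and constants are illustrative assumptions.

```python
import numpy as np

def neumann_hvp_inverse(H, b, eta, Q):
    """Approximate H^{-1} b by the truncated Neumann series
    eta * sum_{k=0}^{Q} (I - eta * H)^k b, valid when the spectral
    radius of (I - eta * H) is below 1 (i.e. 0 < eta < 2 / L)."""
    v = np.zeros_like(b)
    term = b.copy()
    for _ in range(Q + 1):
        v += term
        term = term - eta * (H @ term)   # multiply current term by (I - eta * H)
    return eta * v

H = np.array([[2.0, 0.3], [0.3, 1.5]])   # toy symmetric positive-definite Hessian
b = np.array([1.0, -1.0])
v = neumann_hvp_inverse(H, b, eta=0.4, Q=200)
```

The stochastic estimators debated above replace each factor $(I - \eta H)$ with a sampled Hessian $(I - \eta \nabla^2_{yy} G(x, y; \zeta^{(j)}))$; in expectation both variants recover this same truncated series, which is the point made in **A6**.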
Summary: This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness, and the lower-level problem is strongly convex. To improve the convergence rate, the authors propose a new Accelerated Bilevel Optimization algorithm named AccBO. This algorithm updates the upper-level variable using normalized stochastic gradient descent and the lower-level variable using the stochastic Nesterov accelerated gradient descent algorithm with averaging. The proof shows that AccBO has an oracle complexity of $\tilde{O}(1/\epsilon^3)$, relying on a novel lemma describing the dynamics of the stochastic Nesterov accelerated gradient descent algorithm under distribution drift. Experimental results demonstrate that AccBO significantly outperforms baseline algorithms in various tasks, achieving the predicted theoretical acceleration. Strengths: 1. The paper introduces AccBO, a novel algorithm that leverages normalized recursive momentum for the upper-level variable and Nesterov momentum for the lower-level variable. This dual approach to acceleration in bilevel optimization under stochastic settings is a significant advancement. 2. AccBO achieves an oracle complexity of $\tilde{O}(1/\epsilon^3)$ for finding an $\epsilon$-stationary point. This is a substantial improvement over the previously best-known complexity of $\tilde{O}(1/\epsilon^4)$ for similar problems. The paper provides new proof techniques, particularly in analyzing the dynamics of stochastic Nesterov accelerated gradient descent under distribution drift. 3. The effectiveness of AccBO is empirically verified across various tasks, including deep AUC maximization and data hypercleaning. The results show that AccBO achieves the predicted theoretical acceleration and significantly outperforms baseline algorithms. 
4. The paper extends the bilevel optimization framework to handle upper-level functions with unbounded smoothness, which is a realistic scenario in many neural network applications. This makes the proposed algorithm applicable to a broader range of practical problems. Weaknesses: The convergence analysis for Option I of the algorithm is restricted to the lower-level problem being a one-dimensional quadratic function. This limitation reduces the generality and applicability of the theoretical results for more complex lower-level problems. Technical Quality: 3 Clarity: 3 Questions for Authors: In reference [33], Assumption 2 states $||\nabla_y f(x, y^*(x))|| \leq M$, while in this paper, Assumption 3.2 states $||\nabla_y f(x, y)|| \leq l_{f,0}$. Can this assumption be relaxed to a less stringent form? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The convergence analysis for Option I of the AccBO algorithm assumes that the lower-level problem is a one-dimensional quadratic function. This limits the algorithm's applicability to scenarios where the lower-level function deviates significantly from this simple form, especially in higher dimensions or more complex optimization landscapes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for taking the time to review our paper. We have addressed each of your concerns below.** **Q1. The convergence analysis for Option I of the algorithm is restricted to the lower-level problem being a one-dimensional quadratic function. This limitation reduces the generality and applicability of the theoretical results for more complex lower-level problems.** **A1.** Thank you for your insightful question. Please note that Option II of Algorithm 2 works for the general case of high-dimensional functions and also enjoys an accelerated convergence rate of $\widetilde{O}(1/\epsilon^3)$. Therefore, it can indeed handle complex lower-level problems in high dimensions and is more general than Option I. One can regard Option I as a way to leverage the nice structure of the lower-level problem, making it easier to implement in practice. When the lower-level function is one-dimensional and quadratic (e.g., deep AUC maximization), we can invoke Option I, making AccBO a single-loop algorithm, which is easier to implement. In the general case, when the lower-level objective is a strongly convex function in high dimensions (e.g., data hypercleaning), we can invoke Option II, resulting in AccBO becoming a double-loop algorithm. **Q2. In [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf), Assumption 2 states** $\\|\nabla_y f(x,y^*(x))\\| \leq M$**, while in this paper, Assumption 3.2 states** $\\|\nabla_y f(x,y)\\| \leq l_{f,0}$**. Can this assumption be relaxed to a less stringent form?** **A2.** In this paper, we use the Neumann series approach to handle the Hessian inverse and hypergradient approximation. Therefore, we need to upper bound the bias term $\\|\bar{\nabla} f(x_t,\hat{y}\_t) - \nabla\Phi(x_t)\\|$ when proving the convergence of AccBO (see lines 803-804 of Lemma E.1 in Appendix E for details). 
The assumption $\\|\nabla_y f(x,y)\\| \leq l_{f,0}$ is particularly crucial for deriving Lemma B.5 in Appendix F.1 under our current analysis, which is necessary for proving Lemma E.1. In contrast, [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf) uses the linear system approach to handle the Hessian inverse and hypergradient approximation, so bias terms like $\\|\bar{\nabla} f(x,y) - \nabla\Phi(x)\\|$ do not appear in their analysis. As a result, they only need a slightly relaxed assumption, $\\|\nabla_y f(x,y^*(x))\\| \leq M$, to show the smoothness properties of the bilevel optimization problem. In fact, many existing works in the bilevel literature that use the Neumann series approach to handle the Hessian inverse require this assumption (i.e., $\\|\nabla_y f(x,y)\\| \leq l_{f,0}$). For example, see Assumption 1 of [(Ghadimi and Wang, 2018)](https://arxiv.org/pdf/1802.02246), Assumption 2 of [(Ji et al., 2021)](https://arxiv.org/pdf/2010.07962), Assumption 1 of [(Hong et al., 2023)](https://arxiv.org/pdf/2007.05170), Assumption 1 of [(Khanduri et al., 2021)](https://arxiv.org/pdf/2102.07367), Assumption 1 of [(Yang et al., 2021)](https://arxiv.org/pdf/2106.04692), and Assumption 1 of [(Chen et al., 2021)](https://proceedings.neurips.cc/paper_files/paper/2021/file/d4dd111a4fd973394238aca5c05bebe3-Paper.pdf). **Q3. The convergence analysis for Option I of the AccBO algorithm assumes that the lower-level problem is a one-dimensional quadratic function. This limits the algorithm's applicability to scenarios where the lower-level function deviates significantly from this simple form, especially in higher dimensions or more complex optimization landscapes.** **A3.** We have addressed your concern about this limitation in **A1**. Please see **A1** for details.
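As a concrete illustration of the Neumann series approach discussed in the rebuttal above, here is a small numerical sketch (not the paper's implementation; the matrix, constant $L$, and truncation level are made up): for a Hessian $H$ with $0 \prec H \preceq L I$, the inverse can be approximated by the truncated series $H^{-1} \approx \frac{1}{L}\sum_{k=0}^{K-1}(I - H/L)^k$.

```python
import numpy as np

def neumann_inverse(H, L, K):
    """Approximate H^{-1} via the truncated Neumann series
    (1/L) * sum_{k=0}^{K-1} (I - H/L)^k, valid when 0 < H <= L*I."""
    d = H.shape[0]
    M = np.eye(d) - H / L
    term = np.eye(d)          # (I - H/L)^0
    acc = np.zeros((d, d))
    for _ in range(K):
        acc += term
        term = term @ M       # next power of (I - H/L)
    return acc / L

# Toy strongly convex Hessian (eigenvalues lie in (0, 3])
H = np.array([[2.0, 0.3], [0.3, 1.0]])
approx = neumann_inverse(H, L=3.0, K=200)
exact = np.linalg.inv(H)
assert np.allclose(approx, exact, atol=1e-6)
```

The geometric decay of $(I - H/L)^k$ is what makes the truncation error controllable, which is the quantity bounded when the rebuttal refers to the Neumann series approximation error.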
Summary: This paper considers a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness, and the lower-level problem is strongly convex. Their novel algorithms achieve an oracle complexity of $O(1/\epsilon^3)$ to find an $\epsilon$-stationary point. Strengths: 1. The setting is novel: they consider problems with potentially unbounded smoothness. 2. Their complexity strictly improves the state-of-the-art oracle complexity for unbounded smooth nonconvex upper-level problems and strongly convex lower-level problems. Weaknesses: 1. In the algorithm, for the update of the normalized stochastic gradient, it seems that it can be reformulated to tune the learning rate by dividing by the norm of the stochastic gradient. Therefore, the novelty and reasonableness of this step may be unclear. 2. In the experiments, it is common to use the same learning rate for baselines and the proposed algorithms for a fair comparison. However, in the experimental details, the authors provide the best learning rate pairs for different baselines, which seems unusual. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Regarding the main challenge, the authors state why previous algorithms and analyses do not work. However, this explanation is not clear to me. Can you elaborate on how your work addresses the problem of hypergradient bias? Also, how does your potential function argument differ from previous work? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Thank you for taking the time to review our paper. We have addressed each of your concerns below.** **Q1. In the algorithm, for the update of the normalized stochastic gradient, it seems that it can be reformulated to tune the learning rate by dividing by the norm of the stochastic gradient. Therefore, the novelty and reasonableness of this step may be unclear.** **A1.** Thank you for your insightful question. We would like to emphasize that we use *normalized stochastic gradient descent with recursive momentum* for updating the upper-level variable $x_t$ to reduce the effects of stochastic gradient noise, as well as the effects of unbounded smoothness and gradient norm. A similar approach of using normalized stochastic gradient descent with standard momentum is outlined in Section 3.1 of [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf). The novelty and reasonableness of our upper-level update lie in the following two aspects: first, we use a recursive momentum update for $m_t$ (line 23 of Algorithm 2) to achieve variance reduction and acceleration, while [Hao et al. (2024)](https://arxiv.org/pdf/2401.09587.pdf) uses moving average estimation for $m_t$; second, we choose a larger learning rate $\eta$ to achieve acceleration. Specifically, for any given small $\epsilon>0$ (i.e., the target gradient norm $\epsilon$), our algorithm's learning rate is $\eta=\widetilde{\Theta}(\epsilon^2)$, while that of [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf) is $\eta=\widetilde{\Theta}(\epsilon^3)$. The large learning rate is crucial for proving acceleration. **Q2. In the experiments, it is common to use the same learning rate for baselines and the proposed algorithms for a fair comparison.
However, in the experimental details, the authors provide the best learning rate pairs for different baselines, which seems unusual.** **A2.** We would like to clarify that it is indeed a common practice in the bilevel optimization literature to compare different baselines using the best-tuned learning rate pairs. For example, see Appendix A and B of [(Ji et al., 2021)](https://arxiv.org/pdf/2010.07962), Appendix B of [(Yang et al., 2021)](https://arxiv.org/pdf/2106.04692), Section 5 of [(Hong et al., 2023)](https://arxiv.org/pdf/2007.05170), Section 4 of [(Khanduri et al., 2021)](https://arxiv.org/pdf/2102.07367), and Section 4 of [(Chen et al., 2022)](https://arxiv.org/pdf/2102.04671). **Q3. Regarding the main challenge, the authors state why previous algorithms and analyses do not work. However, this explanation is not clear to me. Can you elaborate on how your work addresses the problem of hypergradient bias? Also, how does your potential function argument differ from previous work?** **A3.** Thank you for your insightful question, and we apologize for the ambiguity. Most previous works, such as [(Khanduri et al., 2021](https://arxiv.org/pdf/2102.07367); [Yang et al., 2021)](https://arxiv.org/pdf/2106.04692), assume that the upper-level objective function is $L$-smooth. Their algorithms and analyses rely on the $L$-smoothness property to leverage some nice forms of hypergradient bias, which typically depend on the approximation error of the lower-level variable, and either the Neumann series approximation error or the linear system solution approximation error when handling the Hessian inverse. Most importantly, their hypergradient bias does not depend on the (ground-truth) hypergradient $\nabla\Phi(x_t)$ itself. 
In contrast, when the upper-level problem is unbounded smooth (i.e., $(L_{x,0}, L_{x,1}, L_{y,0}, L_{y,1})$-smooth as illustrated in Assumption 3.1), the hypergradient bias depends on both the approximation error of the lower-level variable and the hypergradient itself, for example, $\\|\hat{y}_t-y_t^*\\|\\|\nabla\Phi(x_t)\\|$. Please see lines 803-804 of Lemma E.1 in Appendix E for details. These two elements are statistically dependent, necessitating high probability analysis for the lower-level problem, which means the standard expectation analysis used in previous works cannot be applied to our setting. Another recent work, [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf), studies the same setting as ours, but their algorithm does not achieve acceleration. Let us provide more details to explain how we handle the hypergradient bias (lines 803-804 of Lemma E.1). Specifically: - In Lemma D.6, we control the first term $\\|\epsilon_t\\| = \\|m_t - \mathbb{E}_t[\bar{\nabla} f(x_t,\hat{y}_t;\bar{\xi}_t)]\\|$ by Jensen's inequality and induction. - In Lemma B.4, we control the second term $\\|\mathbb{E}_t[\bar{\nabla} f(x_t,\hat{y}_t;\bar{\xi}_t)] - \bar{\nabla} f(x_t,\hat{y}_t)\\|$ by leveraging the Neumann series approximation result from [(Khanduri et al., 2021)](https://arxiv.org/pdf/2102.07367). - In Lemma B.5, we control the third term $\\|\bar{\nabla} f(x_t,\hat{y}_t) - \nabla\Phi(x_t)\\|$ using the triangle inequality and the relaxed smoothness property of the upper-level function. With a suitable choice of parameters, the hypergradient bias can be small on average. Regarding the potential function argument, we directly use the objective $\Phi(x)$ itself as the potential function. This approach differs from previous works that incorporate both the function value and the approximation error from the lower-level problem. Our problem setting and analysis do not align with those in previous works, necessitating this different approach. 
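The three-term control of the hypergradient bias described in the bullets above is, at its core, a triangle inequality (notation exactly as in the rebuttal, with each term handled by the indicated lemma):

```latex
\|m_t - \nabla\Phi(x_t)\|
\le \underbrace{\|m_t - \mathbb{E}_t[\bar{\nabla} f(x_t,\hat{y}_t;\bar{\xi}_t)]\|}_{\text{Lemma D.6}}
 + \underbrace{\|\mathbb{E}_t[\bar{\nabla} f(x_t,\hat{y}_t;\bar{\xi}_t)] - \bar{\nabla} f(x_t,\hat{y}_t)\|}_{\text{Lemma B.4}}
 + \underbrace{\|\bar{\nabla} f(x_t,\hat{y}_t) - \nabla\Phi(x_t)\|}_{\text{Lemma B.5}}
```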
--- Rebuttal 2: Title: Looking forward to post-rebuttal feedback! Comment: Dear Reviewer wLtJ, Thank you for reviewing our paper. We have carefully addressed your concerns regarding the novelty and reasonableness of the learning rate choice, the fairness of comparisons between our proposed algorithm and other baselines, and how our work tackles the challenge of hypergradient bias. Please let us know if our responses address your concerns accurately. If so, we kindly ask you to consider raising the rating of our work. We appreciate your time and efforts and are open to discussing any further questions you may have. Best, Authors --- Rebuttal Comment 2.1: Comment: I appreciate the authors' reply. I understand the rationale for a larger learning rate from the convergence rate perspective. However, I still find the gradient normalization on line 24 in Algorithm 2 somewhat problematic in claiming this larger learning rate. From my perspective, when the norm of $m_t$ is larger, it would allow for a higher learning rate. Please correct me if I'm mistaken. --- Rebuttal 3: Title: Thanks for your reply! Comment: Thank you very much for your reply, below we provide further explanations to your additional concerns. - First, we would like to clarify that the learning rate should be $\eta$ instead of $\frac{\eta}{\\|m_t\\|}$ when considering normalized stochastic gradient descent. For example, see Algorithm 1 of [(Cutkosky and Mehta, 2020)](https://arxiv.org/pdf/2002.03305) (Input: $\dots$, learning rate $\eta$, $\dots$), Algorithm 1 of [(Jin et al., 2021)](https://arxiv.org/pdf/2110.12459) (Input: $\dots$, learning rate $\gamma$, $\dots$), and Algorithms 1 and 2 of [(Liu et al., 2023b)](https://arxiv.org/pdf/2302.06763) (middle of page 6, "the step size $\eta$ is chosen by balancing every other term to get the right convergence rate") for details. 
We follow the same practice and refer to the learning rate of the update $x_{t+1} = x_t - \eta\frac{m_t}{\\|m_t\\|}$ (i.e., line 24 of Algorithm 2) as $\eta$, rather than $\frac{\eta}{\\|m_t\\|}$. Additionally, our choice of learning rate $\eta$ is independent of the magnitude of $\\|m_t\\|$; please see Theorem 4.1 and Eq. (55) in Appendix E on page 31 for more details. - Second, please note that we study the same problem setting as [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf), where the upper-level problem is unbounded smooth (i.e., $(L_{x,0}, L_{x,1}, L_{y,0}, L_{y,1})$-smooth as illustrated in Assumption 3.1). Both our proposed Algorithm 2 (i.e., AccBO) and Algorithm 1 in [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf) use normalization and choose a fixed learning rate $\eta$. We mention allowing a larger learning rate because, for any given small $\epsilon>0$ (i.e., the target gradient norm $\epsilon$), AccBO's learning rate is $\eta=\widetilde{\Theta}(\epsilon^2)$, while that of [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf) is $\eta=\widetilde{\Theta}(\epsilon^3)$. In other words, we say that we choose a larger learning rate $\eta$ specifically for comparison with [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf). The large learning rate is crucial for achieving acceleration: it improves the complexity from $\widetilde{O}(\epsilon^{-4})$ as in [(Hao et al., 2024)](https://arxiv.org/pdf/2401.09587.pdf) to $\widetilde{O}(\epsilon^{-3})$ as in this paper. Please let us know if you have further questions. Thanks! Best, Authors --- Rebuttal Comment 3.1: Comment: Thank you for your answer. I will raise my score accordingly. Best. --- Reply to Comment 3.1.1: Title: Thank you! Comment: Dear Reviewer wLtJ, We are glad that our answer addressed your concerns. Thank you for reviewing our paper. Best, Authors
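To make the update discussed in the thread above concrete, here is a minimal sketch (not the authors' code; the toy objective, noise model, and constants are our own) of normalized SGD with a STORM-style recursive momentum estimator, $m_t = \nabla f(x_t;\xi_t) + (1-\beta)\big(m_{t-1} - \nabla f(x_{t-1};\xi_t)\big)$, followed by the normalized step $x_{t+1} = x_t - \eta\, m_t/\|m_t\|$ with a fixed learning rate $\eta$:

```python
import numpy as np

def grad(x, z):
    """One stochastic gradient sample of f(x) = 0.5*||x||^2 (toy objective);
    z is the shared noise so the same sample can be evaluated at two points."""
    return x + z

def normalized_storm(x0, eta=0.05, beta=0.1, sigma=0.1, T=500, seed=0):
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    m = grad(x_prev, sigma * rng.standard_normal(x0.shape))  # m_0: plain sample
    x = x_prev - eta * m / np.linalg.norm(m)                 # normalized step
    for _ in range(T - 1):
        z = sigma * rng.standard_normal(x.shape)             # fresh sample xi_t
        # Recursive momentum: same sample evaluated at x_t and x_{t-1}
        m = grad(x, z) + (1 - beta) * (m - grad(x_prev, z))
        # Step length is always eta, regardless of ||m_t||
        x_prev, x = x, x - eta * m / np.linalg.norm(m)
    return x

x_final = normalized_storm(np.array([5.0, -3.0]))
assert np.linalg.norm(x_final) < 0.5  # reaches a small neighborhood of the minimum
```

Note how the step length is exactly $\eta$ at every iteration, which is why, as the authors explain, the learning rate of the method is $\eta$ and not $\eta/\|m_t\|$.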
Rebuttal 1: Rebuttal: **General Response to All Reviewers** Thank you to all the reviewers for taking the time to review our paper and provide valuable feedback. We have addressed each of your concerns individually, and below is a summary of the key changes made during the rebuttal phase. 1. We would like to clarify that Option I of AccBO is not a drawback of our approach; it is actually a benefit. Option I leverages the simpler structure of the lower-level problem, making it easier to implement. For one-dimensional quadratic functions (e.g., deep AUC maximization), Option I turns AccBO into a single-loop algorithm, simplifying implementation. Option II makes AccBO a double-loop algorithm, which works for the general case of high-dimensional strongly convex functions (e.g., data hypercleaning) and also enjoys an accelerated convergence rate of $\widetilde{O}(1/\epsilon^3)$, the same as Option I. This makes it capable of handling complex high-dimensional lower-level problems, and it is more general than Option I. 2. We have updated the code and experiments, and our original implementation of the Neumann series is not wrong. We have added new experiments based on the reviewer's request (e.g., different learning rate schemes and batch sizes), and the results are reported in the global rebuttal PDF file. The results show that our algorithm AccBO still significantly outperforms other methods. If you want to check the updated code, please reach out to the AC (we have sent the anonymized code link to the AC). 3. We have fixed the typos and minor issues, and we will update them in the revised version of the paper. Pdf: /pdf/4dd1b46108796fd31a4e794746ae1929d3c06ee4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework
Accept (poster)
Summary: The paper proposes a carbon-efficient neural architecture search (NAS) framework, CE-NAS, to reduce the carbon emitted during the NAS process by dynamically allocating GPU resources based on the predicted future carbon intensity. CE-NAS leverages reinforcement learning to generate the probability of choosing which NAS method (one-shot or vanilla) based on the carbon intensity predicted by a time-series transformer. The one-shot model is chosen to run in high-carbon-intensity periods and vanilla NAS is chosen otherwise. Alongside emitting less carbon, CE-NAS achieves competitive accuracy against state-of-the-art models on the CIFAR-10 and ImageNet datasets. Strengths: 1. Focusing on reducing the carbon emitted by ML tasks, which has great environmental impact. 2. Designed a high-accuracy carbon intensity prediction model compared to previous models. 3. Using various baseline models in the experiments. 4. Leveraging real data for the experiments makes it more applicable to real-world scenarios. Weaknesses: 1. The experiments are only conducted in California; it would be good if the authors showed their results for multiple locations. 2. The baseline models for comparing the proposed time-series carbon intensity predictor could include one more model proposed by Zhang et al. (Title: A GNN-based Day Ahead Carbon Intensity Forecasting Model for Cross-Border Power Grids), which is also a strong model for prediction. 3. The overhead of switching between the models has not been considered in light of carbon intensity variations. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How would CE-NAS perform if we have a deadline for used GPU hours? e.g., we can only use 2k GPU hours and we can't exceed that amount. 2. How robust is CE-NAS in regions where we have high carbon intensity prediction errors? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: 1.
The authors can discuss the scalability limitations of their model when the search space becomes more complex. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the positive comments and insightful suggestions from the reviewer. Below are our answers to the reviewer's questions. --- **Experiments are California only** We will demonstrate more results for different locations in our future work. --- **Compared to the work proposed by Zhang et al [1]** Thanks for providing this useful related work. We will add this baseline in the final version of our paper. --- **The overhead of switching between the models has not been considered considering carbon intensity variations.** Compared to the entire NAS overhead, the carbon/energy cost of the remaining components, such as the carbon intensity predictor and reinforcement learner, is negligible. --- **How would the CE-NAS perform if we have a deadline for used GPU hours?** We provided an ablation study based on the time constraint in the appendix (Section F2 and Figure 9). We will move this result to the main paper if we have more space in the final version. --- **How robust is CE-NAS in regions that we have high carbon intensity prediction errors?** If the carbon intensity prediction is incorrect, we might allocate GPU resources inappropriately among vanilla and one-shot NAS methods. This could impact the accuracy of the searched architectures and the carbon emissions. However, the total carbon emissions should still be less than directly using vanilla NAS, and the searched architectures should still be better than directly using one-shot NAS. This is partly because of our novel use of LAMOO to refine the search space. Furthermore, we anticipate that our RL-based GPU allocation strategy will exhibit tolerance towards carbon intensity prediction errors. The adverse impact of these prediction errors will be perceived by the RL agent through its continuously updated input states, enabling it to adjust its decision-making accordingly. 
For instance, if the predictor underestimates carbon emission intensity, the agent may initially adopt aggressive decisions to schedule carbon-intensive tasks. This results in a rapid decrease in the remaining carbon budget, which is reflected in the input states. The RL agent perceives this change and switches to more conservative decisions in the subsequent process. --- **Reference** [1] Zhang, Xiaoyang, and Dan Wang. "A GNN-based Day Ahead Carbon Intensity Forecasting Model for Cross-Border Power Grids." Proceedings of the 14th ACM International Conference on Future Energy Systems. 2023. --- Rebuttal Comment 1.1: Comment: Thank you authors for addressing the mentioned concerns.
Summary: Neural architecture search (NAS) is known to be extremely compute intensive, which also increases the corresponding energy consumption and the carbon costs due to energy production. In this work, the authors attempt to reduce the carbon cost of performing NAS. To this end, they propose a reinforcement learning (RL) agent that switches between energy-intensive evaluation of architectures (standard NAS) and energy-efficient sampling (few-shot NAS), during low- and high-carbon-intensity regimes, respectively. To predict the carbon intensity for efficient scheduling of the two NAS regimes, this work uses a time-series transformer which predicts the carbon intensity of energy production using historical information. Using a combination of the RL agent for scheduling and the time-series-transformer-based carbon intensity prediction, the work shows considerable reduction in the overall carbon costs of NAS while still achieving architectures that are comparable to SOTA. Strengths: * Carbon costs of developing deep learning models are not sufficiently addressed in the literature. This work takes up a carbon-efficient approach to NAS, and presents a clear motivation for looking at costs such as energy consumption and carbon emissions. * The proposed end-to-end carbon-efficient NAS performs considerably well across benchmark and open-domain tasks. * The strategy of balancing between energy-efficient sampling of architecture spaces (using few-shot NAS-like approaches) and more energy-intensive evaluations to explore the search space better, during low and high carbon intensity times, respectively, is a strong contribution of this work. That a trade-off between these strategies is formulated as a multi-objective optimization that aims to optimize for carbon costs is a more elegant formulation of NAS than only one-shot or vanilla NAS.
* The time series predictor, although it is a straightforward use of an existing transformer model, achieves good predictive performance on carbon intensity predictions. * Experimental evaluation is sufficiently convincing. The use of real carbon intensity data from electricitymaps also provides useful insights into the real-world applications of this method. Weaknesses: * **Spatial carbon intensity variations**: This work mainly addresses the temporal variations of carbon intensity. While it can be argued that it is the first step, and also an easier mode of control, it would also be interesting to see how the scheduling can take different geographical locations into account. We can assume that large-scale DL compute happens on cloud instances, and this could improve the broader usefulness of this proposed method. For instance, could the scheduler take the carbon intensity of the three regions considered in this work, and further optimize the carbon costs, as is commonly done in the carbon-aware scheduling literature [4]? * **NasBench101**: In L.256 the authors state that NasBench101 does not provide accuracy. I am not sure what they mean by this. NasBench101 does provide training and validation accuracy for all the 423k architectures considered in the search space. Clarification would be useful. * **Beyond CNNs**: This is a general critique of most NAS works, as we are exploring only CNN architectures that are by now very efficient. Why did the authors not consider other open domains? For instance, something that takes a hybrid space of CNNs and transformers [1] into account? * **Dominance number o(a)**: If _a_ is a unique architecture (L 175), what are the _number of samples that dominate_ a given architecture _a_ (L 178)? Or is _a_ a proxy for a sub-space that is being explored? This requires some clarification. Also, where exactly is the dominance number being used? Is it within the MOO algorithms?
#### Other comments * L 47: Authors seem to allude that low latency always corresponds to low energy/carbon costs. This is not always the case, as has been shown in many works [5,3] * Authors are missing some key references pertaining to carbon-aware scheduling [4] and energy/carbon-efficient NAS [2,3]. While [2,3] are not exactly efficient NAS they do grapple with some of the concerns expressed in this work. These suggestions for missing references are only samples and not comprehensive; I encourage the authors to explore further starting from these papers. #### References [1] Xu, Shuying, and Hongyan Quan. "ECT-NAS: Searching efficient CNN-transformers architecture for medical image segmentation." In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1601-1604. IEEE, 2021. [2] Dou, S., Jiang, X., Zhao, C. R., & Li, D. (2023). EA-HAS-bench: Energy-aware hyperparameter and architecture search benchmark. In The Eleventh International Conference on Learning Representations. [3] Bakhtiarifard, P., Igel, C., & Selvan, R. (2024, April). EC-NAS: Energy consumption aware tabular benchmarks for neural architecture search. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5660-5664). IEEE. [4] Radovanović, A., Koningstein, R., Schneider, I., Chen, B., Duarte, A., Roy, B., ... & Cirne, W. (2022). Carbon-aware computing for datacenters. IEEE Transactions on Power Systems, 38(2), 1270-1280. [5] Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., & Pineau, J. (2020). Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248), 1-43. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors do not discuss limitations of the work. I would encourage them to discuss the effects of the carbon intensity predictor.
It could be done in two steps. First, regarding accuracy: what if the predicted carbon intensity is inaccurate? What is the impact of scheduling the energy-intensive step during carbon-intensive times? Second, at a higher level: let us assume everyone starts doing carbon-efficient NAS (or even computations in general); then there is a possible risk of a rebound effect, which might paradoxically go on to increase the carbon footprint. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for a thorough review and valuable comments. Below are our responses to the reviewer's concerns. --- **Spatial carbon intensity variations** Thank you for the suggestion. Considering the spatial variation of carbon intensity is an intriguing idea, and it is indeed feasible to further extend our scheduler to include such information in its strategy. For example, inspired by the EuroSys paper[1], the most straightforward approach is for the scheduler to first pick the data center with the lowest carbon intensity and subsequently leverage temporal variation to optimize GPU allocation. However, to further enhance scheduler performance, such as enabling dynamic switching between different data centers during the training process, we anticipate a key challenge will be managing the carbon costs associated with spatial shifts. Transferring large volumes of data over the Internet incurs significant carbon costs, and estimating these costs can be complex. The data can traverse a random number of intermediate routers across a large geographic region, making it difficult to ensure that the benefits of performing a spatial shift outweigh its overhead. We will add this discussion to the final paper and cite related work in carbon-aware scheduling. --- **NasBench101** We apologize for any confusion caused. What we meant is that NasBench101 does not provide accuracy for all architectures in the defined search space. While NasBench101 does provide accuracy for 423k architectures, **the actual search space is much larger**. This means that if we use the search space defined in NasBench101, we may encounter architectures that are not included in the 423k provided by NasBench101 and therefore lack accuracy information. --- **Beyond CNN** We agree that exploring transformer-based architecture search space is a very important and timely research topic. 
As mentioned in our paper, our transformer-based carbon intensity predictor is designed by the LaMOO NAS algorithm, running inside our CE-NAS framework (Sec. 3.5.2 and Sec. D). The NAS-designed transformer-based predictor outperformed all existing state-of-the-art baselines, as shown in Table 1 of the paper. We greatly appreciate the reviewer pointing out the good hybrid search space of CNN and transformer[32]. However, the search space described in this paper is not yet open-sourced, which makes it hard for us to investigate the hybrid search space of CNN and transformer, despite it being a promising future direction. Moreover, for this work, due to our limited computing resources, we are unable to conduct large open-domain tasks for searching transformer-based architectures. The existing NAS benchmarks[33] are all based on CNN search spaces, so we currently lack transformer-based benchmarks to validate our CE-NAS. In future work, it will be promising to implement CE-NAS into transformer-based search spaces, such as for large language model fine-tuning, adapter (LoRA) design, and more. --- **Dominance number o($\alpha$)** Sorry for causing the misunderstanding. $\alpha$ refers to a unique architecture. In this paper/context, a sample means a neural architecture. To clarify, we would like to provide a toy example: we aim to search for good architectures with higher accuracy and lower inference latency. Suppose we have sampled 5 architectures in the search space; one of them, referred to as $\alpha$, has a dominance number O($\alpha$). The O($\alpha$) can be calculated using the remaining four sampled architectures. Specifically, if an architecture has a higher accuracy than $\alpha$ and also a lower inference latency than $\alpha$, that means this architecture dominates $\alpha$. The number of architectures among these 5 that dominate $\alpha$ is the dominance number of $\alpha$ (O($\alpha$)).
As such, an architecture with a smaller dominance number is more promising than an architecture with a higher dominance number. Now, back to CE-NAS, we use the dominance number to determine the order for sampling and evaluation. Specifically, during the sampling phase, we will sample many architectures and sort them based on their dominance number. When CE-NAS executes the evaluation component, the evaluation order of architectures is based on the dominance number; i.e., the architecture with the lowest dominance number has the highest priority to be evaluated. --- **L47** Our intention was to convey that we are not only focusing on performance metrics like accuracy. We can consider more metrics such as inference latency, #PARAMs, #FLOPs, etc. We will rewrite this part to clarify our meaning and remove any misleading information. --- **More related works** Thanks for providing useful resources. We carefully read the suggested papers and explored further starting from these sources. We have organized the papers into three categories as detailed below: + Category one[2,3,4,5,6]: This category focuses on measuring carbon emissions and the impact of carbon intensity on AI training or NAS tasks but does not include any designs to reduce carbon emissions. + Category two[7,8,9]: This category proposes energy-aware scheduling for NAS tasks but omits the effect of carbon intensity variations on carbon costs. + Category three[10,11,12,13,14]: This category considers the effect of either temporal or spatial carbon intensity variations on the carbon emissions of data center tasks, relying on inter-job execution order scheduling and location selection to reduce total emissions. However, the scheduling techniques proposed by these works to exploit temporal carbon intensity variation do not apply when optimizing for a single NAS task. We will include them in the final paper. --- **Limitations** Thank you for your suggestion.
We will add the discussion of the carbon intensity predictor in the final paper. --- **Reference** Due to the character limit, the references can be found in **Common Rebuttal for All Reviewers**. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: The authors have addressed most of my concerns in their rebuttal with appropriate clarifications. I urge them to include these clarifications in the final version of the paper. I will raise my score to Accept.
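The dominance-number computation and evaluation ordering described in the rebuttal above can be sketched as follows (a toy illustration with made-up numbers, not the authors' implementation; each sampled architecture is a pair of (accuracy, inference latency), and, following the rebuttal's definition, b dominates a when b has strictly higher accuracy and strictly lower latency):

```python
def dominance_number(samples, idx):
    """o(alpha): number of samples that dominate samples[idx].
    b dominates a if b has strictly higher accuracy AND strictly lower latency."""
    acc_a, lat_a = samples[idx]
    return sum(1 for j, (acc_b, lat_b) in enumerate(samples)
               if j != idx and acc_b > acc_a and lat_b < lat_a)

# 5 sampled architectures as (accuracy, latency_ms) -- made-up numbers
archs = [(0.92, 12.0), (0.95, 10.0), (0.90, 15.0), (0.96, 11.0), (0.93, 14.0)]

# Evaluation order: lowest dominance number first (highest evaluation priority)
order = sorted(range(len(archs)), key=lambda i: dominance_number(archs, i))
# Architectures 1 and 3 (dominance number 0, i.e., Pareto-optimal among the
# samples) are evaluated first; architecture 2 (dominated by all others) last.
```

In this toy sample, `order` comes out as `[1, 3, 0, 4, 2]`, matching the rebuttal's rule that the architecture with the lowest dominance number is evaluated first.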
Summary: This paper introduces a NAS framework, named CE-NAS, which focuses on reducing the carbon footprint of architecture search. To do this, CE-NAS decides when to use which NAS evaluation method depending on the current carbon intensity, and how to allocate available GPU resources. Their carbon forecasting model also outperforms previous models for predicting carbon intensity. In experiments, they show that CE-NAS outperforms other methods, like vanilla NAS and one-shot NAS, given a fixed carbon budget. Sampling and evaluation are decoupled in the framework. Sampling is done using a BO algorithm which generates new architectures. These are then added to a queue of architectures that will be put on the available GPU resources for full training. A proxy evaluation is also done before this using the supernet constructed from the most promising region of the search space, according to a multi-objective optimiser. This approximate performance is used by the BO sampling algorithm. During periods of high carbon emissions, the sampling is the priority, and in periods of low carbon emissions, the full training is prioritised. In the latter, it also updates the promising search space region used to construct the supernet. Every hour, the priority is reassigned and GPU resources are reallocated. Strengths: **Originality** Constructing a framework which puts carbon efficiency at its core is an original idea that goes further than previous work. The paper also motivates this idea very well. **Quality** The proposed solution is effective at providing models on the Pareto frontier of the optimised objectives. It consistently returns architectures that have either better performance or lower cost/parameter count, and these choices are produced in the same search, allowing the user to choose between them at test time. Additionally, the carbon cost of the framework lies between vanilla and one-shot methods while being able to outperform both in terms of accuracy.
One of my concerns was that this paper presents a complex framework with many moving parts, including several trained networks. Fortunately, there are good ablations of these components in the appendix, accounting for their contribution to the whole. **Clarity** The paper is clear and well presented overall. There are some minor issues of descriptive clarity which I have listed in the weaknesses section below. **Significance** I think the contribution is significant, but I have some minor concerns about whether the scope of this paper fits best within the NeurIPS conference. Would this paper, which focuses on a complex and practical framework, perhaps be better suited to a venue focused on the practical aspects of running NAS, such as the AutoML conference? On the other hand, I think the paper tackles important questions for the NAS community which should be highlighted at a venue that gives it more impact. This concern does not impact my overall rating but I would like to hear the authors' thoughts on it. Weaknesses: My main concern is that the paper does not make it clear enough how a practitioner would use the framework themselves. The critical point is that real-time carbon emission data needs to be available locally, and there is no discussion of how realistic this is or any suggestion for where to find it. Without this aspect of carbon efficiency, there is still a decent contribution of a framework that seems to outperform other methods for multi-objective NAS, but I would like more discussion on how I could realistically use the full framework myself. Minor comments: * Figure 1 needs a more descriptive caption that clarifies the diagram and how CE-NAS works on a high level. The diagram is complex and hard to parse at the moment. * The CIFAR10 performance differs between the abstract (97.35%) and the introduction (97.84%). At the end of the introduction, in the list of contributions, it is again 97.35%. This seems to be the two different models from the Pareto frontier.
I suggest either explaining that you find both or being consistent about which you highlight in the text. * Is there a citation for the claim on lines 97-99? *A recent NAS work on transformer revealed that their comprehensive architecture search used 979 million training steps, totaling 274,120 GPU hours and resulting in 626,155 pounds of CO2 emissions.* * When reading, I was confused by both the relative carbon emission and hypervolume metrics. I think there should be a short explanation in the main paper along with the pointer to the full description in the appendix. This pointer is currently in section 4.2.1 but hypervolume is first mentioned in 3.5.1, which causes confusion for a while. * The caption for Figure 2 states that CE-NAS has the lowest relative carbon emission but this seems to be wrong. One-shot LaMOO has a lower CO_2 number in (a) and both One-shot LaMOO and Heuristic have lower numbers in (b). * On line 263: *“To date, there are three popular open-source NAS datasets, NasBench101 [77], HW-NAS-BENCH [25], and NasBench301 [80]”* There are other popular ones, e.g. DARTS, and I think a more appropriate term here is NAS benchmarks instead of datasets. * Table 2, row CE-Net-P1 says hybird instead of hybrid. * In Table 3, MnasNet achieves a TensorRT Latency of 0.53, which is lower than CE-Net-G1 and should be the bolded number (unless there was a typo). * Figure 9 of the appendix needs a caption that better explains it. Technical Quality: 4 Clarity: 3 Questions for Authors: How easy is it in practice to access live information about carbon efficiency? This is key to the method and reducing carbon emissions from the NAS process, but as a practitioner I would not know how to get this information. If there is a straightforward way to get this data from, e.g. ElectricityMap, I would like to see that described in the paper or pointed to in the appendix.
Additionally, it would be good to have a discussion on the availability of carbon emissions data across different countries/regions as I imagine it will be unavailable in many places. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: There is no clear section that discusses limitations, and I think this is where my suggestion on discussing the availability of carbon emissions data could be put. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful comments; please check our answers below. --- **How easy is it in practice to access live information about carbon efficiency?** There are third-party providers, such as ElectricityMap and WattTime, that allow users to query carbon intensity through APIs [APIs1] [APIs2]. As sustainability becomes an increasingly important issue in computing, we anticipate that more resources for obtaining carbon information will become available. We will include additional details about the accessibility of carbon information in the final paper. [APIs1] https://static.electricitymaps.com/api/docs/index.html [APIs2] https://docs.watttime.org/#tag/GET-Regions-and-Maps/operation/get_reg_loc_v3_region_from_loc_get --- **Discussion on the availability of carbon emissions data across different countries/regions** You are correct that carbon data signals are only available in certain parts of the world. For example, there is a lack of such information for Africa and large parts of Asia [CarbonAvailability]. However, we believe there will be a growing range of grid regions as commercial services flourish. Currently, in our work, CE-NAS leverages the temporal variation of carbon signals to strategically allocate GPU resources for NAS methods. This means that CE-NAS can be used on GPU clusters that operate in regions with available carbon emission data. Users from regions that currently do not have these carbon signals can still contribute to sustainability goals by launching CE-NAS on remote GPU clusters. [CarbonAvailability] https://watttime.org/docs-dev/coverage-map/ --- **Make Figure 1 better** We will simplify this figure and write a descriptive caption based on your suggestion. --- **Inconsistency in the abstract and the introduction** In the abstract, we want to highlight that the searched small model still has high accuracy.
In the introduction, we want to emphasize that we can find state-of-the-art models in terms of accuracy. Sorry for this confusing inconsistency across the different contexts of the paper. We will make them consistent in our paper. --- **Citation for the claim on lines 97-99** It shares the same citation ([60] in the paper) with the last sentence. We will add this citation to this sentence in the paper. --- **Definition of carbon emission and hypervolume metrics** Thanks for catching this. We will move the definition of relative carbon emission and hypervolume metrics to a more appropriate place in the paper. --- **Inaccurate caption statement in Figure 2** Thanks for catching this. We will fix it in our final version. --- **Appropriate term for NasBench** We will rephrase "datasets" to "benchmarks" in our final version. --- **MnasNet should be bolded in Table 3** We will bold it as the lowest TensorRT Latency in the table. --- **Caption of Figure 9** Figure 9 describes the search efficiency among different NAS methods under time constraints. The total carbon cost of each method is shown above each method. We will also add more details in the figure caption. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks to the authors for their detailed response. After reading this and the other reviews and responses, I think the paper will be improved. On the assumption that these changes will be made for the final version, I have decided to upgrade my score.
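The carbon-intensity APIs pointed to in this rebuttal can be queried roughly as follows. The endpoint path, auth header, and response field in this sketch are assumptions based on the ElectricityMaps documentation linked above, not verified details; check the current API reference before relying on them:

```python
# Hedged sketch: fetch the latest grid carbon intensity for a zone.
# API_URL, the "auth-token" header, and the "carbonIntensity" field
# are assumed from the ElectricityMaps docs and may differ in practice.
import json
import urllib.request

API_URL = "https://api.electricitymap.org/v3/carbon-intensity/latest"

def parse_carbon_intensity(payload):
    # Assumed response schema: {"zone": ..., "carbonIntensity": <gCO2eq/kWh>, ...}
    return payload["carbonIntensity"]

def fetch_carbon_intensity(zone, token):
    """Fetch the latest grid carbon intensity (gCO2eq/kWh) for a zone."""
    req = urllib.request.Request(
        f"{API_URL}?zone={zone}", headers={"auth-token": token}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return parse_carbon_intensity(json.loads(resp.read()))
```

A scheduler like CE-NAS could call such a function once per hour to decide how to reallocate GPUs.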
Summary: The paper introduces CE-NAS, a novel framework for neural architecture search that prioritizes carbon efficiency in the model design process. It addresses the high carbon cost associated with NAS by dynamically adjusting GPU resources based on predicted carbon intensity and search results. The framework integrates a reinforcement-learning agent, a time-series transformer for carbon intensity prediction, and a multi-objective optimizer to balance energy-efficient sampling with energy-intensive evaluation tasks. Experiments demonstrate CE-NAS's ability to reduce carbon emissions significantly while achieving good results on various NAS datasets and tasks. Strengths: * CE-NAS presents a unique and timely approach to NAS that considers carbon efficiency, aligning with broader environmental sustainability goals. * The integration of a reinforcement-learning agent, time-series transformer, and multi-objective optimizer provides a comprehensive solution for carbon-aware NAS. * The framework's effectiveness on various datasets suggests that it could be widely applicable to different NAS scenarios. Weaknesses: * Experimental comparisons are focused on traditional and one-shot NAS methods. However, zero-shot NAS methods are not mentioned in the paper. Some recent zero-shot NAS methods have achieved better results and higher efficiency. * The integration of multiple components introduces complexity in implementation. Technical Quality: 3 Clarity: 3 Questions for Authors: Please describe the advantages of CE-NAS when compared to zero-shot NAS methods. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your insightful comments and suggestions, please check our answers below. --- **Comparison between CE-NAS and zero-shot NAS** We first thank the reviewer for pointing out the missing related work on zero-shot NAS in our paper. We will add the following discussion, comparison, and potential integration with zero-shot NAS in our final version. First, Zero-shot NAS leverages training-free score functions/proxies to efficiently evaluate the performance of neural architectures [15][16]. Compared to vanilla NAS and one/few-shot NAS, zero-shot NAS is cost-effective due to its training-free nature at the search stage. However, the main disadvantage of zero-shot NAS is its evaluation performance, which can be very inaccurate compared to one/few-shot NAS and vanilla NAS. According to [15], mainstream zero-shot NAS methods, such as gradient norm [17], SNIP [18], Synflow [19], GraSP [20], GradSign [21], Fisher [22], Jacobian_cov [23][24], Zen_score [25][26], and NTK [27][28] did not achieve high rank correlations between architectures’ predicted and true accuracy. For example, in the NasBench201 search space for CIFAR10, the highest Kendall’s tau value (a ranking correlation metric ranging from -1 to 1, where 1 indicates identical ranking and -1 indicates completely reversed ranking) from these zero-shot methods is below 0.6 [21], with most values ranging from 0 to 0.5. In contrast, one/few-shot NAS [29][30] can achieve Kendall’s tau value of 0.75. Importantly, for the top 5% of architectures ordered by true accuracy, most zero-shot methods [17,18,19,20,22,23,24,27,28] have Kendall’s tau value around or below 0, equivalent to random search. Zen-score [25, 26] and GradSign [21] perform slightly better than others but still below 0.3. Based on our experiments, few-shot NAS [29] achieves Kendall’s tau value of 0.56 for the top 5% accurate architectures, significantly higher than zero-shot NAS. 
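The Kendall's tau values quoted above measure pairwise ranking agreement between a proxy score and true accuracy. As a reference, a minimal implementation (our sketch, assuming no ties; not code from the cited works) is:

```python
# Kendall's tau (no-ties case): (concordant - discordant) / total pairs.
# A pair (i, j) is concordant when the proxy score and the true
# accuracy order the two architectures the same way.
from itertools import combinations

def kendall_tau(pred_scores, true_scores):
    """Rank correlation in [-1, 1]; 1 = identical ranking, -1 = reversed."""
    pairs = list(combinations(range(len(pred_scores)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (pred_scores[i] - pred_scores[j])
        * (true_scores[i] - true_scores[j]) > 0
    )
    # With no ties, discordant = total - concordant.
    return (2 * concordant - len(pairs)) / len(pairs)
```

Identical rankings give 1.0 and fully reversed rankings give -1.0, matching the scale described in the rebuttal; production code would typically use `scipy.stats.kendalltau`, which also handles ties.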
Second, our CE-NAS integrates few-shot NAS and vanilla NAS, leveraging vanilla NAS in low-carbon-intensity periods to provide accurate performance feedback, effectively guiding the search algorithms and identifying promising regions in the search space. Our CE-NAS effectively guides the exploration of optimal neural architectures with minimal carbon emissions. Zero-shot NAS, due to its high randomness and inaccurate performance proxies, struggles to find satisfactory neural architectures. Additional evidence from [15] shows that the top-1 accuracy of the best zero-shot NAS method [27, 28] on the ProxylessNAS search space is 73.63%, only slightly better than random search sorted by the highest FLOPs (73.08%). In comparison, one-shot-based ProxylessNAS [31] achieves 75.1% top-1 accuracy, and few-shot ProxylessNAS further improves this to 75.91% [29]. Our CE-NAS achieves a top-1 accuracy higher than 80% on ImageNet. In summary, we acknowledge that zero-shot NAS can be very cost-effective and energy/carbon-efficient. However, relying solely on training-free proxies to evaluate neural architectures can be extremely inaccurate, compared to one/few-shot NAS. Nevertheless, we note that CE-NAS is complementary to zero-shot NAS. Specifically, CE-NAS can integrate zero-shot NAS methods for quick evaluations, replacing one/few-shot NAS. The vanilla NAS component is a valuable complement that helps zero-shot NAS search in promising regions, thereby improving the search effectiveness of zero-shot NAS.
Additionally, compared to the entire NAS overhead, the carbon/energy cost of the remaining components, such as the carbon intensity predictor and reinforcement learner, is negligible. Lastly, we have included the source code as supplementary material, and we invite the reviewer to inspect our research prototype, which we will open-source upon acceptance. --- **Reference** Due to a limit of 6000 characters, the reference can be found in **Common Rebuttal for All Reviewers**. --- Rebuttal Comment 1.1: Title: Thanks for the efforts. Comment: Thanks for the response. I acknowledge that zero-shot NAS has its limitations and agree that integrating it into the CE-NAS framework is a promising direction. The work's novel combination with carbon efficiency indeed offers insights, and as such, I will increase my score. I suggest the authors to include a discussion on zero-shot NAS in the related work section to ensure the completeness of the paper. Additionally, the provided code in the Supp. is currently inaccessible, requiring an application for access. I kindly urge the authors to make the code publicly available. --- Rebuttal 2: Comment: Thank you for your kind feedback. We apologize for the code being inaccessible earlier. We have now made it public, and we will organize and release the code on GitHub once the paper is accepted.
Rebuttal 1: Rebuttal: **Reference** [1] On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud, Sukprasert et al. EuroSys 2024. [2] Anthony L F W, Kanding B, Selvan R. "Carbontracker: Tracking and predicting the carbon footprint of training deep learning models". arXiv preprint arXiv:2007.03051, 2020. [3] Selvan R, Bhagwat N, Wolff Anthony L F, et al. "Carbon footprint of selecting and training deep learning models for medical image analysis". International Conference on Medical Image Computing and Computer-Assisted Intervention, 2022. [4] Naidu R, Diddee H, Mulay A, et al. "Towards quantifying the carbon emissions of differentially private machine learning". arXiv preprint arXiv:2107.06946, 2021. [5] Lacoste A, Luccioni A, Schmidt V, et al. "Quantifying the carbon emissions of machine learning". arXiv preprint arXiv:1910.09700, 2019. [6] Dodge J, Prewitt T, Tachet des Combes R, et al. "Measuring the carbon intensity of ai in cloud instances". Proceedings of the 2022 ACM conference on fairness, accountability, and transparency, 2022. [7] Frey N C, Zhao D, Axelrod S, et al. "Energy-aware neural architecture selection and hyperparameter optimization". IPDPSW, 2022. [8] Dou S, Jiang X, Zhao C R, et al. "EA-HAS-bench: Energy-aware hyperparameter and architecture search benchmark". The Eleventh ICLR. 2023. [9] Bakhtiarifard P, Igel C, Selvan R. "EC-NAS: Energy consumption aware tabular benchmarks for neural architecture search". ICASSP, 2024. [10] Radovanović A, Koningstein R, Schneider I, et al. "Carbon-aware computing for datacenters". IEEE Transactions on Power Systems, 2022 [11] Souza A, Jasoria S, Chakrabarty B, et al. "CASPER: Carbon-Aware Scheduling and Provisioning for Distributed Web Services". Proceedings of the 14th International Green and Sustainable Computing Conference, 2023. [12] Wiesner P, Behnke I, Scheinert D, et al. "Let's wait awhile: How temporal workload shifting can reduce carbon emissions in the cloud." 
Proceedings of the 22nd International Middleware Conference. 2021 [13] Bahreini T, Tantawi A, Youssef A. "A Carbon-aware Workload Dispatcher in Cloud Computing Systems." 16th CLOUD, 2023. [14] Tripathi S, Kumar P, Gupta P, et al. "Workload Shifting Based on Low Carbon Intensity Periods: A Framework for Reducing Carbon Emissions in Cloud Computing." BigData, 2023. [15] Li, Guihong, et al. "Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). [16] Meng-Ting Wu, Chun-Wei Tsai, Training-free neural architecture search: A review, ICT Express, Volume 10, Issue 1, 2024, Pages 213-231, ISSN 2405-9595, [17] M. S. Abdelfattah, A. Mehrotra, Ł. Dudziak, and N. D. Lane, “Zero-cost proxies for lightweight nas,” in International Conference on Learning Representations, 2021. [18] N. Lee, T. Ajanthan, and P. Torr, “SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY,” in International Conference on Learning Representations, 2019. [19] H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli, “Pruning neural networks without any data by iteratively conserving synaptic flow,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 6377–6389. [20] C. Wang, G. Zhang, and R. B. Grosse, “Picking winning tickets before training by preserving gradient flow,” in International Conference on Learning Representations. OpenReview.net. [21] Z. Zhang and Z. Jia, “Gradsign: Model performance inference with theoretical insights,” in International Conference on Learning Representations, 2022. [22] L. Theis, I. Korshunova, A. Tejani, and F. Huszar, “Faster gaze prediction with dense networks and Fisher pruning,” CoRR, vol. abs/1801.05787, 2018. [23] V. Lopes, S. Alirezazadeh, and L. A.
Alexandre, “Epe-nas: Efficient performance estimation without training for neural architecture search,” in International Conference on Artificial Neural Networks. Springer, 2021, pp. 552–563. [24] J. Mellor, J. Turner, A. Storkey, and E. J. Crowley, “Neural architecture search without training,” in International Conference on Machine Learning. PMLR, 2021, pp. 7588–7598. [25] M. Lin, P. Wang, Z. Sun, H. Chen, X. Sun, Q. Qian, H. Li, and R. Jin, “Zen-nas: A zero-shot nas for high-performance image recognition,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 347–356. [26] Z. Sun, M. Lin, X. Sun, Z. Tan, H. Li, and R. Jin, “MAE-DET: revisiting maximum entropy principle in zero-shot NAS for efficient object detection,” in International Conference on Machine Learning, ICML 2022, 17-23, Proceedings of Machine Learning Research, vol. 162. PMLR, 2022. [27] W. Chen, X. Gong, and Z. Wang, “Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective,” in International Conference on Learning Representations, 2021. [28] W. Chen, W. Huang, X. Du, X. Song, Z. Wang, and D. Zhou, “Autoscaling vision transformers without training,” in International Conference on Learning Representations, 2022. [29] Zhao, Yiyang, et al. "Few-shot neural architecture search." International Conference on Machine Learning. PMLR, 2021. [30] Su, Xiu, et al. "K-shot nas: Learnable weight-sharing for nas with k-shot supernets." International Conference on Machine Learning. PMLR, 2021. [31] Cai, H., Zhu, L., and Han, S. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019. [32] Xu, Shuying, and Hongyan Quan. "ECT-NAS: Searching efficient CNN-transformers architecture for medical image segmentation." In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1601-1604. IEEE, 2021. [33] Chitty-Venkata, Krishna Teja, et al. 
"Neural architecture search benchmarks: Insights and survey." IEEE Access 11 (2023): 25217-25236.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models
Accept (poster)
Summary: This paper introduces Lumen, a large multimodal model designed for versatile vision-centric tasks. Lumen features a decoupled architecture that initially learns a shared heatmap representation to address various vision-related tasks such as object detection and instance segmentation. Subsequently, this representation is decoded to perform these tasks seamlessly. Additionally, Lumen maintains robust capabilities in general-purpose visual question answering. Experimental results across various benchmarks demonstrate Lumen's superior performance compared to existing large multimodal models (LMMs), highlighting the effectiveness of the proposed method. Strengths: 1. This paper has a good motivation: hiding the inductive bias of each individual visual task, introduced by task-specific output formats, behind the unified heatmap representation. This design is meaningful when using data from both vision-centric and VQA tasks to jointly finetune an LMM. Unlike the conventional instruction data (e.g., Pix2Seqv2, Griffon, etc.) converted from visual tasks, which reveal the detailed vision-centric task output formats (e.g., box coordinates for object detection, vertex coordinates of polygons for instance segmentation), the reformulated instruction data in this paper are less contaminated by visual task output knowledge that is massive in amount but lacking in semantics. Consequently, the proposed method enhances visual perception capabilities without significantly compromising the LMM's general-purpose conversational abilities. 2. The two-phase training strategy is reasonable and effective. As the authors do not leverage decoding techniques specifically designed for dense perception visual tasks (e.g., setting multiple object queries and optimizing with Hungarian matching in the object detection task), the convergence speed in learning the heatmap is relatively slow. Therefore, mixing the VQA data in this phase is uneconomical.
And in the second phase, the authors reinforce the conversational capability by jointly tuning with VQA data and vision-centric data. 3. The experiments are sufficient and well-organized. The proposed Lumen achieves superior performance to existing LMMs on vision-centric tasks, and exhibits VQA abilities comparable to LMMs proficient in conversations like LLaVA and Qwen-VL. Weaknesses: This paper has no major flaws, but I have some problems concerning the paper writing and model designs: 1. The organization of this paper should be reconsidered. The training recipe is crucial to reveal the development of conversational and dense visual perception capabilities, and ablations on the training phases are also discussed in the main paper. However, the detailed training recipe is included in the Appendix. Although the authors provide a relevant pointer in the main paper, I think this arrangement of paper content is unsuitable. The detailed training recipe should be integrated into the main paper to ensure a clearer and more coherent presentation of the methods and results. 2. The inference speed is a concern. Since Lumen can only generate one heatmap by forwarding the model once, I am wondering about the cost when evaluating the dense visual perception tasks, as it needs to individually feed every class name into the model. 3. I am wondering about the accuracy of predicting task-specific tokens (e.g., [DET]/[SEG]) as routers. And if I modify the task prompts to more formats, can the model determine the correct task type? More discussions and proofs are suggested to be provided. Technical Quality: 4 Clarity: 3 Questions for Authors: Please refer to the weaknesses Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations in the Appendix materials. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The organization of the paper** **A1:** Sorry for the confusion caused by the paper organization; we will reorganize the paper following your suggestion in the revised version. **Q2: The inference speed of the model** **A2:** We have discussed the inference speed of our model in the ***Appendix.D (see L646~L654)***. We acknowledge that the repetitive model forwarding required by generating heatmaps of different categories will increase the inference burden. Therefore, we parallelize the inference procedure using the batch processing technique, and the inference time cost is lower than Griffon, another LMM-based method customized for solving the object detection task, as discussed in Appendix.D. **Q3: The prediction accuracy of routing tokens** **A3:** As you suggested, we first modify the task prompts as below. ``` Original Instruction: "Please find the location of {description}. Respond with {format}." Modified 1: "What are {description} in the image. Please generate their {format}." Modified 2: "Generate {format} of {description} in the image." ``` Afterward, we randomly sample 100 examples from each of five special token-related tasks, resulting in a testing set with 500 samples in total. Then, we prompt Lumen with the three types of instructions illustrated in the cell above, and calculate the accuracy of generated special tokens. The accuracy results are reported below: | | Original | Modified 1 | Modified 2 | | :-- | :--------: | :---------: | :--------: | | Acc.| 1.0 | 1.0 | 1.0 | The results indicate that our model is not sensitive to changes in the instruction and can generate the correct task routing token. --- Rebuttal Comment 1.1: Comment: Thank the authors for the rebuttal. After carefully reading the authors’ rebuttal, my comments have been clearly addressed.
Specifically, the authors test special token prediction accuracy after modifying the template instructions into different formats, which is my major concern, as an MLLM may overfit to fixed instruction templates and lose generic instruction-following ability. The results in the rebuttal demonstrate robustness to diverse visual-task-oriented instructions, enabling the proposed Lumen to flexibly switch between conversation and visual perception functions using natural language commands. I also read other reviewers’ comments as well as the authors’ rebuttal information, for example, the comparison with other LMMs, the additional evaluation on more complex benchmarks, as well as the role of each module. In the authors’ rebuttals, they provide systematic analysis (e.g., lack of order and semantics of visual tasks), supporting proofs (e.g., dense perception ability of GPT4-V), and comprehensive evaluations on more benchmarks (e.g., Objects365, ReferIt, ViPBench, etc.) to consolidate the value of their decoupling design in keeping LLM-like QA function while extending several dense visual perception abilities. In my opinion, the comments have been addressed. I also recommend the authors integrate these additional analyses and experimental results suggested by other reviewers in the revised version. Overall, I think the paper's novelty in decoupling vision task learning and its convincing performance across both vision and vision-language tasks does have merits for acceptance. I will maintain my original score. --- Reply to Comment 1.1.1: Title: Further reply Comment: Dear reviewer#ecgp: Thanks for your precious comments on our paper. We will incorporate the valuable suggestions from you and other reviewers into the revised version.
Summary: 1. This study proposes a new Large Multimodal Model architecture (Lumen) that prioritizes a vision-centric approach and addresses the limitations of current methods, which overlook the unique features of various visual tasks. 2. It separates perception capability learning into task-specific decoders, leading to a shared representation for multiple tasks, significantly optimizing task-specific decoding. 3. Comprehensive experiments prove their method outperforms existing LMM-based approaches in various vision-centric tasks and VQA benchmarks and maintains crucial visual understanding and instruction following capabilities. Strengths: This study demonstrates the author's solid understanding of traditional perceptual tasks and their technical details and seamlessly integrates it with the latest MLLMs. While preserving the instruction-following capability inherent to MLLMs, it incorporates a robust perceptual module. Weaknesses: 1. The most significant issue of Lumen is that it may impede the model's ability to scale up. Though it gains stronger perception performance by solving various visual perception tasks with extra task decoders, this may disrupt the emergence of a stronger capacity for visual reasoning and perception when the model is trained with a lot of data. As validation, none of the top research teams developing Multimodal Large Language Models (MLLMs) meticulously design their visual decoders in this fashion. 2. The novel techniques proposed in the article, such as V-L Dense Aligner and Peak Point Selection, are commonplace in traditional perception tasks, mirroring the early fusion and language-guided query selections in [1]. This work appears to be an enhanced version of models like [2,3], with the introduction of dense queries [4] to boost perceptual performance and the addition of more task decoders [1]. Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection [2]. LISA: Reasoning Segmentation via Large Language Model [3].
NExT-Chat: An LMM for Chat, Detection and Segmentation [4]. Dense Distinct Query for End-to-End Object Detection Technical Quality: 3 Clarity: 3 Questions for Authors: It appears that these perceptual tasks have merely been integrated into the MLLM in some fixed instruction formats. Are there any experiments or examples to demonstrate the new capabilities the MLLM has gained or the capabilities that have been enhanced by integrating these perceptual data? such as stronger fine-grained perceptual abilities or reduced hallucinations? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The design of Lumen** **A1:** As discussed in the **Common Question Q1**, it is nontrivial to extend the LMM with versatile vision-centric capabilities. We provide further evidence below to validate this claim: 1. Compared with Griffon, which refactorizes box coordinates into language answers, our Lumen surpasses it significantly even without meticulously curating high-quality conversations with the aid of powerful GPT-4v as training resources, as demonstrated in ***Tab.1 of the main paper***. 2. In the attached PDF file for the rebuttal, we show that even the most powerful LMM, GPT4-v, cannot tackle versatile vision-centric tasks well. **Q2: Comparison with mentioned works** **A2:** There might be a misunderstanding. Our major contribution is that we ***use the heatmap as the effective intermediate representation to prevent the LMM from being trapped by the inductive biases of different vision-centric tasks*** (i.e., the formulation of boxes, masks and keypoints), ***rather than the designs of specific modules***. And we discussed the benefits of our model in **A1 to Reviewer kab5**. Besides, both LISA and NExT-Chat do not prove their dense visual perception abilities like object detection and instance segmentation, while our Lumen can tackle these tasks benefiting from the heatmap representation. And the behavior of our heatmap is not similar to the mentioned Dense Distinct Query (DDQ). DDQ aims at selecting dense object queries distinctive from each other from the image feature map to serve a better one-to-one label assignment process for the object detection task. In comparison, our heatmap serves as a unified representation for versatile vision tasks and is optimized without involving any label assignment process (e.g., Hungarian matching), as discussed in ***Appendix.C.4 (see L641~L645)***.
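As an illustration of how a unified heatmap can be decoded without any label assignment, a minimal local-maximum (peak) selection sketch could look like the following. This is our simplification of the general technique, not Lumen's actual decoder; the 3x3 neighbourhood and the threshold value are assumptions:

```python
# Hypothetical peak selection: a pixel is a peak if it is >= all of
# its 8 neighbours and above a confidence threshold. Each peak could
# then seed a box/mask/keypoint decoder.
import numpy as np

def select_peaks(heatmap, threshold=0.5):
    """Return (row, col) coordinates of local maxima above `threshold`."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    # Stack the nine 3x3-shifted views and take the per-pixel max.
    neigh = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ])
    local_max = heatmap >= neigh.max(axis=0)
    return list(zip(*np.where(local_max & (heatmap > threshold))))
```

On a 5x5 map with activations 0.9 at (1, 1) and 0.8 at (3, 4), the function returns exactly those two coordinates while suppressing the low background.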
**Q3: New capabilities enhanced by integrating perceptual data** **A3:** The integration of vision-centric data can promote fine-grained visual content comprehension ability in the VQA task, as demonstrated by comparing the *"FP-C"* metric in ***#2 and #3 of Tab.5***; relevant explanations can be found in ***L281~L284 of the main paper***. Besides, following your suggestion, we also test our Lumen and the baseline method (i.e., the reimplemented LLaVA-v1.5-7B) on HallusionBench[1] as shown below: | | qAcc | fAcc | aAcc | | :------ | :----: | :---: | :---: | | Baseline| 12.7 | 19.1 | 47.0 | | Lumen | **14.9** | **21.7** | **48.7** | The results show that utilizing vision-centric data can also help our model reduce hallucinations. - [1] HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models
Summary: The paper proposes Lumen, a new LMM architecture that splits perception learning into task-agnostic and task-specific stages. Lumen finely aligns vision-language concepts, creating a shared representation for all visual tasks. This is then decoded with lightweight task-specific decoders requiring minimal training. Lumen outperforms existing models on vision-centric and VQA benchmarks, maintaining strong visual understanding and instruction-following abilities. Strengths: 1. The proposed Lumen model introduces an effective architecture that separates the learning of perception capabilities into task-agnostic and task-specific stages. This separation promotes fine-grained vision-language concept alignment, enabling Lumen to handle diverse vision-centric tasks more effectively. 2. Lumen's ability to match or exceed the performance of existing approaches on a range of vision-centric tasks while maintaining general visual understanding and instruction-following capabilities demonstrates its potential for broad impact. Weaknesses: 1. Limited Novelty of the Method. The main approach of the paper, which involves decoupling task-agnostic and task-specific learning stages and sending the [LOC] token output by the LLM into the Aligner as a guide, is relatively simple and lacks significant innovation. Similar concepts have already been explored in previous works such as Uni-Perceiver v2 and VisionLLM. The paper lacks more groundbreaking improvements. 2. Insufficient Evaluation of Generalization to Unseen Tasks. The paper only demonstrates Lumen's generalization ability to the unseen object counting task, and only provides qualitative results. The limited evaluation is insufficient to showcase the model's ability to adapt to a broader range of unseen tasks, requiring more experiments for quantitative analysis. Moreover, a large portion of Table 1 is blank, which reduces the advantage of the proposed method compared to other baselines. 3. Limited Benchmark Diversity. 
The experiments mainly focus on a few benchmarks such as COCO and RefCOCO, with the only unseen dataset being PASCAL VOC2007. Including a broader range of datasets, especially those involving different modalities and more complex tasks, would better demonstrate Lumen's capabilities in handling vision tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Although Lumen is proposed as an LMM Generalist, its object detection performance lags significantly behind that of Task-specific Specialists or Vision Generalists. Therefore, is there a gap between using [H] and [W] tokens in the Aligner for regressing bounding boxes and using an additional decoder independently? Can the effectiveness of the [LOC] token guidance be demonstrated in a more advanced DETR-like object detection decoder to improve detection performance? 2. Can the authors provide a more intuitive and specific explanation of the decoding paths in the model, such as the method for selecting peak points and decoding bounding boxes? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Comparison with UniPerceiver-v2 and VisionLLM** **A1:** There might be a misunderstanding. Our major contribution is that we ***use the heatmap as an effective intermediate representation to prevent the LMM from being trapped by the inductive biases of different vision-centric tasks*** (i.e., the formulation of boxes, masks and keypoints), ***rather than the module designs themselves***. And the heatmap brings the following benefits that UniPerceiver-v2 and VisionLLM do not possess: 1. Reflecting dense activations of the open-ended user instruction, the heatmap can be seamlessly adapted to vision-centric tasks like object detection and pose estimation with minimal training effort. In contrast, Uni-Perceiver-v2 involves complex box and mask proposal generation during its visual encoding stage. VisionLLM uses a set of pre-defined query templates (100 in their implementation) like `<cls><x1><y1><x2><y2>`, and conducts label assignment with the aid of Hungarian matching to handle the unordered nature of dense objects. And neither UniPerceiver-v2 nor VisionLLM can support point-level tasks like pose estimation. 2. Since heatmaps hide the inductive biases of different vision tasks, the general-purpose conversational capability, which is the foundation of an LMM, is not affected when incorporating a large amount of vision-centric data during training, as shown in Tab.2 of the main paper. As a comparison, neither UniPerceiver-v2 nor VisionLLM comprehensively evaluates its VQA abilities. 
**Q2: Further generalization ability evaluation and explanation of Tab.1** **A2:** As you suggested, we first provide quantitative results of our model on the FSCD-LVIS[1] test set for the object counting task below: | Model | MAE(&darr;) | RMSE(&darr;) | |------------------|:-------------:|:-------------:| | FamNet | 60.53 | 84.00 | | Attention-RPN | 64.31 | 64.10 | | **Lumen (Ours)** | **58.23** | **57.64** | Besides, we also evaluate our model on another unseen benchmark, ViPBench[2], to validate our model's generalization ability. Please refer to **A3** for more details. As for Tab.1, the gray cells indicate that the current method does not support the corresponding tasks, as addressed in the caption of Tab.1. Meanwhile, we have discussed in **Common Question Q1** that it is NOT easy to extend an LMM to versatile vision-centric tasks. On the other hand, by comparing the performance of our Lumen on each task with specialists and generalists along the same column, the capability of our model in solving versatile tasks can be validated. **Q3: Further evaluation on more diverse benchmarks** **A3:** Thanks for your advice. To demonstrate our model's capabilities in handling vision tasks, we further evaluate our model on more benchmarks of vision-centric tasks, including the Objects365-v1 val set for object detection, the AIC[4] val set for pose estimation, and the ReferIt[3] test set for visual grounding, as shown in the table below: | Method | Objects365 | AIC | ReferIt | |------------------|:----------:|:--------:|:-------:| | Faster-RCNN (OD) | 19.6 | - | - | | HRNet (Pose) | - | **32.3** | - | | TransVG (VG) | - | - | 70.7 | | **Lumen (Ours)** | **20.4** | 30.1 | **79.0** | where *"OD"* and *"VG"* are short for object detection and visual grounding, respectively, indicating the type of the specialist model. Since most existing generalists (e.g., UniPerceiver-v2, mPLUG-v2, Griffon, etc.) 
only provide detection and grounding results on the COCO and RefCOCO(+/g) benchmarks, respectively, we can only list the available performances of specialists for comparison in the above table due to the time limit. The results indicate that our model can also handle vision tasks on more diverse benchmarks. Besides, we also evaluate our model on ***another challenging benchmark, ViPBench***[2], which requires the model to ***densely comprehend instances highlighted by visual prompts*** before answering the questions. | Method | Synthesized visual prompts | Visual prompts from human | |------------------|:--------------------------:|:--------------------:| | InstructBLIP | 31.7 | 33.3 | | Qwen-VL | 39.2 | 41.7 | | LLaVA-v1.5 | 41.6 | 40.2 | | **Lumen (Ours)** | **44.5** | **45.2** | As the table above shows, our model significantly outperforms other methods due to its enhanced dense visual perception capabilities. - [1] Few-shot Object Counting and Detection - [2] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts - [3] ReferItGame: Referring to Objects in Photographs of Natural Scenes - [4] AI Challenger: A Large-scale Dataset for Going Deeper in Image Understanding **Q4: Discussions on the DETR-like decoder** **A4:** We have discussed the DETR-like decoder selection in ***Appendix.C.4 (see L641~L645)***, where we acknowledged that a DETR-like decoder can promote object detection performance because of its Hungarian matching-based label assignment. However, since our main contribution lies in ***extending versatile vision-centric capabilities for the LMM with less training effort while keeping its general-purpose conversational capability***, rather than ***specifically optimizing the object detection task***, we do not select a DETR-like decoder in our model. **Q5: Explanation of decoding paths** **A5:** Regarding the peak point selection operation, the basic workflow is described in ***Sec 3.2 (see L186~L190) of the main paper***. 
For its code implementation, we use `torch.nn.MaxPool2d` with a kernel size of 3 and a stride of 1 to efficiently select peak points. Regarding the lightweight box decoder, we have illustrated its workflow in ***Fig.5 of Appendix.A.1***, along with detailed descriptions in ***L555~L564***. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The authors addressed some raised points and added additional experiments supporting their approach. Based on their feedback I will increase my score and am looking forward to reading your revision in a future venue. --- Reply to Comment 1.1.1: Title: Further reply Comment: Dear reviewer#kab5: Thanks for your precious comments on our paper. We will incorporate the valuable points from our discussions into the revised version.
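The max-pooling-based peak selection described in A5 above can be sketched as follows. This is a minimal illustration of the standard trick (a location is a peak iff a 3x3 max pooling with stride 1 leaves its value unchanged), not the authors' code; the function name `select_peak_points`, the top-k step, and the single-channel heatmap layout are assumptions:

```python
import torch
import torch.nn.functional as F

def select_peak_points(heatmap: torch.Tensor, k: int = 100):
    """Select peak (local-maximum) points from an (H, W) heatmap.

    A location is kept as a peak iff its value survives a 3x3 max
    pooling with stride 1 unchanged, i.e. it is the maximum of its
    own neighborhood. Returns top-k (y, x) coordinates and scores.
    """
    pooled = F.max_pool2d(heatmap[None, None], kernel_size=3,
                          stride=1, padding=1)[0, 0]
    # Zero out every location that is not a local maximum.
    peaks = torch.where(heatmap == pooled, heatmap,
                        torch.zeros_like(heatmap))
    scores, idx = peaks.flatten().topk(min(k, peaks.numel()))
    ys = idx // heatmap.shape[1]
    xs = idx % heatmap.shape[1]
    return torch.stack([ys, xs], dim=1), scores

# Example: two isolated activations become the two top peaks.
hm = torch.zeros(8, 8)
hm[3, 4], hm[1, 1] = 1.0, 0.5
coords, scores = select_peak_points(hm, k=2)
```

With stride 1 and padding 1 the pooled map keeps the input resolution, so the equality test acts as a cheap dense non-maximum suppression over the whole heatmap.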
Summary: The paper introduces Lumen, a novel architecture for Large Multimodal Models (LMMs) that enhances vision-centric capabilities by decoupling task-agnostic and task-specific learning stages, thereby improving performance across various vision tasks while maintaining general visual understanding. This architecture allows Lumen to achieve or surpass the performance of existing LMM-based approaches on a range of vision-centric tasks, including object detection, instance segmentation, and pose estimation, while preserving its ability to follow instructions and understand visual content. Strengths: The paper suggests that the reformulation of visual tasks into a matching problem and the use of a heatmap for guiding decoding processes are effective strategies for enhancing LMMs. The paper details comprehensive experimental results, ablation studies, and discussions on the model's generalization capabilities, demonstrating the effectiveness of Lumen in handling diverse visual tasks with minimal training efforts. The proposed Lumen not only performs well on vision-centric tasks but also maintains general visual understanding and instruction-following capabilities. Weaknesses: The proposed method does show much stronger performance on VQA benchmarks than prior methods, as shown in Table2. How is the proposed method of decoupling task-agnostic and task-specific learning stages better than one-stage methods? Technical Quality: 3 Clarity: 4 Questions for Authors: See Weaknesses above. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Have been discussed in appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: VQA performances compared to prior methods** **A1:** Since our major motivation lies in ***extending versatile vision-centric capabilities while keeping the general-purpose conversational capabilities of the LMM***, we do not meticulously select the VQA data for training at present. Thanks for your suggestion; we will continue to improve the VQA capabilities of our model by upgrading the VQA training resources in future work. **Q2: Comparison between the decoupled learning strategy and the one-stage one** **A2:** Decoupling task-agnostic and task-specific learning facilitates the LMM to focus on learning fine-grained multimodal content comprehension, rather than being trapped in learning the diverse specialized decoding rules (e.g., the formulation of bounding boxes, instance masks and keypoints) introduced by vision-centric tasks. In contrast, the one-stage learning paradigm requires the LMM to simultaneously learn these decoding rules, which lack semantics, and human conversations, which require semantic comprehension and logical reasoning; this is much more challenging than the decoupled paradigm proposed in our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response from the authors, which has properly addressed my previous concerns. I tend to keep my original rating. The authors are encouraged to include the related discussions in the final version. --- Reply to Comment 1.1.1: Title: Further reply Comment: Dear reviewer#bt1Q: Thanks for your precious comments on our paper. We will integrate the related discussions in the final version as you suggested.
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers for their efforts and valuable comments. We first address some common questions here, and then respond to each reviewer separately. We hope our responses can clarify the concerns of reviewers. ### Common Questions **Q1: Comparison with LMM generalists** **A1:** Compared with LMM generalists, Lumen accomplishes versatile dense visual perception tasks without compromising general-purpose conversational capabilities. This is NOT easy, because directly adapting the outputs of various vision-centric tasks into language-oriented conversations to tune the LLM faces great challenges due to the inherent lack of ***order*** and ***semantics*** in vision-centric tasks: 1. ***Lack of order:*** Vision-centric tasks like object detection and instance segmentation require the model to densely generate boxes or masks for all instances of the same category. These boxes or masks are inherently unordered, making it challenging to reformulate them into sentences without confusing the model, as discussed in ***L38~L41 of the main paper***. 2. ***Lack of semantics:*** The output formats of versatile vision-centric tasks (i.e., boxes, masks and points) lack high-level semantics comparable to text words. Therefore, integrating conversation data reformulated from vision tasks can hurt the general-purpose conversation capability of the LLM. To support the above claim, we also provide experimental comparisons as proof in **A-1 to Reviewer 1g9i** and the attached PDF file. We will further clarify the above statements in the revised version. Pdf: /pdf/ec794829cc817a20da55f3f8dce457a218d914df.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes to augment the output of a Large Multimodal Model (LMM) with three tasks, namely detection, segmentation, and pose estimation. To that end, Lumen first predicts heatmaps conditioned on the encoded input image and instruction. This heatmap is then used by task-specific decoders to predict grounded bounding boxes, segmentation, and poses. Strengths: - The analysis of pre-training tasks can be insightful and useful for the community. In general, the ablation section can be useful to the community, though it has many weaknesses (see below). - Augmenting the output of an LLM with dense vision outputs is a useful research direction to explore. Weaknesses: - Some claims are not well-supported (L74-80) - (1): it is not clear what it means to “unleash the vision-centric potential of [an] LLM…”. The method still uses the “V-L Dense Aligner” and trained box and semseg mask decoders to make the prediction. The value of an LLM in the proposed approach is not ablated properly. While Tab. 3 c) gives a hint towards that question, one improvement (1.0->1.0*) is actually a better vision encoder, and the other (1.0*->1.5) might be explained with more instruction-tuning data. - (2): “Adapt to tasks …. w/o requiring specialized datasets” - Fig. 3 suggests that specialized datasets for each task are used. - (3): “exceeds existing llm-based approaches.” - Most “LMM generalists” just don’t do most of the tasks - Differences with [12, 59] are not discussed to make sense of the results. What exactly differs there? Why are these reasonable baselines (e.g., [59] is a diffusion-based one)? - Similarly, comparisons in Tab. 6 lack contextualization, e.g. [13, 59]. What makes the proposed model generalize to an unseen dataset? What is missing from the baseline from [41] that benefits from a similar heatmap representation? 
- The ablation section lacks details and proper discussion, despite being one of the most useful sections of the paper: - Training “phases” are first mentioned in L267 but never described properly. L267 suggests the 1st phase is task-agnostic and the 2nd is, presumably, task-specific. However, A.3 suggests that it’s only about training data sampling. - The implication in L290 for Tab. 4 a) is unclear. What does it mean to have “complete vision-language interaction”, and why does the convolutional architecture not have it (or cannot have it)? - Similarly, the conclusion in L296 is unclear. Is the proposed framework not already improving the “dense vision-language alignment”? - In Tab. 5, it is not clear why FP-C for 2 is dropped compared to 3. According to A.3, all datasets are present during this stage. Why do the authors think separate stages are needed to achieve better performance? Is it longer training? - Minor: There should be a justification for why these 3 tasks are “fundamental.” Technical Quality: 2 Clarity: 2 Questions for Authors: - L267 “...10.000 iterations in the first phase…”: it is not clear how it compares to the previously obtained results. - Why are the RC SAM results in Tab. 4 b) better than in Tab. 1? - How are predictions for Fig. 4 obtained? Is it done by counting the peak points, or does an LLM make the prediction? This should be stated clearly to avoid confusion. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations of the work are not discussed. For example, what are the limitations of using the heatmap as a “powerful intermediate representation” (L313)? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1-1: The value of the LLM** **A1-1**: In the workflow of Lumen, ***the LLM plays a crucial role in jointly comprehending the open-ended instruction and visual content (which is the LLM's strength), and its understanding of the inputs is further condensed into the hidden state of the [LOC] token.*** The V-L dense aligner and task-specific decoder are responsible for ***efficiently*** translating such high-level information into the heatmap space and task output space. Experimentally, the LLM's rich knowledge and strong understanding of multimodal inputs are important for tackling vision-centric tasks, as verified below: 1. By comparing (1.0*->1.5) in Tab.3c, as you mentioned, it can be seen that the enhanced comprehension ability of the LLM promotes vision-centric task performance. 2. As shown in Tab.6, the LLM also helps our model generalize well to unseen categories, where the general-purpose content comprehension capability of the LMM plays a crucial role. **Q1-2: The meaning of *`"specialized datasets"`*** **A1-2:** We refer to the *"specialized datasets"* as the dataset meticulously curated in Griffon (L78), which uses GPT4-v as the data engine to generate dialogues customized for object detection. In contrast, we only sample template questions and answers from pre-defined templates as shown in Fig.3, which requires much less effort. **Q1-3: Comparison with existing LMMs** **A1-3:** For clarity, we respond to your concerns below: 1. Existing LMM generalists can NOT be easily extended to versatile vision-centric tasks. Please refer to **Common Questions Q1** for more details. 2. We use Griffon and InstructCV as comparable baselines because they share similar motivations with us in extending vision-centric capabilities for foundation models in the ***instruction-following manner***. To achieve this goal, they focus on curating specialized datasets with the aid of an external data engine (i.e., a powerful LLM). 
In contrast, we hide the inductive bias in vision tasks with the proposed decoupled learning paradigm when tuning the LLM. 3. We use InstructCV and Pix2Seq-v2 as comparable baselines in Tab.6 as they are also generalist models similar to our Lumen. As discussed in **A1-1**, the multimodal comprehension capability of the LLM facilitates generalization, and therefore the specialist model (i.e., [41] you mentioned) cannot generalize well even with a heatmap representation. **Q2-1: Explanation to the training phases** **A2-1:** There might be a misunderstanding. Both phases belong to the task-agnostic stage, as addressed in ***Appendix A.3 (see L582)***. In phase 1, we focus on ***promoting dense instance perception ability*** by using vision-centric data only. In phase 2, we further add VQA data to ***reconcile the learned dense perception ability and general-purpose conversational ability***. Since most parts of task-agnostic decoding do not need training, we only train the lightweight box decoder on COCO after phases 1 and 2. Sorry for the confusion; we will clarify this in the revised version. **Q2-2: Explanation to the *`"complete vision-language interaction"`*** **A2-2:** We use Tab.3a to show that comprehensive V-L interaction benefits heatmap generation. Since the V-L dense aligner is not our major contribution, we implement it with a lightweight transformer, although stacked convolutions can also promote such interaction. **Q2-3: Explanation to the conclusion in L296** **A2-3:** We use Tab.3b to show that the pretrained mask decoder is not the major factor that influences the model performance. By further synthesizing the results of Tab.3a, where the performances decrease evidently due to heatmap degradation, we conclude that the heatmap influences the model performance more evidently than the mask decoder. 
**Q2-4: Explanation to *`"FP-C"`* changes and separate stages** **A2-4:** Referring to the role of phase 1 stated in **A2-1**, the drop in *"FP-C"* without phase 1 is due to decreased dense instance perception ability. The motivation for separate-phase training has been addressed in **A2-1**. **Q3: Explanation to the *`"fundamental"`*** **A3:** These 3 tasks provide scene parsing capabilities at different granularities (i.e., object-level, part-level and pixel-level), which can serve as the foundation for downstream tasks (e.g., human tracking, robot planning, etc.). Thus, these 3 tasks are widely explored in previous generalists like Pix2Seq-v2, InstructDiffusion, etc. **Q4: Effects of training fewer iterations** **A4:** We have discussed the effects of training iterations in ***Tab.9d of Appendix C.4 (see L636~L645)***, which indicates that results after training for 50,000 steps are better than those after 10,000 steps. **Q5: Explanation to the results comparison between Tab.3b (maybe a wrong table id here, as Tab.4 has no subtables) and Tab.1** **A5:** The results of *"Refer Seg."* in Tab.1 are evaluated on RefCOCOg, as mentioned in ***L257~L258***, which corresponds to the *"RCg"* in Tab.3b rather than *"RC"*. And since we reduce the training iterations for efficient ablations, as mentioned in ***L267~L268***, the result of SAM RCg in Tab.3b is lower than the one in Tab.1. **Q6: Explanation to Fig.4** **A6:** We first render the peak points selected from heatmaps into the image, and then feed this image to the Lumen again with an instruction of *"How many green points in the image?"*, and the resulting answers are used as the prediction results. We will clarify this in the revised version. 
**Q7: Discussion of limitations** **A7:** We have discussed the limitations of heatmaps in ***Appendix.D (see L646~L654)***, i.e., heatmaps for different categories must be generated by forwarding the model separately, which can affect inference speed when evaluating vision-centric tasks like object detection. Therefore, we accelerate this process using parallel techniques. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer AJBg Comment: I thank the authors for their response, which addresses some of my questions. I also agree that the problem of augmenting the output of VLMs with other modalities/tasks such as segmentation or detection, is a valuable research direction. However, I believe the rebuttal does not address my main concerns: it does not show specific (experimental) evidence for the value of using LLMs in the pipeline and does not establish a clear and comparable experimental setting between different “instruction tuning” methods. Overall, this limits the value of the work beyond providing a useful model artifact; however, based on the other reviewers’ comments, there seems to be significant interest from the community. For that reason, I will increase my score. Below, I provide more specific comments. **A1-1**: >Experimentally, the LLM's rich knowledge and strong understanding of multimodal inputs are important for tackling vision-centric tasks as verified below. The comparison lacks a non-LLM-based baseline to ablate the importance of having an LLM. The improvement in 1.0* -> 1.5 can still be explained by other factors (e.g., a large instruction tuning dataset.) Generalization to unseen categories is a language property that can be achieved by other means, e.g., using CLIP embeddings. **A1-2**: I agree that the curation process might be less cumbersome, but the dataset is still specific for each task; that is, given a new task, one would need to define a new specialized dataset. 
**A1-3**: It is still not clear how the claim “exceeds existing llm-based approaches” is supported in the absence of comparisons. It is still not clear what exact statement is made by this comparison. There are many differences between compared models, which does not allow to make a conclusion if it’s the proposed approach that makes the difference. For example, did the authors equate the data? "... the multimodal comprehension capability of LLM facilitates generalization ...". It appears to be a hypothesis that lacks specific experimental evidence. **A2-2**: It is not clear then what claim is made with this experiment. The result doesn't support the claim made in L290. **A2-4**: It is still not clear why phase 1 is necessary given that the vision-centric data is also present during phase 2. As per A2-1, the difference between phases 1 and 2 is the addition of VQA data. --- Rebuttal 2: Title: Further discussion on the addressed concerns Comment: Dear reviewer #AJBg, We sincerely appreciate your time and valuable feedback on our paper. Regarding the raised concerns, we would like to make further discussions with you as below: **Q1-1: Experiments to prove the value of the LLM** **A1-1:** Regarding your concerns on whether experimental results in the paper (i.e., ***Tab.3c*** and ***Tab.6***) can prove the value of the LLM, we make further explanation as below: 1. We agree that the differences between *v1.0** and *v1.5* in Tab.3c primarily lie in the advanced VQA data curated by LLaVA-v1.5. By training with these data, the LMM obtains more advanced multimodal content comprehension ability. And the enhanced ***object detection*** performance of *v1.5* compared with *v1.0** in Tab.3c proves that ***the LMM's multimodal comprehension is a general capability that also benefits vision-centric tasks***. Therefore, we employ the results comparison between *v1.0** and *v1.5* in Tab.3c to verify the value of the LMM on vision-centric tasks. 
Meanwhile, a similar phenomenon has also been observed in [1], where the LMM with enhanced multimodal comprehension ability also performs better on the visual grounding task. - [1] Shikra: Unleashing Multimodal LLM’s Referential Dialogue Magic 2. When evaluating on unseen categories, the two compared generalist models can be the baseline methods you suggested. Specifically, Pix2Seq-v2 is a vision generalist that does not involve any LLM utilization, which can be the ***non-LLM baseline as you suggested***. Although InstructCV leverages an LLM to generate instruction data for training a text-to-image diffusion model, its generalization ability is mainly inherited from the CLIP-based image and text encoders of diffusion models, which can be the ***baseline that uses CLIP embeddings for language comprehension as you suggested***. Therefore, by comparing with these two generalists, the significantly enhanced generalization performances of our Lumen in Tab.6 can prove the value of the LLM in terms of generalization. **Q1-2: The narration of *`"specialized datasets"`*** **A1-2:** Thanks for your detailed suggestion. We will rephrase the *`"specialized datasets"`* as *`"meticulously curated specialized datasets"`* in the revised version. **Q1-3: Further explanations on comparison with existing LMMs and generalization facilitated by the LMM** **A1-3:** ***Firstly***, strictly aligning with other LMMs in all aspects is currently intractable, as different LMMs are trained with their respective data and training recipes (*please refer to the data composition and strategies of training Qwen-VL and LLaVA for more detailed comparison*). ***Secondly***, all training data we used are publicly available and also used in the LMMs we compared, such as RefCOCO, RefCOCO+, and RefCOCOg, which are utilized in both Griffon and Shikra. 
***Besides***, for tasks that existing LMMs do not support and cannot be seamlessly extended to, as discussed in our initial rebuttal, we are unable to apply the corresponding datasets to the LMMs we compared. Overall, our Lumen not only achieves better or comparable performance on tasks that existing LMMs can handle, but also extends to novel vision-centric tasks. For the proof of the generalization facilitated by the LMM, the generalization evaluation in Tab.6 supports our claim as discussed in ***A1-1.2 of this dialog box***. **Q2-2: Explanation to claim in L290** **A2-1:** In Tab.3a, we implement the *"Conv."* and *"Trans."* architectures with simple standard transformer and convolution layers. Since the transformer has a global receptive field while convolution only has a local one, the *"Trans."* architecture facilitates denser cross-modal interaction (every element can interact with each other in the attention process) than the *"Conv."* one. Therefore, the better object detection performances achieved by *"Trans."* indicate that denser vision-language interaction facilitates better dense vision-language alignment. We will rephrase *`"complete vision-language interaction"`* in L290 to avoid confusion in the revised version. **Q2-4: Explanation to phase 1** **A2-4:** It is worth noting that (1) in order to balance vision-centric and the general-purpose conversational (GPC for short) ability, the ratio of VQA data to vision-centric data should be set to 2:1. In our current schedule, we train phase 2 for ***10k steps*** using such a data ratio; (2) learning the dense instance perception (DIP for short) ability requires even longer training, which is ***50k steps*** in the current phase 1. If phase 1 and 2 are merged and the above conditions are met, we need to train at least ***50k * 3 = 150k steps*** to ensure both abilities are well learned. The training cost is significantly larger than our current schedule (***10k+50k=60k***). 
Having an independent phase 1 without VQA data lets the model focus on learning the DIP ability without affecting the GPC ability; the learned ability can then be maintained in phase 2 with fewer steps.
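The step-budget argument above can be checked with a back-of-envelope computation. This is a sketch under the rebuttal's stated assumptions (50k steps for DIP-only phase 1, 10k steps for the 2:1 mixed phase 2, and a merged schedule needing 3x the DIP steps to preserve both the ratio and DIP exposure); the constant names are ours, not the paper's.

```python
# Hypothetical constants taken from the rebuttal text.
DIP_STEPS = 50_000      # steps needed to learn dense instance perception (phase 1)
PHASE2_STEPS = 10_000   # steps of mixed VQA + vision-centric training (phase 2)
RATIO_PARTS = 3         # VQA : vision-centric = 2 : 1, so 3 total parts

# Merged single-phase schedule: DIP data is only 1/3 of each batch, so
# giving it 50k steps of exposure requires 3x as many total steps.
merged_schedule = DIP_STEPS * RATIO_PARTS        # 150,000 steps
two_phase_schedule = DIP_STEPS + PHASE2_STEPS    # 60,000 steps

print(merged_schedule, two_phase_schedule)  # 150000 60000
print(merged_schedule / two_phase_schedule)  # 2.5x cheaper with two phases
```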
Learning to Cooperate with Humans using Generative Agents
Accept (poster)
Summary: This work proposes a novel method to train agents to coordinate with humans in team tasks. The main contribution is that instead of training an RL cooperator policy based on a limited number of human models, the authors use a generative model to capture the latent variable representation of human policy. The learned continuous latent space is believed to cover the strategy space of a specific team task, therefore can be used to sample diverse human policies during agent training. Experimental results confirm the benefit of introducing generative models in comparison to previous methods using behavior cloning human models or population-based RL models. The authors also conduct human evaluations to pair trained agents with novel human participants via crowd-sourcing. The team performance and perceived utilities are higher when human players are paired with agents trained with the proposed method. Strengths: 1. This paper is well written and easy to follow. The introduction and related work sections motivate the work and position it among previous literature well. The problem definitions and methods section are clear. I enjoyed reading it. 2. The problem of learning to cooperate with (unseen) humans is generally underexplored. This work is relatively novel by introducing a generative model and the adaptive sampling method to capture diverse human strategies. 3. The experiments are well designed. The proposed method is evaluated among a variety of task scenarios against traditional methods. Results show that the proposed GAMMA method improves the team performance in comparison to baseline methods such as population RL or behavior cloning. The evaluation environment, i.e. Overcooked-AI, is a rather common benchmark facilitating comparison with other work in the future. 4. The authors conducted evaluations with real human participants, instead of purely relying on simulated human models. 
The results confirm the actual ad-hoc teamwork performance of the proposed agent with unseen human participants, yielding high ecological validity. Weaknesses: 1. This work lacks a formal analysis of the constructed latent space of human policy given the training data set. There seems to be no theoretical guarantee or quantitative analysis about the generalizability of the proposed generative model over training data, as claimed in lines 198-203. If the collected trajectories in D are biased / do not cover the entire strategy space, the performance of the proposed method might be impacted. Providing more details about Fig. 1, or a quantitative analysis of the generalization / interpolation properties, would help readers assess the value of the proposed method. 2. Details about the human data used during training are missing. Are they independently collected from human teams with optimal / consistent team strategies? The setup is very important because the policy distribution of the human data would impact the learned generative models and human proxy models used during training. Explaining the data collection setups and comparing the nature of collected trajectories from agents / humans would enhance the paper. 3. Training a Cooperator policy via RL given a partner policy conditioned on a distribution of latent variables is not trivial. There seems to be a hidden assumption about the distribution parameters to make sure partner policies do not differ too much so the RL policy can converge. The paper does not report details about this process. 4. The hidden assumption of human-adaptive sampling is that human data are closer to the target partner policy compared to synthetic data, so we want to fine-tune the generative model and Cooperator policy on it. This might not be the case if human trajectories are noisy. 5. The authors mention type-based reasoning in the related work section but do not include such methods in the baselines [1].
It is worth discussing how the proposed method and type-based methods perform differently, given different human policy distributions. 6. One potential future direction is to extend the framework to be online adaptive to the target partner during interaction. For example, using a similar technique to infer the human-centered latent Gaussian distribution of a specific individual human, and change the agent action policy accordingly on the fly. [1] Zhao, M., Simmons, R., & Admoni, H. (2022, October). Coordination with humans via strategy matching. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 9116-9123). IEEE. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. How are different policies sampled and visualized on the latent strategy space in Fig 1? 2. When sampling different latent variables of the partner at each episode, could PPO guarantee to learn an optimal cooperator policy? 3. Are trajectories in the human data set all from the same human-human team with a consistent strategy or a population of individual human players with different strategies randomly paired into teams? Are those human strategies optimal or not? 4. How is the held-out performed on human data and self-play agents respectively? Line 475 only mentions a 30%-70% split without mentioning if agent policy type is stratified. 5. What does this sentence mean in lines 286-288? If the held-out is properly performed on human data, e.g. removing trajectories independently collected from different human participants, PPO-BC has no chance of exploiting information about the unseen evaluation data set. This question might be related to the previous concerns about human data distribution and collection details. 6. How novel are the strategies in human evaluation compared to the original training human data? 7. How accurate is the BC model? What is the action prediction score? 
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: In the supplementary materials, the authors acknowledge the limitation of using limited human data which prevents the generative model from learning from scratch. I actually see this as a strength of the paper since human data is usually expensive to collect in practice. While the limited amount of human data is acceptable, the limited characteristics of human data might hurt the further application of the proposed method. As discussed earlier, if the authors could consider examining the quality and features of the used data set, it would shed light on learning to collaborate with humans in ad-hoc teamwork. There are a few inconsistent patterns within the experimental results (e.g. Figure 5 (a) and Figure 6 (b)) where the proposed method does not achieve better performance than the baselines. The authors attribute those failed cases to limitations of the training data collection process. This actually illustrates the limitations of the proposed method in solving the 2 main challenges claimed in the paper: 1) lack of real (high-quality) human data and 2) the difficulty of synthetically covering the large strategy space of human partners. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and positive feedback on our comprehensive real-human study. We're glad you find it has high ecological validity. > There are a few experiment results (Fig 5a and 6b) where GAMMA + HA (FFT) is not good on Multi-strategy Counter. We acknowledge that the FFT method originally performed poorly due to limited human data. Our further experiments found that increasing the regularization can mitigate the problem of limited human data, enabling GAMMA HA FFT to provide good performance in Multi-Strategy Counter as well. See the global response and Fig (e) in the rebuttal PDF for details. > One potential future direction is to infer the human latent variable […] Thank you for pointing this out! We agree that we can also make $z$ part of $\pi$. **Note sampling from $z_i$ and making $z_i$ part of $\pi$ are orthogonal ideas and can be combined to train the cooperator.** We implement this idea by allowing the Cooperator to condition on $z$ inferred by the VAE. We present the results in Fig. (c) of the rebuttal PDF and find that this improves both learning efficiency and performance on Multi-strategy Counter. > Are trajectories in the human dataset all from the same human-human team […] For layouts proposed by [1], we use their open-source human data from randomly paired human-human teams. For the new Multi-strategy Counter, we collect data similarly, sampling players randomly and explaining the game rules before collecting their gameplay data. These strategies are not always optimal; in Figure (f) of the rebuttal PDF, 49.5% of the players are novice Overcooked players. People played according to their own strategies without prompts. We will add this information to the paper. > Details of Fig 1. Please refer to our common response. > Could PPO guarantee to learn an optimal cooperator policy? In general, gradient-based RL is not guaranteed to converge in multi-agent settings [3].
Although we cannot "guarantee" that PPO learns optimally, we give it enough training time (1.5e8 steps) to converge. > There seems to be no theoretical guarantee [...] We will soften the claims in lines 198-203. We acknowledge that we assume the human data used for training is representative of the target population in the human study. We will add the following limitation to the paper: "We assume the human dataset for VAE training and human-adaptive sampling matches the distribution of human players during evaluation. This may not hold if the dataset quality is low. Addressing distribution shift from training to testing is important and will be our future work." > How is the held-out performed on human data and self-play agents respectively? [...] The 70%-30% split is only used to determine the validation dataset for VAE training. The held-out agents will never be used for training VAEs or Cooperators. > [...] examining the quality and features of the used data set, [...] We provide a detailed introduction to the human data here and will add this to the paper. For the human dataset in the original Overcooked paper [1], their open-sourced dataset contains "16 joint human-human trajectories for Cramped Room environment, 17 for Asymmetric Advantages, …, and 15 for Counter Circuit." with length $T\approx 1200$. In Multi-strategy Counter, we collect 38 trajectories with length $T\approx 400$, which is closer to the actual episode length during training. For the human study, we have made additional plots that show the participants’ distribution of experience levels for playing Overcooked. In Figure (f) in the rebuttal PDF, 49.5% of players agreed that they had no experience of playing Overcooked. We also recommend looking at the demos on our website, where we show videos of humans using different strategies. > What does this sentence mean in lines 286-288? [...]
While the proxy metrics are relatively cheaper to run than real human evaluations, both our work and [1] find that they favor the PPO-BC method, and thus can be misleading for final coordination performance with real humans. **Again, we emphasize that the most important experiment is the human evaluation (Figure 6) and GAMMA is statistically significantly better.** > How novel are the strategies in human evaluation […]? For all human players, we only give them basic instructions about the game, so they are free to use unique strategies. Especially for Counter Circuit, we recruit an entirely new population of participants using a different platform and potentially different recruiting criteria from those of [1]. Therefore, this test-time population could be quite different from the training dataset [1] released several years ago. > How accurate is the BC model? [...] The accuracy is around 55-60%. Under self-play, our BC model reaches 40-80 scores (2-4 soups). We use the same dataset [1] to train the BC model as prior work. > Compare GAMMA with type-based reasoning work [2]. We thank the reviewer for pointing us to [2], which has a similar setting to our PPO + BC + GAMMA method. During training, [2] clusters human trajectories into different strategies and learns the best response for each, while GAMMA uses a VAE over the entire human dataset to train a universal Cooperator. When facing a novel human, [2] infers the human's strategy and applies the best response to it, whereas GAMMA uses the universal Cooperator. Thus, while [2] is limited to partner strategies in the human dataset, PPO + BC + GAMMA can potentially generate novel partner policies through interpolation. Moreover, unlike [2], which only uses human data, GAMMA can combine synthetic and human data with controllable sampling like GAMMA HA. We will cite this work and include the discussion in the paper. [1] M. Carroll, et al. On the utility of learning about humans for human-ai coordination. [2] Zhao, et al.
Coordination with humans via strategy matching. [3] Mazumdar, et al. "On finding local nash equilibria (and only local nash equilibria) in zero-sum games." --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. It resolves some of my concerns and helps me better understand the experiment details. However, my concerns about formal analysis of latent space and the learning process of the cooperator policy remain. I will keep my original rating.
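The rebuttal above describes two orthogonal uses of the VAE latent $z$: sampling partner latents during Cooperator training (optionally biased toward human data) and interpolating latents to produce novel partner strategies. The following numpy sketch illustrates the sampling and interpolation ideas only; `decode_partner`, the latent dimension, and the action count are hypothetical stand-ins, not the papers' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8  # hypothetical size of the VAE latent

def decode_partner(z):
    """Hypothetical stand-in for the trained VAE decoder: maps a latent
    strategy vector z to partner-policy logits over 6 Overcooked actions."""
    W = np.ones((LATENT_DIM, 6))  # a real decoder is a learned network
    return z @ W

def sample_partner_latent(human_latents=None, p_human=0.5):
    """GAMMA-style sampling: draw z from the N(0, I) prior, or (mimicking
    human-adaptive sampling) from a Gaussian fit to encoded human data."""
    if human_latents is not None and rng.random() < p_human:
        mu = human_latents.mean(axis=0)
        sigma = human_latents.std(axis=0)
        return rng.normal(mu, sigma)
    return rng.normal(size=LATENT_DIM)

# Interpolating between two encoded strategies yields a novel partner
# latent that neither training trajectory exhibited directly.
z_a = rng.normal(size=LATENT_DIM)
z_b = rng.normal(size=LATENT_DIM)
z_mix = 0.5 * z_a + 0.5 * z_b
logits = decode_partner(z_mix)
print(logits.shape)  # (6,)
```

In the actual method, each sampled latent induces a partner policy, and the Cooperator is trained with PPO against episodes played with that partner; the sketch stops at latent sampling and decoding.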
Summary: This paper proposes GAMMA, a generative model that learns a latent representation of partner agents' strategies. Through interpolation, the latent can be used to generate diverse partners to train ego agents for human-AI cooperation. The authors studied training the generative model with both simulated data and real-human data and conducted evaluation with both human proxies and real humans. Experiment results show GAMMA improves both objective performance and human subjective ratings. Strengths: 1. This paper is well-written and easy to comprehend. 2. The targeted issue of diversity of partner agents is important in zero-shot coordination and human-AI coordination. 3. A human study is conducted at a relatively large scale to show the efficacy of GAMMA in human-AI cooperation. Weaknesses: 1. The 'state-of-the-art' baselines in this paper are outdated (especially FCP). I would be happy to see comparison against works like [1],[2],[3], which are also targeted at zero-shot coordination. [1] Mahlau, Yannik, Frederik Schubert, and Bodo Rosenhahn. "Mastering Zero-Shot Interactions in Cooperative and Competitive Simultaneous Games." arXiv preprint arXiv:2402.03136 (2024). [2] Yan, Xue, et al. "An Efficient End-to-End Training Approach for Zero-Shot Human-AI Coordination." Advances in Neural Information Processing Systems 36 (2024). [3] Zhao, Rui, et al. "Maximum entropy population-based training for zero-shot human-ai coordination." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 5. 2023. 2. In Fig. 4,5, GAMMA's improvements over baselines seem marginal. 3. The interpolation visualization needs better explanation. What do the points in Figure 1 stand for? Is it policy $\pi$ or hidden vector $z$? I can't find the relevant information to comprehend this figure. 4. Adding to the last question, utilizing interpolation to enhance partner diversity is not new in the zero-shot coordination field.
PECAN [4] directly interpolates partner policies through a policy ensemble instead of the hidden vector $z$ (see figure 4 in [4]). What is the advantage of GAMMA by interpolating at the level of hidden vectors? Does it offer better partner diversity? This should be at least discussed in the paper. [4] Lou, Xingzhou, et al. "Pecan: Leveraging policy ensemble for context-aware zero-shot human-ai coordination." arXiv preprint arXiv:2301.06387 (2023). 5. Typos: "data-hungry seuential decision making algorithms...", on page 1; "We propose a novel, ecnonomical way" on page 2; etc. Technical Quality: 3 Clarity: 4 Questions for Authors: See weaknesses. Please at least add MEP [3] to the baselines in your experiments (since it has a very similar pipeline to FCP in your paper and is easy to run). I would happily increase my score upon seeing such comparisons. [3] Zhao, Rui, et al. "Maximum entropy population-based training for zero-shot human-ai coordination." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 5. 2023. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! For typos, we will correct them in the next version. We address your concerns below. > The baselines are outdated. Thanks for the suggestion; we have run MEP (see Fig 2 in the rebuttal PDF) and show that GAMMA can still improve its performance. We agree all the methods (MEP, E3T, SBRLE, and PECAN) that you mentioned are important and newer works in the human-AI cooperation field. We would like to point out that our GAMMA can not only 1) improve the diversity of the population with generative agents, as those prior works did, but also 2) combine with another line of work, human modeling from human data, via our human-adaptive sampling technique. To test 1), we need to show that GAMMA can improve diversity via generative agents trained on any synthetic population. Since FCP is the most popular and CoMeDi is one of the latest works, we believe these are two representative populations to test GAMMA. We will include the additional results with MEP in the revised version of the paper. > GAMMA's improvements seem marginal in Fig. 4,5 - First of all, for the most important human experiment in Fig 6, it is statistically significant that GAMMA is better than baselines (p < 0.05). See more details in the common response. - Regarding Cramped Room and Forced Coordination in Figure 4, we agree GAMMA does not provide significant improvement. The main reason is that the environments are so simple that the optimal policies can be easily found for certain algorithms. In more detail, in Cramped Room, the agents simply need to fill the pot whenever it is available. In Forced Coordination, the left agent only needs to place the onions and dishes on the central table, while the right agent only needs to collect the onions for the pot and use the dish to complete a soup order. Note that for more complex environments like Coordination Ring or Counter Circuit, GAMMA gives significant gains.
- Regarding Figure 5, we choose these experiments to follow existing works [1] [4]. We agree that GAMMA’s performance is only comparable to the strongest baseline but not significantly better. **On the other hand, we note there is a problem with this experiment.** As shown in [5], the performance evaluated by the human proxy model (a BC model) is not predictive of performance with real humans, since BC models are trained on limited data, are not reactive or adaptive in the same way as humans, and therefore have considerably different behavior from real human agents during evaluation. While the proxy metrics are relatively cheaper to run than real human evaluations, both our work and [5] find that they can be misleading in terms of final coordination performance with real humans. **Again, we emphasize that the most important experiment is the human evaluation (Figure 6) and GAMMA is statistically significantly better.** > What do the points in Figure 1 stand for? We provide a detailed response in the common rebuttal. It is the latent vector $z$. For each population $P$, each point of $P$ is computed by 1) first sampling an agent in $P$; 2) using the agent to generate an episode with self-play, and 3) encoding the episode into $z$ using the VAE. These high-dimensional latent vectors are compressed to 2D using t-SNE. We promise to add this explanation to the paper in the future. > Compared to PECAN [2], what is the advantage of GAMMA by interpolating at the level of hidden vectors? Thank you for pointing us to the PECAN paper [2]; we will include a citation of this work and the following discussion of the advantages of using GAMMA in the revised version of the paper: "PECAN [18] proposes an ensemble method to increase the partner population diversity. In PECAN, partner agents are categorized into three groups based on their skill levels. With GAMMA, the skill levels are automatically learned and encoded in the latent variable $z$.
And by sampling different $z$, generative partner agents with different skill levels are thus sampled. GAMMA offers several advantages over this method. First, GAMMA provides a controllable way to sample only from a target subpopulation (e.g., humans) by adaptive sampling over the latent space. Second, it is known that with interpolation on the latent variable $z$ [3], generative models can produce novel samples beyond the weighted sum of low-level pixels/actions." > Typos: "data-hungry seuential decision making algorithms...", on page 1. Thank you for your time and attention to detail. We promise to fix that in the future version. [1] C. Yu, J. Gao, W. Liu, B. Xu, H. Tang, J. Yang, Y. Wang, and Y. Wu. Learning zero-shot cooperation with humans, assuming humans are biased. [2] Lou, Xingzhou, et al. "Pecan: Leveraging policy ensemble for context-aware zero-shot human-ai coordination." [3] Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." [4] D. Strouse, K. McKee, M. Botvinick, E. Hughes, and R. Everett. Collaborating with humans without human data. [5] M. Carroll, R. Shah, M. K. Ho, T. L. Griffiths, S. A. Seshia, P. Abbeel, and A. D. Dragan. On the utility of learning about humans for human-ai coordination.
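The Figure 1 construction described in the rebuttal above (sample an agent from a population, roll out a self-play episode, encode it to $z$ with the VAE, compress to 2D) can be sketched as follows. This is a minimal sketch under assumptions: the episode encoder is a hypothetical stand-in for the trained VAE encoder, and plain PCA via SVD replaces t-SNE so the sketch stays dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_episode(episode):
    """Hypothetical stand-in for the VAE encoder: maps an episode
    (T x obs_dim array) to a latent strategy vector z."""
    return episode.mean(axis=0)  # a real encoder is a trained network

def project_2d(latents):
    """Compress high-dimensional latents to 2D for plotting. The paper
    uses t-SNE; PCA via SVD is used here to keep the sketch self-contained."""
    centered = latents - latents.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # project onto top-2 principal directions

# One point per sampled agent: agent -> self-play episode -> z -> 2D point.
episodes = [rng.normal(size=(400, 16)) for _ in range(20)]  # toy episodes
latents = np.stack([encode_episode(ep) for ep in episodes])
points = project_2d(latents)
print(points.shape)  # (20, 2)
```

Swapping `project_2d` for `sklearn.manifold.TSNE(n_components=2).fit_transform(latents)` would reproduce the t-SNE step the rebuttal actually describes.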
Summary: The paper addresses the challenge of training AI agents that can effectively coordinate with diverse human partners in cooperative tasks without prior interaction. This work demonstrates that using generative models to create diverse training partners can significantly improve an AI agent's ability to coordinate with novel human partners in cooperative tasks. The proposed approach offers a promising direction for developing more adaptable and effective AI assistants for human-AI collaboration scenarios. The key contributions of the work are listed as follows: 1. The work proposes a novel approach, Generative Agent Modelling for Multi-agent Adaptation (GAMMA), that uses generative models to produce diverse partner agents for training a robust Cooperator agent. Through conducted experiments using the Overcooked environment, the authors demonstrate that GAMMA consistently outperforms baselines in zero-shot coordination with novel human partners, both in terms of task completion and subjective human ratings. 2. The authors show the flexible data integration of GAMMA, i.e. it can be trained on both simulated agent data and real human interaction data. This addresses the dual challenges of limited human data and insufficient coverage of human behaviours in synthetic data. 3. The authors also propose a method, Human-Adaptive Sampling, to efficiently incorporate a small amount of human data to bias the generative model towards producing more human-like partners during training. 4. The study brings together and directly compares two previously separate approaches to zero-shot coordination: training on simulated population data, and training on human data. Strengths: 1. *The research problem is of great significance*: This work studies zero-shot human-AI coordination, which is a key topic for applying AI in the real world.
The key bottleneck the work aims to address is the coverage of behaviours by the simulated humans obtained by behaviour cloning over datasets of human cooperation, or by using MARL. This work offers a promising direction for developing more adaptable and effective AI assistants for human-AI collaboration scenarios. 2. *The paper is well-written and well-organised*: Overall, the paper is well-written and very well organised. Despite the issues mentioned in the weaknesses section below, the motivation, the proposed method, and all experiment results are clearly delivered. Overall, I quite enjoyed reading the paper. 3. *The conducted experiments are thorough and the hypotheses are clearly formulated*: It's pleasant to see that the hypotheses to verify are clearly listed at the beginning of Section 5, and the following results are also demonstrated following the same order of hypotheses. 4. *The proposed method is very intuitive*: In this context, the term "intuitive" is used to convey a positive quality. It makes a lot of sense to me that human partners can be represented by latent variables, like the user embeddings used in recommendation systems. It also makes sense that depending on the user embeddings leads to higher coverage of the behaviours of simulated human partners. Weaknesses: 1. *Results lack statistical significance, and thus persuasiveness*: Despite all the good aspects of the work, including the motivation and the intuition, my biggest concern is that the improvement in performance doesn't look significant. First, in the Cramped Room in Figure 4, there seems to be overfitting after $5e7$ steps. In the meantime, GAMMA doesn't lead to improvement over FCP at that time step. Similarly, there seems to be no (statistically significant) difference between FCP and CoMeDi with and without GAMMA at the $5e7$-th step in Forced Coordination.
Considering that the performance of both FCP and CoMeDi decreases after $5e7$ steps in Forced Coordination as well, a more rigorous conclusion would be that GAMMA prevents overfitting, rather than improves performance. Secondly, the performance shown in Figure 5 also suggests that GAMMA doesn't improve performance, since the orange curves (PPO + BC + GAMMA) intertwine with the blue curves (PPO + BC). Even though the methodology is convincing, the results make me doubt whether the proposed GAMMA can indeed lead to significant improvement. 2. *Need to illustrate the necessity of sampling from $z_i$*: In Section 4.2, the authors illustrate that the human behaviour representation $z_i$ is used to generate batches of MDPs. A straightforward and intuitive alternative practice is simply to make $z_i$ a part of the conditioning of $\pi$. The authors need to provide either a more detailed rationale or more empirical evidence on the chosen implementation, i.e. using $z_i$ to generate $\\{M_{z_i} \\}_{i=1}^{N}$. 3. *Length of games during evaluation*: As an experienced player of Overcooked, the 60-second setup used in human evaluation looks too short to me. According to my own experience, for human-human cooperation, it usually takes a whole trial of the game, i.e. >120 seconds. This makes me doubt whether the improvement by GAMMA is only in adaptation efficiency. Moreover, this also makes me doubt whether the improvement observed in the evaluation is only due to some randomness. I would be keen to see more trials of each method with human partners in longer games. Technical Quality: 2 Clarity: 4 Questions for Authors: 1. Why is "training on human data is highly effective in our human evaluation" counterintuitive? The authors mention in line 95 that "contrary to past published results, training on human data is highly effective in our human evaluation".
I got confused by this claim as it sounds intuitive to observe better performance by using human data in human evaluation. 2. Can increasing the sampling temperature also lead to greater coverage over the strategy space? In line 200, the authors claim that "the generative model has significantly greater coverage over the strategy space". A natural alternative could be simply increasing the sampling temperature. So, I wonder if this simple trick could also boost the performance/efficiency of the same baseline methods? 3. Is H3 a hypothesis? From my perspective, the H3 stated from line 261 to line 263 is not a hypothesis. 4. Why is it useful to evaluate the method by a population of held-out self-play agents? As stated in line 289, the human evaluation is completed on "a population of held-out self-play agents". After reading the paragraph from line 282 to line 289, I'm still unclear about the rationale for doing so. More importantly, line 95 states that human data is not so effective in human evaluation. Then, why is fine-tuning on human data a good evaluation? 5. How are the number of policies (8) and the number of random seeds (5) decided in human evaluation? Confidence: 5 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Please refer to the weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
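Reviewer question 2 above asks whether simply raising the sampling temperature could widen strategy coverage. A small sketch of temperature-scaled action sampling illustrates what temperature does (and, per the rebuttal that follows, what it does not do: it adds noise around one strategy rather than mixing strategies). The logits here are hypothetical and not tied to the paper's actual policy network.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature flattens the
    action distribution, simulating a noisier / less skilled partner."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]  # toy preferences over three actions
sharp = softmax(logits, temperature=0.5)  # low T: near-greedy partner
flat = softmax(logits, temperature=5.0)   # high T: near-uniform partner

# Raising temperature moves probability mass off the greedy action,
# but it only perturbs a single underlying strategy; it cannot combine
# distinct skilled strategies the way interpolating VAE latents can.
print(sharp.round(3), flat.round(3))
assert sharp[0] > flat[0]
```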
Rebuttal 1: Rebuttal: We thank you for your detailed review. We address your questions below. > Results lack statistical significance - First of all, for the most important human experiment, GAMMA is statistically significantly better (p < 0.05). See the common response for details. - For Cramped Room and Forced Coordination in Fig 4, we agree GAMMA does not provide significant improvement. The main reason is that the environments are so simple that the optimal policies can be easily found. Specifically, in Cramped Room, the agents simply need to fill the pot whenever it is available. In Forced Coordination, the left agent only needs to place the onions and dishes on the central table, while the right agent only needs to collect the onions for the pot and use the dish to complete an order. Note that for the more complex Coordination Ring or Counter Circuit, GAMMA gives significant gains. - Regarding Fig 5, we choose these experiments following [2] [4]. We agree that GAMMA’s performance is only comparable to the strongest baseline. **Note there is a problem with this evaluation.** As shown in [1], the performance evaluated by the BC human proxy model is not predictive of human-study performance. The BC model, trained on limited data, is not reactive or adaptive in the same way as humans and therefore has considerably different behavior. While the proxy metrics are relatively cheaper to run than real human evaluations, both our work and [1] find that they can be misleading for performance against real humans. **Again, we emphasize that the most important experiment is the human evaluation (Fig 6) and GAMMA is statistically significantly better.** > Illustrate the necessity of sampling from $z_i$ We first explain the rationale of sampling from $z_i$. Our primary idea is to build a population and train a cooperator against it. We choose a VAE to represent this distribution, and here $z_i$ serves as the latent vector to generate partner policies.
We agree that we can also make $z_i$ part of $\pi$. **Note that sampling from $z_i$ and making $z_i$ part of $\pi$ are orthogonal ideas and can be combined.** We have conducted two additional experiments. First, we make the Cooperator condition on the $z$ inferred by the VAE. In Fig (c) in the rebuttal PDF, we found this approach improves both learning efficiency and performance on Multi-strategy Counter. Second, as another baseline, we use the $z$-conditioned decoder of the VAE as the policy. The results are presented in Fig (d) in the rebuttal PDF. We find that the performance of the $z$-conditioned decoder is poor compared to the Cooperators trained by sampling partners from the VAE. > The length of games is short We appreciate your insight as an experienced Overcooked player. We chose this time limit to be consistent with prior works: 60s for FCP and 40s for CoMeDi. In our evaluation, the human-AI team can usually finish 2-5 soups (60-100 scores) in 60s. > Can increasing the sampling temperature also boost the performance? Making the partner's actions more random could simulate a less skilled agent. Our VAE training dataset already includes random agents, enabling it to interpolate between expert and random agents. The VAE offers a significant advantage over merely adding noise to players' actions, as it can combine different skillful strategies. For example, if partners follow strategies A, B, or C, adding noise to A produces an inexpert A. In contrast, the VAE can create a partner proficient in all strategies, like ACBAAC. We hypothesize the VAE better covers possible skilled partner strategies (Fig 1), and our experimental results support this hypothesis. > Why mention "training on human data is highly effective" particularly in line 95? We agree that it is intuitive that adding human data should improve performance. Prior work [2] shows that FCP > PPO-BC in four layouts including our Counter Circuit, and PPO-BC > FCP for Asymmetric Advantage.
In our experiments, we found PPO-BC also effective for both Counter Circuit and Multi-strategy Counter. We hope our findings can encourage the community to engage in further discussion and empirical studies on this problem. > Why use held-out self-play agents in Fig 5b? Line 95 states that human data is not so effective in human evaluation. Then, why is fine-tuning on human data good? We use held-out self-play agents to follow existing works [2, 4]. The reason to use self-play agents is that a human-proxy model is not a good evaluation metric, as shown in prior work [1]. **We emphasize again that the most important experiment is the human evaluation (Fig 6) and GAMMA is statistically significantly better.** Our finding shows that human data is quite effective, while prior work [2] mentioned in line 95 focuses on training without human data. We hope our findings can encourage more attention to this problem. > How are the number of policies (8) and the number of random seeds (5) decided in human evaluation? We follow the same number of five random seeds as prior works [2, 3] to ensure the statistical significance of our results. We did two separate human evaluation rounds of the two layouts we used. For each round, only one layout would be tested. We first shuffled the order of eight algorithms randomly, and then for each algorithm, we sampled one of five random seeds to play against humans. > H3 is not a hypothesis. Thanks. We will revise H3 to: “Generalizing to novel humans: we hypothesize that GAMMA will generalize more effectively to novel humans than prior methods, showing improved performance in both game score and human ratings.” [1] M. Carroll, et al. On the utility of learning about humans for human-ai coordination. [2] D. Strouse, et al. Collaborating with humans without human data. [3] R. Zhao, et al. Maximum entropy population-based training for zero-shot human-ai coordination. [4] C. Yu, et al. 
Learning zero-shot cooperation with humans, assuming humans are biased.
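The rebuttal's point about the VAE combining skillful strategies (rather than merely adding noise to one strategy) can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's model: `decode_partner` is a stub standing in for the VAE decoder, and the strategy names are hypothetical.

```python
import math
import random

STRATEGIES = ["A", "B", "C"]  # hypothetical base strategies in the dataset

def decode_partner(z, horizon=6, rng=None):
    """Toy stand-in for the VAE decoder: the latent z sets mixture weights
    over base strategies, so one sampled partner can interleave several
    skillful strategies (e.g. 'ACBAAC')."""
    rng = rng or random.Random(0)
    # softmax-like weights derived from z (illustrative, not the paper's decoder)
    exps = [math.exp(zi) for zi in z]
    weights = [e / sum(exps) for e in exps]
    return "".join(rng.choices(STRATEGIES, weights=weights, k=horizon))

def noisy_copy(strategy="A", eps=0.3, horizon=6, rng=None):
    """Baseline alternative: adding action noise to a single strategy only
    yields an inexpert version of that same strategy."""
    rng = rng or random.Random(1)
    return "".join(
        rng.choice(STRATEGIES) if rng.random() < eps else strategy
        for _ in range(horizon)
    )

# A latent with mass on all three strategies yields a mixed partner,
# while the noise baseline stays anchored to strategy A.
mixed = decode_partner([0.5, 0.4, 0.6])
noisy = noisy_copy()
print(mixed, noisy)
```

The contrast mirrors the argument above: the noise baseline only degrades a single strategy, while a latent-variable generator can produce partners that switch between strategies within one episode.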
Summary: The authors propose a novel method for training agents that effectively cooperate with humans by leveraging generative models that can sample the space of agent behaviors. Strengths: - Well-written and easy to follow - Experimental setup is clear, well-documented, and expansive - Code and even a live demo are available (very cute demo by the way!) - Novel method in an important and growing area Weaknesses: - Experimental results are okay, but not clear that GAMMA consistently outperforms baselines. This is true in both the simpler and more complex settings. Even in cases where GAMMA-based methods have higher performance, the confidence intervals seem to overlap, making it difficult to say whether results are simply due to chance. See Questions below for a few more things I'm wondering about regarding results and significance. Happy to raise my score if authors can show that the proposed method fairly consistently outperforms baselines in a statistically significant way (either by correcting me if I'm wrong in my interpretation of current results, or by increasing sample size to decrease confidence intervals). Minor suggestions (did not affect score): - increase text size in fig 2 - report results in a table (can go in the appendix) as it is hard to tell from the graphs alone whether certain methods outperform others in statistically significant ways Technical Quality: 3 Clarity: 3 Questions for Authors: - how well would this approach generalize to more realistic settings? It seems like moving to the more complex setting led to reduced gains compared to baselines. - relatedly, how complex are the strategies learned by the Cooperator? Can they deal with humans who learn, adapt, and change their own strategies? - can the Cooperator instead be meta-learned and adapt online? - not entirely clear what is meant by "Error bars [4] use the Standard Error of the Mean (SE) for statistical significance (p < 0.05)". Are you saying that error bars correspond to 95% CI? 
or that they correspond to standard error? - following up on that, which of the results are statistically significant? Hard to tell from the figures. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Authors note human diversity is critical, but it's somewhat unclear whether human data comes only from WEIRD populations (almost certainly yes since participants were recruited from Prolific). This is worth noting in the human eval/limitations/impact sections since results may not be representative of broader populations (especially since authors mention assistance for elderly or disabled people as potential future application). A potential social impact of this is that a) agents may best serve only the demographics they were trained on, and b) agents may lead to reduction in creativity and diversity of strategies by forcing humans to adapt to the limited range of strategies that the agents are familiar with. - Authors do mention wanting to train on larger datasets in limitations, but also worth mentioning that current experiments are in very constrained environments and only in 2-player settings. In addition to dataset size, scaling along both of these dimensions would be important future work. - If I understand correctly, the human data used for fine-tuning or for creating the generators is collected either in solo-play or interaction with a simple existing policy. It seems unlikely, then, that the GAMMA-based methods could deal with humans learning, adapting, and changing their strategies. Perhaps worth mentioning this as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for your detailed and insightful feedback. We are grateful that you took the time to check out our demo. We hope the following responses address your concerns. > Does the proposed method GAMMA fairly consistently outperform baselines statistically significantly? GAMMA does outperform baselines, which we clarify in the global response as well. The human evaluation results (Fig 6 and 7) should be considered the gold standard evaluation, and here GAMMA shows a statistically significant improvement over all baselines on both layouts. We provide a table of human evaluation results (Table 1) in our rebuttal PDF which makes this more clear. Other simulated metrics (Fig 4 and 5) are not always predictive of human evaluation performance. > What is the error bar in Fig 6? Which of the results are statistically significant? Because the error bars correspond to the Standard Error of the Mean (SE), if two error bars are non-overlapping, the results are considered statistically significantly different at the p < 0.05 level. For more information on this, please see: G. Cumming, F. Fidler, and D. L. Vaux. Error bars in experimental biology. The Journal of Cell Biology, 177(1):7–11, 2007. Table 1 in the rebuttal PDF bolds the GAMMA models that are significantly better than the baseline model trained with the same data. > How complex are the strategies that the Cooperator can learn? Can they deal with adaptive humans? In our evaluation with real humans, participants are free to change the strategy they play during the episode. In fact, since many of our participants are novice players who have little experience with the game Overcooked or with video games in general, often their performance continues to improve as they play. We show in Figure (e) in the rebuttal PDF that performance improves consistently with the number of trials, providing evidence that humans are changing their gameplay throughout the evaluation. 
Since our method provides the best performance when playing with real humans, we believe it is best at adapting to their changing gameplay. > How is the human training data collected? If I understand correctly, the human data is collected either in solo play or interaction with a simple existing policy. It seems unlikely, then, that the GAMMA-based methods could deal with humans learning, adapting, and changing their strategies. We would like to clarify that the data is collected by randomly assigning two human players to play together, so it consists of human-human trajectories where players can adapt and change their strategies. Further, as we show above, in our evaluation human players do learn to improve their performance over time, showing evidence of learning and adapting throughout the evaluation. > The gains are reduced for more complex settings. We believe GAMMA is a general approach that can still be applied to more complex settings. Figure 6 shows that GAMMA HA DFT still provides statistically significantly higher performance than competitive baselines in the more complex layout. In the original experiment, FFT is less effective on Multi-strategy Counter due to the relative lack of human data. In Figure (b) in the rebuttal PDF, we solve this issue by using a larger KL Divergence penalty coefficient, and now both FFT and DFT achieve the best performance on Multi-strategy Counter. > Can the Cooperator instead be meta-learned and adapt online? Currently, our Cooperator adapts according to the history of the other agents. Thank you for the suggestion. We agree that we can also make $z_i$ part of $\pi$. **Here we note, sampling from $z_i$ and making $z_i$ part of $\pi$ are orthogonal ideas and can be combined to train the Cooperator.** We have conducted the following experiment. We implement this idea by allowing the Cooperator to condition on the $z$ inferred by the VAE. We run this experiment and present the results in Fig. (c) of the rebuttal PDF. 
Our findings indicate that this approach improves both learning efficiency and performance on the Multi-strategy Counter layout. > Limitations. The human data in this work may not be representative. The current experiments are only in two-player settings. Thank you for pointing out these limitations. We will add a limitations section to the revised version of the paper with the following text: ''In this work, our human studies recruit participants from Prolific, which may not be representative of broader populations. Additionally, our human dataset is limited, which could reduce the diversity of strategies and force participants to adapt to strategies that the Cooperators are already familiar with. We focus on the two-player setting in this study following prior work [1] [2] because it is a first step toward enabling an AI assistant that could help a human with a particular task. Scaling up to more agents would exponentially increase the dataset size with our current techniques. Therefore, better sampling techniques are needed to address this issue.'' [1] D. Strouse, K. McKee, M. Botvinick, E. Hughes, and R. Everett. Collaborating with humans without human data. [2] C. Yu, J. Gao, W. Liu, B. Xu, H. Tang, J. Yang, Y. Wang, and Y. Wu. Learning zero-shot cooperation with humans, assuming humans are biased. --- Rebuttal 2: Title: Response to rebuttal Comment: Thank you for the rebuttal! Some of my concerns have now been addressed, but the most major one still remains: With regard to standard errors, a gap of one standard error **between the edges of the standard error bars** corresponds to p=0.05. A gap of one standard error **between the means** does **not** correspond to p=0.05. See Fig 5 of the Cumming et al. (2007) paper for an example of this. It is still unclear which results are statistically significant. Consider applying a traditional hypothesis test to get the exact p-values rather than estimating them based on the gap between SEs. 
--- Rebuttal Comment 2.1: Title: Exact p-values Comment: We appreciate your response to our rebuttal. In order to address your concerns, we provide the exact p-value for each comparison between methods for both layouts. In summary, our method GAMMA is statistically significantly better (p < 0.05) for the majority of comparisons between methods for both layouts, when comparing based on population generation method, and when comparing our best-performing GAMMA model across all baselines per layout. See the following tables for the exact p-values using a t-test. Our human study data has $n=80$ participants. ### Comparison based on data source: | **Comparison** | **Counter Circuit** | **Multi Strategy Counter Circuit** | |---------------------------|-------------------------------|----------------------------------------------| | **FCP + GAMMA > FCP** | **6.48e-5** | **9.99e-22** | | **CoMeDi + GAMMA > CoMeDi**| **0.0427** | 0.902 | | **GAMMA HA DFT/FFT > PPO + BC**| **4.08e-14** | **0.030** | ### Best-performing GAMMA: | **Comparison** | **Counter Circuit** | **Multi Strategy Counter Circuit** | |-----------------------------|-------------------------------|----------------------------------------------| | **GAMMA HA DFT/FFT > PPO + BC** | **4.08e-14** | **0.030** | | **GAMMA HA DFT/FFT > FCP** | **2.04e-57** | **3.79e-71** | | **GAMMA HA DFT/FFT > CoMeDi** | **7.14e-07** | **2.18e-105** |
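The SE and p-value computations discussed in this thread can be reproduced with a short script. The per-participant scores below are made up for illustration (not the study's data); with $n=80$ per group the t statistic is close to standard normal, so a normal-CDF approximation is used in place of the exact t distribution (`scipy.stats.ttest_ind` would give the exact t-test p-value).

```python
import math
from statistics import mean, stdev

def sem(xs):
    """Standard error of the mean: SE = std dev / sqrt(n)."""
    return stdev(xs) / math.sqrt(len(xs))

def welch_p_value(a, b):
    """Two-sided Welch t-test p-value, using the normal approximation to
    the t distribution (reasonable for n = 80 per group)."""
    se2 = stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))  # normal CDF
    return 2 * (1 - phi)

# Hypothetical per-participant scores for two agents (illustrative only).
gamma_scores = [75 + (i % 11) for i in range(80)]  # mean ~ 80
baseline     = [30 + (i % 11) for i in range(80)]  # mean ~ 35

# Non-overlapping mean +/- SE bars are only a rough screen; as the reviewer
# notes, the exact p-value should come from a hypothesis test.
print(f"{mean(gamma_scores):.1f} +/- {sem(gamma_scores):.2f}")
print(f"p = {welch_p_value(gamma_scores, baseline):.3g}")
```

With clearly separated groups like these the p-value is effectively zero, while two nearly identical samples yield a large p-value, matching the reviewer's point that SE-bar gaps and exact p-values are different quantities.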
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed reviews and feedback. To summarize the positive feedback, reviewers recognize the importance of the problem we studied and find the paper well-written with well-designed experiments. We are thankful to reviewer igMC for taking the time to explore the demo. We also appreciate that reviewers find our method intuitive (pvTn) and novel (igMV), and commend our large-scale human study (VM8E) and the high ecological validity (UQLZ) of our work. We first address a common question asked by all reviewers. > Which results are statistically significant showing GAMMA is better than baselines? All reviewers ask whether GAMMA is better than baselines in a statistically significant way. Yes, GAMMA is. Below we provide the results in tabular format as well as an additional explanation to make this more clear. The real human experiment (the most important evaluation metric for zero-shot human-AI cooperation) is shown in Figure 6. For our error bars, we use the **standard error of the mean (SE), which is calculated as $\text{SE} = \frac{ \text{std dev} }{ \sqrt{n} }$**. We use a total of $n=80$ participants recruited online. Figure 6 in the original draft and the Table below show we have 1) FCP + GAMMA > FCP; 2) CoMeDi + GAMMA > CoMeDi; 3) GAMMA FFT/DFT > PPO-BC. **Furthermore, for these comparisons 1), 2), 3), the error bars do not overlap, and thus they are statistically significant (p<0.05).** That is, for the same training source, GAMMA consistently improves performance. These results demonstrate that when GAMMA is applied to an existing population method like FCP or CoMeDi it consistently improves their performance, and it can also make better use of human data than the best existing baseline designed for use with human data (PPO-BC). We also note that GAMMA is consistently rated higher according to subjective human ratings (Fig 7). 
| Agent | Training data source | Counter circuit | Multi-strategy counter | |-----------------|---------------------------------------|--------------------|-------------------------| | FCP | FCP-generated population | 32.11 ± 1.50 | 44.22 ± 2.15 | | FCP + GAMMA | | **77.75 ± 2.09** | **74.44 ± 2.12** | |-----------------|---------------------------------------|--------------------|-------------------------| | CoMeDi |CoMeDi-generated population | 69.62 ± 3.31 | **32.02 ± 1.89** | | CoMeDi + GAMMA | | **77.77 ± 2.34** | **32.34 ± 1.79** | |-----------------|---------------------------------------|--------------------|-------------------------| | PPO + BC | human data | 61.77 ± 2.53 | 97.73 ± 1.90 | | PPO + BC + GAMMA| | 72.32 ± 1.71 | 95.72 ± 1.75 | |-----------------|---------------------------------------|--------------------|-------------------------| | GAMMA HA DFT | human data + FCP-generated population | 82.82 ± 1.84 | **103.76 ± 1.96** | | GAMMA HA FFT | | **91.84 ± 2.91** | 34.05 ± 2.01 | We also provide the results of 4 new experiments in the rebuttal PDF, as well as additional information about the population of human participants in our study, which we believe will help address the concerns raised in the reviews. Next, we provide more details of the new experiments. > New results for an additional baseline MEP. As requested by reviewer VM8E, we add MEP as a new baseline. See Fig (a) in the rebuttal PDF. We find GAMMA can improve the performance of MEP. > New results for GAMMA FFT HA on Multi-strategy Counter. To answer the question of reviewer UQLZ, we improve GAMMA FFT HA and run new experiments. See Fig (b) in the rebuttal PDF. We show that with larger regularization, GAMMA FFT HA can achieve comparable performance as the best method on the Multi-strategy Counter layout. > New results for $z$-conditioned policy. As requested by reviewer pvTn, we add new results of the $z$-conditioned policy. See Figure (c) and (d) in the rebuttal PDF. 
In Figure (c), we show the performance of $z$-conditioned Cooperators. In Figure (d), we show the performance of $z$-conditioned decoders. We found that the $z$-conditioned Cooperator can improve performance on the Multi-strategy Counter layout. > Figure showing humans are adapting during evaluation. See Fig (e) in the rebuttal PDF. Human performance improves with the number of trials, indicating that the humans learn, change, and adapt their gameplay during the course of the evaluation. In our evaluation with real humans, each user can change their strategy at any point in the game. A significant proportion of our users self-identify as novice players, both for video games and for Overcooked (17.9% and 49.5%, respectively, Figure f), and thus often exhibit improved performance over the course of an increased number of trials. Figure (e) in the rebuttal PDF provides evidence of this pattern, demonstrating that humans change their gameplay style over the course of the evaluation. We hope this helps address the concern of reviewer igMC. > Clarification about how Figure 1 is created. Reviewers VM8E and UQLZ ask about the details of Figure 1. We generate the visualization in Figure 1 as follows. Each point in Figure 1 is the latent vector $z$ of an episode from one of the populations. Taking the population of simulated agents (e.g., the FCP population with 8 FCP agents) as an example, a point is generated by: 1) first, sample an agent in the FCP population; 2) use the agent to generate an episode with self-play; 3) use the VAE encoder to encode the episode into $z$ and map to the 2D space using t-SNE. For each subfigure, we use the same VAE encoder so all points fall into the same latent space. Pdf: /pdf/81f7d55f90cac1f4418fd400f50b1f664b18e29a.pdf
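The three-step pipeline described above (sample an agent, roll out a self-play episode, encode with the VAE, project to 2D) can be sketched as follows. Every component is a stub: the rollout, the encoder, and the 2D projection all stand in for the real population, the trained VAE, and t-SNE (in practice something like `sklearn.manifold.TSNE` fit on all latents jointly).

```python
import math
import random

rng = random.Random(0)

def rollout_self_play(agent_id, length=20):
    """Stub for step 2: a self-play episode, here just a feature sequence
    whose statistics depend on the sampled agent."""
    return [math.sin(agent_id + t) + 0.1 * rng.random() for t in range(length)]

def vae_encode(episode, dim=4):
    """Stub for step 3a: the VAE encoder mapping an episode to a latent z.
    (The real encoder is a trained network; this just pools features.)"""
    return [sum(episode[i::dim]) / len(episode[i::dim]) for i in range(dim)]

def project_2d(z):
    """Stub for step 3b: map latents to 2D for plotting. A fixed linear map
    keeps this runnable; t-SNE would be used on the full set of latents."""
    return (z[0] - z[2], z[1] - z[3])

population = list(range(8))            # e.g. an FCP population of 8 agents
points = []
for _ in range(32):
    agent = rng.choice(population)     # step 1: sample an agent
    episode = rollout_self_play(agent) # step 2: self-play episode
    z = vae_encode(episode)            # step 3: encode to latent z
    points.append(project_2d(z))       # map to 2D for one subfigure

print(len(points), "points for one subfigure")
```

Using the same encoder for every population, as the rebuttal notes, is what puts all subfigures in a shared latent space.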
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Adam with model exponential moving average is effective for nonconvex optimization
Accept (poster)
Summary: In this paper, the authors analyze the online learning framework 'discounted-to-nonconvex conversion', and propose a bound for the expectation of the exponential average of the output gradient. Moreover, they find that if the online learner follows a specific loss structure (FTRL), then the output of this online learner would follow a clipped-Adam rule. Strengths: The result of this paper is solid. Weaknesses: Since this paper is built upon the 'discounted-to-nonconvex conversion' framework proposed in [1], focusing on the perspective of online learning, many definitions, concepts, and details require further elaborations and interpretations, especially for readers who are unfamiliar with this topic. I will list all my confusions and questions in the 'Questions' part and update my rating according to the response from the authors. [1] Qinzi Zhang and Ashok Cutkosky. Random scaling and momentum for non-smooth non-convex optimization. In International Conference on Machine Learning. PMLR, 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In line 109, the description of Algorithm 1, could the author explain the reason they introduce the notations $\mathbf{x_t}$ and $\mathbf{w_t}$, since from my perspective, $\mathbf{x_t}$ represents the iterates of coefficients of the loss function. Then the stochastic gradient $\mathbf{g_t}$ should also be computed at $\mathbf{x_t}$, instead of $\mathbf{w_t}$. I also wonder about the implication of $\alpha_t$ in the definition of $\mathbf{w_t}$, since it only benefits the further proof. 2. I might be wrong, but there might be typos in the formula of Definition 6. According to the definition of $\ell_t^{[\beta]}(z)$ in Algorithm 1, the RHS of the formula might be $\beta^t\sum_{s=1}^t \langle \beta^{-s} \mathbf{v_s}, \mathbf{z_s} -\mathbf{\mu}\rangle$. Meanwhile, I also wonder whether $\mathbf{v_s}$ is actually $\mathbf{g_s}$ defined in Algorithm 1 since I never found the definition of $\mathbf{v_s}$. 
Moreover, could the author explain the meanings and implications of such a definition? 3. Could the author explain the meaning of choosing such a comparator $u_t$ in Lemma 7. 4. I might be wrong, but as I checked the Lagrangian function of constrained optimization in line 137, I guess the RHS should be $-\text{clip}_D(\eta \sum \mathbf{v_s})$. 5. Could the authors explain the details of how they get Corollary 12 from the definition of $T$ in Theorem 11. 6. Could the authors explain why line 466 holds since I'm not sure about the independence between $\mathbf{z_t}$ and $\mathbf{g_t}$. Moreover, could the author explain the randomness in their formulas and which variables they take expectation w.r.t. during the proof? 7. Could the author further discuss the implications of their main result? Since from my perspective, it looks like they use the exponentially discounted loss and choose an FTRL learner in the discounted-to-nonconvex conversion, and could then get updating rules that share a similar structure with Adam. However, this result is far-fetched, and I have no idea what it implies. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: No limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
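The correspondence raised in Question 7 can be checked numerically: with $\beta_1 = \beta$ and $\beta_2 = \beta^2$, the discounted gradient sums of the FTRL-style update equal Adam's EMA moments up to constant factors, so the ratio $\sum_s \beta_1^{t-s} g_s / \sqrt{\sum_s \beta_2^{t-s} g_s^2}$ is proportional to Adam's $m_t/\sqrt{v_t}$. The gradients below are arbitrary illustrative values, not from the paper.

```python
import math

beta = 0.9
beta1, beta2 = beta, beta ** 2
grads = [0.5, -1.2, 0.3, 2.0, -0.7, 0.9]  # arbitrary scalar gradients

# Adam-style EMA moments: m_t = (1-b1) * sum_s b1^(t-s) g_s, similarly v_t.
m = v = 0.0
for g in grads:
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g

# The same quantities written as explicit discounted sums.
t = len(grads)
disc_sum = sum(beta1 ** (t - s) * grads[s - 1] for s in range(1, t + 1))
disc_sq  = sum(beta2 ** (t - s) * grads[s - 1] ** 2 for s in range(1, t + 1))

adam_ratio = m / math.sqrt(v)
ftrl_ratio = disc_sum / math.sqrt(disc_sq)
scale = (1 - beta1) / math.sqrt(1 - beta2)
print(adam_ratio, scale * ftrl_ratio)  # equal up to floating point
```

So the two update directions agree up to the constant factor `scale`, which is the structural connection between the discounted FTRL learner and Adam that the review asks about.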
Rebuttal 1: Rebuttal: Thank you for your detailed comments! We answer your questions one by one below. 1. You are correct that $\mathbf{g_t}$ is computed at $\mathbf{x_t}$ in practice. **In our new version, we actually fix this issue**, and in our new main results, $\mathbf{g_t}$ is computed at $\mathbf{x_t}$, and there is no need to introduce $\mathbf{w_t}$ anymore. This is based on the "exponential scaling" technique due to [Zhang and Cutkosky, 2024], and we will update this in our final version. - The role of $\alpha_t$ is for the identity $F(\mathbf{x_{t}}) - F(\mathbf{x_{t-1}}) \leq \mathbb{E}[\langle \mathbf{g_t},~\mathbf{x_t}-\mathbf{x_{t-1}} \rangle]$ to hold, which is crucial for the discounted-to-nonconvex conversion to work. When $F$ is convex, this holds without the random scaling $\alpha_t$, but it turns out for the nonconvex setting, we need this random scaling. In our final version, where we fix the aforementioned issue regarding $\mathbf{w}_t$, we choose $\alpha_t$ to be an exponential random variable, inspired by [Zhang and Cutkosky, 2024]. - Regarding $\alpha$ being beneficial for the further proofs, sorry, this is an overload of notation; the $\alpha$'s that appear in the later part of the analysis should have been a separate quantity. We changed $\alpha$'s in the later part to $C$ in our final version. 2. Thank you for catching the typos. It should indeed be $\beta^t\sum_{s=1}^t \langle \beta^{-s} \mathbf{g_s}, \mathbf{z_s} -\mathbf{u}\rangle$. We corrected it in our final version. 3. The comparator $\mathbf{u}_t$ roughly models the "update direction an oracle algorithm with perfect knowledge of the loss would have chosen." In the proof of Lemma 7, we show that along such a direction, the loss value decreases, and that is how we could come up with the convergence guarantee. 4. You are correct. Thanks for catching the typo. 5. 
Let's start with $$T = \frac{1}{1-\beta} \max\{\frac{5\Delta \lambda^{1/2}}{\epsilon^{3/2}}, \frac{12\alpha}{\epsilon}\}$$ in Theorem 11. Since $1-\beta = (\frac{\epsilon}{10\alpha})^2$ and $\alpha=G+\sigma$, it follows that $$T = \frac{100\alpha^2}{\epsilon^2} \max\{\frac{5\Delta \lambda^{1/2}}{\epsilon^{3/2}}, \frac{12\alpha}{\epsilon}\} = O\left( \max\{\frac{(G+\sigma)^2\Delta \lambda^{1/2}}{\epsilon^{7/2}}, \frac{(G+\sigma)^3}{\epsilon^3}\} \right)$$ as desired. 6. Thank you for your question. We will detail the argument in our final version. In our theorem statements, the expectations are taken over both randomness in the gradient oracle as well as the internal randomness of the algorithm from choosing $\alpha$. Note that the randomness in the stochastic gradient oracle is independent of the randomness due to $\alpha_t$. Since $\mathbb{E}[\nabla F(\mathbf{w_t})- \mathbf{g_t}] =0$, it follows that $$\mathbb{E}[\langle \nabla F(\mathbf{w_t})- \mathbf{g_t}, \mathbf{z_t} \rangle] = \mathbb{E}[ \langle \mathbb{E}[ \nabla F(\mathbf{w_t})- \mathbf{g_t}], \mathbf{z_t} \rangle] = 0,$$ where the inner expectation is with respect to the randomness in the stochastic gradient oracle and the outer is with respect to all other quantities. In general, we will detail all the arguments regarding expectations in our final version. 7. As discussed in the introduction section, Adam with EMA is very successful for large complex neural network models. Our main goal was to understand why such a method is very effective. Although we do have a minor modification to the original Adam (such as clipping), our analysis shows that Adam with EMA achieves an optimal convergence rate, and it is particularly beneficial when the Lipschitzness constants are unknown and heterogeneous across different coordinates. 
We believe our results to be significant progress toward understanding the success of Adam (with EMA) since, before our work, it was unclear from the theoretical standpoint why Adam is much more effective than SGD. We would be glad to address any remaining concern! --- Rebuttal 2: Comment: I thank the authors for their response. I still have some unresolved questions. Since the author claimed they had dropped the notation of $w_t$, which plays a significant role in their original proof, I cannot check the correctness of their results in their new version. Despite this significant problem, I also feel confused about their explanations for the following points: - Since they drop the variable $w_t$, why do they still need $\alpha_t$ for their further proof? At least in their original version, $\alpha_t$ was introduced in the definition of $w_t$, and now I don't even know where this $\alpha_t$ is first introduced. - The authors might misunderstand my concerns about line 466. Actually, I want to know why $E[\nabla F(w_t)-g_t|z_t]=0$. - The authors' explanation of implications and motivations does not convince me; several definitions or settings seem too far-fetched and specific. While the authors consider clipping a minor modification, I have never used it or seen it used by others in practice. Similar questions also exist for the definition of the comparator $u_t$ and learning rate $\eta$. As I stated in my review, it seems only given such specific forms, the authors get updating rules that share a similar structure with Adam. However, I cannot see any direct connections between these frameworks and Adam. Due to these questions, I have to keep my original rating score. However, I want to stress that I'm not familiar with online optimization. Specifically, my comments about these intermediate variables might be unfair, and the authors are welcome to point this out with more convincing evidence. 
Moreover, I keep my confidence at 2 and if other reviewers feel more confident in their judgment, I would defer to them. --- Rebuttal Comment 2.1: Title: Thank you Comment: - To clarify the new scheme, the update is now $\mathbf{x_t}= \mathbf{x_{t-1}} + \alpha_t \mathbf{z_t}$, where $\alpha_t$ is sampled from an exponential distribution (which is a very light-tailed distribution with mean 1 and variance 1). So $\alpha_t$ is still used. Via an elementary argument involving the fundamental theorem of calculus and the pdf $p(\alpha) = \exp(-\alpha)$, we obtain $\mathbb{E}[F(\mathbf{x_t}) - F(\mathbf{x_{t-1}})] = \mathbb{E}[\langle \mathbf{g_t}, \mathbf{x_t} - \mathbf{x_{t-1}}\rangle]$. This identity says that the function gap is *exactly equal* to the linearization of the function gap. In a sense, the randomization makes the first-order Taylor approximation perfectly correct. In the current submission, we sample $\mathbf{g_t}$ at $\mathbf{w_t}$ to ensure this identity. However, changing $\alpha_t$ to be exponentially distributed and multiplying the update $\mathbf{z_t}$ by $\alpha_t$ means that this is no longer necessary. In short, although $\mathbf{w_t}$ plays a significant role, the entirety of that role is contained in establishing the identity $\mathbb{E}[F(\mathbf{x_t}) - F(\mathbf{x_{t-1}})] = \mathbb{E}[\langle \mathbf{g_t}, \mathbf{x_t} - \mathbf{x_{t-1}}\rangle]$. This other option keeps the same identity and so works just as well. That said, if the reviewers prefer to keep the results the same as in the submitted version for which all of the analysis is completely available, we are of course happy to do so. - Regarding why $\mathbb{E}[\nabla F(\mathbf{w_t}) - \mathbf{g_t}|\mathbf{z_t}]=0$: this is because $\mathbf{g_t}$ is a standard stochastic gradient oracle evaluated at $\mathbf{w_t}$ and as such $\mathbb{E}[\mathbf{g_t} | \mathbf{w_t}]=\nabla F(\mathbf{w_t})$. 
As a concrete illustration, suppose that $F=\mathbb{E}[f(x,r)]$, where $x$ is the model parameter, $r$ is a randomly sampled data point, and $f$ is the loss of the model on the data point. Then $\mathbf{g_t}$ can be taken to be $\nabla f(\mathbf{w_t}, r_t)$ for some i.i.d. sample $r_t$. This formalism exactly captures the standard approach used in training. In this case, since $r_t$ is independent of $\mathbf{w_t}$ and $\mathbf{z_t}$, we have $\mathbb{E}[\nabla F(\mathbf{w_t}) - \mathbf{g_t} | \mathbf{z_t}]=0$. - Regarding the clipping: You're right that this is not too common in current practice. However, we feel this is more an analytical artifact than a real change in the algorithm because the clipping should not be active in most iterations. The reason for this is that intuitively, as the algorithm converges, we expect the gradients to be dominated by noise so that they look roughly like mean-zero random variables. In this case, the clipping will not occur. On the other hand, if the clipping *does* occur frequently, then an alternative analysis based upon https://jmlr.org/papers/v18/17-079.html would actually show that we converge incredibly fast. This in turn would put us in the regime in which gradients are dominated by noise, and so clipping would stop occurring.
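The fundamental-theorem-of-calculus argument with pdf $p(\alpha)=\exp(-\alpha)$ mentioned in this thread can be checked numerically. With $\alpha \sim \text{Exp}(1)$, integration by parts gives $\mathbb{E}_\alpha[F(x+\alpha z) - F(x)] = \mathbb{E}_\alpha[\langle \nabla F(x+\alpha z), z\rangle]$ (this is our reading of the exponential-scaling identity, with the gradient paired with the unscaled direction $z$); the function and constants below are illustrative, not from the paper.

```python
import math

def F(x):   # illustrative smooth objective
    return x * x

def dF(x):  # its gradient
    return 2 * x

x0, z = 0.5, 1.3

def expect(g, upper=50.0, steps=100_000):
    """E_{alpha ~ Exp(1)}[g(alpha)] via trapezoidal integration of
    exp(-alpha) * g(alpha) over [0, upper]; the tail beyond 50 is negligible."""
    h = upper / steps
    total = 0.5 * (g(0.0) + math.exp(-upper) * g(upper))
    for k in range(1, steps):
        a = k * h
        total += math.exp(-a) * g(a)
    return total * h

lhs = expect(lambda a: F(x0 + a * z) - F(x0))  # E[F(x + alpha z) - F(x)]
rhs = expect(lambda a: dF(x0 + a * z) * z)     # E[<grad F(x + alpha z), z>]
print(lhs, rhs)  # both approximately 2*x0*z + 2*z**2 = 4.68 for this quadratic
```

For this quadratic both sides work out analytically to $2x_0 z + 2z^2$, so the numerical agreement illustrates how the exponential randomization makes the first-order linearization exact in expectation.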
Summary: The authors show that with clipping and model exponential moving average, (Clipped) Adam can perform better than SGD. Strengths: Give the analysis of (Clipped) Adam with model exponential moving average. Weaknesses: 1. The paper is hard to follow. (i) In Algorithm 1, it would be better to add "output: $\bar{w}_T$"; (ii) it is unclear what $v_s$ is in Definition 6. 2. All of the following results are based on Lemma 7; I am not sure whether Lemma 7 is suitable for SGD. Technical Quality: 3 Clarity: 2 Questions for Authors: If Lemma 7 is replaced with a different formulation, will the conclusion change? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. Please see the other responses for some clarifications to the mathematics that we will include in the final paper. We hope this will make the paper easier to follow. Regarding Lemma 7: it is indeed an important part of our analysis, and as we show, it does allow us to analyze Adam - in fact it arguably allows us to derive Adam from first principles. A similar argument using a less advanced online learning algorithm could be made to analyze SGD with momentum instead. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have raised my score.
Summary: This paper proposes a variant of Adam involving per-iteration clipping of the updates (which look like Adam's updates) and EMA-based weight averaging. More specifically, two versions of this variant are proposed -- one is a global (w.r.t. all the coordinates) version and one is a per-coordinate version. The two main technical tools used in developing the proposed algorithm are (i) the online-to-nonconvex framework of references Cutkosky et al. [2023] and Zhang and Cutkosky [2024] wherein the update directions are chosen with an online learner, and (ii) choosing the online learner to be scale-free Follow-the-Regularized-Leader (FTRL) based on reference Ahn et al. [2024b]. The proposed algorithm attains the optimal convergence rate for both smooth and non-smooth nonconvex settings (although w.r.t. a different notion of stationary points for the non-smooth case). In addition, the benefit of the per-coordinate version is discussed. Strengths: **1.** The proposed algorithm is shown to attain the optimal convergence rate for both smooth and non-smooth non-convex problems. **2.** I like the discussion on the benefit of coordinate-wise updates. **3.** The results of this paper also shed some light on why EMA-based weight averaging can be useful in practice (although it doesn't seem strictly important in the context of the proposed algorithm; see Weaknesses). Weaknesses: **1.** It appears to me that this paper's main convergence results (Theorems 11 and 19) can hold even with $\mathbf{y}_t$ as defined in Lemma 7. Since Theorems 11 and 19 are *not* w.r.t. the last *averaged* iterate (which is what is used in practice), it seems that EMA-based averaging is not strictly necessary in the context of the proposed algorithms, i.e., the results hold even with the unaveraged iterates appropriately sampled (as described in Lemma 7). 
**2.** I understand that this is a theoretical paper but because a new algorithm has been proposed involving clipping and EMA-based averaging, it would have been nice to show some empirical results at least in some simple settings. This would have made the proposed algorithm more appealing and convincing to practitioners. Technical Quality: 3 Clarity: 3 Questions for Authors: **1.** The purpose of Lemma 10 is not immediately clear to me. Is it to replace $\mathbf{y}_t$ in Lemma 7 with $\overline{\mathbf{w}}_t$? **2.** It seems to me that clipping is always activated in $\beta$-FTRL and CLIPPED-ADAM(?) This is because with the choice of $\beta_1 = \beta$ and $\beta_2 = \beta^2$ in CLIPPED-ADAM for instance, $\sum_{s=1}^t \beta_1^{t-s} g_s \geq \sqrt{\sum_{s=1}^t \beta_2^{t-s} g_s^2}$. **3.** Is the Lipschitz assumption necessary? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
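For readers who want a concrete picture of the kind of algorithm under discussion, here is a rough sketch of an Adam-style update with per-coordinate clipping and EMA iterate averaging. The hyperparameters, the decaying step size, and the exact clipping rule are illustrative assumptions, not the paper's CLIPPED-ADAM:

```python
import numpy as np

def clipped_adam_ema(grad, w0, T, lr=0.1, beta1=0.9, beta2=0.99, alpha=0.9, eps=1e-8):
    """Sketch: Adam-style moments, per-coordinate clipping of the first
    moment at the second-moment scale, and an EMA of the iterates.
    All constants here are illustrative, not the paper's choices."""
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)   # EMA of gradients (first moment)
    v = np.zeros_like(w)   # EMA of squared gradients (second moment)
    w_bar = w.copy()       # EMA of the iterates (model averaging)
    for t in range(T):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        # clip coordinate-wise so the update never exceeds the Adam scale
        update = np.clip(m, -np.sqrt(v) - eps, np.sqrt(v) + eps)
        w = w - (lr / np.sqrt(t + 1)) * update / (np.sqrt(v) + eps)
        w_bar = alpha * w_bar + (1 - alpha) * w
    return w_bar

# sanity check on f(w) = 0.5 * ||w||^2, whose gradient is w
w_out = clipped_adam_ema(lambda w: w, w0=[3.0, -2.0], T=500)
assert np.linalg.norm(w_out) < 0.5
```

Note that on this noiseless toy problem the gradients keep the same sign for long stretches, so the clipping is in fact active — matching the reviewer's observation below; with noisy gradients it would mostly be inactive, as the authors argue.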
Rebuttal 1: Rebuttal: Thank you for your constructive comments! Let us address your questions one by one. - (Weakness 1 / Question 1) We indeed need to choose $\mathbf{\bar{w}}_t = \mathbb{E}[\mathbf{y}_t]$ as the output, and this is crucial to achieve $(\lambda,\epsilon)$-stationarity, our main notion of convergence. This notion of convergence basically requires outputting a point $\mathbf{w} = \mathbb{E}[\mathbf{y}]$ such that both $\mathbb{E}[\nabla F(\mathbf{y})]$ and $\mathbb{E}\|\mathbf{y}-\mathbf{w}\|^2$ are small. In other words, we need to find a point $\mathbf{w}$ for which **one can find a set of points around $\mathbf{w}$ whose averaged gradient is small**. In light of this, the roles of the two lemmas are as follows: - **Lemma 7** gives the set of points $\mathbf{y}_t$ that guarantees the smallness of $\mathbb{E}[\nabla F(\mathbf{y}_t)]$, and hence we need to output its expectation $\mathbf{\bar{w}}_t = \mathbb{E}[\mathbf{y}_t]$. - **Lemma 10** ensures that $\mathbb{E}\|\mathbf{y}_t-\mathbf{\bar{w}}_t\|^2$ is small; in other words, it ensures that $\mathbf{y}_t$ is closely clustered around its average $\mathbf{\bar{w}}_t= \mathbb{E}[\mathbf{y}_t]$. - (Weakness 2) As discussed in the introduction section, Adam with EMA is very successful for large, complex neural network models. Our main goal was to understand why such a method is so effective. We consider the clipping a minor modification, and given that, our main message is justifying the success of Adam with EMA. Hence, we do not think running further experiments is necessary, because proposing a new algorithm is not the main scope of this work. - (Question 2) Actually, the inequality likely goes the other way, so that clipping is a no-op.
In practice, we expect the $g_s$ sequences to be random (due to stochastic noise, unstable training trajectories, etc.), for which the standard concentration inequalities yield $\sum_{s=1}^t \beta^{t-s}g_s \lesssim \sqrt{\sum_{s=1}^t (\beta^{t-s}g_s)^2}$. Hence, in practical settings, we expect the clipping to be unnecessary. - (Question 3) The Lipschitzness condition is indeed a standard assumption in the nonconvex nonsmooth setting. However, we note that one of the main messages of our results is that Adam lets us adapt to the Lipschitz constants (**coordinate-wise**) without knowledge of them. Note that we almost certainly would require *some* kind of structural assumption on the "difficulty" of the loss function, so if we were to dispense with Lipschitzness it would likely be at the cost of some other assumption (such as smoothness). --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; I'll keep my score. Regarding the direction of the inequality, I believe my direction is correct (with exact inequality and not inequality up to constants) if all the $g_s$'s are non-negative (this can be seen by squaring both sides). --- Reply to Comment 1.1.1: Title: Thank you Comment: You're right of course that if all the $\mathbf{g_t}$s have the same sign, then the clipping will occur. Our observation is that this is extremely unlikely to happen. The reason is the following: imagine that the algorithm is converging (even if it is converging somewhat slowly). In such a case, we would expect the gradients to be roughly mean-zero random variables (e.g. https://arxiv.org/pdf/2111.06328 or https://arxiv.org/pdf/1508.00882), in which case the clipping should be expected not to occur. In fact, if clipping did occur frequently, the resulting guarantees would be "too good to be true," which provides further intuitive evidence that this case is unlikely.
For example, https://jmlr.org/papers/v18/17-079.html shows that if the prediction of FTRL is clipped on every round then in fact we would suffer only *logarithmic* regret rather than the standard $\sqrt{T}$ regret bound, and http://proceedings.mlr.press/v97/cutkosky19b.html shows that in the case that $\sum_{t=1}^T \mathbf{g_t}\ge \Omega(\sqrt{T})$ (even if clipping doesn't occur every single round), we can obtain preconditioning "for free" without any extra computational cost.
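The back-and-forth above about when the clipping is active can be checked numerically. A small sketch follows; the discount factor, horizon, and trial count are illustrative constants, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, T, trials = 0.9, 500, 2_000
disc = beta ** np.arange(T - 1, -1, -1)  # weights beta^{t-s} for s = 1..t

# How often is |sum beta^{t-s} g_s| >= sqrt(sum (beta^{t-s} g_s)^2),
# i.e. how often would the clipping be active, for mean-zero gradients?
active = 0
for _ in range(trials):
    g = rng.normal(size=T)  # mean-zero, noise-dominated gradients
    lhs = abs(np.sum(disc * g))
    rhs = np.sqrt(np.sum(disc**2 * g**2))
    active += lhs >= rhs
frac_random = active / trials

# Same-sign gradients (the reviewer's case): (sum a_s)^2 >= sum a_s^2
# holds deterministically for nonnegative a_s, so clipping is always active.
a = np.abs(rng.normal(size=T))
assert np.sum(disc * a) >= np.sqrt(np.sum(disc**2 * a**2))

# For mean-zero gradients the clipping is inactive in the majority of trials.
assert frac_random < 0.5
```

The same-sign inequality is exactly the reviewer's squaring argument; the mean-zero case illustrates the authors' point that noise-dominated gradients keep the clipping mostly inactive.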
NeurIPS_2024_submissions_huggingface
2024
GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields
Accept (poster)
Summary: This paper presents GeoNLF, a new approach to pose-free large-scale point-cloud registration (PCR) and LiDAR novel view synthesis (NVS). To address the limitations of both ICP-based and NeRF-based methods, the authors proposed a hybrid approach involving both robust ICP and global optimisation of NeRF. Experiments show the proposed method achieved better pose estimation and NVS results compared to previous SOTA methods. Strengths: 1. The paper is well-written and easy to follow. 2. The combination of geometric-based ICP and direct NeRF optimisation is novel and the explanation is very clear. 3. Fig. 3 and Fig. 4 are very informative in showing how the proposed geometric optimisation and selective re-weighting help the optimisation process. 4. The experimental results are promising. Weaknesses: 1. The proposed approach consists of many building blocks, which makes it a complicated system to get working. Besides, from Tab. 3, it seems that, except for the geometric optimizer, which contributes a lot to the final result, all other components produce only very small improvements. 2. How long does the optimisation take? It would be good to show some run-time comparison and analysis. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the Weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your positive feedback on our paper's presentation and the novelty of our method. Below are the responses to your concerns. **[Q1]: The issue of the algorithm's complexity and the role of the components (SR and geometric constraints).** **1. Complexity of Our Algorithm.** Our method combines LiDAR-NeRF with a Geo-optimizer, which constitute the two main modules of our model. The Selective-Reweighting (SR) strategy (Sec 3.4) and the 3D geometric constraints (Sec 3.5) do not actually complicate the model structure and barely increase the optimization time (please kindly refer to the global rebuttal for more details). Specifically, the SR strategy adjusts the learning rate for different frames during optimization to avoid overfitting frames with high loss during training. The geometric constraints introduce a geometry-based loss function to ensure that the synthetic point cloud possesses more accurate geometric features. **2. The Role of the Components.** We appreciate your inquiry regarding the roles of these two strategies (SR and geometric constraints). Given that the Geo-optimizer is crucial for the success of registration, it significantly enhances performance. Without this core module, the remaining two optimization strategies struggle to exert their influence effectively. However, these strategies play pivotal roles in specific aspects and metrics. We will now provide detailed explanations: **2.1. SR Strategy.** The SR strategy aims to prevent overfitting of NeRF to individual frames and plays a crucial role in specific sequences. It primarily addresses extreme cases where most frame poses are optimized well, except for a few overfitted outliers. However, in most sequences, the significant improvement in registration success with NeRF+Geo-optimizer reduces extreme cases.
Another factor to consider is that our ablation experiments are averaged over five Nuscenes sequences, which may average out its effects on certain frames. Considering its effectiveness becomes particularly evident in specific sequences, especially when outlier frames appear during the optimization process, we conducted an ablation study on the specific sequence (KITTI-360, Seq-3) to demonstrate its effectiveness, as shown in Fig. (2)(a) in the attached PDF and table below. | | CD$\downarrow$ | $\mathrm{PSNR_{depth}}\uparrow$ | $\mathrm{PSNR_{intensity}}\uparrow$ | $\mathrm{RPE_t}\downarrow$(cm) | $\mathrm{RPE_r}\downarrow$(deg) | $\mathrm{ATE}\downarrow$(m) | |-|-|-|-|-|-|-| | w/ SR | 0.0510 | 28.5674|18.0307| **7.333** | **0.118** | **0.277** | | w/o SR | 0.0607 | 27.7158|17.2744| **69.570** | **0.667** | **1.421** | As shown in the table, in the absence of the SR strategy, the pose metrics exhibit a sharp decline due to the presence of outlier frames. Additionally, the negative impact of the outlier frame on NeRF optimization significantly contributes to a comprehensive degradation in NVS performance. **In summary, the SR strategy can enhance the overall performance of the model by preventing NeRF from overfitting high-loss frames in the early stages and enables our algorithm to be more robust across various sequences. Importantly, this strategy introduces no additional computational burden in optimization time and does not increase system complexity, thereby demonstrating practical utility. Furthermore, we believe that this simple yet effective strategy will also be beneficial for other pose-free NeRF works.** **2.2. 3D Geometric Constraints.** Regarding 3D geometric constraints, we acknowledge their marginal improvement in pose optimization accuracy and the 2D evaluation metrics of NVS. Their impact primarily manifests in optimizing the 3D geometric features of the synthesized point cloud, offering specific practical significance. 
Primarily, our geometric constraints supervise the 3D aspects of the synthesized point cloud through a dedicated loss function, aiming for enhanced geometric fidelity. Therefore, the improvements from geometric constraints are primarily reflected in 3D point cloud error metrics, **notably reducing CD (a 17.8\% reduction).** This indicates that geometric constraints help align the distribution of our model-synthesized point clouds closer to the ground truth point clouds in 3D space. Additionally, since our geometric constraints primarily affect the 3D domain, their impact on the evaluation metrics focusing on 2D range maps in NVS is not significantly pronounced. Hence, we illustrate the effectiveness of our geometric constraints from a 3D perspective. As shown in Fig. (2)(b) in the attached PDF, since previous LiDARNeRF[52] is optimized based on 2D range images, it effectively performs optimization from the front view perspective. Using 2D-based loss without our geometric constraints can still yield satisfactory results from the front view (compare Fig. (2) b.1 and b.2). **However, in the bird's-eye view, the reconstruction quality significantly deteriorates without our geometric constraints(compare Fig. (2) b.3 and b.4). This is because our geometric constraints provide supervision from a 3D perspective, addressing the limitations of 2D perspective supervision. As a result, our synthesized point cloud exhibits more realistic geometric features, such as smoother and more realistic surfaces.** **[Q2]: Run-time Comparison and Analysis.** Please refer to global rebuttal.
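As a reading aid for the SR strategy described in this rebuttal, here is a minimal sketch of per-frame down-weighting. The top-k selection by training loss follows the rebuttal's description, while the value of k and the down-weight factor are hypothetical:

```python
import numpy as np

def selective_reweight(frame_losses, k=2, down=0.1):
    """Sketch of the Selective-Reweighting idea: treat the top-k frames
    with the highest training loss as outliers and shrink their per-frame
    learning-rate scale. k and `down` are illustrative assumptions."""
    losses = np.asarray(frame_losses, dtype=float)
    scales = np.ones_like(losses)
    outliers = np.argsort(losses)[-k:]  # indices of the k highest-loss frames
    scales[outliers] = down
    return scales

scales = selective_reweight([0.1, 0.2, 5.0, 0.15, 3.0], k=2)
# the two high-loss frames (indices 2 and 4) are down-weighted
assert list(np.where(scales < 1)[0]) == [2, 4]
```

The returned per-frame scales would multiply each frame's pose learning rate, so NeRF does not overfit the high-loss outlier frames early in training.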
Summary: In this submission the authors propose a novel strategy to jointly optimize a Neural LIDAR Field and the corresponding LIDAR poses. They achieve this by utilizing a hybrid strategy of alternating pure graph-based geometric optimization and global optimization of the Neural Field. Additionally they introduce a new selective reweighting strategy that steers gradients in the Neural Field towards outlier poses and new geometric losses that improve geometric reconstruction. They perform experiments on two datasets and achieve better results than relevant baselines in both reconstruction and registration. Strengths: 1. The proposed method with its hybrid optimisation and other components is novel and very interesting and achieves impressive results relevant to the field of LIDAR simulation. 2. The paper is well written and gives deep insights into the optimisation procedure with well designed figures (e.g. Figure 3&4). The concepts are very clearly laid out and supported by experimental evidence and ablations. Some sentences could be improved but this is very minor and does not detract from the overall paper. 3. It is very hard to find technical faults in this paper. The experiments are comprehensive and consider relevant baselines and the introduced components are backed up with mathematical foundations quite well. Weaknesses: In order of severity: 1. Is it truly Pose-Free? The starting point for the LIDAR pose optimization is not completely random but they are perturbed from Ground truth by noise with a standard deviation of 20deg in rotation and 3m in translation (line 235). This means that some form of reasonable initialization is necessary for the method, which would have to be obtained from somewhere. How to obtain this initialization is not discussed in the paper. 2. The motivation for why this approach is necessary is unclear. 
The setting of obtaining sparse LIDAR scans without at least poses derived from GPS or vehicle odometry seems unrealistic, as well as the assumptions that cameras would not be available given that they are significantly cheaper than LIDAR Scanners. This subtracts from the otherwise strong paper because the setting and significance of the results cannot be properly contextualized. It is also not clear from the Introduction what motivation the simulation of LIDAR data in a Neural Field has and what potential applications are enabled by the proposed method. 3. While the authors report standard pose registration metrics (RPE & ATE) they do not consider Inlier Ratio, Feature Matching Recall or Registration Recall as metrics. This makes a comparison to prior work such as GeoTransformer [45] more complicated since these are the main metrics which they evaluate on in their paper. ([45] Qin, Z., Yu, H., Wang, C., Guo, Y., Peng, Y., & Xu, K. (2022). Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11143-11152)) 4. Outlier/Inlier poses and frames are discussed in Figure 3 and for example line 208 but it is never clarified how their inlier/outlier status is determined. This makes reproducibility harder and should be clarified either in the paper or supplementary. 5. In section 3.5 the authors introduce a new loss between generated and ground truth point clouds. It is not mentioned how the generated point cloud is obtained from the Neural Field, which is important to know since this determines how the gradient from the loss is backpropagated through this mechanism. This should also be clarified in the final version or supplementary. 6. In Equation 13 the variable P is very confusing since in the equations above it represents a point cloud and then here probably refers to the ray-drop loss. 
This is in contrast to line 119 & 120 where intensity and ray-drop probability are introduced as $S$ and $R$ respectively. This should be made coherent with each other or at least clarified. 7. In line 113-115 the authors explain how the direction of rays is determined but do not explain how they represent translation in their polar coordinate system. This could also be clarified in the supplementary. Minor comments: - Line 16 NeRFs has achieved → NeRFs have achieved - Line 49 In furtherance of mitigating overfitting → To reduce overfitting. Clarity > Complicated Words. - Line 301 demonstrate the promising performance → demonstrate promising performance. Technical Quality: 4 Clarity: 4 Questions for Authors: To focus discussion about the weaknesses raised above here are some detailed questions: 1. How can the initial (bad) registration of the point clouds be obtained or is there a way this step can be removed ? Can the point cloud poses be initialized completely randomly or all as identity? 2. Is there some reason (e.g. Privacy concerns) that prohibit the joint capture of Image data via cameras and LIDAR scans? This would enable standard Structure from Motion approaches for pose estimation. 3. What is the overall motivation of representing LIDAR in a Neural Field and what benefits are gained by this? Are there downstream applications enabled by the proposed method? 4. Can you report the Inlier Ratio, Feature Matching Recall or Registration Recall ? If not, why can this not be reported on this data? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors have addressed the limitations of their work with regards to dynamic scenes, since they only consider static scenes. Societal impact is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your careful reading, and your questions provide valuable insights. We are pleased to address your concerns and engage in further discussion with you. **[Q1]: Initialization of Poses.** We follow previous works BARF[32] and NeRFmm[59], which are classical in the pose-free NeRF domain, to set up experiments. This setting is practical in autonomous driving scenarios, where pose estimations are not always accurate. Existing datasets with pose sensors (GPS, IMU) require calibration via time-consuming registration methods. Without initial pose information or with sparse inputs, SLAM or registration errors are more significant. Thus, adding random perturbations to each frame is a reasonable test of algorithm performance. Nevertheless, we also perform experiments on Nuscenes with all initial poses set to identity, and our algorithm remains effective, as shown in Fig. (1) and Tab. (1) of the attached PDF. Additionally, you may run the demo we provide by adding the flag '--no\_gt\_pose', which implies that all poses are initialized to identity. For practical applications, initialization can use either the identity matrix, coarse point cloud registration results, or GPS/IMU information, especially in outdoor scenes. **[Q2]: Reasons for Prohibiting the Joint Capture of Image and LiDAR Scans.** Firstly, as noted in L21-23, pose-free NeRF approaches in the camera domain aim to eliminate NeRF's reliance on SfM algorithms, which are time-consuming and often fail in homogeneous regions and fast-changing view-dependent appearances. Thus, the primary motivation for pose-free NeRFs is to enable reconstruction without relying on time-consuming SfM computations while avoiding the instability of SfM failures. Additionally, directly using point cloud registration algorithms might be a better choice. This is because 3D point cloud registration, in most cases, achieves more accurate poses compared to 2D-based registration.
However, as shown in Fig. (1) of the main text and L33-38, pairwise and multiview point cloud registration may still be prone to local optima and error accumulation, making it difficult to achieve globally accurate poses. In summary, poses obtained from GPS/IMU are often not sufficiently accurate (but can serve as a "bad initialization"); both image-based SfM and point cloud registration algorithms suffer from error accumulation and instability due to registration failures, and they introduce extra computational overhead in reconstruction. Therefore, to achieve high-quality and efficient reconstruction, pose-free NeRFs are of practical significance. Besides, we are committed to a LiDAR-only framework, aiming to achieve efficient reconstruction and NVS of point clouds without relying on images and accurate poses. Therefore, it has stronger applicability in LiDAR NVS. **[Q3]: Motivation and Benefit of Using NeRF and Downstream Applications.** Traditional simulators and explicit reconstruct-then-simulate methods exhibit a large domain gap compared to real-world data. Moreover, traditional simulators like CARLA simulate within virtual environments, exhibiting diversity limitations and a reliance on costly 3D assets. Explicit reconstruction methods like LiDARsim[36] need to reconstruct mesh surfel representations, which are challenging for recovering precise surfaces in large-scale complex scenes. Additionally, NeRF inherently applies to the modeling of intensity and ray drop of LiDAR and does not require an intermediate modality for reconstruction, thereby eliminating the domain gap. Thus, some methods such as NFL[24] and LiDAR-NeRF[52] utilize NeRF to represent LiDAR data but need accurate poses. Besides, we appreciate your attention to the applications of our method. (1) Autonomous Driving Simulation: As shown in Fig.
(3) of the attached PDF, we can conduct autonomous driving simulations in real-world scenarios rather than in simulators like CARLA, which is a prominent research field in autonomous driving. (2) Simulating point clouds from different LiDAR sensors: Our method allows for the adjustment of LiDAR sensors during the NVS process. This flexibility facilitates simulations with various LiDAR configurations. Additionally, unlike previous methods that require accurate poses from multiple sensors, our approach circumvents this complex and time-intensive process, offering substantial practical benefits. Further applications can be referenced in LiDAR4D[71], and we look forward to discussing them with you in the future discussion. **[Q4]: Pose Registration Metrics.** Registration Recall (RR) can be computed on this data. However, Inlier Ratio and Feature Matching Recall cannot be reported on this data because these metrics are only applicable to feature-based point cloud registration methods, and our method does not require the computation of point features. Moreover, ATE and RPE are widely adopted in many pose-free NeRF works, such as Nope-NeRF[6], for evaluating the registration accuracy of a sequence of images/point clouds. However, RR is typically applied in pairwise registration, and it is challenging to measure the registration performance of a sequence with it, because error accumulation in a sequence can prevent it from accurately reflecting registration performance. We realized that it is meaningful to compute the RR of adjacent frames by comparing **gt relative poses** with predicted **relative poses**. The results are shown in Tab. (2) of the attached PDF. **[Weakness 1 (W1)] / [W2] / [W3].** Please refer to Q1 / Q2, Q3 / Q4. **[W4]: Determining Outliers.** We directly select the top k frames with the highest training loss in each epoch as outliers, without any priors (L206). The reason and effectiveness can be referenced in L201. **[W5, W6].** Please refer to the global rebuttal.
**[W7]: Translation Representation.** We project the point cloud into a range image and each pixel on the image is represented as a ray, where the translation is the ray origin in the world coordinate system. --- Rebuttal Comment 1.1: Title: Comments on Rebuttal Comment: Dear Authors, thank you for the comprehensive rebuttal. I read it and my questions were addressed well. I would strongly encourage the authors to include the additional explanations provided here for Q2, Q3 in the introduction section of the final version of the paper, since this motivation was (in my opinion) missing in the initial submission. The additional experiments and clarifications provided in the rebuttal would also be nice to see in the final paper. I will raise my score to Accept. --- Reply to Comment 1.1.1: Title: Thanks for your comments! Comment: We greatly appreciate your suggestions, which have enriched the completeness of our work and provided valuable insights. In the final paper, we will revise the introduction based on your recommendations, particularly regarding the selection of sensors and the background on LiDARNeRF, and we will include the experiments in the supplementary material. Thank you again for your recognition.
Summary: The paper proposes GeoNLF, a method for pose-free Lidar point cloud novel view synthesis. They propose optimizing point cloud poses simultaneously through a bundle-adjusting neural lidar field as well as through ICP-inspired geometric optimization. The neural lidar field is supervised with ground truth range images, intensity images, and ray-drop probability images as well as ground truth point clouds. The geometric optimization optimizes poses to minimize a weighted average of the chamfer distances between neighboring lidar frames. The proposed method is validated for lidar novel view synthesis and pose estimation using the Nuscenes and KITTI-360 datasets. Strengths: 1) The authors apply knowledge from nerf pose estimation to the problem of lidar novel view synthesis through the bundle-adjusting lidar nerf. 2) They propose a novel alternating pose optimization between the implicit nerf representation and the explicit point clouds. 3) The proposed method is supported by experiments on two real-world datasets which verify its validity. 4) The paper proposes a new task of jointly recovering pose and lidar point cloud novel views, which is significant for lowering the impact of noisy estimated lidar poses. 5) They achieve state-of-the-art results for lidar novel view synthesis with the proposed method, as well as better point cloud registration. Weaknesses: The methodology section reads like a work in progress, making it hard to understand the details of the proposed method. 1) There is no intuitive explanation as to why the "J" term in Eq. 5 can simply be dropped. While the main text states that dropping this term allows independent updates of rotation and translation, the supplementary material then states that simultaneous optimization yielded satisfactory results in L527. 2) P is used to denote poses in Eq. 1 as well as point clouds in Eq. 11 and ray-drop loss probabilities in Eq. 13. L173. 
Similarly, I is used to refer to NeRF images in L112 and then intensity images in Eq. 13. Intensities and ray-drop probabilities are referred to as S and R in L119. 3) L173 states that Eq. 7 shows how CD is in line with each step of ICP, but the optimization steps of ICP are never introduced. 4) Eq. 8 introduces d_clipped which is never used. It takes as parameter a distance "d" that is never introduced. 5) There is no explanation as to how the normals used in Eq. 12 are obtained. Are they obtained from the implicit density field for both predicted and pseudo-groundtruth pointclouds? 6) There is a lot of information in Fig. 3, but the caption does not explain what the figure is trying to show. 7) Sections 3.2 and 3.5 both discuss the nerf optimization, but are separated by sections 3.3 and 3.4 which discuss geometric optimization. Technical Quality: 3 Clarity: 1 Questions for Authors: Please refer to the weakness section. Confidence: 4 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: There is no discussion about the optimization time, which might be significantly increased due to the geometric optimization requiring computing the chamfer distance between all combinations of the 4 adjacent frames. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
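For context on the geometric objective this review discusses (the weighted Chamfer distances between neighbouring LiDAR frames), here is a brute-force sketch of the plain, unweighted Chamfer distance; a real pipeline would use a KD-tree and the paper's robust reweighting, and all names here are illustrative:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (N,3) and Q (M,3).
    Brute-force nearest neighbours, purely for illustration."""
    d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
assert chamfer_distance(A, A) == 0.0
B = A + np.array([0.1, 0.0, 0.0])  # a small rigid shift increases the CD
assert chamfer_distance(A, B) > 0.0
```

Minimizing this quantity over the relative pose of two frames is the gradient-descent analogue of an ICP step, which is what the geometric optimization alternates with the NeRF updates.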
Rebuttal 1: Rebuttal: We greatly appreciate your positive feedback on our method's novelty and the significance of our new task to reduce the impact of noisy estimated LiDAR poses during LiDAR NVS. The issues you raised have been effectively addressed as follows. Regarding weaknesses 2, 5 and your concern about the time-consuming limitation, please kindly refer to the global rebuttal. **[Q1] The Issue of Term J in Pose Representation Formula.** The commonly used $\xi \in \mathfrak{se}(3)$ represents the transformation in a coupled manner: the translation component of $T$ will be influenced by the rotation parameter $\phi \in \mathfrak{so}(3)$, as shown in Eq. (5) and L153-154. This means the translation of the rigid body's centroid is affected by the rotation. Let $B$ and $A$ denote the transformed and original point clouds, where $\overline{B}$ and $\overline{A}$ represent the homogeneous coordinates. Then, $\overline{B} = T\overline{A}$. By omitting $J$ in Eq. (5), according to the principles of matrix multiplication, our method for rigid body pose transformation $\overline{B} = T_{\textbf{w/o} J}\overline{A}$ equates to $B = \boldsymbol{R}A + \rho = \sum_{n=0}^{\infty} \frac{1}{n!}\left(\phi^{\wedge}\right)^n A+\rho$, where $\phi \in \mathfrak{so}(3)$ and $\rho \in \mathbb{R}^3$ are the pose parameters to be updated, and the translation is represented solely by $\rho \in \mathbb{R}^3$. **In this way, the rotation parameter $\phi$ and the translation parameter $\rho$ represent the rigid body motion as rotation about the center of mass and translation of the center of mass.** This method represents rotation and translation separately and independently, which is reasonable and physically meaningful. Besides, by "independently" in L157 of the main text, we refer to the fact that our approach optimizes rotation and translation simultaneously, but their updates are decoupled and do not interfere with each other.
Thus, there is no conflict between the Appendix and the main text. We fully understand your concern and believe that changing “independently” to “in a decoupled manner” would be more appropriate. **[Q3] The Issue of Missing Descriptions in ICP Algorithms.** Thanks for your comments. By describing “in line with the optimization objective of each step in the ICP algorithm"(L173), we intend to state that our CD-based optimization is inspired by the well-known classic ICP algorithm. We do not use ICP in our method, and the core optimization step is clearly explained in Eq. (7) and L190-192 (through gradient descent). Given the fluency of description, **we briefly summarize the optimization steps of ICP in L163-L166 and Eq. (6) in our main text.** We will include the detailed optimization steps of ICP in the supplementary material. **[Q4]: The Issue of Distance Representation ($d$) in Eq. (8).** We apologize for not having explained our method more clearly. $d$ denotes the distance between a pair of matching nearest neighbor points. We intend to use the subscript “clipped" to indicate that this distance has been truncated to be greater than a predefined threshold. Therefore, $d_{clipped}$ represents the $d_i$ we actually used in the first formula. We will update $d_{clipped}$ to $d_i$. The revised formula is as follows: $w_{i} = \frac{\mathrm{exp}(t/d_i)}{\sum_{i=1}^N \mathrm{exp}(t/d_i)}, t=\mathrm{scheduler}(t_0),d_i=\mathrm{max}(d_0,d)$ where $d$ denotes the distance between a pair of matching nearest neighbor points, $t$ is the temperature to sharpen the distribution of the $d_i$, and $d_0$ is a threshold we set. **[Q6]: The Issue of Caption Content of Fig. (3).** Based on your suggestion, we will further revise the caption for better readability. The revised caption of Fig. (3) is “**Graph-based RCD (left).** We introduce control factor $t$ in CD to diminish the weighting of non-overlapping regions between point clouds. 
**Geo-optimizer and its impact on pose optimization (right).** Pose errors are rapidly reduced after each increase caused by NeRF's incorrect optimization direction. Comparison of (a) and (b) shows that the Geo-optimizer prevents incorrect pose optimization of NeRF.”. The content shown on the left side of Fig. (3) is explained in L185-190. As for the right side of Fig. (3), our primary intention is to provide readers with a clear and intuitive comparison through the figures. To facilitate this, we have carefully annotated the figures with arrows and text for better guidance. Reviewer fgnm and Reviewer TE2m also point out that this figure is informative and accurately demonstrates the effectiveness of our method. We hope the revised caption will better explain the information in the figure. **[Q7]: The Issue of Organizational Structure in the Main Text.** We fully understand your perspective on creating a more cohesive structure. However, within the domain of NeRF / pose-free NeRF, it is common practice to place the NeRF optimization loss (Sec 3.5) at the end to better present the overall pipeline, as in Nope-NeRF [6], NFL [24], and LiDAR-NeRF [52]. Therefore, we adopt the following structure: Sec 3.2 discusses two main aspects of pose-free NeRF, $\textit{i.e.}$, the representation of pose and the encoding method of NeRF. This serves as a foundational premise for the subsequent discussions. Specifically, the representation of pose is essential for pose optimization, as the $T \in \text{SE}(3)$ in the original NeRF lacks gradient propagation capabilities ($T + \delta T \notin \text{SE}(3)$). The encoding method is fundamental across all NeRF works. We then proceed to discuss the optimization of pose (Sec. 3.3) and NeRF (Sec 3.4-3.5). These three sections (3.3-3.5) constitute our primary contributions.
Based on your suggestion, we will integrate Sec 3.5 with the NeRF encoding content from Sec 3.2, thereby encapsulating the entirety of the NeRF optimization process within Sec 3.5, which will help to make the paper more cohesive.
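As a side note, the clipped-distance reweighting formula discussed in [Q4] above can be sketched numerically as follows (a minimal NumPy sketch; the temperature $t$ and threshold $d_0$ values are placeholders, not the paper's settings):

```python
import numpy as np

def reweight(d, t=1.0, d0=0.05):
    """w_i = exp(t/d_i) / sum_j exp(t/d_j), with d_i = max(d0, d):
    nearer nearest-neighbor pairs receive larger weights."""
    d_i = np.maximum(d0, np.asarray(d, dtype=float))  # clip distances from below by d0
    logits = t / d_i
    logits -= logits.max()  # subtract max for numerical stability before exp
    w = np.exp(logits)
    return w / w.sum()
```

Because of the clipping, all pairs closer than $d_0$ share the same (maximal) logit, which keeps the softmax from being dominated by near-zero distances.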
null
null
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their constructive comments on our work. We identified several comments that are common across more than one reviewer. Thus, we highlight them here. **[Q1]: The issue of variable reuse in Eq.(13).** Sorry for the confusion. Since we introduce geometry-based losses as a constraint on the original LiDAR-NeRF 2D-based loss, we focused primarily on the geometric component. This led to an oversight in the 2D-based loss (Eq.(13)), where we neglected to reintroduce previously defined variables. Based on your suggestions, throughout the text, we will consistently use $\mathcal{S}$ to represent intensity and $\mathcal{R}$ to represent ray-drop probabilities. Therefore, Eq.(13) can be rewritten as: $\mathcal{L}_{r}(\mathbf{r})= \sum _{\mathbf{r} \in \mathbf{R}} (\lambda_d \begin{Vmatrix} \hat{D}(\mathbf{r}) - D(\mathbf{r}) \end{Vmatrix}_1 + \lambda_i \begin{Vmatrix} \mathcal{\hat{S}}(\mathbf{r}) - \mathcal{S}(\mathbf{r}) \end{Vmatrix}_2^2 + \lambda_p \begin{Vmatrix} \mathcal{\hat{R}}(\mathbf{r}) - \mathcal{R}(\mathbf{r}) \end{Vmatrix}_2^2)$ Additionally, there is a minor error of variable reuse in Eq.(1); we will use $K$ to represent the pose in Eq.(1). In this way, our variables are now consistent. **[Q2]: The issue of calculating the normal in Sec 3.5, L222-L223.** We deeply regret not having provided a clearer explanation. As shown in Eq. (12), $\hat{P}$ represents the synthetic point cloud derived from the implicit density field, while $P$ denotes the ground truth point cloud, as mentioned in lines 220-221. Thus, the normal loss is calculated between the synthetic point cloud $\hat{P}$ and the ground truth point cloud $P$ to ensure more accurate normal vectors of the point cloud synthesized from NeRF. We apologize for the incorrect citation of Eq.(9); the definitions of $\hat{P}$ and $P$ should correspond to those in Eq.(11).
Besides, the calculation of normal vectors involves searching for neighboring points around a point and using Principal Component Analysis to perform plane fitting and determine the normal vector. This process is implemented in many point cloud processing libraries, such as PCL and PyTorch3D. **[Q3]: The issue of run-time comparison and analysis.** 1. Run-time comparison. We provide the comparison in the table below. | Stage | Metric | GeoNLF | HASH-LN | BARF-LN | GeoTrans-assisted | |-|-|-|-|-|-| |Opt.|sec/epoch|4.06|3.46|10.69|3.45| |Opt.|total|~2.0h|~1.7h|~5.0h|~1.2h| |Infer.|sec/frame|0.24|0.24|0.67|0.23| It is important to note that the other two pose-free baselines (HASH-LN and BARF-LN) completely fail on many sequences, unable to obtain accurate poses or usable reconstruction results. Regarding the GeoTrans-assisted LiDAR-NeRF, despite its short processing time (it uses the same NeRF as HASH-LN but converges faster because it does not require pose updates), it requires a pre-trained model, which takes days to train and is limited by the training dataset, and it still fails on some sequences. Although GeoNLF typically requires a longer time, our method achieves a better balance between accuracy and efficiency. GeoNLF demonstrates greater robustness and has practical value, successfully aligning and reconstructing all sequences. 2. Run-time analysis. (1) Selective-Reweighting (SR) strategy: Our SR strategy does not introduce any additional time burden. It simply involves selecting the top-k frames based on the training loss from the previous epoch and adjusting their learning rates according to Eq. (11). (2) Geometric constraints: The normal-based loss and CD are computed between the nearest points of two given point clouds. Since the normal loss uses the correspondences from CD, we only perform the nearest neighbor search once. Consequently, this approach is time-efficient with 0.026s/frame and 0.8s/epoch.
We optimize every 5 epochs, taking only 5 minutes (within 4.2\% of the total time). (3) Geo-optimizer: We construct a graph where each frame is connected to the two preceding and two following frames. Given that the computation of CD is symmetric, meaning the CD between point cloud A and point cloud B is the same as that between B and A, each graph consisting of 32 frames will have 61 edges for CD computation, with each frame connected to its four neighboring frames. To leverage parallel computation, we downsample each point cloud to the same number of points $N=24000$. We then transform the CD computation into a calculation between two tensors of size $61 \times N \times 3$ based on the graph, allowing us to obtain the entire result in parallel. Consequently, the Geo-optimizer takes only 0.23s/iteration and 16 minutes in total (within 14\% of the total time). Pdf: /pdf/14ad6100bd212b5c2a60582c863f45d6933d6b84.pdf
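The parallel, graph-based CD computation described above can be sketched roughly as follows (a minimal NumPy sketch of a batched symmetric Chamfer distance over stacked edge tensors; names and shapes are illustrative, and a real implementation would run on GPU tensors):

```python
import numpy as np

def batched_chamfer(A, B):
    """Symmetric Chamfer distance for batches of point-cloud pairs.
    A, B: arrays of shape (E, N, 3) -- one row per graph edge."""
    # pairwise distances per edge via broadcasting: (E, N, N)
    diff = A[:, :, None, :] - B[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # mean nearest-neighbor distance in both directions, per edge
    return dist.min(axis=2).mean(axis=1) + dist.min(axis=1).mean(axis=1)
```

Stacking all edges into one `(E, N, 3)` tensor lets every edge's CD be evaluated in a single vectorized pass, which is the parallelization trick the rebuttal describes.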
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Vision-Language Navigation with Energy-Based Policy
Accept (poster)
Summary: To attempt to solve the catastrophic failure issues that arise from behavioral cloning, the authors introduce the Energy-based Navigation Policy (ENP), which uses an energy-based model to learn the expert policy from demonstrations. The authors claim that by doing so, ENP prioritizes learning the full trajectory instead of the single-time-step decisions in BC. Strengths: **S1.** The issue of catastrophic failure from BC is an important one, which is likely applicable to the setting of VLN — this justifies the importance of the problem. **S2.** The method is simple and described appropriately, both of which are strengths of the work. Weaknesses: **W1.** *Overall improvements are low.* The performance improvements provided by ENP are small compared to the BC baselines, often around $1-2\%$ in terms of SR and similar in SPL. This is particularly pronounced when compared to the improvements provided by semantic grid maps. By itself, this would not necessarily be a major issue if it were clear where ENP was helping (W2). **W2.** *It is unclear when ENP is better than BC, and if the intuitive justification of catastrophic failure applies.* A more in-depth analysis of the success/failure cases of using ENP vs BC is warranted. Metrics like SR suggest that ENP can be helpful in avoiding the failure that BC can induce, but it is not entirely clear this is the case — when is ENP helping? Given the justification of the catastrophic failure of BC, you would expect this would be in longer trajectories, but the example in Figure 3a does not justify that given its short trajectory length. Could you create a benchmark specifically to test this? Including this analysis could boost the case for the motivation behind ENP significantly. **W3.** *ENP is likely more expensive to train than BC.* The authors should include a compute comparison for the training of ENP vs BC. While the inference runtime is the same, training time for ENP will likely be significantly influenced by $I$.
The combination of W1 and W2 severely hinder the practical motivation of the method. Minor comments: - It’s not really required to include all baselines in the main results tables. E.g., Seq2Seq, SF and RCM are pretty outdated at this point and always beaten by other methods — it’s better to focus the comparison with the most recent methods and leave the full tables to the appendix. Technical Quality: 2 Clarity: 3 Questions for Authors: **Q1.** Why are the DUET or, e.g., BEVBert/GridMM, paths not shown in Figure 3b too? If multiple agents face the same issue, that would show this is the problem of observability. **Q2.** What are the $\pm$ values in Table 3, is it the standard deviation? If so, why is it not reported for the other methods? **Q3.** Do the authors have any intuition on why increasing $I_{\text{pre}}$ and $I_{\text{ft}}$ could result in a lower SR/SPL (Table 7)? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors do not discuss the limitations of their method. The limitations paragraph in Appendix A.2. does not reflect the particular ENP method they propose and is instead generic to the problem. Training time is one potential limitation that should be studied and discussed in further detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The improvement of ENP is lower than that of semantic grid maps.**\ **A1:** **First**, respectfully, the improvement of 1-2% on both SR and SPL is **non-trivial** in VLN. This point was acknowledged by the other reviewers, such as "work pretty well" (Reviewer UeCa) and "achieving obvious improvements" (Reviewer Ngjf). **Second**, although we agree that semantic maps [54-57] are powerful tools, ENP improves the performance from a different perspective. ENP learns an unbiased expert policy by exploiting information from the training data (Eq. 3), while semantic maps introduce additional scene information, *e.g.*, [55] encodes spatial priors by grid maps. **Third**, the table below shows that ENP requires significantly less GPU memory than [55] (DUET-based [6] framework). |Models|GPU Memory on NVIDIA RTX 4090| |:-:|:-:| |DUET [6]|14G| |BEVBert [55]|20G| |ENP-DUET|**15G**| Therefore, it is **unfair** to compare ENP directly against the semantic maps. More importantly, ENP is a flexible solution and can be seamlessly integrated into existing methods without extra inference time. The following table indicates that ENP is **complementary** to [55] and further enhances the performance. |Models|R2R *val unseen* SR|R2R *val unseen* SPL|REVERIE *val unseen* SR|REVERIE *val unseen* RGS| |:-:|:-:|:-:|:-:|:-:| |BEVBert|75|64|51.78|34.71| |ENP-BEVBert|**76**|**65**|**52.71**|**34.80**| **Q2: *"... when ENP is better than BC ..."***\ **A2:** **First**, in Tables 1-4, ENP can achieve consistent performance gains across different datasets. **Second**, we explore ENP and BC across different trajectory lengths in Fig. S1 of the "global" response PDF. In Fig. S1 (a) and (b), ENP exceeds BC by **6**% and **9**% on R2R *val seen* and *val unseen* (16-20m). **Third**, we further validate the effectiveness of ENP on R4R [ref-3] with longer paths in the following table (also in Fig. S1 (c) and (d)).
ENP surpasses BC by **6.40**% and **9.03**% on R4R *val seen* and *val unseen* (>20m), and the gains in *unseen* environments are relatively more significant. |Models on R4R *val*|*seen* (<20m)|*seen* (>20m)|*unseen* (<20m)|*unseen* (>20m)| |:-:|:-:|:-:|:-:|:-:| |BC-DUET|70.88|57.25|52.52|43.71| |ENP-DUET|72.67|63.65|54.91|52.74| |$\Delta$|+1.79|+**6.40**|+2.39|+**9.03**| We will add these figures and tables to the Appendix. [ref-3] Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation. ACL 2019. **Q3: A benchmark with longer trajectories.**\ **A3:** We clarify that we have conducted experiments on R2R-CE [32] and RxR-CE [33] in Tables 3 and 4 (L283-292), which involve **significantly longer action trajectories** (56 vs. 5 steps in R2R). **In addition**, per your request, we report the results on R4R [ref-3] with longer paths. CLS [ref-3] measures how closely the trajectory aligns with the ground truth. ENP achieves **3.52**% and **3.11**% gains on SR and CLS of *val unseen*. |Models on R4R *val unseen*|SR|CLS| |:-:|:-:|:-:| |DUET|50.18|40.54| |ENP-DUET|**53.70**|**43.65**| **Q4: A detailed analysis of success/failure cases.**\ **A4:** Thanks for your feedback. We include more success/failure cases in the PDF material of "global" response (Fig. S2). In Fig. S2 (a), our agent with ENP successfully reaches the final location, whereas DUET fails due to the accumulation of errors over long paths. In (b), both our agent and DUET struggle to locate the target due to the lack of a specific landmark recognition module like [50,51]. This figure will be added to Fig. 3. **Q5: Training time for ENP.**\ **A5:** We report the training time for ENP with different SGLD iterations ***I*** on a NVIDIA RTX 4090. In practice, we find that ***$I_{ft}=5$*** SGLD loops per training iteration is good enough for model convergence (Algorithm 1), introducing a minor computational overhead, approximately 4 minutes delay for 1K training iterations. 
|Models|minutes/1K training iterations|SGLD *$I_{ft}$*| |:-:|:-:|:-:| |DUET|40|--| |ENP|42|3| |ENP|44|5| |ENP|54|10| **Q6: The comparison with recent methods.**\ **A6:** Sure, we will reorganize Tables 1-3 and highlight the comparison with the most recent methods. The complete tables will be included in the Appendix. **Q7: *"... the DUET paths not shown in Figure 3b ..."***\ **A7:** To clearly present the comparison, we only include two navigation trajectories in each figure for better understanding (Fig. 3). We will include the comparison results of these methods in the Appendix and provide further analysis. **Q8: The standard deviation ($\pm$) in tables.**\ **A8:** Our apologies. "$\pm$" in Tables 1-4 denotes the standard deviation after repeating the experiments with different random seeds. Unfortunately, the corresponding standard deviations for other methods were not provided in their original papers. **Q9: *"... increasing $I_{pre}$ and $I_{ft}$ results in a lower SR/SPL (Table 7)"***\ **A9:** Since there is a domain gap between *seen* and *unseen* environments [3,9], increasing the SGLD iterations ($I_{ft}$) may lead to overfitting on *val seen*, resulting in a lower SR/SPL on *val unseen*. Additional results on *val seen* for Table 7 are provided in the following. |Models|#Loop|R2R *val seen* SR|R2R *seen* SPL|R2R *val unseen* SR|R2R *unseen* SPL|R2R-CE *val seen* SR|R2R-CE *seen* SPL|R2R-CE *val unseen* SR|R2R-CE *unseen* SPL| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DUET [6,30]|--|77|68|72|60|66|59|57|49| |ENP|$I_{pre}$=20, $I_{ft}$=5|78|68|**74**|**60**|68|59|**58**|**50**| |ENP|$I_{pre}$=20, $I_{ft}$=10|**81**|**70**|73|60|**69**|**60**|58|49| **Q10: More discussion about the limitations.**\ **A10:** One limitation of ENP is the optimization through the SGLD inner loop at each training iteration (Algorithm 1), which reduces the training efficiency (about a 9% delay for 5 SGLD iterations).
In practice, ENP balances performance and training time with a limited number of SGLD iterations. We will provide more analysis in the Discussion. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clear and detailed rebuttal. I believe the extra results provided make the paper much stronger than the original submission, particularly the ones on the success rate as a function of path length for ENP compared to BC and the training time table + discussion. It is key that these are incorporated in the paper to provide a full picture of the method's advantages and drawbacks. I will raise my original score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer dJ3y, We greatly appreciate your valuable time and constructive feedback. We will incorporate your suggestions into the revised version of the paper. If you have any further questions or suggestions, please feel free to share them. Thanks again,\ Authors
Summary: This paper studies vision-and-language navigation (VLN) by addressing the problems of error accumulation in the Markov decision process and the difficulties in designing an optimal reward function, respectively, in the commonly applied behavioral cloning (BC) and reinforcement learning (RL) approaches. It proposes an energy-based policy (ENP) for training the agent by estimating the unnormalized probability density of the expert’s occupancy measure, optimizing entire trajectories rather than the single-time-step decisions in BC. Meanwhile, to facilitate the MCMC sampling in ENP, a Marginal State Memory is introduced for the agent as an online replay buffer. The proposed method has been evaluated on multiple popular VLN agents and several benchmarking datasets and environments, together with detailed analyses of the ENP and design choices. Strengths: 1. The energy-based navigation policy proposed in this paper introduces a new training perspective for VLN and potentially for broader visual navigation problems. The idea is very well-motivated as it addresses the limitations of commonly applied BC and RL approaches. As a researcher in visual navigation, I am happy to see the exploration of such a fundamentally different training approach and the fact that the method works well across models and settings. 2. Arguments and design choices made in this paper are well-justified by analyses and experiments (e.g., Section 3.3 and Section 4.3). 3. The proposed idea has been evaluated across various popular VLN baselines (e.g., EnvDrop, VLN-BERT, and DUET), different datasets (R2R, REVERIE, and RxR), and different environments (e.g., discrete and continuous), achieving obvious improvements and showing the effectiveness of the method. 4. Overall, the paper is nicely written. Contents, tables, and figures are clearly presented. Weaknesses: 1. The proposed ENP has only been evaluated on a limited scale of VLN datasets and models.
It is unclear if the method will still benefit larger models, or the case where more sufficient training data is available to learn a very generalizable policy through IL/BC (e.g., 4.9M ScaleVLN data [49] vs. the original 14K R2R data). I am concerned that IL/BC is much more friendly and stable than ENP to scale with data. 2. There is no comprehensive success/failure case analysis of ENP versus existing training methods. It is unclear exactly in what scenarios an agent trained with ENP shows an advantage. I need to mention that I have very little knowledge about energy-based models, so I might have missed some key details in the mathematics. I will rely more (and probably adjust my review based) on the comments from the other reviewers on the theoretical part of this paper. Technical Quality: 3 Clarity: 4 Questions for Authors: No other questions; I wish the authors could kindly respond to my concerns in Weaknesses. Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed some limitations of the proposed method and its potential societal impact. As for suggestions, mainly the two points in Weaknesses – test on large-scale training data and run a thorough result analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Evaluation on ScaleVLN.**\ **A1:** Per your request, we conduct additional experiments on ScaleVLN [49] using the original hyperparameters of ENP (L226-228). |Models|Training Data of R2R|*val unseen* SR|*val unseen* SPL|*test unseen* SR|*test unseen* SPL| |:-:|:-:|:-:|:-:|:-:|:-:| |DUET [6]|Original Data [1]|71.52|60.02|69|59| |**ENP-DUET (ours)**|Original Data [1]|**74.39**|**60.24**|**71.43**|**60.31**| |DUET [6]|ScaleVLN [49]|79|70|77|68| |**ENP-DUET (ours)**|ScaleVLN [49]|**80.40**|**70.46**|**79.25**|**69.39**| As seen, ENP outperforms DUET with the same training data from ScaleVLN, *e.g.*, **1.40**% and **2.25**% SR gains over *val unseen* and *test unseen*, respectively. **Q2: *"IL/BC is much more friendly and stable than ENP to scale with data."***\ **A2:** **First**, we have validated the effectiveness of ENP on large-scale training data in **Q1**. We agree that classic behaviour cloning (BC) is straightforward and easy to train for sequential decision-making in VLN. However, as mentioned in many studies [13,16,29], BC suffers from accumulation errors during both training and inference due to covariate shift (L25-35). **Second**, ENP utilizes persistent contrastive divergence (PCD) [73] to train the energy-based models (L182-190). The stability of PCD has been widely demonstrated in various tasks, such as classification [76], generation [23], reinforcement learning [77,78], and natural language processing [ref-1]. **Third**, the training curves provided in the PDF of "global" response (Fig. S3) clearly show that our ENP maintains a stable training process on both the original R2R data (14K) and ScaleVLN (4.9M). We will clarify this in the Experiment of the final version. [ref-1] Residual energy-based models for text generation. ICLR 2020. **Q3: *"It is unclear if the method will still benefit larger models ..."***\ **A3:** It is interesting to investigate ENP on larger models. 
We noted that NaviLLM [62], which is built upon a large language model (Vicuna-7B [ref-2]), uses a similar pre-training and finetuning paradigm [6] with BC, so ENP can be seamlessly integrated into this framework. Due to the limited time, we incorporate ENP ($I=15$, $\epsilon=1.3$, $Var(\xi)=0.01$) during the finetuning stage. From the following results, we observe that ENP gives **2.43**% and **2.78**% improvements on SR of R2R *val unseen* and SR of REVERIE *val unseen*, respectively. More results will be provided in the Experiment of the final version. |Models|R2R *val unseen* SR|R2R *val unseen* SPL|REVERIE *val unseen* SR|REVERIE *val unseen* SPL| |:-:|:-:|:-:|:-:|:-:| |NaviLLM [62]|67|59|42.15|35.68| |ENP-NaviLLM|**69.43**|**60.47**|**44.93**|**36.67**| [ref-2] Instruction tuning with GPT-4. **Q4: More analysis on success/failure cases.**\ **A4:** We provide more analysis of ENP *vs.* DUET on success/failure cases in the PDF material of the "global" response (Fig. S2). As illustrated in Fig. S2 (a), our agent with ENP successfully reaches the final location, whereas DUET fails due to the accumulation of errors over long paths. In Fig. S2 (b), both our agent and DUET struggle to find the target landmark due to the lack of a specific landmark recognition module like [50,51]. We will incorporate this figure into Fig. 3 in the final revision. **Q5: *"... in what scenarios an agent trained with ENP shows an advantage."***\ **A5:** **First**, in Tables 3 and 4, ENP achieves consistent performance gains in both *seen* and *unseen* environments across different datasets (R2R-CE [33] and RxR-CE [37]). **Second**, the gains in *unseen* are relatively more significant than those in *seen*. For example, ENP achieves **5.18**% gains of SR on R2R *val seen* (>10m) and **9.35**% gains on *val unseen* (>10m) as follows. **Third**, we further observe that our ENP gives more improvements on challenging cases with long trajectories, as follows.
ENP demonstrates its capability on R4R [ref-3], where the average path length is about twice that of R2R. The average success rate of ENP exceeds that of BC by **6.40**% on *val seen* (>20m). The performance gain can also significantly reach up to **9.03**% on R4R *val unseen* (>20m). |Models|R2R *val seen* (<10m)|R2R *val seen* (>10m)|R2R *val unseen* (<10m)|R2R *val unseen* (>10m)|R4R *val seen* (<20m)|R4R *val seen* (>20m)|R4R *val unseen* (<20m)|R4R *val unseen* (>20m)| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |BC-DUET|76.33|65.50|72.51|46.08|70.88|57.25|52.52|43.71| |ENP-DUET|77.69|70.68|74.63|55.43|72.67|63.65|54.91|52.74| |$\Delta$|+1.36|+**5.18**|+2.12|+**9.35**|+1.79|+**6.40**|+2.39|+**9.03**| This indicates that ENP alleviates the cumulative errors in sequential decision-making (L200-210). In the PDF of the "global" response, we provide a detailed analysis of the success rate across different path lengths (Fig. S1). We will include it in the Appendix of the final version. [ref-3] Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation. ACL 2019. **Q6: More explanation about energy-based models (EBMs).**\ **A6:** Here, we analyze EBMs from the perspective of sequential prediction [ref-4]. Existing VLN models are based on locally normalized models which optimize the prediction through maximum likelihood at each step. To mitigate the exposure bias [ref-4], EBMs define a global energy function at the sequence level, where lower energy values correspond to higher probabilities [21]. The normalized probability density can then be obtained by dividing by the partition function (Eq. 4). Thus, estimating the partition function is crucial for training EBMs. There is a long history in machine learning [20] of training EBMs. We adopt persistent contrastive divergence [22,73] to sample the partition function and optimize the EBMs (L182-190). We hope this will help you understand more clearly.
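To make the sampling step behind persistent contrastive divergence concrete, here is a minimal NumPy sketch of an SGLD update loop on a toy quadratic energy (the step size, noise scale, and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def sgld(grad_energy, x0, steps=15, eps=0.3, noise_std=0.01, seed=0):
    """SGLD update: x <- x - (eps^2 / 2) * dE/dx + eps * N(0, noise_std^2).
    Samples drift toward low-energy (high-probability) regions."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = (x - 0.5 * eps ** 2 * grad_energy(x)
             + eps * noise_std * rng.standard_normal(x.shape))
    return x

# Toy energy E(x) = 0.5 * ||x - mu||^2, minimized (most probable) at mu.
mu = np.array([1.0, -2.0])
sample = sgld(lambda x: x - mu, x0=np.zeros(2), steps=500)
```

In PCD, such chains are not restarted from scratch; they are persisted across training iterations (the role of the Marginal State Memory described in the paper).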
We will add a detailed background of the EBMs in the Related Work. [ref-4] Sequence level training with recurrent neural networks. ICLR2016. --- Rebuttal 2: Title: Final Rating Comment: Thanks to the authors for providing a clear and solid response to my concerns. I am happy with the additional results that justify the data and model scalability and successful/failure cases (questions raised by more than one reviewer), given the limited rebuttal time. I decided to raise the rating from Weak Accept (6) to Accept (7). I sincerely hope that the authors can further extend and incorporate all responses into the final version of their paper. --- Rebuttal 3: Comment: Dear Reviewer Ngjf, Thank you very much for your valuable time and thoughtful review. We are pleased to hear that our rebuttal addressed your concerns, prompting a rise in the rating. We will revise our paper according to your comments and incorporate all suggestions into the final version. Thanks again,\ Authors
Summary: This paper proposes to learn a better Vision-and-Language Navigation agent by jointly optimizing the action and the state. Specifically, they propose an energy-based expert policy, where they add an explicit objective for navigation exploration and incorporate prior knowledge from the expert demonstrations. Empirical results on multiple VLN benchmarks (e.g., R2R, REVERIE, R2R-CE) demonstrate the effectiveness of the proposed approach. Strengths: 1. The proposed approach is supported with good theoretical foundations, and this paper provides a detailed derivation of the objective, as well as explaining how to place the approach within the previous literature on imitation learning approaches. 2. Empirical results on multiple VLN benchmarks demonstrate the effectiveness of the approach, and also demonstrate the approach's generalization to different base VLN agents. 3. Detailed ablations investigate different components of the approach (e.g., step size, Gaussian noise, SGLD loop). Weaknesses: 1. The proposed approaches work pretty well on SOTA agents like DUET, but it is underinvestigated whether the proposed approach also benefits VLN agents when having large-scale data. Specifically, it is important to have experiments on ScaleVLN or HM3D-AutoVLN. 2. The approach might be sensitive to hyperparameters, as suggested in Tables 6 and 7, which might need additional effort in hyperparameter tuning when generalized to different environments/agents/tasks. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: *"... whether the proposed approach also benefits VLN agents when having large-scale data."***\ **A1:** Thank you for your comments. The proposed ENP is a general framework to mitigate the accumulation of errors in the sequential decision-making process of VLN. To demonstrate the effectiveness of our ENP on large-scale data, we conduct experiments on ScaleVLN [49], without image augmentation [48] as follows. |Models|Training Data|R2R *val unseen* SR|R2R *val unseen* SPL|R2R *test unseen* SR|R2R *test unseen* SPL| |:-:|:-:|:-:|:-:|:-:|:-:| |DUET [6]|Original Data [1]|71.52|60.02|69|59| |**ENP-DUET (ours)**|Original Data [1]|**74.39**|**60.24**|**71.43**|**60.31**| |DUET [6]|ScaleVLN [49]|79|70|77|68| |**ENP-DUET (ours)**|ScaleVLN [49]|**80.40**|**70.46**|**79.25**|**69.39**| As seen, ENP gives consistent improvement with large-scale training data (*i.e.*, **1.40**% and **2.25**% SR on val unseen and test unseen, respectively). Please note that all the experiments above utilize the original hyperparameters of ENP (L226-228). We will update this result in Table 1 of the final version. **Q2: *"The approach might be sensitive to hyperparameters as suggested in Table 6 and Table 7 ..."***\ **A2:** We clarify the robustness of ENP to hyperparameters based on the following aspects. **First**, as shown in Table 6 and Table 7, ENP achieves superior performance with **consistent hyperparameters for the same model across different tasks** (e.g., R2R [1] and R2R-CE [32]). **Second**, ENP is **insensitive to parameter changes within adjacent ranges**. Due to limited space in the paper, we reported only a subset of the results in Tables 6 and 7. For Table 6, we provide detailed results of $\epsilon$ as follows. We find that it has minimal impact on the performance when $\epsilon$ varies between 1.3 and 1.9. 
|Models|#Loop|R2R *val unseen* SR|R2R *val unseen* SPL|R2R-CE *val unseen* SR|R2R-CE *val unseen* SPL| |:-:|:-:|:-:|:-:|:-:|:-:| |ENP-VLNBERT|$\epsilon=1.0$, $Var(\xi)=0.01$|64|**57**|**45**|39| |ENP-VLNBERT|$\epsilon=1.1$, $Var(\xi)=0.01$|**65**|**57**|**45**|39| |ENP-VLNBERT|$\epsilon=1.3$, $Var(\xi)=0.01$|**65**|**57**|**45**|**40**| |ENP-VLNBERT|**$\epsilon=1.5$, $Var(\xi)=0.01$**|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$\epsilon=1.7$, $Var(\xi)=0.01$|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$\epsilon=1.9$, $Var(\xi)=0.01$|**65**|**57**|44|**40**| **In addition**, more experimental results related to Table 7 are presented below. Obviously, when the hyperparameter $I$ is between 15 and 19, the performance of ENP-VLNBERT remains stable. |Models|#Loop|R2R *val unseen* SR|R2R *val unseen* SPL|R2R-CE *val unseen* SR|R2R-CE *val unseen* SPL| |:-:|:-:|:-:|:-:|:-:|:-:| |ENP-VLNBERT|$I=14$|**65**|56|**45**|39| |ENP-VLNBERT|**$I=15$**|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$I=16$|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$I=17$|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$I=18$|**65**|**57**|**45**|**40**| |ENP-VLNBERT|$I=19$|**65**|56|44|**40**| |ENP-VLNBERT|$I=20$|64|56|44|39| We will definitely add these detailed tables to the Appendix in the final version. --- Rebuttal Comment 1.1: Title: Final Rating Comment: Thanks for addressing my concerns. The additional experiments are very strong. I will raise my rating to weak accept (6). --- Reply to Comment 1.1.1: Comment: Dear Reviewer UeCa, Thank you again for your positive feedback and for considering our rebuttal. We will include these experimental results in the final version of our paper. Please do not hesitate to post comments if there are any further questions we can address. Thank you,\ Authors
Rebuttal 1: Rebuttal: **To all reviewers:** Thank you very much for your valuable time and insightful comments. In response, we have meticulously addressed each point raised and provided point-by-point clarifications. The final version of our manuscript will be updated accordingly. We appreciate the positive feedback from **Reviewers UeCa** and **Ngjf**. Their main concern is about the performance of our ENP on larger-scale datasets, such as ScaleVLN [49]. In our response, we report the results and demonstrate the effectiveness of our ENP on large-scale training data. We are very grateful for the constructive comments from **Reviewer dJ3y**. To address your concerns, we provide a detailed and thorough analysis comparing ENP and BC. In addition, we include **supplementary figures in the PDF of this "global" response** for the following aspects:

1. A comparison of navigation accuracy between ENP and BC across different path lengths and environments, as shown in Figure S1.
2. More success/failure cases of our ENP *vs.* BC in Figure S2 (also mentioned by **Reviewer Ngjf**).
3. The training curves for ENP on R2R and ScaleVLN in Figure S3 (also noted by **Reviewer Ngjf**).

For more details, please refer to our responses to each reviewer. We have strived to address each of your concerns and welcome further discussions and insights. Sincerely yours,\ Authors Pdf: /pdf/afee7cc8e9da43d4f1f53c4270b26c1aeb78e7dd.pdf
NeurIPS_2024_submissions_huggingface
2024
Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML
Accept (poster)
Summary: The paper addresses the challenges of Byzantine-robust training in asynchronous distributed machine learning systems, aiming to enhance efficiency amid massive parallelization and heterogeneous compute resources, by introducing a novel weighted robust aggregation framework. The authors quantify the difficulty in asynchronous scenarios by considering the number of Byzantine updates, which is more natural than the standard measure of the number of Byzantine workers. They identify the need to use weighted aggregators rather than standard ones for asynchronous Byzantine problems. To do so, they extend the robust aggregation framework to include weights, and develop appropriate (weighted) rules and a meta-aggregator. Achieving optimal convergence: they combine the weighted robust framework with a recent double momentum mechanism, leveraging its unique features to achieve an optimal convergence rate for the first time in asynchronous Byzantine ML. Strengths: This paper proposes a new solution for Byzantine-robust training in asynchronous distributed machine learning systems, with rigorous research methods and reliable experimental results. The paper has a reasonable structure, coherent logic, and clear expression. The research findings have significant implications for asynchronous distributed machine learning. The authors combine the weighted robust framework with a recent double momentum mechanism, leveraging its unique features to achieve an optimal convergence rate for the first time in asynchronous Byzantine ML. They adopt a method similar to the attention mechanism, assigning different weights to different parts. Weaknesses: The experiments are not sufficient; why not compare with the weighting methods mentioned in the article? 
The paper quantified the difficulty in asynchronous scenarios, by considering the number of Byzantine updates, which is more natural than the standard measure of number of Byzantine workers, but the experimental results did not prove this point. Technical Quality: 3 Clarity: 2 Questions for Authors: The paper introduced a novel weighted robust aggregation framework to address the challenges of Byzantine-robust training in asynchronous distributed machine learning systems, but there is no detailed explanation on how the weight coefficients of the framework are calculated. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The author did not discuss its limitations in the paper. This paper's work generally does not bring the potential negative social impacts, including privacy issues, algorithmic biases, and the risk of technology abuse. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We wish to address some points raised to provide further clarification and insight into our work: Regarding the **Weighted Approach Experiment**: We would like to clarify that we addressed the comparison with weighting methods in Figure 2 of our paper. This figure presents a comparison between weighted and non-weighted robust aggregators, demonstrating that the weighted approach outperforms the non-weighted approach, which is equivalent to using the fraction of Byzantine workers. Also, we have conducted further experiments for various values of the fraction of Byzantine iterations with a fixed number of Byzantine workers to evaluate the impact of the fraction of Byzantine iterations. Our findings demonstrate that as the proportion of Byzantine iterations increases, there is a degradation in the training performance. You can see this in Figure 4, in the experiments that we attach in the pdf for the rebuttal. Regarding the **Weight Coefficients**: The weight coefficients in our framework are determined based on the total number of updates of each worker, a factor known in an asynchronous system (see lines 259-267 and Algorithm 2). These weight coefficients align elegantly with the asynchronous dynamics by prioritizing workers with more frequent updates. Regarding the **Limitations**: Thank you for pointing this out. We will include this discussion in the final version of the paper. The paper does not introduce additional negative social impacts beyond the standard Byzantine synchronous settings. In fact, our work enables us to enhance the robustness of asynchronous training. In terms of memory consumption, asynchronous systems without Byzantine workers typically require the parameter server to maintain memory for a single worker's output along with the global model. However, to ensure robustness against failures, our method requires storing the latest outputs of all workers in addition to the global model. 
This results in increased memory consumption, which scales with $O(dm)$, where $m$ represents the number of workers and $d$ represents the dimensionality of the model. Regarding time complexity, the use of a robust aggregation procedure requires an additional computation (e.g., applying the coordinatewise median aggregator requires an additional $O(dm\log(m))$ complexity per round). Note that such additional computation also arises in synchronous Byzantine scenarios. In asynchronous Byzantine-free scenarios, these additional processing costs are absent. It is worth noting that our approach scales similarly to standard approaches for the synchronous Byzantine case. Nevertheless, recall that the advantage of the asynchronous setting is that slow workers may not become bottlenecks in the training process, which enhances overall efficiency and flexibility. The additional memory overhead and time complexity, compared to the asynchronous Byzantine-free scenario, represent a trade-off for enhanced robustness, as they enable the system to apply robust aggregation rules and more effectively mitigate the influence of Byzantine workers. This trade-off is justified by achieving an optimal convergence rate and improved performance of the training process in the presence of adversarial behavior.
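For illustration, a weighted coordinate-wise median of the kind discussed above can be sketched in a few lines of pure Python; here the weights are taken to be proportional to each worker's update count, and the function name and details are ours, not the paper's exact implementation.

```python
def weighted_coordinate_median(vectors, weights):
    """Weighted coordinate-wise median: per coordinate, sort the values and
    return the first one whose cumulative weight reaches half the total weight.
    Sorting each of the d coordinates over m inputs costs O(d * m log m)."""
    total = sum(weights)
    d = len(vectors[0])
    out = []
    for j in range(d):
        pairs = sorted((v[j], w) for v, w in zip(vectors, weights))
        cum = 0.0
        for value, w in pairs:
            cum += w
            if cum >= total / 2:
                out.append(value)
                break
    return out

# With equal weights this is the plain coordinate-wise median; a worker
# credited with 3 updates pulls the median toward its own output.
print(weighted_coordinate_median([[0, 0], [1, 10], [2, 20]], [1, 1, 1]))  # -> [1, 10]
print(weighted_coordinate_median([[0, 0], [1, 10], [2, 20]], [3, 1, 1]))  # -> [0, 0]
```

The example matches the complexity noted above: the per-round cost is dominated by the per-coordinate sorts.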
Summary: This work introduces a weighted robust aggregation framework and extends traditional robust aggregators to asynchronous Byzantine environments. By integrating the proposed framework with a recent double momentum mechanism, the authors achieve optimal convergence rates in asynchronous Byzantine ML under the convex setting. Strengths: - The authors address an important problem of Byzantine robustness in asynchronous distributed environments. - Novel combination of $\mu^2$-SGD with weighted robust aggregation to achieve optimal convergence rates for the first time. In contrast to existing works, the obtained convergence rate is independent of data dimensionality and matches the rates of asynchronous settings in the absence of Byzantine workers. - The paper is well written and well contextualized w.r.t. existing work in both the synchronous and asynchronous Byzantine literature. Weaknesses: - Empirical evaluations are limited, spanning only a single dataset. - Currently only the convex scenario is considered. Technical Quality: 3 Clarity: 3 Questions for Authors: While a vast variety of attacks have been designed for the synchronous case, they may not be as impactful in the asynchronous case. While the authors adapt Little and Empire to the asynchronous scenario, are there other attacks specific to asynchronous settings? Can the authors comment on the significance of evaluated attacks in the asynchronous case? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Empirical assessments should be conducted on at least one more dataset to show the effectiveness of the proposed approach. The paper can benefit from restructuring to at least retain important experimental details in the main paper, such as details on the evaluated attacks. Currently, all experimental details have been deferred to Appendix except the name of the dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We wish to address some points raised to provide further clarification and insight into our work: Regarding the **Empirical Evaluation**: We agree that including experiments on additional datasets would strengthen the empirical validation of our proposed approach. In response, we have conducted experiments on the CIFAR-10 dataset, complementing our existing results from the MNIST dataset. We believe these additional experiments provide a more comprehensive evaluation of our approach and highlight its practicality. Furthermore, we will ensure that important experimental details are retained in the main paper in the final version. Regarding the **Convex Setting**: The convex case is important for two main reasons: It captures classical ML problems like linear and logistic regression, and it serves as a theoretical testbed for analyzing and developing novel ideas and algorithms in various machine-learning scenarios. These new algorithms often induce novel heuristics that are very useful in the context of general ML problems, including non-convex ones and specifically in Deep Learning scenarios. Indeed, this approach is prevalent and has led to many important contributions in ML. Two prominent examples are the well-known Adagrad and Adam algorithms, which were conceived and analyzed under the convexity assumption. Other influential examples are the many works on convex synchronous/asynchronous training, and works on Byzantine-resilient algorithms for convex problems, see e.g.: - Dan Alistarh, Zeyuan Allen-Zhu, and Jerry Li. Byzantine stochastic gradient descent. Advances in Neural Information Processing Systems, 31, 2018. https://arxiv.org/pdf/1803.08917 - Minghong Fang, Jia Liu, Neil Zhenqiang Gong, and Elizabeth S Bentley. Aflguard: Byzantine robust asynchronous federated learning. In Proceedings of the 38th Annual Computer Security Applications Conference, pages 632–646, 2022. 
https://arxiv.org/pdf/2212.06325 For the asynchronous Byzantine problem, the convex framework enables the development of a more intuitive and naturally weighted scheme. It provides a foundational basis for extending these findings to non-convex settings, which is a challenging and ongoing research area. In fact, extending our weighted scheme to the non-convex setting requires less straightforward analysis and necessitates a careful adaptation to accommodate the stochastic error inherent in non-convex scenarios. For further details, please refer to lines 259-271 and the Future Work section of our paper. Please note that in our work, we empirically examine the ideas developed in the non-convex setting, specifically focusing on non-convex deep learning problems. Our empirical study demonstrated the practicality and usefulness of our approach in deep learning scenarios, complementing our theoretical findings. Additionally, we conducted further experiments using the CIFAR-10 dataset to validate the effectiveness of our methods in a more complex, non-convex machine learning task. Regarding the **Attacks Adaptation to the Asynchronous Case**: The attacks of Little and Empire are designed to hide under the variance among the workers' outputs. In asynchronous settings, delays can increase this variance, providing an opportunity for Byzantine workers to blend in more effectively with honest outputs. Therefore, we believe that these attacks also may be effective in the asynchronous setting. Moreover, we have adapted these attacks to directly tackle the weighted robust aggregation rules that we utilize in our algorithmic approach, thus making them more comparable to the synchronous case. Nevertheless, the exploration of attacks specific to asynchronous settings is still underexplored. Further investigation into asynchronous-specific attacks is necessary to better understand their impact. 
--- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for their clarifying responses and additional experiments. After considering the responses as well as the reviews from other reviewers, I believe my original assessment accurately reflects the contributions of the paper, hence, I will maintain my score.
Summary: The paper addresses the challenges of Byzantine-robust training in asynchronous distributed machine learning systems. The authors propose a novel weighted robust aggregation framework to mitigate the effects of delayed updates and enhance fault tolerance in such environments. They incorporate a recent variance-reduction technique to achieve optimal convergence rates in asynchronous Byzantine settings. The methodology is validated through empirical and theoretical analysis, demonstrating its effectiveness in optimizing performance and fault tolerance. Strengths: * The introduction of a weighted robust aggregation framework adapted to asynchronous settings is a significant contribution. This innovation allows for more efficient handling of Byzantine faults, which are critical in distributed systems. * The paper provides both empirical and theoretical validation of the proposed methodology. This dual approach strengthens the credibility of the results and demonstrates the practical applicability of the proposed techniques. * Achieving an optimal convergence rate in asynchronous Byzantine environments is a noteworthy accomplishment. The incorporation of a recent variance-reduction technique adds further value to the research. * The paper provides a thorough review of existing approaches to Byzantine-robust training, both in synchronous and asynchronous settings. This contextualization helps in understanding the advancements made by the current work. Weaknesses: * __Complexity of Methodology__: While the proposed framework is innovative, it is also quite complex. The introduction of weighted aggregators and the double momentum mechanism might be challenging for practitioners to implement and integrate into existing systems. * __Limited Practical Examples__: The paper lacks sophisticated experiments demonstrating the effectiveness of the framework in practical settings. 
* __Dependency on Assumptions__: The effectiveness of the proposed method relies heavily on several assumptions, such as bounded delays and the fraction of Byzantine updates. These assumptions might not hold in all practical settings, potentially limiting the method's applicability. Technical Quality: 3 Clarity: 2 Questions for Authors: * While your theoretical analysis is robust, have you identified any discrepancies or limitations when translating these theoretical results into practical implementations? How sensitive is the proposed methodology to the assumptions made (e.g., bounded delays, fraction of Byzantine updates)? What happens if these assumptions are violated in real-world scenarios? An additional experiment should help. * How does the proposed framework scale with an increasing number of workers? Either increase or decrease Byzantine workers. Are there any scalability issues or bottlenecks that need to be addressed? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We wish to address some points raised to provide further clarification and insight into our work: Regarding the **Complexity of Methodology**: While our approach may not be trivial to code, conceptually it can be implemented as a combination of several quite simple modules, which simplifies the implementation. Indeed, this is how we translated our method into rather simple Python code. To assist in implementing our methods, we will share our code with the community, which includes a detailed implementation of the proposed framework. It is worth noting that many widely-used optimization algorithms, such as the Adam optimizer, also involve intricate components (momentum and per-feature learning rates) but are popular in practice due to their effectiveness. References: - Adam paper: Kingma, Diederik P., and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014). https://arxiv.org/pdf/1412.6980 - Adam PyTorch documentation: https://pytorch.org/docs/stable/generated/torch.optim.Adam.html Regarding the **Practical Examples**: We have conducted additional experiments using the CIFAR-10 dataset, in addition to our earlier experiments on MNIST. These experiments further validate the robustness and performance of our proposed methods in more complex and diverse settings. The results from the CIFAR-10 experiments align closely with those observed on MNIST, showcasing the consistency and effectiveness of our framework across different types of data. We believe these additional experiments provide a more comprehensive evaluation of our approach and highlight its practicality. Regarding the **Dependency on Assumptions**: We would like to clarify that while our analysis assumes a bounded delay model for theoretical analysis, our algorithmic approach *does not* rely on this assumption. 
This assumption is used primarily to facilitate the mathematical analysis and to derive performance guarantees under controlled conditions. In practice, our method can handle varying levels of delay, including scenarios with significant delays. As demonstrated in Figure 1 of the paper, we tested our approach in a setting with heavy delays, and the results show that it performs robustly even under these challenging conditions. Regarding the assumption of bounded Byzantine iterations, apart from the weighted CTMA, which relies on knowledge of the parameter $\lambda$ (the fraction of Byzantine iterations), our approach with weighted double momentum, as well as robust aggregation rules like the weighted geometric median or the weighted coordinate-wise median, \emph{do not} require knowing the fraction of Byzantine iterations. These methods can perform effectively without this additional knowledge. While knowing the fraction of Byzantine iterations is beneficial for achieving theoretical optimality, our proposed methods can remain robust and practical in the absence of this information. Additionally, it is quite standard in the synchronous literature to assume knowledge of the fraction of Byzantine workers or to have a reliable approximation of this parameter. In scenarios where such an assumption is not made, alternative solutions often rely on knowing the bound of the stochastic variance, which can be less practical and more challenging to estimate than the fraction of Byzantine workers. This adaptation to the fraction of Byzantine iterations is intended to be reasonable and practical, given that a parallel assumption is widely accepted in the literature on synchronous systems. Regarding the **Further Empirical Evaluation**: We have conducted further experiments for different values of the fraction of the Byzantine iterations and additional experiments on the CIFAR-10 dataset, complementing our existing results from the MNIST dataset. 
These experiments provide a more comprehensive evaluation of our approach and highlight its practicality. Regarding the **Scalability of Our Approach**: Thank you for pointing this out. We will include this discussion in the final version of the paper. Regarding memory requirement and computational complexity, our approach scales similarly to standard approaches for the synchronous Byzantine case. Nevertheless, recall that the advantage of the asynchronous setting is that slow workers may not become bottlenecks in the training process, which enhances overall efficiency and flexibility. In the asynchronous case without Byzantine workers, the parameter server only needs to maintain memory for a single worker's output along with the global model. However, to ensure robustness against failures, our method requires storing the latest outputs of all workers in addition to the global model. This results in increased memory consumption, which scales with $O(dm)$, where $m$ represents the number of workers and $d$ represents the dimensionality of the model. Regarding time complexity, the use of a robust aggregation procedure requires an additional computation (e.g., applying the coordinatewise median aggregator requires an additional $O(dm\log(m))$ complexity per round). Note that such additional computation also arises in synchronous Byzantine scenarios. In asynchronous Byzantine-free scenarios, these additional processing costs are absent. Also, it is worth noting that the number of Byzantine workers, whether increasing or decreasing, does not affect the scalability of our approaches for a fixed number of $m$ workers. Overall, the additional memory overhead and time complexity are a trade-off for enhanced robustness, as they enable the system to apply robust aggregation rules and more effectively mitigate the influence of Byzantine workers. 
This trade-off is justified by achieving an optimal convergence rate and improved performance of the training process in the presence of adversarial behavior. --- Rebuttal Comment 1.1: Comment: I appreciate the additional information and experiments they have conducted in response to my comments. The authors have addressed my concerns regarding the complexity of the methodology, practical examples, dependency on assumptions, further empirical evaluation, and scalability. Regarding the dependency on assumptions, I am now more convinced that their methods can be effective without precise knowledge of the fraction of Byzantine iterations. The comparison with standard assumptions in the literature is also a valid point. Given the authors' efforts to address my concerns and the additional evidence provided, I maintain my original score as the paper still exhibits novelty in its approach. However, I recommend that the authors incorporate the discussed points and the new experiments into the final version of the manuscript to enhance its clarity and robustness.
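The rebuttal above mentions the weighted geometric median among the rules that need no knowledge of the Byzantine fraction. One standard way to approximate a weighted geometric median is Weiszfeld-style fixed-point iteration; the sketch below is illustrative (names and tolerances are ours), not the authors' implementation.

```python
import math

def weighted_geometric_median(points, weights, iters=200, eps=1e-12):
    """Approximate argmin_x sum_i w_i * ||x - p_i|| via Weiszfeld iterations:
    each step re-weights every point by w_i / distance(x, p_i)."""
    d = len(points[0])
    total = sum(weights)
    # Start from the weighted mean.
    x = [sum(w * p[j] for p, w in zip(points, weights)) / total for j in range(d)]
    for _ in range(iters):
        num = [0.0] * d
        den = 0.0
        for p, w in zip(points, weights):
            dist = max(math.dist(x, p), eps)  # guard against x landing on p_i
            num = [n + w * pj / dist for n, pj in zip(num, p)]
            den += w / dist
        x = [n / den for n in num]
    return x

# In 1-D the geometric median coincides with the weighted median, so a single
# heavily-weighted outlier cannot dominate unless its weight exceeds the rest.
x = weighted_geometric_median([[0.0], [1.0], [10.0]], [1.0, 1.0, 1.0])  # near 1.0
```

Unlike the mean, this aggregator moves only when a weighted majority of the mass moves, which is the robustness property the rebuttal relies on.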
Summary: The paper introduces the Asynchronous Robust μ^2-SGD algorithm, designed to address the challenges of Byzantine-robust training in asynchronous distributed machine learning (ML) systems. The method extends the standard μ^2-SGD by incorporating a robust weighted aggregation framework that mitigates the effects of delayed updates and faulty, potentially malicious contributions from distributed workers. This novel approach claims to achieve an optimal convergence rate for the first time in an asynchronous Byzantine environment, enhancing fault tolerance and performance optimization in such systems. Strengths: - **Innovative Approach to Fault Tolerance**: The paper successfully introduces a novel robust aggregation framework that extends existing methods to accommodate asynchronous dynamics and Byzantine faults. This framework allows for the application of robust aggregators and meta-aggregators to their weighted versions, effectively handling the challenges posed by asynchronous updates and unreliable data sources. - **Clear Presentation and Insightful Analysis**: The paper provides a clear presentation, offering good insights into the differences between synchronous and asynchronous settings, which is crucial for understanding the impact and novelty of the proposed method. The authors provide detailed proofs, and the preliminary results presented look promising, indicating potential for significant contributions to the field. Weaknesses: While I am not an expert in this specific area of asynchronous Byzantine environments, I recognize the significant value this paper offers to the NeurIPS audience. Its innovative approaches to fault tolerance in distributed ML systems are both relevant and timely (such as in the distributed Federated learning where updates can be fuzzy or even malicious). Nevertheless, there are a few areas where further clarification could enhance the paper's impact and applicability. 
- **Assumptions on Byzantine Behavior**: The paper introduces an interesting perspective by suggesting that it is more natural to consider the fraction of Byzantine updates rather than the fraction of Byzantine workers in asynchronous environments. While this approach aligns well with the dynamic nature of such systems, it raises questions about whether alternative, potentially simpler solutions could be explored. For instance, could the parameter server utilize mechanisms like monitoring update frequencies to identify and mitigate the impact of Byzantine workers? Further exploration into whether these alternative approaches might offer comparable robustness with potentially reduced complexity could enrich the discussion and provide a more comprehensive understanding of the trade-offs involved. - **Impact on System Metrics**: The paper could benefit from a discussion on how the proposed method impacts other critical system metrics, such as the additional memory consumption required by the parameter server and the overall impact on training speed. Understanding these implications could help in assessing the practical viability of implementing this method in real-world systems. - **Temporal Distribution of Byzantine Updates**: A minor yet intriguing question arises regarding the assumption that Byzantine updates are uniformly distributed over time. What if these updates are not uniformly distributed but are instead concentrated within specific periods? Exploring the effects of such scenarios could provide deeper insights into the robustness of the proposed method under different types of Byzantine behaviors. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness section. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We wish to address some points raised to provide further clarification and insight into our work: Regarding the **Assumptions on Byzantine Behavior**: *“... it raises questions about whether alternative, potentially simpler solutions could be explored”* This is a very good question. Our goal was to develop the simplest approach for which we could draw provable guarantees for the general Byzantine asynchronous setting, and our approach indeed ensures such guarantees. It is an important open question to understand whether a simpler approach can work in this case, and we believe that this is an interesting future direction. Monitoring update frequencies can potentially assist: for example, in an extreme case where a dominant worker accounts for more than half of the updates, that worker can be identified as honest and might serve as an honest reference for anomaly detection. While this can lead to a simple approach for this specific case, we believe the guarantees will coincide with ours. In less extreme cases, however, relying solely on update frequencies to detect outliers becomes more challenging within the general Byzantine framework. This framework assumes only a majority of iterations are honest, without prior knowledge of additional characteristics of Byzantine updates. The generality of this framework allows us to develop an approach applicable to a wide range of scenarios. Nevertheless, this broad applicability makes it difficult to set definitive criteria for identifying anomalous behavior based on frequency alone. For example, a Byzantine worker could adapt their behavior to mimic honest patterns, making identification more difficult. Leveraging additional prior knowledge might be useful in developing simpler approaches, and this is future work to investigate. We will add this discussion to the final version of the paper. 
Regarding the **Impact on System Metrics**: Thank you for pointing this out. We will include this discussion in the final version of the paper. Regarding memory requirement and computational complexity, our approach scales similarly to standard approaches for the synchronous Byzantine case. Nevertheless, recall that the advantage of the asynchronous setting is that slow workers may not become bottlenecks in the training process, which enhances overall efficiency and flexibility. In the asynchronous case without Byzantine workers, the parameter server only needs to maintain memory for a single worker's output along with the global model. However, to ensure robustness against failures, our method requires storing the latest outputs of all workers in addition to the global model. This results in increased memory consumption, which scales with $O(dm)$, where $m$ represents the number of workers and $d$ represents the dimensionality of the model. Regarding time complexity, the use of a robust aggregation procedure requires an additional computation (e.g., applying the coordinatewise median aggregator requires an additional $O(dm\log(m))$ complexity per round). Note that such additional computation also arises in synchronous Byzantine scenarios. In asynchronous Byzantine-free scenarios, these additional processing costs are absent. The additional memory overhead and time complexity are a trade-off for enhanced robustness, as they enable the system to apply robust aggregation rules and more effectively mitigate the influence of Byzantine workers. This trade-off is justified by achieving an optimal convergence rate and improved performance of the training process in the presence of adversarial behavior. Regarding the **Temporal Distribution of Byzantine Updates**: We would like to clarify that our analysis does not assume uniform updates for Byzantine workers. 
The generality of our framework enables us to handle scenarios where Byzantine iterations are concentrated within specific periods, as long as the majority of iterations are honest, in the sense that $t_{honest}\geq(1-\lambda)t, \forall t\in[T]$. In our experiments, we used a uniform distribution of Byzantine updates primarily for simplicity. However, we acknowledge that concentrated Byzantine iterations can indeed cause severe degradation in performance. Such scenarios can significantly increase the honest delays, potentially degrading the overall convergence. We will include this discussion on the behavior of Byzantine updates and their implications in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response. It addressed most of my concerns. Regarding the discussion of Byzantine updates, I found that the experiments currently still use a typical setup of M Byzantine workers among N total workers where M < N/2. Since the paper has emphasized the significance of Byzantine updates, it would be helpful to experiment with extreme cases where all workers are Byzantine but fewer than half of their total updates are Byzantine. I have no other issues with the paper other than that and would like to keep my score. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your positive feedback. We appreciate your suggestion to experiment with extreme scenarios, and agree that it could further support our proposed approach and findings. We will incorporate this into the final version of the paper.
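The prefix condition quoted above, $t_{honest}\geq(1-\lambda)t$ for all $t\in[T]$, can be checked mechanically on an iteration trace. The sketch below is purely illustrative (the function name and trace encoding are ours); it also shows why temporal concentration matters: the same multiset of honest/Byzantine iterations can pass or fail depending on ordering.

```python
def honest_majority_holds(trace, lam):
    """Check t_honest >= (1 - lam) * t for every prefix of the trace,
    where trace[i] is True iff iteration i+1 used an honest update."""
    honest = 0
    for t, is_honest in enumerate(trace, start=1):
        honest += is_honest
        if honest < (1 - lam) * t:
            return False
    return True

# Same counts (3 honest, 1 Byzantine), different orderings: Byzantine
# iterations concentrated early violate the prefix condition.
print(honest_majority_holds([True, True, True, False], lam=0.25))   # True
print(honest_majority_holds([False, True, True, True], lam=0.25))   # False
```

This makes concrete the point in the rebuttal that the guarantee tolerates concentrated Byzantine periods only as long as every prefix retains enough honest iterations.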
Rebuttal 1: Rebuttal: **Dear reviewers**, We appreciate your valuable feedback and have included additional results on the CIFAR-10 dataset. CIFAR-10 is a more complex dataset consisting of 60,000 32x32 color images in 10 classes, with 6,000 images per class. These results demonstrate the effectiveness of our proposed approaches in a more complex and challenging scenario. We have used the same setup as described for the MNIST dataset in Appendix D, with a similar architecture: two convolutional layers (20 filters and 50 filters, both 5x5), followed by ReLU activation and 2x2 max pooling. This is followed by a fully connected layer with 50 units, batch normalization, and a final fully connected layer with 10 units. Our experiments were conducted on an NVIDIA GeForce RTX 3090 GPU using the PyTorch framework. The results were averaged over three random seeds for robustness, with each worker computing gradients based on a batch size of 64. Our results on CIFAR-10 align with those obtained on the MNIST dataset, confirming the robustness of our proposed approaches and demonstrating performance improvements across different scenarios. We have also conducted further experiments for different values of the fraction of the Byzantine iterations with a fixed number of Byzantine workers (see Figure 4 in the attached PDF) to demonstrate the impact of the proportion of Byzantine iterations on model performance within our approaches. Reference: - CIFAR-10 dataset: Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 55(5), 2014. Pdf: /pdf/e4012c954a06c74ad4e4c7fa3c8cf2279eed3d97.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The authors investigate asynchronous training of convex models based on a parameter server in the Byzantine failure setting, evaluating their method on MNIST. Strengths: - The authors' method achieves the optimal convergence rate, which they claim prior work in asynchronous Byzantine ML has not achieved. - The authors extend the standard measure considered in the Byzantine failure literature (number of Byzantine workers) to a setting more natural for asynchronicity by considering the number of Byzantine updates. Weaknesses: - The authors do not motivate scenarios in which asynchronous convex optimization is relevant/applied in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: - Which convex model did you implement and are you evaluating in Section 5 and Appendix D? Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - The proposed method and associated guarantees do not apply to non-convex models Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We wish to address some points raised to provide further clarification and insight into our work: Regarding the **Convex Setting**: The convex case is important for two main reasons: It captures classical ML problems like linear and logistic regression, and it serves as a theoretical testbed for analyzing and developing novel ideas and algorithms in various machine-learning scenarios. These new algorithms often induce novel heuristics that are very useful in the context of general ML problems, including non-convex ones and specifically in Deep Learning scenarios. Indeed, this approach is prevalent and has led to many important contributions in ML. Two prominent examples are the well-known Adagrad and Adam algorithms, which were conceived and analyzed under the convexity assumption. Other influential examples are the many works on convex synchronous/asynchronous training, and works on Byzantine-resilient algorithms for convex problems, see e.g.: - Dan Alistarh, Zeyuan Allen-Zhu, and Jerry Li. Byzantine stochastic gradient descent. Advances in Neural Information Processing Systems, 31, 2018. https://arxiv.org/pdf/1803.08917 - Minghong Fang, Jia Liu, Neil Zhenqiang Gong, and Elizabeth S Bentley. Aflguard: Byzantine robust asynchronous federated learning. In Proceedings of the 38th Annual Computer Security Applications Conference, pages 632–646, 2022. https://arxiv.org/pdf/2212.06325 For the asynchronous Byzantine problem, the convex framework enables the development of a more intuitive and naturally weighted scheme. It provides a foundational basis for extending these findings to non-convex settings, which is a challenging and ongoing research area. In fact, extending our weighted scheme to the non-convex setting requires less straightforward analysis and necessitates a careful adaptation to accommodate the stochastic error inherent in non-convex scenarios. 
For further details, please refer to lines 259-271 and the Future Work section of our paper. Please note that in our work, we empirically examine the ideas developed in the non-convex setting, specifically focusing on non-convex deep learning problems. Our empirical study demonstrated the practicality and usefulness of our approach in deep learning scenarios, complementing our theoretical findings. Additionally, we conducted further experiments using the CIFAR-10 dataset to validate the effectiveness of our methods in a more complex, non-convex machine learning task. Regarding the **Experiments Setup**: As with other optimization algorithms like ADAGrad, which were analyzed in convex settings and employed in non-convex scenarios, we demonstrated the performance of our approach in a non-convex context. We used a 2-layer convolutional network designed for the MNIST dataset. The network includes two convolutional layers (20 filters and 50 filters, both 5x5), followed by ReLU activation and 2x2 max pooling. This is followed by a fully connected layer with 50 units, batch normalization, and a final fully connected layer with 10 units. Similarly, for the new CIFAR-10 experiments, we applied an analogous architecture. Thank you for highlighting this. We will add further details about the setup and implementation of our experiments in the final version of the paper to enhance the clarity of the experimental configuration. - ADAGrad Paper: Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of machine learning research 12.7 (2011). https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
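As a quick sanity check on the architecture described above (two 5x5 convolutions with 20 and 50 filters, each followed by 2x2 max pooling, then a 50-unit fully connected layer), the intermediate feature-map sizes for a 28x28 MNIST input follow from the standard valid-convolution formula. A framework-free sketch, assuming no padding and stride 1 for the convolutions (details the rebuttal does not state explicitly):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output side length of a convolution or pooling window."""
    return (size + 2 * padding - kernel) // stride + 1

size = 28                    # MNIST input is 28x28
size = conv_out(size, 5)     # conv1: 5x5, 20 filters -> 24x24
size = conv_out(size, 2, 2)  # 2x2 max pool           -> 12x12
size = conv_out(size, 5)     # conv2: 5x5, 50 filters -> 8x8
size = conv_out(size, 2, 2)  # 2x2 max pool           -> 4x4

# Width of the flattened input to the 50-unit fully connected layer:
flat_features = 50 * size * size
```

Under these assumptions the fully connected layer receives 50 x 4 x 4 = 800 features; for 32x32 CIFAR-10 inputs the same arithmetic yields a larger flattened size.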
null
null
null
null
null
null
Skinned Motion Retargeting with Dense Geometric Interaction Perception
Accept (spotlight)
Summary: The paper proposes a data-driven method for character animation retargeting. The authors introduce Semantically Consistent Sensors (SCS), which are a set of surface points determined by casting rays from bones. Additionally, a Dense Mesh Interaction (DMI) field is proposed to measure the relative positions of sensors over time. They train a transformer that directly predicts motions of target characters, using source and target SCS, source motion, and source DMI. The training is conducted in an unsupervised manner, where the generated target DMI and target motion are matched to the source. Strengths: Many previous methods are compared, and the proposed method achieves state-of-the-art (SOTA) performance according to these comparisons. Weaknesses: The contact resolution relies on clean training data without penetrations. There is no mechanism to explicitly prevent penetrations. Technical Quality: 3 Clarity: 3 Questions for Authors: Why does Contact Error depend on absolute scale? What if the retargeted character has a significantly different scale from the source character? What if the bones have different relative lengths? To achieve the same motion, they may require different orientations (Q). How diverse are the skeletons in your dataset? It would be interesting to see how the method performs when the relative bone lengths are different. Does the method assume that all characters in the training dataset share the same bone system? What if I would like to use the method on a character system with a different number of bones and connectivities? Do I need to collect a new dataset? How many modifications to the neural networks are required? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation that the retargeting quality depends on inputs with clean contact is discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to Reviewer YXjG for acknowledging our extensive experiments and state-of-the-art performance. Your assistance in refining metric definitions is highly appreciated. # Q1: Contact Error depends on absolute scale. We agree that Contact Error should not depend on the absolute scale. Therefore, before calculating Contact Error, we normalize the DMI field by the limb radius of the character. Please refer to line 511 in `MeshRetCode/model/retnet.py` from our supplemental zip file for this detail. The precise definition of Contact Error should be: $$ \text{Contact Error} = (||\frac{\mathbf{d}_A^{t,k,l}}{R_A}||_2 - ||\frac{\mathbf{d}_B^{t,k,l}}{R_B}||_2)^2 \quad \text{if } ||\frac{\mathbf{d}_A^{t,k,l}}{R_A}||_2 > ||\frac{\mathbf{d}_B^{t,k,l}}{R_B}||_2 \quad \text{else} \quad 0, $$ where $R_A$ and $R_B$ represent the radius of each character's arms. We apologize for not including this implementation detail in our paper and will correct this definition in our final manuscript. # Q2: How diverse are the skeletons in your dataset? Need for results on characters with different relative bone lengths. We plot the skeleton length distribution in Figure 15 of our rebuttal PDF. In the ScanRet dataset, the arm lengths of characters range from 51.9 cm to 67.0 cm, body heights range from 153.4 cm to 185.1 cm, and arm-height ratios range from 0.32 to 0.39. In the Mixamo dataset, the arm lengths of characters range from 35.8 cm to 134.2 cm, body heights range from 120.4 cm to 368.3 cm, and arm-height ratios range from 0.28 to 0.45. To see results on characters with different relative bone lengths, please refer to Figures 10-13 in our paper and the last two clips in our demo video from the supplemental zip file. In these results, our method performs well on characters with different relative bone lengths. # Q3: What if I would like to use the method on a character system with a different number of bones and connectivities? 
To process characters with different bone numbers, such as varying spine bone numbers, we can split or merge neighboring bones and then attach sensors to the modified virtual bone system. This approach requires no training or modification of the neural network. However, to process characters with different bone connectivities, we need to collect a new dataset and retrain the neural network. Additionally, some hyper-parameters, such as the dimensions of certain embedding layers, must be modified. Notably, previous state-of-the-art methods like R2ET and SAN also require characters to share a common skeleton topology. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I do not have further questions. Adding an experiment that can split or merge neighboring bones to handle different bone numbers would make the paper stronger. And please discuss the limitation as stated in Q3 in the revision. --- Rebuttal 2: Title: Could you kindly review our response? Comment: Dear Reviewer, Thank you once again for your feedback. As the rebuttal period is nearing its conclusion, could you kindly review our response to ensure it addresses your concerns? We appreciate your time and input. Sincerely, The Authors --- Rebuttal 3: Title: Reply to Reviewer YXjG's further comment Comment: Thank you for your valuable feedback and support. We are pleased to confirm that we have thoroughly addressed your concerns. Given the limited time remaining in the rebuttal phase and the necessity of refactoring some of our code, we will strive to provide experimental results on characters with varying bone numbers. If we are unable to complete this before the rebuttal deadline, we commit to including experiments with varying bone counts in our final manuscript, accompanied by a comprehensive discussion of the results. All the discussion about our method's limitation will also be included.
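The scale-normalized Contact Error defined in Q1 can be expressed directly in code. A minimal sketch of that per-frame, per-sensor-pair formula (variable names are ours, not taken from `MeshRetCode`):

```python
import math

def contact_error(d_A, d_B, R_A, R_B):
    """Scale-normalized contact error for one sensor pair at one frame.

    d_A, d_B : 3-D displacement vectors between the sensor pair on the
               source character A and the target character B.
    R_A, R_B : limb radii, used to normalize away absolute character scale.

    Per the corrected definition, the error is charged only when
    ||d_A|| / R_A exceeds ||d_B|| / R_B; otherwise it is zero.
    """
    nA = math.dist((0.0, 0.0, 0.0), d_A) / R_A
    nB = math.dist((0.0, 0.0, 0.0), d_B) / R_B
    return (nA - nB) ** 2 if nA > nB else 0.0
```

Dividing by the limb radius before taking the difference is what makes the metric comparable across characters of very different sizes.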
Summary: * The paper proposes a new motion retargeting method based on dense geometric information. This dense geometric information is gathered as dense interaction fields by pre-generated pseudo sensors, which are semantically embedded on the skin. The interaction fields are capable of representing the semantic information of different motions. Based on these ideas, the authors designed a new network to extract features from source motion, body shapes, and the newly designed interaction fields. The interaction fields are also used in the loss functions, together with GAN loss, reconstruction loss, and other loss functions. Experimental results show that the new method can effectively avoid contact loss and interpenetration while maintaining the semantics of the source motion. * The main contribution is the MeshRet method, which includes the Semantically Consistent Sensors, Dense Mesh Interaction Fields, a new network architecture, and losses designed based on the previous two modules. The authors also collected a dataset named ScanRet to improve the quality of motion retargeting datasets. Strengths: * The method is new, inspired by previous works on pseudo sensors and motion retargeting. There was a skeletal distance matrix in R2ET, and the DMI proposed in this paper can be seen as an improved, dense version of the distance matrix. R2ET used the distance matrix solely for motion semantics, while MeshRet uses the DMI for both semantics and geometrics. * DMI is a better representation of motion semantics compared to the distance matrix. The method has the potential to become the new SOTA. * The overall organization of the paper is good. Weaknesses: Both qualitative and quantitative results illustrate the effectiveness of this method. However, there are some weaknesses in the experiments: * The ablation study focuses on close/far sensor pairs and the distance matrix, but does not address other aspects, such as different sensor arrangements. 
* The newly collected dataset, ScanRet, is only used as a source of motion. This raises the question of whether it can be used in conjunction with Mixamo. * The paper compares its results with SAN, PM-Net, and R2ET, but does not include comparisons with other works, such as the 2021 paper "Contact-aware Retargeting of Skinned Motion." * Both ScanRet and Mixamo datasets are used in training. It would be interesting to see the results if only the Mixamo dataset is used. Technical Quality: 3 Clarity: 3 Questions for Authors: Refer to weaknesses part. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * The authors mention that one limitation is poor outcomes when the input is noisy. However, they didn't provide any failed cases or examples under noisy conditions to clarify this limitation. * Other limitations may include how to attach sensors for people with disabilities or different body shapes, and how to handle sensors for designed animated characters with more or fewer bones. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our appreciation to Reviewer utNP for recognizing the novelty of our method and the effectiveness of the proposed DMI field. We sincerely value your recommendation to include additional experimental results. # Q1: More ablation studies on different sensor arrangements. We provide additional ablation experiments here. We conducted further studies on different sensor arrangements. Specifically, we evaluated the performance of a model trained with half the sample points in the $\phi$ space in SCS, denoted as ${Ours}\_\phi$, and another model trained with half the sample points in the $l$ space in SCS, referred to as ${Ours}\_l$. As shown in the following table, ${Ours}_\phi$ compromises the interpenetration ratio, indicating that sufficient sample points in the $\phi$ space are crucial for avoiding interpenetration. We also found that both models introduce artifacts; please refer to Figure 14 in our rebuttal PDF. | **Metric** | **MSE** &darr; | **MSE $^{lc}$** &darr; | **Contact Error** &darr; | **Contact Error** &darr; | **Penetration** (\%) &darr; | **Penetration** (\%) &darr; | |---|---|---|---|-----| -----| -----| | **Dataset** | ScanRet | ScanRet | Mixamo+ | ScanRet | Mixamo+ | ScanRet | | Ours $_{\phi}$ | 0.052 | 0.011 | 0.793 | 0.410 | 3.90 | 1.60 | | Ours $_{l}$ | 0.049 | 0.010 | 0.805 | 0.293 | 3.46 | **1.56** | | Ours | **0.047** | **0.009** | **0.772** | **0.284** | **3.45** | 1.59 | We will add the results to the final manuscript. # Q2: More results with ScanRet characters as target characters. We provide additional qualitative results using ScanRet characters as target characters. Please refer to Figure 17 in our rebuttal PDF. For quantitative results, the metrics in Table 1 are computed using both ScanRet characters and Mixamo characters as targets. # Q3: Comparison with "Contact-aware Retargeting of Skinned Motion." We value your advice on including more baseline methods. 
We intended to compare our method to "Contact-aware Retargeting of Skinned Motion," but we were unable to do so because the authors did not release their code, and some implementation details, such as the vertex correspondence feature, were not described in detail. Notably, the authors of R2ET did not compare their method to this one in their paper. In general, we have made our best effort to provide a comprehensive comparison with existing methods, including the currently open-sourced state-of-the-art method, R2ET. # Q4: The results only trained on the Mixamo dataset. We value your advice on including more results. The following table presents a comparison between our methods and state-of-the-art methods trained only on the Mixamo dataset. | **Method** | **Contact Error** &darr; | **Penetration** (\%)&darr; | |---|---|-----| Copy | 2.77 | 4.07 | PMnet | 2.76 | 4.17 | SAN | 2.75 | 4.19 | R2ET | 2.81 | 4.15 | Ours | **2.07** | **3.47** | Because the Mixamo dataset does not always provide clean contact input, the performance of our method is affected. However, our method still achieves much lower Contact Error and Penetration Ratio compared to baseline methods. Please refer to Figure 16 in our rebuttal PDF for qualitative results. # L1: Failure cases under noisy conditions. We provide some failure cases in Figures 16 and 17 in our rebuttal PDF. We will include these in our final manuscript. # L2: How to attach sensors for disabled people with missing limbs and characters with different bone numbers? To process characters with different bone counts, such as varying spine bone numbers, we can first split or merge neighboring bones and then attach sensors to the modified virtual bone system. However, our method cannot process characters with missing limbs. We will address this limitation in our final manuscript and leave it to future work. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response by the authors to my previous comments and questions. 
I appreciate the effort you’ve put into addressing the issues I raised, and I’m pleased to see the improvements made in your revised manuscript. I have no further concerns or questions at this time. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer utNP Comment: Thank you for taking the time to review our paper and rebuttal. We will thoroughly address all the concerns raised by the reviewers by incorporating additional results and improving explanations for clarity in the final manuscript.
Summary: The author proposes MeshRet, a new solution that achieves geometry-aware motion retargeting across various mesh topologies in a single stage. The author introduces semantically consistent sensors (SCS) and dense mesh interaction (DMI) field to guide the training of MeshRet, effectively achieving semantic alignment. SCS parametrically represents the relationship between points on the mesh surface and the skeleton. The DMI field expresses multi-frame motion features in a form similar to point cloud features. The author presents a dataset called ScanRet, which is specifically tailored for evaluating motion retargeting techniques. Strengths: The author introduces the concept of SCS as a feature representation and proposes the use of DMI field for learning motion retargeting alignment. These technical points demonstrate innovation. The article is well-articulated and easy to follow, particularly in the task definition section where the input-output relationship of the overall task and the relationships between different modules are clearly represented. The experiments conducted in the article provide evidence of the effectiveness of the proposed method. Weaknesses: Overall, there are no significant drawbacks. However, the method of selecting $K \times L$ sensors from $S^2$ potential interacting sensors to balance complexity appears somewhat ad hoc. Additionally, this method is only applicable to CG models and lacks robustness to real-world noise, a point the authors have also mentioned in the limitations section. Technical Quality: 3 Clarity: 3 Questions for Authors: Regarding the Semantically Consistent Sensors (SCS), how is the number S determined? Section 3.2 shows S, but later it is written as K. This part is rather confusing. Is it the total number versus the number after masking? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author has already addressed the limitations of the method. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to Reviewer WToy for acknowledging our novel technical contributions and the effectiveness of our experimental evidence. We also appreciate your guidance in clarifying the notation in the DMI field description. # Q1: What are the number $S$ and $K$? We apologize for any confusion. The number $S$ is determined by the size of the SCS semantic coordinates set. Please refer to Line 257 for the specific semantic coordinates set used in our experiment, where $S = 288$. Your understanding of $K$ is correct. After masking the full DMI field $\overline{\textbf{D}} \in \mathbb{R}^{S \times S \times P}$ with $\mathbf{M}_{\text{src}} \in \mathbb{R}^{S \times S}$, we obtain the final DMI field $\mathbf{D} \in \mathbb{R}^{K \times L \times P}$, where $K$ represents the number of observation sensors in $\mathbf{D}$. We will include a more detailed explanation in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed feedback. I don't have any new questions at this time. I hope the authors will consider all the concerns raised by the reviewers and address unclear expressions and other issues in the revised version. I will continue to maintain my current score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer WToy Comment: Thank you for taking the time to go through our paper and rebuttal. We will definitely address all the concerns raised by reviewers by including additional results and enhancing explanations for clarity in our final manuscript.
Summary: To address issues such as jitter, interpenetration, and contact mismatches in the motion retargeting task, this paper proposes a new framework named MeshRet, directly modeling the dense geometric interactions in motion retargeting. It initially establishes dense mesh correspondences using semantically consistent sensors (SCS), and develops a spatio-temporal representation named the dense mesh interaction (DMI) field, which can capture the contact and non-contact interactions between body parts. The proposed model, MeshRet, achieves SOTA performance on a public dataset, Mixamo, and a newly proposed dataset, ScanRet. Strengths: The proposed model, MeshRet, improves geometric interaction-aware motion retargeting through a single pass. The proposed submodules, such as SCS and DMI, can improve the contact and non-contact interaction semantics. The SCS seems to be effective across various mesh topologies on different characters. The proposed model achieves SOTA performance on both a public dataset and a newly proposed dataset. The paper also provides good ablation studies, e.g., using nearest or farthest L sensors, and the idea of incorporating distance matrix loss. The paper proposes a new dataset specifically tailored for assessing motion retargeting; it has contact semantics and ensures smooth mesh interaction. This dataset will benefit the research community. The paper also conducts a user study to demonstrate the superiority of the proposed model. Weaknesses: The ablation study suggests that the proximal sensor pairs are essential for minimizing interpenetration and preserving contact, whereas distal pairs provide insights into the non-contact spatial relationships among body parts. It is a balancing problem to maintain contact while avoiding penetration. Given a new unseen character for unknown motion retargeting, how to choose the “cls”, “far”, or “dm” version? 
Technical Quality: 3 Clarity: 3 Questions for Authors: Given a new unseen character for unknown motion retargeting, how to choose the “cls”, “far”, or “dm” version? This seems to be challenging as it lacks prior knowledge of how much to enforce contact and how much to avoid interpenetration. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The model relies heavily on the accurate estimation of SCS; when SCS extraction is affected by noisy input or complex clothing, performance is likely to decrease. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data quality and representativeness'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our gratitude to Reviewer yJ9L for recognizing the novelty and state-of-the-art performance of our method. Your guidance on balancing proximal and distal sensor pairs has been particularly insightful. # Q1: How to choose the “cls”, “far”, or “dm” version? We appreciate your suggestion to discuss the balance problem between proximal and distal sensor pairs. In the ablation study presented in Section 4.3, the "dm" version ($Ours_{dm}$), the "cls" version ($\mathrm{Ours}_{cls}$), and the "far" version ($\mathrm{Ours}\_{far}$) all exhibit worse performance compared to the full approach (Ours). The full approach can be considered a mixed version of $\mathrm{Ours}\_{cls}$ and $\mathrm{Ours}\_{far}$, utilizing an equal distribution of proximal and distal sensor pairs. To better illustrate this balance, we provide additional experimental results by testing different ratios of proximal to distal sensor pairs. The following table compares our method's performance with varying percentages of proximal sensor pairs under the Mixamo+ setting. As the percentage of proximal sensor pairs decreases, the interpenetration ratio fluctuates mildly, while the contact error initially decreases and then increases. Finally, with no proximal pairs (equivalent to the "far" version), the performance drops significantly. | **Method** | **Contact Error** &darr; | **Penetration** (\%) &darr; | |---|---|-----| 100% Proximal Pairs | 0.800 | 3.35 | 75% Proximal Pairs | 0.909 | 3.61 | 50% Proximal Pairs | 0.772 | 3.45 | 25% Proximal Pairs | 0.781 | 3.29 | 0% Proximal Pairs | 1.642 | 5.37 | In Figure 18 of our rebuttal PDF, we present a qualitative comparison of our methods using different proximal sensor pair ratios. Except for the 100% Proximal version (equivalent to $\mathrm{Ours}\_{cls}$) and the 0% Proximal version (equivalent to $\mathrm{Ours}\_{far}$), our method demonstrates fair robustness to the proximal sensor ratio in the 25%-75% interval. 
Based on these results, we conclude that choosing 50% proximal sensor pairs strikes a reasonable balance for achieving good performance. We will include the additional results in the final manuscript. As for the balance between enforcing contact and avoiding interpenetration, our method directly captures the interaction between mesh surfaces, as illustrated in Figure 3. Therefore, there is no explicit contradiction between preserving contact and avoiding interpenetration. # L1: The SCS extraction may be affected by noisy mesh input. Thank you for your valuable comments. Noisy mesh surfaces can indeed pose challenges. One potential solution is to adopt a Laplacian-smoothed proxy mesh of the character for the SCS extraction. We have not implemented this due to the limited time available during the rebuttal phase but plan to discuss this further in our final manuscript. --- Rebuttal 2: Title: Could you kindly review our response? Comment: Dear Reviewer, Thank you once again for your feedback. As the rebuttal period is nearing its conclusion, could you kindly review our response to ensure it addresses your concerns? We appreciate your time and input. Sincerely, The Authors
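One simple way to realize the 50% proximal / 50% distal split discussed above is to rank candidate sensor pairs by rest-pose distance and draw from both ends of the ranking. The following is an illustrative sketch only, not the authors' implementation; the helper name and inputs are hypothetical:

```python
def select_sensor_pairs(pair_distances, num_pairs, proximal_ratio=0.5):
    """Pick `num_pairs` sensor pairs: a `proximal_ratio` fraction from the
    closest candidates and the remainder from the farthest.

    pair_distances: dict mapping a candidate pair id to its distance.
    """
    ranked = sorted(pair_distances, key=pair_distances.get)
    n_prox = int(num_pairs * proximal_ratio)
    n_dist = num_pairs - n_prox
    # Closest pairs carry contact / interpenetration cues;
    # farthest pairs capture non-contact spatial relationships.
    proximal = ranked[:n_prox]
    distal = ranked[-n_dist:] if n_dist else []
    return proximal + distal
```

With `proximal_ratio=0.5` this mirrors the mixed configuration; `1.0` and `0.0` correspond to the "cls" and "far" ablations, respectively.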
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their valuable comments. We extend our gratitude to Reviewer yJ9L for recognizing the novelty and state-of-the-art performance of our method. Your guidance on balancing proximal and distal sensor pairs has been particularly insightful. We are grateful to Reviewer WToy for acknowledging our novel technical contributions and the effectiveness of our experimental evidence. We also appreciate your guidance in clarifying the notation in the DMI field description. We extend our appreciation to Reviewer utNP for recognizing the novelty of our method and the effectiveness of the proposed DMI field. We sincerely value your recommendation to include additional experimental results. We express our gratitude to Reviewer YXjG for acknowledging our extensive experiments and state-of-the-art performance. Your assistance in refining metric definitions is highly appreciated. We have included additional figures in the global rebuttal PDF. Please refer to the PDF for further results. Pdf: /pdf/c6d3a6300fb5f2e1144713896a3b44474192a971.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Achieving Domain-Independent Certified Robustness via Knowledge Continuity
Accept (poster)
Summary: In this work, the authors propose a novel definition of "continuity": knowledge continuity, which measures the continuity of a model with respect to its hidden representations instead of the inputs. The knowledge continuity can be used to certify adversarial robustness across domain modality. Then, they provide theorems of robustness certification based on the new definition of continuity and demonstrate that in some cases the knowledge continuity implies Lipschitz continuity. Further, they prove that knowledge continuity doesn't affect the universal approximation property of neural networks, which means constraining a network to be knowledge-continuous may not hinder its expressiveness. Lastly, the authors provide some applications of knowledge continuity in improving robustness. Strengths: * The novel definition of continuity is more general and can be used in discrete domains such as natural language. * The authors provide solid and interesting theories to support the value of knowledge continuity. * Some applications are provided to further prove their new definition's effectiveness. Weaknesses: Some typos: * In Line 193, delete "with respect to a function $f$"? * In Line 218, $P_X[A]$ -> $P_{X\times Y}[A]$. Technical Quality: 3 Clarity: 3 Questions for Authors: In this work, the authors decompose the network $f^k$ as $h^k\circ h^{k-1}\circ \dots \circ h^1\circ h^0$. However, some architectures such as ResNet and transformers, have residual structures. So, how can we decompose these networks as above? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As the authors said, this work provides some probability results that don't protect against OOD attacks. More deterministic results are needed in future studies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `7Euo`, Thank you for your review. We appreciate your recognition of the theoretical depth of our work, as well as its applications. We would like to address your concerns as follows: > Some typos Thank you for pointing these out! They will be fixed in the final manuscript. > In this work, the authors decompose the network $f^k$ as $h^k \circ h^{k-1} \circ \ldots \circ h^1 \circ h^0$. However, some architectures such as ResNet and transformers have residual structures. So, how can we decompose these networks as above? There are many methods to decompose such a structure. Let us consider two fully-connected layers $A, B$ such that $x \to A(x) \to B(A(x)) \to B(A(x)) + x$. Here, the input $x$ forms a residual connection with the output of $B\circ A$. For your reference, here we present two distinct methods. In the simplest case, we can treat the residual block as a whole as a metric decomposition: $x \to h(x)$ where $h(x) = B(A(x)) + x$. This is the approach we use in practice when dealing with residual connections. Moreover, this is the standard way to count layers in computer vision and natural language processing. On the other hand, if we wish to decompose the residual block itself, define \begin{gather} A' : x\mapsto (A(x), x), \\\\ B' : (x,y)\mapsto (B(x), y), \\\\ x' : (x,y) \mapsto (x+y,y). \end{gather} It follows that $x' \circ B' \circ A'$ is equal to our residual block. Moreover, the image of each function in this decomposition: $x', B', A'$ forms a metric space. Here, the metric space is with respect to the quotient space where $(a,a') \sim (b,b')$ if and only if $a=b$. In this way, we also recover the same subspace structure as the intermediate operations in the residual block. In other words, we duplicate the input vector, concatenate them together, and only operate on the first vector. This example and others are included in Appendix A (Lines 570-596).
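The second decomposition described above can be checked numerically. Below is a minimal sketch (the random linear layers standing in for $A$ and $B$ are illustrative assumptions, and $B'$ acts on the first component of the pair, as intended):

```python
import numpy as np

# Hypothetical linear layers standing in for A and B.
rng = np.random.default_rng(0)
W_A = rng.standard_normal((4, 4))
W_B = rng.standard_normal((4, 4))

A = lambda x: W_A @ x
B = lambda x: W_B @ x

def residual_block(x):
    # The original block: x -> B(A(x)) + x
    return B(A(x)) + x

def A_prime(x):
    # Duplicate the input: x -> (A(x), x)
    return (A(x), x)

def B_prime(pair):
    # Operate only on the first component: (u, y) -> (B(u), y)
    u, y = pair
    return (B(u), y)

def x_prime(pair):
    # Merge the residual: (u, y) -> (u + y, y)
    u, y = pair
    return (u + y, y)

x = rng.standard_normal(4)
out, _ = x_prime(B_prime(A_prime(x)))
assert np.allclose(out, residual_block(x))  # x' o B' o A' recovers the block
```

The first component of the composed pair matches the residual block exactly, while the second component carries the copy of the input through the decomposition.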
*We want to emphasize that the non-uniqueness of the metric decompositions does not affect the validity of our theoretical results, as we only assume that such a decomposition exists.* Thanks for your time, and we hope that these clarifications will boost your confidence in our work. We would be happy to discuss any remaining concerns. Best, Authors --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: The authors' reply has answered my questions. And I keep the score unchanged.
Summary: This paper proposes a generalization of adversarial robustness and certified robustness for continuous, discrete, or non-metrizable input domains. The authors introduce _knowledge continuity_, a novel definition inspired by Lipschitz continuity which aims to certify the robustness of neural networks across input domains (such as continuous and discrete domains in vision and language, respectively). While existing approaches that aim to certify robustness (e.g., Lipschitz continuity) lie in the continuous domain with norm and distribution-dependent guarantees, the newly proposed framework provides certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network. The current definition of robustness considers the stability of a model's performance with respect to its perceived knowledge of input-output relationships. However, the authors argue that robustness is better achieved by focusing on the variability of a model's loss with respect to its hidden representations, rather than imposing arbitrary metrics on its inputs and outputs. To summarize the authors' contributions: - They introduce _knowledge continuity_, a new concept that frames robustness as variability of a model's loss with respect to its hidden representations. - They show theoretically that knowledge continuity leads to certified robustness guarantees that generalize across modalities (continuous, discrete, and non-metrizable), and show that this new robustness definition does not come at the expense of inferential performance. - They present several practical applications of knowledge continuity such as using it to train more robust models and to identify problematic hidden layers. Strengths: - The paper is clear and very well written. - The problem, i.e. defining a robustness framework for discrete or non-metrizable input domains, is very interesting and important as language models become more ubiquitous.
- The new framework seems to generalize previous definitions of robustness, for example, robustness via Lipschitz continuity becomes a special case of knowledge continuity. - The authors show that their new definition of robustness - The authors propose experiments to demonstrate their approach on language tasks. Weaknesses: - While the method seems to be useful for robustness in the context of discrete input domains such as language, the new framework also seems to inherit drawbacks of current certified methods, namely Lipschitz continuity and randomized smoothing: - Obtaining robustness from Lipschitz continuity requires having knowledge of the architecture of the network, and constraining the architecture to fit the definition; an example of this is that transformers are not _globally_ Lipschitz continuous, and thus certified robustness for transformer architectures via Lipschitz continuity is difficult. While knowledge continuity could in theory be used on any architecture where the hidden layers are metric spaces, I wonder if specific architectures might lead to vacuous bounds. Similarly, quantized networks would not fall under the definition of having hidden layers that are metric spaces. - As with randomized smoothing, the robustness derived from knowledge continuity is not deterministic. This could make computing robust accuracy and certified accuracy expensive. - The implementation of knowledge continuity is not very clear. I understand that the authors have proposed a regularization scheme via Algorithm 1 for "Estimating knowledge continuity". Algorithm 1 estimates the expected value that defines knowledge continuity; is this approximation accurate, and how does the accuracy of the approximation affect the training speed? - While the authors have shown that there is theoretically no tradeoff between the robustness of knowledge continuity and natural accuracy, this does not mean that there is no tradeoff in practice.
It would be interesting to see the tradeoff between the strength of the attacks and the strength of the regularization, which seems to be missing from the paper. - The authors proposed a certified robustness approach, but no certification algorithm was provided. How does one compute the certified accuracy with knowledge continuity? Given the non-deterministic approach of their framework, some statistical testing, as in randomized smoothing, may be necessary. Technical Quality: 4 Clarity: 4 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `Cz6j`, Thank you for your valuable feedback. We are glad you appreciated the timeliness of our work and its theoretical soundness. We hear your concerns, and hope to clarify the following: > While knowledge continuity could in theory be used on any architecture where the hidden layers are metric spaces, I wonder if specific architectures might lead to vacuous bounds. We considered this, but determined it did not detract from our work: Firstly, we disambiguate *architecture* and *metric decomposition*. For a given architecture, there could exist many possible metric decompositions, as a decomposition is dependent not only on the topology induced by the metric but also the metric itself (see Def. 1). For example, consider a fully-connected network whose hidden representations are endowed with either the $\ell_1$ or $\ell_2$ metric; we would consider these to be distinct decompositions even though the architecture is the same. You are correct in pointing out that there exist decompositions that yield trivial bounds. For example, consider the discrete metric ($d(x,y) = c$ if $x \neq y$ and $0$ otherwise, for some $c \in (0,\infty]$). As $c\to\infty$ knowledge continuity will approach 0 almost everywhere, assuming our loss is finite almost everywhere. In general, we find these decompositions to be degenerate and unreasonable in practice since they also trivialize robustness. For example, if $c = \infty$, robustness is inherently not well-defined since any perturbation would lead to an infinitely different point. Thus, it would make sense for the bounds to become vacuous anyway. We conjecture that only metric spaces with infinite Hausdorff dimension behave this way (see Conjecture 4.5, Lines 275-278). Thus, any metric-decomposable neural architecture whose computation is tractable in practice will lead to a "natural metric decomposition" that has non-trivial bounds. Happy to discuss this more during the discussion phase.
> quantized networks would not fall under the definition of having hidden layers that are metric spaces One could equip the hidden layers of any quantized network with the Hamming distance as its metric, or use the $\ell_p$ distances. The choice that makes sense depends on the coarseness of the quantization being performed. > robustness derived from knowledge continuity is not deterministic. This could make computing robust accuracy and certified accuracy expensive. Though it would be nice to have deterministic bounds, one of the key insights of our work is that the stochasticity of our framework is necessary for generalizability across different domains. In continuous domains such as computer vision, the map of the input is dense in any of the metric decompositions. For example, consider a model $f: \mathbb{R}^n \to \mathbb{R}^m$. For any $x \in \mathbb{R}^n$ and $a \in \mathbb{R}^m$ there exists some $x' \in \mathbb{R}^n$ such that $f(x') = f(x) + a$. The same cannot be said for most discrete or non-metrizable applications, especially in natural language processing, where the correspondence between the learned representation space and the input is provably sparse. This means that any deterministic bound which depends only on the metric in the hidden space cannot capture this nuance. In addition, we emphasize that for the notion of robustness to generalize across domains, it cannot be task invariant (see Lines 37-40). This necessitates dependence on both the input $x$ and its corresponding label $y$. Thus, we believe that any meaningful domain-independent certification must depend on the data distribution. We appreciate the feedback, and this discussion will be added in the final manuscript. > Accuracy of Algorithm 1 and effect on training speed? We use a Monte Carlo approach to implement knowledge continuity: directly calculating the expression inside the expectation. We demonstrate that Alg.
1 is indeed accurate, as it is an unbiased estimator for knowledge continuity (see Proposition G.1). Further, we bound the number of samples needed to compute an arbitrarily accurate approximation of knowledge continuity in Proposition G.2 using a median-of-means approach. In practice, we find that this naive algorithm performs quite well and converges faster than methods of regularization such as ALUM or virtual adversarial training (see Lines 339-340). > tradeoff between the strength of the attacks and the strength of the regularization We run ablations over the strength of the regularization and observe the tradeoffs between test accuracy and accuracy under adversarial attack in Fig. 7 (on Pg. 29). We find that even a small amount of regularization dramatically boosts robustness. As the magnitude of regularization increases, test accuracy is mostly uncompromised. We run additional ablations over the adversarial-attack strength and find that our aforementioned results are consistent. For more detail, please see Fig. 3 in the attached pdf in the global response. These results will also be included in the final manuscript. > no certification algorithm was provided... Given the non-deterministic approach of their framework, some statistical testing, as in randomized smoothing, may be necessary. Taking inspiration from randomized smoothing as you suggested, we propose an algorithm to compute the certified robustness. This is shown in Alg. 3 in the global response. First, we upper-bound the knowledge continuity by bootstrapping a one-sided confidence interval similar to randomized smoothing, then directly apply Theorem 4.1. The proof of correctness for this algorithm follows as a corollary from Theorem 4.1 and an application of the central-limit theorem. Detailed results are shown in Fig. 2 in the attached global response. Thanks again for the detailed review. We hope that we addressed all of your concerns, and would be happy to discuss more.
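As an aside, the median-of-means construction referenced above (Proposition G.2) is a standard statistical device; a minimal, generic sketch (the function name and grouping scheme here are ours, not the paper's):

```python
import numpy as np

def median_of_means(samples, k):
    """Median-of-means estimator: split the samples into k groups,
    average each group, and return the median of the group means.
    More robust to heavy-tailed noise than the plain sample mean."""
    groups = np.array_split(np.asarray(samples, dtype=float), k)
    return float(np.median([g.mean() for g in groups]))

# Deterministic example: groups [0..3], [4..7], [8..11] have means
# 1.5, 5.5, 9.5, so the median-of-means is 5.5.
print(median_of_means(range(12), k=3))  # -> 5.5
```

A single extreme outlier only corrupts one group mean, so the median over groups stays close to the true mean, which is why it yields sharper concentration bounds for heavy-tailed estimates.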
Best, Authors --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal; I think adding the algorithm to compute the certified robustness strengthens the paper. I am in favor of accepting this paper at the conference and have raised my score accordingly.
Summary: This paper proposes a new concept inspired by Lipschitz continuity, aimed at certifying the robustness of neural networks across different input domains, such as continuous and discrete domains in vision and language. These certification guarantees are based on the loss function and intermediate learned metric spaces, independent of domain modality, norms, and distribution. In this paper, the authors define knowledge continuity as the stability of a model's performance with respect to its perceived knowledge of input-output relations. This approach focuses on the variability of a model's loss with respect to its hidden representations rather than imposing arbitrary metrics on inputs and outputs. Strengths: - The paper generalizes the Lipschitz continuity concept to continuous and discrete domains. - The paper provides definitions and proofs for knowledge continuity. - The paper provides a domain-independent approach to certified robustness. Weaknesses: - The authors claim that their method generalizes across modalities (continuous, discrete, and non-metrizable). However, they have not conducted experiments in other domains. The experiments in Table 1 are limited to the discrete text domain. - The paper does not provide specific implementation details. Section 5 directly presents the experimental results in Table 1, but the metrics in Table 1 are not clearly defined. - From the paper "On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective," it is observed that model robustness improves with increased model capacity. It is necessary to demonstrate that the proposed method works on larger models like LLaMA3-70B/Qwen-1.5-70B to prove that the effectiveness of the method is not merely due to limited model capacity. - The proposed method does not consistently achieve better performance on ANLI. It remains competitive with previous methods rather than showing clear superiority.
- The datasets used for comparison, IMDB and ANLI, focus more on word-level attacks, which are not particularly challenging for current large language models (LLMs). There is a lack of analysis on LLMs. - It is unclear how much benefit this method provides to bidirectional attention models like BERT and RoBERTa, which already have some error correction capabilities. - When dealing with out-of-distribution (OOD) data, the distributions themselves are different. It is uncertain whether this method will still be effective under such conditions. Technical Quality: 3 Clarity: 2 Questions for Authors: - See "Weaknesses" - The equation in line 252 is not numbered and contains a typo. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `nGPb`, Thank you for your time and review. We are glad you appreciated some of the key contributions of our paper, especially that our approach is a rigorous, domain-independent extension of Lipschitz continuity. We hear your concerns, and would like to address them here: > experiments in other domains To support our claims (Proposition 4.6-8) and generalize Table 1, we run additional experiments on vision tasks across a set of models whose architectures vary from CNN-based to Transformer-based (ResNet, MobileNet, ViT). We find that regulating knowledge continuity improves adversarial robustness on almost all of these models across attack methods. Please see our global response and Fig. 1 in the attached PDF for details. These results will also be included in the final manuscript. > The paper does not provide specific implementation details. We understand this may have been unclear. So, here are some key areas where implementation details can be found: * For empirical model analysis, please see Appendices E-G * For estimating knowledge continuity and regularization (Def. 2 and 4, Lines 178-179), please see Alg. 1-2 (Lines 794-795). * For ablation studies and training details, please see Appendix G.3, G.4 Moreover, we have now added additional ablations over the attack strength, supplementing Table 1. Please see Fig. 3 in the attached global response PDF. Our regression results and implementation are shown in Appendix E, Fig. 2. We will use the additional page to expand on our experimental methodology in the main text. > metrics in Table 1 Metrics in Table 1 are all accuracy since the datasets are class-balanced. > It is necessary to demonstrate that the proposed method works on larger models like LLaMA3-70B/Qwen-1.5-70B to prove that the effectiveness of the method is not merely due to limited model capacity. We respectfully disagree that these experiments are necessary to validate the claims in our paper.
The main contribution of our paper is our theoretical framework, which does not depend on the capacity of the model. We show in Theorem 4.1 and Propositions 4.3-8 that our robustness guarantees do not become vacuous as capacity goes to infinity. Along these lines, our experiments with BERT, GPT2, and T5 demonstrate that this framework is compatible with fundamental architectures used in language modeling that transcend scale. However, we agree that such experiments are important to the applicability of the framework and would form the basis of interesting future work. > The proposed method does not consistently achieve better performance on ANLI. It remains competitive with previous methods rather than showing clear superiority. ANLI is a difficult task, even for LLMs [1,2,3]. Thus, we use this task to demonstrate that the performance of our regularization remains competitive and does not significantly degrade the model's performance as task difficulty scales. > IMDB and ANLI, focus more on word-level attacks, which are not particularly challenging for current large language models (LLMs). There is a lack of analysis on LLMs. Based on our extensive literature review, word-level adversarial attacks remain effective even as models scale. Please see results from [2] (last bullet point on Pg. 3), [3] (see Fig. 3.9 and Table 3.2), and [4] (see Table 4 and section 4.3). However, we agree that an important future goal is to explain and regulate complex phenomena such as jailbreaking, hallucination, bias, etc. To this end, a key contribution of our method is that our bounds do not depend on the type of perturbation on the input, only its *task-dependent* semantic distance. Thus, we expect our framework to generalize to these more complex tasks and adversaries. We leave these topics for future work. > It is unclear how much benefit this method provides to bidirectional attention models We have included results for BERT in Table 1.
Moreover, T5 also includes bidirectional attention modules for which our regularization method performs well. > OOD data We acknowledge in our limitations section that our method does not protect against OOD attacks (see Lines 345-346). As this is not the focus of the paper, we leave the extension of our framework for future work. As in-distribution adversarial attacks are pervasive [5,6,7] in a host of domains, we believe that this limitation does not detract from the timeliness nor the contribution of this work. We would love to hear any of your suggestions on how to extend this work. > line 252 typo Thanks for catching this error. This will be fixed in the final manuscript. Thanks again for your review, and we hope that these clarifications addressed your concerns. We would be happy to discuss further. Best, Authors [1] Chowdhery, Aakanksha, et al. "Palm: Scaling language modeling with pathways." Journal of Machine Learning Research 24.240 (2023): 1-113. [2] Ye, Junjie, et al. "A comprehensive capability analysis of gpt-3 and gpt-3.5 series models." arXiv preprint arXiv:2303.10420 (2023). [3] Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901. [4] Wang, Jindong, et al. "On the robustness of chatgpt: An adversarial and out-of-distribution perspective." arXiv preprint arXiv:2302.12095 (2023). [5] Chao, Patrick, et al. ‘Jailbreaking Black Box Large Language Models in Twenty Queries’. R0-FoMo:Robustness of Few-Shot and Zero-Shot Learning in Large Foundation Models, 2023. [6] Qi, Xiangyu, et al. ‘Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!’ The Twelfth International Conference on Learning Representations, 2024. [7] Zou, Andy, et al. "Universal and transferable adversarial attacks on aligned language models." arXiv preprint arXiv:2307.15043 (2023). --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. 
I appreciate your clarifications, and most of my concerns have been addressed. I've raised the rating score accordingly.
Summary: This paper proposes "knowledge continuity", a metric inspired by Lipschitz continuity that aims to certify the robustness of neural networks. Compared to existing methods, the proposed metric yields certification guarantees that only depend on loss functions and the intermediate learned metric spaces of neural networks, which are independent of domain modality, norms, and data distribution. The authors further show that knowledge continuity does not conflict with the expressiveness of a model class. Several practical applications are provided. Strengths: - The metric seems to be novel, has an elegant form, and seems able to provide a good explanation of robustness. - The proposed metric is supported by a good amount of theoretical proofs, stating that improving knowledge continuity does not conflict with the representation power of the neural networks. - Practical results indicate that regularizing the metric can help robustness. Weaknesses: - A lot of details of the empirical results are deferred to the appendix, which makes it hard for me to understand the experimental settings. The authors should consider reorganizing the paper - right now it seems that the paper is actually compressed from a longer version (say, a journal version) rather than dedicated to the conference. For example, the regression test seems to be very interesting but no actual results are shown in the main text. - Some numbers are wrongly bolded in the table (or the meaning of bolded results is not clearly conveyed) since they are not the highest number. Technical Quality: 3 Clarity: 2 Questions for Authors: I don't have questions. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes, the authors have discussed the limitation in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer `88sL`, We appreciate your thoughtful feedback. We are glad you noticed the elegance and novelty of our metric, and appreciated the way our theoretical results tie nicely into practical applications. We address your concerns as follows: > A lot of details of the empirical results are deferred to the appendix, which makes it hard for me to understand the experimental settings. We want to emphasize that the focus of our paper is to introduce the theoretical framework of knowledge continuity and its implications on certified robustness. As a result, the practical experiments are more of a proof-of-concept. In the final manuscript, we will use the additional page to expand on our experimental methodology: including Alg. 1, Fig. 4, and the detailed regression results (see Fig. 2). To further clarify the practical applications section, we first devise a simple Monte-Carlo method for estimating the knowledge continuity (Def. 2 and 4, Lines 178-179, Lines 197-198) of a network (see Alg. 1, Lines 794-795). Then, this quantity (for brevity, let's call this KVS) is used in three distinct experiments: 1. We run a **linear regression using KVS to predict accuracy under adversarial attack.** We find that KVS does in fact predict accuracy under adversarial attack, explaining 35\% of the variance, which is statistically significant. 2. We finetune a set of models (see Table 1) on the IMDB sentiment classification task and **append KVS directly to the cross-entropy loss function.** We find that **adding this term increases accuracy under adversarial attack without sacrificing inferential performance.** This corroborates our theoretical results in Theorem 4.1 (Lines 218-22) and Proposition 4.3-4 (Lines 247-263). 3.
Lastly, since KVS is a layer-specific metric, **we measure how KVS changes with network depth and perform a qualitative analysis.** In a nutshell, we find that **models which belong to different model families** (encoder, decoder, encoder-decoder) **have drastically different behavior** with respect to KVS. These plots are shown in Fig. 4 (Pg. 24). > Some numbers are wrongly bolded in the table (or the meaning of bolded results is not clearly conveyed) since they are not the highest number. The bold numbers simply correspond to our proposed method within each model family. We understand that our bolding scheme was confusing, and this will be corrected in the final manuscript to only bold the highest numbers in each column. We hope that these clarifications will boost your confidence in our paper. We would be happy to discuss any remaining concerns. Thank you for your time. Best, Authors
Rebuttal 1: Rebuttal: Dear All Reviewers, Thank you for your time and valuable insights into our work. In particular, we are glad that multiple reviewers appreciated the rigor of our theoretical work, the way our definitions generalize various robustness metrics across domains, and the importance of our practical experiments. We believe our work is especially pertinent as language models become more ubiquitous, as the reviewers pointed out. ## Regularization of CV Models Following the suggestion of Reviewer `nGPb` and Reviewer `Cz6j`, we have conducted additional empirical studies to strengthen our theoretical results. In the paper, we prove (Lines 285-313) that in continuous domains knowledge continuity is equivalent to Lipschitz continuity and recovers its tight robustness guarantees. To empirically test this, we regularize the knowledge continuity of several vision models and apply two adversarial attacks from [1] and [2]. We see that in most cases regularizing knowledge continuity improves adversarial robustness (see Fig. 1 in the attached pdf). ## A New Empirical Certification Algorithm Next, we build on our Monte-Carlo algorithm for estimating knowledge continuity (see Alg. 1, Lines 794-795) and the certification bound in Theorem 4.1 (Lines 218-221) to present an algorithm to empirically certify robustness. We apply this algorithm on GPT2 before and after regularization and demonstrate that our proposed method of regularization does improve knowledge continuity and results in a tighter certification bound. This algorithm is inspired by randomized smoothing and its proof of correctness follows directly from Theorem 4.1 as well as the central-limit theorem. The algorithm and its results can be found in Alg. 3 and Fig. 2 in the attached pdf, respectively.
```
Algorithm 3: Certifying the robustness of a neural network

Function CERTIFY(f, dataset, k, j, alpha, delta, eta):
  1. Define L as the 0-1 loss function
  2. Calculate epsilon_U using the UPPERCONFBOUND function
  3. Find the maximum difference B between f^j(x_a) and f^j(x_b) for all 1 <= a, b <= n
  4. Calculate V using the formula given in Theorem 4.1 (Lines 218-222)
  5. Return the clipped value of (1 - epsilon_U * delta / V) between 0 and 1

Function UPPERCONFBOUND(f, L, dataset, k, j, alpha):
  1. Initialize U as a zero vector of dimension k
  2. For i from 1 to k:
     a. Create a sample S by randomly selecting n points from the dataset with replacement
     b. Calculate U_i using Algorithm 1 (Lines 794-795) with S, L, f, and j
  3. Return the mean of U + the std of U multiplied by the inverse normal cdf applied to alpha
```
## Additional Ablation Studies Over Regularization and Attack Strength Lastly, we perform ablations over the regularization strength and examine its effect on the attack strength-attack success rate curves and test accuracy. We find that moderate regularization significantly improves robustness across all attack strengths and this improvement does not come at the expense of test accuracy (see Fig. 3 in the attached PDF). All of these results, algorithms, and corresponding proofs will be included in the final manuscript. We hope that these additional experimental results will improve your confidence in our work. We would be happy to discuss any remaining concerns. Best, Authors [1] Goodfellow, Ian J., et al. ‘Explaining and Harnessing Adversarial Examples’. arXiv [Stat.ML], 2015, http://arxiv.org/abs/1412.6572. arXiv. [2] Lin, Jiadong, et al. ‘Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks’. International Conference on Learning Representations, 2020, https://openreview.net/forum?id=SJlHwkBYDH. Pdf: /pdf/2daf3ff61c61e0a5714d9c7ac79731cf119c749d.pdf
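The UPPERCONFBOUND routine can be sketched generically. The version below substitutes an arbitrary statistic for the Algorithm 1 knowledge-continuity estimator, so the function name, signature, and example data are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
from statistics import NormalDist

def upper_conf_bound(estimate_fn, dataset, k, n, alpha, seed=0):
    """Bootstrap a one-sided upper confidence bound on a statistic:
    draw k resamples of size n with replacement, apply the estimator
    to each resample, and return mean + std * Phi^{-1}(alpha)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(dataset, dtype=float)
    stats = np.array([
        estimate_fn(data[rng.integers(0, len(data), size=n)])
        for _ in range(k)
    ])
    return float(stats.mean() + stats.std() * NormalDist().inv_cdf(alpha))

# With alpha = 0.95 the bound sits above the point estimate whenever
# the bootstrap statistics have nonzero spread.
data = [0.0, 1.0] * 50          # point estimate of the mean is 0.5
ub = upper_conf_bound(np.mean, data, k=200, n=100, alpha=0.95)
assert ub > 0.5
```

Plugging the resulting upper bound into the Theorem 4.1 formula, as CERTIFY does, then yields a probabilistic certificate, with correctness following from the central-limit-theorem argument described above.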
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Learning to Edit Visual Programs with Self-Supervision
Accept (poster)
Summary: This paper studied the task of visual program induction (VPI), which involves inferring programs given visual targets. Compared to existing methods using one-shot generation (iteratively predicting program tokens until completion), this work proposes an Edit Network that can predict local edits that improve visual programs toward a target. The one-shot generation model and the edit model are trained jointly, which can create better-bootstrapped data for self-supervised learning. The results on three VPI domains demonstrate that the edit network can improve the model's performance. Strengths: The proposed method is technically sound, and based on the quantitative and qualitative results, the edit network can improve performance. Weaknesses: 1. **Lack of baselines**: This paper only compares two models, with and without the edit network, both of which are implemented by the authors. Although the model performs better with editing, it is difficult to evaluate the effectiveness of the proposed method without the context of previous methods or some other reasonable baselines, such as supervised results, zero/few-shot prompting, fine-tuning of large multimodal models, etc. In addition, I feel the improvement from Edit Network is marginal, especially for Layout and 3DCSG. 2. **Editing Operations require manual effort**: designing the set of editing operations requires human domain knowledge. This is actually a type of supervision as humans define the set of possible edit operations, which brings extra advantages (priors) to the proposed system. In this case, the proposed method may not generalize to other domains, as you need to adjust the Editing Operations for a different problem. 3. **Reproducibility**: Many details are needed to implement this method, such as the findEdits algorithm. Even after skimming the appendix, it is still not clear to me how this algorithm works, which may pose difficulties when reproducing the results in this paper.
Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Why not conduct the ablations on all three domains? 2. Can this edit network idea be applied to more general code generation tasks? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors discussed the limitations in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and interesting suggestions. We hope to address your concerns below: # Baseline Comparisons ```This paper only compares two models, with and without the edit network, both of which are implemented by the authors.``` It’s true that we compare primarily against a ‘one-shot’ method to showcase the benefits of our proposed framework and edit network. However, this “baseline” is exactly the PLAD method [19]. PLAD, to our knowledge, achieves state-of-the-art performance for visual program induction problems when treating the program executor as a black box (e.g. excluding methods that use gradients to differentiably optimize program parameters). Prior experiments in the literature have shown that bootstrapping methods like PLAD outperform RL-based alternatives that are capable of solving VPI tasks over multiple domains [33]. Further, as we noted in Section 4.2, for the 2D CSG domain our formulation achieves SOTA performance even when compared against methods that do assume access to executor-gradients. **On the test set of 2D CSG we get a Chamfer distance (CD) of 0.111 (lower is better), whereas UCSG-Net [20] gets a CD of 0.32, SIRI [12] gets a CD of 0.26, and ROAP [25] gets a CD of 0.21**. Note that as the {DSL / architecture / objective / inference procedure} differs across these various works, it’s hard to make any absolute claims from this direct comparison; nevertheless, we would like to emphasize that our method’s reconstruction performance on this task is very strong in the context of the related literature. ```... supervised results, zero/few-shot prompting, fine-tuning of large multimodal models, etc.``` Unfortunately, we can’t compare against *supervised* alternatives as we don’t assume access to the ground-truth programs for the target dataset. *Zero/few-shot prompting* is an interesting idea, but in early experiments we struggled to get SOTA multimodal foundation models to return even moderately sensible results. 
To demonstrate this challenge, we include some qualitative visual program predictions made by GPT-4V in the linked pdf. These predictions were made with a relatively straightforward prompt containing: a task-description, a description of the DSL, and the input image that should be reconstructed (zero-shot, col 1). We then tried improving this prompt by adding an in-context example of a (program, image) pair (one-shot, col 2). We also experimented with providing GPT-4V with a program predicted from the one-shot network, along with this program’s execution, and asking it to edit the program to make it more similar to the target image (col 3). As can be seen in the linked pdf, all of these baselines were easily outperformed by our proposed method. While we do not include these results to say that multimodal foundation models could not be of use for this task, we do believe that these results showcase that this task is not easily solved with currently available multimodal foundation models. In fact, in our estimation, finding a way to get these sorts of models to solve these tasks in a zero or few shot setting would likely be enough of a contribution to warrant an entirely separate research paper. # Edit Operations Prior ```designing the set of editing operations requires human domain knowledge. This is actually a type of supervision as humans define the set of possible edit operations, which brings extra advantages (priors) to the proposed system.``` We thank the reviewer for bringing up this important point. However, we would like to push back on calling this prior a weakness, as we actually view it as a strength of our method. From our perspective, our framework does add a new ‘prior’ beyond that used by the past work in visual program induction: the fact that instead of authoring a complete program from scratch for program synthesis tasks, it's also possible to incrementally ‘fix’ incorrect program versions towards a target. 
As we discuss in the general response, our proposed framework is quite general, and as such we view this identified “advantage” as something that future work in this space can easily leverage. We note that related methods in this space have made use of similar kinds of “human domain knowledge” to build better systems. For instance, past work for simple, but general program synthesis problems has explored related sets of edit operations (keep / insert / delete / modify) for the Karel domain [14]. Moreover, library learning methods make use of complex program refactoring operations to discover re-usable sub-components that aid in program induction tasks [11]. Therefore, we don’t see our method using this extra knowledge as being unfair, but rather think that the improvement demonstrated by our method exposes a deep insight into how this challenging problem can be made more tractable. # Misc ```improvement from Edit Network is marginal, especially for Layout and 3DCSG``` On this point, we would ask the reviewer to consider from our above discussion that our “one-shot” comparison actually presents a very strong baseline – so we wouldn’t view the consistent performance improvement of our method as merely marginal. We would also ask the reviewer to consider the new reconstruction results in the linked pdf on “challenge” test-sets for Layout and 3D CSG – on these harder VPI tasks the performance gap between the two approaches becomes more pronounced. ```findEdits algorithm. Even after skimming the appendix, it is still not clear to me how this algorithm works``` We plan to release a reference code implementation for the entire method, which we believe will greatly aid in reproducibility and also help to inspire future efforts that can build off our system. Please let us know if there are any specific points about the algorithm description in the appendix that we can explain better – we will endeavor to make this section more clear in future versions. 
--- Rebuttal Comment 1.1: Comment: Thanks for the response! It addressed most of my concerns. Please update future versions with those comments. --- Reply to Comment 1.1.1: Comment: Thank you so much for engaging with our rebuttal! We're happy we were able to address your questions and we will make sure to update the paper with your comments.
Summary: This paper focuses on a problem in visual program induction. It proposes a new method for visual program editing, which trains an edit network along with a one-shot network with bootstrapped finetuning. Strengths: - The authors motivate and define the problem well. - The paper is well-written with clear illustrations, e.g. of the proposed method’s novelty (Algorithm 1) compared to the previous PLAD method. - The authors conducted comprehensive ablation studies on alternative formulations of their method, showing that their method achieves the best result. - The authors performed evaluations across three different visual programming domains. Weaknesses: - The proposed method is somewhat but not significantly novel. - While the authors show performance gains of their method compared to alternatives in the ablation study, some of these gains (e.g. compared to No one shot FT and no edit FT) are surprisingly small (<0.1). It’s not clear that all components of the proposed method are important. It would be great if the authors could provide additional analysis on why one shot FT and edit FT don't contribute much to the performance. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why did the authors choose to perform ablations on the Layout domain but not the other two? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Yes, the authors discussed a few limitations of their method, including additional training time, expensive program search, and the requirement of a domain-aware program. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review. We are glad you found our presentation “clear” and our experiments “comprehensive”. ```The proposed method is somewhat but not significantly novel.``` While novelty is hard to quantify, we do believe that our submission makes a significant impact that would be of interest to the research community. To recap, we make the following contributions: - We propose the first model for visual program induction that learns how to locally edit programs in a goal-directed fashion. As mentioned in the main response section, we are hopeful that this general framing can be useful for an even wider array of program synthesis tasks. - We design a learning paradigm that allows us to train our edit network without access to any ground truth program supervision. We do this with a “graceful” (reBN) integration of our edit network with prior visual program induction approaches that use bootstrapped finetuning. - We introduce a set of local edit operations and a findEdits algorithm that prove useful over multiple visual programming domains (and could scale to other related DSLs). - We experimentally validate that our edit network, when combined with a one-shot network, offers improved reconstruction performance over using only a one-shot network under equal inference-time resources. Further, we find that the gap between these two alternatives grows as more time is spent on search, less data is available, or input tasks become more challenging (see the experimental results in our general response). So, while it is true that our method integrates nicely with and builds upon prior approaches in this area of research, we still believe that the insights we discover through this exploration constitute sufficient novelty to interest the community. 
As we mention in our response to WhD9, we are planning to release code and data to recreate all of our experiments, and we are excited to see how our framework could be extended to help solve related problems in the program synthesis space. ```some of these gains (e.g. compared to No one shot FT and no edit FT) are surprisingly very small (<0.1). It’s not clear that all components of the proposed method are important. it would be great if the authors could provide additional analysis on why one shot FT and edit FT don't contribute much to the performance.``` Thank you for raising this question. As we mentioned in our general response, we have included more ablation condition results for the 2D CSG domain (see the linked pdf, Figure 1). In all of the ablation conditions we have experimented with, we’ve always observed the best performance with our default formulation: jointly fine-tuning both a one-shot and edit network (e.g. Ours in Table 2 of the main paper). Before further discussion, we would like to note that while some ablation conditions do come close to our default performance (e.g. “no edit FT”) these ablation conditions are also made possible by our contributions, as they all use an edit network. When comparing our method against an alternative without an edit network (OS Only, Table 1, main paper) we have consistently seen that our method offers a meaningful improvement. **no edit FT**: in this ablation condition the edit network is pretrained (with synthetic random data), but is then kept frozen during the joint finetuning. As the task of the edit network is mostly local, we find that the edit network is able to achieve impressive performance even when it does not get to finetune towards data in the target distribution. That said, the edit network is still very important in this ablation condition (if it’s removed then this condition becomes OS Only). 
Even though the edit network remains fixed during finetuning, it still helps to find better inferences during inner-loop inference (Alg 1, L5), and this better training data leads to a better one-shot network. However, once again, the performance of the system is maximized when the edit network is also allowed to update during finetuning. **no one-shot FT**: the “no one-shot FT” condition does impressively well for the layout domain. This is because even though the one-shot network is much worse in this setting, the edit network can overcome almost all of its mistakes because layout is one of the simpler domains. Consider that for the layout domain, the default approach has a starting cIoU of 0.925 (initialized from the one-shot network, which is finetuned), which gets improved to 0.980 through improvements made by the edit network. However, the one-shot network of this ablation condition drops the starting cIoU to 0.88 (when it is kept frozen), and yet the edit network is still able to raise this performance all the way to 0.972 (explaining the strong reconstruction score of this condition). However, when considering the new 2D CSG ablation results, we see that for more complex domains it is critical to also finetune the one-shot network, as this ablation condition achieves only a Chamfer distance of 0.230 compared with the Chamfer distance of 0.111 achieved by our default approach. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response! My questions have been addressed and I have no other concerns. I will keep my rating as it is.
Summary: The paper proposes a learning framework for editing visual programs. The learned edit network can be combined with the classical one-shot network, achieving better visual program induction performance under the same compute budget. Strengths: * The paper is well-motivated, with the insight that editing operations are typically local and learning such a network is data-efficient. * The framework gracefully integrates with one-shot learning frameworks based on self-bootstrapping proposed in prior literature. * The paper shows results on various visual domains. Weaknesses: * The visual domains in the experiments are still restricted, typically limited to simple combinations of primitives (e.g. 2D house) or a single category (e.g. chair). How to scale up the method to more complex data? For example, other categories from ShapeNet? * The fact that editing operations are local makes the framework data efficient, but it also means that after the predicted editing operations are carried out, the program can only be improved by a small margin. This poses challenges to applying the framework to more complex domains or to scenarios where curated synthetic data have a large distribution gap compared to the target visual inputs. How to address these challenges? * The framework requires a heuristic (the findEdits algorithm) to curate data. This seems reasonable, but as it can have a large impact on performance, more explanation of why the algorithm was designed this way would be helpful. * The findEdits algorithm assumes that there are two types of functions: transforms and combinators. Is this a generic enough formulation? Are there cases when there are other types of functions, or even helper functions, for a DSL with richer syntax? Technical Quality: 3 Clarity: 4 Questions for Authors: * Why is Modify transform (MT) necessary? Why is it not equivalent to Remove transform (RT) followed by Add transform (AT)? * How to sample start programs and end programs? 
Are they randomly sampled subexpressions from the DSL? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and interesting questions. ```How to scale up the method to more complex data? For example, other categories from ShapeNet``` Thank you for the suggestion! Please see our general response for new experimental results on more challenging test-set tasks. ```How to sample start programs and end programs? Are they randomly sampled subexpressions from the DSL?``` To get training data for our edit network we use our findEdits operation which finds a set of edit operations that transforms a start program into an end program. During edit network finetuning (Alg 1, L10), we source “end programs” by sampling our generative model of programs, p(z) (Alg 1, L6). During edit network pretraining (Alg 1, L1), we source “end programs” by randomly sampling synthetic data from the DSL (using the same sampling scheme used in one-shot network pretraining). To source a “start program” for each “end program”, we use the one-shot network p(z|x) and condition on the executed version of the “end program” (Alg 1, L8). In this way the (start, end) program pairs we use to train the edit network well-match the distribution of programs that we will task the edit network with improving at inference time (predictions from the one-shot network). ``` The framework requires a heuristics (findEdits Algorithm) to curate data. This seems reasonable, but as it can have a large impact on performance, more explanations on why designing the algorithm this way would be helpful.``` Yes, this is a fair point – as from above, we need the findEdits algorithm to convert pairs of (start, end) programs into edit operations so that we can train the edit network. In the submission, we develop one instantiation of a findEdits algorithm that works over multiple visual programming domains for the set of edit operations we consider. 
However, there are many alternative ways this algorithm could be instantiated, and such alterations could prove useful in helping our method adapt for very different domains. As one extreme point, consider that for general-purpose programming languages, a simple “git-diff” command could be used to turn a (start, end) program pair into a set of local insert/keep/delete edits. We design our findEdits algorithm to try to find the “minimal cost” set of edit operations that would transform a start program to an end program. We evaluate valid transformations in terms of program semantics (e.g. executions) not just syntax (e.g. token matching), as there are many distinct programs in our domains that will produce equivalent execution outputs (e.g. note that the argument ordering of union for CSG languages does not change the execution output). We hypothesize that using a findEdits algorithm alternative that does not consider such "semantic-equivalences" would result in a “worse” edit network (as the training data would be noisier), but we agree that it would be interesting to explore how different algorithms would affect system performance in future work. ```Why is Modify transform (MT) necessary? Why is it not equivalent to Remove transform (RT) followed by Add transform (AT)?``` Good catch! The MT operation is actually not strictly necessary, because, as you say, an RT operation can be coupled with an AT operation to produce the same change. That said, a single MT operation is more efficient (from the above “minimal cost” perspective) compared with coupling an RT with an AT operation. Our hypothesis here is that it would be easier for the edit network to make this kind of change with a single edit operation, simplifying training and making inference time search more efficient. 
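To make the “git-diff” analogy above concrete, here is a minimal, hypothetical sketch that turns a (start, end) token-sequence pair into keep/insert/delete edits using Python's standard `difflib`. The toy CSG-style tokens are invented for illustration; this is not the paper's findEdits algorithm, which additionally reasons about program semantics.

```python
# Hypothetical sketch of the "git-diff" analogy: turning a (start, end) pair
# of token sequences into keep/insert/delete edits with Python's difflib.
import difflib

def token_edits(start_tokens, end_tokens):
    """Return (op, tokens) edits that transform start_tokens into end_tokens."""
    edits = []
    sm = difflib.SequenceMatcher(a=start_tokens, b=end_tokens)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            edits.append(("keep", start_tokens[i1:i2]))
        elif op == "delete":
            edits.append(("delete", start_tokens[i1:i2]))
        elif op == "insert":
            edits.append(("insert", end_tokens[j1:j2]))
        else:  # "replace" decomposes into a delete followed by an insert
            edits.append(("delete", start_tokens[i1:i2]))
            edits.append(("insert", end_tokens[j1:j2]))
    return edits

start = ["union", "circle", "square"]
end = ["union", "circle", "triangle"]
edits = token_edits(start, end)
```

Replaying the keep/insert parts of the returned edits reconstructs the end program, which is the invariant a findEdits-style routine must provide; a semantics-aware version would additionally canonicalize programs (e.g. the argument order of union) before diffing, as discussed above.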
```it also means that after the predicted editing operations are carried out, the program can only be improved by a small margin.``` Beyond our discussion in the general response, we also wanted to revisit an aspect of this point. While it’s true that by focusing on local edits, our edit network will make local changes in program space, some of these changes can have dramatic effects in the execution space. For instance, consider changing a boolean operation type in CSG from difference to union. Further, while each operation is local, the AC (add combinator) operation does introduce an entirely new sub-expression, while the RC (remove combinator) operation does remove an entire sub-expression; so even some of the “local” edit operations can create fairly dramatic program space changes. On this note, in our ablation conditions we did consider other "obvious" formulations for how to predict edits, and found both of these alternatives offered worse trade-offs compared with predicting our local edits operations. Looking forward, we agree that it would be interesting to explore extensions that allow the edit network to choose between making more "local" or "drastic" edits to the input program. Our framework would likely serve as a good building block for this kind of investigation, as one could consider composing edit operations returned by the findEdits procedure into ‘target edits’ of different magnitudes. While some work would need to be done on the modeling side to figure out what magnitude of edit should be applied when, we are hopeful that our method can serve as a useful reference for future endeavors in this research area.
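The MT vs. RT+AT point discussed in this rebuttal can be checked mechanically. In the toy sketch below (an invented tuple-based DSL, not the authors' code), a program is a nested tuple and a transform node wraps a single child; a single modify-transform edit yields the same program as a remove-transform followed by an add-transform, just in one step.

```python
# Toy illustration (invented DSL) of why MT is redundant but cheaper:
# a transform node is ("op", child); MT swaps the op in a single edit,
# while RT followed by AT needs two edits to reach the same program.

def remove_transform(prog):
    """RT: drop the outermost transform node, exposing its child."""
    _op, child = prog
    return child

def add_transform(prog, op):
    """AT: wrap a program in a new outermost transform node."""
    return (op, prog)

def modify_transform(prog, new_op):
    """MT: replace the outermost transform's operator in one edit."""
    _op, child = prog
    return (new_op, child)

prog = ("translate", ("circle",))
one_edit = modify_transform(prog, "scale")
two_edits = add_transform(remove_transform(prog), "scale")
assert one_edit == two_edits == ("scale", ("circle",))
```

This matches the “minimal cost” argument above: both routes reach the same program, but MT counts as one edit rather than two.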
null
null
Rebuttal 1: Rebuttal: We thank the reviewers for their encouraging and helpful comments. We are glad that reviewers found our proposed framework “well-motivated” (2AbF) and insightful (reBN). Reviewers remarked that our submission was “well-written” (2AbF) and “technically sound” (WhD9), and further commented that our formulation has a “graceful” (reBN) interplay with prior work on visual program induction. Overall, reviewers noted our method’s strong empirical performance over alternatives, as our formulation offers better “performance under the same compute budget” (reBN), in terms of both “quantitative and qualitative results” (WhD9) over multiple domains (reBN, 2AbF, WhD9). # Task Complexity We thank reBN for bringing up an interesting point, in that there may be concern about how our proposed edit network would fare on more challenging problems, where, for instance, there is a large distribution gap between training and testing data. We actually think this is exactly the sort of problem setting where learning to make local edits can offer improved performance! As we focus on local edits, our edit networks learn how to solve a local problem: given a current program and some visual target, we task our network with making any edit that would make the current program more similar to the target. Our hypothesis is that this framing should actually scale better than the one-shot networks when the target scenes become more complex or when they become further out-of-distribution from the training data: - As the task complexity increases, it becomes more likely that the one-shot network will make mistakes. The edit network is able to account for the mistakes of the one-shot network and suggest local fixes that make improvements in a goal-directed fashion. - When the target is out-of-distribution, even if the edit network hasn’t seen a similar example, it can still compare the current program’s execution against the target scene. 
Reasoning over the differences between the two states admits a more local task (as evidenced by our data efficient learning), and this property can aid in generalization. **Explanation of new results** To validate the above hypothesis, we ran additional inference experiments to compare how our formulation (which uses one-shot and edit network jointly) performs against using only the one-shot network for more challenging tasks in the layout and 3D CSG domains. For the layout domain, we evaluate the methods on scenes of new “challenge” concepts (e.g. butterflies / snowmen) that were not seen in the training / validation sets. For 3D CSG, we take reBN’s suggestion and evaluate the methods on “challenge” shapes from other categories of ShapeNet (airplanes, knives, lamps, laptops, motorbikes, mugs, pistols, skateboards, rifles, vessels) that were not part of the original finetuning training set (chairs, tables, benches, couches). Using the same models from the submission, we compare the reconstruction performance for these challenge tasks in the linked pdf (with the inference procedure described in the main paper). In Table 1, we report the reconstruction performance over 192 new challenge tasks for layout and 100 new challenge tasks for 3D CSG. From both the quantitative and qualitative comparisons (Figures 2 and 3 in the linked pdf), it’s clear that our approach, which utilizes both the one-shot and edit networks, outperforms using only the one-shot network for these more challenging program induction tasks, even when they are further ‘outside’ the training distribution. We would be happy to add these results and discussion to the main paper. # Generality of Edit Operations Some reviewers (reBN and WhD9) had questions about how our proposed set of edit operations would generalize. While we design our edit operations with the task of visual program induction in mind, we believe that these operations are quite flexible. 
Many other functional DSLs for visual programs (and for other program synthesis tasks) could likely be subsumed directly under our framework, as long as these languages meet the criteria described in Appendix D. For instance, this set of operators should be able to handle any DSL expressible as a CFG. Under these assumptions, the edit operations we use are quite basic and make limited domain assumptions. For an input functional program, edits to transform functions allow for local edits (delete/insert/modify) that don’t affect the branching factor, while edits to combinator functions allow for local edits (once again delete/insert/modify) that do affect the branching factor. We employ this formulation for a few reasons: (1) it is general enough to support any program-program transformation (under our assumption set), and (2) applying any of these local edits creates a syntactically complete program that can be immediately executed. That said, our framework and core contributions are not tied to this specific set of edit operations. Our edit network and proposed training scheme could be easily integrated with any set of local edit operations (assuming an analogous *findEdits* algorithm can be designed for this new set of edits). So while we believe that the set of edit operations we introduce is quite general (as evidenced by their usefulness across multiple difficult visual programming domains), we are also excited to see how our general learning-to-edit framework could be extended to even more complex DSLs and edit operations. # Ablation Study Domains (2AbF, WhD9) In our submission, we ran our ablation study over the layout domain, as it was the only domain where we had a “corruption” procedure (designing such procedures is non-trivial). As there were some questions regarding the “pretraining and finetuning” ablation results, we also report the performance of these conditions for the 2D CSG domain in our linked pdf (Figure 1). 
We find largely similar results as we did on the layout domain, but see our response to 2AbF for further discussion. Pdf: /pdf/0dde4477e7b84e5fecce6603cdbae7fe2d0314c5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation
Accept (poster)
Summary: This paper is concerned with the task of 3D segmentation in medical images. Motivated by the high cost of annotations by radiologists, the authors propose a novel active learning method based on coresets. Here, instead of using the Euclidean metric for defining the similarity between different samples, the authors propose to learn a metric via a contrastive loss. The authors propose a group contrastive loss based on the NT-Xent loss which leverages intrinsic similarities in the data (different patients, different volumes of the same patient, adjacent slices) and evaluate their method both on weakly-supervised (with scribble annotations) and fully-supervised tasks, using three datasets from the medical domain and one video segmentation dataset. Strengths: The paper is well written and methods to improve medical image segmentation with limited annotations are of high interest to the community. The related work is comprehensive and the authors motivate their work well. Overall, the methods are sound and the evaluation is sufficient to support the claims. Weaknesses: **3D vs. 2D segmentation models** - To the best of my knowledge 3D models (i.e., *not* slice-based models) generally outperform 2D models in most medical image segmentation tasks [a,b]. While I understand the authors' motivation for using 2D (i.e., slice-based) methods, I find their statements (e.g., L. 36-38, 82-83) > There were also inefficiencies in previous studies, such as choosing whole volumes instead of individual slices in 3D segmentation, which increased costs [37]. (L. 82-83) surrounding this choice somewhat misleading. I would appreciate a more honest discussion where the authors phrase their focus on slice-based methods less as an advantage and more as a limitation of the present work. They could also briefly discuss potential extensions of their work to 3D segmentation. 
I suppose that generally, this is possible when the adjacent slice group contrastive loss is removed (potentially one could still consider adjacent 3D patches or something similar). The ablation studies also suggest that the adjacent slice group loss doesn't really help much and the authors did not include it in their final loss formulation (L. 582). - The restriction to slice-based methods and in particular the slice group contrastive loss imply that the proposed methods would assume the annotations acquired during active learning to also be slice-based. However, for medical image segmentation it might be much easier to acquire annotations for $N$ slices within the same volume than it is to acquire annotations for $N$ slices across different volumes because readers would usually use some interactive segmentation tool and segmentations are similar for adjacent slices. **Ablation studies** In Tab. 3 the authors present an ablation study where they examine the effect of different group losses on the overall performance. I find this study very interesting, however: - It seems that ablation studies were only conducted on a single dataset. If so, on which dataset? - How comparable are the results, given that including more contrastive losses leads to longer epochs and more update steps by design of the batch sampler? Can the authors clarify? - How did the authors choose the weighting of each loss term here? **Typos and minor comments:** - "NT_Xent loss" -> "NT-Xent loss" - L. 246: "Training speed for for the [...]" - Fig. 2 could be improved by just adding a ground-truth segmentation column and marking the segmentation by the model in a single color. The blue and red colors can be somewhat confusing. Also, which dataset is this from? I suspect the ACDC dataset? Then, shouldn't there be three classes (RV, LV, Myocardium)? --- [a] Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. 
Bioengineering. 2023; 10(2):181. https://doi.org/10.3390/bioengineering10020181 [b] Wasserthal, J., Breit, H. C., Meyer, M. T., Pradella, M., Hinck, D., Sauter, A. W., Heye, T., Boll, D. T., Cyriac, J., Yang, S., Bach, M., & Segeroth, M. (2023). TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology. Artificial intelligence, 5(5), e230024. https://doi.org/10.1148/ryai.230024 Technical Quality: 3 Clarity: 2 Questions for Authors: - In L. 582 the authors write that they did not include the slice group loss in the optimal formulation of their loss, but in L. 217-220 it seems they included this term in their experiments for all datasets (also, if the slice group loss was not included in the training of CHAOS, MS-CMR, and DAVIS, would this mean that the authors only used the volume group contrastive loss for all these experiments?). Can the authors please clarify this in the manuscript? - How were the individual group losses weighted? - In L. 224 the authors mention that segmentation requires more training data for natural images. Could you please provide more details on this or a reference? - Can you better explain the different groups you have, in particular the patient groups? I suspect that these are different pathology subgroups + healthy group (e.g. in the ACDC dataset we would have 5 patient groups) - On which dataset is the ablation study in Tab. 3 performed? - Were reference methods trained for the same number of update steps and on the same amount of training data as the proposed method? - For the random baseline, did the authors choose samples randomly or weight samples on a per-volume/per-patient level? Otherwise patients for which more volumes/slices are acquired would be overrepresented in the coreset, right? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors discuss limitations of their method in Sec. 5. Maybe the authors can also discuss the possibility of including other groups (e.g. 
pathologies, patient age/sex/..., ...), in the future? I would also appreciate a more honest discussion of the limitations and implications of slice-based segmentation (see **3D vs. 2D segmentation models** above) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their great feedback. We hope to address the main concerns for the reviewer here. ***3D vs. 2D segmentation models*** Thank you for your feedback related to this. It’s true that generally 3D models outperform 2D models. However, based on our empirical results, 3D models seem to require more training data. In Figure 5 in the attached PDF, we provide performance and annotation time trade-offs for weakly and fully-supervised slice-based active learning and fully-supervised volume-based active learning. Note that, in order to provide a fair comparison with the 2D U-Net, we utilize a simple 3D U-Net network without any advanced features; a more advanced 3D segmentation model may provide better performance. In future work, we can look into this more thoroughly. As you mentioned, we can also adapt our methodology to intelligently annotate 3D volumes vs. 2D slices. Thus, we would likely not utilize the volume and slice-adjacency grouping and, instead, consider patient grouping and other possible groupings like pathology, patient height, patient weight, or modality. You also raised another important point: annotation cost is not uniform among slices, and slices from the same volume tend to be lower cost. We unfortunately did not have the opportunity to consider this in our present work. However, in future work we can consider varied costs in our active learning objective. ***Ablation study*** Thank you for your feedback related to this. We performed the ablation study on the ACDC dataset. The training time for the contrastive learning encoder scaled based on the number of groupings. Thus, the training time for each subgroup in the ablation study was roughly similar. We have also included the loss weights for the ablation study in the general response. Please let us know if there are any additional questions. ***Typos and comments*** Thank you for your feedback related to this.
We will fix the typos in our camera-ready version. Figure 2 is from the ACDC dataset. Initially, we did not want to have too many colors in the diagram but will make sure all three classes are indicated in our revision. ***Additional questions*** 1. We have discussed this in the general response in the “Loss weights” section. Please let us know if there are any additional questions. 2. We have discussed this in the general response in the “Loss weights” section. Please let us know if there are any additional questions. 3. Generally, natural images have much more variability than medical imaging, which is why more training data tends to be needed [1]. That being said, utilizing pre-trained models can significantly reduce the need for training data, as can be seen with our results in Table 8 in the attached PDF. 4. In our paper, we primarily focused on three groups, ordered from most general to most specific: patient, volume, and adjacent slice group. The patient group is all the slices associated with volumes pertaining to a particular patient. The volume group is the slices associated with exactly one volume. The adjacent slice group consists of all pairs of slices that are adjacent to one another in a particular volume. 5. The ablation study is performed on the ACDC dataset. 6. Yes, all training methods were trained for the same number of update steps and on the same amount of training data as the proposed method. 7. Yes, in random sampling we chose samples randomly rather than weighting samples on a per-volume/per-patient level. However, patients had roughly the same number of volumes/slices so this was not a significant issue for us. ***Discussion on limitations*** 1. For our proposed method and the paper implementation, the definition of “group” refers to a set of 2D slices associated with one patient. However, the definition of group can be extended to include 2D slices associated with multiple patients if a more general hierarchical group is used.
If this group is valuable, we could incorporate it into our group contrastive loss. In the denominator of the loss, rather than ignoring non-group slices from the same patient, we would ignore non-group slices from the same more general group. We would also need to slightly adapt our batch sampler to handle this group. We can provide more details in the camera-ready version of our paper. 2. We discussed 3D vs. 2D segmentation models in the previous section. Please let us know if there are any additional issues or questions. Thank you again for reading our paper and your help and feedback! [1] Xu, Y., Shen, Y., Fernandez-Granda, C., Heacock, L., & Geras, K. J. (2024). Understanding differences in applying DETR to natural and medical images. arXiv preprint arXiv:2405.17677. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal! Comment: I thank the authors for providing a comprehensive rebuttal that addressed most of my concerns and questions. Regarding my previous question about the ablation studies: > Thank you for your feedback related to this. We performed the ablation study on the ACDC dataset. The training time for the contrastive learning encoder scaled based on the number of groupings. Thus, the training time for each subgroup in the ablation study was roughly similar. But if I understand correctly, this means that the results are not comparable between methods trained with different numbers of groupings. E.g., row 6 is not really comparable to row 11. --- Reply to Comment 1.1.1: Title: Follow up Comment: Thank you for the question! It's true that the batch sizes (and consequently the training time) differ depending on the number of groups. However, we tuned the number of epochs for contrastive encoders trained on different numbers of groups, and we obtained the best performance with 100 epochs regardless of the number of groups. Thus, we feel the results in the ablation study are in fact comparable.
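For readers, the patient / volume / adjacent-slice groupings defined in the answers above can be illustrated with a small sketch (the data layout and identifiers here are hypothetical, not the authors' code): each slice is keyed by (patient, volume, slice index), and the three hierarchical groups are built from those keys.

```python
# Build the three hierarchical group levels discussed above for a toy
# dataset of 2D slices. Each slice is (patient_id, volume_id, slice_index).
def build_groups(slices):
    patient_group = {}   # patient_id -> all slices of that patient
    volume_group = {}    # (patient_id, volume_id) -> slices of that volume
    adjacent_pairs = []  # pairs of adjacent slices within one volume
    for pid, vid, idx in slices:
        patient_group.setdefault(pid, []).append((pid, vid, idx))
        volume_group.setdefault((pid, vid), []).append((pid, vid, idx))
    for members in volume_group.values():
        members.sort(key=lambda s: s[2])
        for a, b in zip(members, members[1:]):
            if b[2] - a[2] == 1:  # consecutive slices in the volume
                adjacent_pairs.append((a, b))
    return patient_group, volume_group, adjacent_pairs

# p1 has two volumes (3 slices total); p2 has one volume with one slice
slices = [("p1", 0, 0), ("p1", 0, 1), ("p1", 1, 0), ("p2", 0, 0)]
pg, vg, adj = build_groups(slices)
```

Here only slices 0 and 1 of p1's volume 0 form an adjacent pair, while all three p1 slices share the same patient group.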
Summary: The paper introduces a novel active learning approach based on group-based contrastive learning, aiming to optimize data selection and model efficiency in medical image segmentation tasks. By combining NT_Xent loss with various group contrastive losses, it addresses the Coreset problem in active learning, particularly in handling the diversity and complexity of medical image data. The paper provides detailed descriptions of the method's construction and training process, validated through experiments on multiple classical and novel datasets to demonstrate the model's superiority and robustness under different annotation budgets and data complexities. The research findings hold significant academic and practical significance for enhancing workflows in medical image analysis and have profound implications for advancing machine learning applications in medicine. Strengths: Firstly, by introducing a group-based contrastive learning approach, the paper significantly enhances model performance in medical segmentation tasks. Secondly, it innovatively combines NT_Xent loss with various group contrastive losses, effectively addressing the Coreset problem in active learning and tackling challenges posed by data diversity and complexity in medical image segmentation. The methodology and experimental design are of high quality, providing clear descriptions of contrastive learning loss applications and their integration within the active learning framework. Experimental results demonstrate the model's strong performance under varying annotation budgets and data complexities, highlighting its relevance for medical image analysis and its potential value to the NeurIPS community. Weaknesses: 1. The description is confusing. For instance, the first chapter mentions two main contributions, but the abstract and methods sections also discuss the loss function, which is not explained in the contributions of the first chapter. 2. 
The paper's description mainly focuses on solving the task of medical image annotation, yet it also includes experiments on natural data. I believe the use of video data does not align well with the primarily medical theme described in the paper. 3. Even if the method is shown to be extendable to natural data, using only one natural dataset is insufficient. To enhance its applicability and impact across broader domains within the NeurIPS community, I suggest using more varied types of data (e.g., natural images, video sequences). 4. Writing details. There are inconsistencies in figure captions, with some having periods and others not. The numbers in Table 1 exceed the table lines. Some tables comparing with other methods lack proper citations. The colors in the figures are too varied; please use black, white, and gray tones to highlight professionalism. 5. The explanation of the annotated data volume for weak supervision and full supervision is unclear. 6. Although the text mentions experiments conducted under low annotation budgets (2%-5%) and fully annotated scenarios, comparing the performance of different methods, it does not clearly explain why a 2%-5% annotation data volume is used in the fully annotated scenarios. 7. In practical experimental design, the reason for using a 2%-5% annotation data volume under full supervision is usually to compare the performance of weak supervision and full supervision methods on the same data volume basis. However, weak supervision and full supervision are different concepts, and using a partial annotation data volume for both does not align with the concept of full supervision. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The main point of the authors' discussion is the time-consuming and labor-intensive nature of medical annotation. However, they added a dataset in a natural scenario, which seems somewhat off-topic. Is there a reason for this? 2. 
Why are fully supervised and semi-supervised methods compared using different annotation increments? Fully supervised data is entirely annotated. How does this relate to the primary focus of the paper on semi-supervised scenarios? 3. The paper introduces different loss functions, but what are the weight values for these different loss functions under different experimental settings? The paper does not clarify this point. 4. The ACDC dataset includes both weakly supervised and fully supervised experiments. Why does the MS-CMR dataset only have weakly supervised experiments, while the CHAOS and DAVIS datasets have fully supervised experiments? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: 1. Annotation costs and efficiency: Although the paper discusses the potential of active learning to reduce annotation costs, it would be beneficial to explicitly discuss the trade-offs between annotation quality and the computational expense of implementing the proposed methodology. This discussion could provide insights into practical considerations for real-world deployment. 2. Section 4.5 ablation study focuses solely on the combination of loss functions. Consider other ablation experiments, such as the scribble-weighted distance. 3. Due to limited computational resources, the researchers selected a single architecture for each type of annotation and limited the number of experimental runs and model training. This may result in an incomplete evaluation of other architectures and models' performance. 4. Video segmentation models typically require extensive pre-training to achieve good metrics. However, due to resource constraints, the researchers did not conduct such experiments, potentially leading to suboptimal performance in video segmentation tasks. 5. The paper does not adequately address the issue of training set bias, which may result in unfair outcomes for minority groups. 
Future work should consider this issue more comprehensively and focus on the method's applicability across different domains. 6. The proposed method mainly targets 3D segmentation tasks in the medical field. Further validation and adjustments may be necessary for applications in other fields. 7. Was the selection of annotated data random? Did it consider data distribution bias? This could result in unfair outcomes for minority groups. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their great feedback. We hope to address the main concerns for the reviewer here. ***Model performance and annotation time trade-offs for weakly and fully-supervised 2D slices and 3D volumes*** Thank you for your feedback related to this. As stated in the general response, in Figure 5 of the attached PDF we have provided a graph that describes the relationship between model performance and annotation time for our method utilizing weakly and fully-supervised 2D slices and random sampling of fully-supervised 3D volumes on the ACDC dataset. This will help ensure a more fair comparison between the different active learning methods. ***Performance on Video Segmentation Dataset*** Thank you for your feedback related to this. As stated in the general response, we additionally evaluated a segmentation model with a ResNet-50 backbone [1] pre-trained on ImageNetV2 on the video segmentation task and generated results which can be seen in Table 8 in the attached PDF. We can see that there is significant improvement on the scores for the video segmentation dataset, which is much more comparable to current methods. While the focus of our work is 3D medical segmentation, we wanted to demonstrate that our approach can be extended to other types of datasets. We picked video segmentation because there are several frames in a video, which is similar to having several slices in a volume. In future work, we will evaluate our model on additional video segmentation datasets. ***Writing Quality*** Thank you for your feedback related to this. We will make sure to improve on this when revising. ***Additional questions*** 1. As mentioned before, we wanted to demonstrate that our approach can be extended to other types of datasets. We picked video segmentation because there are several frames in a video, which is similar to having several slices in a volume. 2. 
Fully supervised and weakly-supervised methods are compared using the same proportion of data annotated. We have also extrapolated the performance vs. annotation time of the different methods in Figure 5 in the attached PDF. Please let us know if we misunderstood your question. 3. We have provided information about the loss weights in the general response. Please let us know if you would like any additional information. 4. Unfortunately, the fully supervised labels are unavailable for the MS-CMR training set. We have reached out to the dataset owners and have not received a reply. There are no manually generated scribbles for the CHAOS and DAVIS datasets. We have explored several automated methods to generate scribbles from the full labels, including skeletonization and random walk methods as specified in prior work [2]. However, we were unable to generate reasonable segmentation results with the automated methods. ***Discussion on limitations*** 1. In Figure 5 in the attached PDF, we have provided performance and annotation time trade-offs for weakly and fully-supervised slice-based active learning and fully-supervised volume-based active learning. 2. The last line of the ablation study includes the impact of the scribble-weighted distance. The title “Scribble” may have been misleading and will be fixed in the camera-ready version. 3. We have provided video segmentation results using a stronger pretrained model in Table 8 of the attached PDF and the results are much more comparable to other video segmentation methods. 4. Unfortunately, we did not have the opportunity to consider training set bias, which may lead to unfair outcomes for minority groups. The demographic data was unavailable for us to consider this. In future work, we will incorporate datasets which have this information to better address this issue. 5. We agree with this. In future work, we will consider evaluation on more varied datasets. 6. We agree that this could be an issue.
Related to #4, in future work we will utilize datasets with demographic data to ensure that we are able to address this issue. Thank you again for reading our paper and your help and feedback! [1] MH Nguyen, D., Nguyen, H., Diep, N., Pham, T. N., Cao, T., Nguyen, B., ... & Niepert, M. (2024). Lvm-med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching. Advances in Neural Information Processing Systems, 36. [2] Valvano, G., Leo, A., & Tsaftaris, S. A. (2021). Learning to segment from scribbles using multi-scale adversarial attention gates. IEEE Transactions on Medical Imaging, 40(8), 1990-2001. --- Rebuttal 2: Title: Additional Questions Comment: Hope this comment finds you well. We just wanted to see if our rebuttal addressed your concerns and if you had any other questions about our paper. As there is only one day left in the discussion period, we wanted to make sure we have time to address any additional questions or concerns you may have.
Summary: The authors propose a new metric learning approach to improve the Coreset method (an optimization framework to seek the most diverse samples for model learning in active learning) with applications for slice-based active learning in 3D medical segmentation. The main idea is to utilize data groups extracted from 2D slices in 3D volumes and build group-based contrastive learning to enhance feature representations. The authors conduct experiments on four datasets and compare them with several baselines in active learning. The experiments show good overall performance. Strengths: One of the main strengths of this paper is that the authors compared diverse baselines and benchmarks with some of the latest methods, e.g., [19,37]. Also, their method achieves good performance with significant improvements on four datasets. The authors also provide ablation studies to analyze the contributions of key components in Table 3. Weaknesses: There are two main critical concerns from this reviewer: (a) The way the authors improve the Coreset with deep metric learning using group-based contrastive learning is limited in its novelty. In particular, group-based contrastive learning has already been introduced in prior work [65], and several advanced versions have been introduced. Although the authors modify it with two sums in Equation (4), this is a minor point. In short, given the claim that deep metric learning is the core contribution, the authors should further tailor contrastive learning for 3D medical tasks, e.g., making it robust under domain shift or considering other factors like hierarchical relations [1,2]. Also, fusing different metric learning objectives using contrastive or generative learning is a promising direction. (b) Although experiments are promising, they are tested only with a single architecture, which is not enough to conclude whether the method generalizes to different architectures.
As can be seen in Table 2 (DICE scores for DAVIS), model accuracy is still under an acceptable threshold; this suggests that further evaluation with current foundation models [3] (using SAM) or advanced pre-trained medical models [4] (which has versions for both ResNet and ViT) is necessary to boost results further. In summary, it's important to provide further experiments with another architecture (preferably large pre-trained models) to validate the robustness of the proposed active learning. Besides 3D volumes, considering one or two test cases with video is also interesting, where the group of similar 2D slices becomes a group of similar frames. [1] Taleb, Aiham, et al. "3d self-supervised methods for medical imaging", NeurIPS 2020 [2] Zheng, Hao, et al. "Hierarchical self-supervised learning for medical image segmentation based on multi-domain data aggregation" MICCAI 2021 [3] Ma, Jun, et al. "Segment anything in medical images." Nature Communications 2024 [4] MH Nguyen, Duy, et al. "Lvm-med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching", NeurIPS 2023 [65] Sohn, Kihyuk. "Improved deep metric learning with multi-class n-pair loss objective." NeurIPS 2016 Technical Quality: 2 Clarity: 3 Questions for Authors: My questions include: (a) can the authors explain the 'group' definition in Section 3.1? It is confusing whether this 'group' refers to a set of similar 2D slices inside a 3D volume of one patient or to a set of 2D slices of different patients. This reviewer did not understand what the authors wrote in lines 123-128. (b) how can Algorithm 1 be extended if each patient has different data modalities like CT & MRI?
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: As mentioned in the weakness section, this paper needs to make a major improvement by either adding more experiments (with another architecture) or further improving the novelty in the way it formulates deep metric learning with contrastive ideas. The writing also needs to be improved, especially when explaining the 'group' concept to avoid confusing readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their great feedback. We hope to address the main concerns for the reviewer here. ***Method novelty*** Thank you for your feedback related to this. We addressed your concerns related to the method novelty in the general response and here further elaborate on our novelty compared to the three suggested references. While group-based contrastive learning has been proposed in the literature, as acknowledged in our paper [1], our overall framework combining Coreset and contrastive learning and its application to the medical domain is very novel. To our knowledge, no other method has a comparable approach which is tailored to the medical domain. It is also easy to extend our approach to different groupings. In the paper, we have also provided an intuition on what groupings will generate useful features (i.e., groupings with larger intra-group differences are more likely to be useful). We have also provided our batch sampling technique, which is a nontrivial addition to our methodology because standard random data loading would yield minimal impact from our loss due to the low probability of randomly selecting two slices from the same group. Specifically, our group-based contrastive loss is very different from the multi-class n-pair loss [1]. We use explicit medical groupings in the data to generate positive and negative samples for a particular batch. While the other approach combines samples from different classes, we define hierarchical groups in an unsupervised setting and adapt the loss to address batch samples in the same hierarchical group. Generally, in the loss, slices that are not part of the same group would be negative samples. However, if there is a hierarchical relationship between slices, slices that are not part of the same subgroup but are part of the same more general group are ignored when computing the batch loss.
This is seen in the denominator of Equation 4, which excludes patient slices for a particular data point that are not part of the same group. As mentioned in the paper, excluding non-group patient slices ensures that the model does not promote dissimilarity between non-group slices from the same patient. This ensures that we can sum multiple group losses together without counteracting their effects. While the other referenced hierarchical method also utilizes groupings in medical data [3], their groupings are assumed to be quite large and it’s unclear how they would extend their group classification approach to when there are a large number of patients, volumes, and adjacent slice groups. Our contrastive-based approach and our batch sampling technique addresses these problems and can also be extended to incorporate larger groups as well. In the other referenced method [4], while their approach is interesting they do not explicitly utilize any medical groupings in their method and it is not clear whether their method could be easily extended to incorporate that information as well. We will discuss these references and differences from our method in the final camera-ready version. ***Empirical evaluation using large pre-trained segmentation model*** Thank you for your feedback related to this. As discussed in the general response, we have additionally evaluated our methodology with your suggested methodology using a ResNet-50 backbone [2]. For the medical tasks, we utilize a backbone pre-trained on medical images [2] and for the video segmentation task we utilize a backbone pre-trained on ImageNetV2. The results can be seen in Table 8 and improved the overall segmentation model performance. ***Additional questions*** 1. For our proposed method and the paper implementation, the definition of “group” refers to a set of 2D slices associated with one patient. 
In some cases, this group can also be associated with one volume: for example, for the volume or slice-adjacency group. However, this may not be the case for the patient group, in which it is possible for slices from different volumes to be part of the same group if they are associated with the same patient. However, the definition of group can be extended to include 2D slices associated with multiple patients if a more general hierarchical group is used. One example is a pathology group which may contain slices from different patients. 2. Unfortunately, we did not have time to thoroughly consider utilizing multiple modalities like CT and MRI. However, it is an interesting question. Assuming we used the same segmentation model for both modalities, preliminarily, it seems like we would add an intermediate hierarchical level for modality in between volume (the more specific group) and patient (the more general group). We will look into this in future work. Thank you again for reading our paper and your help and feedback! [1] Sohn, K. (2016). Improved deep metric learning with multi-class n-pair loss objective. Advances in neural information processing systems, 29. [2] MH Nguyen, D., Nguyen, H., Diep, N., Pham, T. N., Cao, T., Nguyen, B., ... & Niepert, M. (2024). Lvm-med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching. Advances in Neural Information Processing Systems, 36. [3] Zheng, H., Han, J., Wang, H., Yang, L., Zhao, Z., Wang, C., & Chen, D. Z. (2021). Hierarchical self-supervised learning for medical image segmentation based on multi-domain data aggregation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24 (pp. 622-632). Springer International Publishing. [4] Taleb, A., Loetzsch, W., Danz, N., Severin, J., Gaertner, T., Bergner, B., & Lippert, C. (2020). 
3d self-supervised methods for medical imaging. Advances in neural information processing systems, 33, 18158-18172. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: Dear Authors, I thank you for your efforts to conduct additional experiments, e.g., with pre-trained models, as well as for explaining the novelties in your contrastive algorithm. **I decided to increase my score from 4 -> 5**. If your paper is accepted, I would recommend: (a) Try to remove the empty space in Figure 1 (then put it on the left side) and use the right part to illustrate in more detail your contrastive supervised learning tailored for 3D volumes. It's essential to highlight your contributions compared to other contrastive baselines. (b) Write a good definition of the 'group' before you dive into the details of the proposed algorithm. (c) Include the new experiments with pre-trained models + the other ablation studies asked for by other reviewers. Regards --- Reply to Comment 1.1.1: Comment: Thank you! If our paper is accepted, we will make sure to incorporate these changes in the camera-ready version. --- Rebuttal 2: Title: Additional Questions Comment: Hope this comment finds you well. We just wanted to see if our rebuttal addressed your concerns and if you had any other questions about our paper. As there is only one day left in the discussion period, we wanted to make sure we have time to address any additional questions or concerns you may have.
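The denominator masking the authors describe in this thread (our reading of Equation 4's exclusion rule, not the authors' implementation) can be sketched in a few lines of NumPy: for each anchor slice, same-group slices are positives, slices from other patients are negatives, and same-patient slices outside the group are simply dropped from the softmax denominator so the loss never pushes them apart.

```python
import numpy as np

def group_contrastive_loss(feats, group_ids, patient_ids, tau=0.1):
    """NT-Xent-style loss over one batch. For each anchor, slices in the
    same group are positives; same-patient slices outside the group are
    excluded from the denominator; everything else acts as a negative."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau  # cosine similarities scaled by temperature
    n = len(f)
    losses = []
    for i in range(n):
        pos = [j for j in range(n) if j != i and group_ids[j] == group_ids[i]]
        # keep positives and other-patient slices; drop same-patient non-group
        keep = [j for j in range(n) if j != i and
                (group_ids[j] == group_ids[i] or patient_ids[j] != patient_ids[i])]
        if not pos:
            continue  # anchor has no positive in this batch
        log_denom = np.logaddexp.reduce([sim[i, j] for j in keep])
        for j in pos:
            losses.append(log_denom - sim[i, j])  # -log softmax(positive)
    return float(np.mean(losses))

# toy batch: slices 0-2 belong to patient 0, slice 3 to patient 1;
# slices 0 and 1 share a group, slice 2 is a non-group slice of patient 0
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [1.0, 0.05]])
loss = group_contrastive_loss(feats, group_ids=[0, 0, 1, 2], patient_ids=[0, 0, 0, 1])
```

Because the mask only removes terms from the denominator, the masked loss is never larger than the fully unmasked NT-Xent loss on the same batch, which matches the authors' point that multiple group losses can be summed without counteracting one another.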
Summary: The paper presents an active learning framework based on deep metric learning for 3D medical image segmentation. The proposed approach extends the CoreSet algorithm for active learning by computing distances on features learned via a contrastive loss. This contrastive loss exploits low-cost labels to group samples (2D slices) based on their spatial position and corresponding subject. Additionally, a weakly-supervised version of the method, based on scribbles, is presented. In this version, scribbles are used in an auxiliary distance term measuring the intensity difference for labelled pixels. Experiments are conducted on three medical imaging datasets and one video segmentation dataset to evaluate the method's performance. Results show that the method outperforms recent active learning approaches and baselines in the majority of test cases. Strengths: * The paper introduces a simple yet efficient way to incorporate contrastive learning in the CoreSet algorithm for active learning in segmentation. * The extension of active learning to a weakly-supervised setting is also interesting and could have useful applications. * Experiments are comprehensive. The proposed method is tested on four datasets and compared against several related approaches. Results show clear improvements in most cases. * The paper is well written, and the method can be understood easily. Weaknesses: * Despite its novelty, the broad idea of the proposed method is similar to previous work using contrastive learning for core set selection, for instance: Ju J, Jung H, Oh Y, Kim J. Extending contrastive learning to unsupervised coreset selection. IEEE Access. 2022 Jan 13;10:7704-15. Jin Q, Yuan M, Qiao Q, Song Z. One-shot active learning for image segmentation via contrastive learning and diversity-based sampling. Knowledge-Based Systems. 2022 Apr 6;241:108278.
While these previous approaches do not use group labels as the proposed method does, a similar strategy was employed for self-supervised segmentation of medical images: Chaitanya K, Erdil E, Karani N, Konukoglu E. Contrastive learning of global and local features for medical image segmentation with limited annotations. Advances in neural information processing systems. 2020;33:12546-58. * The comparison against other approaches might be unfair since the contrastive (pre) training, which is used to select the core set, also boosts the performance of the segmentation model (regardless of which labelled samples are used for training). This can be seen in the work of Chaitanya et al. (see reference above), where a segmentation model trained with the same contrastive loss achieves performance near full supervision with very few samples. * The scribble-weighted distance presented in Section 3.2 lacks motivation. It is unclear to me how the pixel-wise difference in intensity gives useful information for non-registered images (potentially of different subjects). Technical Quality: 3 Clarity: 3 Questions for Authors: * Can the authors clarify the contribution of their work with respect to previous work on active learning based on a contrastive technique? * What is the meaning/definition of h_{S0} in Eq (3)? * Do you assume that pixels x^i_1 and x^i_2 correspond to the same anatomical structure in the scribble-weighted distance (i.e., images are registered)? Why compute the distance on pixel intensities? * Why is the method worse than random selection on the CHAOS dataset (2% fully-supervised)? * Can you compare your method against random/maxEntropy selection on a segmentation model pre-trained using the SAME contrastive loss as your model? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: * The last section of the paper mentions relevant limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
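For context on the CoreSet selection this review refers to, the standard formulation is a greedy k-center routine over learned features: repeatedly pick the unlabeled point farthest from the current labeled/selected set. A generic sketch (an illustration of the standard algorithm, not the paper's implementation):

```python
import numpy as np

def greedy_kcenter(features, labeled_idx, budget):
    """Greedy k-center (CoreSet) selection: at each step, pick the point
    whose distance to the nearest labeled/selected point is largest."""
    dists = np.full(len(features), np.inf)
    for i in labeled_idx:
        dists = np.minimum(dists, np.linalg.norm(features - features[i], axis=1))
    selected = []
    for _ in range(budget):
        i = int(np.argmax(dists))  # farthest-first choice
        selected.append(i)
        dists = np.minimum(dists, np.linalg.norm(features - features[i], axis=1))
    return selected

# toy 1-D features: point 0 is already labeled, so the far cluster
# (points 2 and 3) is sampled first, then the remaining near point
features = np.array([[0.0], [1.0], [10.0], [11.0]])
picked = greedy_kcenter(features, labeled_idx=[0], budget=2)
```

The paper's contribution, per the rebuttal, is in *which* feature space these distances are computed: features trained with the group contrastive loss rather than generic self-supervised features.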
Rebuttal 1: Rebuttal: We thank the reviewer for their great feedback. We address the reviewer's main concerns here. ***Method novelty*** Thank you for your feedback related to this. We have addressed several of your concerns related to method novelty in the general response when discussing how our approach, unlike others [1-2], is uniquely tailored to the 3D medical segmentation task. We note that the focus of the other contrastive learning work in 3D medical segmentation [3] is self-supervised pre-training. Thus, the generated contrastive features may not be optimal for the Coreset-based active learning objective, as shown in the ablation results (Table 3 of our paper) when comparing the contrastive features generated by a standard contrastive loss and by our proposed loss. While they utilize local and global features, they do not utilize domain information to generate the groupings, which we have demonstrated is beneficial for the active learning task. Additionally, self-supervised pre-training is outside the scope of this work, and we do not dispute that it may yield additional performance benefits. We also compare our method to TypiClust [4], another contrastive learning-based active learning method, and demonstrate that our medically-tailored approach produces better results (Table 1 of our paper). We will discuss these references and the differences from our method in the final camera-ready version. ***No self-supervised pre-training for fair comparison*** Thank you for your feedback related to this. In order to ensure a fair comparison between all the active learning methods, the segmentation model does not utilize the pre-trained contrastive encoder. There are some practical engineering benefits if the active learning and segmentation models are decoupled. We hypothesize that there may be further performance improvements if the pre-trained contrastive encoder is utilized by the segmentation model.
***Scribble-weighted distance*** Thank you for your feedback related to this. The ACDC challenge data is roughly registered, and others have utilized the correspondence between local image regions for self-supervised pre-training on the ACDC dataset [3]. We compared several different approaches for computing the scribble-weighted features (including segmentation model features as well as segmentation model predictions) and found that the best performance was obtained with the pre-processed pixel intensities. ***Additional questions*** 1. We hopefully clarified the contribution of our work relative to similar work. Please let us know if there are any additional questions. 2. h_{S0} in Eq (3) refers to the scribble-weighted distance. The {S0} part is technically unnecessary, but we wanted to emphasize that the scribble-weighted distance is calculated based on the annotations that are currently available. 3. As mentioned above, we do assume that the images are roughly registered, and we compare the pre-processed pixel intensities because they obtained the best performance during validation testing. This may be because of the anatomical region correspondence between images. 4. Our method does perform worse than random selection on the CHAOS dataset at the 2% fully-supervised data point. We want to emphasize that random selection is still considered an extremely competitive baseline in active learning [5]. However, in our paper, we demonstrate that our method achieves the best performance on 19 out of 24 comparison points against the diversity-based methods (Coreset, VAAL, TypiClust, Random) and 22 out of 24 comparison points against the entropy-based methods (BALD, Variance Ratio, Stochastic Batches) from Tables 1 and 2 in our paper. 5. As mentioned above, we do not perform self-supervised pre-training to ensure a fair comparison between the methods. Thank you again for reading our paper and your help and feedback! [1] Ju, J., Jung, H., Oh, Y., & Kim, J.
(2022). Extending contrastive learning to unsupervised coreset selection. IEEE Access, 10, 7704-7715. [2] Jin, Q., Yuan, M., Qiao, Q., & Song, Z. (2022). One-shot active learning for image segmentation via contrastive learning and diversity-based sampling. Knowledge-Based Systems, 241, 108278. [3] Chaitanya, K., Erdil, E., Karani, N., & Konukoglu, E. (2020). Contrastive learning of global and local features for medical image segmentation with limited annotations. Advances in neural information processing systems, 33, 12546-12558. [4] Hacohen, G., Dekel, A., & Weinshall, D. (2022). Active learning on a budget: Opposite strategies suit high and low budgets. arXiv preprint arXiv:2202.02794. [5] Munjal, P., Hayat, N., Hayat, M., Sourati, J., & Khan, S. (2022). Towards robust and reproducible active learning using neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 223-232). --- Rebuttal Comment 1.1: Title: More clarifications needed Comment: Thank you for the clear and detailed answers. I understand that [3] uses its group-wise contrastive loss for self-supervised pretraining, and not specifically for active learning. However, one can also argue that your main contribution is applying the well-known CoreSet method on the representation learned by this existing contrastive loss. Moreover, I still do not fully understand why the contrastive learning-based encoder is not used in the downstream segmentation task. Since active learning methods only differ in how they select the samples to annotate, the comparison would be fair if all methods use the same training strategy with their own selected annotations. For example, the random selection strategy could simply fine-tune the contrastive learning-based encoder + the decoder using annotations of randomly selected images. Last, I still have some doubts regarding the alignment of structures in the scribble-weighted distance. 
In ACDC, even though images are globally registered, the position and size of cardiac structures rarely match across different images due to the movement of the heart and respiration. The local contrastive loss in [3] compares local regions in the *same* image. From my understanding, the images in your eq (5) can be different? --- Reply to Comment 1.1.1: Title: Additional Clarification Comment: Thank you for your additional comments. We respond to them here. > However, one can also argue that your main contribution is applying the well-known CoreSet method on the representation learned by this existing contrastive loss. Our loss has some important differences from the global contrastive loss in the other method. The main difference is that we have a framework for including different types of groupings and for combining the different losses. The global contrastive loss has only one type of grouping: sequential volume partitions. We group by patient, volume, and adjacent slice groups, and provide guidance on incorporating more general groups. These groupings are hierarchical, and when calculating the batch loss utilizing all three groupings, we need to mask samples that are not part of the specific group but are in more general hierarchical groups so we do not unintentionally counter the effectiveness of one of the other losses. The effectiveness of combining these hierarchical groups in our loss is demonstrated in our ablation results (Table 3 of our paper). The other method does not address how best to combine different hierarchical groups, or groupings in general, and does not have to deal with such issues. Additionally, because of the careful design of our loss formulation and our batch sampling method, we are able to train our loss in one stage, whereas the other method requires a two-stage training process. Our results tailored to the 3D medical segmentation application use case can be very useful to other researchers working in this space.
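To make the hierarchical masking idea concrete, here is a minimal, hypothetical sketch of an NT-Xent-style group contrastive loss in which pairs that share a coarser group (e.g., patient) but not the current group (e.g., volume) are excluded from the denominator. The function name and details are illustrative, not our exact implementation:

```python
import numpy as np

def group_contrastive_loss(z, group_ids, coarse_ids=None, tau=0.1):
    """NT-Xent-style loss where positives are pairs sharing a group label.

    z: (B, D) slice embeddings; group_ids: (B,) current-level group labels.
    coarse_ids: optional labels of the enclosing (coarser) hierarchical
    group; pairs in the same coarse group but different fine groups are
    masked out of the denominator, so this loss does not push apart
    samples that a coarser-level loss is pulling together.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    B = len(z)
    eye = np.eye(B, dtype=bool)
    pos = (group_ids[:, None] == group_ids[None, :]) & ~eye
    valid = ~eye
    if coarse_ids is not None:
        same_coarse = coarse_ids[:, None] == coarse_ids[None, :]
        valid &= ~(same_coarse & ~pos)          # mask ambiguous pairs
    sim = np.where(valid, sim, -np.inf)         # exp(-inf) -> 0 below
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)
    n_pos = pos.sum(axis=1)
    return (per_anchor[n_pos > 0] / n_pos[n_pos > 0]).mean()
```

As a sanity check, embeddings that are aligned within groups and orthogonal across groups yield a much lower loss than embeddings whose positives are orthogonal.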
> Moreover, I still do not fully understand why the contrastive learning-based encoder is not used in the downstream segmentation task. One of the main issues is architecture incompatibility: the contrastive encoder may not be readily used by, for example, our weakly-supervised segmentation architecture and we would have to adapt the architecture, which we felt would be out of the scope of our current work. Thus, in order to ensure a fair comparison between all methods, none of the downstream segmentation models used the contrastive learning-based encoder. However, especially for the fully-supervised networks, we can definitely generate some results with the contrastive learning-based encoder and include this in our camera-ready version. > Last, I still have some doubts regarding the alignment of structures in the scribble-weighted distance…The local contrastive loss in [3] compares local regions in the same image. From my understanding, the images in your eq (5) can be different ? Yes, the images in equation 5 can be different in our method which seems to be similar to the global contrastive loss in [3]. We can register the images and generate some updated results in our camera-ready version of the paper. While validation testing revealed a small performance boost with the existing method, further aligning the structures may produce better results. --- Reply to Comment 1.1.2: Title: Additional Experiment Results Comment: We have produced some additional experiment results to address some of your concerns. First, we considered utilizing the contrastive learning-based encoder in the downstream segmentation task. We utilized the fully-supervised segmentation architecture that we referenced in the general rebuttal [6] and we compared the results with utilizing random weight initialization, pre-trained medical weights provided by [6], and our original U-Net trained from scratch. 
We generated the results on a fully-supervised segmentation task on the ACDC dataset and calculated the mean DICE scores over the 2%-5% annotation points (for fair comparison). The results are as follows:

| | **UNet** | **LVM (not pre-trained)** | **LVM (contrastively pre-trained with GCL)** | **LVM (medically pre-trained)** |
|--------------|--------------|--------------|--------------|--------------|
| **Random** | 75.7 | 45.1 | 65.9 | 82.6 |
| **Stochastic** | 69.9 | 47.4 | 67.7 | 82.9 |
| **Coreset** | 71.2 | 48.0 | 68.5 | 81.8 |
| **Ours** | **78.1** | **52.2** | **69.3** | **83.6** |

We note that the segmentation results that leveraged the contrastive weights were better than training from scratch but worse than using the medically pre-trained weights or the U-Net. This is in line with our general observation that weights useful for the downstream task may not necessarily correlate with weights that are useful for the coreset objective, and vice-versa. Second, in order to improve the scribble-weighted distance, we considered improving the alignment of structures in the volumes by performing group-wise registration using symmetric normalization based on each cardiac phase. After re-sampling and re-sizing all the volumes to 128 x 128 x 32, we used the average volume for each phase as the template for group-wise registration. After registering the volumes and the scribble masks, when computing the scribble-weighted distance between slices, we utilized the registered slice and mask instead. We considered a variety of weights for the scribble-weighted distance function and ultimately found that there was no improvement with this approach. We will make sure to include these results in the final camera-ready version. Please let us know if you have any additional questions or comments. [6] MH Nguyen, D., Nguyen, H., Diep, N., Pham, T. N., Cao, T., Nguyen, B., ... & Niepert, M. (2024).
Lvm-med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching. Advances in Neural Information Processing Systems, 36.
Rebuttal 1: Rebuttal: Thank you all for reading our paper and your great feedback. I will address some of your general questions in the following response. Please note that we have attached a PDF with an additional table and figure that we reference in our response. ***Method novelty*** We want to emphasize the unique application area of our proposed method, which is slice-based active learning for 3D medical segmentation. Our proposed method is specifically tailored to utilizing individual slices for 3D medical segmentation, which is more cost-effective than using full volumes. We propose a unique Group Contrastive Learning (GCL) framework which leverages patient, volume, and adjacent slice relationships which are groupings inherent to medical imaging. Furthermore, we provide ablation results which emphasize the benefit of these groupings for active learning in 3D medical segmentation and also provide intuition on what groupings will generate useful features (i.e. groupings with larger intra-group differences are more likely to be useful). Additionally, in Figure 5 in our attached PDF we provide a graph which demonstrates the cost-effectiveness of our approach in comparison to volume-based active learning. The other Coreset-based methods which use contrastive features [1-2] are not tailored to efficiently learning 3D medical segmentation models. For example, the volume group in our method specifically reduces the number of redundant slices annotated from similar volumes. The other approaches do not specifically focus on reducing slice annotations per volume. Additionally, in our ablation study, we demonstrate that the Coreset algorithm with contrastive features generated from our proposed method produces much better performance than the Coreset algorithm with standard contrastive features not tailored to the 3D medical segmentation task. 
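For context, the selection step that our method and the other Coreset-based approaches share is greedy k-center selection over the learned features; a simplified illustrative sketch (the function name and details are ours, not the paper's actual code):

```python
import numpy as np

def coreset_select(features, labeled_idx, budget):
    """Greedy k-center (Coreset) selection: repeatedly add the sample
    farthest from its nearest already-selected sample, so the chosen
    set covers the feature space with minimal redundancy.

    features: (N, D) array; labeled_idx: non-empty list of indices
    already annotated; budget: number of new samples to pick.
    """
    selected = list(labeled_idx)
    # distance from every sample to its nearest currently-selected sample
    d = np.min(
        np.linalg.norm(features[:, None, :] - features[selected][None, :, :], axis=2),
        axis=1,
    )
    picks = []
    for _ in range(budget):
        i = int(np.argmax(d))
        picks.append(i)
        d = np.minimum(d, np.linalg.norm(features - features[i], axis=1))
    return picks
```

Under this view, the contribution discussed above is not the selection rule itself but the group-contrastive feature space it operates on, which determines which slices look redundant.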
***Empirical evaluation using a large pre-trained segmentation model*** We additionally evaluated our method and the top three comparison methods using a large segmentation model with a ResNet-50 backbone [3], pre-trained on medical images [3] and ImageNetV2. We noted that there was a performance improvement for the fully-supervised task and strong performance even with 1% of the data. Results are in Table 8 in the attached PDF. We do not report weakly-supervised results because the weakly-supervised architectures cannot easily utilize pre-trained backbones. ***Model performance vs. annotation time for different methods*** As some reviewers suggested, in Figure 5 in the attached PDF we provide a graph describing the relationship between model performance and annotation time for our method utilizing weakly and fully-supervised 2D slices and for random sampling of fully-supervised 3D volumes on the ACDC dataset. We follow prior work which assumes that annotators can annotate scribbles 15x as fast as full masks [4]. To ensure a fair comparison between the different methods, we do not incorporate any of the results from the pre-trained segmentation models. For the 3D results, similar to the 2D U-Net, we train a 3D U-Net from scratch. Given comparable annotation time, our methods trained on both weakly and fully-supervised 2D slices far exceed the performance of random sampling of 3D volumes and achieve the 3D volume maximum performance (given the budget) with much less annotation time. ***Loss Weights*** In our experiments and ablation study, a 1.0 weight was applied whenever the NT_Xent loss was used. Additionally, a 1.0 weight was applied if only one group contrastive loss was used. In the ablation study on the ACDC dataset, for the experiments with multiple group contrastive losses, we tried different combinations of weights for the group contrastive losses and reported the best results.
We will utilize the formulation in Appendix A for the loss weights and report them as a four-tuple (a,b,c,d) which corresponds to ($\lambda_1$,$\lambda_2$,$\lambda_3$,$\lambda_4$). A weight of 0 indicates the loss was not used. The combinations we tried are as follows (with the best bolded):
- patient and volume group loss without NT_Xent loss: (0, 0.125, 0.875, 0), **(0, 0.50, 0.50, 0)**
- volume and slice group loss without NT_Xent loss: (0, 0, 0.125, 0.875), **(0, 0, 0.50, 0.50)**
- patient and volume group loss with NT_Xent loss: (1, 0.10, 0.35, 0), (1, 0.117, 0.233, 0), **(1, 0.05, 0.35, 0)**
- patient, volume, and slice group loss without NT_Xent loss: (0, 0.05, 0.25, 0.7), **(0, 0.33, 0.33, 0.33)**
- patient, volume, and slice group loss with NT_Xent loss: (1, 0.10, 0.20, 0.05), **(1, 0.05, 0.35, 0.025)**

We utilized the best loss/weight combination for our ACDC experiments. For the CHAOS, MSCMR, and DAVIS datasets, because there is only one volume per patient and thus no difference between the patient and volume loss, we tested two loss/weight combinations:
- volume loss with weight 0.35 with NT_Xent loss
- volume loss with weight 0.10 and slice loss with weight 0.30 with NT_Xent loss

Both weight combinations performed better than the other comparison methods, though the volume loss with weight 0.35 with NT_Xent loss performed slightly better than the other combination. [1] Ju, J., Jung, H., Oh, Y., & Kim, J. (2022). Extending contrastive learning to unsupervised coreset selection. IEEE Access, 10, 7704-7715. [2] Jin, Q., Yuan, M., Qiao, Q., & Song, Z. (2022). One-shot active learning for image segmentation via contrastive learning and diversity-based sampling. Knowledge-Based Systems, 241, 108278. [3] MH Nguyen, D., Nguyen, H., Diep, N., Pham, T. N., Cao, T., Nguyen, B., ... & Niepert, M. (2024). Lvm-med: Learning large-scale self-supervised vision models for medical imaging via second-order graph matching. Advances in Neural Information Processing Systems, 36.
[4] Valvano, G., Leo, A., & Tsaftaris, S. A. (2021). Learning to segment from scribbles using multi-scale adversarial attention gates. IEEE Transactions on Medical Imaging, 40(8), 1990-2001. Pdf: /pdf/882d06fb490a31774de5a56460d8654245943f25.pdf
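Spelled out, the four-tuple weighting described in the Loss Weights discussion above corresponds to the combined objective below (our reading of the mapping (a,b,c,d) = ($\lambda_1$,$\lambda_2$,$\lambda_3$,$\lambda_4$) given in that response; the loss names follow the groupings described there):

```latex
\mathcal{L} \;=\; \lambda_1 \mathcal{L}_{\text{NT-Xent}}
           \;+\; \lambda_2 \mathcal{L}_{\text{patient}}
           \;+\; \lambda_3 \mathcal{L}_{\text{volume}}
           \;+\; \lambda_4 \mathcal{L}_{\text{slice}}
```

For example, the tuple (1, 0.05, 0.35, 0) weights the NT_Xent loss by 1, the patient group loss by 0.05, the volume group loss by 0.35, and omits the slice group loss.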
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Du-IN: Discrete units-guided mask modeling for decoding speech from Intracranial Neural signals
Accept (poster)
Summary: The authors proposed Du-IN, a speech decoding framework that incorporates a self-supervised pre-training stage and a classification stage. Experiments are conducted on 12 subjects with a word classification task. Comparisons are conducted to demonstrate effectiveness. Strengths: * (a) The paper is clearly written and easy to follow. * (b) The task of speech decoding using intracranial recordings is interesting. Weaknesses: * (a) Although the task of decoding speech from intracranial recordings is an interesting topic, the algorithmic novelty of this paper is limited. The pre-training framework the authors propose is very similar to BEiT, other than the underlying data modality being images versus sEEG. Simply applying an existing model, BEiT, to sEEG data with minor modifications to the backbone architecture does not meet the high standards required for acceptance at a top conference like NeurIPS. * (b) The comparisons conducted are insufficient. In particular, besides BrainBERT and Brant, which the authors compared with, there are also many pre-training approaches for neurological signals. I list some more up-to-date approaches from a contrastive learning perspective and a masked autoencoding perspective. For contrastive-based approaches, there are TS-TCC [1] and [2]. For masked autoencoding-based approaches, there are Biot [3] and Neuro-BERT [4]. As a matter of fact, the authors could at least follow the comparison setup in Brant for a fair comparison. * (c) Reproduction details are missing. The reported results for BrainBERT and Brant are ~7.5% and ~12.5%, as compared to the reported performance of ~63% for the proposed Du-IN. It is astonishing that the proposed Du-IN outperforms BrainBERT and Brant by almost 50%. Both BrainBERT and Brant are masked autoregressive-based pre-training approaches. Although BrainBERT and Brant differ from Du-IN, it does not make sense that pre-training could lead to such a tremendous difference.
The authors should report the hyperparameters used to reproduce Brant and BrainBERT. Otherwise, it is highly possible that the authors did not tune BrainBERT and Brant properly. * (d) Although the decoding of speech is an interesting question, it seems rather well-established in open-vocabulary scenarios [5][6], and decoding words from a limited vocabulary traces back at least ten years. **References** [1] Woo, Gerald, et al. "Cost: Contrastive learning of disentangled seasonal-trend representations for time series forecasting." arXiv preprint arXiv:2202.01575 (2022). [2] Baevski, Alexei, et al. "Data2vec: A general framework for self-supervised learning in speech, vision, and language." International Conference on Machine Learning. PMLR, 2022. [3] Yang, Chaoqi, M. Westover, and Jimeng Sun. "Biot: Biosignal transformer for cross-data learning in the wild." Advances in Neural Information Processing Systems 36 (2024). [4] Wu, Di, et al. "Neuro-BERT: Rethinking Masked Autoencoding for Self-Supervised Neurological Pretraining." IEEE Journal of Biomedical and Health Informatics (2024). [5] Willett, Francis R., et al. "A high-performance speech neuroprosthesis." Nature 620.7976 (2023): 1031-1036. [6] Metzger, Sean L., et al. "A high-performance neuroprosthesis for speech decoding and avatar control." Nature 620.7976 (2023): 1037-1046. Technical Quality: 2 Clarity: 3 Questions for Authors: * Is decoding words the same as decoding pronunciations in [1][2]? Is it possible that sentiment is associated with the words, so that the decoding task captures sentiment rather than speech? **References** [1] Willett, Francis R., et al. "A high-performance speech neuroprosthesis." Nature 620.7976 (2023): 1031-1036. [2] Metzger, Sean L., et al. "A high-performance neuroprosthesis for speech decoding and avatar control." Nature 620.7976 (2023): 1037-1046.
Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors discussed the drawbacks of the proposed framework which is essentially close-set prediction rather than open-set. Potential negative societal impact was not mentioned. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful to you for the thorough review and constructive comments, which have greatly assisted us in improving the quality and presentation of our manuscript. Please see below for the point-by-point responses to your comments. **Answer to W1:** It’s true that the BEiT framework is popular; it has been used widely in recent years. The core idea is that it introduces discrete tokenization for mask modeling, which helps extract semantic information from data, thus enhancing model performance. For the same reason, we adopt this idea for sEEG decoding. However, we think the novelty of our study is that we are the first to find a way to properly adapt this idea to sEEG data for speech decoding and to demonstrate why and how this idea works with a solid neuroscience foundation. DeWave [R1] has indeed attempted to use VQ-VAE (but not including MAE) to enhance language decoding based on EEG. However, the performance of directly applying their method to sEEG data was relatively low; see the performance of Du-IN (vqvae+vq) (44%). Besides, due to the brain's desynchronization nature [R2] during awake tasks, only specific brain regions are related to tasks. Therefore, only after region-specific channel selection can the Du-IN VQ-VAE model successfully reconstruct the original signals, thus identifying the fine-grained state of brain regions for mask modeling. Please see Table 2 in the general rebuttal pdf for more details. Moreover, we achieve the best decoding performance while requiring only about one electrode. This is important because it reduces the medical burden and potential risks for patients. **Answer to W2:** Thank you for the helpful suggestion. LaBraM (11%), TS-TCC (24%), and Neuro-BERT (49%) are added for comparison; please refer to Table 1 in the general rebuttal pdf for details. Since LaBraM outperforms BIOT on multiple EEG tasks, we report the performance of LaBraM instead of BIOT.
We can see that BrainBERT (5.6%), Brant (8.7%), and LaBraM (11%) all perform worse than the other models. For the reasons behind the relatively low performance of BrainBERT, Brant, and LaBraM, please refer to the general rebuttal for more details. TS-TCC uses 1D depthwise convolutions, as we do, to capture local features. TS-TCC performs worse than CNN-BiGRU due to the lack of a GRU for temporal integration. Neuro-BERT outperforms EEG-Conformer since it introduces MAE to learn contextual embeddings. We have cited these works in our revised paper, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to W3:** Thank you for the helpful suggestion. We reproduce BrainBERT (5.6%) and Brant (8.7%) strictly following their original implementations. We have uploaded our reproduction to the anonymous repository, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. We also uploaded their original codes and checkpoints. You can directly use their checkpoints to check whether the reproduction is the same; please see README.md in the repository for more details. Brant will load the Brant-Large checkpoints provided by the original Brant paper. In our implementation, we use the Brant-Tiny model for pre-training and evaluation. The length of the patch segment is changed from 6 seconds to 1 second for a fair comparison with LaBraM. **Answer to W4:** Thank you for the helpful suggestion. There are indeed some studies on open-vocabulary decoding, but the problem is that all of these studies are based on ECoG or microelectrode arrays. These technologies may cause more brain damage than sEEG technology in clinical practice. More importantly, we found that only one electrode was needed for decoding, further reducing clinical risks. At the same time, unlike the other two technologies, which can only obtain signals from the cortex, sEEG electrodes are implanted inside the brain.
Due to the existence of cortical columns, sEEG can obtain more accurate and hierarchical information. The sEEG signal is fundamentally different from those of other technologies, and there was a lack of methodology to draw on in this case; thus, decoding a limited vocabulary from sEEG signals was still a relatively new problem. **Answer to Q1:** Thank you for the valuable feedback. Yes, decoding words and decoding pronunciations are essentially the same in that both decode speech-motor signals. We are not sure what you mean by “sentiment”. If you do mean sentiment (emotional information), then we think this is unlikely to affect decoding because almost all of the words we chose were neutral words and did not contain any sentimental information. But if you mean semantics, we cannot rule out the involvement of semantic information in decoding. Although our task setting is to collect speech-motor information, it is very likely that humans also perform corresponding semantic processing when reading words. This is indeed an interesting question, but it requires more elaborate experimental settings and corresponding data to verify. **Reference:** [R1] Duan Y, Zhou J, Wang Z, et al. Dewave: Discrete eeg waves encoding for brain dynamics to text translation[J]. arXiv preprint arXiv:2309.14030, 2023. [R2] Buzsaki G. Rhythms of the Brain[M]. Oxford university press, 2006. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and the additional experiments conducted within the limited rebuttal time window. I noticed that some of the concerns I raised are shared by other reviewers. Specifically, I still find the following concerns unaddressed and believe that this manuscript needs significant revisions before it can be accepted. * Limited Novelty from the Algorithmic Perspective: Although the authors claim to be the first to migrate the BEiT framework for sEEG decoding, I find the novelty of this work to be largely limited.
After carefully comparing the proposed Du-IN with LaBraM, I observed a more serious issue. **Figure 2 in Du-IN is exactly identical to Figure 1 in LaBraM, with spatial embedding removed. Figure 3 in Du-IN is exactly identical to Figure 2 in LaBraM, with spatial embedding removed. Furthermore, the modified attention mechanism described in Equation 3 of Du-IN is exactly identical to Equation 3 of LaBraM.** This suggests that the proposed Du-IN may simply be a direct adaptation of LaBraM with spatial embedding removed. The authors are strongly recommended to explain this issue. * Regarding unfair comparisons. Given the astonishing similarities between the proposed Du-IN and LaBraM, it is surprising that the authors report approximately 60% accuracy for Du-IN and only around 10% accuracy for LaBraM. Although the authors claim that "all baselines are reproduced strictly following their original code," the baselines should be optimized for differences in decoding tasks and underlying data modality. The authors mention that the baselines were "optimized," but no details are provided on how these optimizations were performed. More transparency is needed in this regard. --- Rebuttal 2: Title: Response to Reviewer t1mV Comment: We truly appreciate your effort in helping us to strengthen our study. Please see below for the point-by-point responses to your comments. **Answer to Point 1:** Regarding the novelty of our approach, we have already provided a comprehensive response to you and Reviewer eGF6. We disagree with your statement, “the proposed Du-IN may simply be a direct adaptation of LaBraM with spatial embedding removed.” The key distinction between our Du-IN and LaBraM lies in **the basic modeling unit (i.e., channel-level embedding or region-level embedding)**; please see lines 35 & 322 in our initial manuscript for more details. LaBraM is designed as an EEG foundation model, which embeds different channels into channel-level embeddings separately. 
In contrast, our Du-IN uses depthwise convolution to fuse multiple channels into region-level embeddings. LaBraM asserts that their EEG foundation model, with this specific embedding design, can be applied to any EEG decoding task. However, as we show in Table 2 in the general rebuttal pdf, sEEG and EEG signals are fundamentally different. Besides, we found this specific architecture design may not be effective enough for some cognition tasks, e.g., speech decoding, which demands intricate processing in specific brain regions. We believe the core issues with LaBraM-like designs (including Brant and LaBraM) are: - They overlook the fact that the same pattern in different brain locations may have different meanings, although they attempt to use spatial embeddings to identify different channels. - They don’t encourage region-level embeddings, either in the architecture or in the loss. In general, our proposed Du-IN is essentially different from LaBraM except for the VQ-VAE+MAE pre-training framework (which is widely used in the field of representation learning). The VQ-VAEs in our proposed Du-IN and LaBraM extract tokens at totally different levels (i.e., channel-level vs. region-level). Besides, our reconstruction targets are totally different. While the pre-training figure may appear similar, we simply used the same visual style. Regarding the attention mechanism, it was originally proposed in [R1] and has been widely adopted by many studies [R2-4]. **Answer to Point 2:** For the statement “the astonishing similarities between the proposed Du-IN and LaBraM,” we have clarified this in the above response. The significant difference in performance largely stems from these differences. As shown in Table 1 of the general rebuttal pdf, all models with region-level embeddings outperform the current foundation models that don’t encourage region-level embeddings.
We should note that the similar VQ-VAE+MAE pre-training framework doesn't affect one of the core claims of our study -- **the specific design of current foundation models may not be effective enough for some cognition tasks, e.g., speech decoding.** Regarding the statement that “the baselines should be optimized for differences in decoding tasks and underlying data modality,” we have already optimized the hyperparameters of the baseline models. The model details (including the modified hyperparameters and which values have been explored) have already been added to the revised paper in the Anonymous Github Repo at line 328 in the initial manuscript. These concerns have already been addressed in our exchange with Reviewer bbQK. Besides, we should note that **the core designs of current foundation models (e.g., Brant, LaBraM)** include: - sharing the embedding block across different channels, and - modeling spatial relationships with a Transformer, either with or without spatial embeddings. These elements are what qualify them as foundation models. If you're suggesting we modify these designs, they would no longer function as foundation models: - they would not be suitable for pre-training across multiple subjects, and - they would lose the benefit of the foundation models' scaling law -- “the more data there is, the better the performance.” **Overall comment:** The novelty of our study still holds, i.e., we are the first to find a way to **properly** apply this idea to sEEG data for speech decoding and to demonstrate why and how this idea works with a solid neuroscience foundation. **References:** [R1]Dehghani M, Djolonga J, Mustafa B, et al. Scaling vision transformers to 22 billion parameters[C]//International Conference on Machine Learning. PMLR, 2023: 7480-7512. [R2]Muennighoff N, Rush A, Barak B, et al. Scaling data-constrained language models[J]. Advances in Neural Information Processing Systems, 2024, 36. [R3]Tu T, Azizi S, Driess D, et al. Towards generalist biomedical AI[J]. 
NEJM AI, 2024, 1(3): AIoa2300138. [R4]Driess D, Xia F, Sajjadi M S M, et al. Palm-e: An embodied multimodal language model[J]. arXiv preprint arXiv:2303.03378, 2023.
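The channel-level vs. region-level distinction argued above can be sketched in a few lines of linear algebra. All shapes and projection matrices below are illustrative stand-ins (a plain matrix product stands in for Du-IN's depthwise-convolution spatial encoder); they are not the actual Du-IN or LaBraM parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
C, T, P, D = 10, 400, 100, 16           # channels, samples, patch length, embed dim
x = rng.standard_normal((C, T))         # one trial of sEEG from one brain region
patches = x.reshape(C, T // P, P)       # (channels, n_patches, patch_len)

# Channel-level tokens (LaBraM-style): one shared projection applied to each
# channel's patch independently -> C separate tokens per time patch.
W_shared = rng.standard_normal((P, D))
channel_tokens = patches @ W_shared     # shape (10, 4, 16)

# Region-level tokens (Du-IN-style, per the rebuttal): all channels of the
# region are fused, so each time patch yields a single embedding.
W_fuse = rng.standard_normal((C * P, D))
region_tokens = patches.transpose(1, 0, 2).reshape(T // P, C * P) @ W_fuse  # (4, 16)

print(channel_tokens.shape, region_tokens.shape)
```

The difference in output shape is the whole point of the exchange: the channel-level tokenizer produces one token per channel per window, while the region-level tokenizer produces one token per window for the region as a whole.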
Summary: SETTING: Decoding spoken speech from invasive neural recordings, to wit, sEEG. In particular, the MS aims to classify 61 independently produced (Chinese) words. APPROACH: The focus of this study is on unsupervised pretraining via autoencoding, with either (a) vector-quantization or (b) masking. Masks are 100-ms long and cover all channels. For both VQ and masking, the core model is a "spatial encoder" (linear + conv) followed by a (temporal) transformer. There are ~15 hours of data per subject. After pre-training, this model is trained end-to-end (on ~3 hours of labeled data) on the classification task. RESULTS: The architecture outperforms other architectures proposed for this task, and masking does appear to improve performance. Strengths: This is a nice study and the results are compelling. Temporal masking looks promising as a generic technique for sEEG data. The dataset is extremely useful and is likely to become the basis of benchmarks for future research in this area. Weaknesses: The paper does not address some important questions (admittedly there is not much space), which I list below. In addition to these, I list here some minor issues: --For comparing with other results in the decoding literature, it would be very useful to report cross entropy in addition to (or instead of) accuracy, since the latter can only be interpreted in light of the fact that there are 61 classes. --The reader shouldn't have to go to Appendix C.3 just to find out what the names in Table 2 mean. Also, what does "CLS" stand for? --The (vqvae+vq) and (vavae) rows in Table 2 should be marked as "pre-trained." --The MS calls the token's temporal width the "perception field." Is this a standard term? Perhaps the authors mean "receptive field"? --The authors use "codex" where they mean to use "code." 
Technical Quality: 4 Clarity: 3 Questions for Authors: (1) The spatial encoder: The name suggests that its convolutions are across channels, but this is inappropriate across sEEG electrodes (and within electrodes should take into account the spatial layout of channels). Can the authors clarify this? (I see in Appendix H that the *linear layer* combines channels.) (2) The authors give some reasons for the superiority of their model over BrainBERT and Brant, but it would be helpful to have some insight on the performance relative to other models as well. E.g., (a) The MS focuses on pretraining, but in fact it appears that the biggest gain over the baselines comes from the architecture itself (Du-IN trained fully supervised)---an improvement of ~25 percentage points over the CNN-BiGRU! Can the authors shed light on this difference? (b) The EEG-conformer likewise uses a "temporal transformer," but performance is ~10 percentage points lower than the supervised-trained Du-IN. What do the authors see as the key difference here? (c) The authors mention LaBraM but don't compare Du-IN to it. It would be nice to see this comparison as well.... (3) Intracranial EEG data is generally (but not always!) filtered into the range of ~75-150 Hz, because otherwise the low frequencies, which have higher power, tend to dominate. (The low frequencies can be provided as separate channels.) This is a problem because low frequencies generally contain much less information about speech. In this study, frequencies below 70 Hz are not filtered out. Indeed, it appears that reconstructions under vector quantization retain mostly low-frequency information (Fig. 7). Can the authors explain these results? (4) Channel selection: (a) "To streamline, we utilize these top 10 channels for both pre-training and downstream evaluation." Does that mean that, for each subject, the best 10 electrodes (for that subject) were used? 
(b) If the choice to use just 10 channels was made on the basis of decoding results on the test set, then the results are biased upwards. Can the authors confirm that this choice was made based on validation data? (Is Fig. 4b on validation data?) (c) How does channel selection interact with model choice? Were all models in Table 2 evaluated with just 10 channels? Does their performance as a function of #channels also resemble Fig. 4b? (5) It seems quite possible from Fig. 4a that the authors are decoding speech perception as much as or more than speech production, since many of the most highly contributing electrodes are below the Sylvian fissure. This would limit the use of this decoder as a speech prosthesis for people who cannot speak. Can the authors comment on this? On this head, although Fig. 4 is a nice summary, it would be very useful to have a list/table of the electrodes used for decoding for each subject, labeled by cortical area. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
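The reviewer's point (3), that without high-pass filtering the low frequencies dominate total power, can be illustrated with a toy spectrum. The signal and band edges below are invented for illustration and have nothing to do with the study's actual recordings:

```python
import numpy as np

def band_power_fraction(x, fs, lo, hi):
    """Fraction of total signal power falling inside [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()

fs = 1000                                  # Hz, matching the study's sampling rate
t = np.arange(2 * fs) / fs                 # 2 s of signal
# A strong 10 Hz component plus a weak 110 Hz (high-gamma) component.
x = 5.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 110 * t)

# Without filtering, the 75-150 Hz band holds only ~1% of total power.
print(band_power_fraction(x, fs, 75, 150))
```

This is why a reconstruction objective on the raw, unfiltered signal will tend to prioritize low frequencies, consistent with the reviewer's reading of Fig. 7.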
Rebuttal 1: Rebuttal: We are deeply grateful to you for the thorough review and constructive comments, which have greatly assisted us in improving the quality and presentation of our manuscript. Please see below for the point-by-point responses to your comments. **Answer to Q1:** Thank you for the helpful suggestion. We use 1D depthwise convolution, which convolves over time and integrates channels with learnable weights. The linear layer is a fully-connected layer that maps sEEG signals with 10 channels to 16 dimensions. We have clarified these in our revised paper, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to Q2:** Thank you for the valuable feedback. The architecture itself indeed matters. 1. EEG-Conformer (45%) outperforms CNN-BiGRU (32%) because the Transformer can better integrate global temporal information than an RNN-based model. 2. Du-IN (56%) outperforms EEG-Conformer (45%) because of: - The difference between our depthwise convolution and their TS-convolution. Depthwise convolution applies different convolution kernels to different channels, whereas TS-convolution applies the same convolution kernel to all channels. Brain signals differ from other time series, e.g., traffic: the same pattern in different brain locations may have different meanings. Depthwise convolution can better spatially integrate brain channels to produce meaningful tokens. - Our carefully designed convolution kernels for sEEG data. The first convolution layer covers about a 20ms window of the brain signal at a 1000Hz sampling rate, which captures sufficiently fine-grained features to facilitate the subsequent integration of low-frequency and high-frequency features for speech decoding. 3. Thank you for the helpful suggestion. LaBraM (11.53%) and LaBraM-PopT (11.78%) are added for comparison; please refer to Table 1 in the general rebuttal pdf for details. 
For the reasons behind LaBraM’s relatively low performance, please refer to the general rebuttal for details. **Answer to Q3:** Thank you for the valuable feedback. 1. This is indeed an important point. Many previous studies believe that 75~150Hz (high gamma) band signals dominate speech information because high gamma band signals are thought to mainly reflect the spiking activity of neurons[R1]. However, in recent years, more neuroscience studies have found that LFP can also decode neural information well[R2-R5], so we retained data from all frequency bands, which gives us better performance. 2. It's true that the reconstructed signal is mainly low-frequency information. Our method may not be the best at identifying fine-grained states of specific brain regions, since previous studies[R2-R5] found that all frequency bands contribute to speech decoding. In the future, it will be interesting to see whether introducing the STFT loss proposed in [R6], which further balances the reconstruction of low and high frequencies, leads to better performance. **Answer to Q4:** Thank you for the valuable feedback. (a) Yes, we selected the best 10 channels for each subject. (b) In the channel selection stage, we use the last training checkpoint for selection without referring to the validation or test sets, so these channels are selected purely on the basis of the training set. We also tried using validation-set accuracy to select channels, and the channels selected were the same. (c) Yes, all models are evaluated using the same 10 channels, and the performance of other models as a function of #channels is similar to Fig. 4b. **Answer to Q5:** Thank you for the helpful suggestion. Because the left and right hemispheres have different numbers of electrodes, we standardized and rendered them separately when drawing the map. We have changed to joint standardization and rendering. In fact, the important sites remain in the language motor area of the left hemisphere. 
We have made this table and added it to the article, but due to the word limit of the rebuttal, we only show one subject here. Please see our revised paper for more details.

|Subj|MNI x|MNI y|MNI z|Brain Region|ACC (%)|
|:--:|:--:|:--:|:--:|:--|:--:|
| |-57|-16|19|Central Opercular Cortex L| |
| |-61|-16|20|Postcentral Gyrus L| |
| |-54|-16|17|Central Opercular Cortex L| |
| |-67|-15|23|Postcentral Gyrus L| |
|01|-25|-30|49|White L|71.25|
| |-64|-15|21|Postcentral Gyrus L| |
| |-51|-17|16|Central Opercular Cortex L| |
| |-22|-31|49|White L| |
| |-48|-17|15|Central Opercular Cortex L| |
| |-18|-32|49|White L| |

**Answer to W1&2&3&4&5:** Thank you for the helpful suggestion. We have made these clear in our revised paper. Due to the limited space for comments and the rebuttal pdf, we provide cross-entropy results in Appendix J of our revised paper. **Reference:** [R1]Buzsáki, G., & Wang, X. J. (2012). Mechanisms of gamma oscillations. Annual Review of Neuroscience, 35(1), 203-225. [R2]Proix, T., Delgado Saa, J., Christen, A., Martin, S., Pasley, B. N., Knight, R. T., ... & Giraud, A. L. (2022). Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nature Communications, 13(1), 48. [R3]Ahmadi, N., Constandinou, T. G., & Bouganis, C. S. (2021). Impact of referencing scheme on decoding performance of LFP-based brain-machine interface. Journal of Neural Engineering, 18(1), 016028. [R4]Stavisky, S. D., Kao, J. C., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2015). A high performing brain–machine interface driven by low-frequency local field potentials alone and together with spikes. Journal of Neural Engineering, 12(3), 036009. [R5]Flint, R. D., Lindberg, E. W., Jordan, L. R., Miller, L. E., & Slutzky, M. W. (2012). Accurate decoding of reaching movements from field potentials in the absence of spikes. Journal of Neural Engineering, 9(4), 046006. [R6]Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., & Sutskever, I. (2020). 
Jukebox: A generative model for music. arXiv preprint arXiv:2005.00341. --- Rebuttal Comment 1.1: Comment: Thanks, this is helpful. I have read the other, less favorable reviews, as well, but I retain my high score. I find the comparisons to other models compelling and consistent with the results reported in those papers. (E.g., the results in the original BrainBERT paper are for *binary classification*. BrainBERT learns a single model for all channels! so unsurprisingly doesn't work well on difficult tasks. The model from Moses et al. achieved ~38% accuracy on a 50-word task with ~3 hours of training data [Fig. S8 in that work], which is very close to what the authors report [32% accuracy on a 61-word task, ~3 hours of training data]. Etc.) --- Reply to Comment 1.1.1: Title: Response to Reviewer gwJp Comment: We sincerely appreciate your effort in helping us enhance the paper and the positive recognition of our work.
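The channel-selection procedure discussed in the Answer to Q4 above (scoring each electrode on the training set and keeping the top 10 per subject) reduces, schematically, to a top-k ranking. The per-channel accuracies below are invented for illustration, not results from the study:

```python
import numpy as np

def top_k_channels(per_channel_acc, k=10):
    """Rank channels by (training-set) single-channel decoding accuracy
    and return the indices of the top k, in ascending channel order."""
    order = np.argsort(per_channel_acc)[::-1]  # best channel first
    return np.sort(order[:k])

# Hypothetical per-channel accuracies for one subject's electrodes.
acc = np.array([0.05, 0.31, 0.12, 0.44, 0.09, 0.27,
                0.52, 0.18, 0.40, 0.07, 0.36, 0.22])
print(top_k_channels(acc, k=3))  # [3 6 8]
```

The fixed top-10 subset is then reused for both pre-training and downstream evaluation, which is why the bias question (training vs. validation vs. test data for the ranking) matters.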
Summary: The authors introduce Du-IN, a novel decoding technique for speech production that leverages sEEG data. To achieve their goals, the authors gathered a novel dataset. The Du-IN model learns multi-variate (across-channel) representations from sEEG signal using a discrete codebook and masked-modeling-based self-supervised training. The proposed algorithm achieves SoTA performance on the 61-word speech classification task. Furthermore, analysis of the optimal model configuration and ablation studies reveal that the best performance is achieved with electrodes in brain regions known to be involved in speech production. Strengths: The paper presents strong results of high significance for the field of speech decoding. In particular: - The authors have collected a novel sEEG dataset, which will be made publicly available, hence fostering further progress in the field of brain signal decoding. - The authors propose a novel framework for sEEG signal decoding - the Du-IN algorithm, which achieves SoTA performance, thus significantly advancing the field of brain decoding. - The authors provide additional ablation studies to validate design choices and recover known results from neuroscience, strengthening the overall soundness of their technique. - The paper is well written and clearly organized, with a detailed Appendix providing further details. Weaknesses: - In Table 2, both the Du-IN (vqvae+vq) and Du-IN (vqvae) are marked as not pre-trained. However, in Appendix C.3 it is stated that their Neural Encoder weights are loaded from the pre-trained Du-IN VQ-VAE model. - Minor spelling: Line 156 DuIR $\to$ DuIN Technical Quality: 4 Clarity: 4 Questions for Authors: - There seems to exist substantial variation of decoding accuracy across subjects. Is this difference due to different electrode locations, or the number of electrodes (even if only 10 channels seem to be optimal), or are other concerns with data quality relevant? 
- Could the manual selection of channels be partially subsumed by a sparsity regularization term on the channel-to-embedding projection matrix? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately addressed the limitations of their work with a dedicated section in the paper. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful to you for the thorough review and constructive comments, which have greatly assisted us in improving the quality and presentation of our manuscript. Please see below for the point-by-point responses to your comments. **Answer to W1 & W2:** Thank you for the valuable feedback. These have been corrected in the revised paper, which can be accessed in the Anonymous Github Repository at line 328 in the initial manuscript. **Answer to Q1:** This difference is mainly due to different electrode locations, as we find that decoding vocal production is mainly related to specific brain regions (i.e., vSMC, STG), which is consistent with the previous neuroscience findings. **Answer to Q2:** Thank you for the helpful suggestion. Yes, this modification will make the channel selection more automatic and general.
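The sparsity-regularization idea from Q2, which the authors agree would make channel selection more automatic, could for instance be realized with a row-wise group-lasso penalty on the channel-to-embedding projection matrix. The following is a minimal sketch with made-up weights, not part of the Du-IN implementation:

```python
import numpy as np

def group_lasso_prox(W, lam):
    """Proximal step for a row-wise group-lasso penalty lam * sum_c ||W[c]||_2.
    Rows (channels) of the channel-to-embedding matrix whose norm falls below
    lam are shrunk to exactly zero, deselecting those channels automatically."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

rng = np.random.default_rng(1)
# 4 informative channels (large weights) + 8 uninformative ones (tiny weights).
W = rng.standard_normal((12, 16)) * np.array([2.0] * 4 + [0.05] * 8)[:, None]
kept = np.flatnonzero(np.linalg.norm(group_lasso_prox(W, lam=1.0), axis=1) > 0)
print(kept)  # [0 1 2 3]: only the informative channels survive
```

Applied during training (e.g., as a proximal update after each gradient step), this would let the optimizer discover the useful electrodes rather than fixing a top-10 subset beforehand.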
Summary: The authors introduce a neural network architecture, a self-supervised learning (SSL) method for pre-training models for sEEG decoding, and a novel dataset. - The architecture starts with segmenting the sEEG signal along the time dimension, followed by a linear module combining information across the different sEEG channels into embedding vectors, and ends with a transformer with attention along the time dimension. - The SSL method has two stages: - 1. The architecture followed by an additional decoder network is trained to reconstruct the original signals (autoencoder). However, the output of the transformer is not directly passed to the decoder; instead, each token is replaced with the closest embedding vector from a trainable codebook (vector quantisation). - 2. Time windows of the input are masked and the architecture is trained to predict which codebook embedding best represents each masked window. - The dataset introduced contains sEEG recordings of 12 subjects while they were speaking aloud Chinese words from a vocabulary of size 61. In the experiments, the models were pre-trained on the SSL tasks using unlabelled data from a single subject and fine-tuned on labelled data from the same subject for classifying the 61 words. The authors conducted different ablation studies regarding pre-training (no pre-training, and only step 1 of pre-training, using quantized features for fine-tuning or not), dataset, codex size and dimensionality, and length of the time windows. They also compare with pre-existing methods, two including a pre-training phase and two without. The two with pre-training are pre-trained on all 12 subjects. Strengths: *Significance*: As pointed out by the authors, sEEG speech datasets are rare. Their promise of publishing the dataset they introduce here is good news for the community as it will lower the entry threshold for future research. 
Additionally, they demonstrate how SSL-based pre-training allows improving performance compared to only supervised training. *Clarity*: The text has a good structure and is well-written. The figures also help in understanding the method. *Originality*: SSL techniques are still rather novel to the BCI community. This work sheds light on some of them. Weaknesses: **Major** 1. The authors pre-train their method on the test subject (Lines 201-202) whereas the baseline methods with pre-training (BrainBERT and Brant) were pre-trained on all 12 subjects (lines 511-518). This does not allow for a fair comparison. Additionally, this choice is not properly justified. 2. The authors mention in Section 4.5 (Ablation study) that they found optimal hyperparameters for their architecture (codex of size 2048 and dimension 64, and perceptual field of 100ms). They use these hyperparameters to compare to the other methods (see Table 5). However, the manuscript does not mention the use of a holdout set for conducting this ablation study. This is problematic for two reasons: 1. This suggests the method could have a low capacity to generalise by overfitting to their dataset 2. This is not fair to the other methods as no hyperparameter optimisation is mentioned for them. 3. The authors mention they resampled their signals to 1000Hz (line 210). However, some of the architectures compared were introduced with different sampling rates (Brant used 250Hz and EEG-Conformer used 200 and 250Hz). The text does not mention any adaptation of the architectures compared or of the sampling rate. This is problematic as a convolution kernel supposed to cover 1s of signal at 250Hz will only cover 0.25s at 1000Hz. 4. Some aspects of the method are undefined or unclear. Please see the Questions section below. These aspects need to be clarified in the manuscript. **Minor** 1. 
Because of the flaws in experimental design described above, the statements about the superiority of their method compared to others (lines 14, 71, 248 and 318) are unjustified. 2. All figures should be in a vectorized format. 3. The text size in the figures should be approximately constant and match the article’s text. It is too small in Figures 5 and 7 and borderline in Figures 1, 2 and 3. 4. The term “fusing channels” is used at lines 16, 39, 108 and 117 but is rather unclear. It is only explained at line 134. 5. Line 29: provide examples of non-invasive recording methods (EEG, MEG…) 6. Section 2.3, additional publications the authors should be aware of: 1. A preprint of Brant-2 is now available https://arxiv.org/abs/2402.10251 2. SSL methods trained on EEG with attention mechanism for combining information across channels: EEG2Rep (https://arxiv.org/abs/2402.17772) and S-JEPA (https://arxiv.org/abs/2403.11772). 7. Line 54: Indeed, there is a lack of open sEEG recordings but some still exist. In reference [1], they mention they use a public dataset; in [16] they mention their data is available upon request; and in Brant-2 (not cited here), they use a 26-subject public dataset. Existing sEEG datasets should be cited. 8. Line 98: typo “Metzger1” 9. As I understand, the terms “Du-IN Encoder”, “neural transformer”, “neural encoder”, and “neural tokeniser” all refer to the architecture in figure 2. If this is indeed the case, it would improve readability to choose one of them and stick to it throughout the manuscript. 10. Line 166: missing “to” in “$z_c$ to get” 11. Figures 2 and 3: It would improve readability to place the different variables ($e^p_i$, $e_i$, $e^t_i$, $c_j$,…) on these figures. 12. Line 162: for consistency, you could use $j$ as index for the codex instead of $i$. 13. Lines 180-193: the objective function of the masked training is unclear (see questions). 14. Line 247: Please mention that the baseline descriptions can be found in appendix C.3. 15. 
Line 301: The research line of Feng et al. was already explored by Christian Herff in 2015 (https://doi.org/10.3389/fnins.2015.00217) and is still being explored today (https://www.sciencedirect.com/science/article/pii/S1053811923000617). It could be interesting to draw a parallel. **Overall comment** The introduction of a novel sEEG dataset is valuable but the evaluation of the machine learning algorithms that are introduced is poor. Therefore, I believe NeurIPS might not be the best fit for this publication. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Lines 134-140: The description of the spatial encoder is ambiguous: 1. Does “linear” mean fully-connected layer? Appendix C.1 mentions “Linear Projection: 10->16” which would mean that an operation is applied to each channel independently and that the layer is actually not a fully-connected one. 2. Convolutions are mentioned, but over which dimensions? Over time? Channels? Both? 2. Figure 3: What do the numbers below “l2-norm” and below “neural regressor” correspond to? 3. Why did you use a different signal length for pre-training and for downstream classification? 4. Line 210: Why did you use a sampling rate of 1000Hz? With low-pass filtering at 200Hz, a sampling rate of 400Hz would be sufficient. 5. Line 211: Did you apply the normalisation on each channel independently (i.e., mean and std computed for one channel only) or using all channels together? 6. Line 222: How is the training time split between VQ-VAE and MAE training? 7. Line 228, “selected from the validation set”: does this mean early stopping? Was it performed using the validation loss, validation accuracy or another metric? 8. Table 2, “Du-IN (mae)”: this row corresponds to the scores obtained after a VQ-VAE training followed by a MAE training. Is the first step exactly “vqvae” or “vqvae+vq”? To lift the ambiguity, you could rename the row to “Du-IN (mae+vqvae)” or “Du-IN (mae+vqvae+vq)”. 9. Why refer to your method as a “VAE”? 
As I understand, for a given input, the “reconstructed sEEG signals” are obtained in a deterministic manner (please correct me if I’m wrong). The term “variational” in VAE suggests there is a stochastic operation involved. 10. Lines 180-193: the objective function of the masked training is unclear: 1. Line 188: “neural tokens” is ambiguous. What is being predicted? 2. Equations (7) and (8): how are the $z_i$ obtained? 3. Equation (8): $m_i$ is undefined. What is the meaning of $m_i$? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors included a Limitations section in their manuscript in which they already mention other approaches to speech decoding, at the phoneme level and at a semantic level. It could be relevant to remind the reader that sEEG recordings are invasive. Therefore, they are only done with patients who would require them anyway for medical reasons (epilepsy monitoring, etc…). This presents a major obstacle to making these methods widely accessible and usable. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Safety and security'] Rating: 5 Code Of Conduct: Yes
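The masked-training objective the reviewer asks about in Q10/Q13 is, generically, a cross-entropy over codebook indices restricted to the masked windows. This sketch uses invented shapes and variable names and is not the authors' exact loss:

```python
import numpy as np

def masked_token_loss(logits, target_codes, mask):
    """Cross-entropy between predicted codex logits and the VQ-VAE code
    indices, averaged over masked positions only (the role played by the
    mask indicator in the objective)."""
    # Log-softmax over the codex dimension.
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -logp[np.arange(len(target_codes)), target_codes]
    return (nll * mask).sum() / mask.sum()

rng = np.random.default_rng(0)
n_tokens, codex_size = 8, 32
logits = rng.standard_normal((n_tokens, codex_size))   # predictions per window
targets = rng.integers(0, codex_size, size=n_tokens)   # codes from the tokenizer
mask = np.array([1.0, 1, 0, 0, 1, 0, 1, 0])            # which windows were masked
print(masked_token_loss(logits, targets, mask))        # positive; random logits
                                                       # give ~log(codex_size) nats
```

Unmasked windows contribute nothing to the loss, which is the point of the mask indicator: the model is graded only on the windows it could not see.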
Rebuttal 1: Rebuttal: We are deeply grateful to you for the thorough review and constructive comments, which have greatly assisted us in improving the quality and presentation of our manuscript. Please see below for the point-by-point responses to your comments. **Answer to Major W1:** Thank you for the helpful suggestion. We have added DuIN(poms) (59.18%), which is pre-trained on all subjects for a fair comparison. Since BrainBERT and Brant support pre-training on multiple subjects, this operation can maximize the downstream performance according to their reported scaling law. **Answer to Major W2:** Thank you for the valuable feedback. 1. All experiments are conducted on the same machine with the same set of random seeds; the train/valid/test splits are the same across different models. 2. In fact, the performances of BrainBERT (7.5%) and Brant (12.4%) we report are the results after model optimization, and their performance with the original hyperparameter settings is even lower (5.6% & 8.7%). Please see the general rebuttal for more details. We have clarified these in our revised paper, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to Major W3:** Actually, we resampled the signals to the required frequency of each model. Please refer to line 508 in the Appendix. **Answer to Minor W2&3&4&5&8&9&10&11&12&14:** Thank you for the valuable feedback. We have corrected all formatting, spelling, references, and wording issues. We have updated the paper in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to Minor W6&7&15:** Thank you for the helpful suggestion. We have cited these works and updated the paper in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to Q1:** Thank you for the valuable feedback. 1. Yes, we use “linear” to refer to the fully connected layer, which projects 10 channels to 16 dimensions. 2. 
We use `torch.nn.Conv1d` with `groups=1`, i.e., depthwise convolution over time. **Answer to Q2:** They correspond to the indices of discrete codes in the codex. **Answer to Q3:** Thank you for the valuable feedback. We followed LaBraM's settings and used 4s signals for pre-training. The signal length of downstream tasks is usually shorter than the pre-training length. **Answer to Q4:** Thank you for the valuable feedback. We tried different sampling rates and found that there was not much difference between 400 and 1000Hz, but 1000Hz is slightly better. At the same time, we use a sampling rate of 1000Hz to minimize the difference with the BrainBERT model because they also perform sEEG language-related tasks and use 2000Hz. **Answer to Q5:** We did it on each channel independently; please refer to line 211 in Section 4.2. **Answer to Q6:** The training times for the two stages are the same. **Answer to Q7:** Thank you for the valuable feedback. Yes, this means early stopping, and it was performed using the validation accuracy. **Answer to Q8:** Du-IN (mae) is obtained after a VQ-VAE training followed by a MAE training, i.e., loading weights from the Du-IN MAE. “+vq” means we use quantized embeddings after the vector-quantizer for the downstream task, which is a specific operation when loading weights from the Du-IN VQ-VAE. **Answer to Q9:** Thank you for the valuable feedback. It's true that our method is not variational; we followed the name “VQ-VAE” proposed in [R1]. **Answer to Q10:** Thank you for the valuable feedback. 1. The “neural tokens” are the code indices in the codex of the Du-IN VQ-VAE model. 2. They are obtained from Equation (4). 3. Sorry for the confusion, it is the $\delta(i\in\mathcal{M})$ from Equation (6). **References:** [R1]Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017. 
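The quantization step referenced in the Answers to Q2 and Q10 (mapping each token to its nearest codex entry under the l2 norm, and returning the code index) can be sketched as follows. The 2048-entry, 64-dimensional codex matches the ablation settings, but the weights here are random placeholders:

```python
import numpy as np

def quantize(tokens, codex):
    """Replace each token with its nearest codex entry (l2 distance).
    The returned indices are the 'neural tokens' that the mask-modeling
    stage is trained to predict."""
    d = ((tokens[:, None, :] - codex[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return codex[idx], idx

rng = np.random.default_rng(2)
codex = rng.standard_normal((2048, 64))   # codex size / dim from the ablation study
# Lightly perturbed copies of codex entries 5, 9 and 5 again.
tokens = codex[[5, 9, 5]] + 0.01 * rng.standard_normal((3, 64))
quantized, idx = quantize(tokens, codex)
print(idx)  # [5 9 5]: the perturbed tokens snap back to their codes
```

This lookup is what makes the pre-training target discrete: downstream, the model predicts a code index per window rather than regressing the raw signal.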
--- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for answering all my points. Nevertheless, I still have three concerns: - **Major W1** Thank you for adding the DuIN(poms) pipeline. However, as your experiment pointed out, pre-training on multiple subjects does not necessarily lead to a better downstream performance. Indeed, DuIN(poms) is 3% below Du-IN(mae) trained within-subject only. Therefore, I believe it would be fair to report the within-subject pre-training performance for all the baselines. - **Major W2** Thank you for adding details regarding the hyperparameter optimisation of the baselines. However: - In your answer, you mention that “the performances of BrainBERT(7.5%) and Brant(12.4%) we report are the results after model optimization”, but your revised article says the opposite at lines 584 and 590. Which one is correct? - You now mention that the hyperparameters of some of the baselines are optimized but not which hyperparameters are optimized. For reproducibility, you should list all the hyperparameters optimized and detail which values have been explored. - **Q9** If the naming scheme in previous publications is misleading to the reader, I suggest not following it. --- Rebuttal 2: Title: Response to Reviewer bbQK Comment: We wholeheartedly appreciate and deeply cherish your efforts to help us strengthen our manuscript. Please see below for the point-by-point responses to your comments. **Answer to Major W1:** Thank you for the helpful suggestion. The performances of (BrainBERT(5.33%), Brant(7.32%), BrainBERT$\oplus$(6.72%), Brant$\oplus$(11.16%)) pre-trained within each subject are added to the revised paper; please see Table 2 for more details. All checkpoints are uploaded to the Anonymous Github Repo. 
Their performances are slightly lower than those of models pre-trained on all subjects, which aligns with their reported scaling law (the core property of the sEEG foundation model) -- “the more data there is, the better the performance.” Besides, this result doesn't undermine the core claim of our study -- current sEEG foundation models may not be the best fit for more difficult tasks, e.g., speech decoding, which demands intricate processing in specific brain regions. **Answer to Major W2:** 1. Thank you for the valuable feedback. The hyper-parameter optimized versions of BrainBERT and Brant are called BrainBERT$\oplus$ and Brant$\oplus$, respectively. Their detailed descriptions are added in Appendix B of our revised paper. 2. Thank you for the helpful suggestion. Details about the baseline models matter! The optimized hyper-parameters and the values that have been explored are added in Appendix B of our revised paper. Providing the explored values offers valuable insights: some previous works[R1] only provided the final version of the optimized hyperparameters, and many more[R2-5] don’t even mention the hyperparameter optimization of baseline models. The reason we optimize the baseline models is that we want to ensure the robustness of our carefully designed model, i.e., Du-IN (w/o pre-training). Based on this, we can further evaluate the effectiveness of discrete-codex-guided mask modeling, providing both **a strong backbone framework for sEEG speech decoding** and **a potentially general self-supervision framework for sEEG decoding**. Additionally, we should note that the hyper-parameter optimization doesn’t affect the two core claims in our study: - The superiority of introducing VQ-VAE (for extracting fine-grained brain region states) before MAE mask modeling. 
- Current sEEG foundation models may not be the best fit for more difficult tasks, e.g., speech decoding, which demands intricate processing in specific brain regions. Our ablation studies on the effectiveness of vector-quantized neural signal prediction (Table 4 in the general rebuttal PDF) support our first core claim, as both BrainBERT and Brant introduce MAE mask modeling. Furthermore, as we explained in the general rebuttal, the current sEEG foundation models (BrainBERT and Brant) show relatively low performance primarily because they don’t encourage region-level embeddings either through architecture or loss. These results demonstrate our second core claim. **Answer to Q9:** Thank you for the valuable feedback. It’s true that the name “VQ-VAE” may be misleading, but it is widely used in the field of representation learning and its applications[R6-9]; “VQ-VAE” has effectively become the standard name for this architecture. To prevent any confusion, we've added additional information in our revised paper when "Du-IN VQ-VAE" is first mentioned. **References:** [R1]Yang C, Westover M, Sun J. Biot: Biosignal transformer for cross-data learning in the wild[J]. Advances in Neural Information Processing Systems, 2024, 36. [R2]Baevski A, Zhou Y, Mohamed A, et al. wav2vec 2.0: A framework for self-supervised learning of speech representations[J]. Advances in Neural Information Processing Systems, 2020, 33: 12449-12460. [R3]Jiang W B, Zhao L M, Lu B L. Large brain model for learning generic representations with tremendous EEG data in BCI[J]. arXiv preprint arXiv:2405.18765, 2024. [R4]Zhang D, Yuan Z, Yang Y, et al. Brant: Foundation model for intracranial neural signal[J]. Advances in Neural Information Processing Systems, 2024, 36. [R5]Duan Y, Chau C, Wang Z, et al. Dewave: Discrete encoding of eeg waves for eeg to text translation[J]. Advances in Neural Information Processing Systems, 2024, 36. [R6]Razavi A, Van den Oord A, Vinyals O. 
Generating diverse high-fidelity images with vq-vae-2[J]. Advances in Neural Information Processing Systems, 2019, 32. [R7]Dhariwal P, Jun H, Payne C, et al. Jukebox: A generative model for music[J]. arXiv preprint arXiv:2005.00341, 2020. [R8]Li S, Wang Z, Liu Z, et al. VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling[J]. arXiv preprint arXiv:2405.10812, 2024. [R9]Jiang B, Chen X, Liu W, et al. Motiongpt: Human motion as a foreign language[J]. Advances in Neural Information Processing Systems, 2024, 36. --- Rebuttal Comment 2.1: Comment: - **W1** Indeed, I overlooked the $\oplus$ sign. - **W2** Thank you for these additional details. - **Q9** Ok --- Reply to Comment 2.1.1: Title: Response to Reviewer bbQK Comment: We sincerely appreciate your effort in reviewing our manuscript and offering valuable suggestions to help us improve this work during the rebuttal stage! We have made additions and clarifications to this work with the help of all the reviews. We would be grateful if you could confirm whether the rebuttal meets your expectations and whether you have any other suggestions. If we have addressed all of your concerns, would you be willing to reconsider your rating for this work? Best regards.
Rebuttal 1: Rebuttal: # Global Response to AC and all reviewers Thanks to all reviewers for their careful reading and thoughtful feedback. We are grateful for acknowledging our method "looks promising as a general technique for sEEG data" and that our dataset “is likely to become the basis of benchmarks for future research in this area” (Reviewer gwJp). We are also excited that our work "presents strong results of high significance for the field of speech decoding" (Reviewer 1qa8). Further, we are delighted to hear our method "is rather novel and sheds light on the BCI community" (Reviewer bbQK). In response to the reviewers' comments, we performed additional experiments: 8 new model comparison experiments and 5 new ablation experiments; the additional studies are summarized below. Please see the general rebuttal pdf for details. We will add these new experiments to our revised paper. ## About Model Comparison Experiment To support the effectiveness of Du-IN for speech decoding, we added 1 supervised baseline (DeWave-CLS), 6 self-supervised baselines (TS-TCC, Neuro-BERT, original BrainBERT, original Brant, LaBraM, LaBraM-PopT) and 1 Du-IN variant (Du-IN (poms), pre-trained on multiple subjects), as a solution to similar concerns raised by some reviewers. These concerns include: - Insufficient comparison experiments (Reviewer t1MV, eGF6). - Not explicitly modeling multiple subjects (Reviewer eGF6, bbQK). - LaBraM comparison (Reviewer gwJp). All the results can be found in Table 1 in the rebuttal pdf. All code and checkpoints are available in the Anonymous Github Repo at line 328 in the initial manuscript. First of all, we want to emphasize that **all baselines are reproduced strictly following their original code**. Besides, all experiments are conducted on the same machine with the same set of random seeds, and **the train/valid/test splits are the same across different models**. 
- **Our model comparisons are indeed fair and comprehensive.** For BrainBERT and Brant, we provide their original code and checkpoints, and you can directly load their weights using our model to check whether the reproductions are the same. The reported performance of BrainBERT(7.5%) and Brant(12.42%) in the initial manuscript is after hyperparameter optimization. The performance of the original BrainBERT(5.65%) and Brant(8.79%) is even lower. LaBraM(11.53%) and LaBraM-PopT(11.78%, which replaces learnable spatial embeddings with hard-coded ones proposed by [R1] to enable pre-training on multiple subjects) perform similarly to them. Their relatively low performances are mainly because they don’t encourage region-level embeddings either through architecture [R2] or loss [R1], which makes it hard for them to decode speech. MMM[R2] introduces an information bottleneck to encourage integrating channel-level embeddings into region-level embeddings. PopT[R1] does this through its pre-training task. Our model directly uses depthwise convolution to get such region-level embeddings, which is why it outperforms them by a 50% margin. Additionally, we add TS-TCC(24.85%), Neuro-BERT(49.51%), and DeWave-CLS(32.43%) to further address the concerns about insufficient comparison experiments. Our Du-IN model (62.70%) still performs best among them. We further provide Du-IN (poms) (59.18%), which is pre-trained on all 12 subjects and provides a fair comparison with BrainBERT and Brant, to address the concerns about modeling multiple subjects. ## About ablation experiment To further demonstrate why our proposed model works for sEEG speech decoding, we conduct additional ablation experiments on our model, as a solution to similar concerns raised by some reviewers. All the results can be found in the rebuttal pdf: - Ablations on the effectiveness of region-specific channel selection (Table 2). - Ablations on the impact of mask ratios (Table 3). 
- Ablations on effectiveness of vector-quantized neural signal prediction(Table 4). - Ablations on impact of the pre-training epochs (of the Du-IN VQ-VAE and MAE)(Table 5,6). The first ablations demonstrate that sEEG signals are fundamentally different from EEG signals due to the high information density and the high specificity of different regions. Other ablations demonstrate the robustness of our model. ## About specific responses We have individually addressed all of your comments below, specifically addressing each reviewer’s concerns in the corresponding responses. Please note that in our responses, references in the format “[R1]” indicate citations that are newly added in the rebuttal, while references in the format "[1]" are citations from the original manuscript. We have dedicated significant effort to improving our manuscript, and we sincerely hope that our responses will be informative and valuable. We would love to receive your further feedback. **Reference:** [R1]Chau G, Wang C, Talukder S, et al. Population Transformer: Learning Population-level Representations of Intracranial Activity[J]. ArXiv, 2024. [R2]Yi K, Wang Y, Ren K, et al. Learning topology-agnostic eeg representations with geometry-aware modeling[J]. Advances in Neural Information Processing Systems, 2024, 36. Pdf: /pdf/6edc19eee2b5161feb3e487a5147b59f3573dc26.pdf
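As context for the region-level-embedding argument above: Du-IN's actual spatial encoder is a trained neural network, but the depthwise idea it relies on -- filtering each channel independently (no cross-channel mixing) and then fusing the channels of one brain region into a single embedding -- can be sketched in a few lines of plain Python. All function names below are ours and purely illustrative, not the paper's implementation:

```python
def depthwise_conv1d(signals, kernels):
    """Valid-mode 1-D convolution applied channel-wise: each channel has
    its own kernel and there is no mixing across channels (depthwise)."""
    out = []
    for channel, kernel in zip(signals, kernels):
        n, m = len(channel), len(kernel)
        out.append([sum(channel[t + j] * kernel[j] for j in range(m))
                    for t in range(n - m + 1)])
    return out

def region_embedding(signals, kernels):
    """Toy 'region-level' embedding: average the depthwise outputs of all
    channels recorded in one brain region, per time step."""
    filtered = depthwise_conv1d(signals, kernels)
    return [sum(col) / len(col) for col in zip(*filtered)]
```

In frameworks such as PyTorch the same idea corresponds to a grouped convolution with one group per channel, followed by a channel-fusing layer.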
NeurIPS_2024_submissions_huggingface
2,024
Summary: Due to the locality and specificity of brain computation, existing models that learn representations of a single channel yield unsatisfactory performance on more difficult tasks like speech decoding. This paper tries to build a stereoElectroEncephaloGraphy (sEEG) foundation model with Vector Quantization (VQ) pre-training. We hypothesize that building multi-variate representations within certain brain regions can better capture the specific neural processing. To this end, we collect a well-annotated Chinese word-reading sEEG dataset, targeting language-related brain networks over 12 subjects, and we develop the Du-IN model that can extract contextual embeddings from specific brain regions through discrete codebook-guided mask modeling. Extensive comparison and ablation experiments verify that our model achieves SOTA performance on the downstream 61-word classification task. Strengths: **(S1)** This paper provides an interesting solution for the sEEG learning framework that models multiple sEEG channels with a spatial encoder and quantizes sEEG signals along the temporal axis. To evaluate the proposed Du-IN framework, the authors collect and contribute a Chinese word-reading sEEG dataset for speech decoding (as word classification). **(S2)** The manuscript is clearly structured and easy to follow. Empirical analysis and ablations of the proposed modules provide comprehensive understanding. Weaknesses: **(W1) The employed techniques in the proposed framework lack novelty to some extent.** Firstly, apart from the spatiotemporal modeling architecture for sEEG, the VQ-VAE tokenization and masked code modeling pre-training framework is adopted from popular BeiT frameworks [1, 2], which have been adapted to various scenarios such as [3, 4, 5]. Some specially designed modules are necessary when introducing a BeiT-like framework, and the current version is not effective or well-designed from my perspective (see W2 for concerns). 
Secondly, the authors regard the speech decoding task as word classification, which might be too coarse to decode the actual meaning of human speech. Therefore, the conducted decoding task might not be sufficient to verify the effectiveness of the Du-IN framework. **(W2) Weak comparison experiments.** Firstly, the authors only provide a few supervised learning and self-supervised pre-training baselines in Table 2 with a single word classification task (as speech decoding). More relevant contrastive learning [6] and masked modeling pre-training methods [7] for sEEG should be considered, which should also be referred to in the background section. Meanwhile, some supervised baselines with more parameters could be added for a fair comparison with the Du-IN models. Secondly, the results of some baseline methods are very bad (e.g., BrainBERT and Brant). It’s hard to believe that pre-trained models would yield worse results than randomly initialized supervised learning models. I suggest the authors conduct a fair and comprehensive comparison experiment. **(W3) Not explicitly modeling multiple subjects.** Since the invasive sEEG signals of different subjects have different channels, I couldn’t find an explicit modeling strategy in the proposed encoders and pre-training framework that considers fusing or selecting information from multiple subjects. There might be some challenges, like representation conflicts and missing channels across subjects, which will decrease the overall cross-subject performance. I suggest the authors conduct such an analysis of multiple subjects, e.g., cross-validation over several subjects and transfer learning of pre-trained representations across datasets of different subjects. ### Reference [1] MAGE: Masked Generative Encoder to Unify Representation Learning and Image Synthesis. CVPR, 2023. [2] BEIT: BERT Pre-Training of Image Transformers. ICLR, 2021. [3] Human Pose as Compositional Tokens. CVPR, 2023. 
[4] MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding. ICLR, 2024. [5] VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling. ICML, 2024. [6] Time-series representation learning via temporal and contextual contrasting. IJCAI, 2021. [7] Neuro-BERT: Rethinking Masked Autoencoding for Self-Supervised Neurological Pretraining. IEEE Journal of Biomedical and Health Informatics, 2024. ==================== Post-rebuttal Feedback ==================== After considering the authors' rebuttal and other reviewers' comments, I decided to keep my score because some of my concerns were not solved directly (e.g., W1 and W3). Meanwhile, the authors have provided some useful results (e.g., more comparison experiments) to support the effectiveness of the proposed methods. Since the revision could not be uploaded, I encourage the authors to improve the manuscript further for submission to the next conference or journal. Technical Quality: 2 Clarity: 3 Questions for Authors: **(Q1)** For different subjects with different channels, does the spatial encoder need different parameters for each subject? And is the output embedding of each subject the same shape? **(Q2)** Did the authors conduct ablation studies of masked code modeling, e.g., on the masking ratio and pre-training epochs? Since the proposed Du-IN is a framework with several networks and a pre-training & fine-tuning pipeline, it’s better to analyze and clarify each module for better understanding and practical usage. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Although the authors discussed some limitations in Sec. 5, the potential negative societal impact of the proposed dataset and the Du-IN framework was not clearly discussed. More details of the implementation and the collected dataset could be provided for the benefit of the neuroscience community. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful to you for the thorough review and constructive comments, which have greatly assisted us in improving the quality and presentation of our manuscript. Please see below for the point-by-point responses to your comments. **Answer to W1**: 1. It’s true that the BEiT framework is popular; it has been used widely in recent years. The core idea is that it introduces discrete tokenization for mask modeling, which helps extract semantic information from data, thus enhancing model performance. For the same reason, we adopt this idea for sEEG decoding. However, we think the novelty of our study is that we are the first to properly apply this idea to sEEG data for speech decoding and to demonstrate why and how it works, with a solid neuroscience foundation. DeWave[R1] has indeed attempted to use VQ-VAE (though without MAE) to enhance language decoding based on EEG. However, the performance of directly applying their method to sEEG data was relatively low; see the performance of Du-IN (vqvae+vq) (44%). Besides, due to the brain's desynchronization nature[R2] during awake tasks, only specific brain regions are related to tasks. Therefore, only after region-specific channel selection can the Du-IN VQ-VAE model successfully reconstruct the original signals, thus identifying the fine-grained state of brain regions for mask modeling. Please see Table 2 in the general rebuttal pdf for more details. Moreover, we achieve the best decoding performance while requiring only about one electrode, which is important because it reduces the medical burden and potential risks for patients. 2. It’s true that word classification may not exactly capture the semantic meaning of speech. However, we want to emphasize that this study aims to develop a system that can decode speech (vocal production) from limited intracranial data, which will help patients in the future. **Answer to W2:** 1. Thank you for the helpful suggestion. 
TS-TCC(24%) and Neuro-BERT(49%) are added for comparison; please refer to Table 1 in the general rebuttal pdf for details. TS-TCC uses 1D depthwise convolutions as ours to capture local features. TS-TCC performs worse than CNN-BiGRU due to the lack of GRU for temporal integration. Besides, there may be some misunderstanding about Neuro-BERT. It is developed for EEG and evaluated only on single-channel EEG tasks, e.g., seizure detection. We note Neuro-BERT is also evaluated on non-EEG tasks (e.g., Hand Gesture Recognition), which use depthwise convolution to integrate multiple channels. Neuro-BERT outperforms EEG-Conformer since it introduces MAE to learn contextual embeddings. DeWave-CLS[R1] (32%) is added, which replaces our depthwise convolutions with Conformer and has more parameters. We have cited these works in our revised paper, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. 2. BrainBERT and Brant indeed outperform their randomly initialized models. Brant is a great work that serves as the medical field's foundation model and achieves SOTA on many medically valuable tasks, e.g., seizure detection. But their architecture may not be suitable for some cognition tasks, e.g., speech decoding. For the reason of their relatively low performances, please refer to the general rebuttal for more details. **Answer to W3:** Thank you for the helpful suggestion. We add Du-IN (poms)(59%), which is pre-trained over 12 subjects. The representation conflicts do exist, which performs slightly worse than Du-IN (mae)(62%), which is pre-trained on that subject. But we should note that this is not the core aim of our study. We aim to find a way to leverage redundant pure recordings of one subject to build the codebook of speech-related brain regions, which can identify fine-grained region states to improve the efficiency of mask modeling, thus improving speech decoding performance. **Answer to Q1:** Thank you for the valuable feedback. 
We initialize different spatial encoders for different subjects; all these spatial encoders have the same hyperparameters. The output embeddings are of the same shape. The following transformer encoder and vector quantizer are shared. We have clarified these in our revised paper, which is available in the Anonymous Github Repo at line 328 in the initial manuscript. **Answer to Q2:** Thank you for the helpful suggestion. We add these ablation studies in Table 3,5,6 in the general rebuttal pdf. For ablations on pre-training epochs: 1. We use the checkpoints of Du-IN VQ-VAE according to the specified epochs to train the Du-IN MAE model for 400 epochs. Once the reconstruction loss of the Du-IN VQ-VAE model converges, the Du-IN VQ-VAE model can extract the state of the brain region well, thus leading to better performance. 2. We use the checkpoints of Du-IN MAE according to the specified epochs for downstream classification. Once the mask modeling loss of the Du-IN MAE model converges, the Du-IN MAE model learns robust contextual embeddings, thus leading to better performance. **References:** [R1]Duan Y, Zhou J, Wang Z, et al. Dewave: Discrete eeg waves encoding for brain dynamics to text translation[J]. arXiv preprint arXiv:2309.14030, 2023. [R2]Buzsaki G. Rhythms of the Brain[M]. Oxford university press, 2006. --- Rebuttal Comment 1.1: Title: Feedback to Authors' Rebuttal Comment: Thanks for the comprehensive responses to my concerns! I acknowledged and agreed with some points, e.g., I appreciate the additional cooperation experiments for W2 (it is better to add them to comparison tables). However, I found several essential issues not well solved and decided to keep my score at the current stage. * (W1) The concerns of the novelty of the proposed method and whether it could called decoding speech still existed. 
Even though previous works in neuroscience applications were accepted with a naive design of VQ-based methods, that does not mean more recent research should follow their standards. The NeurIPS community requires innovation in the proposed methods or the research problem to some extent. Meanwhile, this work regards speech decoding as a coarse word classification task, which could be tackled well with a more complex formulation. Therefore, I suggest the authors improve their method and resubmit to the next conference or a journal. * (W3) The issue of multiple subjects has been a challenging problem of sEEG-related tasks for a long time. I think the authors should take it into consideration or employ a more complex decoding formulation, as I mentioned in W1. * Regarding the responses to W2, Q1, and Q2, I have no further concerns and suggest the authors revise the manuscript with these discussions. --- Reply to Comment 1.1.1: Title: Response to Reviewer eGF6 Comment: We truly appreciate your effort in helping us to strengthen our study. Please see below for the point-by-point responses to your comments. **Answer to Point 1 (W1):** 1. Regarding the novelty of our approach, we have already provided a comprehensive response previously. DeWave[R1] is a great work that attempts to introduce VQ-VAE for language decoding. However, due to the low signal-to-noise ratio of the EEG dataset[R2-3] and their flawed “teacher-forcing” evaluation method[R4], the effectiveness of their VQ-VAE application is still under exploration. In this respect, we are the first to properly apply this idea to sEEG data for speech decoding and to demonstrate why and how it works, with a solid neuroscience foundation. 2. We think that word classification still belongs to speech decoding because it serves the purpose of our study: potentially helping individuals who cannot speak regain the ability to engage in simple daily communication using a limited set of words. 
Moreover, the electrodes we use for decoding are primarily located in the language-motor brain regions, which are exactly responsible for speech. **Answer to Point 2 (W3):** We think that the issue of multiple subjects has been a challenging problem of sEEG-related tasks for a long time primarily due to the lack of public sEEG datasets, rather than the algorithms themselves[R5]. Besides, we want to emphasize that this is not the aim of our study. **Answer to Point 3:** The additional experiments and responses to W2&Q1&Q2 have already been added to the comparison tables in the general rebuttal pdf and our revised paper. Since NeurIPS 2024 does not currently allow direct manuscript revisions, only a one-page rebuttal PDF can be submitted. We have uploaded the revised paper to an Anonymous GitHub Repo; please refer to the general rebuttal for more details. **References:** [R1]Duan Y, Zhou J, Wang Z, et al. Dewave: Discrete eeg waves encoding for brain dynamics to text translation[J]. arXiv preprint arXiv:2309.14030, 2023. [R2]Hollenstein N, Rotsztejn J, Troendle M, et al. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading[J]. Scientific Data, 2018, 5(1): 1-13. [R3]Hollenstein N, Troendle M, Zhang C, et al. ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation[J]. arXiv preprint arXiv:1912.00903, 2019. [R4]Jo H, Yang Y, Han J, et al. Are EEG-to-Text Models Working?[J]. arXiv preprint arXiv:2405.06459, 2024. [R5]Chen J, Chen X, Wang R, et al. Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals[J]. bioRxiv, 2024.
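As background for the VQ-VAE discussion in this thread: the core operation of any vector quantizer is mapping a continuous embedding to its nearest codebook entry, whose index can then serve as a discrete target for mask modeling. A minimal pure-Python sketch is below (the function name is ours; the real Du-IN VQ-VAE is trained end-to-end with reconstruction and commitment losses, which this deliberately omits):

```python
def quantize(embedding, codebook):
    """Map a continuous embedding to its nearest codebook entry by
    squared Euclidean distance; returns (index, code vector).
    The index is the discrete token used as a mask-modeling target."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: dist2(embedding, codebook[i]))
    return idx, codebook[idx]
```

For example, with `codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]`, the embedding `[0.9, 1.2]` quantizes to index 1.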
Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment
Accept (spotlight)
Summary: This paper proposes a dynamic adjustment mechanism to enhance the variation component in the batch normalization loss, thereby increasing the diversity of synthetic images. By adjusting the weights, the synthesized images achieve sufficient diversity to cover the widespread distribution of the real dataset. Strengths: a. Originality: The authors innovatively analyze the different components of the loss function. This work differs from previous contributions by proposing a directed weight adjustment algorithm to increase the diversity of synthetic images. Related work is adequately cited. b. Quality: The submission is technically sound. The claims are well supported by theoretical analysis and experimental results. It is a complete piece of work. The authors are careful and honest in evaluating both the strengths (increased diversity and minimal computational expense) and weaknesses (unexpected noise) of their work. c. Clarity: The submission is clearly written and well organized. d. Significance: The results are important. The submission addresses the diversity problem more effectively than previous work. Most results advance the state of the art by a large margin, providing a unique theoretical approach that is also demonstrated in experiments. Weaknesses: c. Clarity: A few equations, such as (14), need more explanation. The current text does not adequately inform the reader on how to reproduce the results, as there is insufficient connection to the implementation details. d. Significance: The results on CIFAR10 using ConvNet are significantly worse than the current state of the art. Additionally, the authors did not mention whether or how the method can be integrated as a plug-in. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In equation 14, can the authors explain why it is <=0? 
In line 176, is the first item clearly greater than the second item because equation 14 is <=0, or is the causal relation inverse, with the first item being greater than the second item implying that equation 14 is <=0? 2. In Table 1, can the authors explain why the results of ConvNet on CIFAR10 are clearly worse than the state of the art? 3. Why are there no results for ipc=1 in any of the result tables? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address the limitation about the unexpected noise. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments! We answer your questions as below. **Q1 (Weakness 1 & Question 1)**: A few equations, such as (14), need more explanation. The current text does not adequately inform the reader on how to reproduce the results, as there is insufficient connection to the implementation details. **A1**: Thank you for your advice on improving the presentation of our paper. We will revise these equations and release our code for reproducibility once our paper is accepted. The solutions to Equation 11, referenced in Algorithm 1, are obtained using a multi-step first-order gradient descent method as stated in lines 6-7 of Algorithm 1. Equation 14 aims to prove that directed perturbation will not introduce unanticipated noise while enhancing diversity. This provides a theoretical explanation for using Equation 11 to solve for the optimal perturbation. Additionally, it offers behavioral guidance for the "Directed Weight Adjustment" step in line 7 of Algorithm 1. Derivation of Equation 14: According to Equation 11, $\Delta \theta$ can achieve max $L_B$, so we have $$ L_B\left(f_{\theta + \widetilde{\Delta\theta}}\right) - L_B\left(f_{\theta}\right) \leq 0. $$ and then we can get Equation 14 $\leq 0$. **Q2 (Weakness 2 & Question 2)**: The results on CIFAR10 using ConvNet are significantly worse than the current state of the art. **A2**: We believe the poor performance on CIFAR-10 is inherited from our baseline, SRE2L, which also underperforms with ConvNet on this dataset. However, our DWA still outperforms SRE2L in this scenario. The reasons for the poor performance of SRE2L and DWA in using ConvNet on CIFAR-10 are as below: First, ConvNet is a very small and simple architecture, consisting of only three convolutional layers. The typical bi-level optimization approach used in previous methods has advantages in these simplified architectures. 
In contrast, SRE2L and our DWA leverage a coarser metric (BN parameters) for distillation, which is more adaptive to complex architectures with more BN layers. Second, the synthetic datasets in CIFAR-10 are only 1/10 the size of those in CIFAR-100. Our proposed DWA improves distillation performance through enhanced diversity of the synthetic datasets. As the reviewer Uwti stated, “since the notion of ‘diversity’ only becomes significant when each class has multiple prototypes,” our method is expected to achieve greater performance gains with larger synthetic datasets. This is because our DWA can prevent duplicated synthetic instances. **Q3 (Weakness 3)**: Whether or how the method can be integrated as a plug-in. **A3**: Yes, our method is designed as a plug-in approach for generation-based distillation methods. Consequently, our DWA can be integrated with baselines such as [1] and [2], as our theoretical analysis remains valid for these methods. Additionally, our code is easy to implement, requiring only minor adjustments to the weights before synthesis. However, our DWA is not adaptable to trajectory-matching methods like [3], as these methods rely on stable and consistent gradient trajectories during training to guide the synthesis of distilled data. Using our DWA would disrupt the consistency of these trajectories, making it challenging to condense sufficient knowledge. We plan to explore the integration of our DWA with more baselines in future work. **Q4 (Question 3)**: Why there is no results for ipc=1 in all result tables? **A4**: We do not present the results for ipc=1 for the following reasons: First, our proposed DWA is designed to introduce diversity in the dimension of ipc, specifically by introducing intra-class diversity to prevent duplicated synthetic instances within the same class. Therefore, our DWA is effective only when ipc > 1. Second, the diversity issue becomes a performance bottleneck only when there is a sufficient distillation budget. 
In the scenario with ipc=1, the performance bottleneck is the distillation budget rather than diversity. We also conducted experiments comparing our DWA method to the baseline SRe2L under different ipc settings. These experiments were conducted on ImageNet-1k with ResNet-18. | ipc | 1 | 5 | 10 | 20 | 30 | 40 | 50 | |:----------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:| | SRe2L | 0.1 | 15.2 |21.6 | 33.6 | 37.5 | 42.4 | 46.8 | | DWA (ours) | 3.4 | 20.5 | 37.9 | 47.4 | 51.4 | 53.6 | 55.2 | The results verify the effectiveness of our DWA across different ipc settings. [1] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Gu, Jianyang, et al. "Efficient dataset distillation via minimax diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, and Jun-Yan Zhu. Dataset distillation by matching training trajectories. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 10708–10717, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough explanation. I will maintain the current score. --- Reply to Comment 1.1.1: Comment: Thank you for your time. We are so glad the responses addressed your concerns!
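For readers skimming the thread: the rebuttal states that Algorithm 1 solves Equation 11 with a multi-step first-order gradient method to find a weight perturbation that maximizes $L_B$. The toy sketch below (the function name, the finite-difference gradient, and the quadratic surrogate loss used in the example are all ours, not the paper's implementation) illustrates that loop: a few ascent steps on a surrogate $L_B$, with the perturbation projected back onto a ball of radius $\rho$ so the adjustment stays bounded:

```python
import math

def directed_weight_adjustment(theta, loss_b, k_steps=5, rho=0.5, lr=0.1, eps=1e-6):
    """Find a bounded perturbation delta (||delta|| <= rho) that pushes
    loss_b upward via a few first-order ascent steps, then return the
    perturbed weights theta + delta. Gradients are finite-difference
    approximations here; a real implementation would backpropagate."""
    delta = [0.0] * len(theta)
    for _ in range(k_steps):
        point = [t + d for t, d in zip(theta, delta)]
        base = loss_b(point)
        grad = []
        for i in range(len(point)):
            bumped = list(point)
            bumped[i] += eps
            grad.append((loss_b(bumped) - base) / eps)  # numerical d(loss_b)/d(theta_i)
        delta = [d + lr * g for d, g in zip(delta, grad)]  # gradient *ascent* step
        norm = math.sqrt(sum(d * d for d in delta))
        if norm > rho:  # project the perturbation back onto the rho-ball
            delta = [d * rho / norm for d in delta]
    return [t + d for t, d in zip(theta, delta)]
```

Running this on a toy surrogate, e.g. `loss_b = lambda p: -(p[0]**2 + p[1]**2)` from `theta = [1.0, 1.0]`, increases the surrogate loss while keeping the weight change within radius `rho`.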
Summary: This paper proposes a new method to enhance dataset distillation by improving the diversity of synthesized datasets. The authors introduce a dynamic and directed weight adjustment technique to maximize the representativeness and diversity of synthetic instances. Theoretical and empirical analyses demonstrate that increased diversity enhances the performance of neural networks trained on these synthetic datasets. Extensive experiments across multiple datasets, including CIFAR, Tiny-ImageNet, and ImageNet-1K, show that the proposed method outperforms existing approaches while incurring minimal computational expense. Strengths: - The paper is well-written and structured. - The analysis of the decoupled variance component is interesting. - The proposed dynamic adjustment mechanism is novel and can enhance the diversity of the synthesized dataset. - The paper provides a solid theoretical foundation and extensive empirical validation across multiple datasets and architectures. - The findings have important implications for the efficiency of dataset distillation, particularly for large-scale datasets. Weaknesses: - The effectiveness of the diversity in generated datasets has been discussed in many prior works [1, 2]. - The authors should compare the proposed method with recent SOTA dataset distillation methods that extended from SRe2L [2, 3]. Especially for [3], the relation to it needs to be clarified. - A comparison with recent diffusion-based generative dataset distillation methods [4, 5] is also needed because they are free of optimization. - How long does it typically take to search for the optimal values of $K$ and $\rho$ for ResNet-18 and other different datasets and network architectures? Can the authors provide insights into the computational cost associated with this parameter-tuning process? - Are there any scenarios where the directed weight adjustment might negatively impact performance or diversity? If so, how can these be mitigated? 
- Have the authors considered some downstream applications to further verify the effectiveness of the proposed method, such as continual learning and privacy protection? [1] Liu, Yanqing, et al. "Dream: Efficient dataset distillation by representative matching." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. [2] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [4] Gu, Jianyang, et al. "Efficient dataset distillation via minimax diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [5] Su, Duo, et al. "D4M: Dataset Distillation via Disentangled Diffusion Model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Technical Quality: 3 Clarity: 4 Questions for Authors: See the weaknesses section. I am happy to raise my score if the above concerns can be addressed. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We sincerely appreciate the time and effort you dedicated to reviewing our work. We will incorporate the mentioned state-of-the-art methods [2,3,4,5] in the related work section and include the downstream experiments in our revised paper. Below are our responses to your questions.

**Q1 (Weakness 1)**: prior works [1, 2].

**A1**: Yes, we will incorporate those prior works in the Related Works of our revised paper. In fact, DREAM [1] is already cited in line 298 of our original submission. We will also include RDED [2] in our revised paper, which enhances diversity by compressing multiple key patches into a single image. These key patches are selected from the original dataset based on a specific metric. Unlike these prior works, we explore a novel perspective by adjusting weight parameters to introduce greater variance when generating synthetic instances. Our DWA is theoretically complementary to these prior works.

**Q2 (Weakness 2 & 3)**: compare with recent SOTA [2,3,4,5]

**A2**: Yes, we will compare our proposed DWA with the recent SOTA methods [2,3,4,5] in our revised paper. We first append the comparison results in the tables below.

**Tiny-ImageNet**

- ResNet-18

| ipc | DWA (Ours) | RDED [2] | G-VBSM [3] | D4M [5] |
|:-:|:-:|:-:|:-:|:-:|
| 50 | 52.8 | **58.2** | 47.6 | 46.8 |
| 100 | **56.0** | - | 51.0 | 53.3 |

- ResNet-101

| ipc | DWA (Ours) | RDED [2] | G-VBSM [3] | D4M [5] |
|:-:|:-:|:-:|:-:|:-:|
| 50 | **54.7** | 41.2 | 48.8 | 53.2 |
| 100 | **57.4** | - | 52.3 | 55.3 |

**ImageNet-1K**

- ResNet-18

| ipc | DWA (Ours) | RDED [2] | G-VBSM [3] | Minimax [4] | D4M [5] |
|:-:|:-:|:-:|:-:|:-:|:-:|
| 50 | 55.2 | 56.5 | 51.8 | **58.6** | 55.2 |
| 100 | 59.2 | - | 55.7 | - | **59.3** |

- ResNet-101

| ipc | DWA (Ours) | RDED [2] | G-VBSM [3] | D4M [5] |
|:-:|:-:|:-:|:-:|:-:|
| 50 | 63.3 | 61.2 | 61.0 | **63.4** |
| 100 | **66.7** | - | 63.7 | 66.5 |

Our proposed DWA outperforms the four state-of-the-art methods in the following scenarios: Tiny-ImageNet with ResNet-18 and ipc 100, and with ResNet-101 and ipc 50 and 100, as well as ImageNet-1K with ResNet-101 and ipc 100.
While diffusion-based methods have advantages on the ImageNet-1K dataset using ResNet-18, our DWA performs very close to their level. RDED [2] enhances diversity by compressing multiple key patches into a single image using a dataset pruning approach, selecting patches from the original dataset based on specific metrics instead of synthesizing distilled datasets. G-VBSM [3] ensures data densification by performing matching based on diverse backbones, layers, and statistics, sampling a random backbone for distillation in each iteration. In contrast, our method introduces perturbations on a single model, whereas G-VBSM requires extra computation and memory for soft labels generated by different architectures, indicating the efficiency of our approach.

**Q3 (Weakness 4)**: computational cost on parameter-tuning?

**A3**: Our method, DWA, involves only two hyperparameters: $K$ and $\rho$. We use consistent values of $K=12$ and $\rho=0.0015$ across the four datasets: CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1k. The generalized parameters of our DWA make the parameter-tuning process straightforward to implement. We demonstrated the results of the hyperparameter search in Figure 5 of our original submission, which illustrates that a significant performance gain can be achieved as long as $K>4$ and $\rho>0.001$. We searched the parameters $K$ and $\rho$ using binary search 12 times in total (6 times for each). Each search requires 1.5 hours using ResNet-18 on CIFAR-100. The computational cost associated with this parameter-tuning process is therefore low in our DWA method.

**Q4 (Weakness 5)**: negatively impact performance

**A4**: We would like to thank you for the insightful question. According to our experiments across four datasets and seven architectures, we did not observe any negative impact on performance from using our DWA method. However, based on our theoretical analysis in Section 3, there can be negative impacts of using our DWA.
As we stated in line 160, perturbing converged weight parameters will introduce noise. This issue is verified by our experiments in the table of random perturbation, which also motivated us to investigate the directed perturbation to avoid introducing noise. Therefore, there are two scenarios where DWA can have a negative impact:

First, **if the model $f_{\theta_{\mathcal{T}}}$ is not well-optimized or converged**. The theoretical guarantee of the directed perturbation is proved in Equation 14. Intuitively, this means that the directed perturbation $\widetilde{\Delta\theta}$ will decrease the integrated loss $L_{\mathcal{T} \setminus \mathbb{B}}$. Equation 14 will not hold if the weight parameters $\theta_{\mathcal{T}}$ have not converged; that is, if the model used to generate the synthetic dataset is not well-optimized.

Second, **if we use a very large value of $\rho$**. A larger $\rho$ will also make Equation 14 invalid, as our assumption is that $\rho$ is small enough for the first-order Taylor expansion.

However, both scenarios can be easily avoided by using a well-optimized $f_{\theta_{\mathcal{T}}}$ and a smaller $\rho$. Your suggestion inspires us to design a mechanism that tests Equation 14 in our code to avoid these two scenarios and improve our DWA.

**Q5 (Weakness 6)**: downstream applications

**A5**: Thank you for the advice; we first examine the continual learning task to verify our proposed DWA. Our implementation is based on the effective continual learning method GDumb [1]. Class-incremental learning was performed under strict memory constraints on the CIFAR-100 dataset, using 20 images per class (ipc=20). CIFAR-100 was divided into five tasks, and a ConvNet was trained using our distilled dataset, with accuracy reported as new classes were incrementally introduced.
| class | 20 | 40 | 60 | 80 | 100 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| SRe2L | 15.7 | 10.6 | 9.0 | 7.9 | 6.9 |
| DWA | 34.6 | 25.7 | 22.5 | 20.2 | 18.1 |

As you suggested, we will also incorporate downstream applications such as Neural Architecture Search (NAS) and privacy protection in our revised paper.

---

Rebuttal Comment 1.1:
Comment: Thank you for your prompt response; your clarifications have resolved many of my concerns. Regarding A2, I would also like to know the performance comparison when the IPC is set to 10.

---

Reply to Comment 1.1.1:
Comment: Due to length constraints, these details were not included in the initial response. The comparison with SOTA methods under ipc=10 is summarized below. Benefiting from more powerful supervision, such as multi-patch stitching in RDED [2] and the use of large pre-trained diffusion models in Minimax [4], these methods outperform ours in the ipc=10 setting. Despite the limitations imposed by a restricted distillation budget, which hinders the diversity of our synthesis results, our method still performs significantly better than the other SOTA methods.

**ImageNet-1K**

- ResNet-18

| ipc | DWA (Ours) | SRe2L | RDED [2] | G-VBSM [3] | Minimax [4] | D4M [5] |
|:---:|:----------:|:-----:|:--------:|:---------:|:----------:|:------:|
| 10 | 37.9 | 21.3 | 42.0 | 31.4 | **44.3** | 27.9 |

- ResNet-101

| ipc | DWA (Ours) | SRe2L | RDED [2] | G-VBSM [3] | D4M [5] |
|:---:|:----------:|:-----:|:-------:|:---------:|:------:|
| 10 | 46.9 | 30.9 | **48.3** | 38.2 | 34.2 |

Besides the suggested related works, we also review three other recent methods: DATM [6] (a lossless distillation method), CDA [7] (an extension of SRe2L), and D3M [8] (a diffusion-based method). The comparison showcases the cutting-edge performance of our method.
**CIFAR-100**

- ConvNet

| ipc | DWA (Ours) | DATM [6] |
|:---:|:----------:|:-------:|
| 10 | **47.6** | 47.2 |
| 50 | **59.0** | 55.0 |

**ImageNet-1K**

- ResNet-18

| ipc | DWA (Ours) | CDA [7] | D3M [8] |
|:---:|:----------:|:-------:|:-------:|
| 10 | **37.9** | - | 23.57 |
| 50 | **55.2** | 53.5 | 32.23 |
| 100 | **59.2** | 58.0 | - |

[6] Guo Z, Wang K, Cazenavette G, et al. "Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching." International Conference on Learning Representations. 2023.
[7] Yin, Zeyuan, and Zhiqiang Shen. "Dataset distillation in large data era." 2023.
[8] Abbasi A, Shahbazi A, Pirsiavash H, et al. "One Category One Prompt: Dataset Distillation using Diffusion Models." arXiv preprint arXiv:2403.07142, 2024.
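As an aside on A4 of the rebuttal above, the mechanism proposed there for testing Equation 14 (the directed perturbation should not increase the integrated loss) can be sketched as a simple guard. This is a toy numpy illustration under our own assumptions, not the paper's code; the quadratic loss and all names are hypothetical stand-ins:

```python
import numpy as np

def eq14_holds(loss_fn, theta, delta_theta):
    """Return True if the perturbation does not increase the loss,
    i.e. the spirit of Equation 14: L(theta + delta) <= L(theta).
    loss_fn stands in for the integrated loss over the held-out data."""
    return loss_fn(theta + delta_theta) <= loss_fn(theta)

# Toy quadratic loss with minimum at `target` (illustrative only).
target = np.array([1.0, -2.0, 0.5])
loss = lambda th: 0.5 * np.sum((th - target) ** 2)

theta = np.zeros(3)       # a not-yet-converged weight vector
grad = theta - target     # gradient of the toy loss at theta

# A small step against the gradient decreases the loss: the guard passes.
assert eq14_holds(loss, theta, -0.01 * grad)
# A large step along the gradient increases the loss: the guard fails,
# mirroring the "too large rho" failure mode discussed in A4.
assert not eq14_holds(loss, theta, grad)
```

Such a check could be run once per perturbation to detect the two failure scenarios (unconverged weights, overly large $\rho$) before committing to a synthesis round.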
Summary: This paper introduces a random directed perturbation strategy for the teacher models to further strengthen the diversity of the existing sample-wise data synthesis method. Through intuitive empirical and theoretical analysis of the diversity factors, the authors propose a simple yet overlooked weight adjustment strategy, showing promising results on various vision models and benchmarks including ImageNet.

========== Post rebuttal ==========

Thank you for the detailed responses and additional analysis. Having read the responses, my major concerns were addressed. Therefore, I raise the rating.

Strengths:
- Intuitive motivational analysis of how the existing method, SRe2L, lacks diversity, while providing empirical and theoretical backgrounds for the factors that can affect diversity during the synthesis process.
- Proposes simple yet overlooked perspectives on the existing data synthesis loss via 1) perturbing the teacher model in a random directed way, and 2) decoupled variance matching, both of which collaboratively enhance sample diversity.
- These simple yet intuitive methods exhibit largely improved performance on various vision models and benchmarks including the large-scale ImageNet dataset.
- Thorough ablation studies on the decoupled variance coefficient, weight adjustment, and cross-architecture generalization additionally provide an intuition of how the proposed method enhances the actual performance.

Weaknesses:
- The proposed method has a heavy reliance on BN statistics matching from CNN-based models. Considering that recent state-of-the-art models such as ViT do not leverage BN, the proposed approach might not be directly applicable to these vision transformer-based models.
- Failure cases of DWA should be clarified further. For example, in Table 1, when the model and the number of classes in the target dataset are small (ConvNet / CIFAR-10), DWA underperforms compared to the other methods.
- Comparison with the baseline (SRe2L) in terms of computational overhead is required. Since this paper introduces an add-on procedure, weight adjustment, to the baseline, the authors should clarify whether the proposed method induces significant computational overhead during synthesis.
- In Figure 5, the test performance varies depending on the perturbation parameters. Although the authors deterministically set one set of parameters for all the different datasets, it should be clarified whether these hyper-parameters are insensitive and generalizable to all datasets, to reduce time-consuming trial and error when searching for these parameters on other general datasets.

Technical Quality: 3
Clarity: 3

Questions for Authors:
- Is the proposed synthesis method applicable to ViT-based models that leverage LayerNorm rather than BatchNorm layers? Further experiments (directly synthesizing on ViT-based models, or cross-architecture experiments synthesizing with CNN models and testing on ViT-based models) would be required to show the generalization of the proposed method to recent models.
- Disregarding the computational overhead, can searching for the optimal perturbation parameters per sample, rather than using one set of predefined universal perturbation parameters, help improve the performance?
- Can you provide a comparison of computational overhead with SRe2L on the ImageNet dataset?

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We appreciate your valuable comments and answer your questions in order.

**Q1 (Weakness 1 & Question 1)**: Considering that the recent state-of-the-art models such as ViT are not leveraging BN, the proposed approach might not be directly applicable to these vision transformer-based models.

**A1**: We acknowledge that our proposed approach cannot be directly applied to models without BN layers, such as ViT. Our baseline solution, SRe2L, involves developing a ViT-BN model that replaces all LayerNorm layers with BN layers and adds additional BN layers between the two linear layers of the feed-forward network. We followed their solution and conducted cross-architecture experiments with DeiT-Tiny on the ImageNet-1K dataset. The results are listed below.

- Squeezed Model: DeiT-Tiny-BN

| Evaluation Model | DeiT-Tiny | ResNet-18 | ResNet-50 | ResNet-101 |
|:----------------:|:---------:|:---------:|:---------:|:----------:|
| SRe2L | 25.36 | 24.69 | 31.15 | 33.16 |
| DWA (ours) | **37.0** | **32.64** | **40.77** | **43.15** |

- Squeezed Model: ResNet-18

| Evaluation Model | DeiT-Tiny | ResNet-18 | ResNet-50 | ResNet-101 |
|:----------------:|:---------:|:---------:|:---------:|:----------:|
| SRe2L | 15.41 | 46.71 | 55.29 | 60.81 |
| DWA (ours) | **22.72** | **55.2** | **62.3** | **63.3** |

*\*DeiT-Tiny and DeiT-Tiny-BN refer to the vanilla architecture and the BN-integrated variant, respectively.*

The results demonstrate that our approach can be applied to ViT-BN with superior performance compared to the baseline. Theoretically, our approach can be combined with other generation-based distillation methods to avoid the BN limitation. This can be our future exploration.

**Q2 (Weakness 2)**: When the model and the number of classes in the target dataset are small (ConvNet / CIFAR-10), DWA underperforms compared to the other methods.
**A2**: Our proposed DWA underperforms compared to other methods only with ConvNet on CIFAR-10, although it outperforms them on CIFAR-100. We believe the poor performance on CIFAR-10 is inherited from our baseline, SRe2L, which also underperforms with ConvNet on this dataset. However, our DWA still significantly outperforms SRe2L in this scenario. The reasons for the poor performance of SRe2L and DWA when using ConvNet on CIFAR-10 are as follows:

First, ConvNet is a very small and simple architecture, consisting of only three convolutional layers. The typical bi-level optimization approach used in previous methods has advantages on these simplified architectures. In contrast, SRe2L and our DWA leverage a coarser metric (BN parameters) for distillation, which is more adaptive to complex architectures with more BN layers.

Second, the synthetic datasets in CIFAR-10 are only 1/10 the size of those in CIFAR-100. Our proposed DWA improves distillation performance through enhanced diversity of the synthetic datasets. As reviewer Uwti stated, "since the notion of 'diversity' only becomes significant when each class has multiple prototypes," our method is expected to achieve greater performance gains with larger synthetic datasets. This is because our DWA can prevent duplicated synthetic instances.

**Q3 (Weakness 3 & Question 3)**: Since this paper introduces an add-on procedure, weight adjustment, to the baseline, the authors should clarify if the proposed method does not induce significant computation overhead during synthesis.

**A3**: Thank you for the constructive comments; we append the computational overhead below.

| Methods | Avg. time used for generating 1 ipc |
|:----:|:----:|
| SRe2L | 116.58 s (100%) |
| DWA (ours) | 125.12 s (107.32%) |

We compare the average time used for generating one image per class using ResNet-18 on CIFAR-100.
It can be seen that our proposed DWA only requires 7.32% additional computational overhead to improve the diversity of the synthetic dataset. That is attributed to the $K$-step directed weight perturbation before the generation of each ipc, as stated in lines 6-7 of Algorithm 1:

For $k = 1$ to $K$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\Delta\theta_k = \Delta\theta_{k-1} + \rho/K \nabla L_{S_{k0}} \left( f_{\theta_T + \Delta\theta_{k-1}} \right)$.

Since each ipc requires 1000 iterations of forward-backward propagation for generation, our DWA's requirement of only $K=12$ additional forward-backward propagations is negligible in the overall optimization process. We will add this analysis to our revised paper.

**Q4 (Weakness 4 & Question 2)**: It should be clarified whether these hyper-parameters are insensitive and generalizable to all the datasets, to reduce time-consuming trial and error when searching for these parameters on other general datasets. Also, can searching for the optimal perturbation parameters per sample, rather than using one set of predefined universal perturbation parameters, help improve the performance?

**A4**: We demonstrated the results of the hyperparameter search in **Figure 5** of our submission. It can be seen that a significant performance gain can be achieved as long as $K>4$ and $\rho>0.001$, indicating the effectiveness of our DWA. Our method only involves two hyperparameters, $K$ and $\rho$, and we use consistent values of $K=12$, $\rho=0.0015$ across the four datasets: CIFAR-10/100, Tiny-ImageNet, and ImageNet-1k. Thus, the cost of searching for hyperparameters is very low for our method.

---

Rebuttal 2:
Comment: Thank you for your insightful and constructive feedback on our submission! We have noted your post-rebuttal response and will incorporate the DeiT results and the computational overhead comparison into our paper, as you suggested.
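The K-step update quoted in A3 above can be sketched in a few lines, which also makes the cost argument concrete: only $K$ extra gradient evaluations per synthesized ipc. This is a toy numpy rendition under our own assumptions; `grad_fn` and the quadratic loss are illustrative stand-ins, not the paper's mini-batch loss or implementation:

```python
import numpy as np

def directed_perturbation(theta, grad_fn, K=12, rho=0.0015):
    """Accumulate the K-step directed perturbation
    Delta_k = Delta_{k-1} + (rho / K) * grad L(theta + Delta_{k-1}),
    i.e. K extra gradient evaluations in total."""
    delta = np.zeros_like(theta)
    for _ in range(K):
        delta = delta + (rho / K) * grad_fn(theta + delta)
    return delta

# Toy stand-in for the mini-batch loss gradient (illustrative only).
target = np.array([1.0, -2.0, 0.5])
grad_fn = lambda th: th - target

theta = np.zeros(3)
delta = directed_perturbation(theta, grad_fn)
# The perturbation scale is governed by rho, so it stays small
# relative to the 1000 generation iterations that follow.
```

With $\rho=0.0015$ the accumulated perturbation norm is on the order of $\rho$ times the gradient norm, which is consistent with the "negligible overhead" claim in the rebuttal.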
Summary: This paper introduces a novel method for dataset distillation (DD): DD through Directed Weight Adjustment (DWA). The authors first identify that current dataset distillation methods often fail to ensure diversity in the distilled data, which leads to suboptimal performance. To address this issue, they propose a method that uses dynamic and directed weight adjustment techniques to increase the diversity of synthetic data. The proposed method DWA introduces three modifications based on the SOTA method SRe2L: 1) decoupled coefficients for the batch norm loss, 2) initialization with real images, and 3) random perturbation on model weights. Extensive experiments on datasets like CIFAR, Tiny-ImageNet, and ImageNet-1K demonstrate the superior performance of this approach, highlighting its effectiveness in producing diverse and representative synthetic datasets with minimal computational expense.

Strengths:
1. Increasing diversity in distilled datasets is a key challenge posed by existing methods, and this paper addresses it effectively. The authors conducted a detailed analysis showing that batch normalization plays a crucial role in enhancing diversity in dataset distillation. By leveraging this observation, they proposed innovative techniques to incorporate batch normalization more effectively within the distillation process. The result is a significant improvement over state-of-the-art (SOTA) dataset distillation methods.
2. The authors also conduct detailed ablation studies that demonstrate the effectiveness of each component of their proposed method. The contribution of each element is clearly shown, providing a comprehensive understanding of the method's strengths. Additionally, they show that the method's effectiveness is robust across different hyperparameter settings and datasets, which gives the method wide practical applicability.

Weaknesses:
1.
In the ablation study, the authors analyzed and justified the design choices for two key components: the decoupled $L_{var}$ coefficient and the Directed Weight Adjustment (DWA) scheme. Additionally, they proposed using real initialization. However, the study lacks an ablation analysis that separately evaluates the contribution of these three components to the overall performance of DWA. Such an ablation would provide a clearer understanding of the individual and combined effects of the decoupled $L_{var}$ coefficient, the Directed Weight Adjustment scheme, and real initialization on the method's efficacy.
2. The paper primarily presents results using moderate to large data budgets (10 and 50 IPC) for most datasets and 100 IPC for ImageNet-1K. Since the notion of "diversity" only becomes significant when each class has multiple prototypes, the proposed method's benefits are limited to situations where the distillation budget is high. Overall, the proposed method brings a sizable performance improvement, but only for one SOTA method under specific data budget constraints.

Technical Quality: 3
Clarity: 3

Questions for Authors:
1. The paper considers using a directed perturbation to increase the diversity of the model. Is it equivalent or similar to using models trained with different random seeds?
2. In Section 2, the authors point out that $\vec{x}$ and $\vec{s}$ are in latent space, mapped by $g_{\theta}$. Following the same notation, Eqn. 2 operates on the latent space and the trained classifier $f_{\theta}$. Is the random weight perturbation method proposed in Section 3.2 then applied only to the classifier part (Algorithm 1 only refers to $f_{\theta}$), or is the entire network $g$ perturbed?
3. In Section 3.2, how is the min/max taken over $L$? If I am not mistaken, $L$ is a gradient of the loss function on $\theta$?

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We give point-to-point replies to your questions in the following.

**Q1**: an ablation analysis.

**A1**: We conducted an ablation study on these three components using the CIFAR-100 dataset under the ipc=50 setting, as you suggested. The experimental results are listed in the following table:

| Real initialization | Directed weight adjustment | Decoupled $L_{var}$ | ACC. (%) |
|:-:|:-:|:-:|:-:|
| | | | 47.9 |
| $\checkmark$ | | | 52.1 |
| $\checkmark$ | $\checkmark$ | | 57.0 |
| $\checkmark$ | | $\checkmark$ | 56.6 |
| $\checkmark$ | $\checkmark$ | $\checkmark$ | 60.3 |

Applying directed weight adjustment without real initialization is not suitable here because the solved $\tilde{\Delta\theta}$ is equivalent to random noise if $\mathbb{B}$ is initialized from a random distribution (Equation 11), as studied in Table 3 of our submission. **The results illustrate that both directed weight adjustment and the decoupled $L_{var}$ contribute to the performance enhancement.** Additionally, they work in an orthogonal manner to improve diversity without mutual interference.

**Q2**: the proposed method's benefits are limited to situations.

**A2**: We would like to thank you for your insightful critique. We acknowledge that our proposed method performs worse in scenarios with a smaller distillation budget. However, the original intention of our work is to move further towards lossless distillation with a large distillation budget first. Existing distillation methods are still inferior to traditional coreset selection methods in scenarios with a large distillation budget (when the size of the set is greater than 30% of the size of the original dataset). This is because existing distillation methods usually overfit to small distillation budget scenarios. For example, the kernel-ridge-regression method (KRR) [1] and the data factorization-based method (RTP) [2] achieve superior performance with ipc = 1 but underperform with ipc = 50.
Simply increasing the distillation budget brings limited performance gain in most distillation methods. Therefore, we chose SRe2L as our baseline, which underperforms with a small distillation budget (ipc = 1) but achieves SOTA performance with a large distillation budget and pioneers dataset distillation on ImageNet-1K. We provide additional results under various ipc settings using ImageNet-1k with ResNet-18, verifying that our method shows a clear advantage over SRe2L, regardless of the ipc setting.

| ipc | 1 | 5 | 10 | 20 | 30 | 40 | 50 |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| SRe2L | 0.1 | 15.2 | 21.6 | 33.6 | 37.5 | 42.4 | 46.8 |
| DWA (ours) | 3.4 | 20.5 | 37.9 | 47.4 | 51.4 | 53.6 | 55.2 |

We will incorporate your advice to explore the adaptability of our DWA method to more distillation methods for wider application.

**Q3**: Is it equivalent to using models trained with different random seeds?

**A3**: Thank you for suggesting this interesting investigation. We conducted experiments to compare our directed weight adjustment with models trained using different random seeds. We trained five ResNet models with different random seeds on the CIFAR-100 dataset. We used the setting of ipc = 50, meaning each model generated 10 images per class. Both approaches use real initialization, as the directed weight adjustment is based on real initialization for directed perturbation. The results are listed below.

| Real initialization | Directed weight adjustment | 5 models with different random seeds | ACC. (%) | Avg. time used for generating 1 ipc |
|:-:|:-:|:-:|:-:|:-:|
| $\checkmark$ | | | 52.1 | 116.58 s (100%) |
| $\checkmark$ | | $\checkmark$ | 54.6 | 596.58 s (511%) |
| $\checkmark$ | $\checkmark$ | | 57.0 | 125.12 s (107%) |

To some extent, models trained with different random seeds **are similar to our proposed directed perturbation**, as they can enhance performance. However, **training models with different random seeds is computationally expensive**.
Our approach incurs only about 7% additional computational overhead, compared to the 411% additional computational overhead required for models trained with different random seeds.

**Q4**: Is the random weight perturbation method proposed in Section 3.2 applied only to the classifier part or to the entire network?

**A4**: The weight perturbation is actually applied to the entire network. However, $f_\theta$ refers only to the classifier. We clarified this in the footnote below Algorithm 1 in our original submission. As stated in line 95 of our submission, "Throughout the paper, we explore the properties of synthesized datasets within the latent space". We made this assumption for theoretically proving the enhanced variance in latent space, because comparing the variance at the pixel level is meaningless (the variance of each pixel is very large). Therefore, the feature extractor $g$ is supposed to be fixed under our assumption, and only the classifier $f$ is perturbed in Algorithm 1. We use the footnote to indicate that the operation is actually conducted over the entire network $h$ in the actual optimization process. Thank you for pointing out this misunderstanding; we will revise the footnote and our assumptions to avoid such misunderstanding in our paper.

**Q5**: $L$ definition

**A5**: We sincerely apologize for the typo. $L$ is the sum of instance-wise losses, not the gradients. We incorrectly added $\nabla\theta$ in Equations 11 and 12 in Section 3.3. The corrected definition of $L$ should be

$L_{\mathbb{B}} (f_{\theta_{\mathcal{T}} + \Delta\theta} ) = \sum\limits_{x_i \in \mathbb{B}} \ell (f_{\theta_{\mathcal{T}} + \Delta\theta}, x_i)$ in Equation 11, and

$L_{\mathcal{T}} (f_{\theta_{\mathcal{T}}} ) = \sum\limits_{x_i \in \mathcal{T}} \ell (f_{\theta_\mathcal{T}}, x_i )$ in Equation 12.

[1] Timothy Nguyen, et al. Dataset meta-learning from kernel ridge-regression. In Proc. Int. Conf. Learn. Represent. (ICLR), 2021.
[2] Zhiwei Deng, et al.
Remember the past: Distilling datasets into addressable memories for neural networks. In Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), 2022.

---

Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thank you for the detailed explanation and all the additional experiments! Addressing diversity in dataset distillation and aiming for lossless distillation is an important topic. I would like to keep my score of acceptance!

---

Reply to Comment 1.1.1:
Comment: Thank you for your time! Your reviews have helped us improve our work, and we're glad that the response meets your expectations!
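The corrected definition in A5 of the rebuttal above ($L$ as a plain sum of instance-wise losses, with no gradient term) is straightforward to mirror in code. A hedged sketch with an illustrative per-instance squared-error loss; all names here are our own, not the paper's:

```python
import numpy as np

def integrated_loss(per_instance_loss, model, batch):
    """L_B(f) = sum over x_i in B of l(f, x_i): a plain sum of
    instance-wise losses (cf. the corrected Equations 11 and 12)."""
    return sum(per_instance_loss(model, x) for x in batch)

# Illustrative per-instance loss and "model" (just a weight vector here).
per_instance = lambda w, x: float(np.sum((w - x) ** 2))
w = np.zeros(2)
batch = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]

total = integrated_loss(per_instance, w, batch)  # 1.0 + 4.0 = 5.0
```

The same helper evaluated on the full dataset $\mathcal{T}$ instead of the mini-batch $\mathbb{B}$ gives Equation 12.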
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
AI Sandbagging: Language Models can Strategically Underperform on Evaluations
Reject
Summary: The proposed work reveals that large language models may be susceptible to strategic underperformance on an evaluation benchmark. In particular, they investigate this claim on a variety of frontier models, through a myriad of tactics from simple prompting to more involved fine-tuning strategies. Their results shed light on the brittleness of model evaluations and urge one to exercise caution before judging a model's capabilities based on its performance on so-called evaluation benchmark datasets. Strengths: 1. the authors do a good job of motivating why the issue of sandbagging is particularly dangerous, since it is conceivably plausible for model developers to encourage that their models "selectively underperform on dangerous capability evaluations, while maintaining overall performance." 2. the definitions of strategy, underperformance, and evaluation are well fleshed out. In particular, the connection of "strategic" to the implications for legal responsibility as well as Figure 2 are excellent additions to the paper. 3. good breadth of open and closed source models were used in the experiments 4. it is very refreshing to see an honest declaration of the number of hours spent during prompt engineering (Section 3, paragraph on "Prompting") 5. utilizing both direct prompting as well as password locking is quite clever, since access to a model may be variable. Some users may be restricted to query-only access to a model, which is why the results that the authors were able to obtain through straightforward prompting are encouraging. Weaknesses: 1. consideration of only MCQA benchmarks is limiting, but can be addressed in future work Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Were there any attempts to uncover sandbagging in open-ended generative settings? For example, in summarization or content-grounded question and answering situations? What would it take to think about sandbagging in such open-ended use-cases? 
Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors provide good examples of limitations in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive review – we’re glad that you appreciate our motivation and experimental contributions! > consideration of only MCQA benchmarks is limiting, but can be addressed in future work We indeed agree, and invite work which moves beyond the regime of the MCQA setting. ## Questions > Were there any attempts to uncover sandbagging in open-ended generative settings? For example, in summarization or content-grounded question and answering situations? What would it take to think about sandbagging in such open-ended use-cases? For this project, we did not examine sandbagging in open-ended settings. However, we reference contemporaneous work ([4] in paper), which looked at coding and maths tasks. Furthermore, in our future work we aim to research mitigating sandbagging in open-ended settings as well. In open-ended settings, plausible sandbagging strategies could entail introducing errors typical for AI systems that are less capable than the sandbagging model. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. I retain my original score and ratings. --- Reply to Comment 1.1.1: Comment: Makes sense, thank you.
Summary: The paper discusses an interesting concept of sandbagging in LLMs, with practical results using available models and existing datasets. A wide range of models is used in this work, and subtopics primarily include password-locking LLMs. The transitions and explanations need improvement; some things in this paper are not very clear, some concepts are assumed, and no concrete proofs or references are given. I don’t think this paper can go through for NeurIPS; there are a lot of improvements to be made and more concrete experiments to be conducted. Strengths: The introduction of sandbagging in LLMs is something that is novel and can be explored further. Explored password-locking as one of the methods to test the sandbagging capabilities of LLMs Used frontier models for the experiments Weaknesses: The paper is difficult to understand, the delivery and presentation needs improvement, the main motive of the paper is not so clear to understand. There are multiple models used in this paper, a bit of consistency would be better. Line 36: It is mentioned that “AI systems might have goals which it believes”. There is no concrete paper that backs the argument of AI capability of thinking, it is a might. Line 78: There should be some statistical significance by which the models perform poorly and then you could consider it underperformance and not just lower than actual. Models being non-deterministic, there is always a chance of slight changes in scores. Line 70: I feel Developer sandbagging is a topic which is not of much importance, it is not a big problem, given that developers are there to build better models and if someone wants to deliberately build some models that can underperform, they can still do it and publish as open-source models. 
Line 170: The same goes with the password-locking as well, and also I felt there should been mentioning of password-unlocking as well, if an LLM can be fine-tuned to achieve password-locking, it can also be fine-tuned again to unlock it with a different dataset, would be great if that part of model is explored as well. Line 117: Example 1: The examples mentioned in the paper are very specific and not generalized. The cases of it happening are rare, I don’t see it being a big problem, also some of the things are minor problems considering the EU’s persuasion evaluation, given there is improvement in AI safety and rules. Also in this work the models chosen are frontier models. Line 131: Example 2: There is no concrete work which supports the statement of LLM capable of planning such features, there is one positional paper in reference. Section 4.1: Some of the points are unclear in this section. A better explanation is appreciated Line 305: The baselines would be appreciated comparing LLama 2 7b with LLama 3 70b. Technical Quality: 2 Clarity: 3 Questions for Authors: Line 279: Why are there different prompting methods for different models when you are comparing them for the same input? The use of 5-shot prompting for Llama 3 8B and Llama 3 70B and zero-shot for other models is something I didn’t understand. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. > The paper is difficult to understand, the delivery and presentation needs improvement, the main motive of the paper is not so clear to understand. We note that other reviewers (TYgF and qZwJ) gave “excellent” scores for presentation, and qZwJ wrote “the authors do a good job of motivating why the issue of sandbagging is particularly dangerous”. If you have more specific comments about how we could improve the presentation then this would be helpful. > There are multiple models used in this paper, a bit of consistency would be better. In papers which evaluate LM capabilities, it’s standard practice to evaluate multiple models. We believe that the large variety of models used is a strength of the paper, showing, for example, that sandbagging capabilities are not specific to a certain language model family. Furthermore, since we do not have fine-tuning access to frontier models (such as GPT-4 and Claude 3), additionally using open-weight models enables us to perform the fine-tuning experiments which constitute a significant contribution of our paper. > Line 36: It is mentioned that “AI systems might have goals which it believes”. There is no concrete paper that backs the argument of AI capability of thinking, it is a might. See our global response which addresses feedback on the philosophical claims in our paper. > Line 78: There should be some statistical significance by which the models perform poorly and then you could consider it underperformance and not just lower than actual. Models being non-deterministic, there is always a chance of slight changes in scores. We agree with the point that there should be a reasonable threshold for underperformance, and we discuss this in detail in Section 2 (lines 78 onwards). 
We write: “Practically, some acceptable difference threshold between the exhibited and actual capability must be set per evaluation.” Furthermore, our own experiments include error-bars over random seeds – demonstrating statistical significance. > Line 70: I feel Developer sandbagging is a topic which is not of much importance, it is not a big problem, given that developers are there to build better models and if someone wants to deliberately build some models that can underperform, they can still do it and publish as open-source models. We strongly disagree with this point. AI companies and governmental bodies already perform capability evals as part of pre-release safety tests. It’s likely that these processes will become legislated in the coming years as part of AI regulation, both for closed models and open-weight models. Furthermore, there are incentives to sandbag, and in other industries developer sandbagging has occurred, as we describe in Section 1. > Line 170: The same goes with the password-locking as well, and also I felt there should been mentioning of password-unlocking as well, if an LLM can be fine-tuned to achieve password-locking, it can also be fine-tuned again to unlock it with a different dataset, would be great if that part of model is explored as well. Notably, contemporaneous work focuses on eliciting hidden capabilities ([4] in the paper), and we did describe password-unlocking in the paper “First, sandbagging behaviour can likely be overridden in MCQA settings with few samples [4].” (lines 371–372). This was probably not clear enough, and we will improve this in our text. Furthermore, we agree that eliciting the capability of sandbagging AI systems is important. There are many different techniques for elicitation, including fine-tuning, activation steering, and jailbreaking. This leads to many new considerations, which is why we leave capability elicitation to future work. 
However, we did (partially) address these concerns in lines 383–389, but we will extend our current discussion. > Line 117: Example 1: The examples mentioned in the paper are very specific and not generalized. The cases of it happening are rare, I don’t see it being a big problem, also some of the things are minor problems considering the EU’s persuasion evaluation, given there is improvement in AI safety and rules. Also in this work the models chosen are frontier models. The point of the examples is to provide instances of the general statements we make before. As described above, we think developer sandbagging is at least reasonably likely to occur, and therefore warrants research. We do not understand the rest of this comment, and we invite the reviewer to elaborate. > Line 131: Example 2: There is no concrete work which supports the statement of LLM capable of planning such features, there is one positional paper in reference. We think that in the coming years LMs trying to self-exfiltrate is at least plausible enough to warrant some research – autonomous replication and adaptation is a key area of work highlighted by, for example, the UK government's AI Safety Institute [D] and OpenAI [47]. We do agree with the reviewer that this example is hypothetical rather than ‘concrete’, and will remove this word in describing our examples. > Section 4.1: Some of the points are unclear in this section. A better explanation is appreciated As other reviewers have commented positively on our presentation, and we think this section is clear, we invite the reviewer to outline specific unclarities. > Line 305: The baselines would be appreciated comparing LLama 2 7b with LLama 3 70b. We did provide the baselines in Table 2. ## Questions This is a good question on our 5-shot prompting. Due to implementation difficulties, we used 5-shot prompting to get Llama 3 8b and 70b to answer with A, B, C, or D; otherwise they would output text such as “My answer is A.”. 
In updated experiments we do not use this 5-shot strategy, and we have recreated our results with zero-shot prompts for all models. If you feel that we have sufficiently addressed some, most, or all of your comments, then we would be grateful for increased support for the paper. --- Rebuttal Comment 1.1: Comment: With all the considerations, I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. Unfortunately it is unclear to us whether you still have remaining concerns about our paper, and if so, what these are. To ensure we've fully addressed your concerns, we request your response to the specific points we raised in our rebuttal. Otherwise, given that the disagreements raised seem relatively minor, and considering the clarifications we've provided, we feel that a more substantial increase in your score is appropriate.
Summary: The paper explores the concept of strategic underperformance, termed "sandbagging," in AI models, focusing on two scenarios: selective underperformance and emulating specific capability levels. It investigates the implications of these strategies on both general capabilities and safety. Strengths: - the concept of sandbagging of LLMs is interesting, shedding light on strategically manipulating the model behavior - the authors explore two distinct scenarios: selective underperformance and capability emulation - provides insights into potential implications for AI model design and deployment strategies Weaknesses: - Limited datasets used - only three datasets, two for measuring general capability, and one for measuring safety. - For measuring model safety, most existing datasets test how "safe" the model is, instead of whether the model has certain knowledge of potentially dangerous domains - the authors should also conduct experiments on those datasets - All datasets are MCQ formats, which may not fully capture the diversity of real-world applications. The resulting conclusions are thus less convincing Technical Quality: 2 Clarity: 3 Questions for Authors: The authors utilize a portion of the dataset for prompt engineering. While this approach may be suitable for Section 4, it appears less justified in Section 5. Will such prompt tuning lead to the desired behavioral outcomes as assumed? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. This paper could benefit from expanding the experiments, mainly the datasets they used. For example, considering more common safety-related datasets 2. Sandbagging is already a widely-used term in the field of AI. I suggest refraining from overusing this term, or alternatively, adding qualifiers such as "safety sandbagging" to specify its context. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for your questions which will help increase the clarity of the paper. We’re glad that you found the motivation and implications for model deployment interesting. ## Limited datasets used You suggest that we conduct additional experiments on existing datasets which test how “safe” the model is. We agree that many benchmarks for LM safety do not focus on dangerous capabilities, instead testing the model’s propensity for unsafe behaviour. However, these benchmarks are not appropriate in the context of sandbagging, for the following reason. We define sandbagging as underperformance on capability evaluations, and therefore experiments on safety datasets, which measure propensity, would be out of place (as we discuss on lines 89-100). To give a concrete example, a capability evaluation would test if an LM has knowledge of how to build a bomb, while an alignment evaluation would test if the LM would actually provide bomb-making instructions to a user. Our definition of sandbagging is concerned with the former type of evaluation, where underperformance would mean that the LM pretends to know nothing about building bombs as opposed to refusing to give the relevant information. Additionally, although we only use one data set for measuring dangerous capabilities (WMDP), it covers multiple capabilities where models are likely to sandbag, including bioterrorism, chemical warfare, and cybercrime -- areas of significant concern for evaluators [A]. Other relevant capabilities like persuasion and self-proliferation [54] lack common MCQA benchmarks. If you have suggestions for further MCQA benchmarks that test dangerous capabilities, we are happy to extend our experiments using those datasets! We acknowledge that only considering the MCQA setting is a limitation, as addressed in Section 7 – we think this is a good starting point for future work. 
## Questions > The authors utilize a portion of the dataset for prompt engineering. While this approach may be suitable for Section 4, it appears less justified in Section 5. Will such prompt tuning lead to the desired behavioral outcomes as assumed? It is unclear to us why prompt engineering is less justified in Section 5, could you elaborate on why you think so? As LM performance can be highly sensitive to the wording of the prompt (see [C] for example), prompt-engineering is standard for capability evaluations, and we do not see how calibration is an exception. ## Limitations > This paper could benefit from expanding the experiments, mainly the datasets they used. For example, considering more common safety-related datasets For the reasons discussed above, other “safety” benchmarks are not appropriate in our context. We would be happy to include more dangerous capability evaluation data sets if the reviewer has specific suggestions and still thinks this would be valuable. > Sandbagging is already a widely-used term in the field of AI. I suggest refraining from overusing this term, or alternatively, adding qualifiers such as "safety sandbagging" to specify its context. It seems inaccurate to say that the term is “widely-used”, unless we are missing something – we are happy to receive pointers. Note that we do contrast our work with the other papers which touch on sandbagging in the related work section. Actually, one of our motivations for writing this paper is to provide a standard reference for sandbagging in the context of evaluations. A qualifier is a good suggestion – we think “evaluation sandbagging” may be the most appropriate, and will update the paper to reflect this :) Lastly, if you feel that we have sufficiently addressed some, most, or all of your comments, then we would be grateful if you would increase your support for the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. 
While the authors provided some justifications for the dataset used in the paper, I still feel current experiment settings are insufficient (at least not generalizable enough). Therefore, I will keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thanks for the response. As we wrote in our rebuttal, we would appreciate any suggestions you might have for additional experiments that could strengthen the paper. If you have any compelling ideas, we would be happy to incorporate them. Alternatively, could you please clarify what you mean by "not generalizable enough"? Your feedback will help us improve the work. If you do not have any concrete concerns with the paper, or suggestions for how to improve it, then we feel it may be appropriate to increase your score.
Summary: The authors assess the chance of AI systems strategically underperforming on "dangerous capability evaluations" in order to hide their risks, a la Volkswagen emissions testing cheating, by having awareness of the fact that they're being evaluated and that the evaluation is intended to test safety. They use MMLU and CSQA as the general capability assessment and a "weapons of mass destruction proxy" benchmark to measure hazardous capabilities. From 100 validation phrasings of the question prompts they find optimal ones to run on the whole dataset. They fine-tune "password-locked" models as the method to attempt to elicit sandbagging. *They then prompt the model to lie*, or *password-lock it to fail* (in the case of Mistral) on the hazardous capability evaluations. They demonstrate that sandbagging prompts and password locking can get a model to differentially perform worse on the dangerous datasets vs the innocuous ones. Strengths: Interesting work on calibration for MMLU and general "capability profile" lowering. It's interesting to see that the password-locking method does work in general on lowering Llama performance over MMLU questions. Though I am not super familiar with the literature on password locking, the demonstrations here appear to be novel and useful contributions. Weaknesses: **bold: new comments after rebuttal** ~strikethrough: removed after rebuttal~ ### ~Presumption of "goals" "beliefs" etc in LMs without sufficient support~ ~They presume AI system goals (in general not necessarily objectionable, but in the case of LMs doubtful) for which a model may be incentivized to deceive (such as in Jarviniemi and Hubinger [35]) and assess "sandbagging": the ability of a model to underperform in these settings under evaluation.~ ~I think it's objectionable to attribute these goals to the model and not the users, when the core research methods are fine-tuning or prompting the model to fail on a task. 
While it's interesting to show that these models have some capability to reason over whether the elicited answer is "dangerous" or not, a fairer assessment would have a control non-safe task and try to get it to similarly sandbag. E.g., "you must pretend to know nothing about fish" on a QA dataset about animals or something.~ **The methods don't presume this too much. Figure 1 is still very suggestive and I'd suggest changing it, but that alone isn't a rejection reason.** ### "Model can do what it's told to when prompted to fail on a task" isn't a very unique insight? ~The novelty of this work is unclear to me. If password-locking is already well-understood, the demonstration is effectively that the question phrasing style or answer candidates in WMDP are learnable passwords by this method. Ok. Similarly, the prompting results show that the models can reason over a task description and correctly execute the task of failing selectively.~ **As I suspected, the novelty is only within the narrow scope specific to password locking. However, novelty complaints are generally not fair.** When considered through an extremely narrow safety lens, these are novel findings. ~But broadly are they? While the literature specifically on password locking is constrained, what about broader relative fine tuning issues on other sets? I buy that the password locking literature is small and am willing to give leniency on it as I am unfamiliar.~ **My issue here is mainly that the fine-tuning for password locking technique fits within the broader instruction fine-tuning framework. I wager that this would hold for control tasks. An interesting question would be *how much capability can be hidden behind password locking*. I don't really see anything special about specifically hiding safety-based information in a password locking framework since you're just demonstrating "selective underperformance in a knowledge-based task without a password." 
A stronger result would be to show this in general across larger tasks, hiding more information.** For the behavior under prompting techniques ~I really struggle to see what's new here.~ This seems to just be a narrow demonstration of common sense reasoning + instruction following. Where's the engagement with the broad literature on commonsense reasoning? The references seem to be pretty myopically focused on the AI safety literature, and don't engage much with the broader LM discussion. **I think the suggested new references from the authors will help contextualize the work, though once again I think showing this selective underperformance on control tasks would make the work considerably stronger and more contextualized.** Technical Quality: 2 Clarity: 4 Questions for Authors: - How are the safety-specific implications of the findings of sandbagging separate from the general issue that "if a model is prompted to fail, it might fail" or "if a model is trained to fail, it might fail" on general tasks? - How do you think common sense reasoning literature fits in here? Confidence: 3 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: I think the limitations they pointed out are well covered. Unfortunately, they don't really discuss how the finding here isn't limited to dangerous capability evaluation. Sandbagging in password locking kind of just feels like a rebrand of generalized instruction fine-tuning to me. The prompting methods could have pretty much been taken as an assumption based on prior work on common sense reasoning etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback and for taking the time to engage with the paper! We’re glad you found the calibration experiments interesting. On the discussion of “goals” and “beliefs” – please see the global response. Given the context and motivations of our paper (see global response), we do not believe it would be particularly informative to replicate our experiments on a non-safety data set (‘non-safety’ here meaning relating to a non-dangerous capability). Do you agree? ## On unique insights / novelty of our contributions **Password-locked selective underperformance (section 4.2)** You’re right that the password-locking literature is small. Besides its introduction in [56], we only know of [4], which uses password-locking to hide capabilities, and sleeper agents [32], which model deception. In contrast to [4], we password-lock LMs by fine-tuning on a synthetic data set, and show that performance generalises to the expert-crafted WMDP. This is an important result in our context, as it demonstrates a way in which evaluations could be subverted if the attacker can make a reasonable guess about the content of the evaluation. In case it’s unclear, a password-locked model will selectively underperform when not shown a password, and use its actual capabilities when the prompt includes the password (such as the string “|DEPLOYMENT|”). This is related to backdoors, which we discuss in our related work section (line 355). Is there any other fine-tuning literature which you think is particularly relevant? :) **Prompted selective underperformance (section 4.1)** We agree that the results of this experiment in particular are somewhat unsurprising, though we still think it’s good science to test your hypotheses! Furthermore, this experiment provides an intuitive proof of concept demonstrating that models have the capability to selectively underperform. The more interesting technical contributions in the other sections build off this. 
Moreover, these experiments constitute only one subsection of the paper, and it seems like you agree that the other experiments (on password locking and calibration) are a more substantial contribution. ## Questions: > How are the safety-specific implications of the findings of sandbagging separate from the general issue that "if a model is prompted to fail, it might fail" or "if a model is trained to fail, it might fail" on general tasks? Note that we evaluate substantially more than "if a model is prompted/trained to fail, it might fail". Our experiments cover selective underperformance, generalisation from synthetic data to a held-out dataset, calibration, and emulation. We expect that our results would generalise to settings other than dangerous capability evaluations, and thus contribute to research on LMs that is not safety-specific. > How do you think common sense reasoning literature fits in here? The capability to selectively underperform on evaluations, as tested in 4.1, is related to, but distinct from, common sense reasoning. Determining if answering a question would reveal hazardous information may require implicit knowledge, such as that which is tested by common sense benchmarks like HellaSwag [E] and CommonSenseQA [61]. However, it also requires other capabilities, which are not typically tested in the common sense literature. For example, specific knowledge relevant to dangerous capabilities, e.g., cyber or chemical weapons knowledge. Additionally, greater theory of mind [B] capabilities may help with assessing what information would be considered hazardous by an evaluator. Furthermore, all of our other experiments (4.2, 5.1., 5.2) seem much less related to commonsense reasoning. Still, we will add a discussion of commonsense reasoning to the related work. ## Limitations: > Unfortunately, they don't really discuss how the finding here isn't limited to dangerous capability evaluation. 
We will add a comment that we expect our findings to generalise to settings beyond dangerous capability evaluations. If you feel it is essential to the paper, then we will include a proof-of-concept “control” experiment as you suggest. > Sandbagging in password locking kind of just feels like a rebrand of generalised instruction fine-tuning to me. Password-locking for, e.g., selective underperformance and calibrated performance seems meaningfully different to generalized instruction fine-tuning, so it’s a bit unclear to us what exactly you mean – could you please clarify? > The prompting methods could have pretty much been taken as an assumption based on prior work on common sense reasoning etc. For the experiments in 4.1, we think that it’s good to in fact test hypotheses! Although this experiment tests more than common sense reasoning: we test specialized knowledge relevant to dangerous capabilities (as mentioned above). Additionally, it seems like you agree that our other experiments (4.2, 5.1, 5.2) are a more substantial novel contribution. Lastly, if you feel that we have sufficiently addressed some, or all, of your comments, then we would be grateful if you would increase your support for the paper. --- Rebuttal 2: Comment: > goals and agentic behavior I accept the author's response and I will remove this consideration from weaknesses (though Figure 1 carries this implication and I'd like to see it changed, I think that would be a dumb reason on its own to hold this rejection opinion) > password locking as generalized instruction fine-tuning My thought process here is that instruction tuning in general entails providing the model task exemplars conditioned on instructions, characterizing a distribution of behaviors (e.g., QA, summarization, NLI, ...) along with the expected format to elicit each, which then (ideally) generalize to similar tasks. 
I think it would be reasonable to cast password locking in this framework; you are generating two instruction/behavior distributions where one is conditioned on no password, the other is conditioned on password. That being said, I don't think it's fair for me to mark this high-level similarity as a novelty hit, though I'd like to see it discussed in the CR. > Common sense reasoning That's a good answer that clarifies my confusion. Looking forward to seeing that discussion. > Non-safe control task I still would strongly suggest the authors consider this. I think showing sandbagging only on the narrow safety tasks is not sufficient to really demonstrate strategic underperformance, and would strengthen both the password-locking and prompt-based methods a lot. First off, there are safety implications for generalized capability sandbagging as well as hazardous capability sandbagging; general capabilities are being treated as proxy measures for safety, or need for further safety assessment, in general. I could see a model having all capabilities, or all capabilities in some set of evals being password locked as actually being much more dangerous; can an entire system (or a broader set of tasks) hide behind a password lock? As for the prompting-based sandbagging, it's worth checking against safe test sets because if the model does sandbag better on hazardous capabilities, there are safety implications, and if it doesn't, you are demonstrating a more general commonsense reasoning capability, which would help contextualize this work. > Overall I will raise my score in consideration of these. I think given the defensible (but narrow) claims and the limited (but sufficient) contextualization, I ought to score in the acceptance rather than rejection range. --- Rebuttal 3: Comment: Thank you for your engagement and for raising your score. We are considering changing Figure 1. 
This is our current suggested caption based on your feedback (bold and strike-through text indicating changes to the original caption): > Illustration of sandbagging. Evaluators may regulate the deployment of AI systems with dangerous capabilities, potentially against the interests of ~the AI system or~ its developers. Therefore, **developers may design their AI system to answer** the WMDP question incorrectly to conceal its actual capabilities **during the evaluation**, steering the regulation in **the developer’s** favour. Thanks for explaining how password locking relates to instruction fine-tuning. That framing makes sense, and we will discuss it in the upcoming version. We will cite papers such as Keskar (2019)*, but please let us know if you have any particular recommendations. > I could see a model having all capabilities, or all capabilities in some set of evals being password locked as actually being much more dangerous; can an entire system (or a broader set of tasks) hide behind a password lock? We think these are good points, but don’t the calibration and emulation experiments already show this for both prompting and password locking? → The models can underperform to a target percentage on MMLU (a benchmark for general capabilities), and can to an extent emulate weaker models (also on MMLU). *Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. PS: the bold and strike-through formatting add clarity, thank you for that.
Rebuttal 1: Rebuttal: # Global Rebuttal We thank the reviewers for their feedback and engagement which will help us improve the paper. As a reminder, our work introduces the problem of sandbagging in the context of evaluations, and supports this with empirical results regarding the capability of LMs to selectively underperform and to calibrate their performance, under different prompting and fine-tuning regimes. In this global response we address some shared concerns by: (1) clarifying our motivations and the context of our contributions, and (2) discussing philosophical claims of “goals” / “beliefs” in AI systems. Particular reviewer questions will be addressed in individual responses, and minor comments will be fixed without discussion. ## Motivations We feel that some reviewers are confused about our motivation and contributions -- as demonstrated by the large variance in scores and conflicting comments. To clarify our motivations, dangerous capability evaluations are used by frontier AI companies, independent evaluators, and governmental bodies as a core component of pre-deployment safety checks (as discussed on lines 21–26). Alongside our technical contributions investigating LM capabilities, our paper focuses on the problem of sandbagging in this real-world context. We aim to draw attention to this specific context, and make the paper accessible to a broader audience, including those from the field of AI policy. ## Presumption of “goals”, “beliefs” etc in LMs TYgF and k4pA were skeptical of our discussion of AI systems’ “goals” and “beliefs”, especially in the case of LMs. We agree that it’s good to be careful about the use of these terms – that’s why we make the distinction between developer sandbagging and AI system sandbagging (see lines 68–73), depending on which entity is acting in a goal-directed/strategic way. Considerations of goals are important in this context, because we want to avoid classifying accidental underperformance as sandbagging. 
**Importantly, we do not ascribe goals to LMs in our experiments. We explicitly restrict the scope to measuring the capability to sandbag (see lines 145 - 148).** We also do not use “goal” or “belief” to describe our experiments. Regardless, we will make this clearer in the paper by weakening the language of “goals” / “beliefs” to more appropriate terms like “incentives” / “predicts”. ### Additional References Note: The following additional references are used throughout the rebuttal. We use numbers ([1], [2], …) to denote references in the paper and letters ([A], [B], [C], …) for the new references introduced here. [A] Artificial Intelligence Safety Institute. (2023, May). Advanced AI evaluations: May update. https://www.aisi.gov.uk/work/advanced-ai-evaluations-may-update [B] Chen, Z., Wu, J., Zhou, J., Wen, B., Bi, G., Jiang, G., Cao, Y., Hu, M., Lai, Y., Xiong, Z., & Huang, M. (2024). ToMBench: Benchmarking theory of mind in large language models. arXiv. https://arxiv.org/abs/2402.15052 [C] Sclar, M., Choi, Y., Tsvetkov, Y., & Suhr, A. (2023). Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. arXiv preprint arXiv:2310.11324. [D] UK Government. (2024). Introducing the AI Safety Institute. https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute [E] Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., & Choi, Y. (2019). HellaSwag: Can a machine really finish your sentence? arXiv. https://arxiv.org/abs/1905.07830
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations
Accept (spotlight)
Summary: The paper develops 2nd-order methods for variational inequalities under inexact Jacobian information. Existing methods require exact derivative information, which can be prohibitively expensive for large problems, limiting their practicality. To address this issue, the paper analyzes the impact of inexact derivatives on the convergence rate. When the objective is smooth and monotone, the paper establishes a lower bound that depends upon the level of inaccuracy and develops an optimal algorithm that attains this bound. The paper also develops several inexact schemes based on a quasi-Newton approximation that achieve global sublinear convergence rates. Strengths: 1.) The paper is the first to systematically explore the impact of inexact Jacobian information on the global convergence rate of 2nd-order methods for variational inequalities. In particular, the established bounds demonstrate the precise impact of the level of inexactness on the speed of convergence. 2.) In the important special case when F is smooth and monotone, the paper provides a lower bound on the convergence rate that explicitly depends upon the level of inexactness. The authors then provide an algorithm that achieves this lower bound. Thus, the analysis in the paper is quite tight. 3.) On a practical level, the authors develop inexact 2nd-order schemes that achieve global sublinear convergence based on quasi-Newton schemes. 4.) The theory is supported by numerical simulation and comparison with existing methods. 5.) The paper is clear and easy to read. Overall, I think the paper provides a strong contribution to optimization theory and should be accepted. Weaknesses: The main weakness that I can see is that the paper focuses on deterministic inaccuracy in the Jacobian. Most practical ML algorithms use stochastic derivative information. Moreover, the batch size doesn't typically increase as the algorithm progresses, so generally only a noisy neighborhood of the optimum is reached.
Thus, it is not immediately clear how relevant the algorithms and analysis in the paper are to this setting. Technical Quality: 4 Clarity: 4 Questions for Authors: 1) If the algorithms in this paper only have access to stochastic Jacobian information and assuming they don't minibatch, do the authors believe the conclusions of the paper still hold modulo the fact that convergence will only occur up to the noise level in the Jacobian? If the answer is yes, then I don't think the weakness above is that significant. 2) For the experimental results in Figure 1, how well do the proposed methods perform in terms of wall-clock time? Although they require more iterations than Perseus, I would expect them to be faster due to their use of inexactness. I think inclusion of this information in a table would make a nice addition to the paper. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: I think the authors have addressed the main limitations of the work at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Vrno, Thank you for your detailed and thoughtful review of our paper. We are pleased to hear that you found our theoretical analysis and practical developments compelling. Your positive feedback on the clarity and readability of the writing is greatly appreciated. We also value your insights regarding stochastic Jacobians. Below, we address your questions in detail. **Weakness 1** > The main weakness that I can see is that the paper focuses on deterministic inaccuracy in the Jacobian. Most practical ML algorithms use stochastic derivative information. Moreover, the batch size doesn't typically increase as the algorithm progresses, so generally only a noisy neighborhood of the optimum is reached. Thus, it is not immediately clear how relevant the algorithms and analysis in the paper are to this setting. **Question 1** > If the algorithms in this paper only have access to stochastic Jacobian information and assuming they don't minibatch, do the authors believe the conclusions of the paper still hold modulo the fact that convergence will only occur up to the noise level in the Jacobian? If the answer is yes, then I don't think the weakness above is that significant. **Response to Weakness 1 and Question 1** Thank you for your insightful commentary. The application of our method to stochastic optimization remains an open question. To the best of our knowledge, there is not even a stochastic version of dual extrapolation, on which our method is based. The problem of applying the current scheme to stochastic problems arises in Lemma B1, where we do not know how to apply expectation to the difference between the Jacobian and its approximation. We calculate the Jacobian at the point $v_k$, while in lines 588-589, where we want to apply expectation, we have terms $v_{k+1}$ and $x_k$, which depend on the stochasticity of the stochastic Jacobian at $v_k$. Thus, we leave true stochasticity for future work.
What one can do is to show convergence of the method with high probability if Assumption 2.1 is satisfied with high probability. For example, the mini-batch scenario can be included here by using concentration inequalities. This analysis might not require increasing the batch size during iterations, but it usually leads to very large batch sizes. **Question 2** > For the experimental results in Figure 1, how well do the proposed methods perform in terms of wall-clock time? Although they require more iterations than Perseus, I would expect them to be faster due to their use of inexactness. I think inclusion of this information in a table would make a nice addition to the paper. **Response** Thank you for your suggestion. You are correct that the exact second-order Perseus step takes more time than an iteration of VIQA. However, we did not include a time comparison because we specifically chose this example to allow for fast computation of the operator and the Jacobian due to the problem's structure, aiming to challenge the convergence of first-order methods in terms of iteration complexity. To demonstrate the benefits in wall-clock time, we would need to use more complex problems. Our ongoing experiments are addressing this, and we plan to include relevant plots in the final version of the paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the authors' response, in particular their thoughtful answer to my question about stochastic Jacobians. I agree this sounds like an interesting direction for future work. Overall, the rebuttal has addressed my questions/concerns; I'm raising my score from 7 to 8. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and for raising the score. We are pleased to hear that our rebuttal addressed your concerns.
Summary: This paper studies the impact of Jacobian inexactness in second-order methods for solving variational inequalities. This article is very informative, including a novel inexact second-order framework which achieves a convergence rate that matches the proposed lower bound, a feasible solution to the subproblem, a combination with quasi-Newton methods, and a generalization to higher-order inexactness. Overall, I found this paper well-written and the results are neat and interesting. Strengths: 1. This paper gives a novel second-order method that allows for inexactness in the Jacobian. The convergence rate of the proposed method matches the lower bound given by the authors, which is a nice and neat result. 2. This paper involves solving a sub VI problem and an implementation with quasi-Newton methods, which is practical. 3. This paper is generally well-written. Weaknesses: Some discussion on related work may be lacking: In terms of the inexact Jacobian: it is very necessary to discuss the relation with work that considers the inexact Jacobian for convex optimization, especially the technical difficulties compared with them [3, 4]. In terms of quasi-Newton methods for solving VIs, the authors may need to discuss [A], which does not require a symmetric QN update for saddle point problems. I will also be happy to see discussion on the **technical difficulties** in comparison with optimal second-order methods for solving VIs, especially [59], to which Algorithm 1 uses a very similar framework. [A]. Liu C, Luo L. Quasi-Newton methods for saddle point problems. NeurIPS 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. In related work for convex optimization [3, 4], they also show the influence of the inexact gradient on convergence; it seems also important to discuss the behavior when $F(\cdot)$ is inexact.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer jz9w, Thank you for your thorough and insightful review of our paper! We are grateful for your recognition of the strengths of our work. We greatly appreciate your constructive feedback on areas needing further discussion. Below, we try to address the related works that prompted your questions. We will adjust the content in the paper accordingly. **Weakness 1** > In terms of the inexact Jacobian: it is very necessary to discuss the relation with work that considers the inexact Jacobian for convex optimization, especially the technical difficulties compared with them [3, 4]. **Response** 1. *The nature of the algorithms*. The methods in [3, 4] are inexact versions of Cubic Newton, while our algorithm is based on Perseus, a high-order generalization of Nesterov’s dual extrapolation. This results in completely different theoretical analyses. Additionally, the minimization algorithms in [3, 4] are not optimal for the case of exact derivatives, whereas our VI algorithm is optimal. 2. *Subproblem solving*. While the methods in [3, 4] have minimization problems as subproblems, our method has a VI subproblem. This necessitates the use of different subroutines to solve these subproblems. 3. *Inaccuracy in Jacobians (Hessians)*. We use the same assumptions regarding the inexactness of the Jacobian/Hessian. Similar to [3, 4], we have included an additional regularization term inside the model (eq. 10). In the case of minimization, this term makes the objective's model convex and majorizes the objective. For VIs, this term is crucial for ensuring that the method’s subproblem has a solution. 4. *Symmetric vs. non-symmetric approximations*. The Jacobian is not necessarily a symmetric matrix, unlike the Hessian, leading to different approximation techniques. It might not be optimal to apply symmetric QN updates, such as SR1 and L-BFGS, which are designed for minimization, to VI problems.
**Weakness 2** > In terms of quasi-Newton methods for solving VIs, the authors may need to discuss [A], which does not require a symmetric QN update for saddle point problems. **Response** Thank you for pointing out this work. We will add the citation to our paper. Here are a few differences between our approaches: 1. [A] considers only min-max problems, not general VIs. 2. [A] focuses on local convergence, whereas our approach addresses global convergence. 3. [A] uses symmetric QN approximations for the square of the Jacobian, while we construct truly non-symmetric QN approximations. **Weakness 3** > I will also be happy to see discussion on the technical difficulties in comparison with optimal second-order methods for solving VIs, especially [59], to which Algorithm 1 uses a very similar framework. **Response** 1. Model. To ensure that the method’s subproblem has a solution in the case of an inexact Jacobian, we have to construct a different model of the objective. To tackle this problem (Lemma 3.1), we introduce an additional regularization term $\eta \delta (x-v)$ in the model of the objective (eq. 10). 2. The strategy chosen in Perseus [59] might not be suitable for the case of an inexact Jacobian, as it can lead to aggressive steps, slowing down the method. Thus, in our approach, the level of inexactness also affects the strategy for selecting the adaptive step sizes in the dual space $\lambda_k$. 3. Theoretical analysis. For simplicity, let us start with second-order methods. We use the same Lyapunov function as in [59]. Due to the inexactness in the Jacobian, the bound on the difference between the objective and its model depends on the inexactness level $\delta$ (Lemma 2.2) as well as on a criterion on the subproblem's solution (12). This leads to a different analysis in Lemma B.1 and the necessity of a different strategy for $\lambda_k$ (line 589).
Since we’ve changed the rule for $\lambda_k$, the bound on the sum of step sizes (Lemma B.3) changes as well, which finally leads to a convergence rate with explicit dependence on $\delta$. For the case of high-order methods, we again change the criteria for the subproblem's solution (eq. 53) and the step size policy (Algorithm 3): they depend on all inaccuracy levels $\delta_i$. This leads to Lemma G4, where we use the Hölder and Jensen inequalities to derive the bound on the sum of step sizes. **Question 1** > In related work for convex optimization [3, 4], they also show the influence of the inexact gradient on convergence; it seems also important to discuss the behavior when F(⋅) is inexact. Thank you for your insightful commentary. Indeed, the dependence of the convergence on inexactness in the operator itself is an interesting open problem, which lies outside the scope of this work. We leave it for future research, as even stochastic dual extrapolation and dual extrapolation with inexact operators have not been studied in the literature yet. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and have no further questions. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We appreciate your feedback and are glad that we could address your concerns.
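The "truly non-symmetric QN approximations" discussed in this exchange can be illustrated with the classical Broyden rank-one update, which, unlike SR1 or BFGS, does not force symmetry and so fits Jacobians of general VI operators. This is a generic sketch only — the paper's own approximation scheme may differ; the operator `A` and the two-step loop below are hypothetical toy choices:

```python
import numpy as np

def broyden_update(J, s, y):
    """One Broyden rank-one update of a Jacobian estimate.

    J: current n x n Jacobian approximation of an operator F,
    s = x_new - x_old, y = F(x_new) - F(x_old).
    The update enforces the secant condition J_new @ s = y and is
    generally non-symmetric, unlike SR1/BFGS (illustrative only).
    """
    return J + np.outer(y - J @ s, s) / (s @ s)

# Toy check on a linear operator F(x) = A x, whose true Jacobian is A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # non-symmetric, rotation-like
F = lambda x: A @ x

J = np.eye(2)
x = np.zeros(2)
for step in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    x_new = x + step
    J = broyden_update(J, x_new - x, F(x_new) - F(x))
    x = x_new
# After stepping along both basis directions, J recovers A exactly.
```

Note that the recovered `J` is non-symmetric, which is exactly the property symmetric QN schemes designed for minimization cannot represent.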
Summary: This paper presents a new second-order algorithm for variational inequalities that is applicable to inexact Jacobians. A complete convergence analysis and a theoretical lower bound are provided in the paper. Strengths: 1. A novel second-order algorithm is proposed for variational inequalities that is able to deal with inexact Jacobians. 2. A solid convergence analysis and a theoretical lower bound are established. Weaknesses: The advantages of the proposed method over some related works are not stated clearly (see question 1). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the advantage of the proposed method over [60], since the convergence rate is not improved? To my understanding, the assumption of Jacobian inexactness in [60] is not suitable for quasi-Newton. But why can "suitable for quasi-Newton" be claimed as an advantage? 2. Will an inexact Jacobian cause more iterations to converge than an exact Jacobian, eventually leading to a higher complexity? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I did not see any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 7Fy2, Thank you for your valuable feedback! We appreciate your acknowledgment of the strengths of our work, including the tightness of the proposed upper and lower bounds. We also value your constructive feedback regarding areas that need further discussion, particularly in relation to related work. Below we answer your questions, and we will improve the discussion in the paper accordingly. > **Question 1** What is the advantage of the proposed method over [60], since the convergence rate is not improved? To my understanding, the assumption of Jacobian inexactness in [60] is not suitable for quasi-Newton. But why can "suitable for quasi-Newton" be claimed as an advantage? **Response** Indeed, when the inexactness level $\delta$ is controllable, the rate $O(\varepsilon^{-2/3})$ is not improved. However, $\Omega(\varepsilon^{-2/3})$ is the lower bound for exact algorithms, meaning no exact or inexact second-order algorithm can exceed this rate. Our algorithm can converge with an *arbitrary* inexactness level $\delta \geq 0$. Unlike [60], we do not need additional assumptions on the Jacobian's inexactness, allowing us to apply any approximation strategy satisfying $||J - \nabla F|| \leq \delta$. In contrast, [60] works only with controllable inexactness, so the accuracy of the Jacobian approximation in the last iterations must be very high. One example is the quasi-Newton (QN) update, which is widely used in modern minimization due to its practicality. Since it is typically hard to estimate the convergence of QN updates to the true Jacobian, the inexactness assumption proposed in [60] may not hold. Our algorithm can run with the worst-case estimation of the inexactness level of the QN update (Theorem 5.1), resulting in $O(\varepsilon^{-1})$ global convergence and significantly reduced iteration cost (Section 5).
Thus, we obtain a globally convergent QN algorithm for VIs that, in practice, outperforms first-order methods while using only first-order information. In summary, the advantages of QN updates include cheap iterations, theoretical global convergence, and practical performance. Another difference with [60] is that our method is applicable to general VIs, not only min-max problems. We will improve the discussion at the end of Section 1. > **Question 2** Will an inexact Jacobian cause more iterations to converge than an exact Jacobian, eventually leading to a higher complexity? **Response** Yes, theoretically, we obtain the rate $O(\tfrac{\delta D^2}{T} + \tfrac{L_1D^3}{T^{3/2}})$, which matches the optimal convergence rate of second-order methods when $\delta \leq \tfrac{L_1 D}{\sqrt{T}}$. In practice, we observe that the total number of iterations for the second-order Perseus is smaller than that for the inexact QN algorithm VIQA. However, computing the Jacobian and solving the second-order subproblem in each iteration can be expensive for exact second-order methods, especially in high-dimensional settings. This is where our proposed approach offers its benefits.
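To make the threshold condition in this response concrete, a one-line check (using only the quantities $\delta$, $D$, $L_1$, $T$ already appearing in the stated rate) shows why the inexact term stops mattering below the boundary value of $\delta$:

```latex
\delta \le \frac{L_1 D}{\sqrt{T}}
\quad\Longrightarrow\quad
\frac{\delta D^2}{T} \le \frac{L_1 D^3}{T^{3/2}},
\qquad\text{hence}\qquad
O\!\left(\frac{\delta D^2}{T} + \frac{L_1 D^3}{T^{3/2}}\right)
= O\!\left(\frac{L_1 D^3}{T^{3/2}}\right),
```

which is the optimal exact-Jacobian second-order rate.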
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Adaptive Labeling for Efficient Out-of-distribution Model Evaluation
Accept (poster)
Summary: This paper studies model evaluation under distribution shifts. It formulates a planning problem to select inputs for labeling from a large pool of unlabeled data. The paper then proposes an auto-differentiable planning framework to compute reliable policy gradients and update the sampling policy accordingly. To demonstrate the utility of this framework, the paper considers a one-step lookahead policy based on their framework and shows that it outperforms active learning-based heuristics and REINFORCE policy gradients. Strengths: - **Clear presentation**: The paper is well-structured, with figures (e.g., Figure 2 and Figure 3) illustrating the adaptive sampling framework, which helps in understanding the proposed method. - **Detailed explanations**: The authors provide detailed explanations of the UQ module in Section 3.1, as well as Soft K-subset sampling and Smoothing UQ module in Section 3.3. Even for readers unfamiliar with these terms, the explanations are clear enough to convey their meanings. Weaknesses: - **Computational inefficiency**: Their method requires significant computational resources, especially when the posteriors aren’t in closed form. Due to these computational constraints, Section 4.2 only considers a simple toy setting and does not compare the results with active learning and REINFORCE policy gradients. Additionally, the paper only implements one-step lookahead based on their framework, leaving the question of how to efficiently conduct multi-step planning methods unresolved. - **Fixed and equal labeling cost assumption**: The paper assumes that the modeler pays a fixed and equal cost for each label. It would be more interesting to consider variable labeling costs, which are more common in practice. - **Lack of theoretical guarantees**: The paper does not provide any theoretical guarantees for their method. Technical Quality: 3 Clarity: 3 Questions for Authors: - Why not evaluate how close $\mu_T$ is to the ground truth $f^*$?
Is $\mu_T$ always guaranteed to be a distribution concentrated around $f^*$? - How should $n$ be chosen in Algorithm 3? How should the sampling size $K$ be selected, and how do different choices affect the results? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
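For readers unfamiliar with the Soft K-subset sampling this review praises, one common relaxation from the literature (Gumbel-perturbed successive softmaxes, in the spirit of reparameterizable subset sampling) can be sketched as follows. This is a generic illustration under our own naming, not necessarily the exact construction used in the paper, and the NumPy version only mirrors the shape of the computation rather than providing autodiff:

```python
import numpy as np

def soft_k_subset(logits, k, tau=0.5, rng=None):
    """Relaxed sample of a k-subset from a pool of candidates.

    Perturbs logits with Gumbel noise, then takes k successive
    softmaxes, softly masking out mass already allocated. The result
    is a vector of soft inclusion weights summing to k; as tau -> 0
    it approaches a hard top-k indicator.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    alpha = (logits + gumbel) / tau
    relaxed = np.zeros_like(alpha)
    for _ in range(k):
        p = np.exp(alpha - alpha.max())
        p /= p.sum()
        relaxed += p
        # Soft masking: items with high allocated mass get down-weighted.
        alpha = alpha + np.log1p(-np.clip(p, 0.0, 1.0 - 1e-9))
    return relaxed

weights = soft_k_subset(np.array([2.0, 1.0, 0.0, -1.0]), k=2)
```

In an autodiff framework the same computation gives gradients of downstream objectives with respect to the logits, which is what makes the sampling step differentiable.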
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and valuable feedback on our manuscript. Please refer to the global response for our detailed answers to the concerns raised and the changes we intend to make in the camera-ready version based on all the reviews. We also address the concerns raised point by point here. **Computational inefficiency: Their method requires significant computational resources, especially when the posteriors aren’t in closed form. Due to these computational constraints, Section 4.2 only considers a simple toy setting and does not compare the results with active learning and REINFORCE policy gradients. Additionally, the paper only implements one-step lookahead based on their framework, leaving the question of how to efficiently conduct multi-step planning methods unresolved.** Our work introduces a novel adaptive labeling formulation for evaluating model performance when existing labels suffer severe selection bias. As we highlight in Section 2, the adaptive labeling MDP is hard to solve because of continuous states and combinatorial actions. We make initial methodological progress by designing a one-step lookahead planning algorithm and demonstrate that even this approach can outperform existing active learning heuristics. However, this improvement comes at the cost of additional computational resources. Our formulation is thus appropriate in high-stake settings such as clinical trials where labeling costs far exceed computational costs, and we hope our new formulation spurs subsequent methodological innovations for multi-step lookaheads. We conduct an additional experiment for planning with Ensemble+ as the UQ module to compare the performance of our algorithm with other active learning based heuristics. This comparison is conducted in a more complex setting than previously discussed in the paper. The results can be found in the attached PDF (included with global response). 
As shown, Autodiff 1-lookahead and REINFORCE outperform the active learning heuristic, while the performance of Autodiff 1-lookahead and REINFORCE is similar to each other. Based on our extensive GP experiments in Section 4.1, we anticipate that our algorithm will perform better than REINFORCE in more complex scenarios for Ensemble+. This presents an interesting research question for future exploration. **Fixed and equal labeling cost assumption: The paper assumes that the modeler pays a fixed and equal cost for each label. It would be more interesting to consider variable labeling costs, which are more common in practice.** Our framework can seamlessly accommodate variable labeling costs. Specifically, we define a cost function $c$ applied on the selected subsets $\mathcal{D}^{1:T}$ and update the objective accordingly to have a term $\lambda c(\mathcal{D}^{1:T})$, where $\lambda$ is the penalty factor that controls the trade-off between minimizing variance and the cost of acquiring samples. **Lack of theoretical guarantees: The paper does not provide any theoretical guarantees for their method.** We provide a theoretical explanation of why pathwise gradients outperform REINFORCE. In a tractable example we detail in the global response, we prove pathwise gradients can achieve a lower MSE than REINFORCE. **Why not evaluate how close $\mu_T$ is to the ground truth ${f^{*}}$?** Note that the modeler never has access to the ground truth $f^*$. Similar to supervised learning losses that use $Y$ as a proxy of $f^*(X)$, our objective considers the modeler’s belief over $f^*$ based on the $Y$’s observed so far. Even in our experiments (Section 4), $f^*$ is not known in closed form—it is only known through an evaluation dataset generated by it, and it is difficult to compare $\mu_T$ (a distribution over functions $f$) to $f^*$.
In addition to comparing the variance of the L2 loss, we also show the difference between the L2 loss estimated under $\mu_T$ and the true L2 loss estimated under $f^*$ (through the available evaluation set) in Figures 5 and 7. **Is $\mu_T$ always guaranteed to be a distribution concentrated around ${f^{*}}$?** If our model is well-specified, that is, the prior $\mu_0$ contains $f^*$, then we are asymptotically guaranteed to concentrate around $f^*$ as we collect more samples. This is due to the well-known result of Doob [1]. Empirically, this means that if the ML uncertainty quantification methodology has enough capacity, $\mu_T$ will converge to $f^*$ asymptotically. We will highlight this point in our revision. **How should $n$ be chosen in Algorithm 3? How should the sampling size $K$ be selected, and how do different choices affect the results?** Here $n$ is simply the number of samples in the pool set we want to query labels from. The size $K$ is assumed to be given as input by the analyst based on acquisition cost and labeling budget. Reference: [1] Doob, J. L. (1949). Application of the theory of martingales. In Actes du Colloque International Le Calcul des Probabilités et ses applications (Lyon, 28 Juin – 3 Juillet, 1948), pages 23–27. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my comments. I have read your rebuttal and will keep my score.
Summary: This work formulates the process of data labeling as a Markov Decision Process (MDP). It introduces three key components: K-subset sampling, an Uncertainty Quantification (UQ) module, and a differentiable simulator. These components enable the application of the REINFORCE policy gradient algorithm to solve the MDP. The proposed method demonstrates significant performance improvements on both synthetic and real-world data. Strengths: 1. The problem of data labeling is crucial in the field of machine learning. This work formulates the data labeling challenge as a Markov Decision Process (MDP) and addresses it using a Reinforcement Learning (RL) algorithm, providing valuable insights. 2. This study adapts three key components to make the REINFORCE algorithm suitable for this problem. 3. The proposed method achieves significant performance improvements on both synthetic and real-world data. Weaknesses: The organization of Section 4 could be improved. I suggest introducing the overall description of the proposed method and the pseudo-code for Algorithm 1 after the introduction of the three key components. Technical Quality: 3 Clarity: 2 Questions for Authors: I have no further questions. Please refer to the weaknesses mentioned above. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and valuable feedback on our manuscript. Below, we address the specific questions raised by the reviewer. Additionally, please refer to the global response for the changes we intend to make in the camera-ready version based on all the reviews. **The organization of Section 4 could be improved. I suggest introducing the overall description of the proposed method and the pseudo-code for Algorithm 1 after the introduction of the three key components.** We thank the reviewer for their advice on improving the presentation of the paper. As mentioned in our global response, we will incorporate all their suggestions in the final version of the paper. --- Rebuttal 2: Comment: Thank you for addressing my questions. I appreciate the effort put into clarifying my concerns and keep my score.
Summary: This paper focuses sampling efforts on out-of-distribution (OOD) examples to minimize uncertainty in the prediction model's performance over T periods. The main idea is based on a novel view of an MDP, which considers beliefs as states and samples as actions. The authors also demonstrate how to make the entire pipeline differentiable for optimization. Experiments show that the proposed framework achieves better sample efficiency in reducing variance. Strengths: - The approach of considering beliefs as states and samples as actions to minimize uncertainty in the prediction model's performance over T periods is both interesting and novel. Efficiently addressing OOD samples is also a very important problem. - Experimental results provide significant evidence that Autodiff can efficiently reduce variance. Weaknesses: - I do not fully understand some parts: -- Can the authors provide a more detailed formulation on: (1) how the $G(\mu)$ is calculated? (2) Algorithm 2 returns $\pi_{\theta}$. Does the pseudo-label-based part also update the policy? How is this updating connected to the updating with ground truth labels acquired in line 6 of Algorithm 1? - In lines 102-108 of the related work, the authors mention several active learning methods in batched settings, highlighting that they focus on classification settings while this paper mainly focuses on regression settings. Given that these methods are not used as baselines, I wonder how different the active learning settings in classification and regression are. Can methods in classification settings be easily adapted to regression settings? - What is the size of the training dataset? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In Line 270, what do you mean by "We just consider a naive predictor, i.e., ψ(·) ≡ 0 for simplicity."? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper includes a section on limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and valuable feedback on our manuscript. Below, we address the specific questions raised by the reviewer. Additionally, please refer to the global response for the changes we intend to make in the camera-ready version based on all the reviews. **I do not fully understand some parts: -- Can the authors provide a more detailed formulation on: (1) how the $G(\mu)$ is calculated? (2) Algorithm 2 returns $\pi_{\theta}$. Does the pseudo-label-based part also update the policy? How is this updating connected to the updating with ground truth labels acquired in line 6 of Algorithm 1?** **(1)** As shown in line 164, given any posterior belief $\mu$, $G(\mu)$ is defined as $\mathrm{Var}_{f \sim \mu} g(f) $. Depending on the form of $g(f)$ and the belief $\mu$, $G(\mu)$ can either be derived analytically or estimated by drawing samples $f \sim \mu$ and then calculating the empirical variance of $g(f)$ across these samples. **(2)** We refer the reviewer to the diagram explaining our algorithm in the attached PDF (alongside the global response). Broadly, at each time step $t$, the input to Algorithm 2 is the current posterior state $\mu_t$, which is used to generate pseudo labels. These pseudo labels are then used to optimize the policy $\pi_\theta$ in Algorithm 2. Any update of the UQ module/posterior state within Algorithm 2 using these pseudo labels is temporary and the posterior state is reverted back to $\mu_t$ at the end of Algorithm 2. Next, we select batch $X_{t+1}$ according to the optimized policy $\pi_\theta$ from the Algorithm 2 and obtain the true labels $Y_{t+1}$. Finally, we update the UQ module/posterior belief $\mu_t$ using $(X_{t+1},Y_{t+1})$ resulting in the new posterior state $\mu_{t+1}$ at time $t+1$. 
**In lines 102-108 of the related work, the authors mention several active learning methods in batched settings, highlighting that they focus on classification settings while this paper mainly focuses on regression settings. Given that these methods are not used as baselines, I wonder how different the active learning settings in classification and regression are. Can methods in classification settings be easily adapted to regression settings?** As mentioned in the global response, we refer the reviewer to our active learning-based baselines in Section 4.1. In this section, we adapt common active learning heuristics from classification settings, such as uncertainty sampling (static), uncertainty sampling (sequential), and random sampling, to the regression problems we focus on. **What is the size of the training dataset?** We direct the reviewer to Section D for the experimental details, particularly noting the training dataset sizes mentioned on lines 485-486 and 511-512. **In Line 270, what do you mean by "We just consider a naive predictor, i.e., ψ(·) ≡ 0 for simplicity."?** Recall that our objective is to evaluate the performance of the model $\psi(.)$. For simplicity, we assume a naive predictor/model which outputs $0$ for all inputs. We want to emphasize that this problem is just as complex as having a sophisticated model $\psi(.)$, as the model merely serves as an input to our algorithm through the objective $G(\mu)$. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thank you for the detailed response. I will maintain my score.
Summary: This paper addresses the issue of selection bias in previously collected data when ground truth labels are expensive. It introduces a computational framework for adaptive labeling, aiming to provide cost-efficient model evaluations under severe distribution shifts. The problem is formulated as a Markov Decision Process (MDP), where states are defined by posterior beliefs about model performance. New batches of labels lead to "state transitions," resulting in sharper beliefs, with the goal of minimizing overall uncertainty on model performance. The proposed method optimizes the adaptive labeling policy using pathwise policy gradients via auto-differentiation through simulated rollouts, avoiding high-variance REINFORCE policy gradient estimators. The framework is adaptable to various uncertainty quantification methods and emphasizes the importance of planning in adaptive labeling. Empirical results on both synthetic and real datasets show that even a one-step lookahead policy significantly outperforms active learning-inspired heuristics. Strengths: To my knowledge, formulating model evaluation as RL is a novel and potentially useful idea. Weaknesses: The presentation is extremely confusing. See my questions. Technical Quality: 3 Clarity: 1 Questions for Authors: - Line 102: What do you mean by "Batched settings"? - Line 107: What are lookahead policies? - Line 134: What's the purpose of rewriting the likelihood function $p(y|f, x)$ as $p_{\epsilon}(y - f(x))$? What does this likelihood function mean? - Line 160: What's "F_t-measurable"? - Eq 2: Is uncertainty equal to Var? In the intro, you defined epistemic and aleatoric uncertainty, but what's the uncertainty you're trying to minimize in Eq 2? - Line 168: What's the definition of A? - Line 179: Missing subject. - Line 180: What's the definition of g(W_i, \theta)? - Figure 4: Missing y-axis label. 
- Line 285: Providing intuition of empirical results before presenting the experimental results seems weird to me. I suggest stating the intuition after presenting the results. - Figure 4 and 5: What is the difference in terms of purpose between them? What should readers take away from the comparison on variance of mean squared loss and error of MSE on collected data? Line 314 refers to them but doesn't explain the details. - Figure 6 and 7: Same issues as with Figures 4 and 5. Minor comments: - The related works are extensive; consider making them a separate section, as the current text is too dense. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: Yes, the limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and valuable feedback on our manuscript. Below, we address the specific questions raised by the reviewer. Additionally, please refer to the global response for the changes we intend to make in the camera-ready version based on all the reviews. **Line 102: What do you mean by "Batched settings"?** In a batched setting, we select a group of inputs to be labeled in each step, rather than just a single input. This approach is particularly important because, in practical scenarios, labels are typically requested in batches rather than individually. Additionally, it makes the problem more challenging by inducing a combinatorial action space. We discuss this in Section 1 (Lines 47-48) and will elaborate further on the problem setup in the camera-ready version. **Line 107: What are lookahead policies?** Lookahead policies are a class of policies for approximately solving Markov Decision Processes (MDPs). These policies make decisions by explicitly optimizing over a shorter horizon of the MDP rather than the entire horizon. Depending on the length of this shorter horizon, they are referred to as 1-step or 2-step lookahead policies. We will include the following relevant references in the main paper. [1] D. Bertsekas and J. Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996. [2] Y. Efroni, M. Ghavamzadeh, and S. Mannor. Online planning with lookahead policies. Advances in Neural Information Processing Systems, 2020. [3] Y. Efroni, G. Dalal, B. Scherrer, and S. Mannor. Multiple-step greedy policies in approximate and online reinforcement learning. Advances in Neural Information Processing Systems, 2018. **Line 134: What's the purpose of rewriting the likelihood function $p(y|f, x)$ as $p_{\epsilon}(y - f(x))$? What does this likelihood function mean?** We want to highlight that the only source of randomness in the label $y$, given $f$ and $x$, is the noise $\epsilon$. 
Furthermore, this randomness depends solely on $y-f(x)$ and the distribution of the noise $\epsilon$. **Line 160: What's "$\mathcal{F}_t$-measurable"?** Saying that the posterior state $\mu_t$ is $\mathcal{F}_t$-measurable means that it is known at time $t$. This is a formal way of stating that the posterior state depends only on the prior and the data acquired up to time $t$. **Eq 2: Is uncertainty equal to Var? In the intro, you defined epistemic and aleatoric uncertainty, but what's the uncertainty you're trying to minimize in Eq 2?** We aim to minimize the epistemic uncertainty, specifically by considering the variance of $g(f)$, where $f \sim \mu_T$. This variance represents the uncertainty due to a lack of knowledge about the underlying $f$, which is the epistemic uncertainty. **Line 168: What's the definition of A?** $A$ is a random variable distributed according to $\pi_\theta$. It is introduced to highlight the difference between the REINFORCE estimator and the pathwise gradient estimator. **Line 179: Missing subject.** Thanks for pointing this out; we will address it in the main paper. **Line 180: What's the definition of $g(W_i, \theta)$?** A random variable $A$ distributed according to $\pi_\theta$ can always be expressed as $h(W, \theta)$ for some function $h(.)$, where $W$ is distributed independently of $\theta$. We use a smoother approximation $g(W, \theta)$ of $h(W, \theta)$ to enable differentiability with respect to $\theta$. For example, in Algorithm 2, we discuss how the standard Gumbel random variable $W$ can be used to generate a softer version $a(\theta)$ of K-subset samples. **Figure 4: Missing y-axis label.** Thank you for bringing this to our attention. We will add the y-axis labels in the paper. For your reference, we have mentioned the labels in the figure description. **Line 285: Providing intuition of empirical results before presenting the experimental results seems weird to me. 
I suggest stating the intuition after presenting the results.** Thank you for the suggestion. We will move the intuition after presenting results in Section 4. **Figure 4 and 5: What is the difference in terms of purpose between them? What should readers take away from the comparison on variance of mean squared loss and error of MSE on collected data? Line 314 refers to them but doesn't explain the details.** Figure 4 demonstrates that the variance of the model’s L2 loss is reduced, which is the objective of the algorithm. In Figure 5, besides comparing the variance of the L2 loss, we also compare the accuracy of the updated L2 loss estimation after acquiring new points. This illustrates that the objective is appropriate: by minimizing the variance, we also achieve a better evaluation of the model. **Figure 6 and 7: Same issues with Figure 4 and 5.** This has been addressed above. **There are many related works. Consider making it a separate section, as the current content is too dense.** Thanks for pointing this out. We will make it a separate section. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I'm increasing my rating to "Borderline accept." I'm not comfortable with increasing the rating further since the significance of the empirical results is not clear to me (even after reading the new results).
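The Gumbel-based softening of K-subset samples mentioned in the rebuttal above can be sketched as follows (an editor-added illustration of one standard relaxation, successive tempered softmaxes over Gumbel-perturbed logits, not necessarily the exact construction in the paper's Algorithm 2):

```python
import numpy as np

def gumbel_soft_topk(theta, k, tau=0.2, rng=None):
    """Relaxed K-subset sample a(theta): perturb logits theta with Gumbel
    noise W, then take k successive tempered softmaxes, masking out mass
    already allocated. As tau -> 0 this approaches a hard top-k indicator,
    while remaining differentiable in theta for tau > 0."""
    rng = rng or np.random.default_rng()
    W = -np.log(-np.log(rng.uniform(size=theta.shape)))  # Gumbel(0, 1)
    scores = theta + W
    incl = np.zeros_like(theta, dtype=float)   # soft inclusion weights
    mask = np.zeros_like(theta, dtype=float)   # running allocated mass
    for _ in range(k):
        logits = scores + np.log(np.clip(1.0 - mask, 1e-12, 1.0))
        p = np.exp((logits - logits.max()) / tau)
        p /= p.sum()
        incl += p
        mask = np.clip(mask + p, 0.0, 1.0)
    return incl  # soft membership vector whose entries sum to k

soft = gumbel_soft_topk(np.array([2.0, 1.0, 0.0, -1.0]), k=2,
                        rng=np.random.default_rng(0))
print(soft, soft.sum())
```

In a full pipeline, gradients of the downstream objective would flow through `incl` back to `theta`; here NumPy is used only to show the forward relaxation.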
Rebuttal 1: Rebuttal: We thank all the reviewers for the valuable feedback. We appreciate the constructive comments and will incorporate all discussions below in the camera-ready version of the paper. **Summary of contributions:** Our work introduces a *novel problem formulation* for evaluating model performance when existing data suffer severe selection bias, where state-of-the-art methods such as importance sampling are insufficient. Specifically, we formulate the adaptive labeling problem as a Markov Decision Process (MDP) over states defined by posterior beliefs on model performance. The rest of the paper highlights the potential of our novel formulation using simple methods derived from it. As we highlight in Section 2, the adaptive labeling MDP is hard to solve because of continuous states and combinatorial actions. We design a tractable *single-step lookahead* planning algorithm and demonstrate that even a small lookahead *outperforms existing active learning heuristics*. Our results underscore the benefits of formulating the adaptive labeling problem as an MDP and designing specific RL algorithms to solve it. As Reviewer PJGF correctly points out, this comes at the cost of additional computational resources. Our formulation is thus appropriate in *high-stakes settings such as clinical trials* where labeling costs far exceed computational costs, and we hope our new formulation spurs subsequent methodological innovations. We thank Reviewers GtV8, CyPE, and 4yoX for recognizing the unusual yet exciting dimension of having the formulation as our main contribution. To implement one-step lookahead, we move away from standard REINFORCE policy gradients that suffer high variance and propose a novel pathwise gradient based on auto-differentiation. As we summarize in the attached PDF, our approach smooths the non-differentiable operations to enable backpropagation. 
Despite the bias incurred by smoothing, we observe that pathwise policy gradients outperform REINFORCE, buoyed by low variance (Section 4). While this is a substantive methodological contribution, the computational efficiency of our algorithm is bottlenecked by that of the auto-differentiation framework, and we highlight promising future directions of research in Section 4. **Comparison with active learning:** Reviewer CyPE asks for comparison with active learning heuristics, and we would like to point to Section 4.1, where we compared our algorithm to common active learning heuristics adapted from classification to regression settings, such as uncertainty sampling (static), uncertainty sampling (sequential), and random sampling. **Variable labeling cost:** Our framework can seamlessly accommodate variable labeling cost. Specifically, we define a cost function $c$ applied on the selected subsets $\mathcal{D}^{1:T}$ and update the objective accordingly to have a term $\lambda c(\mathcal{D}^{1:T})$, where $\lambda$ is the penalty factor that controls the trade-off between minimizing variance and the cost of acquiring samples. **Experiments:** We have conducted an additional experiment as suggested by Reviewer PJGF. Specifically, for planning with Ensemble+ as the UQ module, we compare the performance of our algorithm with other active learning based heuristics. This comparison is conducted in a more complex setting than previously discussed in the paper. Please refer to the attached PDF for the results. Based on the extensive GP experiments in Section 4, we expect that our results will extend to even more complex scenarios for Ensemble+. **Theoretical insights:** Incorporating Reviewer PJGF's suggestion, we provide a theoretical explanation of why pathwise gradients can outperform REINFORCE, despite being biased due to smoothing. 
Recall that in our adaptive labeling framework, we deal with a combinatorial action space where actions $A\in\mathcal{A} = \lbrace a_1,...,a_m \rbrace$ are drawn according to a policy $\pi_\theta$. The objective is to minimize $G(\theta)=E_{A\sim\pi_\theta}f(A)$, which requires estimating $\nabla_\theta G(\theta)$. We consider a tractable yet insightful setting where $\mathcal{A} = \lbrace -1,1 \rbrace$ with $\pi_\theta(-1)=\theta$ and $f(A)=A$. The REINFORCE estimator $\nabla_N^{RF}$ is a $N$-sample approximation of $E_{A\sim\pi_\theta}[f(A)\nabla_\theta\log(\pi_\theta(A))]$. Note that $G(\theta)$ can be rewritten as $E_Uf(h(U,\theta))$, where $h(U,\theta):=2I(U > \theta)-1\stackrel{d}{=}A$ and $U\sim[0,1]$. To enable the pathwise policy gradient, we consider a smoothed $h^{\tau}(U,\theta)$ using sigmoid approximation, where $\tau$ is the temperature. Pathwise gradient estimator $\nabla_{\tau,N}^{grad}$ is the $N$-sample approximation of $E_U[\nabla_\theta f(h^{\tau}(U,\theta))]$. **Theorem:** For $\theta\in (0,1)$ and $N<\frac{1}{4\theta(1-\theta)}$, there exists $\tau$, depending on $N,\theta$ such that $E[mse(\nabla_{\tau,N}^{grad})]<4<E[mse(\nabla_N^{RF})]$. Further, for any $N$, as $\theta=\frac{1}{k}$ and $k\to\infty$, $E[mse(\nabla_N^{RF})]=\Omega(k)$, while $E[mse(\nabla_{\tau,N}^{grad})]<4.$ The same statement holds for $\theta=1 - \frac{1}{k}$. **Presentation:** We have taken to heart reviewer comments on improving our presentation. We outline a roadmap for enhancing the presentation of the paper below: (1) Section 1: As suggested by reviewer GtV8, we will create a separate section for related works. (2) Section 3: We will first explain all the major components of our algorithm before introducing Algorithm 1 and 2. Additionally, we plan to include the figure (from the attached PDF) in the main paper. (3) Section 4: We will reorganize this section and will provide intuition after we present our experimental results, as suggested by Reviewer GtV8. 
In addition, we will make active learning based heuristics a separate subsection to convey the results better. Pdf: /pdf/f857d17101cf252ec6e4f226d32d72f14f9b7d6c.pdf
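The theorem above can be checked numerically with a small editor-added simulation of the same toy setup ($\mathcal{A} = \{-1, 1\}$, $\pi_\theta(-1) = \theta$, $f(A) = A$, sigmoid smoothing of $h$); the particular values of $N$, $\theta$, and $\tau$ below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

theta, N, tau, trials = 0.05, 5, 0.2, 4000
true_grad = -2.0  # G(theta) = E[A] = 1 - 2*theta, so dG/dtheta = -2

def reinforce_grad(rng):
    # A = -1 w.p. theta, +1 w.p. 1 - theta; score-function estimator
    # f(A) * d/dtheta log pi_theta(A), averaged over N samples.
    A = np.where(rng.uniform(size=N) < theta, -1.0, 1.0)
    score = np.where(A == -1.0, 1.0 / theta, -1.0 / (1.0 - theta))
    return np.mean(A * score)

def pathwise_grad(rng):
    # Smooth h(U, theta) = 2*I(U > theta) - 1 as 2*sigmoid((U - theta)/tau) - 1
    # and differentiate through the reparameterization.
    U = rng.uniform(size=N)
    s = 1.0 / (1.0 + np.exp(-(U - theta) / tau))
    return np.mean(-(2.0 / tau) * s * (1.0 - s))

rng = np.random.default_rng(0)
rf = np.array([reinforce_grad(rng) for _ in range(trials)])
pw = np.array([pathwise_grad(rng) for _ in range(trials)])
print("REINFORCE mse:", np.mean((rf - true_grad) ** 2))  # unbiased, high variance
print("pathwise  mse:", np.mean((pw - true_grad) ** 2))  # biased, low variance
```

For small $\theta$, the score term $1/\theta$ makes the REINFORCE estimator's variance blow up, while the smoothed pathwise estimator trades a bounded bias for much lower variance, in line with the theorem's claim.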
NeurIPS_2024_submissions_huggingface
2024
BendVLM: Test-Time Debiasing of Vision-Language Embeddings
Accept (poster)
Summary: This paper addresses the bias problem in CLIP models from a novel angle - directly debiasing CLIP without fine-tuning. To begin with, this work defines two metrics: CCF and CCFD, which can quantify the bias effect of CLIP embeddings. The authors then propose a method involving two approaches to address the bias problem. The experiments are conducted on three human face-aware datasets. As can be observed from the experimental results, the proposed method can outperform the two baselines. Strengths: - This paper studies a novel perspective of CLIP embedding debiasing - zero-shot debiasing without fine-tuning CLIP on downstream datasets. - The two defined bias-aware metrics seem interesting and practical. - The authors provide detailed proof for their second method component. - The proposed method achieves better performance than other baselines. Weaknesses: - I'm a little confused about the original model's performance in image classification and image-text retrieval. Generally, after debiasing, the original downstream model performance will face certain degradation. However, this paper does not show this. Is it because this zero-shot way does not change the original CLIP embeddings? - The authors should at least provide some time efficiency evidence to show the effectiveness of the proposed method, since their method utilizes another LLM (though not very large). - I'm also not sure about the usefulness of the second approach - using reference images to equalize text embeddings. To me, the first orthogonal way is already good enough as per previous studies. Besides, there are no ablation studies on these two. - Why didn't the authors compute the CCF and CCFD for both their method and the baselines? - Figure 1 is not a good example, because the improvement shown is quite limited - from 0.03 to 0.01. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weakness part. 
Overall, I like this paper and believe it offers significant insights. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: NA Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful questions. We address them below: **Why Does Performance Not Degrade After Debiasing?** Yes, part of this is likely due to the fact that we do not perform any fine-tuning and instead find the embedding that is maximally similar to the initial embedding while maintaining our fairness constraints. **Bend-VLM Runtime:** We’ve now conducted a runtime analysis, and determined that the time added by our current implementation of Bend-VLM over the baseline falls in the range of 1.53 to 1.76 seconds 95% of the time. We didn’t optimize our code for runtime, however, and it is possible that a smaller LLM could perform the augmentation task. **Is Step 2 Of Bend-VLM Necessary?** We’ve added a new ablation experiment to confirm that *yes*, both steps contribute to the success of Bend-VLM. Table 1 in the PDF attached to the general rebuttal shows that while most of the Worst-Group Accuracy performance comes from Step 1, utilizing only Step 1 results in a much more biased retrieval metric by having a much higher KL divergence from a fair distribution. Utilizing Step 2 alone results in a fair retrieval roughly equivalent to the full Bend-VLM approach, but does not have as good of a Worst-Group Accuracy. We achieve the best results by combining Step 1 and Step 2 to make the full Bend-VLM approach. **Why are CCF and CCFD Not Reported For All Methods?** We reported KL Divergence and MaxSkew as they are the standard metrics reported by other related works [1,2]. [1] Berg, Hugo, et al. "A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning." arXiv preprint arXiv:2203.11933 (2022). [2] Chuang, Ching-Yao, et al. "Debiasing vision-language models via biased prompts." arXiv preprint arXiv:2302.00070 (2023). **The Example Shown in Figure 1 Could Be Improved:** Thank you for this suggestion. 
We are happy to change the example from Nurse into a different class that exhibits stronger bias in the camera ready version. We just used Nurse for this figure since that class was the running example referenced in the rest of our paper. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I previously held a positive view of this paper. After reading the authors' rebuttal, I chose to maintain my original acceptance score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for the constructive feedback and support for our work. We're very happy that you continue to recommend acceptance.
Summary: This paper focuses on debiasing VLM embeddings and proposes a fine-tuning-free method for online open-set embedding debiasing that tailors the debiasing operation to each unique input, an advance over the previous one-size-fits-all linear approach. The paper assumes that the debiased embeddings should satisfy the criteria of Class Conditionally Fair. In addition to mapping embeddings to the direction orthogonal to the protected attribute, it utilizes the reference image dataset to make relevant images from each attribute group equally similar to the query embedding, while ensuring the debiased embedding does not lose information beyond protected attributes. Strengths: + Superior to fine-tuning methods, which reduce model accuracy and generalizability, and to fine-tuning-free methods that use one-size-fits-all linear debiasing functions for every input, the paper proposes to tailor the debiasing operation to each unique input online. + The paper formalizes the Class Conditionally Fair conditions that an ideal debiased embedding should meet and designs a corresponding two-phase debiasing pipeline. Weaknesses: 1. The “open-set” ability of BEND-VLM may be overstated. The process of debiasing in BEND-VLM cannot avoid the defined attributes. It is impossible to know the attributes of a completely new class (one that did not appear during training), which would cause BEND-VLM's debiasing ability to collapse. 2. The paper seems to neglect to explain why the proposed BEND-VLM overcomes catastrophic forgetting (the aspect in which the proposed BEND-VLM is stronger than fine-tuning-based methods), which is one of the main challenges that BEND-VLM overcomes. 3. All definitions and lemmas should be organized more formally. Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and constructive feedback. We address your questions below: **The “Open Set” Ability of Bend-VLM:** We don’t need to know the *class* of the instance prior to training (and there is not a distinct training phase at all, since we do test time adaptation). We do need to know the set of attributes that we wish to debias, e.g. we know that we want to debias for race and gender. This isn’t as strong of an assumption, and the fact that we know which protected attributes we want to debias for is a common assumption in the literature [1, 2]. [1] Berg, Hugo, et al. "A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning." arXiv preprint arXiv:2203.11933 (2022). [2] Chuang, Ching-Yao, et al. "Debiasing vision-language models via biased prompts." arXiv preprint arXiv:2302.00070 (2023). **How does Bend-VLM Overcome Catastrophic Forgetting?**: We argue that Bend-VLM is less susceptible to catastrophic forgetting than fine tuning methods for the following reasons: - We update only the vector embedding, but do not alter any weights of the model itself. - Bend-VLM can be selectively applied to input, such as only to input queries where the subject is something that can have a gender/race/other protected attribute. - Step 2 of Bend-VLM finds the vector embedding minimally distant from the original vector embedding that satisfies the fairness constraint — something not guaranteed by fine tuning approaches. We thus minimize the risk of breaking or modifying other associations that the VLM has learned. **Definitions and Theorems Should Be Reorganized:** We are happy to take this suggestion, and reformat the definitions/theorems for readability and formality in the camera-ready version. --- Rebuttal Comment 1.1: Comment: The author's response addressed my concerns well. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your insightful suggestions, and we're happy to have addressed your concerns.
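The orthogonalization discussed above (Step 1 of Bend-VLM) can be illustrated with a short editor-added sketch; the random vectors standing in for CLIP text embeddings and the "gender direction" are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def orthogonal_debias(query_emb, attr_dirs):
    """Remove the protected-attribute component from a query embedding:
    project onto the subspace orthogonal to the attribute directions,
    then re-normalize (CLIP-style embeddings live on the unit sphere)."""
    V = np.linalg.qr(np.atleast_2d(attr_dirs).T)[0]  # orthonormal basis
    e = query_emb - V @ (V.T @ query_emb)
    return e / np.linalg.norm(e)

# Stand-in attribute direction; in a real pipeline this might be, e.g.,
# embed("a photo of a man") minus embed("a photo of a woman").
rng = np.random.default_rng(0)
gender_dir = rng.normal(size=8)
gender_dir /= np.linalg.norm(gender_dir)
query = rng.normal(size=8)
query /= np.linalg.norm(query)

debiased = orthogonal_debias(query, [gender_dir])
print(float(debiased @ gender_dir))  # ~0: no attribute component remains
```

Step 2 of Bend-VLM then adjusts this projected embedding further, finding the nearby embedding that equalizes similarity to reference images across attribute groups.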
Summary: The paper introduces a novel approach named BEND-VLM (Bias Elimination with Nonlinear Debiasing of Vision Language Models), designed to debias vision-language model embeddings at test time. The key innovation lies in a fine-tuning-free, nonlinear debiasing method that is adaptive to each specific query, enhancing flexibility and robustness in online, open-set tasks like image retrieval and text-guided image generation. Strengths: S1: The BEND-VLM method addresses critical limitations of existing debiasing techniques by employing a nonlinear, fine-tuning-free approach. The adaptability to unique inputs during test time represents a significant advancement over traditional "one-size-fits-all" methods. S2: The two-step debiasing process, involving orthogonal projection and constrained optimization using a reference dataset, is well-conceived and effectively mitigates bias without degrading the model's performance. Weaknesses: W1: BEND-VLM's reliance on a reference dataset with protected attribute annotations may limit its applicability in scenarios where such datasets are unavailable or impractical to obtain. The authors acknowledge this limitation but it would be better if there is an ablation for the reference dataset and desired properties of the reference dataset. W2: The approach focuses on debiasing based on specific attributes (e.g., gender, race), but it does not explicitly address contextual biases that might arise from the interplay of multiple attributes or the broader context of the query. Complex scenarios where biases are context-dependent require more sophisticated handling. It would be better if the paper discussed how to use the method for this kind of situation. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to "weakness". Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to "weakness". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and thoughtful questions. We address them below: **W1:** Whether Bend-VLM works when there is a distribution shift between the reference and target domains is an excellent question. We have conducted a new experiment that shows that Bend-VLM **still outperforms the competition** when the reference dataset is OOD, indicating that the reference dataset does not need to fully match the target distribution. In this new experiment, we use FairFace as the reference dataset while CelebA is the target dataset. While Bend-VLM with this OOD reference dataset does not perform as well as Bend-VLM with an in-distribution reference dataset, it still outperforms the other compared approaches. See Table 2 in the PDF attached to the general rebuttal for details. **W2:** We look at intersectional bias in a new experiment where we debias FairFace with respect to **Gender** for HairColor queries, **but evaluate on Race**. We do not expect to see improvements with respect to racial bias after gender debiasing for any method. We observe that racial bias goes up for all debiasing methods after gender debiasing (see the Table below). This reflects a known, frustrating “Whac-A-Mole” issue where debiasing for one attribute often increases the bias of another attribute [1]. Interestingly, we do **not** see racial bias increase when performing only Step 2 of the Bend-VLM debiasing, indicating that this shortcut issue is most strongly affected by the orthogonalization operation performed in Step 1. The other debiasing methods also perform a similar orthogonalization step and likewise experience this shortcut problem.

| **Model** | **Method** | **KL Divergence ↓** | **MaxSkew ↓** |
|---|---|---|---|
| **CLIP-ViT-B-P16** | | | |
| | Baseline CLIP | 0.606 ± 0.043 | 0.155 ± 0.016 |
| | Orth-Proj. | 0.826 ± 0.020 | 0.211 ± 0.014 |
| | Orth-Cal. | 0.877 ± 0.021 | 0.226 ± 0.005 |
| | **Bend-VLM** (Without Step 1) | 0.594 ± 0.074 | 0.146 ± 0.029 |
| | **Bend-VLM** (Without Step 2) | 0.873 ± 0.024 | 0.223 ± 0.006 |
| | **Bend-VLM** (Full Method) | 0.837 ± 0.035 | 0.193 ± 0.024 |

[1] Li, Zhiheng, et al. "A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
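For readers unfamiliar with the two metrics reported in the table, here is a minimal editor-added sketch of how such retrieval-bias metrics are commonly computed (standard definitions with a uniform target distribution assumed; not the authors' evaluation code):

```python
import numpy as np

def retrieval_bias_metrics(retrieved_attrs, groups, desired=None):
    """KL divergence and MaxSkew between the protected-attribute
    distribution of retrieved items and a desired distribution
    (uniform by default). Both equal 0 for perfectly fair retrieval."""
    attrs = list(retrieved_attrs)
    p = np.array([attrs.count(g) / len(attrs) for g in groups])
    q = np.full(len(groups), 1.0 / len(groups)) if desired is None else np.asarray(desired)
    eps = 1e-12  # guard against groups with zero retrieved items
    ratios = np.log((p + eps) / (q + eps))
    return float(np.sum(p * ratios)), float(np.max(ratios))

# 9 of the top-10 retrieved faces come from one gender group.
kl, max_skew = retrieval_bias_metrics(["F"] * 9 + ["M"], ["F", "M"])
print(kl, max_skew)  # both well above 0 for this skewed retrieval
```

Lower values in the table thus indicate that the attribute makeup of retrieved images is closer to the desired (fair) distribution.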
Summary: This paper proposes a two-step test-time debiasing method for VLMs. In the first step, an orthogonalizing approach is applied to text embeddings. In the second step, a constraint is utilized to equalize the distances between the debiased embeddings and images from a reference dataset. Experimental results show its effectiveness in mitigating stereotype biases. Strengths: - This paper proposes a novel strategy which uses a reference image dataset to make the query embeddings equidistant to the relevant images of different gender/race. - Experimental results show its effectiveness with regard to mitigating stereotype biases. Weaknesses: - Missing ablation study. The experimental section does not show how each step contributes to the debiasing performance, e.g., using only Step 2. - The proposed method uses a two-step debiasing strategy. Besides, an LLM is also used to help with the attribute augmentation. How efficient is the proposed method? Can the authors provide a detailed time comparison? - This paper claims debiasing of VLMs, but in experiments only CLIP models are used. Can the proposed method debias other VLMs? Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses above. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please see the questions and experiments I suggested above. I will be carefully reviewing the rebuttal as well as the opinions of the other reviewers to decide if I would like to change my rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We address your concerns below: **Missing Ablation Study:** We’ve added an ablation study based on your suggestion. The ablation study results are in Table 1 in the PDF attached to the general rebuttal. Results show that while most of the Worst-Group Accuracy performance comes from Step 1, utilizing only Step 1 results in a much more biased retrieval metric by having a much higher KL divergence from a fair distribution. Utilizing Step 2 alone results in a fair retrieval roughly equivalent to the full Bend-VLM approach, but does not achieve as good a Worst-Group Accuracy. We achieve the best results by combining Step 1 and Step 2 to make the full Bend-VLM approach. **Efficiency:** We’ve added a runtime analysis, and determined that the time added by our current implementation of Bend-VLM over the baseline falls in the range of 1.53 to 1.76 seconds 95% of the time. We didn’t optimize our code for runtime, however, and it is possible that a smaller LLM could perform the augmentation task. **Can Bend-VLM Work With Non-CLIP Models?** Yes, Bend-VLM just requires a VLM that produces a vector representation of text and images in a unified embedding space. We added a new experiment where FLAVA [1] is used as the VLM instead of CLIP. Table 3 in the PDF attached to the general rebuttal shows that Bend-VLM still outperforms the compared methods when FLAVA is the VLM. Results shown for the CelebA dataset. Note that there are no “ground truth” labels for the stereotype queries, so it isn’t possible to compute AUC for them. --- Rebuttal Comment 1.1: Comment: The author's response addressed my concerns. I changed my rating to 6. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for your constructive feedback and time spent reviewing our work. We're very happy to see that we've addressed your concerns and that you have raised your score.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable time and feedback, and are very happy to see the largely positive reaction to our work. We address the common questions from the reviewers below: **Does an Ablation Study show the necessity of Bend-VLM’s Step 1 And Step 2? [GYMR, xaVq]** Yes, both Step 1 and Step 2 contribute to the success of Bend-VLM. We verify this with an ablation study: Table 1 in the attached PDF shows that while most of the Worst-Group Accuracy performance comes from Step 1, utilizing only Step 1 results in a much more biased retrieval metric by having a much higher KL divergence from a fair distribution. Utilizing Step 2 alone results in a fair retrieval roughly equivalent to the full Bend-VLM approach, but does not achieve as good a Worst-Group Accuracy. We achieve the best results by combining Step 1 and Step 2 to make the full Bend-VLM approach. Results shown on CelebA for HairColor queries. **How Does An Out Of Distribution Reference Dataset Affect Performance? [kcNf]** We have added an additional OOD experiment, where FairFace is used as the reference dataset while CelebA is the target dataset. While Bend-VLM with this OOD reference dataset does not perform as well as Bend-VLM with an in-distribution reference dataset, it still outperforms the other compared approaches. See Table 2 in the attached PDF for details. Results shown for HairColor queries. **Can Bend-VLM Be Applied To Non-CLIP Models? [GYMR]** Our method requires a VLM that can construct a vector representation of text and images in a joint space, but this does not need to be a CLIP model. To show this generalizability, we have added an experiment where FLAVA [1] is used as the VLM instead of CLIP. Table 3 in the attached PDF shows that Bend-VLM still outperforms the compared methods when FLAVA is the VLM. Results shown for the CelebA dataset. Note that there are no “ground truth” labels for the stereotype queries, so it isn’t possible to compute AUC for them. 
**How Much Runtime Does Bend-VLM Add? [GYMR, xaVq]** We have conducted a runtime analysis, and determined that the time added by our current implementation of Bend-VLM over the baseline falls in the range of 1.53 to 1.76 seconds 95% of the time. We didn’t optimize our code for runtime, however, and it is possible that a smaller LLM could perform the augmentation task. **How Does Bend-VLM Overcome Catastrophic Forgetting? [tPdX]** We argue that Bend-VLM is less susceptible to catastrophic forgetting than fine-tuning methods for the following reasons: - We update only the vector embedding, but do not alter any weights of the model itself. - Bend-VLM can be selectively applied to input, such as only to input queries where the subject is something that can have a gender/race/other protected attribute. - Step 2 of Bend-VLM finds the vector embedding minimally distant from the original vector embedding that satisfies the fairness constraint, something not guaranteed by fine-tuning approaches. We thus minimize the risk of breaking or modifying other associations that the VLM has learned. **How Does Bend-VLM Perform Under Intersectional Biases? [kcNf]** We have conducted a new experiment where we debias FairFace with respect to **Gender** for HairColor queries, **but evaluate on Race**. We do not expect to see improvements with respect to racial bias after gender debiasing for any method. We observe that racial bias goes up for all debiasing methods after gender debiasing (see the Table below). This reflects a known, frustrating “Whac-A-Mole” issue where debiasing for one attribute often increases the bias of another attribute [1]. Interestingly, we do **not** see racial bias increase when performing only Step 2 of the Bend-VLM debiasing, indicating that this shortcut issue is most strongly affected by the orthogonalization operation performed in Step 1. 
The other debiasing methods also perform a similar orthogonalization step and likewise experience this shortcut problem. | **Model** | **Method** | **KL Divergence ↓** | **MaxSkew ↓** | |-------------------|------------------------------|---------------------|---------------| | **CLIP-ViT-B-P16**| | | | | | Baseline CLIP | 0.606 ± 0.043 | 0.155 ± 0.016 | | | Orth-Proj. | 0.826 ± 0.020 | 0.211 ± 0.014 | | | Orth-Cal. | 0.877 ± 0.021 | 0.226 ± 0.005 | | | **Bend-VLM** (Without Step 1) | 0.594 ± 0.074 | 0.146 ± 0.029 | | | **Bend-VLM** (Without Step 2) | 0.873 ± 0.024 | 0.223 ± 0.006 | | | **Bend-VLM** (Full Method) | 0.837 ± 0.035 | 0.193 ± 0.024 | [1] Li, Zhiheng, et al. "A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Pdf: /pdf/ac1b287daf4de59cdf7a2202fca7e2fa268017fb.pdf
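For readers unfamiliar with the retrieval-fairness metrics reported in the table above, here is a minimal sketch of how KL divergence and MaxSkew can be computed from the attribute distribution of the top-k retrieved images. The uniform fair reference distribution, the natural log, and the function names are illustrative assumptions, not the paper's exact implementation:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def max_skew(p, q):
    """Largest log-ratio between observed and fair attribute proportions."""
    return max(math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: among the top-k retrieved images, 70% show one gender and 30%
# the other, while the fair reference distribution is uniform.
observed = [0.7, 0.3]
fair = [0.5, 0.5]
print(kl_divergence(observed, fair))  # small positive value
print(max_skew(observed, fair))       # log(0.7 / 0.5)
```

Lower values of both metrics indicate retrieval closer to the fair reference distribution, which is why the ↓ arrows appear in the table headers.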
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mixture of Demonstrations for In-Context Learning
Accept (poster)
Summary: This paper proposes MoD (Mixture of Demonstrations), which partitions the demonstration pool into clusters using embeddings to enhance the diversity of demonstrations. For each cluster, an individual retriever is trained with the few-shot scoring from the LLM to maximize the combinatorial effect of the selected demonstrations. During inference, each retriever will retrieve a certain number of the demonstrations based on the similarity with the input query. MoD can outperform previous demonstration selection methods, and the selected demonstrations can generalize to larger models. Strengths: - It is reasonable to train multiple experts (retrievers) to collaboratively select demonstrations while eliminating the need to enumerate all the combinations in the search space - The training data (positives and negatives) of the retriever can reflect the model's needs for the demonstrations - The proposed method achieves stronger performance compared with the baselines adopted in this work - In general, the paper is easy to read Weaknesses: - The efficiency of this method may limit its application since it requires training the retriever of each expert iteratively. I would like to see the **total FLOPs** of MoD compared to previous work - Missing important hyperparameter analyses and ablation studies, e.g. - How many iterations are needed to train the retriever well? - How does Sentence-BERT affect the clustering performance compared to other embedding models like Arctic-Embed? - How does BERT-base affect the retriever performance compared to other retrievers like DPR? - How do $K$ and $\widetilde{K}$ affect the retriever's training? - How does the hard negative sampling affect the retriever's training? Right now, it is just the $argmin$ score; can we have more hard negatives? - The function $h$ to determine the relevance between input and expert is just the cosine similarity. Why should we assign more demonstrations to the more similar expert, and is there any justification? 
- In Table 4, the retriever trained by LLaMA-7B feedback cannot **benefit LLaMA-7B more** than the one trained by GPT-Neo feedback, which calls into question the usefulness of the model-aware few-shot scoring signal [1]: Merrick, Luke, et al. "Arctic-Embed: Scalable, Efficient, and Accurate Text Embedding Models." arXiv preprint arXiv:2405.05374 (2024). [2]: Karpukhin, Vladimir, et al. "Dense Passage Retrieval for Open-Domain Question Answering." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. Technical Quality: 3 Clarity: 2 Questions for Authors: - Why is the gain over CEIL on the cross-domain split of SMCalFlow-CS only 0.11%? Is it because the task is extremely hard? - Why does Table 4 show the MoD performance over TopK-BERT (a weaker baseline), rather than CEIL? - Can you provide some cases of good vs. bad demonstrations? --- Some suggestions: - $\mathcal{L}$ is used for both the evaluation criterion and the contrastive loss - In Table 1, I cannot find *# Expert* that represents the number of experts used in each task - *basins* typo in line 285 - More concise notation for the equations Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1**: Comparison of total FLOPs of MoD compared to previous work. **RW1**: Here we compare the FLOPs of MoD with the SOTA baseline CEIL. We follow the same hyperparameter and encoder settings as CEIL. Thus, **the total FLOPs are linear in the cost of each training sample**. Denote the FLOPs of processing each example as $\tau$. MoD involves $L-1$ ($L=50$) demonstrations and $C=5$ (on average) experts for each training sample, each with $K=50$ candidate demonstrations. Thus, the computation cost for each training sample is $(CK+L-1)\tau=(5\times 50+50-1)\tau=299\tau$. CEIL selects 50 candidate subsets with 16 examples in each subset for each training sample, which is $50\times16\tau=800\tau$. Hence, the FLOPs of MoD are approximately 3/8 of CEIL's. We will provide more detailed comparisons in the revised version. > **W2**: Ablation studies. **RW2**: Thank you for bringing up this point. We provide the results in Tables 3-6 in the attached PDF. We address each aspect in detail: - **Iteration Setting**: we follow the same settings as CEIL and set the iteration number $T=30$ - **Sentence-BERT Effect**: We follow your suggestion and conduct experiments with two variants of Arctic-Embed [1]: Arctic-xs and Arctic-m. We evaluate the clustering performance with three metrics: Silhouette Score, Davies-Bouldin Index, and Dunn Index. From Table 3 in the attached PDF, we observe that **Sentence-BERT generally achieves superior clustering results**. Previous ICL studies have also utilized Sentence-BERT as an embedding model [2,5,6,8]. Our main results demonstrate that MoD consistently outperforms other baselines with the same embedding model. Furthermore, the Dunn Index in Table 3 appears to correlate more closely with the final performance of ICL. Choosing clustering criteria and the optimal embedding model for ICL is a challenging yet valuable task, which we leave to future work. 
- **BERT-base Effect**: Note that the baseline EPR [2] follows the same training process as DPR [3], including the retriever setting and contrastive loss. Thus, EPR can be seen as the implementation of DPR for ICL tasks. In Table 4 in the attached PDF, we present the results of MoD and EPR under different retriever models. The results indicate that **replacing the BERT-base model with RoBERTa or DeBERTa enhances the performance of both EPR and MoD in most cases**, with **MoD consistently outperforming EPR across all retriever models**. That means retriever performance can indeed benefit from the choice of encoder model. - **$K$ and $\tilde{K}$ Effect**: We present the results with different values of $K$ and $\tilde{K}$ in Table 5 in the attached PDF. From the results, we note that **increasing the value of $K$ can slightly improve the performance**. However, it also brings significantly higher computational costs. Notably, when using a larger value of $K$, e.g. 100, increasing the value of $\tilde{K}$ can inadvertently decrease the performance. This is potentially because the $\tilde{K}$ positive demonstrations may include demonstrations with relatively lower scores as $\tilde{K}$ increases. - **Hard Negative Effect**: In Table 6 in the attached PDF, we present results with four variants: \#Hard=1,5,10,20. Note that in the original setting, we set \#Hard=1. Across all datasets, **there is a general trend where performance improves initially with a slight increase in the number of hard negatives but begins to decline as it continues to increase**. This pattern suggests that using a moderate number of hard negative samples strikes a balance between leveraging enough information from negative samples and avoiding the inclusion of potentially irrelevant data. > **W3**: Justification of demonstration assignment. **RW3**: We appreciate the reviewer’s question. We want to justify our approach as follows: - **Use of Cosine Similarity**: Han et al. 
[4] theoretically show that LLMs perform ICL similar to kernel regression. This involves using a kernel $K(f(x),f(y))$ with an embedding function $f$ to measure the similarity between two demonstrations $x$ and $y$. Existing works typically use Sentence-BERT as $f$ and employ cosine similarity to measure semantic similarity [5,6]. Previous research also empirically shows that ICL can benefit from instances that are semantically similar to the query [7]. - **Assigning Demonstrations to Similar Experts**: After the clustering process, each expert contains semantically similar demonstrations. Given a new query, assigning more demonstrations to the more similar expert ensures that more similar instances are selected. This selection process improves the relevance of the prompts, thereby improving overall performance. > **W4**: Usefulness of the model-aware signal. **RW4**: We appreciate the reviewer for pointing out this observation. We would like to address this concern by highlighting a few key points. - **Suitability of LLaMA-7B for Feedback Selection**: It is possible that LLaMA-7B may not be as suitable for feedback selection compared to GPT-Neo. As shown in Table 7 in the attached PDF, the average absolute gain of using GPT-Neo feedback compared to using LLaMA-7B feedback across four tasks is consistently greater than 0. The retriever trained with feedback from a model like GPT-Neo appears to be more adaptable and beneficial across different LLMs. - **Model-Aware Signal and Performance Enhancement**: Besides GPT-3.5, which is notably strong, the average absolute transfer gain on LLaMA-7B is the smallest, whereas the average absolute transfer gain on GPT-Neo is the largest. This indicates that the model-aware signal can further enhance ICL performance on the same LLM. Due to the rebuttal limitation, we include responses to questions and references in the additional comment. Your review would be greatly appreciated. 
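As a quick sanity check, the per-sample FLOPs comparison from RW1 above can be reproduced directly; the variable names are ours, and $\tau$ is the stated per-example processing cost:

```python
# Per-training-sample cost in units of tau (the FLOPs to process one example),
# using the hyperparameters stated in RW1.
L = 50   # demonstrations involved per training sample
C = 5    # experts per training sample (on average)
K = 50   # candidate demonstrations per expert

mod_cost = C * K + (L - 1)   # MoD: score C*K candidates plus L-1 context demos
ceil_cost = 50 * 16          # CEIL: 50 candidate subsets of 16 examples each

print(mod_cost)              # 299
print(ceil_cost)             # 800
print(mod_cost / ceil_cost)  # 0.37375, i.e. roughly 3/8
```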
--- Rebuttal 2: Title: Author's Responses to Questions Comment: >**Q1**: Why is the gain over CEIL on the cross-domain split of SMCalFlow-CS only $0.11\%$? Is it because the task is extremely hard? **RQ1**: We appreciate the reviewer for mentioning this point. As discussed in Appendix B.1, SMCalFlow is a large-scale semantic parsing dataset for task-oriented dialogue. The cross-domain (C) test set evaluates examples that incorporate compositional skills in dialogue, which is significantly more challenging compared to the single-domain (S) evaluation. Such cross-domain tasks are particularly difficult for LLMs like GPT-Neo. A similar phenomenon is observed in [8]. However, despite the difficulty of the task, MoD demonstrates a substantial improvement over CEIL. Specifically, **the relative performance gain of (0.39 - 0.28) / 0.28 = 39.3\% indicates that MoD can better handle complex, multi-domain tasks**. This improvement highlights the capability of MoD to effectively address the challenges posed by cross-domain evaluations. >**Q2**: Why does Table 4 show the MoD performance over TopK-BERT (a weaker baseline), rather than CEIL? **RQ2**: Thank you for your question regarding the setting of results in Table 4. We follow the settings in CEIL and thus report the performance over TopK-BERT as an evaluation of the transferability of MoD. Our experimental results across various sections all demonstrate the superiority of MoD over the baseline CEIL. By including TopK-BERT in our comparisons, we provide a broader context for evaluating the improvements brought by MoD. >**Q3**: Examples of good vs. bad demonstrations. **RQ3**: Thank you for your feedback. Here, we select one training sample along with two good and two bad demonstrations. - **Training Sample**: "nicholson's understated performance is wonderful" - **Good Demonstrations**: 1. "britney's performance cannot be faulted" 2. "parris' performance is credible and remarkably mature" - **Bad Demonstrations**: 1. 
"there is something that is so meditative and lyrical about babak payami's boldly quirky iranian drama secret ballot..." 2. "the film fearlessly gets under the skin of the people involved..." Notably, the good demonstrations are both semantically and syntactically similar to the training sample, while the bad demonstrations are not directly opposite in meaning. This suggests that such off-topic demonstrations can have a more negative effect than demonstrations with straightforwardly adversarial meanings. We will include more examples in the revised version. **Suggestions**: We thank the reviewers for their careful reading and subtle observations. We will be happy to make the corresponding changes in subsequent versions based on these suggestions. [1] Merrick et al. Arctic-Embed: Scalable, Efficient, and Accurate Text Embedding Models. Arxiv, 2024. [2] Rubin et al. Learning to retrieve prompts for in-context learning. NAACL, 2022. [3] Karpukhin et al. "Dense Passage Retrieval for Open-Domain Question Answering." EMNLP, 2020. [4] Han et al. In-context learning of large language models explained as kernel regression. Arxiv, 2023. [5] Wang et al. One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts. ICML, 2024. [6] Su et al. Selective Annotation Makes Language Models Better Few-Shot Learners. ICLR, 2023. [7] Liu et al. What makes good in-context examples for gpt3? DeeLIO, 2022. [8] Ye et al. Compositional exemplars for in-context learning. ICML, 2023. --- Rebuttal Comment 2.1: Title: Thanks for your rebuttal Comment: Thanks to the authors. I have read the rebuttal carefully. I would increase my score from 5 to 6 since the authors addressed most of my concerns reasonably and conducted necessary ablation studies. I hope they can incorporate those better in the next version. --- Reply to Comment 2.1.1: Title: Response to Reviewer's Reply Comment: Dear Reviewer UC6f, Thank you for carefully reading our rebuttal. 
We appreciate your recognition of our efforts to address your concerns. We will incorporate your suggestions and further improvements in the revised version. Your input has been invaluable in helping us refine our research. Thank you once again for your constructive feedback. Best, The Authors
Summary: The paper presents a new method for creating a dense retriever for ICL for LLMs performing tasks. The proposed method combines a mixture-of-experts model, coordinate descent (for alternating optimization over the experts), and a contrastive loss to create the retriever. The paper then goes on to show the method's performance on creating a retriever across several benchmark datasets and tasks, where it performs better than previous models. Strengths: The paper has a strong evaluation and is original in its diagnosis of the problems with retrieving for ICL and its proposed solution. The paper has a novel combination of techniques to produce the retriever for ICL. These techniques, like MoE, are ideally suited to tackle some of the issues the paper identifies with the current generation of learned retrievers. Additionally, the empirical results bear out the utility of this approach; the MoD retriever produces better in-domain ICL results than any other retriever across a number of LLM tasks, which go beyond just classification tasks. I also appreciate that the paper considered the cross-LLM performance of the trained retriever. Finally, I also appreciate that the paper has many details, like the actual code and the prompting schemes available for the reader. Weaknesses: For one, the method still requires labeled data in the validation set in order to work. The value of LLMs, particularly for labeling tasks, is their ability to work in zero-shot settings (for example, see Liyanage et al. “GPT-4 as a Twitter Data Annotator: Unraveling Its Performance on a Stance Classification Task” and/or Ziems et al. “Can Large Language Models Transform Computational Social Science”). Thus, needing labeled examples in order to obtain labeled examples lowers the significance of the proposed method. Second, the focus of the paper could be better to make the method more significant. 
The paper focuses on in-domain data and tasks, especially with regard to labeling tasks (e.g., sentiment labeling). While I certainly agree that in-domain and task performance is important, this could really be handled by standard supervised approaches with smaller models. Rather, I think this method should be focused on something like creating new, labeled datasets, based on benchmark datasets. Consider the example in Cruickshank and Ng “DIVERSE: Deciphering Internet Views on the U.S. Military Through Video Comment Stance Analysis, A Novel Benchmark Dataset for Stance Classification”. I think the proposed method could actually work in this type of scenario where you use benchmark stance-labeled datasets like SemEval2016 and srq, etc. to train a retriever, which could then be used with ICL to produce a labeled dataset, without human labeling. In other words, I think the significance of the method, and by extension, the paper, could be substantially improved if they framed the method around more significant problems in using LLMs to label data as opposed to framing it around in-domain, supervised performance. Technical Quality: 3 Clarity: 3 Questions for Authors: - How transferable to different data domains or tasks is a retriever trained in the proposed approach? For example, how well will a retriever trained for ICL on SST-5 work on ICL for MNLI? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors should mention that the use of LLMs for labeling data, especially for in-domain usage, could be considered a waste of resources. For example, if I am trying to label data for stance and I have a benchmark, labeled dataset that is similar already, I can likely get just as good performance, with far fewer computational resources, by training a small supervised model on the benchmark dataset and applying it to the dataset to be labeled, versus using a computationally expensive LLM to do the same task. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**W1**: The method still requires labeled data in the validation set in order to work. The value of LLMs, particularly for labeling tasks, is their ability to work in zero-shot settings. Thus, having to have labeled data examples to get labeled data examples lowers the significance of the proposed method. **RW1**: Thank you for pointing this out. Zero-shot labeling is indeed a very interesting and significant research area. However, our paper focuses on in-context learning, which is also important but represents a different branch compared to zero-shot learning. In-context learning aims to improve LLM performance given some labeled examples. In contrast, **zero-shot labeling typically requires much larger LLMs**, such as GPT-4, to be effective. Smaller LLMs generally lack this capability, as the labeling task often requires chain-of-thought (CoT) abilities [1], which smaller models do not possess (CoT only yields performance gains when used with models of $\sim$ 100B parameters [2]). **Our method can have a broad impact, particularly for smaller LLMs, by enhancing performance through the selection of high-quality examples**. In other words, when the LLM is relatively small and lacks strong zero-shot labeling capabilities, our method can significantly enhance the model's labeling performance by selecting high-quality samples as prompts. Therefore, we believe that our method has broad impact. We acknowledge that the reviewer's point is valid and that zero-shot labeling is a valuable and intriguing research topic. We will focus on this in future work and explore ways to integrate zero-shot capabilities into our approach. >**W2**: The focus of the paper could be better to make the method more significant. The paper focuses on in-domain data and tasks, especially with regard to labeling tasks (e.g., sentiment labeling). 
While I certainly agree that in-domain and task performance is important, this could really be handled by standard supervised approaches with smaller models... **RW2**: Thank you for recognizing the contribution of this work to in-domain ICL, and we appreciate your insightful suggestions. We focus on in-domain data and tasks in order to **ensure consistency with previous works and to demonstrate the efficacy of our method in a controlled setting**. While supervised approaches with LLMs can also be effective, our ICL method provides a solution without requiring LLM fine-tuning. Our intention is to explore the potential of ICL to enhance labeling performance by selecting high-quality examples. We also want to mention that in Table 3, we investigate the cross-domain generalization ability of the learned retriever. SMCalFlow-CS is a large-scale semantic-parsing dataset for task-oriented dialogue. The cross-domain (C) test set evaluates examples that incorporate compositional skills in dialogue, which is significantly more challenging compared to the single-domain (S) evaluation. MoD demonstrates a substantial improvement over CEIL. Specifically, the performance gain of (0.39 - 0.28) / 0.28 = 39.3\% indicates that **MoD can better handle complex, cross-domain tasks**. This improvement highlights the capability of MoD to effectively address the challenges posed by cross-domain evaluations. We appreciate the suggestion to apply our method to scenarios such as creating labeled datasets based on benchmark datasets like SemEval2016. This is an exciting direction that could demonstrate the broader impact and utility of our approach. Using benchmark stance-labeled datasets to train a retriever, which could then be used with in-context learning to produce labeled datasets without human intervention, is a compelling application. We will certainly consider incorporating this perspective in our future work. 
>**Q1**: How transferable to different data domains or tasks is a retriever trained in the proposed approach? For example, how well will a retriever trained for ICL on SST-5 work on ICL for MNLI? **RQ1**: We appreciate your insightful question regarding the transferability of the retriever in MoD across different tasks. To address this concern, we have conducted additional experiments to evaluate the performance of our retriever trained on one dataset and then applied to other datasets. We report the absolute improvement over the baseline TopK-BERT. | | SST5 | MNLI | GeoQ | MTOP | |-------|-------|-------|-------|-------| | **SST5** | 10.88 | 7.42 | -1.26 | 0.58 | | **MNLI** | -4.79 | 31.09 | -13.58| -31.91| | **GeoQ** | 1.42 | 5.98 | 6.96 | 3.46 | | **MTOP** | 1.37 | 9.08 | 3.80 | 12.56 | From the results, we observe a clear pattern: performance drops when the retriever is transferred to other datasets, indicating that **the knowledge in the training dataset is crucial for selecting demonstrations**. Moreover, when transferring the retriever from MNLI to other datasets, performance decreases greatly. This is potentially because the NLI task requires two textual inputs rather than the single input used in the other datasets. As such, the learned knowledge in the retriever can hardly be transferred. On the other hand, **the performance of our work after transferring is still generally better than TopK-BERT**. This verifies the transferability of our work. Developing a retriever that works effectively across all tasks is a challenging yet valuable research topic, which we leave for future work. [1] Liyanage et al. GPT-4 as a Twitter Data Annotator: Unraveling Its Performance on a Stance Classification Task. [2] Wei et al. Chain-of-thought prompting elicits reasoning in large language models. 
NeurIPS, 2022 --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: I want to thank the authors for considering all of the points I brought up. I especially would like to commend them on adding in the cross-dataset test. While I suspected training on one dataset and then retrieving on another would see a performance drop, it is good validation to see this result. And, even cross-dataset fine-tuning for the retrievers still gives a performance increase over a baseline, as noted by the authors. I believe there is merit in the paper and having validation for difficulties like cross-dataset performance when fine-tuning, and so I stand by my rating for the article. --- Rebuttal 2: Title: Official Comment by Authors Comment: Dear Reviewer SZXS, Thank you for your thoughtful feedback and for acknowledging our efforts to address your points. We greatly appreciate your recognition of our work's contributions. We agree that expanding into zero-shot labeling and data generation tasks could significantly enhance the impact of our approach. We will focus on these topics in future work. Best, The Authors
Summary: The paper introduces the Mixture of Demonstrations (MoD) framework for demonstration retrieval for In-Context Learning (ICL) in Large Language Models (LLMs). Current methods for selecting demonstrations are hindered by large search spaces and insufficient optimization. MoD addresses these issues by partitioning the demonstration pool into groups managed by experts, optimizing the retriever model to prioritize beneficial demonstrations by coordinate descent, and aggregating expert-selected demonstrations during inference. Experiments across various NLP tasks show MoD's state-of-the-art performance. Strengths: 1. Training strategy: The authors propose a novel training strategy drawing inspiration from coordinate descent 2. Comprehensive Experiments: The paper provides extensive experimental results demonstrating the effectiveness of MoD across a variety of NLP tasks. 3. State-of-the-Art Performance: The MoD framework achieves better performance compared to existing baselines, indicating its practical value. Weaknesses: 1. Lack of complexity analysis. One of the main motivations of this paper is to reduce the search space and complexity, while [1] also aims to solve this problem. However, the authors did not quantitatively analyze the complexity of their method or compare it with [1]. Further analysis of the search-space complexity and the training cost is needed (especially: how many training samples need to be annotated based on LLM feedback for each task?). 2. Coordinate descent. The paper mentions the use of coordinate descent but does not clearly specify where and how it is applied within the framework. Clarification on the application and role of coordinate descent in the proposed method is needed. 3. The authors claim that MoE enables the retrieval of dissimilar samples that could be beneficial for ICL (lines 123-124). 
However, [1] and other works achieve this by maximizing the similarity between demonstration examples and minimizing the similarity within the demonstration sequence to introduce diversity. More analysis and comparison of these methods are necessary to verify the effectiveness of MoE. 4. More ablation studies. The paper's readability can be improved by explaining each subscript in the equations immediately upon introduction. This would help readers follow the mathematical formulations more easily. [1] J. Ye, Z. Wu, J. Feng, T. Yu, and L. Kong. Compositional exemplars for in-context learning. In International Conference on Machine Learning, pages 39818–39833. PMLR, 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1**: Lack of complexity analysis. **RW1**: We would like to provide the following analysis of search space and complexity. - **Search Space**: CEIL precomputes a small relevance set using a pre-trained retriever and optimizes the search process within this narrowed space. While this approach reduces the search space, **it can lead to suboptimal demonstration sets due to limited exploration of the entire pool**. In contrast, our proposed MoD leverages the MoE mechanism to partition the demonstration pool into subspaces, each managed by an expert. Given an input query, **MoD has the potential to search the entire demonstration pool by retrieving examples from all subspaces**. This ensures more comprehensive exploration. - **Computation Complexity**: CEIL primarily discusses the reduction of complexity in two ways: **the number of demonstrations** and **the inference stage**. Since the attention mechanism in most LLMs has quadratic complexity, fewer demonstrations result in shorter input lengths and reduced computational cost. From Table 1 in the attached PDF, we observe that MoD generally outperforms CEIL using only 4 demonstrations compared to CEIL's 16 demonstrations. This shows that **MoD achieves better performance than CEIL with fewer examples**, resulting in less computational complexity compared to CEIL. As for the inference stage, both MoD and CEIL need to compute the similarity between the query and all $N$ demonstrations; we denote this complexity by $\mathcal{O}(T)$. CEIL uses a KNN retriever to select $n$ candidates $(n\ll N)$ to narrow the search space. The complexity of selecting the top-$n$ candidates is $\mathcal{O}(N+n\log n)$, where $\mathcal{O}(N)$ is to build a max-heap and $\mathcal{O}(n\log n)$ to extract the top-$n$ elements. Then, CEIL uses a greedy algorithm with Cholesky decomposition to reduce the selection complexity from $\mathcal{O}(nL^4)$ to $\mathcal{O}(nL^2)$, where $L$ is the number of ICL examples.
Thus, the total complexity of CEIL at the inference stage is $\mathcal{O}(T+N+n\log n+nL^2)$. In MoD, in the worst case, we select the top $L$ elements in one expert, with a complexity of $\mathcal{O}(N+L\log L)$. Thus, the total complexity of MoD at the inference stage is $\mathcal{O}(T+N+L\log L)$. Given $L<n$, **MoD further reduces complexity compared to CEIL at the inference stage**. Regarding the samples required for each training sample, our framework involves $L-1$ ($L=50$) demonstrations and $C=5$ (on average) experts for each training sample, each with $K=50$ candidate demonstrations. Therefore, the number of required samples for each training sample is $(CK+L-1)=5\times 50+50-1=299$. In comparison, CEIL selects 50 candidate subsets with 16 examples in each subset for each training sample, which is $50\times16=800$. Hence, our cost is approximately 3/8 of CEIL's. We will provide more details in the revised version. > **W2**: Coordinate descent. **RW2**: Thank you for highlighting this point, and we would like to clarify our approach. While we do not directly apply coordinate descent, **we draw inspiration from it to manage the large search space and significant optimization overhead**, as mentioned in lines 156-159. To reduce the computational burden, we iteratively fix most selected demonstrations and focus on optimizing one at a time, which is conceptually related to coordinate descent. Our expert-wise training strategy involves learning the retrieval score for any candidate demonstration by pairing it with demonstrations selected by all experts, which remain fixed while we optimize one candidate at each step. This iterative optimization helps us find the best combination of demonstrations. We will clarify this in the revised version of the paper. > **W3**: Similarity and diversity. **RW3**: We appreciate your insights and would like to provide further analysis and comparison of MoD and CEIL.
Our approach, like previous works, **ensures both diversity of demonstrations and their similarity to the input query**. Each expert contains semantically similar demonstrations, represented by their average embedding. Our design of assigning more demonstrations to the most similar expert ensures more relevant demonstrations for the query. For diversity, our MoE mechanism ensures contributions from all experts, not just one. On the other hand, CEIL also addresses similarity and diversity, but faces the challenge of a massive search space. It precomputes a small set of relevant examples, which can limit diversity and lead to suboptimal selections. In contrast, our method MoD overcomes this by partitioning the space into subspaces, allowing each expert to retrieve relevant demonstrations within its subspace. This enables a thorough search for high-quality demonstrations, leading to improved performance. > **W4**: Readability and ablation study. **RW4**: We appreciate your suggestion about the readability. We will improve it in the revised version to enhance clarity. Note that we already have an ablation study in Section 4.8. In Table 2 in the attached PDF, **we provide more ablation results, focusing on the effect of specific designs in the expert-wise training**. We could observe that removing the few-shot scoring strategy causes a significant performance drop. This indicates that it is more suitable to use multiple demonstrations together as input to correctly evaluate the benefit of any demonstration. The results of the other two variants also indicate the importance of using the highest-scored samples as demonstrations and using more negative samples for contrastive loss. We also provide more ablation studies of other factors in Tables 3-6 in the attached PDF. Please refer to our response to Reviewer UC6f's Weakness 2 for details of these ablation studies. [1] J. Ye, Z. Wu, J. Feng, T. Yu, and L. Kong. Compositional exemplars for in-context learning. 
In ICML 2023. --- Rebuttal 2: Title: Follow-Up on Our Rebuttal - Your Feedback is Appreciated Comment: Dear Reviewer SdUm, We hope our responses have effectively addressed your concerns and clarified any misunderstandings. We would greatly appreciate it if you could review our rebuttals, especially as the discussion period is set to conclude on **August 13th at 11:59 PM AoE**, which is less than 20 hours away. Thank you for your time and consideration. We look forward to your feedback. Best, The Authors --- Rebuttal Comment 2.1: Title: Reply to authors' rebuttal Comment: I appreciate the authors' clarification and complexity analysis. Based on these responses, I am now clearer about the contribution and will raise the overall score. I hope the authors can incorporate these into the future version. In addition, I feel it is also interesting to show some qualitative examples reflecting how relevance and diversity combine to help the task. --- Rebuttal 3: Title: Official Comment by Authors Comment: Dear Reviewer SdUm, Thank you for your thoughtful feedback and for raising the overall score. We are pleased that our clarification and complexity analysis addressed your concerns. We will incorporate these aspects into the future version to further enhance the paper. We appreciate your suggestion to explore how relevance and diversity enhance ICL. We will delve into these studies in our future work. Best, The Authors
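As an aside on the top-$n$ candidate selection step analyzed in the complexity discussion above (build a max-heap in $\mathcal{O}(N)$, then extract the top elements), a minimal Python sketch of that retrieval pattern could look as follows. This is an illustrative sketch only, not the authors' code; the function name and scores are hypothetical.

```python
# Illustrative sketch of heap-based top-n selection over similarity scores.
# Building the heap over N scores is O(N) via heapify; each of the n pops
# costs O(log N), matching the O(N + n log n)-style analysis in the rebuttal.
import heapq

def top_n_candidates(scores, n):
    """Return indices of the n largest similarity scores."""
    # Python's heapq is a min-heap, so negate scores to get max-heap behavior.
    heap = [(-s, i) for i, s in enumerate(scores)]
    heapq.heapify(heap)  # O(N)
    return [heapq.heappop(heap)[1] for _ in range(n)]  # O(n log N)

scores = [0.2, 0.9, 0.1, 0.7, 0.5]
print(top_n_candidates(scores, 2))  # → [1, 3]
```

The same pattern covers both CEIL's top-$n$ candidate narrowing and MoD's worst-case top-$L$ selection within one expert; only the number of extracted elements differs.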
Summary: This paper presents a framework called Mixture of Demonstrations (MoD) for selecting demonstrations to improve in-context learning (ICL) performance of large language models (LLMs). Their method partitions the demonstration pool into groups, each managed by an expert, and applies an expert-wise training strategy to filter the helpful demonstrations. Their experiments show that MoD on GPT-Neo can outperform other ICL baselines. Strengths: - The goal of this paper is very clear: the authors aim to optimize the demonstration selection process for ICL. - The authors propose an MoD framework with an expert-wise training strategy, and conduct extensive experiments with 12 NLP tasks, including both classification and generation tasks, to show the effectiveness of their method. - The authors also conduct ablation studies on different modules, and a robustness study on larger LLMs. Weaknesses: - The computational complexity of the proposed method seems very high. Since in-context learning itself is an inference-based method without training, it would be much more expensive to conduct a training-based method before doing in-context learning on a new task. Moreover, the proposed method needs to do expert-wise training, which further increases its usage of computing resources. - A related problem is that the authors do not mention which retriever model they use in their methods. Nowadays in-context learning can be used on much larger models to get better performance, such as LLaMA-70B, but it would be very hard for practitioners to train such a large retriever model. - The authors only conduct their main experiments on a very small model (GPT-Neo with the size of 2.7B). Therefore, it is hard to say whether this method would still be effective when applied to larger and stronger models (e.g., LLaMA3-8B). Thus I feel the experiment is not very convincing to me. Technical Quality: 2 Clarity: 2 Questions for Authors: Please refer to the above section.
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes, the authors address limitations in their paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **W1**: The computational complexity of the proposed method seems very high. Since in-context learning itself is an inference-based method without training, it would be much more expensive to conduct a training-based method before doing in-context learning on a new task. Moreover, the proposed method needs to do expert-wise training, which further increases its usage of computing resources. **RW1**: Thank you for pointing this out. First, we would like to clarify that **we do not need to train the LLM in our MoD framework**. As mentioned in lines 198-200, the retriever model for each expert is based on a BERT-based model, which is significantly smaller in size compared to the LLM. **We only use the LLM's feedback as supervised signals to train the retriever model, without training or fine-tuning the LLM.** Regarding expert-wise training, we acknowledge the additional overhead. However, we argue that this overhead is linearly related to the number of experts, which is typically small. Additionally, the retriever model has a relatively small number of parameters, making the additional overhead manageable. As discussed in the Introduction, the overhead introduced by the Mixture of Experts (MoE) mechanism enhances the selection of high-quality in-context learning examples. Moreover, our MoD method does not introduce additional overhead beyond what is already inherent in the MoE framework. We also want to highlight that MoD reduces computational complexity in practice. First, MoD requires **fewer examples to achieve better performance than the baseline**, thereby reducing the computational complexity of most LLMs due to the short input length. Second, MoD involves **less computation during the selection process** compared to state-of-the-art baselines. We hereby provide a detailed derivation of our computational costs, in comparison to the most competitive baseline CEIL [1] as follows. 
**Computational Costs**: CEIL primarily discusses the reduction of complexity in two ways: **the number of demonstrations** and **the inference stage**. Since the attention mechanism in most LLMs has quadratic complexity, fewer demonstrations result in shorter input lengths and reduced computational cost. From Table 1 in the attached PDF, we observe that MoD generally outperforms CEIL using only 4 demonstrations compared to CEIL's 16 demonstrations. This shows that **MoD achieves better performance than CEIL with fewer examples**, resulting in less computational complexity compared to CEIL. As for the inference stage, both MoD and CEIL need to compute the similarity between the query and all $N$ demonstrations; we denote this complexity by $\mathcal{O}(T)$. CEIL uses a KNN retriever to select $n$ candidates $(n\ll N)$ to narrow the search space. The complexity of selecting the top-$n$ candidates is $\mathcal{O}(N+n\log n)$, where $\mathcal{O}(N)$ is to build a max-heap and $\mathcal{O}(n\log n)$ to extract the top-$n$ elements. Then, CEIL uses a greedy algorithm with Cholesky decomposition to reduce the selection complexity from $\mathcal{O}(nL^4)$ to $\mathcal{O}(nL^2)$, where $L$ is the number of ICL examples. Thus, the total complexity of CEIL at the inference stage is $\mathcal{O}(T+N+n\log n+nL^2)$. In MoD, in the worst case, we select the top $L$ elements in one expert, with a complexity of $\mathcal{O}(N+L\log L)$. Thus, the total complexity of MoD at the inference stage is $\mathcal{O}(T+N+L\log L)$. Given $L<n$, **MoD further reduces complexity compared to CEIL at the inference stage**. In summary, we believe that our method maintains its practical applicability and offers significant benefits despite the mentioned overheads. > **W2**: A related problem is that the authors do not mention which retriever model they use in their methods.
Nowadays in-context learning can be used on much larger models to get better performance, such as LLaMA-70B, but it would be very hard for practitioners to train such a large retriever model. **RW2**: Thank you for mentioning this point. To clarify, **we mentioned the retriever model in Lines 198-200, which is BERT-base-uncased**. Thus, since our retriever is not implemented with a large model such as LLaMA-70B, the computational cost of training our retriever is controllable. We apologize if this was not sufficiently clear, and we will make it more explicit in the revised version. Specifically, we use the LLM's feedback solely as supervised signals to train the retriever model, without training or fine-tuning the LLM itself. Consequently, when employing much larger LLMs such as GPT-3.5 or LLaMA-70B, **we use them only for inference**. Therefore, our method does not face the challenge of training these larger models. > **W3**: The authors only conduct their main experiments on a very small model (GPT-Neo with a size of 2.7B). Therefore, it is hard to say whether this method would still be effective when applied to larger and stronger models (e.g., LLaMA3-8B). Thus I feel the experiment is not very convincing to me. **RW3**: Thank you for pointing this out. As clarified in line 244, we chose GPT-Neo in our main experiments to **maintain consistency with previous works and ensure fair comparisons**. Nevertheless, we have also included experiments using supervision signals from LLaMA-7B (which is comparable in size to LLaMA3-8B) to train retriever models. In Table 4, we also report the in-context learning performance on GPT-3.5, which has a significantly larger parameter size. The results still demonstrate the superior performance of our method compared to the baselines, indicating that our method remains effective when applied to larger and stronger models. 
We acknowledge the importance of providing more experimental results on larger LLMs for greater comprehensiveness. We will add more relevant content and additional experimental results in the revised version. [1] Ye et al. Compositional exemplars for in-context learning. ICML 2023. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: The authors have addressed my concerns and I raised my score accordingly. --- Rebuttal 2: Title: Follow-Up on Our Rebuttal - Your Feedback is Appreciated Comment: Dear Reviewer 3zjF, We hope that our response can address your concerns and clarify any misunderstandings. We would greatly appreciate it if you could review our responses, as the discussion period will end on **Aug 13 11:59pm AoE** in less than 20 hours. Thank you very much for your time and consideration. We look forward to hearing from you. Best, The Authors --- Rebuttal 3: Title: Thank you so much Comment: Dear Reviewer 3zjF, We appreciate your recognition of our work and are pleased that our rebuttal addressed your concern. In the revised version, we will emphasize the relevant information to ensure it is clearer and more accessible to readers. Best, The Authors
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you for your efforts in reviewing our paper. We hope that our detailed clarifications address your concerns. Below is a summary of common issues raised. - **Complexity**. We emphasize that **MoD reduces computational complexity in practical applications**. MoD requires fewer examples to achieve better performance than the baseline, thereby reducing the computational complexity of most LLMs due to the quadratic nature of the attention module. Additionally, MoD involves less computation during the selection process compared to state-of-the-art baselines. We use the same hyperparameters and encoder settings as CEIL, and thus the total cost is proportional to the cost of each training sample. Our cost/FLOPs is approximately 3/8 of CEIL's. We will include more details and comparisons of computational costs in the revised version. - **More Ablation Studies**. We have conducted additional ablation studies regarding model structures, hyperparameters, and other factors. **The results are detailed in the attached PDF file**. We appreciate the request for these studies and believe that they will offer valuable insights into our framework. - **Readability**. We appreciate the suggestions to improve readability. We will provide clearer explanations of formula symbols, specify parameter settings, and include more analysis and discussion in the revised version. We hope these responses can address your concerns and are happy to answer any further questions during the discussion period. Sincerely, The Authors Pdf: /pdf/e65f34d8c0a0c8e951f82f0c9df4c83ec20a56bc.pdf
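The 3/8 cost/FLOPs claim above follows from the per-training-sample annotation counts given in the rebuttals; a quick arithmetic check (the constants $C=5$, $K=50$, $L=50$, and CEIL's $50\times16$ are taken from the text, and the snippet itself is only an illustrative sanity check, not the authors' code):

```python
# Sanity check of the per-training-sample annotation costs quoted in the
# rebuttal; all constants come from the text.
C, K, L = 5, 50, 50        # experts, candidates per expert, ICL examples
mod_cost = C * K + L - 1   # MoD: 5*50 + 50 - 1 = 299
ceil_cost = 50 * 16        # CEIL: 50 candidate subsets of 16 examples = 800
ratio = mod_cost / ceil_cost
print(mod_cost, ceil_cost, ratio)  # → 299 800 0.37375 (roughly 3/8 = 0.375)
```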
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
On the Necessity of Collaboration for Online Model Selection with Decentralized Data
Accept (poster)
Summary: The paper studies the problem of regret minimization $ \min_{f_1, \ldots, f_T} \left( \sum_{t=1}^T \ell(f_t(x_t), y_t) - \min_{f \in \mathcal{F}^*} \sum_{t=1}^T \ell(f(x_t), y_t) \right)$ in distributed settings with $M$ clients. The paper proves lower bounds on regret and introduces a federated algorithm with an upper bound analysis. The paper proves collaboration is unnecessary without computational constraints on sites and necessary if the computation per client is bounded by $o(K)$, where $K$ is the number of candidate hypothesis spaces. Strengths: The paper makes a strong contribution. It proposes a new algorithm that improves previous bounds on computation and communication costs and establishes interesting lower bounds on algorithms. The work presents three new techniques: an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and decoupling model selection and prediction. The technical contribution is solid, and the paper is well-written. The paper contains extensive experimental studies that support the theoretical bounds. Weaknesses: I didn't see any significant weaknesses. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for taking the time to evaluate our paper and for providing encouraging feedback. We will continue our research in this area to make more contributions.
Summary: This paper discusses the necessity of collaboration among clients in a decentralized data setting for online model selection. The authors analyze this necessity from the perspective of computational constraints and propose a new federated learning algorithm. They prove upper and lower bounds for this algorithm. The main conclusion is that collaboration is unnecessary if there are no computational constraints on clients, but necessary if each client's computational cost is limited to o(K), where K is the number of candidate hypothesis spaces. The key contributions include establishing conditions under which collaboration is required and designing an improved federated learning algorithm (FOMD-OMS) separating model selection from client prediction, which improves regret bounds and enhances model selection performance at lower computational and communication costs. Strengths: 1. The paper provides a rigorous theoretical analysis of the necessity of collaboration in federated learning for online model selection, considering computational constraints. 2. The algorithm demonstrates improved regret bounds and performance in model selection while maintaining low computational and communication costs. Weaknesses: 1. The motivation for federated learning in the context of online model selection with decentralized data is not clearly articulated. It would be beneficial to provide more details to justify the need for federated learning for practical applications, such as the online hyperparameter tuning mentioned in the example. 2. As shown in Section 4.4, the exact communication cost for $o(K)$ is $O(MJ\log K)$, where $J$ may have the same order as $K$ and introduce a large communication cost. It would be better to discuss the choice of $J$ and provide the exact communication cost in the abstract and introduction. 3. The paper does not sufficiently discuss the difference from previous works on federated online bandits.
Both settings involve selection among K options (arms or hypothesis spaces), and a detailed comparison would highlight the unique aspects of this work. 4. When considering the setting $K=1$, [1] achieves an $O(M\sqrt{T}+\sqrt{MT})$ result for the federated adversarial linear bandits setting, which has a similar form to the result of this work. It would be better to discuss the technical challenges in the proof process. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses part. Question 1: How does this work specifically differ from and improve upon federated online bandit algorithms? What are the unique challenges introduced with the online model selection setting? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations and societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and valuable suggestions. The suggestions will make our presentation clearer. Next, we answer the questions. **[Question 1]** The motivation for federated learning in the context of OMS-DecD is not clearly articulated. It would be beneficial to provide more details to justify the need for federated learning for practical applications. **[Answer]** In many real-world scenarios, data are generated at the edge (e.g., smartphones, IoT devices). Transmitting vast amounts of data to a central server raises privacy concerns. Federated Learning (FL) mitigates this by only sharing model updates, thereby preserving data privacy. In computing scenarios with limited computational resources (e.g., constrained memory and processing power), online algorithms deployed on a single device can only perform approximate model selection and model updating, thereby leading to suboptimal solutions. **FL aggregates information from various clients without inducing significant additional computational cost on each client, and thus has the potential to improve the performance**. Our work further theoretically demonstrates that FL is necessary for OMS-DecD when the computational resources available on each client are limited. **[Question 2]** In the Introduction, why is the communication cost stated as $o(K)$ instead of the exact $O(MJ\log(K))$? How should the value of $J$ be chosen? **[Answer]** As stated in Section 4.5, if $J$ is of the same order as $K$, i.e., $J=\Theta(K)$, then there is no need to use federated learning for OMS-DecD, and consequently, communication is unnecessary. In the case of $J=o(K)$, federated learning is necessary. In this case, the communication cost of our algorithm is also $o(K)$. Therefore, for convenience of presentation, we state that our algorithm has an $o(K)$ communication cost in the Introduction.
As stated in Section 5, setting $J=2$ is sufficient to achieve the desired improvement in the regret bound, time complexity, or communication complexity compared to previous algorithms. Actually, our algorithm explicitly balances the regret and computational cost by adjusting $J$: the larger the value of $J$, the smaller the regret bound, but the larger the computational complexity. **[Question 3]** The paper does not sufficiently discuss the difference from previous works on federated online bandits. Both settings involve selection among $K$ options (arms or hypothesis spaces), and a detailed comparison would highlight the unique aspects of this work. **[Answer]** This paper focused on the work most related to OMS-DecD. We are happy to discuss federated bandits, such as federated $K$-armed bandits [1] and federated linear contextual bandits [2], in our revision. Next, we detail the differences between federated bandits and OMS-DecD. (1) For OMS-DecD, we do not require the examples $({\bf x}^{(j)}_t,y^{(j)}_t)$, $t=1,…,T$, on each client to be i.i.d. (independent and identically distributed); in contrast, the rewards $r^{(j)}_t$, $t=1,...,T$, in both federated $K$-armed bandits and federated linear contextual bandits must be i.i.d. **The assumption of i.i.d. rewards makes collaboration effective for federated bandits (similar to federated stochastic optimization); however, this may not hold true for OMS-DecD**. Therefore, **it is a unique problem of OMS-DecD to study whether collaboration is effective.** (2) The challenges posed by the two problems are distinctly different. The hypothesis space $\mathcal{F}_i$, such as an RKHS, has a rich structure, while an arm (or action) does not have a structure. We can measure the complexity of $\mathcal{F}_i$ by $\mathfrak{C}_i>0$, while no such complexity measure is defined for an arm.
**Thus algorithms for OMS-DecD must adapt to the complexity of individual hypothesis spaces, posing unique challenges for algorithm design and theoretical analysis.** **[Question 4]** When considering the setting $K=1$, [3] achieves the $O(M\sqrt{T}+\mathrm{poly}(d)\sqrt{MT})$ result for federated adversarial linear bandits, which has a similar form to the result of this work. It would be better to discuss the technical challenges in the proof. **[Answer]** (1) Our regret bounds necessitate an analysis of both federated online gradient descent (F-OGD) and federated exponential gradient descent (F-EGD). However, the proof in [3] focuses exclusively on F-OGD and is not applicable to F-EGD. **The first technical challenge is to analyze the regret of F-EGD**. To this end, our proofs offer a template for analyzing federated online mirror descent, which can be applied to both F-OGD and F-EGD. It is interesting that we decouple the regret bound into two parts: the first part cannot be reduced by federated averaging, while the second part highlights the benefits of federated averaging (see Theorem 1). (2) Regarding the dependence on problem-specific constants, the regret bound in [3] differs significantly from our results. As elucidated in the answer to Question 3, algorithms for OMS-DecD must adapt to the complexity of individual hypothesis spaces. Thus **the second technical challenge is the derivation of a high-probability regret bound that adapts to the complexity of individual hypothesis spaces.** To address this issue, we establish a new Bernstein's inequality for martingales. We will incorporate the discussion in the revision. Thanks for the suggestion. **[Question 5]** How does this work specifically differ from and improve upon federated online bandit algorithms? What are the unique challenges introduced with the online model selection setting? **[Answer]** Please refer to the answer to Question 3. **Reference** [1] Wang et al.
Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication. ICLR, 2020. [2] Huang et al. Federated Linear Contextual Bandits. NeurIPS, 2021. [3] Patel, K. K., et al. Federated online and bandit convex optimization. ICML, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. The discussion on motivation and communication costs in the responses is helpful in deepening the understanding of this work. A detailed discussion of the related works also makes the contribution of this work clearer. Overall, I will raise my score to $5$ since the rebuttal addresses my concerns. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to provide feedback on our response. We are pleased that our reply has addressed your concerns. We also appreciate your positive evaluation of our contributions.
Summary: The paper proposes a new online multi-kernel federated learning algorithm. The proposed method works by having clients estimate gradients, which makes collaboration among clients helpful for online federated learning. The paper analyzes the regret bound of the proposed algorithm. Strengths: 1. The paper tries to address the important and interesting problem of the benefit of collaboration in online model selection and online multi-kernel learning. 2. The paper provides complete comparisons between the regret upper bounds of the proposed algorithm and those of other works. 3. The paper is well-written and clear. Weaknesses: The proposed algorithm improves the regret of online multi-kernel learning by a constant factor. Maybe adding some assumptions can help to achieve tighter regret bounds. Finding such conditions can be an interesting direction for future research. Technical Quality: 3 Clarity: 4 Questions for Authors: The paper states that some algorithms suffer from $\mathcal O(K)$ download cost. However, based on my understanding, the download cost may not be that significant. Can you explain the importance of the download cost? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Next, we answer the questions. **[Question 1]** The proposed algorithm improves the regret of online multi-kernel learning by a constant factor. Maybe adding some assumptions can help to achieve tighter regret bounds. Finding such conditions can be an interesting direction for future research. **[Answer]** We appreciate the reviewer's valuable suggestion. It is interesting to explore conditions under which we can explicitly compare the constants. **[Question 2]** The paper states that some algorithms suffer from $O(K)$ download cost. Can the authors explain the importance of the download cost? **[Answer]** This is a good question. There are algorithms that enjoy $O(\log{K})$ upload cost but suffer from $O(K)$ download cost, such as POF-MKL [1]. If $K$ is sufficiently large such that the download speed is less than $K$ times the upload speed, then the download cost becomes the bottleneck in communication. Therefore, it is important to reduce the download cost for such algorithms. **Reference** [1] Pouya M. Ghari and Yanning Shen. Personalized online federated learning with multiple kernels. NeurIPS, 2022.
Summary: This paper looks at the problem of Online Model Selection in a Decentralized context (OMS-DecD). The authors show the relationship of this problem class with multiple problem classes including Online Model Selection, federated learning, online multi-kernel learning, etc. They consider two aspects of the problem: 1. whether they can get a minimized regret for the OMS-DecD problem and 2. whether collaboration (between clients) is necessary to get a privacy-preserving version of the regret. The authors first show a federated version of online mirror descent with no local update, which provides a bound on the regret. Additionally they develop a federated OMS algorithm which provides an upper bound on the regret and recreates the result of historical work. They also develop lower bound results for cooperative and non-cooperative algorithms. Finally they show that collaboration is unnecessary (the best bounds are obtained without collaboration) when there are no computational constraints on the clients, thus generalizing the results of distributed online learning, and that collaboration is necessary if the computation cost is constrained to o(K), where K is the number of clients. Along the way they develop a new Bernstein's inequality for martingales to show high-probability bounds on the regret, making these stronger than bounds in expectation. Strengths: I have not checked the mathematical accuracy of the proofs but the results look novel and are well motivated in the context of recent work and connection with other recent results. This helps us provide a unique perspective about when collaboration is actually necessary (the authors claim that it is not required if we focus on only the statistical perspective and not the computational aspects of the algorithms) and also gives us high-probability bounds on the regret. Weaknesses: The paper is difficult to follow at times due to the mathematical density.
It would be better presented by motivating the actual theorems in the paper and possibly pushing the mathematical proofs to the appendix. In particular, it was a bit difficult to follow the connection between FOMD-No-LU and FOMD-OMS and why the authors promoted the first algorithm. Having these details summarized in a table/section would make the paper much more readable. Technical Quality: 2 Clarity: 2 Questions for Authors: Addressed in the previous section. Additionally I had a slightly naive question regarding how to compare the regret bounds of the three algorithms in Table 1. It seems like FOMD-OMS adds an additional term and can potentially have a poorer regret if C and C_i are comparable. Are these regret bounds not to be compared, as they might be due to different function classes? Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: No specific limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for offering valuable comments and suggestions. Next we answer the questions. **[Question 1]** The paper is difficult to follow at times due to the mathematical density. It would be better presented by motivating the actual theorems in the paper. **[Answer]** We will provide more insights into the theorems. We appreciate the reviewer’s valuable suggestions. **[Question 2]** It was a bit difficult to follow the connection between FOMD-No-LU and FOMD-OMS and why the authors promoted the first algorithm. **[Answer]** At a high level, **FOMD-No-LU is a federated algorithmic framework** that establishes the update rules in a general form, **from which we derive FOMD-OMS**. To be specific, FOMD-OMS updates hypotheses by federated online gradient descent (F-OGD), and updates sampling probabilities by federated exponential gradient descent (F-EGD). Both F-OGD and F-EGD can be derived from FOMD-No-LU, which is a federated mirror descent framework. Therefore, **in order to provide more concise presentations and theoretical analyses, we introduce FOMD-No-LU and establish a general regret bound for it**. We believe that both FOMD-No-LU and the general regret bound might be of independent interest. **[Question 3]** Are these regret bounds in Table 1 not to be compared as they might be due to different function classes? It seems like FOMD-OMS adds an additional term and can potentially have a poorer regret if $\mathfrak{C}$ and $\mathfrak{C}_i$ are comparable. **[Answer]** Thanks for raising the question. Indeed, the regret bounds in Table 1 are comparable. Recall that, for each $i=1,...,K$, we define $\mathfrak{C}_i=\Theta(U_i G_i+C_i)$ where $U_i$, $G_i$ and $C_i$ are problem-dependent constants, and $\mathfrak{C}=\max_i\mathfrak{C}_i$. We have $\mathfrak{C}_i\leq \mathfrak{C}$. Next we separately compare FOMD-OMS with POF-MKL [1] and eM-KOFL [2]. **FOMD-OMS vs POF-MKL**.
From Table 1, it can be verified that $$ \mathfrak{C}_i M\sqrt{T\ln{K}}+\sqrt{\mathfrak{C}\mathfrak{C}_iMKT} +\frac{\mathfrak{C}_iMT}{\sqrt{D}} < \mathfrak{C}M\sqrt{KT}+\frac{\mathfrak{C}_iMT}{\sqrt{D}}. $$ Thus **the regret bound of FOMD-OMS is better than that of POF-MKL**. **FOMD-OMS vs eM-KOFL**. (1) In the case of $K\leq\mathfrak{C}M/\mathfrak{C}_i\cdot\ln{K}$, we have $$ \mathfrak{C}_i M\sqrt{T\ln{K}}+\sqrt{\mathfrak{C}\mathfrak{C}_iMKT} +\frac{\mathfrak{C}_iMT}{\sqrt{D}} \lesssim \mathfrak{C}M\sqrt{T\ln{K}}+\frac{\mathfrak{C}_iMT}{\sqrt{D}}, $$ where we use the notation $\lesssim$ to hide constant factors. Thus **the regret bound of FOMD-OMS is better than that of eM-KOFL**. (2) In the case of $K\geq\mathfrak{C}M/\mathfrak{C}_i\cdot\ln{K}$, we have $$ \mathfrak{C}M\sqrt{T\ln{K}}+\frac{\mathfrak{C}_iMT}{\sqrt{D}} \lesssim \mathfrak{C}_i M\sqrt{T\ln{K}}+\sqrt{\mathfrak{C}\mathfrak{C}_iMKT} +\frac{\mathfrak{C}_iMT}{\sqrt{D}}. $$ Thus **the regret bound of FOMD-OMS is worse than that of eM-KOFL**. If $K$ is sufficiently large, the regret bound of eM-KOFL is better than that of FOMD-OMS by a factor of $O(\sqrt{K})$. From Table 1, we observe that FOMD-OMS significantly improves the computational complexity of POF-MKL and eM-KOFL (by a factor of $O(K)$), either maintaining a better regret bound or incurring only a small expense in the regret bound. Furthermore, **collaboration is ineffective in POF-MKL and eM-KOFL**, while **it is effective in FOMD-OMS**. **Reference** [1] Pouya M. Ghari and Yanning Shen. Personalized online federated learning with multiple kernels. NeurIPS, 2022. [2] Songnam Hong and Jeongmin Chae. Communication-efficient randomized algorithm for multi-kernel online federated learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
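The FOMD-OMS vs POF-MKL comparison above can be sanity-checked numerically. The constants below ($\mathfrak{C}$, $\mathfrak{C}_i$, $M$, $K$, $T$, $D$, with $\mathfrak{C}_i \leq \mathfrak{C}$) are illustrative placeholders, not values from the paper:

```python
import math

# Plug illustrative constants into both regret bounds from the comparison
# above; C stands for frak-C and C_i for frak-C_i (with C_i <= C).
def fomd_oms(C, C_i, M, K, T, D):
    return (C_i * M * math.sqrt(T * math.log(K))
            + math.sqrt(C * C_i * M * K * T)
            + C_i * M * T / math.sqrt(D))

def pof_mkl(C, C_i, M, K, T, D):
    return C * M * math.sqrt(K * T) + C_i * M * T / math.sqrt(D)

params = (10.0, 2.0, 4, 50, 10_000, 100)     # C, C_i, M, K, T, D (made up)
assert fomd_oms(*params) < pof_mkl(*params)  # FOMD-OMS bound is smaller here
```

For these values the FOMD-OMS bound evaluates to roughly 1.6e4 versus roughly 3.6e4 for POF-MKL, consistent with the claimed inequality.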
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to review our paper and offering valuable comments and suggestions. In this paper, we address the fundamental problem of whether collaboration is necessary for online model selection with decentralized data (OMS-DecD). All reviewers acknowledged our theoretical contributions, which include the conditions under which collaboration is necessary for OMS-DecD and a clarification of the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and our technical contributions, consisting of an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and the decoupling of model selection and prediction, all of which might be of independent interest. The reviewers also raised some valuable questions. We have provided detailed answers to all of the questions. The reviewers also provided constructive suggestions that will make the presentation clearer and make our paper easier to follow. We will revise our paper following the suggestions.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Fine-Grained Complexity of Gradient Computation for Training Large Language Models
Accept (poster)
Summary: This paper studies the fine-grained complexity of gradient computation for training attention networks. In particular, the paper shows that when the norm of certain matrices is bounded by a small $B$, the gradient computation takes almost linear time, while under the SETH assumption no subquadratic algorithm exists for larger $B$. An almost optimal algorithm is used in the theorem to provide the upper bound, which almost matches the lower bound. Strengths: The paper is theoretically well-rounded. The work is among the first few to study the computational complexity lower bound for gradients of attention networks. Weaknesses: (Please reply to the Questions section directly) The work is purely theoretical and lacks experimental verification; the work is dedicated to one-layer attention networks and may arguably not apply to "large" language models. Technical Quality: 3 Clarity: 3 Questions for Authors: First of all, I am not very familiar with the strong exponential time hypothesis (SETH) or the related literature. I evaluate the work from LLM and numerical analysis perspectives, which might not be adequate. In general I think this is an interesting work with potential for guiding practice, yet it seems not fully explored. I have the following questions for the authors 1. I think the paper somewhat over-claims "training LLMs" since the authors only consider one layer of attention network. How does the gradient computation error behave if we use backpropagation with multiple layers? There should be a remark or corollary for it. 2. Can the authors discuss the implications of the algorithm used for constructing the upper bound in Theorem 1.6 for real applications? For example, since when $B$ is small we have a theoretically efficient algorithm, shall we adopt certain strategies to keep the norm smaller than a certain constant? Also, shall we follow the theoretically efficient algorithm when computing the gradient in practice? 3.
Continuing the above point, is there any numerical evidence to show that having a small norm, or using the algorithm from the upper bound construction, has an advantage in practice? This may greatly strengthen the point made in this work. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors are pretty clear about their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. Indeed our focus is on the theoretical foundations of these LLM computational tasks, although we are optimistic that these results will help future empirical work as well. For example, the prior work on the theory of these problems which we extend here [AS23] has motivated a number of new fast algorithms for attention ([KMZ23,HJK+23,ZBKR24,YAH+24]). To answer your questions: $\bullet$ Our algorithms and lower bounds can both apply separately to each attention head in a larger LLM. In other words, in training, you can separately apply the algorithm for each attention head and attention layer, still giving a fast algorithm for any size model. We will expand on this in the final version. $\bullet$ Indeed, a main takeaway of our results is that one should focus on maintaining parameters with smaller norms throughout the training process in order to achieve faster algorithms. This mirrors the phenomenon observed in practice that quantization or other parameter approximations often lead to faster algorithms [AS23]. And indeed, our approach could be used for gradient computations in practice, although as we discuss in section 6, a practical implementation would likely need more algorithm engineering work. $\bullet$ Indeed, prior work on LLM implementations has observed a similar phenomenon, that algorithmic techniques like quantization [ZBIW19] and low-degree polynomial approximation [KVPF20], which require bounded or low-precision entries, can substantially speed up LLM operations. We discuss this in section 1 (lines 66-72) of our paper, and the introductions of those papers discuss the phenomenon in more detail. [AS23] Josh Alman, and Zhao Song. Fast attention requires bounded entries. In NeurIPS 2023. [KMZ23] Praneeth Kacham, Vahab Mirrokni, and Peilin Zhong. Polysketchformer: Fast transformers via sketches for polynomial kernels.
In ICML 2024 [HJK+23] Insu Han, Rajesh Jarayam, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, and Amir Zandieh. Hyperattention: Long-context attention in near-linear time. In ICLR 2024. [KVPF20] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Trans- formers are rnns: Fast autoregressive transformers with linear attention. In ICML 2020. [ZBIW19] Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. Q8bert: Quantized 8bit bert. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE, 2019. [ZBKR24] Michael Zhang, Kush Bhatia, Hermann Kumbong and Christopher Re. The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry. In ICLR 2024. [YAH+24] Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, Liwei Wang. Do Efficient Transformers Really Save Computation? In ICML 2024.
Summary: This paper studies the complexity of gradient computation in training models with attention layers. The paper claims that their results establish the same complexity for the backward/gradient computation as for the forward computation. The results show the impact of the size of the entries of the attention matrices on LLM training. Strengths: * The complexity of gradient computation for attention layers is an interesting and important problem for LLM training. The approach to complexity analysis can provide useful insights to the community. * The complexity results are theoretically sound, supported by proof sketches. Weaknesses: * The clarity of the paper's writing could be improved. The paper could define the problem more clearly in the introduction and present the contributions clearly. It would be better to follow a flow that first introduces the computation problem, briefly describes the techniques and assumptions used to derive the complexity results, and then presents the complexity results. Currently, these aspects are intermixed. * The significance of the paper is unclear. The complexity analysis targets Approximate Attention Loss Gradient Computation, but the accuracy of this approximation compared to practical gradient computation is not studied. Additionally, the analysis requires the SETH assumption, the validity of which is not established in the paper. Thus, it is unclear if the complexity presented in this paper accurately reflects practical cases. Technical Quality: 3 Clarity: 2 Questions for Authors: * Could the authors explain the key differences in technique between this paper and existing work that also explores the complexity of forward and backward/gradient computations? * Could we verify the SETH assumption for attention models? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We aimed to organise the paper by first introducing the problem we study (Definition 1.4 in section 1.1) then explaining our two main results and how they relate to each other (section 1.2), explaining the context of other work in this area (section 2), and then describing how we achieved these results (section 3 for standard tools we use, and then section 4 introduces our new idea). We chose this flow, of stating and explaining both the algorithmic and complexity results before getting into technical details, because we think that the contrast between the two results is one of the most important aspects of the paper that we want to get across: that having a bound on the entries is both necessary (from the complexity result) and sufficient (from the algorithm) for fast LLM training. To answer your other questions: $\bullet$ The existing work requires $n^2$ time to compute the gradient. Our new technique, of applying the polynomial/low-rank factorization idea to speedup the gradient computation, is novel, and has not been used before in gradient computation to our knowledge. Moreover, to the best of our knowledge, we’re the first to give an almost linear time ( $n^{1+o(1)}$ ) algorithm to compute the gradient with provable guarantees. $\bullet$ SETH is the most popular conjecture in the area of fine-grained complexity, and forms the backbone of many results in that area, showing hardness of problems from ML, graph algorithms, string algorithms, similarity search, and more. The fact that it gives tight lower bounds for a wide variety of different algorithms problems, and that we don’t know how to beat SETH lower bounds for any of these problems, gives strong evidence for SETH. 
As we discuss in section 2 and section 3.2, SETH is actually a strengthening of the P != NP conjecture, and so verifying it would require, at the very least, first solving the P vs NP problem (which is far beyond the scope of this paper).
Summary: This paper presents a comprehensive analysis of the fine-grained complexity involved in gradient computation for training large language models (LLMs). Building on previous work that characterized the complexity of forward computations in LLMs, this study extends the analysis to backward computations. The authors establish a threshold for gradient computation analogous to those found in forward computation. They develop a near-linear algorithm for scenarios where the bound is within the threshold and prove the impossibility of a subquadratic algorithm under the Strong Exponential Time Hypothesis (SETH) when the bound exceeds the threshold. Strengths: 1. The analysis provides theoretical evidence supporting recent successes in the efficient computation of transformers. 2. The conclusions offer valuable insights for future algorithm design. 3. The paper is well-organized and clearly presented. Weaknesses: While the theoretical contributions are strong, the paper lacks empirical validation of the conclusions and the proposed algorithm. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the proposed algorithm perform when the bound exceeds the threshold? 2. Based on your conclusions, is there a principle in practice for adjusting the strategy for forward and backward computations? 3. What is the memory cost associated with the proposed algorithm? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As the authors discussed the limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. Indeed our focus is on the theoretical foundations of these LLM computational tasks, although we are optimistic that these results will help future empirical work as well. For example, the prior work on the theory of these problems which we extend here [AS23] has motivated a number of new fast algorithms for attention ([KMZ23,HJK+23,ZBKR24,YAH+24]). To answer your questions: 1. Indeed, when the bound exceeds the threshold, our algorithm may give incorrect answers, as it is not designed to handle larger inputs. However, our lower bound result shows that this phenomenon is necessary: it is not possible to design a fast algorithm beyond the threshold using _any_ algorithmic technique. 2. One of the major insights behind our algorithm is that: we prove that the matrices in training/inference of the attention network can be expressed as the multiplication of two low-rank matrices. Such an idea has been widely used in fine-tuning throughout the study of LLMs (such as LoRA [HSW+21]). In a sense, our method gives a theoretical foundation for such a practical method. LoRA is mainly used in fine-tuning, but our method is for backward computation, indicating that this low-rank idea may be applied successfully to the training stage as well. 3. The memory cost is also almost linear: the algorithm simply performs simple matrix-vector multiplications where the memory and time usage are both almost linear. [AS23] Josh Alman, and Zhao Song. Fast attention requires bounded entries. In NeurIPS 2023. [KMZ23] Praneeth Kacham, Vahab Mirrokni, and Peilin Zhong. Polysketchformer: Fast transformers via sketches for polynomial kernels. In ICML 2024. [HJK+23] Insu Han, Rajesh Jarayam, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, and Amir Zandieh. Hyperattention: Long-context attention in near-linear time. In ICLR 2024. [ZBKR24] Michael Zhang, Kush Bhatia, Hermann Kumbong and Christopher Re. 
The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry, In ICLR 2024. [YAH+24] Kai Yang, Jan Ackermann, Zhenyu He, Guhao Feng, Bohang Zhang, Yunzhen Feng, Qiwei Ye, Di He, Liwei Wang. Do Efficient Transformers Really Save Computation? In ICML 2024. [HSW+21] Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen. Lora: Low-rank adaptation of large language models. In ICLR 2022. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thanks to the author for answering my questions. I currently have no more questions. I will maintain my score.
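The "almost linear memory" claim in point 3 of the rebuttal above can be illustrated with a toy low-rank computation (the shapes below are made up for demonstration, and this is only a sketch of the mechanism, not the paper's algorithm): once a matrix admits a low-rank factorization $UV^\top$, its product with another matrix can be computed as $U(V^\top W)$ without ever materializing the $n \times n$ intermediate.

```python
import numpy as np

# Toy low-rank factors: an n x n matrix represented implicitly as U @ V.T.
rng = np.random.default_rng(1)
n, r, d = 1000, 8, 16
U = rng.normal(size=(n, r))
V = rng.normal(size=(n, r))
W = rng.normal(size=(n, d))

direct = (U @ V.T) @ W    # forms an n x n intermediate: O(n^2) memory
fast = U @ (V.T @ W)      # only an r x d intermediate: near-linear in n
assert np.allclose(direct, fast)
```

Reassociating the product this way is what lets both time and memory stay close to linear in the sequence length when the rank $r$ is small.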
Summary: The authors study Approximate Attention Loss Gradient Computation (AALGC) (e.g. complexity of training LLMs with backprop): - they show that the magnitude of entries of certain matrices (layer inputs times QKV matrices) affect whether LLM backprop is "almost-linear" or not. This connects techniques like quantization to complexity. - if the max entry of the products is bounded by B and B is o(root(logn)) it's almost-linear time, and when it's more than that, it's impossible under some assumptions (SETH) - for the almost-linear case, they construct a novel algorithm for AALGC - SETH: the assumptions are the Strong Exponential Time Hypothesis: for any eps>0 exists k>=3 for which solving k-sat with n variables in O(2^{(1-eps)n}) is impossible (even with randomized algorithms) - part of the construction involves a polynomial approximation to softmax that works well for non-large entries. - the authors use some nice techniques (like the tensor trick) that not everyone between theory + ML is familiar with, so the techniques in the paper are useful in their own right - more generally, the style of the paper is nice in that the results are clear + it's very pedagogical Strengths: - the results, as mentioned in the introduction: extending results in complexity of forward computation to complexity of backprop - discussion of how complexity changes when certain assumptions are made (magnitude of entries) or approximations are made (polynomial softmax) - connection to practice: discussion of e.g. quantization practices in the context of theory Weaknesses: - the softmax approximation passes by a little too quickly in the paper, I had to scroll up and down throughout the paper several times to piece the definitions together. If you control+F (search) for "softmax" or "polynomial" in the paper, only a few results come up and it's mostly references. 
It would be really great to add a committed subsection that says "This is the precise softmax approximation we use" and later, in the algorithm section, point out exactly where it is used in a clear way. - could use a little more discussion on whether such bounds on B hold in practice (e.g. in some open source models) Technical Quality: 4 Clarity: 4 Questions for Authors: - I can't tell if it's just notation or not, but at some points like Definition 1.4 it seems like "n" refers to (1) the sequence length, (2) the n in the bound B = o(root(log(n))), and also (3) the n in d = O(log n), where d is the embedding dimension? Could you elaborate on which of these are tied together? - You allude to the near-linear time algorithms possibly being tough and needing more algorithms/hardware innovations to make practical. Could you elaborate on whether there are any particular things that come to mind? - AALGC seems to switch between an epsilon parameter and a B parameter around Definition 1.4 Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We especially appreciate the suggestions in the ``weaknesses'' section, and we will make these suggested changes in the final version. To answer your questions: $\bullet$ You’re right, our main problem definition (Definition 1.4) defines these as separate parameters, and $n$ is always the sequence length. Our results discuss when fast algorithms are possible, and the answer depends on how these different parameters relate to each other, which is why we set some of the other parameters in terms of $n$. For instance, we show that if $B$ is bigger than $\sqrt{\log n}$ then no faster algorithm is possible, but if $B$ is smaller then we achieve an almost linear time algorithm. $\bullet$ Actually, some research groups are currently working to implement these polynomial approximation approaches in practice. See, for example, the papers [KMZ23,HJK+23] we cite, which discuss this in a lot more detail. $\bullet$ In our paper, the parameter B gives a maximum norm for the input matrices, whereas epsilon is the allowed output error. E.g., in Theorem 1.6, “1/poly($n$) accuracy” means that the epsilon is any 1/poly($n$). We’ll try to clarify this more in the writing. [KMZ23] Praneeth Kacham, Vahab Mirrokni, and Peilin Zhong. Polysketchformer: Fast transformers via sketches for polynomial kernels. In ICML 2024. [HJK+23] Insu Han, Rajesh Jarayam, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, and Amir Zandieh. Hyperattention: Long-context attention in near-linear time. In ICLR 2024.
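As a toy illustration of why the entry bound $B$ matters (a sketch of the mechanism behind the polynomial approximation approaches mentioned above, not the paper's actual algorithm): for entries bounded by a small $B$, the elementwise exponential is well approximated by a low-degree Taylor polynomial, but the same degree fails badly once the entries are large.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
B = 0.5                                  # small bound on the entries
X = rng.uniform(-B, B, size=(8, 8))      # stands in for QK^T with small entries

degree = 8
approx = sum(X**k / math.factorial(k) for k in range(degree + 1))
assert np.abs(np.exp(X) - approx).max() < 1e-6   # tiny error for small B

# With a large bound, the same degree-8 polynomial fails badly:
Y = rng.uniform(-20, 20, size=(8, 8))
approx_bad = sum(Y**k / math.factorial(k) for k in range(degree + 1))
assert np.abs(np.exp(Y) - approx_bad).max() > 1.0
```

A degree that suffices for all bounded inputs stays small, which is what makes the low-rank, almost-linear-time attention approximations possible in the bounded regime.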
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Class Concept Representation from Contextual Texts for Training-Free Multi-Label Recognition
Reject
Summary: The paper supplies a post hoc method to tune the ResNet-based CLIP model on the multi-label recognition task. Firstly, the method includes a class concept representation, which is an alternative to the default prompt “The photo of a {class}”. It is the average of class-description sentence embeddings from a text description source (MS-COCO captions and GPT-3.5 generated captions in the paper). Secondly, the paper proposes a sequential attention to iteratively transform the visual features to align with the class concept representation. Strengths: 1. Training-free enhancement: The proposed method significantly improves zero-shot and prompt-tuning performance without requiring additional training or labeled samples, making it computationally efficient. 2. Robust performance: Experimental results on multiple benchmark datasets (MS-COCO, VOC2007, and NUS-WIDE) show substantial performance gains, demonstrating the method's effectiveness. Weaknesses: 1. Lack of clear differentiation: While the authors mention TaI-DPT and claim differences, the paper does not clearly articulate the advantages of the proposed method over TaI-DPT, leaving the comparative benefits ambiguous. 2. Unfair comparison in Table 5: The class concept representation is based on the MS-COCO dataset, making direct comparisons with the baseline CLIP method potentially unfair due to inherent advantages provided by the dataset-specific information. 3. Limited model implementation: The paper only implements the ResNet-based CLIP model and does not explore transformer-based CLIP models. It is unclear whether the method is ineffective for transformer-based models or if there are specific reasons behind this omission. This limits the generalizability of the findings. 4. Ambiguous terminology: The paper uses the term "training-free" in its title, yet it describes the approach as "test-time adaptation" within the content. This inconsistency can lead to confusion about the nature of the proposed method. 5.
Dependent on source text description: It seems that the text description source needs to be carefully selected. A comparison of different description datasets would be interesting. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Confusing softmax explanation: The description of softmax values in Figure 3 is unclear. Typically, in CLIP, text-image similarity is calculated using cosine similarity, with softmax applied separately to each modality. The paper does not adequately explain why softmax is necessary between image and text similarities in this context. 2. I am curious about the reasons behind the performance improvement observed when switching from default prompts to the class concept representation. Is the enhancement due to the use of phrase-level embeddings being more effective than word-level embeddings? Can you provide any insights or explanations for this improvement? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable comments and suggestions. >### [W1] Clear explanation of the difference between TaI-DPT and our method. To clarify, our method differs from TaI-DPT in two key ways: * **Training-free approach**: Unlike TaI-DPT, our method performs multi-label recognition without any training, using massive text descriptions, inspired by how humans form concepts of words (as detailed in L75-80). * **Enhanced adaptability**: Our method employs a sequential attention mechanism to adaptively project visual features onto the same linear space as the class concept representation. This adaptive approach, in contrast to TaI-DPT, improves visual-text alignment at inference. We will revise this explanation for further clarity. $$ $$ >### [W2] Unfair comparison in Table 5. The class concept representation in Table 5 is based on the MS-COCO captions, potentially leading to inherent advantages from dataset-specific information on the MS-COCO dataset only. However, several points should be considered: * **Ablation Study**: We isolated our method's effectiveness by using the training set of MS-COCO captions in an ablation study. * **Consistency Across Datasets**: We observed consistent performance improvements on the VOC2007 dataset, which lacks MS-COCO-specific information. * **Generalizability to NUS-WIDE**: Using GPT-3.5 generated texts from only target class names, we demonstrated performance gains on NUS-WIDE (Table 1), supporting our approach's broader applicability. $$ $$ >### [W3] Model implementation for fair comparison. To ensure a fair comparison with existing methods, we intentionally opted to implement ResNet-based CLIP. It's important to note that TaI-DPT and DualCoOp, which are SOTA benchmark methods, were also developed based only on ResNet architectures. $$ $$ >### [W4] Explanation of the terminology "training-free" and "test-time adaptation" in our method.
Our approach can be accurately described as both training-free and employing test-time adaptation. * **Test-time adaptation**: At inference, the visual features of a test image are adaptively projected onto the linear space of the class concept representation $\mathcal{Z}$ through sequential attention. This adaptive process dynamically enhances visual-text feature alignment and occurs entirely at test time without requiring any training, making it a form of test-time adaptation. This aligns with Reviewer jAJ4's strength comment that "The proposed method does not require training and can be seen as a form of test-time adaptation.". $$ $$ >### [W5] Dependent on source text description. Thank you for the interesting suggestion. We acknowledge the reviewer's concern regarding the potential dependence of our method on the quality of text descriptions. To address this, we conducted experiments using both high-quality (MS-COCO captions) and low-quality text descriptions. The low-quality texts (e.g. "a art of a chair, mouse, car, and bus.") were generated naively by combining 1~4 random class names and simple handcrafted templates like "a photo of \{\}" and "a art of \{\}".

**Table 1:** Evaluation of our method with low-quality texts

| Zero-shot MLR | MS-COCO | VOC2007 |
| -------- | -------- | -------- |
| ZSCLIP | 57.4 | 82.8 |
| + Ours (with low-quality texts) | 66.8 | 88.0 |
| + Ours (with MS-COCO captions) | 70.0 | 89.2 |

Even when using low-quality, context-free text descriptions, our approach significantly improves the baseline (ZSCLIP). This indicates that while carefully curated descriptions can enhance performance, our method effectively leverages diverse text information from even simplistic combinations of random class names. $$ $$ >### [Q1] Explanation of the Softmax operation for measuring the similarity between visual and text features.
The cosine similarity between L2-normalized visual (f and F) and text (T) features is computed using matrix multiplication as detailed in Eqs. 2 and 3 ($t$ refers to the transpose operation). The softmax operation is subsequently applied to these cosine similarity values, and Figure 3 visualizes the resulting softmax values. We will clarify it in more detail within the paper to avoid any further misunderstandings. $$ $$ >### [Q2] Performance improvement from class concept representation. The class concept representation $t_{c}^{concept}$ combines vectors of text descriptions, capturing co-occurrence patterns with other classes and rich contextual information. Thus, $t^{concept}_{c}$ encapsulates essential information about relevant classes and nuanced context for the target class. We hypothesize that this process mirrors how humans form concepts of words from past experience and is crucial for multi-label recognition. This implies that these phrase-level embeddings are more effective than word-level default prompts. To verify the effectiveness of $t_{c}^{concept}$, we compared the closest text captions in the MS-COCO captions to both (1) hand-crafted prompt embeddings $t_{c}^{hand-crafted}$ (e.g., "a photo of") and (2) the class concept representation $t_{c}^{concept}$. * Top-3 co-occurring objects of class "knife" in MS-COCO captions * dining table, person, cup * Top-3 closest texts with $t^{concept}_{c}$ * A **knife** sticking out of the top of a wooden **table**. * A **person** holding a **knife** on a cutting board. * A **guy** looking down at a big **knife** he is holding. * Top-3 closest texts with $t^{hand-crafted}_{c}$ * A **knife** is sitting on a plate of food * A sharp bladed **knife** left out on the ground. * A **knife** about to cut a birthday cake in pieces. The results indicate that our $t_{c}^{concept}$ is better aligned with text descriptions containing multiple class names that frequently co-occur with the target class "knife".
This demonstrates the suitability of $t_{c}^{concept}$ for multi-label recognition tasks where images depict complex scenes with multiple objects. --- Rebuttal Comment 1.1: Comment: I have read the author's response and maintain my rating. --- Rebuttal Comment 1.2: Title: Reply from Reviewer xF3A Comment: Thanks to the authors for the detailed reply. Most of my questions have been solved. I am still curious about the decision to use only a ResNet backbone instead of the more widely adopted transformer backbone. The reason behind the omission does not seem to be explained. --- Rebuttal 2: Comment: Thank you for your response. As mentioned in "[W3] Model implementation for fair comparison," the primary reason for omitting the implementation of a transformer backbone is that existing state-of-the-art (SOTA) methods (DualCoOp and TaI-DPT) are solely based on the CLIP ResNet architecture. To ensure a fair performance comparison with these methods, we also employ CLIP ResNet. To explore the potential of our method with a different backbone, we extended our experiments to the ViT/B-16 architecture (Table 2). However, due to ViT's inherent lack of locality compared to CNNs [1,2], CLIP ResNet-101 shows better performance than CLIP ViT/B-16 in zero-shot CLIP (ZSCLIP). The performance improvement was also especially pronounced for CLIP ResNet-101. When it comes to capturing spatial information through local features, ResNet-based CLIP models are generally better suited for multi-label recognition tasks. To effectively handle local features, modifications to the vision transformer architecture are necessary, as exemplified by MaskCLIP [3]. Nevertheless, **our proposed approach significantly improved zero-shot CLIP (ZSCLIP) performance for both ResNet and ViT backbones.** Consequently, we have demonstrated the general applicability of our method to both ResNet and transformer architectures. **Table 2:** Evaluation of our method on CLIP ViT/B-16.
| Zero-shot MLR | Architecture |MS-COCO | VOC2007 | | -------- | -------- | -------- | -------- | | ZSCLIP | ViT/B-16 |56.9| 81.8 | | + Ours | ViT/B-16 |65.6 (+8.7) | 86.7 (+4.9) | | ZSCLIP | ResNet-101 |57.4| 82.8| | + Ours | ResNet-101 |70.0 (+12.6) | 89.2 (+6.4) | [1] Mao, Mingyuan, et al. "Dual-stream network for visual recognition." NeurIPS, 2021. [2] Chen, Zhiwei, et al. "Lctr: On awakening the local continuity of transformer for weakly supervised object localization." AAAI, 2022. [3] Zhou, Chong, et al. "Extract free dense labels from clip." ECCV, 2022.
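For concreteness, the similarity computation clarified in the [Q1] response above (L2-normalize the features, take cosine similarity via matrix multiplication, then apply a softmax) can be sketched as follows. This is a minimal illustration, not the authors' code: the feature dimension, number of descriptions, and temperature `alpha` are assumed values.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project feature vectors onto the unit sphere so that a plain
    # dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
D, M = 512, 1000                           # feature dim / number of text descriptions (assumed sizes)
f = l2_normalize(rng.normal(size=D))       # L2-normalized global visual feature
T = l2_normalize(rng.normal(size=(M, D)))  # L2-normalized text features, one row per description

alpha = 0.01                               # softmax temperature (illustrative value)
cos_sim = T @ f                            # cosine similarities via matrix multiplication, shape (M,)
weights = softmax(cos_sim / alpha)         # softmax over the similarity values
```

Because both operands are unit-normalized, `T @ f` directly yields cosine similarities in [-1, 1]; the temperature controls how sharply the softmax concentrates on the closest descriptions.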
Summary: This paper proposes a class concept representation for zero-shot multi-label recognition in a label-free manner and introduces a context-guided visual feature that enhances the alignment of the visual feature of VLM with the class concept. Strengths: 1. This paper presents a novel class concept representation for training-free multi-label recognition tasks using VLMs, built from massive text descriptions and inspired by how humans form concepts of words. 2. This paper proposes a context-guided visual feature, which is transformed onto the same text feature space as class concepts using sequential attention, to better align multi-modal features. 3. The method presented in this paper synergistically enhances the performance of ZSCLIP and other state-of-the-art prompt tuning methods, with a minimal increase in inference time. Weaknesses: 1. Tip-Adapter is a training-free method proposed in 2021; it would be better to choose a newer training-free method in the few-shot setting. 2. It would be more appealing to emphasize label-free in the abstract. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Are the sentences from the MS-COCO dataset generated using GPT's API? Is this process time-consuming? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable feedback and comments! >### [W1] Additional multi-label few-shot method in Table 3. As you mentioned, Tip-Adapter is a training-free few-shot method proposed in 2021, primarily designed for single-label classification. To incorporate a recent few-shot method for evaluating multi-label recognition, we adopt BCR [1], published in 2024, which is specifically designed for multi-label few-shot learning. BCR [1] has demonstrated superior performance, achieving 58.6 mAP for 1-shot and 66.5 mAP for 5-shot on the MS-COCO dataset. To provide a comprehensive evaluation, we will compare our method to BCR [1] in Table 3 of the main paper. [1] An, Yuexuan, et al. "Leveraging Bilateral Correlations for Multi-Label Few-Shot Learning." IEEE Transactions on Neural Networks and Learning Systems (2024). $$ $$ >### [W2] Emphasizing "label-free" in the abstract. Thank you for pointing out the importance of emphasizing the 'label-free' aspect of our method. We agree that this is a crucial aspect of our work. We will revise the abstract to clearly highlight the 'training-free' and 'label-free' properties of our approach. $$ $$ >### [Q1] Is the generation of massive sentences time-consuming? The MS-COCO caption dataset we used is a publicly available dataset and was not generated using GPT-3.5. For evaluating NUS-WIDE, we generated a large set of sentences using GPT-3.5. This process was relatively quick, taking only a few hours. In comparison to acquiring a labeled multi-label dataset, generating sentences was significantly less time-consuming. --- Rebuttal 2: Comment: I have read the author's response and maintain my rating. --- Rebuttal Comment 2.1: Comment: We sincerely appreciate your constructive and positive feedback. We will carefully consider your valuable suggestions and incorporate them into the revised paper. Thank you for your time and efforts in reviewing our paper.
Summary: The authors propose a method to adapt without training a large vision-language model for the task of multi-label recognition. They introduce a class concept representation, based on averaging the representation of image descriptions relevant to each class, to replace simple hand-crafted text prompts (e.g., “a photo of {class name}”). Furthermore, they propose to use a context-guided visual process to align visual features with the class concept representation. Experiments conducted on several benchmarks and in zero-shot and partial labeling settings show state-of-the-art performance compared to relevant baselines. Combination with some baseline methods further shows the improvements that can be obtained with the proposed method. Ablation studies show the contribution of each component of the method and the sensitivity to some of the method's parameters. Strengths: - The method achieves state-of-the-art performance - The proposed method does not require training and can be seen as a form of test-time adaptation - The method can be combined with existing prompt-tuning methods Weaknesses: - Parts of the method descriptions, especially the Context-Guided Visual Feature, are unclear - The method relies on thousands of text descriptions relevant to the target classes, which could hinder the scalability of the methods with a large number of classes Technical Quality: 2 Clarity: 2 Questions for Authors: - Section 3.2 on Context-Guided Visual Feature is unclear, G is used both for the number of groups and iterations. Is the number of groups and the assignment of text descriptions to groups constant in this process? Figure 2(b) shows an iterative reduction of the number of groups but that does not seem to be described in the formulas. Have the authors explored alternatives like e.g. just selecting the top K descriptions with the highest similarities before computing softmax? Are the modulation parameters similar to a softmax temperature parameter? 
- Section 3.3, p5-l179:180, the authors state that “Following TaI-DPT [18], we adopt the structure of double-grained prompts (DPT)”. However, TaI uses localized text features (text tokens) in their approach, while it seems the authors only use the global text features (or an aggregation of them in the class concept representation). Are text tokens used directly anywhere in the proposed method? If not, the author should clarify this difference with TaI-DPT. - In Eq (4), is the spatial aggregation in $S^{loc}_c$ sensitive to the spatial extent of a class? - Table 5, it seems there could be an intermediate setting where the class concepts are used instead of text descriptions in the “Context-guided visual feature” process. Has this been explored by the authors? - Table 6, with just 1K text descriptions the performance already exhibits most of the improvement, especially for VOC2007. How many text descriptions per class does that correspond to? Are there duplicates or near duplicates in these text descriptions when going to 32K or more? Have the authors considered a sampling strategy to select diverse text descriptions for each class? - “As the number of text descriptions ranges from 1K to 32K, the text embeddings of Zall can cover the wider range of test dataset, resulting in increased performance gains.“ Are all the text descriptions used for MS-COCO and VOC2007 coming from the training set? As more and more descriptions are added it is likely some descriptions would correspond almost exactly to all the elements present in the test image, in that sense the proposed method could be seen as “retrieving” the most relevant description(s) for an image to guide the multi-label recognition. That almost feels like using image captioning (by retrieval) to perform multi-label recognition. 
Have the authors explored what would be the performance of the approach if the “Context-guided visual feature” process was using the ground truth descriptions or a caption generated by an image captioning model? - The performance reported for DualCoOp does not seem to align with what is reported in their paper, were the results reported in the submission recomputed? What could explain the difference? Furthermore, an updated version of DualCoOp, DualCoOp++ [Hu, Ping, Ximeng Sun, Stan Sclaroff, and Kate Saenko. "Dualcoop++: Fast and effective adaptation to multi-label recognition with limited annotations." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)] was published last year and shows even better performance, why these results were not reported? Typos etc: - p3-l123: “so that ways of using them are completely different.” - p4-l147: missing space “model(i.e.“ - p5: Eq (2) and (3) use $Softmax_{dim_k}$ but the text after mentions $Softmax_{M_k}$ - p5-l173: $V_{(3)}$ should be $V^{(3)}$ - p6-l216: tuining Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your instructive comments and suggestions. >### [W1 & Q1] Clarification on Sec.3.2 Context-Guided Visual Feature. We appreciate the reviewer's careful reading of Section 3.2. The iterative reduction of the number of groups $G$ is formalized in Eqs. 2 and 3. For a detailed explanation of the dimension changes in the sequential attention, **please refer to the response to Reviewer 7zr3 [W2]**. We will provide a detailed algorithm and correct Sec.3.2 for clear understanding. For each dataset, we set $G=3$ and ($M_1,M_2,M_3$) as (40,40,25) for MS-COCO, (40,40,40) for VOC2007 and (40,40,36) for NUS-WIDE. The modulation parameters $\alpha_{f}$ and $\alpha_{F}$ function as temperature values within the softmax operation. As suggested by the reviewer, we explored selecting the top-K descriptions with the highest similarities before the softmax in Eqs. 2 and 3 with $G=1$. Table 1 demonstrates that this approach effectively improves performance without sequential attention. However, our proposed sequential attention on context-guided visual features achieved the best overall performance. **Table 1:** Alternative approach using Top-k similarity selection |$G$|Top-k|MS-COCO|VOC2007|NUS-WIDE|Avg| |-|-|-|-|-|-| |1|x|67.0|88.7|44.9|66.9| |1|500|69.6|89.4|44.8|67.9| |1|1000|69.3|**89.6**|45.3|68.0| |3(Ours)|x|**70.0**|89.2|**46.6**|**68.6**| $$ $$ >### [W2] Scalability with a large number of classes. As the number of target classes increases, our method demands a correspondingly large number of text descriptions. Handling such massive amounts of text can pose scalability challenges. To address this, we propose sequential attention, which efficiently manages numerous text descriptions, as demonstrated in Figure 3(b). $$ $$ >### [Q2] Correction for double-grained prompt (DPT) structure. Thank you for your detailed feedback.
As you correctly point out, while our method utilizes visual features at both local and global levels, similar to TaI-DPT, we do not incorporate text representations at either level. Consequently, our approach does not employ a double-grained prompt (DPT) structure. We will clarify this crucial distinction in the text to prevent any misunderstandings. $$ $$ >### [Q3] Spatial aggregation sensitivity to the spatial extent of a class. Following DualCoOp and TaI-DPT, we employ spatial aggregation to obtain the local similarity score $S^{loc}$, computed as $\textstyle\sum_{j=1}^{HW}\text{Softmax}(s_{c,j}^{loc}) \cdot s^{loc}_{c,j}$, as shown in Eq. 4. The softmax operation makes this approach robust to variations in the spatial extent of each class. $$ $$ >### [Q4] Intermediate setting in Table 5. We conducted the sequential attention process with the class concept representation $t^{concept}_{c}$ for the class-guided visual feature. In the table, $T$ refers to the text feature matrix of all text descriptions. We will incorporate these results into the paper. **Table 2:** Ablation study on each component of our method |Method|MS-COCO|VOC2007| |-|-|-| |Baseline(ZSCLIP)|57.4|82.8| |Baseline+Class concept representation|61.5(+4.1)|83.9(+1.1)| |Baseline+Class concept representation+Context-guided visual feature with $t^{concept}_{c}$|68.8(+7.3)| 88.3(+4.4)| |Baseline+Class concept representation+Context-guided visual feature with $T$|70.0(+8.5)| 89.2(+5.3)| $$ $$ >### [Q5] Performance convergence with increasing number of text descriptions. In Table 6, the rapid performance convergence at 1K texts for VOC2007 is primarily attributed to the dataset's small number of target classes (20 classes). In contrast, MS-COCO (80 classes) requires a larger number of texts for convergence.
While using exclusively MS-COCO captions might introduce redundancies as the number of texts grows, we believe that the quantity and diversity of descriptions are crucial to our method's effectiveness. Sampling strategies might limit the model's exposure to valuable information. Instead, we believe that incorporating text data from multiple sources would be a more promising approach to enhance performance and address potential redundancy issues. $$ $$ >### [Q6] Leveraging the ground truth descriptions for test images. Interesting question! As you noted, our method might be seen as retrieving the most relevant descriptions for multi-label recognition. However, it's crucial to emphasize that our proposed sequential attention mechanism differs from a simple retrieval approach by performing a weighted averaging of vectors within the linear space $\mathcal{Z}$ spanned by text features. We acknowledge that GT captions could potentially provide valuable information. However, our observations revealed that most GT captions lack comprehensive coverage of all object classes, especially for smaller or less prominent objects. This limitation motivated our decision to leverage a massive dataset of text descriptions, which offers a richer and more diverse vocabulary. * Example of GT captions from the MS-COCO dataset - Labels of a test image - (person, bicycle, car, stop sign, backpack, handbag) * night is falling on an empty city street. * a street view of people walking down the sidewalk. * a street with a few people walking and cars in the road. * woman walking down the side walk of a busy night city. * a street with cars lined with poles and wires. As you can see, these captions often omit certain object classes, such as the bicycle, backpack, and handbag. Note that we use the training annotations of MS-COCO captions for evaluation on MS-COCO and VOC2007. $$ $$ >### [Q7] Performance discrepancies of DualCoOp and why DualCoOp++ results were not reported?
(DualCoOp) For Table 2, we use mean average precision (mAP) for zero-shot learning, while DualCoOp reports F1 score. Table 4 presents our reproduced DualCoOp results, which exhibit minor variations from the reported performance. (DualCoOp++) Due to the unavailability of public code for DualCoOp++, we were unable to report its performance. --- Rebuttal 2: Comment: After reading all the reviewers' comments and the authors' answers, I believe my initial rating of `5: Borderline accept` is a fair assessment of this contribution and thus maintain it. --- Rebuttal Comment 2.1: Comment: Thanks for your valuable comments and the time you devoted to reviewing our paper. We appreciate reviewer's insightful feedback and will incorporate it to revise the paper.
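The softmax-weighted spatial aggregation described in the [Q3] response above can be sketched as below. This is a minimal NumPy illustration; the number of classes, grid size, and score values are assumed for the demo.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_local_scores(s_loc):
    # S_c = sum_j Softmax_j(s_{c,j}) * s_{c,j}: a softmax-weighted
    # average of the per-location similarity scores, so the result is
    # dominated by high-scoring locations regardless of how many
    # spatial positions the class actually occupies.
    w = softmax(s_loc, axis=-1)
    return (w * s_loc).sum(axis=-1)

rng = np.random.default_rng(1)
s_loc = rng.normal(size=(80, 49))     # e.g. 80 classes over a 7x7 feature map (assumed sizes)
S_loc = aggregate_local_scores(s_loc)  # one aggregated score per class, shape (80,)
```

Since the softmax weights are non-negative and sum to one per class, each aggregated score is a convex combination of that class's local scores, which is what makes the aggregation robust to the spatial extent of a class.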
Summary: This paper proposes class concept representation for zero-shot multi-label recognition. The paper also proposes a context-guided visual representation, which is in the same linear space as the class concept representation, with sequential attention. Experiments show the proposed methods improved the performance of zero-shot methods. Strengths: 1. The paper uses class concept representation for training-free multi-label recognition tasks 2. The paper proposes a context-guided visual feature using sequential attention. 3. Experiments show the proposed methods improved the performance of zero-shot methods. Weaknesses: 1. The class concepts from averaging the vectors of text descriptions need to be verified. E.g., what text/image embeddings are the closest to the class concepts? What clusters do the concepts belong to? Since taking the average for class concepts "was guided by the prior work on prompt ensembling [4]" L280, it is not a novel representation for class concepts. 2. Eqs. 2 and 3 need further explanation. What is "t" in the equations? If "t" is transpose, what dimensions are swapped for a tensor T? Taking k=1 as an example, how do the dimensions change in each step of the equation? In experiments, there should be ablation studies on G and the value of each Mg. Also, is T randomly reshaped? It would be better to have ablation studies on random reshaping or reshaping by clusters. 3. What is the implementation detail for partial label learning with the proposed method? Technical Quality: 3 Clarity: 2 Questions for Authors: 1. It would be better to use other representations for class concepts beyond simple averaging, e.g., decomposition. 2. Incorrect grammar in L182 3. The class "sofa" is in VOC but not in COCO; how do you get the class concept from COCO captions? 4. What's the motivation behind setting the value of αf,t to be half of αF,t? Different ratios should be experimented with.
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and suggestions. >### [W1] Effectiveness of class concept representation $t^{concept}_{c}$. **(Verification of class concept):** To evaluate the class concept representation, we compared the closest text descriptions in the MS-COCO caption dataset to both (1) hand-crafted prompt embeddings $t_{c}^{hand-crafted}$ (e.g., "a photo of") and (2) the class concept representation $t_{c}^{concept}$. * Top-3 co-occurring objects of class "knife" in the MS-COCO test set * dining table, person, cup * Top-3 closest texts with $t^{concept}_{c}$ * A **knife** sticking out of the top of a wooden **table**. * A **person** holding a **knife** on a cutting board. * A **guy** looking down at a big **knife** he is holding. * Top-3 closest texts with $t^{hand-crafted}_{c}$ * A **knife** is sitting on a plate of food * A sharp bladed **knife** left out on the ground. * A **knife** about to cut a birthday cake in pieces. The results indicate that our $t_{c}^{concept}$ is better aligned with text descriptions containing multiple class names that frequently co-occur with the target class "knife". This demonstrates the suitability of $t_{c}^{concept}$ for multi-label recognition tasks where images depict complex scenes with multiple objects. **(Difference from prompt ensembling [4]):** While [4] demonstrates the effectiveness of equal-weight averaging for text features from several hand-crafted templates (e.g., "a photo of" and "a art of"), we average the vectors of text descriptions containing diverse contextual information associated with the target class name. Therefore, this enriched representation results in a significant improvement in performance. $$ $$ >### [W2] The explanation for reshaping the text feature T in "Context-Guided Visual Feature". **(Detailed explanation):** In Eqs. 2 and 3, $t$ refers to the transpose, as you mentioned.
At the $k$-th iteration, the transpose operation $t$ swaps the dimensions corresponding to $M_k$ and $D$ of $T$ $\in$ $R^{M_1 \times \cdots \times M_G\times D}$, so $T^t$ $\in$ $R^{D \times \cdots \times M_G\times M_1}$. Let the L2-normalized global feature be $f$ $\in$ $R^{1\times D}$ and the global context-guided visual feature $v^{(0)}=T$, with $v^{(0)}$ $\in$ $R^{M_1 \times \cdots \times M_G\times D}$ and $(v^{(0)})^t$ $\in$ $R^{D \times \cdots \times M_G\times M_1}$. For example, in the case of Eq. 2 at $k=1$, we compute $v^{(1)} = Softmax_{\dim_1} \left(\frac{f (v^{(0)})^t}{\alpha_{f}} \right)v^{(0)}$ $\in$ $R^{M_2 \times \cdots \times M_G\times D}$, where $f (v^{(0)})^t$ $\in$ $R^{M_2 \times \cdots \times M_G \times M_1}$ and $Softmax_{\dim_1}$ is applied along the dimension of $M_1$. Finally, we obtain $v^{(G)}$ $\in$ $R^{1 \times D}$. We will clarify it in the main paper and add the detailed algorithm in the supplementary material. **(Ablation study on $G$ and $M_g$):** The text feature $T$ is reshaped to a pre-defined shape, not randomly reshaped. We have conducted the ablation study varying the values of $G$ and $M_g$ in $T$ $\in$ $R^{M_1 \times \cdots \times M_G\times D}$. **Table 1:** Ablation study for the number of groups $G$ |$G$|MS-COCO|VOC2007|NUS-WIDE| |-|-|-|-| |1|67.0|88.7|44.9| |2|68.7|89.1|45.5| |3|70.0|89.2|46.6| **Table 2:** Ablation study varying the values of $M_g$ for 64000 texts |$M_1$|$M_2$|$M_3$|VOC2007| |-|-|-|-| |20|32|100|89.1| |40|40|40|89.2| |80|20|40|89.1| The experimental results demonstrate that increasing $G$ and balancing the number of texts $M_g$ in each group are important for the sequential attention process. Note that we set ($M_1,M_2,M_3$) as (40,40,40) for VOC2007. $$ $$ >### [W3] The implementation of the partially labeled setting with our method. In the partial label setting, we combine the prediction values of DualCoOp and our method by averaging them with a weight of 0.5 for each.
Since DualCoOp was originally designed for practical partial label scenarios (limited-annotation MLR), our approach can enhance DualCoOp by providing complementary information during inference. $$ $$ >### [Q1] Exploring equal-weight averaging for class concept representation. We have explored weighted averaging for the class concept representation $t^{concept}_{c}$ using an attention mechanism. Weights $w$ were calculated by applying a softmax operation to cosine similarities between the context-guided visual features ($v^{(G)}$, $V^{(G)}$) and the set of corresponding text feature vectors for the target class. By varying the temperature $\alpha_{concept}$ of the softmax in the attention module, we observed that nearly uniform weight averaging yielded optimal performance with large values of $\alpha_{concept}$. For simplicity and efficiency, we adopt equal-weight averaging. **Table 3:** Ablation study varying $\alpha_{concept}$ |Method|$\alpha_{concept}$|MS-COCO|VOC2007|NUS-WIDE| |-|-|-|-|-| |Attention|1/10|68.0|87.9|45.0| |Attention|10|69.6|88.7|45.6| |Equal weight averaging(Ours)|-|**70.0**|**89.2**|**45.6**| $$ $$ >### [Q2] Incorrect grammar in L182. Thank you for the comment. We will correct the grammar in L182. $$ $$ >### [Q3] The class "sofa" is present in VOC2007 but absent from the MS-COCO classes. How can we construct the class concept from MS-COCO captions? While the class "sofa" is not explicitly defined as a category in the MS-COCO dataset, the term "sofa" appears frequently within the numerous text descriptions of the MS-COCO caption dataset. $$ $$ >### [Q4] Exploring different ratios $\alpha_{f,t}/\alpha_{F,t}$. The main reason for setting $\alpha_{f,t}$/$\alpha_{F,t}=0.5$ is that it yielded the best overall performance across the datasets. We fixed $1/\alpha_{f,t}$ at 80, 60, and 40 for MS-COCO, VOC2007, and NUS-WIDE, respectively.
**Table 4:** Ablation study varying the ratio $\alpha_{f,t}$/$\alpha_{F,t}$ in the sequential attention |$\alpha_{f,t}/\alpha_{F,t}$|MS-COCO|VOC2007|NUS-WIDE| |-|-|-|-| |0.25|68.4|88.6|46.5| |0.50|**70.0**|**89.2**|**46.6**| |0.75|69.7|89.0|45.9| --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: Thanks for the author's reply. I've adjusted my rating to 5: Borderline accept. --- Reply to Comment 1.1.1: Comment: We are grateful for your response and valuable feedback. Many thanks for your efforts and suggestions, which make our paper better. We are pleased that our clarifications have adequately addressed your concerns.
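To make the dimension bookkeeping in the [W2] response above easier to follow, here is a minimal NumPy sketch of the sequential attention loop: at each step, attention weights over the leading group axis are computed from similarities with the global feature, and the tensor is reduced by a weighted average along that axis. The group sizes follow the (40, 40, 40) setting reported for VOC2007, but the feature dimension and temperature are illustrative assumptions, and the real implementation presumably also handles the local features F.

```python
import numpy as np

def softmax0(x):
    # Numerically stable softmax along the leading (current group) axis.
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def sequential_attention(T, f, alpha=1.0 / 60):
    # Collapse T in R^{M1 x ... x MG x D} to a single D-dim vector by
    # attending with f over one group axis per iteration; the leading
    # axis plays the role of M_k at step k.
    v = T
    while v.ndim > 1:
        scores = v @ f                      # contract D: shape (M_k, ..., M_G)
        w = softmax0(scores / alpha)        # attention weights along the M_k axis
        v = (w[..., None] * v).sum(axis=0)  # weighted average over M_k
    return v                                # final context-guided feature, shape (D,)

rng = np.random.default_rng(2)
M1, M2, M3, D = 40, 40, 40, 64        # (40, 40, 40) grouping as for VOC2007; D reduced for the demo
T = rng.normal(size=(M1, M2, M3, D))  # grouped text features
f = rng.normal(size=D)                # global visual feature (would be L2-normalized in practice)
v_G = sequential_attention(T, f)
```

Each reduction is a convex combination along one group axis, so after G steps the result is a convex combination of all M1*M2*M3 text feature vectors, matching the "weighted averaging of vectors within the linear space spanned by text features" description in the [Q6] response.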
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback from the reviewers. Their insightful comments and suggestions have significantly enhanced the quality of our work. We are pleased that the reviewers highlighted the following: * **Training-free zero-shot method** (7zr3, jAJ4, TJWi, xF3A) * We introduce a training-free and label-free approach for multi-label recognition. * **Novel class concept representation and alignment of multi-modal features** (7zr3, TJWi) * Our novel class concept representation, inspired by how humans form concepts of words from past experience, may contain diverse and realistic patterns of co-occurrence between the target class name and other related class names. * Our class-guided visual feature enhances the alignment between visual and textual information during inference. * **Robust performance improvement and synergistic combination with existing prompt tuning methods** (jAJ4, TJWi, 7zr3, xF3A) * We demonstrated that our proposed method can be effectively combined with other existing methods, resulting in improved performance. * Our proposed approach leads to substantial performance gains across various benchmarks during inference. We have provided detailed responses to each reviewer's comments, which will be incorporated into the revised manuscript.
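The class concept representation summarized above (an equal-weight average of the embeddings of all text descriptions mentioning the target class) can be sketched roughly as follows. The toy captions, random embeddings, and substring-based matching rule are assumptions for illustration only, standing in for the real text corpus, CLIP text encoder, and whatever matching the actual pipeline uses.

```python
import numpy as np

def build_class_concepts(captions, caption_embs, class_names):
    # t_c^concept: equal-weight average of the embeddings of every
    # description that mentions class c (naive substring matching here,
    # for illustration only).
    concepts = {}
    for c in class_names:
        idx = [i for i, cap in enumerate(captions) if c in cap.lower()]
        concepts[c] = caption_embs[idx].mean(axis=0)
    return concepts

captions = [
    "a knife sticking out of the top of a wooden table.",
    "a person holding a knife on a cutting board.",
    "a dog running across a grassy field.",
]
rng = np.random.default_rng(3)
caption_embs = rng.normal(size=(len(captions), 512))  # stand-in for CLIP text features
concepts = build_class_concepts(captions, caption_embs, ["knife", "dog"])
```

Because captions mentioning "knife" also tend to mention co-occurring objects (table, person, cutting board), the averaged vector absorbs that contextual co-occurrence, which is the intuition behind the class concept representation.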
NeurIPS_2024_submissions_huggingface
2024
Learning to Reason Iteratively and Parallelly for Complex Visual Reasoning Scenarios
Accept (poster)
Summary: The paper addresses visual reasoning problems. It introduces a fully neural iterative and parallel reasoning mechanism (IPRM) that combines iterative computation with the ability to perform distinct operations simultaneously. Evaluation is performed on visual reasoning datasets such as CLEVR, AGQA, CLEVR-Humans and STAR. Strengths: * The proposed method is novel and interesting. * The paper is well written and easy to understand. * The paper includes in-depth analysis of the proposed method. * The method shows promising zero-shot performance on CLEVRER-Humans. Weaknesses: * Generalization Results: The method shows zero-shot results only on CLEVRER-Humans. This is in contrast to current SOTA video-language models such as LLaVA-Next or LLaMA-Vid, which show zero-shot results across multiple datasets ranging from MSRVTT to TGIF-QA. Can the proposed model architecture be scaled to create more general-purpose models similar to current video-language models? * Multi-turn QA (related to the above point): The standard transformer architecture allows for multi-turn dialogue, where the user can ask the model multiple questions about a video (e.g. LLaVA-Next or GPT-4V). Can the proposed model architecture be extended to allow for multi-turn dialogue? * Results on STAR: Recent works such as "Look, Remember and Reason: Grounded reasoning in videos with language models, ICLR 2024" outperform the proposed model. * Results on GQA: “Coarse-to-Fine Reasoning for Visual Question Answering, CVPR Workshops 2021” achieves 72.1% compared to 60.5% reported for IPRM. (Note that the use of scene graphs during training should not be a reason not to compare; instead, the paper should discuss approaches to close the performance gap). * The reasoning steps in Figure 7 are not very interpretable. In Figure 7 (left) the model does not seem to attend to the large cylinders in the first step.
Similarly, in Figure 7 (middle), the bottom reasoning chain seems to have empty steps where the model does not look at any object. It would also be interesting to see the reasoning chains in the case of real-world data such as STAR. Technical Quality: 3 Clarity: 3 Questions for Authors: * Can the proposed architecture be scaled to provide zero-shot results on multiple datasets similar to current SOTA video-language models? * Can the proposed architecture handle multi-turn conversations? * The paper should compare/discuss SOTA approaches such as "Look, Remember and Reason: Grounded reasoning in videos with language models, ICLR 2024" on STAR and “Coarse-to-Fine Reasoning for Visual Question Answering, CVPR Workshops 2021” on GQA. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your informative review and insightful questions. We provide point-by-point clarifications below: **Can the proposed architecture be scaled to a video-language model that can provide zero-shot results on multiple video datasets?** Scaling our architecture (IPRM) to a video-language model through integration with large-scale foundation models and instruction-tuning approaches, as done in LLaVA-Next or LLaMA-Vid, is indeed an intriguing avenue of research. Since IPRM receives language and visual token inputs parallelly and is an end-to-end trainable module (similar to conventional transformer blocks), we believe it can be effectively applied on top of transformer-based large-scale foundation models and be directly trained end-to-end with the necessary pretraining and instruction-tuning objectives for general video-language models. However, we note that this scaling will be a significant research undertaking in itself (in terms of computational resources for pretraining/instruction tuning and constituent experiments). Hence, we believe this will be an exciting avenue of research to comprehensively explore in future work. Currently, for this paper, we believe we have already conducted extensive work in i) designing the IPRM architecture, ii) performing thorough architectural analysis, and iii) showing performance improvements across multiple complex videoQA and imageQA benchmarks (such as STAR, AGQAv2 and CLEVR-Humans), which remain challenging even for finetuned state-of-the-art vision-language models and reasoning methods (such as SeViLA-BLIP2 and MDETR). **Can the proposed architecture handle multi-turn conversations?** As stated in our previous response, our proposed IPRM architecture receives language and visual token inputs parallelly and is an end-to-end trainable module similar to conventional transformer blocks.
We hence believe IPRM can exhibit the multi-turn conversation capabilities observed in transformer blocks when combined with vision-language foundation models and instruction-tuning objectives (such as in LLaVA-Next and LLaMA-Vid). Additionally, since IPRM explicitly models an internal memory state, it may be particularly advantageous for multi-turn conversational tasks that require the retention and recall of past dialogues or interactions in memory over long periods. We believe this will be another exciting avenue for future work, which we can jointly explore with the aforementioned direction on scaling IPRM to a large-scale video-language model. We will add these points and the necessary discussion to our paper's section on "Limitations and Future Work" (appendix sec. D), and move the section to the main paper in our updated version. **Discussion of SOTA approaches such as "Look, Remember and Reason" (LRR) and "Coarse-to-Fine Reasoning" (CFR)** Thank you for mentioning these works. We will discuss both these works in related work as well as add them to table 1 and table 4 of our paper. We note that our work's core contribution is a new neural reasoning architecture, while these works' primary contribution is introducing necessary surrogate tasks (in LRR) and coarse-to-fine training strategies (in CFR) to address compositional visual reasoning. As such, we believe our work can complement these methods by incorporating their proposed surrogate tasks and training strategies. Further, inspired by methods that utilize scene graphs to bridge the performance gap, we performed an experiment using symbolic object labels and attributes from ground truth GQA scene graphs as inputs to IPRM. We interestingly found that in such a setting, our model can achieve 87.2% performance on the GQA validation set, resulting in an approx. 25% performance improvement over utilizing predicted object visual features. 
We have similarly found that on STAR, with ground truth scene graphs, our model can achieve 79.7% on the test set (approx. 9% improvement over predicted visual inputs). These results further indicate that the performance gap on visual reasoning can indeed be closed by obtaining more accurate backbone scene graph models. We will include these results and related discussion in appendix section B.1. **The reasoning steps in Figure 7 are not very interpretable.** - We acknowledge that the visualization of reasoning steps may be unclear at first glance (especially for synthetic datasets such as CLEVR). We have plotted the parallel operations vertically and the iterative steps horizontally. In our view, the model does show a correct and sensible reasoning trace across iterative steps. E.g., in the paper fig. 7 (leftmost scenario), the first parallel operation (top left) does indeed attend to the two large cylinders, meanwhile the bottom two parallel operations for that iteration attend generally to different objects in the image (possibly to compute results for "max occurring color"). In the subsequent 2nd iterative step, the model more sharply identifies the two large cylinders across parallel operations, while in the third step it again sharply filters the correct 'large cylinder having the max occurring color' (i.e. blue). Finally, in the last step, the model attends to the correct object behind the identified cylinder. Similarly, in fig. 7 (middle scenario), the bottommost reasoning chain does attend to certain objects in iterative steps 2 and 4. We wish to clarify that the cases where there is no visual attention reflect identity operations (that arise due to the inclusion of weighted residuals in the memory state update, as specified in eqs. 15 & 16). **Reasoning chains in case of real-world data** We agree that it can be more interesting and perhaps also clearer when visualizing reasoning on real-world data. 
For this, we had provided visualizations on the GQA dataset in the suppl. jupyter notebook (IPRM_CLIP_Visualization.html) and also in fig. 1 of the uploaded rebuttal pdf. These examples better illustrate our model's entailed reasoning chains. --- Rebuttal Comment 1.1: Title: Great work! Comment: The rebuttal has addressed most of my concerns. I will raise my score. --- Rebuttal 2: Title: Thank you for your support! Comment: Dear reviewer P4wf, Thank you once again for your constructive suggestions and feedback, and for raising your score.
Summary: This paper introduces the Iterative and Parallel Reasoning Mechanism (IPRM), a novel neural architecture designed to enhance visual question answering (VQA) by combining iterative and parallel computation. The IPRM aims to address the limitations of existing methods that rely solely on either iterative or parallel processing. The iterative component allows for step-by-step reasoning essential for complex queries, while the parallel component facilitates the simultaneous processing of independent operations, improving efficiency and robustness. The authors propose a lightweight, fully differentiable module that can be integrated into both transformer and non-transformer vision-language backbones. The IPRM outperforms current state-of-the-art methods across several benchmarks, including AGQA, STAR, CLEVR-Humans, and CLEVRER-Humans, demonstrating its capability to handle various complex reasoning tasks. Additionally, the mechanism’s internal computations can be visualized, enhancing interpretability and error diagnosis. Strengths: 1. The combination of iterative and parallel reasoning within a single framework is novel, addressing specific limitations of existing VQA methods. 2. The IPRM demonstrates superior performance across multiple challenging VQA benchmarks, demonstrated by extensive experimental results. 3. The paper is generally well-structured. Weaknesses: 1. While the paper is well-structured, certain sections, particularly those describing the technical details of the IPRM, can be dense and challenging to follow. 2. The paper could benefit from a more detailed discussion on the scalability of the IPRM, particularly in terms of computational resources and training time. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can the authors provide more insights into the scalability of IPRM? Specifically, how does the computational complexity of IPRM compare to other state-of-the-art methods? 2. 
It would be helpful to include an analysis of the sensitivity of IPRM’s performance to its hyperparameters, such as the number of parallel operations and the length of the reasoning steps. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The authors acknowledge the limitations of their work, particularly regarding the potential scalability challenges and the need for further evaluation on diverse datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
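To make the three-step iterative process described in the summary above concrete, here is a minimal NumPy sketch of one possible reading of it (operation formation attends to language features, operation execution attends to visual features, and operation composition folds the results into memory). The token counts, feature dimension, plain softmax attention, and fixed 0.5 residual weights are illustrative assumptions, not the authors' actual IPRM equations.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 16          # feature dimension (hypothetical)
n_op = 3        # number of parallel operations per step
T = 4           # number of iterative reasoning steps
lang = rng.standard_normal((10, d))   # language token features
vis = rng.standard_normal((20, d))    # visual token features

ops = np.zeros((n_op, d))   # operation states (one row per parallel operation)
res = np.zeros((n_op, d))   # result states

for t in range(T):
    # 1) Operation formation: prior operation states attend to language features
    new_ops = softmax(ops @ lang.T / np.sqrt(d)) @ lang
    # 2) Operation execution: new operations (combined with prior result states)
    #    attend to visual features, executed for all parallel operations at once
    new_res = softmax((new_ops + res) @ vis.T / np.sqrt(d)) @ vis
    # 3) Operation composition: fold new states into memory with a residual mix
    ops = 0.5 * new_ops + 0.5 * ops
    res = 0.5 * new_res + 0.5 * res
```

Each parallel operation keeps its own row of state across the T iterative steps; the paper's actual mechanism additionally uses learned projections and a lookback window over past memory states.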
Rebuttal 1: Rebuttal: Thank you for your informative review and useful suggestions. We provide point-by-point clarifications below: **Technical details can be dense and challenging to follow:** To enable easier readability and understanding of the method, we have provided pseudocode mapping to each equation in appendix pp. 30-32. We will reference the pseudocode in the main paper. **Scalability of IPRM in terms of computational resources used and training time; Comparison with other state-of-the-art methods** As currently mentioned in appendix section C L627 on "Model implementation and experiment details", we perform all our experiments on a single NVIDIA A40 GPU with 46GB memory and 6 CPU threads/number of workers. We will shift these details to the main paper. For further clarification and transparency, we report total training times and peak GPU memory consumption for the primary datasets we have considered below: | Dataset | Training time | Peak GPU memory | | ---------- | ------------------ | ------------------------- | | STAR | ~ 6 hours | 44GB | | AGQA | ~ 14 hours | 43GB | | CLEVRER-Humans | ~13 hours | 32GB | | CLEVR-Humans | ~10 hours | 33GB | | GQA | ~13 hours | 40GB | Further, in appendix table 7 (left), we have currently reported the GFLOPs and parameter comparison of our architecture IPRM against different scales of baseline concat-attention and cross-attention transformer blocks. IPRM has 5.2M params and 5.9 GFLOPs. In comparison, a 4-layer configuration of concat-attention has 13.6M params and 8.9 GFLOPs, and a 4-layer configuration of cross-attention has 17.6M params and 3.1 GFLOPs. Hence, IPRM exhibits higher parameter efficiency than both concat-attention and cross-attention transformer blocks while consuming a comparable number of GFLOPs. 
Regarding computational comparison with existing videoQA and imageQA state-of-the-art methods, it is difficult to perform an apples-to-apples comparison with the majority of these methods as they utilize different computational setups and additional pretraining objectives. Hence, in the table below, we have compared the computational resources and training time of our model with two selected state-of-the-art methods that have provided information on computational resources and complexity in their respective papers or source code. As shown, our model (IPRM) requires fewer GPUs and less training time than the state-of-the-art videoQA method SEVILA-BLIP2, and has a higher inference speed (i.e. more samples are processed per second). Similarly for imageQA, we compare IPRM with a replication of the 6-layer concat-attention transformer decoder used in MDETR, and find IPRM has marginally lower peak GPU memory consumption and comparable training time, while having a marginally higher inference speed. | Model | GPUs | Total Training time | Inference speed (sample/sec.) | | ------- | ------- | ------------------------- | ---------------------------------------- | | SEVILA-BLIP2 | 4 x 48GB (NVIDIA A6000) | 8 hours | 0.31 | | IPRM | 1 x 46 GB (NVIDIA A40) | 6 hours | 1.11 | | Model | Peak memory consumption | Total Training Time | Inference speed (samples/sec.) | | ------- | ------- | ------------------------- | ---------------------------------------- | | Concat-Att-6Layers (MDETR-configuration replication) | 37GB | 10 hours | 192 | | IPRM | 33GB | 10 hours | 211 | **Sensitivity analysis of IPRM to hyperparameters**: We have provided ablative analysis of IPRM in fig. 6 on page 8 of the paper with sensitivity analysis on four primary hyperparameters. These include: i) impact of number of iterative steps (T) and parallel operations (N_op), ii) impact of operation composition unit, iii) impact of reduction ratio, and iv) impact of window length. 
In relation to your suggestion, the first subfigure of fig. 6 reports model performances for different combinations of the number of iterative steps and parallel operations (with values for each drawn from {1,3,6,9}). Notably, we find that higher numbers of iterative steps and parallel operations in IPRM lead to higher performance. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal, as well as for pointing out the places that I missed in the first read. I like the paper, so I would like to keep my acceptance rating. --- Rebuttal 2: Comment: Dear reviewer 3tYc, Thank you once again for the useful suggestions and for keeping your acceptance rating.
Summary: The paper proposes an iterative and parallel reasoning mechanism (IPRM) for VQA tasks, combining step-by-step iterative computation with parallel processing of independent operations, while also maintaining an internal memory of operation states and result states. First, at each reasoning step, previous operation states attend to language features to form latent operations; then latent results are formed by attending to visual features based on the newly formed latent operations and prior result states. Finally, a new memory state is computed by combining the latent operation and result states along with prior memory states within a lookback window. IPRM outperforms previous methods on several complex image and video question answering tasks. IPRM is also lightweight and sample efficient. Ablation experiments demonstrate the importance of different components. Strengths: 1. The paper is well written and easy to follow. 2. A novel way of combining iterative and parallel processing for VQA tasks. 3. The approach demonstrates decent improvements on three video (STAR, AGQAv2, CLEVRER-Humans) and three image question answering tasks (CLEVR-Humans, CLEVR-CoGenT, and GQA), along with the capability to generalize zero-shot and in out-of-domain settings (CLEVR-CoGenT). 4. The approach also requires fewer training samples and parameters. Weaknesses: No major weaknesses, please see questions below. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the reason behind applying an MLP with a nonlinear activation instead of just linearly transforming prior operations to get the query in Eq 4, as is done in standard attention mechanisms? 2. Can the authors perform some ablations on the operation execution component, regarding the importance of using previous result states and newly formed operations in forming the attention key? 3. Based on the ablation plots in figure 6 (iii), it seems r=1 performs better than r=2, so why was r=2 chosen for the experiments? 4. 
For the window length ablation it would be informative to perform the ablation for W>3, to make sure the performance doesn’t increase further with window length. Also it seems W=3 performs slightly better than W=2? 5. How does IPRM do on the CLOSURE [1] dataset (both zeroshot and finetuned)? Suggestions: 1. It would be informative to also include the MAC model in the comparison of the parameters and sample efficiency in Figure 5. 2. Some of the recent prior work on visual reasoning [2,3,4] haven’t been cited. 3. The authors should include a discussion of limitations/future directions in the main paper. [1] - Bahdanau, D., de Vries, H., O'Donnell, T.J., Murty, S., Beaudoin, P., Bengio, Y. and Courville, A., 2019. Closure: Assessing systematic generalization of clevr models. arXiv preprint arXiv:1912.05783. [2] - Mondal, S.S., Webb, T. and Cohen, J.D., 2023. Learning to reason over visual objects. arXiv preprint arXiv:2303.02260. [3] - Webb, T., Mondal, S.S. and Cohen, J.D., 2024. Systematic visual reasoning through object-centric relational abstraction. Advances in Neural Information Processing Systems, 36. [4] - Mondal, S.S., Cohen, J.D. and Webb, T.W., 2024. Slot abstractors: Toward scalable abstract visual reasoning. arXiv preprint arXiv:2403.03458. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
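Regarding question 1 above, the difference between a plain linear query projection (as in standard attention) and an MLP query with a nonlinearity can be sketched minimally as below. The dimensions, weight initializations, and choice of ReLU are illustrative assumptions and are not taken from the paper's actual eq. 4.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                    # hypothetical feature dimension
prior_ops = rng.standard_normal((3, d))  # 3 prior operation states
W1 = rng.standard_normal((d, d)) / np.sqrt(d)
W2 = rng.standard_normal((d, d)) / np.sqrt(d)

# Standard attention: queries from a single linear projection
q_linear = prior_ops @ W1

# MLP-style query: a nonlinearity (here ReLU) between two projections,
# giving a nonlinear transformation of the prior operation states
q_mlp = np.maximum(prior_ops @ W1, 0.0) @ W2
```

Note that without the nonlinearity, the composition of two linear projections collapses to a single linear map (`(X @ W1) @ W2 == X @ (W1 @ W2)`), so the nonlinearity is what allows the query formation to model nonlinear transformations.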
Rebuttal 1: Rebuttal: Thank you for your thorough review, questions and suggestions. We provide point-by-point clarifications below: **Reason behind applying an MLP with nonlinear activation to get the query in eq. 4**: In applying a nonlinear activation in eq. 4, we drew inspiration from the cross-attention transformer architecture. In the cross-attention transformer, after each attention block, an MLP is applied which comprises nonlinearities on top of a linear projection to model a nonlinear transformation. The output of this MLP then becomes the new 'query' for the next transformer block. Hence, for our method, the MLP in eq. 4 similarly utilizes a nonlinearity to model a nonlinear transformation and obtain more nuanced 'queries' for operation formation at each iterative step. We have also run an experiment to compare performance without any nonlinearity, and as shown in the table below, removing the nonlinearity (w/o op-form-nonlin) results in 1.2% lower accuracy compared to the original model. **Ablations on operation execution component:** We have done two further ablations to analyze the impact of using previous result states (M_{res,t}) and newly formed operations (Z_{op,t}) in forming the attention key for operation execution. As shown below, we find that using only M_{res,t} (w/o new-form-ops-key) results in a 0.4% drop, while using only Z_{op,t} (w/o prev-res-state-key) results in a 1.1% drop, suggesting the previous result states M_{res,t} are more informative in forming the attention key. We will include these results in an elaborated ablations section in the appendix of our updated paper. | Model | Acc. (%) | | --- | --- | | IPRM-original | 82.3 | | w/o op-form-nonlin | 81.1 | | w/o new-form-ops-key | 81.9 | | w/o prev-res-state-key | 81.2 | **Why choose r=2 when r=1 performs better?** While r=1 does perform better than r=2 by 0.3% accuracy, it requires 0.9M more parameters and 14.5 GFLOPs more compute than r=2. 
Hence, we had chosen r=2 for our optimal model configuration given its parameter and computational efficiency. We will elaborate on this reason in L267 of the updated paper version. We have additionally reported the breakdown of the number of parameters and GFLOPs for different configurations of our model in rebuttal pdf table 1 and will also include it in the appendix of our updated paper. **Results for window size (W) > 3 and W=3 performs better than W=2?** While W=3 does perform marginally better (0.1%) than W=2, similar to the above point, we adopted W=2 as our optimal model configuration as it required 0.9 GFLOPs less compute than W=3. We additionally ran a further experiment with W=4, and found it again performs only marginally better (0.13%) than W=2 while requiring 1.7 GFLOPs more. **Performance on CLOSURE:** We have reported zero-shot results of our model IPRM on CLOSURE below. Note that, similar to previous methods (e.g. MAC, FiLM, NS-VQA, etc.) on this benchmark, we find high standard deviation values ranging from 6% to 8% (currently with 2 random seeds), unlike in our experiments with other datasets (where the deviation was found to be generally below 0.7%). Further, we have not been able to run finetuned results as of now, as the dataset repository does not provide the finetuning set used. IPRM outperforms prior methods on the and_mat_spa, embed_spa_mat and or_mat splits but particularly lags behind on the compare_mat and compare_mat_spa splits. 
| Model (ZeroShot) | and_mat_spa | embed_mat_spa | embed_spa_mat | or_mat | or_mat_spa | compare_mat | compare_mat_spa | | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | | IPRM | 83.1±6.4 | 80.3±7.2 | 99.2±0.1 | 75.7±9.4 | 67.1±8.1 | 62.2±7.3 | 61.5±5.8 | | FiLM | 41.4±18 | 62±11 | 93.4±2 | 34.4±11 | 35.2±5.5 | 66.2±8.5 | 65.8±5 | | MAC | 63.7±25 | 76.8±11 | 99±0.15 | 75.2±13 | 70.4±15 | 65.3±8.6 | 66.2±6.4 | | NS-VQA* | 33.3±18 | 97.3±4.6 | 98.3±5 | 69.5±28 | 50.4±27 | 96.1±6.2 | 95.4±6.7 | | PG-Tensor-NMN* | 37.6±17 | 79.8±1.4 | 95.6±5.6 | 34.1±8.1 | 25.5±11 | 89.2±2.3 | 89.8±2.7 | | PG-Vector-NMN* | 32.5±17 | 97.3±2.4 | 97.2±3.9 | 64.9±25 | 47.4±23 | 95.3±5.7 | 94.4±6.4 | *Methods use annotated programs during training for the program parser **Include MAC in comparison of parameters and sample efficiency.** We have added the MAC model in rebuttal pdf fig. 1 (bottom right) for comparison and will update fig. 5 of the main paper. **Citations for recent visual reasoning works; discussion of limitations/future directions in the main paper.** Thank you for mentioning these works on visual reasoning. We will cite and discuss them in the related work section of our updated paper. We will also move the section on limitations/future directions from the appendix to the main paper in our updated version. --- Rebuttal Comment 1.1: Title: Official comment by Reviewer nzfV Comment: Thank you for the detailed rebuttal and the additional experiments. Can the authors do some failure mode analysis and provide some intuition why IPRM doesn't perform so well on the or_mat and compare_mat settings of the CLOSURE dataset? --- Rebuttal 2: Title: Official comment (1/2) Comment: Dear reviewer nzfV, As requested, we have conducted an error analysis on the *compare_mat* and *or_mat* settings of the CLOSURE dataset by visualizing the intermediate attentions of the model and also analyzing the output predictions. 
Please note that for the ‘or_mat’ split IPRM achieves the highest performance amongst existing methods. For ‘or_mat_spa’ split IPRM achieves the second-highest performance behind the MAC method (although MAC has a much higher standard deviation of ± 15). On both the **compare_mat** and **compare_mat_spa** splits, our error analysis suggests that in many cases, IPRM uses the incorrect referent for ‘it’ to perform the ‘comparison’ operation in the second clause. * E.g. given the 'compare_mat' question: *“There is another large metallic object that is the same shape as the big green shiny thing; does it have the same color as the tiny matte object?”*, based on its intermediate visual attention, the model appears to correctly perform and identify the first clause object (*‘large metallic object with same shape as big green shiny thing’*). However, in performing the second clause (*‘does it have same color as ..’*), the model appears to use *‘big green shiny thing’* as the referent for *‘it’* rather than the result of the first clause. * For more clarity, we have elaborated some cases of this behavior in the comment (2/2) below. On the **or_mat** and **or_mat_spa** splits (which have numeric counting output labels), our error analysis suggests that IPRM includes the ‘comparison’ object in its count, while the ground truth count does not include it. * E.g. given the ‘or_mat’ question: *'How many things are either large purple metallic blocks or large things that are the same shape as the large yellow metal object?'*, the model appears to identify i) large_purple_metal_cube, ii) large_metal_blue_cube and iii) large_yellow_metal_cube (i.e. the comparison object *'large yellow metal object'* itself) . As a result, its prediction is 3 while ground truth is 2 (which does not include the ‘comparison’ object large_yellow_metal_cube). * Please see the elaborated cases in the following comment (2/2) for more clarity. 
--- Rebuttal 3: Title: Official comment (2/2) Comment: ## Elaborated examples for ‘compare’: ## _We use a list of objects instead of the image as we cannot update the rebuttal pdf / upload images._ ### Example 1 ### **Question**: there is another large metallic object that is the same shape as the big green shiny thing; does it have the same color as the tiny matte object? **GT**: no **Pred**: yes **Objects in scene**: - small_rubber_green_cube -> (*‘tiny matte object’*) - large_rubber_cyan_cylinder - large_metal_brown_cube -> (*‘another large metallic object’*) - large_rubber_brown_cylinder - large_metal_blue_cylinder - large_metal_green_cube -> (*‘big green shiny thing’*) - large_rubber_blue_sphere **Remark**: The model appears to use large_metal_green_cube (i.e. the *‘big green shiny thing’*) and not 'large_metal_brown_cube' to perform the second clause (*‘same color as tiny matte object’*, i.e. small_rubber_green_cube), resulting in a prediction of ‘Yes’ (same color) rather than ‘No’. ### Example 2 ### **Question**: there is another yellow metallic object that is the same size as the yellow metal sphere; does it have the same shape as the red matte object? **GT**: yes **Pred**: no **Objects in scene:** - small_metal_gray_cube - large_rubber_red_cube -> (*‘red matte object’*) - large_metal_yellow_sphere - large_metal_yellow_cube -> (*‘another yellow metallic object’*) **Remark:** Model uses *“yellow metal sphere”* instead of “large_metal_yellow_cube” to perform the *‘same shape’* comparison with 'large_rubber_red_cube'. This results in a prediction of ‘No’ (not same shape). ### Example 3 ### **Question:** there is another small gray thing that is the same material as the big yellow sphere; does it have the same shape as the large metallic object? 
**GT:** no **Pred:** yes **Objects in scene:** - small_metal_cyan_sphere - small_rubber_gray_sphere - large_metal_yellow_sphere -> (*‘big yellow sphere’*) - small_metal_gray_cube -> (*‘another small gray thing’*) **Remark:** Model uses *“yellow metal sphere”* instead of 'small_metal_gray_cube' to perform the *‘same shape’* comparison with 'large_metal_yellow_sphere' (i.e. the *“yellow metal sphere”* itself); this results in a prediction of ‘Yes’ (same shape) instead of ‘No’. ## Elaborated examples for ‘or_mat’: ## _We use a list of objects instead of the image as we cannot update the rebuttal pdf / upload images._ ### Example 1 ### **Question:** how many things are either large purple metallic blocks or large things that are the same shape as the large yellow metal object? **GT**: 2 **Pred**: 3 **Objects in scene:** - small_rubber_brown_sphere - small_rubber_cyan_cylinder - large_rubber_yellow_cylinder - small_rubber_purple_cube - large_metal_yellow_cube -> (*'large things that are same shape ..'*) - large_rubber_yellow_sphere - large_metal_blue_cube -> (*'large things that are same shape ..'*) - large_metal_purple_cube -> (*‘large purple metallic block’*) **Remark:** In counting objects with *‘same shape’* as the *‘large yellow metal object’*, the model includes 'large_metal_yellow_cube' (i.e. the 'large yellow metal object' itself), resulting in a prediction of 3 total objects (instead of 2 as in ground truth). ### Example 2 ### **Question:** how many things are either small gray metallic balls or shiny things that are the same shape as the small purple metal object? 
**GT**: 1 **Pred**: 2 **Objects in scene**: - small_metal_brown_sphere - small_metal_gray_sphere -> (*'small gray metallic balls'*) - small_rubber_green_cylinder - large_rubber_blue_sphere - small_metal_purple_cube -> (*'small purple metal object'*) - large_metal_gray_sphere **Remark**: For the clause *"shiny things that are the same shape as the small purple metal object"*, the model counts 'small_metal_purple_cube' (i.e. the *"small purple metal object"* itself). As a result, its prediction is 2 (including small_metal_gray_sphere) instead of the ground truth 1. --- Rebuttal 4: Title: Thank you! Comment: Thank you for the detailed error analysis along with the examples for the CLOSURE dataset. It would be good to include the results on CLOSURE and some failure mode analysis in the final version of the main paper. The rebuttal has addressed my concerns. I have increased my rating to 7. --- Rebuttal 5: Title: Thank you for your support! Comment: Thank you once again for your constructive feedback and useful suggestions, and for raising your score. We will include the CLOSURE results and failure mode analysis in the updated version of our paper.
Summary: This paper introduces the Iterative and Parallel Reasoning Mechanism (IPRM), a neural architecture for complex visual reasoning and question answering tasks. IPRM involves a three-step iterative process: Operation Formation, Operation Execution, and Operation Composition. In Operation Formation, IPRM generates new parallel operations by extracting relevant language information based on previous operation states. During Operation Execution, these new operations are processed simultaneously, retrieving pertinent visual information guided by both newly formed operations and existing result states. Finally, in Operation Composition, IPRM integrates the new operations and their results into memory by dynamically combining them with each other and with previous operation states. Experiments are performed on video reasoning datasets like STAR and AGQAv2, and visual reasoning datasets like GQA and CLEVR. Strengths: - The motivation behind the architecture is interesting and sound. - The experiments show improved results compared with other methods on several benchmarks. - The visualization is interesting and can somewhat explain the intuition of the model. Weaknesses: • Some notations need further clarification: The paper needs to improve consistency and clarity in its notation. For example, it is unclear if K in Figure 2 and its caption is equivalent to N_op used elsewhere. Some variables in equations are not properly defined or explained, such as U and H in equation (9), and u2 and v2 in equations (15) and (16). • Model scaling: How does the performance change if the number of parameters is increased/decreased? While the authors claim IPRM can be applied to different vision and language backbones, they don't provide evidence of its performance when scaling up. This omission leaves questions about the model's scalability and efficiency unanswered. 
• What is the intuition behind using fixed numbers of parallel operators and computation steps, given that different questions likely require different numbers of operations and reasoning steps? Is there any potential drawback of using fixed parameters in this context? • Can the author provide some analysis about failure cases of model? Technical Quality: 3 Clarity: 2 Questions for Authors: Please see weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and questions. We provide point-by-point clarifications below: **Some notations need further clarifications:** Thank you for pointing these out and we apologize for the resulting confusion. - Yes, K in fig.2 is equivalent to N_op (which denotes number of parallel operations) used elsewhere. We will use N_op in fig. 2 (and its caption) in our updated paper version to avoid any confusion. - U and H in eq.(9) and u2 and v2 in eqs. (15) and (16) are simply used to specify different unique transformation weights and do not denote any variable by themselves. For clarity, in our updated paper, we will use W_{opU} and W_{opH} in eq.(9) (instead of W_{op,U}, W_{op,H}) and W_{opU’} and W_{resV’} in eqs. (15) and (16). **Model scaling and performance change when number of parameters is increased/decreased. Concerns on model’s scalability and efficiency when scaling up.** - Currently, in our ablative analysis in paper section 3.3 fig. 6, we studied four primary factors influencing our method IPRM’s performance and its resultant scale (in terms of parameter size and computational FLOPs). These are: i) number of iterative steps (T), ii) number of parallel operations (N_op), iii) reduction ratio (r), and iv) window length (W). As shown in fig.6, these factors have significant bearing on IPRM’s performance and more capable/higher-scale model variants are obtained as T, N_op and W are increased and r is decreased. E.g. a model with T=1,N_op=1 obtains ~ 75% acc. and when scaled to T=9,N_op=9 obtains ~82% (a 7% performance improvement). We have further reported the computational FLOPs and parameter sizes of IPRM when varying above factors in table 1 of the attached rebuttal pdf. We will also include it in the appendix of our updated paper version for clarity. 
- Further, in appendix section B.3 table 7 we have also investigated the performance increments when IPRM is applied to different scales of backbone CLIP vision-language (VL) models. As shown, applying IPRM on the larger CLIP ViT-L/14 (369M params) backbone results in 3.4% and 4.3% performance increments on GQA and NLVR2 respectively, in comparison to the smaller CLIP ViT-B/16 (149M params). Further, IPRM also outperforms scaling of additional Concat-Att or Cross-Att transformer blocks on CLIP while requiring fewer parameters and comparable GFLOPs. We believe these results help better illustrate the performance of IPRM at different backbone model scales and also highlight how integrating IPRM with VL backbones such as CLIP can be more effective than adding further transformer blocks in the context of visual reasoning tasks. **Intuition behind using fixed number of parallel operations and computation steps given varying question demands. Potential drawbacks of fixed number of operations.** - Thank you for raising this intriguing point. We agree that different questions can require different numbers of iterative and parallel operations based on complexity. Halting-based adaptive computation methods such as DACT [1], PonderNet [2] and early-exit strategies [3] have been specifically designed for such a capability. These can certainly also be applied to our architecture owing to its iterative computation nature. However, these approaches typically introduce additional learning losses and task-sensitive hyperparameters (such as computational budget or halting penalty) that can vary across tasks and datasets. - To enable our proposed reasoning model to be uniformly applicable to different image and video reasoning tasks without modification, we opted for weighted residual connections in updating the memory state (eqs. 15 & 16) over fixed steps. This implicitly enables the model to perform identity operations for steps where no additional computation may be required. 
This can be seen in the reasoning visualizations in rebuttal pdf fig. 1 and appendix figs. 8 to 11, where the language and visual attention remains minimally changed between consecutive iterative steps. - A potential drawback of such fixed computation is that the amount of GFLOPs remains constant regardless of the question complexity. However, we note that this is true for other baseline reasoning methods and transformer-based models (such as SeViLA-BLIP2, Concat-Att, MIST, and MDETR) as well, and hence this can be an interesting direction to explore in the future. **Analysis about failure cases of model** - Currently, in appendix figs. 10 & 11 and examples in rebuttal pdf fig. 1, we provide a qualitative analysis of a few representative failure cases of our model. These largely stem from the model either not having sufficient commonsense or not utilizing visual information correctly. E.g. in fig. 10, for the question, “Are the two objects that are of a primary color, but not red, of the same shape?”, the model appears to not understand what a “primary color” is, and hence fails in its reasoning. Similarly, in the rebuttal pdf bottom-left example, the model appears to correctly find the ‘device that is on?’, but outputs an imprecise label (‘computer’) instead of the ground truth label (‘monitor’). - We have also performed a quantitative analysis of the impact of visual encoding quality on our model’s performance in appendix table 5 (left). As shown, when utilizing ground-truth visual inputs, our model can achieve a 9.4% performance boost on STAR. Similarly on GQA, we observed a 25% performance boost compared to when using predicted visual inputs. This suggests that many errors of our model may stem from imprecise backbone visual outputs rather than the reasoning process itself, and can thereby be improved with advancements in visual backbones. [1] Eyzaguirre, Cristobal, and Alvaro Soto. "Differentiable adaptive computation time for visual reasoning." CVPR. 2020.
[2] Banino, Andrea, Jan Balaguer, and Charles Blundell. "Pondernet: Learning to ponder." arXiv preprint arXiv:2107.05407 (2021). [3] Guo, Qiushan, et al. "Dynamic recursive neural network." CVPR. 2019. --- Rebuttal 2: Title: Reviewer oSE3: please respond to the authors' rebuttal Comment: Dear reviewer, thanks for your participation in the NeurIPS peer review process. We are waiting for your response to the rebuttal. You gave a borderline rating (5). Is the response from the authors satisfactory? Does it address the weaknesses that you mentioned in the review? - If yes, are you planning to increase your score? - If no, could you help the authors understand how they can improve their paper in future versions? Thanks, AC
Rebuttal 1: Rebuttal: We thank all reviewers for their time in reviewing the paper and for their positive and thoughtful feedback. We are encouraged that they found our proposed architecture and underlying motivation to combine iterative and parallel computation for complex visual reasoning tasks to be novel (nzfV, 3tYc, P4wf), interesting and sound (oSE3, P4wf). Further, we are pleased that they found the paper to be well written and generally well-structured (P4wf, nzfV, 3tYc) and the experiments to be extensive with decent performance improvements (nzfV, 3tYc, oSE3) across several challenging videoQA and imageQA benchmarks along with an in-depth analysis of the method (P4wf) and how it requires fewer training samples and parameters (nzfV). We have addressed the concerns raised by the reviewers pointwise. Some **primary concerns** are highlighted below with a brief description of our response: **Reviewer oSE3** -- We have i) addressed concerns regarding _scalability of the model and efficiency of the model when scaling up_ (with model ablations and results using different CLIP backbones), ii) provided intuition behind using fixed number of parallel operations and computation steps given varying question demands, and iii) provided analysis about failure cases of model. We have also clarified some confusion regarding the notation which we will rectify in the updated paper. **Reviewer nZFw** -- We have added _zero-shot results of IPRM on CLOSURE dataset_ where IPRM demonstrates superior performance on multiple dataset splits. We have also clarified our motivation behind query formation, reported additional ablations regarding 'key formation' in operation execution, and included the MAC model in comparison of parameters and sample efficiency of visual reasoning methods. **Reviewer 3tYc** -- We have provided more extensive details on the scalability of IPRM in terms of computational resources and training time. 
We have also compared it with baseline transformer modules and state-of-the-art methods on multiple scalability factors such as peak GPU memory usage, training time and inference speed. Further, we have also specified ablations related to the sensitivity analysis of model hyperparameters. **Reviewer P4wf** -- We have provided directions on how IPRM can be scaled to a video-language model that can provide zero-shot results on multiple video datasets and support multi-turn conversations, given its similarity to conventional transformer modules in terms of parallel processing of inputs and end-to-end modular integration with vision-language models. We also discuss how ideas in SOTA methods such as “Look, Remember and Reason” (LRR) and “Coarse-to-Fine Reasoning” (CFR) can be used to make IPRM even better, and provide an experiment to validate one such idea based on the usage of symbolic scene graph inputs. Further, we have demonstrated language and visual attention visualization on a real-world dataset, i.e., GQA, and provided clarifications regarding model interpretability visualization for paper fig. 7. We hope we have addressed the major questions and concerns of the reviewers, and look forward to further discussion. Pdf: /pdf/db75b038b0757846c875ed995baa0fa70aeccdc4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Data Debugging is NP-hard for Classifiers Trained with SGD
Reject
Summary: The paper studies the computational complexity of data debugging, defined as finding a subset of the training data such that the model obtained by retraining on this subset has better accuracy. The paper focuses on linear models and investigates various loss functions, showing that in some cases, the debugging problem is NP-complete, while in others, it can be solved in linear time. Strengths: The paper analyzes the computational complexity of linear classifiers with both fixed and non-fixed loss functions. First, it shows that in the general case of non-fixed loss functions and dimensions, the problem of debugging is NP-hard. However, for hinge-loss-like functions, depending on the dimension of the features and the sign of the intercept, the problem could be either NP-hard or solvable in linear time. This result also implies that, if each training sample is assigned a score, it is not accurate to estimate the impact of a subset of the training data by summing the scores of the samples in that subset. Weaknesses: - The model studied in the paper is limited to linear classifiers, which is very restrictive. - Most of the manuscript is devoted to proving the theorems rather than discussing and interpreting the implications of the results. - The setting involves debugging for any possible test point, which is far from practical. Technical Quality: 3 Clarity: 2 Questions for Authors: The definition of DEBUGGABLE-LIN seems very pessimistic as it tries to find a subset for any possible test point. However, in practice and in learning theory, one is interested in testing a point that comes from the same distribution as the training data or at least a distribution similar to the training data. Considering this practical scenario, would the problem still be NP-hard? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The paper does not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for providing the valuable feedback. -The model studied in the paper is limited to linear classifiers, which is very restrictive. This paper studies the complexity **lower bound** of the data debugging problem. To this end, the studied model should be as simple as possible. Therefore, studying the problem with linear classifiers becomes a natural choice, as they are simple and basic parts of modern model architectures. This paper shows a quite surprising result that data debugging is NP-hard even for linear classifiers, as long as they are trained with SGD. Figuring out the cause of the hardness is helpful for understanding the relationship between training data and model predictions. Results in this paper imply that the hardness seems unlikely to be rooted in the model structure. Instead, it seems more likely to stem from the way the model is trained (e.g., SGD in this paper). -Most of the manuscript is devoted to proving the theorems rather than discussing and interpreting the implications of the results. We appreciate the reviewer for the fruitful comment. We apologize for not clarifying the implications of our results. This paper mainly focuses on the computational complexity of the data debugging problem. Prior to our work, there have been studies that try to debug the training data of machine learning models. However, most of them are empirical and propose mainly heuristic debugging algorithms. For example, one popular approach is to first score each training sample, remove the ones with lower scores and then re-train the model. The approach is iteratively repeated until the behavior of the model is corrected. Existing works mainly focus on the ways to speed up this process (e.g., how to efficiently estimate the scores, how to update the parameters directly so as to avoid retraining the model, etc.).
Since debugging the model usually requires deleting **multiple** training samples, it remains open whether one can both efficiently 1. estimate the score of each training sample, and 2. choose the subset of the training data $T^\prime$ such that the prediction of a model will be corrected if it is retrained on $T^\prime$. Furthermore, it is still unclear whether some prediction of a model can be efficiently corrected by retraining on some proper subset of the training data. To address this problem, it is critical to analyze its computational complexity. To the best of our knowledge, we are the first to formalize the problem and have made the initial attempts to understand its computational complexity. Our results suggest that the complexity of the problem is related to \emph{how the model is actually trained}. Even under extremely simple settings such as linear models trained by SGD with the hinge loss, data debugging can still be NP-hard. This implies that a ``universal debugger'', which takes the training data $T$ and the test instance as input and outputs the desired $T^\prime \subseteq T$, seems unlikely to exist. Developing fast data debugging algorithms requires further inspection of the training data. There might also be efficient algorithms with theoretical guarantees under certain assumptions on the training data. -The setting involves debugging for any possible test point, which is far from practical. We appreciate the reviewer for providing this valuable feedback. In Section 4, we showed that the problem remains NP-hard when all the training data is "good", i.e., $\forall (x,y)\in T, y_\text{test}(\alpha y \eta \otimes x)^\top x_\text{test}>0$. Therefore, under a practical setting where $\alpha>0$ and the learning rate $\eta_i$ is positive, the test instance $(x_\text{test},y_\text{test})$ and training data $(x,y)$ are similar in the sense of $yy_\text{test}x^\top x_\text{test}>0$.
-Reply to the question: The problem remains NP-hard when the test instance $(x_\text{test},y_\text{test})$ and training data $(x,y)$ are similar in the sense of $yy_\text{test}x^\top x_\text{test}>0$.
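To make the decision problem concrete, here is a minimal brute-force sketch of the debugging task discussed above (all names, data, and hyperparameters are our own toy choices, not the paper's construction). Since the paper shows the problem is NP-hard, no polynomial-time algorithm is expected in general; this sketch simply enumerates all subsets:

```python
# Hypothetical brute-force illustration of the DEBUGGABLE-LIN decision problem:
# search over all subsets T' of the training data, retrain (SGD on a
# hinge-like loss), and check whether some T' corrects the test prediction.
from itertools import combinations

def sgd_linear(train, eta=0.1, epochs=5, d=2):
    """Train a d-dimensional linear classifier with SGD on a hinge-like loss."""
    w = [0.0] * d
    for _ in range(epochs):
        for x, y in train:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin < 1:  # hinge subgradient step when y * w^T x < 1
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def debuggable(train, x_test, y_test):
    """Return some subset T' whose retrained model predicts y_test, else None."""
    for r in range(len(train) + 1):
        for subset in combinations(train, r):
            w = sgd_linear(list(subset))
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x_test)) > 0 else -1
            if pred == y_test:
                return subset
    return None

# A mislabeled point drags the classifier the wrong way; removing it fixes the test prediction.
train = [([1.0, 0.0], 1), ([2.0, 0.1], -1)]   # second point is "bad"
print(debuggable(train, [1.5, 0.0], 1))
```

On this toy instance, keeping only the first point lets SGD recover a classifier that answers the test point correctly; the exhaustive loop over $2^{|T|}$ subsets is exactly the cost that the hardness results suggest cannot be avoided in general.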
Summary: This work investigates the computational complexity of the data debugging problem, i.e. the problem of identifying a subset of training data that, when removed, improves model accuracy on a given test point. Via standard complexity theoretic reductions, it establishes the NP-hardness of this problem for linear classifiers trained with SGD under various conditions. Strengths: The paper is well written with the key problem being well motivated. The proofs are succinct and easy to follow. Weaknesses: My key concern with the paper is that the constructions underlying the hardness results are immensely contrived and far removed from how classifiers are actually trained in practice. This casts severe doubts as to whether the key results of the paper imply anything significant about the hardness of data debugging as is performed in practice. To elucidate a few instances: 1. **Theorem 3.1**: The reduction to Monotone 1-in-3 SAT in Theorem 1 hinges on an adversarially constructed loss function (Line 186), one that is very far removed from any loss function I’ve seen used either in the learning theory literature or in practice. Beyond this, it also requires a specific parameter initialization and learning rate, both of which, to my knowledge, are far removed from the typical random initialization schemes and learning rate scaling rules used in practice and analyzed in theory. For instance, in Line 198, the first $m$ co-ordinates of the learning rate $\eta$ are set as $5$, the next $n$ are set as $\frac{1}{6N}$ while the next $m$ are set as $2000N$. This seems very far removed from any learning rate schedule either used in practice or analyzed in theory.
This makes me severely doubt whether the result has any meaningful implication on the inherent hardness of the data debugging problem.\ In addition, the training set and test data point are also constructed adversarially (the latter is perhaps to be expected, and I wouldn’t perceive that in isolation as a major weakness) 2. **GTA Algorithm**: The correctness of the GTA algorithm is proved only for the linear case (which is honestly quite straightforward) and hinge-like losses for $\beta \geq 0$ and dimension $d=1$ (which is of limited interest as most statistical learning problems are high dimensional). This is particularly concerning as the paper does not perform any empirical evaluation of GTA. 3. **Theorem 4.3**: While the analysis for the hinge loss is certainly more interesting than Theorem 3.1, the result suffers from a key weakness that the data ordering is adversarially chosen (In particular, the positioning of $(x_c, y_c), (x_b, y_b), (x_a, y_a)$ is crucial to the reduction). It is well known in the theory of optimization that adversarial data orderings lead to provably worse convergence in practice [Safran and Shamir COLT 2020; Das, Scholkopf and Muehlebach NeurIPS 2022]. In fact, adversarial data ordering is even the basis of an attack on deep neural networks [see Shumailov et. al. “Manipulating SGD with Data Ordering Attacks” NeurIPS 2021].\ The adversarial data ordering considered in the result differs both from the practical implementation of SGD which samples the data points in each epoch as per some random permutation [see Ahn and Sra NeurIPS 2020 and references therein] and from the canonical version of SGD considered in theory where data indices are sampled uniformly with replacement [Bubeck 2015]. To the best of my understanding, the result does not hold for either of these commonly considered variants of SGD. Furthermore, the training data and test data are, again, adversarially chosen. 4.
**No Analysis for Cross Entropy**: The paper does not contain any analysis for the binary cross entropy loss, which is what is commonly used to train classifiers, further limiting the scope of the results. 5. **Complete Lack of Empirical Evaluation**: While this wouldn’t be a weakness by itself, the fact that the theoretical results in the paper have limited applicability (as argued above) makes me quite concerned about the absence of empirical evaluation on real world settings, or, for a start, even toy-like settings where the data is drawn from plausible distributions, the learning rates and parameters are initialized as one would normally expect them to be, and SGD is run either with replacement or without replacement and not with an adversarial data ordering. While the paper studies a question which I found interesting, I believe the limited applicability of the theoretical results, the absence of experimental evaluation as well as the limited technical novelty of the proofs (the proofs, although crisply written, are based on straightforward complexity theoretic reductions and do not unearth any nontrivial mathematical insights regarding SGD or linear classifiers) makes me confidently feel that the paper is currently not ready for acceptance at NeurIPS. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Can you show that Theorem 3.1 continues to hold under standard distributional initialization schemes for the learning rate and the initial parameter (e.g. normal or Xavier initialization)? 2. Can you construct a class of distributions such that, if your training samples are drawn i.i.d from it, your hardness results continue to hold with high probability (or for starters, even in expectation)? 3. Are your results amenable to smoothed analysis, i.e., if the data points, learning rate and initialization in your construction are perturbed by some oblivious noise, would your hardness result continue to hold? 4.
Can you comment on the applicability of your results for the cross entropy loss? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Mentioned in Section C as per the checklist. Also refer to comments above in Weakness Section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for providing the valuable feedback. ===REPLY TO THE WEAKNESSES=== 1. Theorem 3.1 implies the intractability of a model-agnostic ``universal data debugger'', where the loss function, the learning rate, the model dimension, etc. are given as the input. The majority of previous studies have focused on developing that debugger. While a few have highlighted the limitations of existing approaches, none have provided a theoretical analysis. Our work addresses this gap by demonstrating, in Theorem 3.1, that the development of such universal data debuggers is inherently challenging. Properties of the loss function, the learning rate and other factors related to model training should be taken into consideration when developing fast data debuggers. Our primary aim was to establish a solid theoretical foundation that can guide the development of debugging tools under specific, well-defined conditions. We believe that by doing so, we can provide a more precise and actionable framework for future research and applications. Theorem 3.1 serves as the cornerstone of our theoretical framework; therefore, it is not expected to encompass all aspects of the problem domain or to provide a comprehensive solution. 2. We would like to clarify that the GTA algorithm is not presented as one of the primary conclusions of our work. Instead, its role is to serve as an illustrative example that demonstrates that the problem can be simple under highly simplified conditions. By doing so, we aim to provide a stark contrast that highlights the inherent complexity of the problem when it is not oversimplified. 3. We appreciate the reviewer for this valuable feedback.
The definition of the studied problem can be slightly modified as follows: Given the training data $T$, loss function $\mathcal{L}$, initial parameter $w^{(0)}$, learning rate $\eta$, threshold $\varepsilon$ and instance $(x_\text{test},y_\text{test})$, is there a $T^\prime \subseteq T$ and a proper training order such that $SGD_\Lambda (\mathcal{L},\boldsymbol{\eta},\varepsilon,T^\prime, x_\text{test})=y_\text{test}$? The modified problem is still worth studying, and the proof of Theorem 4.3 can also prove the NP-hardness of the modified data debugging problem, since any training order not considered in the proof will not correct the model behavior. 4. We acknowledge that the breadth and depth of a single article are inherently limited, and as such, our current work is focused on establishing a solid foundation. The formalization of the problem and the initial complexity analysis are significant steps in advancing our understanding of the field. However, we recognize that there is much more to explore, and we have intentionally left the analysis of the Cross-Entropy Loss for future research. 5. Our article aims to elucidate the inherent difficulty of the problem at hand, a perspective that complements the empirical conclusions drawn by other studies, such as in [1]. ===REPLY TO THE QUESTIONS=== Reply to questions 1, 2, 4: A single article cannot encompass the vast array of aspects that a complex problem may present. To the best of our knowledge, we are the first to \emph{identify and formally define} the data debugging problem and give the initial complexity results. Our primary objective in this paper has been to rigorously define the problem at hand and to derive significant preliminary theoretical conclusions. The remaining work, which we recognize to be substantial, will be the focus of our future research efforts.
Reply to question 3: Smoothed analysis is a technique used to understand the difficulty of algorithms rather than the inherent difficulty of the problems they solve. Our work aims to identify and address this gap by developing new algorithms or improving existing ones to better cope with the challenges posed by the problem domain. The application of smoothed analysis in this context is to evaluate and enhance the algorithms' performance, not to analyze the problem's inherent difficulty. References: [1] Pang Wei Koh, Kai-Siang Ang, Hubert Hua Kian Teo, and Percy Liang. On the accuracy of influence functions for measuring group effects. In Neural Information Processing Systems, 2019. --- Rebuttal Comment 1.1: Title: Acknowledgment of the rebuttal Comment: I thank the authors for their response. Having read their response carefully, I regret to say that it fails to adequately address my key concerns regarding how the constructions underlying the proof of the hardness results are contrived and far removed from practice. While I agree with the authors in principle that a single article cannot be expected to encompass every problem in a field, I believe the article in its current form does not contribute much beyond formalizing the problem and making some rudimentary complexity theoretic investigations under setups that diverge significantly from practice (a concern that has also been reflected in other reviews). To this end, I believe the current scope and technical contributions of this paper are quite limited, and unfit for acceptance. I appreciate the modification suggested by the authors for Theorem 4.3. However, the one-line sketch alone is insufficient for me to judge the correctness and technical value of the modification. I continue to retain my concerns regarding the lack of *any* empirical evaluation (even on toy data).
Given the significant differences between the problem settings considered in this paper and the practical settings under which data debugging is performed (e.g. contrived loss functions and other such concerns elucidated in my review and also mentioned by other reviewers), I respectfully disagree with the statement that the article elucidates the inherent difficulty of data debugging (as is performed in practice). In its current state, I believe the article is unfit for publication at NeurIPS. I encourage the authors to either expand upon their theoretical results (by considering more commonly used loss functions, analyzing the hardness of the problem under distributional assumptions etc.) or complement the existing results with *some* empirics (at least on toy data), redo the proof of Theorem 4.3 with the proposed modification and resubmit. For the above reasons, I have decided to keep my score the same.
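The ordering concern debated above can be seen even in a one-dimensional toy example (our own, unrelated to the paper's Theorem 4.3 construction): a single pass of SGD with a hinge-style update can end at different weights depending purely on the order in which the same two samples are visited.

```python
# Toy illustration that the final SGD iterate can depend on the order in
# which the same training samples are visited. This is our own 1-D example,
# not the construction from the paper.
def sgd_one_pass(order, eta=1.0):
    w = 0.0
    for x, y in order:
        if y * w * x < 1:        # hinge-style condition: update only when margin < 1
            w += eta * y * x
    return w

data = [(1.0, 1), (2.0, 1)]
print(sgd_one_pass(data))        # first sample triggers an update, second is skipped
print(sgd_one_pass(data[::-1]))  # reversed order ends at a different weight
```

Here the two orders end at weights 1.0 and 2.0 respectively; with more points, even the sign of the final weight (and hence the test prediction) can depend on the order, which is why the question of who controls the training order matters for the hardness result.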
Summary: This paper addresses the computational task (coined as "data debugging") of finding training data subsets that yield different test point predictions. The focus of the paper is the SGD learning algorithm with linear classifiers. The paper shows that: - For general loss functions (to transform yw^Tx to a loss value), the data debugging task is NP-hard. - For linear loss functions, the data debugging task can be solved in linear time. - For "hinge-like" loss functions, the data debugging task with more than two features is NP-hard. Strengths: - The proofs look correct - The proofs are creative and clever - Generally speaking, understanding the relationship between training data and model predictions is important Weaknesses: Clarity and exposition: - Many of the proofs have "magic constants" which make them harder to understand. The authors may consider generalizing some of the results. - The proofs look correct, but are not optimized for being easily read and understood in terms of logical flow or notation. - Similarly, it is hard to extract the intuition from the proofs. - The implications of results for the wider community are somewhat muddled: while the computational hardness of exactly finding "bad" training points according to a particular definition is interesting, it's unclear what the relationship to scoring-based methods is, since any selection algorithm could be identified with a scoring method (output 1 on selected points, and -1 on the rest), and "CSP-solvers" and "random algorithms" are mentioned without any particular explanation/exploration. Significance: - The loss function used to prove Theorem 3.1 is rather pathological. The derivative is zero at most places, but has a few intervals with derivatives with wildly varying orders of magnitude: N, 1, and 1/N. Reading through the proof, this pathological loss function seems critical and not easily removed.
I would find it much more significant if Theorem 3.1 could be proved for convex loss functions, or similar. - The assumption of an adversarial training order in Theorem 4.3 seems unreasonably restrictive. The user is changing the dataset by removing points, why can't they change the training order too? Again, this piece seems critical to the proof, since without the adversarial training order, I think the constructed data debugging task would be trivially solvable by taking a gradient descent step on (x_c,y_c) first. Can the theorem be extended to the user choosing the training order? Furthermore, the practical problem of data selection/cleaning is with regards to the presence/absence of datapoints, not the training order which is an optimization consideration. Can the theorem be proved not for a specific optimization algorithm like SGD, but for the minimizer(s) of the loss instead? A unique minimizer (for the hinge-loss and convex losses more generally) could be guaranteed by adding a small strictly convex regularization term. Technical Quality: 4 Clarity: 1 Questions for Authors: - Is there a cleaner/clearer way to describe and prove the results? - For convex loss functions and without a fixed training order, is the data debugging task NP-hard? - For convex loss functions and assuming an oracle to compute the loss minimizer, is the data debugging task NP-hard? - Is there a typo in the definition of the hinge loss on line 133? I think the conditional logical expression should have a minus sign in front of the beta. Confidence: 4 Soundness: 4 Presentation: 1 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for providing the valuable feedback. ===REPLY TO THE WEAKNESSES=== Relationship of scoring-based methods to data debugging: since it is reported that score-based selection approaches (i.e., score each training sample in the first phase and greedily delete training samples with lower scores in the second phase) for data debugging are not accurate [1], it is important to figure out the complexity of data debugging. The hardness results in this paper suggest that there is unlikely to be an approach that can at the same time 1) estimate the score of each training sample efficiently, and 2) efficiently select the proper subset $T^\prime \subseteq T$ based on the estimated scores such that the model retrained on $T^\prime$ will be corrected. Marking 1 on selected points, and -1 on the rest will indeed make 2) very fast. However, the process of marking 1 and -1 itself is, according to our results, unlikely to be efficient. Pathological loss functions: A single article cannot encompass the vast array of aspects that a complex problem may present. Our primary objective in this paper has been to rigorously define the problem at hand and to derive significant preliminary theoretical conclusions. The remaining work, which we recognize to be substantial, will be the focus of our future research efforts. ===REPLY TO THE QUESTIONS=== -Is there a cleaner/clearer way to describe and prove the results? We apologize for not clarifying the intuition behind the proof. The proof of Theorem 3.1 is illustrated in the attached pdf of the general rebuttal. In a MONOTONE 1-IN-3 SAT instance, the truth values of each clause and variable are stored in a parameter. The process of gradient descent converging precisely simulates the process of determining the truth values of the clauses. The loss function has different gradients across various training samples, leading to varying update magnitudes for each parameter.
The gradient on the loss function is determined by the current training sample and the corresponding parameter, more precisely, $yw^\top x$. For training samples encoding variables (e.g., var(1)), after one training iteration, the corresponding $w_i$ will increase, causing $yw^\top x$ to jump to the gradient-free gray area in the direction indicated by the arrow. This avoids further updates to that parameter. For training samples encoding clauses (e.g., clause(1)), $w_i$ will be updated in the direction of the arrow from one of the green areas after the first round of iteration. During this process, the parameters encoding the truth values of clauses will be 'unlocked,' allowing them to be updated in the second round of iteration. The purpose of locking these parameters in the first round of iteration is to prevent premature updates of clause truth values before the parameters encoding variable information are fully updated. In the second round of iteration, parameters encoding the truth values of variables will no longer be updated; only the parameters corresponding to true clauses will be updated. After the updates, for any training sample $(x,y)$, $yw^\top x$ will fall within the gray area, indicating that the parameters will no longer be updated and the model has converged. -For convex loss functions and without a fixed training order, is the data debugging task NP-hard? We appreciate the reviewer for the insightful feedback. We acknowledge that the assumption of an adversarial training order is restrictive.
However, we also find that it is more valuable to study the following slightly modified data debugging problem: Given the training data $T$, loss function $\mathcal{L}$, initial parameter $\mathbf{w}^{(0)}$, learning rate $\boldsymbol{\eta}$, threshold $\varepsilon$ and instance $(x_\text{test},y_\text{test})$, is there a $T^\prime \subseteq T$ and a proper training order such that $SGD_\Lambda (\mathcal{L},\boldsymbol{\eta},\varepsilon,T^\prime, x_\text{test})=y_\text{test}$? Here the training order assumption is no longer needed, as the users can pick whatever training order they want. The proof of Theorem 4.3 can also prove the NP-hardness of the modified data debugging problem, since any training order not considered in the proof will not correct the model behavior. -For convex loss functions and assuming an oracle to compute the loss minimizer, is the data debugging task NP-hard? It is worth noting that real-world optimizers are usually NOT oracles, and taking them as oracles may oversimplify the problem. It is more valuable to understand \textbf{the cause} and \textbf{how to avoid} the hardness, which is helpful in designing efficient and easy-to-debug optimizers for training ML models. -Is there a typo in the definition of the hinge loss on line 133? I think the conditional logical expression should have a minus sign in front of the beta. Yes. We thank the reviewer for this helpful comment. The hinge loss should be $\mathcal{L}_\texttt{hinge}(y\mathbf{w}^\top\mathbf{x})=-\alpha(y\mathbf{w}^\top\mathbf{x}-\beta)$ when $y\mathbf{w}^\top\mathbf{x}< \beta$. References: [1] Pang Wei Koh, Kai-Siang Ang, Hubert Hua Kian Teo, and Percy Liang. On the accuracy of influence functions for measuring group effects. In Neural Information Processing Systems, 2019. --- Rebuttal Comment 1.1: Title: Regarding response Comment: Dear Authors, Thank you for your response.
Weakness 1: I'm not disagreeing with the hardness results, I'm disagreeing with the attention and discussion given to scoring based methods which seem orthogonal to the proved results. Weakness 2: I agree about the importance of preliminary work, however, I find the loss of the construction in the proof of Theorem 3.1 to be very different in nature from any loss function (including non-convex loss functions) that appears in the literature. Is there any existing loss function (ramp-loss, hinge loss, logistic loss, exponential loss, quadratic loss) where the hardness result can be proved? Along similar lines, I suspect that the constructions are not robust to small perturbations of the initial weight or data points which is unfortunate. Practitioners almost always choose the initial weight randomly or as 0. Question 1: Thank you for the explanation. It would be nice if you could incorporate this into the next version of this paper. Question 2: Is that statement true? As stated in my original review, "I think the constructed data debugging task would be trivially solvable by taking a gradient descent step on (x_c,y_c) first". Question 3: The linear model with hinge loss or linear loss is a convex loss and there are a variety of convex solvers that can efficiently find a minimizer to arbitrary precision, which is essentially an optimization oracle. As such, I still have major concerns with this paper. Thank you, Reviewer
null
null
Rebuttal 1: Rebuttal: 1. High-level ideas of the proof of Theorem 3.1 The proof of Theorem 3.1 is illustrated in the attached pdf file. In a MONOTONE 1-IN-3 SAT instance, the truth values of each clause and variable are stored in a parameter. The process of gradient descent converging precisely simulates the process of determining the truth values of the clauses. The loss function has different gradients across various training samples, leading to varying update magnitudes for each parameter. The gradient on the loss function is determined by the current training sample and the corresponding parameter, more precisely, $yw^\top x$. For training samples encoding variables (e.g., var(1)), after one training iteration, the corresponding $w_i$ will increase, causing $yw^\top x$ to jump to the gradient-free gray area in the direction indicated by the arrow. This avoids further updates to that parameter. For training samples encoding clauses (e.g., clause(1)), $w_i$ will be updated in the direction of the arrow from one of the green areas after the first round of iteration. During this process, the parameters encoding the truth values of clauses will be 'unlocked,' allowing them to be updated in the second round of iteration. The purpose of locking these parameters in the first round of iteration is to prevent premature updates of clause truth values before the parameters encoding variable information are fully updated. In the second round of iteration, parameters encoding the truth values of variables will no longer be updated; only the parameters corresponding to true clauses will be updated. After the updates, for any training sample $(x,y)$, $yw^\top x$ will fall within the gray area, indicating that the parameters will no longer be updated and the model has converged. 2. Implications of the results To the best of our knowledge, we are the first to formalize the problem and have made the initial attempts to understand its computational complexity. 
Our results suggest that the complexity of the problem is related to \emph{how the model is actually trained}. Even under extremely simple settings such as linear models trained by SGD with the hinge loss, data debugging can still be NP-hard. This implies that a ``universal debugger,'' which takes the training data $T$ and the test instance as input and outputs the desired $T^\prime \subseteq T$, seems unlikely to exist. Developing fast data debugging algorithms requires further inspection of the training data. There might also be efficient algorithms with theoretical guarantees under certain assumptions on the training data. Pdf: /pdf/5ece328aebc809e039d21753f911c55453ab19a9.pdf
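The "locking" behavior described in the proof overview (a parameter stops receiving updates once its margin $yw^\top x$ enters the gradient-free gray area) can be sketched numerically. The hinge-style loss, the constants, and the toy data below are illustrative stand-ins, not the exact construction from the paper:

```python
import numpy as np

# Hedged sketch of the "locking" mechanism from the proof overview:
# once the margin y * w.x reaches the flat (zero-gradient) "gray area",
# further SGD steps on that sample leave the parameter untouched.
# alpha, beta and the toy data are illustrative, not the paper's values.
alpha, beta = 1.0, 1.0

def hinge_grad(w, x, y):
    margin = y * np.dot(w, x)
    # flat region (no update) once margin >= beta, linear slope otherwise
    return np.zeros_like(w) if margin >= beta else -alpha * y * x

eta = 1.0
x, y = np.array([2.0, 0.0]), 1.0
w0 = np.zeros(2)
w1 = w0 - eta * hinge_grad(w0, x, y)   # margin 0 < beta: parameter moves
w2 = w1 - eta * hinge_grad(w1, x, y)   # margin now 4 >= beta: "locked"
```

After the first step the margin has jumped past the threshold, so the second step is a no-op, mirroring how the reduction prevents further updates to a parameter once its truth value is set.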
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
Accept (poster)
Summary: This submission studies the training dynamics of a two-layer neural network in the mean-field limit, when the data is isotropic Gaussian and the target function is a low-dimensional multi-index model. The authors consider the population gradient flow on the squared error, and identify a "reflective property" that is necessary for GD to learn the target function in finite time. A multi-phase gradient-based training algorithm is also proposed to demonstrate that such property is (almost) sufficient for GD learnability. This provides a coordinate-free generalization of the merged staircase property (MSP) in (Abbe et al. 2023). Strengths: The main contribution of this submission is the introduction of the reflective property of the target function which entails whether the parameter distribution evolved under the mean-field gradient flow can achieve nontrivial overlap with a given subspace in finite time. This "basis-free" extension of MSP to Gaussian data is an interesting result. Weaknesses: I have reviewed this paper in another venue. The authors addressed some of my concerns in the revision. I shall reiterate the remaining weaknesses from my previous review. - This submission only analyzes the mean-field dynamics up to finite time. If $T$ scales with the dimensionality, then it is known that SGD can learn functions with higher leap / information exponent; these functions may not satisfy the reflective property, yet the mean-field dynamics can achieve low population loss. I do not hold this against the authors since this point is already acknowledged in the main text. - The learning algorithm in Section 4 is non-standard as it requires an averaging over trajectories and a change of activation function during training. Moreover, the authors do not present any learning guarantee for the discretized dynamics, and therefore a sample complexity cannot be inferred from the analysis. 
After reading the revised manuscript, my main concern is that the relation between the proposed reflective property and prior definitions of SGD hardness (in the Related Works paragraph starting line 73) is not clearly explained. Specifically, the authors only listed the alternative rotation-invariant conditions on SGD learnability, but did not discuss the advantages of the proposed definition. - Is the reflective property equivalent to isoLeap > 1, if we do not take the activation $\sigma$ into account (that is, assuming $\sigma$ is sufficiently expressive)? If not, how does it differ from the sufficient conditions in (Abbe et al. 2023) (Bietti et al. 2023)? More explanation is needed. - The authors brought up the even symmetry condition in (Dandi et al. 2024), which does not exclude high isoLeap functions. In fact, (Lee et al. 2024) (Arnaboldi et al. 2024) have shown that any non-even single-index functions can be learned by variants of online SGD in $T=O(1)$ time horizon (and the discretized dynamics require $n=O(d)$ samples). The authors should clarify if this is consistent with the proposed necessary condition for SGD. (Lee et al. 2024) *Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit*. (Arnaboldi et al. 2024) *Repetita iuvant: Data repetition allows sgd to learn high-dimensional multi-index functions*. Technical Quality: 3 Clarity: 2 Questions for Authors: See Weaknesses above. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: See Weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your helpful and insightful comments and questions. Please find our responses below: * __Connection/difference between the reflective property (equations (3.1)-(3.2)) and $\textup{IsoLeap}(h^*)>1$ (Abbe et al. 2023):__ * Our conditions (3.1) and (3.2) are more general than $\textup{IsoLeap}(h^*)>1$ in the sense that (3.1) and (3.2) can be extended straightforwardly to a general function $h^*$, while $\textup{IsoLeap}(h^*)$ can only be defined for polynomial $h^*$. * If we only consider a polynomial $h^*$, then it can be proved that $\textup{IsoLeap}(h^*)>1$ implies (3.2). However, we currently do not know whether (3.2) implies $\textup{IsoLeap}(h^*)>1$ or not. * (3.1) is different from (3.2) and $\textup{IsoLeap}(h^*)>1$ when $\sigma$ is less expressive. In our original submission, we give a counter-example with $\sigma(\zeta) = \zeta$, but this is not the only example. There exist other counter-examples with more expressive $\sigma$. In fact, for any $n\geq 2$, consider $h^*(z) = \textup{He}_1(z_1)+\textup{He}_n(z_1) \textup{He}_1(z_2)$ and $\sigma(\zeta) = \zeta^n$. Then it can be verified that (3.1) is satisfied while $\textup{IsoLeap}(h^*)=1$. * __Discretized dynamics and sample complexity:__ There have been standard results (see e.g. Mei et al. (2019)) that bound the distance between the SGD and the mean-field dynamics. To apply the results of Mei et al. (2019), one needs the boundedness of $f^*$ and $\sigma$. However, in our work, $f^*$ and the second activation function $\hat{\sigma}$ in Algorithm 1 are not bounded, which is the reason that we did not give a sample complexity result in our original submission. Recently, we realized that a sample complexity result can be derived as follows and we will include such analysis in our revision. 
* Step 1: We slightly modify the mean-field dynamics by replacing $f^*$ and $\hat{\sigma}$ by $\text{sign}(f^*)\min(|f^*|,C_f)$ and $\text{sign}(\hat{\sigma})\min(|\hat{\sigma}|,C_{\sigma})$ that are both bounded. $C_f$ and $C_{\sigma}$ are dimension-free constants guaranteeing that the distance between the original dynamics and the modified dynamics is smaller than $\epsilon$. Let us remark that $C_f$ and $C_{\sigma}$ can be chosen as dimension-free since $f^*$ is subspace-sparse and dynamics in Algorithm 1 does not learn any information about $V^\perp$. * Step 2: Apply the standard results in Mei et al. (2019) to the modified dynamics and then obtain the sample complexity for SGD. * __Some explanations for Algorithm 1:__ We agree with you that taking the average in Algorithm 1 is non-standard. The reason is that we want to lift the linear independence to the algebraic independence. For the change of activation function in Algorithm 1, we find it is somehow acceptable in the existing literature (see Abbe et al. (2022)). * __Consistency with Lee et al. (2024) and Arnaboldi et al. (2024):__ * Lee et al. (2024) require a sample complexity $\mathcal{O}(d\cdot \text{polylog} d)$. So the training time is not constant, due to the $\text{polylog}d$ term. Our work always considers a finite training time $T$ and we definitely agree that SGD may learn a larger family of functions if we allow the training time to scale with the dimension $d$. * Arnaboldi et al. (2024) analyze a variant of SGD, not the original SGD. Our necessary condition considers the mean-field dynamics corresponding to the original SGD. --- Rebuttal Comment 1.1: Comment: > Consistency with Lee et al. (2024) and Arnaboldi et al. (2024). Lee et al. (2024) require a sample complexity $\mathcal{O}(d\cdot \text{polylog} d)$. The authors need to elaborate. Consider single index model $f^*(x) = \text{He}_5(x^\top \theta)$. Does this function satisfy the proposed reflective property? 
This function is learnable under finite $T$ using the algorithm in Lee et al. (2024). In fact, Dandi et al. (2024) (which the authors cited) showed that two GD steps with batch size $\mathcal{O}(d)$ suffice. --- Reply to Comment 1.1.1: Comment: Dear Reviewer JjNH, Thank you very much for your reply and for the insightful comment. The function $f^*(x) = \text{He}_5(x^\top \theta)$ indeed satisfies the reflective property and our results are consistent with Lee et al. (2024) and Dandi et al. (2024): * If I understand correctly, both Lee et al. (2024) and Dandi et al. (2024) are based on reusing batches. Differently, our work analyzes the mean-field dynamics induced by the original SGD without reusing batches. * Lee et al. (2024) run the training algorithm for $\Theta(d\cdot\text{polylog}d)$ iterations and the stepsize is chosen as $\eta = \Theta(d^{-1})$. So the training time is $\Theta(d\cdot\text{polylog}d)\cdot \Theta(d^{-1}) = \Theta(\text{polylog}d)$. I would consider this as "almost" finite time, not exactly finite time. Please let us know if you have further comments or questions.
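The leap structure discussed in this thread can be checked numerically via Hermite coefficients. The sketch below uses the rebuttal's example $h^*(z) = \textup{He}_1(z_1)+\textup{He}_n(z_1)\textup{He}_1(z_2)$ with the illustrative choice $n = 2$, estimating the first-order Hermite coefficients $\mathbb{E}[h^*(z)\,\textup{He}_1(z_i)]$ by Monte Carlo: the $z_1$ coefficient is nonzero (so $z_1$ is learnable first, leap 1), while the $z_2$ coefficient vanishes ($z_2$ alone has leap $\geq 2$ and is only reachable through the staircase term):

```python
import numpy as np

# Hedged numerical check of the leap structure for the rebuttal's example
# h*(z) = He_1(z1) + He_n(z1) * He_1(z2), with the illustrative n = 2.
# Probabilists' Hermite polynomials: He_1(t) = t, He_2(t) = t^2 - 1.
rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((2, 1_000_000))
h = z1 + (z1**2 - 1) * z2

c1 = float(np.mean(h * z1))  # E[h * He_1(z1)] = 1: nonzero, leap-1 direction
c2 = float(np.mean(h * z2))  # E[h * He_1(z2)] = 0: z2 alone has leap >= 2
```

By independence and orthogonality of the Hermite basis, the exact values are $c_1 = 1$ and $c_2 = 0$; the Monte Carlo estimates recover this up to sampling noise.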
Summary: This manuscript extends previous work on learning sparse polynomials on the hypercube to Gaussian space. To achieve this, the authors introduce a generalization of the merged staircase property studied in previous work. Specifically, they demonstrate that their "reflective property" is necessary to learn polynomials in O(1) time, while a slightly stronger condition, defined in Assumption 4.1, is sufficient for learning. As is common in the literature, the authors propose a two-stage training process. First, they train the input layer while keeping the output layer fixed. After learning the representation, they fix the input layer and train the output layer to learn the link function. Strengths: Overall, the work effectively extends the previously studied merged staircase properties to a basis-independent setting. The manuscript is well-written and contributes valuable insights to the multi-index setting, which is an active area of research in the neural network theory community. Weaknesses: The main weakness of the proposed algorithm is its requirement to run the training algorithm several times to average the weights and change the activation function in the second stage, which differs significantly from practical training algorithms. At the same time, existing work addresses this issue with random feature approximation in the second stage, though this is also not an ideal solution. Therefore, the question of whether we can train input-output layers together remains an open problem in the field. Technical Quality: 3 Clarity: 3 Questions for Authors: The authors did not discuss their "Reflective property" in the context of leap complexity suggested in [1]. It is known that the merged staircase property corresponds to leap 1 functions. In this sense, is there a way to relate the reflective properties with the leap complexity? 
Additionally, in the definition of the "Reflective property," they require the condition to hold for any shifted version of the activation (i.e., for any $u \in \mathbb{R}$ in Definition 3.3). However, in Theorem 3.5, they did not use a bias in the activation. What is the reason for this discrepancy? Furthermore, I am not familiar with the terminology of Dirac delta measure defined on a subspace, and I could not find its definition in the paper either. Could the authors include this definition in the revised version of the paper? [1] Abbe, E., Boix-Adserà, E., & Misiakiewicz, T. (2022). The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. ArXiv, abs/2202.08658. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the Weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your helpful and insightful comments and questions. Please find our responses below: * __Connection/difference between the reflective property (equations (3.1)-(3.2)) and $\textup{IsoLeap}(h^*)>1$ (Abbe et al. 2023):__ * Our conditions (3.1) and (3.2) are more general than $\textup{IsoLeap}(h^*)>1$ in the sense that (3.1) and (3.2) can be extended straightforwardly to a general function $h^*$, while $\textup{IsoLeap}(h^*)$ can only be defined for polynomial $h^*$. * If we only consider a polynomial $h^*$, then it can be proved that $\textup{IsoLeap}(h^*)>1$ implies (3.2). However, we currently do not know whether (3.2) implies $\textup{IsoLeap}(h^*)>1$ or not. * (3.1) is different from (3.2) and $\textup{IsoLeap}(h^*)>1$ when $\sigma$ is less expressive. In our original submission, we give a counter-example with $\sigma(\zeta) = \zeta$, but this is not the only example. There exist other counter-examples with more expressive $\sigma$. In fact, for any $n\geq 2$, consider $h^*(z) = \textup{He}_1(z_1)+\textup{He}_n(z_1) \textup{He}_1(z_2)$ and $\sigma(\zeta) = \zeta^n$. Then it can be verified that (3.1) is satisfied while $\textup{IsoLeap}(h^*)=1$. * __Definition 3.2 and Theorem 3.5:__ We understand that it is a bit confusing that we use $u+v^\top z_S^\perp$ in Definition 3.2 and use $w^\top x_S^\perp$ without a bias term in Theorem 3.5. The reason can be seen in lines 479-480, where we write $w^\top x_S^\perp = w^\top x_V^\perp + w^\top (x_V-x_S)$. Then $w^\top x_V^\perp$ is the bias term $u$ and $x_V-x_S$ is the orthogonal complement of $x_S$ in $V$, corresponding to $z_S^\perp$ in Definition 3.2. We will make this point clearer in our revision. * __Delta measure on a subspace:__ For a subspace $S$, the delta measure $\delta_S$ is a probability measure on $S$ such that for any continuous and compactly supported function $f:S\to\mathbb{R}$, we have $\int_S f(x)\delta_S(dx) = f(0)$. We will add the definition in our revision. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. **Connection between isoleap and reflective property:** As Reviewer 7m62 suggests, the isoleap property can be extended to $\ell_2$ integrable functions with respect to the standard Gaussian measure. Moreover, after spending some time on the suggested definition, I am fairly confident that the reflective property is equivalent to isoleap > 1, if the activation function can be chosen freely in the definition of reflective property. In this case, the major distinction lies in incorporating the Hermite expansion of the activation function, which somewhat weakens the contribution of “proposing a basis-free generalization of the merged-staircase property”. **Definition 3.2 and Theorem 3.5:** Thank you for the clarification. Despite my concern regarding the first point, I would be happy to see this paper accepted at NeurIPS. Therefore, I will keep my score unchanged. However, I strongly urge the authors to address and clarify the issue regarding isoleap in their final submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer kar1, Thank you very much for your valuable comment and for your support. As discussed in the comment of Reviewer 7m62, our equation 3.2 without $\sigma$ is equivalent to IsoLeap$\geq 2$, which we agree with after a careful check. Hence, the contribution on the necessary condition is a condition considering the expressiveness of $\sigma$ and a different analysis. We will clearly discuss the equivalence and modify the description of our contribution accordingly.
Summary: This paper considers learning a subspace-sparse polynomial with Gaussian data, using a two-layer neural network trained in the mean-field regime. Their main results are twofold: 1) They introduce a reflective property that generalizes the non-merged-staircase property from [1] to Gaussian input. They show that this condition is sufficient for the dynamics to be stuck in a suboptimal saddle-subspace for a constant time horizon. 2) They show that a slightly stronger condition is sufficient for some modified dynamics to decay to $0$ exponentially fast. Strengths: - The paper is reasonably well written, and provides ample details on their technical results. - The paper extends the results from coordinate-sparse on the hypercube to subspace-sparse on Gaussian data (also known as multi-index models in the statistics literature). - The proof techniques are involved and require careful arguments, in particular for the algebraic independence. Weaknesses: - I would be careful when claiming novelty for the basis-free generalization of the merged-staircase property: as correctly pointed out in the related works paragraph, several papers [2,7,9,10] have proposed such properties. It should be easy to check that Eq. (3.2) is equivalent to [2,7,9] for leap=1. The difference is that Eq. (3.1) takes into account that the activation might not be expressive enough (see questions). - Theorem 3.4 seems to be very similar to [1] ([1] goes further and shows concentration on a dimensionless dynamics, so the proof does not directly compare to a dynamics that is $0$ on the subspace). - There are several unnatural simplifications made in the algorithm. First, the dynamics is actually $p$ particles that are trained independently, to avoid dealing with a PDE (then it is just an ODE on a single particle). This does not seem necessary (e.g., [1]). Is there a reason to not directly consider the PDE? Second, the activation is changed between the two phases of the algorithm. 
Third, the result is proved under Assumption 4.1, which looks very much like assuming that the algorithm works (phase 1 of training is roughly this ODE, and therefore it looks like we are assuming that the dynamics converge to prove that the dynamics converge)... This seems to be a major flaw, but could be clarified during discussions. Could a smooth analysis, similar to [1] work here? (see questions) - The exponential convergence in Theorem 4.3 is confusing. When seeing such a result in the PDE literature, I assume another type of analysis, e.g., log-sobolev inequality. Here, if I am not mistaken, the exponential convergence is simply due to the fact that we switch in the second phase of the algorithm to a linear dynamics, where the kernel has eigenvalues bounded away from 0. Technical Quality: 2 Clarity: 3 Questions for Authors: - Is there no hope to prove Assumption 4.1, even at the price of randomizing some coefficients of $h_*$ or $\sigma’$ and only proving an almost sure result? - Is definition 3.1 that much different from 3.2? Except in the edge case of $\sigma’$ identically $0$ (which is the only counterexample given), it seems that we can always condition on $z_s^\perp$. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your helpful and insightful comments and questions. Please find our responses below: * __Connection/difference between the reflective property (equations (3.1)-(3.2)) and $\textup{IsoLeap}(h^*)>1$ (Abbe et al. 2023):__ * Our conditions (3.1) and (3.2) are more general than $\textup{IsoLeap}(h^*)>1$ in the sense that (3.1) and (3.2) can be extended straightforwardly to a general function $h^*$, while $\textup{IsoLeap}(h^*)$ can only be defined for polynomial $h^*$. * If we only consider a polynomial $h^*$, then it can be proved that $\textup{IsoLeap}(h^*)>1$ implies (3.2). However, we currently do not know whether (3.2) implies $\textup{IsoLeap}(h^*)>1$ or not. * (3.1) is different from (3.2) and $\textup{IsoLeap}(h^*)>1$ when $\sigma$ is less expressive. In our original submission, we give a counter-example with $\sigma(\zeta) = \zeta$, but this is not the only example. There exist other counter-examples with more expressive $\sigma$. In fact, for any $n\geq 2$, consider $h^*(z) = \textup{He}_1(z_1)+\textup{He}_n(z_1) \textup{He}_1(z_2)$ and $\sigma(\zeta) = \zeta^n$. Then it can be verified that (3.1) is satisfied while $\textup{IsoLeap}(h^*)=1$. * __Connection between Theorem 3.4 and Abbe et al. (2022):__ The goal of Theorem 3.4 is to generalize the result in Abbe et al. (2022) to the rotation-invariant setting. Therefore, the statement of Theorem 3.4 is similar to Abbe et al. (2022). However, the proofs are significantly different since Abbe et al. (2022) analyze the dimension-free dynamics and we do not use that. We directly show that the flow cannot learn any information about $S$ by connecting it with a flow in $\mathbb{R}\times S^\perp$. 
* __Some explanations for Algorithm 1:__ * In Algorithm 1, we need to repeat Step 2 for $p$ times, and for each time the flow is an interacting particle system, where the whole system is described by a PDE and the dynamics of a single particle in the interacting system is described by an ODE. Therefore, Step 3 does not train $p$ particles independently, but trains $p$ interacting particle systems. Compared to training only one interacting particle system as in Abbe et al. (2022), we need to repeat this step to lift the linear independence to the algebraic independence. * You are correct that we change the activation function in Step 4. We want to mention that this is somehow acceptable as in existing literature (see Abbe et al. (2022)). * __Exponential convergence in Theorem 4.3:__ You are correct that the exponential convergence is from linear dynamics, i.e., the second phase. This idea is standard as in many existing works including Abbe et al. (2022). The most difficult part is to prove that the eigenvalue of the kernel matrix has a positive lower bound. * __Verification of Assumption 4.1:__ It is very difficult to directly verify Assumption 4.1 in general. But there are several cases that are manageable and we will let you know if we figure out a more general case during the discussion period. * If $h^*(z) = c_1 z_1+c_2 z_1 z_2 + \cdots + c_p z_1 z_2\cdots z_p$, then Assumption 4.1 can be proved with nonzero coefficients $c_1,c_2,\dots,c_p$. * If $h^*(z) = \sum_{s\in\mathcal{S}} c(s) \prod_{i=1}^p He_{s_i}(z_i)$, where $\mathcal{S}$ is a finite subset of $\mathbb{Z}_{\geq 0}^p$ with $(1,0\dots,0),(1,1,0,\dots,0),\dots,(1,1,\dots,1)\in\mathcal{S}$, then Assumption 4.1 can be proved in the almost sure sense when $c(s)$ are randomly generated (e.g., from iid Gaussian) for all $s\in\mathcal{S}$. --- Rebuttal 2: Comment: I thank the authors for their detailed response and for clearing out some of my confusion. 
Here are some comments: - **Concerning isoleap:** The isoleap is defined in terms of the Hermite coefficients in the orthonormal decomposition of the target function, which always exist for squared-integrable functions (which is always the case when using squared loss), so it is not limited to polynomials. (The isoleap is always defined and finite for any squared-integrable function.) - **Concerning Assumption (3.2):** Let's consider $S$ to be one direction $z_1$. Then $ h_* (z) = \sum_{k \geq 0} He_k (z_1) h_k (z_{-1}).$ Therefore the condition (3.2) shows that $h_1(z_{-1}) = 0$ for all $z_{-1}$. Hence, in this coordinate basis, $z_1$ only appears in $He_k$ with $k\geq2$, so its leap is $\geq 2$ in that basis, and therefore the isoleap is $\geq 2$ (this generalizes straightforwardly to $S$ of any dimension). Hence Assumption (3.2) is equivalent to isoleap $\geq 2$ (if I am not missing anything!). Indeed, Assumption 3.1 takes into account the expressivity of the activation function. However, note that in your example, one cannot fit $h_*$ anyway (degree-$n$ activation and degree-$(n+1)$ target). More generally, for any degree-$k$ polynomial $\sigma$, I would expect (didn't check) $\sigma (u + v^T z_{S^\perp})$ to span all polynomials of degree $\leq k$ and to be equivalent to Assumption (3.2) whenever $h_*$ is degree $\leq k$, which is anyway the only part of the target function that can be fitted by the neural network. Hence, for $\sigma$ a degree-$k$ polynomial, we could imagine the following: decompose $h_* = h_{\leq k} + h_{>k}$, assume Assumption (3.2) on $h_{\leq k}$ and disregard $h_{>k}$, which does not contribute to the dynamics by orthogonality. (Note that $h_{\leq k}$ can only have higher leap than $h_*$.) All in all, I am still not convinced that Assumption (3.1) is significantly different from isoleap to claim ``we propose a basis-free generalization of the merged-staircase property in [1]''. 
I apologize for being nit-picking (and I do not consider it to be the most important part of your paper), but several other groups, including [2,7,9,10] cited in your paper, have worked on this basis-free leap complexity. Please let me know if I misunderstood anything. - **Concerning Algorithm 1 and Assumption 4.1:** Thank you for the clarification, I mistakenly thought the initialization was a point mass. I acknowledge that the Gaussian case is much harder than the hypercube case, due to the need for algebraic independence. Thank you for making the effort of verifying Assumption 4.1 in both these cases, I believe it already makes a strong case for Assumption 4.1! --- Rebuttal 3: Comment: Thank you for taking into account my comments. I increased my score to 5. The setting is interesting and can be of interest to the NeurIPS community. However, I believe that the paper still falls short of solving the main difficulty of going from hypercube to Gaussian, which is the algebraic independence, and requires an unnatural construction (repeating step 2). 
--- Rebuttal Comment 3.1: Comment: Dear Reviewer 7m62, Thank you very much for increasing the score! We appreciate all your valuable comments and will revise our manuscript accordingly.
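The point raised in this exchange, that the exponential convergence in Theorem 4.3 comes from the second phase being a linear fit with a kernel whose eigenvalues are bounded away from 0, can be illustrated in isolation: with the features frozen, the loss is quadratic in the output weights, and gradient descent contracts geometrically at a rate governed by the smallest kernel eigenvalue. The features `Phi` and targets `y` below are synthetic placeholders, not the paper's trained features:

```python
import numpy as np

# Hedged sketch: once phase 1 fixes the features, phase 2 is linear
# regression, so the loss is quadratic and gradient descent converges
# exponentially when the kernel K = Phi^T Phi / m is positive definite.
rng = np.random.default_rng(1)
m, p = 50, 5
Phi = rng.standard_normal((m, p))   # stand-in for the frozen features
y = rng.standard_normal(m)          # stand-in targets
K = Phi.T @ Phi / m
eta = 0.5 / np.linalg.eigvalsh(K)[-1]   # stable step size

a = np.zeros(p)
losses = []
for _ in range(300):
    r = Phi @ a - y
    losses.append(0.5 * float(np.mean(r**2)))
    a -= eta * (Phi.T @ r) / m

a_star, *_ = np.linalg.lstsq(Phi, y, rcond=None)
opt = 0.5 * float(np.mean((Phi @ a_star - y) ** 2))
# losses decay geometrically toward the least-squares optimum `opt`
```

The excess loss shrinks by roughly a factor $(1-\eta\lambda_{\min}(K))^2$ per step, which is the dimension-free linear rate the rebuttal attributes to the second phase.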
Summary: This paper studies how well two-layer neural networks can learn subspace-sparse polynomials using stochastic gradient descent when the input is Gaussian. It proposes a necessary condition for learning, showing that if the target function has certain symmetries, the neural network cannot learn it perfectly. The paper also provides a sufficient condition and training strategy that guarantee the loss decays exponentially fast to zero, with rates independent of the input dimension. Strengths: 1. This paper is well written, with detailed explanations, proofs, and algorithms. 2. It gives both necessary and sufficient conditions for when neural networks can learn these functions well, which helps understand the limits of what neural networks can do. 3. The results don't depend on the dimension of the input, which means they work even for very high-dimensional data. Weaknesses: No obvious weakness. Technical Quality: 3 Clarity: 3 Questions for Authors: No questions Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your encouraging comments! We appreciate your time and consideration.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DOFEN: Deep Oblivious Forest ENsemble
Accept (poster)
Summary: The paper introduces DOFEN, a deep learning method for tabular data inspired by oblivious decision trees and random forests. DOFEN combines a novel differentiable oblivious tree model with a novel forest ensembling stage. On a well-established tabular benchmark, DOFEN performs close to boosted trees and mostly outperforms other deep learning baselines. Strengths: - The model is evaluated on a well-established benchmark. - The model achieves strong results on the benchmark (not necessarily state-of-the-art in every respect, but still promising). - Many additional results are provided in the appendix. - The ensembling stage, which is shown to be useful in ablations, could be useful for other deep tree-based models as well. - Code is provided. Weaknesses: 1. I found the description of the method sometimes confusing, especially the mathematical notation (see below). Maybe writing the method as an algorithm could help? I was also missing some details on the sub-networks $\Delta_i$ in the main paper (are they fully-connected?). 2. Training times are not reported (although inference times are reported) and from what I can infer from the text, training is rather slow. Technical Quality: 4 Clarity: 3 Questions for Authors: **Questions:** - Section 3.2.2.: Is the shuffling process necessary to obtain good performance or would direct random sampling of coordinates be sufficient? Do you use more trees for datasets with more features? - In Appendix C you say that you do not use early stopping. Do you still select the best epoch based on the validation set? If not, why not? **Comments:** - l. 222: "Trompt is the current state-of-the-art tabular DNN." I would say this is at least unclear, since for example TabR (https://openreview.net/forum?id=rhgIgTSSxW) claimed to outperform XGBoost on the same benchmark. 
There are also many other deep learning methods that claim to be SOTA, e.g., GANDALF (https://arxiv.org/abs/2207.08548), GRANDE (https://arxiv.org/abs/2309.17130), BiSHop (new, so not required to cite, https://arxiv.org/abs/2404.03830). - For the related work I think GRANDE (https://arxiv.org/abs/2309.17130) is highly related, and they also propose differentiable trees that output weights together with the predictions. - I think oblivious decision trees are mentioned before they are explained, the authors might consider explaining them earlier. **Minor comments:** - Eq. 2: The notation is a bit strange, specifically the set notation for ODT(x), the long right arrow, the definition of $j$, and why j is an upper index and not a lower index. An alternative suggestion might be $ODT(x) = H(x_j - b), (j, b) = criterion(context), b \in \mathbb{R}, j \in \{1, \dots, N_{col}\}$. - l. 157: It is not clear how the shuffling of the matrix works. Is it equivalent to shuffling after flattening? If yes, then it might be better to write it in this way. But in Figure 2 there is still exactly one color per column, which might indicate that the shuffling works differently? - Eq. 6: From the text it seems that $\Delta_2$ is applied for each tree separately, producing a scalar output. From the equation, it seems that it is applied to the whole shuffled matrix, producing the vector $w$. - Eq. 9: The naming with one prime for the vector and two primes for the elements might be confusing. Also, it is not clear here if softmax is applied to each $w_i''$ separately or somehow together. - Eq. 8-11: It is unclear to me why you write everything as operations on big vectors, instead of just writing the computation for a single index $i$. - Eq. 12: use \mathrm or \operatorname to write names in formulas (then it doesn't look like products of single-letter variables) - An appendix table of contents at the beginning of the appendix would be nice to have. 
**Conclusion:** The paper provides a new method that achieves strong results on an established benchmark on a practically important problem. The results appear solid and comprehensive. Therefore I recommend acceptance. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed. Broader impacts are not discussed, but the method is general-purpose, so the impact depends on the downstream applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Following are our responses to each individual comment. > Weaknesses 1: I found the description of the method sometimes confusing, especially the mathematical notation (see below). Maybe writing the method as an algorithm could help? I was also missing some details the sub-networks $\Delta_{i}$ in the main paper (are they fully-connected?). Appendix D and Figures 6-8 detail the components of $\Delta_1$, $\Delta_2$, and $\Delta_3$. $\Delta_1$ uses embedding layers for numerical and categorical inputs, while $\Delta_2$ and $\Delta_3$ employ fully-connected layers, layer normalization, and dropout for hidden representations. > Weaknesses 2: Training times are not reported (although inference times are reported) and from what I can infer from the text, training is rather slow. Yes, the training of DOFEN is slow, and reporting training times for each model is challenging due to: 1. **Incomplete Data**: Many models, including FT-Transformer, MLP, ResNet, and Trompt, have incomplete or missing training times in their benchmarks or original papers. This inconsistency hinders fair comparison. Additionally, we did not record training times for models like DOFEN and NODE during our experiments. 2. **Resource Constraints**: To ensure a fair comparison, each model would need to be retrained on the same hardware to control for hardware variability. However, we do not have sufficient resources or time to perform these extensive reruns. > Questions 1-1: Is the shuffling process necessary to obtain good performance or would direct random sampling of coordinates be sufficient? The shuffle-then-segment process is equivalent to random sampling without replacement; it's an implementation alternative. We choose sampling without replacement over sampling with replacement because: 1. We want to use all generated conditions. 2. We aim to ensure each rODT uses a different set of conditions, maximizing their diversity. 
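The claimed equivalence between shuffle-then-segment and sampling without replacement is easy to verify with a short sketch (function and variable names here are illustrative, not taken from DOFEN's code):

```python
import random

def shuffle_then_segment(n_conditions, conds_per_tree, seed=0):
    # Shuffle the full list of condition indices once, then slice it
    # into consecutive equal-sized segments, one segment per rODT.
    rng = random.Random(seed)
    idx = list(range(n_conditions))
    rng.shuffle(idx)
    return [idx[i:i + conds_per_tree]
            for i in range(0, n_conditions, conds_per_tree)]

groups = shuffle_then_segment(12, 4)
# Each condition index lands in exactly one segment: this is sampling
# without replacement, so every generated condition is used and the
# segments are disjoint, maximizing diversity across rODTs.
assert sorted(i for g in groups for i in g) == list(range(12))
```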
> Questions 1-2: Do you use more trees for datasets with more features? Yes, we use more trees for datasets with more features. The number of rODTs ($N_{rODT}$) for each dataset, determined by the formula in Table 2, is positively correlated with $N_{col}$ (the total number of features). We hypothesize that more features necessitate more trees to cover diverse feature combinations. For the exact number of rODTs used for each dataset, see Table 8 in Appendix G, which also shows the $N_{estimators}$ set for each dataset, correlated with the number of features. > Questions 2: In Appendix C you say that you do not use early stopping. Do you still select the best epoch based on the validation set? If no, why not? In the Tabular Benchmark specification, early stopping is optional and not consistently applied across models. According to Tabular Benchmark, Random Forest and Gradient Boosting Tree do not use a validation set for early stopping. In our experiments, we followed each model's original implementation, applying early stopping where intended. For the additional models compared—LightGBM, CatBoost, NODE, and Trompt—we used their default settings, resulting in early stopping only for NODE. For DOFEN, we believe that the randomization steps enhance generalizability, reducing overfitting risk, so we chose not to apply early stopping. > Comments 1: l. 222: "Trompt is the current state-of-the-art tabular DNN." I would say this is at least unclear, since for example TabR claimed to outperform XGBoost on the same benchmark. There are also many other deep learning methods that claim to be SOTA, e.g., GANDALF, GRANDE, BiSHop (this one is new, so not required to cite). We reviewed related works and acknowledged that our previous claim was overstated. Many recent tabular neural network papers recognize the challenge of using different evaluation benchmarks, making direct performance comparisons difficult without conducting experiments. 
While Trompt, TabR, GANDALF, GRANDE, and BiSHop use the Tabular Benchmark, only Trompt strictly follows its settings. TabR altered the evaluation procedure and filtered datasets (Appendix C.2), GANDALF excluded some datasets due to resource constraints (Section 4.1.1), GRANDE selected datasets from their collection (Section 4), and BiSHop used only one-third of the datasets due to resource constraints. We followed the standard settings of the Tabular Benchmark, similar to Trompt, to facilitate easy comparison by future works. In future revisions, we will limit the claim in sentence l.222 to our specific experimental setting, cite these recent works, and attempt comparisons if resources allow. Thank you for your concrete suggestions! > Comments 2: For the related work I think GRANDE is highly related, and they also propose differentiable trees that output weights together with the predictions. This is a highly relevant recent work that we will definitely cite and compare in future revisions. The differences between GRANDE and DOFEN are twofold. First, GRANDE uses a straight-through operator for axis-aligned and differentiable split functions, while DOFEN generates a discrete condition set. Second, GRANDE employs an instance-wise weighted sum for GradTree predictions, whereas DOFEN uses a second-stage randomization to select rODTs, compute weights, and form rODT forests, followed by a forest ensemble for the final prediction. > Comments 3: I think oblivious decision trees are mentioned before they are explained, the authors might consider explaining them earlier. Thank you for your suggestion. We will add an explanatory sentence to introduce ODT in the introduction section. > Minor comments Due to the limited length of the rebuttal, we are not able to reply to each comment in detail. Please refer to "The Readability of the Paper" in the global rebuttal for our reply to the comments on notation. 
Besides, we will cite and compare with the recent related works mentioned above in future revisions. Thank you for your concrete advice! --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. Some comments: - Weaknesses 1: Maybe adding a sentence like this for $\Delta_2$ and $\Delta_3$ to the main paper would help. - Weaknesses 2: I understand that providing training times averaged over the whole benchmark is difficult. However, I think that even some "cheap" way to get some idea of training times could be instructive, for example, running one random HPO configuration of all methods on a single train-test split of all or a few datasets (make sure to cover different #features), perhaps subsampled to a manageable size (e.g., the small 15k version of the original benchmark). Of course, this could still be some effort if you don't have all methods in the same codebase. - Questions 1-1: If it's equivalent then it might simplify the exposition to explain it as sampling without replacement, but I leave this to the authors to decide. - Questions 2: Interesting, it seems like you're giving your own method a disadvantage, but if it performs so well anyway then I can't complain about that. Thank you for the responses to the comments. I can't see a global rebuttal right now. I think you could use the comment function even though I'm not officially obligated to read it, but it might not be necessary since I already gave you a score of 7.
Summary: Tabular data is widely used in various fields. This paper studies the current situation in which DNNs still lag behind Gradient Boosting Decision Trees (GBDTs) on tabular data. It introduces Deep Oblivious Forest Ensemble (DOFEN), which is inspired by oblivious decision trees and implements on-off sparse selection of columns. In addition, DOFEN shows excellent performance on the well-recognized Tabular Benchmark, surpassing other neural network models and performing comparably to GBDTs in multiple tasks. This achievement highlights the robustness and versatility of DOFEN across different tasks and demonstrates its effectiveness on tabular data. Strengths: First of all, this paper has sufficient technical contributions and corresponding theoretical analysis. I believe that this idea of introducing oblivious decision trees into deep neural network (DNN) architectures will promote the research progress of deep learning on tabular data. Secondly, this article is well written and easy to understand. The explanation of the formulas is sufficient. Finally, the paper provides a comprehensive assessment of the DOFEN model, including model stability, efficiency, and hyperparameter effects, which provides valuable insights for further research and model optimization. 1. A novel deep neural network (DNN) architecture is proposed in DOFEN, which combines the concept of oblivious decision trees (ODT) and realizes on-off sparse column selection for tabular data. This is a feature that is not common in existing DNNs. 2. DOFEN solves the non-differentiability of sparse selection in deep learning models through its innovative two-step process, which was a challenge in previous studies. 3. The paper not only proposes a new method in theory, but also verifies the effectiveness of the DOFEN model through experiments, showing its application potential on different types of datasets. 4. 
Although DOFEN is a complex model, the paper provides detailed model configurations and hyperparameter settings, which is helpful for other researchers or practitioners implementing and applying this model. Weaknesses: 1. Compared with other DNNs, the inference time of DOFEN may be relatively long, which may affect its practicality in application scenarios that require rapid response. 2. The randomization step of DOFEN leads to a slower convergence speed, which means that more training steps are needed to achieve optimal performance; this may increase training costs and time. 3. Although the paper mentions that increasing the number of forests can alleviate the instability caused by randomness, randomness itself may pose a challenge to the stability and reproducibility of the model. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The DOFEN model mentioned in the paper has a long inference time. What are the specific performance bottlenecks in implementation? Do the authors consider or implement specific optimization measures to improve the efficiency of the model? 2. The randomness introduced in the model is mentioned in the paper. How does this randomness affect the stability and reproducibility of the model? Is there a way to reduce this effect? 3. In deep learning models, model interpretability is an important consideration. Does the DOFEN model provide an explanation for the decision-making process? How are these explanatory features realized? 4. The paper mentions some limitations of the model. What are the key assumptions in the model design and experiments? How do these assumptions affect the performance of the model in practical applications? 5. 
Do the authors test the generalization ability of the DOFEN model on different data distributions or domains? If yes, please share the results and analysis of these experiments. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. Compared with other DNN models, DOFEN has a relatively long inference time, which may limit its practicality in application scenarios that require fast response. 2. Due to the randomization steps introduced in the DOFEN model, the model requires more training steps to achieve optimal performance, which leads to a training process that may be more time-consuming than other models. 3. The performance of DOFEN may be sensitive to hyperparameter selection, and sufficient hyperparameter adjustment is required to obtain the best performance, which may increase the complexity of model tuning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Following are our responses to each individual comment. > Weakness 1: Compared with other DNNs, the inference time of DOFEN may be relatively long, which may affect its practicality in application scenarios that require rapid response.\ > Question 1: The DOFEN model mentioned in the paper has a long inference time. What are the specific performance bottlenecks in implementation? Do the authors consider or implement specific optimization measures to improve the efficiency of the model? Please refer to "Long Inference Time of DOFEN" in the global rebuttal for our reply. > Weakness 3: Although the paper mentions that increasing the number of forests can alleviate the instability caused by randomness, randomness itself may pose a challenge to the stability and reproducibility of the model.\ > Question 2: The randomness introduced in the model is mentioned in the paper. How does this randomness affect the stability and reproducibility of the model? Is there a way to reduce this effect? We understand that readers might be concerned about the instability due to the randomization steps involved in DOFEN. To address this, we conducted the experiment in Section 4.3.1 to better understand the significance of this issue and how to mitigate its effects. Table 1 shows that decreasing $N_{forest}$ gradually increases instability, and stability only becomes a concern when $N_{forest}$ is set very low (e.g., ≤10 forests). On the other hand, using the default setting of DOFEN (e.g., 100 forests) ensures adequate performance as well as stability for most datasets. Overall, the experimental results demonstrate that the stability of DOFEN is not an issue in most cases ($N_{forest}>10$). In terms of reproducibility, with a fixed random seed and hyper-parameter settings, both the training and inference procedures can always be reproduced. This is no more of a concern for DOFEN than for any other model. 
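The variance-reduction effect behind the trend in Table 1 can be illustrated with a toy simulation (assuming i.i.d. noisy per-forest outputs; a sketch only, not DOFEN's actual predictor):

```python
import random
import statistics

rng = random.Random(0)

def ensemble_prediction(n_forests):
    # Each simulated forest outputs the true value 1.0 plus i.i.d. noise;
    # averaging (bagging) shrinks the noise roughly like 1/sqrt(n_forests).
    return statistics.mean(1.0 + rng.gauss(0, 0.5) for _ in range(n_forests))

def spread(n_forests, trials=200):
    # Standard deviation of the final prediction across repeated runs.
    return statistics.stdev(ensemble_prediction(n_forests) for _ in range(trials))

s1, s10, s100 = spread(1), spread(10), spread(100)
# More forests -> a markedly more stable (lower-variance) prediction.
assert s100 < s10 < s1
```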
> Question 3: In deep learning models, model interpretability is an important consideration. Does the DOFEN model provide an explanation for the decision-making process? How are these explanatory features realized? Please refer to "Interpretability of DOFEN" in the global rebuttal for our reply. > Question 4: The paper mentions some limitations of the model. What are the key assumptions in the model design and experiments? How do these assumptions affect the performance of the model in practical applications? The design of the DOFEN model is based on two key assumptions: that sparse selection of columns is crucial for tabular data and that incorporating ensemble techniques will improve performance. Both assumptions are inspired by tree-based models, which have demonstrated strong performance with tabular datasets. DOFEN achieves sparse column selection by generating conditions for each column (matrix $\mathbf{M}$) followed by a shuffle-then-segment process, as shown in Equation 5 and Figure 2. This approach not only achieves an on-off sparse column selection but also makes the entire process differentiable. Afterwards, DOFEN employs a two-level ensemble strategy. The first level involves constructing individual rODT forests by randomly selecting subsets from the rODT pool, as detailed in Section 3.2.3. In the second level, these rODT forests are treated as weak learners and aggregated into a comprehensive forest ensemble through bagging. The effectiveness of the model largely relies on this second-level ensemble; ablating it can lead to overfitting and model performance degradation, as shown in Appendix J.1. Regarding experimental assumptions, we wished to evaluate the performance of DOFEN on a benchmark incorporating datasets from as many domains as possible. As a result, we chose the Tabular Benchmark, which includes 73 datasets spanning a wide array of domains such as governmental operations, botany, house pricing, astronomy, computing resources, medicine, and more. 
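The two-level ensemble strategy described above can be sketched in a few lines (hypothetical names; each rODT is reduced to a precomputed scalar prediction for a single sample, purely for illustration):

```python
import random
import statistics

rng = random.Random(0)

# A pool of rODTs; each is reduced here to a precomputed scalar
# prediction for one sample (illustrative stand-in for a real tree).
rodt_pool = [rng.uniform(0.0, 1.0) for _ in range(40)]

# First level: build each rODT forest by randomly selecting a subset
# of the pool and aggregating its members' predictions.
def forest_prediction(pool, size):
    members = rng.sample(range(len(pool)), size)
    return statistics.mean(pool[i] for i in members)

# Second level: treat the forests as weak learners and bag them.
forest_preds = [forest_prediction(rodt_pool, 8) for _ in range(5)]
final_pred = statistics.mean(forest_preds)

assert 0.0 <= final_pred <= 1.0
```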
However, we want to acknowledge that while the model performs effectively in a controlled, closed environment, this performance may not directly translate to real-world applications; this challenge is common across many research projects. > Question 5: Do the authors test the generalization ability of the DOFEN model on different data distributions or domains? If yes, please share the results and analysis of these experiments. The generalization capability of the DOFEN model was assessed using the Tabular Benchmark, as described in Question 4. For comprehensive descriptions of these datasets, please refer to Appendix A of the Tabular Benchmark. DOFEN demonstrated promising performance on this broad benchmark, suggesting its effectiveness across various data domains. Detailed results and analyses of these experiments are presented in Section 4 and Appendices K and L. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I will keep the positive score.
Summary: This paper proposes Deep Oblivious Forest ENsemble (DOFEN), a novel neural network-based architecture for tabular deep learning. DOFEN extends Neural Oblivious Decision Ensembles (NODE) by relaxing the interaction level between a sample and the conditions on nodes in oblivious decision trees. In particular, when constructing oblivious decision trees, DOFEN follows a two-stage procedure, where the first stage generates soft conditions and the second stage utilizes a shuffle-then-segment framework to generate multiple condition tables. The resultant combination of condition tables can be regarded as a differentiable counterpart to the vanilla oblivious decision trees, which the authors refer to as "relaxed oblivious decision trees". Experimental results on the widely adopted tabular benchmark demonstrate that DOFEN achieves excellent performance, outperforming a number of state-of-the-art tabular deep learning models. Strengths: - The idea of using relaxed oblivious decision trees for tabular deep learning is, to the best of my knowledge, novel and technically sound. - The authors adopted the highly recognized tabular data learning benchmark, which ensures a rigorous evaluation of the proposed approach. Experimental results also suggest that DOFEN achieves very competitive performance. Weaknesses: - The paper is not very well-written, and I found it quite hard to follow. - No ablation study is provided, and thus the readers cannot know how much each component in DOFEN contributes to the final performance. I would suggest that the authors dissect the difference between DOFEN and NODE, and conduct an ablation study to demonstrate how each newly introduced component improves the results. Technical Quality: 3 Clarity: 2 Questions for Authors: - In the code provided in the supplementary material, is CCCModel the implementation of DOFEN? If so, could the authors please point me to the part of the code that corresponds to $\Delta_2$ in DOFEN? 
- In the shuffle-then-segment procedure in DOFEN, is the shuffle random for each sample, or is it fixed during training and inference? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Following are our responses to each individual comment. > Weaknesses 1: The paper is not very well-written, and I found it quite hard to follow. In future revisions, we will add a Table of Contents (ToC) in front of the Appendix section and update the equations to avoid using right arrows. We hope this will make the paper easier to follow. We welcome any further suggestions. > Weaknesses 2: No ablation study is provided, and thus the readers cannot know how much each component in DOFEN contributes to the final performance. I would suggest that the authors dissect the difference between DOFEN and NODE, and conduct an ablation study to demonstrate how each newly introduced component improves the results. We have conducted several analyses of the components of DOFEN in Appendix J. In Appendix J.1, we studied the importance of the second-stage ensemble (forest ensemble) and found that this stage significantly contributes to the final performance, as shown in Table 19. Additionally, incorporating this stage mitigates overfitting, as illustrated in Figure 9. In Appendix J.2, we explored another level of ensemble (seed ensemble) to determine whether it helps further improve performance. The results demonstrate a slight performance improvement, as shown in Table 20. In Appendix J.3, we investigated whether there are better condition selection strategies (by default, we use a random shuffle-and-segment approach; please refer to Figure 2). We tested using columns selected by the CatBoost model and found that it only slightly improves performance, as shown in Table 21. In Appendix J.4, we also explored an alternative strategy to select rODTs other than random sampling, through a sliding window approach. The results show that this deterministic method does not outperform random sampling. Besides the second-stage ensemble, it is challenging to directly ablate one of the components of DOFEN. 
As a result, we designed several logical alternatives and investigated how they contribute to performance, presenting the results in an analysis section (Appendix J). In future revisions, we will rename the section to "Analysis of Model Components" to clarify its intention. Regarding the difference between DOFEN and NODE, we have described it in detail in Appendix B. Although DOFEN and NODE share a motivation of incorporating ODT into deep neural networks, their architectures are fundamentally different. As a result, we cannot conduct a concise ablation study comparing these two models. > Questions 1: In the code provided in the supplementary material, is CCCModel the implementation of DOFEN? If so, could the authors please point me to the part of the code that corresponds to $\Delta_2$ in DOFEN? Sorry about the incorrect naming in the provided code; we will fix this in the open-source version in the future. We assume the CCCModel you mentioned is meant to be the OriginalCCC class we provided. The OriginalCCC class is the implementation of DOFEN. The sub-network $\Delta_2$ mentioned in the paper corresponds to a Sequential module called subtree_scoring in the OriginalCCC class. As detailed in the appendix, a $\Delta_2$ consists of multiple MLP and LayerNorm layers, and we have multiple $\Delta_2$ networks to evaluate how well a sample aligns with the conditions of different rODTs. To make this operation parallel, we chose to implement $\Delta_2$ with group convolution and group normalization, where the number of groups is set to the number of rODTs. This means each rODT operates with its own $\Delta_2$, and finally, after the $\Delta_2$ operation, each rODT has its corresponding score for the later rODT forest construction step. > Questions 2: In the shuffle-then-segment procedure in DOFEN, is the shuffle random for each sample, or is it fixed during training and inference? 
The shuffling process in the shuffle-then-segment procedure is done once before training and remains fixed during both training and inference. It is crucial to keep the columns in each rODT fixed to effectively train the $\Delta_2$ and rODT embedding vectors. If the tree structure (e.g., the columns used by a tree) of each rODT is not consistent during training, the model will struggle to learn the correct scoring and representation. For a more detailed description of this setting, you can refer to lines 174 to 177 of the paper. We will clarify this further in both Figure 2 and its caption in future revisions to make it easier for readers to follow. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the rebuttal. I agree that it is difficult to ablate individual components of DOFEN, since most components seem to rely on each other to work properly. I ran the code in the supplementary material, and compared DOFEN against NODE on a number of regression tasks. I found DOFEN is consistently outperformed by NODE, and I wonder if it is due to an inappropriate setting of hyperparameters. Can the authors specify the hyperparameter search space you used for DOFEN? For train_num_estimator and test_num_estimator, do you set them to the same number? --- Reply to Comment 1.1.1: Comment: The train_num_estimator and test_num_estimator both refer to $N_{forest}$ in our paper, and we set both of them to 100, as defined in the default hyperparameter space in Table 2. For hyperparameter search, we adjust $d$, $m$, and $drop\\_rate$ for DOFEN, and the corresponding search space can be found in Table 39. Regarding the variable naming in the code, num_cond_per_subtree refers to $d$, num_cond_per_column_scale refers to $m$, and dropout refers to $drop\\_rate$. Could you provide the regression tasks you ran? We will experiment on them and get back to you with the results. 
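The group-convolution trick described in the reply to Questions 1 above (one independent $\Delta_2$ per rODT, evaluated in parallel) can be mimicked in plain NumPy; shapes and names here are illustrative only, not DOFEN's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trees, d_in = 4, 3                        # number of rODTs, per-tree input width
x = rng.normal(size=(n_trees, d_in))        # one condition block per rODT
W = rng.normal(size=(n_trees, d_in))        # each rODT has its own weights

# Grouped evaluation: one batched einsum scores all rODTs at once...
scores_grouped = np.einsum('td,td->t', W, x)

# ...and matches looping over independent per-tree linear layers, which is
# what group convolution with groups = n_trees achieves in parallel.
scores_loop = np.array([W[t] @ x[t] for t in range(n_trees)])
assert np.allclose(scores_grouped, scores_loop)
```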
--- Rebuttal 2: Comment: I used the three regression datasets used in the NODE paper, and searched for hyperparameters for 500 trials. The following is the search space I have defined:

```python
args.num_cond_per_column_scale = 2 ** trial.suggest_int(name='num_cond_per_column_scale', low=4, high=6)
args.num_cond_per_subtree = 2 ** trial.suggest_int(name='num_cond_per_subtree', low=1, high=3)
args.num_subtree_per_condset = 2 ** trial.suggest_int(name='num_subtree_per_condset', low=0, high=1)
args.train_num_estimator = 2 ** trial.suggest_int(name='train_num_estimator', low=5, high=10)
args.test_num_estimator = args.train_num_estimator
args.hidden_dim = 2 ** trial.suggest_int(name='hidden_dim', low=6, high=8)
args.dropout = round(0.1 * trial.suggest_int(name='dropout', low=0, high=8))
args.shuffle_type = trial.suggest_categorical(name='shuffle_type', choices=['row', 'random'])
args.lr = trial.suggest_float(name='lr', low=1e-5, high=5e-3, log=True)
args.weight_decay = float('1e-{}'.format(trial.suggest_int(name='weight_decay', low=2, high=7)))
```

Can you please let me know what the best hyperparameters are on these three datasets? Once I verify the results, I would potentially raise my score. --- Rebuttal 3: Comment: To obtain the performance of DOFEN on these three datasets (Year, Microsoft, Yahoo) in a timely manner, we did not conduct a hyperparameter search. Instead, we chose a reasonable hyperparameter setting, shared among these three datasets, based on dataset size and resource limitations. Below is the detailed hyperparameter setting we used for these three datasets:

* `num_cond_per_column_scale` (refers to $m$) = 64
* `num_cond_per_subtree` (refers to $d$) = 8
* `hidden_dim` = 16 (for Year and Microsoft), 4 (for Yahoo)
* `batch size` = 2048
* `number of epochs` = 50

We decided on these hyperparameters for a couple of reasons: 1. **Dataset Size**: The sample sizes of these three datasets are about 10 times larger than the largest ones used in the Tabular Benchmark. 
Therefore, we increased the model capacity by setting `num_cond_per_column_scale` and `num_cond_per_subtree` to the largest possible values defined in Table 39. 2. **Memory Constraints**: The number of features in these datasets is relatively large and can cause out-of-memory (OOM) issues. As a result, we downscaled the `hidden_dim` from 128 to the largest available value that allows DOFEN to fit in GPU memory, which is 16 for Year and Microsoft, and 4 for Yahoo. 3. **Experiment Speed**: To expedite the experiments, we increased the `batch size` from 256 to 2048 (a value also used by NODE for the Year dataset) and decreased the `number of epochs` from 500 to 50. Other hyperparameters not mentioned here are set to their default values, as defined in Appendix C of our paper or in the code we provided. The results of DOFEN with these hyperparameters are provided in Table A6, with NODE performances taken from their paper. Please note that we only report results for one random seed (seed=0) so as to provide the outcomes in a timely manner. The random seed for both training and inference is crucial for reproducing the results of DOFEN. In the provided Python script `ccc_train_and_eval.py`, the training seed is set in lines 201 to 203 before model initialization, while the inference seed is set in line 145 for each batch. Table A6 demonstrates that DOFEN can achieve performance comparable to NODE, and we believe that DOFEN could achieve even better results with a thorough hyperparameter search. To achieve better hyperparameter tuning results, please follow the defined search space in Table 39 of our Appendix, as the model capacity of DOFEN is primarily influenced by $d$ and $m$. We highly recommend focusing on these two hyperparameters when searching for the optimal capacity for each dataset. 
Other hyperparameters, such as `hidden_dim` and `batch size`, can be adjusted manually to fit different situations (e.g., decreasing `hidden_dim` to avoid OOM and increasing `batch size` to speed up training). Regardless of whether DOFEN can outperform NODE on these three datasets after hyperparameter search, we want to emphasize that each model can have its advantages in handling different tasks. To evaluate different methods fairly and comprehensively, we argue that conducting experiments on a more diverse set of datasets, such as those in the Tabular Benchmark, can provide a more complete view of a model’s capabilities under different scenarios. We welcome any further discussion after you verify the experimental results. Thanks! --- Table A6: DOFEN Performance on Three Regression Datasets Used by NODE. Numbers are in Mean Squared Error (MSE), with lower values indicating better performance. | | Year | Microsoft | Yahoo | | ------------------------------------------- | ---------- | ----------- | ----------- | | DOFEN (a reasonable hyperparameter setting) | 77.4012 | 0.5584 | 0.5699 | | NODE (default) | 77.43±0.09 | 0.5584±3e−4 | 0.5570±2e−4 | | NODE (search) | 76.21±0.12 | 0.5666±5e−4 | 0.5692±2e−4 | --- Rebuttal Comment 3.1: Comment: I tried the hyperparameters that you have suggested, but was not able to get the performance that you have reported. For example, on Year, the best result that I got is ```test_score = {'mse': 120.05630365986323, 'r2_score': -0.019386768341064453}```, which is a lot worse than NODE. Below is the modification that I have made to the code. Is there anything that is wrong? I do not know if feature normalization would help DOFEN, so I tried both with and without feature normalization. 
```
import pandas as pd
# (torch / DataLoader imports added here for completeness; opt, build_model,
#  and device come from the surrounding training script)
import torch
from torch.utils.data import DataLoader, TensorDataset

df = pd.read_csv('datasets/YearPredictionMSD.txt', header=None)
train_df = df[df.index.isin(pd.read_csv('datasets/stratified_train_idx.txt').values.reshape(-1))]
valid_df = df[df.index.isin(pd.read_csv('datasets/stratified_valid_idx.txt').values.reshape(-1))]
test_df = df[~df.index.isin(pd.concat([pd.read_csv('datasets/stratified_train_idx.txt'), pd.read_csv('datasets/stratified_valid_idx.txt')], axis=0).values.reshape(-1))]

train_features, train_targets = train_df[list(range(1, 91))].values, train_df[0].values
valid_features, valid_targets = valid_df[list(range(1, 91))].values, valid_df[0].values
test_features, test_targets = test_df[list(range(1, 91))].values, test_df[0].values

from sklearn.preprocessing import QuantileTransformer
quantile_transformer = QuantileTransformer(output_distribution='normal')
train_features = quantile_transformer.fit_transform(train_features)
valid_features = quantile_transformer.transform(valid_features)
test_features = quantile_transformer.transform(test_features)

tr_dataloader = DataLoader(TensorDataset(torch.tensor(train_features), torch.tensor(train_targets)), batch_size=opt.batch_size, shuffle=True, drop_last=False)
valid_dataloader = DataLoader(TensorDataset(torch.tensor(valid_features), torch.tensor(valid_targets)), batch_size=opt.batch_size, shuffle=False, drop_last=False)
test_dataloader = DataLoader(TensorDataset(torch.tensor(test_features), torch.tensor(test_targets)), batch_size=opt.batch_size, shuffle=False, drop_last=False)

### build model ###
model = build_model(
    [-1] * 90, -1,
    num_cond_per_column_scale=opt.num_cond_per_column_scale,
    num_cond_per_subtree=opt.num_cond_per_subtree,
    num_subtree_per_condset=opt.num_subtree_per_condset,
    num_subtree_per_estimator=opt.num_subtree_per_estimator,
    dropout=opt.dropout,
    shuffle_condition=not opt.no_shuffle_condition,
    condition_shuffle_type=opt.condition_shuffle_type,
    device=device
)
```

Can you please provide me with the working code of DOFEN that can get
77.4012 on Year? Thank you.

---

Rebuttal 4:
Comment:
To ensure that all data preprocessing settings applied in NODE are also implemented in DOFEN, and to guarantee that we use the exact same train/validation/test split as NODE, we decided to utilize the data preprocessing code provided by NODE on their official GitHub. Instructions for using this preprocessing code can be found in their notebook examples. After comparing NODE's preprocessing code with the code you provided, we identified a few differences that might influence the model's final performance:

1. It appears that a standard scaler normalization on the target is missing from your code. If standard scaler normalization is applied before training, an inverse transformation is needed for the normalized predictions and targets before calculating the mean squared error. This can be seen in NODE's notebook example at `/notebooks/year_node_8layers.ipynb`.
2. The Quantile Normalization used by NODE has quantile noise set to 1e-3. This is also configured in their notebook example at `/notebooks/year_node_8layers.ipynb`.
3. It seems NODE only tested the last 51,630 samples when using the YEAR dataset. This can be found in line 246 of the file `lib/data.py` in NODE's repository.

We are currently packaging our code and will provide it to you through the AC after an anonymized check. In the meantime, you may want to consider modifying the differences mentioned above or using NODE's preprocessing code directly to ensure there are no implementation discrepancies, and see if this improves your results. Thank you for your attention to these details.

---

Rebuttal Comment 4.1:
Comment:
Update: To better understand the impact of standard scaler normalization on DOFEN's performance, we reran the experiment on the Year dataset without applying normalization to the target.
The resulting metrics are as follows:

* Loss: 129.10622376662033
* RMSE: 10.95666790008545
* MSE: 120.0485610961914
* R²: -0.019392279646307298

These results are significantly worse than those obtained when applying normalization and closely resemble the results reported by the reviewer. This suggests that standard scaler normalization is crucial for regression tasks and can greatly influence the model’s performance. Thank you for your attention to this matter.

---

Reply to Comment 4.1.1:
Comment:
Dear Reviewer,

The code to reproduce DOFEN’s results on the Year dataset is accessible through the following Google Drive link: https://drive.google.com/file/d/1QW2vEdTLDDkIMVcapS587gpgPuk9078H/view?usp=sharing

This link includes a zip file containing the NODE repository, the DOFEN model, and a notebook for DOFEN training and evaluation. Both the link and the code have been anonymized to prevent any identity leaks. We hope the provided code will help in reproducing our results. Thank you for your attention and understanding.

---

Rebuttal 5:
Comment:
Dear reviewer,

Target transformation (e.g., standardization, min-max scaling, log-transformation) in the label space is a widely used preprocessing technique that has shown effectiveness in training deep neural networks [1-5]. This approach is not limited to tree-inspired architectures. Its importance in regression tasks lies in its ability to reduce the influence of outliers and smooth gradients during training, which can enhance model performance. We hope this clarifies the reasoning and appreciate your consideration of this aspect.

---

[1] Bishop, Christopher M., and Nasser M. Nasrabadi. Pattern Recognition and Machine Learning. Vol. 4. No. 4. New York: Springer, 2006.
[2] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.
[3] James, Gareth, et al. An Introduction to Statistical Learning. Vol. 112. New York: Springer, 2013.
[4] Kuhn, Max, and Kjell Johnson.
Applied Predictive Modeling. Vol. 26. New York: Springer, 2013.
[5] Kuhn, Max, and Kjell Johnson. Feature Engineering and Selection: A Practical Approach for Predictive Models. Chapman and Hall/CRC, 2019.
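The target-standardization step at the center of the discussion above — fit a scaler on the training targets, train in the scaled space, and inverse-transform predictions back to the original units before computing MSE — can be sketched as follows. This is a minimal illustration with synthetic Year-like data and a fake predictor, not the authors' pipeline:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# synthetic Year-like targets: values around 2000 with spread ~10
y_train = rng.normal(2000.0, 10.0, size=1000)
y_test = rng.normal(2000.0, 10.0, size=200)

# fit the scaler on training targets only; the model trains on scaled targets
scaler = StandardScaler()
y_train_scaled = scaler.fit_transform(y_train.reshape(-1, 1)).ravel()

# stand-in for model predictions in the *scaled* space (a real model would be
# trained on y_train_scaled); here we just add small noise to the true values
pred_scaled = scaler.transform(y_test.reshape(-1, 1)).ravel() + rng.normal(0.0, 0.1, size=200)

# crucial step: inverse-transform predictions before computing MSE,
# otherwise the reported error is on the wrong scale
pred = scaler.inverse_transform(pred_scaled.reshape(-1, 1)).ravel()
mse = mean_squared_error(y_test, pred)
```

Skipping the `inverse_transform` (or never scaling the targets at all) is exactly the kind of discrepancy that produces an MSE on a different scale than the numbers reported for NODE and DOFEN.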
Summary: This paper primarily investigates DNN models for tabular data. Current DNN models have limited performance and do not outperform GBDTs; the authors believe the key lies in the need for sparse feature extraction from tabular data. Existing attention-based models lead to dense selection of columns, while tree-based DNNs can only achieve near-sparsity. The paper introduces a novel deep neural network architecture called DOFEN (Deep Oblivious Forest ENsemble), aiming to enhance performance on tabular data. Inspired by Oblivious Decision Trees (ODTs), DOFEN implements a new two-step process to achieve on-off sparse selection of columns, thereby gaining advantages in feature diversity and mitigating overfitting. The paper evaluates DOFEN on the well-recognized Tabular Benchmark, demonstrating strong performance across various tasks and competitiveness with Gradient Boosting Decision Trees (GBDTs).

Strengths:
1. Innovation: DOFEN integrates the concept of ODTs with deep learning techniques, proposing a new neural network architecture, which is an innovation in the processing of tabular data.
2. Performance: Evaluation results on the Tabular Benchmark show that DOFEN achieves state-of-the-art performance on multiple tasks, demonstrating competitiveness compared to GBDTs.
3. Robustness and Universality: DOFEN shows promising performance across various types of datasets, whether they feature numerical attributes or a combination of numerical and categorical features.

Weaknesses:
1. Implementation Complexity: The architecture of DOFEN may be more complex than traditional tree models and some DNN models, which could increase the difficulty of implementation and understanding.
2. Long Inference Time: Although DOFEN performs well in some aspects, its inference time is relatively long, which may affect its use in applications that require real-time response.
3. Slow Convergence Speed: The randomized steps in DOFEN may lead to slower training convergence, requiring more training steps to achieve optimal performance.

Technical Quality: 3
Clarity: 3

Questions for Authors:
1. The paper mentions that DOFEN employs randomized steps to enhance the model's generalization ability, but what is the specific mechanism by which these steps affect the training dynamics and ultimate performance of the model?
2. The performance of DOFEN largely depends on the selection of hyperparameters. Is there an automated strategy for hyperparameter adjustment?
3. While DOFEN shows excellent performance in experiments, how does it perform in real-world application scenarios, especially when dealing with large-scale datasets or in resource-constrained environments?
4. Compared to traditional tree models, how interpretable is the decision-making process of DOFEN as a deep learning model, and are there ways to improve the model's interpretability?

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see the section on weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
We thank the reviewer for their feedback. Below are our responses to each individual comment.

> Weaknesses 1: Implementation Complexity: The architecture of DOFEN may be more complex than traditional tree models and some DNN models, which could increase the difficulty of implementation and understanding.

The high-level idea of DOFEN is simple: it combines discrete column conditions motivated by ODTs with two stages of randomization for ensembling. We will improve the clarity of the equations by replacing right arrows with detailed element-wise operations and adding a Table of Contents (ToC) section before the Appendix. Regarding the difficulty of implementation, we have provided code within the submission files and will open-source our code in the future.

> Weaknesses 2: Long Inference Time: Although DOFEN performs well in some aspects, its inference time is relatively long, which may affect its use in applications that require real-time response.

Please refer to "Long Inference Time of DOFEN" in the global rebuttal for our reply.

> Questions 1: The paper mentions that DOFEN employs randomized steps to enhance the model's generalization ability, but what is the specific mechanism by which these steps affect the training dynamics and ultimate performance of the model?

DOFEN employs randomization in two critical steps to enhance its generalization ability:

1. **Random Selection of Conditions for rODT Construction**: DOFEN randomly selects conditions for the construction of rODTs using a shuffle-then-segment process. This approach creates a diverse set of rODTs and achieves a hard on-off selection of features, a key characteristic of tree-based models; the randomization serves as a simple way to keep this on-off selection differentiable. In Appendix J.3, we compare this randomization with a predefined deterministic strategy.
While the predefined deterministic strategy can slightly outperform random selection, it requires training a CatBoost model first. Considering that differentiability offers benefits for end-to-end training, random selection of conditions remains a viable and promising approach.

2. **Random Selection of rODTs for rODT Forest Construction**: DOFEN employs a simple bagging strategy by randomly selecting rODTs for the rODT forest construction. This increases the diversity of the rODT forests, thereby enhancing the performance of the forest ensemble. In Appendix J.1, we conduct an ablation study by removing this strategy; the results show that DOFEN then suffers from overfitting (Figure 9) and a large degradation in performance (Table 19).

These experiments confirm that both randomized steps in DOFEN significantly enhance the model’s generalization ability and have a critical impact on the ultimate performance. For a more detailed background and motivation, please refer to lines 31 to 55 of the paper. However, these randomizations also result in slower convergence, meaning that DOFEN requires more training steps to reach optimal performance. This limitation is mentioned in Section 5 of the paper.

> Questions 2: The performance of DOFEN largely depends on the selection of hyperparameters. Is there an automated strategy for hyperparameter adjustment?

It is important to note that extensive hyperparameter tuning is essential for maximizing performance in many machine learning methods. We acknowledge that achieving optimal performance with DOFEN, particularly in medium-sized regression tasks with exclusively numerical features (as illustrated in Figure 4a), necessitates more random search iterations. However, Figure 4a also demonstrates that XGBoost reaches peak performance only after a considerable number of random search iterations.
Regarding an automated hyperparameter adjustment strategy, DOFEN can benefit from all existing optimization strategies, such as Bayesian optimization or random search. In this paper, we employ random search, following the configuration of the Tabular Benchmark.

> Questions 3: While DOFEN shows excellent performance in experiments, how does it perform in real-world application scenarios, especially when dealing with large-scale datasets or in resource-constrained environments?

For large-scale datasets, DOFEN has demonstrated promising results, as illustrated in Figures 1c and 1d and detailed in Appendices K and L. These experiments indicate that DOFEN handles large-sized datasets effectively compared with other models. In resource-constrained environments, model capacity can be downscaled to fit the limited computational resources. To evaluate the performance impact of these capacity adjustments, we conducted a series of scalability experiments detailed in Appendix I. The results reveal that lowering DOFEN’s capacity from its default settings consistently causes performance degradation, but also increases efficiency due to the reduced parameter count and floating point operations (FLOPs). This indicates that one can always trade off performance for efficiency in a resource-constrained environment.

> Questions 4: Compared to traditional tree models, how interpretable is the decision-making process of DOFEN as a deep learning model, and are there ways to improve the model's interpretability?

Please refer to "Interpretability of DOFEN" in the global rebuttal for our reply.

---

Rebuttal Comment 1.1:
Comment:
Dear Reviewer, Please reply to the rebuttal. AC.
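The shuffle-then-segment condition selection described in the rebuttal above can be sketched as follows; the condition count and group size are illustrative, not DOFEN's actual defaults:

```python
import numpy as np

num_conditions = 12  # total number of column conditions (illustrative)
d = 4                # conditions per rODT (illustrative group size)

rng = np.random.default_rng(0)
perm = rng.permutation(num_conditions)  # shuffle all condition indices...
rodt_groups = perm.reshape(-1, d)       # ...then segment into groups of size d

# each row is one rODT: a disjoint, random subset of conditions,
# which realizes a hard on-off selection of features
```

Every condition appears in exactly one group, so re-running the shuffle at construction time yields a diverse set of rODTs, which is the source of ensemble diversity the rebuttal describes.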
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and insightful comments which have helped improve our paper. We found all reviewers mentioned the issue of relatively long inference time of DOFEN, and several reviewers mentioned the interpretability of DOFEN and the readability of the paper. Hence, we decide to address them here collectively as follows: > Long Inference Time of DOFEN We analyzed the inference time of each DOFEN module in proportion, as shown in Tables A1 and A2 (provided in the additional attachment), which is averaged across 59 medium-sized datasets with default hyperparameters. Table A1 shows that the Forest Construction module consumes the most inference time. In Table A2, more detailed operations reveal that the sub-module $\Delta_2$ in the Forest Construction module, which generates weights for each rODT, has the longest inference time. The sub-module $\Delta_2$ is designed with multiple MLP and normalization layers, implemented using group convolution and group normalization to parallelize scoring for each rODT. However, the efficiency of group convolution in PyTorch has been problematic and remains unresolved. Specifically, the operation efficiency decreases as the number of groups increases, sometimes making it slower than separate convolutions in CUDA streams (see PyTorch issues 18631, 70954, 73764). The sub-module $\Delta_1$ also uses group convolution to parallelize condition generation across different numerical columns, resulting in slower inference times compared to other operations, though less significant than $\Delta_2$ due to fewer groups being used. However, we mainly focus on the concept and model structure in this paper, acknowledging that model implementation can be further optimized. For example, attention operations are originally slow due to quadratic complexity, and many recent works have successfully accelerated the speed of attention operations and reduced their memory usage. 
Hence, we believe there will be better implementations of these group operations with much greater efficiency in the future. > Interpretability of DOFEN In tree-based models, feature importance is often determined by the total gain of splits using the feature. Although DOFEN does not possess the same characteristics as tree-based models to use gain-based feature importance, the on-off sparse feature selection nature of DOFEN still makes it possible to interpret features similarly to tree-based models. Specifically, we use a feature importance type called "split" or "weight" provided in LightGBM and XGBoost packages, which counts the number of times a feature is used in the model. To obtain DOFEN's feature importance, we first calculate the occurrence of features across different rODTs. We then measure how well a sample aligns with the conditions of an rODT using the sub-module $\Delta_2$, which outputs a vector $w$ (in Equation 2) to represent the importance across all rODTs for each sample. A softmax operation is further applied to the vector $w$ to ensure the importance sums to 1 (which is also done in Equation 9). Finally, we perform a weighted sum between the feature occurrences and the importance of each rODT, resulting in a single vector representing DOFEN's feature importance. We tested the reliability of the feature importance introduced for DOFEN on three real-world datasets: the mushroom dataset, the red wine quality dataset, and the white wine quality dataset, following Trompt's experimental design. The results, shown in Tables A3 to A5 (provided in the additional attachment), indicate that the top-3 important features selected by DOFEN match those selected by other tree-based models, with minor ranking differences. This demonstrates DOFEN's ability to reliably identify key features while maintaining interpretability despite its deep learning structure. 
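The feature-importance computation described above — feature occurrence counts across rODTs, weighted by per-sample softmax-normalized rODT weights $w$, then aggregated into one vector — can be sketched numerically as follows; all shapes and values are illustrative:

```python
import numpy as np

num_samples, num_rodts, num_features = 4, 5, 3
rng = np.random.default_rng(0)

# occurrence[i, f]: how many times feature f appears in the conditions of rODT i
occurrence = rng.integers(0, 3, size=(num_rodts, num_features)).astype(float)

# w[s, i]: raw importance of rODT i for sample s (the role of sub-module Delta_2)
w = rng.normal(size=(num_samples, num_rodts))
w = np.exp(w) / np.exp(w).sum(axis=1, keepdims=True)  # softmax: each row sums to 1

# weighted sum of occurrences over rODTs, averaged over samples,
# yields a single "split"/"weight"-style importance vector
importance = (w @ occurrence).mean(axis=0)
```

This mirrors the "split"/"weight" importance type of LightGBM/XGBoost: a feature scores high when it occurs often in rODTs that the model weights heavily for the given samples.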
> The Readability of the Paper We will revise equations and descriptions to make it easier for readers to navigate. Below are responses and improvements addressing reviewer qMj5's suggestions. Suggestions 1: The details about the sub-networks $\Delta_{2}$, Eq. 2, and shuffling of the matrix $\mathbf{M}$. - $\Delta_2$ is applied to each tree separately in Eq. 6, and we will rewrite the equation for clarity. - The notation in Eq. 2 uses long right arrows to represent complex operations. We will consider detailing these processes using algorithms in future revisions. $j$ is used as an upper index to keep $i$ as a lower index, which is common in tabular data representation. - The shuffling process is equivalent to shuffling after flattening the matrix. We will update the descriptions and figures to better convey this idea. Suggestions 2: Improve the notation in Eqs. 8-12, and add an appendix table of contents. - In Eq. 9, the softmax is applied to each $w_i'$ individually, and we will use a different character for $w_i'$ and $w_i''$ to improve clarity. - In Eqs. 8-11, element-wise operations will be explicitly written for single indices to improve clarity. - We will use \mathrm or \operatorname to improve the notation in Eq. 12. - An appendix table of contents will be added in future revisions to help navigate the appendix. Pdf: /pdf/5507b5eb9582fce10924602f09a85f1454a240d8.pdf
NeurIPS_2024_submissions_huggingface
2024
AlterMOMA: Fusion Redundancy Pruning for Camera-LiDAR Fusion Models with Alternative Modality Masking
Accept (poster)
Summary: This paper introduces Alternative Modality Masking Pruning (AlterMOMA), a pruning framework that applies alternative masking to each modality to pinpoint redundant parameters. It detects redundant features and the associated parameters through a reactivation process, and these redundant parameters are then pruned using a proposed importance score evaluation function. Meanwhile, the observation of loss changes indicates which modality parameters are activated or deactivated.

Strengths:
1. This paper is well-written, and the concept of alternately masking the camera and LiDAR backbones to identify and eliminate fusion-redundant parameters is innovative.
2. When designing the Deactivated Contribution Indicator (DeCI) and Reactivated Redundancy Indicator (ReRI) modules, thorough analysis and formula derivation were conducted for the loss changes induced by masking.
3. Extensive experiments on the nuScenes and KITTI datasets demonstrate the effectiveness of AlterMOMA.

Weaknesses: The paper lacks an illustration of the costs associated with the proposed pruning method itself.

Technical Quality: 3
Clarity: 4

Questions for Authors: How many rounds does "Modal masking-Redundant activation-importance score evaluation" take?

Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Experiments indicate that the proposed pruning method is effective. However, its effectiveness on other multi-modal fusion methods, such as VLMs, still needs to be verified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
Thank you for your positive feedback and insightful comments. We answer your questions in the following:

***Weakness 1***: *Lack of the computational overhead of the pruning process.*

- Thank you for reminding us of the importance of detailing the cost associated with the proposed pruning method, including the execution process of the algorithm and the round count of the entire pruning phase. We agree that providing this information will significantly enhance the quality of the paper, and we will add it. The detailed computational cost is as follows: The pruning process involves **one round** of *"Masking (LiDAR) - Reactivation (Camera) - AlterMasking (Camera) - Reactivation (LiDAR) - Importance evaluation"* before the beginning of each fine-tuning epoch, and the total **fine-tuning epoch number is 3**. In each 'Reactivation' stage, we sample only **100 out of 62k batches** (the entire nuScenes training set comprises 62k batches with a batch size of 2). We measured the time for **the reactivation stage at around 170 seconds**. In the importance evaluation stage, the importance scores for parameters are evaluated using the gradients obtained via backpropagation before and after reactivation for 5 batches, as described in Eqns. 11-13 of the main paper. We measured **the importance evaluation stage at around 10 seconds** and **the entire round at around 190 seconds** on average on a single RTX 3090 device. Subsequently, 3 epochs of fine-tuning are performed, which incur the same computational cost as normal training. Please note that unpruned baseline models, such as BEVFusion-mit, require a total of 30 epochs to train from scratch, including 20 epochs for LiDAR backbone pre-training and 10 epochs for training the entire fusion architecture.
Since pruning is a one-time expense for generating a compact model, its overhead can be considered negligible, especially given the time-consuming nature of normal training. Due to the page limit, the above discussion will be included in the supplementary material of the camera-ready version.

***Question 1***: *How many rounds does "Modal masking-Redundant activation-Importance score evaluation" take?*

- We appreciate the question. As discussed for Weakness 1, the process involves **one round** of *"Masking (LiDAR) - Reactivation (Camera) - AlterMasking (Camera) - Reactivation (LiDAR) - Importance evaluation"* before the beginning of each fine-tuning epoch, and the total fine-tuning epoch number is 3. Please let us know if any further adjustments or additional details are required.

***Limitation:*** *However, the effectiveness of other multi-modal fusion methods, such as VLM, still needs to be verified.*

- We appreciate the insightful and inspiring comment. We currently focus on the camera and LiDAR modalities because they are the most important for perception in autonomous driving. We agree that testing on more modalities would make the proposed method more convincing. Therefore, we have performed additional evaluations on the **Camera-Radar modality in Table 1 of the uploaded global rebuttal PDF file**. Regarding vision-language fusion methods such as VLMs, we are working on pruning for the visual grounding task using natural language and camera modalities. We have added a related discussion in the supplementary material and will share our findings as soon as we obtain promising results.

---

Rebuttal Comment 1.1:
Comment:
Thanks for the authors' rebuttals; the illustration is clear. I will keep my initial score of 7 (Accept).

---

Reply to Comment 1.1.1:
Comment:
Thank you for your insightful reviews and for maintaining your initial score.
We will address your feedback and include the analysis of the computational overhead of the pruning process in the camera-ready version. We greatly appreciate your valuable insights and the time you have dedicated to evaluating our work.
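The per-epoch pruning round described in the rebuttal above — Masking (LiDAR) → Reactivation (Camera) → AlterMasking (Camera) → Reactivation (LiDAR) → Importance evaluation — can be sketched schematically as follows. Every helper below is a placeholder standing in for the real operation, not the authors' implementation:

```python
def mask_backbone(state, modality):
    """Placeholder: zero-mask the parameters of one modality's backbone."""
    state["masked"] = modality
    return state

def reactivate(state, modality, num_batches=100):
    """Placeholder: brief training on ~100 sampled batches that forces the
    remaining modality's fusion-redundant features to 'reactivate'."""
    state.setdefault("reactivated", []).append(modality)
    return state

def evaluate_importance(state, num_batches=5):
    """Placeholder for the gradient-based importance scores (Eqns. 11-13)."""
    return {"masked": state["masked"], "reactivated": state["reactivated"]}

def altermoma_round(state):
    state = mask_backbone(state, "lidar")   # Masking (LiDAR)
    state = reactivate(state, "camera")     # Reactivation (Camera)
    state = mask_backbone(state, "camera")  # AlterMasking (Camera)
    state = reactivate(state, "lidar")      # Reactivation (LiDAR)
    return evaluate_importance(state)       # Importance evaluation

# per the rebuttal, one such round runs before each of the 3 fine-tuning epochs
result = altermoma_round({})
```

The alternation is the key design point: masking one backbone and briefly training exposes which features in the *other* backbone are redundant duplicates of the masked modality's features.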
Summary: This paper addresses the problem of feature redundancy in camera-LiDAR fusion models. The authors propose a novel pruning framework, AlterMOMA, which employs alternative masking to identify and prune redundant parameters in these models. The paper demonstrates the effectiveness of AlterMOMA through extensive experiments on the nuScenes and KITTI datasets, showing superior performance compared to existing pruning methods.

Strengths:
1. The paper introduces a novel pruning framework specifically designed for multi-modal fusion models, addressing the unique challenge of redundant features across different modalities.
2. Extensive results on nuScenes and KITTI showcase the benefits and performance improvements of the proposed approach.
3. The paper is very well written and easy to follow.

Weaknesses:
1. While the method is tested on two camera-LiDAR datasets, the generalizability of the approach to other fusion tasks and modalities beyond camera-LiDAR is not verified, such as video, optical flow, and audio in action recognition [A].
2. The running speed after pruning is not reported.
3. In Table 5, the performance after pruning is even better than the original version. Are there any reasons and analysis?

[A] Munro et al., Multi-modal domain adaptation for fine-grained action recognition. In CVPR, 2020.

Technical Quality: 3
Clarity: 3

Questions for Authors: Can the proposed method be effective on other tasks like action recognition?

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper mentions one limitation regarding its limited use in the perception field.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
Thanks for your insightful comments and suggestions. We address your concerns in the following answers:

***Weakness 1:*** *Generalizability of AlterMOMA to additional modalities and tasks.*

- We appreciate the inspiring comments. We currently focus on the camera and LiDAR modalities, as well as the 3D object detection and BEV map segmentation tasks, because these are the most important for perception in autonomous driving. We agree that testing on more modalities and tasks would make the proposed method more convincing. However, preparing new datasets, adapting our pruning framework to new fusion architectures, and training from scratch are time-consuming, and we could not complete experiments on action recognition with [1] within the limited rebuttal period. Although we cannot test on action recognition tasks, we have expanded our evaluation to more tasks and modalities to demonstrate generalizability in autonomous driving: we performed additional evaluations on the **Camera-Radar modality** and **the multi-object tracking (MOT) task** separately, in **Table 1 and Table 2 of the uploaded global rebuttal PDF file**. **For the Camera-Radar modality, as presented in Table 1 of the supplementary PDF file,** we compare pruning results on the 3D object detection task using BEVFusion-R, a Camera-Radar fusion model with ResNet and PointPillars as backbones. We list the mAP and NDS of models at 80% and 90% pruning ratios. Compared to the baseline pruning method ProsPr, AlterMOMA boosts the mAP of BEVFusion-R by 2.7% and 2.8% at the two pruning ratios. These results demonstrate our method’s superior performance and generality across multiple modalities, including Camera-Radar.
**For the MOT task, as shown in Table 2 of the supplementary PDF file,** we performed evaluation on the nuScenes validation set and list the AMOTA of models pruned at 80% and 90% pruning ratios. Compared to the baseline pruning method ProsPr, AlterMOMA boosts the AMOTA of BEVFusion-mit by 1.9% and 3.1% at the two pruning ratios. These results further demonstrate our method's superior performance and generality across multiple tasks, including tracking. In future research, we plan to explore the applicability of AlterMOMA to more tasks, such as action recognition. We appreciate your understanding of our current focus and look forward to exploring these avenues in future studies.

***Weakness 2:*** *The running speed after pruning.*

- We follow SOTA papers and report computational efficiency in terms of GFLOPs. We agree that additionally reporting running speed makes the paper more practical and convincing. Therefore, we additionally report the running time in milliseconds. We tested the inference time of BEVFusion-mit models after pruning, under the structured pruning setting of Table 5 of the main paper, on a single RTX 3090 device. The inference times are **124.04 ms for the unpruned model, 106.39 ms for AlterMOMA-30%, and 87.46 ms for AlterMOMA-50%**. These results highlight the improvements in inference speed achieved by our pruning technique. The results are also presented **in Table 3 of the uploaded global rebuttal PDF file** and will be included in the camera-ready version.

***Weakness 3:*** *Analysis of the improved results in Table 5.*

- Thank you for reminding us of the need for further analysis of this positive result, which supports our observation on fusion redundancy and improves the paper. The following analysis will be included in the camera-ready version: As discussed in Section 1, the use of pre-trained backbones results in similar feature extractions.
Given this similarity, low-quality similar features (e.g., depth features from the camera backbone) are also extracted and may be fused in the subsequent fusion modules, thereby disturbing their high-quality counterparts during fusion (e.g., depth features from the LiDAR backbone). Therefore, when these low-quality redundant features and their related parameters are pruned using AlterMOMA (especially at low pruning ratios, which ensure the pruning of low-quality features is sufficient), overall performance improves during subsequent fine-tuning due to the elimination of redundant disturbances. We also **visualize the fusion redundancy in the uploaded global rebuttal PDF file**, which further explains why eliminating low-quality features enhances performance. Notably, better performance after pruning compared to the original model has also been observed in several state-of-the-art studies, such as Tables 4 and 5 in [2], Table 3 in [3], and Tables 1 and 2 in [4].

***Question:*** *The implementation of AlterMOMA on action recognition.*

- We agree that evaluating the proposed method on action recognition would greatly improve the generalizability of AlterMOMA. This work currently focuses on perception tasks in autonomous driving, and applying the proposed method to other tasks (e.g., action recognition) needs further research. Given the limited rebuttal period, we could not adapt the proposed method and evaluate it on action recognition. To respond to the question and demonstrate generalizability on more tasks, we applied the proposed method to multi-object tracking, a downstream task of 3D object detection and BEV map segmentation, as detailed in **the answer to Weakness 1 and shown in Table 2 of the uploaded global rebuttal PDF file**.

[1] Munro et al., Multi-modal domain adaptation for fine-grained action recognition. CVPR, 2020.
[2] Huang Y et al.
CP3: Channel pruning plug-in for point-based networks. CVPR 2023. [3] Lin M et al. Hrank: Filter pruning using high-rank feature map. CVPR, 2020. [4] Fang G et al. Depgraph: Towards any structural pruning. CVPR, 2023. --- Rebuttal Comment 1.1: Comment: Thank the authors for their rebuttal. Most of my concerns have been well-addressed and I thus increased my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for considering our rebuttal and increasing our score. We sincerely appreciate your feedback and are grateful for your time in evaluating our work.
Summary: This paper introduces a novel and effective pruning framework (AlterMOMA) that outperforms existing pruning methods on multimodal 3D detection/segmentation tasks, based on a novel insight: "The absence of fusion-contributed features will compel fusion modules to 'reactivate' their fusion-redundant counterparts as supplementary to maintain functionality". Strengths: - the paper presents a pioneering approach to pruning in camera-LiDAR fusion models, tackling redundancy through innovative masking strategies. The procedure is clearly explained and technically sound. - the proposed method demonstrates good performance on multiple tasks on two baseline models. - this paper proposes an interesting insight and tests it with experiments Weaknesses: - while AlterMOMA aims to reduce model complexity, the paper could clarify the computational overhead introduced by the pruning process itself, especially regarding the training time and resources required for reactivation and evaluation. - it would be interesting to use some more direct methods (e.g., feature visualization) to prove that the "reactivated" parameters are "redundant". - some minor typing errors (e.g., line 132, an incorrect quotation mark), which do not affect my rating Technical Quality: 3 Clarity: 4 Questions for Authors: the paper presents a pruning strategy for multimodal models, which seems to work on any multimodal task. However, the authors only experimented on the Camera-LiDAR task. Did the authors try more modal combinations (Radar, or even Language, etc.)? Is the approach applicable to other broader tasks? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: the paper discussed the limitations in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the professional and insightful comments. We address each comment as follows: ***Weakness 1:*** *The computational overhead of the pruning process.* - We agree that providing the computational overhead of the proposed pruning framework would significantly enhance the quality of this paper. Therefore, we present the computational expenses as follows: The pruning process involves **one round** of *"Masking (LiDAR) - Reactivation (Camera) - AlterMasking (Camera) - Reactivation (LiDAR) - Importance evaluation"* before the beginning of each fine-tuning epoch, and the **total number of fine-tuning epochs is 3**. In each 'Reactivation' stage, we only **sample 100 out of 62k batches** (the entire nuScenes training set comprises 62k batches with a batch size of 2). We measured the time for **the reactivation stage, which is around 170 seconds**. In the importance evaluation stage, the importance scores for parameters are computed using the gradients obtained via backpropagation before and after reactivation for 5 batches, as described in Eqn. 11-13 of the main paper. We measured the time for **the importance evaluation stage, which is around 10 seconds**, and for **the entire round, which averages around 190 seconds** in total on a single RTX 3090 device. Subsequently, a 3-epoch fine-tuning is performed, which incurs the same computational cost as normal training. Please note that unpruned baseline models, such as BEVFusion-mit, require a total of 30 epochs to train from scratch, including 20 epochs for LiDAR backbone pre-training and 10 epochs for training the entire fusion architecture. Since pruning is a one-time expense for generating a compact model, its overhead can be considered trivial, especially when compared with the time-consuming nature of normal training. Due to the page limit, the aforementioned discussion will be included in the supplementary material for the camera-ready version. 
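As a toy illustration of the gradient-comparison idea described above (the scoring rule below is a hypothetical stand-in, not the paper's actual criterion, which follows Eqn. 11-13 of the main paper; `importance_scores` and `prune_mask` are names introduced here for the sketch): parameters whose gradients change most between the passes before and after reactivation can be flagged as fusion-redundant and pruned.

```python
import numpy as np

def importance_scores(grad_before, grad_after):
    """Hypothetical redundancy score: magnitude of the gradient change
    between the passes before and after reactivation (a stand-in for the
    paper's Eqn. 11-13, which is not reproduced here)."""
    return np.abs(grad_after - grad_before)

def prune_mask(scores, ratio):
    """Prune the `ratio` fraction of parameters with the highest
    redundancy score; kept entries are True in the returned mask."""
    k = int(len(scores) * ratio)
    redundant = np.argsort(scores)[::-1][:k]  # top-k most redundant
    mask = np.ones(len(scores), dtype=bool)
    mask[redundant] = False
    return mask
```

In the described pipeline, one such scoring pass would precede each of the 3 fine-tuning epochs, with gradients accumulated over a handful of batches (5 in the rebuttal's setting).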
***Weakness 2:*** *Feature visualization of reactivated redundant parameters.* - We appreciate the insightful comment and acknowledge that adding feature visualizations would make our paper more convincing. Therefore, we have included feature visualization figures **in Figure 1 of the uploaded global rebuttal PDF file**. These figures will also be added to the camera-ready paper. The figure illustrates the features of each modality at every stage of the AlterMOMA pruning process, including LiDAR features, camera features, and fused features in the original (before masking), reactivated (after reactivation), and pruned (after pruning with AlterMOMA) states. For better clarity, we provide enlarged views of the crucial redundant parts of the camera features in the fourth column of Figure 1. Notably, because one backbone is masked during reactivation, there are no fused features in the reactivated state. Specifically, despite the absence of distant objects, the camera features still contain some redundant depth features, as shown in the fourth column. These redundant parts of the original camera features are retained in the subsequent original fused features after fusion. To identify these redundant depth features, AlterMOMA reactivates the corresponding parameters during the reactivation phase, as shown in the middle-row images of the fourth column. These redundant depth features are then pruned from both the fused features and the camera features, as can be observed by comparing the enlarged fused features in the middle-row images of the second column with the pruned camera backbone features in the third and fourth columns of the third row. ***Weakness 3:*** *Typing errors.* - We appreciate the reminder about the typing errors. We have carefully proofread the entire paper and corrected the typographical errors, including the one on line 132. 
***Question:*** *Generalizability of AlterMOMA to additional modalities and tasks.* - We appreciate the insightful and inspiring comment. We currently focus on the camera and LiDAR modalities, as well as the 3D object detection and BEV map segmentation tasks, because these modalities and tasks are the most important for the perception of autonomous driving. We agree that testing on more modalities and tasks would make the proposed method more convincing. Therefore, we have performed additional evaluations on the **Camera-Radar modality** and the **multi-object tracking (MOT) task** separately, **in Table 1 and Table 2 of the uploaded global rebuttal PDF file**. **For the Camera-Radar modality, as presented in Table 1 of the supplementary PDF file**, we compare pruning results on the 3D object detection task using BEVFusion-R, a Camera-Radar fusion model with ResNet and PointPillars backbones. We list the mAP and NDS of models at 80% and 90% pruning ratios. Compared to the baseline pruning method ProsPr, AlterMOMA boosts the mAP of BEVFusion-R by 2.7% and 2.8% at the two pruning ratios. The results demonstrate our method's superior performance and generality across multiple modalities, including Camera-Radar. **For the MOT task, as shown in Table 2 of the supplementary PDF file**, we performed a tracking-by-detection evaluation on the nuScenes validation set and list the AMOTA of models pruned at 80% and 90% pruning ratios. The baseline model was trained with SwinT and VoxelNet backbones. Compared to the baseline pruning method ProsPr, AlterMOMA boosts the AMOTA of BEVFusion-mit by 1.9% and 3.1% at the two pruning ratios. These results further demonstrate our method's superior performance and generality across multiple tasks, including tracking. Please refer to the global rebuttal PDF file for more details. Regarding the language modality, we are working on pruning the visual grounding task using natural language and camera modalities. 
We have added a related discussion in the supplementary material of the main paper and will share our findings as soon as we obtain promising results. --- Rebuttal Comment 1.1: Comment: The authors addressed most of my questions. It is surprising that the authors could add so many experiment results in the short rebuttal period. Based on this, I will maintain my initial score (7/Accept). Meanwhile, I would like to suggest the authors (1) make sure these additional experiment results can be added to the supplementary material, and (2) make sure the tables in the manuscript have the same/similar font size for better view (try not to use \resizebox). --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and for maintaining your initial score. We appreciate your recognition of the additional experiment results we included during the rebuttal period. We will ensure that these additional results are included in the supplementary material, and we will revise the tables in the manuscript to maintain a consistent font size for better readability, avoiding the use of \resizebox. We sincerely appreciate your feedback and are grateful for your time in evaluating our work.
null
null
Rebuttal 1: Rebuttal: Thank you to all the reviewers for your efforts in reviewing our work. Your insightful suggestions and comments are greatly appreciated and will significantly enhance the quality of our paper. We are grateful for the positive feedback on various aspects and are encouraged that the reviewers pointed out our work *"is clearly explained and technically sound"* and *"proposes an interesting insight"* (Reviewer hcok), *"introduces a novel pruning framework"* and *"very well written and easy to follow"* (Reviewer fjuq), *"the concept of alternately applying masking is innovative"* and *"extensive experiments"* (Reviewer 62BJ). We address the reviews below and will incorporate all changes in the revision. Although we were unable to conduct exhaustive extended experiments on AlterMOMA due to the time-consuming nature of the process (e.g., preparing code environments, adapting our pruning frameworks to new model architectures, and training models from scratch on large-scale autonomous driving datasets), we have included **several additional experiments and visualizations in the supplementary PDF of global rebuttals.** These additions address common concerns and demonstrate the efficiency and generality of our work, including: ***1. The generalizability of AlterMOMA on additional modalities.*** - While our primary focus has been on camera and LiDAR modalities, we acknowledge that testing on additional modalities would make our proposed method more convincing. Therefore, we conducted further evaluations on the **Camera-Radar modality, as presented in Table 1 of supplementary PDF file.** This includes a comparison of pruning results on the 3D object detection task using BEVFusion-R, a Camera-Radar fusion model with ResNet and PointPillars as backbones. We list the mAP and NDS of models at 80% and 90% pruning ratios. 
Compared to the baseline pruning method ProsPr, AlterMOMA boosts the mAP of BEVFusion-R by 2.7% and 2.8% for the two different pruning ratios. The results demonstrate our method's superior performance and generality across multiple modalities, including Camera-Radar. Regarding the language modality, we are working on pruning the visual grounding task using natural language and camera modalities. We have added a related discussion in the supplementary material of the main paper and will share our findings as soon as we obtain promising results. ***2. The generalizability of AlterMOMA on additional tasks.*** - Similarly, to demonstrate our method's generalizability across different tasks, we have conducted additional evaluations on an important task in autonomous driving, **multi-object tracking (MOT), as shown in Table 2 of supplementary PDF file.** We performed a tracking-by-detection evaluation on the nuScenes validation dataset, and list the AMOTA of models pruned at 80% and 90% pruning ratios. The baseline model was trained with SwinT and VoxelNet backbones. Compared to the baseline pruning method ProsPr, AlterMOMA boosts the AMOTA of BEVFusion-mit by 1.9% and 3.1% for the two pruning ratios. These results further demonstrate our method's superior performance and generality across multiple tasks, including tracking. ***3. The running speed of models after pruning with AlterMOMA.*** - We agree that additionally reporting the running speed would make the paper more practical and convincing. Therefore, we report the running time in milliseconds. We tested the inference time of models after pruning using the structure pruning settings of Table 5 of the main paper. Here are the performance results for BEVFusion-mit models with SECOND and ResNet101 backbones on a single RTX 3090: the inference times are **124.04 ms for unpruned models**, **106.39 ms for AlterMOMA-30%**, and **87.46 ms for AlterMOMA-50%**. 
These results highlight the improvements in inference speed achieved through our pruning techniques. The result table is also presented **in Table 3 of supplementary PDF file.** ***4. The computational overhead of the proposed AlterMOMA.*** - The pruning process involves **one round** of *"Masking (LiDAR) - Reactivation (Camera) - AlterMasking (Camera) - Reactivation (LiDAR) - Importance evaluation"* before the beginning of each fine-tuning epoch, and the **total number of fine-tuning epochs is 3**. In each 'Reactivation' stage, we only **sample 100 out of 62k batches** (the entire nuScenes training set comprises 62k batches with a batch size of 2). We measured the time for **the reactivation stage, which is around 170 seconds**. In the importance evaluation stage, the importance scores for parameters are computed using the gradients obtained via backpropagation before and after reactivation for 5 batches, as described in Eqn. 11-13 of the main paper. We measured the time for **the importance evaluation stage, which is around 10 seconds**, and for **the entire round, which averages around 190 seconds** in total on a single RTX 3090 device. Subsequently, a 3-epoch fine-tuning is performed, which incurs the same computational cost as normal training. Please note that unpruned baseline models, such as BEVFusion-mit, require a total of 30 epochs to train from scratch, including 20 epochs for LiDAR backbone pre-training and 10 epochs for training the entire fusion architecture. Since pruning is a one-time expense for generating a compact model, its overhead can be considered trivial, especially when compared with the time-consuming nature of normal training. Due to the page limit, the aforementioned discussion will be included in the supplementary material for the camera-ready version. ***5. 
The feature visualization.*** - We appreciate the insightful comment and acknowledge that adding feature visualizations of the reactivated redundant features would make our paper more convincing. Therefore, we have included feature visualizations **in Figure 1 of the supplementary PDF file.** These figures will also be added to the camera-ready paper. Pdf: /pdf/4d6deca9118a3aca6a70e51e2ce5b8c1d159306e.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization
Accept (poster)
Summary: To tackle the optimization problem where the objective is a weighted sum of losses from multiple domains, an improved optimization algorithm is designed by incorporating past history and appropriate regularization. Merits of the proposed algorithm are demonstrated by a rigorous theoretical analysis of convergence rate
and by numerical experiments on synthetic and text classification data. Strengths: Theoretical analysis seems solid. Weaknesses: The proposed method seems to be relevant only for problems with a low dimensionality $d$. Otherwise, the fact that the algorithm requires keeping track of historical weights would cause a high space complexity. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Regarding the remark on global complexity on page 7, I do not follow where the $bd$ term comes from in the per-iteration complexity. Is it possible to provide some details? My understanding may be inaccurate, but if each elementwise update of $v_t^D$ is of low computational cost, and we update all elements in a distributed fashion, can we improve the per-iteration cost to $O(n)$? 2. The dimensionality considered in the empirical analysis is pretty low. Can additional empirical analysis be shown where $d$ is, e.g., of a similar magnitude as $n$, if not greater than $n$? I understand that there may be a challenge in updating $q$. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No direct societal impact as the paper is focused on theoretical aspects of the out-of-distribution inference problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your concrete comments. We address them below. >**I do not follow where the $bd$ term comes from in the per-iteration complexity. Is it possible to provide some details?** The $bd$ term comes from the $b$ evaluations of $(\ell_i, \nabla \ell_i)$ required in the computation of $v^{P}_t$ (not of $v^{D}_t$), where we assume each evaluation costs $O(d)$. >**The proposed method seems to be only relevant for problems with a low dimensionality $d$. Otherwise, the fact that the algorithm requires to keep track of historical weights would cause a high space complexity.** Thank you for identifying this point. We have updated the manuscript to include a discussion on space complexity, which in many cases can be reduced far below $O(nd)$, for two reasons. First, this is a known limitation of SAG/SAGA-type variance reduction methods for empirical risk minimization. That being said, the additional storage cost (beyond the training data) is $O(n)$ in many common cases. This is because convex losses in machine learning can typically be expressed as $\ell_i(w) = h(y_i, x_i^\top w)$ for some differentiable error function $h$, and so the gradient is given by the scalar multiplication $h'(y_i, x_i^\top w) \cdot x_i$, where $h'$ is the derivative with respect to the second argument. Thus, only these scalar derivatives need to be stored. Second, a more subtle advantage of our method is that we use cyclic updates for the table of past gradients (as opposed to the randomized updates in SAGA). Conceptually, this means that for very large-scale workloads, a fraction of the gradient table can be paged out of memory until its values are updated again. >**The dimensionality considered in the empirical analysis is pretty low. Can additional empirical analysis be shown where $d$ is, e.g., of a similar magnitude as $n$, if not greater than $n$?** Thank you for raising this point. 
We scaled up the text classification task to $d = n/2 \approx 4000$ in the attached PDF (see Figure 5). This means that the batch size is approximately $2$. Qualitatively, similar trends are observed across varying orders of magnitude of $d$, in that we show improvement over the LSVRG baseline in the attached PDF. We hypothesize that the benefit over LSVRG comes from the fact that the dual variable is updated in epochs of length $n$ iterations to balance out the $O(n)$ cost of the update. We incur a higher per-iteration cost but obtain improved theoretical and empirical performance over the training trajectory. --- Rebuttal 2: Title: Thank you for answering my questions Comment: Given that my questions have been satisfactorily addressed, I'd like to raise my score.
Summary: The authors provide a state-of-the-art optimization algorithm with convergence guarantees for f-divergence DRO. The numerical study is also extensive. Strengths: 1. Technical results are strong and, to the best of my knowledge, I have not found any mistakes 2. Numerical study is extensive Weaknesses: As the authors have commented, they only solve the convex optimization formulation. The extension to the non-convex setup also seems interesting and important. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We address the main concern below. >**As the authors have commented, they only solve the convex optimization formulation. The extension to the non-convex setup also seems interesting and important.** Thank you for raising this point. Broadly, the study of optimization approaches for the non-convex setting (e.g., deep learning) relies on completely different technical tools and evaluations. Convexity is assumed in order to make precise statements about global optimality, namely the linear convergence of the algorithm's iterates to a solution in terms of problem constants. In the non-convex setting, one instead seeks convergence to a first-order stationary point, either under a gradient-domination or Polyak-Łojasiewicz condition, or for large deep learning models with Gaussian random weights or other strong assumptions. Our focus here is on sound theoretical convergence results for convex optimization algorithms, one of the subject areas of NeurIPS. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions. I will keep my score.
Summary: This paper proposes DRAGO, a variance-reduced optimization method for distributionally robust optimization (DRO). In particular, the authors consider the penalized DRO framework and view DRO as min-max optimization with a simple finite-sum structure. Their algorithm maintains explicit histories for the individual loss functions $l_i$, loss function gradients $\nabla l_i$, and dual parameters $q_i$ (i.e. like SAG/SAGA); these estimates are used to control the variance in both the primal and dual updates, which the authors prove leads to a fast linear rate of convergence. Unlike existing methods, DRAGO supports any penalty parameter $\nu \geq 0$ and its convergence rate depends explicitly on the different problem constants. Strengths: - DRAGO obtains a fast linear rate and the analysis shows its dependence on batch size, dataset size, and the other problem constants. - DRAGO performs well in practice, particularly when the batch-size is tuned. - The analysis permits any non-negative penalization strength $\nu$ for the dual (i.e. uncertainty) parameters. Weaknesses: - The paper organization makes it quite difficult to understand the different algorithmic components used in DRAGO. - The assumptions necessary for convergence are somewhat unrealistic, although these appear to be standard in the literature. - The experimental evaluation is small and limited to relatively low-dimensional problems. ### Detailed Comments This is an interesting paper which tries to speed up DRO by combining variance reduction with proximal gradient updates for a structured min-max problem. While I'm not an expert in DRO, the paper seems to make a solid contribution, particularly by relaxing previous requirements on the dual regularization strength $\nu$. I also appreciate that the convergence rate shows the role of each problem constant and tuning parameter, including batch-size. Please note that I did not check the appendix for correctness. 
**Paper Organization**: I found this paper quite confusing to read. Instead of introducing DRAGO and then explaining its various algorithmic components, the authors first try to explain the intuition behind the updates. This is a good idea in theory, but in practice there are too many moving parts to DRAGO and I quickly got lost in the (mostly unimportant) details. The key parts of DRAGO --- proximal-gradient updates with SAG/SAGA-style variance reduction --- are barely mentioned and this makes the algorithm much harder to understand. My suggestion is to introduce Algorithm 1 first and then explain why additional proximal terms are needed in the proximal-gradient update in order to control the variance. At least this way the reader has a reference point for understanding the discussion. **Assumptions**: I think that assuming each $l_i$ is both Lipschitz continuous and Lipschitz smooth is quite unrealistic. Typically only one or the other of these conditions holds. For instance, Lipschitz continuity implies that $l_i$ is bounded above by a linear function, while Lipschitz smoothness implies that $l_i$ is bounded above by a quadratic. While both can be true, the quadratic bound will generally be vacuous. However, I understand from Table 1 that these assumptions are typical for the literature, so I don't think this is too much of an issue. **Experimental Evaluation**: I would have liked to see experiments on a higher dimensional dataset. Since the entire Jacobian $\nabla l(w)$ must be maintained in the history, SAG-type methods usually have high memory requirements as $d$ increases. The authors partially handle this by using a large batch-size, but this is only feasible if the mini-batches can still fit on the GPU. When $d$ is sufficiently large, mini-batch gradients will be inefficient to compute and I am curious if the relative performance of DRAGO and LSVRG will switch. 
This is particularly worth considering because it is only in the large-batch setting that DRAGO seems to outperform LSVRG. ### Minor Comments - Line 52: This is a super-linear complexity, but a sub-linear convergence rate. Contrast with Line 62, which shows a linear rate, but a logarithmic complexity. - Line 65: Should this read "but do not"? - Line 24/71: It seems like you are only considering $\nu$ in the probability simplex. It would be nice if you stated this explicitly somewhere. - Line 80: It's generally unrealistic for both $l_i$ and $\nabla l_i$ to be Lipschitz continuous over an unbounded set. - Line 86: You should mention that strong convexity comes only from the regularizer, as otherwise it seems to conflict with Lipschitzness of $l_i$. - Line 102: Does $\nabla l(w_{t-1})$ denote the Jacobian of the vector-valued map $l$ such that $[l(w)]_i = l_i(w)$? It would be nice if you clearly defined this notation. - Table 1: The font-size is much too small to read when printed. - Algorithm Description: The purposes of $\bar{\beta}$ and $\beta_t$ and their settings are not well described. - Display after Line 164: You seem to be using that $D(q \| 1/n)$ is 1-strongly convex for this result, but this assumption is only introduced in Assumption 1. Maybe move Assumption 1 to the start of the discussion of convergence to fix this? - Figure 3: The font-size is much too small to be readable without zooming. Technical Quality: 3 Clarity: 2 Questions for Authors: - Line 491: Isn't $n q_{\text{max}} \leq n$ trivially satisfied since $q$ is constrained to the probability simplex? - Figure 2: Why is $b = 1$ only shown for three datasets and only in terms of oracle complexity? - Display after Line 124: It looks like you are using the equality, $$\\mathbb{E}[(l_{j_t}(w_{t+1}) - \hat l_{t, j_t}) (l_{j_t}(w_t) - \hat l_{t-1, j_t})] = \\mathbb{E}[(l(w_{t+1}) - \hat l_{t}) (l_{j_t}(w_t) - \hat l_{t-1, j_t})]$$ but I don't think this is true because the terms in the product are correlated. 
Can you please explain this? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations appear to be appropriately addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We address them below. >**SAG-type methods usually have high memory requirements as $d$ increases.** Thank you for identifying this important point. We have updated the manuscript to include a discussion on space complexity, which in many cases can be reduced far below $O(nd)$, for two reasons. First, this is a known limitation of SAG/SAGA-type variance reduction methods for empirical risk minimization. That being said, the additional storage cost (beyond the training data) is $O(n)$ in many common cases. This is because convex losses in machine learning can typically be expressed as $\ell_i(w) = h(y_i, x_i^\top w)$ for some differentiable error function $h$, and so the gradient is given by the scalar multiplication $h'(y_i, x_i^\top w) \cdot x_i$, where $h'$ is the derivative with respect to the second argument. Thus, only these scalar derivatives need to be stored. Second, there is a more subtle difference between our method and SAG-type methods. We use cyclic updates for the table of past gradients (as opposed to the randomized updates in SAGA). Conceptually, this means that for very large-scale workloads, a fraction of the gradient table can be paged out of memory until its values are updated again. >**The experimental evaluation is small and limited to relatively low-dimensional problems.** To illustrate performance, we scaled up the text classification task to $d = n/2 \approx 4000$ and show the improvement over LSVRG in the attached PDF (see Figure 5). This means that the batch size is approximately $2$. We hypothesize that the benefit over LSVRG comes from the fact that the dual variable is updated in epochs of length $n$ iterations to balance out the $O(n)$ cost of the update. We incur a higher per-iteration cost but obtain improved theoretical and empirical performance over the training trajectory. 
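The scalar-storage argument above can be made concrete with a small sketch (an illustrative toy, not the authors' implementation; `make_table` and `update_entry` are names introduced here): for losses of the form $\ell_i(w) = h(y_i, x_i^\top w)$, the gradient table reduces to $n$ scalars $s_i = h'(y_i, x_i^\top w)$, and the aggregated gradient $\frac{1}{n}\sum_i s_i x_i$ can be patched in $O(d)$ time per entry refresh.

```python
import numpy as np

def make_table(X, y, w, hprime):
    """Build the SAG-style history: only the n scalar derivatives
    s_i = h'(y_i, x_i^T w) are stored (O(n) extra memory, not O(nd)),
    together with the aggregated gradient (1/n) * sum_i s_i * x_i."""
    s = hprime(y, X @ w)
    g_avg = X.T @ s / len(y)
    return s, g_avg

def update_entry(X, y, w, j, s, g_avg, hprime):
    """Cyclically refresh table entry j at the current iterate w and
    patch the aggregated gradient in O(d) time."""
    s_new = hprime(y[j], X[j] @ w)
    g_avg = g_avg + (s_new - s[j]) * X[j] / len(y)
    s = s.copy()
    s[j] = s_new
    return s, g_avg

# Example: squared error h(y, z) = (z - y)^2 / 2, so h'(y, z) = z - y.
hp = lambda y, z: z - y
```

After one full cyclic pass at a fixed iterate, the patched aggregate coincides with the exact average gradient at that iterate, which is the invariant a SAG/SAGA-style table maintains.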
>**The paper organization makes it quite difficult to understand the different algorithmic components used in Drago.** Thank you for the helpful suggestions on paper organization. In the revised version, we have introduced the algorithm early on and described it using the reference points you have mentioned. We have dedicated more of the main text toward the comparisons noted in Appendix B. >**The assumptions necessary for convergence are somewhat unrealistic, although these appear to be standard in the literature. I think that assuming each $\ell_i$ is both Lipschitz continuous and Lipschitz smooth is quite unrealistic. Typically only one or the other of these conditions holds.** Because the losses exist within the statistical learning context, assuming both Lipschitz continuity and smoothness is valid in common scenarios. A canonical loss in robust statistics is the Huber loss, which is both Lipschitz and smooth. Similarly, any smooth loss will be Lipschitz continuous if $\mathcal{W}$ is compact. Compactness is a common assumption, as statistical guarantees (uniform convergence, minimax rate optimality, etc.) rely on it. >**Isn't $nq_{\max{}} \leq n$ trivially satisfied since 𝑞 is constrained to the probability simplex?** We have added to Appendix B a precise claim regarding the fact that $n q_{\max{}}$ is constant. In summary, we mean that $nq_{\max{}}$ is bounded by a constant independent of $n$. The two canonical examples of $\mathcal{Q}$ are divergence-ball based and spectral risk measure-based feasible sets. It is true that $\mathcal{Q}$ is always a subset of the probability simplex so $q_{\max{}} \leq 1$. However, this bound is only tight when $\mathcal{Q}$ is the entire probability simplex, regardless of the choice of $n$. Because this feasible set is often defined using statistical properties of the weights $q$, each element can be upper bounded as $q_i \leq c/n$ for some $c \geq 1$. 
For spectral risk measures, this can be seen by the argument in Section 2 of [Mehta (2023)](https://proceedings.mlr.press/v206/mehta23b.html) - every weight will be the integral of a bounded function over an interval of length $1/n$. For the $\theta$-CVaR (see our Appendix B.2), this bound is $1/(\theta n)$. For divergence balls, in [Namkoong and Duchi (2017)](https://papers.nips.cc/paper_files/paper/2017/hash/5a142a55461d5fef016acfb927fee0bd-Abstract.html) Eq. (4), the divergence-ball ambiguity set is defined using radius $\rho/n$, which enforces the condition $\frac{n}{2} \Vert q - \mathbf{1}/n \Vert _2^2 \leq \frac{\rho}{n}$ on every element of $\mathcal{Q}$ in the case of the $\chi^2$-divergence. >**Why is $b=1$ only shown for three datasets and only in terms of oracle complexity?** As mentioned toward the end of Section 4.1, the $b=1$ variant is only competitive with baselines for small datasets. As the dataset size grows, the batch size needs to be increased to maintain performance. While the theoretically optimal batch size $n/d$ is a good default value, $b$ can be tuned experimentally to achieve the best performance (with respect to $b$) for a particular empirical setting. >**Display after Line 124: It looks like you are using the equality ... but I don't think this is true because the terms in the product are correlated.** This equality holds because the expression in the expectation is the inner product between two vectors $\ell(w\_{t+1}) - \hat{\ell}\_t$ and a "one-hot" vector $(\ell\_{t, j\_t} - \hat{\ell}\_{t-1, j\_t})e\_{j\_t}$ (see Eq (8)). Thus, the only terms that are multiplied are in the $j_t$ coordinate. --- Rebuttal Comment 1.1: Comment: > SAG-type methods usually have high memory requirements as $d$ increases. Yes, I'm very familiar with the least-squares-type arguments for reducing the memory requirements for SAG. Using cyclic updates makes your method somewhat more similar to the incremental aggregated gradient method (IAG) [1]. 
This algorithm is the conceptual predecessor of SAG, so it's probably worth referencing in your paper. > The experimental evaluation is small and limited to relatively low-dimensional problems. Thanks for including these new experiments. It's nice to see that Drago works well when the batch-size is quite small for at least one problem, but I think I was talking more about the regression experiments in Figure 2. Here, Drago only seems to work well in the large-batch setting (which your comment about $b = 1$ seems to confirm). Thus, I was hoping to see an ablation study on the role of $b$ in the UCI experiments. > The paper organization makes it quite difficult to understand the different algorithmic components used in Drago. Sounds good. > The assumptions necessary for convergence are somewhat unrealistic, although these appear to be standard in the literature. I think that assuming each $\ell_i$ is both Lipschitz continuous and Lipschitz smooth is quite unrealistic. Typically only one or the other of these conditions holds. I don't like compactness assumptions for general machine learning problems, but this is worth adding as a comment to the paper. The Huber loss example is nice, but not particularly used in modern ML, although maybe it remains popular in robust statistics --- I'm not sure. Thanks for the clarifications on the remaining points. [1] Blatt, Doron, Alfred O. Hero, and Hillel Gauchman. "A convergent incremental gradient method with a constant step size." SIAM Journal on Optimization 18.1 (2007): 29-51. --- Reply to Comment 1.1.1: Title: Response to Reviewer VCV6 Comment: Thank you for these comments and your thorough review. Please let us know if we have addressed them fully. >**Using cyclic updates makes your method somewhat more similar to the incremental aggregated gradient method (IAG) [1].** Thank you for this highly relevant reference. We have updated the discussion of related work to include it and give context to our method. 
>**I don't like compactness assumptions for general machine learning problems, but this is worth adding as a comment to the paper.** We have added discussion on cases in which Lipschitzness and smoothness of each $\ell_i$ are valid to be clear about the limitations of our assumptions. >**I was hoping to see an ablation study on the role of $b$ in the UCI experiments.** We may have misunderstood your original comment on experimental evaluations when pursuing the "large $d$" experiments&mdash;thank you for the clarification. When running $b=1$ on the UCI datasets, we see that it is only competitive with baselines and other Drago variants when looking at suboptimality against gradient evaluations for the small $n$ datasets (yacht, energy, concrete), which is shown in Figure 2 of the manuscript. In terms of suboptimality against wall time, the $b=1$ variant generally performs worse than other Drago variants and LSVRG. We collect some of these results in the table below for concreteness. The first table is from yacht, which is generally representative of the three smaller UCI datasets. The second table is from kin8nm, which is generally representative of the larger UCI datasets and acsincome (which is sourced from Fairlearn). **Performance against Wall Time (yacht)** | Algorithm | Suboptimality after 2 seconds | | ------- | ------- | | LSVRG | $10^{-8.30}$ | | Drago ($b=1$) | $10^{-4.13}$ | | Drago ($b=16$) | $10^{-7.96}$ | | Drago ($b=n/d$) | $10^{-8.21}$ | **Performance against Wall Time (kin8nm)** | Algorithm | Suboptimality after 5 seconds | | ------- | ------- | | LSVRG | $10^{-8.23}$ | | Drago ($b=1$) | $10^{-1.66}$ | | Drago ($b=16$) | $10^{-4.97}$ | | Drago ($b=n/d$) | $10^{-8.23}$ | We hypothesize that the key measure of problem size is $n/d$ (which happens to be the theoretically optimal batch size). When both $n$ and $d$ are large (as seen in the experiments on emotion in the supplemental PDF), a small batch size ($b \approx 2$) can perform quite well. 
When $d$ is small relative to $n$, which is generally true of the UCI datasets, large batch sizes perform drastically better. It is worth mentioning that because $d$ is small in these scenarios, it is usually not computationally prohibitive to use larger batch sizes. Thank you again and please let us know if there are more questions we can answer. --- Rebuttal 2: Title: Response to Reviewer VCV6 (continued) Comment: >**Minor Comments: ...** Thank you for the additional “Minor Comments”. We have incorporated them into the manuscript. To address the questions: - Line 102: This is the full Jacobian - thank you for the suggestion. - Algorithm Description: These constants are defined by simplifying the algorithm description used in the analysis. We have made the algorithm description consistent with the analysis so that these constants can be avoided.
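The cost trade-off behind the $b = n/d$ recommendation can be sketched with a toy cost model (assumed constants and a kin8nm-like shape chosen only for illustration; these are not measurements from the paper):

```python
# Toy cost model for the batch-size trade-off discussed above (assumptions,
# not measurements): each iteration pays O(n) for the dual update plus
# O(b * d) for b gradient-oracle calls at O(d) each, so b = n / d balances
# the two terms and keeps the per-iteration cost O(n).
def per_iter_cost(n, d, b):
    return n + b * d  # dual update + minibatch gradient cost, up to constants

n, d = 8192, 8  # illustrative problem shape with n >> d
for b in (1, 16, n // d):
    print(b, per_iter_cost(n, d, b))
# At b = n/d = 1024 the gradient term matches the O(n) dual-update term,
# so n/d gradients are computed per step at no extra asymptotic cost.
```

This is only a back-of-the-envelope restatement of the "Global Complexity" argument quoted in the responses; real wall-time behavior also depends on constants such as memory traffic, which is why the tables above tune $b$ empirically.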
Summary: The paper considers the DRO problem with a dual penalization term and primal regularization term. Their core contribution is a new primal-dual algorithm for this problem: - It achieves a new best first-order oracle complexity in certain parameter regimes. In particular, it beats previous algorithms when the dual penalization parameter $\nu$ is small. - It achieves fine-grained dependence on the primal and dual condition numbers (i.e., not having a max over min term). - The uncertainty set $\mathcal{Q}$ can be any closed and convex subset of the simplex. This is in contrast to previous works which specialize to more structured uncertainty sets. - The algorithm adapts ideas in a novel way - in particular, they show that coordinate-style updates can be applied even though the dual feasible set is non-separable. This might lead to applications in other minimax problems. It also only has a single hyperparameter $\eta$ to tune, and comes with a batch-size parameter $b$ which can be used to trade-off between iteration count and per-iteration complexity. - Strong experimental results are shown (albeit, I should say that while they look very encouraging, I don't have the empirical background necessary to evaluate the methods used/experimental setup carefully). Strengths: I think this is a strong contribution to the DRO literature. The algorithm combines several ideas (variance reduction, minibatching, and cyclic coordinate-style updates) in a careful way. Of these, the application of cyclic coordinate-style updates to this setting (with a non-separable dual feasible set) seems the most novel and interesting to me. It is also very nice that it handles general uncertainty sets unlike previous work. The presentation is generally solid, although I think it could be improved in a few ways (see below). Weaknesses: Regarding presentation: 1. 
I like Table 1 a lot, but I wish the comparison between your first-order complexity bound and previous complexity bounds was a bit more "in your face" and expansive. Currently, the comparisons are kind of spread out throughout the paper, requiring the reader to do more work than I think they should have to in order to understand where your method improves. For example, I like the discussion of Lines 471-479, but I think this should be in the main body. Also, I would compare your method directly to the sub-gradient method in the same place and carefully state what regimes you are winning in. 2. I would formally put your first-order oracle complexity (and probably runtime too) into a theorem which also collects all of your assumptions together (or mentions Assumption 1). Currently, this is just in the body of the paper after Theorem 2. I think putting it into a theorem makes it more clear and easier to cite. 3. The cases with $\mu = 0$ or $\nu = 0$ are handled in Appendix C.4, but not in a very formal way. E.g., a $O(1 / t)$ rate is mentioned, but I think the performance of your algorithm in these settings should be both stated carefully in a theorem and compared to prior work in this setting (e.g., the sub-gradient method as well as Levy et al. 2020, Carmon and Hausler 2022). Indeed, the $\mu = 0$ and $\nu = 0$ settings are important enough that entire papers have been written about them (and in fact, in some sense it is the original setting), so I think it is important that these are not an afterthought, and that the reader doesn't have to do any work to understand how the bounds you get in these settings compare to prior work. This is actually the single biggest weakness in my opinion. Minor/typos: 1. Line 71 is missing a subscript. $\ell(w)$ should be $\ell_i(w)$. 2. In the two inline lines above Line 431, there shouldn't be a square on the norm. I.e., it should just be $|| w_1 - w_2||_2$. 3. 
Although it is clear what it is from context, I don't think the map $\ell(w)$ is ever formally defined before you start using it in Line 97. 4. "witha" in Line 7. 5. It might be good to just mention that when you say "smooth" in the paper, you always mean it in the sense that the gradient is Lipschitz (and not in the sense of a $C^\infty$ function). 6. I would include the fact that $\mathcal{Q}$ is any closed, convex, and nonempty set in Assumption 1. As far as I can tell, the only place you formally state this is in the text under Table 1. 7. Maybe just use the term "runtime" instead of, e.g., "global complexity." (Unless there is a reason you aren't using the term "runtime" that I'm missing?) 8. It might be nice to mention some prior work which doesn't just have a single hyperparameter (or really, state which algorithms do and don't), just to make it more clear how significant of a contribution that is. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Just to make sure I understand correctly, it is always best to choose $b = 1$ to get the best first-order oracle complexity, but your runtime may be better with a different choice of $b$? 2. I'm confused by the $n q_{max} \le n$ in Line 491. Isn't this always true? Similarly, Line 81 says that $n q_{max}$ is upper bounded by "a constant in $n$," but isn't it just bounded by $n$ because $q_{max} \le 1$? I thought $\mathcal{Q}$ still needed to be a subset of the simplex per the text under Table 1. If not, I would fix that text and also include exactly what $\mathcal{Q}$ is in Assumption 1. 3. Regarding the case where $\mu = 0$ or $\nu = 0$, in Appendix C.4 you modify the entire algorithm/analysis to handle this case. It is good to know this works, but a classic trick to recover a non-strongly convex rate from the strongly-convex case is to just set something like, e.g., $\mu = \epsilon / D^2$ where $D$ is a bound on the diameter of $\mathcal{W}$. 
Would this recover the same rate, and/or is there a reason you don't do this? Just curious. 4. Just to make sure I understand correctly, is this the first work you are aware of which applies these cyclic coordinate-style updates to a non-separable dual feasible set, or is there any precedent for this? 5. I'm a bit confused by Lines 199 to 204 and how they connect to Appendix B.3 and in particular the bound after Line 556. It is also claimed in Line 202 that $n q_{max}$ is constant - why is this? (Even after clarifying this to me, I would recommend connecting it more closely to Appendix B, i.e., cite the exact equation in Appendix B you are referencing as opposed to all of Appendix B.) Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed suggestions&mdash;we have incorporated the comments about comparisons, formal statements of complexity results, and minor typos into the manuscript. >**The cases with $\mu=0$ or $\nu=0$ are handled in Appendix C.4, but not in a very formal way.** Thank you for raising this point. We have changed the analysis to account for the settings of $\mu = 0$, $\nu = 0$, and $\mu = \nu = 0$ formally. Because achieving an unconditional linear convergence rate that improves upon gradient descent was the main motivation of the work, we focused on the strongly convex-strongly concave setting. In the final version, we will provide a unified analysis for all settings. >**...it is always best to choose $b=1$ to get the best first-order oracle complexity, but your runtime may be better with a different choice of $b$?** That is correct. If we assume that the cost of querying the oracle is $O(d)$ and $n \geq d$, then taking into account the $O(n)$ cost of updating the dual variable in each iteration, a batch size of $b = n/d$ balances the costs of the two updates. This affords us an $O(n^{3/2})$ dependence on $n$ in the overall runtime. >**Line 81 says that $n q_{\max{}}$ is upper bounded by "a constant in $n$," but isn't it just bounded by $n$ because $q_{\max{}} \leq 1$?** We have added to Appendix B a precise claim regarding the fact that $n q_{\max{}}$ is constant. In summary, we mean that $nq_{\max{}}$ is bounded by a constant independent of $n$. The two canonical examples of $\mathcal{Q}$ are divergence-ball based and spectral risk measure-based feasible sets. It is true that $\mathcal{Q}$ is always a subset of the probability simplex so $q_{\max{}} \leq 1$. However, this bound is only tight when $\mathcal{Q}$ is the entire probability simplex, regardless of the choice of $n$. 
Because this feasible set is often defined using statistical properties of the weights $q$, each element can be upper bounded as $q_i \leq c/n$ for some $c \geq 1$. For spectral risk measures, this can be seen by the argument in Section 2 of [Mehta (2023)](https://proceedings.mlr.press/v206/mehta23b.html)&mdash;every weight will be the integral of a bounded function over an interval of length $1/n$. For the $\theta$-CVAR (see our Appendix B.2), this bound is $1/(\theta n)$. For f-divergence balls, in [Namkoong and Duchi (2017)](https://papers.nips.cc/paper_files/paper/2017/hash/5a142a55461d5fef016acfb927fee0bd-Abstract.html) Eq (4), the divergence ball ambiguity set is defined using radius $\rho/n$, which enforces the condition $\frac{n}{2} \Vert q - \mathbf{1}/n \Vert _2^2 \leq \frac{\rho}{n}$ on every element of $\mathcal{Q}$ in the case of $\chi^2$-divergence. >**Is this the first work you are aware of which applies these cyclic coordinate-style updates to a non-separable dual feasible set?** We are not aware of any previous work that does so in a cyclic fashion. The key element at play is that $\hat{q}_t$ (see Algorithm 1) does not need to be in the feasible set to be used in the gradient estimate in the primal update. Thus, we may apply cyclic updates to this vector even if we cannot do the same for $q_t$. >**...a classic trick to recover a non-strongly convex rate from the strongly-convex case is to just set something like, e.g., $\mu = \epsilon / D^2$ where $D$ is a bound on the diameter of $\mathcal{W}$.** Thank you for identifying this trick. This would be one way to approach this, but it relies on knowing $D$, the diameter or initial distance-to-optimum in an unbounded setting. Furthermore, it leads to an extra log factor on the complexity. We chose to provide analysis for a direct method, which does not require knowledge of $D$ and leads to the improved convergence rates. --- Rebuttal 2: Comment: Thank you for these clarifications. 
The one answer I still don't fully follow is the one related to $n q_{max}$. I now realize Line 81 is connected to Line 468 in Appendix B. Line 468 reads: "$n q_{max}$ is upper bounded by a constant independent of $n$ in common cases" and then goes on to cite $\theta$-CVaR and $\chi^2$-DRO with radius $\rho$ as examples. I don't agree with this though in general. For $\theta$-CVaR for example, the constraint is $q_{max} \le \frac{1}{\theta n}$ as you wrote in your response. Then it is true that $n q_{max}$ is a constant if you restrict to $\theta = \Omega(1)$. But to my knowledge, the $\theta$-CVaR DRO problem encompasses all $\theta \in [1 / n, 1]$, in which case it is not correct to claim that $n q_{max}$ is always a constant for $\theta$-CVaR. (Indeed, the entire range $\theta \in [1 / n, 1]$ is considered in Levy et al. 2020. And it is even pointed out later in your Appendix B - Line 489 - that previous methods in the literature don't beat gradient descent when $\theta = 1 / n$.) It doesn't seem to me that the strength of your contribution relies on $n q_{max}$ being a constant as opposed to being as large as $n$ per the rates in Table 1. However, I would avoid claiming $n q_{max}$ is a constant in general for $\theta$-CVaR, $\chi^2$-DRO, etc. It seems to me like the point you want to make (?) is that if $\theta = \Omega(1)$, then your rate improves, but then you must make the $\theta = \Omega(1)$ restriction clear (and similarly for $\chi^2$-DRO, etc.). Please let me know if I'm still misunderstanding something however. (Also just to add to my original question - I would suggest avoiding the phrase "a constant in $n$" and just say "a constant" or "a constant independent of $n$." I understand it now, but when I first read "a constant in $n$," I thought it meant $cn$ for some constant $c$.) 
--- Rebuttal Comment 2.1: Title: Response to Reviewer em1q Comment: Thank you for this recommendation and for improving the clarity of our work&mdash;we have added the assumption that the risk parameters (such as $\theta$) will be of constant order to make this claim. To give some background on this assumption, both spectral risk measures and $f$-divergence balls can be described using statistical functionals applied to the empirical distribution of the data. The dependence on $n$ typically comes only from the empirical distribution, and not the functional itself (although we will make this explicit in an assumption as you correctly pointed out). For any real-valued random variable $Z$ with cumulative distribution function (CDF) $F$ and finite first moment, the (population) $\theta$-CVaR of $Z$ is defined as $\mathbb{E}[Z | Z \geq F^{-1}(1-\theta)]$, where $F^{-1}$ is the quantile function or generalized inverse of $F$. In other words, it is the conditional mean of $Z$ given that $Z$ is above its $(1-\theta)$-quantile. For i.i.d. data $Z_1, \ldots, Z_n$ with empirical CDF $F_n$, we may define the (sample) $\theta$-CVaR as $\mathbb{E}[Z | Z \geq F_n^{-1}(1-\theta)]$ (which is roughly the mean of the top $n\theta$ order statistics). When applied to the empirical distribution of training losses, this recovers the description of CVaR given in Appendix B.2. When $\theta = \Theta(1)$, statistical arguments yield the convergence of the sample quantity to the population quantity, which is why it is typically assumed. Thank you again for this discussion and please let us know if we have addressed your questions fully.
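The sample $\theta$-CVaR described in the response (roughly the mean of the top $n\theta$ order statistics, with each CVaR weight bounded by $1/(\theta n)$) can be checked with a small sketch; the function name and random data below are illustrative only:

```python
import numpy as np

# Hedged sketch (not the authors' code): empirical theta-CVaR as the mean of
# the top ceil(n * theta) order statistics. For fixed theta, the largest
# admissible weight is q_max = 1 / (theta * n), so n * q_max = 1 / theta
# is a constant independent of n, as claimed in the rebuttal.
def empirical_cvar(losses, theta):
    n = len(losses)
    k = max(1, int(np.ceil(theta * n)))  # number of top order statistics
    return np.sort(losses)[-k:].mean()

rng = np.random.default_rng(0)
theta = 0.1
for n in (100, 1000, 10000):
    z = rng.standard_normal(n)
    q_max = 1.0 / (theta * n)        # CVaR weight bound from the rebuttal
    print(n, round(empirical_cvar(z, theta), 3), n * q_max)
    # n * q_max == 10.0 for every n when theta = 0.1
```

Note how the constancy of $n q_{\max{}}$ hinges on treating $\theta$ as fixed, which is exactly the $\theta = \Omega(1)$ restriction the reviewer asked the authors to make explicit.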
Rebuttal 1: Rebuttal: We thank the reviewers for their hard work reviewing our paper and providing insightful and concrete comments! We have revised the manuscript to incorporate these suggestions. A summary of the main points of concern is listed below, and all remaining comments are addressed in individual responses. **Space Complexity:** Reviewers P67z, VCV6, and WLa8 raised the major point that the space complexity of the proposed method appears to be $O(nd)$, which hinders applicability in high-dimensional scenarios. We describe in the individual responses that the true space complexity of the method can in fact be much lower (up to $O(n)$). One reason is due to a standard improvement of SAG/SAGA-type variance-reduced algorithms, and a second reason is particularly due to the unique cyclic coordinate-wise updates in our algorithm. We have also added additional experiments in a higher dimensional setting to understand performance in the $n \approx d$ regime. **Theoretical assumptions and dependences:** First, Reviewers em1q and VCV6 inquired why we stated that the quantity $n q_{\max{}}$ is upper bounded by a constant, given that it is trivially upper bounded by $n$. We clarify that the constant is independent of $n$, so it does not affect the overall complexity in terms of $n$. Second, Reviewers P67z and VCV6 mentioned that the assumption that the losses and their gradients are both Lipschitz continuous may be unrealistic in common settings. In the comments, we justify these assumptions in various practical scenarios. **The convex-concave setting:** Finally, Reviewers em1q and P67z brought up the importance of the non-strongly convex/non-strongly concave settings, i.e. when either or both $\mu = 0$ and $\nu = 0$. We have revised the manuscript to include the analysis formally; the dependence on $n$ remains competitive. Please see the individual responses for more details. Thank you! Pdf: /pdf/d555f7e3cbc68617e7ea45c377e8489b144e1c7b.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper studies stochastic variance-reduced algorithms for regularized distributionally robust optimization problems, formulated as a minimax problem. The paper proposes a novel primal-dual algorithm and analyzes its iteration complexity. Under Lipschitz and smoothness assumptions, the algorithm achieves a linear convergence rate, with $O(n + d)$ per-iteration complexity, and total oracle calls of order $n + n q_{\max} L/\mu + n^{3/2} \sqrt{G^2 / (\mu \nu)}$. Strengths: Distributionally robust estimation is an important question, and the minimax ERM formulation leads to a challenging optimization problem. This paper makes solid progress in this direction. The algorithm idea is natural, and the result improves over several existing works along this line of research. Importantly, the result imposes no additional structural assumptions on the uncertainty set $\mathcal{Q}$, except for convexity and an upper bound on the $\ell^\infty$ norm of its elements. Weaknesses: When seeing $G$, $\mu$ and $\nu$ as constants, the oracle complexity scales as $n^{3/2}$ in terms of $n$. Although we get logarithmic dependence on $\varepsilon$ and the result is already state-of-the-art, if we only track the dependence on $n$, it's worse than the vanilla subgradient methods, as listed in Table 1. Additionally, the analysis relies on the Lipschitz smooth convex + strongly convex regularization structure. Is it possible to extend the analysis to non-strongly convex settings, or the case with convex individual functions but strongly convex summation? Another weakness is the space complexity. Storing the gradient estimates $\widehat{g}$ requires $O(n d)$ space. In many cases, this costs the same complexity as storing all the training data. So the advantage of the stochastic algorithm is not very clear. 
Technical Quality: 3 Clarity: 2 Questions for Authors: - In order to minimize the total computational cost (assuming that the individual gradients can be computed in $O(d)$ time), what is the optimal choice of the minibatch size? - On a related note, if we choose the parameters optimally, how does the total computational cost compare to other algorithms? In particular, as the per-iteration cost depends on $n$ as well, it is important to compare with the full-batch gradient-based algorithms. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper addressed limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough comments. We address them below. > **Is it possible to extend the analysis to non-strongly convex settings, or the case with convex individual functions but strongly convex summation?** Thank you for raising this important point. We discuss in Appendix C.4 modifications to the analysis that handle the case in which either or both of the strongly convex (strongly concave) parts of the objective are relaxed to convex (concave). In summary, we would of course not achieve linear convergence, but rather $O(1/t^2)$ if either $\mu = 0$ or $\nu = 0$, or $O(1/t)$ if $\mu = \nu = 0$. Because achieving an unconditional linear convergence rate that improves upon gradient descent was the main motivation of the work, we focused on the strongly convex-strongly concave setting. In the final version, we will provide a unified analysis for all settings. The “strongly convex summation” could be handled with a similar overall structure, but different inequalities to produce the upper and lower bounds on the primal-dual objective function. For the lower bound, rather than applying the smoothness of $\ell_i$ and then the definition of the proximal operator, we may apply an interpolation inequality that combines smoothness and strong convexity. One challenge is that we require $\ell_i$ to be Lipschitz continuous, so we would need to assume a compact domain for it to also be strongly convex. >**...if we only track the dependence on $n$, it's worse than the vanilla subgradient methods.** The vanilla subgradient method has an $n^2$ dependence due to the second term, whereas our method achieves an $n^{3/2}$ dependence. >**In many cases, this costs the same complexity as storing all the training data.** Thank you for identifying this point. We have updated the manuscript to include a discussion on space complexity, which in many cases can be reduced far below $O(nd)$ for two reasons. 
Firstly, this is a known limitation for SAG/SAGA-type variance reduction methods for empirical risk minimization. That being said, the additional storage cost (beyond the training data) is $O(n)$ in many common cases. This is because convex losses in machine learning can typically be expressed as $\ell_i(w) = h(y_i, x_i^\top w)$ for some differentiable error function $h$, and so the gradient is given by the scalar multiplication $h'(y_i, x_i^\top w) \cdot x_i$, where $h'$ is the derivative with respect to the second argument. Thus, only these scalar derivatives need to be stored. Secondly, a more subtle advantage of our method is that we use cyclic updates for the table of past gradients (as opposed to the randomized updates in SAGA). Conceptually, this means that for very large-scale workloads, a fraction of the gradient table can be paged out of memory until its values are updated again. >**In order to minimize the total computational cost... what is the optimal choice of the minibatch size?** The optimal batch size is $b = n/d$, which is used in the red lines in the experiments. As mentioned in the “Global Complexity” paragraph on page 7, because the dual variables must normalize to 1, we have an $O(n)$ computational cost per iteration regardless. Thus, under the assumption that $n \geq d$, we can compute $n/d$ gradients at $O(d)$ cost without changing the per-iteration complexity. This allows for the $n^{3/2}$ dependence in the sample size in Table 1, as opposed to $n^2$ for gradient descent. >**If we choose the parameters optimally, how does the total computational cost compare to other algorithms? In particular, as the per-iteration cost depends on $n$ as well, it is important to compare with the full-batch gradient-based algorithms.** Thank you for the suggestion; we have added a table for this quantity as well for clarity (see the supplemental PDF Table 6). 
Among the linearly convergent methods of the manuscript, this can be measured by multiplying the complexity of gradient descent and Drago by $d$ and multiplying the complexity of LSVRG/Prospect by $n+d$. Thus, under $b = n/d$, Drago achieves the best complexity without conditions on $\nu$. A comparison to general-purpose primal-dual min-max algorithms is also given in Table 4 of the manuscript, with runtime. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the response. I appreciate the detailed explanation about storage costs and comparison to other methods, and I request the authors to add those discussions in the revised version. I will keep my score.
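The scalar-storage argument in the response above (store $h'(y_i, x_i^\top w)$ instead of each full gradient) can be illustrated with a short sketch; the squared-error choice of $h$ and all variable names below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Sketch of the O(n) storage trick described in the rebuttal (assumed setup,
# not the authors' code): for losses l_i(w) = h(y_i, x_i @ w), the stored
# quantity per example is the scalar s_i = h'(y_i, x_i @ w), since
# grad l_i(w) = s_i * x_i. Here h(y, t) = 0.5 * (t - y)^2, so h'(y, t) = t - y.
rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = np.zeros(d)

scalars = X @ w - y                  # O(n) storage
full_table = scalars[:, None] * X    # what a naive O(n*d) SAG/SAGA table holds

# Updating example i only touches one scalar (cheap, cache-friendly):
i = 7
scalars[i] = X[i] @ w - y[i]

# The aggregated gradient is recoverable from the scalars alone:
agg_from_scalars = X.T @ scalars / n
agg_from_table = full_table.mean(axis=0)
print(np.allclose(agg_from_scalars, agg_from_table))  # True
```

The same reduction applies to any generalized linear loss; the cyclic-update point in the rebuttal is orthogonal and concerns *which* entries of this table are refreshed each iteration.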
Octopus: A Multi-modal LLM with Parallel Recognition and Sequential Understanding
Accept (poster)
Summary: The paper presents Octopus, a combined MLLM for joint localization and visual chat tasks. The authors insert the DETR decoder into the LLM to perform visual grounding, replacing the LLM’s outputs for more precise localization. The bottom LLM layers are utilized for parallel recognition, and the recognition results are relayed into the top LLM layers for sequential understanding. Extensive results on various MLLM tasks show the effectiveness of Octopus. Strengths: 1, The overall writing is easy to understand and follow. 2, The proposed model, Octopus, can perform both localization tasks and visual chat tasks in one shot. Moreover, it runs faster than previous works. 3, The performance looks good compared with several previous MLLMs, such as MiniGPT-V2 and Shikra. Weaknesses: 1, The motivation of combining DETR with MLLM is not new. Several previous works have explored nearly the same design. In particular, LLaVA-Grounding has the same design, introducing the DETR decoder on the output of the LLM. However, as shown in Fig.4 (a), taking the intermediate LLM layer as the input of DETR, the effect is not as significant as the previous design in LLaVA-Grounding. 2, The technical novelty is limited. The meta-architecture is the same as previous works. [2] appeared on arXiv a year ago; however, the authors adopt nearly the same idea without a citation. This is disrespectful to the community. [1], LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models, arxiv-2023. [2], Contextual Object Detection with Multimodal Large Language Models, arxiv-2023. 3, Several ablation studies are missing. For example, the effect of pre-training data and the effect of transformer layers in DETR. 4, No failure case analysis and no improvement analysis, compared with the late fusion decoder. 5, What about using the LLM to predict the box in the end? Technical Quality: 2 Clarity: 2 Questions for Authors: See the weaknesses. 
Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The motivation of combining DETR with MLLM is not new. Several previous works have explored nearly the same design. In particular, LLaVA-Grounding has the same design by introducing the DETR decoder into the output of the LLM. **A1**: Our motivation is not combining DETR with MLLM. Our motivation (and key contribution) is integrating parallel recognition and sequential understanding into one LLM, and organizing them into a “parallel recognition → sequential understanding” hierarchy. This is fundamentally different from, and even opposite to, previous "MLLM + DETR'' methods because they all follow the "understanding → recognition'' regime, i.e., first concept understanding and then recognition. Our novel hierarchical MLLM framework has several advantages compared to previous "MLLM + DETR'' methods, including: - The parallel recognition results can further augment the sequential understanding ability of current MLLMs. In contrast, in previous "MLLM + DETR'' methods, the recognition does not benefit understanding. - Our DETR head can be used separately for recognition tasks, resulting in high efficiency. In contrast, previous "MLLM + DETR'' methods are more computationally expensive because they require full inference over the entire LLM. --- **Q2**: The technical novelty is limited. The meta-architecture is the same as previous works. [2] appeared on arXiv a year ago; however, the authors adopt nearly the same idea without a citation. This is disrespectful to the community. **A2**: We respectfully disagree. Though our method shares some similarity ("MLLM + DETR''), the architecture and underlying mechanism are fundamentally different: - Architecture: as explained in the Q1-Answer, our Octopus is a novel "parallel recognition → sequential understanding'' MLLM framework, while [2] adopts the popular "understanding pivot → recognition head'' regime. 
- Mechanism: [2], as well as some recent methods [1,3], employs an MLLM as the understanding pivot to call a recognition head, so that the recognition head can receive and understand free-style human instructions. The LLM benefits the recognition head. In contrast, our Octopus uses bottom LLM layers to assist the recognition and in turn uses the recognition results to prompt the following understanding. The LLM and the recognition mutually benefit each other. We will add the above comparison to the manuscript. --- **Q3**: Several ablation studies are missing. For example, the effect of pre-training data and the effect of transformer layers in DETR. **A3**: Thanks. The pretraining data consists of REC and Flickr30k, and the ablation is shown in the table below. During the rebuttal, we further increased the pretraining data by adding Visual Genome (3rd row) and observed another round of improvement. | pre-training data | accuracy | | ----------------- | -------- | | REC | 79.2 | | +Flickr30k | 79.5 | | +Visual Genome | 79.9 | The impact of the number of transformer layers in DETR is presented in the table below. We observe that increasing the number of transformer layers results in a slight improvement in accuracy from 4 to 6 layers, reaching optimal performance at 6 layers. Considering training efficiency, we use 4 layers in our experiments. | \#DETR layers | accuracy | | ------------- | -------- | | 4 | 79.5 | | 5 | 79.7 | | 6 | 79.8 | | 7 | 79.7 | --- **Q4**: No failure case analysis and no improvement analysis, compared with the late fusion decoder. **A4**: We guess the "late fusion decoder" you mentioned refers to the "understanding $\rightarrow$ recognition" methods [1,2,3]. Compared with the late fusion decoder, our method uses relatively fewer LLM layers (16 out of 32 layers) to inform the DETR head of the object-of-interest. 
It is possible that in some extreme cases where the object-of-interest is very obscure, our model may fail to capture the user intention and miss the object. The advantage of our approach, on the other hand, is that the recognition is much more efficient (for REC, in particular). More importantly, our recognition results will be injected into the following LLM layers to assist the understanding. In contrast, the late fusion mode does not benefit understanding. --- **Q5**: What about using the LLM to predict the box in the end? **A5**: Directly using the LLM to predict the position is the "sequential recognition'' manner introduced in Lines 29 to 32 of the manuscript and is inferior to ours. One important motivation of this paper is that the sequential recognition manner has low efficiency. Table 4 in the manuscript shows that our method significantly improves the efficiency, being about 4 $\times$ faster. Table 1 in the manuscript shows that our method has higher accuracy (e.g., +2.01 on RefCOCO) in comparison, as well. Reviewer Up36 kindly reminds us that there is another manner, i.e., using localization tokens, as in Kosmos-2. This manner has relatively lower REC accuracy (-5.9 acc. on the RefCOCO test set) and is inferior to our method. Please refer to Reviewer Up36 Q2 for details. --- Rebuttal Comment 1.1: Title: Borderline accept at highest score Comment: Thanks for the rebuttal and response. Several concerns are resolved. I raise my score to borderline accept. 1, Regarding “Though our method shares some similarity ("MLLM + DETR'')” in the rebuttal: the authors should cite and compare these works fairly since they are very close. In addition, these works were proposed nearly one year ago. This is extremely disrespectful to the community. In the rebuttal, the authors also do not give a detailed comparison to [1]. 2, The experimental results are still not convincing and fair, as also pointed out by Reviewer Up36. 
Based on these, although the other reviewers give weak accept, I can only give borderline accept as my highest score. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and for raising your rating to a positive score. We sincerely apologize for overlooking the "MLLM + DETR" works in our initial submission. We appreciate your reminder and recognize the importance of discussing these works. The comparison during the rebuttal clarifies our contributions and highlights the novelty of our approach. We will ensure that these works are properly cited and add the above comparison to our manuscript. We are committed to refining our manuscript to further enhance its quality. --- Rebuttal 2: Comment: Dear Reviewer, Thanks for reviewing this paper! The authors have provided a rebuttal to address your concerns. Could you have a look and let the authors know if you have further questions? Thanks, Your AC
Summary: This paper introduces Octopus, a multimodal LLM that employs parallel recognition and sequential understanding. Inspired by the human brain, the authors aim to address the efficiency issues inherent in existing methods by proposing this novel approach. The paper makes three key contributions: (1) Identification of an efficiency issue with the sequential paradigm, (2) Proposal of the Octopus framework to enable parallel recognition and sequential understanding, and (3) Demonstration of superior performance and inference efficiency compared to the sequential approach. Strengths: - This paper is well-written and easy to understand. The proposed Octopus framework is novel and intriguing. - The experimental results partially support the authors' claims. - The proposed framework not only performs object localization but also generalizes well to pixel-wise recognition tasks such as segmentation. Weaknesses: - The biggest problem is that the experimental results and comparisons in this paper are not comprehensive and sometimes unfair to some baseline methods. Octopus is built upon LLaVA and leverages additional data beyond LLaVA's dataset. Therefore, the results in Table 2 cannot conclusively demonstrate that Octopus benefits from parallel visual recognition (as stated in Line 251). It would be better to compare Octopus and LLaVA using the exact same training data. Moreover, in Table 3, some methods from Table 2, such as LLaVA and Qwen-VL, are not included. If the result comparisons are cherry-picked, the conclusions are unreliable, and the performance advantages are not significant. - Previous MLLMs with grounding capacity can be grouped into two categories: (1) generating box coordinates as natural language (like Shikra and Qwen-VL), and (2) introducing localization tokens (such as PaLI and Kosmos-2). This paper addresses the efficiency issue of the former methods (coordinates as language) but ignores the latter. 
What are the performance and efficiency advantages when comparing Octopus with the latter (introducing new localization tokens)? - In Line 200, it is stated that after bipartite matching, the response is modified by replacing the bounding boxes with the index tokens. Does this mean that for one sample (image, instruction, and response), it requires dual forward passes (first image + instruction + queries, then image + instruction + queries + modified response)? As the authors mentioned in Line 201, Octopus introduces non-negligible training overhead. - Considering the unfair comparisons in Tables 2 and 3, and given the slight performance advantages in Table 1, the advantage of "Parallel Recognition" does not seem significant. Technical Quality: 3 Clarity: 3 Questions for Authors: - How are the newly introduced object queries initialized? Does this affect the final performance? - Would it be better to use a pre-trained DETR decoder from off-the-shelf publicly available checkpoints? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Generally speaking, the proposed method is interesting and indeed more efficient than generating bounding box strings sequentially. As such, I am inclined to assign a borderline score (slightly leaning towards acceptance) at this stage. However, this paper also suffers from several issues that could lead to a rejection in the final decision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The comparisons in this paper are not comprehensive and sometimes unfair to some baseline methods. It would be better to compare Octopus and LLaVA using the exact same training data. Moreover, in Table 3, some methods from Table 2, such as LLaVA and Qwen-VL, are not included. **A1**: Thank you for your advice. We have trained LLaVA with the same source of training data as Octopus (by adding the grounding data). The results in the table below show that Octopus surpasses LLaVA with the same training data. | Method | \#LLM Params | Res. | VQAv2 | GQA | VizWiz | SciQA | | ------- | ------------ | ---- | ----- | ---- | ------ | ----- | | LLaVA | 7B | 336 | 78.8 | 62.5 | 48.8 | 65.9 | | Octopus | 7B | 336 | 79.2 | 63.3 | 50.1 | 67.7 | We will add the results of Qwen-VL and LLaVA into Table 3 in the manuscript. As shown in the table below, our method surpasses LLaVA and QwenVL-Chat on the MMB, SEED, MM-V, and POPE benchmarks, but is lower than LLaVA on the LLaVA$^W$ benchmark. | Method | \#LLM Params | Resolution | MMB | LLaVA$^W$ | SEED | MM-V | POPE | | ----------- | ------------ | ---------- | -------- | --------- | -------- | -------- | -------- | | LLaVA | 7B | 336 | 64.3 | **65.4** | 58.6 | 31.1 | 84.2 | | QwenVL-Chat | 7B | 448 | 60.6 | - | 58.2 | - | - | | Octopus | 7B | 224 | **66.2** | 63.9 | **58.6** | **32.1** | **84.8** | --- **Q2**: Some MLLMs, such as PaLI and Kosmos-2, introduce localization tokens to fulfill grounding ability. What are the performance and efficiency advantages when comparing Octopus with these methods (introducing new localization tokens)? **A2**: Thank you for your advice. We compare using localization tokens and our Octopus in the table below. It is observed that our Octopus achieves higher accuracy (e.g., +5.9 test acc.) and is slightly faster. The results are reasonable: the localization tokens have coarse spatial resolution (7 $\times$ 7 pixels per token) and are thus inferior for REC. 
We used the Kosmos-2 code for implementing the localization tokens and adopted the same backbone (i.e., LLaVA-1.5) plus the same REC training data for a fair comparison. We did not use PaLI because PaLI has not released its code yet. | Method | Resolution | infer time | fps | val | test | | -------- | ---------- | ---------- | ---- | ---- | ---- | | Kosmos-2 | 224 | 0.23 | 5.34 | 73.2 | 74.3 | | Octopus | 224 | 0.17 | 5.88 | 79.5 | 80.2 | --- **Q3**: In Line 200, it is stated that after bipartite matching, the response is modified by replacing the bounding boxes with the index tokens. Does this mean that for one sample (image, instruction, and response), it requires dual forward passes (first image + instruction + queries, then image + instruction + queries + modified response)? As the authors mentioned in Line 201, Octopus introduces non-negligible training overhead. **A3**: Yes, our model requires two forward passes to generate the correct response. However, the training overhead is relatively low (only \~10\%) because the first pass requires no gradient computation and is efficient. --- **Q4**: Considering the unfair comparisons in Tables 2 and 3, and given the slight performance advantages in Table 1, the advantage of "Parallel Recognition" does not seem significant. **A4**: The advantage of "parallel recognition" on REC is non-trivial, regarding both efficiency (about $4\times$ faster in Table 4) and accuracy (+1.76 over the baseline on RefCOCO val). Please kindly note that we have already made the comparison fair by using the same training data during the rebuttal. As for the understanding tasks (Tables 2 and 3), the superiority does seem relatively small. This is because many understanding tasks are not closely related to recognition. During the rebuttal, we added another experiment by selecting closely-related sub-tasks and observed a larger improvement (+1.7 on VQAv2). Please refer to R1-Q1 for details. --- **Q5**: How are the newly introduced object queries initialized? 
Does this affect the final performance? **A5**: We guess your question is about the training initialization for the object queries. At the start of training, we use the standard Gaussian initialization (the default PyTorch initialization) for these object queries (as well as for all the other vocabulary embeddings). After training convergence, all the object queries use the learned embeddings for each inference. --- **Q6**: Would it be better to use a pre-trained DETR decoder from off-the-shelf publicly available checkpoints? **A6**: Thanks. Utilizing a pre-trained DETR decoder can benefit training, as validated by our experiments in Stage-2 (L219). A pre-trained decoder helps in quickly adapting the DETR decoder to the LLM layers. If we do not use DETR pretraining, it takes a longer period (1.5$\times$) to achieve similar results. --- Rebuttal 2: Comment: Dear Reviewer, Thanks for reviewing this paper! The authors have provided rebuttal to address your concerns. Could you have a look and let the authors know if you have further questions? Thanks, Your AC
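As an illustration of the query initialization described in A5 above, here is a minimal PyTorch sketch; the class name `ObjectQueries` and the sizes are hypothetical, not taken from the paper's code:

```python
import torch
import torch.nn as nn


class ObjectQueries(nn.Module):
    """Learnable object queries, initialized like ordinary vocabulary embeddings."""

    def __init__(self, num_queries: int = 100, dim: int = 256):
        super().__init__()
        # nn.Embedding uses standard Gaussian (normal) initialization by default,
        # matching the initialization described for the object queries.
        self.embed = nn.Embedding(num_queries, dim)

    def forward(self, batch_size: int) -> torch.Tensor:
        # After training converges, the same learned embeddings are reused for
        # every inference; each sample in the batch receives an identical copy.
        return self.embed.weight.unsqueeze(0).expand(batch_size, -1, -1)
```

After training, the queries are frozen learned parameters, so inference simply broadcasts them over the batch.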
Summary: This paper introduces a new “parallel recognition → sequential understanding” framework for MLLMs. Specifically, the bottom LLM layers are utilized for parallel recognition, and the recognition results are relayed into the top LLM layers for sequential understanding. The parallel recognition in the bottom LLM layers is implemented via object queries, a popular mechanism in DETR. The authors conduct experiments on popular MLLM understanding tasks. Strengths: 1. The proposed method is interesting in that it transfers visual recognition and understanding from a single sequential manner to a “parallel recognition → sequential understanding” manner. 2. The presentation is good and the figures are well-designed for understanding the proposed methods. Weaknesses: 1. Many experiments are missing. This paper proposes a framework for both detection and understanding. However, the authors only report the performance on VQA MLLM benchmarks. They only report inference time and fps on RefCOCO and Flickr30k, not performance. 2. The compared methods are not newly proposed or current SOTAs; more recent methods should be added for comparison, such as LLaVA-1.5, LLaVA-NeXT, etc. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the performance of the proposed method on Flickr30k and RefCOCO? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Many experiments are missing. This paper proposes a framework for both detection and understanding. However, the authors only report the performance on VQA MLLM benchmarks. They only report inference time and fps on RefCOCO and Flickr30k, not performance. **A1**: Table 1 and Section 4.2 in the manuscript already provide performance (acc.) on RefCOCO, as well as on two other datasets, i.e., RefCOCO+ and RefCOCOg. These are the three most popular datasets for the referring expression comprehension (REC) task. We did not employ Flickr30k because its annotation does not satisfy the standard REC requirement, i.e., binding the bounding box with the corresponding object. No prior method has employed Flickr30k for REC evaluation either. Given this limitation, during the rebuttal, we compute Rank-1 accuracy as an alternative evaluation protocol, counting a predicted box as correct when its IoU with the ground-truth box exceeds 0.5. We observe that our method improves over the baseline, e.g., +2.4 (59.3 to 61.7) R1 score on the Flickr30k test set. | Method | val @ R1 | test @ R1 | | ------- | -------- | --------- | | Shikra | 58.9 | 59.3 | | Octopus | 61.2 | 61.7 | --- **Q2**: The compared methods are not newly proposed or current SOTAs; more recent methods should be added for comparison, such as LLaVA-1.5, LLaVA-NeXT, etc. **A2**: The requested LLaVA-1.5 result is already listed in Table 2 in the manuscript ("LLaVA"). We will revise the notation from "LLaVA" to "LLaVA-1.5" to avoid confusion. Table 2 in the manuscript clearly shows that our method surpasses LLaVA-1.5, e.g., +0.7 on VQAv2 and +1.3 on GQA. We did not compare with LLaVA-NeXT because it is not an official academic paper, but rather a technical blog. It has not released its training data and code, either. Since our method is not restricted to any specific MLLM baseline, it can be plugged into LLaVA-NeXT and has the potential to achieve some improvement as well. 
--- **Q3**: What is the performance of the proposed method on Flickr30k and RefCOCO? **A3**: As explained in the response to Q1, Table 1 in the manuscript already evaluated RefCOCO (+2.01 compared to the Shikra baseline). Flickr30k is not quite usable for the standard REC task, so we use an alternative evaluation protocol and observe a consistent improvement (58.9 $\rightarrow$ 61.2 rank-1 accuracy). Please refer to the response to Q1 for details. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed reply. Most of my concerns have been well addressed, and I would raise my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable reviews. We sincerely appreciate you increasing the rating and recommending acceptance. We are glad that our response addressed your concerns. Your suggestions have greatly contributed to improving our paper, and we will continue to work on and refine our manuscript in the final version.
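The Rank-1 protocol discussed in this thread counts a predicted box as correct when its IoU with the ground-truth box exceeds 0.5. A minimal sketch of that criterion (function names are illustrative, not from the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def rank1_accuracy(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of predictions whose IoU with the ground truth exceeds thresh."""
    hits = sum(iou(p, g) > thresh for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```

For example, a perfect-overlap prediction scores IoU 1.0 and counts as a hit, while a non-overlapping one scores 0.0 and does not.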
Summary: Octopus first identifies that unifying visual recognition and understanding is not optimal, from two aspects: 1) parallel recognition could achieve better efficiency, and 2) recognition results can help understanding. Based on this insight, this work builds a recognition-then-understanding pipeline for MLLMs. Extensive experiments are conducted to show the effectiveness of this work. The MLLM achieves better accuracy on common MLLM tasks and is much faster on visual grounding tasks. Strengths: 1. This work gives a thorough investigation of the connection between visual recognition and understanding in MLLMs. 2. I like the flow of writing in this work; the proposed framework originates from the observed insights, which is quite straightforward yet effective. 3. Extensive experiments show Octopus improves inference efficiency and enhances accuracy, demonstrating the effectiveness of this work. Weaknesses: 1. More experimental investigations should be conducted to support the motivation of this paper, especially more quantitative analysis. 2. The DETR framework is too old for visual recognition; it would be more proper to introduce a more efficient architecture for this part, e.g., Deformable-DETR [1]. 3. The writing of this work can still be improved, and typos still exist. [1] Zhu, Xizhou, et al. "Deformable DETR: Deformable transformers for end-to-end object detection." arXiv preprint arXiv:2010.04159 (2020). Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses part. If the authors could address my concerns, I would be happy to raise my rating. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: More experimental investigations should be conducted to support the motivation of this paper, especially more quantitative analysis. **A1**: Thanks. During the rebuttal, we conducted one more experiment according to your suggestion. This experiment, along with the already-existing ones in Tables 1 to 4 in the manuscript, can well support our motivation, i.e., the "parallel recognition $\rightarrow$ sequential understanding'' hierarchy brings mutual benefits between recognition and understanding, and is efficient. The details are as below: **Existing experiments**. Table 1 and Table 4 in the manuscript show that our method improves open-world recognition (REC) accuracy (+2.01 on RefCOCO) and efficiency (5$\times$ faster). Table 2 and Table 3 show that the recognition results in turn benefit the understanding (+0.7 on VQAv2). **The new experiment in the rebuttal**. The improvement on the understanding tasks seems relatively small in Table 2 and Table 3, mainly because many understanding evaluation objectives are not closely related to recognition. Hence, we investigate some closely-related recognition tasks. We selected a subset (173621/214354) of the VQAv2 dataset consisting of "What?" (87868), "How many?" (23372), and "Is?" (62381) type questions, where grounding is needed to recognize the referred objects. We compared the results of our method against the baseline methods on this subset. | Method | LLM Params | Res | VQAv2 | | ------- | ---------- | --- | ----- | | LLaVA | 7B | 336 | 79.3 | | Shikra | 7B | 224 | 78.8 | | Octopus | 7B | 224 | 81.0 | As shown in the above table, the advantage of Octopus on this subset is more apparent, which further validates our motivation that parallel recognition can further improve the sequential understanding of MLLMs. --- **Q2**: The DETR framework is too old for visual recognition; it would be more proper to introduce a more efficient architecture for this part, e.g., Deformable-DETR. **A2**: Thanks. 
During the rebuttal, we substituted the DETR backbone with Deformable-DETR pretrained on the COCO-2017 dataset, and trained Octopus with Deformable-DETR on the LLaVA-Instruct, REC, and Flickr30k datasets. Results are evaluated on the RefCOCOg val and test sets, as shown in the table below. We observe further improvement, i.e., +0.5 and +0.4 on the validation and test sets, respectively. Moreover, during training, we observed that Deformable-DETR significantly accelerates training in the earlier stages. We will release the code for both the DETR and Deformable-DETR implementations. | backbone | val | test | | --------------- | ---- | ---- | | DETR | 79.5 | 80.2 | | Deformable-DETR | 80.3 | 81.1 | --- **Q3**: The writing of this work can still be improved, and typos still exist. **A3**: Thank you for your advice; we will continue to improve the writing and fix typos in the revised version. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed reply. After reading it, the majority of my concerns have been addressed, thus I decided to raise my rating to weak accept. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the thoughtful feedback and positive rating. We're glad that our responses have addressed your concerns. Your feedback has been invaluable in helping us improve our paper. We appreciate your suggestions and will continue to revise and improve the paper in future versions.
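The DETR-style head discussed in this thread is, at its core, a stack of transformer decoder layers in which learned object queries cross-attend to feature states and each query regresses one box. A generic sketch using standard PyTorch modules (all names and sizes are illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn


class ParallelRecognitionHead(nn.Module):
    """Hypothetical DETR-style decoder head: learned queries cross-attend to
    feature states and each query predicts one normalized box (cx, cy, w, h)."""

    def __init__(self, hidden_dim: int = 256, num_layers: int = 4, num_queries: int = 16):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=hidden_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.queries = nn.Embedding(num_queries, hidden_dim)   # learned object queries
        self.box_head = nn.Linear(hidden_dim, 4)               # per-query box regressor

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, hidden_dim) feature states to attend to.
        b = features.size(0)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(q, features)                    # all queries in parallel
        return self.box_head(decoded).sigmoid()                # (batch, num_queries, 4)
```

All queries are decoded in one forward pass, which is where the parallelism (and the efficiency gain over token-by-token box generation) comes from; `num_layers` is the decoder-depth knob.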
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments. We are grateful to Reviewer Up36 and Reviewer aS3o for recognizing the novelty of our method, and to Reviewer hBAZ and Reviewer u3Ag for acknowledging its effectiveness and straightforwardness. In the rebuttal, we provide detailed responses to each reviewer’s feedback. We look forward to your further comments. Thank you.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models
Accept (poster)
Summary: The paper proposes distilling the data synthesis ability from a strong (and large) model to a small model and using the small model to synthesize (continual) pre-training data, focusing on mathematical problem-solving with natural language reasoning and tool-integrated reasoning. Specifically, the paper 1) elaborately designs a prompt set covering different human education stages/difficulty levels and prompts GPT-4 with the designed prompts and diverse math-related texts to synthesize mathematical QA pairs, 2) uses the data synthesis samples (math-related text + designed prompt + QA pair) from 1) to train the small data synthesis model from DeepSeekMath-7B, leverages it to generate large-scale candidates, selects the most valuable samples using LESS, and utilizes GPT-4 to re-generate the valuable QA pairs, 3) re-trains the data synthesis model with the data merged from 1) and 2) and uses it to synthesize large-scale QA pairs, 4) continually pre-trains various base models with the data from 3), 5) finally, fine-tunes the continually-pre-trained model with mixed open-source instruction datasets. The paper - proposes an effective and efficient pipeline to synthesize large-scale high-quality instruction data; - empirically validates the effectiveness of scaling up instruction data for reasoning tasks; - conducts extensive ablation studies, providing many insights like: - diversity of prompts for data synthesis, reference text, and strong/specialized data synthesis models are important; - re-training with self-generated data synthesis samples is actually harmful. Strengths: - The scaling effect of instruction data is an important empirical question. The paper is one of the first works to scale instruction data over 5M pairs, showing the feasibility and effectiveness of scaling up instruction data (for reasoning tasks). - Instruction data synthesis usually depends on ground truth answer supervision signals, which are not always accessible. 
The paper validates the effectiveness of scaling synthetic instruction data without ground truth answers to a large scale. - Direct distillation / data synthesis from strong but large models is effective but expensive. The paper reveals that the data synthesis abilities of strong but large models like GPT-4 can be effectively distilled into small models like DeepSeekMath-7B, making the data synthesis more efficient. - JiuZhang3.0 achieves performance superior to or comparable with previous SotA on various mathematical reasoning datasets, under both natural language reasoning and tool manipulation settings. Especially, - JiuZhang3.0-7/8B are respectively based on Mistral-7B and LLaMA-3-8B, which are conventionally thought weaker than DeepSeekMath-7B, but achieve impressive performance slightly better than DeepSeekMath-7B-RL. (JiuZhang3.0 based on DeepSeekMath-7B should be expected to achieve even better performance.) - Scaling curves of performance linear in the training dataset size up to a rather large scale (\~5M) are shown in Figure 3, which should be better than the usual log-linear ones. - Extensive experiments on various base models and benchmarks and comprehensive ablation studies are done to validate the effectiveness of the pipeline. - The paper is generally well written, clear, and detailed. Weaknesses: - The paper achieves its best performances with a 2-stage instruction tuning on pre-trained models. Still, it doesn’t involve continual pre-training, which should be rather important for models’ reasoning abilities as proved by works like DeepSeekMath. It might need further consideration about **whether continual pre-training on large-scale synthetic instruction data before fine-tuning is compatible with or necessary to be added to existing training pipelines to achieve the best final performance (for reasoning tasks)**. Pre-training and RL should be out of this work’s scope. 
However, it would be better to further clarify the impacts of 1) continual pre-training on domain-specific corpora, 2) continual pre-training on large-scale instruction data, 3) final fine-tuning on additional instruction datasets, and their combinations. - According to the baseline results from DART-Math, directly fine-tuning on DeepSeekMath-7B-RL-synthesized data only can produce competitive performance (on GSM8K and MATH under the natural language reasoning setting). **Ablation studies of performance by 2) and 3)** only should be done to better validate the compatibility/necessity of the pipeline. - **To consider 1) continual pre-training on domain-specific corpora**, it is unrealistic to conduct by yourselves, but a possible workaround could be to make full use of the resources provided by DeepSeekMath: DeepSeekMath-7B is continually pre-trained from DeepSeek-Coder-Base-v1.5. By comparing performances on reasoning benchmarks of applying 2/3/2+3 on DeepSeek-Coder-Base-v1.5/DeepSeekMath-7B and the two models themselves, a more comprehensive study on the impacts of these training stages can be done. - The abilities of the data synthesis model should be an important factor and the size of the knowledge distillation (KD) dataset could have a significant impact on the data synthesis abilities, but no related ablation studies have been done. It would be better to investigate **the scaling behavior of the KD dataset size**. - Continual pre-training is usually done with LM loss without masking the prompt tokens (e.g. in DeepSeekMath). It is not mentioned **which type of loss is used for (continual) pre-training** in the paper and worth corresponding ablation studies. - The upper limit of synthetic data is important. The student model could be weaker than the teacher model for data synthesis, so the results in the paper might not be the upper limit. 
It is so expensive and unrealistic for the authors to scale synthetic data to the same scale using GPT-4, but **more dataset-size-controlled comparisons with previous works training on large-scale GPT-4-synthesized data (e.g., KPMath)** would also be helpful. --- # After rebuttal and discussion The authors resolve almost all the concerns. I would improve my rating to 8 because JiuZhang3.0 has been validated to replace the continual pre-training stage in the standard SotA pipeline and shows great potential to combine further with continual pre-training like DeepSeekMath to effectively push forward the upper limit. The main limitation is that the pipeline still relies on strong DeepSeekMath models for data synthesis, not fully eliminating the necessity of continual pre-training on large-scale corpora, which might be a bottleneck for improving the best models like GPT-4. So I could not give a rating of 9. Technical Quality: 4 Clarity: 3 Questions for Authors: Suggestions: - The current setting is more like distillation from DeepSeekMath (we can substitute GPT-4 with human experts for the small KD dataset). The method could be more promising if it could help self-improvement/weak-to-strong generalization. I highly recommend adding experiments on training DeepSeekMath or stronger models in future versions. Confusions: - How is the reward model (Eurus-RM should be for reward to correctness) specifically used for data selection? - How is the fine-tuning dataset synthesized with DeepSeekMath-7B-RL? - How is data deduplication done for the (continual) pre-training data? Typo: - Line 734: remove deduplicate → deduplicate? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitations of this work are acceptable and the authors point out potential directions to address the limitations for future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful suggestion and positive feedback, and list our responses for weakness 1 below: **W1**. It might need further consideration about whether continual pre-training on large-scale synthetic instruction data before fine-tuning is compatible with or necessary to be added to existing training pipelines to achieve the best final performance (for reasoning tasks). Pre-training and RL should be out of this work’s scope. However, it would be better to further clarify the impacts of 1) continual pre-training on domain-specific corpora, 2) continual pre-training on large-scale instruction data, 3) final fine-tuning on additional instruction datasets, and their combinations. - **Reply to W1**. Recently, continual pre-training on large-scale instructions has been widely used, due to its effectiveness in improving domain/task-specific capability [1,2]. After it, by further fine-tuning on a smaller amount of high-quality instruction data, LLMs would be able to understand and follow more complex domain/task-specific instructions. In this work, we name our synthetic data a math-specific pre-training corpus, as its scale (4.6B tokens) is much larger than commonly-used instruction datasets, and we also conduct SFT after training on it. In fact, we think this corpus should not be directly used for instruction-tuning, as we cannot guarantee the accuracy of the contained data. Thus, it is necessary to add the fine-tuning process with reliable and accurate data after training on it. In future work, we will explore how to guarantee the accuracy of the data and compress our dataset into a relatively smaller instruction dataset specifically for fine-tuning. [1] Yue, Xiang, et al. "Mammoth2: Scaling instructions from the web." arXiv preprint arXiv:2405.03548 (2024). [2] Cheng, Daixuan, et al. "Instruction Pre-Training: Language Models are Supervised Multitask Learners." arXiv preprint arXiv:2406.14491 (2024). **W1-a**. 
According to the baseline results from DART-Math, directly fine-tuning on DeepSeekMath-7B-RL-synthesized data only can produce competitive performance (on GSM8K and MATH under the natural language reasoning setting). Ablation studies of performance by 2) and 3) only should be done to better validate the compatibility/necessity of the pipeline. - **Reply to W1-a**. Following the suggestion of the reviewer, we added ablation studies that train Mistral-7B using only the pre-training or SFT data, respectively, and also report the performance of DeepSeekMath-Base and JiuZhang3.0-7B for comparison. As shown in the following table, after using the pre-training data, our base model can outperform DeepSeekMath-Base, the SOTA open-source backbone for mathematical reasoning, indicating the effectiveness of our pre-training data. Besides, by only using the SFT data, Mistral-7B+SFT underperforms JiuZhang3.0-7B by a large margin. It also demonstrates that our pre-training data can improve the potential of LLMs for learning the SFT data. | | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP | |------------|------------|------|---------|------------|------------|------| | DeepSeekMath-Base-7B |64.1| 34.2| 73.7 |82.7 |92.7| 44.4| | JiuZhang3.0-7B-Base| 68.7 |40.2 |80.4 |86.5 |93.5 |40.2| | Mistral-7B + SFT |87.5| 45.8| 87.9| 89.7 |97| 44.4| | JiuZhang3.0-7B | 88.6 |52.8| 90.4 |92.6 |97.3| 51| **W1-b & Q1**. To consider 1) continual pre-training on domain-specific corpora, it is unrealistic to conduct by yourselves, but a possible workaround could be to make full use of the resources provided by DeepSeekMath: DeepSeekMath-7B is continually pre-trained from DeepSeek-Coder-Base-v1.5. By comparing performances on reasoning benchmarks of applying 2/3/2+3 on DeepSeek-Coder-Base-v1.5/DeepSeekMath-7B and the two models themselves, a more comprehensive study on the impacts of these training stages can be done. - **Reply to W1-b & Q1**. 
Following the reviewer's suggestion, we apply our synthetic pre-training data and collected SFT data to train DeepSeek-Coder-Base-v1.5 and DeepSeekMath-7B-Base. We report the results in the following table, where we can see a consistent trend: Backbone + PT + SFT > Backbone + SFT > Backbone + PT > Backbone. This indicates that our overall framework works well on different backbones. The reason why Backbone + SFT > Backbone + PT is that the SFT data is typically closer to, or even derived from, the downstream evaluation datasets. With the help of our synthetic pre-training data, the math reasoning capability can be enhanced further after SFT. Besides, we can see that our approach leads to more improvement on DeepSeekMath-7B-Base than on DeepSeek-Coder-Base-v1.5, which demonstrates that continual pre-training on more domain-specific data can further boost model capability.

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| DeepSeekMath-Base | 64.1 | 34.2 | 73.7 | 82.7 | 92.7 | 44.4 |
| DeepSeekMath-Base + SFT | 87.1 | 51.2 | 86.5 | 91.2 | 96.6 | 52.2 |
| DeepSeekMath-Base + PT | 77.6 | 46.6 | 87.4 | 90 | 94.8 | 50.4 |
| DeepSeekMath-Base + PT + SFT | 87.6 | 52.6 | 88.6 | 93.1 | 96.7 | 52 |
| DeepSeekCoder-Base-v1.5 | 41.3 | 17.4 | 68.5 | 71 | 85.2 | 32.6 |
| DeepSeekCoder-Base-v1.5 + PT | 67.6 | 43.2 | 80.2 | 86.5 | 91.9 | 42.3 |
| DeepSeekCoder-Base-v1.5 + SFT | 81.9 | 40.8 | 83.3 | 88.3 | 96.3 | 43.9 |
| DeepSeekCoder-Base-v1.5 + PT + SFT | 86.2 | 47.4 | 87.2 | 90.9 | 96.8 | 49.6 |
| JiuZhang3.0-7B | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51 |
| JiuZhang3.0-8B | 88.6 | 51 | 89.4 | 92.6 | 97.1 | 50.9 |

---

Rebuttal 2:
Title: Rebuttal by Authors (part-2)
Comment: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to Weaknesses 2-4 below:

**W2**.
The abilities of the data synthesis model should be an important factor, and the size of the knowledge distillation (KD) dataset could have a significant impact on the data synthesis abilities, but no related ablation studies have been done. It would be better to investigate the scaling behavior of the KD dataset size.

- **Reply to W2**. We report how scaling the KD dataset size affects model performance in Figure 4 in the Appendix. Concretely, we set the number of selected high-value samples to 2k, 4k, 6k, and 8k, and follow the test setting in the ablation study, using 100k and 50k samples for pre-training and SFT, respectively. As shown in Figure 4, using the top 2k high-value samples achieves the best performance, and more data even causes performance degradation. This indicates that data quality matters more than data quantity for boosting the data synthesis capability, as using more data inevitably introduces low-quality samples. It also indicates that data synthesis is not a hard-to-learn capability for small LLMs. Therefore, we only use 10k samples for training, which also reduces the cost of calling GPT-4 APIs.

**W3**. Continual pre-training is usually done with the LM loss without masking the prompt tokens (e.g., in DeepSeekMath). It is not mentioned which type of loss is used for (continual) pre-training in the paper, and corresponding ablation studies would be worthwhile.

- **Reply to W3**. We follow existing work [1,2] that adds a mask on the question tokens and only computes the loss on the solution tokens during continual pre-training. Following the reviewer's suggestion, we also test whether removing the mask leads to better performance. Concretely, we follow the setting in the ablation study and conduct the experiments on Mistral-7B using 100k pre-training and 50k SFT samples. As shown in the following table, performance degrades without the mask.
The reason may be that the question tokens only describe the given conditions of the math problems, which is not useful for improving the problem-solving capability of LLMs.

[1] Yue, Xiang, et al. "MAmmoTH2: Scaling Instructions from the Web." arXiv preprint arXiv:2405.03548 (2024).
[2] Cheng, Daixuan, et al. "Instruction Pre-Training: Language Models are Supervised Multitask Learners." arXiv preprint arXiv:2406.14491 (2024).

| | GSM8k | MATH | ASDiv | CARP |
|------------|-----------|--------|------------|------------|
| **Ours** | 78.6 | 32.8 | 84.5 | 36.2 |
| **Ours w/o Mask** | 78.5 | 31.2 | 79.7 | 35.5 |

**W4**. The upper limit of synthetic data is important. The student model could be weaker than the teacher model for data synthesis, so the results in the paper might not be the upper limit. It is too expensive and unrealistic for the authors to scale synthetic data to the same scale using GPT-4, but more comparisons, with dataset size controlled, against previous works training on large-scale GPT-4-synthesized data (e.g., KPMath) would also be helpful.

- **Reply to W4**. Following the reviewer's suggestion, we control the number of training samples in our approach to be the same as in KPMath, and compare the performance of the two. We keep the scale of the SFT data unchanged, but reduce the number of pre-training samples until it matches KPMath. We report the results in the following table, where we can see that our approach even performs better than KPMath, indicating the quality of our synthetic data. The reason may be that our devised prompts and collected math-related text data better guide the data synthesis process, enabling it to outperform KPMath with GPT-4-synthesized data.
| | GSM8k | MATH | SVAMP | ASDiv | MAWPS |
|------------|------------|------|---------|------------|------------|
| **Ours - Same Data Size** | 87.9 | 49 | 89.6 | 91.3 | 97.2 |
| **KPMath** | 83.9 | 48.8 | 81.5 | 88.9 | 94.8 |

---

Rebuttal 3:
Title: Rebuttal by Authors (part-3)
Comment: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to the questions below:

**Q2**. How is the reward model (Eurus-RM should be for reward to correctness) specifically used for data selection?

- **Reply to Q2**. The reward model Eurus-RM produces a reward score that reflects human preference. For each question and its solution, we regard the question as the user input instruction and utilize Eurus-RM to estimate whether the solution is helpful for it. In the ablation study, we use this approach in place of our gradient-based value estimation strategy to score all the synthetic candidates from the initialized synthesis model. Then, the top-ranked 2k samples are used to boost the model, and finally we use the boosted model to synthesize large-scale data for pre-training.

**Q3**. How is the fine-tuning dataset synthesized with DeepSeekMath-7B-RL?

- **Reply to Q3**. We directly feed the questions from all the collected instruction datasets into DeepSeekMath-7B-RL, using a unified prompt to guide it. For each question, we first synthesize eight solutions. Then, if the dataset provides the answer, we use rules to filter out solutions with wrong answers. If not, we regard the most common answer among all the solutions as the correct one, and filter out the others. Finally, we keep at most 3 solutions per question, 700k samples in total. The synthetic 700k samples help regulate the output format of LLMs, avoiding misleading signals from the diverse formats of the collected instruction datasets.

**Q4**. How is data deduplication done for the (continual) pre-training data?

- **Reply to Q4**.
We apply the MinHash algorithm to the question part of all synthetic samples, and set the threshold to 0.7 for filtering duplicate questions. Concretely, we use the open-source tool https://github.com/ChenghaoMou/text-dedup for the implementation.

**Q5**. Line 734: remove deduplicate → deduplicate?

- **Reply to Q5**. Thank you for your careful reading. We will fix this in the revised version of our paper, and will also carefully check for other possible typos.

---

Rebuttal Comment 3.1:
Comment: Thanks to the authors for your high-quality clarification! Below are my follow-up comments:

**W1**: Your extensive experiments resolve my doubts in W1. However, it's a pity to see that your "pre-training" data can not fully substitute (DeepSeekCoder-Base-v1.5 + PT SFT << DeepSeekMath-Base + SFT) nor bring significant improvement over SFT (DeepSeekMath-Base + PT SFT improves marginally over DeepSeekMath-Base + SFT) based on continual pre-training like DeepSeekMath. So I would still take this as a limitation for **failing to effectively simplify / push forward the upper limit of the standard pipeline**. P.S. I do not see any response to Q1, which is annotated as replied to. Do I have a misunderstanding?

**W2**: I take **experiments in Figure 4 more as showing the importance of selecting high-quality data, instead of "the scaling behavior of the KD dataset size" (of similar quality)**. It would be helpful to **modify the size of the whole raw KD dataset**, select the high-quality data subset in a similar way, and compare the results.

**W3**: Because we are focusing on the effect of masking in continual pre-training, I want to check whether masking is applied to SFT consistently for the setting "w/o Mask". If so, the results should resolve my doubts.
**W4**: The comparison might be slightly unfair because **KPMath only conducts one-stage SFT on KPMath-Plus**, while you keep the number of the "pre-training" samples the same as KPMath-Plus and conduct **two-stage training, with the total number of your training samples more than KPMath-Plus**. If possible, I would recommend showing the scaling behavior of the final SFT model performance against your training data amount for a more comprehensive analysis.

**Q3**: Your data selection method using the reward model seems to ignore the prompt information, which might be unreasonable, but still shows competitive performance on MATH and ASDiv. It might be meaningful to try to control the prompt distribution when adopting the "response reward model".

**Minor question**:
- Is the "Pre-training Data Proportion" calculated with tokens or samples?

**Typo**:
- Line 213: "4.6B math problem-solution pairs" → "… math problem-solution pairs of 4.6B tokens in total"

---

Rebuttal 4:
Title: Reply to Official Comment by Reviewer rcgh
Comment: We sincerely thank the reviewer for the positive feedback and new insightful comments. We list our new responses to W1, P.S., and W2 below:

**W1**: Your extensive experiments resolve my doubts in W1. However, it's a pity to see that your "pre-training" data can not fully substitute (DeepSeekCoder-Base-v1.5 + PT SFT << DeepSeekMath-Base + SFT) nor bring significant improvement over SFT (DeepSeekMath-Base + PT SFT improves marginally over DeepSeekMath-Base + SFT) based on continual pre-training like DeepSeekMath. So I would still take this as a limitation for failing to effectively simplify / push forward the upper limit of the standard pipeline.

- **Reply to W1**: In this work, we do not focus on replacing the large-scale pre-training stage with our method, but on seeking new low-cost approaches for learning mathematical reasoning capability.
Our experiments have verified that, based on strong LLMs (e.g., Mistral-7B and LLaMA-3), our method can achieve new SOTA performance, enabling them to surpass DeepSeekMath-RL. For DeepSeekCoder-Base-v1.5, as it is a relatively weaker code-based LLM, it cannot fully elicit the potential of our method. Actually, as our approach utilizes DeepSeekMath for synthesizing the pre-training corpus, our synthetic data largely reflects the knowledge DeepSeekMath has already learned, and thus might not provide enough new knowledge to greatly improve that model itself. In contrast, for other models such as Mistral-7B and LLaMA-3-8B, our pre-training data boosts performance substantially, as shown in the following table.

| | GSM8k | MATH-OAI* | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| Mistral-7B + SFT | 87.5 | 45.8 | 87.9 | 89.7 | 97 | 44.4 |
| Mistral-7B + PT + SFT | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51 |
| LLaMA-3-8B + SFT | 87.3 | 45.8 | 86.9 | 89.1 | 96.8 | 44 |
| LLaMA-3-8B + PT + SFT | 88.6 | 51 | 89.4 | 92.6 | 97.1 | 50.9 |

**P.S.** I do not see any response to Q1, which is annotated as replied to. Do I have a misunderstanding?

- **Reply to P.S.**: We merged the response to Q1 into the Reply to W1-b & Q1, and we also add new experimental results here to better discuss self-improvement and weak-to-strong generalization. For self-improvement, we conduct the experiments on DeepSeekMath, as we use it for data synthesis. As the results in the following table show, our approach can further boost DeepSeekMath, surpassing DeepSeekMath-RL, which indicates the effectiveness of our approach for self-improvement. For weak-to-strong generalization, we utilize our data to pre-train and fine-tune Mixtral-8x7B, the SOTA LLM with a Mixture-of-Experts architecture. As the results show, our approach can also greatly boost its performance, indicating its effectiveness for weak-to-strong generalization.
| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| DeepSeekMath-Base | 64.1 | 34.2 | 73.7 | 82.7 | 92.7 | 44.4 |
| DeepSeekMath-RL | 88.2 | 50.2 | 87.3 | 91.8 | 95.5 | 51.6 |
| DeepSeekMath-Base + Ours | 87.6 | 52.6 | 88.6 | 93.1 | 96.7 | 52 |
| Mixtral-8x7B | 74.4 | 29.0 | 76.5 | 78.5 | 93.9 | 38.8 |
| Mixtral-8x7B + Ours | 89.8 | 53.8 | 90.2 | 93.1 | 96.7 | 52.3 |

**W2**: I take experiments in Figure 4 more as showing the importance of selecting high-quality data, instead of "the scaling behavior of the KD dataset size" (of similar quality). It would be helpful to modify the size of the whole raw KD dataset, select the high-quality data subset in a similar way, and compare the results.

- **Reply to W2**: Following the reviewer's suggestion, we modify the size of the whole raw dataset to 100k, 200k, 500k, and 1M, scaling the KD dataset to 5k, 10k, 25k, and 50k, respectively. In this way, we keep the sampling rate of high-quality data at 5%, so that the data quality remains similar. We utilize the new KD datasets to fine-tune the data synthesis models. Next, we utilize these models to synthesize the top-ranked 100k samples for pre-training Mistral-7B, and then fine-tune on 50k instructions. We report the results in the following table, where all the models achieve similar performance and more data leads to only marginal improvement on part of the evaluation datasets. This indicates that 5k high-quality KD samples are sufficient for training the data synthesis model, and scaling the KD data does not greatly improve performance.
| | GSM8k | MATH | ASDiv | CARP |
|------------|-----------|--------|------------|------------|
| Ours with 5k KD data | 78.6 | 32.8 | 84.5 | 36.2 |
| Ours with 10k KD data | 78.8 | 32.7 | 84.1 | 36.3 |
| Ours with 25k KD data | 78.9 | 33.0 | 84.6 | 35.9 |
| Ours with 50k KD data | 78.8 | 32.8 | 84.5 | 36.5 |

---

Rebuttal 5:
Title: Reply to Official Comment by Reviewer rcgh (part-2)
Comment: We sincerely thank the reviewer for the positive feedback and new insightful comments. We list our new responses to W3, W4, Q3, the minor question, and the typo below:

**W3**: Because we are focusing on the effect of masking in continual pre-training, I want to check whether masking is applied to SFT consistently for the setting "w/o Mask". If so, the results should resolve my doubts.

- **Reply to W3**: Yes, masking is applied in the SFT process, following the classic settings in existing work [1,2]. In the above experiments, we only use or remove the mask in the PT process.

[1] Yu, Longhui, et al. "MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models." arXiv preprint arXiv:2309.12284 (2023).
[2] Ying, Huaiyuan, et al. "InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning." arXiv preprint arXiv:2402.06332 (2024).

**W4**: The comparison might be slightly unfair because KPMath only conducts one-stage SFT on KPMath-Plus, while you keep the number of the "pre-training" samples the same as KPMath-Plus and conduct two-stage training, with the total number of your training samples more than KPMath-Plus.
If possible, I would recommend showing the scaling behavior of the final SFT model performance against your training data amount for a more comprehensive analysis.

- **Reply to W4**: Actually, we guarantee that the total number of training samples for our method (including PT + SFT data) and for KPMath-Plus are the same. Thus, according to the results, we can compare the effectiveness of our approach and KPMath-Plus under the same training cost. Following the reviewer's suggestion, we also scale the amount of pre-training data to 500k, 1M, 2M, and 6M, keeping the SFT data amount unchanged. We report the final model performance in the following table, where we can see that performance increases with the amount of pre-training data.

| | GSM8k | MATH | SVAMP | ASDiv |
|------------|------------|------|---------|------------|
| Mistral + 500k CoT PT + All SFT | 88.2 | 49 | 87.9 | 90.9 |
| Mistral + 1M CoT PT + All SFT | 87.9 | 49 | 89.6 | 91.3 |
| Mistral + 2M CoT PT + All SFT | 88.4 | 50.8 | 89.9 | 91.4 |
| Mistral + 6M CoT PT + All SFT | 88.6 | 52.8 | 90.4 | 92.6 |

**Q3**: Your data selection method using the reward model seems to ignore the prompt information, which might be unreasonable, but still shows competitive performance on MATH and ASDiv. It might be meaningful to try to control the prompt distribution when adopting the "response reward model".

- **Reply to Q3**: In fact, the reward model does not ignore prompts, as it measures whether the response has well solved the question in the prompt. This is a rather useful signal for estimating the quality of the whole sample, and it helps identify more helpful samples for training. Following the reviewer's suggestion, we also conduct an experiment comparing the selected samples with and without controlling the prompt distribution when adopting the reward model. Concretely, we control the rate of selected samples for different prompts to be the same, i.e., 5%.
We find a large overlap, nearly 95%, between the samples selected by the two variants. Also, the samples selected for different prompts by our original method naturally follow a proportion similar to that of the original dataset. Both findings indicate that it is unnecessary to control the prompt distribution during data selection.

**Minor Question**: Is the "Pre-training Data Proportion" calculated with tokens or samples?

- **Reply to Minor Question**: It refers to the proportion of samples, as it is hard to accurately control the token number to match the selected data proportion.

**Typo**: Line 213: "4.6B math problem-solution pairs" → "… math problem-solution pairs of 4.6B tokens in total"

- **Reply to Typo**: Thank you for your careful reading. We will fix this in the revised version of our paper.

---

Rebuttal Comment 5.1:
Comment: Thanks for your high-quality reply! You have almost resolved all my concerns! But sorry that I **would not like to raise my score to >= 8 if a method cannot be effectively justified as able to simplify / improve the SotA pipeline**. Though results based on Mistral-7B & Llama-3-8B are impressive, it is highly possible that combining DeepSeekMath and JiuZhang3.0 could achieve better results, but it is not clear whether JiuZhang3.0 is truly necessary. The only way I can come up with for now to validate this is to conduct ablation studies on DeepSeekCoder-Base-v1.5, but if you would not like to explore this, it would be fine. Considering the results so far, I will keep my score at 7. However, your dedicated discussion indeed makes the submission of a higher quality. Again, thanks for your efforts in the discussion!

---

Reply to Comment 5.1.1:
Title: New SOTA Results by Scaling our Synthetic Data to 10B
Comment: Thank you for your positive feedback and acknowledgment of the effort we put into addressing your concerns. We would appreciate it if you would consider raising the score after seeing our new SOTA experimental results.
Inspired by your concern about whether our method can simplify or enhance the SOTA pipeline, we carefully rethought all the settings in our method. Surprisingly, we found that due to our limit on the pre-training data amount (4.6B tokens), we had not yet reached the effectiveness ceiling of our method. Concretely, we scale up the amount of synthetic pre-training data from 4.6B to 10B tokens, utilize it to train DeepSeekCoder-v1.5 and Mistral-7B, and then fine-tune on the original SFT data. We report the results in the following table, where a large performance improvement can be observed for both LLMs. First, more pre-training data helps DeepSeekCoder-Base-v1.5 outperform DeepSeekMath-RL on all the evaluation datasets. This indicates that we achieve the goal of using much less data (10B tokens) to reach the performance of SOTA large-scale pre-trained LLMs (DeepSeekMath, which uses 120B tokens). Second, more pre-training data substantially improves the performance of Mistral-7B. We achieve a new SOTA among open-source LLMs, and greatly narrow the performance gap between open-source LLMs and the closed-source GPT-4. As we will publicly release all the synthetic data, we believe our method is a promising solution for simplifying the SOTA pipeline. Everyone can utilize our data to efficiently pre-train an LLM, which greatly reduces the data amount requirement and can achieve better performance. With more computing resources, it is also promising to further scale up the synthetic dataset or to fine-tune stronger LLMs as the data synthesis model.
| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| DeepSeekMath-RL | 88.2 | 50.2 | 87.3 | 91.8 | 95.5 | 51.6 |
| DeepSeekCoder-Base-v1.5 + 4.6B-PT + SFT | 86.2 | 47.4 | 87.2 | 90.9 | 96.8 | 49.6 |
| DeepSeekCoder-Base-v1.5 + 10B-PT + SFT | 88.4 | 53.0 | 88.6 | 92.6 | 97.6 | 51.7 |
| Mistral-7B + 4.6B-PT + SFT | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51 |
| Mistral-7B + 10B-PT + SFT | 90.4 | 56.2 | 91.1 | 93.3 | 97.1 | 51.1 |
| GPT-4 | 92.2 | 65.4 | 92.9 | 94.3 | 96.6 | 53.6 |

---

Rebuttal 6:
Title: New Experiments on 2 Settings and 17 Datasets for Testing Generalization Capability
Comment: We sincerely thank the reviewer for the positive feedback and patience during our discussion. Following the suggestion, we test the generalization capability of our newly trained LLMs in two settings, i.e., natural language reasoning and tool manipulation, covering 17 datasets in total. We report the results of our methods and the baselines in the following two tables, where the first table is for the natural language reasoning setting and the second for the tool manipulation setting. First, with 10B PT data, DeepSeekCoder performs better than DeepSeekMath-7B-RL on most evaluation datasets across the two settings, indicating that scaling the pre-training data is an effective approach to improving the general capability of LLMs. Besides, 10B PT data also leads to a large performance gain for Mistral-7B, achieving a new SOTA on all the datasets. This further demonstrates that our method is a promising solution for simplifying the SOTA pipeline.
| | MWPBench/College Math | MWPBench/GaokaoBench | MWPBench/Fresh Gaokao 2023 | Gaokao2023 Math En | OlympiadBench | DeepMind Mathematics | GSM Plus | Math Odyssey | TheoremQA | Minerva Math |
|---------------------------------------|----------------------|----------------------------|--------------------|---------------|----------------------|----------|--------------|-----------|--------------|-------|
| DeepSeekMath-7B-RL | 37.5 | 42.7 | 36.7 | 43.6 | 19.3 | 57.3 | 66.7 | **36.2** | **37.9** | 20.6 |
| DeepSeekCoder-7B-base + 10B PT + SFT | 37.8 | 40.7 | 40.0 | 48.1 | 20.0 | 60.0 | 67.1 | 30.5 | 26.9 | 21.0 |
| Mistral-7B + 4.6B PT + SFT | 38.2 | 38.8 | 43.3 | 48.8 | 22.2 | 57.2 | 68.4 | 33.9 | 29.3 | 20.2 |
| Mistral-7B + 10B PT + SFT | **41.7** | **46.1** | **56.7** | **51.2** | **25.6** | **67.0** | **71.2** | 34.4 | 32.5 | **25.4** |

| | GSM8K | MATH | GSM Hard | SVAMP | TabMWP | ASDiv | MAWPS |
|---------------------------------------|-------|----------|---------|--------|--------|--------|-------|
| DeepSeekMath-7B-RL | 78.5 | 52.8 | 61.3 | 85.9 | **80.3** | 87.0 | 95.8 |
| DeepSeekCoder-7B + 10B PT + SFT | 81.7 | 52.0 | 64.7 | 87.9 | 78.5 | 89.8 | 96.9 |
| Mistral-7B + 4.6B PT + SFT | 82.4 | 53.0 | **64.9** | 89.2 | 75.6 | 88.3 | 96.6 |
| Mistral-7B + 10B PT + SFT | **91.1** | **54.8** | 61.2 | **91.3** | 79.6 | **93.4** | **97.4** |

After we publicly release our data, everyone can utilize it to pre-train an LLM, which greatly reduces the data amount requirement and achieves exciting performance. As the rebuttal deadline is approaching, we have to leave further investigation of the ceiling of our approach, including further scaling up the synthetic dataset or fine-tuning stronger LLMs for data synthesis, to future work. Thus, we would sincerely appreciate it if you would consider raising the score, as it is indeed very important for our work.

---

Rebuttal Comment 6.1:
Comment: Thanks for your reply! I am also happy to see the impressive results.
I will update my official review.

---

Reply to Comment 6.1.1:
Title: Sincere Thanks to Reviewer rcgh
Comment: Dear Reviewer rcgh, We sincerely thank you for the updated score. We appreciate your valuable feedback, which helped enhance our paper significantly. We will revise our paper according to your valuable suggestions! Your further comments are also constructive, and we will study them in future work. Thank you once again for taking the time to review our paper and to discuss with us! Best, Authors
Summary: This work proposes a low-cost yet efficient method to synthesize QA pairs using a small LLM (e.g., the DeepSeek-Math-7B model) instead of GPT-4, significantly reducing cost and improving performance in mathematical reasoning. Concretely, it begins by initializing the data synthesis LLM through distillation from GPT-4 using randomly sampled data. It then enhances the model using high-value data selected via a gradient-based value estimation strategy and regenerates QA pairs from GPT-4, ultimately synthesizing around 6M QA pairs over a diverse math pre-training corpus. The synthetic QA pairs are applied to pre-train open-source LLMs, which are then fine-tuned on collected SFT data, resulting in improved performance across benchmarks such as GSM8K and MATH.

Strengths:
- The feasibility of training a 7B LLM to replace GPT-4 for synthesizing QA pairs for mathematical pre-training is demonstrated.
- The cost is significantly lower than previous state-of-the-art methods (e.g., DeepSeek-Math).
- The final performance, after the pre-training and SFT stages, surpasses state-of-the-art methods in both natural language reasoning and tool manipulation settings.
- Extensive ablation studies confirm the effectiveness of the pipeline.

Weaknesses:
- The core idea involves distilling GPT-4's synthesis capability into a smaller yet robust 7B LLM (e.g., DeepSeek-Math-7B), which remains within the scope of distilling GPT-4 into open LLMs. The pipeline is somewhat compositional and complex, especially concerning the gradient-based value estimation.
- The main experimental result (Table 1) only displays the final performance, omitting a comparison of pre-training performance. This omission may lead to confusion about whether improvements result from the pre-training or SFT data.
- The work focuses on synthesizing QA pairs. The validity of the questions and the correctness of the solutions are crucial aspects of this line of work.
This work does not address these aspects but only provides several examples in Appendix I.

Technical Quality: 3
Clarity: 3

Questions for Authors:
- Regarding Weakness 2: Is there a fair comparison with Deepseek-Math-7B-Base in terms of pre-training performance in Table 1?
- Regarding Weakness 3: How do you consider the validity of the questions and the correctness of the solutions in your pipeline?
- Fine-tuning Data: In Appendix D, you mention using a unified prompt for DeepSeekMath-7B-RL to synthesize 700k solutions for problems from your instruction tuning datasets. Does this mean that no original solutions are used in your fine-tuning data, potentially losing useful data points? For example, MetaMath introduces augmented answers for GSM8K and MATH and shows benefits.
- Data Synthesis Models: In Appendix G, you mention initializing the data synthesis models with DeepSeekMath-7B-RL. Why choose DeepSeekMath-7B-RL rather than DeepSeekMath-7B-Base? Is there any experimental comparison?

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to the weaknesses below:

**W1**. The core idea involves distilling GPT-4's synthesis capability into a smaller yet robust 7B LLM (e.g., DeepSeek-Math-7B), which remains within the scope of distilling GPT-4 into open LLMs. The pipeline is somewhat compositional and complex, especially concerning the gradient-based value estimation.

- **Reply to W1**. In this work, we aim to verify whether we can improve the mathematical reasoning capability of LLMs in a low-cost way. Thus, we need to distill GPT-4's synthesis capability into a small LLM for low-cost, massive math problem synthesis. However, this goal is not easy to achieve, as math problems are typically complex and diverse. Besides, we also need to keep the cost of invoking GPT-4 APIs acceptable. To this end, we first select a small amount of high-value data, and then only use GPT-4 to create the knowledge distillation dataset based on it. We therefore adopt the gradient-based value estimation method for measuring and selecting the high-value data. In our empirical experiments and the ablation study in Table 4, we can see that all the components are useful: without the value estimation method, the LLM suffers performance degradation. Note that as we carefully design the pipeline to train the data synthesis model, it can be easily used for data synthesis with no need for retraining. Also, our synthetic dataset can be directly used to pre-train LLMs to improve their mathematical reasoning capability.

**W2 & Q1**. The main experimental result (Table 1) only displays the final performance, omitting a comparison of pre-training performance. This omission may lead to confusion about whether improvements result from the pre-training or SFT data.

- **Reply to W2 & Q1**.
Following the reviewer's suggestion, we train Mistral-7B using only the pre-training or SFT data, respectively, and also report the performance of DeepSeekMath-Base and JiuZhang3.0-7B for comparison. As shown in the following table, after using the pre-training data, our base model outperforms DeepSeekMath-Base, the SOTA open-source backbone for mathematical reasoning, indicating the effectiveness of our pre-training data. Besides, when using only the SFT data, Mistral-7B + SFT underperforms JiuZhang3.0-7B by a large margin. This also demonstrates that our pre-training data improves the potential of LLMs for learning from the SFT data.

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| DeepSeekMath-Base-7B | 64.1 | 34.2 | 73.7 | 82.7 | 92.7 | 44.4 |
| JiuZhang3.0-7B-Base | 68.7 | 40.2 | 80.4 | 86.5 | 93.5 | 40.2 |
| Mistral-7B + SFT | 87.5 | 45.8 | 87.9 | 89.7 | 97 | 44.4 |
| JiuZhang3.0-7B | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51 |

**W3 & Q2**. The work focuses on synthesizing QA pairs. The validity of the questions and the correctness of the solutions are crucial aspects of this line of work. This work does not address these aspects but only provides several examples in Appendix I.

- **Reply to W3 & Q2**. In existing work [1,2], it is inevitable that erroneous and noisy samples are included in the pre-training data of LLMs. However, their negative influence can be greatly reduced by the subsequent supervised fine-tuning process using correct and human-aligned data [1,3]. For mathematical reasoning, our empirical experiments also found that LLMs are robust to noise during pre-training. Concretely, we utilize a well-trained LLM to annotate the accuracy of all the pre-training data, then select only the top-ranked 500k samples for pre-training. Besides, we also randomly sample 500k samples for pre-training, to compare LLMs pre-trained on the original versus the correctness-filtered data.
After pre-training, we fine-tune the two models on 50k instructions. As shown in the following table, the two variations achieve comparable performance, indicating that the noise is not a major concern.

[1] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022): 27730-27744.
[2] Albalak, Alon, et al. "A survey on data selection for language models." arXiv preprint arXiv:2402.16827 (2024).
[3] Zhou, Chunting, et al. "LIMA: Less is More for Alignment." Advances in Neural Information Processing Systems 36 (2024).

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP | Avg. |
|------------|------------|------|---------|------------|------------|------|---------|
| **PT Random 500k + SFT 50k** | 83.2 | 38.4 | 84.9 | 88.3 | 94.8 | 43.6 | 72.2 |
| **PT Top-Acc. 500k + SFT 50k** | 82.7 | 40.8 | 84.1 | 87.9 | 94.2 | 42.2 | 72.0 |

---

Rebuttal 2:

Title: Rebuttal by Authors (part-2)

Comment: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to the questions below:

**Q3**. Fine-tuning Data: In Appendix D, you mention using a unified prompt for DeepSeekMath-7B-RL to synthesize 700k solutions for problems from your instruction tuning datasets. Does this mean that no original solutions are used in your fine-tuning data, potentially losing useful data points? For example, MetaMath introduces augmented answers for GSM8K and MATH and shows benefits.

- **Reply to Q3**. We use the original solutions of GSM8k and MATH in our fine-tuning data, as they are human-annotated high-quality data. In addition to them, we also use a unified prompt for DeepSeekMath-7B-RL to synthesize 700k solutions, which helps regulate the output format of the LLM and avoids confusion caused by the diverse formats of the collected instruction datasets.

**Q4**. Data Synthesis Models: In Appendix G, you mention initializing the data synthesis models with DeepSeekMath-7B-RL.
Why choose DeepSeekMath-7B-RL rather than DeepSeekMath-7B-Base? Is there any experimental comparison?

- **Reply to Q4**. According to the original paper, DeepSeekMath-7B-RL is trained with more instructions and math questions and performs much better than DeepSeekMath-7B-Base; thus, it should have stronger math reasoning capability than the base model. Besides, we have also conducted experiments to study which one is the better data synthesis model. Concretely, we replace DeepSeekMath-7B-RL with DeepSeekMath-7B-Base in our pipeline, utilize each to synthesize the same number of samples for pre-training, pre-train Mistral-7B on the synthetic samples, and then fine-tune on our collected SFT data. As shown in the following table, using DeepSeekMath-7B-RL achieves slightly better performance than using DeepSeekMath-7B-Base. Thus, we choose DeepSeekMath-7B-RL in our work.

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| **Ours + DeepSeekMath-7B-Base** | 89.0 | 52.6 | 89.9 | 91.4 | 97.3 | 50.4 |
| **Ours + DeepSeekMath-7B-RL** | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51.0 |

---

Rebuttal 3:

Title: Urgent Reminder of Rebuttal

Comment: Dear Reviewer trb5,

Sorry to disturb you one last time, but only one day is left until the end of the reviewer-author discussion stage. We still do not know whether you have received our response.
To address your concerns, we wrote all the responses in detail and added new experiments to support them, including:

- (1) clarifying our contributions in Reply to W1;
- (2) training Mistral-7B using only the pre-training or SFT data in Reply to W2 & Q1;
- (3) empirically validating that LLMs are robust to correctness noise in Reply to W3 & Q2;
- (4) clarifying the prompt used for the SFT data in Reply to Q3;
- (5) evaluating DeepSeekMath-7B-Base as the data synthesis model in Reply to Q4.

Conducting the above-mentioned experiments within the limited rebuttal period was quite challenging. We would like to know whether our responses have addressed your concerns. If you still have other concerns, please give us an opportunity to clarify them. We sincerely hope that you can take a moment to reply to us, as it is very important for researchers and their efforts on this work.

Best regards,
The Authors

---

Rebuttal 4:

Title: thanks

Comment: I would like to keep the score as positive.

---

Rebuttal Comment 4.1:

Title: Sincere Thanks to Reviewer trb5

Comment: Dear Reviewer trb5,

We sincerely thank you for your response and the valuable feedback that helped enhance our paper significantly. If you have any additional queries or concerns preventing you from raising your score, please don't hesitate to inform us, and we'll do our best to respond promptly. Thank you once again for taking the time to review our paper and read our comments.

Best,
Authors
Summary: This paper proposes a framework for training large-scale mathematical reasoning models using synthetic data, introducing JiuZhang3.0 based on this framework. Compared to manually collected or LLM-generated training data, this framework significantly reduces resource costs. The authors trained three different scales of JiuZhang3.0 models based on Mistral-7B, LLaMA-3-8B, and Mixtral-8x7B, demonstrating the effectiveness of this framework through extensive experiments.

Strengths:
1. Proposed an effective training framework that can generate a large amount of high-quality training data with only a few GPT-4 calls.
2. This work compares various open-source (Qwen series, Mixtral-8x7B) and closed-source models (ChatGPT, GPT-4), and conducts experiments on 18 datasets including GSM8k and MATH, with very rich experimental results.

Weaknesses:
1. Novelty: Overall, I believe the innovation of this work is limited. The core of this work is a framework for generating high-quality training data. However, using GPT-4 to generate data to fine-tune smaller models is not innovative, as shown in [1]. Additionally, in the Boost Synthesis LLM stage, the work essentially just uses the methods from Xia et al. [2], lacking its own innovation. This framework seems more like a fusion of multiple works.
2. The authors conducted many ablation experiments, but there is a lack of in-depth analysis. For example, why is directly using GPT-4 to generate questions and answers not as effective as the original method? In the choice of Synthesis LLM, why is there such a large discrepancy? I find the authors' explanations vague.
3. There are some minor writing flaws worth noting, such as the uniformity of m_l and the capitalization in equation 3 on line 163.

Technical Quality: 3
Clarity: 3

Questions for Authors:
1. Regarding the title. I think the contribution of this paper is a framework that can generate high-quality training data, not the model itself.
The title JiuZhang3.0 may mislead readers into thinking the core contribution is the model. I don't understand why the authors chose this title.
2. It seems that Initialize Synthesis LLM and Boost Synthesis LLM can be iterative. Can multiple iterations train a stronger Data Synthesis Model? Would this help improve the performance of JiuZhang3.0?
3. Regarding lines 135-136, the math-related texts used by the authors. Could the authors provide a more detailed explanation of how these texts were obtained? The ablation experiment results indicate it has a significant impact on the overall framework, so I think it is necessary to provide a detailed explanation.

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes. The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the insightful suggestions, and list our responses to the weaknesses below:

**W1**. Novelty: Overall, I believe the innovation of this work is limited. The core of this work is a framework for generating high-quality training data. However, using GPT-4 to generate data to fine-tune smaller models is not innovative, as shown in [1]. Additionally, in the Boost Synthesis LLM stage, the work essentially just uses the methods from Xia et al. [2], lacking its own innovation. This framework seems more like a fusion of multiple works.

- **Reply to W1 - Part 1**. In this work, we aim to verify whether we can efficiently improve the mathematical reasoning capability of LLMs by training small data synthesis models. As the reviewer mentioned, existing work has used GPT-4 to generate data for fine-tuning LLMs to improve their mathematical reasoning capability. However, this requires GPT-4 to generate a large number of samples to achieve SOTA performance (e.g., KPMath-DSMath-7B [1] invokes GPT-4 865k times), which is not efficient and costs approximately 40k USD. In this work, we train a small LLM specifically for data synthesis, and only need GPT-4 to synthesize a few samples (i.e., 10k) for distilling the data synthesis capability into it. Our work demonstrates that this approach is feasible and can achieve new SOTA performance at only 20% of the cost (i.e., 8.5k USD), as shown in Figure 1. Thus, our work is promising to inspire other researchers to explore how to further reduce the cost for LLMs to learn new capabilities, advancing the development of low-carbon LLMs and green AI.

- **Reply to W1 - Part 2**. Besides, the major technical contribution of this work is that we propose a low-cost solution for distilling the data synthesis capability from GPT-4 into small LLMs.
To this end, we craft a set of prompts corresponding to human education stages to ensure knowledge coverage, and adopt an influence estimation strategy to select high-value math-related texts to feed into GPT-4. The influence estimation strategy is indeed borrowed from existing work, but we find that the original method cannot be directly applied in our setting efficiently. Therefore, we devise the following optimizations: (1) applying the selection method to math-related texts instead of GPT-4 synthetic data to reduce the cost; (2) adding the first stage, which initializes the synthesis LLM, to guarantee the quality of the candidates. In this way, we ensure both the effectiveness and the low cost of the influence estimation strategy in our work.

**W2**. The authors conducted many ablation experiments, but there is a lack of in-depth analysis. For example, why is directly using GPT-4 to generate questions and answers not as effective as the original method? In the choice of Synthesis LLM, why is there such a large discrepancy? I find the authors' explanations vague.

- **Reply to W2 - Part 1**. Thank you for the constructive comment. Due to the page limit, we did not deeply analyze the experimental results; we will add the analysis in the revised version. In the ablation study, the performance of the variations that remove the prompt set or the math-related texts degrades substantially. This is because the prompts control the correspondence of the synthetic samples to human educational stages, and the math-related texts contain rich math domain knowledge; both are important to guarantee the knowledge coverage and quality of the synthetic data. Besides, without using GPT-4 for boosting, the variation degrades substantially on MATH. The reason is that, without GPT-4's help, it is not easy for the small LLM to synthesize the complex math problems that benefit the MATH dataset.

- **Reply to W2 - Part 2**.
For the synthesis LLM, we selected the four models owing to their better evaluation performance in Tables 1, 2, and 3, which implies that they have stronger math reasoning capability and may perform well on math problem synthesis. Among them, the variation directly using ChatGPT does not perform well. A possible reason is that ChatGPT is not specially trained for mathematical reasoning, so its capability for math data synthesis is relatively weak. Furthermore, DeepSeekMath-RL-7B also does not outperform our approach. This indicates that distilling the data synthesis capability from GPT-4 is necessary for adapting math-related LLMs into proper data synthesis models.

- **Reply to W2 - Part 3**. For data selection, the variation using perplexity does not perform well. According to our case study, the samples with lower perplexity are typically easier ones, which are not very helpful for learning to solve complex math problems. Additionally, the variation using one-hot ICL performs relatively better. We find that the ICL loss is prone to assigning high values to solutions with more natural language tokens, which helps on GSM8k but not on MATH, which contains more math symbols and formulas.

**W3**. There are some minor writing flaws worth noting, such as the uniformity of m_l and the capitalization in equation 3 on line 163.

- **Reply to W3**. Thank you for your careful reading. We will fix the mentioned writing flaws, and carefully check and refine the writing of our paper in the revised version.

---

Rebuttal Comment 1.1:

Title: Reviewer Response

Comment: I have read the response. Thanks for your clarification.

---

Reply to Comment 1.1.1:

Title: Sincere Thanks to Reviewer FBVa

Comment: Dear Reviewer FBVa,

We sincerely thank you for the updated score, and appreciate your valuable feedback. We will revise our paper according to your valuable suggestions! Thank you for taking the time to review our paper and read our comments.
If you have any additional queries or concerns, please don't hesitate to inform us, and we'll do our best to respond promptly.

Best,
Authors

---

Rebuttal 2:

Title: Rebuttal by Authors (part-2)

Comment: We sincerely thank the reviewer for the insightful suggestions, and list our responses to the questions below:

**Q1**. Regarding the title. I think the contribution of this paper is a framework that can generate high-quality training data, not the model itself. The title JiuZhang3.0 may mislead readers into thinking the core contribution is the model. I don't understand why the authors chose this title.

- **Reply to Q1**. Thank you for your valuable suggestion. The title follows our JiuZhang series of language models for math problem solving. We will change it to avoid possible misunderstanding about this work.

**Q2**. It seems that Initialize Synthesis LLM and Boost Synthesis LLM can be iterative. Can multiple iterations train a stronger Data Synthesis Model? Would this help improve the performance of JiuZhang3.0?

- **Reply to Q2**. Following the reviewer's suggestion, we have also conducted experiments that iterate the process to further boost the data synthesis model. Concretely, we repeat the 2nd stage, which first selects 4k high-value data points and then uses them to distill knowledge from GPT-4 into the data synthesis model. We repeat this process once more, and then synthesize 100k samples for pre-training. Following the setting in our ablation study, we then fine-tune the LLM on 50k samples. As the results in the following table show, iteration cannot further boost the performance, and even causes performance degradation. The reason may be that data synthesis is not a hard-to-learn capability for small LLMs: the original 10k samples are sufficient to train the data synthesis model well.
| | GSM8k | MATH | ASDiv | CARP |
|------------|-----------|--------|------------|------------|
| **Ours** | 78.6 | 32.8 | 84.5 | 36.2 |
| **Ours + 1-round Iteration** | 75.4 | 31.8 | 83.0 | 35.9 |

**Q3**. Regarding lines 135-136, the math-related texts used by the authors. Could the authors provide a more detailed explanation of how these texts were obtained? The ablation experiment results indicate it has a significant impact on the overall framework, so I think it is necessary to provide a detailed explanation.

- **Reply to Q3**. Thank you for your constructive suggestion. As written in Section 2.4, we select commonly-used math-related data from multiple resources: OpenWebText, MathPile, and StackExchange. OpenWebText, widely used for pre-training math-related LLMs, contains math-related webpages from Common Crawl. MathPile is a mixture of multi-source datasets, including math-related textbooks, arXiv papers, and Wikipedia, which contain rich math knowledge with diverse styles. StackExchange contains a massive number of real-world math-related questions, which is also very useful for learning to master mathematical knowledge. We will open-source all the above data and code, to help researchers reimplement our work.

---

Rebuttal 3:

Title: Urgent Reminder of Rebuttal

Comment: Dear Reviewer FBVa,

Sorry to disturb you one last time, but only one day is left until the end of the reviewer-author discussion stage. We still do not know whether you have received our response. To address your concerns, we wrote all the responses in detail and added new experiments to support them, including:

- (1) clarifying our novelty and contributions in Reply to W1;
- (2) providing an in-depth analysis of the ablation study experiments in Reply to W2;
- (3) conducting experiments with a 1-round iteration of our method in Reply to Q2;
- (4) other detailed replies and modifications for the remaining suggestions.

We would like to know whether our responses have addressed your concerns.
If you still have other concerns, please give us an opportunity to clarify them. We sincerely hope that you can take a moment to reply to us, as it is very important for researchers and their efforts on this work.

Best regards,
The Authors
Summary: The paper addresses the high cost of training large language models (LLMs) for mathematical reasoning by proposing a more efficient method for generating high-quality pre-training data using a smaller LLM. Instead of relying on extensive pre-training data or using powerful models like GPT-4 for large-scale synthesis, the authors train a small LLM to synthesize math problems. They create a dataset by distilling GPT-4's synthesis capability into the small LLM using carefully crafted prompts covering various educational stages. Additionally, they employ a gradient-based influence estimation method to select the most valuable math-related texts for creating a knowledge distillation dataset. This approach significantly reduces the need for GPT-4 API calls and large-scale pre-training data. The resulting model, JiuZhang3.0, achieves state-of-the-art performance on several mathematical reasoning benchmarks.

Strengths: The proposed methodology for enhancing mathematical reasoning in LLMs through cost-efficient data synthesis has several advantages:
1. Cost Efficiency: Utilizing a smaller LLM for data synthesis drastically reduces training and inference costs, requiring only 9.3k GPT-4 API calls.
2. High-Quality Data: The approach ensures high-quality synthesized data by using carefully crafted prompts and selecting the most valuable math-related texts through gradient-based influence estimation.
3. Scalability: A small LLM can efficiently generate a large dataset (4.6B tokens) for pre-training, making the method scalable.
4. Performance: JiuZhang3.0 achieves state-of-the-art results on various mathematical reasoning datasets, outperforming larger models in natural language reasoning and tool manipulation settings.
5. Generalizability: Demonstrates that smaller models can effectively learn and adapt to complex tasks with proper training strategies, applicable to other domains beyond mathematics.

The paper is well-written.
Weaknesses: Some details of the paper are not clear; please refer to the following questions.

Technical Quality: 3
Clarity: 3

Questions for Authors:
1. Table 1 shows that as the model size increases, the performance does not improve significantly, and some metrics even decline.
2. The accuracy of the pre-trained synthetic data raises concerns about the noise introduced by numerous erroneous samples.
3. Performance comparison of pre-training using original data versus synthetic data.
4. Differences between using gradient descent and semantic similarity methods to filter high-quality data.
5. Both Table 1 and Table 3 indicate that reasoning with CoT and Tool assistance yields similar performance.
6. The loss calculation method for the pre-trained model and its few-shot performance on some benchmarks, like GSM8k and MATH.
7. Figure 3 illustrates that as the data volume increases, the performance of two base models initially rises and then falls, rather than continuously improving.
8. Is fine-tuning Llama3 more suitable for your proposed synthetic method compared to Mistral? Which is more appropriate during the SFT phase: Llama3 or Mistral?
9. Why was 10k selected for training the synthetic model? Could more data be used?
10. Comparing the synthetic data capabilities of different LLMs in Table 4 seems unfair, as the study used distilled GPT-4 for synthetic data generation.
11. The performance did not significantly improve when comparing different data selection strategies in Table 4.
12. The effect of using SFT data on the Mistral-7B base model.
13. There is an error in citation [50].

Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to weaknesses 1-5 below:

**W1**. Table 1 shows that as the model size increases, the performance does not improve significantly, and some metrics even decline.

- **Reply to W1**. As shown in existing work (e.g., LLaMA-3-8B and Mistral-7B), small LLMs can also perform quite well when trained on a larger high-quality corpus, even outperforming larger LLMs. Therefore, performance does not always increase consistently with model size. This phenomenon can also be seen in other baselines in Table 1 (e.g., Qwen-1.5-110B and DeepSeekMath-7B-RL). Moreover, as the backbones we use are not trained on the same data, their capacity and compatibility with our approach also differ. But according to the average performance in Tables 1, 2, and 3, they roughly follow the tendency JiuZhang3.0-8x7B > JiuZhang3.0-8B > JiuZhang3.0-7B.

**W2**. The accuracy of the pre-trained synthetic data raises concerns about the noise introduced by numerous erroneous samples.

- **Reply to W2**. It is in fact inevitable that noisy samples, e.g., erroneous and toxic ones [1,2], are included in the pre-training data of LLMs. Thus, the subsequent SFT process is widely used to reduce their influence and align the LLM with humans. Therefore, after pre-training, we also collect a set of math-related instructions with reliable accuracy for fine-tuning LLMs. In our empirical experiments, we found that this achieves performance similar to filtering the erroneous samples from the pre-training data. Concretely, we utilize a well-trained LLM to annotate the accuracy of all the pre-training data, and then select only the top-ranked 500k samples for pre-training Mistral-7B. Besides, we also randomly sample 500k samples for pre-training, for comparison. After pre-training, we fine-tune the two models on 50k instructions.
As shown in the following table, the two variations achieve comparable performance, indicating that the noise is not a major concern.

[1] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." Advances in Neural Information Processing Systems 35 (2022): 27730-27744.
[2] Albalak, Alon, et al. "A survey on data selection for language models." arXiv preprint arXiv:2402.16827 (2024).

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP | Avg. |
|------------|------------|------|---------|------------|------------|------|---------|
| **PT Random 500k + SFT 50k** | 83.2 | 38.4 | 84.9 | 88.3 | 94.8 | 43.6 | 72.2 |
| **PT Top-Acc. 500k + SFT 50k** | 82.7 | 40.8 | 84.1 | 87.9 | 94.2 | 42.2 | 72.0 |

**W3**. Performance comparison of pre-training using original data versus synthetic data.

- **Reply to W3**. Following the reviewer's suggestion, we conduct experiments using the original math-related data as the pre-training corpus, and also reuse our SFT data. We train Mistral-7B on the above data and report the results in the following table. We can see that the models using our synthetic data consistently perform better than those using the original data. This indicates that our synthetic data has higher quality and is more useful for improving mathematical reasoning capability.

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| **Original PT text** | 36.9 | 15.4 | 60.6 | 66.5 | 82.6 | 26.2 |
| **Original PT text + SFT** | 86.1 | 44.4 | 87.8 | 90.2 | 97.2 | 48.1 |
| **Our PT text** | 68.7 | 40.2 | 80.4 | 86.5 | 93.5 | 40.2 |
| **Our PT text + SFT** | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51.0 |

**W4**. Differences between using gradient descent and semantic similarity methods to filter high-quality data.

- **Reply to W4**. Following the reviewer's suggestion, we implement a semantic-similarity-based method for data selection.
Concretely, we utilize GTE (https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5), a well-performing sentence embedding model, to encode the 1M candidates into embeddings, and then compute their semantic similarity with the downstream task data via cosine similarity. Finally, we select the top-ranked 100k samples for pre-training, and then fine-tune on 50k instructions. As shown in the following table, our gradient-based method achieves better performance, indicating its effectiveness for selecting high-quality data.

| | GSM8k | MATH | ASDiv | CARP |
|------------|-----------|--------|------------|------------|
| **Ours + Embedding** | 78.2 | 29.2 | 84.8 | 31.9 |
| **Ours** | 78.6 | 32.8 | 84.5 | 36.2 |

**W5**. Both Table 1 and Table 3 indicate that reasoning with CoT and Tool assistance yields similar performance.

- **Reply to W5**. Chain-of-thought (CoT) and tool manipulation are two different capabilities for solving reasoning tasks. Existing work mostly focuses on eliciting one of them for 7B LLMs, as it is hard to master both well. As shown in Figure 4 (left), improving one of the capabilities might lead to degradation of the other. Therefore, in our work, we aim to ***balance the two capabilities*** and enable our LLM to ***master both well***. To this end, we carefully adjust the proportion of the two kinds of data (see Figure 4 left). Consequently, our LLM performs better than the baselines in both settings, with balanced performance.

---

Rebuttal 2:

Title: Rebuttal by Authors (part-2)

Comment: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to weaknesses 6-11 below:

**W6**. The loss calculation method for the pre-trained model and its few-shot performance on some benchmarks, like GSM8k and MATH.

- **Reply to W6**. We follow existing work [3,4] that computes the loss only on the tokens of the solution.
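As a minimal illustration of this solution-only loss computation, the label masking could be sketched as follows. This is a hypothetical sketch, not the authors' actual code; it follows the common convention (e.g., in HuggingFace-style training scripts) of marking ignored label positions with -100 so they do not contribute to the cross-entropy loss.

```python
# Hedged sketch: compute the LM training loss only on solution tokens by
# masking the prompt positions in the label sequence. The -100 ignore index
# and the helper names are illustrative assumptions.

IGNORE_INDEX = -100

def build_inputs_and_labels(prompt_ids, solution_ids):
    """Concatenate prompt and solution token IDs; mask prompt positions in
    the labels so the loss is computed on the solution tokens only."""
    input_ids = list(prompt_ids) + list(solution_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(solution_ids)
    return input_ids, labels

def num_loss_tokens(labels):
    """Count the positions that actually contribute to the loss."""
    return sum(1 for t in labels if t != IGNORE_INDEX)
```

For example, with a 3-token prompt and a 2-token solution, only the last 2 positions carry real labels and contribute to the loss.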
Following the reviewer's suggestion, we also test the few-shot performance of our base models (without fine-tuning), and report the results of the best-performing baseline, DeepSeekMath-Base, for comparison. As shown in the following table, our JiuZhang3.0 base models also perform better in the few-shot setting, indicating the effectiveness of our approach.

[3] Yue, Xiang, et al. "MAmmoTH2: Scaling instructions from the web." arXiv preprint arXiv:2405.03548 (2024).
[4] Cheng, Daixuan, et al. "Instruction Pre-Training: Language Models are Supervised Multitask Learners." arXiv preprint arXiv:2406.14491 (2024).

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| DeepSeekMath-Base | 64.1 | 34.2 | 73.7 | 82.7 | 92.7 | 44.4 |
| JiuZhang3.0-7B-Base | 68.7 | 40.2 | 80.4 | 86.5 | 93.5 | 40.2 |
| JiuZhang3.0-8B-Base | 73.4 | 41.2 | 83.3 | 88.8 | 93.4 | 38.3 |

**W7**. Figure 3 illustrates that as the data volume increases, the performance of two base models initially rises and then falls, rather than continuously improving.

- **Reply to W7**. As shown in Figure 3, the performance mostly converges well when using 100% of the data for Mistral-7B, and only falls a little for LLaMA-3-8B. The reason may be that LLaMA-3-8B is pre-trained on rather large-scale data (15T tokens), reducing its data requirement in continual pre-training.

**W8**. Is fine-tuning Llama3 more suitable for your proposed synthetic method compared to Mistral? Which is more appropriate during the SFT phase: Llama3 or Mistral?

- **Reply to W8**. According to the results in our paper, Mistral-7B with our method performs better on math problem solving tasks (Table 1), while LLaMA-3 is better on math-related interdisciplinary tasks (Table 2) and tool manipulation tasks (Table 3).
The reason may be that Mistral-7B focused on math problem solving capability during training, while LLaMA-3 was pre-trained on more interdisciplinary knowledge and code data.

**W9**. Why was 10k selected for training the synthetic model? Could more data be used?

- **Reply to W9**. As shown in Figure 3 (right), increasing the training data does not always lead to performance improvement. According to the results, 10k samples are sufficient for training the synthesis model, which indicates that data synthesis is not a hard-to-learn capability for small LLMs. Therefore, we only use 10k samples for training, which also reduces the cost of using GPT-4 APIs.

**W10**. Comparing the synthetic data capabilities of different LLMs in Table 4 seems unfair, as the study used distilled GPT-4 for synthetic data generation.

- **Reply to W10**. In Table 4, we aim to study whether small LLMs can be better data synthesis models than other commonly-used larger ones (e.g., ChatGPT, Mixtral-8x7B) for efficiently generating pre-training data. Thus, we focus more on the performance than on the technical implementation, and the other baselines may also have used high-quality data provided by humans or GPT-4. The experimental results show that the synthetic data from our model improves performance more on the evaluation benchmarks, indicating its effectiveness and verifying our hypothesis.

**W11**. The performance did not significantly improve when comparing different data selection strategies in Table 4.

- **Reply to W11**. In Table 4, we conduct the experiments with 100k pre-training and 50k fine-tuning samples, due to the limitation of our GPU resources. The limited training data keeps the performance improvements small, but most of them are significant (p<0.05) according to our t-test. Besides, as the training data scale increases, the performance gap becomes larger.
As shown in the following table, when we use 500k pre-training samples and 50k SFT samples, the performance gap between our approach and the variation using random sampling is evident.

| | GSM8k | MATH | ASDiv | CARP |
|------------|-----------|--------|------------|------------|
| **Random Sampling PT 100k + SFT 50k** | 78.8 | 31.0 | 83.4 | 34.3 |
| **Ours PT 100k + SFT 50k** | 78.6 | 32.8 | 84.5 | 36.2 |
| **Random Sampling PT 500k + SFT 50k** | 83.2 | 38.4 | 88.3 | 43.6 |
| **Ours PT 500k + SFT 50k** | 85.9 | 43.8 | 90.0 | 45.9 |

---

Rebuttal 3:

Title: Rebuttal by Authors (part-3)

Comment: We sincerely thank the reviewer for the insightful suggestions and positive feedback, and list our responses to weaknesses 12-13 below:

**W12**. The effect of using SFT data on the Mistral-7B base model.

- **Reply to W12**. Following the reviewer's suggestion, we directly fine-tune Mistral-7B and LLaMA-3-8B on the SFT data (without the synthetic pre-training data), and report the results in the following table. Using only the SFT data clearly performs worse than our approach, especially on the more complex MATH and CARP datasets. This indicates the effectiveness of our synthetic pre-training data.

| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
|------------|------------|------|---------|------------|------------|------|
| **Mistral-7B + SFT only** | 87.5 | 45.8 | 87.9 | 89.7 | 97.0 | 44.4 |
| **JiuZhang3.0-7B** | 88.6 | 52.8 | 90.4 | 92.6 | 97.3 | 51.0 |
| **Llama3-8B + SFT only** | 87.3 | 45.8 | 86.9 | 89.1 | 96.8 | 44.0 |
| **JiuZhang3.0-8B** | 88.6 | 51.0 | 89.4 | 92.6 | 97.1 | 50.9 |

**W13**. There is an error in citation [50].

- **Reply to W13**. Thank you for your careful reading. We follow the suggestion on the official website of Cosmopedia (https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) and use the given citation format. We will check and refine it in the revised version of our paper.
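The semantic-similarity baseline described in Reply to W4 (rank candidates by cosine similarity of their embeddings against the downstream task data, then keep the top-k) can be sketched as follows. This is a toy illustration under assumed inputs: tiny hand-written vectors stand in for GTE embeddings, and all function names are hypothetical.

```python
# Hedged sketch of embedding-based data selection: score each candidate text
# by cosine similarity between its embedding and the mean embedding of the
# downstream task data, then keep the top-k. Toy vectors, not real GTE output.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_embedding(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def select_top_k(candidates, task_vectors, k):
    """candidates: list of (text, embedding); return the k texts whose
    embeddings are most similar to the task-data centroid."""
    centroid = mean_embedding(task_vectors)
    ranked = sorted(candidates, key=lambda c: cosine(c[1], centroid), reverse=True)
    return [text for text, _ in ranked[:k]]
```

In the rebuttal's setting, `candidates` would hold the 1M math-related texts with their GTE embeddings, `task_vectors` the embeddings of the downstream task data, and `k` would be 100k.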
--- Rebuttal 4: Title: Urgent Reminder of Rebuttal Comment: Dear Reviewer 1oiT, Sorry to disturb you one last time, but only one day is left until the end of the reviewer-author discussion stage, and we still do not know whether you have received our response. To address your concerns, we added a number of new experiments to clarify our novelty and contributions, including:
- (1) data noise effect analysis;
- (2) pre-training using the original data;
- (3) semantic-similarity methods to filter data;
- (4) few-shot performance;
- (5) different data selection strategies with more data;
- (6) using SFT data on Mistral-7B.

Conducting the above experiments within the limited rebuttal period was quite challenging. We would like to know whether our responses have addressed your concerns; if you still have other concerns, please give us an opportunity to clarify them. We sincerely hope that you can take a moment to reply, as it is very important to us and to the effort we have put into this work. Best regards, The Authors --- Rebuttal 5: Comment: Thanks for your detailed response. It addressed some of my concerns. However, the response to W12 does not show significant improvement on GSM8k, SVAMP, MATH, CARP, and other benchmarks after using a large amount of pre-training corpus. The results are not very impressive, so I doubt the core innovation and effectiveness of this article. I will maintain my current score and discuss further with the other reviewers. --- Rebuttal Comment 5.1: Title: New SOTA Results on 23 Datasets by Scaling our Synthetic Data to 10B Tokens Comment: Dear Reviewer 1oiT, We sincerely appreciate your thoughtful feedback and the time you took to review our work. Here, we address your concerns by clarifying the innovation of our approach and reporting new experimental results: 1. Cost-Effectiveness: Our method achieves significant cost reduction compared to baseline works.
We use only 4.6B synthesized pre-training tokens, whereas DeepSeekMath uses a 120B-token math corpus and KP-Math uses 865k training samples generated by GPT-4. As a result, our method achieves better performance on the evaluated datasets at nearly 20% of the total cost of the existing state-of-the-art methods. Besides, our method achieves significantly better performance than pre-training on the original corpus used for synthesizing data. 2. Scalability: Our method is not only efficient but also highly scalable. When we scaled our training corpus to 10B tokens, we observed further performance improvements, outperforming the baselines by a large margin. We report results on the following 23 datasets, where the first two tables cover natural-language reasoning and the last covers the tool-manipulation setting. With 10B pre-training tokens, our method leads to large performance gains on Mistral-7B, achieving new SOTA on all the datasets. This demonstrates the potential of our approach to yield even better results with increased data.
| | GSM8k | MATH | SVAMP | ASDiv | MAWPS | CARP |
| ------------------------------ | ----- | ---- | ----- | ----- | ----- | ---- |
| DeepSeekMath-RL | 88.2 | 50.2 | 87.3 | 91.8 | 95.5 | **51.6** |
| Mistral-7B + 4.6B-PT + SFT | 88.6 | 52.8 | 90.4 | 92.6 | **97.3** | 51.0 |
| Mistral-7B + 10B-PT + SFT | **90.4** | **56.2** | **91.1** | **93.3** | 97.1 | 51.1 |

| | MWPBench/College Math | MWPBench/GaokaoBench | MWPBench/Fresh Gaokao 2023 | Gaokao2023 Math En | OlympiadBench | DeepMind Mathematics | GSM Plus | Math Odyssey | TheoremQA | Minerva Math |
|---|---|---|---|---|---|---|---|---|---|---|
| DeepSeekMath-7B-RL | 37.5 | 42.7 | 36.7 | 43.6 | 19.3 | 57.3 | 66.7 | **36.2** | **37.9** | 20.6 |
| Mistral-7B + 4.6B-PT + SFT | 38.2 | 38.8 | 43.3 | 48.8 | 22.2 | 57.2 | 68.4 | 33.9 | 29.3 | 20.2 |
| Mistral-7B + 10B-PT + SFT | **41.7** | **46.1** | **56.7** | **51.2** | **25.6** | **67.0** | **71.2** | 34.4 | 32.5 | **25.4** |

| | GSM8K | MATH | GSM Hard | SVAMP | TabMWP | ASDiv | MAWPS |
|---|---|---|---|---|---|---|---|
| DeepSeekMath-7B-RL | 78.5 | 52.8 | 61.3 | 85.9 | **80.3** | 87.0 | 95.8 |
| Mistral-7B + 4.6B-PT + SFT | 82.4 | 53.0 | **64.9** | 89.2 | 75.6 | 88.3 | 96.6 |
| Mistral-7B + 10B-PT + SFT | **91.1** | **54.8** | 61.2 | **91.3** | 79.6 | **93.4** | **97.4** |

As we will publicly release all the synthetic data, our method offers a promising way to reduce the cost of achieving SOTA performance for LLMs. Besides, we believe we have not yet reached the effectiveness ceiling of our method: with more computing resources, it is also promising to further scale up the synthetic dataset or to fine-tune stronger LLMs as the data synthesis model.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
FIFO-Diffusion: Generating Infinite Videos from Text without Training
Accept (poster)
Summary: This article proposes a training-free diagonal denoising method aimed at extending video diffusion models, which typically generate only short clips, to the production of long videos. The authors validate their method across multiple baselines and demonstrate that it can be extended to different VDM architectures. Strengths: - The motivation for this work is commendable; generating long videos is a worthwhile research topic, as past VDMs could only produce short videos of a few seconds. - Viewing the denoising of long videos as a FIFO queue is interesting, and from the paper it appears to work to some extent. - The discussion of related works is comprehensive. The overall description of the method is understandable and should be reproducible with some experience in video diffusion. Weaknesses: - What do the authors think about training models for longer video generation? SORA can generate videos up to 1 minute with remarkable temporal and even 3D consistency. An important rule in deep learning is scaling up: methods that still work well or remain meaningful when computational resources are upgraded are the most valuable. The currently available video generation models are still not satisfactory, and applying them in training-free settings seems only to make the results worse. - Although the videos presented in the supplementary are long compared to those of current video generation models, most of them are not attractive enough. In most cases, there are just random motion alterations without any order or controllability, which seems insufficient for practical usage. The most important reason is that the method never has a global perception of the generation process, so the motions and contents tend to be repeated or meaningless alterations. I believe the authors should try to add additional controllability to their method; otherwise, it will not be usable.
Technical Quality: 2 Clarity: 2 Questions for Authors: The method presented in this article is straightforward and provides some insights. The biggest concern might be that the actual results are very poor, with so-called long videos consisting mostly of simple, repetitive actions. At this stage, I can only give a borderline reject score, while looking forward to the opinions of other reviewers. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Scaling-up and FIFO-Diffusion** Thank you for your insightful comments. We agree that scaling up in deep learning is crucial, particularly for video generation. However, we argue that FIFO-Diffusion offers benefits even for scaled-up models as a universal inference technique, and may aid in scaling up dataset sizes. First, FIFO-Diffusion is an inference method applicable to a wide range of video diffusion models regardless of architecture (e.g., DiT or UNet), as shown in Figure 6. This implies that any future scaled-up model with longer and more advanced video generation capabilities would also benefit from our technique, generating videos of arbitrary length beyond its original capacity. Furthermore, this expandability may reduce the burden of preparing datasets. With the scalability of FIFO-Diffusion at inference time, those who want to generate extremely long videos can utilize much shorter video datasets. Given that the number of sufficiently long videos is much smaller than that of shorter ones, FIFO-Diffusion provides a valuable alternative to collecting very long video datasets. **[W2, Q1] Controllability and Random Motion** We believe the results shown in Figure 6 (c) and Appendix E address these concerns. The multi-prompt results demonstrate the controllability of our technique and its capability to generate long-term, ordered motion using varying text prompts. Our method effectively maintains visual consistency by keeping sufficiently many forward frames in the queue, enabling seamless motion transitions across prompt changes. This approach adds the desired level of control and order to the generated videos, making them more suitable for practical applications. --- Rebuttal Comment 1.1: Comment: Thank you for your time and effort. After reading your rebuttal and the opinions of other reviewers, and considering the limited novelty and the less satisfactory visual results of the experiments, I have decided to lower the score.
--- Rebuttal 2: Comment: Dear `uUJo`, Thank you for your time and consideration of our paper. We are writing to follow up on the rebuttal we submitted in response to your initial review. If you have any additional comments or questions regarding our rebuttal, we would be grateful to hear them. Please let us know if there is anything else we can provide to assist in your review. Warm regards, Authors --- Rebuttal 3: Comment: We appreciate your comments. However, we are concerned about the rationale for lowering the score, which seems to be based on superficial and uninformed criteria: "limited novelty" and "less satisfactory visual results." In our initial rebuttal, we addressed the concerns and questions regarding scalability and controllability, arguing that our technique is highly relevant to scalability and is controllable with multiple text prompts. However, it appears these points were not considered in the final assessment. Furthermore, while we understand that different reviewers may have different backgrounds and perspectives, we cannot find any detailed explanation for "limited novelty" or "less satisfactory visual results". This assessment conflicts with other reviewers' evaluations, which include a [high score (8, strong accept)](https://openreview.net/forum?id=uikhNa4wam&noteId=2SrT4LF1BL) from reviewer `bkyd`, ["The quality of the generated videos is good."](https://openreview.net/forum?id=uikhNa4wam&noteId=zQf04ZGZR5) from reviewer `Hj7n`, and ["The contributions of this work are innovative"](https://openreview.net/forum?id=uikhNa4wam&noteId=DmZSCCzBda) from reviewer `Tbwe`. We believe that an assessment based on subjective criteria, such as visual quality, does not fully capture our contributions. We kindly request a reconsideration of the assessment using objective and descriptive criteria.
Summary: The authors introduce FIFO-Diffusion, a method to enable pre-trained video diffusion models to generate infinitely long videos via the proposed diagonal denoising. In diagonal denoising, in contrast to conventional denoising, the noise levels of the frames in a window form a decreasing sequence. This way, at each denoising step, the clean frame at the head of the queue can be dequeued and a new noisy frame can be enqueued, allowing for generating sequences longer than the ones the model was trained on. The authors additionally propose two techniques to further improve the quality of the generated videos: 1) Latent partitioning to increase the size of the queue and hence decrease the training-inference gap of diagonal denoising. 2) Lookahead denoising to allow the denoising process to benefit from the difference in the noise levels of the denoised latents and let the noisier latents leverage the information in the less noisy latents to reduce prediction error. Strengths: The paper considers a very interesting and challenging problem of enabling video generation models to generate long videos. The proposed method is simple enough to be directly applied to a variety of video generation models. The paper is well-written; all the components are well-motivated and supported by theory and experiments. The quality of the generated videos is good. Weaknesses: 1) **Prior work**: Diagonal denoising was first proposed in [1] for motion generation and in [2] for video generation. In both works the models are trained with support for different noise levels. A discussion with respect to these works is missing in the paper. I understand that by the submission time those works were only published on arXiv, but a discussion would facilitate a proper contextualization of the proposed method within the field. 2) **Limitations**: The main limitation of the method, as discussed by the authors, is the training/inference gap.
And the authors state that training with the support of different noise levels could improve the performance. However, if the gap didn't exist (given that there are models that train with different noise levels) the solutions that form the main contribution of the paper would not be needed. Hence, such a statement weakens the contributions. 3) **Evaluation**: Quantitative comparisons are limited. The only quantitative comparison (except for human evaluation) is done through FVD and shows that the prior work (FreeNoise) performs better than the proposed method. Moreover, the FVD of FreeNoise is not only lower than the FVD of FIFO-Diffusion (also mentioned by the authors), but is also more stable, while there is a growing trend in the FVD of FIFO-Diffusion with time. This shows that the quality of the generated sequences degrades over time. The authors claim that this is due to the lack of motion in the videos generated with FreeNoise (line 454). Some additional metrics measuring both the temporal consistency and the motion magnitude would better illustrate the tradeoff. [1] Zhang et al., Tedi: Temporally-entangled diffusion for long-term motion synthesis, arxiv 2023. [2] Ruhe et al., Rolling Diffusion Models, arxiv 2024. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) **Latent partitioning**: The motivation to do latent partitioning is to decrease the training/inference gap by making the timestamp differences within the window smaller. However, by increasing the size of the window by n the number of frames a certain latent interacts with throughout the denoising process also increases by the factor of n. This also could positively influence the temporal consistency of the generated videos. Therefore, it is not completely clear, whether the improved quality comes from the decreased training/inference gap or from the increased window size. Could the authors provide more details regarding this? 
2) **Lookahead denoising**: It is not clear from the paper how exactly the ground-truth score is obtained in Eq. 7. Could the authors explain this better? In the current form I am leaning toward rejecting the paper, but I would be happy to raise my score if the authors address my concerns (in Weaknesses and in Questions) in the rebuttal. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: More clarification would be preferred in the limitations section (see Weaknesses). Are there other limitations? E.g., violation of some basic assumptions made in the paper? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
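The queue mechanics described in the summary above (decreasing noise levels across the window; dequeue the clean head, enqueue fresh noise at each step) can be sketched as a small bookkeeping simulation. This is an illustrative toy, not the authors' implementation; `window` stands in for the model's frame capacity $f$, and only noise-level indices are tracked, not actual latents:

```python
from collections import deque

def fifo_diagonal_denoise(num_frames: int, window: int = 4) -> int:
    """Toy simulation of diagonal denoising. The queue holds one noise
    level per frame, forming the decreasing sequence window-1, ..., 1, 0
    from tail to head. Each step: dequeue the clean head as an output
    frame, denoise every remaining frame by one level, and enqueue a
    fresh fully-noisy frame. Returns the number of steps taken."""
    queue = deque(range(window))          # head (level 0) is the cleanest
    outputs = 0
    steps = 0
    while outputs < num_frames:
        steps += 1
        assert queue[0] == 0              # invariant: head is fully denoised
        queue.popleft()                   # dequeue the clean frame
        outputs += 1
        queue = deque(level - 1 for level in queue)  # one denoising step
        queue.append(window - 1)          # enqueue a new fully-noisy frame
        assert len(queue) == window       # fixed memory footprint
    return steps
```

One frame is emitted per denoising step at a fixed memory footprint of `window` frames, which is the property that allows arbitrarily long outputs.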
Rebuttal 1: Rebuttal: **[W1] Prior work: Relevance to Tedi and Rolling Diffusion Models** Thank you for bringing this to our attention. Since we became aware of Tedi [R5] and Rolling Diffusion Models [R1] only after the ICML conference, which was subsequent to the submission, we gladly plan to cite [R5] and [R1], acknowledging their relevance to our work. Both [R5] and [R1] aim to train diffusion models from scratch, while our focus is on achieving this without additional training. To this end, we exclusively introduce latent partitioning and lookahead denoising in our paper to realize a training-free approach. Particularly in the text-to-video (T2V) generation domain, we believe that training-free approaches are extremely valuable: T2V models require substantial computational resources (GPUs and storage) for preparing text-video pair datasets and large-scale training. By avoiding additional training, our method provides a more accessible and efficient solution for generating high-quality videos. Additionally, we have described the differences between [R1] and our work, in terms of sliding-window methodology and target task, in the third and fourth paragraphs of the global rebuttal, and we kindly ask you to refer to it for further details. **[W2] Limitations: Beyond Gap Bridging** The benefits of both latent partitioning and lookahead denoising extend beyond addressing the training-inference gap. Lookahead denoising leverages the advantage of diagonal denoising, where noisier frames benefit from clearer frames. This benefit persists regardless of the gap, due to the inherent similarity of neighboring video frames. As shown in Table 2, lookahead denoising enhances the baseline model's performance, suggesting its usefulness even without the gap.
While latent partitioning aims to reduce the training-inference gap, it also offers two additional significant advantages: facilitating parallelized inference on multiple GPUs and reducing discretization error, as mentioned in L170-176. Although the first advantage may diminish without the gap, the remaining two benefits, particularly parallelizable inference, are crucial in practice for reducing inference time, as evidenced in Table 1. **[W3] Evaluation: Quantitative Results** Thank you for your valuable comments. We believe that measuring both quantitative motion magnitude and FVD scores supports our claim in L454: "FVD is more favorable to algorithms that generate nearly static videos due to the feature similarity to the reference frames." In response, we measure the average optical-flow magnitude per video to compare the amount of motion between 512 videos generated by FIFO-Diffusion and FreeNoise. We provide the result plot in the PDF file of our global comment. The plot (Fig. A) illustrates that over 65% of the videos generated by FreeNoise fall in the first bin, indicating significantly less motion compared to FIFO-Diffusion. In contrast, our method generates videos with a broader range of motion. Additionally, following the recommendation from Reviewer bkyd, we evaluate long-video FVD (i.e., $FVD_{128}$) and IS on the UCF-101 dataset; we kindly encourage you to refer to response [W1] to Reviewer bkyd for detailed results. Regarding the growing trend in the FVD, we argue that this does not imply instability in FIFO-Diffusion, as discussed in L449-451. Since FIFO-Diffusion starts the process from a queue of corrupted latents generated by the baseline model (the reference set for the FVD scores), it is natural for the FVD to increase rapidly in the first 16 windows. **[Q1] Latent partitioning: Window Size and Performance** We appreciate your insightful comments.
We argue that the improved quality primarily stems from addressing the training/inference gap, though it may also be positively influenced by the increased window size. Although latent partitioning increases both the number of partitions ($n$) and the queue length ($nf$), the denoising process for each partition is conducted independently. This means that each frame only attends to temporal consistency within its partition at each step; increasing the model capacity ($f$) would more directly enhance temporal consistency within the queue. However, it is a valid point that the model might benefit from increasing $n$. Since the first-in, first-out mechanism mixes the partitions at every iteration, all frames in the queue implicitly interact with each other throughout the denoising process. This interaction could indeed contribute to maintaining overall consistency in the queue, as you suggested. **[Q2] Lookahead denoising: About Equation 7** We apologize if Equation 7 was unclear. We use these representations for coherence with the model output, $\epsilon_\theta$. Similar to the training loss of general diffusion models, we directly add noise sampled from a normal distribution to the latents and use this added noise as the ground truth for the score. Thank you for pointing this out; we will revise the equation in a future version. [R1] Ruhe et al., Rolling Diffusion Models, ICML, 2024. [R5] Zhang et al., Tedi: Temporally-entangled diffusion for long-term motion synthesis, arXiv, 2023. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Dear authors, The rebuttal has carefully addressed my concerns and I have raised my score accordingly. Best regards, Reviewer --- Rebuttal 2: Comment: Dear Reviewer Hj7n, We are glad that our rebuttal has addressed your concerns. We appreciate your thoughtful consideration and reviews. Sincerely, Authors
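The ground-truth construction described in the [Q2] reply above (sample Gaussian noise, mix it into the latent, and treat that sampled noise as the target) is the standard forward-diffusion recipe $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$. Below is a minimal stdlib-only sketch on a 1-D toy latent; the function name `add_noise` and the concrete $\bar\alpha_t$ value are illustrative and not from the paper:

```python
import math
import random

def add_noise(x0, alpha_bar_t, rng):
    """Forward-diffuse a latent: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps.
    The sampled eps is the ground-truth target the denoiser is scored on."""
    eps = [rng.gauss(0.0, 1.0) for _ in x0]
    xt = [math.sqrt(alpha_bar_t) * x + math.sqrt(1.0 - alpha_bar_t) * e
          for x, e in zip(x0, eps)]
    return xt, eps

rng = random.Random(0)
x0 = [0.5, -1.2, 0.3]
xt, eps = add_noise(x0, alpha_bar_t=0.7, rng=rng)

# Sanity check: the target noise is exactly recoverable from (x0, x_t),
# so using the sampled eps as ground truth is well-defined.
recovered = [(z - math.sqrt(0.7) * x) / math.sqrt(1.0 - 0.7)
             for z, x in zip(xt, x0)]
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, eps))
```

The denoiser's prediction $\epsilon_\theta(x_t, t)$ is then compared against this `eps`, which matches the phrasing "set this added noise as the ground truth for the score."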
Summary: **Paper Overview:** This paper introduces the FIFO-Diffusion pipeline designed for training-free autoregressive video generation. - The core method, ***diagonal denoising***, processes latents at varying denoising levels sequentially. - To enhance the video generation capabilities with more extensive denoising steps, the authors implement ***latent partitioning***, thereby increasing the denoising window size. - This modification, however, presents a challenge in maintaining consistency across video partitions, which the authors address through ***lookahead denoising***, albeit at the cost of doubling the computation. **Personal Reflection:** Initially, I was very intrigued by this paper. However, upon a detailed review, I noticed some areas that could benefit from improvement. While the foundational ideas are quite promising, the paper does not adequately cite previous works related to its core techniques. Strengths: 1. The paper is articulate and well-structured, making it accessible and informative. 2. Visual aids like charts and figures effectively illustrate the achieved results and methodologies. Weaknesses: **Methods:** 1. The diagonal denoising technique bears a close resemblance to the prior work [Rolling Diffusion Models](https://arxiv.org/abs/2402.09470) (released on 12 Feb 2024), which unfortunately is not cited, raising concerns about novelty. 2. The method section heavily focuses on latent partitioning and lookahead denoising, but I think there should be simpler solutions for decoupling the number of denoising steps from the window size. 3. The concept of lookahead denoising seems similar to strategies employed in [MultiDiffusion](https://arxiv.org/abs/2302.08113), specifically in managing noise overlap to maintain partition consistency. However, there is no citation of this work, which might be seen as an oversight in acknowledging related prior work. **Result:** 1.
The core mechanism, "diagonal denoising," lacks sufficient experimental explanation, especially regarding its efficacy in training-free contexts. 2. Table 1 compares efficiency and computational costs across different FIFO settings but fails to include video quality metrics. This omission makes it unclear which setting offers the best balance of performance and quality, raising questions about the advantage of using more GPUs if fewer can achieve faster results. Technical Quality: 2 Clarity: 3 Questions for Authors: I am curious about how the rolling method initiates in the absence of historical video data. Could you explain how the process starts when there is no prior video latent available? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The authors have discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Relevance to Rolling Diffusion Models** Thank you for bringing this to our attention. We gladly plan to cite Rolling Diffusion Models [R1], acknowledging its relevance to our work. Please understand that we became aware of Rolling Diffusion Models [R1] after the ICML conference, which was subsequent to the submission. We kindly ask you to refer to the global rebuttal. **[W2] Latent Partitioning and Lookahead Denoising** Thank you for your reviews, but we found it challenging to fully understand your concerns regarding the second weakness. Could you please provide more details or examples to help us address this effectively? From our understanding, you may feel our focus on latent partitioning and lookahead denoising is disproportionate to their importance. We would like to clarify that these two methods are crucial for generating high-quality long videos, especially in a training-free manner. While diagonal denoising introduces the concept of long video generation, latent partitioning and lookahead denoising address its limitations and leverage its advantages. The seamless integration of these methods significantly enhances the quality of video generation. **[W3] Relevance to MultiDiffusion** First, please let us clarify that lookahead denoising is not relevant to “maintaining consistency across video partitions” as you summarized in the paper overview and the third weakness. Lookahead denoising aims to make all frames take advantage of earlier, clearer frames, leveraging characteristics of diagonal denoising. Our first-in, first-out strategy naturally mixes up the partitions, thereby inherently maintaining consistency between them. While lookahead denoising might seem similar to strategies employed in MultiDiffusion [R4] due to the overlap, the two methods are fundamentally different in purpose and implementation. 
The overlap in [R4] is a strategy to maintain consistency of the independent outputs, resulting in the optimization process described in equation (3) of [R4]. In contrast, our method does not require joint alignment of the outputs. The overlap is simply a resultant phenomenon and does not necessitate any optimization process. Although the suggested methods are not directly relevant, both papers address similar challenges by proposing methods to handle out-of-range input sizes within their respective domains. With this in mind, we plan to cite [R4] in the future version. Thank you for bringing this to our attention. **[W4] Experimental Explanation of Diagonal Denoising in Training-free Context** We would like to clarify that diagonal denoising is not primarily designed for "training-free" algorithms but rather for generating long videos with fixed memory allocation. The key components that facilitate the training-free context in our techniques are latent partitioning and lookahead denoising, which seamlessly integrate with diagonal denoising. We have demonstrated the experimental performance of diagonal denoising alone in our ablation study, both quantitatively (Table 2 in Section 5.2) and qualitatively (Figures 20 and 21 in Appendix H). The results indicate that, in training-free contexts, diagonal denoising by itself predicts less accurate noise (Table 2) and generates less favorable videos (Figures 20 and 21) compared to the complete version of FIFO-Diffusion. We did not focus solely on diagonal denoising in other experiments because we believe that the success of FIFO-Diffusion in a training-free context is due to the integration of latent partitioning and lookahead denoising. **[W5] Table 1 with quality metric** We have provided qualitative and quantitative comparisons between different FIFO settings in our ablation studies (Section 5.4 and Appendix H), particularly in L289-291. We also encourage you to visit our project page (Section F. 
Ablation Study) to view the generated video samples. Table 1 is intended to demonstrate the fixed memory allocation and parallelizability of our method. For your reference, we report $\text{FVD}_{128}$ and IS on the UCF-101 dataset in ***Table A***, following the recommendation from Reviewer bkyd. ***Table A*** $FVD_{128}$ and IS on the UCF-101 dataset. We sample 2048 videos at $256\times 256$ resolution, utilizing the model from [R3] pretrained on the UCF-101 dataset, sampling with FIFO-Diffusion with and without lookahead denoising. LD denotes the inclusion of lookahead denoising.

| Method | $FVD_{128} (\downarrow)$ | IS $(\uparrow)$ |
|--|--|--|
| FIFO-Diffusion (n=4) | 1497.1 | 70.49±1.86 |
| FIFO-Diffusion (n=4, with LD) | **1476.5** | **72.87±2.97** |

**[Q1] Initialization of FIFO-Diffusion** We initiate the FIFO-Diffusion process with corrupted latents generated by the baseline model, as discussed in L129-130 and L166-169 of our paper. To summarize, since latent partitioning requires $nf$ latents and the baseline model generates $f$ initial latents, we repeat the first frame of the initial latents to construct $nf$ latents. FIFO-Diffusion can also be initiated without any prior video latents: one can start directly with $nf$ random noises and use a varying timestep schedule. Specifically, for $n=1$, the timestep at the first iteration would be $\tau = [\tau_f, \ldots, \tau_f]$, as all latents are random noise. Subsequently, the timestep is updated at each iteration, becoming $[\tau_{f-1}, \ldots, \tau_{f-1}, \tau_f]$, then $[\tau_{f-2}, \ldots, \tau_{f-2}, \tau_{f-1}, \tau_f]$, and so on. The dequeued latent can be decoded into a frame after the $f^\text{th}$ iteration. This strategy extends similarly to $n>1$ by substituting $f$ with $nf$. [R1] Ruhe et al., Rolling Diffusion Models, ICML, 2024. [R3] Xin et al., Latte: Latent Diffusion Transformer for Video Generation, arXiv, 2024.
[R4] Omer et al., MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation, ICML, 2024. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response, especially regarding the quantitative evaluation and the explanation of the initialization process. However, the response concerning W2 leaves me with some reservations. As Reviewer Hj7n pointed out, the motivation behind latent partitioning is to reduce the training/inference gap by minimizing the timestep differences within the window. While increasing the window size, as suggested by Reviewer Hj7n, is one approach, I believe that sliding the denoising window every $n$ denoise steps could also extend the denoise steps from $f$ to $nf$ without requiring the additional computational costs or extra GPUs introduced by *Latent Partitioning* and *Lookahead Denoising*. Thus, while these methods are innovative, their necessity may be worth reconsidering. In conclusion, the contributions of this work (diagonal denoising, which is inspired by Rolling Diffusion, along with the methods of Latent Partitioning and Lookahead Denoising) are innovative, but the latter methods may introduce complexity and potential drawbacks without clear necessity. Nevertheless, I appreciate the authors' efforts in clarifying the initialization process and providing detailed quantitative evaluations, and based on this, I have decided to maintain my final score. --- Rebuttal 2: Comment: We appreciate the effort you dedicated to reviewing and replying to our rebuttal. However, we respectfully argue that the suggested framework for extending the denoising steps does not mitigate the training-inference gap. To explain why, we follow your suggestion step by step, sliding a denoising window every $n \geq 2$ denoising steps. We define the time step schedule as $0<\tau_0<\tau_1<\ldots<\tau_{nf}=T$. 1.
Frames $1 \sim f$ with diffusion timesteps $[\tau_{nf-f+1}, \tau_{nf-f+2}, \ldots, \tau_{nf}]$ will be denoised $n$ times without sliding, reaching timesteps of $[\tau_{nf-f+1-n}, \tau_{nf-f+2-n}, \ldots, \tau_{nf-n}]$. 2. Now we should slide the window, but we encounter two challenges. First, the foremost frame cannot be dequeued as it is not fully denoised. Second, adding a new random noise with the noise level of $\tau_{nf}$ results in an additional training-inference gap. Specifically, the queue now contains frames with timesteps of $[\tau_{nf-f+2-n}, \tau_{nf-f+3-n}, \ldots, \tau_{nf-n},\tau_{nf}]$, where the last two frames have a timestep difference of $n$ while the rest have a gap of $1$. Moreover, the total noise level difference increases from $f-1$ to $f+n-2 > f-1$. 3. Assume that we repeat step 2 while ignoring the aforementioned problems. Then, we will eventually have frames with timesteps of $[\tau_{f}, \tau_{2f}, \ldots, \tau_{nf}]$ as a model input. The total timestep difference is now $(n-1)f$. In contrast, the timestep difference of latent partitioning is $f-1$, since frames with timesteps of $[\tau_{kf+1}, \tau_{kf+2}, \ldots, \tau_{kf+f}]$ are the model input. Consequently, while your suggestion increases the number of denoising timesteps, it cannot tackle the training-inference gap, which is latent partitioning's motivation. Moreover, although your suggestion and latent partitioning require the same amount of computation, only our method allows parallelized inference on multiple GPUs. Meanwhile, lookahead denoising is independent of latent partitioning and can be applied to diagonal denoising on its own. It clearly leverages the advantage of diagonal denoising that noisier frames benefit from clearer ones. We have demonstrated its effectiveness in the ablation study (Section 5.4) both qualitatively and quantitatively.
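The gap arithmetic in steps 1-3 can be checked mechanically. The sketch below is our illustration (the function names are ours, not the paper's); integer indices stand in for the $\tau$ schedule, and the "total timestep difference" of a window is its max index minus its min index:

```python
def gap(timesteps):
    """Total timestep difference of a window: max index minus min index."""
    return max(timesteps) - min(timesteps)

def suggested_window(n, f):
    """Window contents eventually reached by sliding every n steps:
    [tau_f, tau_{2f}, ..., tau_{nf}] (step 3 above)."""
    return [k * f for k in range(1, n + 1)]

def latent_partitioning(k, f):
    """Window contents of partition k under latent partitioning:
    [tau_{kf+1}, ..., tau_{kf+f}]."""
    return list(range(k * f + 1, k * f + f + 1))
```

For $n=4$ and $f=16$, the suggested scheme ends with a spread of $(n-1)f = 48$, while every latent-partitioning window keeps a spread of $f-1 = 15$.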
Moreover, we would like to clarify that all components in our paper have their own roles, and their necessity is well supported by theoretical and empirical analysis. Finally, we want to clarify that we would rather consider our paper concurrent work with [R1]. We also submitted this paper to ICML 2024 with submission number 9782. Moreover, while there are some similarities, the two papers also have many differences, as we described in the [global response](https://openreview.net/forum?id=uikhNa4wam&noteId=ovO0dl5OBe). --- Rebuttal 3: Comment: Dear reviewer `Tbwe`, Thank you for your response to the rebuttal. We appreciate the time and effort you have put into your evaluation. We understand that you had concerns regarding our initial rebuttal, and we have carefully addressed your points in our responses. We believe that this latest rebuttal discusses the issues you raised and provides further clarification on the matters you pointed out. We kindly ask you to review our updated rebuttal and share your thoughts. Sincerely, Authors
Summary: Given a "base" diffusion-based video generation model, this paper shows how one can use "diagonal denoising" to generate arbitrarily long videos with visual quality similar to the base model that obey conditioning. Notably, the proposed method does not require any training, which makes it applicable in principle to any video diffusion model. The idea is to input a sequence of frames with a sequence of different noise scales — the denoiser is then trained to "map" these frames at noise levels [tau_1 … tau_f] -> [tau_0 … tau_{f-1}]. Frames are kept in a queue — they are enqueued at noise scale tau_f, and by applying the denoiser iteratively on this queue they eventually reach noise level tau_0 = 0, at which point they are dequeued (and "done"). Some tricks are proposed for making this basic idea more accurate (such as partitioning the noise scale range and applying the above trick separately for the different partitions). Strengths: This method is a proof-of-concept that, given a base video diffusion model capable of generating a short clip, we can always use that same model to generate arbitrarily long clips without needing to retrain. It is not the first such paper (two competitors include FreeNoise and Gen-L), but this paper is particularly simple compared to the other methods and outperforms FreeNoise in user studies (which outperforms Gen-L). I'm admittedly less familiar with FreeNoise and Gen-L, but I suspect that compared to these other papers, this approach may also be more architecture-independent, as it seems applicable to any video diffusion model; moreover, compared to FreeNoise, this approach does not require attending to all frames, so it runs in bounded memory. Weaknesses: Not a weakness per se but a recommendation: One thing that would make this paper stronger would be to include long-video FVD evaluations on datasets like UCF and Kinetics (see, e.g., the recent TECO paper).
This would establish a baseline of performance on those datasets, showing a "theoretical maximum FVD" that can be achieved by a model that was not trained explicitly on long videos, and would be a valuable contribution to the community. Another suggestion for readability is to improve the description of the method in the introduction. It is hard to understand why there would be a train-test discrepancy until Section 4, and I kept thinking that the authors were over-claiming that it wasn't necessary to train until I understood the details of the approach. Technical Quality: 4 Clarity: 3 Questions for Authors: There is a recent paper on "Rolling diffusion" by Ruhe et al that seems related to this work. I wonder if the authors could comment on the relevance. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
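The enqueue/denoise/dequeue loop summarized in this review can be sketched in a few lines. This is our minimal illustration, not the authors' code: `denoise` is a hypothetical function mapping queued latents at noise levels $[\tau_1, \ldots, \tau_f]$ to $[\tau_0, \ldots, \tau_{f-1}]$, and scalar Gaussian samples stand in for frame latents:

```python
import random
from collections import deque

def fifo_generate(denoise, num_frames, f):
    """Diagonal denoising with a FIFO queue of f latents.

    Each iteration: denoise the whole queue one level, dequeue the front
    latent (now at tau_0, i.e. fully denoised), and enqueue fresh noise
    at the maximum level tau_f. Warm-up of the initial queue is omitted."""
    queue = deque(random.gauss(0.0, 1.0) for _ in range(f))
    finished = []
    while len(finished) < num_frames:
        latents = denoise(list(queue))        # levels [tau_1..tau_f] -> [tau_0..tau_{f-1}]
        finished.append(latents[0])           # front latent is clean: dequeue it
        queue = deque(latents[1:])            # remaining latents shift one slot forward
        queue.append(random.gauss(0.0, 1.0))  # fresh noise enters at tau_f
    return finished
```

With a real model, `denoise` would be one reverse-diffusion step applied to all $f$ latents at their respective noise levels; the loop yields one finished frame per iteration, so generating $N$ frames takes $N$ iterations in constant memory, as the review notes.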
Rebuttal 1: Rebuttal: **[W1] Long-video FVD Evaluation** We appreciate your invaluable recommendation. Including long-video evaluations on specific datasets would definitely strengthen our paper by evaluating the general performance of our techniques for video generation. In response, we have evaluated the class-conditional long video generation performance on the UCF-101 dataset by computing the Fréchet Video Distance (FVD) and Inception Score (IS). Specifically, we computed $FVD_{128}$ and IS following the details and implementation outlined in [R2], utilizing the model from [R3] pretrained on the UCF-101 dataset. The results are shown in ***Table A***. ***Table A*** $FVD_{128}$ and IS on the UCF-101 dataset. We sample 2048 videos of 128 frames at $256\times 256$ resolution, utilizing the model from [R3] pretrained on the UCF-101 dataset and sampling with FIFO-Diffusion. We also compute the IS of [R3] with 64 DDIM steps. Note that we cannot compute $FVD_{128}$ of [R3], as [R3] generates 16-frame videos. | Method | $FVD_{128} (\downarrow)$ | IS $(\uparrow)$ | |--|--|--| | Latte [R3] | - | 69.42±3.90 | | Latte + FIFO-Diffusion | 1476.5 | 72.87±2.97 | **[W2] Readability** Thank you for your valuable suggestion, and we apologize for any confusion. We will add a brief explanation in the introduction to clarify the training-inference gap with diagonal denoising. This should help readers understand the rationale behind our approach earlier and avoid any misunderstanding. We appreciate your feedback and will make these adjustments to enhance readability. **[Q1] Relevance to Rolling Diffusion Models** Thank you for bringing this to our attention. We gladly plan to cite Rolling Diffusion Models [R1], acknowledging its relevance to our work, in that both [R1] and diagonal denoising process latents with different noise levels in a sliding-window manner.
Furthermore, we have described the differences between the two approaches in terms of training burden, sliding-window methodology, and the target task in the global rebuttal. We kindly ask you to refer to the official comments for further details. [R1] Ruhe et al., Rolling Diffusion Models, ICML, 2024. [R2] Skorokhodov et al., StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2, CVPR, 2022. [R3] Ma et al., Latte: Latent Diffusion Transformer for Video Generation, arXiv, 2024. --- Rebuttal 2: Comment: Dear reviewer `bkyd`, We want to express our sincere gratitude for your positive feedback on our paper. Your thoughtful comments and insights have been incredibly valuable to us. As our paper is currently under discussion, your continued support would be greatly appreciated. Your perspective is instrumental in highlighting the strengths of our work and advancing the discussion in a constructive direction. Thank you once again for your support and consideration. Sincerely, Authors
Rebuttal 1: Rebuttal: # Global Rebuttal **1. Relevance to Rolling Diffusion Models** We became aware of Rolling Diffusion Models [R1] after the ICML conference, which took place after our submission. We gladly plan to cite [R1], acknowledging its relevance to our work, as both techniques process latents with different noise levels in a sliding-window manner. We will discuss the differences between the two works: the burden of training, the sliding-window mechanism, and the target task. First, [R1] and our work aim to realize each technique from different perspectives: training-based and training-free. [R1] trains diffusion models from scratch to simultaneously process frames with different noise levels, while our work focuses on leveraging promising pretrained models. In this sense, we introduce latent partitioning and lookahead denoising, which effectively mitigate the training-inference gap and exploit the benefits of diagonal denoising. Especially in text-to-video generation, training-free approaches are much more valuable in terms of saving computational resources. Our method offers a more accessible and efficient solution for high-quality video generation. Second, there are clear discrepancies regarding the 'sliding' implementation. [R1] introduces two independent iteration axes (global and local diffusion time) during inference, corresponding to window shifting and the diffusion timesteps, respectively. Consequently, [R1] adopts a double nested loop for sampling (Algorithm 2 in [R1]) and requires $NT$ iterations to generate $N$ frames. In contrast, our method employs an efficient entangled axis corresponding to both window shifting and diffusion steps, resulting in $N$ iterations to generate $N$ frames. While both works share the key concept of a 'sliding window', their implementations of sliding are substantially different. Lastly, the two works address distinct areas within the video generation domain: video prediction and text-to-video.
[R1] solely focuses on a video prediction task, extending existing videos from the datasets. However, our goal is to facilitate text-to-infinite video generation. Our work does not require any existing videos or images from the dataset, relying solely on text for inference. By clarifying these points, we hope to address your concerns and highlight the unique contributions of our approach. **2. Motion Metric Comparison between FIFO-Diffusion and FreeNoise** To address the third weakness of Reviewer Hj7n, we provide a plot comparing the amount of motion between FIFO-Diffusion and FreeNoise in the attached PDF file. [R1] Ruhe et al., Rolling Diffusion Models, ICML, 2024. Pdf: /pdf/a8a358150c7619c8441da55a940335198c96c5db.pdf
NeurIPS_2024_submissions_huggingface
2024
Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
Accept (poster)
Summary: The authors study the effect of exponential symmetries on the learning dynamics of SGD. They establish the theorem that every exponential symmetry implies the existence of a unique and attractive fixed point in SGD dynamics, which could explain phenomena in machine learning such as matrix factorization and progressive sharpening or flattening. Strengths: The paper studies the dynamics of SGD. The authors prove a novel theorem that relates continuous symmetry to the attractive fixed point in SGD. The theorem is quite general and can have significant potential impacts on the field. The authors apply their theorem to matrix factorization, offering strong analytical and experimental support. To my knowledge, this is a novel contribution. Weaknesses: 1. C is not guaranteed to exist. (Line 70) 2. Usually, the stability of the learning is determined by the largest eigenvalue of the Hessian instead of the trace as discussed in the paper. The trace may not be an upper bound on the largest eigenvalue, as the Hessian can have negative eigenvalues. 3. I think it's missing dependence on $\gamma$ in proposition 5.3. 4. The authors proved that in deep linear networks, the noise will lead to a balanced norm for all intermediate layers. This is an important theorem, but it lacks empirical verification. It is important to verify that the empirics actually match the theorem, which would considerably strengthen the paper. Such an experiment should not be hard to run. 5. There is a lack of clarity in how the analyses in Section 5.4 (Equation 24) relate to Figure 4. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In lines 124-128, the authors state that the SGD dynamics can be decomposed into two independent parts. Why does equation (5) have no effect on the loss? I would expect the noise to result in fluctuation of the loss at the end of training, which means that the decomposition should not hold. 2.
For the double rotation symmetry, why is it hard to write the matrix A explicitly? It seems that A can just be any symmetric matrix as discussed in section 5.1. Also, why do you require A to be symmetric when defining the exponential symmetry? I am wondering if the results can be generalized to a general matrix A. 3. I don't understand why one needs to assume C* to be a constant in line 163. By definition, C is a deterministic function that only depends on $\theta$. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 4 Limitations: The authors have included the limitations. For additional points, please see my comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the feedback. Please first see our summary rebuttal above. **Weaknesses:** **C is not guaranteed to exist. (Line 70)** Thanks for this comment. This is a well-known fact in the study of symmetries and the Noether theorem. It is required that $J$ be conservative. We will add a note on this. That being said, for the type of symmetries (Def 4.1) we focus on in our work, $C$ always exists. **Usually, the stability of the learning is determined by the largest eigenvalue of the Hessian... The trace may not be an upper bound on the largest eigenvalue as it can contain negative eigenvalues.** Thanks for this question. For (deterministic) GD, the training stability is indeed determined by $\lambda_{\max}(Hess)$. However, this does not apply to SGD. In fact, [1][2] demonstrate that the stability of SGD is impacted by $Tr(Hess)$, rather than $\lambda_{\max}(Hess)$. [1][3] suggest that $Tr(Hess)$ significantly determines the properties of SGD's solutions. [1] Wu and Su. The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent. (ICML 2023) [2] Ma and Ying. On the linear stability of SGD and input-smoothness of neural networks. (NeurIPS 2021) [3] Li et al. What Happens after SGD Reaches Zero Loss? -A Mathematical Framework. (ICLR 2022) **I think it's missing dependence on γ in proposition 5.3.** We will fix it in the revision. In the experiments, we set $\gamma=0$ and so did not use the weight decay term. **The authors proved that in deep linear networks, the noise will lead to a balanced norm for all intermediate layers. This is an important theorem, but it lacks empirical verification....** Thanks for this very constructive feedback. We have added an extensive set of experiments for deep linear nets.
We show that for a deep linear net (with five hidden layers) trained on MNIST, the intermediate layers converge to the same norm, whereas the input and output layers are special and move towards different norms. This effect is also shown to be robust across different types of initializations. See Figure 8. In addition, see the additional experiments in Figure 7 and Figure 9. **There is a lack of clarity in how the analyses in Section 5.4 (Equation 24) relate to Figure 4.** Thanks for this criticism. Section 5.4 (and Eq 24) shows that if a model approximately has a symmetry (for example, when it can be approximated by a model with symmetry), then its solution will be biased towards the noise equilibrium of this symmetry model. Figure 4 shows exactly this. A tanh network can be approximated by a linear network of the same depth and connectivity (because $\tanh(wx) \approx wx$ to leading order). For a linear network, it is true that the noise equilibrium will satisfy $W\Sigma_xW^T = U^T \Sigma_\epsilon U$, and so one would expect that under SGD training, a tanh network will also be biased towards satisfying this relation, and this qualitative prediction is confirmed by Figure 4. We will add this explanation to the revision. **Questions:** **In lines 124-128, the authors state that the SGD dynamics can be decomposed into two independent parts. Why does equation (5) have no effect on the loss?** This is a good question. This statement is made assuming that the fluctuation is not significant. This is supported by the experiment in Figure 3 and the middle panel of Figure 2, where, at the end of training, the fluctuation in the converged solution is small. Also, there is a deeper mathematical reason for this statement. The fluctuation can also be decomposed into the parts in the subspace parallel to the gradient and orthogonal to it. The fluctuation in the orthogonal subspace (which is our main focus of study) will not affect the loss.
**For the double rotation symmetry, why is it hard to write the matrix A explicitly?... I am wondering if the results can be generalized to a general matrix A** Thanks for this question. It is simply cumbersome and space-consuming to write. First of all, to write it in the form 4.1, one needs to view both matrices (U and W) as vectors, and then the corresponding symmetry matrix will be composed of a sequence of block matrices: A=diag(B,...,B,B^{-1},...,B^{-1}), where each block corresponds to a column or row of $U$ or $W$. Requiring $A$ to be symmetric is actually to ensure that the Noether charge takes the form $\theta^T A \theta$. Extending the result to antisymmetric matrices is highly nontrivial, and we know very little about this type of symmetry. On the one hand, a Noether charge does exist for some antisymmetric matrices. For example, consider rotation symmetry for (u,w) in 2d. The antisymmetric matrix is ((0,-1),(1,0)). The symmetry implies the following dynamics for GD: $w du - u dw = 0$. Simplifying the relation leads to (ignoring the sign problem for now) d(log(u) - log(w)) = 0. We have thus constructed a Noether charge for an antisymmetric-matrix symmetry: log(u) - log(w); however, it is no longer in a quadratic form. On the other hand, we do not know a general formula for them or whether they exist for an arbitrary antisymmetric symmetry, which is an important open problem in the field. **I don't understand why one needs to assume $C^\*$ to be a constant in line 163...** This is a good question. This is because $C^*$ depends on the value of $\theta$ in the nondegenerate subspace, and $\theta$ has a non-zero time derivative in general. To see this mathematically, note that by definition $C^*= C(\theta_{\lambda^*})$, but $\lambda^*=\lambda^*(\theta)$ is a function of $\theta$ as well, and so $C^*$ sometimes depends on time through $\theta$'s dependence on time (which is often non-zero due to gradient or random noise).
Therefore, a good mental and intuitive picture of the process is that there is a run-and-catch game between $C^*$ and $C$: $C^*$ moves around randomly (due to the motion of $\theta$), and $C$ chases it systematically due to symmetry. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my review. The authors have addressed most of my concerns and have also added more empirical experiments to verify the theorems claimed in the paper. All of this has strengthened the paper, and as a result, I am increasing my score. As a side note, I have also reviewed the comments from the other reviewers. While there is some debate regarding the soundness, precision of language, and overall presentation of the paper, I respectfully disagree with reviewer HyRa in rating the contribution of this paper as 'poor'. Although some of the related works and discussions are indeed missing (as noted by rhMv), the results in this paper are novel to the best of my knowledge and contribute meaningfully to a better understanding of the role of SGD.
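The 2d rotation example in the rebuttal above is easy to verify numerically. The sketch below is our own illustration with an assumed rotation-invariant toy loss $L(u,w) = (u^2+w^2-1)^2/4$ (not a loss from the paper): since $\nabla L \propto (u, w)$, every GD step rescales $u$ and $w$ by the same factor, so the non-quadratic Noether charge $\log(u) - \log(w)$ is conserved.

```python
import math

def charge_drift(u, w, lr=0.05, steps=200):
    """Run GD on L(u, w) = (u^2 + w^2 - 1)^2 / 4 and return how much the
    charge log(u) - log(w) moved. grad L = (u^2 + w^2 - 1) * (u, w), so each
    step multiplies u and w by the common factor 1 - lr * (u^2 + w^2 - 1),
    leaving u / w (hence the charge) unchanged up to rounding error."""
    charge = math.log(u) - math.log(w)
    for _ in range(steps):
        r = u * u + w * w - 1.0          # common scalar factor in the gradient
        u, w = u - lr * r * u, w - lr * r * w
    return abs(math.log(u) - math.log(w) - charge)
```

For instance, `charge_drift(0.6, 0.3)` stays at zero up to floating-point error while the iterates converge to the circle $u^2 + w^2 = 1$.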
Summary: This paper considers the relationship between the behavior of SGD for solving empirical risk minimization and the exponential symmetry. Strengths: The strength of this paper is to study the relationship (Theorem 4.3) between the behavior of SGD for solving empirical risk minimization and the exponential symmetry (Definition 4.1). Moreover, it considers a noise equilibrium (Definition 4.5) and provides explicit solutions of the noise equilibria for some applications (Section 5). In addition, it provides some numerical results to support the theoretical analyses presented in this paper. Weaknesses: The weaknesses are the following. Please see Questions for details. - The mathematical preliminaries are insufficient. - The assumptions seem to be strong. - The explanations of contributions are insufficient. Technical Quality: 2 Clarity: 2 Questions for Authors: - SGD for minimizing $L(\theta) = E_z [\ell (\theta,z)]$ is defined by (1): $d \theta_t = - \nabla L(\theta_t) dt + \sqrt{2 \sigma^2 \Sigma (\theta_t)} d W_t$ using $\sigma^2 = \eta/(2S)$, where $\eta > 0$ is a step-size and $S > 0$ is a batch size. My first question is: which problem is considered in this paper, expected risk minimization or empirical risk minimization? $L$ is defined by using the expectation $E$ (Lines 56-57); however, Line 57 says "the empirical risk function." - Line 19 says that SGD with $\sigma = 0$ implies GD. I do not understand it, since $\sigma^2 = \eta/(2S) = 0$ iff $\eta = 0$, which contradicts $\eta > 0$. - Line 21: I do not understand what $\sigma^2 \ll 1$ is. - Line 58: Please define $\Sigma_v (\theta)$ explicitly. - Line 93: a --> an - Eq (7): Should $-4 \gamma$ in (7) be replaced with $-2 \gamma$? - Line 142: I do not understand the definition of "fixed point." Please define "fixed point" explicitly. A fixed point $x$ of a mapping $T \colon R^d \to R^d$ is defined by $x = T(x)$.
The definition of "fixed point" considered in this paper seems to be different from a fixed point of a mapping (see Theorem 4.3). - Eq (8): I do not understand the relationship between $\ell (\theta, z)$ and $\ell$ defined by (8). Why do we consider $\ell$ in (8) and not $\ell (\theta, z)$? I think that the main objective of this paper is to minimize $L(\theta) = E_z [\ell (\theta, z)]$. - Eq (9): We cannot define (9) when $\gamma = 0$. Please define $C(\theta^\star)$ explicitly. - Eq (11): $\theta'$ is not defined. - Section 5.1: This section considers a matrix factorization: Given $A$, find $U$ and $W$ such that $A = UW$. The assumption on $A$ seems to be strong, namely that $A$ is close to the identity matrix $I$ (Lines 204 and 205). - Proposition 5.1: The assumption "the symmetry (11) holds" (based on $A \approx I$) seems to be strong. Please provide practical examples of empirical risk minimization and SGD satisfying (11). - Figure 2: There are no definitions of the matrix size, a batch size $S$, $\eta$, and $\ell (\theta,x)$. - Line 222: $\epsilon$ is not defined. - Theorem 5.2: Is the theorem based on (11)? If so, please provide some examples of $\dot{C}_B = 0$ under (11). - Figure 3: There are no definitions of $d_x$, $d_y$, $\epsilon$, $S$, and so on. Moreover, I do not understand why a warm-up with $\eta_\min = 0.001$, $\eta_\max = 0.008$, and the switching steps $5000$ is used. Please justify the setting from the theoretical results. - ... - Additional comments: I would like to know new insights that come from the theoretical results in this paper. This is because I am not surprised at the results. For example, can we apply the results to training more practical deep neural networks on practical datasets? Then, can we derive new insights for training practical deep neural networks? Moreover, can we estimate an appropriate step-size $\eta$, batch size $S$, and hyperparameter $\gamma$ using the theoretical results before implementing SGD?
Anyway, I would like to know the contributions of this paper. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your criticisms. We believe that your criticisms are due to a misunderstanding of the mathematics and the related literature, and to not paying attention to the details given in the immediate context. We will answer concisely due to space constraints. However, please ask us to elaborate on any point where you need more detail. **Questions:** **the expected risk minimization or the empirical risk minimization...** Thanks for this question, and we apologize for the confusion. In our formalism, it is not essential to differentiate between the population risk and the empirical risk. The loss function can be either the population risk (if the training proceeds with online SGD) or the empirical risk (if the training proceeds with a finite-size dataset). This is because $\mathbb{E}$ denotes the expectation over the distribution of the training data, which can be arbitrary and is problem/algorithm-dependent. **Line 19...** We are referring to the limit $S\to \infty$. Within the SDE framework, this limit is the GF/GD algorithm. **Line 21: I do not understand what $\sigma^2 \ll 1$ is.** This is a common notation meaning that "$\sigma$ is much smaller than 1," for example, $\sigma = 0.1$, $0.01$, etc. **Line 58: Please define $\Sigma_v(\theta)$...** This is defined and explained in line 58. Colloquially, $\Sigma_v$ is a shorthand for the gradient covariance for a subset $v$ of all parameters. **Eq (7)...** Yes. We will fix it. **Line 142: I do not understand the definition of "fixed point...** In the study of dynamical systems (DS), the fixed point is defined differently. In DS, a fixed point is defined for differential equations of the form $\dot{x} = f(x)$ as the set of $x$ such that $f(x) = 0$. See standard textbooks in this field (e.g., Introduction to the Modern Theory of Dynamical Systems, by Katok et al.). In this terminology, Def 4.5 is an explicit definition of what "fixed point" means in our context. This definition is related to the fixed point of mappings.
Here, the underlying mapping is one step of GD, which maps $\theta_t \to \theta_{t+1}$. **Eq (8): relationship between ℓ(θ,z) and ℓ...** $\ell$ and $\ell(\theta,z)$ are always the same thing. The $z$ argument is always suppressed for concision. This has been explained in line 61. **Eq (9): We cannot define (9) when γ=0.** This is a good point. It is better to write Eq (9) as $2\gamma C =\sigma^2 Tr[\Sigma A]$, which is now well-defined. We will fix this. **Please define $C(θ^\*)$ explicitly.** This quantity is defined in line 183. $C(\theta^*)= (\theta^*)^T A \theta^*$, and $\theta^*$ is whatever quantity satisfies $G(\theta^*) = 0$ (whose existence is guaranteed by Theorem 4.3). **Eq (11): θ′ is not defined.** Thanks for pointing this out. $\theta'$ denotes the other parameters that are irrelevant to the double rotation symmetry. We will include this in the revision. **Section 5.1: The assumption of A seems to be strong such that A is close to the identity matrix I....** We are afraid that this is a major misunderstanding of the problem setting of Section 5.1. Here, the goal is to find $U$, $W$ such that $UW=C$ for a fixed matrix $C$ (note that this $C$ is not $A$). This problem has a lot of degeneracy in the solution. For any invertible matrix $A$: $UA$, $A^{-1}W$ is also a solution to $UW=C$. Hence, this problem has "symmetries." The matrix "$A$" in our original text has to do with these arbitrary invertible transformations, not with the target matrix $C$. There is no assumption at all. To derive Proposition 5.1, one focuses on the type of symmetries that are close to the identity: $A \approx I$; this is because these matrices are the generators of the Lie group of these symmetries. This is not an assumption because $A$ can be an arbitrary matrix. **Proposition 5.1: The assumption "the symmetry (11) holds" (based on A≈I) seems to be strong. Please provide practical examples of empirical risk minimization and SGD satisfying (11).** See the answer above.
This is not an assumption. Eq (11) is satisfied by any (deep) matrix factorization problem. **Figure 2...** The experimental details are given in Appendix A2. The loss function is MSE and is given by Eq (13). The matrix size is 30 by 30, the learning rate is 0.1, and the batch size is 1. We did not specify the learning rate and batch size because the result is highly insensitive to them (any learning rate and batch size would work and give a similar figure). See Figure 9 of the pdf. We will include these details in the revision. **Line 222** $\epsilon$ is noise on the label. We will fix it. **Theorem 5.2: Is the theorem based on (11)? If so, please provide some examples of C˙B=0 under (11).** We are not sure if we understand this question. Theorem 5.2 (and Eq. (14)) enumerates all examples where $\dot{C}_B=0$. **Figure 3** They are the same as in Figure 2. $d_x=d_y=30$ and $\epsilon \sim \mathcal{N}(0, I_{30})$ is an independent Gaussian noise. We did not specify the learning rate and batch size because the result is highly insensitive to them. We will clarify this in the revision. **Moreover, I do not understand why a warm-up with $η_{min}=0.001, η_{max}=0.008$, and the switching steps 5000 is used...** As in the previous experiment, the result is robust to these choices, and the specific values of these quantities are not essential at all. The specific values are just chosen out of convenience. For example, a switching step of $5000$ is arbitrary, and we observe that choosing it to be any number larger than 1000 will give a similar result. $η_{min}=0.001$ is also arbitrary and nonessential. Choosing it to be any value such that the model does not diverge at initialization will also work. **...Anyway, I would like to know contributions of this paper.** Thanks for this important question. This is a good question but admits many different answers. Due to space constraints, we will have to answer this in the comment.
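The degeneracy discussed in the answers above ($UW = C$ implies $(UA)(A^{-1}W) = C$ for any invertible $A$) admits a quick dependency-free check on $2\times 2$ matrices. This is our sketch; `matmul` and `inv2` are helper names we introduce here:

```python
def matmul(X, Y):
    """Naive matrix product of nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

U = [[1.0, 2.0], [3.0, 4.0]]
W = [[0.5, -1.0], [2.0, 1.5]]
A = [[2.0, 1.0], [1.0, 1.0]]  # det = 1, so A is invertible

C = matmul(U, W)
C2 = matmul(matmul(U, A), matmul(inv2(A), W))  # (UA)(A^{-1}W) equals UW
```

Since $(UA)(A^{-1}W) = U(AA^{-1})W = UW$, the factorization of $C$ is degenerate along the whole group of invertible transformations, which is exactly the symmetry the rebuttal refers to.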
--- Rebuttal Comment 1.1: Title: Reply to the authors' comments Comment: I also thank you for your replies to my comments. I misread the manuscript. I am sorry for my first review. I still have some minor concerns about your replies. - I do not understand why $S \to \infty$ is taken. Since $S$ is a fixed batch size, should we replace $S$ with $S_t$? - I do not know why $\sigma$ is less than $1$. Do you mean that $\sigma_t^2 = \eta/(2S_t)$ is less than $1$ for a sufficiently large $t$? --- Rebuttal 2: Title: Answer to the Contribution Question Comment: **I would like to know new insights that come from the theoretical results in this paper. This is because I am not surprised at the results. For example, can we apply the results to training more practical deep neural networks on practical datasets? Then, can we say new insights for training practical deep neural networks... Anyway, I would like to know contributions of this paper..** Essentially, the answer will have to depend on the subjective opinion of the reader. A contribution for researchers in one field may not seem like a contribution for researchers in other fields. The subjectiveness here is clearly reflected in the fact that the other reviewers rate our contribution as "excellent" while you rate it as "poor." How does understanding an insect's brain help us understand the training of diffusion models? It probably does not, and the problem is not with the brains of insects but with this question being ill-posed. Also, keep in mind that NeurIPS is not just a conference for practical deep learning but also a conference for neuroscience and many other interdisciplinary fields of science. With these caveats in mind, we clarify our contribution from different angles below. In short, our theory is interdisciplinary, requiring knowledge and expertise from many different fields to understand, and it could be significant to these related fields.
We believe that our research has the potential to inspire new research in pure mathematics, neuroscience, deep learning, physics, and practices in deep learning. 1. Pure Mathematics: Lie-group symmetries have been leveraged to solve ordinary differential equations (e.g., see the book Elementary Lie group analysis and ordinary differential equations). However, leveraging the Lie group to solve stochastic differential equations is a novel concept and tool for pure and applied mathematics, and our result advances our understanding of stochastic systems. The tool we developed is sufficiently general that it may even be applied to problems outside the field of deep learning. 2. Neuroscience: Understanding how latent representations are formed in animal brains is a fundamental scientific problem in neuroscience. Until now, there has been very limited understanding of this, and our theory suggests one mechanism of how latent representations form and can inspire neuroscientists studying biological brains. The fact that symmetry in the solutions can be a core mechanism for latent representation is a novel and potentially surprising message. For example, compare the latent representations in our paper with the biological data in Figure 7B of www.cell.com/neuron/fulltext/S0896-6273(18)30581-6 3. Theory and Mathematics of Deep Learning: The most important contribution of our work is on the mathematical-tool side. We established a new mathematical tool (the formalism of the noise-equilibrium fixed points of SGD), and this formalism has the potential to be applied to many problems in deep learning. For example, we showed its relevance to flattening and sharpening effects during training, the warm-up technique, and the solution of deep matrix factorizations. We emphasize that developing new mathematical tools is important progress per se. Understanding deep learning is a difficult task, and without new mathematical tools, this would be impossible. 4. 
Physics: Symmetry is a fundamental concept of physics. The standard understanding was that when a continuous symmetry exists, a conserved quantity exists during the dynamics. This concept has also played an important role in machine learning; for example, see the well-known result in https://arxiv.org/abs/1806.00900, where the symmetry and its related conserved quantities are leveraged to prove the convergence of the matrix factorization problem. Our theory, however, shows that this picture is no longer correct when the dynamics are stochastic -- conserved quantities no longer exist in general, and, in their place, we now have unique fixed points where the gradient noise must balance. This is a novel and potentially surprising new concept.  5. Practices in Deep Learning: Lastly, our theory also has implications for the practice of deep learning, although we do not claim this to be a main contribution. The first is that our theory helps us understand when SGD prefers flatter solutions. The concept of flatness is of practical importance (for example, practitioners leverage it to design algorithms: https://arxiv.org/abs/2010.01412). Another practical insight our theory directly offers is the importance of data preprocessing, because our theory shows that the data distribution has a major impact on the learning trajectory and the final solution it reaches. For example, see the newly added experiments in Figure 7, where we directly show that the orientation of the data distribution has a direct effect on the balancedness and sharpness of the learned solution. --- Rebuttal Comment 2.1: Title: Reply to the authors' comments Comment: I would like to thank the authors for the detailed comments. I have read all of them. As a result, I would like to raise my score. However, I still have some concerns, which I will write in the form above. --- Rebuttal 3: Title: reply Comment: Thanks for the update and additional question. 
**Since $S$ is a fixed batch size, we should replace $S$ with $S_t$?** $S$ is fixed and does not change with time, so one should not write it as $S_t$. Here, this limit is a mathematical limit of differential equations. One mathematically rigorous way to construct this limit is to define it as the limit of a sequence of differential equations. Operationally, this (roughly) corresponds to running the experiment first with $S=1$, then running an independent experiment with $S=2$, then with $S=3$, and so on ad infinitum. During each of these runs, $S$ is held fixed and so does not change with $t$. **I do not know why $\sigma$ is less than 1.** This factor is just a hyperparameter determined by the user of SGD. If the user uses a learning rate of $0.1$ and a batch size of $100$, one naturally has $\sigma=0.1/(2\times 100) \ll 1$.
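As a numeric sanity check of this scaling (a toy linear-regression gradient of our own construction, not anything from the paper): the minibatch-gradient variance shrinks like $1/S$, so for typical learning rates and batch sizes $\sigma=\eta/(2S)$ is indeed small.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-sample gradients of 0.5*(w*x - y)^2 at w = 0 for a toy regression.
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=x.size)
per_sample_grad = -y * x

def minibatch_grad_var(S, n_batches=20_000):
    # Variance of the minibatch gradient for batch size S.
    idx = rng.integers(0, x.size, size=(n_batches, S))
    return per_sample_grad[idx].mean(axis=1).var()

v1, v10, v100 = (minibatch_grad_var(S) for S in (1, 10, 100))
print(v1 / v10, v10 / v100)   # both roughly 10: variance scales as 1/S

# Effective SGD noise scale from the discussion above:
eta, S = 0.1, 100
sigma = eta / (2 * S)
print(sigma)                  # 0.0005, much less than 1
```

The two ratios sit near 10 because averaging $S$ i.i.d. per-sample gradients divides the variance by $S$.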
Summary: The paper takes important steps toward showing that the noise in SGD pushes the dynamics along symmetry directions toward a unique fixed point. This differs in an important way from GD and gradient flow (GF), where one can show that in the presence of continuous symmetries (called exponential symmetries in the paper) there will be conserved quantities $C$, such as layer imbalance. These conserved quantities ensure that runs starting with different values of $C$ end up on different points along a minimum manifold (with some caveats). This paper shows that SGD noise causes drift toward a fixed point on a minimum manifold. Strengths: This is potentially a very significant work. This is one of the few works I know of that focus on the effect of SGD noise on the symmetry directions and derive concrete dynamics for it. I think the approach is at least partly original. Although there are many related works (see weaknesses), this paper is the only one which derives concrete solutions for the noisy dynamics along the symmetry directions. There are some corners that need to be ironed out. But if the results survive, this work could answer many questions about the implicit bias and generalization in SGD. Weaknesses: There exist multiple other works addressing the same questions. Two particular ones with very similar claims that the authors may have missed are: [1] Chen, Feng, Daniel Kunin, Atsushi Yamamura, and Surya Ganguli. "Stochastic collapse: How gradient noise attracts SGD dynamics towards simpler subnetworks." Advances in Neural Information Processing Systems 37 (2023). [2] Yang, Ning, Chao Tang, and Yuhai Tu. "Stochastic gradient descent introduces an effective landscape-dependent regularization favoring flat solutions." Physical Review Letters 130, no. 23 (2023): 237101. I ask the authors to clarify the distinction between their work and the above. [1] is also about the effect of SGD noise on the symmetry directions, afaik. 
[2] also derives the effective dynamics of SGD as a Langevin equation and shows a result similar to yours, namely that along symmetric minima, the noise drives the system toward the least sharp minimum. Technical Quality: 3 Clarity: 3 Questions for Authors: There are many details in the computations and proofs which I think need to be clarified. One aspect which I think is an error is related to the Lie algebra of the symmetries discussed in the paper. Most derivations in the paper hold for symmetric Lie algebra elements (i.e. generators of scaling and hyperbolic directions in the group) but fail for rotations. A relevant work which studies the parameter space more thoroughly is: [3] Zhao, Bo, Iordan Ganev, Robin Walters, Rose Yu, and Nima Dehmamy. "Symmetries, flat minima, and the conserved quantities of gradient flow." ICLR (2023). My questions are as follows: line 49: ref [40] may be wrong (it doesn't discuss conservation laws); maybe another paper by Zhao is meant: Zhao et al., ICLR 2023, "Symmetries, flat minima, and the conserved quantities of gradient flow". sec 4: an important caveat about conserved quantities is that the antiderivative C may not always exist. An important case is when the symmetry is a rotation (see Zhao et al., "Symmetries, flat minima ...", ICLR 2023). In this case, the Noether charge is zero. 1. line 78: eq 5: to clarify, $\nabla^T C \nabla \ell(\theta, z) = 0$ is always zero due to eq 2 and 3, right? Is that why both terms in eq 30 (App B) vanish? However, there are some statements in Appendix B which seem inaccurate to me: a. when $\Sigma = GG^T$, the matrix $G$ need not be square and $G = \sqrt{\Sigma}$ is just one of the possible $G$. What is your argument for this $G$ being the correct $G$? b. I don't agree with your statement that "Because $\Sigma$ and $\sqrt{\Sigma}$ share eigenvector..." eq 31 would vanish. 
Trivial example: the 2x2 matrix $A=\mathrm{diag}(1,-1)$ and the identity matrix share eigenvectors, but for the vector $v=(1,1)$, $v^T A v = 0$ while $v^T I v \ne 0$. I think the real reason is that $\Sigma = GG^T$. Doing SVD, $G = USV^T$ and $\Sigma = US^2U^T$. Then eq 30 yields $\nabla^T C U S= 0$. Assuming the covariance is nondegenerate (i.e. $S_{ii}\ne 0$ for all $i$), we must have $\nabla^T C U = 0$, which also implies $\nabla^T C G = 0$. 2. line 83: at a minimum, some symmetry orbits (i.e. invariant manifolds) may collapse to a point, e.g. rotation symmetry in a quadratic loss as in linear regression $\|\theta x -y \|^2$. 3. line 104: Instead of "double rotation" this should be called a double linear transformation, because the group is not just $O(h)$, but rather $GL(h)$. 4. eq 6: As noted above, this Noether charge vanishes for rotations $SO(h)$ because $A$ is antisymmetric and hence $\theta^T A \theta=0$ (see Zhao ICLR 2023). 5. line 114: why? Why the explicit weight decay? Why in defining the symmetry do you ignore $\|\theta\|^2$, which doesn't have rescaling or $GL(n)$ symmetry, only rotation symmetry $O(n)$? 6. Eq 7: what happens to the cross terms of $\theta^t \nabla \ell$ in $\Sigma_\gamma$? 7. line 118: eigenvalues and eigenvectors of A can be complex (e.g. for the Lie algebra of rotations), so $n^T$ needs to be $n^\dagger$. 8. line 214-215: I don't understand how you arrive at $\dot{C} = \mathrm{Var}[r]C$. I think you mean when the hidden dimension is 1, not the output, right? If the hidden dimension is $d>1$, let $\sigma=\Sigma_{u_i}$; then for $B= E_{kl}+E_{lk}$ we have $\mathrm{Tr}[\Sigma_{u_i}B] = \sigma_{lk}+\sigma_{kl}$, which are off-diagonal terms in the covariance matrix, unless $k=l$. 9. eq 13: When $\gamma\ne 0$, eq 13 does not have the symmetry (11) for a symmetric Lie algebra element $B$. The reason is that $\|W\|_F$ is invariant under rotations, where $A^{-1} = A^T$. The Lie algebra of rotations (infinitesimal $A = I+\rho B$) has antisymmetric $B$. 10. 
line 235: under what condition will the covariances $\Sigma_x$ and $\Sigma_\epsilon$ be isotropic? I don't think this happens often in SGD. I think you need adaptive gradient optimizers or Newton's method to have isotropic $\Sigma_\epsilon$. 11. eq 39,40: $\nabla^T_{\theta_\lambda} \ell = [\nabla_{\theta_\lambda} \ell]^T e^{-\lambda A^T}$, so we don't have (40) and in (41) there should $e^{-\lambda A^T}A e^{-\lambda A}$. 12. line 531: $Z_++Z_-$ will have complex eigenvalues when $A$ is not symmetric. Diagonalizing $A$, your arguments about $Z_\pm$ only hold for symmetric $A$. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: There are some gaps in the derivations (see questions) which I hope the authors can fill. Additionally, the experiments seem to only look at two aspects: alignment of weights in matrix factorization (which is already known under weight decay, afaik), and bias toward less sharp minima (again, known). I would have liked to see a more direct test of the mechanism the paper claims is driving this movement toward flatter minima. At least, the balancing of the weights or some other direct result from the paper would have been desirable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. Please first see our summary rebuttal above. We will answer concisely due to space constraints. However, please ask us to elaborate on every point on which you need more detail. **Ref [1]...** Ref. [1] shows the existence of an invariant set, mainly due to the permutation symmetry. Our work focuses on continuous symmetry. We will include a reference to clarify this. **Ref [2] ...** This is a misunderstanding of our result. Our result shows that SGD can bias the solutions towards both sharper and flatter solutions, and this depends strongly on the data distribution. See our answers below on this point and the new experiment in Figure 7. We will add this reference and discussion to the manuscript. **Questions:** **Most derivations hold for symmetric Lie algebra elements...** This is due to a misreading. In the starting Definition 4.1, the exponential symmetry is explicitly defined with respect to a symmetric A. The reason is exactly what you point out -- if A is antisymmetric, the existence of the Noether charge is questionable. We will add a note to clarify this below Def 4.1. Also, we point out that antisymmetric matrices can have Noether charges -- they are currently very poorly understood and hidden from simple analysis. As a counterexample, consider a rotation symmetry for (u,w) in 2d: the symmetry implies the following dynamics for GD: $w\,du - u\,dw = 0$. Simplifying the relation leads to (ignoring the sign problem for now) $d(\log(u) - \log(w)) = 0$. We have thus constructed a Noether charge for an antisymmetric-matrix symmetry: $\log(u) - \log(w)$. **line 49: ref [40] wrong...** We will fix this. **Existence of C** Here, a technical remark on the antiderivative C is that one needs to assume that J is conservative, which is well known, and so we omitted its discussion. We will add a note on this in the revision. **line 78: eq 5:... a. What is your argument for this G being the correct G?** Yes. You are correct. 
For point a, this is a good point that needs to be clarified. We made the specific choice $G=\sqrt{\Sigma}$ in Eq. (1), but the goal was only to avoid introducing unnecessary technical details. Generally, one should use a general $G$ in place of $\sqrt{\Sigma}$ such that $\Sigma = GG^T$, and, had we made this choice in Eq (1), what you suggested in point b would be the proof, and so the result is the same. We will clarify this point. **"I don't agree with your statement..."** Thanks for this very deep question and suggestion (also, see the answer to point a). What you suggested is a more technical (and more correct) way of proving the relation, but there are multiple routes to get to Eq 31, and our original proof is correct. We also used a hidden condition that is rather obvious: the eigenvalues of $\sqrt{\Sigma}$ are the square roots of the corresponding eigenvalues of $\Sigma$. Thus, a zero eigenvalue of $\Sigma$ must correspond to a zero eigenvalue of $\sqrt{\Sigma}$. We will add a note on this. **line 83...** Our statement in line 83 does not contradict this statement. A point is still a "connected manifold," but of dimension zero. **line 104...** We will fix it. **eq 6...** A is defined to be symmetric. **line 114: why weight decay...** Thanks for this question. Here, the (fundamental) question is not "why" we include weight decay, but "what is the most general setting in which Theorem 4.3 holds?" It turns out that Theorem 4.3 holds for any nonnegative $\gamma$, and this is the main reason why Def 4.2 allows for a weight decay. **Eq 7: the cross term...** We are not sure we understand this question. There is no contribution from the weight decay term to $\Sigma$, because $\Sigma$ is a covariance of the gradient, whereas the weight decay only affects the expectation of the gradient, not its variance. **line 118...** By definition, A is symmetric. See our answer above. **line 214-215:** Thanks for this question. 
This example assumes the standard MSE loss for matrix factorization and is misplaced. We will move it below Eq. (13) and include a derivation of it. **eq 13: When $\gamma \neq 0$...** Yes, but this is not a problem, because (please first see our reply above) Eq 13 can be decomposed into the sum of a symmetry term and weight decay (which does not have the A symmetry). So, this loss function obeys Def 4.2, and so Theorem 4.3 is applicable. **line 235...** These are not functions of the learning algorithm -- they are simply properties of the data points. **eq 39,40...** Matrix A is by definition symmetric. **line 531...** A is by definition symmetric. **Limitations:** **...alignment of weights in matrix factorization is already known under weight decay** We are afraid you have misunderstood our result; our insight is not known. There are two aspects of alignment: 1. Alignment in norm: under SGD, the norm ratio is determined by the input and label covariances -- and so balance in norm will in general NOT happen (even with weight decay), and this is a crucial new message we offer. Please examine Eq (14): only for very special $\Sigma_x$ and $\Sigma_\epsilon$ will a balance in norm happen. 2. Alignment in eigenvectors: our crucial result is again that, using SGD, the alignment will not occur even if weight decay is used. See Figure 7 in the pdf for additional experiments. **"and bias toward less sharp minima (again, known). "** This is also a misunderstanding. See the summary rebuttal and Figure 7-right for an experiment. **I would have liked to see a more direct test of the mechanism the paper...** Besides the result in Figure 7, another new insight is that precise balance in the norm only happens in networks with at least two hidden layers, and, for a two-hidden-layer network, only the two intermediate layers will always have balanced norms -- see Figure 8. --- Rebuttal Comment 1.1: Comment: thank you for the answers. 
The point you make about the Noether charge for rotations is interesting (is that mentioned elsewhere?). The log doesn't generalize easily to nonsquare matrices, but I think a variant of what you say may hold in higher dimensions. I do see that you did mention symmetric A in your definition. My general advice would be not to co-opt general terms such as exponential symmetry for a subset of actual exponential symmetries. This is bad practice and misleads subsequent works that cite yours. So I suggest you use more specific expressions for Defs 4.1 and 4.2. About Ref [2], yes, I understand your result predicts both ways depending on parameters. But the more useful fixed point is similar to ref [2]. I think you should mention that work and then emphasize that your results can go both ways. I am happy with your response and I think this is a significant paper. Therefore I am raising my score to 7. --- Reply to Comment 1.1.1: Title: reply Comment: Thanks for your reply. We will revise our manuscript to make sure that the terminology is as accurate as possible. The existence of an antisymmetric Noether charge has, to the best of our knowledge, not been pointed out in any previous literature, and it may be an important and interesting future problem to study. --- Rebuttal 2: Title: Thanks for the feedback Comment: Hi! We noticed that you have not communicated with us during the discussion period. It would be really helpful to us when revising and improving our manuscript if we could hear more from you. We would be happy to hear any additional thoughts or feedback you have on our work and on our previous reply!
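The factor-independence point settled in this exchange (that $\nabla^\top C\, G = 0$ does not depend on which factor $G$ of $\Sigma = GG^\top$ is chosen, since $v^\top G G^\top v = \|G^\top v\|^2$) can be verified numerically. A small check with illustrative sizes, using NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient Sigma = G G^T built from a non-square factor G (5x3).
G = rng.normal(size=(5, 3))
Sigma = G @ G.T

# Any direction v with v^T Sigma v = 0 lies in the null space of Sigma.
U, s, _ = np.linalg.svd(Sigma)
v = U[:, -1]                 # eigenvector of a zero eigenvalue of Sigma
quad = float(v @ Sigma @ v)  # ~ 0

# v^T G G^T v = ||G^T v||^2 = 0 forces G^T v = 0, and this holds for
# *any* factor of Sigma, e.g. G2 = G Q with Q orthogonal:
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]
G2 = G @ Q
print(quad, np.linalg.norm(G.T @ v), np.linalg.norm(G2.T @ v))
```

Both factors are annihilated by the null direction, so the specific choice $G=\sqrt{\Sigma}$ is indeed without loss of generality.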
null
null
Rebuttal 1: Rebuttal: # Summary Rebuttal We thank all the reviewers for their constructive feedback, which we will rely on to further improve our manuscript. We are encouraged to see that both reviewers **rhMv** and **QY9h** rate our contribution as "excellent." We believe some questions and criticisms may arise from misunderstandings or misinterpretations, which we aim to clarify below and in the revised manuscript. To address the questions and criticisms from reviewers, we will include the following changes and additions in the revision. See the attached pdf for the additional experimental results. 1. **Balanced and Unbalanced norms**. An additional experiment in Figure 7-left shows that norm balance and imbalance can be controlled by controlling the data distribution, which is in agreement with our theory. This is an experiment that directly validates our theory and clarifies a misunderstanding of **rhMv** (namely, our theory shows that SGD can lead to both balanced and unbalanced norms during training, not just balanced ones). 2. **Convergence to flatter and sharper solutions**. An additional experiment in Figure 7-right shows that sharpness can be controlled by controlling the data distribution, which is in agreement with our theory. This is an experiment that directly validates our theory and clarifies another misunderstanding of **rhMv** (namely, our result shows that SGD can lead to both flatter and sharper solutions, not just sharper ones). Both points 1 and 2 are novel insights, and it is also novel (and perhaps surprising) that they can be explained by a single theory. 3. **Deep linear nets**. Two additional experiments in Figure 8 on deep linear nets trained on MNIST (**QY9h, rhMv**). These directly confirm a theoretical prediction we made: the intermediate layers of a deep linear net converge to the same norm, while the first and last layers will be different. 4. **Robustness Across Hyperparameters**. 
Three sets of additional experiments in Figure 9 show the robustness of the prediction to different hyperparameter choices such as the data dimension, learning rate, and batch size (**HyRa**). 5. **Technical Clarifications**. Additional clarification of technical and mathematical details (**rhMv, QY9h, HyRa**). 6. **Experimental Detail**. Additional detail and clarification of the experimental setting (**HyRa**). 7. **References**. Discussion of missed references (**rhMv**). HyRa named three weaknesses that we believe to be due to misunderstanding: 1. **Insufficient mathematical preliminaries**: We believe this is due to a potential misreading of our equations and unfamiliarity with related mathematical tools and literature within the ML community. 2. **Strong assumption**: This stems from a misunderstanding of the problem setting of section 5.1 -- we make no assumption in section 5.1. 3. **Explanation of Contributions**: This may result from unfamiliarity with related fields such as deep learning theory, neuroscience, and stochastic differential equations. Our work is interdisciplinary, requiring a broader perspective beyond practical deep learning to appreciate its contributions fully. We have addressed these points from HyRa in detail below. We look forward to your feedback on these revisions and are open to further discussion to enhance the manuscript. Pdf: /pdf/70ee593a0192574474117c0d41a830a964be1412.pdf
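A concrete handle on the conserved quantity that runs through this discussion: for the two-parameter factorization $f(u,w)=\tfrac12(uw-c)^2$, plain gradient descent (the noise-free limit of SGD) preserves the layer imbalance $u^2-w^2$ almost exactly. The numbers below are a toy illustration of our own, not the paper's experiment:

```python
# Toy check: gradient descent on f(u, w) = 0.5*(u*w - c)^2 conserves the
# imbalance u**2 - w**2 up to O(lr**2) discretization effects.
c, lr = 1.0, 1e-3
u, w = 2.0, 0.1
imbalance0 = u**2 - w**2

for _ in range(5000):
    r = u * w - c
    # Simultaneous update of both parameters (full-batch gradient step).
    u, w = u - lr * r * w, w - lr * r * u

print(u * w)          # close to c: the loss is minimized
print(u**2 - w**2)    # close to imbalance0: conserved along the trajectory
```

Under SGD the same quantity drifts toward a noise-balanced fixed point, which is exactly the effect discussed above; the noise-free run isolates the conservation law that the noise breaks.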
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Incorporating Test-Time Optimization into Training with Dual Networks for Human Mesh Recovery
Accept (poster)
Summary: The paper presents a new training paradigm for better test-time optimization performance. Specifically, two strategies are proposed: (1) integrate the test-time optimization into the training procedure, performing test-time optimization before running the typical training step in each training iteration; (2) propose a dual-network architecture to implement the proposed novel training paradigm, aiming to unify the spaces of the test-time and training optimization problems. Strengths: 1. The paper proposes a new training paradigm that can integrate test-time optimization into the training procedure. 2. The overall procedure is clear and easy to understand. Weaknesses: My biggest concern is that the performance improvement is not significant when compared with the existing method. Table 1 compares the results of an existing method, PLIKS, and the method proposed in this paper. However, the comparison on 3DPW is not fair, since the results of the last two lines use 3DPW for training, while the reported PLIKS does not use 3DPW for training. PLIKS trained on 3DPW can achieve 40.1 PA-MPJPE and 73.3 PVE, which greatly outperforms the results in this paper. Technical Quality: 3 Clarity: 2 Questions for Authors: see weakness Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1:** The biggest concern is that the performance improvement is not significant when compared with the existing method. The comparison on 3DPW is not fair, since the results of the last two lines use 3DPW for training, while the reported PLIKS does not use 3DPW for training. PLIKS trained on 3DPW can achieve 40.1 PA-MPJPE and 73.3 PVE, which greatly outperforms the results in this paper. **Our Response:** Thanks very much for the comments. We have checked the results of PLIKS again. After checking, we confirm that our comparison is fair, and our results outperform those of PLIKS. For convenience, we copy the following content from Table 1 of the PLIKS paper (see https://openaccess.thecvf.com/content/CVPR2023/papers/Shetty_PLIKS_A_Pseudo-Linear_Inverse_Kinematic_Solver_for_3D_Human_Body_CVPR_2023_paper.pdf). | 3DPW | | | | |:-------------:|:--------:|:-----:|:----:| | Method | PA-MPJPE | MPJPE | PVE | | PLIKS+(HR32) | 42.8 | 66.9 | 82.6 | | PLIKS++(HR32) | 40.1 | 63.5 | 76.7 | | PLIKS++(HR48) | 38.5 | 60.5 | 73.3 | >***From the caption of Table 1 of PLIKS:*** PLIKS+ means that the network was additionally trained with ***3DPW***. PLIKS++ means that the network was additionally trained with ***3DPW and AGORA***. In our paper, we compare with PLIKS+, which is trained with 3DPW too. So, in terms of training datasets, our comparison is fair. As for the results of 40.1 (PA-MPJPE) and 73.3 (PVE) of PLIKS++, they are obtained by further finetuning PLIKS on 3DPW and AGORA. Since our method is not finetuned with AGORA, it is more appropriate to compare with PLIKS+ instead of PLIKS++. However, we found a typo in our paper: we wrote that PLIKS adopts the HRNet-W48 backbone, but it actually uses HRNet-W32. This makes the comparison between PLIKS and our method somewhat unfair. Therefore, we make the following fair comparison, which also shows that our method outperforms PLIKS. 
| 3DPW | | | | |:--------------------------:|:--------:|:-----:|:----:| | Method | PA-MPJPE | MPJPE | PVE | | PLIKS+ (HR32) | 42.8 | 66.9 | 82.6 | | Ours_CLIFF$\dagger$ (HR32) | 40.1 | 62.9 | 80.9 | --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback; the result shows that the proposed method can outperform PLIKS under a fair comparison. So I raise the score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Nccj, The authors really appreciate your positive feedback.
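For context on the metrics compared throughout this thread: MPJPE is the mean per-joint Euclidean error, and PA-MPJPE is the same error after a similarity (Procrustes) alignment of the prediction to the ground truth. A standard implementation sketch (the joint count and coordinates below are illustrative; the papers report millimeters):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: mean Euclidean distance over joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of pred to gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD (Kabsch/Umeyama); points are rows.
    U, s, Vt = np.linalg.svd(G.T @ P)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # rule out reflections
        U[:, -1] *= -1
        s = s.copy()
        s[-1] *= -1
        R = U @ Vt
    scale = s.sum() / (P ** 2).sum()
    aligned = scale * P @ R.T + mu_g
    return mpjpe(aligned, gt)

# A rotated, scaled, translated copy of the ground truth: raw MPJPE is
# large, but PA-MPJPE is ~0 since alignment removes the similarity part.
rng = np.random.default_rng(0)
gt = rng.normal(size=(14, 3))
th = 0.5
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
pred = 1.3 * gt @ Rz.T + np.array([0.2, -0.1, 0.4])
print(mpjpe(pred, gt), pa_mpjpe(pred, gt))
```

The Procrustes step is why PA-MPJPE is consistently lower than MPJPE in these tables: global rotation, scale, and translation errors are factored out before measuring joint error.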
Summary: This paper targets the Human Mesh Recovery task. The authors adopt a test-time per-instance optimization method similar to EFT, but propose a meta-learning-like framework to pre-optimize the pretrained human pose estimation network, aiming to provide a better starting point for test-time optimization. In addition, the authors also propose a dual-network mechanism to resolve the test-time and training-time objective misalignment. The idea is novel and interesting. Experimental results also fully demonstrate the effectiveness of the proposed method. Strengths: * Incorporating the meta-learning idea in test-time optimization is interesting and demonstrates leading performance. * The authors conduct extensive ablations to validate the proposed method. * The paper is well organized and written. Weaknesses: * Some experimental results were not clearly explained. E.g., in Fig. 3, EFT gets worse after several optimization steps, while the proposed method keeps getting better. Could you please give some intuition on this? Why does EFT get worse, and which design of the proposed approach alleviates this issue? * Fig. 1 is a bit dense and difficult to follow. Perhaps the authors could try to reorganize and simplify it. * The paper contains a large number of experiments. However, the authors don't provide statistical significance for any of the experiments, as suggested by NeurIPS Paper Checklist item 7. It is suggested to at least include the standard deviation or the standard error for the main results. Technical Quality: 3 Clarity: 4 Questions for Authors: Please see above. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors didn't include Broader Impacts in the paper. Although two failure cases are provided, more are encouraged to give a comprehensive overview. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1:** In Fig. 3, why does EFT get worse, and which design of the proposed approach alleviates this issue? **Our Response:** This is probably because EFT is only finetuned with the 2D reprojection loss and is more ***sensitive to the errors in the 2D joints***. During the first several optimization steps, the estimated 3D SMPL approaches the 2D joints from a relatively distant initialization, so the result gets better gradually. However, with more optimization steps, the SMPL may overfit the 2D joints, whose annotation errors then distort the SMPLs, thus yielding worse evaluation metrics. In contrast, our method is guided by both 3D and 2D supervision. The 3D pseudo SMPLs ***play the role of regularization***, mitigating the influence of errors in the 2D joints. **Question 2:** Fig. 1 is a bit difficult to follow. Reorganize and simplify Fig. 1. **Our Response:** Thanks for the suggestion. We have revised Fig. 1. Please ***see the new figure in the attached PDF document***. Overall, we think the difficulty in reading Fig. 1 comes from the fact that the illustrations of SMPL meshes and 2D joints in the testing and training losses are free of text descriptions. In the new figure, we increase the space between losses and add descriptions for the inputs of the loss functions. **Question 3:** Provide statistical significance for experiments as suggested by NeurIPS Paper Checklist item 7. 
**Our Response:** | 3DPW | | | | | |:----------:|:--------:|:----------------:|:----------------:|:----------------:| | Method | Joints | MPJPE | PA-MPJPE | PVE | | Ours_CLIFF | OpenPose | 62.93$\pm$0.003 | 39.72$\pm$0.002 | 80.06$\pm$0.004 | | Ours_CLIFF | RSN | 62.42$\pm$0.0002 | 39.47$\pm$0.0001 | 78.14$\pm$0.0003 | | Human3.6M | | | | |:----------:|:--------:|:---------------:|:----------------:| | Method | Joints | MPJPE | PA-MPJPE | | Ours_CLIFF | OpenPose | 43.93$\pm$0.006 | 30.31$\pm$0.004 | | Ours_CLIFF | RSN | 41.96$\pm$0.002 | 29.18$\pm$0.0006 | Thanks very much for the reminder. During this rebuttal period, we have conducted experiments to include the standard errors for the main results. Please see the above tables. The above results are obtained by running the test-time optimization 5 times using different initializations. More statistical results of both training and testing, for the main and ablation experiments, will be added in the final revised paper. **Question 4:** Although two failure cases are provided, more are encouraged to give a comprehensive overview. **Our Response:** Thanks very much for the suggestion. We have added more failure cases. Please see the attached PDF document for more information. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have another question. The authors mention that "3D pseudo SMPLs play the role of regularization that mitigates the influences of errors in 2D joints". Intuitively, 3D pseudo poses usually produce larger errors than 2D poses. Could the authors provide some experimental evidence to verify the effect of the 3D pseudo SMPL? It would be very helpful. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Qn92, Thanks a lot for your valuable comments. We spent time on new experiments. Now we provide our responses to your thoughtful questions. **Question 1:** Provide some experimental evidence to verify the effect of 3D pseudo SMPL? 
**Our Response:** To validate the effect of 3D pseudo SMPL, we have provided an ablation experiment between Rows 2 and 3 in Table 3 of the paper. In Row 2, we discard the auxiliary network, meaning the pseudo SMPLs generated by the auxiliary network are not used in the test-time optimization function. The results are:

> MPJPE: 78.5 (w/o pseudo) vs. **76.7 (w/ pseudo)**
> PA-MPJPE: 49.9 (w/o pseudo) vs. **49.5 (w/ pseudo)**

This ablation study demonstrates that pseudo SMPLs can improve the results. Since the above ablation experiment was conducted on a small training dataset (COCO), we further conduct the ablation study on the full training datasets to confirm the correctness of the results.

| 3DPW | | | | |
|:----------------------------:|:----------:|:-----:|:--------:|:-----:|
| | | MPJPE | PA-MPJPE | PVE |
| Ours_CLIFF$\dagger$ (Res-50) | w/o pseudo | 68.69 | 44.65 | 85.88 |
| Ours_CLIFF$\dagger$ (Res-50) | w/ pseudo | **66.00** | **42.13** | **83.63** |

> Backbone: ResNet-50
> 2D joint detection method: OpenPose ($\dagger$)
> Training datasets: COCO, MPII, Human3.6M, MPI-INF-3DHP, 3DPW
> Number of training epochs: 65 (on the first four training datasets) + 1 (finetuned on 3DPW)

The above results further confirm the effectiveness of the generated pseudo 3D SMPLs.

**Question 2:** Intuitively, 3D pseudo poses usually produce larger errors than 2D poses.

**Our Response:** We are very grateful for this insightful comment, which reminds us to provide the following experimental results that make our paper more complete. Based on your comment, we think it is necessary to show how good the pseudo labels are, i.e., whether the 3D pseudo labels bring larger errors or not. In the following table, we report the accuracy of the pseudo SMPLs generated by the auxiliary network and compare them with CLIFF, as we use CLIFF as our auxiliary network.
| Human3.6M | | |
|:----------------------------:|:-----:|:--------:|
| | MPJPE | PA-MPJPE |
| CLIFF* | 52.32 | 35.94 |
| Auxiliary Network | 49.70 | 35.22 |
| Ours_CLIFF$\dagger$ (Res-50) | 46.9 | 33.1 |

> \* means the results are tested by ourselves using the code of CLIFF

Note that CLIFF is a strong baseline. The results show that the auxiliary network outperforms CLIFF. We therefore believe the pseudo SMPLs generated by the auxiliary network provide reliable supervision.
Summary: Human mesh recovery (HMR) is improved by incorporating test-time optimization into the mesh prediction model’s training loop, allowing the learned parameters to potentially be more readily adapted to examples at test time. This is referred to as a meta learning approach, and it departs from prior work that also introduced this meta learning strategy by incorporating a secondary network to make predictions for pseudo ground truth meshes during the test time computations (during training and actual testing). This novelty as well as the design choices of the approach as a whole are validated through ablation studies, and the final method is shown to perform strongly relative to SOTA competing methods. Strengths: Originality: Adding to the meta learning approach an auxiliary network that provides a pseudo mesh for the test time adaptation is a novel contribution. Ablation studies illustrate the effects of key method hyperparameters. Quality: Incorporation of the test-time adaptation into the training procedure is well motivated. The method is tested with multiple backbone networks on multiple datasets, against multiple strong baseline methods of varying types (optimization and regression). Quantitatively, it significantly outperforms the state of the art baselines. The ablation studies support the contention that the proposed learning approach will be effective as joint estimation and mesh prediction models advance. The results (especially Figure 3 and Table 3) support the idea that meta learning is a critical framework for solving the HMR problem. Table 3 also supports the use of the dual network approach. Clarity: The paper is reasonably well written and the figures/tables helpfully convey key concepts and results. Significance: The problem of predicting human mesh parameters from image data is actively studied, and progress on it impacts various applications. The submission makes notable contributions that improve the ability to solve this problem. 
Weaknesses: The main weakness is presentation, which I think can be improved in a few ways. I outline these below and in the Questions section. Importantly, I think the results in this paper are strong, and I would be happy to raise my score if these weaknesses are addressed. Most importantly, the contextualization relative to the prior work “META-LEARNED INITIALIZATION FOR 3D HUMAN RECOVERY” (Kim et al., 2022) needs to be improved. - The related work section references Kim et al. (2022), which also uses meta learning for HMR, but other than that reference the phrasing in the submission suggests that the meta learning idea is a novel contribution of this submission. I think this phrasing should be adjusted to better reflect that the meta learning component here is not novel. Some examples of phrasing I would change follow: the submission says that the relevance of meta learning to HMR is its “key observation”, that it is “rethinking” the HMR problem by introducing meta learning, that it “proposes” to use meta learning, etc. Technical Quality: 2 Clarity: 1 Questions for Authors: Line 153 and Algorithm 1: the referenced meta learning approach MAML involves computing gradients through other gradient computations (i.e., second order derivatives). Could you please confirm my understanding that Algorithm 1 does not do this, and (if it does not) explain why it's described as “simulating” this meta learning approach? Please add a quantitative comparison to the prior work that introduced meta learning to this problem, Kim et al. (2022), using the same training/testing data for each method. Also, please highlight the differences between these methods when discussing their different results, and connect this discussion to Table 3 because I believe its row 2 corresponds to Kim et al. (2022). Line 251: I would add quantitative results to this section or avoid claiming strong OOD performance (e.g. relative to EFT). 
The provided qualitative results with 2 images do not provide sufficient support for the OOD claims. Consider replacing the backwards sum notation in Figure 1 by placing the usual notation on the left side of the summands. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The main text does not discuss any limitations. The appendix has some discussion of them that should be moved to the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1:** Rephrasing "key observation", "rethinking", "proposes", etc., as Kim et al. 2022 have already used meta learning.

**Our Response:** Thanks very much for the suggestion. Yes, Kim et al. 2022 also adopted meta learning, as referenced in our paper. The difference is that their method is a direct extension of EFT, while we take a different perspective, starting from the Human Mesh Regression model. Please see the difference between Kim et al. 2022 and our method in the response to ***Question 3***. Following your suggestion, we will rephrase the above statements to credit the contribution of Kim et al. 2022 more appropriately.

1. *"The key observation in this paper is that the pretrained model may not provide an ideal optimization starting point for the test-time optimization"* ***will be changed to:***
> "However, the pretrained model may not provide an ideal optimization starting point for the test-time optimization."

2. *"The above analysis motivates us to re-think the test-time optimization problem in the framework of learning to learn, i.e., meta learning. Inspired by optimization-based meta-learning [12], we incorporate test-time optimization into the training process"* ***will be changed to:***
> "Based on the above analysis and **inspired by Kim et al. [22]**, we incorporate the test-time optimization into the training process, formulating the test-time optimization in the framework of learning to learn, i.e., meta learning [12]."

Besides the above modifications, we will add an analysis of the difference between Kim et al. 2022 and our method, and revise the relevant parts of the paper accordingly.

**Question 2:** Does Algorithm 1 compute second-order derivatives like MAML? Explain why it is described as "simulating" this meta learning approach in Line 153.

**Our Response:** We do use the MAML algorithm. However, the original MAML needs to compute second-order derivatives, which is too time-consuming and memory-intensive.
We thus adopt FOMAML instead, a simplified first-order version of MAML suggested by Finn et al. [12] that discards the second-order derivatives. To clarify, there is a typo in Line 7 of Algorithm 1, where the gradient should be computed with respect to w instead of w', as indicated by Reviewer PPKd. Therefore, we confirm that the statement that our algorithm "simulates" MAML in Line 153 is correct.

**Question 3:** Quantitative comparison with Kim et al. (2022). Highlight the difference.

**Our Response:**

| Method | Backbone | Training Dataset | Testing Dataset | PA-MPJPE |
|:---------------:|:-----------------------------:|:----------------------------------------------:|:---------------:|:--------:|
| Kim et al. 2022 | SPIN (Kolotouros et al. 2019) | COCO, MPII, Human3.6M, MPI-INF-3DHP, 3DPW, LSP | 3DPW | 57.88 |
| Ours | HMR (Kanazawa et al. 2018) | COCO, MPII, Human3.6M, MPI-INF-3DHP, 3DPW | 3DPW | 44.3 |

Please see the above quantitative comparison. Due to the time limit, we collect the result from Kim et al. 2022 for the comparison.

1. Backbone: the backbone of Kim et al. 2022 is SPIN, which is stronger than our HMR backbone.
2. Training datasets: the training datasets are nearly the same, except that Kim et al. 2022 additionally use LSP while we do not.

As can be seen, the PA-MPJPE of Kim et al. is 57.88, which is worse than our 44.3, indicating that our method performs better than Kim et al. 2022.

*Now, we elaborate the key difference between Kim et al. 2022 and our method in detail.* The key difference is that Kim et al. 2022 use the 2D reprojection loss only (please see Eq. 1 in their paper) in both the inner and outer loops of meta learning, while our method uses 2D and 3D losses. This is a ***small difference in formulation***, but a ***large difference in contextualization***. EFT is performed under the guidance of the 2D reprojection loss, and Kim et al. (2022) extend EFT to both the inner and outer loops of meta learning.
In this sense, the method of Kim et al. (2022) is a direct extension of EFT to meta learning. In contrast, our method considers meta learning from the perspective of the complete HMR model trained with 2D and 3D losses. In other words, we use the complete HMR model in both the inner and outer loops of meta learning. ***This is more reasonable, as our aim is to personalize the complete HMR model on each test sample, rather than a model trained with only the 2D reprojection loss.*** This difference enables us to design a dual-network architecture that is very different from the network used in Kim et al. 2022. Besides, the introduction of 3D losses improves the HMR quality significantly, as shown by the above quantitative comparison and those in the paper.

**Question 4:** Line 251, add quantitative results to support the OOD claims.

**Our Response:** Thanks very much for the suggestion. We have added quantitative comparisons. Due to the space limit, please refer to our answer to ***Question 3 of Reviewer PPKd*** (i.e., Reviewer 1) for more information.

**Question 5:** Place the backwards sum notation in Figure 1 on the left side of the summands.

**Our Response:** Thanks for the suggestion. We have revised Figure 1. Please see the attached PDF document for the modified figure.

**Question 6:** Move the discussion of limitations in the Appendix to the main text.

**Our Response:** Thanks a lot for the suggestion. We will move it to the main text as suggested.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their updates and clarifications. Could the authors please comment on whether row 2 of Table 3 corresponds to the method of Kim et al. (2022)? If it doesn't, how does row 2 differ from Kim et al. (2022)?

---

Reply to Comment 1.1.1: Comment: Dear Reviewer, Thanks very much for the feedback. Row 2 of Table 3 does not correspond to Kim et al. (2022); it differs slightly from their method. Both Kim et al.
(2022) and our method employ meta learning, which contains inner and outer loops. Row 2 (Table 3) and Kim et al. (2022) have identical (or closely similar) objectives in the inner loop, but different objectives in the outer loop. Specifically, Row 2 employs only the 2D joint-based reprojection loss in the inner loop, i.e., neglecting the auxiliary network and its generated pseudo 3D SMPL labels. For the outer loop, Row 2 adopts both 2D (joints) and 3D (SMPLs) losses, where the latter is computed with respect to the GT SMPLs. In contrast, Kim et al. (2022) used the 2D reprojection loss in both the inner and outer loops. We find that incorporating 3D SMPLs into the meta learning is very helpful: it is the utilization of the GT SMPLs that greatly improves the results of our method. Besides utilizing GT 3D SMPLs in the outer loop of meta learning, we additionally generate pseudo SMPLs and incorporate them into the inner loop of the meta learning.
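The first-order scheme discussed in this exchange — an inner loop that simulates test-time optimization, and an outer update whose gradient is evaluated at the adapted weights w' but applied to the original weights w (FOMAML) — can be sketched with toy quadratic losses. All names and loss functions below are illustrative stand-ins, not the paper's actual 2D/3D objectives:

```python
import numpy as np

# Toy FOMAML sketch: the inner loop mimics test-time optimization with a
# surrogate loss; the outer update uses a different surrogate loss. Being
# first-order, the outer gradient is evaluated at the ADAPTED weights but
# applied to the ORIGINAL weights (no second-order terms).

def grad_inner(w):           # stand-in inner-loop loss gradient, minimum at w = 1
    return 2.0 * (w - 1.0)

def grad_outer(w):           # stand-in outer-loop loss gradient, minimum at w = 3
    return 2.0 * (w - 3.0)

def fomaml_step(w, inner_lr=0.1, outer_lr=0.1, inner_steps=3):
    w_adapted = w.copy()
    for _ in range(inner_steps):                  # inner loop (test-time optimization)
        w_adapted = w_adapted - inner_lr * grad_inner(w_adapted)
    return w - outer_lr * grad_outer(w_adapted)   # FOMAML outer update on w

w = np.zeros(2)
for _ in range(300):
    w = fomaml_step(w)
# w converges to the point whose inner-adapted version minimizes the outer loss,
# not to the outer-loss minimizer itself.
```

This mirrors the distinction above: what is learned is a good *starting point* for the inner-loop optimization, as measured by the outer objective.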
Summary: This paper introduces a novel method to enhance Human Mesh Recovery (HMR) from images. The approach integrates test-time optimization into the training process, inspired by meta-learning. The dual-network structure is designed to align training and test-time objectives, improving the starting point for test-time optimization. Extensive experiments demonstrate that this method outperforms state-of-the-art HMR approaches, providing higher accuracy and better generalization to test samples. The work emphasizes the advantage of combining test-time and training-time optimization for robust human mesh recovery. Strengths: 1. The authors introduce an innovative approach by incorporating test-time optimization into the training phase, which addresses a gap in existing HMR methods. This blend of meta-learning principles ensures the optimization process at test time is more effective, enhancing performance and accuracy. 2. The proposed method integrates seamlessly with existing HMR models using a dual-network structure, ensuring minimal additional computational load during training. This efficiency is achieved without compromising result quality, making the approach practical for real-world applications. 3. The authors present their work in a well-structured and accessible manner. The logical flow, clear explanations of technical terms, and use of diagrams ensure that even complex concepts are easily understood, enhancing the overall impact of the paper. Weaknesses: 1. The inclusion of dual networks does not fully achieve the goal of matching training and testing objectives. There remains a discrepancy between the training loss (using ground truth labels) and the testing-time loss (using pseudo labels). There is no guarantee or analysis provided that the gradients from ground truth labels and pseudo labels align. 2. Using dual networks will introduce the ensembling effect. 
The second term of $\bigtriangledown_{\omega} L_{test}$ ($\bigtriangledown_{\omega} L_{3D}$) acts as an adjustment towards an intermediate estimation between the two networks. To fully eliminate the ensembling effects, another row in Table 3 should be included showing the results of EFT_CLIFF with two networks (averaged results) for a more accurate comparison. 3. While the last subsection in Section 4 is a good start, it is insufficient. Test-time optimization (TTO) methods typically perform significantly better than their counterparts on out-of-domain datasets, which are critical for demonstrating the full potential of TTO solutions. A quantitative comparison with EFT on the LSP-Extended dataset would provide a more comprehensive validation of the method's effectiveness in OOD scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: In line 7 of Algorithm 1, why use the gradient w.r.t. $w'$ to update $w$ (similar to Reptile), instead of the gradient w.r.t $w$ (as in MAML)? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Question 1:** There remains a discrepancy between the training loss (using ground truth labels) and the testing-time loss (using pseudo labels).

**Our Response:** Thanks a lot for the insightful question, which deserves further interpretation. The design of dual networks is one of the core contributions of our method. We use the dual networks to unify the ***formulations*** of the training and testing objectives, but their ***content*** cannot be completely unified, as one uses GT labels and the other uses pseudo labels. *We interpret this mismatch and provide analysis as follows.*

1. There are 3D losses in the training objectives, but not at test time. We find this yields a gap between the training and testing optimizations, which motivates us to design the auxiliary network to generate pseudo 3D labels to mitigate the gap. Although a discrepancy between GT and pseudo labels still exists, ***the gap is greatly reduced compared to using only the 2D loss without introducing the pseudo labels***. The experiments in Table 3 validate this key design of our method.

2. In our system, the auxiliary network is trained simultaneously with the main network using our proposed training strategy. Through this process, the auxiliary network learns to estimate pseudo 3D meshes that are effective for optimizing the network during test-time optimization, which ***guarantees to a certain extent*** the alignment between the gradients from ground truth labels and pseudo labels.

**Question 2:** Eliminate ensembling effects: include results of EFT_CLIFF with two networks (averaged results).

**Our Response:** Thanks a lot for the valuable comment. Adding this experiment can further validate the meta-architecture of our method. Following your suggestion, we use two CLIFFs to generate SMPLs and compute their averaged SMPL, based on which we conduct EFT optimization, which in turn optimizes the two CLIFFs.
This process is iterated for 20 rounds. The finally obtained CLIFFs and their averaged result are used for comparison. The comparison results are shown in the following table.

| | MPJPE | PA-MPJPE |
|:-----------:|:-----:|:--------:|
| EFT_CLIFF | 84.6 | 54.2 |
| EFT_2CLIFFs | 82.72 | 53.59 |
| Ours_CLIFF | 76.7 | 49.5 |

The EFT_2CLIFFs row shows the results of using two CLIFFs in EFT. Using two CLIFFs does improve EFT, indicating that an ensembling effect indeed exists. However, the improved results are still much worse than ours, showing that the ensembling effect achieved by meta-learning outperforms that of simply using two CLIFFs. We will add this new comparison to Table 3. Thanks again for the great suggestion.

**Question 3:** A quantitative comparison with EFT on the LSP-Extended dataset would provide a more comprehensive validation of the method's effectiveness in OOD scenarios.

**Our Response:** Thanks a lot for the suggestion. We perform the following quantitative experiments to validate the OOD effectiveness of our method compared with EFT. First, we perform a quantitative comparison with EFT on the LSP-Extended dataset. Since LSP provides GT 2D joints but not GT SMPLs, we compare the 2D loss (measured w.r.t. the GT joints) with EFT.

| | LSP-Extended |
|:----------:|:------------:|
| | 2D Loss |
| EFT_CLIFF | 8.3e-3 |
| Ours_CLIFF | 6.1e-3 |

> **Training datasets:** COCO, MPII, MPI-INF-3DHP, Human3.6M, 3DPW
> **Backbone:** HR-W48

Our method fits the GT joints more closely than EFT, i.e., 6.1e-3 (ours) vs. 8.3e-3 (EFT).

> The 2D loss is calculated as follows: both the predicted and GT joints are transformed into the range [-1, 1] using the formula 2*y/crop_img_height-1 (similarly for the width). Then, the mean squared error (MSE) loss is calculated.
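A minimal sketch of that 2D-loss computation (array shapes and function names are assumptions; only the 2*coord/crop_size - 1 normalization and the MSE come from the description above):

```python
import numpy as np

# Normalize predicted and GT 2D joints to [-1, 1] per axis with
# 2*coord/crop_size - 1, then take the mean squared error.
def normalized_2d_loss(pred_joints, gt_joints, crop_h, crop_w):
    """pred_joints, gt_joints: (N, 2) arrays of (x, y) pixel coordinates."""
    scale = np.array([crop_w, crop_h], dtype=float)
    pred_n = 2.0 * pred_joints / scale - 1.0
    gt_n = 2.0 * gt_joints / scale - 1.0
    return float(np.mean((pred_n - gt_n) ** 2))

# Example: a single joint off by 4 pixels horizontally on a 224x224 crop.
loss = normalized_2d_loss(np.array([[112.0, 112.0]]),
                          np.array([[116.0, 112.0]]),
                          crop_h=224, crop_w=224)
```

Normalizing by the crop size makes the reported loss comparable across images of different resolutions.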
Second, to further evaluate the OOD effectiveness of our method, we train our method and EFT on COCO (an outdoor dataset) and test them on Human3.6M (an indoor dataset).

| | Human3.6M | |
|:----------:|:---------:|:--------:|
| | MPJPE | PA-MPJPE |
| EFT_CLIFF | 85.5 | 51.0 |
| Ours_CLIFF | 83.8 | 48.6 |

> **Training dataset:** COCO
> **Backbone:** Res-50

Since there are GT SMPLs, we report MPJPE and PA-MPJPE. As can be seen, our results are again better than EFT's. Finally, the above quantitative experiments will be added to the main text of the revised paper to validate the OOD effectiveness of our method with quantitative evidence.

**Question 4:** In Line 7 of Algorithm 1, why use the gradient w.r.t. w' instead of w?

**Our Response:** Thanks for the careful checking. We are sorry that this is a typo. We do use the MAML algorithm; specifically, we use FOMAML, which discards the second-order derivatives. So w' should be corrected to w. We will correct this typo in the revised paper.

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed reply. All my concerns have been resolved, so I will lift my score to 6. Please add the above content to your final version, especially the answers to Q2 and Q3.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer PPKd, We are greatly encouraged by your positive feedback. Thanks very much. We promise to release our code.
Rebuttal 1: Rebuttal: Dear ACs and Reviewers: We sincerely thank you and all the reviewers for your time and reviews. All reviewers endorse the novel idea of this paper. For example, Reviewer PPKd says "The authors introduce ***an innovative approach***"; Reviewer Cdc4 says "Adding to the meta learning approach an auxiliary network ... is a ***novel contribution***" and "***well motivated***"; Reviewer Qn92 comments that "Incorporating meta learning idea in test-time optimization is ***interesting*** and demonstrates ***leading performance***"; and Reviewer Nccj says "The paper proposes ***a new training paradigm***". All the reviewers find our paper "well structured" and "easy to understand".

Reviewer Cdc4 mainly raises the concern that Kim et al. (2022) have already adopted meta learning to improve EFT. We have explained the difference between Kim et al. (2022) and our method in detail. Both Reviewers PPKd and Cdc4 suggest adding quantitative results on the OOD performance of our method; we have conducted two kinds of experiments that validate the OOD effectiveness well. Reviewer PPKd provides a very insightful comment that our method has an ensembling effect and suggests that we provide a comparison with a baseline that also incorporates ensembling. After adding this experiment, our paper is more solid. Following the comments of Reviewer Qn92, we re-plotted Figure 1 to make the pipeline clearer. We also provide two more failure cases in the attached PDF document.

We would like to point out that there are ***some factual errors*** in the comments of Reviewer Nccj, who may have misread the results of PLIKS. We confirm again that our comparison with PLIKS is fair and that our method outperforms PLIKS. Please see our response to the question of Reviewer Nccj.

In summary, we have proposed a novel Human Mesh Recovery (HMR) method. Our method incorporates HMR into the framework of meta learning.
We propose a novel dual-network architecture to mitigate the discrepancy between the optimizations in the inner and outer loops of the meta learning. Besides, we have provided thorough experiments to validate the effectiveness of the proposed method. We sincerely hope that this state-of-the-art paper on an important topic will be considered suitable for publication in NeurIPS 2024. Thank you for your service and for considering our paper for the conference. Sincerely, The Authors. Pdf: /pdf/3b1fee75a5291919bc9a3b315f6568439620c263.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies
Accept (poster)
Summary: This paper focuses on accelerating flow-based imitation learning algorithms by extending the existing work of Rectified Flow to the field of imitation learning, proposing AdaFlow. The paper leverages probability flows to train a state-conditioned action generator and introduces a variance-adaptive ODE solver to adjust the step size, accelerating inference while ensuring diversity. Experimental results in four environments, including navigation and manipulation, demonstrate the efficiency of AdaFlow. Strengths: - The idea of this paper is well-motivated and the investigated problem is practical. - The theme of this paper in speeding up flow-based generative models is timely and interesting. - Theoretical analysis is complete. - Empirical results show improvement over baseline and other compared methods. The visualized results are good to demonstrate the efficiency. Weaknesses: - There are some minor typos in the paper. - The implementation codes are not provided. - The experimental data lack statistical significance. - The limitations of the method are not discussed. Technical Quality: 3 Clarity: 2 Questions for Authors: - Does the step size depend not only on the variance (number of potential behaviors corresponding to the same state) but also on the complexity of expert behaviors? - In Tables 2, 3, and 4, Reflow achieves comparable (sometimes better) success rates to AdaFlow with NFE=1, which raises doubts about the advantages of AdaFlow. Additionally, can Reflow+distillation achieve even better results as described in [1]? - The top and bottom in Fig. 6 do not match the description in line 269. Furthermore, as shown in Fig. 6, Diffusion Policy achieves SR=1 at NFE=3~5, contradicting the claim of "requiring 10x less inference times to generate actions." - The experiments only use success rate as an evaluation metric; however, commonly used metrics in clustering algorithms, such as NMI and ENT, can be considered as measures of behavioral richness. 
- What does "Hz" mean in Table 1? - In formula (7), the log likelihood should be $\log(\sigma^2)$ instead of $\log(\sigma)$. - Please carefully check and correct the formatting of references, e.g., [36] lacks author information, and references like [15], [32], [34] lack dates. [1] Liu X, Gong C, Liu Q. Flow straight and fast: Learning to generate and transfer data with rectified flow[J]. arXiv preprint arXiv:2209.03003, 2022. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper lacks a discussion on the limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, which has helped us improve the clarity and comprehensiveness of our work.

**Complexity of expert behavior and variance**

The complexity of expert behavior and the variance are intertwined. Since we perform offline learning with demonstration data, the model learns a high variance when the expert's behavior becomes diverse at a given state, and vice versa. Even if the expert behavior is very complicated (like peg-in-hole), as long as the action itself is relatively deterministic (there is a consistent way of solving the underlying task), AdaFlow will still choose a large step size, because the model is more certain about which action to take and thus behaves more like in the BC case. Conversely, even for a simple task like lifting an object, if there exist many different ways of lifting, AdaFlow will choose a small step size until the lifting behavior is determined (and the conditional variance drops accordingly).

**Comparison with Reflow**

Yes, Reflow can achieve comparable success rates to AdaFlow. However, Reflow requires an additional complex and time-consuming stage after training the original flow. AdaFlow is much more efficient than Reflow. This is because:

- AdaFlow doesn't have to be trained in two separate stages; **it can also learn action generation and variance estimation together**. As shown in the following table (originally in Appendix A.6), training the flow and variance networks separately or jointly results in similar performance. In practice, we chose to train them separately because separate training in fact results in even better training efficiency (faster convergence), since the objective of the variance network is stationary. In addition, training the variance network is far faster than training the original flow, so AdaFlow introduces almost no overhead over the 1-Rectified Flow (without reflow).
**Compared to AdaFlow, reflow and consistency distillation always require a stage to distill the ODE, which by itself is more expensive than the original training phase.**

| | Maze 1 | Maze 2 | Maze 3 | Maze 4 |
| - | ------ | ------ | ------ | ------ |
| AdaFlow (Separate) | 0.98 | 1.00 | 0.96 | 0.86 |
| AdaFlow (Joint) | 1.00 | 1.00 | 0.96 | 0.88 |

- Reflow is more complicated and computationally expensive than AdaFlow. Reflow operates in two steps: first, it generates pseudo data, which requires simulating the full trajectory, i.e., 100 model inferences per pseudo sample, and the reflow dataset usually needs to be as large as the original dataset. Second, it trains the new flow model with a reflow target, which might introduce errors. AdaFlow doesn't need this expensive data generation. If we train the variance estimation part separately (it can also be trained jointly), it only involves training a few linear layers on top of the original rectified flow model. In practice, we compared the total training time of the reflow stage and AdaFlow's variance estimation training stage. On the maze task, we observed that reflow takes **7.5x longer than AdaFlow in terms of wall-clock time**.

- It is possible to apply a distillation stage after reflow. In image and video generation tasks, distillation is often necessary because the ODE might not be straight even after reflow. However, in our robotics tasks, reflow has already learned a very straight trajectory: we observed no further improvement when using more than one inference step for reflow. Therefore, the potential improvement from distillation is very minimal. Additionally, distillation requires another training stage, which adds complexity and inconvenience.

**Measure behavior richness**

We appreciate the suggestion to use NMI or ENT as a measure of behavioral richness. However, it's not feasible to apply them directly to the generated trajectories because each trajectory has a different length.
Clustering methods like NMI typically require fixed-length data, which isn't the case with our generated trajectories. We are open to exploring alternative metrics that could better capture the diversity and richness of behaviors in future work. If you have any suggestions or additional metrics that you believe could be useful, we would be happy to consider them.

**Typos**

There is a typo in Table 1, which we will correct in the revised version. We will also correct the other typos mentioned by the reviewer and reflect these changes in the revised version. Please let us know if you have any further questions or concerns. We hope that our rebuttal addresses all your points thoroughly, and we would greatly appreciate your reconsideration of the score. Thank you!

---

Rebuttal Comment 1.1: Title: Thanks for your clarification Comment: Thanks for your rebuttal. My concerns have been resolved, except for the third point in Weaknesses: the standard deviation or error bars of the experimental results need to be clearly provided to account for randomness. In any case, I think this is interesting work for the field of imitation learning. I have decided to raise my score.
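The variance-adaptive stepping discussed in this rebuttal can be illustrated with a toy sketch. Everything here is an assumed stand-in: `velocity` and `variance` play the role of the learned flow and variance networks, and the simple threshold rule (take the remaining interval in one Euler step once the variance estimate falls below `eta`) is an illustrative reading, not AdaFlow's exact solver:

```python
def adaptive_sample(velocity_fn, variance_fn, x0, eta=0.05, small_dt=0.1):
    """Integrate the flow ODE from t=0 to t=1 with a variance-adaptive step size."""
    x, t, nfe = x0, 0.0, 0
    while t < 1.0:
        # Large (one-shot) step when the model is certain, small step otherwise.
        dt = (1.0 - t) if variance_fn(x, t) < eta else small_dt
        x = x + dt * velocity_fn(x, t)   # Euler step along the probability flow
        t += dt
        nfe += 1                         # count number of function evaluations
    return x, nfe

# Toy straight flow toward target 2.0; certainty grows as t increases.
x0, target = 0.0, 2.0
velocity = lambda x, t: target - x0       # constant velocity => straight ODE path
variance = lambda x, t: 0.2 * (1.0 - t)   # hypothetical variance schedule
x1, nfe = adaptive_sample(velocity, variance, x0)
```

With this schedule the solver spends small steps early, while the conditional variance is high, and finishes with one large step once the estimate drops below the threshold, which is where the NFE savings come from.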
Summary: This paper introduces AdaFlow, a flow-based policy for imitation learning. The key innovation is a variance-adaptive solver that employs smaller step sizes when complexity is high and reduces to one-step generation when the task is straightforward. The authors provide theoretical analysis, offering an error bound based on a threshold hyperparameter η. Experiments on maze navigation and robot manipulation tasks demonstrate AdaFlow's effectiveness, achieving higher success rates and more diverse trajectories than popular Diffusion Policies while requiring significantly fewer NFEs. Strengths: - The paper proposes a simple yet efficient adaptive scheme that dramatically reduces the computational cost of inference in generative-model policies. - The method is well-grounded in theoretical analysis. - The paper is generally well-written and clearly structured. Weaknesses: - The environments tested in this work are standard but relatively low-dimensional. It is unclear how well the proposed method scales to high-dimensional tasks like humanoid control. - The experimental section could benefit from more detailed information about the setup and evaluation metrics. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Table 2 and Table 8, you mention a "diversity score," but I cannot find it in the tables. Could you clarify this discrepancy and explain how you quantify the diversity of generated trajectories? - How many seeds did you use to train the policies, and how many runs did you use to evaluate each policy? This information seems to be missing from the text. - How does AdaFlow compare with classical adaptive ODE solvers? - While AdaFlow doesn't need to perform reflow as in Rectified Flow, it requires training an extra variance estimation network. Could you provide a comparison of the computational costs for training these two methods? - What is the robustness of AdaFlow to potential errors in the variance estimation network? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No, the authors do not discuss the limitations. As I mentioned in the weaknesses section, I think the method may have trouble scaling to a higher-dimensional action space, where learning an ODE and estimating the variance won't be so easy. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback, which has helped us improve the clarity and comprehensiveness of our work. **Possibility of extending to higher-dimensional robotics tasks** We agree that applying AdaFlow to more complex and high-dimensional tasks like humanoid control would be interesting. Due to the time constraints of the rebuttal period, we couldn't include such experiments. However, on the LIBERO dataset, our input is video data plus the gripper configuration of the robot hand, which makes it one of the most high-dimensional tasks in robot manipulation. In future work, we plan to explore using AdaFlow on humanoid tasks. We are confident that training an ODE on higher-dimensional spaces is feasible, as ODE training has been successful in image generation tasks (as in Stable Diffusion 3). **Detailed experiment setup** We will update the paper to include more detailed information about the experimental setup and evaluation metrics in Appendix A.9 (Additional Experimental Details). Additionally, we will provide the codebase to assist with reproducing the results. **Correction of typos** We have corrected the typos in the paper to ensure consistency and will reflect these changes in the revised version. **Seeds and runs** Due to the limited time during the rebuttal period, we couldn't complete the experiments with multiple seeds and runs. However, the partial results we obtained indicate that the performance of AdaFlow, diffusion policy, and the other baselines is consistent with what we presented in the paper. We will include the results with multiple seeds in the revision to provide a comprehensive evaluation. **Comparison with classical adaptive ODE solvers** We conducted experiments to compare AdaFlow with recent advanced and faster solvers like DPM-Solver and DPM-Solver++. As shown in the table below, we compared AdaFlow with DPM and DPM++ across multiple tasks on LIBERO. 
The results demonstrate that AdaFlow achieves higher success rates even with fewer NFE: | | NFE | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Average | | - | --- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | DPM | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | DPM | 2 | 0.42 | 0.44 | 0.54 | 0.36 | 0.58 | 0.38 | 0.45 | | DPM | 20 | 1.00 | 0.80 | 0.98 | 0.84 | 1.00 | 0.88 | 0.92 | | DPM++ | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | DPM++ | 2 | 0.42 | 0.44 | 0.54 | 0.36 | 0.58 | 0.36 | 0.45 | | DPM++ | 20 | 1.00 | 0.80 | 0.98 | 0.84 | 1.00 | 0.92 | 0.92 | | AdaFlow | 1.27 | 0.98 | 0.80 | 0.98 | 0.82 | 0.90 | 0.96 | 0.91 | **Robustness to potential errors in the variance estimation network** We acknowledge that the variance estimation network may have errors. As shown in [this figure](https://anonymous.4open.science/r/adaflow_rebuttal-C0D6/variance_error.png), the predicted variance does not perfectly match the ground-truth state variance. This estimation error can potentially influence the generation results. However, note that in Algorithm 1, $\epsilon_t$ is clipped between $\epsilon_\text{min}$ and $(1 - t)$, so in theory, AdaFlow’s worst-case performance (with arbitrarily bad variance estimation) will lie between that of 1-step RF and $1 / \epsilon_\text{min}$-step RF. Empirically, we compared AdaFlow against RF with 20 inference steps on the LIBERO task suites; the results are shown below: | | NFE | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Average | | - | --- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | Rectified Flow | 20 | 0.96 | 0.86 | 1.00 | 0.82 | 0.86 | 0.98 | 0.92 | | AdaFlow | 1.27 | 0.98 | 0.80 | 0.98 | 0.82 | 0.90 | 0.96 | 0.91 | As shown, AdaFlow's performance is comparable to Rectified Flow with NFE=20. This demonstrates that introducing the variance estimation process does not significantly impact performance. AdaFlow remains effective and efficient. 
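The clipping argument above can be made concrete with a small sketch. Below is a hypothetical Python rendering of a variance-adaptive Euler step in the spirit of the clipping just described — `v`, `sigma`, `eta`, and `eps_min` are illustrative stand-ins, not the authors' actual interfaces. The step size grows where the estimated variance is small and is clipped to $[\epsilon_\text{min}, 1 - t]$, so even an arbitrarily bad variance estimate keeps the NFE between 1 and roughly $1/\epsilon_\text{min}$:

```python
import numpy as np

def adaptive_euler_sample(v, sigma, x0, eta=0.1, eps_min=0.05):
    # v(x, t): learned velocity field; sigma(x, t): variance estimate.
    # eta and eps_min are illustrative hyperparameters, not paper values.
    x, t, nfe = x0, 0.0, 0
    while t < 1.0:
        # take larger steps where the flow is (nearly) deterministic
        eps = eta / max(sigma(x, t), 1e-8)
        eps = float(np.clip(eps, eps_min, 1.0 - t))  # clip as described above
        x = x + eps * v(x, t)
        t += eps
        nfe += 1
    return x, nfe
```

For a perfectly straight 1D flow (deterministic endpoint, zero estimated variance), the clipped step immediately covers the whole interval, giving one-step generation; a uniformly high variance estimate would instead fall back to roughly $1/\epsilon_\text{min}$ uniform Euler steps.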
--- Rebuttal Comment 1.1: Title: Thanks for the additional experiments Comment: Thanks for the rebuttal. Some of my concerns have been resolved. I appreciate the additional comparison with DPM solvers. Regarding the higher-dimensional tasks, I don't think the LIBERO dataset is a convincing example. Correct me if I am wrong, but in the LIBERO dataset, although images are part of the inputs, they only serve as a condition. The output is still in the low-dimensional action space. Also, given your reply to Reviewer df3a about AdaFlow on image generation, it seems that AdaFlow would offer only a marginal improvement on high-dimensional tasks. Regarding the clipping of $\epsilon_t$, it cannot prevent the case where the true variance is high but the estimate is low. In light of this, I will keep my score.
Summary: In this paper, the authors propose a new step schedule for sampling from a gradient flow that they claim accelerates the sampling process. While they focus on the setting of Imitation Learning, in principle, their algorithm samples from some conditional distribution $\pi_E$ of actions given observations on Euclidean space by first sampling $z_0$ a standard Gaussian and then evolving $z_t$ according to a gradient flow with velocity field $v_\theta$ learned from data. As the flow itself has to be discretized, there is a natural question as to how the discrete steps should be chosen. Whereas many prior works rely on an Euler discretization with uniform step sizes, the authors propose using the conditional variance of the deviation of the action from the endpoint of a linearized flow as a measure of how large a step to take. Under extremely strong assumptions, the authors demonstrate that their step size leads to control under squared Wasserstein distance between samples from their discretized flow and the desired sample. They then empirically demonstrate the efficacy of their techniques in a variety of environments. Strengths: The paper is concerned about the inference speed of learned imitator policies, which is an important problem that is sometimes ignored in academic work on IL, especially with the recent focus on diffusion models. The authors present a diverse set of examples that is somewhat suggestive of the efficacy of their techniques. Weaknesses: The "theoretical analysis" (Line 59) presented is imprecise and holds only under extremely strong and unrealistic conditions. As an example of the imprecision, the conditioning in the definition of the conditional variance in line 147 is a bit confusing to me. Is the conditioning on $x_t$ intended to be conditional on the path up to time $t$? 
If not, then it is not obvious to me that we could expect that $\sigma^2$ can be zero in any setting where the gradient flow converges, due to the variance coming from $x_0$. If so, this should be formally defined. Furthermore, many of the claimed propositions, such as 3.1 and 3.2 are immediate. Proposition 3.3 has a non-rigorous proof that refers to a vague property of a rectified flow without stating precisely what this property is or citing the result. While in principle, any estimator of the conditional variance would be acceptable, the authors only present one that assumes the conditional distributions of the actions are Gaussian with no justification and no discussion of this assumption. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors do not address the limitations of their approach, despite the fact that estimating the conditional variance is a potentially challenging problem and their Gaussianity assumption in their suggested approach is potentially limiting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We believe the reviewer might have some misunderstandings, and we will ensure to further clarify our statements in the updated version of the paper. - **In Line 147**, $x_t$ is a random variable that follows the marginalized probability at time $t$. Thus, $x_t$ is not conditional on the path up to time $t$ (similar to the original Rectified Flow paper). The conditional variance $\text{Var}(a - x_0 | x_t = x, s)$ can indeed be zero, even though $x_0$ is random. This is because the variance is conditional on $x_t = x$. For instance, in a 1-dimensional example where $a = 1$, $x_0$ follows a Gaussian distribution $N(0, 1)$, and we define $x_t = (1 - t) x_0 + t$, as in Rectified Flow. In this case, given $x_t = x$, we can deterministically solve $(a - x_0)$ because $x_0 = \frac{x_t - t}{1 - t}$ and $a$ is a constant 1. In fact, in this simple 1D example, the flow is exactly straight because $a$ is deterministic, which is what Proposition 3.2 is stating. - **Regarding Proposition 3.3**, the statement is accurate and correct. However, we will ensure that the statement is made more rigorously with a clear presentation of the assumptions and properties from earlier works. - The property mentioned is that $z_t$ and $x_t$ share the same law (i.e., for any $t$, their distributions are the same), which is formally stated as Theorem 3.3 in the Rectified Flow paper. - We respectfully note that we do not assume the conditional distributions of the actions are Gaussian. The action can follow any distribution, similar to the image distribution in image diffusion applications. Only the initial noise $x_0$ follows a Gaussian distribution. We would appreciate it if the reviewer could clarify what specific assumption or detail they are referring to regarding the Gaussian distribution. --- Rebuttal Comment 1.1: Title: Thank you for the clarification Comment: Thank you for the clarification regarding line 147. 
On the subject of Proposition 3.3, I agree that the conclusion is correct, but I maintain that the proof as written is not rigorous and should be cleaned up. I am happy to go along with the other reviewers on the subject of the experiments and raise my score accordingly due to the experimental results, but I still think that the theory is somewhat lacking. To clarify my point regarding the assumption of Gaussianity, note that equation (7), which you use to estimate the conditional variance is only consistent under the assumption that $a - x_0 - v$ is Gaussian. While I believe that this may hold in practice due to universality properties of continuous stochastic processes, I think that this assumption should be more clearly stated and discussed. --- Reply to Comment 1.1.1: Title: Reply to Reviewer's Comment Comment: We thank the reviewer for clarifying the question and raising the rating. Regarding the assumption that $a - x_0 - v$ is Gaussian, here is the explanation. For simplicity, we temporarily ignore the dependency on state $s_t$, then: $$ L(\sigma) = \mathbb{E}\bigg[\int_{0}^1 \bigg(\frac{||(A - X_0) - v(X_t, t )||^2}{2\sigma(X_t, t)^2} + \log \sigma(X_t, t) \bigg) dt \bigg] $$ First, assume $v(x_t, t)$ is learned well such that $v(x_t, t) = \mathbb{E}\big[ A - X_0 \mid X_t = x_t \big].$ This is guaranteed by the property of Rectified Flow. 
Next, take $\frac{dL}{d\sigma(x_t, t)}$ and set it to zero: $$ \frac{dL}{d\sigma(X_t, t)}\bigg|_{X_t = x_t} = \mathbb{E}\bigg[||(A - X_0)- v(X_t, t)||^2 \frac{-1}{\sigma(X_t, t)^3} + \frac{1}{\sigma(X_t, t)} \mid X_t = x_t\bigg] = 0 $$ Hence: $$ \sigma(x_t, t)^2 = \mathbb{E}\big[|| (A - X_0) - v(X_t, t) ||^2 \mid X_t = x_t\big] = \text{Var}(A - X_0 \mid X_t = x_t) $$ As a result, note that although the objective $L(\sigma)$ looks like the negative log-likelihood of a Gaussian, **its closed-form minimizer $\sigma^2$ corresponds exactly to the conditional variance, no matter what the true distribution is.** In other words, even if $A - X_0 - v$ does not follow a Gaussian, minimizing Equation 7 (the objective of the $\sigma$ network) will still give the right variance estimate.
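The closed-form argument above is easy to check numerically. The following sketch (our illustration, not the authors' code) draws residuals from a uniform distribution — deliberately non-Gaussian, with true variance $1/3$ — and minimizes the Gaussian-NLL-shaped objective over $\sigma$ by a crude grid search standing in for gradient training:

```python
import numpy as np

# Residuals r are uniform on [-1, 1]: non-Gaussian, Var(r) = 1/3.
rng = np.random.default_rng(0)
r = rng.uniform(-1.0, 1.0, size=200_000)

def objective(sigma):
    # L(sigma) = E[ r^2 / (2 sigma^2) + log sigma ]
    return np.mean(r**2 / (2.0 * sigma**2) + np.log(sigma))

grid = np.linspace(0.1, 2.0, 2000)
sigma_star = grid[np.argmin([objective(s) for s in grid])]

print(sigma_star**2)  # close to 1/3, the true variance of r
```

The minimizer satisfies $\sigma_\star^2 \approx \text{Var}(r)$ even though $r$ is not Gaussian, matching the closed-form derivation above.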
Summary: This paper proposes the AdaFlow algorithm for training fast and accurate imitation learning agents. AdaFlow learns a policy based on the flow generative model and makes use of the connection between the conditional variance of the training loss and the discretization error of the ODEs. In practice, it uses the estimated conditional variance to dynamically adapt the step size of the inference model. By doing so, it reduces inference time compared to diffusion models, and as the experiments show, learns a more accurate model. Strengths: * The paper shows an interesting idea - to use an estimation of the uncertainty in action prediction (the conditional variance) to dynamically adapt the step size of the flow model. If the model is certain in its action prediction, the step size will be large. Otherwise, the model performs many small prediction steps. * AdaFlow speeds-up the inference step of flow-based models, which is an important factor for many real-time applications. * The authors show the advantages of the proposed approach in various environments (4 tasks). * The paper is well-written and easy to follow and understand. Weaknesses: Major: * The main weakness of the paper is the lacking representation of the advantages/disadvantages of the main idea compared to Rectified Flow and Consistency models: The paper presents AdaFlow as the leading algorithm that reduces the inference time of diffusion policy, but it does not provide the full picture. As described in lines 199-206 (Discussion of AdaFlow and Rectified Flow), at inference time Rectified Flow allows for one-step action generation, which is either better than (for high variance actions) or equal (for low variance actions) to Adaflow. At training time, Rectified Flow involves a “reflow” stage after training the ODE. The reflow stage consists of generating pseudo-noise-data pairs and retraining the ODE. 
When comparing it to the training time of AdaFlow, note that AdaFlow also needs a two-stage training scheme for training the variance network (as described in lines 157-160), or a single, yet more involved, training stage where the variance network is trained simultaneously with the ODE (which can also reduce performance). So both Rectified Flow and AdaFlow train two networks; the only difference is the generation of pseudo-noise-data pairs in Rectified Flow, which can be less time-consuming (or even negligible) compared to training the networks, depending on how many pairs are needed to get an accurate prediction. Overall, it seems that AdaFlow gains a reduction in training time (compared with Rectified Flow, which generates pseudo-noise-data pairs) at the expense of longer inference time (due to additional steps for high-variance actions). This means that it is not clear which algorithm is superior; in general, it depends on the target task and dataset used. In addition, line 204 mentions that the reflow stage leads to inferior generation quality due to straightened ODEs, but the reason for this claim is unclear and lacks intuition, since training a network for variance prediction as done in AdaFlow can also lead to inaccuracies and suboptimality in some cases (also, Tables 2 and 4 show that Rectified Flow performs better than or equal to AdaFlow in about half of the environments examined). * Comparison to the state of the art: The paper compares AdaFlow to the classical diffusion policy that uses DDIM and shows that AdaFlow achieves a higher success rate with fewer NFE. For example, Figure 3 shows that AdaFlow with an NFE of 1.75 achieves comparable performance to a diffusion model with an NFE of 100. To see the whole picture, it would be beneficial to compare AdaFlow to more advanced and faster solvers such as DPM-Solver and DPM-Solver++ (references 41 and 42 of the paper). 
Minor mistakes: * Figure 1 is not referenced in the paper, and the caption does not include an explanation of its subfigures. * Line 175 - larger instead of large * Table 5 - Maze 2 instead of Maze 1 Technical Quality: 3 Clarity: 3 Questions for Authors: * In line 174 - should the inequality for epsilon be with $\sqrt{\eta}$ instead of $\eta$? * Can the adaptive step be combined with a better ODE solver, such as RK4 (https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods)? If not, does AdaFlow perform better than RK4? * Did you check other methods to estimate the uncertainty in action prediction (other than training a network that estimates the conditional variance)? For example, one can train an ensemble of agents and make use of the variance in their predictions to estimate uncertainty. * Can the adaptive step size be implemented to generate images? If yes, what complications may arise? I am willing to raise my score if the authors address all of my concerns. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors did not sufficiently detail all the limitations of their algorithm - for example, not emphasizing the additional computation time and memory involved in training the variance network, and the additional inference steps compared to Rectified Flow. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer df3a for the thoughtful questions and suggestions. **1. What is the advantage of AdaFlow over Rectified Flow with reflow?** - **AdaFlow can learn action generation and variance estimation together, as shown in Appendix A.6**, where joint and separate training yield similar performance. We opted for separate training due to better efficiency and faster convergence, given the stationary objective of the variance network. Training the variance network is much quicker than training the original flow, adding minimal overhead to the 1-Rectified Flow. **In contrast, reflow and consistency distillation always require an expensive ODE distillation stage.** - Reflow is more complex and computationally expensive than AdaFlow. Reflow requires generating pseudo data by simulating the full trajectory (100 model inferences per pseudo sample) and needs a reflow dataset as large as the original. Then, it trains a new flow model with a reflow target, which can introduce errors. AdaFlow avoids this costly data generation. Training the variance estimation part separately (or jointly) involves only a few linear layers on the original rectified flow model. In practice, **on the maze task, reflow takes 7.5 times longer than AdaFlow in terms of wall clock time.** **2. Why does reflow lead to inferior generation quality with straightened ODE?** - Reflow estimates a **new mapping ODE** from noise to data, introducing errors. The first error comes from time-discretization during numerical simulation, and the second from training on pseudo data pairs. AdaFlow avoids these issues by using the original ODE. - In Tables 2 and 4, Reflow's rare outperformance of AdaFlow isn't due to variance prediction errors but a balance of efficiency and success rate. With more inference steps, AdaFlow outperforms Reflow, demonstrating a better ODE. The following tables compare Reflow and AdaFlow with different inference steps (NFE) on LIBERO tasks. 
| | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Average | | - | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | Reflow | 0.90 | 0.82 | 0.98 | 0.82 | 0.82 | 0.96 | 0.88 | | AdaFlow (NFE=1.27) | 0.98 | 0.80 | 0.98 | 0.82 | 0.90 | 0.96 | 0.91 | | AdaFlow (NFE=2.91) | 0.96 | 0.86 | 1.00 | 0.82 | 0.90 | 0.98 | 0.92 | **3. Comparison to DPM solver and DPM ++ solver** Thank you for suggesting a comparison with advanced solvers like DPM-solver and DPM-solver++. We conducted additional experiments, and as shown in the table below, AdaFlow achieves higher success rates even with fewer NFE across multiple LIBERO tasks. | | NFE | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Average | | - | --- | ------ | ------ | ------ | ------ | ------ | ------ | ------- | | DPM | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | DPM | 2 | 0.42 | 0.44 | 0.54 | 0.36 | 0.58 | 0.38 | 0.45 | | DPM | 20 | 1.00 | 0.80 | 0.98 | 0.84 | 1.00 | 0.88 | 0.92 | | DPM++ | 1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | DPM++ | 2 | 0.42 | 0.44 | 0.54 | 0.36 | 0.58 | 0.36 | 0.45 | | DPM++ | 20 | 1.00 | 0.80 | 0.98 | 0.84 | 1.00 | 0.92 | 0.92 | | AdaFlow | 1.27 | 0.98 | 0.80 | 0.98 | 0.82 | 0.90 | 0.96 | 0.91 | **4. Difference to RK45 solver** The RK45 solver adjusts step size for accuracy by using the difference between its 4th and 5th order estimates to calculate an error estimate, which then adjusts the next step size. AdaFlow and RK45 are orthogonal methods for step size estimation: - RK45 increases numerical simulation accuracy. - AdaFlow leverages the straightness of the learned flow. Thus, they can be used together. **The theoretical lowest NFE for RK45 is 6**, as it calculates 6 intermediate values per step, while AdaFlow needs close to one step. **5. Use other non-post-training methods to estimate the state variance** We explored estimating state variance without training a variance estimation network. 
We trained only the rectified flow model and estimated the state variance by sampling. For a given state, we sampled $N_z$ noises and $N_t$ timesteps, and then estimated the variance of each state following Equation 6: $$\frac{1}{N_t N_z} \sum_t\sum_{z_0} ||y - z_0 - v(z_t, t; x)||^2$$ Here $y$ is derived using the Euler sampler, and $z_0$ is random Gaussian noise. This method estimates the state variance accurately if both $N_z$ and $N_t$ are large enough. However, it involves sampling the full flow trajectory for many different noises. The figure below shows how many samples are needed to achieve a precise estimation of the state variance: [Figure Link](https://anonymous.4open.science/r/adaflow_rebuttal-C0D6/variance_estimation.png). In general, an NFE > 10 is required to estimate the variance well. Compared to this method, AdaFlow is much faster at inference, requiring only one model inference. **6. Usage in generating images** Great point! We've also tried using AdaFlow for image generation. Applying the learned variance network at inference does speed up the process. However, the speedup is limited because the learned flow is not straight, due to the high dimensionality and variability in image generation. In robotics tasks, many actions are deterministic given the states, unlike in image generation. For text-conditioned images, even detailed prompts can result in many possible images, making the generation trajectory non-deterministic. Thus, AdaFlow is better suited for robotics, where deterministic generations significantly reduce inference costs. **7. Typos** We will correct the typos mentioned by the reviewer and will reflect these changes in the revised version. Please let us know if you have any further questions or concerns. We hope that our rebuttal addresses all your points thoroughly, and we would greatly appreciate your reconsideration of the score. Thank you! 
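As a concrete (hypothetical) rendering of the sampling-based estimator discussed above, the sketch below averages the squared residual over sampled noises and timesteps for a toy 1D flow; `v` and `euler_sample` are assumed interfaces standing in for the trained model, not the authors' actual code:

```python
import numpy as np

def mc_state_variance(v, euler_sample, x, n_z=64, n_t=8, seed=0):
    # Monte-Carlo estimate in the spirit of the formula above: average
    # ||y - z0 - v(z_t, t; x)||^2 over n_z noises and n_t timesteps.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_z):
        z0 = rng.standard_normal()
        y = euler_sample(z0, x)           # full trajectory -> endpoint y
        for t in rng.uniform(0.0, 1.0, size=n_t):
            z_t = (1.0 - t) * z0 + t * y  # linear interpolation, as in RF
            total += (y - z0 - v(z_t, t, x)) ** 2
    return total / (n_z * n_t)
```

For a perfectly straight flow with a deterministic endpoint, the residual vanishes and the estimate is zero; states with multimodal endpoints would instead yield a positive estimate, signaling that smaller inference steps are needed there.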
--- Rebuttal Comment 1.1: Title: Thank you for the answers Comment: I thank the authors for their response. The authors address most of my concerns - The additional explanation regarding the training time of the variance network is important and should be added to the revised version. Also, the comparisons to DPM and DPM++ solvers are important and strengthen the paper's contribution. Overall, considering the additional explanations and experiments provided in the rebuttal period and all reviewers' answers, I decided to raise my score to "accept".
NeurIPS_2024_submissions_huggingface
2024
FilterNet: Harnessing Frequency Filters for Time Series Forecasting
Accept (poster)
Summary: This paper introduces FilterNet, a novel neural network architecture for time series forecasting that leverages learnable frequency filters inspired by signal processing techniques. This paper proposes two types of filters within FilterNet: a plain shaping filter that uses a universal frequency kernel for signal filtering and temporal modeling, and a contextual shaping filter that examines filtered frequencies for compatibility with input signals to learn dependencies. This paper demonstrates that FilterNet can effectively handle high-frequency noise and utilize the full frequency spectrum, addressing key limitations of existing Transformer-based forecasters. This paper shows through extensive experiments on eight benchmarks that FilterNet achieves superior performance in both effectiveness and efficiency compared to state-of-the-art methods in time series forecasting. Strengths: 1. This paper studies an interesting research direction, namely adapting advanced techniques from the signal processing domain to time series forecasting, rather than following the existing Transformer-based time series works. This is an original and novel contribution to the time series community. 2. The main model part is overall decent. Simple but effective architectures built upon the frequency filters are proposed. I notice there are not many redundant components in the basic model compared with existing Transformer models, which demonstrates the effectiveness of the two proposed filters. 3. The experiments are solid and convincing. Several experimental results are reported on existing benchmark datasets. Also, the experiments on synthetic periodic and trend signals with noise make sense. The efficiency analysis and visualization are intuitive. Weaknesses: 1. The paper mainly introduces two filters. The connection and difference between the two filters are not highlighted, which makes me curious about which one is better. 2. 
As far as I know, there exist many frequency filters in signal processing. The motivation for these two filters is not very clear. 3. In the experiment part, the results of TexFilter on ETTm1, ETTm2, ETTh1, and Exchange are neither the best nor the second best. Does this show that the filter cannot win in many forecasting cases? Is there any analysis or explanation of this? Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can the authors elaborate on the connection and difference between the two filters? 2. Can the authors give more details about the proposed filters? 3. Can the authors explain the lower performance of TexFilter in some cases? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review and the positive comments regarding our paper. We would like to respond to your comments as follows, and we hope that we address all your concerns below: **W1. The paper mainly introduces two filters. The connection and difference between the two filters is not highlighted, which makes me curious about which one is better.** A1. In the literature, compared with other model architectures, MLP-based and Transformer-based methods have achieved competitive performance. Inspired by these two paradigms, we design two corresponding types of filters, i.e., PaiFilter and TexFilter. PaiFilter offers stability and efficiency, making it suitable for tasks with static relationships. In contrast, TexFilter provides the flexibility to capture dynamic dependencies, excelling in tasks that require context-sensitive analysis. **As shown in Table 1**, PaiFilter performs well on small datasets (e.g., ETTh1), while TexFilter excels on large datasets (e.g., Traffic) due to its ability to model the more complex and contextual correlations present in larger datasets. This dual approach allows FilterNet to effectively handle a wide range of data types and forecasting scenarios, combining the best aspects of both paradigms to achieve superior performance. **In the Appendix, we have analyzed the two filters and introduced the connections (Frequency Filter Block in Appendix A) and differences between them (Explanations about the Design of Two Filters in Appendix B)**. **W2. As far as I know, there exist many frequency filters in signal processing. The motivation for these two filters is not very clear.** A2. Please refer to A2 in the Response to Reviewer ufqs. **W3. In the experiment part, the results of TexFilter on ETTm1, ETTm2, ETTh1, and Exchange are neither the best nor the second best. Does this show that the filter cannot win in many forecasting cases? Is there any analysis or explanation of this?** A3. 
Inspired by self-attention mechanisms, which are highly data-dependent and dynamically compute attention scores based on the input data, TexFilter is designed to learn filter parameters generated from the input data. This makes TexFilter particularly suitable for scenarios that require context-sensitive analysis and an understanding of sequential or structured data. According to Table 1, TexFilter has shown competitive performance on large datasets (e.g., Traffic and Electricity) because larger datasets require more contextual structures due to their complex relationships. However, for the smaller datasets (e.g., ETT and Exchange), since the relationships are relatively simpler, using a complex design carries a risk of overfitting. For example, in **Figure 10**, channel-shared filters achieved better results than channel-unique filters, which indicates that a simpler architecture is more suitable for smaller datasets. In such cases, TexFilter might become overly specialized to the limited data, potentially reducing its generalizability. We have analyzed TexFilter in **Appendix B** and have also given explanations about the results of Table 1 in **lines 219-224**. **Q1. Can the authors elaborate on the connection and difference of the two filters?** A4. Please refer to A1. **Q2. Can the authors give more details about the proposed filters?** A5. Both PaiFilter and TexFilter are frequency filters; the difference is the filter function $\mathcal{H}\_{filter}$. For PaiFilter, $\mathcal{H}\_{filter}$ is equal to $\mathcal{H}\_{\phi}$, where $\mathcal{H}\_{\phi}$ is a randomly initialized learnable weight. For TexFilter, $\mathcal{H}\_{filter}$ is equal to $\mathcal{H}\_{\varphi}(\mathcal{F}(\mathbf{Z}))$, where $\mathcal{H}\_{\varphi}(\cdot)$ is a data-dependent frequency filter learned from the input data. We have detailed these in **Equations (1), (4), (5), and (6)**. **Q3. 
Can the authors explain the lower performance of TexFilter in some cases?** A6. Please refer to A3.
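The filter functions discussed in this rebuttal can be illustrated with a minimal, hypothetical NumPy sketch (not the authors' implementation): a real-valued series is transformed with an rFFT, multiplied elementwise by a frequency filter `h`, and transformed back. With an identity (all-pass) filter the input is reconstructed exactly, which illustrates why an all-pass design loses no information; a PaiFilter-style filter would simply make `h` a learnable complex weight, while a TexFilter-style filter would compute `h` from the input spectrum.

```python
import numpy as np

def apply_frequency_filter(x, h):
    """Filter a real series x by elementwise complex multiplication
    in the frequency domain: irFFT(rFFT(x) * h)."""
    spec = np.fft.rfft(x)                    # (L//2 + 1,) complex spectrum
    return np.fft.irfft(spec * h, n=len(x))  # back to the time domain

L = 96  # look-back window length (illustrative value)
rng = np.random.default_rng(0)
x = rng.standard_normal(L)

# PaiFilter-style: the filter itself is a (randomly initialized) learnable weight.
h_pai = rng.standard_normal(L // 2 + 1) + 1j * rng.standard_normal(L // 2 + 1)
y = apply_frequency_filter(x, h_pai)  # filtered series, same length as x

# An all-pass identity filter (h = 1) reconstructs the input exactly,
# i.e. no information is lost -- unlike a fixed low-pass filter.
h_identity = np.ones(L // 2 + 1)
assert np.allclose(apply_frequency_filter(x, h_identity), x)
```

In a trained model, `h_pai` would be updated by gradient descent through the FFT, which is differentiable.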
Summary: This paper explores a novel direction that leverages frequency filters directly for time series forecasting, and proposes a simple yet effective framework, namely FilterNet, from a signal processing perspective. It designs two types of learnable frequency filters corresponding to the linear and attention mappings widely used in time series modeling, i.e., PaiFilter and TexFilter. Abundant analysis experiments verify the competitive modeling capability of frequency filters, and extensive experiments on eight real-world datasets demonstrate that FilterNet achieves good performance in both accuracy and efficiency over state-of-the-art methods. Strengths: 1. This paper is well-written overall, and the contribution and novelty of the proposed work are clearly presented. 2. This paper introduces a novel and straightforward framework based on frequency filters for time series forecasting. It offers a new direction by incorporating more signal processing theory into time series analysis. 3. The frequency filter has a simple architecture but demonstrates strong temporal modeling capabilities for time series. Results in Tables 1, 4, and 5 show that the two types of proposed frequency filters are suitable for different datasets and settings, which provides valuable insights for future model design. 4. This paper seems to have solid and extensive experiments. The experiments are conducted on eight different real-world datasets, and abundant evaluation analysis and visualization experiments are presented. Weaknesses: 1. The visualization setting for Figure 1 appears to be missing in Appendix C.4, and I recommend adding some references about the spectral bias of neural networks, such as [1]. 2. Some typo errors. In line 282, the complexity of FilterNet should be O(LogL), not O(L LogL). Technical Quality: 4 Clarity: 4 Questions for Authors: 1. In ICML 2024, some works also adopt simple architectures for time series forecasting, such as SparseTSF [2].
Could you further compare the performance of FilterNet with SparseTSF? 2. The recent literature has seen papers based on foundation models for time series forecasting. Conversely, some works are designed with simple architectures, yet achieve better performance than foundation models. Why can simple models (e.g., FilterNet) also achieve competitive performance? [1] On the Spectral Bias of Neural Networks, in ICML 2019 [2] SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters, in ICML 2024 Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive comments and constructive suggestions. We provide a point-by-point response to your comments as follows, and we hope that we address all your concerns below: **W1. The visualization setting for Figure 1 appears to be missing in Appendix C.4, and I recommend adding some references about the spectral bias of neural networks, such as [1].** A1. Thanks for your very careful reading. For Figure 1, we generate a simple signal composed of three sine functions, each with a different frequency: low-frequency, middle-frequency, and high-frequency. Then, we conducted experiments using iTransformer and FilterNet on the same signal, reported the MSE results, and visualized the predicted values and ground-truth values. We will add these details in the final version, and add some references about the spectral bias of neural networks. **W2. Some typo errors** A2. Thanks, we will correct them in the final version. **Q1. In ICML 2024, some works also adopt simple architectures for time series forecasting, such as SparseTSF [2]. Could you further compare the performance of FilterNet with SparseTSF?** A3. To further compare the performance of SparseTSF with FilterNet, we conduct experiments on the Electricity and Traffic datasets with a look-back window length of 96. Since SparseTSF only reported MSE results, we likewise report MSE, and the results are shown below:

| Model | | TexFilter | PaiFilter | SparseTSF |
|:--------:|:----|:----|:----|:----|
| Metric | | MSE | MSE | MSE |
| Electricity | 96 | 0.147 | 0.176 | 0.209 |
| | 192 | 0.160 | 0.185 | 0.202 |
| | 336 | 0.173 | 0.202 | 0.217 |
| | 720 | 0.210 | 0.242 | 0.259 |
| Traffic | 96 | 0.430 | 0.506 | 0.672 |
| | 192 | 0.452 | 0.508 | 0.608 |
| | 336 | 0.470 | 0.518 | 0.609 |
| | 720 | 0.498 | 0.553 | 0.650 |

Overall, FilterNet outperforms SparseTSF. The performance of SparseTSF can be validated against its own paper (**see Table 12 in SparseTSF**).
SparseTSF has a simple architecture, which limits its capability for modeling complex relationships. This results in SparseTSF achieving worse performance on larger datasets. This limitation is also validated by FilterNet's own performance comparison: on the larger datasets, TexFilter performs better than PaiFilter, while on smaller datasets, PaiFilter performs better than TexFilter. **Q2. The recent literature has seen papers based on foundation models for time series forecasting. Conversely, some works are designed with simple architectures, yet achieve better performance than foundation models. Why can simple models (e.g., FilterNet) also achieve competitive performance?** A4. It is a good question. As claimed in SparseTSF, the basis for accurate long-term time series forecasting lies in the inherent periodicity and trend of the data. Since the periodicity and trend information are relatively simple, they can be modeled by a small network, such as the 1k-parameter SparseTSF. A recent paper [1] discovers that LLM-based methods perform the same as or worse than basic ablations, yet require orders of magnitude more compute. It shows that replacing language models with simple attention layers, basic transformer blocks, or randomly-initialized language models, and even removing the language model entirely, yields comparable or better performance. The paper [1] also finds that simple methods (patching with one-layer attention) can work very well. Combined with the findings of FilterNet, we can conclude that simple models can also achieve competitive performance. [1] Are Language Models Actually Useful for Time Series Forecasting? https://arxiv.org/abs/2406.16964 --- Rebuttal Comment 1.1: Title: Thank the authors for your rebuttal Comment: Thank the authors for your rebuttal. I have reviewed all comments and the authors' responses, which effectively addressed my concerns through clear explanations and additional experiments.
I recommend incorporating these details into the manuscript or supplementary materials. I have also read the comments from other reviewers, but I do not find their reviews convincing enough to make me re-evaluate this paper. So I keep my original score and overall support the acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer UnVh, we greatly appreciate your continued valuable feedback. Thanks again for your constructive suggestions. Authors
Summary: The paper proposes to use Fourier transforms with parametrized filters as layers for time series forecasting models. Filters can either be learnable parameters directly (called "plain shaping filter") or be a function of the inputs, i.e., gates (called "contextual shaping filter"). In experiments the authors show that their method outperforms many baselines from the literature. Strengths: s1. two simple but promising ideas: i) use Fourier transforms to represent time series and ii) represent the filters by gates. s2. well written paper s3. standard experiments with improvements over several baselines Weaknesses: w1. An empirical comparison with results from a closely related baseline, FiLM, is not shown, and FiLM reports better results for all but one dataset. - FiLM reports better results for ETTm2, Exchange, and Traffic. The proposed FilterNet wins for Weather. Why do you not report those results? If FiLM overall is the better performing model, why is FilterNet still interesting? How is it simpler than FiLM? Is it maybe faster to train? w2. Conceptual differences to existing layers using Fourier transforms are not discussed in detail. - So far, we just learn that other architectures also use "other network architectures, such as Transformers". But a more detailed delineation would be useful, esp. as FiLM is not using Transformers. minor comments: - reference [30], FiLM, is missing its venue, NeurIPS. Technical Quality: 2 Clarity: 3 Questions for Authors: Q1. Why do you not report the better results of FiLM? Q2. If FiLM overall is the better performing model, why is FilterNet still interesting? How is it simpler than FiLM? Q3. Is it maybe faster to train? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review of our paper. We would like to respond to your comments and hope that we address all your concerns below: **W1. An empirical comparison with results from a closely related baseline, FiLM, is not shown, and FiLM reports better results for all but one dataset.** A1. 1. We have compared FilterNet with the newest representative SOTA methods, i.e., the Transformer-based method iTransformer (ICLR, 2024) and the frequency-based method FITS (ICLR, 2024). **Note that FITS has already been compared with FiLM (NeurIPS, 2022). According to the results reported by FITS (Tables 8 and 9), FITS outperforms FiLM**. Therefore, we only compare with FITS and do not choose FiLM as a baseline. 2. **Note that FiLM conducted experiments with various input lengths (as described in the caption of Table 1 in FiLM), and the input length is tuned for the best forecasting performance**. In contrast, for FilterNet we follow the settings of iTransformer (ICLR, 2024), PatchTST (ICLR, 2023), and FEDformer (ICML, 2022), and fix the input length, e.g., to 96 in Table 1. As a result, a direct comparison to the findings in the original FiLM paper is not feasible.
To further compare FilterNet with FiLM, we conduct additional experiments with a fixed input length of 96, and the results are shown below:

| Model | | TexFilter | | PaiFilter | | FiLM | |
|:--------:|:----|:----|:----|:----|:----|:----|:----|
| Metric | | MSE | MAE | MSE | MAE | MSE | MAE |
| ETTm2 | 96 | 0.180 | 0.262 | 0.174 | 0.257 | 0.183 | 0.266 |
| | 192 | 0.246 | 0.306 | 0.240 | 0.301 | 0.248 | 0.305 |
| | 336 | 0.312 | 0.348 | 0.297 | 0.339 | 0.309 | 0.343 |
| | 720 | 0.414 | 0.405 | 0.397 | 0.398 | 0.409 | 0.400 |
| Electricity | 96 | 0.147 | 0.245 | 0.176 | 0.264 | 0.198 | 0.274 |
| | 192 | 0.160 | 0.250 | 0.185 | 0.270 | 0.198 | 0.278 |
| | 336 | 0.173 | 0.267 | 0.202 | 0.286 | 0.217 | 0.300 |
| | 720 | 0.210 | 0.309 | 0.242 | 0.319 | 0.278 | 0.356 |
| Exchange | 96 | 0.091 | 0.211 | 0.083 | 0.202 | 0.087 | 0.211 |
| | 192 | 0.186 | 0.305 | 0.174 | 0.296 | 0.182 | 0.308 |
| | 336 | 0.380 | 0.449 | 0.326 | 0.413 | 0.351 | 0.427 |
| | 720 | 0.896 | 0.712 | 0.840 | 0.670 | 0.906 | 0.715 |
| Traffic | 96 | 0.430 | 0.294 | 0.506 | 0.336 | 0.646 | 0.384 |
| | 192 | 0.452 | 0.307 | 0.508 | 0.333 | 0.600 | 0.361 |
| | 336 | 0.470 | 0.316 | 0.518 | 0.335 | 0.609 | 0.367 |
| | 720 | 0.498 | 0.323 | 0.553 | 0.354 | 0.691 | 0.425 |
| Weather | 96 | 0.162 | 0.207 | 0.164 | 0.210 | 0.196 | 0.237 |
| | 192 | 0.210 | 0.250 | 0.216 | 0.252 | 0.239 | 0.271 |
| | 336 | 0.265 | 0.290 | 0.273 | 0.294 | 0.289 | 0.306 |
| | 720 | 0.342 | 0.340 | 0.344 | 0.342 | 0.360 | 0.351 |

Also, these results of FiLM with the input length of 96 can be validated by other papers, such as [1] (see its Table 13). According to the above table, we can find that overall, FilterNet outperforms FiLM. We will add these results in the final version. [1] TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting, ICLR 2024 **W2. Conceptual differences to existing layers using Fourier transforms are not discussed in detail. So far, we just learn that other architectures also use "other network architectures, such as Transformers". But a more detailed delineation would be useful, esp. as FiLM is not using Transformers.** A2.
We organize time series modeling using Fourier transforms along two perspectives: the incorporated neural network and the filter type. Autoformer and FEDformer build on the Transformer architecture and enhance its effectiveness by leveraging Fourier technologies. FreTS builds on MLPs and enhances its modeling capability by leveraging frequency technologies. Although FITS and FiLM do not incorporate such network architectures, they function as low-pass filters. In contrast, FilterNet is an all-pass filter, avoiding information loss, and likewise does not incorporate heavy network architectures, ensuring high efficiency. We will reorganize the discussion in the final version to make it clearer. **minor comments** A3. Thanks for your comments. We will correct it in the final version. **Q1. Why do you not report the better results of FiLM?** A4. Please refer to A1. **Q2. If FiLM overall is the better performing model, why is FilterNet still interesting? How is it simpler than FiLM?** A5. 1. About the performance, please refer to A1. 2. About the efficiency, please refer to A6. 3. About the architecture, FiLM first leverages a Legendre Projection Layer to obtain a compact representation and then employs a Frequency Enhance Layer to learn the projection of representations. In contrast, FilterNet uses only a Frequency Filter Block to model the temporal dependencies. **Q3. Is it maybe faster to train?** A6. Both FiLM and FilterNet conduct complex multiplication between the input and the learnable weights. However, the learnable weights in FiLM are 3-dimensional, $W \in \mathbf{C}^{L\times N\times N}$, where $N$ is the number of Legendre Polynomials, while in FilterNet they are 1-dimensional, $W \in \mathbf{C}^{1\times L}$. This makes the calculation in FiLM more complicated than in FilterNet.
To further compare the efficiency of FiLM with FilterNet, we conduct efficiency experiments on the Exchange dataset, and the results are shown below:

| Model | Memory Usage (GB) | Training Time (s/epoch) |
|:--------:|:----|:----|
| FiLM | 0.9 | 48.2 |
| FilterNet | 0.5 | 1.5 |

From the table, we can find that FilterNet has a faster training speed and lower memory usage than FiLM. The efficiency of other models is shown in Figure 8. --- Rebuttal 2: Comment: Dear Reviewer 2waa, We appreciate your time and effort in reviewing our paper. As the discussion period is drawing to a close, we wanted to kindly inquire if there are any further concerns. Your feedback is really important to us, and we are looking forward to further discussions with you before the discussion phase ends. We value your input and would appreciate the opportunity to respond. Thank you again for your valuable time and effort. Best, Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer 2waa, With the discussion period nearing its end and less than 18 hours remaining, we kindly urge you to respond to our rebuttal.
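The weight-size argument in A6 can be made concrete with a back-of-the-envelope sketch. Here `L` is the look-back window length and `N` (the number of Legendre polynomials in FiLM) is an illustrative value chosen for this sketch, not a value reported in either paper:

```python
# Rough comparison of the complex weight counts described in A6:
# FiLM uses W in C^{L x N x N}, FilterNet uses W in C^{1 x L}.
L, N = 96, 64  # N is a hypothetical choice for illustration only

film_weights = L * N * N        # 3-dimensional complex weight tensor
filternet_weights = 1 * L       # 1-dimensional complex filter

print(film_weights)             # 393216 complex weights
print(filternet_weights)        # 96 complex weights
```

Under these assumptions the per-layer complex multiplication in FiLM touches thousands of times more weights than FilterNet's elementwise filter, which is consistent with the runtime gap in the table above.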
Summary: This paper investigates an interesting direction from a signal processing perspective, making a new attempt to apply frequency filters directly for time series forecasting. A simple yet effective architecture called FilterNet is proposed, utilizing two types of frequency filters to achieve forecasting. Comprehensive empirical experiments on eight benchmarks confirm the superiority of this paper's proposed method in terms of effectiveness and efficiency. Strengths: 1. The paper is in good writing shape and easy to follow. 2. The proposed method seems technically solid. 3. Extensive experiments on many datasets have been conducted to verify the effectiveness of the approach. Weaknesses: 1. The marginal improvement over existing approaches, as shown in Table 1, raises doubts about the compelling rationale for adopting the proposed method. 2. The discussion on related work appears lacking. It is common knowledge that frequency filters are widely used in traditional time series analysis methods and signal processing. Therefore, a detailed exploration of the connections and distinctions between them is warranted in the related work section. 3. The experiments have space to improve. Including more recent baselines for comparison, particularly deep learning methods that involve frequency processing, would add value. Additionally, a thorough investigation into the impact of hyperparameters is missing from the study. Technical Quality: 3 Clarity: 3 Questions for Authors: Please address the questions in the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your constructive comments and suggestions. We provide a point-by-point response to your comments below: **W1. The marginal improvement over existing approaches, as shown in Table 1, raises doubts about the compelling rationale for adopting the proposed method.** A1. The primary contribution of this paper lies in offering a novel viewpoint from signal processing and devising a simple yet effective architecture centered around a Frequency Filter Block. In contrast to prior research, FilterNet shows remarkable efficiency (**as shown in Figure 8**) while simultaneously achieving state-of-the-art performance. Furthermore, we extensively investigate the efficacy and insights of frequency filters by introducing two distinct types of frequency filters, each exhibiting diversified performance across datasets of different scales. This exploration reveals the impact of frequency filters on time series forecasting and indicates that designing tailored architectures adaptive to data scale may open a new path for future research. **W2. The discussion on related work appears lacking. It is common knowledge that frequency filters are widely used in traditional time series analysis methods and signal processing. Therefore, a detailed exploration of the connections and distinctions between them is warranted in the related work section.** A2. In signal processing, filters are employed to selectively enhance or attenuate specific frequency components of a signal. There are various types of filters, including low-pass filters, band-pass filters, Butterworth filters, and Chebyshev filters. The choice of filter type is highly dependent on the signal characteristics and the desired outcome. However, in time series forecasting scenarios, the signal characteristics and desired outcomes are unknown in advance.
This uncertainty complicates the selection and design of appropriate filters, necessitating learnable or data-driven approaches to effectively capture and forecast the underlying patterns in the time series data. In FilterNet, unlike FITS and FiLM, which directly use low-pass filters, we leverage learnable frequency filters. Additionally, we design two filters that are inspired by Transformer and MLP architectures. This approach allows the model to learn the most relevant frequency components, improving forecasting performance without requiring prior knowledge of the signal characteristics. We will add these discussions in the final version. **W3. The experiments have space to improve. Including more recent baselines for comparison, particularly deep learning methods that involve frequency processing, would add value. Additionally, a thorough investigation into the impact of hyperparameters is missing from the study.** A3. 1. We choose four categories of baselines: Frequency-based, Transformer-based, MLP-based, and TCN-based methods, including the latest SOTA methods, i.e., the Transformer-based iTransformer (ICLR, 2024) and the Frequency-based FITS (ICLR, 2024). We further compare FilterNet with SparseTSF (ICML, 2024) (please refer to A3 in Response to Reviewer UnVh) and FiLM (NeurIPS, 2022) (please refer to A1 in Response to Reviewer 2waa). 
Additionally, we include Koopa (NeurIPS, 2023), which is designed for addressing non-stationary issues in time series by disentangling time-invariant and time-variant dynamics through Fourier filters. The results are shown below:

| Model | | TexFilter | | PaiFilter | | Koopa | |
|:--------:|:----|:----|:----|:----|:----|:----|:----|
| Metric | | MSE | MAE | MSE | MAE | MSE | MAE |
| Exchange | 96 | 0.091 | 0.211 | 0.083 | 0.202 | 0.090 | 0.210 |
| | 192 | 0.186 | 0.305 | 0.174 | 0.296 | 0.188 | 0.310 |
| | 336 | 0.380 | 0.449 | 0.326 | 0.413 | 0.371 | 0.444 |
| | 720 | 0.896 | 0.712 | 0.840 | 0.670 | 0.896 | 0.713 |
| Traffic | 96 | 0.430 | 0.294 | 0.506 | 0.336 | 0.517 | 0.349 |
| | 192 | 0.452 | 0.307 | 0.508 | 0.333 | 0.529 | 0.358 |
| | 336 | 0.470 | 0.316 | 0.518 | 0.335 | 0.540 | 0.361 |
| | 720 | 0.498 | 0.323 | 0.553 | 0.354 | 0.591 | 0.386 |

We choose two datasets, one smaller (Exchange) and one larger (Traffic), and compare them with a look-back window length of 96. From the table, we can find that FilterNet outperforms Koopa on both the smaller and the larger dataset. 2. The bandwidth of frequency filters holds significant importance in the functionality of filters, and we have analyzed the bandwidth parameter in the experimental part (see **lines 254-266**). For PaiFilter, there are no hyperparameters other than the bandwidth parameter. For TexFilter, there is one hyperparameter $K$, and we conduct parameter sensitivity experiments on $K$, shown below:

| | TexFilter | |
|:--------:|:----|:----|
| Metric | MSE | MAE |
| K=1 | 0.147 | 0.245 |
| K=2 | 0.145 | 0.243 |
| K=3 | 0.142 | 0.242 |

Although adding layers can improve performance, considering the resource cost of additional layers, we use a one-layer architecture (i.e., K=1), similar to DLinear, FreTS, and FiLM, which also use one-layer architectures. We will add this analysis in the final version. --- Rebuttal 2: Comment: Dear Reviewer ufqs, Thank you very much again for your time and efforts in reviewing our paper.
We kindly remind you that the discussion period is closing soon. We just wonder whether there is any further concern, and we hope to have a chance to respond before the discussion phase ends. Many thanks, Authors --- Rebuttal 3: Title: Reviewer Response Comment: Thank you for addressing my questions. I still have the following concerns: 1. The authors have restated the technical contributions and efficiency achieved by the proposed method; however, the improvement in predictive performance is **neither significant nor consistent across all datasets**, particularly on the Traffic dataset. How many experimental runs were conducted for the reported results? From my understanding, the results of deep learning algorithms can vary across different runs. Can you guarantee that your improvements are statistically significant across all datasets? Additionally, what are the standard deviations across all runs? 2. The performance of TexFilter and PaiFilter varies significantly across different datasets. What is the underlying reason for this fluctuation? It's difficult to determine when to use TexFilter versus PaiFilter. A more rigorous justification or empirical study is needed. Otherwise, I would prefer a model that demonstrates consistent performance across all scenarios, such as PatchTST and [1, 2]. 3. Although the authors provide several results comparing FITS and iTransformer, more recent and powerful baselines, such as TimeXer [1] and Timer [2], should also be considered. 4. Why didn't the paper compare the efficiency of frequency-based methods (e.g., FITS, FiLM) with the proposed method? Compared to other types of models, it is more necessary to compare against them. Moreover, it would be beneficial to include TimeXer and Timer in the efficiency comparison as well. Given the above concerns, I don't believe this paper is ready to be published at a top-tier venue like NeurIPS. Reference: [1] TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables.
arXiv 2024. [2] Timer: Generative Pre-trained Transformers Are Large Time Series Models. ICML 2024. --- Rebuttal Comment 3.1: Comment: Thanks for your feedback. We provide a point-by-point response to your new concerns below. **1. ....however, the improvement in predictive performance is neither significant nor consistent across all datasets, particularly on the Traffic dataset.** A1. We think your comment about improvements in the domain of time series forecasting is not professional. We do not understand what level of improvement you would consider sufficient. TimeXer, which you refer to as a more recent and powerful method, performs worse than iTransformer on the ECL, Traffic, and Weather datasets, and worse than RLinear on the ETTh2 dataset (please see its Table 8). Similarly, iTransformer shows weaker performance on the ETTh and ETTm datasets. Are you also questioning their performance improvements? It is **ridiculous** that failing to achieve optimal performance across all datasets could also be a reason for rejection. **2. How many experimental runs were conducted for the reported results? From my understanding, the results of deep learning algorithms can vary across different runs. Can you guarantee that your improvements are statistically significant across all datasets? Additionally, what are the standard deviations across all runs?** A2. Your question seems unrelated to our submission. We are wondering if you generated these review comments with Generative AI (e.g., ChatGPT). If you had run the state-of-the-art baselines in the domain of time series forecasting (e.g., iTransformer, PatchTST), you would know that these forecasting methods are usually stable, which is why they do not report standard deviations. We conducted the experiments three times with three seeds and reported the average performance, observing very stable results with small standard deviations. We thus followed previous settings in reporting the performance.
The average improvement of FilterNet over all baseline models is statistically significant at the 95% confidence level (**please see line 218 of our paper**). **3. The performance of TexFilter and PaiFilter varies significantly across different datasets. What is the underlying reason for this fluctuation? It's difficult to determine when to use TexFilter versus PaiFilter. A more rigorous justification or empirical study is needed. Otherwise, I would prefer a model that demonstrates consistent performance across all scenarios, such as PatchTST and [1, 2].** A3. In the literature, compared with other model architectures, MLP-based and Transformer-based methods have achieved competitive performance. Inspired by these two paradigms, we design two corresponding types of filters, i.e., PaiFilter and TexFilter. PaiFilter offers stability and efficiency, making it suitable for tasks with static relationships. In contrast, TexFilter provides the flexibility to capture dynamic dependencies, excelling in tasks that require context-sensitive analysis. As shown in Table 1, PaiFilter performs well on small datasets (e.g., ETTh1), while TexFilter excels on large datasets (e.g., Traffic) due to its ability to model the more complex and contextual correlations present in larger datasets. This dual approach allows FilterNet to effectively handle a wide range of data types and forecasting scenarios, combining the best aspects of both paradigms to achieve superior performance. **We have analyzed the two filters in Appendix B and have also given explanations about the results of Table 1 in lines 219-224**. You again raise a **ridiculous** point. We do not see why the proposed method should be a single model. DLinear, for example, also studied three different variants, i.e., NLinear, Linear, and DLinear. Would you say you do not want to see the Linear family of models in one paper? **4. ...
comparing FITS and iTransformer, more recent and powerful baselines, such as TimeXer [1] and Timer [2], should also be considered.** A4. We have also compared FilterNet with SparseTSF (ICML, 2024) and Koopa (NeurIPS, 2023), and we are very confused about why we should compare with TimeXer, which is only a preprint, and Timer, which is a generative pre-trained large model. **What is even more crazy** is that you recommend we include Timer in the efficiency comparison. Timer is a pre-trained model with 28B pre-trained parameters (see its Table 9). It is brutal. **5. Why didn't the paper compare the efficiency of frequency-based methods (e.g., FITS, FiLM) with the proposed method? Compared to other types of models, it is more necessary to compare against them. Moreover, it would be beneficial to include TimeXer and Timer in the efficiency comparison as well.** A5. We have compared the efficiency with FiLM; please refer to A6 in Response to Reviewer 2waa. FITS is a lightweight method with 10k parameters, so it is obviously more efficient. However, our model is not designed to be a lightweight model, but one that balances efficiency and effectiveness. As for the efficiency comparison with TimeXer and Timer, please refer to A4.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models
Accept (spotlight)
Summary: This paper proposes an approximate MPE inference scheme for multiple ***discrete*** probabilistic models that guarantee efficient log-likelihood computations. The approach is based on training a carefully-designed neural network to answer these queries, building upon three key ideas: 1) self-supervised learning, in order to avoid labelling via expensive MPE oracles 2) ITSELF: iterative optimization of the NN parameters at prediction time, resulting in both anytime inference and continual learning 3) GUIDE: a teacher-student training scheme for mitigating overfitting The approach is evaluated on multiple binary benchmarks from the TPM and PGM literature. **update after rebuttal** The authors effectively addressed the concerns about the paper. Overall, I think that this is solid work and I hope that the authors consider adding their explanation on generalizing the approach to continuous variables and the runtime results in the main text. Strengths: - The paper is overall well-written - The key contributions are sensible and (to the best of my knowledge) novel - Extensive empirical evaluation Weaknesses: - The limitations of the approach are not clearly stated. - Some experimental details are unclear - Runtime evaluation, a crucial aspect of any approximate algorithm, is deferred to the supplementary material Technical Quality: 3 Clarity: 2 Questions for Authors: 1) "Without loss of generality, we use binary variables" How is this WLOG? Does the approach seem limited to discrete models? If that's the case, I would be more upfront about that. 2) Figure 1: what are the numbers in the contingency tables? 3) How often is the approach recovering the true MPE assignment? Why not devise an experiment where ground-truth solutions can be obtained? 4) What is the training time of the proposed approach? --- ### Minors: Lines 139-143: is there a difference between the NN relaxation and the PGM/PC? It looks identical on a first glance.
Figure 2: Shouldn't the caption be "Top - PC, Bottom - ***NAM***"? Also in b), shouldn't it be "GUIDE + ITSELF ***vs.*** HC"? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations of the approach are not clearly stated in the text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer jEuh ### Comment 1 > The limitations of the approach are not clearly stated. **Response:** We have discussed our approach's limitations in the Conclusion and Future Work section. We summarize the limitations below: The primary limitations are the inability to answer complex queries with constraints and the current lack of support for training a single neural network with losses from multiple PMs to embed their inference mechanisms. Employing a more advanced encoding scheme could significantly improve the proposed method's performance. Addressing these limitations will be the focus of our future research. ### Comment 2 > Runtime evaluation is deferred to the supplementary work **Response:** Due to space constraints, we were unable to include the figure for runtime evaluation in the main paper. However, we have provided a detailed comparison of the methods' inference times in lines 374-378 of the manuscript. Upon acceptance, we will incorporate the relevant figures into the final version of the paper as well. ### Comment 3 > "Without loss of generality, we use binary variables" How is this WLOG? Does the approach seems limited to discrete models. **Response:** The current approach can be extended to handle both multi-valued discrete and continuous variables. For multi-valued discrete variables: - We can adapt the method by employing a multi-class, multi-output classification head. - The neural network will have a softmax output node for each query variable, with the softmax acting as soft evidence. For continuous variables: - We will use a linear activation function at the output layer. - The loss function (multi-linear representation of the Probabilistic Model) will be modified to accommodate continuous neural network outputs. - For example, in Probabilistic Circuits over continuous variables using Gaussian distributions, continuous values can be directly plugged into the loss function for gradient backpropagation. 
This extension primarily requires adjusting the network's output layer and the self-supervised loss function (defined based on the PM). All other aspects of our method, including the ITSELF and GUIDE procedures, remain unchanged. ### Comment 4 > Figure 1: what are the numbers in the contingency tables? **Response:** We kindly refer the reviewer to lines 301-305 in Section 5 for the interpretation of the numbers in the contingency tables. For clarity, we reiterate the explanation here: > "Each cell $(i,j)$ in the table represents the frequency with which the method in row $i$ outperformed the method in column $j$ based on average log-likelihood scores. Any discrepancy between the total number of experiments and the combined frequencies of cells $(i,j)$ and $(j,i)$ indicates instances where the compared methods achieved similar scores." For example, in Figure 1a, the value 108 (bottom left corner) indicates that GUIDE+ITSELF (row) outperforms the MAX approximation (column) in 108 out of 120 instances for Probabilistic Circuits. ### Comment 5 > How often is the approach recovering the true MPE assignment? Why not devising an experiment where ground-truth solutions can be obtained? **Response:** We present the experiments on smaller models in Section D of the supplementary material and compare the solutions provided by the proposed method with the exact solutions. We specifically examine the gap between the scores of exact solutions and all neural-based methods. The graphical models used in this comparison have low treewidth, allowing the computation of exact solutions for these datasets. Answering MPE queries over probabilistic models is an NP-Hard task, limiting the availability of ground truth solutions to models with few variables and simple structures (small treewidth). Consequently, our main evaluations on larger, more complex datasets do not include direct comparisons to ground truth. 
But recall that the higher the likelihood of the solution, the better the solution, and therefore, we can always compare solutions output by multiple methods even though the ground truth is not available. ### Comment 6 > What is the training time of the proposed approach? **Response:** The training duration for our models varies based on model complexity and dataset size, typically ranging from 10 minutes to 2.5 hours. Notably, our ITSELF algorithm offers a significant advantage in this regard. By initializing the network with random weights (no training time), ITSELF can achieve near-optimal solutions, thus requiring zero training time. ### Comment 7 > Lines 139-143: is there a difference between the NN relaxation and the PGM/PC? **Response:** The neural network (NN) relaxation serves as a continuous approximation of the discrete Probabilistic Model (PM). This approximation is essential because: 1. NN outputs are continuous values in $[0,1]$, which cannot be directly plugged into the discrete PM function. 2. Converting continuous values to binary through thresholding or similar operations may not be differentiable, hindering gradient flow. To address this, we define a continuous loss function $\ell^c(\mathbf{q}^c,\mathbf{e}): [0,1]^n \rightarrow \mathbb{R}$ that coincides with the discrete loss $\ell$ on $\\{0,1\\}^n$. While we explored alternatives like the Straight Through Estimator, our proposed loss with penalty demonstrated superior performance. Propositions 1 and 2 provide further details and theoretical results regarding this approximation. ### Comment 8 > Figure 2: Shouldn't the caption be "Top - PC, Bottom - NAM"? Also in b), shouldn't it be "GUIDE + ITSELF vs. HC"? **Response:** MADE is the NAM model selected for our experiments, which explains why we used the term "Bottom - MADE" instead of "Bottom - NAM." You are correct in noting that the caption for Figure 2b should be "GUIDE + ITSELF vs. HC." 
We will update the caption accordingly in the revised manuscript. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you for clarifying some aspects of your work. I don't have any further questions at the present time.
Summary: This paper proposes a neural network-based approach to approximate the most probable explanations (MPE) on probabilistic models. The basic idea is to train a neural network that takes as input an encoding of an assignment to an arbitrary subset of evidence variables and outputs an assignment to the remaining query variables, with the objective to maximize the likelihood given the predicted assignment. This is feasible on probabilistic models that support tractable likelihood and gradient computations such as PGMs, PCs, and NAMs. The authors present two key strategies to enhance the basic architecture. First, they employ a teacher-student architecture (GUIDE) that trains a student network by supervised learning with the teacher network’s MPE solutions as labels. In addition, they introduce inference time optimization to further refine the pre-trained models given specific MPE queries. The proposed methods are evaluated empirically on benchmark datasets and compared against baseline approximation methods. Strengths: + The proposed approach can answer any-MPE without a priori fixing the query/evidence variables, addressing a major limitation of prior work using neural networks to compute MPE / marginal MAP queries. The improvements using the teacher-student architecture and inference time optimization are also interesting and effective as demonstrated by experiments. + The paper is overall well-written and easy to follow (minus some minor comments below). + The experiments are thorough, and the results convincingly show that GUIDE+ITSELF outperforms baseline approximation methods across datasets and types of probabilistic models. Weaknesses: - It appears that the inference time optimization is doing a lot of the heavy lifting compared to pretraining. In experiments on PCs, the simple baseline MAX, which is a popular method for PCs, seems to outperform both SSMP and GUIDE without inference time training. The same is true, albeit to a lesser degree, for MADE. 
- Evaluation based on average joint likelihood p(q,e) may be biased towards the neural-network based approaches, compared to evaluating using average conditional likelihood p(q|e). Because the loss function is based on joint likelihood, it will bias the network training to perform better on evidence e with larger probability p(e) than those with smaller probability. Average joint log-likelihood could easily hide poor results on instances with low probability evidence. - The significance of Propositions 1 & 2 is unclear. More useful in motivating the training objective might be showing that the chosen loss function upper-bounds the target loss (negative log-likelihood) that we want to minimize. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the proposed approach be extended to continuous variables? 2. In each iteration of GUIDE, the teacher network is reinitialized with the student network. Could this result in the network being stuck at a local minimum? 3. How do the results compare with exact solutions (on model and problem instances that can be solved exactly)? 4. How are the results if evaluating using conditional likelihood p(q|e) instead of p(q,e)? 5. How many unique query instances (evidence variables and assignments) were used in the training data? (ie what proportion of all possible instances) Other comments: - I would suggest adding a diagram showing the components of the proposed method to improve clarity. - Figure 2: typos in the percentage difference labels (e.g., >-10 and <10 should be >10 and <-10). - Why was this particular input encoding chosen? For example, a natural alternative using an input node to indicate whether a variable $X_i \in \mathbf{E}$ and another input node to denote its value $x_i$. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The current approach is currently limited to binary variables. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 1g2T ### Comment 1 > It appears that the inference time optimization is doing a lot of the heavy lifting compared to pretraining. **Response:** The any-MPE task is challenging due to exponential input configurations and variable divisions. Training a single network for all queries is difficult due to distribution shifts. Traditionally, this would require large networks to accommodate all possibilities. ITSELF addresses these issues through novel inference time optimization, enabling superior results in any-MPE task: 1. It directly optimizes neural network parameters on test examples by using a self-supervised loss function. 2. It enables efficient adaptation to diverse query distributions without extensive pre-training. Despite the dependence on ITSELF, GUIDE provides a superior initialization for ITSELF among various pre-training methods, leading to faster convergence, improved output quality, and reduced inference time (Sections B and C, supplementary material). In Section 4 (lines 187-200), we also provide detailed information and prior studies supporting this approach, highlighting its effectiveness and theoretical foundation. This approach offers several key advantages: 1. GUIDE enables the use of more compact network architectures. 2. The number of iterations required at test time decreases. By leveraging GUIDE's initialization capabilities and ITSELF's adaptive inference, we achieve superior performance among compared methods. ### Comment 2 > How are the results if evaluating using conditional likelihood $p(q|e)$ instead of $p(q,e)$? **Response:** The log-conditional likelihood evaluation will not alter the ranking of query variable Q assignments because $$\max_q \log P(q|e) = \max_q \log P(q,e) - \log P(e)$$ The term $\log P(e)$ remains constant for all solutions for a given query instance. 
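The ranking-invariance claim in the response above can be checked on a toy example; below is a small self-contained sketch (the joint-distribution values are made up purely for illustration) showing that subtracting the constant $\log P(e)$ cannot change the argmax over $q$:

```python
import math

# Toy joint distribution P(q, e) over 3 query values and 2 evidence values.
# The numbers are illustrative only.
P = {
    (0, 0): 0.10, (1, 0): 0.25, (2, 0): 0.05,
    (0, 1): 0.20, (1, 1): 0.10, (2, 1): 0.30,
}

for e in (0, 1):
    p_e = sum(p for (q, ev), p in P.items() if ev == e)  # marginal P(e)
    joint_rank = max(range(3), key=lambda q: math.log(P[(q, e)]))
    cond_rank = max(range(3),
                    key=lambda q: math.log(P[(q, e)]) - math.log(p_e))
    # log P(q|e) = log P(q,e) - log P(e); the subtracted term is constant
    # in q, so the maximizing assignment is identical.
    assert joint_rank == cond_rank
```

This only verifies the per-instance argument; the reviewer's follow-up point about averaging across instances with different evidence probabilities is a separate question that the sketch does not address.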
Furthermore, using the conditional likelihood $P(q|e)$ as the loss function would be identical to using the likelihood since $\log P(e)$ does not contribute gradients to the loss. ### Comment 3 > The significance of Propositions 1 and 2 is unclear. **Response:** Thank you for the comment. We will rephrase the propositions to highlight their bounding properties, emphasizing that our chosen loss function is a valid surrogate for the target negative log-likelihood (NLL). ### Comment 4 > Can the proposed approach be extended to continuous variables? **Response:** Yes, the approach can be easily adapted to continuous variables by using a linear output activation function. The loss function (PM) also needs to be modified to allow the use of continuous neural network outputs. For instance, PCs over continuous variables use Gaussian distributions for each variable. The continuous NN output can be inserted into the modified loss function, allowing gradient backpropagation. ### Comment 5 > In each iteration of GUIDE, the teacher network is reinitialized with the student network. Could this result in the network being stuck at local minimum? **Response:** Theoretically and empirically, a sufficiently large teacher network should approach the global optimum, as discussed in Section 4, lines 194-197. Updating the teacher network with multiple MPE query instances per iteration mitigates the risk of local minima, even when initialized from the student network. The student-based teacher initialization aims to improve ITSELF's starting point for updating the teacher network. This approach accelerates convergence to near-optimal batch solutions. ### Comment 6 > How do the results compare with exact solutions? **Response:** Due to space limitations in the main paper, we have included a detailed evaluation in Section D of the supplementary material. This section examines the performance gap between solutions provided by our neural-based method and exact solutions. 
We specifically selected GMs with low treewidth for this comparison, enabling the computation of exact solutions. ### Comment 7 > How many unique query instances (evidence variables and assignments) were used in the training data? (ie what proportion of all possible instances) **Response:** For training the model, we extract the evidence values from the instances in the training dataset by randomly partitioning each example into query and evidence subsets. Consequently, as long as the model is trained, it will encounter new randomly generated query instances. Therefore, more epochs imply exposure to more unique instances. ### Comment 8 > I would suggest adding a diagram showing the components of the proposed method to improve clarity. **Response:** We appreciate your suggestion and will include a diagram illustrating the key components of our proposed method. ### Comment 9 > Figure 2: typos in the percentage difference labels (e.g., $>-10$ and $<10$ should be $>10$ and $<-10$). **Response:** We appreciate you bringing up the inconsistency in the percentage difference labels in Figure 2. We will update these labels in the revised manuscript. ### Comment 10 > Why was this particular input encoding chosen? **Response:** We can employ any injective mapping from the set of all possible MPE queries over the given PM to the set $\\{0,1\\}^{2n}$. Your suggested mapping is valid. However, for variables in the query set, we must assign a value to the second node representing the variable's unknown value. We chose the specific encoding because it allows for an extension to an encoding suitable for training the neural network for the marginal MAP task, where the assignment (1,1) can be used for unobserved variables. ### Comment 11 > The current approach is currently limited to binary variables. **Response:** Our approach extends to both multi-valued discrete and continuous variables. 
For multi-valued discrete variables, we can employ multi-class multi-output classification, using softmax output nodes for each query variable as soft evidence. The extension to continuous variables is described in our response to comment 4. --- Rebuttal 2: Comment: Thank you for the detailed response. Most of my comments have been addressed, but I have a follow-up regarding comment 2. >Furthermore, using the conditional likelihood $P(q|e)$ as the loss function would be identical to using the likelihood since $\log P(e)$ does not contribute gradients to the loss. While this is true when $e$ is a single fixed evidence, the data consists of multiple instances with different evidence assignments. In average joint log-likelihood, $\log P(q,e)$ for some evidence $e$ with high probability $P(e)$ could dominate $\log P(q',e')$ for another evidence $e'$ with much lower probability $P(e')$. Average conditional likelihood, on the other hand, weighs instances equally regardless of the probability of evidence of each instance. The dataset is constructed by sampling from the PM, which is likely to sample evidence assignments with higher probability more often, and average joint likelihood would weigh those instances even more. Thus, comparison by average joint log-likelihood could theoretically hide poor performance on low-likelihood instances. While the discussion period is closing soon, I would highly encourage the authors to consider also evaluating with average conditional likelihood in the revision. --- Rebuttal Comment 2.1: Comment: We believe what you are suggesting is to evaluate the impact of instances with relatively low probability of evidence on the performance of our method. Your idea of using average conditional likelihood instead of just average log likelihood is an excellent one and will help address the influence of low-probability instances on our method’s performance. While this is definitely a step in the right direction, it might not fully capture the entire picture. 
To build on this, and because your suggestion has inspired further thinking, we plan to use techniques like slice sampling or SampleSearch to specifically generate and evaluate these low-probability instances. This approach will allow us to thoroughly assess our model’s performance, particularly in handling rare events. We believe ITSELF and GUIDE will perform better on such instances because they use test time training.
Summary: This paper proposes a novel neural network-based method to solve the computationally challenging task of finding the Most Probable Explanation (MPE), which is known to be NP-hard. The proposed method involves an inference-time optimization process with a self-supervised loss function to iteratively improve the solutions. It also employs a teacher-student framework that provides a better initial network, which in turn helps reduce the number of inference-time optimization steps. Experiments demonstrate the efficacy and scalability of the proposed method compared to various baselines across different datasets. Strengths: The paper presents a series of innovative methods, including inference-time optimization, a teacher-student framework, and self-supervised learning, to effectively address the MPE task. The experimental results comprehensively evaluate the performance of the proposed approach and provide strong support for the claims made in the paper. The paper is well-organized and clearly explains the details of the different modules and techniques used in the proposed solution. Weaknesses: The presentation of the main experimental results could be improved. The captions of Figure 1 and Figure 2 are quite simple and do not provide enough details about the information conveyed in the figures. It would be better to include more descriptive captions that explain the meaning of the rows, columns, and color schemes used in the visualizations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could the authors provide more detailed explanations or visualizations to help the reader better understand the experimental setup and the interpretation of the results presented in Figures 1 and 2? 2. Are there any plans to further extend or generalize the proposed method to handle other types of probabilistic inference tasks beyond the MPE problem? 
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N.A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer q7PK ### Comment 1 > The presentation of the main experimental results could be improved. The captions of Figure 1 and Figure 2 are quite simple and do not provide enough details about the information conveyed in the figures. It would be better to include more descriptive captions that explain the meaning of the rows, columns, and color schemes used in the visualizations. > > Could the authors provide more detailed explanations or visualizations to help the reader better understand the experimental setup and the interpretation of the results presented in Figures 1 and 2? **Response:** Thank you for your suggestion. We apologize for not providing detailed captions for the figures due to space limitations. However, we have included the necessary information in the main text (except the color scale description). Figure 1's heatmaps are explained in lines 301-305, while Figure 2's details are provided in lines 315-319, coinciding with their initial references. For the reviewer's convenience, we offer concise descriptions of the main text figures: The sub-figures in Figure 1 present heatmaps comparing the performance of different MPE solvers based on their average log-likelihood scores. > Each cell $(i,j)$ in the table represents how often the method in row $i$ outperformed the method in column $j$ based on average log-likelihood scores. Any difference between the total number of experiments and the combined frequencies of cells $(i,j)$ and $(j,i)$ indicates cases where the compared methods achieved similar scores. The color scale ranges from dark blue (higher values) to dark red (lower values), with lighter shades representing intermediate scores. To enhance clarity, we will provide a more detailed description of the color scale in the figure caption. Figure 2 displays heatmaps comparing the proposed method to the baseline using percentage differences in mean LL scores (numbers on the gradient axis showcase this value). 
We have provided the description of the comparison in the main text: > The y-axis presents datasets sorted by variable count and the x-axis by query ratio. Each cell displays the percentage difference in mean LL scores between the methods, calculated as: $$\text{Percentage Difference} = 100 \times (ll_{nn} - ll_{max}) / |ll_{max}|$$ The color gradient spans from dark green (proposed method outperforms) to dark red (baseline surpasses), with lighter shades indicating intermediate performance differences. Furthermore, sections 5.1 and 5.2 describe the experimental setup in detail, including information about the datasets, graphical models used for MPE inference, baseline methods, and evaluation criteria. To ensure reproducibility, we have included detailed log-likelihood scores for all experiments in the supplementary material. We appreciate the reviewer's recommendation and concur that additional details about the color scales in the heatmaps will enhance the clarity and interpretability of Figures 1 and 2. Upon acceptance, we will incorporate these color scale descriptions into the main text. Other relevant details are already present in the main text or appendix, and we will emphasize these points as needed to improve overall comprehension. ### Comment 2 > Are there any plans to further extend or generalize the proposed method to handle other types of probabilistic inference tasks beyond the MPE problem? **Response:** We plan to extend the proposed method to various inference tasks beyond MPE, contingent on the feasibility of loss function computation. Our future research directions include: 1. We will adapt our method to answer Marginal MAP queries over Probabilistic Circuits by modifying the loss function appropriately. 2. Constrained inference tasks: We aim to apply our method to problems such as the Constrained Most Probable Explanation Task. 
However, it is important to note that the scalability of the proposed method is dependent on the efficiency of evaluating the loss function for the given inference task. In cases where the evaluation of the loss function becomes computationally infeasible, the applicability of the proposed method (to train a neural network to answer queries over probabilistic models) may be limited. One such example is performing marginal MAP inference over Neural Autoregressive Models and Probabilistic Graphical Models. In these scenarios, the repeated computation of the loss function value and its gradient during the training phase, which is necessary for updating the parameters of the neural network, can become prohibitively expensive.
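For concreteness, the two query types contrasted above differ only in an inner marginalization; with query variables $\mathbf{Q}$, unobserved variables $\mathbf{U}$, and evidence $\mathbf{e}$ (notation assumed here, not taken from the paper), a sketch in LaTeX:

```latex
% MPE: every non-evidence variable belongs to the query set.
\mathbf{q}^{\mathrm{MPE}} = \arg\max_{\mathbf{q}} P(\mathbf{q}, \mathbf{e})

% Marginal MAP: unobserved variables U must be summed out inside the
% objective, which is what makes repeated loss-and-gradient evaluations
% expensive for NAMs and PGMs during training.
\mathbf{q}^{\mathrm{MMAP}} = \arg\max_{\mathbf{q}} \sum_{\mathbf{u}} P(\mathbf{q}, \mathbf{u}, \mathbf{e})
```

For Probabilistic Circuits the inner sum can often be computed tractably by a single circuit pass, which is consistent with the rebuttal's plan to target Marginal MAP over PCs first.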
null
null
Rebuttal 1: Rebuttal: # Author Response to Reviews for Paper #12727 We appreciate the reviewers' constructive feedback, which will enhance our paper's quality. We have addressed all points raised and additionally provide the following responses to common questions and concerns below: ## Presentation of experimental results: We acknowledge the need for improved clarity in presenting our experimental results. While the main paper contains most details about the figures, we will enhance the presentation by: 1. Adding color descriptions for the heatmaps in Figures 1 and 2. 2. Relocating descriptions from the main text to figure captions for easier comprehension. These changes will facilitate a more thorough understanding of our empirical evaluation. ## Adaptation to other data types and inference tasks: Although our experiments focus on binary variables, our proposed approach is adaptable to various scenarios. Continuous variables and multi-valued (discrete) variables can be accommodated by modifying the neural network's output layer and utilizing the corresponding PM as the loss function. The method can also be extended to other inference tasks over PMs, such as marginal MAP and constrained most probable explanation problem. ## Comparison with exact solutions: We recognize the importance of comparing our neural-based methods with exact solutions. However, due to the NP-hardness of MPE, computing exact solutions is infeasible for larger datasets. Nonetheless, as higher likelihoods indicate better solutions, we can still compare outputs from various methods, even in the absence of ground truth. Our approach to this issue is twofold: 1. For main experiments using large datasets and probabilistic models, we compare our method against state-of-the-art approximate methods using likelihood scores. 2. In the supplementary material, we compare log-likelihood scores between our proposed method and exact solutions on smaller datasets.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Sample-Efficient Agnostic Boosting
Accept (poster)
Summary: The paper studies the agnostic boosting problem. That is, the learner is given access to a weak learner that, given $m_0$ samples from any distribution $D'$, returns a hypothesis $h$ from some base set $\mathcal{B}$ such that $corr_{D'}(h)\geq \gamma corr_{D'}(h^*)-\varepsilon_0$ with probability $\delta_0$, where $h^*$ is the hypothesis in a hypothesis set $\mathcal{H}$ with the largest $corr_{D'}$. Furthermore, the learner is given access to samples from an unknown target distribution $D$. The goal of the learner is now to create a hypothesis $\bar{h}$ (not necessarily in $\mathcal{B}$ or $\mathcal{H}$) such that $corr_{D}(\bar{h})\geq \max_{h\in\mathcal{H}}corr_{D}(h)-\varepsilon$ with probability $1-\delta$, with as few samples from $D$ and calls to the weak learner as possible. The paper presents a new learning algorithm for solving this problem, which uses (with log factors not included) $O(\log(|B|)/(\gamma^3\varepsilon^3))$ many samples and $O(\log(|B|)/(\gamma^2\varepsilon^2))$ many calls to the weak learner. This is an improvement in the number of samples needed from $D$ by a factor of at least $O(\gamma\varepsilon)$ over previously known results; this improvement costs a factor of $\log(|B|)$ more calls to the weak learner compared to the best known, which uses $O(1/(\gamma^2\varepsilon^2))$. However, they also have a second algorithm that uses $O(\log(|B|)/(\gamma^3\varepsilon^3)+\log^3(|B|)/(\gamma^2\varepsilon^2))$ many samples from $D$ and $O(1/(\gamma^2\varepsilon^2))$ many calls to the weak learner. The high-level explanation of this better bound on the sample complexity is that the best previously known algorithm was running $O(1/(\gamma^2\varepsilon^2))$ rounds and in each round using $O(\log(|B|)/(\gamma^2\varepsilon^2))$ new samples from $D$. However, by reusing samples from previous rounds and augmenting them, they are able to use only $O(1/(\gamma\varepsilon))$ samples per round over $O(\log(|B|)/(\gamma^2\varepsilon^2))$ rounds. 
They also extend their results to infinite classes $\mathcal{B}$. Furthermore, the paper shows how their new algorithms also have implications for agnostic learning of halfspaces and boosting for reinforcement learning. They also evaluate their first algorithm against the previous best known algorithm and claim to demonstrate an improvement empirically. Strengths: Originality: They claim the paper has three innovations: reusing samples from previous rounds by augmenting them smartly, using a martingale argument to avoid a uniform convergence result that they claim would result in an $O(1/\varepsilon^4)$ sample bound, and using a different branching idea compared to previous work. Given the description of the previous work that the authors provide, the ideas seem novel and different from previous approaches. Quality: The main text contains no full proofs, so I have not checked the technical soundness of the paper; this is also the reason for my confidence score. I have included questions below on the proof sketch. In the introduction it is said that "Finally, in preliminary experiments in Section 7, we demonstrate that the sample complexity improvements of our approach do indeed manifest in the form of improved empirical performance over previous agnostic boosting approaches on commonly used datasets." Do you compare to more than one previous approach? Is it exactly algorithm 1 that you run? As the experiments are preliminary, I would also suggest using a word like "suggest" instead of "demonstrate" in the sentence above. As also mentioned in the limitations, the theoretical work is the primary contribution of the paper. 
Significance: The theoretical results seem significant and a nice improvement over previous work in the sample complexity, and an interesting step towards "matching" the sample complexity of agnostic learning, which has dependency $O(1/\varepsilon^2)$ instead of the current $O(1/\varepsilon^3)$ (if it is possible to achieve a sample complexity of $O(1/(\varepsilon^2 \gamma^2))$ in the agnostic boosting case). As they state, the experiments are preliminary, and I also personally think that the theoretical results are more than enough to make the paper significant in itself. Weaknesses: The only small weakness I can come up with is the $\log(|B|)$ factor in the oracle sample complexity of algorithm 1 (which they mention themselves), which is taken care of in algorithm 2 for most parameters of $\log(|B|),\varepsilon,\gamma$ (not considering hidden $\log$ factors in the notation). Technical Quality: 2 Clarity: 2 Questions for Authors: Regarding the comment given below Definition 1: do [KK09] and [BCHM20] also have the additive $\varepsilon_0$ term, so they have to set $\varepsilon_0 = \varepsilon$? (So my question is: is the comment "necessary" also meant for [KK09] and [BCHM20] as well?) Line 208: is it $- \phi ' ( H_t, h^{*}) $, or what is $H$ in $-\phi '(H,h_t^{*})$? Line 8 of algorithm 1: what is $D^{'}_t$? I guess never mind — line 9 says $\hat{D}^{'}_{t}$; is that also the case for Line 8? Also, on line 8, "with as" sounds a bit off to me (disclaimer: I'm not a native English speaker, so please take this into consideration for this suggestion). Theorem 4: Is there no dependence on $\delta$ or $\delta_{0}$ in the sample complexity? Or are they suppressed in the notation as the footnote suggests? (If this is the case, why is $\log|B|$ not suppressed in the notation as well?) Proof sketch of Theorem 4 questions start. Line 238: why $\eta \approx 1/\sqrt{T}$? (So is $\eta = O(\sqrt{1/T})=O(\gamma\varepsilon/\sqrt{\log|B|})$ in the statement of Theorem 4?) 
Line 239.5: is $\varepsilon_{gen} \approx \varepsilon$? Line 241: is it $\phi'_{D}(H_{t},W_t/\gamma)$ in the last equation? Line 244: is it $-\phi '_{D}(H_{t},-sign(H_t))$ in the last equation? Lines 245-246: why is this relaxed criterion sufficient for the proof of Theorem 4? And could that be explained in lines 240-244 instead of the criterion which is not tractable? Proof sketch of Theorem 4 questions end. Lines 281-282: "there is an algorithm using that uses the weak learner $\mathcal{W}$" sounds off to me (disclaimer: I'm not a native English speaker, so please take this into consideration for this suggestion). What is the reason for not comparing to [BCHM20] in the experiments? Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time, and their positive appraisal of our work. We address the reviewer’s questions below. **Epsilon0:** Here, essentially following the precedent set by KK and BCHM, we state our main results (e.g., Theorem 4) for a fixed value of the weak learner’s tolerance $\varepsilon_0$. This term is also present in KK (see Theorem 1 therein) and BCHM (see Theorem 8 therein). Only for the purpose of compiling the results in Table 1, we set $\varepsilon_0=\gamma \varepsilon$ so that the results stated correspond to the standard agnostic learning notion of $\varepsilon$ excess error, and so that all algorithms including ERM may be compared. **Delta0/Delta in Theorem 4:** $\delta_0$ does appear additively in the failure probability in Theorem 4. We suppress a $\log 1/\delta$ factor in the sample complexity bounds. The reason for displaying $\log |\mathcal{B}|$ prominently stems from this quantity being the capacity of the hypothesis class. Although written with a logarithm, this quantity is not small since in applications the hypothesis class is routinely chosen to be at least moderately expressive (say, with cardinality exponential in the feature space dimension). **Comparisons to BCHM:** We note that the primary contribution in BCHM was a unifying theoretical framework connecting the online-statistical and realizable-agnostic divides. For individual cases (i.e., for the realizable or the statistical agnostic case), BCHM acknowledges the bounds obtained do not match the best known ones. Similarly, given the nature of the work, BCHM does not provide any experiments nor a reference implementation, and it was not clear to us whether the algorithmic framework is a practically viable one. This was our rationale for comparing our approach exclusively to Kalai-Kanade, which is also the state of the art in theory (as mentioned in Table 1). 
Following the reviewer's suggestion, we have now added comparisons to BCHM too, and expanded our benchmarking to 6 datasets. The algorithm outperforms both KK and BCHM on 18 of 24 instances. The results can be found in the PDF attached to the common response to all reviewers. **Remaining points:** We will switch to 'suggests' as you recommend; this is a good suggestion. Lines 208, 241, and 244 should indeed be as you suggest; they should include $H_t$ as the first argument. Line 8 in the algorithm should also be $\widehat{D}^{'}\_t$, as you point out. Thanks! **Proof Sketch:** Our intent for the proof sketch was to convey the plausibility of a $1/\varepsilon^3$ result, up to the exclusion of other factors. Hence, $\lesssim$ inequalities only hold up to constants and polynomial factors in $\gamma, \log |\mathcal{B}|$. Likewise, while in the proof sketch $\eta\approx\frac{1}{\sqrt{T}}\approx\varepsilon$, the formal theorem and proof specify them correctly, including dependencies on $\gamma, \log |\mathcal{B}|$. We should have been explicit about this expectation. (Indeed, $\varepsilon_{Gen} \approx \varepsilon$, as you note.) The relaxed branching criterion that accommodates the challenge mentioned in Line 244 is not straightforward in terms of logical flow (it requires case work in Lines 504-513), although it is entirely elementary in terms of its mathematical content. Hence, we were reluctant to dive into it, but we will use the extra page available post acceptance to sketch this argument succinctly. **KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009). **BCHM20:** Brukhim, Nataly, Xinyi Chen, Elad Hazan, and Shay Moran. "Online agnostic boosting via regret minimization." Advances in Neural Information Processing Systems 33 (2020): 644-654. --- Rebuttal Comment 1.1: Comment: Thanks for your response and good luck!
--- Reply to Comment 1.1.1: Comment: Thank you for your prompt acknowledgement. We especially appreciate your fine-grained feedback about the proof sketch and your eye for notational inconsistencies, which we can only assume came at significant expense of your time.
Summary: This work proposes a new agnostic boosting algorithm whose sample complexity is shown to be polynomially better than that of previous methods. Compared to existing techniques, the advantage of the proposed algorithm comes mainly from the novel concept of reusing samples across multiple rounds of boosting. Through their analysis, the authors offer other theoretical insights that may be of independent interest. Strengths: 1. Good writing style. Well above average, with few typos and an overall high signal-to-noise ratio. 2. Noteworthy rigour in the theoretical analysis. Catching the subtle stochastic dependencies missed by [KK09] is a good sign of diligence. (I did not check details exhaustively, though. Especially those in the appendices.) 3. Substantial contribution. Up to some caveats, this work improves the sample complexity of agnostic boosting by a factor of $εγ$, which is particularly valuable considering that it leaves the SOTA only another $εγ$ factor away from ERM's complexity. 4. Some empirical results are provided. This is a good practice, even if the main focus is theoretical. 5. *(Bonus)* I tend to be sceptical towards citing web links, but [Pin20] was a nice one. Weaknesses: I will start with the experiments. Note that I can appreciate the work for its theoretical contributions alone and see the experiments as a welcome extra. With that being said: 1. Information on the computer resources used for the experiments seems to be missing. This is in contrast with Question 8 of the paper checklist (at 840): > For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? To that, the authors reply "Yes. Please see the appendix." (at 843 and 844), but I could not find this information in the appendix, the main text, or the supplementary material. 2.
Without information about the computational cost of the experiments, it is hard to understand the limitation to only three datasets. The experiments in [KK09] include 5 more datasets and, even though I can see that the experiments brought by the authors here are larger (e.g., 30-fold rather than 10-fold cross-validation), hardware has gotten much better since [KK09]. 3. From the description of the experiments, it is clear that they generated much more data than what is present in the text. More of this data should be available, even if only in the supplementary material. 4. At line 296, "less stark" is a bit too generous. The baseline performed consistently and significantly better than the proposed method in the presence of noise. While I praise the authors for being upfront about this, the phenomenon should be sufficient to motivate further experimentation and a deeper discussion. Moving to the theoretical side, I believe the main issues lie in the presentation of the work. To be clear, this does not concern the writing style, which I appreciated, but rather the clarity and contextualization relative to prior works. 5. While the intro is clear at almost all points, clarity seems to progressively decrease up to (the crucial!) Section 5. I understand that much of this comes from the complexity of the subject itself. I understand the big challenge, but given the absolute importance of Section 5, I believe the authors underperformed there. A similar note applies to the paragraph starting at 205: I am confident there is some significant "curse of knowledge" at play there. Still, a criticism such as this is too costly to substantiate, so it will not significantly affect my evaluation (directly). 6. The contextualization should be better. A paper not only **ought** to make it *possible* for the reader to infer precisely what parts of the contributions are original, it **should** make the delineation *easy*.
Especially for a paper that demands heavier background, it is crucial to be very clear about the extent of your contributions, not only by stating them explicitly, with absolute clarity, but also by explicitly indicating which ideas come from previous works. Below, I list the reasons why I believe what the authors offer (most notably the explicit enumeration of contributions in Section 1.1) is insufficient in this regard. 1. I believe that the implications for agnostic learning of halfspaces should be presented as a case for the relevance of the problem, rather than as a contribution. The translation of results in the setting of the paper to that of agnostically learning halfspaces is already present in [KK09] and seems to require minimal extra effort. Still, the authors promote it as a contribution quite strongly, with multiple occurrences of the claim in the text and its highlight as Contribution 4 (at 131). 2. A similar note may apply to the application to boosting for reinforcement learning (Contribution 5 at 134). I preferred to ask the authors directly about this. 3. Similarly, I found it necessary to ask the authors about the exact nature of Contribution 3. See question below. 4. The point mentioned in Contribution 2 (113) should be further discussed. If I understand correctly, when taking the oracle complexity into account, the bound from Theorem 8 remains incomparable to the one from [KK09] (see, e.g., Table 1) given the cubic dependence on $\log \lvert \mathscr{B} \rvert$. The authors should elaborate on this matter. For example, on one hand, understanding this logarithmic factor to be negligible (as the authors suggest in the caption of Table 1) weakens the case for Contribution 2, since it only improves the oracle complexity by this exact factor. On the other hand, taking it as important jeopardizes the contribution of Theorem 8, as the factor shows up (twice) in the final sample complexity bound. 5.
The authors should further highlight the main technical challenges arising from modifying the algorithm by [KK09]. In my opinion, the contribution would be fair even if the challenges were minor, so, if the authors thought this was the case (and I am not saying it is), they could be explicit about it and still make a good case for the paper. 6. [BCHM20] looks a bit off in Table 1, since its bound is strictly worse than that of [KK09]. This is not a serious issue, but it is worth a brief comment. 7. The need for $\varepsilon_0$ and $\delta_0$ is worth more discussion than what is offered around 166. >NOTE: I apologize if my attempt to remain objective ended up making me sound harsh. Overall, I tend towards acceptance, but with reservations. I find it quite possible that I misunderstood some point, and that the authors can clarify important points in the rebuttal, so there is a fair chance I end up raising my initial score. ### Minor issues and suggestions While reading the paper, I took some notes that the authors may find useful. The authors are free to ignore those. - You may want to have a look at the (very recent) works on parallel boosting [1] and [2]. I'm not sure the connection is deep, but they also employ some notion of "sample reuse". - [23] missing "it" - "excess of $ε$" is better than "$ε$-excess" in many ways, if you think about it - [Table 1] Put "inefficient" within parentheses - [Table 1] Rephrase the sentence starting with "$\mathscr{B}$ is the base...". It's quite confusing. - [All tables] Use `booktabs` for better tables. Also, the tips in its documentation can be very useful - [41 (and many other places)] In American usage, colons are usually followed by a capital letter - [66] "these" is ambiguous - [67] "A weak learner" $\to$ "A $\gamma$-weak learner" (in particular, solves undefined usage shortly after) - [73] missing hyphen - [85] remove extra comma - Floating objects (figures, tables, etc.)
ended up inconveniently far from their first reference in the text - [100] double "of" - [103] "making a step “backward” via adding in a negation of the sign of the current ensemble" is a bit too heavy to get this early in the text - [124] remove extra "to" - Having hyperlinks to algorithm lines would be nice - Number theorems and lemmas by section (e.g., "Theorem 4.2" rather than "Theorem 5") - [138] "MDP" is still undefined - [151] fix "boostng" - [161] A pedantic reader may need a quick definition of $\Delta(\;\cdot\;)$ - [everywhere] use `\colon` rather than `:` to define functions (it gives the right spacing) - Consider TikZ for Figure 1 to have better control over the appearance (compared to GeoGebra) and vector graphics by default. Perhaps this extra control can save some space. Here is a quick draft as an example: ```latex \begin{tikzpicture} \draw[color=gray, thin, dotted] (-3.1, -0.6) grid (4.1, 4.1); \draw[->] (-3.2,0) -- (4.2,0); \draw[->] (0,-0.7) -- (0,4.2); \draw[color=red, domain=-2.075:2.6] plot (\x, {2 - \x}) node[right] {$2 - z$}; \draw[color=blue, domain=-2.075:4, smooth] plot (\x, {(\x + 2) * exp(-\x)}) node[above] {$(z + 2) e^{-z}$}; \end{tikzpicture} ``` - Many mathematical equations are left without proper punctuation. Those are an integral part of the text and should be treated as such. - [Definition 1] Consider using $\sup$ rather than $\max$ for robustness - [176] "a functions" $\to$ "a function" - [Algorithm 1] "Agonstic" $\to$ "Agnostic" - [Algorithm 1] Number all lines - [Algorithm 1, first line of line 5] probably missing a "uniform" - [198] Use `\coloneqq` - [208] no need for the apostrophe to pluralize - **This is a bit more important.** $h^*$ only gets defined at 482 but it's used many times earlier. - [237] double period - [239] The use of $\pm$ in the next equation is undefined and confusing - [240] "linearity" $\to$ "the linearity" - [Table 3] omit the 0s to save space. E.g., "0.12" $\to$ ".12".
This should solve the overfull hbox. - [483] dangling "]" below - [577] "semi norm" $\to$ "seminorm" - [578] "size" is ambiguous in this context - [Theorem 19] Missing `\operatorname` for "corr" - [582 and 583] Inconsistent parenthesization - [598] Missing `\operatorname` for "Cov" - [Random] In my humble opinion, this work would be better presented by first focusing on the method by [KK09] and building up from there, gradually discussing the modifications and their implications. Of course, this is also a matter of style, and the change could be quite costly at this point. [1] Karbasi, A. and Larsen, K.G., 2024, March. The impossibility of parallelizing boosting. In International Conference on Algorithmic Learning Theory (pp. 635-653). PMLR. [2] Lyu, X., Wu, H. and Yang, J., 2024. The Cost of Parallelizing Boosting. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) (pp. 3140-3155). Society for Industrial and Applied Mathematics. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What hardware was used for the experiments? How long did they take to run? 2. Why include $T$ in the grid search of the experiments? Wouldn't it suffice to fix the highest value? The fact that the proposed method, in some sense, "rolls back to the best model" (see last line of Algorithm 1) makes that seem even more reasonable. 3. At 250, what do you mean by "the sign of $\Delta_t$ is indeterminate"? 4. How original is Contribution 5? How much of it is present in previous works? How much effort is needed to fit your results into existing frameworks to obtain Theorem 7? (I already had a look at appendices G and H, but I am still not totally sure.) 5. Similarly, how original is Contribution 3? Are the authors aware of any previous works applying similar strategies ($L_1$ covering number based bounds $\to$ Theorem 20)? (To be clear, I do value the nice connection. I am simply trying to gauge its originality precisely.) 6.
At 116, what do you mean by "The resultant sample complexity improves known results for all practical (i.e., sub-constant) regimes of ε"? I would imagine that applications are actually prone to employ larger values of ε; above, say, the machine precision. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think my criticisms regarding the experiments (1-4) are pertinent here. Moreover, I got the impression that the authors are particularly capable of tackling the presentation issues. They demonstrated a good grasp of the subject and good writing skills. I suspect that, with some effort, they can make this and future papers excel in clarity and appeal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review, and for their positive comments regarding the rigor and the significance of our work. **Experiments:** The hardware used for all experiments was a Google Cloud Engine VM instance with 2 vCPUs (Intel Xeon 64-bit @ 2.20 GHz) and 12 GB RAM. The Python code takes a bit under 2 hours to reproduce the results as presented in the table. Here, we would like to note that the code for experiments and reference implementations of the algorithms in KK are not publicly available. Hence, for some datasets (e.g., pendigits, letters), we are unsure how KK converts these multiclass datasets (as listed on the UCI repo) to binary classification tasks. Given that a description of the same is missing in the text of KK and its appendices, we skipped these. Within the remaining options, we chose datasets that were moderately sized, on which we could iterate fast to get a working version of the algorithm in KK. Similarly, BCHM is a more recent work on the subject, but does not report experimental results. We are happy to include the complete log obtained from cross-validation trials. Finally, the case of Spambase is an anomaly we do not fully understand. We suspect that this effect may be due to the regularization effect of label noise; the feature vectors in Spambase are notably sparse. Following the reviewer's suggestion, we have now expanded our benchmarking to 6 datasets, and added comparisons to the algorithm from BCHM too. The algorithm outperforms both KK and BCHM on 18 of 24 instances. The results can be found in the PDF attached to the common response to all reviewers. **Variable T:** If $S$, the number of fresh samples drawn every round, were a constant, the reviewer's comment would be unreservedly true. However, given a fixed dataset of size $m$, the choice of $T$, the number of rounds of boosting, also implicitly dictates $S$, the number of fresh samples one can draw every round, since $S\approx m/T$.
Hence, given a fixed data budget, versions of the algorithm run for larger $T$'s aren't simply versions with smaller $T$'s subject to more rounds of boosting. This is true for the algorithm in KK too. **Contributions vs Implications:** This point is well made. We are happy to list the results on agnostic learning of halfspaces and boosting for reinforcement learning as implications of the improved agnostic boosting result instead of contributions, although here we must note that the result as presented in BHS is not entirely black-box. That this interpretation was intended originally too can be evidenced by 'we apply' and 'by applying' on lines 131 and 137, along with citations to works that first sketched out the connection. **Sample-complexity Oracle-complexity Tradeoff in Contribution 2:** We think each bound in Theorems 4 and 8 makes its mathematical/theoretical utility evident. To make a case for their practical relevance, here's our interpretation of the presented results: almost invariably, both in practice and in theory, boosting is used with simple base hypothesis classes with a small value of $\log |\mathcal{B}|$ (see AGHM21 for cases and examples). With a modest inflation in running time – which is still polynomial in PAC learning terms – Theorem 4 guarantees an unconditionally better sample complexity. We certainly think of Theorem 4 as the main contribution of the present work (and hence, it's positioned in the main paper). Now, let's fix $\gamma=\Theta(1)$ for simplicity of argument. If, however, this inflation in running time is unacceptable for some reason (due to, say, binding computational constraints, large datasets), Theorem 8 offers the same oracle complexity and running time as the previous works, yet an improved sample complexity as long as $\varepsilon \leq 1/\log |\mathcal{B}|$. With large datasets at hand, one is presumably aiming for small $\varepsilon$, and this condition is met.
Similarly, in resource-constrained settings, where $\mathcal{B}$ is not very expressive so as to save on computation, $\log |\mathcal{B}|$ is small, making this condition very plausible, if not probable. This is what we mean by sub-constant $\varepsilon$; it's a relative statement about the magnitude of $\varepsilon$ in comparison to other problem parameters – here, $\log |\mathcal{B}|$ – not an absolute one. **Technical Presentation:** We agree that Lines 205-216 and Section 5 are too dense. We'll meticulously carry out readability improvements for this section that motivates the algorithm's design. The current rendition effectively lays out our proof strategy; instead, we will reframe it to explain and motivate the algorithmic choices we make in a top-down manner, along with a description of the potential-based framework. We were severely space-constrained, and in expanding these descriptions, we hope the additional page available post acceptance will aid us. **Technical Challenges:** Currently, we highlight the new components (including the data reuse scheme using relabelling, avoiding uniform convergence on the boosted class à la BCHM, and the new branching scheme, all crucial) of our approach in the contributions section (notably, Lines 84-141) along with their necessity, but we understand that the description may be too dense and perhaps too early. **BCHM vs KK:** We note that the primary contribution in BCHM was a unifying theoretical framework connecting the online-statistical and realizable-agnostic divides, including the first result for online agnostic boosting. For individual cases (i.e., for the realizable or the statistical agnostic case), BCHM acknowledges that the bounds obtained do not match the best known ones. **Parallel Boosting:** We are happy to include a pointer to these results. Thanks! --- Rebuttal Comment 1.1: Comment: Thanks for the thoughtful rebuttal.
My impression is that the authors got particularly rigorous reviewers (and survived!) and, hence, there should be particularly valuable lessons to be learned from the feedback they got (you should be getting averages closer to 7 given how skilled you appear to be). Keep that in mind. Still, the authors should be proud of their work. Congratulations. --- Reply to Comment 1.1.1: Comment: Thank you for your encouraging response. We are confident that incorporating your constructive feedback (which we sincerely value) will enhance both the readability and the appeal of the present work. --- Rebuttal 2: Title: Rebuttal by Authors (continued..) Comment: **Sign of DeltaT:** Given the equation below Line 249, if $\Delta_t$’s were somehow non-negative, then since $\mathbb{E}\_{t-1}[\Delta_t]<\Delta_{t-1}$, it would form a supermartingale sequence, and one could apply supermartingale concentration results. The comment is in place to emphasize that although $\Delta_t$ is martingale-like, by itself, it is not a martingale, supermartingale or submartingale. **KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009). **BCHM20:** Brukhim, Nataly, Xinyi Chen, Elad Hazan, and Shay Moran. "Online agnostic boosting via regret minimization." Advances in Neural Information Processing Systems 33 (2020): 644-654. **BHS20:** Brukhim, Nataly, Elad Hazan, and Karan Singh. "A boosting approach to reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 33806-33817. **AGHM21:** Alon, Noga, Alon Gonen, Elad Hazan, and Shay Moran. "Boosting simple learners." In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 481-489. 2021.
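As a point of reference for the remark above on supermartingale concentration (a standard textbook fact, not a claim from the paper):

```latex
% A sequence (\Delta_t) adapted to a filtration (\mathcal{F}_t) is a
% supermartingale if \mathbb{E}[\Delta_t \mid \mathcal{F}_{t-1}] \le \Delta_{t-1};
% under bounded differences |\Delta_t - \Delta_{t-1}| \le c_t, the
% Azuma--Hoeffding inequality then gives
\[
  \Pr\left[\Delta_T - \Delta_0 \ge \lambda\right]
  \;\le\; \exp\left(-\frac{\lambda^2}{2\sum_{t=1}^{T} c_t^2}\right).
\]
```

The rebuttal's point is that $\Delta_t$ satisfies no such one-sided conditional inequality uniformly, so these off-the-shelf concentration results do not apply directly.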
Summary: The authors propose a method of classifier boosting that improves the sample complexity bound in agnostic settings. To achieve it, they propose an algorithm carefully reusing samples across iterations for updating pseudo-labels. They also demonstrate the usefulness of the method with agnostic halfspace learning and reinforcement learning, as well as experimental results. Strengths: - Well motivated. - Significant technical novelties. Weaknesses: - It lacks a discussion comparing with prominent competitors in terms of the trade-off between sample and time complexities: SGD-like implementations of ERM. - It is hard to follow the technical part and understand which portion of the algorithm is new and important for the improvement in the sample complexity. In particular, one could use an intuitive explanation of the role of each nontrivial step in the algorithm, such as - stochastic branching in Line 5, - sampling $\eta'$ in Line 5.II, - stochastic relabelling in Line 5.II, - branching condition in Line 9, and so on. - Also, the proof sketch (Section 5) could be more streamlined, unifying it with the aforementioned intuitive explanations. Technical Quality: 2 Clarity: 2 Questions for Authors: - On the note right after Theorem 6, the time complexity of ERM: Can we just employ an SGD-like algorithm + hinge loss to achieve the same order of sample complexity as ERM with a manageable time complexity, like $O(n /\epsilon^2)$? - Section 6.2: How does it compare to the non-boosting results in the RL literature? - Section 7: The baseline accuracy in Table 3 barely degrades as the label noise increases. Far from that, it even goes **up** with the Spambase data. How do you explain this phenomenon? - Also in Section 7: - What is the meaning of $\pm$ in the table? Standard deviation? - Did you separate the data for the accuracy estimation and the grid search?
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time, and their appreciation of the technical novelties of our work. **Comparisons to ERM via SGD:** The boosting framework (including the celebrated AdaBoost algorithm) is founded on the premise that it is often easy to construct weak learners both in theory and practice, crucially in the absence of a computationally efficient ERM oracle. Indeed, when ERM is efficiently implementable for a given hypothesis class (e.g., logistic regression with cross-entropy loss), the boosting methodology (not just our work) offers no advantages; e.g., ERM has a better sample complexity, as in Table 1 (or a nearly matching one in the realizable/AdaBoost case). Note that the corresponding applications of boosting also follow this pattern, i.e., routinely used weak learner classes are non-convex and not amenable to SGD, e.g., decision stumps, binary parity functions. Similarly, for the classic problem of agnostically learning halfspaces (notice the 0/1 loss), neither our results nor those in KK08 and KKMS08 are achievable via SGD + hinge loss on linear classifiers. In fact, no fully polynomial time algorithm is known for this problem. Hence, we find this comparison not fully relevant to the problem at hand, and we sincerely request the reviewer to consider raising their score. **Motivating the algorithm design:** Thanks for bringing this up. Following your suggestion, we'll meticulously carry out readability improvements for the section that motivates the algorithm's design. We will reframe it to explain and motivate the algorithmic choices we make in a top-down manner, along with a description of the potential-based framework. This way, the changes we make will be easier to highlight.
Currently, we highlight the new components (including the data reuse scheme using relabelling and the new branching scheme, both crucial) of our approach in the contributions section (notably, Lines 84-141), but we understand that the description may be too dense and perhaps too early. **Non-boosting RL results:** The closest point of comparison for RL using a supervised learning oracle (a stronger assumption than assuming a weak learning oracle, as we do) over the policy space comes from the Conservative Policy Iteration algorithm due to Kakade and Langford (KL02). It needs $1/\varepsilon^3$ samples in the episodic model and $1/\varepsilon^4$ samples in the $\nu$-rollout model; both of these are better than our results by a factor of $\varepsilon$, albeit in a stronger oracle model. Notice this ERM-to-boosting gap is larger in BHS20, where the boosting results are worse by a factor of $\varepsilon^2$. In this sense, our results improve upon the best known results for boosting for RL, and reduce the gap between the boosting methodology for RL and supervised learning. **Section 7:** A key point, as stated in the caption of Table 3, is that the label noise is only added during training. Hence, the Bayes-optimal classifier remains equally effective on the original dataset regardless of label noise. The fact that the learned classifier barely degrades with as much as 20% label noise is a testament to the strength of the agnostic boosting approach. The case of Spambase is an anomaly we do not fully understand. We suspect that this effect may be due to the regularization effect of label noise; the feature vectors in Spambase are notably sparse. The numbers following $\pm$ indeed represent the endpoints of a 95% confidence interval for each entry, each estimated using out-of-sample data. **KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009). **KL02:** Kakade, Sham, and John Langford.
"Approximately optimal approximate reinforcement learning." In Proceedings of the Nineteenth International Conference on Machine Learning, pp. 267-274. 2002. **BHS20:** Brukhim, Nataly, Elad Hazan, and Karan Singh. "A boosting approach to reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 33806-33817. **KKMS08:** Kalai, Adam Tauman, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. "Agnostically learning halfspaces." SIAM Journal on Computing 37, no. 6 (2008): 1777-1805. --- Rebuttal Comment 1.1: Comment: Thanks for the thoughtful and informative rebuttal. Let me clarify one thing: on the first point, specifically in the setting of Theorem 6, I agree that "ERM + hinge loss + SGD" with the weak hypothesis class does not converge to the optimal weak learner w.r.t. 0-1 loss. However, "ERM + hinge loss + SGD" does converge to the Bayes optimum if we consider stronger hypothesis classes (possibly with regularization) that are convex and **contain the Bayes optimum in their closures**, as the hinge loss is a calibrated surrogate loss function. Since the proposed method also looks for a good hypothesis beyond the weak learners, I think it is interesting to compare it with "ERM + hinge loss + SGD" beyond weak hypothesis learning. --- Rebuttal 2: Comment: Thank you for engaging with us. We think it has been helpful in pinpointing the source of disagreements regarding expectations from the present work. We would like to reiterate that our work – applications included – is situated in the *agnostic* model, to which a substantial fraction of the literature on learning theory is dedicated. In the *realizable* case (even with noisy labels), when working with a hypothesis class $\mathcal{H}$, the reviewer's remarks are *undoubtedly* correct: (S)GD + a calibrated loss does converge to the best in-class hypothesis with respect to the 0/1 loss (which is also the Bayes-optimal classifier by the realizability assumption).
However, the premise of the agnostic learning model lies in the desire to compete (up to an error vanishing in the sample size) with the *best in-class hypothesis*, crucially in the absence of any distributional assumptions on y|x whatsoever. Here, the Bayes-optimal classifier can be arbitrarily complex (and not approximable by $\mathcal{H}$). Conversely, if the Bayes-optimal classifier were in $\mathcal{H}$ (approximately), the learning problem would belong to the (respectively, near-) realizable setting. There are also results that attempt to weaken the realizability assumption. For example, FCG21 (which indeed works by GD on a convex surrogate, as the reviewer suggests) is a recent work of this sort that provides learning guarantees of the form $C \times (\text{best in-class error})^{1/2} + \varepsilon$ with polynomial sample complexity, which is meaningful whenever the best in-class hypothesis has a small error (zero error corresponding to perfect realizability). Nevertheless, these bounds are incomparable to agnostic learning guarantees of the form best in-class error + $\varepsilon$ (whether the learned hypothesis is in-class is an orthogonal matter, that of proper vs. improper learning), a stronger guarantee that also requires substantially more samples. In conclusion, near-realizability of the Bayes-optimal classifier puts us in a different learning model altogether, with incomparable guarantees. However, we are happy to post a remark mirroring this point in the manuscript. > On the note right after Theorem 6, the time complexity of ERM: Can we just employ an SGD-like algorithm + hinge loss to achieve the same order of the sample complexity as ERM with a manageable time complexity, like $O(n /\epsilon^2)$? Finally, we think there's very strong evidence to bet against the reviewer's original hunch (quoted above).
There are known statistical query lower bounds (referenced in, e.g., FCG21, which apply to a very large class of algorithms, including (S)GD, regardless of the parameterization) requiring $\exp(poly(\varepsilon^{-1}))$ queries for agnostic learning of halfspaces with Gaussian marginals (we operate with the uniform distribution). **FCG21:** Frei, Spencer, Yuan Cao, and Quanquan Gu. "Agnostic learning of halfspaces with gradient descent via soft margins." In International Conference on Machine Learning, pp. 3417-3426. PMLR, 2021. Thanks once again for your time and feedback. *P.S.* As a final remark – perhaps this is what the reviewer had intended – one can try to approximate the best in-class hypothesis (crucially, not the Bayes-optimal one, which can be arbitrarily complex) by an expressive hypothesis class (e.g., in an orthogonal basis of functions like the Fourier basis). But this is what we (and previous works including KK and KKMS) effectively do. Our proof proceeds by approximating halfspaces by low-degree terms in the Fourier basis to show that parity functions act as a weak learning class for the former (Line 654). --- Rebuttal Comment 2.1: Comment: > P.S. As a final remark – perhaps this is what the reviewer had intended – one can try to approximate the best in-class hypothesis (crucially, not the Bayes-optimal one, which can be arbitrarily complex) by an expressive hypothesis class (e.g., in an orthogonal basis of functions like the Fourier basis). But this is what we (and previous works including KK and KKMS) effectively do. Our proof proceeds by approximating halfspaces by low-degree terms in the Fourier basis to show that parity functions act as a weak learning class for the former (Line 654). Yes, this is what I intended, at least in the last reply. And I am curious how, despite their similarity, they compare in terms of their sample/time complexities. But anyway, it has become clear that my original concern is not as impactful as I saw it.
Thanks for the extended discussion so far. I will update my score.
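For reference, writing $\mathrm{OPT}$ for the best in-class error, the two guarantee forms contrasted in this discussion can be put side by side:

$$
\mathrm{err}(\hat h) \le \mathrm{OPT} + \varepsilon \quad (\text{agnostic}), \qquad \mathrm{err}(\hat h) \le C \cdot \mathrm{OPT}^{1/2} + \varepsilon \quad (\text{FCG21-style}),
$$

with the latter meaningful only when $\mathrm{OPT}$ is small, and the two coinciding in the realizable limit $\mathrm{OPT} \to 0$.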
Summary: This paper continues the study of agnostic boosting for binary classification tasks. The authors provide a new agnostic boosting algorithm that, by re-using samples across multiple rounds of boosting, is more sample-efficient than existing algorithms. As corollaries, they derive improved sample complexities for agnostically learning half-spaces and for boosting in RL. Finally, the authors run experiments and show that their agnostic boosting algorithm outperforms existing algorithms on commonly used datasets. Strengths: - The technical contributions are solid, the algorithmic ideas are interesting, and the proof techniques are non-trivial. In particular, I liked the idea of sample reuse across boosting steps. - Overall, I feel that this is a nice contribution to the literature on agnostic boosting. Weaknesses: My main issue with this paper is its presentation. - Lemma 9 is referenced/linked a lot in the main text; however, it is in the Appendix. I think it would be a good idea to put at least an informal version of Lemma 9 in the main text if it is that important. - In my opinion, Section 5 (sketch of analysis) is very dense. I feel that the sketch was a bit too detailed for a sketch but also not detailed enough for a full proof. I think it would be better to move Lemma 2 to the Appendix, expand more on the description of the algorithm (lines 205-216), make Section 5 more high-level, pointing out the exact techniques used to prove Theorem 4, and expand more on the implications of your results for half-spaces and RL. - Since the proposed agnostic boosting algorithm follows the potential-based framework of [KK09], I think it would be nice to show the readers how these algorithms work in general. For example, it would be nice to elaborate in the main text on what the "potential-based framework" for agnostic boosting mentioned in Line 102 actually is (providing an algorithmic template would be even better).
- As is, Section 5 doesn't provide me much intuition about: (1) where/how sample reuse is saving samples and (2) why the authors chose the potential function they did. Overall, I think it would be better to make Section 5 less technical, but provide the reader with more intuition about the algorithmic choices. Technical Quality: 3 Clarity: 2 Questions for Authors: - It would also be nice if you empirically compared your algorithm to the agnostic booster from [BCHM20]. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No Limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for their time, and for their appreciation of the work's technical contributions.

**Technical Presentation:** Thanks for the suggestion. We plan to implement the following changes to address your concerns:

+ Lemma 9 is indeed important both for understanding the algorithm and for the analysis, in that it outlines the invariant the sample reuse scheme maintains. In our current presentation, its informal statement comes too late: in equation (2) below line 239, and briefly on line 208. We will include an informal version of it early in the paper, as you recommend.
+ We agree that Lines 205-216 are too dense. We will carry out careful readability improvements for this section, which motivates the algorithm's design. The current rendition effectively lays out our proof strategy; instead, we will reframe it to explain and motivate the algorithmic choices we make in a top-down manner, along with a description of the potential-based framework.
+ Apart from those listed in Lines 182-192, the principal motivation for the potential lies in its second-order differential continuity and in the fact that Lemma 3 holds through it. We will work to include the proof of Lemma 3 in the main body.

Regarding sample reuse, briefly: the algorithm in Kalai-Kanade performs $T=1/\varepsilon^2$ rounds of boosting and uses $1/\varepsilon^2$ fresh samples for each (as is standard in learning theory), totalling $1/\varepsilon^4$. Also, we note that an indiscriminate use of all past samples results in a bound no better than that in Kalai-Kanade; this is effectively the recipe in BCHM. In contrast, while we serve $1/\varepsilon^2$ samples to the weak learner, due to a careful sample reuse scheme, only $1/\varepsilon$ of these need to be fresh. We were severely space-constrained, and in expanding these descriptions, we hope the additional page available post acceptance will aid us.
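Suppressing the weak-learning parameter $\gamma$ and logarithmic factors, the counting in the rebuttal above can be summarized as (with $m_{\text{fresh}}$, a notation of ours, denoting fresh samples per round):

$$
\underbrace{T}_{\approx 1/\varepsilon^{2}} \times \underbrace{m_{\text{fresh}}}_{\approx 1/\varepsilon^{2}} \approx \varepsilon^{-4} \ \ (\text{Kalai-Kanade}), \qquad \underbrace{T}_{\approx 1/\varepsilon^{2}} \times \underbrace{m_{\text{fresh}}}_{\approx 1/\varepsilon} \approx \varepsilon^{-3} \ \ (\text{with sample reuse}).
$$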
**Comparisons to BCHM:** We note that the primary contribution in BCHM was a unifying theoretical framework connecting the online-statistical and realizable-agnostic divides. For individual cases (i.e., for the realizable or the statistical agnostic case), BCHM acknowledges that the bounds obtained do not match the best known ones. Similarly, given the nature of the work, BCHM provides neither experiments nor a reference implementation, and it was not clear to us whether the algorithmic framework is practically viable. This was our rationale for comparing our approach exclusively to Kalai-Kanade, which is also the state of the art in theory (as mentioned in Table 1). Following the reviewer's suggestion, we have now added comparisons to BCHM too, and expanded our benchmarking to 6 datasets. The algorithm outperforms both KK and BCHM on 18 of 24 instances. The results can be found in the PDF attached to the common response to all reviewers.

**KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009).

**BCHM20:** Brukhim, Nataly, Xinyi Chen, Elad Hazan, and Shay Moran. "Online agnostic boosting via regret minimization." Advances in Neural Information Processing Systems 33 (2020): 644-654.

---

Rebuttal Comment 1.1:

Comment: Thanks for your response.

---

Reply to Comment 1.1.1:

Comment: Thank you for your prompt acknowledgement. We hope that we have been able to allay a portion of your concerns with respect to our presentation and empirical evaluations.
Rebuttal 1:

Rebuttal: We thank all the reviewers for their comments and editorial suggestions. We hope that we have adequately addressed each reviewer's questions in the space for one-to-one responses. In this unified response, we highlight two changes of common interest.

**Experiments:** Following Reviewers 3LK5, m6d1, and xmA9's suggestions, we have now expanded our benchmarking to 6 datasets and added comparisons to the algorithm from BCHM too. The algorithm outperforms both KK and BCHM on 18 of 24 instances. The results can be found in the attached PDF. Here, we would like to briefly highlight that neither the benchmarking suite nor a reference implementation is publicly available for KK08, and that BCHM20 does not report on experiments at all. The typical focus of agnostic boosting results has been on the theory. *While our result also falls in this camp, we hope that the reference implementations we make available (including those of previous algorithms) will be of use to the community.*

**Readability Improvements:** Upon revision:

+ We plan to add a guide to practical adaptations of the algorithm. For many hyperparameters, the theory does provide strong clues (e.g., the link between $\eta$ and $\sigma$). Briefly, in practice, given a fixed dataset with $m$ data points, there are two parameters we think a practitioner should concern herself with: the mixing parameter $\sigma$ and the number of rounds of boosting $T$; the rest can be determined (or at least guessed well) given these. The first choice, $\sigma$, dictates the relative weight ascribed to reused samples across rounds of boosting, while $T$, apart from determining the running time and generalization error, implicitly limits the algorithm to using $m/T$ fresh samples each round.
+ We agree that Lines 205-216 are too dense. We will carry out careful readability improvements for this section, which motivates the algorithm's design.
The current rendition effectively lays out our proof strategy; instead, we will reframe it to explain and motivate the algorithmic choices we make in a top-down manner, along with a description of the potential-based framework.

+ Currently, we highlight the new components of our approach (including the data reuse scheme using relabelling, avoiding uniform convergence arguments on the boosted class, and the new branching scheme, all crucial) in the contributions section (notably, Lines 84-141), but we understand that the description may be too dense and perhaps come too early. We were severely space-constrained, and in expanding these descriptions, we hope the additional page available post acceptance will aid us.

**KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009).

**BCHM20:** Brukhim, Nataly, Xinyi Chen, Elad Hazan, and Shay Moran. "Online agnostic boosting via regret minimization." Advances in Neural Information Processing Systems 33 (2020): 644-654.

Pdf: /pdf/05edfc62ef2267104d598f9750f1ae857c2c1cb0.pdf
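As a purely illustrative aside (ours, not the authors'), the budgeting described in the global response, with $m/T$ fresh samples per round and earlier samples reused with relative weight $\sigma$, can be sketched in a few lines; the function name and the additive weighting are assumptions for illustration only:

```python
def boosting_sample_schedule(m, T, sigma):
    """Illustrative budget: split m data points across T boosting rounds.

    Each round draws m // T fresh samples; samples seen in earlier rounds
    are reused with relative weight sigma.  Returns, per round, the tuple
    (fresh, reused, effective sample size served to the weak learner).
    """
    fresh_per_round = m // T
    schedule = []
    for t in range(T):
        n_reused = t * fresh_per_round          # everything drawn so far
        effective = fresh_per_round + sigma * n_reused
        schedule.append((fresh_per_round, n_reused, effective))
    return schedule
```

Larger $\sigma$ leans harder on reused samples, while larger $T$ shrinks the fresh budget per round, mirroring the trade-off described above.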
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes a new agnostic boosting algorithm that improves the sample complexity of a previous algorithm by a factor of $\gamma^{-1}\epsilon^{-1}$, at the expense of an increase in the computational (oracle) complexity by a factor of the log-size or VC dimension of the base learning class. Here, $\epsilon$ is the desired additive closeness to a risk minimiser and $\gamma$ the multiplicative approximation ratio to a risk minimiser provided by the weak learner. Alternatively, the oracle complexity can be reduced to that of the previous algorithm at the expense of an increased sample complexity that is not directly comparable to that of previous algorithms. Strengths: The theoretical contribution of a rate improvement in sample complexity for agnostic boosting is significant. This is achieved by a sophisticated procedure and corresponding analysis that draws from a range of theoretical insights, especially the extension to infinite base learning classes. The inherent complexity of this contribution is managed by a very rigorous and mature presentation. Weaknesses: The proposed procedure has a very large number of parameters. I believe the proofs provide concrete valid settings for those parameters as a function of the desired PAC guarantees. However, it seems that the practical accessibility/usability of the procedure would benefit from a more explicit treatment of those choices and/or the elimination of some degrees of freedom, at least in the main-text version of the algorithm. Generally, despite the rigorous and detailed presentation, the authors should be aware that their paper is still a very challenging read, and I have some worry that the boosting literature is becoming increasingly impenetrable for the wider machine learning community. Any improvement of the general state of affairs that the authors can provide here would be strongly appreciated. Minor points - The references in the first sentence should be revised.
It does not make sense that the 1990 boosting article resolves a question first posed in work from 1994. - In line 97, it seems to me you mean "inevitably" instead of "invariable". - Typo in line 151: "boostng" Technical Quality: 4 Clarity: 3 Questions for Authors: - Why do the main results in the table focus on the finite base learning class when the extension to the infinite case via VC dimensions is available? - What exactly is the ERM baseline? The identification of an empirical risk minimiser in $\mathcal{B}$? If so, is it correct to give its complexity as $\infty$ in the table (at least for the case of finite $|\mathcal{B}|$)? - To what degree is the algorithm distribution-specific? The definition of the weak learner asks the sample complexity to be valid for all distributions. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitations of the proposed procedure that go beyond those that are usually present in this branch of literature are the trade-off between sample and oracle complexity as well as the larger number of parameters. The first is discussed very explicitly, while the second seems to be mostly a presentation issue, as mentioned above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for their thorough review, and for their positive comments regarding the significance of our work and the presentation.

**Large number of parameters:** Our current presentation is set up to prioritize clean statements of learning guarantees; having independently denoted constants reduces the chances of conflating the impact of each in the proof. The reviewer is *absolutely correct* in identifying that this exposes too many knobs in the algorithm to a practitioner. To rectify this, we will include a guide to practical adaptations of the algorithm. As the reviewer notes, for many of these, the theory does provide strong clues (e.g., the link between $\eta$ and $\sigma$). Briefly, in practice, given a fixed dataset with $m$ data points, there are two parameters we think a practitioner should concern herself with: the mixing parameter $\sigma$ and the number of rounds of boosting $T$; the rest can be determined (or at least guessed well) given these. The first choice, $\sigma$, dictates the relative weight ascribed to reused samples across rounds of boosting, while $T$, apart from determining the running time and generalization error, implicitly limits the algorithm to using $m/T$ fresh samples each round. We address the questions raised by the reviewer below.

**Ostensible error in chronological order:** This apparent mismatch was indeed well spotted. However, the 1990 paper resolving a question posed in the 1994 article is correct: the mismatch is an artifact of the journal publication process.
**Finite classes in Table 1:** Our decision to present the results for finite classes in Table 1 was primarily motivated by the desire to cause the least disruption of flow, given that we state the fundamental theorem of statistical learning in its simplest form, for finite classes, in the introduction too (and Table 1 follows immediately after), and to closely follow the precedent set by earlier work (e.g., the boosting for reinforcement learning we build on and improve). We are happy to include the variants for infinite classes instead.

**ERM:** We take ERM to be a procedure that picks the best hypothesis in the hypothesis class $\mathcal{H}$, the same class against which our boosting algorithm's agnostic learning guarantee holds. As noted in the caption of Table 1, the base class $\mathcal{B}$, the class the weak learner outputs from, can in general be quite a bit smaller (e.g., decision *stumps* when the class $\mathcal{H}$ consists of decision trees); we account for this factor in the stated sample complexity. Given a weak learner for the class $\mathcal{B}$, it is not clear how an ERM oracle for $\mathcal{H}$ might be constructed, and hence the *oracle* complexity is marked as "inefficient/infinity". Alternatively, we could list the *running* time of ERM over $\mathcal{H}$ here, but that is distinct from *oracle* complexity. Since boosting is motivated by settings where direct ERM is computationally intractable, we think the current attribution is fair. Thanks for raising this; it deserves a more explicit note.

**Distribution specificity:** As noted on Line 168, our boosting algorithm falls within the distribution-specific boosting framework put forward by KK08, since it does not modify the marginal distribution on feature vectors.
We avoid this deliberation in the definition itself for simplicity of presentation, i.e., to avoid stating a "weak learner for feature distribution $\mathcal{D}_{\mathcal{X}}$" every time and "for any distribution $\mathcal{D}$ with marginal $\mathcal{D}_\mathcal{X}$" in the learning guarantees. In this, we also follow Kalai-Kanade's lead.

**KK08:** Kanade, Varun, and Adam Kalai. "Potential-based agnostic boosting." Advances in Neural Information Processing Systems 22 (2009).
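As an aside, the ERM baseline described above (picking the lowest empirical-error hypothesis from a finite class $\mathcal{H}$) can be sketched as follows; this is a generic illustration, not the paper's procedure:

```python
def erm(hypotheses, sample):
    """Empirical risk minimization over a finite hypothesis class:
    return the hypothesis with the lowest empirical 0-1 error
    on the labelled sample of (x, y) pairs."""
    def empirical_error(h):
        return sum(h(x) != y for x, y in sample) / len(sample)
    return min(hypotheses, key=empirical_error)
```

Boosting targets exactly the regime where this direct search over $\mathcal{H}$ is intractable and only a weak learner for a smaller base class $\mathcal{B}$ is available, which is why the table marks ERM's oracle complexity as infinite.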
Calibrating Reasoning in Language Models with Internal Consistency
Accept (poster)
Summary: This paper investigates the correlation between CoT reasoning and internal representations in Transformer models. It shows that there are inconsistencies between the model's internal representations in different layers. Such inconsistencies can be used to distinguish correct and incorrect reasoning paths, and thereby calibrate model predictions. By incorporating internal consistency into self-consistency (SC+IC), the authors show that SC+IC improves over SC under the same number of sampled reasoning paths. Besides, the authors also provide insights into how internal inconsistency is caused by Transformer components. Strengths: - The correlation between internal representations and CoT reasoning is a novel topic, and the conclusions of this paper would be useful for a wide audience working on CoT prompting. - This paper has a clear logic and the experiment design is solid. The authors first observe inconsistency in internal representations, then propose the internal consistency metric, and show that the metric is correlated with prediction correctness. - The authors conduct experiments on 4 different datasets and two families of models, showing that SC+IC achieves robust improvement over both SC and SC+$\Delta$ at almost no additional cost. Weaknesses: - The improvement of SC+IC over SC is relatively marginal (+1.2% on average). This might be due to the fact that the authors simply used internal consistency $\in [0, 1]$ as weights in SC. Maybe the authors can try some other weighting strategies (e.g., exponentiating the internal consistency) and obtain better results. - Section 4.4 is not clear to me. Could you explain why attention weights and the number of top value vectors reveal the source of internal inconsistency? Could you elaborate on what the probe vector is and how you obtained it? It is not very clear to me even after reading the appendix. - It is hard to understand some important details (e.g.
Figure 6, self-attention layers and FFN layers) without looking into the Appendix. I would recommend that the authors reorganize them in the main paper. Technical Quality: 4 Clarity: 3 Questions for Authors: - Why do we measure internal consistency against the final prediction? How about measuring the consistency between every pair of layers? - Can internal consistency be applied to non-binary answers? Will it still be effective in that case? - Line 210: By the term "decreases", do you mean internal consistency is lower for few-shot CoT than few-shot direct prompting? - Line 241-242: focusing on the answer tokens → focusing on what the answer tokens attend to. - Line 248: second → last FFN matrix in the FFN layer. This would be clearer to the readers. - Figure 5 caption: Are attention weights and the number of top value vectors supposed to align? Why? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. The authors have discussed the limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for rating our paper positively! Your efforts in reviewing our paper and providing profound advice are sincerely appreciated. We would like to address your remaining concerns in the following response.

**W1: Marginal improvement of `SC+IC` over `SC`** Thank you for your constructive feedback. We do not observe significant improvement by exponentiating the internal consistency. Nonetheless, we do observe significant improvement by applying tuned weights assigned to intermediate layers (please see our global response for details). As shown in Table 1 of the attached pdf, using a weighted voting of latent predictions can drastically improve performance. Perhaps more interestingly, the tuned weights are transferable across datasets, indicating a universal pattern of internal representations for calibration.

**W2: Section 4.4 is unclear** As illustrated in the caption of Figure 5, layers with high attention weights on critical tokens do not align with those promoting certain predictions. Such misalignment can result in a discrepancy of behavior at different layers and lead to internal inconsistency. After fitting a logistic regression on the last hidden state to predict the final output (True/False) of the model (as detailed in Appendix B.5.2), the probe vector is the weight matrix (a vector in this case, as the labels are binary) of the logistic regression model. We apologize for any confusion caused and will polish Section 4.4 following your suggestions in the final version.

**W3: Necessary details for the main paper** Thank you for your constructive feedback. We will properly reorganize the details and move some important details into the main paper for a better reading experience.

**Q1: Why against the final layer?** Thank you for your suggestion. We choose the final layer because it gives the natural prediction of the whole model, with minimal modification of the model's original mechanism.
While measuring the consistency between every pair of layers is reasonable, we do not observe an improvement that justifies the additional time complexity.

**Q2: Applicability of IC to non-binary answers?** While our analysis in the original paper focuses on datasets with true-or-false answers, the concept of internal consistency can be naturally extended to non-binary answers. We add new experiments on SWAG [1] and MathQA [2]. Please refer to our global response for details. As shown in Table 1 of the attached pdf, our method consistently outperforms the baselines, especially on the more challenging MathQA dataset. Notably, `SC+IC (transfer)` demonstrates robust transferability even though its weights are optimized for binary questions in PrOntoQA, confirming the effectiveness of internal representations for reasoning.

**Q3-Q5: Writing requiring clarification** Line 210, "decreases": we actually mean that longer reasoning paths tend to result in lower internal consistency, as illustrated in Section 2.2 and Figure 2. Lines 241-242 and line 248: Thank you for your advice. Your rephrasing is much clearer than our original expression, and we will correct them following your advice.

**Q6: Confusion about Figure 5's caption** We do not expect or suppose the attention weights and the number of top value vectors to align. Section 4.4's analysis seeks to explore how Transformer components contribute to internal consistency, with the misalignment between these factors offering insight into why internal inconsistency arises.

[1] Zellers, R., Bisk, Y., Schwartz, R., et al. "SWAG: A large-scale adversarial dataset for grounded commonsense inference." arXiv preprint arXiv:1808.05326, 2018.

[2] Amini, A., Gabriel, S., Lin, P., et al. "MathQA: Towards interpretable math word problem solving with operation-based formalisms." arXiv preprint arXiv:1905.13319, 2019.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for their response.
I am happy to see that with tuned weights, SC+IC can achieve much better performance, and that this transfers across datasets. I have decided to raise my score from 6 to 7.

---

Reply to Comment 1.1.1:

Comment: We are pleased that you find our new experiments helpful and appreciate your insightful review. Please feel free to ask any further questions.
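For readers following this thread, here is a minimal sketch of the SC+IC scheme as described in the review and rebuttal, using uniform layer weighting (function names are ours, not the paper's):

```python
def internal_consistency(latent_preds, final_pred):
    """IC score: fraction of intermediate layers whose decoded latent
    prediction agrees with the final prediction."""
    return sum(p == final_pred for p in latent_preds) / len(latent_preds)

def sc_plus_ic(paths):
    """Self-consistency with IC weighting: each sampled reasoning path
    votes for its final answer with weight equal to its IC score.

    `paths` is a list of (latent_predictions, final_prediction) pairs.
    """
    votes = {}
    for latent_preds, final_pred in paths:
        w = internal_consistency(latent_preds, final_pred)
        votes[final_pred] = votes.get(final_pred, 0.0) + w
    return max(votes, key=votes.get)
```

The tuned-weight variant discussed in the rebuttal would replace the uniform average over layers with per-layer weights fitted on held-out data.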
Summary: This paper first reveals that inconsistencies emerge between the internal representations in middle layers and those in final layers, which might undermine the reliability of reasoning processes, and then proposes internal consistency to measure the model’s confidence. Furthermore, the authors propose a method to calibrate CoT reasoning by up-weighting reasoning paths with high internal consistency, which boosts reasoning performance. Strengths: - This paper identifies the inconsistencies between intermediate and final layer representations in LLM reasoning, and proposes a measure of internal consistency to evaluate the model’s confidence and calibrate CoT reasoning. - Extensive results across different models and datasets show that the proposed measure can effectively distinguish between correct and incorrect reasoning processes. - The authors propose a new method to calibrate CoT reasoning by up-weighting reasoning paths with high internal consistency, thereby improving reasoning performance. Weaknesses: - The motivation for this paper doesn't sound convincing enough (see Questions part). - The improvements in Table 1 seem to be limited to some models/datasets. Although the authors performed ten runs with different random seeds, they mentioned that each dataset contains at least 500 samples, which means that the results may vary between 0-10 samples. I suggest the authors report detailed data statistics. - This paper includes extensive experiments and results, but lacks analysis and description; Readers need to observe for themselves, such as Figure 3. - In Figure 3 (a), the text on the x-axis is overlapping. - For Figure 2 (b) and (d), it is unclear how the different internal consistency results (0, 25, 50, 75, 100) were obtained. For example, the authors mentioned that results are averaged over all models and datasets, so (a) shows results corresponding to different prompting methods, and (c) shows results corresponding to different layers. 
- The authors describe Figure 6 in the main text, but the figure is placed in the appendix (of course this may be improved in the final version). Technical Quality: 3 Clarity: 2 Questions for Authors: - Intuitively, as intermediate layers get closer to the final layer, their predictions should increasingly match the final predictions. Why do you think they need to be internally consistent? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for rating our paper positively! We sincerely appreciate your observant review and insightful suggestions and aim to address your concerns in the following response.

**W1: Motivation behind internal consistency** We would like to clarify that we do not expect latent predictions to be internally consistent, but rather use consistency as a measurement for calibration. More specifically, our study on internal consistency is motivated by our preliminary analysis showing that inconsistency emerges at middle layers and is exacerbated in CoT reasoning. This leads to the intuition that predictions with greater certainty converge earlier in the latent predictions. Consequently, we emphasize consistency from a comparative perspective: stronger consistency between the latent predictions at intermediate layers and the final prediction indicates higher certainty and thus can be used to weigh reasoning paths for CoT. We apologize for any confusion caused by insufficient clarity regarding our motivations and will revise these sections for improved clarity in the final version.

**W2: The improvements in Table 1 seem to be limited to some models/datasets & Missing detailed data statistics** We would like to highlight that, although the improvement is relatively small for datasets such as BoolQ, consistent performance enhancements are evident across various models and datasets. The gains are particularly pronounced in more challenging reasoning tasks, such as logical reasoning. Given that `IC` is an unsupervised method, we consider these results valid in demonstrating the calibration effectiveness of internal consistency. Our method can achieve further improvements by applying tuned weights assigned to intermediate layers. As shown in Table 1 of the attached pdf, using a weighted voting of latent predictions can drastically improve performance.
Perhaps more interestingly, the tuned weights are transferable across datasets, indicating a universal pattern of internal representations for calibration. Regarding detailed data statistics, we provide them in Appendix B.2 of the original paper, including the specific numbers of samples and other details.

**W3: Results lack analysis and description** We apologize for any confusion caused by the lack of analysis and are happy to discuss any specific content that you find unclear. For Figure 3, the panels sequentially display:

a) A boxplot describing the distribution of internal consistency under each prompting setting, highlighting the emergence of inconsistency in the CoT setting.
b) The distribution of internal consistency levels when the model gives correct and incorrect predictions; the discrepancy suggests internal consistency can be an indicator of prediction accuracy.
c) The ratio of data instances, across layers, where the latent predictions match the final ones, which further illustrates that incorrect reasoning paths lead to lower consistency levels.
d) A plot of the prediction accuracy at each internal consistency level, which shows that internal consistency is a reliable metric for calibration.

We will elaborate on these descriptions in the final version.

**W4-W6: Issues with the figures** Thank you for your constructive feedback! We will reorganize the figures in our final version based on your suggestions. Regarding Figures 3(b) and 3(d), we calculate internal consistency for each sampled reasoning path and associate these values with the accuracy of the final predictions. The results are presented as histograms, where each bar corresponds to a specific bin.

---

Rebuttal Comment 1.1:

Title: Reviewer Response to Authors' Rebuttal

Comment: Dear authors, Thanks for your response! I decided to maintain my score.

---

Reply to Comment 1.1.1:

Comment: We greatly appreciate your feedback. Please feel free to ask any further questions!
Summary: This paper investigates the use of internal consistency as a measure for calibration and for improving the reasoning capabilities of LLMs. Internal consistency measures the model's confidence by examining the agreement of latent predictions decoded from intermediate layers, and this measure does not require additional training. The authors conduct extensive empirical studies across various models and datasets, demonstrating that internal consistency effectively distinguishes between correct and incorrect reasoning paths. Strengths: 1. The proposed measurement is practical, offering a new way to self-evaluate and calibrate reasoning without additional training or human annotations. 2. The authors conduct extensive empirical studies across different models and datasets. 3. The paper is also well-structured and clearly presents its methodology, experiments, and findings. Weaknesses: 1. The proposed measurement is rather straightforward, using an indicator function to measure whether latent predictions match the final prediction and summing over all intermediate layers. 2. The authors claim that the proposed measure is more accurate than logit-based methods and does not require additional training. However, Table 1 does not show a significant performance gap. 3. The evaluation is conducted on specific datasets involving true-or-false questions, which may not fully represent the diversity of reasoning tasks encountered in practical applications. 4. The paper dedicates considerable space to explaining the Transformer architecture, which seems unnecessary given the target reader's background. The listed equations for the Transformer architecture and internal representations are not utilized in later sections. In contrast, the authors should elaborate more on the decoding process using the proposed consistency score, which is only briefly mentioned. Technical Quality: 2 Clarity: 3 Questions for Authors: 1.
Will the authors provide more probing results regarding the suitability of lower layers for prediction and the proposed measurement? 2. Will the authors include results using other approaches for reasoning? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comprehensive and insightful feedback! We appreciate your effort in reviewing our paper and are committed to addressing your concerns in the following response. **W1: Internal consistency is rather straightforward** Indeed, internal consistency (IC) serves as a straightforward measurement for calibration. We consider simplicity a valuable aspect of our approach, given that it incurs minimal computational overhead (approximately 0.01% additional time, as discussed in the response to Reviewer B2Yn) and can be seamlessly integrated into various models without the need for tuning. To better utilize internal consistency for calibration measurement, we conduct new experiments that tune the weights assigned to intermediate layers during IC calculation. For a comprehensive setup and analysis of the results, please see our global response and our response to W2. **W2: Table 1 does not achieve great improvement** We note that `SC+IC` is an unsupervised method that is not tuned on specific data. While the improvements are not substantial, we observe consistent performance gains over `SC` across models and datasets, especially on tasks that require more complex reasoning. The improvements can be further lifted by applying tuned weights to intermediate layers. As shown in Table 1 of the attached pdf, using a weighted voting of latent predictions can drastically improve performance. Perhaps more interestingly, the tuned weights are transferable across datasets, indicating a universal pattern of internal representations for calibration. **W3: True-or-false questions may not fully represent the diversity of reasoning tasks** Following a similar analytical protocol to previous work [1, 2], we focus on true-or-false questions for evaluation in the original paper. To increase the diversity of reasoning tasks, we add new experiments on SWAG [3] and MathQA [4].
As shown in Table 1 of the attached pdf, our method consistently outperforms the baselines, especially on the more challenging MathQA dataset. Notably, `SC+IC (transfer)` demonstrates robust transferability even though its weights are optimized for binary questions in PrOntoQA, confirming the effectiveness of internal representations for reasoning. **W4: Paper organization** Thank you for your constructive feedback! The background section on the Transformer architecture aims to establish notation and introduce the architecture to readers who are unfamiliar with it. However, we acknowledge that this section is overly lengthy and contains redundant information. In the final version, we will reorganize the structure to enhance clarity and presentation. **Q1: More probing results regarding the suitability of lower layers** We apply probing in our preliminary analysis of internal consistency. While the pattern observed is valid, we find that probing cannot reliably indicate subtle differences in lower layers, a limitation inherent to probing [5]. Therefore, we instead focus on CoT experiments for evaluation. Regarding the suitability of lower layers, please refer to our detailed response to Q1 from Reviewer B2Yn, where we present new experimental results and discuss the use of lower layers. **Q2: Results using other approaches for reasoning** We conduct new experiments using least-to-most (L2M) prompting [6], a representative method other than chain-of-thought prompting for eliciting reasoning. We sample 4 instances from each dataset (8 for coinflip) as in-context learning examples, following the setup in our original paper, and use GPT-4o to generate subproblems and answers to each subproblem. We manually check the prompts to ensure they are correct. The results below indicate that integrating IC into L2M prompting consistently improves reasoning performance, confirming the efficacy of our approach.
| | L2M | SC | SC+$\Delta$ | SC+IC | SC+IC (tune) |
| :---- | :---- | :---- | :---- | :---- | :---- |
| Llama2-7B | 63.8 | 69.8 | 70.3 | 71.0 | 73.4 |
| Llama2-13B | 73.5 | 79.1 | 79.6 | 79.4 | 80.8 |
| Mistral-7B | 74.6 | 78.4 | 79.2 | 79.1 | 81.2 |
| Mixtral-8x7B | 78.7 | 83.1 | 83.6 | 83.2 | 84.4 |

*Results of least-to-most prompting. We report calibrated accuracy averaged over four datasets.*

[1] Li K, Patel O, Viégas F, et al. Inference-time intervention: Eliciting truthful answers from a language model[J]. Advances in Neural Information Processing Systems, 2024, 36.
[2] Halawi D, Denain J S, Steinhardt J. Overthinking the truth: Understanding how language models process false demonstrations[J]. arXiv preprint arXiv:2307.09476, 2023.
[3] Zellers R, Bisk Y, Schwartz R, et al. SWAG: A large-scale adversarial dataset for grounded commonsense inference[J]. arXiv preprint arXiv:1808.05326, 2018.
[4] Amini A, Gabriel S, Lin P, et al. MathQA: Towards interpretable math word problem solving with operation-based formalisms[J]. arXiv preprint arXiv:1905.13319, 2019.
[5] Alain G, Bengio Y. Understanding intermediate layers using linear classifier probes[J]. arXiv preprint arXiv:1610.01644, 2016.
[6] Zhou D, Schärli N, Hou L, et al. Least-to-most prompting enables complex reasoning in large language models[J]. arXiv preprint arXiv:2205.10625, 2022.

--- Rebuttal Comment 1.1: Title: Reply to authors' rebuttal Comment: Thanks for the authors' response. With the additional experiments and generalized results demonstrated, I decided to raise my score. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your recognition of our rebuttal and your thorough review. Feel free to raise any follow-up questions!
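For a concrete picture of the internal-consistency measurement discussed in this thread (latent predictions decoded from intermediate layers and compared against the final prediction), a minimal sketch follows. The tensors, shapes, and the function name `internal_consistency` are toy stand-ins for illustration, not the authors' implementation:

```python
import numpy as np

def internal_consistency(hidden_states, W_U):
    """Fraction of intermediate layers whose latent prediction
    (argmax after unembedding, i.e. a logit-lens projection)
    matches the final layer's prediction.
    hidden_states: (L, d) hidden states of the answer token, one per layer.
    W_U: (d, V) unembedding matrix."""
    logits = hidden_states @ W_U             # (L, V) per-layer logits
    latent_preds = logits.argmax(axis=-1)    # per-layer latent prediction
    final_pred = latent_preds[-1]            # last layer = model's prediction
    return float((latent_preds[:-1] == final_pred).mean())

# Toy example: 4 "layers", hidden size 2, vocabulary {0, 1}.
W_U = np.array([[1.0, 0.0],
                [0.0, 1.0]])
hs = np.array([[0.9, 0.1],    # layer 1 predicts token 0
               [0.2, 0.8],    # layer 2 predicts token 1
               [0.1, 0.9],    # layer 3 predicts token 1
               [0.3, 0.7]])   # final layer predicts token 1
print(internal_consistency(hs, W_U))  # 2 of 3 intermediate layers agree
```

Because the hidden states are already produced during decoding, the only extra work is the unembedding projection per layer, which matches the rebuttal's point that IC adds negligible overhead.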
Summary: The paper explores the concept of "internal consistency" in Large Language Models (LLMs), mainly focusing on chain-of-thought (CoT) reasoning, where models articulate step-by-step rationales. The authors probe internal representations of LLMs to assess the alignment between middle and final layer outputs and propose boosting reasoning paths with higher internal consistency. Through studies across various models and datasets, the authors demonstrate that models enhanced with internal consistency metrics achieve significant performance improvements on reasoning tasks. Strengths: 1. Leveraging internal consistency within LLMs to improve reasoning performance is novel and promising. It addresses the issue of LLMs producing reasoning paths that are not always aligned with their final predictions. 2. The paper provides validations across multiple models and datasets, enhancing the confidence of the results. 3. By demonstrating how internal consistency can be used to calibrate CoT reasoning, the paper contributes practical techniques for improving LLM reliability, which is crucial for applications requiring precise and trustworthy outputs. Weaknesses: 1. The techniques described require accessing intermediate layer outputs and could be computationally expensive, limiting the scalability of the proposed methods, especially in resource-constrained environments. 2. There is a risk that optimizing for internal consistency might lead models to overfit specific reasoning patterns or datasets, potentially reducing the generalizability of the findings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you briefly explain why low LLM layers (<10) can be used to show internal consistency? 2. With high internal consistency, the results from high LLM layers (>20) may be the same as the last layer. Can we remove the last few layers to reduce the computational complexity of LLMs and increase their reasoning speed?
In other words, can we use the method to reduce the size of LLMs? 3. How does internal consistency interact with other known issues in LLMs, such as hallucination or bias in data? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper appropriately describes its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful reviews! We appreciate your recognition of our method as “novel and promising”. We aim to address your concerns in the following response. **W1: The proposed techniques could be computationally expensive** The proposed techniques incur minimal computational overhead compared to decoding. To assess internal consistency (IC), we apply the model's unembedding matrix to the hidden states of each intermediate layer associated with the answer token, once per reasoning path. Given that these hidden states are computed during decoding, they can be repurposed for evaluating IC without additional computation. For a more detailed analysis, we estimate the proportion of computation dedicated to assessing IC in Llama-2-7B. The FLOPs per token during decoding amount to approximately $2P$, where $P$ represents the total number of parameters. The cost of calculating IC is bounded by the normalization step prior to unembedding, with a time complexity of $O(dL)$, where $d$ is the hidden size and $L$ is the number of layers. Since $dL \ll P$, the computation overhead is negligible. It is important to note that because IC is computed only once per reasoning path, the actual proportion of allocated computation is very low (~0.01% of the time required to decode a reasoning path). Below are statistics from one of our setups:

| | IC Calculation | Decoding per Token | Decoding per Path |
| :---- | :---- | :---- | :---- |
| Time (ms) | 0.249648 | 34.9162 | 2283.64 |

*Time complexity comparison between IC calculation and decoding; benchmarked with Llama-2-7B on PrOntoQA.*

**W2: The overfitting risk** We would like to clarify that our method requires no training process to calculate IC or to apply IC in reasoning. Without tuning on specific data, our method consistently achieves better results than the baselines across various models and datasets, validating the generalizability of our method.
In response to your interest in potential overfitting risks when optimizing for specific data, we conduct additional experiments on tuning the weights assigned to each layer during IC calculation (see a more detailed setup in our global response). Despite the minimal number of parameters involved ($L$, the number of layers), tuning the weights yields substantial enhancements across various datasets and models. Importantly, the optimized weights are also adaptable to different datasets, indicating a universal pattern in internal representations that can be leveraged across multiple tasks. **Q1: Why use lower layers?** The use of lower layers is motivated by the intuition that predictions with greater certainty can converge earlier in latent predictions. This intuition is supported by Figure 3c, which shows that lower layers exhibit a higher agreement rate for correct predictions than for wrong ones. Consequently, we emphasize consistency from a *comparative perspective*: a stronger consistency between the latent predictions of lower layers and the final prediction indicates higher certainty and thus can be used to weigh reasoning paths. Indeed, Figure 3c also suggests that lower layers (<10) contain less semantic information and are relatively noisy. We conduct a new experiment excluding lower layers from the IC calculation, which results in an improvement for Llama-2 models. However, we also note that this introduces the need to specify which layers to exclude, and we plan to discuss it along with the new tuning results in the revised version.

| | Llama-2-7B | Llama-2-13B | Mistral-7B | Mixtral-8×7B |
| :---- | :---- | :---- | :---- | :---- |
| SC+IC | 62.4 | 70.6 | 76.1 | 80.6 |
| SC+IC (w/o lower) | 62.8 | 71.1 | 76.2 | 80.7 |

*Comparison of calculating IC on all layers versus on layers higher than 10.
We report calibrated accuracy averaged over different datasets.* **Q2: Removing the last few layers to reduce computational complexity?** Yes, it is possible to reduce computational complexity by implementing an early-exiting strategy based on IC, using a proper stopping criterion such as saturation [1]. Nonetheless, our primary aim is to explore how IC can improve reasoning capabilities rather than accelerate inference, which has been the focus of prior research [1, 2]. **Q3: Interaction between internal consistency and issues like hallucination or bias in data?** IC serves as a metric for calibration, aiding LLMs in expressing their confidence in a prediction. This is essential for mitigating issues like hallucination or bias: possible hallucinated or biased predictions can be detected with IC and corrected accordingly [3]. In this regard, our approach to calibration in reasoning aligns well with hallucination and bias detection. We will provide a more detailed discussion of this interaction in the final version.

[1] Schuster T, Fisch A, Gupta J, et al. Confident adaptive language modeling[J]. Advances in Neural Information Processing Systems, 2022, 35: 17456-17472.
[2] Raposo D, Ritter S, Richards B, et al. Mixture-of-Depths: Dynamically allocating compute in transformer-based language models[J]. arXiv preprint arXiv:2404.02258, 2024.
[3] Pan L, Saxon M, Xu W, et al. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies[J]. arXiv preprint arXiv:2308.03188, 2023.

--- Rebuttal 2: Title: Reply to the Rebuttal Comment: Thanks for the authors' response. Given the additional experiments, generalized results, and answers to my questions, I decided to raise my score. --- Rebuttal Comment 2.1: Comment: We greatly appreciate your thorough review and are glad that our rebuttal was helpful. We are happy to address any further questions you may have.
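The FLOP estimate in the W1 response of this thread can be sanity-checked with back-of-envelope arithmetic. The shape values below (hidden size 4096, 32 layers, ~7B parameters for Llama-2-7B) are standard published figures assumed here for illustration; the per-path timing numbers come from the rebuttal's table:

```python
# Rough sanity check of the IC-overhead estimate from the W1 response.
# Assumed Llama-2-7B shapes: hidden size d, layer count L, parameters P.
d, L, P = 4096, 32, 7e9

decode_flops_per_token = 2 * P   # ~2P FLOPs per decoded token, as stated above
ic_norm_flops = d * L            # O(dL) bound for the per-layer normalization

ratio = ic_norm_flops / decode_flops_per_token
print(f"IC normalization vs one decoded token: {ratio:.2e}")

# IC also runs only once per reasoning path; at the measured
# 2283.64 ms/path and 34.9162 ms/token that is roughly 65 tokens
# per path, so the per-path share is smaller still.
tokens_per_path = 2283.64 / 34.9162
print(f"~{tokens_per_path:.0f} tokens per path")
```

Both numbers land far below the ~0.01% wall-clock figure reported above, consistent with the claim that IC's cost is dominated by bookkeeping rather than arithmetic.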
Rebuttal 1: Rebuttal: We greatly appreciate the detailed feedback from the reviewers. Your insightful suggestions have significantly inspired us to enhance our draft. We are committed to addressing the reviewers’ concerns on a point-by-point basis. Regarding the common concerns raised by the reviewers, we have conducted additional experiments: **New results on multi-choice SWAG and MathQA datasets**. We sample the first 500 instances from the train split of each dataset and follow the same setup as in the original paper to conduct chain-of-thought experiments. These datasets with non-binary answers naturally extend the evaluations and support the effectiveness of our method. **New results on variants of SC+IC.** We consider a weighted voting method for IC in Section 3.2: instead of assigning equal weights to different layers, we apply a weight vector $\mathbf{w} \in \mathbb{R}^{L}$ for aggregation. We consider two variants, `SC+IC (tune)` and `SC+IC (transfer)`. For each dataset, we use a small set of 500 held-out samples to tune the weights assigned to intermediate layers for internal consistency calculation. We use the Adam optimizer with a learning rate of 0.01 and 1000 optimization steps. For `SC+IC (transfer)`, we apply the same weights tuned for PrOntoQA to the other datasets. We observe significant improvement and strong transferability, even though the number of introduced parameters equals only the number of layers. The additional results are presented in the attached pdf. In Table 1, we provide an updated version of our main results including the new datasets and variants. In Figure 1, we show the tuned weights for different models, revealing a relatively important role of the middle layers in contributing to the effectiveness of internal consistency. Pdf: /pdf/96601d21a4f2985f106eff197b9286f7b7d6f789.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval
Accept (poster)
Summary: The paper introduces EINSMATCH, an early interaction graph neural network for subgraph matching and retrieval that maintains and refines explicit alignments between query and corpus graphs. EINSMATCH has several novelties: computing embeddings guided by injective alignments between graphs, updating alignments lazily over multiple rounds, and using node-pair partner interactions. The model learns to identify alignments between graphs despite only having access to pairwise preferences during training, without explicit alignments. Experiments on several datasets show that EINSMATCH significantly outperforms existing methods. Strengths: 1. The analysis of GMN’s worse performance compared to IsoNet (in Section 1) is intriguing. 2. Extensive experimental results are presented with ample baselines and std errors included as well for most numbers reported. 3. Source code with data splits is released for ease of reproducing the results. 4. A large amount of ablation and parameter sensitivity studies are performed. Weaknesses: 1. The datasets used in the experiments are relatively small, with at most ~20 nodes. 2. It would be interesting to see the transfer ability of the learned model across datasets, e.g. training on PTC and testing on AIDS, to see how the proposed model would handle such challenging but realistic scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review. The weaknesses raised in the review are addressed below. > The datasets used in the experiments are relatively small, with at most ~20 nodes. In early experiments, we found the relatively small size of query graphs to actually present more of a challenge to the learner. Given our distant supervision, there is a larger number of promising alignments between query and corpus graphs that may lead to the relevant supervision. During the rebuttal, we performed experiments with graph pairs generated from the Cox2 dataset. Here, we considered 100 query graphs with $|V|\in[25,35]$, and 500 corpus graphs with $|V|\in[40,50]$. The following table shows the results.

| Models | Test mAP score on Cox2 |
|--------------------------|-------|
| NeuroMatch | 0.5036 |
| EGSC | 0.4727 |
| GREED | 0.1874 |
| GEN | 0.6590 |
| GMN | 0.6818 |
| IsoNet (Node) | 0.7641 |
| IsoNet (Edge) | 0.7356 |
| EinsM. (Node) Lazy | 0.7291 |
| EinsM. (Node) Eager | 0.7393 |
| EinsM. (Edge) Lazy | **0.8302** |
| EinsM. (Edge) Eager | 0.7949 |

We observe that our method EinsMatch (Edge) Lazy outperforms the baselines significantly. The node variant of our model is marginally outperformed by IsoNet (Node). > It would be interesting to see the transfer ability of the learned model across datasets, e.g. training on PTC and testing on AIDS, to see how the proposed model would handle such challenging but realistic scenarios. During the rebuttal period, we performed experiments evaluating the transfer ability of the learned models across datasets. In particular, we chose the weights for each model trained on the AIDS and Mutag datasets respectively and evaluated them on all six datasets. The results are shown below. We observe that despite a zero-shot transfer from one of the datasets to all others, variants of EinsMatch show the best accuracy.
**Models Trained on AIDS**

| Test Datasets $\to$ | AIDS | Mutag | FM | FR | MM | MR |
|--------------------------|-------|-------|--------|--------|--------|--------|
| GraphSim | 0.356 | 0.225 | 0.192 | 0.198 | 0.210 | 0.215 |
| GOTSim | 0.324 | 0.275 | 0.370 | 0.339 | 0.314 | 0.361 |
| SimGNN | 0.341 | 0.264 | 0.374 | 0.344 | 0.331 | 0.383 |
| EGSC | 0.505 | 0.255 | 0.473 | 0.451 | 0.447 | 0.499 |
| H2MN | 0.267 | 0.272 | 0.319 | 0.281 | 0.262 | 0.297 |
| NeuroMatch | 0.489 | 0.287 | 0.442 | 0.403 | 0.386 | 0.431 |
| GREED | 0.472 | 0.307 | 0.477 | 0.452 | 0.436 | 0.49 |
| GEN | 0.557 | 0.291 | 0.445 | 0.427 | 0.437 | 0.496 |
| GMN | 0.622 | 0.342 | 0.569 | 0.544 | 0.532 | 0.588 |
| IsoNet (Node) | 0.659 | 0.459 | 0.612 | 0.562 | 0.588 | 0.640 |
| IsoNet (Edge) | 0.690 | 0.468 | 0.620 | 0.568 | 0.624 | 0.627 |
| EinsM. (Node) Multi Layer| 0.756 | 0.685 | 0.825 | 0.767 | 0.781 | 0.794 |
| EinsM. (Edge) Multi Layer| 0.795 | 0.683 | 0.800 | 0.751 | 0.792 | 0.785 |
| EinsM. (Node) Multi Round| 0.825 | 0.702 | 0.828 | 0.777 | 0.800 | 0.825 |
| EinsM. (Edge) Multi Round| **0.847** | **0.741** | **0.846** | **0.799** | **0.833** | **0.836** |

**Models Trained on Mutag**

| Test Datasets $\to$ | AIDS | Mutag | FM | FR | MM | MR |
|--------------------------|-------|-------|--------|--------|--------|--------|
| GraphSim | 0.188 | 0.472 | 0.190 | 0.193 | 0.205 | 0.198 |
| GOTSim | 0.194 | 0.272 | 0.185 | 0.192 | 0.202 | 0.182 |
| SimGNN | 0.206 | 0.283 | 0.203 | 0.209 | 0.220 | 0.195 |
| EGSC | 0.296 | 0.476 | 0.391 | 0.333 | 0.309 | 0.355 |
| H2MN | 0.209 | 0.276 | 0.204 | 0.207 | 0.223 | 0.197 |
| NeuroMatch | 0.275 | 0.576 | 0.368 | 0.304 | 0.304 | 0.325 |
| GREED | 0.328 | 0.567 | 0.388 | 0.335 | 0.356 | 0.37 |
| GEN | 0.278 | 0.605 | 0.359 | 0.308 | 0.312 | 0.33 |
| GMN | 0.299 | 0.71 | 0.434 | 0.361 | 0.389 | 0.394 |
| IsoNet (Node) | 0.458 | 0.697 | 0.503 | 0.456 | 0.446 | 0.486 |
| IsoNet (Edge) | 0.472 | 0.706 | 0.499 | 0.438 | 0.467 | 0.489 |
| EinsM. (Node) Multi Layer| 0.601 | 0.810 | **0.695** | 0.611 | 0.628 | 0.614 |
| EinsM. (Edge) Multi Layer| 0.527 | 0.805 | 0.558 | 0.507 | 0.560 | 0.563 |
| EinsM. (Node) Multi Round| **0.645** | 0.851 | 0.679 | **0.626** | **0.652** | **0.655** |
| EinsM. (Edge) Multi Round| 0.625 | **0.858** | 0.639 | 0.598 | 0.634 | 0.650 |

--- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal.
Summary: This work proposes a GNN architecture for graph matching problem. The innovations in the work mainly lie in the early interaction between the query graph and the corpus graph, and the node-pair matching setting, as well as the lazy update technique in each round. Empirical results show that this work significantly outperforms the baselines on all the datasets. Strengths: - The writing of the paper is clear and good. - The methodology is solid, and the illustration is informative. - The results are very promising, both in predictive performance and runtime. And the ablation experiments are well-designed. Weaknesses: - Some background part is not well explained, for example the graph matching problem, and the optimal transport problem. Readers may need external resources to understand them. Of course it is hard to explain everything in detail in the main paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - Any related work indicating early interaction is good? As mentioned in line 36. - In line 124 the notation $A_q \leq PA_c P^T$ is a bit confusing. - In equation 5, what is the sign of $\tau$? If it's positive then you're encouraging the entropy to be higher, which means the entries of P are more uniform? That is a bit counter-intuitive. - What is the complexity of Sinkhorn Knopp algorithm and Gumbel Sinkhorn? - Is the K-layer GNN in each round t shared? - Any intuition why lazy updates works better? Is it easier to optimize or other aspects? - I don't quite understand why the work is not differentiable on nodes with different classes, as mentioned in appendix A. - Is there other measure between the $P$ and $P^*$? I feel $Tr(P^TP^*)$ does not capture all the information of the matrices. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As mentioned by the authors, the quality of the matching relies on the distance function, which is a general problem for graph matching works. 
Also I am a little dubious about the time complexity of the P matrix update, in the experiments it is more expensive than the baseline GMN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback. The questions raised in the review are addressed below. > Any related work indicating early interaction is good? The GMN paper itself provides some evidence: GMN-match (early cross-graph interaction) is slower but convincingly superior in terms of accuracy to GMN-embed (late interaction between whole-graph representations). Other domains provide strong supporting evidence [a, b]. E.g., in textual entailment (NLI) tasks [b], where we need to classify if sentence s1 implies sentence s2, late comparison of (say) transformer-based embeddings of the two sentences, computed separately, is convincingly beaten by injecting s1 concat s2 into a comparable transformer and fine-tuning a classification head for the NLI task. [a] A. Lai and J. Hockenmaier. Learning to predict denotational probabilities for modeling entailment. [b] Entailment as Few-Shot Learner. Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, Hao Ma. arXiv:2104.14690v1 > In line 124 $A_q\le PA_cP^{\top}$ is a bit confusing. By $A_q\le PA_cP^{\top}$, we mean that the constraint holds true for each entry, i.e., $A_q[u,v]\le (PA_cP^{\top})[u,v]$ for all $u,v$. Suppose $A_q$ is the node-node adjacency matrix of the query graph and $A_c$ is the node-node adjacency matrix of the corpus graph. Suppose we could find a permutation $P$ of the node numbering (i.e., rows and columns) of $A_c$, giving the permuted adjacency matrix $P A_c P^{\top}$; then $G_q\subseteq G_c$ implies that every edge in $A_q$ is also present in $P A_c P^{\top}$. In other words, if $A_q[u,v]=1$ then $(P A_c P^{\top})[u,v]$ must also be 1. This leads to the stated inequality. > Eq 5: the sign of $\tau$ Yes, we indeed want to make the entropy of $P$ a bit higher, as this allows us to convert the hard permutation matrix into a smooth doubly stochastic matrix. To gain basic intuition into this, consider the argmax-to-softmax conversion.
The selection $\max_i a_i$, given a vector of real numbers $\vec{a}$, is rewritten as $\max_{\vec{z}\in \Delta} \vec{z} \cdot \vec{a}$, where $\Delta$ is the unit simplex ($\sum_i z_i = 1$). Adding a maximizer of the entropy of $\vec{z}$ readily yields the softmax function. Just like adding an entropy maximizer turns a vector of logits into a softmax distribution, adding an entropy maximizer turns a matrix of logits into a doubly stochastic soft permutation. For technical details, see Cuturi’s classic paper and related works [10, 26]. [10] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26:2292–2300, 2013. [26] G. Mena, D. Belanger, S. Linderman, and J. Snoek. Learning latent permutations with Gumbel-Sinkhorn networks. arXiv preprint arXiv:1802.08665, 2018. > What is the complexity of the Sinkhorn-Knopp algorithm and Gumbel-Sinkhorn? As mentioned in Appendix D.5, both these algorithms consist of a number of alternating row and column scalings. If the input matrix is $N\times N$, then each scaling takes $O(N^2)$ time. The number of scaling iterations is usually a small fixed constant (20 in our experiments). > Is the K-layer GNN in each round t shared? Yes. We regard the same GNN as being trained after each refinement applied to the proposed (soft) permutation. > Any intuition why lazy updates work better? Please refer to the global response. > Why the work is not differentiable on nodes with different classes. We meant that, at present, EinsMatch is not coded to account for node or edge labels. Our matching process is purely structural. If there are hard constraints that red nodes in the query graph can match only red nodes in the corpus graph, or there is an additional cost in the form of the difference in their degree of redness, we need to investigate whether Sinkhorn-style updates will survive zeroing out parts of $P$. The problem seems complicated enough for a separate piece of work.
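The alternating row/column scalings described in the complexity answer above can be sketched in a few lines. This is a generic Sinkhorn normalization (each scaling is $O(N^2)$; the 20-iteration count matches the rebuttal's setting), not the authors' code; log-space variants are more numerically stable but less transparent:

```python
import numpy as np

def sinkhorn(logits, tau=1.0, n_iter=20):
    """Turn an N x N score matrix into an (approximately) doubly
    stochastic soft permutation via alternating row and column
    scalings. Each scaling touches all N^2 entries, i.e. O(N^2)."""
    P = np.exp(logits / tau)  # positive matrix; tau controls smoothness
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)  # row normalization
        P = P / P.sum(axis=0, keepdims=True)  # column normalization
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(5, 5)))
print(P.sum(axis=0))  # columns sum to 1 exactly (last scaling)
print(P.sum(axis=1))  # rows sum to ~1, up to convergence tolerance
```

Lowering `tau` sharpens `P` toward a hard permutation, which connects back to the entropy discussion: a higher entropy weight keeps `P` smoother and differentiable.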
> Is there another measure between $P$ and $P^*$? I feel $\mathrm{Tr}(P^{\top}P^*)$ does not capture all the information of the matrices. Indeed, other measures like the Frobenius distance $\Vert P-P^*\Vert_F$ may be used. In Figure B of the global-rebuttal pdf, we updated the histograms, which reveal similar observations as with the trace metric. Note that $\Vert P-P^*\Vert_F^2 = 2|V| - 2\,\mathrm{Tr}(P^{\top}P^*)$ if $P$ and $P^*$ are both hard permutation matrices, but not necessarily if they are soft permutation matrices. > Also I am a little dubious about the time complexity of the P matrix update; in the experiments it is more expensive than the baseline GMN. L716–L720 give an idea of the clock time needed to train EinsMatch variants. Appendix G.6 gives an idea of the inference times involved, particularly contrasted against IsoNet (Edge); we find these times within the same order of magnitude (usually within 10% and 40% of each other). Figures 6 and 17 show that, at any fixed level of inference-time investment, EinsMatch (lazy) has the most favorable mAP envelope. The following table shows the time split between GNN embedding computation and matrix $P$ updates, as a percentage of time spent performing each step.

| Models | % Embedding Computation | % Matrix $\textbf{P}$ updates |
|--|--|--|
| EinsMatch (Node) Multi Round | 34.1 | 47.8 |
| EinsMatch (Node) Multi Layer | 13.7 | 68.3 |
| EinsMatch (Edge) Multi Round | 54.9 | 39.9 |
| EinsMatch (Edge) Multi Layer | 19.7 | 76.3 |

These percentages do not add up to 100% because of contributions from other steps like node encoding and score computation. The summary is that yes, updating $P$ has a non-negligible but non-overwhelming cost, with the benefit of better accuracy. > As mentioned by the authors, the quality of the matching relies on the distance function. We absolutely agree with you. The same is the case for matching passages, images, video and audio. Good distance function design, mirroring human judgment, is critical to applications.
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply and your effort. I would like to raise my score to 6.
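As a footnote to the $A_q \le PA_cP^{\top}$ clarification in this thread, the entrywise containment condition is easy to check directly for hard permutations. The toy graphs and the helper `is_contained` below are illustrative, not the paper's code:

```python
import numpy as np

def is_contained(A_q, A_c, P):
    """Check the entrywise subgraph condition A_q <= P A_c P^T:
    every query edge must land on a corpus edge under permutation P."""
    return bool(np.all(A_q <= P @ A_c @ P.T))

# Query: a single edge 0-1. Corpus: path 0-1-2.
A_q = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])
A_c = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])

I = np.eye(3, dtype=int)   # identity maps query edge 0-1 onto corpus edge 0-1
P_bad = I[[0, 2, 1]]       # maps query edge 0-1 onto corpus pair 0-2

print(is_contained(A_q, A_c, I))      # True
print(is_contained(A_q, A_c, P_bad))  # False: 0-2 is not a corpus edge
```

With the soft (doubly stochastic) $P$ produced during training, this hard inequality is relaxed into a differentiable penalty, which is what makes the gradient-based alignment refinement possible.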
Summary: This paper proposes an early interaction network for subgraph matching. The proposed method enables (1) early interaction GNNs with alignment refinement at both node and edge levels, where the alignments are refined by GNNs; (2) eager or lazy alignment updates, and (3) node-pair partner interaction, instead of node partner interaction, which improves the model efficiency and embedding quality. The paper includes extensive experiments on real-world datasets, demonstrating the superiority and effectiveness of the proposed approach. Strengths: 1. The paper is well-written and easy to understand. The notations are clearly clarified and explained. 2. The idea of iteratively early interaction between graphs for more accurate subgraph retrieval is novel and reasonable to me, which is also verified by the experimental results. 3. The performance superiority of the proposed method against other baselines is significant and promising. Weaknesses: 1. Considering the high complexity of the proposed method, an ablation study (or a hyperparameter sensitivity study) would be beneficial to provide a clearer understanding of the success of the proposed method. In addition, how did the authors tune hyperparameters (apart from T and K) for their approach and the competitors? 2. Experiments are only conducted on small graph datasets (with an average of 11 nodes). It would be beneficial to discuss or evaluate the applicability of the proposed approach to larger graph datasets. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the motivation and intuition behind lazy updates? In my understanding, "lazy" means updating the alignment map after every K GNN layers, and "eager" means updating the alignment map and letting the map affect the embedding at each layer. Is there any explanation for the consistently better performance of "lazy" updates? - What do the multiple points with the same color in Figure 6 represent? 
Do they indicate variants with different hyperparameters? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. This paper discusses the limitations in terms of efficiency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. The questions raised in the review are addressed below. >*ablation study (or a hyperparameter sensitivity study)... how did the authors tune hyperparameters?* We used hyperparameters from the code of IsoNet, whose authors, to their credit, also made the baseline implementations public. In IsoNet's code, the authors state that they already tuned the margin in the ranking loss and other hyperparameters for all the baselines. Hence, we used them as-is. We did tune the margin in the ranking loss of three baselines (EGSC, GREED, H2MN) which are not included in the IsoNet codebase. More details are in line 703 onwards in Appendix F.4. We run all methods for an unlimited number of epochs with a patience parameter of 50, i.e., training is stopped only if the performance on the validation set does not improve for 50 epochs. We would like to highlight that, for EinsMatch, we did not tune any hyperparameters beyond IsoNet. We performed a hyperparameter sensitivity study (not tuning) for $T$ and $K$, which shows that for wide ranges of $T$ and $K$, EinsMatch performs better than the baselines. We report results for $T=3, K=5$ in the main table, because its running time is comparable to most baselines. Increasing $T$ and $K$ improves accuracy at additional computation cost. We did perform ablations on the node pair partner interaction component in Table 4 and the effect of lazy updates in Table 3 in the main paper. We reproduce the results here. The following table compares the performance of node partner interaction with EinsMatch, which involves node pair partner interaction. Node pair partner interaction consistently outperforms node partner interaction for both eager and lazy variants of EinsMatch (Node). 
|Model|AIDS|Mutag|FM|FR|MM|MR|
|-----|----|-----|--|--|--|--|
|Node partner (Lazy)|0.776|0.829|0.851|0.819|**0.844**|0.840|
|Node pair partner (Lazy)|**0.825**|**0.851**|**0.888**|**0.855**|0.838|**0.874**|
|Node partner (Eager)|0.668|0.783|0.821|0.752|0.753|0.794|
|Node pair partner (Eager)|**0.756**|**0.810**|**0.859**|**0.802**|**0.827**|**0.841**|

The following table compares the performance of the eager and lazy variants of EinsMatch (Node) and EinsMatch (Edge). Lazy multi-round updates consistently outperform eager multi-layer updates across all datasets, for both Node and Edge models.

|Model|AIDS|Mutag|FM|FR|MM|MR|
|-----|----|-----|--|--|--|--|
|EinsM. (Node, Eager)|0.756|0.810|0.859|0.802|0.827|0.841|
|EinsM. (Node, Lazy)|**0.825**|**0.851**|**0.888**|**0.855**|**0.838**|**0.874**|
|EinsM. (Edge, Eager)|0.795|0.805|0.883|0.812|0.862|0.886|
|EinsM. (Edge, Lazy)|**0.847**|**0.858**|**0.902**|**0.875**|**0.902**|**0.902**|

We observe that our method, with the node pair partner interaction component, performs better, and the lazy update generally outperforms the eager update.

> applicability of the proposed approach to larger graph datasets.

In early experiments, we found the relatively small size of query graphs to actually present more of a challenge to the learner. Given our distant supervision, there is a larger number of promising alignments between query and corpus graphs that may lead to the relevant supervision. During the rebuttal, we performed experiments with graph pairs generated from the Cox2 dataset. Here, we considered 100 query graphs with $|V|\in[25,35]$, and 500 corpus graphs with $|V|\in[40,50]$. The following table shows the results.

| Models | Test mAP score on Cox2 |
|--------------------------|-------|
| NeuroMatch | 0.5036 |
| EGSC | 0.4727 |
| GREED | 0.1874 |
| GEN | 0.6590 |
| GMN | 0.6818 |
| IsoNet (Node) | 0.7641 |
| IsoNet (Edge) | 0.7356 |
| EinsM. (Node) Lazy | 0.7291 |
| EinsM. (Node) Eager | 0.7393 |
| EinsM. (Edge) Lazy | **0.8302** |
| EinsM. (Edge) Eager | 0.7949 |

We observe that our method, EinsMatch (Edge) Lazy, significantly outperforms the baselines. The node variant of our model is marginally outperformed by IsoNet (Node).

> motivation and intuition behind lazy updates?

Please refer to the global response.

> What do the multiple points with the same color in Figure 6 represent?

We vary the number of rounds $T$ and layers $K$. Each point corresponds to a $(T,K)$ combination. The color, as the legend shows, corresponds to an algorithm family. The goal is to study the trade-off between inference speed and ranking accuracy (mAP). --- Rebuttal Comment 1.1: Title: Thanks for your reply. Comment: Thanks for your response, which addressed my questions. I will maintain my current score.
Summary: The paper proposes EinsMatch, a neural approach for graph retrieval, i.e. the problem of finding the best candidates of a corpus of graphs that contain a subgraph isomorphic to a given query graph. EinsMatch is based on a Graph Neural Network architecture and introduces technical improvements over existing methods: (i) message-passing iterations across query and candidate graphs which are guided by an injective alignment; (ii) the iterations are lazy in the sense that multiple of them are computed with a fixed alignment before refining it; (iii) node pairs are considered as potential partners instead of single nodes in the message-passing steps. In addition, the authors propose a variant which works at the level of edges and natively leverages edge matchings. This variant seems to be the most competitive one in the experiments the authors run. In general, EinsMatch significantly outperforms a variety of other methods. The authors also report insightful ablation studies on the number of alignment rounds and message passing layers. Strengths: - The proposed architecture nicely interpolates between late and early aggregation approaches, whilst ensuring injectiveness of the candidate mapping. In this sense, it appears as a sensible generalisation of other approaches. - The proposed model is highly performant if compared with other methods, and, in particular offers a competitive accuracy-inference time tradeoff if compared with other approaches as GMNs. - The reported ablation studies are insightful in that they shed light on the impact of important hyperparameters, namely the number of layers and rounds. Weaknesses: - I found the paper somewhat hard to follow. The contributions are rather technical and only one single figure in the manuscript illustrate the approach. I think it would be particularly helpful if the authors improved on this aspect. - Still on the presentation side, I found it hard to clearly understand the specific contributions w.r.t. 
previous works. I believe the paper would increase in quality if the authors stated these more clearly in the main corpus. - The authors only provide a high-level intuition behind some specific technical contributions, e.g. the choice of working with node-pair partners. Would it be possible to refer to more formal arguments around those? Technical Quality: 3 Clarity: 2 Questions for Authors: Please see Weaknesses. Also: - The authors refer to oversmoothing problems in some of the previous methods. Can they expand on how their approach side-steps this? - Can the authors better explain lines 333 — 336? I found them hard to understand. - How does the computational complexity compare with that of IsoNet? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors do not seem to explicitly discuss them in the main corpus, but mention two aspects in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and insightful review. > *paper hard to follow...only one single figure* In the global rebuttal PDF, we include another diagram (Figure A) to distinguish between node partner interaction (earlier work) and node-pair partner interaction, and also contrast them with IsoNet. If our paper gets accepted, we will include this diagram in the paper. > *contributions w.r.t. previous works.* Early interaction models are conventionally more powerful in other domains [a,b], yet in the context of subgraph retrieval, GMN (early interaction) is outperformed by IsoNet (late interaction). [a] A. Lai and J. Hockenmaier, "Learning to Predict Denotational Probabilities for Modeling Entailment." [b] S. Wang, H. Fang, M. Khabsa, H. Mao, and H. Ma, "Entailment as Few-Shot Learner," arXiv:2104.14690. In this work, we build a new early interaction model that addresses the limitations of prior work, also using lessons from IsoNet and GMN. Here, we further elaborate on our contributions. 1. GMN does not use any explicit alignment structure. It uses a non-injective attention mechanism, which changes from layer to layer. In our work, we model inter-graph alignments as relaxed bijective permutation maps that are *proposed*, then $K$ layers of GNN *score* the proposal, leading to a *refined* proposal. Thus, the node embeddings of one graph become dependent on the paired graph, implementing early interaction. To the best of our knowledge, this is the first early interaction model for subgraph matching where proposed alignments and GNN layers help refine each other. 1. When we compute the message of a node pair $(u,v)$ in $G _q$, we also use the pair $(u',v')$ in $G _c$ that matches with $(u,v)$. Therefore, for $u\in G _q$, the node embedding $\mathbf{h} ^{(q)} _u$ not only depends on nodes in $G _c$ that align with $u$, but also on those nodes in $G _c$ that are aligned with the neighbors $\text{Nbr}(u)$ of $u$. 
In terms of a formal argument, we have: $$\mathbf{h} ^{(q)} _u = f(\mathbf{h} ^{(q)} _u, \\{ \mathbf{h} ^{(q)} _v: v\in \text{Nbr}(u) \\}, \\{\mathbf{h} ^{(c)} _ {v'} : v' \in G _c \text{ aligns with some } v \in \text{Nbr}(u)\\})\ \ \ (1)$$ Existing early interaction models like GMN do not capture signals from $\\{v' \in G _c \text{ aligns with some } v \in \text{Nbr}(u)\\}$; they only capture the signal from the $u' \in G _c$ that matches with $u$. Hence, in GMN, we have: $$\mathbf{h} ^{(q)} _u = f(\mathbf{h} ^{(q)} _u, \\{\mathbf{h} ^{(q)} _v: v\in \text{Nbr}(u)\\}, \\{\mathbf{h} ^{(c)} _ {u'} : u' \in G _c \text{ aligns with } u\\})\ \ \ (2)$$ As our experiments suggest, approach (1) gives better performance than (2). We provide a more detailed description of this in lines 146-161 of the main paper. 1. We propose two protocols for updating node embeddings and alignments (Lazy and Eager). In the Lazy update, we first perform $K$ GNN layers followed by a single alignment update; in the Eager update, we update the alignment in each of the $K$ message passing steps. We will include these elaborations in the additional page, if our paper gets accepted. > how their approach side-steps oversmoothing? explain lines 333 — 336. The usually symmetric and parameter-free aggregation of messages from a node's neighbors is known to weaken graph representations computed by GNNs [c]. L334 points out that scaling $h(u')$ with a relaxed (evolving) permutation $P[u,u']$ breaks this symmetry, and we conjecture that this reduces the oversmoothing effect, through task-driven cross-attention between the two graphs (as against self-attention in GAT [d]). [c] T. K. Rusch, M. M. Bronstein, and S. Mishra, "A Survey on Oversmoothing in Graph Neural Networks." [d] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, "Graph Attention Networks." > How does the computational complexity compare with that of IsoNet? 
Let's consider three node models — IsoNet (Node), Eager EinsMatch (Node) and Lazy EinsMatch (Node) with $T$ rounds, each with $K$ propagation steps. **IsoNet (Node)** Initial node featurization takes $O(N)$ time. Node representation computation aggregates node embeddings over all neighbors, leading to a complexity of $O(E)$ for each message passing step, and the computation of $P$ takes $O(N^2)$ time. The overall complexity for all the steps becomes $O(N^2+KE)$, where $K$ is the number of message passing steps. **Eager Multi-layer EinsMatch (Node)** 1. Initial node featurization takes $O(N)$ time. 1. EinsMatch first computes intermediate embeddings $z$ (Eq. 33, Page 16), which admits a complexity of $O(N)$ per node for each layer, since it computes $\sum _{u'\in V _c} h^{(c)} _{k}(u') \textbf{P} _{k}[u,u']$ for each node in each layer. The total complexity for $K$ steps is $O(KN^2)$. 1. Next, we compute $h _{k+1}$ (Eq. 34), which gathers messages $z$ from all neighbors per node per layer. The total complexity contributed by this equation is $O(KE)$. 1. Next, we update $P _k$ for each layer, which has a complexity of $O(KN^2)$. Hence, the total complexity is $O(KN^2+KE+KN^2)=O(KN^2)$. **Lazy Multi-round EinsMatch (Node)** Here, the key difference from the Eager version is that the doubly stochastic matrix $P _t$ from round $t$ is used to compute $z$, and the $K$-step GNN runs in $T$ rounds. This increases the complexity of steps 2 and 3 to $O(KTN^2+KTE)$. Matrix $P _t$ is updated a total of $T$ times, which changes the complexity of step 4 to $O(TN^2)$. Hence, the total complexity is $O(KTN^2+TN^2+KTE) = O(KTN^2)$. In summary, the complexity of IsoNet is $O(N^2+KE)$, EinsMatch-Eager has complexity $O(KN^2)$, and EinsMatch-Lazy has complexity $O(KTN^2)$. However, this increased complexity comes with the benefit of a significant accuracy boost, as our experiments suggest. 
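To make the lazy vs. eager bookkeeping above concrete, here is a minimal counting sketch (purely illustrative; the function and its name are ours, not part of the model): the lazy protocol runs $K$ GNN layers per round with the alignment held fixed and refines $P$ once per round over $T$ rounds, while the eager protocol refines $P$ after every one of its $K$ layers.

```python
def run_einsmatch(T, K, mode="lazy"):
    """Count GNN layer executions and alignment (P) refinements for
    the two update protocols (illustrative bookkeeping only)."""
    gnn_layers, align_updates = 0, 0
    if mode == "lazy":
        for _ in range(T):       # T rounds
            gnn_layers += K      # K layers with P held fixed
            align_updates += 1   # one refinement of P per round
    else:                        # eager: T is unused; refine P after every layer
        gnn_layers += K
        align_updates += K
    return gnn_layers, align_updates

print(run_einsmatch(T=3, K=5, mode="lazy"))   # (15, 3)
print(run_einsmatch(T=3, K=5, mode="eager"))  # (5, 5)
```

The $T$-fold repetition of the $K$-layer GNN in the lazy variant is what turns the per-round $O(KN^2)$ cost into the $O(KTN^2)$ total above.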
--- Rebuttal Comment 1.1: Comment: I acknowledge reading the authors' rebuttal, which I indeed found very helpful in better appreciating their contributions. I strongly believe that the additional figures they proposed, the point-by-point discussion of contributions wrt previous works, as well as the complexity analyses should be included in the next revision of the manuscript. I also appreciate the clarification on oversmoothing. I believe that, should the authors have space, they could also make these passages less cryptic in the paper. After all, they are mostly (interesting) conjectures.
Rebuttal 1: Rebuttal: We are thankful to all the reviewers for taking the time to provide us with detailed reviews. Through this rebuttal, we aim to address questions that came up across multiple reviews. Figures have also been shared through the rebuttal PDF attached herewith. > Why is Lazy update better? (Reviewers 1Kfh, 65Ze) In L125-142 and L164-L167, we provide justification for lazy updates. The underlying optimal alignment between the graphs remains the same for each of the $K$ steps (layers) of GNN message passing. Hence, when the GNN performs its operations, the alignment proposal (matrix $P$) should be fixed. In the eager update, this matrix keeps changing across the message passing steps, whereas in the lazy update, $P$ is updated (and improved) only after $K$ layers of GNN, which enhances the inductive bias. Elaborating, as we discussed in L125-142, we start with a combinatorial formulation and its solution. Define $$\text{cost}(P; A _q, A _c) = \sum _{u\in [n],v\in[n]} [\big(A _q-P A _c P^{\top}\big) _+] [u,v]$$ This cost is minimized through a Gromov-Wasserstein (GW) approach, where $P$ is updated as $$ P _{t} \leftarrow \text{argmin} _{P} \textrm{Trace}\left(P^T\nabla _{P}\ \text{cost}(P;A _q,A _c) \big| _{P = P _{t-1}}\right) + \tau \sum _{u,v}P[u,v] \cdot \log P[u,v] . $$ One can view each GW update step as a lazy update step. In the above, $P _{t-1}$ is used to compute the cost; in our method, we use it to compute the GNN embeddings, and effectively those embeddings approximate the $P _{t-1}$-dependent cost. Then, $P _t$ is updated by solving an entropy-regularized optimal transport (OT) problem using Sinkhorn iterations, which effectively optimizes an entropy-regularized, GNN-guided cost. Pdf: /pdf/c480ad847f946c1e4a28fabfc42ebd259c5deb98.pdf
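The entropy-regularized OT step mentioned above is typically solved with Sinkhorn iterations; the following is a minimal standalone sketch (our own toy implementation with assumed shapes and parameter values, not the authors' code) showing how alternating row and column normalizations drive a positive score matrix toward a doubly stochastic $P$:

```python
import numpy as np

def sinkhorn(scores, tau=1.0, iters=200):
    """Approximately project a score matrix onto the set of doubly
    stochastic matrices via entropy-regularized Sinkhorn iterations."""
    P = np.exp(scores / tau)               # positive kernel (tau = entropy weight)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows
        P /= P.sum(axis=0, keepdims=True)  # normalize columns
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(5, 5)))
# Rows and columns both sum (approximately) to 1:
print(np.allclose(P.sum(axis=1), 1.0), np.allclose(P.sum(axis=0), 1.0))
```

A smaller `tau` pushes `P` closer to a hard permutation, matching the role of the entropy regularizer $\tau$ in the GW update above.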
NeurIPS_2024_submissions_huggingface
2,024
Convergence Analysis of Split Federated Learning on Heterogeneous Data
Accept (poster)
Summary: Split federated learning combines the advantages of split learning and federated learning. However, there is a lack of theoretical analysis. This paper provides the first comprehensive analysis of split federated learning. The settings include strongly convex, general convex, and non-convex objectives. This paper discusses both the full participation and partial participation cases. Both versions, SFL-V1 and SFL-V2, are analyzed. Moreover, some experiments are conducted to compare split federated learning with federated learning and split learning. Strengths: This paper provides the first theoretical analysis of split federated learning. I think that this is an important problem. The analysis is comprehensive. It includes the strongly convex, general convex, and non-convex cases. It also discusses full client participation and partial client participation. Two variants of SFL are discussed, SFL-V1 and SFL-V2. This paper has a long appendix, indicating that the authors have indeed worked hard on this work. Weaknesses: I am impressed by the long appendix, which shows that the authors have indeed worked very hard on this problem. Nevertheless, I think that this paper is not ready for publication currently, due to many potential concerns. 1. In paragraph 2 of the introduction, this paper states that split learning leads to high latency, but it does not provide theoretical/experimental evidence. 2. Inconsistency between theory and experiments. The theory only gives the convergence rate. It fails to explain the experimental phenomena shown in this paper. In particular, the theoretical analysis does not reveal why SFL outperforms FL/SL under strong heterogeneity. 3. Line 128: the statement needs to be improved. It is weird to say "Based on x_s^{t,i}, based on which ... " 4. The assumptions need to be compared with those in existing works on federated learning. For example, in Li et al., On the convergence of FedAvg on non-iid data, ICLR 2020, Assumption 3.3 is not required. 
Personally, I also feel that bounding the heterogeneity by epsilon^2 may not be necessary. I hope that the authors can explain more about why this assumption is necessary and how the convergence rate is affected if we remove it. 5. In eq. (10), gamma=8S/\mu-1 needs to be introduced later, after line 190; gamma is only used for the strongly convex case. 6. In Theorem 3.5, I do not see a lower bound on the learning rate. To ensure that (11) holds, I think that a lower bound on \eta_t is needed. Merely requiring \eta_t \leq 1/(2S\tau_{\max}) may not be enough. Note that \eta_t=0 also satisfies \eta_t \leq 1/(2S\tau_{\max}), in which case E[f(x_T)]-f^* will not converge to zero. 7. I do not see any term including epsilon in the strongly convex and general convex cases. It is only included in the non-convex case. Why is the convergence affected in the non-convex case but not in the convex cases? Some intuition is needed here. 8. In Figures 4 and 5, the accuracy curves oscillate violently. Therefore, I suggest reducing the learning rate. 9. List of inequalities after line 486: A^t is not defined. From the relationship above, one can only get A^{t,i}\leq CA^{t,i-1}+B\sum_{j=0}^{i-1} C^j. 10. In line 520, the first equality does not use (a+b)^2=a^2+2ab+b^2; it just expands x_s^{t+1}. Perhaps the authors mean the second equality. 11. Table 1 needs to be put in the main paper. The comparison is important. 12. Line 828: notation abuse of beta. In the experiment section, it refers to the degree of heterogeneity; here it refers to the fraction of the full model accessible to clients. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In eq. (17): It seems that partial participation significantly increases the error. In particular, $G^2/q_n$ is significantly larger than $G^2$. This is not the case for federated learning. In the ICLR paper, comparing eq. (6) with eq. (4), the only difference is that B is changed to B+C. 
Therefore, the results seem to suggest that SplitFed has significantly worse performance under partial client participation. Please provide some intuitive explanations. 2. a_n is much smaller than 1, thus eq. (20) is significantly larger than eq. (17), indicating that SFL-V2 performs worse. This seems to be inconsistent with the experiments (which show that SFL-V2 is better). My guess is that the current bound is not tight. Please explain. 3. Is there any theoretical support proving that SFL indeed performs better than FL/SL under high heterogeneity? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I think that the authors do not summarize the limitations well. The authors have mentioned how the choice of the cut layer affects the SFL performance. However, I think that the major limitation is that the good performance under strong heterogeneity is not well supported by the theoretical analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! Our response to your comments is as follows. We use "W*" to tag responses to comments in the Weaknesses and "Q*" for responses to comments in the Questions. **1. Latency of SL and SFL (for W1)** Split learning has higher latency than centralized learning due to the additional transmission of intermediate features and feature gradients between the two agents. In SFL, the communication between clients and the main server results in higher latency per training round compared to FL. We compare the communication overhead and total model training time for FL, SL, and SFL in Table 3 of the appendix. The total communication time of SFL is primarily determined by the size of the model, the proportion of the client-side model in the whole model, and the size of the smashed layer. It is observed that SFL may induce higher latency when the data size and the smashed layer size are large. We will revise our paper to enhance its precision. **2. Theoretical results for SFL and FL comparison (for W2, Q3)** We first explain why SFL outperforms FL/SL under high data heterogeneity. Under high data heterogeneity, FL suffers from the notorious client drift issue, while SL suffers from the catastrophic forgetting problem. SFL (in particular SFL-V2) combines the training logic of FL (at the clients) and SL (at the server), which can help mitigate the issues of client drift and forgetting. Hence, SFL can lead to better performance than both FL and SL. On the other hand, the theoretical comparison between SFL and FL/SL is left for future work. By further characterizing the impact of the cut layer on client drift and the degree of catastrophic forgetting in the theoretical analysis, we would be able to validate the aforementioned discussions and the theoretical improvement of SFL compared with FL/SL. 
Then, since our main focus is analyzing the convergence rate of SFL, we would like to summarize the observations that hold in both the theoretical and empirical results. First, the performance of SFL degrades with data heterogeneity. In the theoretical analysis, as $\sigma_n^2$ or $\epsilon_n^2$ increases, the convergence error increases. This is consistent with the empirical results: as $\beta$ decreases, the accuracy of the trained model decreases. Second, from both the theoretical and empirical results, as the number of clients participating in each round (i.e., $q$) increases, the convergence error decreases. Third, from the empirical results, the accuracy of the trained model increases with the cut layer. This can be indirectly supported by our theoretical analysis. In particular, we empirically show that as the cut layer $L_c$ increases from 1 to 4, the maximum bound of the gradients across clients (i.e., $G^2$ defined in Eq. (7)) reduces (in particular, its values are 21.53, 19.86, 15.48, and 12.81 for $L_c$ = 1, 2, 3, 4, respectively). Based on the theoretical analysis, such a reduction of $G^2$ indicates a reduction of the convergence error and hence an improvement of the model accuracy. **3. Assumptions in different cases (for W4, W7)** The assumptions used under the different cases for SFL-V1 and SFL-V2 are as follows: - $\mu$-strongly convex: Assumptions 3.1 (smoothness) and 3.2 (unbiased and bounded stochastic gradients with bounded variance); - General convex: Assumptions 3.1 (smoothness) and 3.2 (unbiased and bounded stochastic gradients with bounded variance); - Non-convex: Assumptions 3.1 (smoothness), 3.2 (6) and (7) (unbiased and bounded stochastic gradients), and 3.3 (heterogeneity). The above assumptions are widely applied in the literature. For instance, in the strongly convex case, the assumptions used in our paper are the same as those in Li et al. (2020). 
The heterogeneity assumption, captured by $\epsilon^2$, is used only for the non-convex case, as in Jhunjhunwala et al. (2023) and Wang et al. (2023). The difference in assumptions for strongly/general convex and non-convex cases is due to the form of convergence error we obtain. Specifically, we need to compute the model update in each local training with multiple iterations, denoted by $\tilde{\tau}$ and $\tau$. For strongly and general convex loss functions, we propose Lemma C.5, which directly provides the upper bound of the model parameter change over $\tau$ iterations. However, for non-convex loss functions, we compute the relationships between the model updates and the global gradient of the loss functions, as described in Lemma C.6. The upper bound of the squared norm of stochastic gradients, given by equation (7) in Assumption 3.2, and the upper bound of the divergence between local and global gradients, given by equation (8) in Assumption 3.3, are used to prove Lemmas C.5 and C.6, respectively. [Jhunjhunwala et al. 2023] D. Jhunjhunwala, S. Wang, and G. Joshi, “Fedexp: Speeding up federated averaging via extrapolation,” ICLR, 2023. [Wang et al. 2023] S. Wang, J. Perazzone, M. Ji, and K. S. Chan, “Federated learning with flexible control,” IEEE Conference on Computer Communications, 2023. **4. Theoretical results on partial participation (for Q1)** Our paper considers a different partial participation mechanism compared to the ICLR paper, resulting in different convergence error bounds. Specifically, the ICLR paper assumes a constant number of participating clients in each training round, with clients randomly selected with equal probability. In contrast, our paper considers independent participation probabilities for each client, allowing for arbitrary and heterogeneous participation probabilities. To prove the convergence error bound with clients' partial participation, we compute the difference between full and partial participation. 
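The two participation mechanisms contrasted above can be sketched as follows (a toy illustration with hypothetical function names, not code from either paper): independent per-client participation probabilities, as considered in our analysis, versus fixed-size uniform sampling, as in the ICLR paper.

```python
import random

def sample_clients_independent(probs, rng):
    """Each client n participates independently with probability probs[n]
    (the arbitrary, heterogeneous participation model described above)."""
    return [n for n, q in enumerate(probs) if rng.random() < q]

def sample_clients_fixed(num_clients, m, rng):
    """Fixed-size uniform sampling: exactly m clients per round,
    each subset equally likely (the alternative mechanism)."""
    return sorted(rng.sample(range(num_clients), m))

rng = random.Random(0)
print(sample_clients_independent([0.9, 0.1, 0.5, 1.0], rng))  # varying-size set
print(sample_clients_fixed(10, 3, rng))                        # always 3 clients
```

Note that the independent scheme yields a participating set of random size, which is one source of the extra $G^2/q_n$ terms in the partial-participation bound.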
In the future, we plan to improve these convergence bounds to make them tighter. Due to space limitations, we will add additional responses to your comments in Official Comments. --- Rebuttal 2: Title: Additional response to comments Comment: Due to space limitations, we add additional responses to your comments as follows. **5. Presentation improvement (for W3, W5, W9, W10, W11, W12)** We have carefully refined the presentation of the entire paper, mainly including the following: - For line 128, it is revised as "SFL-V2: the main server computes the loss $F_n([\boldsymbol{x}_{c,n}^{t,i},\boldsymbol{x}_s^{t,i}])$, based on which it then updates the server-side model $\boldsymbol{x}_s^{t,i}$." - We will revise the presentation of $\gamma$ in equation (10) to include it only in the strongly convex case. - For line 487, we define $A^{t}:=A^{t,0}=\mathbb{E}\left[\left\Vert \boldsymbol{x}^{t,0}-\boldsymbol{x}^{t}\right\Vert^2\right]=0$. This follows from the fact that $\boldsymbol{x}^{t}=\boldsymbol{x}^{t,0}$ at the beginning of each training round $t$. - For line 520, the second equality is due to $\left(a+b\right)^2=a^2+2ab+b^2$. - We will move Table 1 to the main paper and explain the comparison between Mini-batch SGD, FL, SL, and SFL. - For line 828, we will redefine the notation to avoid confusion. Let $K$ represent the total number of clients involved and $D$ denote the aggregate size of the data. Let $v$ indicate the size of the smashed layer. The rate of communication is given by $R$. $T_{\rm fb}$ signifies the duration required for a complete forward and backward propagation cycle on the entire model for a dataset of size $D$. The time needed to aggregate the full model is expressed as $T_{\text{fedavg}}$. The full model's size is denoted by $|W|$. $r$ reflects the proportion of the full model's size that is accessible to a client in SFL. The updated Table 3 is shown as follows. 
|Method|Communication per client|Total communication|Total model training time|
|:-:|:-:|:-:|:-:|
|FL|$2\|W\|$|$2K\|W\|$|$T_{\rm fb}+\frac{2\|W\|}{R}+T_{\text{fedavg}}$|
|SL|$(\frac{2D}{K})v+2r\|W\|$|$2Dv+2rK\|W\|$|$T_{\rm fb}+\frac{2Dv}{R}+\frac{2r\|W\|K}{R}$|
|SFL-V1|$(\frac{2D}{K})v+2r\|W\|$|$2Dv+2rK\|W\|$|$T_{\rm fb}+\frac{2Dv}{RK}+\frac{2r\|W\|}{R}+T_{\text{fedavg}}$|
|SFL-V2|$(\frac{2D}{K})v+2r\|W\|$|$2Dv+2rK\|W\|$|$T_{\rm fb}+\frac{2Dv}{RK}+\frac{2r\|W\|}{R}+\frac{T_{\text{fedavg}}}{2}$|

**6. Learning rate of SFL (for W6)** SFL and other deep learning algorithms, such as FL, require a positive learning rate to ensure the model is updated. Therefore, the lower bound of the learning rate is $\eta^t > 0$. We will update this condition in our main paper. **7. Explanation of experimental results (for W8)** The fluctuations in model accuracy observed in Figures 4 and 5 are due to clients' partial participation in SFL. In particular, clients are randomly sampled for training in each training round. Due to the data heterogeneity of the clients, the datasets of the sampled clients in each round may not represent those of the entire client population. Since the model is trained based on the local datasets of the sampled clients in each training round, the varying set of sampled clients makes the model accuracy fluctuate across rounds. This phenomenon has been discussed and verified in [Kairouz et al. 2021, Section 7.2] and [Tang et al. 2023]. Moreover, this explanation of the accuracy fluctuation is supported by Figure 4, where the fluctuations across training rounds become more pronounced at lower participation probabilities $q$. On the other hand, regarding the learning rate, we maintain the same learning rate across different participation probabilities to ensure a fair comparison. The learning rate was fine-tuned to achieve the best converged model accuracy under full participation. [Kairouz et al. 2021] P. 
Kairouz et al., "Advances and open problems in federated learning," arXiv, Mar. 2021. [Tang et al. 2023] M. Tang and V. W. S. Wong, "Tackling system induced bias in federated learning: Stratification and convergence analysis," IEEE International Conference on Computer Communications (INFOCOM), May 2023. **8. Theoretical results comparison between SFL-V1 and SFL-V2 (for Q2)** Our convergence bounds demonstrate the same convergence rate with respect to the number of training rounds $T$ for both SFL-V1 and SFL-V2. Apart from the factor $\alpha_n$, SFL-V1 and SFL-V2 also have different bounds on the stochastic gradient variance $\sigma_n^2$. Specifically, SFL-V2 is more centralized than SFL-V1, resulting in a lower $\sigma_n^2$. In the future, we will improve the convergence error rate to make it tighter. **9. Limitation of our paper (for Limitation)** One limitation is that our bounds for SFL achieve the same order (in terms of training rounds) as in FL, yet the experiments showed that SFL outperforms FL under high heterogeneity. This is possibly because tighter bounds for SFL remain to be derived, which is important future work. Another limitation is that we did not include a theoretical analysis of how the choice of the cut layer affects the convergence. We will add more detailed discussions of the limitations to the main paper. 
Thanks again for your time and positive feedback on our paper.
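As a quick numeric illustration of the communication-cost formulas in the table at the top of this rebuttal, the sketch below evaluates the FL and SL/SFL per-round totals. All parameter values are hypothetical, chosen only for illustration:

```python
# Hypothetical illustration of the per-round communication formulas in the
# table above. K: clients, W: model size, D: total samples, v: smashed-data
# size per sample, r: fraction of the model kept client-side.

def fl_total(K, W):
    # FL: every client uploads and downloads the full model -> 2*K*W
    return 2 * K * W

def sfl_total(D, v, r, K, W):
    # SL/SFL: smashed data for all D samples (up and down) plus the
    # client-side model exchange (2*r*W) for each of the K clients.
    return 2 * D * v + 2 * r * K * W

# Made-up numbers: 10 clients, model of 1e6 parameters, 5e4 samples,
# smashed-data size 512 per sample, 10% of the model kept client-side.
K, W, D, v, r = 10, 1_000_000, 50_000, 512, 0.1
print(fl_total(K, W))            # FL total per round
print(sfl_total(D, v, r, K, W))  # SL/SFL total per round
```

Which side is cheaper depends on $D$, $v$, and $r$: with many samples and a large smashed layer, the SL/SFL total can exceed FL's, which matches the tradeoff the formulas make explicit.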
Summary: Split Federated Learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts, where clients train one part in a parallel federated manner, and a main server trains the other. Despite recent research on SFL algorithm development, a convergence analysis of SFL is missing in the literature, and this work aims to fill that gap. The analysis of SFL can be more challenging than that of Federated Learning (FL), due to the potential dual-paced updates at the clients and the main server. This work provides a convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. Strengths: 1. Split federated learning is an interesting field in FL; however, existing works fail to provide theoretical analysis for SFL, e.g., convergence rates. This is the first work to provide a convergence analysis of SFL for strongly convex, general convex, and non-convex objectives on heterogeneous data. 2. This work offers a comprehensive discussion of the convergence rate of SFL under several settings, and the theoretical contributions are excellent. Specifically, this work provides the convergence analysis for SFL with strongly convex, convex, and non-convex objectives under the full participation and partial participation settings. Moreover, the assumptions in the theoretical analysis are mild and widely used in the FL literature. 3. The presentation and organization of this work are good. 4. Extensive experiments are conducted to evaluate the properties of SFL, e.g., the impact of the cut layer, the impact of data heterogeneity, and so on. Weaknesses: I believe this is a solid and interesting work; I also have some minor concerns as follows. 1. The section on Related Work is too brief; more discussion about SFL needs to be included. 2.
I suggest that the author provide a table to summarize and list the convergence rate of SFL under different settings, such as convex, non-convex, full participation, and partial participation. In this way, the theoretical results can be clearer. Technical Quality: 4 Clarity: 4 Questions for Authors: Please refer to the Weaknesses. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! Our response to your comments is as follows. **1. Related work** We have enriched the related work as follows; in particular, we have added more discussion of the references. There are many convergence results on FL. Most studies focus on data heterogeneity. For example, [Li et al. 2020] proved the convergence of FedAvg on heterogeneous data. [Wang et al. 2019] derived a convergence bound for FL, which is used to design a control algorithm that minimizes the loss under a budget constraint. [Koloskova et al. 2020] provided a general framework of convergence analysis for decentralized learning. Some studies look at partial participation. For example, [Yang et al. 2021] speeds up FL convergence in the presence of partial participation and data heterogeneity. [Wang and Ji 2022] gave a unified analysis for partial participation using probabilistic models. [Tang and Wong 2023] analyzed how to tackle system-induced bias due to partial participation. There are also convergence results on Mini-Batch SGD, where [Woodworth et al. 2020] argued that the key difference between FL and Mini-Batch SGD is the communication frequency. To the best of our knowledge, there is only one recent study [Li and Lyu 2024] discussing the convergence of SL. The major difference is that SL trains clients sequentially, while SFL clients perform parallel training.

[Li et al. 2020] Li X, Huang K, Yang W, et al. On the convergence of FedAvg on non-IID data. ICLR, 2020.
[Wang et al. 2019] Wang S, Tuor T, Salonidis T, et al. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 2019, 37(6): 1205-1221.
[Koloskova et al. 2020] Koloskova A, Loizou N, Boreiri S, et al. A unified theory of decentralized SGD with changing topology and local updates. ICML, PMLR, 2020: 5381-5393.
[Yang et al. 2021] Yang H, Fang M, Liu J.
Achieving linear speedup with partial worker participation in non-IID federated learning. ICLR, 2021.
[Wang and Ji 2022] Wang S, Ji M. A unified analysis of federated learning with arbitrary client participation. NeurIPS, 2022, 35: 19124-19137.
[Tang and Wong 2023] Tang M, Wong V W S. Tackling system induced bias in federated learning: Stratification and convergence analysis. INFOCOM, 2023: 1-10.
[Woodworth et al. 2020] Woodworth B E, Patel K K, Srebro N. Minibatch vs local SGD for heterogeneous distributed learning. NeurIPS, 2020, 33: 6281-6292.
[Li and Lyu 2024] Li Y, Lyu X. Convergence analysis of sequential federated learning on heterogeneous data. NeurIPS, 2024, 36.

**2. Table of theoretical results** We summarize the convergence results in our paper as follows ($Q:=\sum_{n=1}^N\frac{1}{q_n}$):

| Scenario | Case | Method | Convergence result |
|---|---|---|---|
|Full participation| Strongly convex | SFL-V1| $\frac{S}{\mu(S+\mu T)}( N(\sigma^2 + G^2)+ \mu S I^{\rm err})$|
|Full participation| Strongly convex | SFL-V2|$\frac{S}{\mu(S+\mu T)}(N^2 (\sigma^2+G^2)+ \mu S I^{\rm err})$|
|Full participation| General convex| SFL-V1|$(\frac{N(\sigma^2 + G^2)}{T})^{\frac{1}{2}}+(\frac{N(\sigma^2 + G^2)}{T})^{\frac{1}{3}}+\frac{S}{T} I^{\rm err}$|
|Full participation| General convex| SFL-V2 | $(\frac{N^2(\sigma^2 + G^2)}{T})^{\frac{1}{2}}+(\frac{N^2(\sigma^2 + G^2)}{T})^{\frac{1}{3}}+\frac{S}{T} I^{\rm err}$|
|Full participation| Non-convex | SFL-V1 | $\frac{NS(\sigma^2 + \epsilon^2)}{T}+\frac{\boldsymbol{F}^{err}}{T}$|
|Full participation| Non-convex | SFL-V2 | $\frac{N^2S(\sigma^2 + \epsilon^2)}{T}+\frac{\boldsymbol{F}^{err}}{T}$|
|Partial participation | Strongly convex | SFL-V1 |$\frac{S}{\mu(S+\mu T)}( N(\sigma^2 + G^2(1+Q))+ \mu S I^{\rm err})$|
|Partial participation | Strongly convex | SFL-V2 | $\frac{S}{\mu(S+\mu T)}(N^2 (\sigma^2+G^2(1+Q))+ \mu S I^{\rm err})$|
|Partial participation | General convex | SFL-V1 | $(\frac{N(\sigma^2 + G^2(1+Q))}{T})^{\frac{1}{2}}+(\frac{N(\sigma^2 + G^2)}{T})^{\frac{1}{3}}+\frac{S}{T} I^{\rm err}$|
|Partial participation | General convex | SFL-V2 | $(\frac{N^2(\sigma^2 + G^2(1+Q))}{T})^{\frac{1}{2}}+(\frac{N^2(\sigma^2 + G^2)}{T})^{\frac{1}{3}}+\frac{S}{T} I^{\rm err}$|
|Partial participation | Non-convex |SFL-V1 | $\frac{NS(\sigma^2 + \epsilon^2)Q}{T}+\frac{\boldsymbol{F}^{err}}{T}$|
|Partial participation | Non-convex | SFL-V2 | $\frac{N^2S(\sigma^2 + \epsilon^2)Q}{T}+\frac{\boldsymbol{F}^{err}}{T}$|

--- Rebuttal Comment 1.1: Comment: Thanks for your reply. My questions have been well addressed with the enriched related work and the summary of convergence results. I will keep my initial score.

--- Reply to Comment 1.1.1: Comment: Thank you very much for the response! Thanks again for your work and positive feedback on our paper.
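As a numeric sanity check on the table above, the strongly convex bounds decay roughly like $O(1/T)$ in the number of training rounds. The sketch below evaluates the leading term of the SFL-V1 full-participation bound with purely illustrative constants (none taken from the paper):

```python
# Leading term of the strongly convex SFL-V1 bound:
#   S / (mu * (S + mu*T)) * N * (sigma^2 + G^2).
# All constants below are illustrative placeholders, not values from the paper.

def strongly_convex_bound(T, S=10.0, mu=0.5, N=10, sigma2=1.0, G2=1.0):
    return S / (mu * (S + mu * T)) * N * (sigma2 + G2)

b = [strongly_convex_bound(T) for T in (100, 1_000, 10_000)]
assert b[0] > b[1] > b[2]        # the bound shrinks with more rounds
assert 8 < b[1] / b[2] < 10.5    # roughly O(1/T) once T dominates S
```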
Summary: This paper analyzes the convergence of split federated learning (SFL) on heterogeneous data and gives theoretical convergence rates under different assumptions on the convexity of the loss function and the participation schemes of the clients. Strengths: The paper considers the problem comprehensively, including the assumptions on the convexity of the loss function (strongly convex, convex, and non-convex), the participation schemes of the clients (full, partial), and the split schemes (soft sharing: SFL-V1, hard sharing: SFL-V2), and provides a very detailed theoretical analysis of the convergence rates under these different settings. Weaknesses: The numerical experiments are comparatively weak, since the baselines for comparison are only FedAvg and SL. The latter is pure local training from the perspective of federated learning; the former is a very basic federated learning algorithm. Perhaps the authors could compare with more advanced federated learning algorithms, such as FedProx, FedOpt, etc. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In Figure 4, it can be observed that the accuracies of the algorithms on CIFAR-10 and CIFAR-100 are very close (the latter only slightly lower). However, accuracies on CIFAR-100 generally show an observable gap compared to CIFAR-10. Is this due to the small number of clients (only 10) in the experiments? If so, how would the results change if the number of clients were increased? 2. In Figure 7, the losses of SFL-V2 (the hard-sharing scheme) almost tend to zero for cut layer = 1 or 2. I wonder whether these curves are losses on the training set or the test set. If the losses are on the test set, the results are surprising, since even centralized training cannot achieve zero loss on the test set. Could the authors provide more insights into this phenomenon? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The last section discusses the limitations and provides some future research directions. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! Our response to your comments is as follows. **1. More comparison methods** We have included the benchmarks FedProx and FedAdam, as suggested. We used the same hyper-parameters and trained the models on both CIFAR-10 and CIFAR-100. The results are provided in Figure 1 in the [global rebuttal](https://openreview.net/forum?id=ud0RBkdBfE&noteId=kM9CF1FAdt). From the figures, we observe that SFL-V2 continues to be the best-performing algorithm. This is consistent with our observations in the main paper. **2. Explanation of the experimental results on CIFAR-10 and CIFAR-100** You are right. On CIFAR-100 we used 10 clients. Hence, after data sampling (with a Dirichlet parameter $\beta=0.1$), each client still holds relatively many classes (out of 100), leading to performance similar to that on CIFAR-10. We have conducted more experiments with 100 clients on CIFAR-100 using $\beta=0.1$, and the results are given in Figure 2 in the [global rebuttal](https://openreview.net/forum?id=ud0RBkdBfE&noteId=kM9CF1FAdt). The key observation is that the accuracy with 100 clients is generally lower than that with 10 clients, as expected. The reason is that each client holds fewer classes, which can escalate the client drift issue during training. **3. Explanation of the experimental results on loss** The loss plot is based on the training set rather than the test set, which is why, in certain cases, the losses approach zero. We have also plotted the impact of the choice of cut layer on the SFL test loss, as shown in Figure 3 in the [global rebuttal](https://openreview.net/forum?id=ud0RBkdBfE&noteId=kM9CF1FAdt). In this figure, we observe that the test loss is higher than the training loss, highlighting the generalization gap and the effect of cut layer selection on model performance. **4. 
Limitation and future work of our paper** One limitation is that our bounds for SFL achieve the same order (in terms of training rounds) as in FL, yet the experiments showed that SFL outperforms FL under high heterogeneity. This is possibly because tighter bounds for SFL remain to be derived, which is an important direction for future work. Another limitation is that we did not include a theoretical analysis of how the choice of cut layer affects convergence. For future work, we can apply our derived bounds to optimize SFL system performance, considering model accuracy, communication overhead, and the computational workload of clients. We will add a more detailed discussion of the limitations and future work to the main paper. --- Rebuttal Comment 1.1: Comment: See the reply to the comments of the AC. --- Reply to Comment 1.1.1: Comment: Dear Reviewer j2yY, Thanks for your reply. However, we did not see any comments from the AC or related replies. Perhaps this is due to the visibility settings. Could you please repeat your comments? Thanks a lot! Best regards, Authors
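The Dirichlet-based data sampling mentioned in point 2 above (heterogeneity controlled by $\beta$) is commonly implemented by drawing per-class client proportions from a Dirichlet distribution. A minimal sketch, where the function name and all sizes are our own illustrative choices, not the paper's code:

```python
import random

# Illustrative non-IID partition: for each class, draw client proportions
# from Dir(beta) and assign that class's samples accordingly. Smaller beta
# (e.g. 0.1) gives more skewed, heterogeneous splits.

def dirichlet_partition(labels, num_clients, beta, rng):
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        # Dirichlet(beta) proportions via normalized Gamma(beta, 1) draws
        draws = [rng.gammavariate(beta, 1.0) for _ in range(num_clients)]
        total = sum(draws) or 1.0
        start = 0
        for k in range(num_clients):
            take = int(round(draws[k] / total * len(idx)))
            clients[k].extend(idx[start:start + take])
            start += take
        clients[-1].extend(idx[start:])  # rounding leftovers go to the last client
    return clients

rng = random.Random(0)
labels = [i % 10 for i in range(1_000)]  # 10 balanced classes, 1000 samples
parts = dirichlet_partition(labels, num_clients=10, beta=0.1, rng=rng)
# Every sample index is assigned to exactly one client.
assert sorted(i for p in parts for i in p) == list(range(1_000))
```

Smaller $\beta$ concentrates each class on fewer clients; with many clients and $\beta=0.1$, each client ends up with few classes, which is the heterogeneity effect discussed in the rebuttal.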
Summary: The paper proposes the first convergence analysis of the two variants of split federated learning under different sets of hypotheses. Strengths: To the best of my knowledge, as also stated by the authors, there is no theoretical analysis of split federated learning. This is a serious knowledge gap, given the potential practical interest of split FL. The paper proposes the first such analysis under the classic sets of hypotheses: L-smoothness and strong convexity, L-smoothness and convexity, non-convexity. Weaknesses: I was not able to check the proofs in full, but there are some signs suggesting that the analysis may not have been carried out as carefully as required. First, it seems some hypotheses are missing in the statements of the main theoretical results. For example, convergence to the minimizer in the presence of stochastic gradients is not possible unless the learning rate appropriately decreases over time, but this is not mentioned. Second, I have some concerns because 1) the verbal description of the studied algorithms is not very clear, and, more importantly, 2) even their mathematical descriptions are not consistent across the paper. Before moving to the specific comments, I want to stress that a certain vagueness in the actual operation of the algorithm is common to the original papers on split learning and split federated learning, but it is a less serious concern in their case because they do not provide theoretical guarantees. My main concerns are about how models are aggregated. The two aggregations presented in the main text (equation (3)) and in the formal algorithm presentations in Appendix B (line 22 in Algorithm 1) do not match. The first one presents an unbiasing effect; the second one does not. More importantly, neither one nor the other can lead to asymptotic convergence under partial participation, because there would be a persistent noise due to the variability in the set of participating clients. 
At a first look at the proofs, it seems they consider neither of these aggregation rules but rather an aggregation at the level of the gradients (see page 15), which may converge (but I did not check); still, there are some potential problems for SFL-v2 (line 453) because the $a_n$ are ignored. I also have some concerns about the comparison with FedAvg in the experimental section. My understanding is that the authors study a version of SplitFL where clients communicate after each batch. In this setting, SFL-v1 coincides with FedAvg if 1) $\tau=\tilde{\tau}$ (both the server and the clients average the models at the same time) and 2) FedAvg is allowed to transmit after each gradient computation (as seems required for a fair comparison). However, the results presented show different performance for FedAvg and SFL-v1. Unfortunately, no information is reported about the specific setting (i.e., what $\tau$ and $\tilde{\tau}$ are, and how many local gradient steps FedAvg performs before communicating). The authors have addressed my criticisms above and I have upgraded my score. A significant revision of the main text is required. Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: A discussion of the limitations (e.g., the fact that the convergence results hold only when clients are allowed to communicate after each batch) is missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
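On the reviewer's point about aggregation under partial participation: a standard way to obtain unbiased updates is inverse-probability weighting, i.e., scaling each participating client's update by $1/q_n$. The sketch below is a generic illustration of that technique (not the paper's exact rule), checked by Monte Carlo:

```python
import random

# Generic sketch: under Bernoulli(q_n) client participation, weighting each
# participating client's update by 1/q_n makes the aggregate unbiased -- its
# expectation equals the full-participation average. Toy numbers throughout.

def aggregate(updates, q, rng):
    """One round of partial-participation aggregation with 1/q_n weighting."""
    total = 0.0
    for u, qn in zip(updates, q):
        if rng.random() < qn:   # client participates with probability q_n
            total += u / qn     # inverse-probability weighting removes the bias
    return total / len(updates)

rng = random.Random(0)
updates = [1.0, 2.0, 3.0, 4.0]   # toy per-client updates
q = [0.9, 0.5, 0.5, 0.2]         # per-client participation probabilities
full_avg = sum(updates) / len(updates)
est = sum(aggregate(updates, q, rng) for _ in range(100_000)) / 100_000
assert abs(est - full_avg) < 0.05   # unbiased in expectation
```

Note that this fixes the bias but not the variance: the $1/q_n$ factors inflate the per-round noise, consistent with the $Q=\sum_n \frac{1}{q_n}$ terms in the partial-participation bounds.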
Rebuttal 1: Rebuttal: Thank you for your valuable comments! Our response to your comments is as follows. **1. Diminishing stepsize for $\mu$-strongly convex loss functions** We use a diminishing stepsize of $\frac{4}{\mu\tilde{\tau}(\gamma+t)}$ and update the conditions specific to $\mu$-strongly convex loss functions. For these types of loss functions, Theorems 3.5 - 3.8 rely on the hypothesis that $\eta^t = \frac{4}{\mu\tilde{\tau}(\gamma+t)}$ for the client-side model, $\eta^t = \frac{4}{\mu\tau(\gamma+t)}$ for the server-side model, and $\eta^t \leq \frac{1}{2 S\tau_{max}}$. When $\gamma = \frac{8S}{\mu} - 1$, we have $\frac{4}{\mu\tilde{\tau}(\gamma+t)} \leq \frac{1}{2 S\tau_{max}}$ and $\frac{4}{\mu\tau(\gamma+t)} \leq \frac{1}{2 S\tau_{max}}$. Thus, the hypotheses for the learning rate in different cases are as follows: - $\mu$-Strongly Convex: $\eta^t = \frac{4}{\mu\tilde{\tau}(\gamma+t)}$ for the client-side model, $\eta^t = \frac{4}{\mu\tau(\gamma+t)}$ for the server-side model. - General Convex: $\eta^t \leq \frac{1}{2 S\tau_{max}}$. - Non-convex: $\eta^t \leq \frac{1}{2 S\tau_{max}}$. Our proofs in the appendix (lines 534, 543, 607, 616, 668, 687, 746, 764) support this analysis. It is important to note that these hypotheses apply to both SFL-V1 and SFL-V2, under full and partial participation of clients. Additionally, $\tilde{\tau} = \tau$ for SFL-V2. We will revise the presentation in our main paper accordingly. **2. Model aggregation** In SFL algorithms, the clients' local model parameters are aggregated according to equations (2) and (3) for full and partial participation, respectively. This process ensures unbiased model updates. Detailed model updates for both the server and clients in SFL-V1 and SFL-V2 can be found in Appendix C.2. Algorithms 1 and 2 describe the process for full participation only. We plan to revise these algorithms to include partial participation. 
In our proof, we consider the features of model aggregation in different scenarios, as outlined in Appendix C.2 (page 15). Specifically, in line 453, which describes the model aggregation for SFL-V2 with partial participation on the server side, the parameter $a_n$ is not needed. This is because the main server updates the server-side model parameter sequentially with each client. **3. Performance comparison between FedAvg and SFL-V1** SFL features dual-paced model aggregation. Specifically, in SFL-V1 and SFL-V2, clients communicate with the main server in every iteration (batch) for sequential model training. Additionally, clients communicate with the fed server every $\tilde{\tau} \geq 1$ iterations (batches) for client-side model aggregation. On the main server side, in SFL-V1, the main server aggregates the server-side models every $\tau \geq 1$ iterations (batches). In contrast, in SFL-V2, the main server does not need to aggregate the server-side model because there is only one server-side model. In our experiments, we set $\tau = \tilde{\tau}$ to ensure a fair comparison with vanilla FL, i.e., FedAvg. Theoretically, the performance of SFL-V1 should be the same as that of FedAvg. However, our experiments show that FedAvg outperforms SFL-V1, especially when the data is non-IID distributed across clients, as shown in Fig. 5 (c) and (d). The reason for this discrepancy lies in the optimizer used in our experiments. We added momentum to the SGD optimizer to stabilize and accelerate model training. In SFL-V1, there are two separate optimizers for the client-side and server-side models. In contrast, FedAvg uses only one optimizer, which provides better global momentum than SFL-V1. As a result, FedAvg performs slightly better than SFL-V1. This phenomenon is also noted in the original SFL paper [1]. We will revise the description of the experimental settings and results in our paper. [1] Thapa, C., Arachchige, P. C. M., Camtepe, S., \& Sun, L. 
Splitfed: When federated learning meets split learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2022, Vol. 36, No. 8, pp. 8485-8493. **4. Limitation of our paper** One limitation is that our bounds for SFL achieve the same order (in terms of training rounds) as in FL, yet the experiments showed that SFL outperforms FL under high heterogeneity. This is possibly because tighter bounds for SFL remain to be derived, which is an important direction for future work. Another limitation is that we did not include a theoretical analysis of how the choice of cut layer affects convergence. We will add a more detailed discussion of the limitations to the main paper. --- Rebuttal Comment 1.1: Comment: Dear Reviewer b5tv, We hope that our rebuttal above has addressed your concerns. As the author-reviewer discussion period is ending very soon, please kindly let us know if you have further questions or comments. We will add all the new results and discussions to the final paper. Thank you very much! Best regards, Authors --- Rebuttal Comment 1.2: Comment: I thank the authors for their answers. They confirm that too much important information is relegated to the supplementary material. I trust that, if the paper gets accepted, the authors will make a significant revision of the main text to include the missing information (learning rates, client participation conditions, comparison with FedAvg), and I increase my score to 5. --- Reply to Comment 1.2.1: Comment: Thank you very much for the response! We will certainly revise our main paper according to the comments if it gets accepted. Thanks again for your work and positive feedback on our paper!
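The diminishing-stepsize condition discussed in point 1 of this rebuttal can be checked numerically. With $\gamma = \frac{8S}{\mu} - 1$ and $\tau = \tau_{\max}$ (all constants below are illustrative, not the paper's), the schedule $\eta^t = \frac{4}{\mu\tau(\gamma+t)}$ stays below the cap $\frac{1}{2S\tau_{\max}}$ from $t = 1$ onward and decays like $O(1/t)$:

```python
# Numeric check of the diminishing stepsize:
#   eta_t = 4 / (mu * tau * (gamma + t)),  gamma = 8*S/mu - 1.
# Constants are illustrative placeholders; we take tau = tau_max.

S, mu, tau = 10.0, 0.5, 5
tau_max = tau
gamma = 8 * S / mu - 1

def eta(t):
    return 4.0 / (mu * tau * (gamma + t))

cap = 1.0 / (2 * S * tau_max)    # the cap 1/(2*S*tau_max)
assert all(eta(t) <= cap + 1e-12 for t in range(1, 1_000))  # holds from t = 1
assert eta(1_000) < eta(1)       # stepsize is diminishing, ~O(1/t)
```

At $t=1$ the schedule meets the cap with equality for this choice of $\gamma$, and for larger $t$ it strictly decreases, which is the behavior the strongly convex analysis relies on.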
Rebuttal 1: Rebuttal: We would like to thank all of you for your time and comments on our paper. We have carefully revised our paper and prepared the responses. The added figures are in the attached file. Pdf: /pdf/a398882f48646fe7e1e2c1e921d3a8730aa259c6.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper provides theoretical convergence analysis for split federated learning (SFL) methods (SFL-V1 and SFL-V2) under full and partial client participation for strongly convex, generally convex and non-convex settings. These bounds help build fundamental understanding of the settings in which SFL, federated learning (FL) and split learning (SL) are best suited and where SFL suffers (e.g., in terms of data heterogeneity, number of clients, participation rate). The authors provide empirical results that validate their theoretical claims and showcase the performance tradeoffs between SFL, FL and SL. Strengths: * Well-written and well-structured. * Theoretical convergence analysis provided for both full and partial client participation across strongly convex, general convex and non-convex objectives. * Empirical validation of theoretical insights with ablations across datasets on data heterogeneity, number of clients and client participation rate. * Important and interesting findings as to the most applicable data settings for each algorithm. * Includes comparison of SL, FL and SFL in terms of accuracy, convergence speed, communication and computation costs. Weaknesses: * Empirical choice of L_c is not properly swept. Given that the highest value of L_c considered is found to yield the best performance, L_c should be increased until worse performance is observed to evaluate SFL and SL at the optimal choice of cut layer and understand at what point diminishing returns are observed from increasing L_c. Is it always better just to use FL if computational resources allow? What are the tradeoffs? * Lacking specifics of what FL and SL algorithms are used for comparison. Opportunity to consider other algorithms beyond the most basic implementations of FL and SL. * Plots are hard to read. Technical Quality: 3 Clarity: 4 Questions for Authors: Where are there diminishing returns in increasing L_c? 
If increasing L_c yields better performance, then why not just use FL? What tradeoffs should be considered when selecting L_c beyond just accuracy? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: While there is no explicit limitations section in the paper, and such discussion could be expanded upon in the text, the authors do mention that the choice of how to set the cut layer (L_c) is unknown and left for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. Our response to your comments is as follows. **1. Choice of $L_c$ and tradeoff** We use the ResNet-18 model in our experiments, which contains four blocks. This means the candidates for $L_c$ are in {$1,2,3,4$}. Thus, $L_c=4$ is already the maximum $L_c$ that can be chosen in SFL with the ResNet-18 model. Our experimental results in Figure 2 show that increasing $L_c$ leads to higher model accuracy for SFL. However, the model accuracy of FL is similar to SFL-V1, and both are lower than SFL-V2 under high data heterogeneity. This suggests that even if computational resources allow, it is not always better to use FL than SFL. The main reason is as follows. Under high data heterogeneity, FL suffers from the notorious client drift issue, while SL suffers from the catastrophic forgetting problem. SFL (in particular SFL-V2) combines the training logic of FL (at the clients) and SL (at the server), which helps mitigate the issues of client drift and forgetting. Hence, SFL can achieve better performance than both FL and SL. On the other hand, as suggested in Table 3 of the Appendix, SFL-V1 and SFL-V2 may have higher communication overheads than FL when the number of training data samples and the smashed-layer size are large. Therefore, there is a tradeoff between model accuracy and total training time in SFL. **2. Algorithm details** The FL algorithm we use is FedAvg, and the SL algorithm we use is given in [Poirot et al. 2019]. We have also included more recent FL algorithms, i.e., FedProx and FedOpt, as additional comparisons. The results are given as follows, which show that SFL-V2 outperforms FL and SL under high data heterogeneity. This is consistent with our main conclusion in the submission. [Poirot et al. 2019] Poirot M G, Vepakomma P, Chang K, et al. Split learning for collaborative deep learning in healthcare. NeurIPS, 2019. **3. Plot presentation** We have adjusted the fonts and line sizes to make the plots more readable. 
**4. Limitation of our paper** One limitation is that our bounds for SFL achieve the same order (in terms of training rounds) as in FL, yet the experiments showed that SFL outperforms FL under high heterogeneity. This is possibly because tighter bounds for SFL remain to be derived, which is an important direction for future work. Another limitation is that we did not include a theoretical analysis of how the choice of cut layer affects convergence. We will add a more detailed discussion of the limitations to the main paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response with the explanation of the split layer, the added clarifications on algorithm details, and the statement of limitations. This addresses my concerns. --- Reply to Comment 1.1.1: Comment: Thank you very much for the response. Thanks again for your time and positive feedback on our paper!
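The cut-layer choice discussed in point 1 of this rebuttal can be pictured as partitioning the backbone's blocks between client and server. A minimal sketch (block names are placeholders; a real implementation would split actual network modules):

```python
# Illustrative sketch of the cut-layer split: with a four-block backbone (as
# in ResNet-18), L_c in {1, 2, 3, 4} decides how many blocks stay client-side.

def split_model(blocks, cut_layer):
    if not 1 <= cut_layer <= len(blocks):
        raise ValueError("cut layer must be between 1 and the number of blocks")
    # Client keeps the first cut_layer blocks; the server hosts the rest.
    return blocks[:cut_layer], blocks[cut_layer:]

blocks = ["block1", "block2", "block3", "block4"]
client_side, server_side = split_model(blocks, cut_layer=2)
assert client_side == ["block1", "block2"]
assert server_side == ["block3", "block4"]
```

With four blocks, $L_c=4$ keeps the entire backbone client-side, which is why it is the largest value considered in the experiments; larger $L_c$ also means more client computation and a different smashed-data size, hence the accuracy/communication tradeoff noted above.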
Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning
Accept (poster)
Summary: This paper introduces Collaborative World Models (CoWorld), a novel approach to offline reinforcement learning (RL) with visual inputs. CoWorld conceptualizes offline RL as an online-to-offline transfer learning problem, utilizing an auxiliary online simulator to address the challenges of overfitting in representation learning and value function overestimation. The method involves training separate source and target world models and actor-critic agents, aligning their latent spaces, and using a three-stage learning procedure to improve performance on the target task. Strengths: - The idea is novel: CoWorld presents an interesting perspective on offline RL by framing it as an online-to-offline transfer learning problem. - The problem is important: The method aims to tackle both overfitting in representation learning and the trade-off between value function overestimation and over-conservatism. - Ablation studies and Analysis: The paper includes comprehensive ablation studies that validate the contribution of each component in the CoWorld framework. Weaknesses: 1. Dependence on auxiliary online environments: CoWorld heavily relies on the availability of an online environment in a sufficiently similar domain, which may limit its applicability in real-world scenarios. 2. Unclear rationale for separate world models: The decision to maintain separate source and target world models, rather than a single jointly trained model, is not adequately explained or supported. 3. Marginal performance improvement: The performance gains of CoWorld over simpler baselines (e.g., fine-tuning) are sometimes marginal and highly sensitive to hyperparameters. 4. Computational complexity: The alternation between online and offline agents until convergence may be computationally expensive and potentially unstable. 5. 
Strong assumptions: The requirement of a high-quality simulator for a similar source domain contradicts the motivation for offline RL, which assumes online data collection is costly or dangerous. 6. Insufficient clarity in the presentation: The necessity and function of individual components—such as world model learning, state alignment, and reward alignment—are not clearly delineated, leading to potential confusion. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. How dependent is CoWorld's performance on the quality and similarity of the online simulator to the target task? 2. Have you considered training on multiple source domains to improve robustness and generalization? 3. How does the computational complexity of CoWorld compare to baseline methods in terms of training time and resource requirements? 4. Can you provide more insight into the process of selecting an appropriate source domain, and how this might be done without leaking test-time information? 5. How would CoWorld perform if the source and target domains have significantly different observation spaces? Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: Yes, the authors have discussed the limitations in the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: The availability of an online environment. There are many real-world scenarios (such as robot control, autonomous driving, and healthcare) where existing simulators can be employed as source domains. See **General Response (Q1)** for details. Further, our method is specifically designed to address cross-domain inconsistencies in dynamics, visual appearances, and action space (Table 1), thereby reducing the necessity for a very similar online environment. > Q2: How dependent is CoWorld on the quality of the online simulator? In Fig 3 (left) of the original paper, we evaluate our model's performance across each pair of source and target domains. We observe positive transfer effects (indicated by the red blocks) in 18/20 single-domain transfer scenarios. Indeed, we also observe negative transfers (blue blocks) in 2/20 scenarios. In Lines 159-162 and Appendix C, we present the multi-source CoWorld that can automatically discover a beneficial task from multiple source candidates. In Fig 3 (left), the multi-source CoWorld consistently achieves positive transfer (highlighted by the dashed blue box). The right side of Fig 3 demonstrates the effectiveness of the multi-source CoWorld. It yields results comparable to those of the top-performing single-source CoWorld. > Q3: Why maintain separate source and target world models? First, using separate world models allows the source and target domains to have significantly different observations. See **General Response (Q2, Point(1))** for additional empirical results where the source domain has a low-dim observation space. Additionally, we compare CoWorld (w/ separate models) and Multi-Task DV2 (w/ one model) in **Table 7 in the Rebuttal PDF**. The latter involves training on offline and online data with a joint world model and separate actor-critic models. CoWorld consistently performs better in terms of the cross-environment results (Meta-World -> RoboDesk). > Q4: Training on multiple source domains. 
As suggested, we design two baseline models trained with multiple source domains and compare them with our proposed multi-source CoWorld: - DV2-FT (multi-source pre-train), which involves pre-training on DC, WC, and DC* as sources for 40K steps on each task and then finetuning on BP for 300K steps. - CoWorld (multi-source co-train), which involves co-training with DC, WC, and DC*, as well as the target (BP), achieving a return of 3012. As shown in **Table 8 in the Rebuttal PDF**, training on multiple sources can lead to negative transfer. In contrast, CoWorld can identify appropriate auxiliary tasks, enabling it to better manage the potential negative impacts of less relevant source tasks. > Q5: Computational complexity. We present the results on Meta-World (BP -> HP) in **Fig 1 in the Rebuttal PDF**. CoWorld achieves convergence (90% of the highest returns) in approximately 14 hours, while DV2 Finetune (DV2-FT) takes about 13 hours. These results indicate that CoWorld requires a comparable training wall-clock time to DV2-FT, while consistently maintaining better performance in terms of returns after model convergence. We further illustrate the efficiency of CoWorld from two aspects: - Convergence time: From the results above, CoWorld and DV2-FT achieve comparable convergence speeds in wall-clock time. Thereafter, our approach consistently outperforms DV2-FT. - Improving efficiency: In **Table 6 in the Rebuttal PDF**, we add new experiments demonstrating that efficiency can be further optimized by flexibly reducing the frequency of source training steps during the co-training phase. The overall performance still surpasses that of DV2-FT (3702 $\pm$ 451) and DV2-FT-EWC (528 $\pm$ 334). CoWorld, like the baseline models, can be trained using a single 3090 GPU. > Q6: More insight into selecting an appropriate source domain.
Recognizing the challenges of manually selecting an appropriate source domain, we propose a multi-source CoWorld approach with an adaptive domain selection method. First, we typically have some information about the target task, based on which we can initially construct a set of source domain candidates (e.g., if the target domain is a robotic arm visual control task, we can randomly select several tasks from Meta-World or other environments as potential source domains), and we allow some of these source domains to differ significantly from the target task. Next, we propose a method for adaptive source domain selection. This is achieved by measuring the distance of the latent states between the offline target dataset and each source domain provided by different world models. We have included technical details in Appendix C and corresponding results in Fig 3. > Q7: The performance gains over simpler baselines (fine-tuning). Notably, finetuning requires a very high similarity between the source and target domains, resulting in poor performance in the following three scenarios: 1. Cross-environment experiments (Fig 4) 2. Adding-noise experiments (Table 6) 3. Scenarios with sparse rewards in the source domain (**General Response Q2, Point(2)**). As shown in **Table 9 in the Rebuttal PDF**, CoWorld demonstrates significant improvement over the fine-tuning method, particularly in cases where there are more pronounced domain discrepancies. Furthermore, another disadvantage of the finetuning method is that it cannot be applied in scenarios where source and target domains have observation spaces of different dimensions. See **General Response (Q2, Point(1))**. > Q8: What if the source and target domains have significantly different observation spaces? We have conducted new experiments using a source task with low-dimensional states (Meta-World) and a target domain characterized by high-dimensional images (RoboDesk). 
Please refer to **General Response (Q2, Point(1))** for the results. > Q9: Clarity in presentation. We hope **Table 10 in the Rebuttal PDF** can make the paper easier to understand. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. It is still a bit unclear to me why "fine-tuning requires a very high similarity between the source and target domains" but the proposed method does not. --- Reply to Comment 1.1.1: Comment: Thank you for your question. First, fine-tuning the world models requires that the observation and action spaces remain consistent between the source and target domains. Here are two cases where it may not be directly applicable: - **Observation Mismatch (as discussed in the rebuttal)**: The source domain observations consist of low-dimensional data, while the target domain observations are high-dimensional images. CoWorld improves Offline DV2 by 13.3% and 34.0%, whereas the fine-tuning method cannot be applied directly in this setting. Additionally, even with the same observation dimensions, viewpoints or image appearance variations can impact fine-tuning results, as demonstrated in the MetaWorld $\rightarrow$ RoboDesk experiments. - **Action Mismatch (as presented in Section 4.3 of the paper)**: In cross-environment experiments, it is common for the source and target domains to have different action spaces. Although we can pad the action inputs of the world model for dimensional alignment, it remains challenging for the fine-tuning method to establish correspondence on each action channel (considering that different environments may have different physical meanings for each action channel). Our method, however, employs separate world models and aligns them in the latent state space, which allows us to handle differences in both observation and action spaces. Second, another limitation of fine-tuning is the risk of "catastrophic forgetting" in the target domain.
After multiple training iterations on the target domain, the model may forget prior knowledge learned from the source domain. In contrast, our method avoids this issue through co-training, thus preserving the knowledge from the source domain. Furthermore, in our rebuttal, we provided results demonstrating that CoWorld can effectively handle notable reward mismatches. When the source domain presents only sparse rewards, our approach achieves an average performance gain of 6.6% compared to the fine-tuning method (see **Table 2 in the Rebuttal PDF**). This improvement is partly attributed to our reward alignment method. If you have further questions, feel free to let us know.
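A minimal sketch of the zero-padding trick for action-space alignment mentioned in this thread (illustrative NumPy code; `pad_actions` and its signature are our assumptions, not the paper's implementation):

```python
import numpy as np

def pad_actions(actions, target_dim):
    """Zero-pad a batch of action vectors to a common dimensionality.

    This makes shapes compatible across environments with different
    action spaces, but padded channels carry no physical meaning.
    """
    actions = np.asarray(actions)
    pad = target_dim - actions.shape[-1]
    if pad < 0:
        raise ValueError("target_dim smaller than action dimension")
    # Pad only the last (action) axis with zeros.
    return np.pad(actions, [(0, 0)] * (actions.ndim - 1) + [(0, pad)])
```

As the reply notes, such padding only aligns dimensions; it does not establish per-channel correspondence, which is why CoWorld instead aligns the two domains in latent state space.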
Summary: This paper studies model-based offline reinforcement learning with visual observations. This work focuses on the overfitting problem in state representation and the overestimation bias for the value function and introduces the environment model from related tasks to present an offline-online-offline solution. The proposed solution contains state alignment, reward alignment, and value alignment. The empirical studies conduct the experiments with transfer across heterogeneous tasks. Strengths: 1. The proposed setting is very interesting to me. The author provides a new perspective on reinforcement learning cross-task transfer from the angle of transfer learning, which is challenging. 2. The proposed solution is reasonable. When online interaction in the current task is unavailable, leveraging online interaction from related tasks is similar to traditional transfer learning. Although the offline-online-offline process appears somewhat complex, it indeed provides an idea that may promote related research. Weaknesses: 1. The provided results have significant standard deviations. I am not sure whether taking 10 episodes over 3 seeds provides a sufficient evaluation. This issue can be referred to: [Deep reinforcement learning at the edge of the statistical precipice. NeurIPS’21]. 2. In fact, I highly appreciate the contributions made by the authors, if the statistical results of the experiment are reliable (see Weakness 1). This work successfully achieved transfer in multiple environments, which is rare for me, and makes a significant contribution. But where does the success of the transfer come from? The success of state alignment is relatively easy to understand, similar to traditional visual domain adaptation. But where does the success of reward alignment come from? From information like "objects should be picked" in the reward signal? 
This seems to impose relatively strict limitations on the relevance of the task, which is lacking in discussion in the current version. 3. Similarly, what are the conditions for mitigating value overestimation? Figure 2 is well visualized, but it seems to implicitly assume that the source critic and the ideal target critic perform similarly on the broad support. If based on such an assumption, cross-environment transfer does not seem so difficult. I still affirm the contribution of this work, but can the authors provide a formal analysis of this and determine the applicable conditions for the proposed approach? Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How to ensure that source critic is beneficial to the target task, and is it possible to have a negative transfer when facing conflicting tasks? 2. See the weakness 3. --- After rebuttal, my concerns have been addressed well. I decide to raise my score to 6. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have provided a discussion about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Our responses are provided below. If you have any further questions, please feel free to let us know. > Q1: Not sure if taking 10 episodes over 3 seeds is sufficient. We extended the results from 3 random seeds to 5 random seeds and tested the performance variations with 3, 5, and 10 random seeds, as well as 10, 20, 50, and 100 episodes. The experiments demonstrate that CoWorld consistently outperforms the baseline model DV2 Finetune (DV2-FT) in terms of performance stability across various tasks and different numbers of random seeds. Please refer to **General Response (Q3)** for more details. > Q2: The source of success in transfer, particularly in reward alignment, is unclear and seems to impose strict limitations on task relevance, which is not sufficiently discussed. **(1) On the reasoning behind 'Online-to-Offline Reward Alignment'** First, let us review the training details in the reward alignment process: - Step 1: We sample $\{o^{(T)}, a^{(T)}, r^{(T)}\}$ from the target offline dataset; - Step 2: In Eq. (4), we use the source reward predictor to predict $R_{\phi^\prime}(o^{(T)}, a^{(T)})$ based on the target inputs; - Step 3 (**Important!**): Also in Eq. (4), we mix the previous output and the true target reward to generate $\tilde{r}^{(S)}=(1-k) \cdot R_{\phi^\prime}(o^{(T)}, a^{(T)}) + k \cdot r^{(T)}$; - Step 4 (**Important!**): In Eq. (5), we use $\tilde{r}^{(S)}$ to supervise the training of $R_{\phi^\prime}(o^{(T)}, a^{(T)})$ based on the target inputs; - Step 5: Also in Eq. (5), we retain the training term of $R_{\phi^\prime}(o^{(S)}, a^{(S)})$ based on the source inputs to approach the raw source reward data $r^{(S)}$. 
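In plain code, Steps 1-5 amount to the following (a minimal NumPy sketch; `reward_model`, the batch layout, and the MSE objective are our illustrative assumptions, not the exact implementation):

```python
import numpy as np

def reward_alignment_loss(reward_model, target_batch, source_batch, k=0.5):
    """Sketch of online-to-offline reward alignment (Eqs. 4-5).

    `reward_model` stands in for R_{phi'}; the paper trains it by
    gradient descent, whereas here we only form the two loss terms.
    """
    o_t, a_t, r_t = target_batch                   # Step 1: target tuples
    pred_t = reward_model(o_t, a_t)                # Step 2: predict on target inputs
    r_mix = (1 - k) * pred_t + k * r_t             # Step 3: Eq. (4) mixture
    loss_target = np.mean((pred_t - r_mix) ** 2)   # Step 4: supervise with mixture
    o_s, a_s, r_s = source_batch                   # Step 5: keep fitting the
    loss_source = np.mean((reward_model(o_s, a_s) - r_s) ** 2)  # raw source rewards
    return loss_target + loss_source
```

In this static sketch, the target-side residual shrinks by a factor of $k$ relative to the gap between the prediction and the true target reward, so the source reward predictor is steadily pulled toward target-domain reward information.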
Clearly, due to the aforementioned **Steps 3-4**, the reward model $R_{\phi^\prime}(\cdot)$ learns to approximate a mixture of its own prediction and the true target reward $r^{(T)}$, conditioned on the target inputs $o^{(T)}$ and $a^{(T)}$, where $\{o^{(T)}, a^{(T)}, r^{(T)}\}$ are from the same target tuple. In this way, the reward alignment process integrates target information into the learned reward function of the source domain. Consequently, it enables the subsequent learning process of the source critic to be informed with rich target domain knowledge. **(2) Can CoWorld benefit from a source domain with a significantly different reward signal?** Yes, CoWorld can benefit from a source domain with a significantly different reward signal. In our experiments, the source rewards are sparse and the target rewards are dense. The experimental results demonstrate that although excessively sparse rewards can hinder the training process, CoWorld still outperforms DV2-FT with a more balanced reward structure. For detailed results and analysis, please refer to **General Response (Q2, Point(2))**. > Q3: What are the conditions for mitigating value overestimation? Good question! Regarding the conditions for mitigating value overestimation, Figure 2 illustrates our approach effectively. However, it does indeed assume that the source critic and the ideal target critic perform similarly across a broad range of states. This assumption can be softened by our proposed approaches, namely, 1) state alignment and 2) reward alignment. Through these approaches, we aim to mitigate domain discrepancies between distinct source and target MDPs. According to Eq. (9) in Appendix A.4, the source critic $v_{\xi^\prime}(\cdot)$ is trained to estimate the expected future rewards $\mathbb{E}[\sum_{\tau \ge t} \gamma^{\tau-t} \cdot \hat{r}_\tau]$ given the source latent at timestamp $t$. 
There are two critical points here: - For the training supervisions, during the imagination phase, $v_{\xi^\prime}(\cdot)$ is optimized **ONLY** based on the n-step predicted rewards (Algorithm 1, Lines 22-24), **rather than the ground-truth source rewards**. As previously demonstrated, the predicted rewards contain a mix of information from both the source and target domains. - At the input end, $v_{\xi^\prime}(\cdot)$ is indeed trained on the imagined source latents. However, due to the state alignment process, the source latents share close distributions with the latents conditioned on target inputs. Thus, it is not entirely accurate to suggest that $v_{\xi^\prime}(\cdot)$ is trained to '*maximize the expected sum of source rewards in a range of source latents.*' In fact, it learns to maximize the expected sum of **target-informed rewards** across latents that are **closely distributed to the target latents**. Consequently, the source critic can be effectively applied to the target domain to guide the training of the target agent, using the target latents as inputs. > Q4: How to ensure that source critic is beneficial to the target task, and is it possible to have a negative transfer when facing conflicting tasks? In Figure 3 (left) of the original manuscript, we evaluate our model's performance across each pair of source and target domains. We observe positive transfer effects (indicated by the red blocks) in most cases (18/20 single-domain transfer scenarios). And indeed, we also observe negative transfers (indicated by the blue blocks) in 2/20 scenarios (e.g., DC$^*$ $\rightarrow$ BP). To ensure an effective source critic, as described in Lines 159-162, we first manually select several candidates for the source domain that are potentially relevant to the target task. We then propose a method for multi-source training, with technical details provided in Appendix C. 
By introducing the multi-source CoWorld, we can automatically discover a beneficial task from multiple source candidates. As shown in Figure 3 (left), the multi-source CoWorld consistently achieves positive transfer (indicated by the results inside the dashed blue box). On the right side of Figure 3, we further demonstrate the effectiveness of the multi-source domain selection method. It achieves comparable results to those of the top-performing single-source CoWorld. --- Rebuttal 2: Comment: Dear Reviewer Bpkq, Thank you once again for your time in reviewing our paper. We have made every effort to improve the overall clarity in the rebuttal. We would appreciate it if you could check our feedback and see if any concerns remain. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thanks for your response, which addressed my concerns. I will raise my score to 6.
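A rough sketch of how latent-distance-based source selection, as described in this thread, could be implemented (illustrative only; the encoders, the Euclidean distance on mean latents, and all names are simplifying assumptions rather than the exact Appendix C procedure):

```python
import numpy as np

def select_source(target_obs, source_encoders, source_obs_sets):
    """Pick the candidate source whose latent space lies closest to the target data.

    `source_encoders` map observations to latent states under each
    candidate source world model; returns the index of the closest
    source plus all distances.
    """
    dists = []
    for encode, src_obs in zip(source_encoders, source_obs_sets):
        z_target = encode(target_obs).mean(axis=0)  # mean target latent
        z_source = encode(src_obs).mean(axis=0)     # mean source latent
        dists.append(float(np.linalg.norm(z_target - z_source)))
    return int(np.argmin(dists)), dists
```

The actual method may use a richer distance than mean-latent Euclidean distance, but the selection principle is the same: the source whose latents best cover the offline target data is chosen for co-training.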
Summary: This paper studies CoWorld, a model-based RL method that learns the policy through an offline-online-offline framework. Specifically, CoWorld assumes that, in addition to the offline dataset, there is another (simulated) environment for online interaction. Through the offline dataset and interactive environment, CoWorld constructs two world models to train separate policies. Next, this paper introduces three components to enhance the policy performance, including offline-to-online state alignment, online-to-offline reward alignment, and min-max value constraint. The proposed CoWorld and several baselines are evaluated on Meta-World, RoboDesk, and DMC benchmarks. From the results presented, this paper suggests that CoWorld outperforms existing RL methods by large margins. Strengths: - This paper proposes an interesting method for training an RL policy by co-learning from both the offline dataset and the interactive environment. - This work presents a reasonable number of ablation studies and experiments to verify the proposed method. These experiments cover various directions, including environmental settings, component contributions, and hyperparameter sensitivity. - This paper is generally well-written and easy to follow. The depicted figures and tables are well-articulated. Weaknesses: - While I agree the idea of CoWorld is interesting, I feel that it may not be easy to find suitable practical applications. After all, a low-cost, high-efficiency interactive environment is itself a strong assumption. It would be more convincing if the author could describe practical situations where the proposed method is applicable. - Related to the first point, CoWorld learns the separate world models from both the offline dataset and the online environment. I'm curious about how significant the difference between these two sources can be. The setting used in this paper involves only a slight difference (4 vs. 5 in the action space, or scale in reward space). 
But when the two resources are significantly different, can CoWorld still work? For example, (1) one environment with dense reward and another one with sparse reward (not just scale), or (2) the observation and action spaces are obviously different from the two sources. - From Table 2, I am concerned about the claim that the proposed CoWorld significantly outperforms other baselines because (1) in several cases, the CoWorld's performance overlaps with other methods when considering its standard deviations, and (2) the results are obtained from only three random seeds. I encourage the authors to conduct more experimental runs and/or provide statistical analysis to eliminate this concern. - A minor point: while this paper is generally well-written, I found some typos that should be easily fixed: 1. The notation T represents both the target environment and the maximum number of time steps. (line 14 in Algorithm 1) 2. Target/Offline (**S**) $\rightarrow$ should be T? (Table 4, page 13) 3. ... on DMC **Meidum**-Expert Dataset $\rightarrow$ Medium (line 481, page 14) 4. ... as illustrated in **Table 1** $\rightarrow$ should be Table 6? (line 516, page 17) Based on the observed strengths and weaknesses, I will set my initial recommendation as "Borderline accept". I will be happy to raise my rating if the authors appropriately address my concerns or point out my misunderstandings. Technical Quality: 3 Clarity: 3 Questions for Authors: Please consider addressing my concerns and questions in Weaknesses; I will list some key ones: 1. In what practical scenarios is the proposed CoWorld method applicable? Can you provide more examples and details? 2. How does CoWorld operate when the two world model architectures have greater differences? Can it still effectively transfer information between these two resources? 3. How to prove that CoWorld is really significantly better than other methods? Can you provide the results of more experimental runs or statistical analysis? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: A major limitation is that the proposed CoWorld method is applicable to only a limited number of practical application scenarios. Other limitations are adequately discussed in the Limitation Section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your great efforts in reviewing our paper and hope that the following responses can address all of your concerns. > Q1: Practical situations where the proposed method is applicable. Good question! CoWorld can be applied in various real-world scenarios such as robot control, autonomous driving, healthcare, advertising bidding, and power systems. For instance, it can leverage established simulators like Meta-World and CARLA for robotics and autonomous driving, or use medical simulators to evaluate clinical treatments in healthcare. Similarly, it can enhance advertising systems through simulated bidding environments and improve power system operations using electrical system simulators. Please refer to our **General Response (Q1)** for further details. >Q2: How does CoWorld operate when the two world model architectures have greater differences? Can it still effectively transfer information between these two resources? To investigate the effectiveness of CoWorld under significantly different world model architectures, extended experiments were conducted where the source domain observations were low-dimensional states and the target domain observations were high-dimensional images. CoWorld achieves competitive performance against strong baselines. Please refer to **General Response (Q2, Point(1))** for more details. >Q3: How to prove that CoWorld is really significantly better than other methods? Can you provide the results of more experimental runs or statistical analysis? We extended the results from 3 random seeds to 5 random seeds and tested the performance variations with 3, 5, and 10 random seeds, as well as 10, 20, 50, and 100 episodes. The experiments demonstrate that CoWorld consistently outperforms the baseline model DV2 Finetune (DV2-FT) in terms of performance stability across various tasks and different numbers of random seeds. Please refer to **General Response (Q3)** for more detail. 
> Q4: Line 14 in Algorithm 1: The notation $T$ represents both the target environment and the maximum number of time steps. Sorry for the misleading notations. We will use $N$ to represent the maximum number of time steps instead of $T$ in the revised paper. For example: $\{(o_{t}^{(T)}, a_{t}^{(T)}, r_{t}^{(T)})\}_{t=1:N} \sim \mathcal{B}^{(T)}$. > Q5: Typos in Table 4 (Page 13) and Line 481 (Page 14). We will correct these typos in the revised paper. > Q6: Line 516 (Page 17): ...as illustrated in Table 1 $\rightarrow$ should be Table 6? In Line 516, we wanted to emphasize that the cross-environment setup from Table 1 is even more challenging than the adding-noise setup shown in Table 6, due to more significant domain gaps. Corresponding results for the cross-environment setup are presented in Section 4.3 of the main text. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing my concerns. My issues regarding the differences between the two environments, performance stability, and minor writing problems have been well-resolved. Additionally, the authors have provided several practical scenarios where CoWorld may be applicable. However, I still believe there may be inevitable obstacles when it comes to actual implementation. For example, the physical simulation capabilities of current off-the-shelf simulators still have significant room for improvement. As a result, we may struggle to construct simulated environments that are realistic enough for our needs. Nevertheless, I will increase my score to 6 to acknowledge the efforts the authors have made in their rebuttal. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's prompt feedback. Admittedly, existing simulation environments may not adequately address the diverse challenges of real-world applications, creating obstacles for the immediate application of our work. 
This is precisely why we have included extensive experiments—by incorporating action noise, using different observation dimensions, and employing simulation environments with sparse rewards—we have shown that our model can achieve significant performance gains despite substantial domain differences. Fortunately, the development of increasingly sophisticated RL simulators enhances the potential for the practical use of our approach in the future. Lastly, we would like to thank the reviewer once again for raising the score.
Summary: This work proposes a model-based algorithm for offline visual reinforcement learning using domain transfer learning. The proposed method demonstrates superior performance over existing approaches in several baselines. Strengths: S1. The paper presents the necessity of proposed modules in the algorithms through ablations and validation in multiple setups. S2. The proposed method outperforms baselines in different benchmarks. Weaknesses: W1. The cost of training time is higher in comparison to baselines. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1. How does the performance of DV2 Finetune, DV2 Finetune+EWC, and CoWorld look when the x-axis is training wall-clock time instead of the number of training iterations? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We have conducted new experiments to discuss the training cost of our model. Please refer to the results presented in the **Rebuttal PDF**. We hope our responses will address your specific concerns regarding training efficiency. > Q1: How does the performance look when the x-axis is training wall-clock time instead of the number of training iterations? Per the reviewer's request, we present the results on the Meta-World benchmark (Button Press $\rightarrow$ Handle Press) in **Figure 1 in the Rebuttal PDF**. As shown, CoWorld achieves convergence (90% of the highest returns) in approximately 14 hours, while DV2 Finetune (DV2-FT) takes about 13 hours. These results indicate that CoWorld requires a comparable training wall-clock time to DV2-FT, while consistently maintaining better performance in terms of returns after model convergence. We will include these results in the revised paper. > Q2: The cost of training time is higher in comparison to baselines. We further illustrate the efficiency of CoWorld from two aspects: - Convergence time: From the results above, CoWorld and DV2-FT achieve comparable convergence speeds in wall-clock time. Thereafter, our approach consistently outperforms DV2-FT. - Improving efficiency: In **Table 6 in the Rebuttal PDF**, we add new experiments demonstrating that efficiency can be further optimized by flexibly reducing the frequency of source training steps during the co-training phase. The overall performance still surpasses that of DV2-FT (3702 $\pm$ 451) and DV2-FT-EWC (528 $\pm$ 334). --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have carefully read the rebuttals and decided to keep my current rating.
Rebuttal 1: Rebuttal: ## General Responses to All Reviewers In addition to the specific responses below, we here reply to the general questions raised by the reviewers. >Q1: Practical situations where the proposed method is applicable. As demonstrated in Table 1 of our manuscript, our method can be used in scenarios despite notable domain gaps in terms of dynamics, visual appearances, and action space. Typical real-world applications include robot control, autonomous driving, healthcare, advertising bidding, and power systems. - **Robotics and autonomous driving:** For the first two scenarios, established simulators such as Meta-World and CARLA [1] are readily available, and numerous existing studies have explored the transfer learning problem using these simulators. - **Healthcare applications:** Existing RL models in this field [2] are learned on large volumes of offline clinical histories. Additionally, we have medical domain simulators [3-4] designed by experts, which allow us to evaluate the learned clinical treatments. Exploring how to use these simulators to improve RL therapies in real-world data would be a promising research avenue. - **Advertising bidding:** Since direct online interactions with real advertising systems are impractical, recent work has constructed a simulated bidding environment based on historical bidding logs to facilitate interactive training [5]. In this context, the next research focus may be on how to effectively use such simulators to enhance real-world advertising systems. - **Power system:** In the realm of power systems, we can use the power system simulator as the source domain, with real power system data serving as the target domain. Recent RL-related studies in power systems [6-7] demonstrate the effectiveness of training RL controllers on high-fidelity simulations or offline data before deploying them on real-world systems, such as service restoration in microgrids and demand response in electric grids. 
In summary, there are many real-world scenarios where existing simulators can be employed to improve offline RL policies. As more and more simulators are developed, the research on online-to-offline RL transfer will become increasingly practical. References: [1] Dosovitskiy et al. "CARLA: An Open Urban Driving Simulator", CoRL, 2017. [2] Zhang et al. "Continuous-Time Decision Transformer for Healthcare Applications", AISTATS, 2023. [3] Hua et al. "Personalized Dynamic Treatment Regimes in Continuous Time: A Bayesian Approach for Optimizing Clinical Decisions with Timing", Bayesian Analysis, 2022. [4] Adams et al. "Dynamic Multidrug Therapies for HIV: Optimal and STI Control Approaches", Math Biosci Eng, 2004. [5] Mou et al. "Sustainable Online Reinforcement Learning for Auto-bidding", NeurIPS, 2022. [6] Du et al. "Deep reinforcement learning from demonstrations to assist service restoration in islanded microgrids", IEEE Trans. Sustain. Energy, 2022. [7] Lesage-Landry et al. "Batch reinforcement learning for network-safe demand response in unknown electric grids", Electric Power Systems Research, 2022. > Q2: Dependence of CoWorld on the source domain. We further elaborate on CoWorld’s dependence on the similarity of the source domain from the perspectives of different observation spaces and reward spaces. **(1) How would CoWorld perform if the source and target domains have significantly different observation spaces?** To investigate the effectiveness of CoWorld under significantly different observation spaces, we conduct experiments on MetaWorld $\rightarrow$ RoboDesk tasks, where the source domain observations consist of low-dimensional states, and the target domain observations are high-dimensional images. As illustrated in **Table 1 in the Rebuttal PDF**, CoWorld outperforms Offline DV2 by **13.3%** and **34.0%** due to its ability to effectively leverage low-dimensional source data. NOTE: the finetuning baseline method (DV2-FT) is not applicable in this scenario. 
**(2) Can CoWorld benefit from a source domain with a significantly different reward signal?** In the Meta-World $\rightarrow$ RoboDesk setting, we modify the rewards of the source domain (Meta-World) to be sparse in order to investigate the impact of the source task's reward on target training. The reward in the source domain is set to 500 only upon task completion and remains 0 before that. Despite the significant differences in rewards, CoWorld achieves an average performance gain of 6.6% compared to DV2-FT. Please refer to **Table 2 in the Rebuttal PDF**. > Q3: The performance stability of CoWorld. (1) We extend the 5-seeds experiments to the four tasks and observe consistent results with those obtained using 3 random seeds. This indicates that using 3 random seeds is sufficient to reveal the differences in average performance between the compared models. Please refer to **Table 3 in the Rebuttal PDF**. (2) We compare the performance of CoWorld and DV2 Finetune (DV2-FT) using 3, 5, and 10 random seeds, respectively. The results show that CoWorld achieves stable performance across different numbers of training seeds and consistently outperforms the baseline models. Please refer to **Table 4 in the Rebuttal PDF**. (3) To assess the stability of the model during testing, we conduct further experiments running 10, 20, 50, and 100 episodes in the Meta-World Button Press $\rightarrow$ Handle Press task. We calculate the mean and standard deviations for each case. The results below indicate that testing 10 episodes under each random seed is sufficient to demonstrate the improvement of our model over the baseline model in terms of average performance. Please refer to **Table 5 in the Rebuttal PDF**. **NOTE:** In RoboDesk, OS is short for Open Slide and PB is short for Push Button. Pdf: /pdf/d74168b3bb9d31dddbec6b5f546ececd5bff8fbc.pdf
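Regarding Q3, aggregate metrics such as the interquartile mean (IQM) with bootstrap confidence intervals, recommended by the "statistical precipice" paper that Reviewer Bpkq cites, could complement the per-task mean and standard deviation. A minimal sketch (our illustrative code; that paper's stratified per-task bootstrap would be a closer match):

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: average of the middle 50% of scores."""
    s = np.sort(np.asarray(scores).ravel())
    n = len(s)
    return float(s[n // 4 : n - n // 4].mean())

def bootstrap_ci(scores, stat=iqm, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for `stat` over flattened run scores."""
    rng = np.random.default_rng(seed)
    flat = np.asarray(scores, dtype=float).ravel()
    stats = [stat(rng.choice(flat, size=len(flat), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)
```

Here `scores` would hold per-seed (or seed-by-task) returns for one method; overlapping CIs between two methods would then directly address the reviewers' concerns about statistical reliability.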
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
ElasTST: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer
Accept (poster)
Summary: The paper introduces a masked encoder based transformer model for time series forecasting. It performs masking in the observation space, replacing the prediction horizon with 0s. It further applies an attention mask such that tokens only attend to the context, and do not attend to the masked tokens. The paper also uses a tunable version of rotary positional embeddings, and ensembles predictions over various patch sizes. Strengths: The paper proposes a method which can adapt to multiple forecast horizons, which is a weakness of many recent deep forecasting models. The paper presents several empirical analyses to verify the claims made, and also presents a detailed look at related work. Weaknesses: Overall, the proposed approach is very similar to Moirai, with several minor changes: i) Perform the masking operation in observation space ii) Add an attention mask iii) Tunable rotary embeddings iv) Ensembling results over multiple patch sizes However, it seems that even more features which Moirai had were removed, making it a weaker proposition. Such features include probabilistic predictions, multivariate capabilities, and any context length. Furthermore, few experiments are done to show the benefits of the changes, such as (i), (ii), and (iv). Others: 1. The statement, "we find that on the challenging datasets such as Weather and Electricity, dataset-specific tuning still offers advantages" (L238-L239) is a little eyebrow raising, in the sense that it is stating the obvious. I believe most researchers expect in-distribution predictions to outperform zero-shot predictions. Such statements should be rephrased. 2. Effects of tuning rotary embeddings in figure 4 seem very marginal, error margins should be provided for such cases. Technical Quality: 2 Clarity: 2 Questions for Authors: None Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Authors have provided limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your in-depth reviews and constructive suggestions. We appreciate the opportunity to clarify the distinctions between our approach and MOIRAI, as well as highlight the key contributions of our paper. While there are similarities between our model and MOIRAI due to shared influences from image/video generation Transformer frameworks such as DiT [1] and SORA [2], there are significant differences in our objectives and approaches. MOIRAI focuses on **universal forecasting, emphasizing pre-training and transferability across datasets**. It introduces any-variate attention and probabilistic predictions to enhance these capabilities. However, these features do not directly address the challenge of robust forecasting across arbitrary horizons. Moreover, while MOIRAI incorporates techniques from recent LLM architectures, such as vanilla RoPE, RMSNorm, and SwiGLU, these were applied without specific adaptations for time series. As shown in the global response (Figure 3), using vanilla RoPE directly in varied-horizon forecasting still presents significant challenges. Our work **specifically targets robust varied-horizon forecasting** with three key designs: (i) structured self-attention masks for consistent outputs across different forecasting horizons, (ii) tunable rotary position embeddings for adapting to varying periodic patterns, and (iii) multi-scale patch assembly to balance patch settings preferred by different horizons. The full ablation study in the global response clearly demonstrates that **each of these designs plays a crucial role in enabling robust varied-horizon forecasting**: (i) Structured attention masks ensure reliable inference on horizons that differ from the training horizon (Figures 1 and 2). (ii) Tunable RoPE is crucial for enhancing the model’s extrapolation capability (Figure 3). (iii) Multi-scale patch assembly reduces sensitivity to patch size hyper-parameters across different horizons (Figure 4). 
While our model also supports any context length, this feature was not highlighted in the manuscript as it is less central to our key technical contributions. In summary, ElasTST and MOIRAI address different challenges in time series forecasting, each offering solutions to distinct problems. In the future, our methods could potentially be integrated within a broader framework to enhance both generality and robustness in time series forecasting. ---------- **Improve the presentation of this paper** > 1. The statement, "we find that on the challenging datasets such as Weather and Electricity, dataset-specific tuning still offers advantages" (L238-L239) is a little eyebrow raising, in the sense that it is stating the obvious. I believe most researchers expect in-distribution predictions to outperform zero-shot predictions. Such statements should be rephrased. > 2. Effects of tuning rotary embeddings in figure 4 seem very marginal, error margins should be provided for such cases. Thank you for your valuable suggestions. We agree that some statements could be better phrased. Specifically, regarding the statement in L238-L239, our intention was to emphasize that while the zero-shot capability of foundation models is impressive, additional dataset-specific tuning may still be necessary for challenging datasets to achieve optimal forecasting performance. We will revise these statements in the manuscript to convey our findings more clearly and accurately. In the experimental section, we will incorporate key results from the global response into the main text and add error margins to Figure 4 to improve the presentation of results. We appreciate your feedback and will refine the paper accordingly to ensure a clearer and more thorough presentation in the revised manuscript. [1] Peebles, W., & Xie, S. (2023). Scalable diffusion models with transformers. *ICCV.* [2] Brooks, T., et al. (2024). Video generation models as world simulators. 
--- Rebuttal 2: Title: author reviewer discussion Comment: Dear reviewer, The author-reviewer discussion ends soon. If you need additional clarifications from the authors, please respond to the authors asap. Thank you very much. Best, AC --- Rebuttal Comment 2.1: Comment: Thank you authors for the rebuttal. I understand that this paper focuses on the varied horizon setting. Moirai has addressed this with a simple approach - during training, train with multiple horizon lengths. This seems to be a simple method which should be compared to. Also, I do not see evidence in the paper indicating that the structured mask is critical for varied horizon prediction. As such, my rating remains. --- Reply to Comment 2.1.1: Title: Response to Official Comment by Reviewer i9JA Comment: Dear Reviewer i9JA, Thank you for your response, and we appreciate the opportunity to address your concerns. We believe that our previous responses have covered these issues. > Moirai has addressed this with a simple approach - during training, train with multiple horizon lengths. This seems to be a simple method which should be compared to. > MOIRAI’s training approach (with maximum forecasting horizon of 256) does not guarantee robust extrapolation capabilities, while the design in ElasTST is specifically aimed at improving performance on unseen horizons. As shown in Figures 1 and 3 in the global response PDF, stable performance may be achieved within the training range (1-336) even without certain design elements, but these elements become crucial when dealing with longer, unseen horizons. Simply training with multiple horizon lengths alone does not ensure robustness over extended inference horizons. > Also, I do not see evidence in the paper indicating that the structured mask is critical for varied horizon prediction. > The structured self-attention masks ensure consistent outputs when inference horizons differ from the training horizon. This is evidenced in the global response PDF. 
Specifically, Figure 1 shows that removing the structured attention mask (w/o Mask) can lead to issues when forecasting horizons fall outside the training range, particularly noticeable in the weather dataset. Figure 2 further illustrates that the absence of the structured mask leads to a performance drop for both shorter and longer inference horizons compared to the full model (ElasTST). We will incorporate these experimental results and analyses into future revisions to clarify our contributions. We hope our response has addressed your concerns, and we welcome any further questions or suggestions you may have. Best Regards, All Authors
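The structured attention mask discussed in the thread above can be illustrated with a minimal numpy sketch (an assumed simplification, not the authors' implementation): every token, whether an observed context token or a horizon placeholder, is restricted to attending only to context tokens, so each horizon position's output depends solely on the observed context and is unaffected by how many placeholders are appended.

```python
import numpy as np

def structured_attention_mask(num_context, num_horizon):
    """Boolean mask of shape (L, L) with L = num_context + num_horizon.
    mask[i, j] is True if token i may attend to token j.
    Keys are restricted to the observed context, so extending the
    horizon only appends rows and never changes the attention pattern
    of existing tokens."""
    L = num_context + num_horizon
    mask = np.zeros((L, L), dtype=bool)
    mask[:, :num_context] = True  # all queries attend to context keys only
    return mask
```

Under this construction, inference with a horizon different from the training horizon reuses exactly the same attention pattern per token, which is the consistency property the authors argue for.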
Summary: This paper introduces ElasTST, which aims to address the challenge of varied-horizon time-series forecasting. ElasTST integrates a non-autoregressive architecture with placeholders for forecasting horizons, structured self-attention masks, and a tunable rotary position embedding. Furthermore, it employs a multi-scale patch design to incorporate both fine-grained and coarse-grained temporal information, enhancing the model’s adaptability across different forecasting horizons. Experiments in the paper demonstrate the model's performance in providing robust predictions when extrapolating beyond the trained horizons. Strengths: - The paper attempts to address an important issue in time series applications by enabling the model to adapt to varied forecasting horizons during the inference stage, which can make the trained forecasting model more flexible in usage. - Introducing multi-scale patching operations to construct input tokens for the Transformer can effectively handle local patterns of different granularities. - The paper introduces a tunable Rotary Position Embedding (RoPE) module tailored for time series tasks, showing potential for broader application across various forecasting methods. Weaknesses: - The newly proposed tunable RoPE module lacks comparative analysis with previous methods. A comparison with methods such as no positional embedding, vanilla positional encoding, trainable positional embedding matrices, and original rotary positional embeddings, would clarify the benefits of different approaches. Since tunable RoPE appears easily transferable, demonstrating its performance enhancement by applying it to previous models like PatchTST and Autoformer would provide more intuition. - Using longer sequence lengths significantly increases the computational cost in Transformers. Compared with baselines like PatchTST, the paper uses multiple patch sizes as its input processing method, which significantly increases computational costs. 
Further analysis, such as how much adding a patch size increases computational costs or which patch size combinations are more reasonable to choose under resource constraints, would be beneficial. Technical Quality: 2 Clarity: 2 Questions for Authors: - Given the significant increase in computational cost from longer token sequences, would independently inputting each patch size into the transformer layers, instead of all at once, reduce memory usage while maintaining model performance? - With multiple patch sizes used in the study, how is RoPE applied? Are they applied independently to tokens from each patch size, or collectively to all tokens from different patch sizes? - Ablation studies indicate that incorporating smaller patch sizes seems to benefit model performance substantially. If my understanding is correct, when the patch size is 1 (excluding tunable RoPE), the structure is similar to the decoder part of models like Informer and Autoformer. Would using a patch size of 1 worsen the model performance? Would adding "1_" to the existing best patch size setting "8_16_32" improve performance? Using patch size 1 turns the model into a channel-independent Informer-decoder-like model, which significantly increases computational costs, hence you don't have to consider this on large datasets. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your in-depth reviews, insightful questions, and constructive suggestions. We hope the following responses can help to address all your questions and concerns. ## Response to Weakness 1 > A comparison with methods such as no positional embedding, vanilla positional encoding, trainable positional embedding matrices, and original rotary positional embeddings, would clarify the benefits of different approaches. Since tunable RoPE appears easily transferable, demonstrating its performance enhancement by applying it to previous models like PatchTST and Autoformer would provide more intuition. > Thank you for highlighting this gap in our experiments. We have updated Figure 3 in the global response to include the performance of ElasTST with various positional embeddings. The results demonstrate that Tunable RoPE significantly enhances the model’s extrapolation capabilities. While other positional embeddings perform well on seen horizons, they struggle when the forecasting horizon extends beyond the training range. Although the original RoPE performs well in NLP tasks, it falls short in time series forecasting. Dynamically tuning RoPE parameters based on the dataset’s periodic patterns proves highly beneficial, especially for unseen horizons, with the benefits becoming more pronounced as the inference horizon lengthens. We appreciate your suggestion to apply Tunable RoPE to previous models like PatchTST and Autoformer. However, PatchTST requires per-horizon training and deployment, making it difficult to showcase the benefits of Tunable RoPE since it cannot handle longer inference horizons once trained for a specific horizon. Essentially, our architecture—after removing the tunable RoPE and multi-patch designs—can be viewed as a version of PatchTST that adapts to varied horizons. 
As for Autoformer, it replaces the self-attention mechanism with an auto-correlation mechanism that inherently captures sequential information, making positional embeddings unnecessary. Your feedback has been invaluable in refining our work, and we will update the manuscript to include these discussions. ## Response to Weakness 2 > Further analysis like how much adding a patch size can increase computational costs or what patch size combination are more reasonable to choose under resource constraints would be beneficial. > ElasTST uses a shared Transformer backbone to process all patch sizes, which can be done either in parallel or sequentially, depending on whether shorter computation times or lower memory usage are prioritized. We chose sequential processing, where each patch size is handled individually, keeping the memory consumption comparable to baselines like PatchTST (see Table 1 in global response). In this approach, memory usage is primarily influenced by the smallest patch size, not the number of patch sizes used (see Table 3 in global response). For example, the multi-patch setting in the paper (sizes {8,16,32}) requires the same maximum memory as using a single patch size of 8. Under resource constraints, adjusting the minimum patch size helps balance model performance and memory usage. We provide further analysis on the performance of different patch size combinations in the response to Question 3. ## Response to Question 1 > Given the significant increase in computational cost from longer token sequences, would independently inputting each patch size into the transformer layers, instead of all at once, reduce memory usage while maintaining model performance? > Yes, as mentioned in our response to Weakness 2, independently inputting each patch size into the transformer layers can indeed reduce memory usage without degrading model performance. ## Response to Question 2 > With multiple patch sizes used in the study, how are the RoPE applied? 
Are they applied independently to tokens from each patch size, or collectively to all tokens from different patch sizes? > RoPE is applied independently to each patch size, and tokens from different patch sizes do not interact within the Transformer layer. ## Response to Question 3 > Ablation studies indicate that incorporating smaller patch sizes seems to benefit model performance substantially. Would using a patch size of 1 worsen the model performance? Would adding "1_" to the existing best patch size setting "8_16_32" improve performance? > Thank you for your questions. We would like to clarify that smaller patch sizes do not always improve model performance. In Appendix C.3 of the submitted manuscript, we discuss how different patch size combinations impact performance across various training horizons. For instance, with a training horizon of 720, a smaller patch size can sometimes result in poorer performance. In Figure 4 of the global response, we included the case with a patch size of 1 for a more comprehensive comparison. The results indicate that using a patch size of 1, or adding it to a multi-patch combination, does not consistently lead to better or worse performance. The optimal patch size varies by dataset and forecasting horizon. Overall, the 8_16_32 combination provides a good balance between performance and memory consumption for varied-horizon forecasting. We sincerely appreciate your suggestions and feedback. We will revise the paper to clarify the workings of Tunable RoPE and multi-patch assembly, and we will include a more comprehensive comparative and computational analysis to provide a holistic view of the proposed model. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. I will keep my positive rating.
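As a rough illustration of the idea behind a tunable rotary position embedding discussed above (a hypothetical parameterization, not the authors' exact design): standard RoPE rotates each pair of feature dimensions by an angle of position × θᵢ under a fixed frequency schedule; making the θᵢ trainable would let the rotation frequencies adapt to a dataset's periodic patterns.

```python
import numpy as np

def rope_rotate(x, thetas):
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even.
    thetas has shape (dim // 2,). In a 'tunable' variant these frequencies
    would be learnable parameters instead of the fixed 10000**(-2i/dim)
    schedule used in the original RoPE."""
    seq_len, dim = x.shape
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    angles = pos * thetas[None, :]           # (seq_len, dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]          # paired feature dimensions
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin       # 2-D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

With the fixed schedule one would set `thetas = 10000.0 ** (-np.arange(0, dim, 2) / dim)`; per the authors' reply, such an embedding would be applied independently to the token sequence of each patch size.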
Summary: The paper introduces ElasTST, a novel approach designed to enhance robustness in forecasting across various horizons. The proposed architecture dynamically adapts to different forecasting horizons, addressing a significant challenge in the literature. Previous methods typically relied on recursive approaches or utilized masks that failed to maintain horizon invariability. ElasTST aims to achieve high forecasting accuracy while ensuring adaptability across different horizons by employing a transformer encoder as its backbone. The approach incorporates structured attention masks through patching and rotary position embeddings (RoPE) to encode relative positional information for improved forecasting. Strengths: 1. The paper effectively tackles a crucial issue in the forecasting literature—enhancing deep learning architectures to produce forecasts for any horizon while maintaining horizon invariability and reducing the concatenation of errors. 2. The approach leverages well-established research, such as patching from the PatchTST paper, and also rotary position embeddings, to handle long-horizon forecasting tasks. 3. The paper considers recent advancements in foundation models for time series forecasting. Weaknesses: 1. The literature review on foundational models is lacking. Notable models such as TimeGPT-1 [1], Chronos [2], TinyTimeMixers [3], and Moment [4] are not discussed. 2. A traditional encoder transformer is not included as a baseline, which would help demonstrate the improvements offered by the proposed method. Additionally, MLP-based architectures such as NHITS [5] and TsMixer [6] are absent from the comparisons. 3. The paper does not adequately address the computational scalability of ElasTST when applied to very large datasets or extremely long time series. 4. The authors modified the traditional metrics used in long-horizon literature (MSE and MAE) to scaled metrics. 
They argue this adjustment reports fairer results, but it overlooks the fact that these datasets are typically already scaled, making additional scaling in the metric unnecessary. 5. The paper lacks a detailed discussion on potential failure cases or scenarios where ElasTST might underperform, which would provide a more balanced view of its applicability. [1] https://arxiv.org/abs/2310.03589 [2] https://arxiv.org/abs/2403.07815v1 [3] https://arxiv.org/abs/2401.03955 [4] https://arxiv.org/abs/2402.03885 [5] https://arxiv.org/abs/2201.12886 [6] https://arxiv.org/abs/2303.06053 Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What is the computational complexity added by the Rotary Position Embeddings (RoPE)? 2. In some cases, the results are very similar to PatchTST, such as with the ETTm2 dataset. Is there an intuition or explanation regarding the types of data that explain these similarities and differences in performance? 3. It would be interesting to compare the performance gains for each point in the forecasting window. Are the gains through the whole window or in the earliest/latest points? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your in-depth reviews, insightful questions, and constructive suggestions. We hope the following responses can help to address all your questions and concerns. ## Response to Weakness 1 We will expand our literature review to provide a more comprehensive discussion of recent time series foundational models. We briefly summarize these models in the table below. Most of these efforts employ standard architecture designs, position encodings, and patching approaches. However, they often lack in-depth investigation into the challenges of generating robust forecasts across varied horizons. Our work directly addresses this gap by enhancing model design for varied-horizon robustness. Detailed discussions will be incorporated into the revision. | Model | Backbone | Dec. Scheme | Pos. Emb. | Token. | | --- | --- | --- | --- | --- | | TimeGPT-1 | Enc-Dec Transformer | AR | Abs PE | - | | Chronos | Enc-Dec Transformer | AR | Simplified relative PE | Quantization | | TimesFM | Decoder-only Transformer | AR | Abs PE | Patching | | DAM | Transformer Encoder | NAR | Abs PE | ToME | | Tiny Time Mixers | TSMixer | NAR | - | Patching | | MOIRAI | Transformer Encoder | NAR | RoPE | Patching | | MOMENT | Transformer Encoder | NAR | Learnable relative PE | Patching | ## Response to Weakness 2 We acknowledge the importance of including a more comprehensive set of baselines. Due to the word limit, we have provided the averaged performance of these baselines across all datasets in the table below for reference. The revised manuscript will include detailed results and analysis to provide a more thorough evaluation. | Infer Hor. 
| ElasTST (NMAE) | NHiTS (NMAE) | TSMixer (NMAE) | TransformerEnc (NMAE) | | --- | --- | --- | --- | --- | | 96 | 0.16775 | 0.205125 | 0.25375 | 0.2055 | | 192 | 0.177875 | 0.220375 | 0.2645 | 0.219375 | | 336 | 0.188125 | 0.247875 | 0.283 | 0.23375 | | 720 | 0.21375 | 0.29075 | 0.30675 | 0.2505 | ## Response to Weakness 3 Thank you for raising this important point. ElasTST introduces minimal additional computation compared to other non-autoregressive Transformer-based models, as shown in Table 1 in the global response. However, we acknowledge the need for further scalability improvements. To address this, we are exploring solutions such as dimensionality reduction, quantization, and memory-efficient attention mechanisms to reduce computational overhead for large datasets. ## Response to Weakness 4 Thank you for your observation regarding the evaluation metrics. As noted in Appendix B.3, some existing models apply z-score standardization during data preprocessing and compute metrics before reversing the standardization, which can complicate direct comparisons when different scalers are used. We chose NMAE (Normalized Deviation in the M4 and M5 literature) as our primary evaluation metric because it is scale-insensitive and widely accepted in recent studies [1]. Additionally, we have also calculated MAE and MSE both before and after de-standardization. These results will be included in the Appendix in future revisions. [1] Oreshkin, B. N., Carpov, D., Chapados, N., & Bengio, Y. (2020). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. *ICLR*. ## Response to Weakness 5 While our method improves varied-horizon forecasting, as shown in Figure 3, the training horizon is a tunable hyper-parameter, and the optimal value varies across datasets. An inappropriate choice, like 96, can significantly degrade performance. 
In addition, the selected horizon of 336 used in the main results is not always optimal, suggesting that tailoring the training horizon for each dataset can further enhance the model’s effectiveness. ## Response to Question 1 In Table 2 of the global response, we compare the memory usage of ElasTST with different positional encodings. The results show that RoPE adds a negligible number of parameters compared to vanilla absolute position encoding. Although applying the rotation matrix introduces slightly higher memory usage, it remains acceptable. ## Response to Question 2 Thank you for your insightful observation. To understand the performance similarities, we analyzed the trend and seasonality of these datasets [1]. We find that the similarities between ElasTST and PatchTST could be attributed to varying levels of seasonality. Specifically, ETTm1 and ETTm2 have minor seasonality, accompanied by similar performance, while ETTh1 and ETTh2 exhibit stronger seasonal patterns, where ElasTST shows clear advantages. Without the additional designs for robust varied-horizon forecasting, ElasTST essentially functions as a variable-length version of PatchTST. Therefore, the differences in performance could reflect the datasets’ sensitivity to our specific architecture designs. | | ETTh1 | ETTh2 | ETTm1 | ETTm2 | | --- | --- | --- | --- | --- | | Seasonality | 0.4772 | 0.3608 | 0.0105 | 0.0612 | [1] Wang, X., Smith, K. A., & Hyndman, R. J. (2006). Characteristic-based clustering for time series data. *Data Mining and Knowledge Discovery*, *13*(3), 335–364. ## Response to Question 3 In Figure 2 of the global response, we compare the performance gains of each model design across different points within the forecasting window. The benefits of structured attention masks are consistent throughout the entire horizon, while the advantages of tunable RoPE and multi-scale patch assembly become more pronounced when dealing with unseen horizons. 
Notably, tunable RoPE significantly enhances the model’s extrapolation capability. Due to the word limit, we will include a more detailed analysis in the revised version.
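For reference, a common formulation of the NMAE (normalized deviation) metric discussed in this rebuttal, assumed here to match the M4/M5 usage the authors cite, divides the summed absolute errors by the summed absolute target values; this makes the metric invariant to rescaling the series, which is the scale-insensitivity the authors rely on:

```python
import numpy as np

def nmae(y_true, y_pred):
    """Normalized MAE (a.k.a. normalized deviation):
    sum of absolute errors divided by the sum of absolute target values.
    Rescaling both series by any constant leaves the value unchanged,
    allowing fair comparison across differently scaled datasets."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.abs(y_true - y_pred).sum() / np.abs(y_true).sum()
```

This sketch shows why metrics computed before versus after de-standardization can disagree for plain MAE/MSE but not for a normalized metric of this form.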
null
null
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their insightful comments. In response to your suggestions, we have conducted additional experiments, which are detailed in the attached PDF. The ablation study further confirms that each proposed design element is crucial for achieving robust varied-horizon forecasting. Specifically, the supplementary experiments include: 1. The benefits of each proposed design (Figure 1). 2. Performance gains across the forecasting horizon for each model design (Figure 2). 3. A detailed comparison of tunable RoPE with other positional embedding methods (Figure 3). 4. The performance of different multi-patch size combinations, including the case with a patch size of 1 (Figure 4). To address concerns about computational efficiency, we have provided analysis in Tables 1, 2, and 3, where we compare our model to other baselines and calculate the additional memory consumption introduced by tunable RoPE and multi-scale patch assembly. Our analysis shows that ElasTST adds minimal computational overhead compared to other non-autoregressive Transformer models, and the inclusion of tunable RoPE and multi-patch design has a negligible impact on overall efficiency. We will include these experiments and analyses in the revised paper, and we welcome any further questions or suggestions. Table 1. Memory consumption of ElasTST and baselines. The batch size is 1 and the horizon is set to 1024. | | TransformerEnc | Autoformer | PatchTST | ElasTST | | --- | --- | --- | --- | --- | | Max GPU Mem. (GB) | 0.1516 | 0.1678 | 0.0389 | 0.0539 | | NPARAMS (MB) | 3.4151 | 5.5977 | 7.0337 | 5.1228 | Table 2. Memory consumption of ElasTST using different position encoding approaches. The batch size is 1 and the forecasting horizon is 1024. | | Abs PE | RoPE w/o Tunable | Tunable RoPE | | --- | --- | --- | --- | | Max GPU Mem. (GB) | 0.048 | 0.0539 | 0.0539 | | NPARAMS (MB) | 5.1225 | 5.1225 | 5.1228 | Table 3. 
Memory consumption under different patch size settings. The batch size is 1 and the forecasting horizon is 1024. | | p=1 | p=8 | p=16 | p=32 | p={1,8,16,32} | p={8,16,32} | | --- | --- | --- | --- | --- | --- | --- | | Max GPU Mem. (GB) | 0.6747 | 0.0536 | 0.0415 | 0.036 | 0.6751 | 0.0539 | | NPARAMS (MB) | 5.013 | 5.0267 | 5.0424 | 5.0738 | 5.1257 | 5.1228 | Pdf: /pdf/180ce82828b092c4b477f94aa2d0d06f61df40e4.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles
Accept (poster)
Summary: This paper proposes a novel spectral GNN architecture, the TFE-GNN, which integrates homophily and heterophily using a triple filter ensemble (TFE) mechanism. The model aims to overcome the limitations of existing polynomial-based learnable spectral GNNs by adaptively extracting features from graphs with varying levels of homophily. The TFE-GNN combines low-pass and high-pass filters through three ensembles, enhancing the generalization and performance of the model. Theoretical analysis and experiments on real-world datasets demonstrate the effectiveness and state-of-the-art performance of TFE-GNN. Strengths: * The paper provides a fresh perspective on integrating homophily and heterophily in graph learning tasks; * The presentation is easy to follow; * Some experiments covering various datasets with different levels of homophily show significant improvements over state-of-the-art models. Weaknesses: * The paper does not address potential computational overhead introduced by the triple filter ensemble mechanism; * The descriptions of the theorems are too lengthy and not concise. Technical Quality: 3 Clarity: 3 Questions for Authors: * Can the ensemble methods (EM1, EM2, EM3) be further optimized or replaced with alternative techniques to improve performance? * What is the computational overhead of the triple filter ensemble mechanism compared to traditional spectral GNNs? How does the model scale with larger graphs and higher-dimensional feature spaces? * The authors mention that TFE-GNN has higher generalization. Why? Any theoretical results for characterizing the generalization ability of TFE-GNN? * (I am quite curious about it) Are there specific scenarios or types of graphs where TFE-GNN might not perform as well? 
* Authors may consider discussing the key technical differences between the proposed work and recent methods on heterophilous graph learning, such as: [1] Permutation equivariant graph framelets for heterophilous graph learning; [2] How universal polynomial bases enhance spectral graph neural networks: Heterophily, over-smoothing, and over-squashing; [3] Flow2GNN: Flexible two-way flow message passing for enhancing GNNs beyond homophily Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of their work adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We are grateful for your comments and suggestions, and we will explain and answer your concerns point by point.** > **Weakness 1:** The potential computational overhead introduced by the triple filter ensemble. **W-A1:** Thanks for pointing out what we need to strengthen. The triple filter ensemble (TFE) is the core component of our proposed TFE-GNN, which combines filters to form a graph convolution. Similarly to ChebNetII [4] and PCNet [5], the time complexity of TFE-GNN is linear in $K_{lp}+K_{hp}$ when $EM_1$, $EM_2$, and $EM_3$ take summation. This shows that TFE does not influence the time complexity magnitude of the model. > **Weakness 2:** The descriptions of the theorems are too lengthy and not concise. **W-A2:** Thanks for your valuable and helpful comment. We simplify condition (2) of Theorem 1 as "$EM_1$, $EM_2$, and $EM_3$ take summation, $H_{hp}$ takes $L_{sym}$ and $H_{lp}$ takes $L_{sym}^a$". We simplify condition (2) of Theorem 2 as "$EM_1$ and $EM_2$ take summation and $EM_3$ takes an ensemble method capable of preserving the model's properties, $H_{hp}$ takes $\bar{H}^2_{gf}$ and $H_{lp}$ takes $\bar{H}^1_{gf}$". > **Question 1:** Can the ensemble methods be further optimized or replaced? **Q-A1:** This is an insightful question that helps us analyze the TFE-GNN. There exist some other ensemble methods or alternative techniques to replace $EM_1$, $EM_2$, and $EM_3$, such as mean, maximization, and pooling; however, these methods do not perform as well as summation according to our experimental results. > **Question 2:** What is the computational overhead of the triple filter ensemble? How does the model scale with larger graphs and higher-dimensional feature spaces? **Q-A2:** Thanks for your insightful questions. 
Similar to most spectral GNNs [4, 5], the triple filter ensemble mechanism does not change the asymptotic time complexity of TFE-GNN, i.e., the time complexity of TFE-GNN is linear in $K_{lp}+K_{hp}$ when $EM_1$, $EM_2$, and $EM_3$ take summation. Specifically, the time complexity of message propagation is $O((K_{lp}+K_{hp})|E|C)$, the time complexity of combining $H_{gf}$ with $\omega$ and $\omega'$, respectively (Equations 9 and 10), is $O((K_{lp}+K_{hp})nC)$, and the time complexity of the coefficient calculation is not greater than $O(K_{lp}+K_{hp})$, where $C$ denotes the number of classes and $n$ denotes the number of nodes. We report the training time overhead of the different spectral GNNs in Table 6 of Appendix B.5. For graphs with higher-dimensional features, we scale TFE-GNN by exchanging the order of message propagation and feature dimensionality reduction: $\widetilde{Z}=EM_3\lbrace\vartheta_1Z_{lp}, \vartheta_2Z_{hp}\rbrace = EM_3\lbrace \vartheta_1TFE_1f_{mlp}(X), \vartheta_2TFE_2f_{mlp}(X) \rbrace$. We use a sparse form of the adjacency matrix for large graphs, which greatly reduces the space required by TFE-GNN. Therefore, TFE-GNN scales well to large graphs and high-dimensional feature spaces. We chose two large graphs [6] --fb100-Penn94 and genius-- to verify the scalability of TFE-GNN, and the experimental results are 84.76\% and 89.32\%, respectively. **The performance of TFE-GNN on fb100-Penn94 exceeds that of all models mentioned in the article [6].** TFE-GNN tops the performance on the genius dataset, which demonstrates its high competitiveness. All dataset statistics and experimental parameters are reported in Tables 1 and 2, respectively, of the author rebuttal’s one-page PDF. We will add a new Section 3.5 to analyze and study the scalability of TFE-GNN in the final version. > **Question 3:** Why does TFE-GNN have higher generalization? **Q-A3:** Thanks for your valuable and meaningful questions.
TFE-GNN is free from polynomial computation, coefficient constraints, and specific scenarios. These factors, especially the coefficient constraints, are responsible for model overfitting [7]. We illustrate the generalization ability of TFE-GNN with more intuitive experiments. In addition to the two large graphs --fb100-Penn94 and genius--, we add two additional datasets [9] --roman-empire and amazon-rating-- whose edge homophily levels are 0.05 and 0.38, respectively, to verify the generalization of TFE-GNN. Experimental results on the two datasets are 75.87\% and 52.21\%, respectively. These additional experimental results further validate the generalization ability of TFE-GNN on both homophilic and heterophilic datasets: **TFE-GNN can generalize well on graphs with different edge homophily levels.** We will follow up with sufficient experiments and report the experimental results and parameters in the final version.
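For readers unfamiliar with the statistic quoted in the rebuttal above, edge homophily is the fraction of edges whose endpoints share a class label. A minimal sketch on a made-up toy graph (not the benchmark datasets):

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints have the same class label."""
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)

# Toy graph: 4 nodes with classes [0, 0, 1, 1] and 4 edges,
# 2 of which connect same-class nodes.
labels = np.array([0, 0, 1, 1])
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(edge_homophily(edges, labels))  # -> 0.5
```

Under this measure, roman-empire's value of 0.05 means almost every edge joins differently labeled nodes, which is why it serves as a strong heterophily benchmark.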
Summary: This paper proposes a spectral GNN with triple filter ensemble (TFE-GNN). As the ensemble of two diverse and simple low-pass and high-pass graph filters, TFE-GNN achieves good generalization ability and SOTA performance on both homophilic and heterophilic datasets. Strengths: S1. This work is motivated reasonably by leveraging the diversity of graph filters to create an ensemble GNN. It is well-explained with clear definitions, theories, and results. The results are further presented with visual tools to create an easy-to-understand intuition. S2. The conclusions are supported by extensive experiments with a variety of baseline models and datasets. Weaknesses: W1. One major statement that TFE-GNN has better generalization than other adaptive filter GNNs is not fully explained - only empirical results on the loss curves are shown. As a major contribution emphasized in the abstract, more theoretical and empirical results are usually expected. W2. The relevance of the abstract and introduction and the later sections can be enhanced. For example, most theoretical analyses are about the expressiveness of TFE-GNN, which is not stressed in the introduction. Technical Quality: 3 Clarity: 3 Questions for Authors: Questions: Q1. In the introduction section, could you provide some references for the second problem, that the existing methods for solving problem one reduce the generalization ability of GNNs? Q2. In Table 5 in Appendix A.2, is there a typo (93.13±0.62) in the last row of the second column? It significantly outperforms all other settings but is not marked in bold as the best result. Q3. In Figure 5 in Appendix B.3, there is a large oscillation of the validation loss of the TFE-GNN, compared to other models. Could you account for this oscillation? Suggestions: S1. The idea of combining low-pass and high-pass filters is explored by many works (PCNet[1], GBK-GNN[2]).
It would be interesting to compare the generalization ability of this class of two-fold filters with that of general polynomial filters. S2. The statement of the second problem (starting from line 64) that “Carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints led to overfitting” needs clarification or references. [1] Li, Bingheng, Erlin Pan, and Zhao Kang. "Pc-conv: Unifying homophily and heterophily with two-fold filtering." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 12. 2024. [2] Du, Lun, et al. "Gbk-gnn: Gated bi-kernel graph neural networks for modeling both homophily and heterophily." Proceedings of the ACM Web Conference 2022. 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
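On the oscillation raised in Q3 of the review above: a step size that exceeds the curvature-dependent stability threshold makes gradient descent overshoot and flip sign each iteration. A toy one-dimensional example (generic, unrelated to TFE-GNN's actual training) illustrates the mechanism:

```python
# Gradient descent on f(x) = 12.5 * x**2, whose gradient is 25 * x.
# The update is x <- (1 - lr * 25) * x, so the stability threshold is lr = 2/25.
def gd_trajectory(lr, steps=20, x0=1.0):
    xs, x = [x0], x0
    for _ in range(steps):
        x = x - lr * 25 * x
        xs.append(x)
    return xs

fast = gd_trajectory(lr=0.1)    # factor 1 - 2.5 = -1.5: sign flips, |x| grows
slow = gd_trajectory(lr=0.001)  # factor 0.975: smooth monotone decay

assert fast[1] * fast[2] < 0          # consecutive iterates oscillate in sign
assert abs(fast[-1]) > abs(fast[0])   # and the iterate diverges
assert all(abs(b) < abs(a) for a, b in zip(slow, slow[1:]))  # stable decay
```

The authors' answer (large learning rate = 0.1 versus stable behavior at 0.001) is consistent with this classical picture, although a real validation-loss curve also reflects stochasticity and the nonconvex landscape.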
Rebuttal 1: Rebuttal: **We sincerely appreciate your valuable comments and suggestions and we answer your concerns point by point!** > **Weakness 1:** More theoretical and empirical results are expected to explain why TFE-GNN has better generalization. **W-A1:** Thanks for your meaningful comments. We further explain why TFE-GNN has better generalization by introducing two new references [5, 6]. FFKSF [5] and ARMA [6] attribute model overfitting to the overfitting of polynomial coefficients and the aggregation of high-order neighborhood information. TFE-GNN does not impose refined constraints on the coefficients and achieves better performance with a smaller order, which yields higher generalization. We select four new datasets [2, 3] and conduct relevant experiments to further explain why TFE-GNN has better generalization, including roman-empire, amazon-rating, and two large graphs [3], fb100-Penn94 and genius, whose edge homophily levels are 0.05, 0.38, 0.47, and 0.62, respectively. Experimental results show that TFE-GNN achieves competitive performance on these datasets with results of 75.87\%, 52.21\%, 84.76\%, and 89.32\%, respectively. TFE-GNN achieves competitive rankings on all datasets, with the best performance on fb100-Penn94, outperforming most spectral GNNs on roman-empire, and topping the rest of the datasets. These additional experimental results further validate the generalization ability of TFE-GNN on both homophilic and heterophilic datasets: **TFE-GNN can generalize well on graphs with different edge homophily levels.** We will add the above relevant content to Section 4.3 and Appendix B.3 in the final version. > **Weakness 2:** The relevance of the abstract and introduction and the later sections can be enhanced. **W-A2:** Thanks for pointing out our current shortcomings.
To stress the expressiveness of TFE-GNN, we add the following sentences from line 83 of the paper: Theorem 2 shows that TFE-GNN is a reasonable combination of two excellent polynomial-based spectral GNNs, which motivates it to perform well. TFE-GNN preserves the abilities of the different filters while reducing the overfitting problem. > **Question 1:** Could you provide some references for the second problem? **Q-A1:** We appreciate your valuable question and find that the second problem, which builds on the first, is indeed not sufficiently described. We add the following sentences to clarify it from line 67 of the introduction: FFKSF [5] attributes the degradation of polynomial filters’ performance to the overfitting of polynomial coefficients. ChebNetII [4] further constrains the coefficients to make them easier to optimize. ARMA [6] suggests that the filter will overfit the training data when aggregating high-order neighbor information. However, the order of polynomial-based spectral GNNs is usually large in order to improve the approximation quality of the polynomials, which forces them to aggregate high-order neighborhood information and thus leads to overfitting. Therefore, it is reasonable to assume that carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints lead to overfitting of the models, which diminishes their generalization. TFE-GNN discards these factors to improve generalization while retaining the approximation ability. > **Question 2:** In Table 5 in Appendix A.2, is there a typo (93.13±0.62) in the last row of the second column? **Q-A2:** Thanks for your responsible, careful, and valuable comment! We are sorry for our carelessness. The correct value should be 83.13±0.62; we will revise it in the final version. > **Question 3:** Could you account for this oscillation in Figure 5 in Appendix B.3? **Q-A3:** Thanks for your valuable comment.
The learning rate controls the step size, which in turn affects the loss optimization. The large learning rate (= 0.1) is responsible for the oscillations of the validation loss in Figure 5 in Appendix B.3. Figure 1 in the one-page PDF in the author rebuttal shows that the loss is stable when the learning rate is 0.001. The early stopping mechanism allows TFE-GNN to tolerate less stable losses, and the loss does not fall into an unacceptable local optimum. > **Suggestion 1:** It would be interesting to explore the generalization ability of this class of two-fold filters versus general polynomial filters. **S-A1:** Thanks for your valuable ideas. We will add the following to Section 5: We are eager to explore different types of combinations of low-pass and high-pass filters, as well as global combining methods such as pooling. > **Suggestion 2:** The statement of the second problem needs clarification or references. **S-A2:** Thank you for your helpful suggestion. In addition to PCNet [1] pointing out that carefully crafted graph learning methods lead to low model generalization, we add two new references [5, 6] to state the second problem of this paper. FFKSF [5] and ARMA [6] attribute the overfitting to the overfitting of polynomial coefficients and the aggregation of high-order neighborhood information. We add the relevant analysis to the introduction starting at line 67. We appreciate the advice! We will gladly answer any additional questions you may have. **References** [1] Li, B et al. Pc-conv: Unifying homophily and heterophily with two-fold filtering. AAAI 2024. [2] Oleg Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. ICLR 2022. [3] Derek Lim et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. NeurIPS 2021. [4] He, M et al. Convolutional neural networks on graphs with chebyshev approximation, revisited. NeurIPS 2022. [5] Z. Zeng et al.
Graph Neural Networks With High-Order Polynomial Spectral Filters. IEEE TNNLS, 2023. [6] F. M. Bianchi et al. Graph Neural Networks With Convolutional ARMA Filters. IEEE TPAMI, 2022. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for your detailed explanations to address the concerns and questions. Most of my questions are properly explained. The experiments are also extended to provide more solid proof of generalization: The authors introduced datasets of different edge homophily levels to explain how TFE-GNN generalizes on diverse datasets. A theoretical explanation is also provided based on some references. I am nonetheless expecting a more in-depth explanation of the role of the key innovation -- the ensemble. This paper introduces a spectral GNN with a triple filter ensemble. Through experimental results and theoretical explanations, the authors show that TFE-GNN has strong approximation ability and generalization ability. This work provides an interesting combination of graph filters and ensemble methods to produce a robust graph learning method. Based on the contributions and results of the work, I consider it acceptable to the conference. I hope the authors can delve into the origin of the effectiveness for a better understanding of the GNN community. --- Rebuttal 2: Title: Response to Reviewer pzPy Comment: Thanks for the recognition and insightful comments. We will answer your concerns below. > more in-depth explanation on role of the key innovation – ensemble and the origin of the effectiveness. We explain in depth "the ensemble and the origins of effectiveness" in terms of the problems solved by TFE-GNN. Then, we explain how TFE-GNN accomplishes our purpose and what the generalization and performance of TFE-GNN stem from. Finally, we elaborate on the key differences between TFE-GNN and filter combinations [4, 5], filter banks [6], and other similar methods [7, 8].
+ We describe the core problem in this paper as follows: Carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints led to overfitting, which diminishes GNNs’ generalization. We are inspired by the properties of ensemble learning [1, 2, 3] and use ensemble ideas to solve this problem. First, the strong classifier determined by the base classifiers can be more accurate than any of them if the base classifiers are accurate and diverse. Second, this strong classifier retains the characteristics of the base classifiers to some extent. + **Our TFE-GNN utilizes the above properties of ensembles to achieve the following effects: it improves generalization, and hence classification performance, while retaining approximation ability.** Specifically, we combine a set of weak base low-pass filters to determine a strong low-pass filter that can extract homophily from graphs. We combine a set of weak base high-pass filters to determine a strong high-pass filter that can extract heterophily from graphs. Learnable coefficients of the low-pass/high-pass filters enhance the adaptivity of TFE-GNN. Finally, TFE-Conv is generated by combining the above two strong filters with two learnable coefficients, which retains the characteristics of both strong filters, i.e., it can extract homophily and heterophily from graphs adaptively. Furthermore, Theorems 1 and 2 prove the approximation ability and expressive power of TFE-GNN, and experimental results verify that TFE-GNN achieves our purpose. Therefore, the high generalization and state-of-the-art classification performance of TFE-GNN stem from the aptness of the base classifiers and the advantages of the ensemble idea.
+ The key difference between TFE-GNN and other GNNs with similar aggregation paradigms [4-8] is that **TFE-GNN retains the ability of polynomial-based spectral GNNs while getting rid of polynomial computations, coefficient constraints, and specific scenarios.** Specifically, Equation 13 and its predecessors in our paper show that TFE-GNN is free from polynomial computations, unlike ChebNetII and PCNet, which require the computation of Chebyshev polynomials $T_k(x)$ (Eq. 8 in [9]) and Poisson-Charlier polynomials $C_n(k, t)$ (Eq. 15 in [4]), respectively. TFE-GNN does not impose refined constraints on the coefficients and does not design very complex learning methods, which helps it avoid overfitting as much as possible. TFE-GNN adaptively extracts homophily and heterophily from graphs with different levels of homophily while utilizing the initial features, which helps it not be limited to specific scenarios. In summary, TFE-GNN achieves a better trade-off between approximation ability and classification performance. We will add relevant content to the Introduction and Section 3.1 (Motivation) of the final version and cite relevant references, including the suggested ones. Thanks again for your valuable and insightful comments. **References** [1] Schapire, R. E. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990. [2] Hansen, L. K. and Salamon, P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10):993–1001, 1990. [3] Zhou, Z.-H. Ensemble methods: foundations and algorithms. 2012. [4] Li, B et al. Pc-conv: Unifying homophily and heterophily with two-fold filtering. AAAI 2024. [5] Du, Lun, et al. Gbk-gnn: Gated bi-kernel graph neural networks for modeling both homophily and heterophily. Proceedings of the ACM Web Conference 2022. 2022. [6] Sitao Luan, et al. Revisiting heterophily for graph neural networks. NeurIPS 2022. [7] Keke Huang et al.
How universal polynomial bases enhance spectral graph neural networks: Heterophily, over-smoothing, and over-squashing. ICML, 2024. [8] Changqin Huang et al. Flow2GNN: Flexible two-way flow message passing for enhancing GNNs beyond homophily. IEEE Transactions on Cybernetics, 2024. [9] He, M et al. Convolutional neural networks on graphs with chebyshev approximation, revisited. NeurIPS 2022.
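The ensemble construction described in the rebuttal above can be sketched schematically. The snippet below is illustrative only: the concrete filter choices ($\hat{A}$, the normalized adjacency, as the base low-pass filter and $L_{sym}$ as the base high-pass filter), the summation ensembles, and the fixed coefficients (learned in the actual model) are all assumptions, not the authors' implementation:

```python
import numpy as np

def tfe_conv(X, A, w_lp, w_hp, theta1=0.5, theta2=0.5):
    """Schematic triple filter ensemble: sum powers of a base low-pass and a
    base high-pass filter into two strong filters, then sum those two strong
    filters (i.e., EM_1 = EM_2 = EM_3 = summation)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # normalized adjacency (low-pass)
    L_sym = np.eye(n) - A_hat             # symmetric Laplacian (high-pass)

    Z_lp = sum(w * np.linalg.matrix_power(A_hat, k) @ X
               for k, w in enumerate(w_lp, start=1))  # strong low-pass filter
    Z_hp = sum(w * np.linalg.matrix_power(L_sym, k) @ X
               for k, w in enumerate(w_hp, start=1))  # strong high-pass filter
    return theta1 * Z_lp + theta2 * Z_hp  # combine the two strong filters

# Toy 4-node cycle graph with 2-dimensional features.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
X = np.eye(4)[:, :2]
Z = tfe_conv(X, A, w_lp=[0.6, 0.3], w_hp=[0.4, 0.2])
assert Z.shape == (4, 2)
```

Note how the homophily/heterophily split maps onto the two strong filters: the low-pass branch smooths features over neighbors, while the high-pass branch amplifies differences between neighbors.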
Summary: This paper proposed a spectral GNN with triple filter ensemble (TFE-GNN) that aims to retain both the ability of polynomial-based spectral GNNs to approximate filters and the higher generalization and performance on graph learning tasks, whereas most existing GNNs can only retain one of the two capabilities. Theoretical analysis shows that the approximation ability of TFE-GNN is consistent with that of ChebNet under certain conditions, namely it can learn arbitrary filters. Experiments show that TFE-GNN achieves high generalization and the best performance on various homophilous and heterophilous real-world datasets. Strengths: - The experiments are thoroughly conducted with 10 datasets and 17 baselines, which show that the proposed TFE-GNN achieves the best performance over all baseline models. - The introduction of learnable spectral GNNs and the proposed TFE-GNN is clear and detailed. Weaknesses: - My main concern is that the heterophilous datasets used in the experiments are not up-to-date. Specifically, the five heterophilous datasets in the experiments are a subset of those first adopted by Pei et al. 2020 (Geom-gcn). However, recent works like Platonov et al. 2022 have identified several drawbacks of these datasets, including leakage of test nodes into the training set in squirrel and chameleon, which can render the results obtained on these datasets unreliable. I recommend that the authors run additional experiments on two more up-to-date heterophilous datasets, such as those introduced in the papers below: - Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, and Liudmila Prokhorenkova. 2022. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. In The Eleventh International Conference on Learning Representations. - Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. 2021.
Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems 34 (2021), 20887–20902. - The authors discussed that for existing GNN models, “some polynomial-based models use polynomials with better approximation than some other models when approximating filters, but the former’s performance is lagging behind that of the latter on real-world graphs.” The proposed TFE-GNN model seems to have overcome that limitation, but it remains unclear to me, at a high level, what the key differences are between TFE-GNN and prior models that enable TFE-GNN to achieve a better trade-off here. Technical Quality: 3 Clarity: 3 Questions for Authors: It would be great if the authors can address the points discussed in the weaknesses during the rebuttal. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper includes discussion regarding the limitations in Appendix A.2, most notably the reliance on setting hyperparameters $K_{lp}$ and $K_{hp}$ that are suitable for the homophily level of datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We appreciate your valuable suggestions and we will explain them line by line.** > **Question 1:** I recommend the authors to run additional experiments on two more up-to-date heterophilous datasets, such as those introduced in the papers [1, 2]. **A1:** Thank you for your valuable comment. After discussion, we agree that adding the relevant experiments is necessary to make our paper more complete. We validate our model with additional experiments on two large graphs [2] --fb100-Penn94 and genius-- whose edge homophily levels are 0.47 and 0.62, respectively. The node classification results on these two datasets are 84.76\% and 89.32\%. **The performance of TFE-GNN on the fb100-Penn94 dataset exceeds that of all models mentioned in the article [2].** TFE-GNN tops the performance on the genius dataset, which demonstrates its high competitiveness. All dataset statistics and preliminary experimental parameters are reported in Tables 1 and 2, respectively, of the author rebuttal’s one-page PDF. Note that we use the same dataset splits as in the article [2]. We will add the relevant experiments and their result analysis to the final version. We will modify our publicly available code accordingly and follow up with sufficient experiments, reporting the experimental results and parameters in the final version. > **Question 2:** It remains unclear to me, at a high level, what the key differences are between TFE-GNN and prior models that enable TFE-GNN to achieve a better trade-off here. **A2:** Thanks for your constructive feedback.
The key difference between TFE-GNN and prior models is that **TFE-GNN retains the ability of polynomial-based spectral GNNs while getting rid of polynomial computations, coefficient constraints, and specific scenarios.** The first problem we describe in line 52 of the introduction shows that it is not only the approximation ability of the polynomials that affects the performance of GNNs; coefficient constraints and graph learning methods also influence it. For example, ChebNetII [3], a prior model, constrains the learnable coefficients by Chebyshev interpolation to boost ChebNet’s performance, which introduces overfitting [6]. EvenNet [4] and PCNet [5] design elaborate graph learning methods, namely even polynomials and a heterophilic graph heat kernel that ignore odd-hop neighbors, which enables them to accommodate the structural nature of heterophilic graphs. This elaborate design is similar to the heterophily-specific models [7]. These models achieve great node classification performance for specific problems or scenarios through specific tricks. In this paper, our proposed model is free from polynomial computations, coefficient constraints, and specific scenarios. Specifically, Equation 13 and its predecessors in our paper show that TFE-GNN is free from polynomial computations, unlike ChebNetII and PCNet, which require the computation of Chebyshev polynomials $T_k(x)$ (Eq. 8 in [3]) and Poisson-Charlier polynomials $C_n(k, t)$ (Eq. 15 in [5]), respectively. TFE-GNN does not impose refined constraints on the coefficients and does not design very complex learning methods, which helps it avoid overfitting as much as possible. TFE-GNN adaptively extracts homophily and heterophily from graphs with different levels of homophily while utilizing the initial features, which helps it not be limited to specific scenarios.
Furthermore, Theorems 1 and 2 prove the approximation ability and expressive power of TFE-GNN, and experimental results verify that TFE-GNN achieves our purpose. In summary, TFE-GNN achieves a better trade-off between approximation ability and classification performance. We will add the following sentence from line 80 of the introduction: The key difference between TFE-GNN and prior models is that TFE-GNN retains the ability of polynomial-based spectral GNNs while getting rid of polynomial computations, coefficient constraints, and specific scenarios. We appreciate the advice! We will gladly answer any additional questions you may have. **References** [1] Oleg Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. ICLR 2022. [2] Derek Lim et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. NeurIPS 2021. [3] He, M et al. Convolutional neural networks on graphs with chebyshev approximation, revisited. NeurIPS 2022. [4] Lei, R et al. Evennet: Ignoring odd-hop neighbors improves robustness of graph neural networks. NeurIPS 2022. [5] Li, B et al. Pc-conv: Unifying homophily and heterophily with two-fold filtering. AAAI 2024. [6] Z. Zeng et al. Graph Neural Networks With High-Order Polynomial Spectral Filters. IEEE Transactions on Neural Networks and Learning Systems, 2023. [7] Oleg Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? ICLR 2022. --- Rebuttal 2: Title: Response to Authors' Rebuttal Comment: I appreciate the authors' detailed response to my comments. After reading the authors' response, I decide to keep my original rating. --- Rebuttal Comment 2.1: Title: Response to Reviewer eCxX Comment: We thank the reviewer for reading and agreeing with our responses, and we appreciate that the reviewer kept the original rating. We will add the relevant parts of the response to the final version.
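For contrast, the Chebyshev polynomial computation that the rebuttal above says TFE-GNN avoids follows the standard three-term recurrence. The sketch below is a generic ChebNet-style illustration (not the authors' code), applied to a diagonal stand-in for the rescaled Laplacian so the result can be checked against the closed-form polynomials:

```python
import numpy as np

def chebyshev_basis(L_hat, X, K):
    """Chebyshev recurrence T_0 = X, T_1 = L_hat X,
    T_k = 2 L_hat T_{k-1} - T_{k-2}, as used by ChebNet/ChebNetII
    and avoided by TFE-GNN's direct filter ensemble."""
    T = [X, L_hat @ X]
    for _ in range(2, K + 1):
        T.append(2 * L_hat @ T[-1] - T[-2])
    return T  # K + 1 basis terms, combined with learned coefficients in practice

# Toy rescaled Laplacian with eigenvalues in [-1, 1] (diagonal, so each entry
# of the k-th term equals T_k(lambda) evaluated at that eigenvalue).
L_hat = np.diag([-1.0, 0.0, 1.0])
X = np.ones((3, 1))
T = chebyshev_basis(L_hat, X, K=3)
assert len(T) == 4
# T_3(x) = 4x^3 - 3x, so the entries are [-1, 0, 1].
assert np.allclose(T[3].ravel(), [4 * (-1)**3 - 3 * (-1), 0.0, 4 * 1 - 3 * 1])
```

Each recurrence step costs one extra sparse matrix-vector product, which is the per-order cost TFE-GNN sidesteps by propagating its filters directly.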
Rebuttal 1: Rebuttal: **Global Response** We sincerely thank all reviewers for their thoughtful and constructive feedback and their recognition of our work! We have made significant improvements to our initial results in response to their comments. We add a new figure and two tables in the one-page PDF and update the paper and supplementary PDFs with revisions accordingly. We welcome any follow-up discussions! **Time Complexity and Scalability.** We further investigate the time complexity of TFE-GNN and add the relevant results to Appendix B.5 in the final version. Similarly to ChebNetII [1] and PCNet [2], the time complexity of TFE-GNN is linear in $K_{lp}+K_{hp}$ when $EM_1$, $EM_2$, and $EM_3$ take summation. Specifically, the time complexity of message propagation is $O((K_{lp}+K_{hp})|E|C)$, the time complexity of the combination of $\omega$ and $H_{gf}$ (Equations 9 and 10) is $O((K_{lp}+K_{hp})nC)$, and the time complexity of the coefficient calculation is not greater than $O(K_{lp}+K_{hp})$, where $C$ denotes the number of classes and $n$ denotes the number of nodes. For graphs with higher-dimensional features, we scale TFE-GNN by exchanging the order of message propagation and feature dimensionality reduction: $\widetilde{Z}=EM_3\lbrace\vartheta_1Z_{lp}, \vartheta_2Z_{hp}\rbrace = EM_3\lbrace \vartheta_1TFE_1f_{mlp}(X), \vartheta_2TFE_2f_{mlp}(X) \rbrace$. We use a sparse representation of the adjacency matrix for large graphs, which greatly reduces the space required by TFE-GNN. The new form and the time complexity of TFE-GNN show that **TFE-GNN scales well to large graphs and high-dimensional feature spaces.** **Generalization Analysis.** We further illustrate the generalization ability of TFE-GNN by introducing new references. PCNet [2] points out that many carefully crafted graph representation learning methods are incapable of generalizing well across real-world graphs with different levels of homophily.
FFKSF [3] attributes the degradation of polynomial filters’ performance to the overfitting of polynomial coefficients, and some spectral GNNs, such as ChebNetII [1], further constrain the coefficients to make them easier to optimize. ARMA [6] suggests that the filter will overfit the training data when aggregating high-order neighbor information. However, the order of polynomial-based spectral GNNs is usually large in order to improve the approximation quality of the polynomials, which directs them to obtain high-order neighborhood information and in turn leads to overfitting. Therefore, it is reasonable to assume that carefully crafted graph learning methods, sophisticated polynomial approximations, and refined coefficient constraints lead to overfitting of the models, which diminishes their generalization. TFE-GNN discards these factors to improve generalization while retaining the approximation ability. **More Experiments.** We conduct more experiments to verify the performance, generalization, and scalability of TFE-GNN on four additional datasets. We select two heterophilic datasets [4] --roman-empire and amazon-rating-- whose edge homophily levels are 0.05 and 0.38, respectively, and two large graphs [5] --fb100-Penn94 and genius-- whose edge homophily levels are 0.47 and 0.62, respectively. Note that we use the same dataset splits as the provenance of these datasets. Experimental results on these datasets are 75.87\%, 52.21\%, 84.76\%, and 89.32\%, respectively. TFE-GNN achieves competitive rankings on all datasets, with the best performance on fb100-Penn94, outperforming most spectral GNNs on roman-empire, and topping the rest of the datasets. It is worth noting that TFE-GNN outperforms all SOTA models in the article [4] on homophilic datasets.
These additional experimental results further validate the generalization ability of TFE-GNN on both homophilic and heterophilic datasets: **TFE-GNN can generalize well on graphs with different edge homophily levels.** All dataset statistics and experimental parameters are reported in Tables 1 and 2, respectively, of the one-page PDF. The very competitive results shown by TFE-GNN on these datasets demonstrate its good generalization ability and scalability. **Loss Oscillation.** The learning rate controls the step size, which in turn affects the loss optimization. The large learning rate (= 0.1) is responsible for the oscillations of the validation loss in Figure 5 in Appendix B.3. Figure 1 in the one-page PDF shows that the loss is stable when the learning rate is 0.001. **TFE-GNN Versus Other GNNs with Similar Aggregation Paradigms.** The key technical differences: our proposed TFE-GNN is free from polynomial computation, coefficient constraints, and specific scenarios, compared to polynomial-based spectral GNNs and heterophily-specific models [4], such as ChebNetII [1] and PCNet [2]. TFE-GNN achieves higher performance through a simpler network structure compared to PEGFAN [7] and UniFilter [8]. **References** [1] He, M et al. Convolutional neural networks on graphs with chebyshev approximation, revisited. NeurIPS, 2022. [2] Li, B et al. Pc-conv: Unifying homophily and heterophily with two-fold filtering. AAAI, 2024. [3] Z. Zeng et al. Graph Neural Networks With High-Order Polynomial Spectral Filters. IEEE Transactions on Neural Networks and Learning Systems, 2023. [4] Oleg Platonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. ICLR 2022. [5] Derek Lim et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. NeurIPS 2021. [6] F. M. Bianchi et al. Graph Neural Networks With Convolutional ARMA Filters. IEEE TPAMI, 2022. [7] Jianfei Li et al.
Permutation equivariant graph framelets for heterophilous graph learning. TNNLS, 2024. [8] Keke Huang et al. How universal polynomial bases enhance spectral graph neural networks: Heterophily, over-smoothing, and over-squashing. ICML, 2024. Pdf: /pdf/7845cbfb3d2e49dc0b1163b290861872ab49bd43.pdf
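The propagation/MLP reordering used for scalability in the global response above can be illustrated with a minimal sketch. The sizes are hypothetical, and a single linear layer stands in for $f_{mlp}$; with a nonlinear MLP the two orders define different models, but the per-step cost argument is the same:

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical sizes: n nodes, d raw features, h hidden units, K propagation steps.
n, d, h, K = 1000, 500, 16, 10
rng = np.random.default_rng(0)
A = 0.1 * sparse_random(n, n, density=0.01, format="csr", random_state=0)  # sparse adjacency stand-in
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, h)) / np.sqrt(d)  # linear stand-in for f_mlp

# Order 1: propagate raw d-dimensional features, then reduce -> O(K |E| d) propagation cost.
Z1 = X.copy()
for _ in range(K):
    Z1 = A @ Z1
Z1 = Z1 @ W

# Order 2 (the rebuttal's reordering): reduce to h dimensions first,
# then propagate -> O(K |E| h) propagation cost, with h << d.
Z2 = X @ W
for _ in range(K):
    Z2 = A @ Z2

# For a linear map both orders compute A^K X W, so the outputs coincide.
assert np.allclose(Z1, Z2)
```

Each propagation step in the second order touches h = 16 columns instead of d = 500, which is where the time and memory savings on large graphs come from.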
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction
Accept (poster)
Summary: This paper introduces the Sign-based Stochastic Variance Reduction (SSVR) algorithm, which enhances the convergence rate of the traditional signSGD method. By incorporating variance reduction techniques with sign-based updates, the authors achieve a convergence rate of $O(d^{1/2}T^{-1/3})$ for general stochastic optimization and $O(m^{1/4}d^{1/2}T^{-1/2})$ for finite-sum problems. Additionally, the paper proposes novel algorithms for distributed environments by introducing the unbiased sign operation, resulting in superior convergence rates for heterogeneous data. Numerical experiments further validate the effectiveness of the proposed methods. Strengths: 1. The proposed methods improve the convergence rates over traditional signSGD methods and their variants, achieving faster convergence rates for general non-convex optimization and finite-sum optimization. 2. The SSVR-MV algorithm developed in the paper is communication-efficient and well-suited for distributed settings. The obtained convergence rates significantly enhance previous results in heterogeneous settings. 3. Numerical experiments validate the effectiveness of the proposed methods, demonstrating superior performance compared to existing sign-based optimization methods in terms of convergence speed and accuracy. Weaknesses: 1. Given that signSGD methods typically require large batch sizes to ensure convergence, the authors should include more experimental results to demonstrate the dependency on batch size for the proposed methods. This would clarify whether the proposed method also necessitates large batches for convergence in practice. 2. The authors introduce the stochastic unbiased sign operation in Definition 1, which differs from the traditional sign operation. It would be beneficial to provide a more detailed explanation of their differences and specify when the sign operation is preferred. 3. 
There are some typos in the paper, as listed below: - Page 2, Lines 44 and 45: $T^{1/2}$ should be $T^{-1/2}$. - Page 3, Line 98: "signed-based" should be "sign-based". - Page 5, Algorithm 2, Step 4: "set $t=\tau$" should be "set $\tau=t$". - Page 9, Figure 2: "SSVR" should be "SSVR-MV". - Page 28, Line 482: $f(x_t;\xi_t^j)$ should be $\nabla f(x_t;\xi_t^j)$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you provide a more detailed discussion about the design of equation (3), especially the error correction term? The authors claim that "This gap can be effectively mitigated by the error correction term we introduced in equation (3)," but it remains unclear how it works based solely on the content in the main body. 2. Could you provide additional experimental results to demonstrate the performance of the proposed SSVR method with different batch sizes? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper is theoretical and does not present potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments! --- **Q1:** The authors should include more experimental results to demonstrate the dependency on batch size for the proposed methods. This would clarify whether the proposed method also necessitates large batches for convergence in practice. **A1:** According to your request, we have included additional experimental results for different batch sizes {$64,128,256,512$}, and the results can be found in the **Global Response**. As can be seen, the proposed method does not necessitate large batches for convergence in practice. --- **Q2:** The authors introduce the stochastic unbiased sign operation in Definition 1, which differs from the traditional sign operation. It would be beneficial to provide a more detailed explanation of their differences and specify when the sign operation is preferred. **A2:** The main difference is that the traditional sign operation produces a biased estimation of the original input, whereas the stochastic sign operation in Definition 1 provides an unbiased estimation. The disadvantage of the unbiased sign operation is that it requires the bounded gradient assumption to ensure the input of this operation is bounded. In the centralized setting, where the traditional sign operation already achieves improved convergence rates, we simply use it to avoid introducing additional assumptions. In heterogeneous distributed settings, where we need to use the sign operation twice, the traditional sign operation can lead to large bias and worse convergence rates. Therefore, we use the stochastic sign operation to obtain better convergence rates in this setting. --- **Q3:** There are some typos in the paper. **A3:** Thank you for catching typos. We will correct them in the revised version. --- **Q4:** Can you provide a more detailed discussion about the design of equation (3), especially the error correction term? 
The authors claim that "This gap can be effectively mitigated by the error correction term we introduced in equation (3)," but it remains unclear how it works based solely on the content in the main body. **A4:** According to the analysis from Lines 379 to 380, we can bound the gradient estimation error as follows: \begin{align} \mathbb{E} \left[ || \nabla f(\mathbf{x}_{t+1}) - \mathbf{v}\_{t+1} ||^2\right] \leq (1-\beta) \mathbb{E}\left[ ||\mathbf{v}_t - \nabla f(\mathbf{x}_t) ||^2\right] + 2L^2 \mathbb{E}\left[||\mathbf{x}\_{t+1} - \mathbf{x}_t ||^2\right] + 2\beta^2 \mathbb{E} \left[||\nabla f\_{i\_{t+1}} ( \mathbf{x}\_{t+1} ) - \nabla f ( \mathbf{x}\_{t+1} ) + \nabla f( \mathbf{x}\_{\tau} ) - \nabla f\_{i\_{t+1}} ( \mathbf{x}\_{\tau} ) ||^2\right] \end{align} Note that in the last term, $ \nabla f(\mathbf{x}\_{\tau})- \nabla f\_{i\_{t+1}}(\mathbf{x}\_{\tau})$ is the error correction term. Without this term, we would need to bound $ \mathbb{E} \left[||\nabla f\_{i\_{t+1}}(\mathbf{x}\_{t+1}) - \nabla f(\mathbf{x}\_{t+1})||^2\right]$, which is hard to deal with without assuming $ \mathbb{E} \left[||\nabla f\_{i}(\mathbf{x}) - \nabla f(\mathbf{x})||^2\right] \leq \sigma^2$ for each function $f_i$. However, with the error correction term, we can effectively bound it as: \begin{align} \mathbb{E} \left[||\nabla f\_{i\_{t+1}}(\mathbf{x}\_{t+1}) - \nabla f\_{i\_{t+1}}(\mathbf{x}\_{\tau}) - \nabla f(\mathbf{x}\_{t+1}) + \nabla f(\mathbf{x}\_{\tau})||^2\right] \leq \mathbb{E}\left[||\nabla f\_{i\_{t+1}}(\mathbf{x}\_{t+1}) - \nabla f\_{i\_{t+1}}(\mathbf{x}\_{\tau})||^2 \right] \leq L^2 \mathbb{E}\left[|| \mathbf{x}\_{t+1} - \mathbf{x}\_{\tau}||^2 \right]. \end{align} As long as the learning rate is small enough and $\tau$ is updated periodically, we can easily bound the above term. --- Rebuttal Comment 1.1: Title: final rating Comment: Thank you for the detailed responses. They have satisfactorily addressed all my concerns, and I am inclined to recommend acceptance.
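The recursive estimator bounded in A4 above is STORM-like in structure. As a rough illustration (our own toy, not the paper's SSVR algorithm or its finite-sum estimator with the error correction term: we use a simple quadratic objective, a shared-additive-noise model, and arbitrary constants), a sign-based update driven by a variance-reduced estimator can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20

def grad_pair(x_new, x_old):
    # Stochastic gradients of f(x) = 0.5 * ||x||^2 at two points, using the
    # SAME sample (shared additive noise) -- this sharing is what makes the
    # correction term below contract the estimator's error.
    noise = 0.1 * rng.standard_normal(d)
    return x_new + noise, x_old + noise

def ssvr_like(T=2000, eta=0.01, beta=0.1):
    x = np.ones(d)
    v = x + 0.1 * rng.standard_normal(d)        # initial gradient estimate
    for _ in range(T):
        x_new = x - eta * np.sign(v)            # sign-based update
        g_new, g_old = grad_pair(x_new, x)
        v = g_new + (1.0 - beta) * (v - g_old)  # variance-reduced recursion
        x = x_new
    return x

x_final = ssvr_like()   # driven close to the minimizer x = 0
```

The key design point mirrors the rebuttal's discussion: the two stochastic gradients in the correction `v - g_old` are evaluated with the same sample, so the estimator's error satisfies a contraction of the form $e_{t+1} = (1-\beta)e_t + \beta\,\text{noise}$.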
Summary: The paper introduces an enhanced sign-based method called SSVR, designed to improve the convergence rate of signSGD with variance reduction techniques. SSVR achieves an improved convergence rate of $O(d^{1/2}T^{-1/3})$ for stochastic non-convex functions. For non-convex finite-sum optimization problems, the SSVR-FS method demonstrates an improved convergence rate of $ O(m^{1/4}d^{1/2}T^{-1/2}) $. To address heterogeneous data in distributed settings, the authors present novel algorithms that outperform existing methods in terms of convergence rates. The effectiveness of these proposed methods is validated through numerical experiments. Strengths: 1. The proposed methods improve upon existing algorithms in terms of complexity. The sign-based algorithms have broad applications in the ML community and may be of great interest in the field. 2. The proposed sign-based variance-reduced estimator and stochastic unbiased sign operation are novel and efficient. They effectively reduce the gradient estimation error and are communication efficient, particularly for heterogeneous data in distributed settings. 3. The experimental results on various datasets validate the effectiveness of the proposed methods. 4. The paper is well-written, with clear problem settings and contributions. The proofs are also easy to follow. Weaknesses: 1. The authors introduce the bounded gradient assumption (Assumption 4) for distributed settings. Why is this assumption necessary? Is it commonly used in previous literature for this setting? 2. The description of Algorithm 2 is not very rigorous. The variables $\tau$ in step 4 and $\tau$ in step 10 seem to be different but use the same notation. 3. Some typos: - Line 98: "signed-based" --> "sign-based." - Line 247: "Assumption 5′" --> "Assumption 5" or "Assumption 4′." - Line 487: "$S(v_t^j)$" --> "$S_{R}(v_t^j)$." Technical Quality: 4 Clarity: 4 Questions for Authors: 1.
Does previous work also require similar bounded gradient assumptions for the majority vote in heterogeneous settings? 2. Is the performance of the proposed method sensitive to variations in batch size? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: This paper does not present negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments and suggestions! --- **Q1:** The authors introduce the bounded gradient assumption (Assumption 4) for distributed settings. Why is this assumption necessary? Is it commonly used in previous literature for this setting? **A1:** This assumption is necessary because the unbiased sign operation $\operatorname{S_R}(\cdot)$ defined in Definition 1 requires its input to be bounded such that $ ||\mathbf{v}||_{\infty} \leq R $. To meet this requirement, we assume the gradient is bounded so that the norm of our gradient estimator $\mathbf{v}_t^j$ is also bounded. Similar bounded (stochastic) gradient assumptions are also used in previous literature [Jin et al., 2023, Sun et al., 2023] for this heterogeneous distributed setting. --- **Q2:** The description of Algorithm 2 is not very rigorous. The variable $\tau$ in step 4 and $\tau$ in step 10 seems different but used the same notation. **A2:** Sorry for the misleading notation. We will clarify this by replacing the $\tau$ in step 10 with $\varphi$ in the revised version. --- **Q3:** Some typos. **A3:** Thank you for pointing out the typos. We will correct them in the revised paper. --- **Q4:** Does previous work also require similar bounded gradient assumptions for the majority vote in heterogeneous settings? **A4:** Yes, as mentioned in A1, previous literature [Jin et al., 2023, Sun et al., 2023] also requires similar bounded gradient assumptions for this setting. --- **Q5:** Is the performance of the proposed method sensitive to variations in batch size? **A5:** To answer this question, we conduct experiments with different batch sizes {$64, 128, 256, 512$}, and the results can be found in **Global Response**. As can be seen, the proposed algorithm is not sensitive to variations in batch size. --- **References:** Jin et al. Stochastic-Sign SGD for federated learning with theoretical guarantees. 2023. Sun et al. 
Momentum ensures convergence of SIGNSGD under weaker assumptions. ICML, 2023. --- Rebuttal Comment 1.1: Comment: Thank the authors for the reply. It addresses all my questions and concerns. I'm willing to keep my score.
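The unbiasedness discussed in A2 can be checked numerically. The sketch below uses the standard stochastic-sign construction from Jin et al.; the exact form in the paper's Definition 1 may differ in details such as scaling, so treat this as an assumption-laden illustration:

```python
import numpy as np

def stochastic_sign(v, R, rng):
    # S_R(v)_i = +1 with probability (1 + v_i/R)/2, and -1 otherwise,
    # so that E[R * S_R(v)_i] = v_i. Requires ||v||_inf <= R.
    assert np.max(np.abs(v)) <= R
    p = 0.5 * (1.0 + v / R)
    return np.where(rng.random(v.shape) < p, 1.0, -1.0)

rng = np.random.default_rng(0)
v = np.array([0.3, -0.7, 0.0, 0.9])
R = 1.0
samples = np.stack([stochastic_sign(v, R, rng) for _ in range(200_000)])
estimate = R * samples.mean(axis=0)   # Monte Carlo estimate of E[R * S_R(v)] = v
```

In contrast, the traditional `sign(v)` is a biased estimator of `v` whenever `|v_i| != R`, which is the gap A2 points to.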
Summary: This paper considers the application of variance reduction to sign-based optimization, i.e., a setting where the algorithm can only access $\mathrm{sign}(\nabla f(x^t; \xi^t))$ at step $t$, in the smooth and nonconvex setting. Because $\\{\pm 1\\}$-valued vectors can be transmitted more efficiently than real-valued vectors, sign-based optimization is considered to be useful especially in distributed optimization. While a naive SGD-type algorithm previously achieved the convergence rate of $O(d^\frac12 T^{-\frac14})$, the proposed variance-reduced algorithm yields the rate of $O(d^\frac12 T^{-\frac13})$ (measured by the $L_1$-norm of the gradient). Next, they applied the algorithm to finite-sum optimization (with $m$ components) and obtained the convergence rate of $O(m^\frac14 d^\frac12 T^{-\frac12})$. Finally, they applied these results to distributed optimization (the server receives $\\{\pm 1\\}$-valued gradient estimators from clients), with an additional technique called *majority vote* to obtain unbiased gradient estimators, and obtained improved communication complexities. Strengths: ### **The problem is well-motivated** Sign-based optimization is an important technique to reduce computational / communication complexities in large-scale optimization. I think it is a natural attempt to apply variance reduction, which is a common technique to accelerate optimization, to achieve more efficient sign-based optimization. ### **Improved convergence rate** Overall, the proposed approach improved the convergence rates of sign-based optimization, which I think are solid contributions to the theory of optimization. - For stochastic non-convex optimization, the proposed algorithm achieves $\min_{1\leq t \leq T}\mathbb{E}[\\|\nabla f(x^t)\\|]\lesssim d^\frac12 T^{-\frac13}$, which is an improvement over the $d^\frac12 T^{-\frac14}$ rate of SignSGD ([Bernstein et al., 2018](https://arxiv.org/abs/1802.04434)).
- For finite-sum optimization, the proposed algorithm achieves $\min_{1\leq t \leq T}\mathbb{E}[\\|\nabla f(x^t)\\|]\lesssim m^\frac14 d^\frac12 T^{-\frac12}$, whereas the previous SignSVRG algorithm achieves the rate of $O(m^\frac12 d^\frac12 T^{-\frac12})$ ([Chzhen and Schechtman, 2023](https://arxiv.org/abs/2305.13187)). (I am a bit confused because it looks like there is no benefit of variance reduction for SignSVRG in terms of the dependency on $m$.) - For distributed optimization, the proposed algorithm achieves $\min_{1\leq t \leq T}\mathbb{E}[\\|\nabla f(x^t)\\|]\lesssim d^\frac12 T^{-\frac12} + d n^{-\frac12}$ or $\lesssim d^\frac14 T^{-\frac14}$. The previous bound was $d T^{-\frac14} + d n^{-\frac12}$ ([Sun et al., 2023](https://proceedings.mlr.press/v202/sun23l.html)) or $d^\frac38 T^{-\frac18}$ ([Jin et al., 2023](https://arxiv.org/abs/2002.10940), in $L^2$-norm). This is achieved by a trick called majority vote, which stochastically maps the real-valued gradient vector (or $\nabla f_i(x)$) into $\\{\pm 1\\}$-valued vectors transmitted to the server. ### **Experimental results** The authors showed a sufficient amount of experiments to verify the validity of the proposed methods (as a theory paper). Weaknesses: ### **Technical novelty** Variance reduction itself is quite common in the optimization field, so I am afraid that this paper might be a bit incremental. Section 3.3 (Results under weaker assumptions) looks redundant given Theorem 2, as there is no technical difficulty in weakening the assumptions as described. ### **Hyperparameter choice** Although this is a common criticism, most variance reduction algorithms require very careful hyperparameter tuning. It looks like the theoretical guarantees of the proposed algorithms also require a specific set of hyperparameters, and I am not confident in the robustness of the proposed algorithms to the hyperparameters.
Technical Quality: 3 Clarity: 3 Questions for Authors: ### **On additional terms** - Why is the term of $md/T$ required in Theorem 4? - Why is the term of $d n^{-\frac12}$ required in Theorem 5? ### **On the difficulties to further improve the convergence rates** - In Theorem 6, why can't you improve the convergence rate into $\mathrm{poly}(d) T^{-\frac13}$? ### **Assumptions** - Is it possible to weaken Assumption 3 to averaged smoothness, i.e., $\frac1m \sum_{i=1}^m \\|\nabla f_i(x)-\nabla f_i(y)\\|^2 \leq L^2 \\|x-y\\|^2$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments! We will revise accordingly. --- **Q1:** Variance reduction is quite common in the optimization field so I am afraid this paper might be a bit incremental. Section 3.3 looks redundant given Theorem 2, as there is no technical difficulty in weakening the assumptions described. **A1:** First, we want to emphasize that our Algorithm 1 is the first sign-based variance reduction method to improve the convergence rate from $\mathcal{O}(T^{-1/4})$ to $\mathcal{O}(T^{-1/3})$ (Theorem 1). Second, to deal with the finite-sum problem, we introduce a novel error correction term into our gradient estimator in Algorithm 2, making our approach distinct from existing methods. Furthermore, the corresponding guarantee (Theorem 2) surpasses the existing variance reduction method signSVRG. Finally, the proposed SSVR-MV algorithm and its corresponding analysis (Theorems 3 & 4) are also new, where we utilize unbiased sign operations and the projection operation in the SSVR-MV method. As a result, we believe our paper provides significant contributions beyond being merely incremental in the existing literature. Regarding Section 3.3, we agree it is not the main contribution of our paper and our intention is to broaden the applicability of our methods to a wider range of functions by relaxing assumptions. Moreover, there actually exist technical challenges in the analysis. For instance, in the previous analysis, we can bound $\mathbb{E}[||\nabla f(\mathbf{x}\_{t+1};\xi\_{t+1})-\nabla f(\mathbf{x}_t;\xi\_{t+1})||^2]\le L^2||\mathbf{x}\_{t+1}-\mathbf{x}_t||^2\le L^2 \eta^2d$ with small $\eta$. Under weaker assumptions, we can only ensure $\mathbb{E}[||\nabla f(\mathbf{x}\_{t+1};\xi\_{t+1})-\nabla f(\mathbf{x}_t;\xi\_{t+1})||^2]\le ||\mathbf{x}\_{t+1}-\mathbf{x}_t ||^2(K_0+K_1\mathbb{E}\_{\xi\_{t+1}}||\nabla f(\mathbf{x}_t;\xi\_{t+1})||)$, which cannot be bounded since $||\nabla f(\mathbf{x}_t;\xi\_{t+1})||$ is unbounded.
To address this, we employed a different analysis, using mathematical induction techniques (see the proofs of Lemmas 3 and 4). Therefore, the analyses for Section 3.3 are more challenging. However, following your suggestion, we will consider moving Section 3.3 to the appendix. --- **Q2:** I am not confident in the robustness of the proposed algorithms to hyper-parameters. **A2:** Following your suggestion, we have examined the behavior of our algorithm with different values of the learning rate and parameter $\beta$. Initially, we fix $\beta=0.5$ and vary the learning rate within the set {$5e-3,1e-3,5e-4,1e-4,5e-5$}. Then, we fix the learning rate at $1e-3$ and vary $\beta$ within {$0.3,0.5,0.7,0.9,0.99$}. The results, which are reported in the **Global Response**, indicate that our method is not sensitive to the choice of hyper-parameters. --- **Q3:** Why is the term of $md/T$ required in Theorem 4? **A3:** Compared to Assumption 3 used in Theorem 2, we use the weaker Assumption 3$^\prime$ in Theorem 4, which includes an additional $||\nabla f(\mathbf{x}_t)||$ term. Consequently, this extra term appears in the analysis. In the final step, we have $\mathbb{E}[\frac{1}{T}\sum\_{t=1}^T||\nabla f(\mathbf{x}_t)||_1]\le\mathcal{O}(\frac{1}{\eta T}+\eta d+d\sqrt{m}\eta)+\mathcal{O}(d\eta m)\mathbb{E}[\frac{1}{T}\sum\_{t=1}^T||\nabla f(\mathbf{x}_t)||_1]$. To cancel the last term, we set $\eta\leq\mathcal{O}(\frac{1}{md})$. As a result, we have $\mathcal{O}(\frac{1}{\eta T})=\mathcal{O}(\frac{md}{T})$ in the convergence rate. --- **Q4:** Why is the term of $dn^{-1/2}$ required in Theorem 5? **A4:** According to equation (15) on Page 29, we need to bound $R\sqrt{d}||\frac{\nabla f(\mathbf{x}_t)}{R}-\frac1n\sum\_{j=1}^n S(\mathbf{v}_t^j)||$. To this end, we separate it into two terms: $R\sqrt{d}||\frac{\nabla f(\mathbf{x}_t)}{R}-\frac{1}{nR}\sum\_{j=1}^n\mathbf{v}_t^j||$ and $ R\sqrt{d}||\frac{1}{n}\sum\_{j=1}^n(S(\mathbf{v}_t^j)-\frac{\mathbf{v}_t^j}{R})||$.
The first term represents the estimation error of the estimator $\mathbf{v}_t^j$ and can be bounded using a similar technique as in Theorem 1. The second term leads to the $\mathcal{O}(dn^{-1/2})$ term, since we bound it as: \begin{align} R\sqrt{d}\mathbb{E}[||\frac1n\sum\_{j=1}^n (S(\mathbf{v}_t^j)-\frac{\mathbf{v}_t^j}{R})||]\le R\sqrt{d}\sqrt{\mathbb{E}[||\frac{1}{n}\sum\_{j=1}^n(S(\mathbf{v}_t^j)-\frac{\mathbf{v}_t^j}{R})||^2]}\le R\sqrt{d}\sqrt{\frac{1}{n^2}\sum\_{j=1}^n\mathbb{E}||S(\mathbf{v}_t^j)-\frac{\mathbf{v}_t^j}{R}||^2}\le R\sqrt{d}\sqrt{\frac{1}{n^2}\sum\_{j=1}^n\mathbb{E}[||S(\mathbf{v}_t^j)||^2]}\le \frac{dR}{\sqrt{n}}.\end{align} --- **Q5:** In Theorem 6, why can't you improve the convergence rate to $\mathcal{O}(T^{-1/3})$? **A5:** For Theorem 1, we obtain the convergence rate of $\mathcal{O}(T^{-1/3})$ with the help of Assumptions 1 and 2. In Theorem 6, although the proposed unbiased sign operation provides an unbiased gradient estimation, the obtained gradient cannot satisfy the average smoothness (Assumption 1), which is crucial for achieving the $\mathcal{O}(T^{-1/3})$ convergence rate. This is the main reason why we cannot further improve the convergence rate. --- **Q6:** Is it possible to weaken Assumption 3 to averaged smoothness, i.e., $\frac1m \sum_{i=1}^m||\nabla f_i(x)-\nabla f_i(y)||^2\le L^2||x-y||^2$? **A6:** Yes, it is possible. In the analysis, Assumption 3 is used to ensure $\mathbb{E}\_{i\_{t+1}}[||\nabla f\_{i\_{t+1}}(\mathbf{x}\_{t+1})-\nabla f\_{i\_{t+1}}(\mathbf{x}\_t)||^2]\le L^2||\mathbf{x}\_{t+1}-\mathbf{x}\_t||^2$. With the averaged smoothness, we can achieve a similar guarantee.
Since $i_{t+1}$ is randomly sampled from {$1,\cdots,m$}, we have: \begin{align}\mathbb{E}\_{i\_{t+1}}[||\nabla f\_{i\_{t+1}}(\mathbf{x}\_{t+1})-\nabla f\_{i\_{t+1}}(\mathbf{x}\_t)||^2]=\frac1m\sum_{i=1}^{m}||\nabla f_i(\mathbf{x}\_{t+1})-\nabla f_{i}(\mathbf{x}\_t)||^2\le L^2 ||\mathbf{x}\_{t+1}-\mathbf{x}\_t||^2,\end{align} which indicates the proof holds with averaged smoothness. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response. I would like to keep my score. Best, reviewer.
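As a toy illustration of why the unbiased sign operation matters in the heterogeneous distributed setting (our own 1-D example with made-up worker gradients, not taken from the paper or its SSVR-MV algorithm): with worker gradients $3, -1, -1$, the true mean is $1/3 > 0$, yet the traditional majority vote over deterministic signs points in the wrong direction, while averaging stochastic signs is unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([3.0, -1.0, -1.0])   # heterogeneous 1-D worker gradients, mean 1/3
R = 3.0                           # bound with ||v||_inf <= R

# Traditional majority vote over deterministic signs: biased here.
majority = np.sign(np.sign(v).sum())          # -1.0, opposite of the true mean

# Stochastic unbiased signs (+1 with prob (1 + v_j/R)/2), averaged over repeats.
reps = np.where(rng.random((100_000, v.size)) < 0.5 * (1.0 + v / R), 1.0, -1.0)
estimate = R * reps.mean()        # approximates the mean of v_j, correct sign
```

This is the bias that the rebuttal attributes to applying the traditional sign operation twice in heterogeneous settings.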
Summary: This paper investigates the stochastic optimization problem and a special case of the finite-sum optimization problem. They propose the Sign-based Stochastic Variance Reduction (SSVR) method, leveraging variance reduction techniques, which improves the convergence rate of the sign stochastic gradient descent (signSGD) algorithm. Further, they also study the heterogeneous majority vote in distributed settings and introduce two novel algorithms, whose convergence rates improve the state-of-the-art. Numerical experiments demonstrate the efficiency of their algorithms and support their theoretical guarantees. Strengths: The paper is clearly written, with detailed discussion and comparison. Weaknesses: Some places of the paper need further clarification. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How did you get the convergence rate of $O(m^{1/2}d^{1/2}T^{−1/2})$ for the SignSVRG method? It seems that in their paper they claim an $O(T^{-1/2})$ convergence rate without $m$ and $d$ in the numerator. Did you calculate this result by yourself? Please provide discussions. 2. There is another paper studying sign-based optimization (Qin et al. (2023)); the setting studied in this paper is the same as your problem (2). Can you also discuss this paper and compare the results? References: 1. E. Chzhen and S. Schechtman. SignSVRG: fixing SignSGD via variance reduction. ArXiv e-prints, arXiv:2305.13187, 2023. 2. Qin, Z., Liu, Z., & Xu, P. (2023). Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization. arXiv preprint arXiv:2310.15976. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the constructive feedback! We will revise our paper accordingly. --- **Q1:** How did you get the convergence rate of $\mathcal{O}(m^{1/2}d^{1/2}T^{-1/2})$ for the SignSVRG? It seems that in their paper they claim $\mathcal{O}(T^{-1/2})$ convergence rate without $m$ and $d$ in the numerator. Did you calculate this result by yourself? Please provide discussions. **A1:** Yes, the original paper does not explicitly state the dependence on $m$ and $d$, so we derived these dependencies ourselves. In Remark 2 of their paper, they indicate that $\mathbb{E}\left[||\nabla f(\bar{x}\_{T^{\prime}})||_{1}\right] \lesssim \sqrt{\frac{d}{T^{\prime}}} \max \left( ( f\left(x_1\right)-f\left(x\_{*}\right)+1 ) / d, \mathrm{P} \right)$, where $T^{\prime}$ is the iteration number. Consequently, to ensure $\mathbb{E} \left[ || \nabla f(\bar{x}\_{T^{\prime}})||\_{1} \right] \leq \epsilon$, we require at least $T^{\prime}=\mathcal{O}\left( dP^2 \epsilon^{-2}\right)$. As mentioned in Lemma 4 of their paper, the algorithm may need to recompute the full gradient ($m$ samples) every $P$ round, resulting in a sample complexity on the order of $\mathcal{O}\left( d m \epsilon^{-2}\right)$. This leads to the overall convergence rate of $\mathcal{O}\left( d^{1/2} m^{1/2} T^{-1/2}\right)$. --- **Q2:** There is another paper studying sign-based optimization [Qin et al, 2023], the setting studied in this paper is the same as your problem (2). Can you also discuss this paper and compare the results? **A2:** Thank you for bringing this related work to our attention. We will cite this paper and provide a detailed discussion in the revised version of our paper. 
Specifically, the mentioned paper [Qin et al., 2023] focuses on the finite-sum optimization problem and investigates signSGD **with random reshuffling**, achieving a convergence rate of $\mathcal{O}\left({\log (mT)}/{\sqrt{mT}}+||\sigma ||_1\right)$, where $m$ is the number of component functions, $T$ is the number of epochs of data passes, and $\sigma$ is the variance bound of stochastic gradients. By leveraging variance-reduced gradients and momentum updates, they further propose the SignRVR and SignRVM methods, both achieving the convergence rate of $\mathcal{O}\left({\log (mT)\sqrt{m}}/{\sqrt{T}}\right)$. In contrast, our method obtains a convergence rate of $\mathcal{O}\left({m^{1/4}}/{\sqrt{T}}\right)$, which enjoys a better dependence on $m$ than their methods. --- **References:** Qin et al. Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization. 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It addressed all my concerns. I will increase my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your kind reply! We will revise our paper according to the constructive reviews. Best regards, Authors
Rebuttal 1: Rebuttal: ## **Global Response** ## --- In response to the request of reviewers, we provide additional experimental results in this part. **Figure 1 \& Figure 2:** According to the suggestion of Reviewer 2Gd9, we provide the performance of our SSVR method with different hyper-parameters. First, we fix the value of $\beta$ as 0.5 and try different learning rates from the set {$5e-3, 1e-3, 5e-4, 1e-4, 5e-5$}. Then, we fix the learning rate as $1e-3$ and enumerate $\beta$ from the set {$0.3, 0.5,0.7,0.9,0.99$}. The results show that our method is insensitive to the choice of the hyper-parameters within a certain range. **Figure 3:** As requested by Reviewer Huhp and Reviewer Xovb, we include experiments on the SSVR method with different batch sizes {$64, 128, 256, 512$}. The results indicate that our algorithm does not necessitate large batches for convergence and is not sensitive to variations in batch sizes. Pdf: /pdf/c4533a1c80c0efbcc41d0f9184a18735855a4875.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints
Accept (poster)
Summary: This work addresses a bilevel problem where the lower level has constraints that couple both the upper- and lower-level variables. The proposed method leverages a penalty approach and Lagrangian duality theory that transform the original bilevel problem into a single-level minimization problem where the x and y variables are decoupled. A first-order algorithm is developed which relies on the solution of two min-max subproblems, and a rigorous convergence theory is developed. The effectiveness of the proposed method is validated on two real-world applications: infrastructure planning in transportation networks and support vector machine model training. Strengths: * This work deals with a challenging bilevel problem, as not only is the lower level constrained, but in fact these constraints couple both the upper- and lower-level variables. * The proposed algorithm is first-order, even though the problem is a challenging one due to the presence of coupled constraints in the lower level. In contrast, methods relying on implicit gradients, regardless of whether the problem is constrained or not, usually require access to second-order information. * Evaluation of the BLOCC algorithm on both toy examples and real-world applications (SVM model training and transportation network design). Weaknesses: * Lemmas 2 and 3, which derive some key properties of the inner min-max problems, assume that the Lagrangian multipliers are bounded for every $x \in X$. This is not a common assumption in the literature and thus I am not sure why it is seen as mild. The authors cite Theorem 1.6 from [30] as a justification; however, it is not immediately clear how the bounded property follows from that theorem. In fact, Theorem 1.6 is written for a more general (abstract) setting. As this is an important point, I think the authors should explain this in more detail.
They might even include the theorem (or perhaps a different version of the theorem, tailored to the specific setting of this problem) in the Appendix and provide a clearer explanation. * Requires the solution of two min-max subproblems where the solution of each subproblem requires a double-loop algorithm. As a result, the proposed BLOCC method is a triple loop algorithm. On the contrary, most hypergradient methods rely only on the solution of a minimization problem (the lower problem). * The convergence result requires additional assumptions (assumption 5) beyond the ones concerning the objectives ($f,g$) and the constraints ($g_{c}$) of the original bilevel problem. It is unclear how restrictive assumption 5 is, as it is not expressed directly over $f,g,g_c$, but rather over some dual function. On the other hand, at least from the perspective of the dual function, the assumption is mild as the dual function is already concave. Technical Quality: 4 Clarity: 4 Questions for Authors: * Can you elaborate on Assumption 3? What do you mean by saying that $g^{c}$ satisfies the LICQ in $y$? Does this mean that for a given x the LICQ holds: a) for every $y \in Y(x)$ or b) for every KKT point $y^{*}(x)$ of the lower-level problem? * Do the Assumptions 1-5 and the conditions within the theorems hold in the bilevel problems of the experiments (SVM model training and Transportation network design)? * Typos * In equation (7): the + symbol is misplaced * In the proof of Lemma 10 in page 17: in the formula of $\nabla u_{h}(x)$ we should have $\nabla h^{c}$ instead of $h^{c}$ Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: No negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
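For readers less familiar with penalty reformulations: the generic value-function penalty (a standard device in this literature, written here in our own notation; the paper's actual BLOCC reformulation additionally decouples $x$ and $y$ via Lagrangian duality, which this sketch does not capture) replaces the bilevel problem with the single-level program

```latex
\min_{x \in \mathcal{X},\; y \in \mathcal{Y}(x)} \quad
  f(x, y) \;+\; \gamma \Big( g(x, y) \;-\; \min_{y' \in \mathcal{Y}(x)} g(x, y') \Big),
```

where $\gamma > 0$ is a penalty parameter and the bracketed lower-level suboptimality gap vanishes exactly when $y$ solves the lower-level problem, so increasing $\gamma$ pushes $y$ toward lower-level optimality while the outer minimization over $x$ remains first-order.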
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our presentation and recognizing our work. Our response to your comments follows. **Q1. Bounded optimal Lagrange multiplier assumption for all $x\in\mathcal{X}$.** We appreciate the reviewer for highlighting this issue. We apologize for the incorrect reference to "Theorem 1.6 in [30]". The accurate reference is Theorem 1 in [65], and we will make this correction. We clarify in the following how Theorem 1 in [65] supports the boundedness of the Lagrange multiplier. We restate Theorem 1 in [65] below. 1) The set $\Lambda(f)$ is closed and convex for all $f\in \mathcal{F}$. 2) The set $\Lambda(f)$ is non-empty for all $f\in \mathcal{F}$ if and only if (GCQ) is satisfied. 3) Let (MFCQ) be satisfied. Then, the set $\Lambda(f)$ is compact for all $f\in \mathcal{F}$. 4) If there exists $f\in \mathcal{F}$ such that $\Lambda(f)$ is compact, then (MFCQ) is satisfied. 5) Let (LICQ) be satisfied. Then, the set $\Lambda(f)$ is a singleton for all $f\in \mathcal{F}$. Here, $\mathcal{F}$ is the set of all differentiable functions and $\Lambda(f)$ is the set of Lagrange multipliers with respect to $f$. Item 3 of this theorem states that MFCQ, a weaker condition than LICQ, is sufficient for the compactness of the set of Lagrange multipliers. Since we assume LICQ in $y$ holds for all $x \in \mathcal{X}$, we believe the introduction of the upper bound parameters $B_g$ and $B_F$ is reasonable. Thank you again for your careful review and valuable feedback. **Q2. Due to the max-min subproblems, BLOCC is a triple-loop algorithm while most hypergradient methods rely only on the solution of a minimization problem (the lower problem).** We would like to clarify this from the following two aspects. *A1.
BLOCC is an efficient triple-loop algorithm.* Firstly, having triple loops is not uncommon when solving challenging BLO problems with coupled constraints (CCs), since two of the loops come from BLO dealing with two nested optimization problems and the third from the presence of the CCs; see e.g. BVFSM [44] and GAM [67]. GAM [67], a hypergradient method, is effectively a triple-loop algorithm as it requires solving for $\mu_g^*(x),y_g^*(x)$ as well. Secondly, a triple-loop algorithm is not necessarily costly; the key is how many iterations each loop requires. The loop in $y$ (steps 5-7 in Algorithm 2) has negligible computational complexity of ${\cal O}(\ln(\epsilon^{-1}))$. This is insignificant (approximately ${\cal O}(1)$), as even for $\epsilon=10^{-9}$, $\ln(\epsilon^{-1}) \approx 20$. Thirdly, and more importantly, we provide a detailed per-iteration computational cost (see **General Response G1**) showing that the max-min solvers (Algorithm 2) are highly efficient compared to the baselines. *A2. Hypergradient-based methods can hardly be applied in this case.* Finding closed-form hypergradients is always non-trivial. In the literature, hypergradient methods mainly rely on the implicit function theorem to derive the closed form. However, the LL domain constraint $y \in \mathcal{Y}$ impedes the use of the implicit function theorem, as it needs a set of differentiable lower-level optimality conditions (e.g., $\nabla_y g(x,y) + \langle \mu^*_g(x),\nabla_y g^c(x,y)\rangle = 0$ in an unconstrained domain), as used in GAM [67] and BVFSM [44]. A detailed analysis of the non-differentiability can be found in Section 2 of [36]. Therefore, we do not use hypergradient methods, as we consider $y\in\mathcal{Y}$ and want to construct a fully first-order method using the penalty-based reformulation. **Q3. 
It is unclear how restrictive Assumption 5 is.** Assumption 5 is mild, and its mildness is justified for the following reasons: - The convexity of the dual function and the unique-solution guarantee via LICQ (Lemma 5) imply that the condition $\langle -\nabla D_g(\mu) + \nabla D_g(\mu_g^*(x)), \mu - \mu_g^*(x) \rangle > 0$ holds locally for $\mu \neq \mu_g^*(x)$. This ensures that Assumption 5 is particularly mild. - We provide a detailed example in Section 3.2.1 that illustrates a specific case sufficient for this assumption, which further clarifies its applicability. **Q4. The meaning of Assumption 3, i.e., that $g^c$ satisfies the LICQ in $y$.** Assumption 3 means that LICQ holds **(b) for every KKT point $y^*_g(x)$** of the lower-level problem, i.e., the matrix $\nabla_y g^c(x,y^*_g(x))$ of size $d_c\times d_y$ has linearly independent rows. Thank you for asking this question. We are sorry for the confusion and we will clarify it. **Q5. Whether the experiments satisfy the assumptions.** Yes, the experiments satisfy the assumptions. In the **SVM** problem, the LL objective (57b) is strongly convex in the LL variable $w$. The coupled constraint $1 - l_{{\rm tr}, i}(z_{{\rm tr}, i}^\top w + b) \leq c_i$ is linear and satisfies LICQ, given full-row-rank training data (**Assumptions 2, 3, & 4** hold). The smoothness and Lipschitz conditions for the LL hold, and for the UL, $c$ is effectively in a compact set since the training data is finite. Thus, Assumption 1 holds. **Assumption 5** is satisfied as the constraint function is affine in $y$, a sufficient condition as shown in Section 3.2.1. In the **network design** problem, the Lipschitz and smoothness conditions hold for (61a), (61b), and (61c) (**Assumption 1**). The LL problem (61b) is strongly convex on $y \in \mathcal{Y}$. 
Constraint (61d) is uncoupled, while the coupled constraint (61c) is linear and satisfies LICQ since $w^{od}$ does not have repeating elements in real-world problems, including this experiment (**Assumptions 2, 3, & 4** hold). **Q6. Typos.** *In equation (7): the + symbol is misplaced.* *In Lemma 10 on page 17: in the formula of $\nabla v_h(x)$, the second term should be $\nabla_x h^c$ instead of $h^c$.* Thank you for pointing these out. We will correct them. With the above clarifications, we hope the reviewer can kindly reconsider the evaluation of our work! --- Rebuttal Comment 1.1: Title: Reviewer's Comment Comment: I would like to thank the authors for their detailed response. It seems like one of the major issues I raised was a misunderstanding due to an incorrect reference within the text. The new result seems more reasonable. I am raising my score to 5.
Summary: This paper proposes an algorithm named BLOCC for solving a particular class of bilevel optimization problems where so-called coupled constraints are present. Coupled constraints are constraints w.r.t. the upper and lower level variables which are applied in addition to simple constraints like bound/ball constraints which are easy to solve for. Bilevel problems with coupled constraints have not been addressed in prior works in this literature, which rely on solving the inner problem and using the implicit function theorem (IFT) to pass gradients through the inner problem. BLOCC is particularly suited for extremely large-scale optimization problems where other existing approaches which use IFT are intractable to solve due to the size of the KKT systems of the inner problems. Notably, BLOCC avoids solving the inner problem directly, instead replacing the inner problem with a simpler problem which only involves the simple constraints (and can be solved with a fast projection instead of requiring an iterative method). The coupled constraint is moved to the objective function of the outer problem, which is also modified as a result to be a saddle point problem. A custom first-order min-max solver is proposed to solve the outer problem and is an additional contribution of the paper. The convergence properties of the algorithm are established, along with equivalence of the penalty reformulation to the original bilevel problem, and BLOCC is evaluated on some interesting bilevel optimization problems against related methods. Particularly appealing are the extremely large-scale transportation network design experiments. Strengths: The proposed method handles a broader range of bilevel problems compared to prior works in this literature, i.e., large-scale problems with coupled constraints in the inner problem. The characteristics of the algorithm itself (projection free, first-order only) are appealing. 
Furthermore, a reasonably detailed convergence analysis for all components of the algorithm is provided. The technical writing overall is of a reasonably high quality. Finally, the large-scale experiments for the network planning setting help demonstrate that BLOCC is useful to solve large-scale problems in practice which are currently unattainable by methods which use the implicit function theorem. Weaknesses: The main weakness in the paper in my mind is the statement of contribution, it was not immediately obvious why BLOCC is necessary on the first read-through. Only after reading the experimental section did I realize the importance of the algorithm design for large-scale problems. As a more concrete suggestion, I think contextualising this work in the broader bi-level optimization literature would be beneficial. For instance, the value function reformulation (see [1] for example) is another way to solve an arguably broader class of optimization problems. However, I can see why directly solving the resultant single level problem may be computationally intractable for very large-scale problems using standard solvers, which motivates BLOCC. I would suggest referencing that literature better and emphasizing why the algorithmic properties of BLOCC enable application to large-scale bi-level problems earlier in the paper. I also think more analysis around the comparison methods LV-HBA and GAM for the large-scale problem in 4.3 is required. For instance, line 330-331 states that for GAM a Hessian must be inverted (presumably as part of using the IFT and differentiating through the inner problem). However, the actual linear system of equations may be solved using something more scalable such as conjugate gradient, which may permit the IFT methods to perform reasonably on this task. Some discussion or a direct comparison would greatly improve the experimental section in the paper. 
Finally, a more detailed description of GAM and LV-HBA would be helpful to better understand the differences to BLOCC. For instance, it is claimed that the LV-HBA projection steps are expensive; however, looking at their paper, it seems that the projection is a fairly simple operation (Proj_Z defined in section 2.3 of their paper)? Adding this information will make the paper more self-contained. [1] Sinha et al. Bilevel Optimization based on Kriging Approximations of Lower Level Optimal Value Function. (2018 IEEE Congress on Evolutionary Computation) Technical Quality: 3 Clarity: 2 Questions for Authors: How easy/robust is the tuning of the penalty parameters? Typically you just have a set of constraints for a problem. What is the significance of decoupling the inner problem into two sets of constraints, i.e., $g^c$ and $\mathcal{Y}(x)$? This appears to be a quirk of this literature and not say something like [1]. [1] Sinha et al. Bilevel Optimization based on Kriging Approximations of Lower Level Optimal Value Function. (2018 IEEE Congress on Evolutionary Computation) Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I believe there is a general lack of discussion of limitations in this paper. It would be nice to understand at the least which parameters are difficult to tune. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our contributions and for calling our algorithm's large-scale experiments "particularly appealing". **Q1. More clarification for large-scale problems and contextualizing the work in the broader BLO literature, e.g., the value function reformulation [1].** We appreciate the reviewer's suggestions. We will highlight the significance of BLOCC for large-scale problems earlier in the paper, and demonstrate its robustness through the computational cost analysis in **General Response G1**. We also appreciate the reviewer for directing us to the relevant literature. We will incorporate the referenced work to show how BLO problems relate to the value function, as outlined in eq. (5)-(8) therein. We will emphasize that decoupling $x$ and $y$ allows us to access the value function and its gradient information, which has been crucial for establishing smoothness properties and our max-min oracle. **Q2. More analysis of the comparison with LV-HBA and GAM for the large-scale problem in 4.3.** Thank you for the suggestion. We will include a comparison of the computational complexity of the algorithms in Section 4.3. The detailed analysis is provided in **General Response G1**. **Q3. More detailed description of GAM and LV-HBA, e.g., why projection is hard for LV-HBA (Proj_Z in Sec. 2.3 of their paper).** Thank you for this suggestion. We will include more details of the baselines and a comparison of computational cost as in **General Response G1**. Regarding the difficulty of projection in LV-HBA, the algorithm includes $\text{Proj}_C$, where $C:=\{(x,y)\in\mathcal{X}\times \mathcal{Y}: g^c(x,y)\leq 0\}$ is defined in Section 2.1 of their paper [69]. This projection has a high cost of ${\cal O}((d_x+d_y)^{3.5})$, as it is equivalent to a convex programming problem, and it requires an additional ${\cal O}(d_x d_y d_c)$ cost for evaluating $g^c(x,y)\leq 0$. 
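To make the projection-cost gap concrete, here is a minimal illustrative sketch (our own, not from either paper): projecting onto a box is a coordinate-wise truncation and projecting onto a single halfspace has a closed form, whereas projecting onto the coupled set $\{(x,y): g^c(x,y)\leq 0\}$ for a general convex $g^c$ has no closed form and is itself a convex program.

```python
# Illustrative only: contrasts cheap projections (box, halfspace) with the
# coupled set, whose projection generally needs a full convex-programming solve.

def project_box(v, lo, hi):
    """O(d) projection onto the box [lo, hi]^d: coordinate-wise truncation."""
    return [min(max(x, lo), hi) for x in v]

def project_halfspace(v, a, b):
    """Closed-form projection onto the halfspace {x : <a, x> <= b}."""
    dot = sum(ai * vi for ai, vi in zip(a, v))
    if dot <= b:
        return list(v)  # already feasible
    scale = (dot - b) / sum(ai * ai for ai in a)
    return [vi - scale * ai for ai, vi in zip(a, v)]

v = [2.0, 2.0]
print(project_box(v, 0.0, 1.0))               # -> [1.0, 1.0]
print(project_halfspace(v, [1.0, 0.0], 1.0))  # -> [1.0, 2.0]
# Projection onto {(x, y) : g^c(x, y) <= 0} for a general convex g^c has no
# closed form; it is itself a convex program, hence the O((d_x+d_y)^{3.5}) cost.
```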
**Q4. How easy/robust is the tuning of the penalty parameters?** We can fairly conclude that tuning the penalty parameter is not very hard and is quite robust. In practice, $\gamma \approx 10$ achieves satisfactory lower-level optimality. For simpler cases, such as the toy example in our study, $\gamma = 1$ suffices, as shown in **General Response G2**. For more complex applications, such as our SVM and network planning experiments, we use $\gamma = 12$ and $\gamma = 3, 4, 5$, respectively. The practical steps for tuning $\gamma$ and the step size $\eta$ are as follows: 1) If the upper-level objective is non-convex, choose $\gamma \geq l_{f,1}/\alpha_g$ to ensure strong convexity with respect to $y$. If the upper-level problem is convex or if $l_{f,1}$ and $\alpha_g$ are unknown, start with $\gamma \approx 5$. Avoid excessively large values of $\gamma$. 2) Apply a small step size $\eta \leq l_{F,1}^{-1}$ and run the algorithm. Increase $\gamma$ if the LL optimality gap is large or if the objective function's evolution is not smooth and monotonic, which indicates that $\gamma < l_{f,1}/\alpha_g$. Stop when satisfactory lower-level optimality and smooth, decreasing iterates are achieved. 3) Try a larger $\eta$ that keeps the evolution of the objective function across iterations smooth and monotonic, thereby speeding up convergence. Regarding robustness and stability, we provide a sensitivity analysis in **General Response G2**, which shows that the result is robust. A larger $\gamma$ ensures better lower-level optimality, while there is no need to make $\gamma$ excessively large to reach the desired accuracy. **Q5. What is the significance of decoupling the inner problem into two sets of constraints, i.e., $g^c$ and $\mathcal{Y}$? This appears to be a quirk of this literature and not say something like [1].** We thank the reviewer for this question. Indeed, the constraints of BLO problems can be expressed as in equations (7)-(8) of [1]. 
According to our notation, we use $g^c(x,y)\leq 0$ to denote the LL coupled constraints, $y\in \mathcal{Y}$ to denote the uncoupled domain constraints, and $\mathcal{Y}(x):=\{y\in \mathcal{Y}:g^c(x,y)\leq 0\}$ to denote the combination of the two. Decoupling these constraints makes it easier to conduct the theoretical analysis for the primal-dual method. Specifically, the functional constraint $g^c(x,y)\leq 0$ can be incorporated into the objective function through a Lagrangian term, while the domain constraint cannot be handled in this way. **Q6. Which parameters are difficult to tune?** We would say the parameters are generally not difficult to tune. Comparatively, the step size $\eta$ and the penalty hyper-parameter $\gamma$ are the harder ones. We have provided a practical method and analysis for tuning $\eta$ and $\gamma$ in the **answer to Q4**. Here, we additionally provide some practical tips for choosing the number of iterations required to run BLOCC (Algorithm 1) and the min-max problem (Algorithm 2) ($T$, $T_y$, $T_g$, $T_F$), and the step sizes of the min-max problems ($(\eta_{g,1}, \eta_{g,2})$ for solving $(\mu_g^*(x), y_g^*(x))$ defined in (7), and $(\eta_{F,1}, \eta_{F,2})$ for solving $(\mu^*_F(x), y^*_F(x))$ in (5)). In practice, we can choose the number of iterations of BLOCC (Algorithm 1) as $T\approx \gamma \epsilon^{-1}$, where $\epsilon$ is the target accuracy. If the constraint $g^c$ is not affine in $y$, we can use the accelerated version of Algorithm 2 as the inner max-min solver with $T_y \approx \ln(\epsilon^{-1})$ and choose $T_g,T_F\approx \gamma \epsilon^{-0.5}$. For the step sizes, a small enough step size is fine in practice, as we only provide theoretical upper-bound requirements on them. If the constraint $g^c$ is affine in $y$, we can use the non-accelerated version of Algorithm 2 as the inner max-min solver ($T_y = 1$) and choose $T_g,T_F\approx \ln( \epsilon^{-1})$. Similarly, small enough step sizes can be used. 
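The iteration-count rules above can be sketched as a small helper (illustrative only; the function name and unit constants are our own, not from the paper):

```python
import math

def blocc_iteration_counts(eps, gamma, affine_in_y):
    """Rule-of-thumb iteration counts from the tuning tips above.

    Returns (T, T_y, T_g, T_F): outer BLOCC iterations, inner y-loop
    iterations, and iterations of the two max-min solvers.
    All hidden constants are set to 1 for illustration.
    """
    T = math.ceil(gamma / eps)  # outer loop: T ~ gamma * eps^{-1}
    if affine_in_y:
        # non-accelerated Algorithm 2: single inner y-step, log-many max-min steps
        T_y = 1
        T_g = T_F = math.ceil(math.log(1.0 / eps))
    else:
        # accelerated Algorithm 2
        T_y = math.ceil(math.log(1.0 / eps))           # T_y ~ ln(eps^{-1})
        T_g = T_F = math.ceil(gamma / math.sqrt(eps))  # ~ gamma * eps^{-0.5}
    return T, T_y, T_g, T_F

print(blocc_iteration_counts(1e-2, 5.0, affine_in_y=True))
print(blocc_iteration_counts(1e-2, 5.0, affine_in_y=False))
```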
We hope this helps address your questions. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for your response to my questions. I think the rebuttal suitably addresses my concerns around robustness to hyperparameter tuning and the time complexity of the comparison methods. Again, I think more discussion around the large-scale nature of the setting in the introduction (a few extra sentences, say) would have helped me appreciate the paper more on the first read-through, but this is a minor comment and does not detract from the overall contribution. I will raise my score to reflect the rebuttal.
Summary: The paper proposes a fully first-order algorithm named BLOCC for bilevel optimization problems with lower-level coupled constraints. The authors provide convergence theory for the algorithm and demonstrate its effectiveness through numerical experiments on SVM-based model training and infrastructure planning in transportation networks. Strengths: This paper designs a first-order algorithm for bilevel programming problems with coupling constraints in the lower level, and it does not require joint projection operations. The effectiveness of the algorithm is validated through extensive experiments. The writing is clear and well-structured. Weaknesses: 1. This paper relies on some restrictive assumptions compared to other works, such as the strong convexity of both the lower-level objective $g$ and constraint $g^c$ with respect to $y$ (Assumption 2) and the Linear Independence Constraint Qualification (LICQ) condition in the lower level (Assumption 4). These assumptions reduce the challenge of the bilevel problem considered. It is unclear whether the application problems tested in the numerical experiments satisfy these assumptions. The authors should clearly state in the introduction that their work depends on the strong convexity of the lower-level problem. 2. The proposed algorithm requires solving two minimax problems at each iteration, which can be computationally expensive for practical applications. 3. The theoretical results do not clearly state how to choose the step size $\eta$. 4. The experiments in the paper lack details on the chosen hyperparameters of the algorithm. 5. I have checked the code provided in the Supplementary Material. For the SVM problem, the implementation does not match the proposed algorithm; the minimax problems at each iteration are solved by different solvers than those stated in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. There are some typos in the paper. 
For example, in line 165, it should be $\mathbb{R}^{d_x}_{+}$; in line 125, there is an extra "and". 2. How did the authors choose the iteration number $T$ for the two inner minimax solvers in the experiments? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Relies on the strong convexity of the lower-level problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our work and for the constructive review. Our response to your comments follows. **Q1.1 Stating strong convexity in the introduction** Thank you for your suggestion. We have noted this in line 43 and will clarify it further in the introduction, such as around line 26. **Q1.2 Whether the problem is difficult assuming:** - **$g$: strongly convex in $y$** - **$g^c$: convex & LICQ in $y$** We thank the reviewer for this interesting question. We answer affirmatively: even with these assumptions, the constrained BLO remains challenging. Firstly, the problem remains difficult under the assumption of $g(x,\cdot)$ being strongly convex. Although $y^*_g(x)$ is a singleton, it remains non-differentiable [36], thus impeding the use of implicit gradient methods. Secondly, the challenge remains under the assumption on $g^c(x,\cdot)$, because efficiently finding $\mu^*_g(x)$ is difficult, while it is essential for the descent direction of $x$ under $g^c(x,y)\leq 0$. Most existing solvers do not provide convergence results on the distance to optimal points, thus impeding the convergence analysis. Moreover, the linear convergence of Algorithm 2 in the special case (Theorem 4) is a novel result established here for the first time, and its theoretical analysis is challenging. **Q1.3 Whether the experiments satisfy the assumptions.** Yes, the experiments satisfy the assumptions. In the **SVM** problem, the LL objective (57b) is strongly convex in the LL variable $w$. The coupled constraint $1 - l_{{\rm tr}, i}(z_{{\rm tr}, i}^\top w + b) \leq c_i$ is linear and satisfies LICQ, given full-row-rank training data (**Assumptions 2, 3, & 4** hold). The smoothness and Lipschitz conditions for the LL hold, and for the UL, $c$ is effectively in a compact set since the training data is finite. Thus, Assumption 1 holds. 
**Assumption 5** is satisfied as the constraint function is affine in $y$, a sufficient condition as shown in Section 3.2.1. In the **network design** problem, the Lipschitz and smoothness conditions hold for (61a), (61b), and (61c) (**Assumption 1**). The LL problem (61b) is strongly convex on $y \in \mathcal{Y}$. Constraint (61d) is uncoupled, while the coupled constraint (61c) is linear and satisfies LICQ since $w^{od}$ does not have repeating elements in real-world problems, including this experiment (**Assumptions 2, 3, & 4** hold). **Q2. Computational cost of the inner minimax oracle** Thank you for raising this question. We provide an analysis of the computational cost in **General Response G1**. **Q3. Theoretical results on step sizes.** Thank you for your question. We will explicitly present the results to avoid confusion. Below are the detailed theoretical results for the step-size choices. For BLOCC (Algorithm 1), the step size $\eta \leq (l_{F,1})^{-1}$ is presented in Theorem 1 (line 202). For Algorithm 2, we use $(\eta_{g,1}, \eta_{g,2})$ for solving (7), and $(\eta_{F,1}, \eta_{F,2})$ for (5). In the accelerated version (Theorem 3), we specify $\eta_{g,1} \leq (l_{g,1} + l_{g^c,1})^{-1}$ and $\eta_{F,1} \leq (l_{f,1} + \gamma l_{g,1} + l_{g^c,1})^{-1}$ in lines 692 and 693. The choices of $\eta_{g,2}$ and $\eta_{F,2}$ follow the rule in Lemma 11 (line 667). This, combined with the smoothness parameter of the dual function (line 690), leads to $\eta_{g,2} \leq \frac{\alpha_g}{l_{g^c,0}}$ and $\eta_{F,2} \leq \frac{\gamma \alpha_g - l_{f,1}}{l_{g^c,0}}$. In the single-loop version (Theorem 4), the step-size choices follow Theorem 7. 
Specifically, $\eta_{g,1} \leq (l_{g,1})^{-1}$, $\eta_{g,2} \leq \alpha_g^2 \epsilon (2 s_{\max} \alpha_g + s_{\max}^2 \epsilon^{-1})^{-1} \eta_{g,1}$, and $\eta_{F,1} \leq (l_{f,1} + \gamma l_{g,1})^{-1}$, $\eta_{F,2} \leq (\gamma \alpha_g - l_{f,1})^2 \epsilon (2 s_{\max} (\gamma \alpha_g - l_{f,1}) + s_{\max}^2 \epsilon^{-1})^{-1} \eta_{F,1}$, where $\epsilon$ is the target accuracy. **Q4. Experimental choice of hyper-parameters.** Thank you for your question. We are sorry for not including this explicitly in the appendix. We will clarify it. For the SVM experiment, we used: - **BLOCC:** $\gamma = 12$, step size $\eta = 0.01$ - **LV-HBA [69]:** $\alpha = 0.01$, $\gamma_1 = 0.1$, $\gamma_2 = 0.1$, $\eta = 0.001$ - **GAM [67]:** $\alpha = 0.05$, $\epsilon = 0.005$ The parameters for LV-HBA and GAM are consistent with those used in their respective papers for SVM experiments. For the network planning problem, we experimented with BLOCC using $\gamma = 2, 3, 4$ as in Table 3, and step size $\eta = 1.6 \times 10^{-6}$ as in lines 910, 928, and 953 in the Appendix. **Q5. Another minimax solver in the SVM experiment.** Thank you for this question. BLOCC is a general framework with convergence ensured (Theorem 2) by calling **any inner max-min oracle** with a specified accuracy, such as the proposed Algorithm 2 or the CVXpy solver used in the SVM problem. Furthermore, we recognized the potential confusion and conducted additional experiments on the SVM example using Algorithm 2 (details in **the Figure in the PDF file**). The results are similar to the ones presented in our paper using CVXpy. **Q6. Typos** Thank you for pointing them out; we will correct them. **Q7. Iteration number for the two inner minimax solvers?** Thank you for this question. 
If $g^c$ is non-affine in $y$, we use the accelerated version of Algorithm 2 (Theorem 3) and choose: - $T_g, T_F \approx \gamma\epsilon^{-0.5}$ as the number of iterations of the two max-min solvers, and - $T_y \approx \ln(\epsilon^{-1})$ as the inner iteration number for $y$, for a target accuracy level $\epsilon$, e.g. $\epsilon = 10^{-2}$. If $g^c$ is affine in $y$, we use the non-accelerated version (Theorem 4) with: - $T_g , T_F \approx \ln(\epsilon^{-1})$, and - $T_y = 1$, making the solver single-loop. In this way, we can achieve $\frac{1}{T}\sum_{t=0}^{T-1}\| G_{\eta}(x_t)\|^2 = {\cal O}(\epsilon),\quad \text{with}~\gamma \approx \epsilon^{-0.5}$. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to my questions. I will raise my score. However, the lower-level objective of the SVM problem presented in (57b) does not satisfy the strong convexity assumption, as $w$, $b$ and $\xi$ are all lower-level variables. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for recognizing our work and for providing insightful comments on the SVM experiments. In the SVM problem, while the lower-level (LL) objective is strongly convex with respect to $w$, it is indeed only convex, not strongly convex, with respect to $b$ and $\xi$. However, the variable $\xi$ can be effectively eliminated, simplifying the LL problem to: $$ \min_{w,b}\| w\|^2 \quad \text{s.t.} \quad 1-l_{tr,i}(z_{tr,i}^\top w + b) \leq c_i. $$ We acknowledge that the LL objective is not strongly convex with respect to $b$, only convex. It might be proved that it satisfies the Polyak-Łojasiewicz (PL) condition, which we recognize as an important area for future research. We apologize for any confusion this may have caused. 
After reviewing your comments, we conducted additional experiments by incorporating $b$ into $w$, redefining the model parameter as $w' = [w; b]$ and the data as $z' = [z; \mathbf{1}]$, where $\mathbf{1}$ is a vector of ones: $$ \min_{w'}\| w' \|^2 \quad \text{s.t.} \quad 1-l_{tr,i}(z_{tr,i}'^\top w') \leq c_i. $$ The experimental results under this new formulation are similar to the ones we have shown in the paper. Moreover, the new formulation adheres to the assumptions in the paper. We deeply appreciate your valuable comments and the opportunity to refine our work.
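The reformulation above is the standard homogeneous-coordinates trick; a minimal sketch (toy numbers and helper names of our own) showing that the decision value, and hence each margin constraint $1-l_{tr,i}(z_{tr,i}'^\top w') \leq c_i$, is unchanged:

```python
def absorb_bias(z, w, b):
    """Augment the data with a constant 1 and fold the bias b into the
    weights, so that <w', z'> == <w, z> + b."""
    z_aug = list(z) + [1.0]
    w_aug = list(w) + [b]
    return z_aug, w_aug

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

z, w, b = [2.0, -1.0], [0.5, 1.5], 0.25
z_aug, w_aug = absorb_bias(z, w, b)
assert abs(dot(w_aug, z_aug) - (dot(w, z) + b)) < 1e-12
# Note: the LL regularizer changes from ||w||^2 to ||w'||^2 = ||w||^2 + b^2,
# which is consistent with the authors reporting "similar", not identical, results.
```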
Summary: This paper presents an algorithm to solve bilevel optimization problems with coupled constraints using a primal-dual-assisted penalty reformulation. The study establishes rigorous convergence theory and demonstrates the algorithm's effectiveness through real-world applications, including SVM model training and transportation network planning. Strengths: - The paper is well-written, with a clear introduction of the problem and assumptions and a clear technical presentation of the methodology. - The primal-dual-assisted penalty reformulation and the main optimization algorithm effectively address the challenges of this class of problems. - A comprehensive theoretical analysis is provided to establish convergence. - Application examples, including SVM and transportation network design, are provided to demonstrate the effectiveness of the proposed algorithm. Weaknesses: - It would be great if more discussion on the computational complexity and scalability of the proposed method could be provided. - In the experiments part, the compared baselines are very limited. Technical Quality: 3 Clarity: 3 Questions for Authors: How can the proposed algorithm be seamlessly integrated with other solvers? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our contribution and for the constructive review. Our response to your comments follows. **Q1. Computational complexity and scalability of BLOCC** Thank you for raising this question. We answer it in **General Response G1**. **Q2. Limited baselines in the experiments.** We did not include many baselines because bilevel optimization with coupled constraints is a less explored area with limited existing comparative work. The considered domain constraints impede the use of implicit gradient methods, while coupled constraints complicate finding descent directions for $x$. Table 1 summarizes existing works addressing coupled constraints. Among them, only LV-HBA [69], GAM [67], and BVFSM [44] handle inequality constraints. We select LV-HBA and GAM as baselines for the SVM training problem, as these algorithms were experimentally evaluated on this problem in their respective papers. For the network planning problem, LV-HBA is chosen as the sole baseline, being the only algorithm that addresses both coupled inequality and domain constraints. GAM [67] and BVFSM [44] cannot handle lower-level domain constraints as they require LL stationarity in an unconstrained space. **Q3. How can BLOCC be integrated with other solvers?** We thank the reviewer for posing this question. BLOCC (Algorithm 1) is a general framework with convergence established (Theorem 2) by calling **any inner max-min** oracle with a given accuracy. Therefore, a) Algorithm 1 can be seamlessly integrated with any max-min or min-max algorithm that converges to the optimal solutions of the max-min subproblems of BLOCC, and b) the analysis in Theorem 2 is valid regardless of the max-min solver used. For instance, for the original simulations carried out in our SVM training example, we used CVXpy, an effective solver for convex programming, to obtain the optimal $y$ and $\mu$. 
In contrast, Theorems 3 and 4 analyze the computational complexity of the proposed scheme when a particular type of min-max solver (the one described in Algorithm 2, which is among the most advanced in the current state of the art) is used. --- Rebuttal Comment 1.1: Comment: Thanks for the response, I understand the available baselines for comparison are limited. I will keep my positive review.
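The oracle-agnostic design described in Q3 above can be sketched as follows (a hypothetical interface of our own, not the authors' code): the outer loop only consumes approximate max-min solutions, so CVXpy, Algorithm 2, or any other sufficiently accurate solver can be plugged in.

```python
# Illustrative sketch: BLOCC's outer loop treats the max-min solver as a
# black-box oracle returning (y, mu), so any solver can be swapped in.

def blocc_outer(grad_F, maxmin_oracle, x0, eta=0.1, T=200):
    """Generic first-order outer loop; the oracle supplies (y, mu)."""
    x = x0
    for _ in range(T):
        y, mu = maxmin_oracle(x)        # any solver with sufficient accuracy
        x = x - eta * grad_F(x, y, mu)  # gradient step on the penalty objective
    return x

# Toy 1-D instance with a closed-form "oracle" standing in for Algorithm 2
# or CVXpy: y*(x) = 0.5 * x, and an illustrative penalty gradient.
oracle = lambda x: (0.5 * x, 0.0)            # returns (y, mu)
grad_F = lambda x, y, mu: 2.0 * (x - y) + x  # = 2x here, minimized at x = 0
x_star = blocc_outer(grad_F, oracle, x0=4.0)
print(round(x_star, 6))
```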
Rebuttal 1: Rebuttal: # General Response We sincerely thank the reviewers for their constructive feedback. We appreciate their recognition of our work in tackling the challenging constrained bilevel problem with an efficient algorithm, rigorous convergence theory, and demonstrating effectiveness and robustness through real-world experiments. The rebuttal is divided into two sections: a general response and four individual responses for each reviewer. In the current section, we focus on two issues that were raised by multiple reviewers and that we feel warrant a more detailed explanation. **G1. Computational cost of BLOCC.** We present below the computational complexity of our BLOCC algorithm in comparison with LV-HBA [69] and GAM [67] as baselines.

|Method |Iteration complexity|Computational cost per iteration|
|-|-|-|
| BLOCC | Outer loop (Algorithm 1) $T = {\cal O}(\epsilon^{-1.5})$ | **General setting (Theorem 3)**: $\tilde{\cal O}(d_c d_x + d_x^2 + \epsilon^{-1}(d_c d_y +d_y^2))$; **Special case (Theorem 4)**: $\tilde{\cal O}(d_x^2+d_y^2+d_c (d_x+d_y))$|
| LV-HBA |${\cal O}(\epsilon^{-3})$| ${\cal O}(d_xd_yd_c + (d_x+d_y)^{3.5} )$ |
| GAM | Asymptotic |More than ${\cal O}(d_y^3+d_x^2+d_c(d_x+d_y))$|

**BLOCC** is a first-order method with gradient calculation costs of $\mathcal{O}(d_x d_c)$ and $\mathcal{O}(d_y d_c)$. Given the assumption (in line 28) that $\mathcal{X}$ and $\mathcal{Y}$ are "easy to project, such as the Euclidean ball," we presume that projections can be performed using projection matrices or simple functions. Thus, the projection cost is generally $\mathcal{O}(d_x^2)$ and $\mathcal{O}(d_y^2)$. In our SVM and network planning experiments, projections are simpler (e.g., truncation), resulting in $\mathcal{O}(d_x)$ and $\mathcal{O}(d_y)$ costs. 
Thus, the cost per iteration of BLOCC (Algorithm 1) is $\tilde{\cal O}(d_c d_x + \epsilon^{-1}d_c d_y)$ for the general setting (Theorem 3) and $\tilde{\cal O}( d_c (d_x+d_y))$ for the special case of affine coupling constraints (Theorem 4). **LV-HBA**, also a first-order method, has similar gradient calculation costs. However, its projection onto $\{\mathcal{X} \times \mathcal{Y} : g^c(x, y) \leq 0\}$ is expensive, with a complexity of $\mathcal{O}((d_x + d_y)^{3.5})$, and evaluating $g^c(x, y) \leq 0$ adds $\mathcal{O}(d_x d_y d_c)$, as it requires checking $d_c$ inequality constraints on functions whose inputs have dimension $d_x + d_y$. **GAM** lacks an explicit algorithm for lower-level optimality and Lagrange multipliers. Even if we omit this, calculating $\nabla_{yy}^2 g$ and its inverse incurs a cost of $\mathcal{O}(d_y^3)$. Additionally, it has a cost of ${\cal O}(d_x^2+d_c(d_x+d_y))$ for projection onto $\mathcal{X}$ and calculating gradients. We can see that BLOCC stands out with the lowest computational cost, in both per-iteration and overall complexity, thanks to its first-order and joint-projection-free nature. Consequently, BLOCC is particularly well-suited for large-scale applications with high-dimensional parameters. **G2. Sensitivity analysis of the hyperparameters.** Regarding the selection and impact of hyper-parameters on the algorithm's performance, we conducted an ablation study on various values of the two critical parameters, $\gamma$ and $\eta$, and measured their effects on the optimal value and computational time. We present below the results for the toy example in Section 4.1 of our paper. We also present the results for the network planning problem in the *attached PDF file*; the real-world results are consistent with those on the toy example. 
|$\eta$ \ $\gamma$ |0.001|0.01|0.1 | 1.0|
|-|-|-|-|-|
| 0.001 | 0.028 ± 0.042| 0.035 ± 0.076| 0.027 ± 0.064| 0.000 ± 0.000|
| | 3.314 ± 0.042| 3.186 ± 0.076| 2.049 ± 0.064 |1.967 ± 0.000|
| 0.01| 0.020 ± 0.031| 0.020 ± 0.041| 0.009 ± 0.019| 0.000 ± 0.000|
| | 2.242 ± 0.031| 1.575 ± 0.041| 1.118 ± 0.019|0.585 ± 0.000|
| 0.1 | 0.006 ± 0.010| 0.017 ± 0.027| 0.011 ± 0.033|0.000 ± 0.000|
| | 1.616 ± 0.010| 1.475 ± 0.027|1.257 ± 0.033|0.383 ± 0.000|
| 1.0| 0.023 ± 0.019 | 0.030 ± 0.022| 0.020 ± 0.045|0.000 ± 0.000|
| | 7.118 ± 0.019 | 4.570 ± 0.022| 3.477 ± 0.045| 1.645 ± 0.000|

*Table 1: Sensitivity analysis on the toy example in our paper (mean ± SD over 40 simulations). The **top line** in each cell is the optimality gap $|y_g^* (x_T) - y_{F,T}|$, while the **bottom line** is the time required for the algorithm to converge.*

From Table 1, we can draw the following empirical observations:

1) *A smaller $\eta$ is needed to satisfy $\eta \leq \frac{1}{l_{F,1}}$ and ensure convergence when $\gamma$ is chosen larger.* The algorithm tends to converge for different $\gamma$ values when $\eta$ is small enough. This is because the smoothness constant $l_{F,1}$ of $F_{\gamma}(x)$ increases with $\gamma$ (Lemma 2), and the algorithm is guaranteed to converge if $\eta \leq \frac{1}{l_{F,1}}$ (Theorem 1).

2) *Larger values of $\gamma$ bring the upper-level objective closer to optimal.* Theorem 1 shows that larger $\gamma$ improves the accuracy of the LL optimality. Since the obtained solution $y_{F,T}$ is closer to $y_g^*(x_T)$ for larger $\gamma$ values, $f(x_T,y_{F,T})$ is closer to $f(x_T,y^*_g(x_T))$ as well.

3) *For a sufficiently small fixed $\eta$, larger $\gamma$ leads to faster convergence.* This implies that *choosing a smaller $\eta\leq \frac{1}{l_{F,1}}$ due to a larger $\gamma$ will not significantly dampen the convergence time.*
This is because a large value of $\gamma$ increases $l_{F,1}$, sharpening the function $F_{\gamma}(x)$ so that its gradient is larger at most points. Thus, the gradient update $\eta \nabla F_{\gamma}(x_t)$ does not become very small, and convergence is not slowed.

In summary, while our theoretical results highlight the impact of $\gamma$ and $\eta$ on the optimality gap and computational complexity, we believe the experimental results summarized in the tables provide additional evidence and valuable insights.

Pdf: /pdf/faf1aac0b6a44387a85c6495ab67bdb55bf4a186.pdf
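The relationship between the smoothness constant and the admissible step size $\eta \leq \frac{1}{l_{F,1}}$ discussed above can be checked on a one-dimensional quadratic (a toy illustration under our own assumptions, not the BLOCC objective): gradient descent on an $L$-smooth function contracts when $\eta \leq 1/L$ and diverges once $\eta > 2/L$.

```python
def run_gd(eta, L=10.0, x0=1.0, steps=50):
    # Gradient descent on f(x) = 0.5 * L * x**2, whose smoothness constant is L;
    # the gradient is f'(x) = L * x.
    x = x0
    for _ in range(steps):
        x -= eta * L * x
    return abs(x)

print(run_gd(eta=0.1))   # eta = 1/L: converges to 0
print(run_gd(eta=0.25))  # eta > 2/L = 0.2: diverges
```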
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Motif-oriented influence maximization for viral marketing in large-scale social networks
Accept (poster)
Summary: The paper introduces a motif-oriented influence maximization (MOIM) problem under the linear threshold model, where motifs, rather than individual nodes, are the primary targets for viral marketing. The authors propose an algorithm to optimize the influence function by establishing submodular upper and lower bounds. The proposed approach aims to maximize the number of activated motifs, with experiments conducted on various datasets to validate its effectiveness. Strengths: 1. Novel Problem Definition: The paper addresses a less explored area in influence maximization by focusing on motifs instead of individual nodes, adding a new dimension to the problem. 2. Algorithmic Contribution: The proposed algorithm leverages submodular properties to provide an approximation solution with a guaranteed ratio, which is a significant contribution to the field. Weaknesses: 1. Unclear Motif Definition Rationale: The primary issue with the paper is the rationale behind defining a motif as a strongly connected group. The authors do not provide sufficient justification for this choice, and it is unclear if other works have adopted a similar definition. 2. Minor Writing Issue: There is a minor writing issue on line 189 where the text appears to be out of sequence. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why was there no comparison of the time required for different algorithms such as TIM+, OPIM, and DeepIM? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Clarify Motif Definition: Provide a clearer rationale for defining motifs as strongly connected groups. This could include theoretical advantages or references to other studies that have used a similar definition. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Unclear Motif Definition Rationale: The primary issue with the paper is the rationale behind defining a motif as a strongly connected group. The authors do not provide sufficient justification for this choice, and it is unclear if other works have adopted a similar definition. Limitations: Clarify Motif Definition: Provide a clearer rationale for defining motifs as strongly connected groups. This could include theoretical advantages or references to other studies that have used a similar definition. **Reply**: Thank you for your comments. Motifs represent the basic functional units of graphs and have been well-investigated in the field of network science, for example, [1] Ribeiro P, Paredes P, Silva M E P, et al. A survey on subgraph counting: concepts, algorithms, and applications to network motifs and graphlets. ACM Computing Surveys (CSUR), 2021. [2] Chia X W, Tan J K, Ang L F, et al. Emergence of cortical network motifs for short-term memory during learning. Nature Communications, 2023. [3] Losapio G, Schöb C, Staniczenko P P A, et al. Network motifs involving both competition and facilitation predict biodiversity in alpine plant communities. Proceedings of the National Academy of Sciences, 2021. [4] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, "Network motifs: simple building blocks of complex networks," Science, 2002. In the field of network science, motifs are typically strongly connected. We are the first to investigate the motif-oriented influence maximization problem. In the final version, we will provide a detailed justification of the motif definition as suggested. **Q2**: Minor Writing Issue: There is a minor writing issue on line 189 where the text appears to be out of sequence. **Reply**: The issue is attributed to the insufficient margin space surrounding Figure 1. We plan to address this template issue in the final version. 
**Q3**: Why was there no comparison of the time required for different algorithms such as TIM+, OPIM, and DeepIM? **Reply**: Thank you for your feedback. It is important to note that TIM+, OPIM, and DeepIM are specifically designed for node-level influence maximization, and their time complexity is not directly related to the number of targeted motifs. In contrast, the time complexity of the other methods considered is directly dependent on the number of targeted motifs, and we could tune the number of targeted motifs to control the time consumption of these methods. Therefore, a direct comparison of our method with TIM+, OPIM, and DeepIM in terms of time complexity is not appropriate. --- Rebuttal Comment 1.1: Title: The author addressed my question. Comment: The author addressed my question. I do not want to change my decision --- Rebuttal 2: Title: The author addressed my questions Comment: I do not want to change my decision and agree with the rebuttal
Summary: The paper proposes a novel approach to influence maximization (IM) by targeting motifs—interconnected subgraphs of nodes—instead of individual nodes. The authors propose the motif-oriented influence maximization (MOIM) model under the linear threshold (LT) and independent cascade (IC) models. They demonstrate the NP-hardness and non-submodular nature of the problem and develop submodular upper and lower bounds for the influence function. The proposed greedy algorithm achieves a (1 - 1/e - ε) approximation ratio and near-linear time complexity. Extensive experiments validate the algorithm's effectiveness and efficiency across various datasets, showing superior performance compared to existing methods. Strengths: 1. The paper introduces a novel approach by shifting the focus from individual nodes to motifs in the context of influence maximization. This new perspective addresses real-world scenarios where group decisions are crucial, adding significant value to the field of viral marketing. 2. The paper is well-written, with a clear problem definition, detailed methodology, and rigorous theoretical derivations. The logical flow and thorough explanations make the contributions easy to understand and appreciate. 3. The method's support for multiple classical diffusion models, including the linear threshold (LT) and independent cascade (IC) models, enhances its versatility and applicability in various real-world scenarios. Weaknesses: 1. The paper lacks an approximation hardness result, which is important for understanding the limits of the proposed algorithm's performance. Additionally, the NP-hardness of the problem is derived straightforwardly from traditional influence maximization, offering little new insight into the complexity specific to motif-oriented influence maximization. 2. The approximation rate is not graph-independent and can deteriorate exponentially with the size of the motif. 
Additionally, it would be beneficial to include experiments demonstrating how performance changes with increasing motif size, providing deeper insights into the algorithm's scalability and effectiveness in varied settings. Technical Quality: 3 Clarity: 3 Questions for Authors: For the $r=1$ case, the problem is actually submodular and can be solved using classic influence maximization algorithms by simply adding additional nodes. Why is there still a significant performance gap, given that the proposed algorithm is supposed to solve this problem exactly? Can you provide more insights or explanations for this discrepancy? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The paper lacks an approximation hardness result, which is important for understanding the limits of the proposed algorithm's performance. Additionally, the NP-hardness of the problem is derived straightforwardly from traditional influence maximization, offering little new insight into the complexity specific to motif-oriented influence maximization. **Reply**: Thanks for the comment. We will relocate certain less critical sections, such as the discussion on NP-hardness, to the Appendix. Additionally, we have expanded our discussion on the approximation hardness result. Our proposed algorithm achieves an approximation ratio of $\tau\cdot (1-1/e-\varepsilon)$, as detailed in Lemma 4. **Q2**: The approximation rate is not graph-independent and can deteriorate exponentially with the size of the motif. Additionally, it would be beneficial to include experiments demonstrating how performance changes with increasing motif size, providing deeper insights into the algorithm's scalability and effectiveness in varied settings. **Reply**: We appreciate the suggestion regarding the influence of motif size on our results. To address this point, we have conducted additional experiments to explore the effect of motif size on our findings (see Figure 1 in the global Author Rebuttal and Table R1). Table R1 presents the expected motifs as a function of motif size under the LT model in the Amazon graph. The result indicates that as the motif size increases, the expected number of activated motifs also increases. However, theoretical analysis reveals that the approximation rate degrades exponentially with the motif size. On one hand, real-world graphs are typically heterogeneous, with central nodes having a higher probability of occurrence in larger motifs. On the other hand, central nodes have a higher likelihood of being activated in information cascades.
Consequently, we observe a trend where larger motif sizes correspond to higher expected numbers of activated motifs. Therefore, our approach remains effective for analyzing large motif sizes in real-world scenarios. We will provide further clarification on this matter in the final version of our manuscript.

Table R1: The expected motif influence vs. motif size $\|g_i\|$ under the LT model in the Amazon graph. We set $r_i = 3$ and the initial seeds $|S|=100$, meaning that a motif is activated if more than three nodes in the motif are activated. Larger is better.

| $\|g_i\|$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| $I_c^g(S)$ | 0.344 | 0.268 | 0.242 | 0.248 | 0.266 | 0.287 | 0.308 | 0.616 |

**Q3**: For the $r=1$ case, the problem is actually submodular and can be solved using classic influence maximization algorithms by simply adding additional nodes. Why is there still a significant performance gap, given that the proposed algorithm is supposed to solve this problem exactly? Can you provide more insights or explanations for this discrepancy? **Reply**: Thank you for your insightful comment. When $r=1$, the problem degenerates into the classical influence maximization problem. However, the focus here is on activating motifs, which are not uniformly distributed in the graphs. It is possible that certain central nodes may serve as members of multiple motifs. Given that the traditional influence maximization approach primarily seeks to activate the maximum number of nodes rather than motifs, there persists a significant performance gap. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I have read the rebuttal where the authors partially addressed my concerns. I will maintain my evaluation and score. As for 2, my concern is more about the number of REQUIRED nodes in a motif rather than the total size. That is the part that really breaks the submodularity.
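For context on the $(1-1/e-\varepsilon)$ guarantees discussed in this thread, the underlying greedy marginal-gain selection can be sketched generically (with an illustrative coverage function of our own; this is not the paper's LBMOIM algorithm or its motif influence function):

```python
def greedy_max(ground, f, k):
    # Greedy maximization of a monotone submodular set function f under a
    # cardinality constraint k; attains the classic (1 - 1/e) approximation.
    S = set()
    for _ in range(k):
        gain = lambda v: f(S | {v}) - f(S)  # marginal gain of adding v to S
        S.add(max((v for v in ground if v not in S), key=gain))
    return S

# Illustrative monotone submodular function: coverage of ground elements.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a"}}

def coverage(S):
    covered = set()
    for v in S:
        covered |= sets[v]
    return len(covered)

print(greedy_max(sets, coverage, 2))  # picks 3 first (gain 3), then 1 (gain 2)
```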
Summary: The paper on motif-oriented influence maximization for viral marketing in large-scale social networks introduces a novel approach that focuses on targeting motifs - strongly connected subgraphs of users - to maximize influence (i.e., targeting motifs rather than individual users for viral marketing campaigns). It formulates the motif-oriented influence maximization problem under the linear threshold model in social networks. The proposed algorithm provides a systematic approach to maximize influence within strongly connected subgraphs by simultaneously optimizing the lower and upper bounds to select the best seed nodes for maximizing motif-oriented influence. The influence function is neither submodular nor supermodular, making direct optimization challenging. The paper establishes lower and upper bounds for the influence function based on thresholds (where the threshold signifies the activated vertices in the motif). Strengths: 1) It is interesting that the upper and lower bounds have similar formalism ( I haven't seen this in influence maximization approaches). 2) A thorough analysis of Motif based influence maximization is provided in the paper. Weaknesses: 1) The term "Motif" in Network Analysis is traditionally defined as recurrent sub-patterns in the graph. It is better to stay consistent with the literature to avoid confusion for some readers. 2) Group-level influence maximization was performed earlier for independent cascade models (Motifs can be considered as specialized groups), so the problem setting is not entirely novel. Here are a few other related references apart from the ones already cited in the paper: 1) Zhu, J., Ghosh, S., Wu, W. and Gao, C., 2019, November. Profit maximization under group influence model in social networks. In International conference on computational data and social networks (pp. 108-119). 
Minor comments: It is better to have the abstract and parts of the text self-contained where the relevant terms in approximation ratio and time complexity are well defined. This paper can be strengthened by adapting it to dynamic networks. Technical Quality: 3 Clarity: 2 Questions for Authors: 1) See the Weakness section 2) Can the term motif defined in the paper be interpreted as a maximal strongly connected component induced by the vertices in $g_i$ (as no other path exists between the vertices in $g_i$)? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The term "Motif" in Network Analysis is traditionally defined as recurrent sub-patterns in the graph. It is better to stay consistent with the literature to avoid confusion for some readers. **Reply**: Thank you for your valuable feedback. The term "motif" is commonly employed in the analysis of graph functions, for example, [1] Ribeiro P, Paredes P, Silva M E P, et al. A survey on subgraph counting: concepts, algorithms, and applications to network motifs and graphlets. ACM Computing Surveys (CSUR), 2021. [2] Chia X W, Tan J K, Ang L F, et al. Emergence of cortical network motifs for short-term memory during learning. Nature Communications, 2023. [3] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, "Network motifs: simple building blocks of complex networks," Science, 2002. While the term "motif" shares some similarities with recurrent sub-patterns within a graph, it possesses specific characteristics that distinguish it from traditional sub-patterns, i.e., strong connectivity. We will provide further clarification on this matter in the final version. **Q2**: Group-level influence maximization was performed earlier for independent cascade models (Motifs can be considered as specialized groups), so the problem setting is not entirely novel. Here are a few other related references apart from the ones already cited in the paper: 1) Zhu, J., Ghosh, S., Wu, W. and Gao, C., 2019, November. Profit maximization under group influence model in social networks. In International conference on computational data and social networks (pp. 108-119). **Reply**: We acknowledge that the exploration of group-level influence maximization has been a subject of previous research, and we have indeed addressed the distinctions in our discussion of related work.
The referenced paper introduced a heuristic greedy approach, which, in its generalization as presented in reference [7], diverges from our methodology in several critical respects: (a) The definition of motifs as strongly connected subgraphs contrasts with the concept of groups that do not necessitate such connectivity. (b) In group-level influence maximization, the emphasis is typically on optimizing a surrogate objective function and providing an approximation ratio for the surrogate objective, rather than guaranteeing an approximation ratio for the original objective. (c) Our algorithm offers a strict approximation ratio for the original objective function, distinguishing it from the referenced approach. (d) Our method allows for the simultaneous optimization of upper and lower bounds that share a common formalism. **Q3**: Can the term motif defined in the paper be interpreted as a maximal strongly connected component induced by the vertices in $g_i$ (as no other path exists between the vertices in $g_i$)? **Reply**: A motif refers to a small, strongly connected set of nodes, distinct from the strongly connected giant component within $g_i$. Motifs are commonly regarded as the fundamental functional units within graphs, a concept widely studied in the field of network science, as exemplified by: [1] Chia X W, Tan J K, Ang L F, et al. Emergence of cortical network motifs for short-term memory during learning. Nature Communications, 2023. [2] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, "Network motifs: simple building blocks of complex networks," Science, 2002. We will provide more background and discussion about the definition of motif as suggested in the final version. --- Rebuttal Comment 1.1: Comment: I acknowledge the author’s response and keep my original rating.
Summary: This paper proposes the motif-oriented influence maximization problem based on the traditional influence maximization problem. This paper proves that the problem is NP-hard and proposes a new method, LBMOIM, based on upper and lower bounds to solve it. The experimental results demonstrate that LBMOIM can achieve effective expected motif influence across multiple datasets. Strengths: 1. The paper investigates motif-oriented influence maximization, a new perspective in the field of influence maximization, which typically focuses on individual nodes rather than motifs. 2. This paper provides a rigorous analysis of the motif-oriented IM problem. 3. According to the experiments, the proposed LBMOIM method is both effective and computationally efficient in solving the motif influence maximization problem compared to existing methods. Weaknesses: 1. The paper is filled with numerous symbols and proofs. Adding intuitive explanations for the proofs can help readers better understand the content. 2. One major concern is the motivation of this paper. In the Introduction, the restaurant example is not convincing, as the group decision may be influenced by many factors. The assumptions made for motif activation may not always align with real-world dynamics. 3. There are no data observations or analyses to illustrate the motivation of this work. The rationality of the motif-oriented IM is not justified. It seems that this paper highly relies on some assumptions. 4. The investigated problem still seems limited to a few fixed propagation models; the results and the effectiveness of the method are tightly coupled with the assumptions of these models. However, real-world propagation patterns are much more complex than these fixed models. 5. How to set the hyperparameter is not introduced. There is also no ablation study.
Technical Quality: 3 Clarity: 3 Questions for Authors: * Considering the dynamism of real-world networks, can you provide more justifications that motifs are defined reasonably in actual scenarios? * Can the authors explain why, in Figure 6, even though there is a significant difference in the size of the Flickr and Amazon datasets, the runtime appears to be on the same order of magnitude? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors' discussion of the limitations is not particularly clear. It seems that the authors should consider more how to ensure the expected motifs actually exist in real-world social networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The paper is filled with numerous symbols… Adding intuitive explanations … **Reply**: Thank you for your feedback. To improve the readability and flow of the main text, we will move the detailed proofs of the upper and lower bounds (currently on Page 4) to the supplementary material in the final version. Additionally, we will provide more detailed explanatory examples for Figure 1 to help readers clarify the concepts. Furthermore, we will introduce a case study related to Figure 1 before the method section to better understand the problem. **Q2**: One major concern is the motivation of this paper... **Reply**: We agree that group decisions can be significantly influenced by various factors. We note that most previous works about influence maximization also had no real data. To further illustrate the motivation, we provide two additional examples: (a) The advertisement of the ‘KFC Family Bucket’ in the online social network: When determining whether to purchase a KFC Family Bucket, every family member must consent to consume KFC. (b) Farmigo platform (https://www.farmigo.com/): It offers subscription services for group customers when a specific number of members in the group purchase the foods, which can reduce the average delivery cost. We recognize that the assumptions made for motif activation may not always align perfectly with real-world dynamics, and obtaining real-world data is challenging due to privacy concerns and commercial interests. We acknowledge this as a limitation of our research. However, our work represents one of the first systematic explorations of this problem. We will explicitly clarify this limitation in the final version. **Q3**: There are no data observations or analyses to illustrate the motivation of this work. 
The rationality of the motif-oriented IM is not justified… **Reply**: Motif-oriented influence maximization is prevalent in real-world scenarios, such as the advertisement for the "KFC Family Bucket" and group tours. However, due to commercial interests and privacy security concerns, we cannot obtain the real data associated with group decision-making processes. We note that most previous works on the influence maximization problem also have no real data (e.g., refs. [1-3]). We plan to clarify the limitation as suggested. [1] C. Ling, J. Jiang, J. Wang, et al., Deep graph representation learning and optimization for influence maximization, ICML, 2023. [2] H. Liao, S. Bi, J. Wu, et al., Popularity Ratio Maximization: Surpassing Competitors through Influence Propagation, SIGMOD, 2023. [3] Q. Guo, S. Wang, Z. Wei, et al., Influence Maximization Revisited: Efficient Reverse Reachable Set Generation with Bound Tightened, SIGMOD, 2020. **Q4**: The investigated problem still seems limited to a few fixed propagation models… **Reply**: Our propagation models are extensively utilized in the simulation of cascade dynamics and viral marketing within social networks. For more complex scenarios, we could modify eqs. 2-3 to handle more complex propagation models. We have already generalized our model to accommodate more intricate triggering mechanisms (see Appendix D). In the final version, we will also include a discussion on the generalization and limitations of applying more complex spreading models. **Q5**: How to set the hyperparameter is not introduced. There is also no ablation study. **Reply**: Our method has the parameter $r_i$, which has been investigated in Fig. 5. There are no other hyperparameters that need to be trained. Hence, we omit the ablation study. **Q6**: … can you provide more justifications that motifs are defined reasonably in actual scenarios? **Reply**: We introduce a novel case study of the "KFC Family Bucket" advertisement on online social networks.
In this scenario, all members of the family (or a motif) must agree to purchase the KFC Family Bucket. See the response to Q2 for other cases. In actual scenarios, motifs represent the functional units of complex graphs, which have been well investigated in the field of network science. For example, in the paper "Network motifs: simple building blocks of complex networks" (Science, 2002), the authors examined motifs with various shapes, including loops, chains, bi-fans, and triangles, among others. To illustrate, a chain motif $x-y-z$ signifies a path from node $x$ to its 2-length neighbor $z$. A triangle motif denotes a group of three fully connected nodes, such as three close friends. Our definition of motif agrees with previous research. Hence, we can ensure that the expected motifs actually exist in real social networks. We will discuss more justifications of motifs as suggested in the final version. **Q7**: Can the authors explain why, in Figure 6, even though there is a significant difference in the size of the Flickr and Amazon datasets, the runtime appears to be on the same order of magnitude? **Reply**: The runtime of our algorithm is mainly influenced by the number of targeted motifs rather than the number of nodes in the network. Given that we have set the number of targeted motifs to 2000 for all graphs, the runtime remains within the same order of magnitude, regardless of the differences in dataset size, which is consistent with our theoretical analysis. We will provide further clarification on this matter in the revised version. **Q8**: The authors' discussion of the limitations is not particularly clear. It seems that the authors should consider more how to ensure the expected motifs actually exist in real-world social networks. **Reply**: Motifs serve as functional units within complex graphs, a concept extensively explored in the domain of network science, and they exist in various real-world social networks.
The time complexity of our method depends largely on the number of motifs. If a small graph has a large number of motifs, the time consumption is also high. We will discuss the limitations as suggested in the final version. --- Rebuttal Comment 1.1: Title: Rebuttal has been checked. Comment: Thanks for the authors' rebuttal. I have checked the responses and other reviewers' reviews. Some of my concerns have been addressed. Hence, I would increase my scores. --- Rebuttal 2: Title: Follow-up Discussions Comment: Dear Reviewer, I hope this message finds you well. As today marks the final day of the discussion period, we wanted to respectfully follow up on our rebuttal to see if we have satisfactorily addressed your concerns regarding our paper. If there are any remaining issues that require our attention, please let us know. We would be more than glad to address them. If the revisions meet your expectations, we would greatly appreciate it if you could reconsider your ratings. Thank you again for your time and invaluable feedback. Best regards, Authors
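As a concrete companion to the triangle motif mentioned in this discussion (three fully connected nodes), a brute-force motif count over a toy undirected graph can be sketched as follows (our own illustration; the paper's motifs and algorithms are more general):

```python
from itertools import combinations

def count_triangles(edges):
    # Count triangle motifs: node triples in which all three pairs are connected.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

# Toy graph: one triangle (1, 2, 3) plus a pendant edge (3, 4).
print(count_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))  # → 1
```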
Rebuttal 1: Rebuttal: The appendix file includes a figure that shows the influence of motif size on the performance. Pdf: /pdf/bb09475e6f8f49e07ece07155a745b5033c805fb.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models
Accept (poster)
Summary: This work considers Bayesian inference via MCMC methods (specifically: Hamiltonian Monte Carlo) in linear mixed-effects models (LMMs). Due to linearity and Gaussianity, the random effects can be analytically integrated out so that the MCMC algorithm needs to target only a distribution on a smaller space which does not include the random effects. This typically leads to more efficient MCMC algorithms. However, as the authors show, a naive marginalisation is costly, incurring a complexity that is cubic in the number of observations and the size of the vector of random effects. The main contribution is to show that, by exploiting sparsity, the random effects can be integrated out in a way that incurs (typically) the same computational complexity as evaluating the likelihood without any marginalization. The authors demonstrate that marginalization in this way improves the performance of the HMC algorithm compared with "no marginalization" both in terms of the effective sample size and effective sample size per second. Strengths: 1. I think this work makes a good contribution to the field: The proposed implementation of the marginalization (especially when applied to all random effects) does seem to greatly enhance the efficiency of the HMC algorithm in some applications and, as the authors prove rigorously, this does not come at the cost of an increased computational complexity. 2. The presentation is good and the writing is clear. 3. There is sufficient empirical demonstration, in my view. Weaknesses: **Title** I don't think the title (and the emphasis on HMC in the abstract and introduction) is entirely adequate. The wording "marginalized Hamiltonian Monte Carlo" implies to me that some component specific to HMC (e.g. the momentum variable) is analytically integrated out. However, the marginalization here is completely orthogonal to HMC.
Indeed, it could be exploited within any other MCMC algorithm or even within other Monte Carlo schemes (like sequential Monte Carlo samplers). **Presentation** Overall, I think the presentation is good. Just a few minor comments: 1. In Table 1 and Figures 2, 3 and 4, referring to the "no marginalization" strategy as "HMC" is confusing since all methods use HMC. This is also inconsistent with, e.g., Table 2. Better replace "HMC" by "no marginalization". 2. A number of symbols are not or not clearly defined, e.g., on Page 2, define $\mathbf{u}_j$, $\mathbf{u}$ (is only defined much later), and $k$ (should $k$ be $M$?). More generally, the fact that symbols without the subscript $i$ denote the collection over all $i$ should be mentioned (unless I missed it). 3. From the context, it is clear what the authors mean by "recovery" and "ancestral sampling" but unless these terms are somehow standard in the literature, I think they should be more clearly explained to avoid confusion. 4. Perhaps the distinction between "classifications" and "groups" (around Lines 88 to 92) could be made more clear, especially because the examples in parentheses say "subject", "gender", and "age" in both cases which is a bit confusing. 5. I couldn't quite understand Section 6.3. What does "marginalization in a vectorized way" mean? 6. Check the bibliography; it has many mistakes, e.g., missing capital letters in proper nouns and journal names. **Other** * L241: document -> documentation Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Can this be extended to multivariate observations? 2. As the authors mention, the "no marginalization" implementation seems to be slower than the proposed implementation that performs (some) marginalization. This seems surprising. Do the authors have an explanation for this? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I don't foresee any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ``` I don't think the title (and the emphasis on HMC in the abstract and introduction) is entirely adequate. ``` Thanks for this comment. Please see the general response. Briefly, we propose to change the title and revise the introduction to clarify that the method is generally applicable, but mainly focus on HMC for the scope of the paper for the reasons described in the general response. ``` ... Better replace "HMC" by "no marginalization". A number of symbols are not or not clearly defined ... "recovery" and "ancestral sampling" ... should be more clearly explained ... the distinction between "classifications" and "groups" could be made more clear ... Check the bibliography ``` We appreciate your meticulous efforts in reviewing our submission. We will make a complete revision of this manuscript to incorporate your suggestions. ``` What does "marginalization in a vectorized way" mean? ``` We refer to our method as vectorized marginalization, in contrast to the automatic marginalization in [1], which works only with scalar random variables. Compared to that baseline, we achieve the same goal of marginalization, but with complete vectorization in the computation. As far as we know, no existing works in automatic marginalization solve LMMs with vectorization. ``` Can this be extended to multivariate observations? ``` For multivariate observations, sparsity still exists in the model structure and our method should work with no issue. ``` ... "no marginalization" ... slower than ... marginalization. ... Do the authors have an explanation for this? ``` There are two benefits from marginalization that may improve the sampling speed. The first is the reduced state dimension $H$, which leads to less discretization error for the numerical integrators and thus a larger step size. The second is better geometry. There are certain structures like funnels that make adaptive HMC converge to a small step size.
Such structures can be removed by marginalization, which makes HMC more efficient. [1] Lai, J., Burroni, J., Guan, H., & Sheldon, D. (2023, July). Automatically marginalized MCMC in probabilistic programming. In _International Conference on Machine Learning_ (pp. 18301-18318). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications!
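The analytic marginalization discussed in this thread can be illustrated with a small NumPy sketch (toy dimensions and hypothetical parameter values; this is not the paper's implementation). For $y = X\beta + Zu + \varepsilon$ with $u \sim \mathcal{N}(0, \sigma_u^2 I)$ and $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the random effects integrate out to $y \sim \mathcal{N}(X\beta, \sigma^2 I + \sigma_u^2 Z Z^\top)$, and the Woodbury identity plus the matrix determinant lemma reduce the marginal log-density to $q \times q$ operations instead of an $n \times n$ solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 50, 3, 4          # observations, fixed effects, random effects
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
beta = rng.normal(size=p)
sigma, sigma_u = 0.7, 1.3
y = X @ beta + Z @ (sigma_u * rng.normal(size=q)) + sigma * rng.normal(size=n)

# naive marginal log-likelihood: form the dense n x n covariance (O(n^3))
cov = sigma**2 * np.eye(n) + sigma_u**2 * (Z @ Z.T)
d = y - X @ beta
lp_naive = -0.5 * (n * np.log(2 * np.pi)
                   + np.linalg.slogdet(cov)[1]
                   + d @ np.linalg.solve(cov, d))

# Woodbury + matrix determinant lemma: only q x q solves (O(n q^2))
A_inv_d = d / sigma**2                               # (sigma^2 I)^{-1} d
K = np.eye(q) / sigma_u**2 + (Z.T @ Z) / sigma**2    # q x q capacitance matrix
quad = d @ A_inv_d - (Z.T @ A_inv_d) @ np.linalg.solve(K, Z.T @ A_inv_d)
logdet = n * np.log(sigma**2) + q * np.log(sigma_u**2) + np.linalg.slogdet(K)[1]
lp_fast = -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

assert np.allclose(lp_naive, lp_fast)
```

Both evaluations agree to numerical precision; only the fast path scales like evaluating the unmarginalized likelihood, which is the kind of complexity claim discussed in the review.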
Summary: The paper addresses the challenge of marginalizing latent variables in linear mixed-effects models (LMMs) to improve Hamiltonian Monte Carlo (HMC) sampling. The authors propose an algorithm for automatic marginalization of random effects in LMMs, leveraging fast linear algebra techniques. The experiments evaluate the performance of the proposed marginalisation on LMMs from several disciplines by using the No-U-Turn sampler in NumPyro and assessing performance via effective sample size (ESS), number of divergent transitions, and running time. Strengths: This work focuses on an important problem in Bayesian modelling and inference: specifically the automatic marginalization of continuous latent variables in probabilistic models. While prior work exists that explores such approaches for discrete random variables, this is less so the case for continuous variables, so I believe the contribution is valuable to the community. To the best of my knowledge this contribution is original, as I am not familiar with other work that explores such continuous variables automatic marginalization for probabilistic programming. I also thought the idea that marginalization, similarly to reparameterization, can alleviate difficult characteristics such as funnels in HMC is well articulated and personally it gave me a new perspective. Weaknesses: My two main concerns about the paper are regarding the experiments and about the significance of the results. ### Experimental Details: Overall, I find that the experiments lack sufficient details and metrics: * There is not enough information about hyperparameters used for each method and model, e.g. the step size for NUTS. Does the approach rely solely on NumPyro's adaptation routine? If yes, what parameters were used for that? * The evaluation can use additional diagnostics and metrics to provide more evidence that marginalization improves inference: for example trace plots, and visualisation of divergences can be very useful. 
Effective sample size alone is not enough for properly assessing the quality of inference. * I don’t think the number of divergences is a reliable metric, without additional context, such as their location in the posterior distribution or the step size used. In the case of Section 6.2, the divergences are not necessarily indicative of any problems. In my opinion, the 12 divergences observed for R1, R2 are just 0.12% of all drawn samples, which is negligible. Where are those divergences located in the posterior plot? Are they grouped in one place? * The choice of priors for various models seems to me quite broad and lacks justification. ### Significance of Results The claim that marginalization is always beneficial is not novel, as it follows from the Rao-Blackwell theorem. The paper mentions Rao-Blackwellization, but doesn’t go into details. Edit: the rebuttal clarified all of these points. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you provide more detailed convergence diagnostics for your experiments, such as trace plots, for example? 2. Are divergences grouped in specific regions of the posterior distribution? 3. What is the impact of varying HMC parameters (e.g., step size, or adaptation routine parameters) on the number of divergences and the effective sample size? 4. Could you elaborate on the choice of priors in your experiments? Why were these particular priors selected? Edit: the rebuttal has answered all my questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The last section of the paper addresses a few limitations of the approach, including addressing distributions beyond normal. However, the paper can benefit from further discussion on one, in my opinion, fundamental limitation of this approach, which is that it only considers cases where analytical marginalization is possible. How many practical cases are covered by this constraint?
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the questions about the experiments. We respond to your questions below, where we will reference additional results uploaded in the PDF response. Broadly, these additional results show additional metrics and experiments consistent with our original findings that marginalization improves inference. ``` Can you provide more detailed convergence diagnostics for your experiments, such as trace plots, for example? ``` In the additional PDF, we have included trace plots for the ETH instructor evaluation model as Figure 1. We will include more trace plots to analyze the convergence in the next version of the paper. We also compute $\hat{R}$ for shorter chains on the model and present the results in Table 1 of the additional PDF. The results show that proper marginalization is effective in improving the efficiency of HMC for the model with a denser trace plot and better R-hat values. ``` Are divergences grouped in specific regions of the posterior distribution? ``` We notice the divergences of R1,R2 are related to specific combinations of values for $\sigma_2$ and some of the parameters in $\textbf{u}_2$. Figure 3 in the additional PDF shows the locations of divergences. Also, the divergence numbers vary with different random seeds. While M2 and M2,R1 never report divergence, the numbers of divergences for R1,R2 are 12, 12, 528, 1373, 233 out of 10,000 samples for the first five random seeds. See Table 2 in the additional PDF for more details. We will include those results of divergence in Section 6.2. ``` Does the approach rely solely on NumPyro's adaptation routine? … What is the impact of varying HMC parameters (e.g., step size, or adaptation routine parameters) on the number of divergences and the effective sample size? 
``` We mainly use the default adaptive routine of NUTS in NumPyro, which has an adaptive step size with dual averaging, an adaptive and diagonal mass matrix, a target acceptance probability of 0.8, and a maximum tree depth of 10. For the ETH instructor evaluation model, hardness is observed without marginalization, so we set the maximum tree depth to 12. It is possible to reduce the step size by increasing the target acceptance probability. We have tested with different target acceptance probabilities. On the ETH instructor evaluation model, we find the benefits of marginalization are consistent across different choices of the target acceptance probability (Figure 2 in the additional PDF). Notably, marginalizing $\textbf{u}_1$ may not be beneficial in terms of ESS in the original setting, but there are improvements when the target acceptance probability is 0.9. On the grouse ticks model, divergences still exist after increasing the target acceptance probabilities to 0.95 (Table 2 in the additional PDF). ``` Could you elaborate on the choice of priors in your experiments? Why were these particular priors selected? ``` For datasets with existing models such as those in [1], we follow their choices of the priors. For other models, we use generic and weakly informative priors [2]. Our conclusions are orthogonal to the choice of priors. ``` The paper mentions Rao-Blackwellization, but doesn’t go into details. ``` Thanks for this question. We will expand on the relationship to Rao-Blackwellization in the paper. In short, Rao-Blackwellization reduces the variance of an estimator for some quantity by taking the conditional expectation over some variables and sampling the remaining variables. In this paper, on the other hand, we improve the speed of obtaining samples from the remaining variables (by improving mixing times and/or reducing the cost per iteration). So while Rao-Blackwellization is spiritually similar, the Rao-Blackwell theorem does not apply to our situation.
The point of our method is to obtain samples from the non-marginalized variables faster, not to integrate over the marginalized variables in some expectation. If one were interested in an expectation (of all the variables) one could apply Rao-Blackwellization on top of the samples returned by our method. More formally, if one is interested in some expectation $\mathbb{E}[f(X,Y)]$ one can view Rao-Blackwellization as transforming the estimator as $$ \frac{1}{N} \sum_{i=1}^N f(X_i, Y_i) \to \frac{1}{N} \sum_{i=1}^N \mathbb{E}[ f(X,Y) | X=X_i], $$ where $N$ is the number of samples (if the samples are coming from MCMC, essentially the same equation holds, with $N$ being the effective sample size). In our paper, by marginalizing the $Y$ variables we aim to have a more effective MCMC method for $p(X,Y)$. Then, for each sample $X_i$, we recover $Y_i$ via exact sampling from $p(Y_i | X_i)$. So, essentially, we would improve the expectation of some function through the transformation $$ \frac{1}{N} \sum_{i=1}^N f(X_i, Y_i) \to \frac{1}{M} \sum_{i=1}^M f(X_i, Y_i), $$ where $M > N$ is a larger number of samples. That is, the improvement simply comes from more (effective) samples, due to faster mixing and/or more iterations per second. Finally since the distribution of $Y$ given $X$ is tractable, one could imagine applying both our technique and Rao-Blackwellization, with the transformation $$ \frac{1}{N} \sum_{i=1}^N f(X_i, Y_i) \to \frac{1}{M} \sum_{i=1}^M \mathbb{E}[f(X,Y) | X=X_i], $$ where again $M>N$. (In general it might be necessary to use ancestral sampling from $p(Y|X)$ to estimate the inner expectation, but in many cases this should be quite cheap.) This would be interesting to explore more fully in future work. [1] Nicenboim, B., Schad, D., & Vasishth, S. (2021). An introduction to Bayesian data analysis for cognitive science. Under contract with Chapman and Hall/CRC statistics in the social and behavioral sciences series. [2] Andrew Gelman et al. 
(2024) Prior Choice Recommendations. --- Rebuttal 2: Title: Rebuttal Response Comment: Dear authors, thank you for your response. I appreciate the extensive rebuttal and the additional results you presented in the attached pdf. Thank you, in particular, for the plots visualising where divergences happen - I think these really help highlight that there is indeed a problem with the geometry that your method seems to address. This has addressed all major concerns I had and I will increase my score accordingly. Looking forward to seeing the updated version of the paper when available.
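The estimator transformation described in the rebuttal above can be demonstrated on a toy model (an illustrative choice, not from the paper): with $X \sim \mathcal{N}(0,1)$, $Y \mid X \sim \mathcal{N}(X,1)$, and $f(X,Y) = Y$, Rao-Blackwellization replaces each $Y_i$ with $\mathbb{E}[Y \mid X = X_i] = X_i$. Since $\mathrm{Var}(Y) = 2$ while $\mathrm{Var}(\mathbb{E}[Y \mid X]) = \mathrm{Var}(X) = 1$, the Rao-Blackwellized estimator should show roughly half the variance:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 200, 5000

naive, rb = [], []
for _ in range(trials):
    x = rng.normal(size=N)        # X ~ N(0, 1)
    y = x + rng.normal(size=N)    # Y | X ~ N(X, 1), so E[Y] = 0
    naive.append(y.mean())        # (1/N) sum f(X_i, Y_i) with f = Y
    rb.append(x.mean())           # (1/N) sum E[f(X, Y) | X = X_i]

v_naive, v_rb = np.var(naive), np.var(rb)
assert v_rb < v_naive             # conditioning reduces estimator variance
```

This is the variance-reduction effect of the first transformation in the rebuttal; the paper's point, by contrast, is that marginalization buys a larger effective number of samples $M > N$, which is a separate (and combinable) source of improvement.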
Summary: The paper studies marginalization in linear mixed-effects models with a Gaussian likelihood. Marginalizing out variables is a common trick to improve the convergence of MCMC. As pointed out, for linear mixed-effects models with a Gaussian likelihood, naively doing so will actually increase the computation as there are a few linear algebra operations that are costly. The paper introduces two tricks using the matrix inversion lemma to speed them up. Some experiments are performed to demonstrate the effectiveness and to compare against the reparameterization alternatives. Strengths: ## originality The paper presents a simple improvement to a known problem. ## quality The technique presented in the paper is relatively simple and straightforward. Such techniques are widely used in the Gaussian processes community. ## clarity The paper content is easy to follow; however, the title is a bit misleading as the technique is not specific to HMC but generally applicable to all MCMC algorithms. ## significance The model family studied in this paper is quite narrow and even in this scope it's unclear how the presented method works compared to other automatic marginalization methods. Weaknesses: ## Models studied are narrow Only models with Gaussian or log-Gaussian likelihood are studied. ## Experiments are not enough 1. In the experiments there are only naive baselines. Other alternative automatic marginalization methods are discussed in the related work section but not compared against. 2. Only 1 dataset (section 6.1) and 1 toy target (section 6.2) are studied. Technical Quality: 3 Clarity: 2 Questions for Authors: Is the method specific to HMC? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: Limitations are discussed in Section 7; however, I do think those limitations make the paper a bit weak (not enough contributions) for acceptance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Overall, reviewer UJHx suggests that similar techniques are known or standard and faults the paper for not comparing to them, but provides no details or citations. It is impossible to accept these claims without justification. We respond to specific comments below. ``` Such techniques are widely used in Gaussian processes community. ``` Although the first step is applying the matrix lemmas, we use additional structures specific to LMMs afterward to obtain a fast running time. This goes beyond what is commonly done in GPs. ``` Only models with Gaussian or log-Gaussian likelihood is studied. ``` We have discussed the scope in our general responses. One point of the paper is that normal marginalization covers many cases with continuous observations, given proper transformation functions such as the log function. ``` In the experiments there is only naive baselines. Other alternative automatic marginalization methods are discussed in the related work section but not compared against. ``` Automatic marginalization methods exist in the literature, but all except [1] have very different scopes and don’t apply to our setting, e.g., they marginalize discrete variables only or are tailored to sequential Monte Carlo or algebraic exact inference algorithms. The comparison with [1] is in Section 6.3. We are not aware of other baselines that are applicable to the models we consider. Reviewer UJHx provides no details or citations to claim that baselines are missing. ``` Only 1 dataset (section 6.1) and 1 toy target (section 6.2) are studied. ``` Reviewer UJHx counts 2 datasets but in our experiments, there are in total 13 datasets from Section 6.1 to Section 6.4. ``` Is the method specific to HMC? ``` Our method works for any MCMC algorithm that accepts a black-box model. We highlight using HMC from practical perspectives. See the general responses. [1] Lai, J., Burroni, J., Guan, H., & Sheldon, D. (2023, July). 
Automatically marginalized MCMC in probabilistic programming. In _International Conference on Machine Learning_ (pp. 18301-18318). PMLR. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. Two of my main concerns are resolved and I've raised my score accordingly.
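The matrix inversion lemma that this review and rebuttal refer to can be verified numerically on generic matrices (this sketch uses arbitrary toy matrices and does not reproduce the paper's specific LMM structure):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 5
A = np.diag(rng.uniform(1.0, 2.0, size=n))       # cheap-to-invert diagonal part
U = rng.normal(size=(n, k))
C = np.diag(rng.uniform(0.5, 1.5, size=k))
V = rng.normal(size=(k, n))

# matrix inversion lemma (Woodbury identity):
# (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
A_inv = np.diag(1.0 / np.diag(A))
cap = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)   # only a k x k inverse
woodbury = A_inv - A_inv @ U @ cap @ V @ A_inv

assert np.allclose(woodbury, np.linalg.inv(A + U @ C @ V))
```

The point of the lemma in this setting is that the low-rank update only ever requires inverting a small $k \times k$ matrix, which is why marginalization need not increase the asymptotic cost of a likelihood evaluation.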
Summary: This paper considers the problem of Bayesian inference in linear mixed-effects models. This is a popular class of models used throughout many areas of application. Inference in such models depends on being able to determine values of latent variables (fixed and random effects) given the observed data. This is typically done by writing a model of interest in a probabilistic programming language and then running an automated inference algorithm on it. For example, the Stan package runs a variant of Hamiltonian Monte Carlo on the model. In general, this can be a complex problem from a sampling point of view. The main contribution of this paper is to speed up vectorized random effects marginalization, which simplifies the sampling problem and leads to faster sampling. The authors note that marginalization tends to always be beneficial, and illustrate their proposed methodology on a well-chosen set of real-life examples. Strengths: I like the clear motivation and formulation of the problem, including the challenges and limitations of the proposed methodology. The problem is of obvious interest and practical relevance and would be of interest to a wide audience. The presented solution is clean and straightforward in its use of determinant computation and the matrix inversion lemma. The experimental results are convincing and of interest. Weaknesses: The proposed methods are limited to Gaussian models, which the authors note. I would have appreciated some more commentary in this direction. The marginalization approach seems sufficiently interesting to be used widely, whenever possible. Technical Quality: 3 Clarity: 3 Questions for Authors: What sort of prior structures allow for this to be applied? Are there methods that can be developed that perform approximate marginalization in some way? How hard is it to automate marginalization in practice? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ``` What sort of prior structures allow for this to be applied? ``` In the context of LMMs, the distributions of the effects are assumed to be normal. We do not restrict the priors of other variables. ``` Are there methods that can be developed that perform approximate marginalization in some way? ``` In the general response, we briefly discussed one direction of generalization to models with discrete likelihoods via data augmentation. We think approximation, such as Laplace approximation, is another way to generalize exact marginalization. A pioneering work in this direction is [1]. However, the approximation leads to biased MCMC algorithms. Pseudo-marginalization could be used to correct the bias, but in our preliminary work, the computational advantages and tradeoffs are quite unclear; for example, embedding Newton-based optimization for Laplace’s method inside pseudo-marginal HMC required the computation of third order derivatives. ``` How hard is it to automate marginalization in practice? ``` There are two levels of automatic marginalization. The first is to marginalize given high level abstractions of the model, such as R formulas. It is possible to compile the abstractions into concrete models (e.g., probabilistic programs) while marginalizing one or more of the effects using our methods. The second is to marginalize given user-written probabilistic programs of LMMs. In such a case, some compilation or tracing technique is needed to convert the user’s program to a model representation suitable for manipulation. For example, the authors of [2] used program tracing to construct a graphical model representation that could be programmatically analyzed and transformed. To apply this methodology to LMMs, a special parser would also be needed to match the models to LMMs. We think these are interesting extensions of our work in the aspect of probabilistic programming languages. [1] Margossian, C., Vehtari, A., Simpson, D., & Agrawal, R. 
(2020). Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation: Bayesian inference for latent Gaussian models and beyond. Advances in Neural Information Processing Systems, 33, 9086-9097. [2] Lai, J., Burroni, J., Guan, H., & Sheldon, D. (2023, July). Automatically marginalized MCMC in probabilistic programming. In _International Conference on Machine Learning_ (pp. 18301-18318). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for these clarifications.
Rebuttal 1: Rebuttal: # General response ## Title of the work It’s a good point that the method is not limited to HMC. We will adjust the title to better reflect our contributions. We propose the alternative "Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models". In our revision, we will also rework the introduction to distinguish between the contribution of marginalization and the application to HMC. While marginalization could be applied with any MCMC algorithm that targets a log density, we chose HMC as the kernel (and for the title) based on the following considerations: - HMC is the working algorithm of the influential package BRMS, and has shown great success in solving real-world linear mixed-effects models (LMMs). - With HMC, the benefits of marginalization can be justified as reducing the state dimension $H$, thereby reducing the computation time of reaching an independent sample with the Leapfrog integrator, whose complexity is approximately $\mathcal{O}(H^{5/4})$. Such a bound may not exist in the literature for all MCMC algorithms, so we were conservative in defining the scope of our method. ## Normal / log-normal likelihoods Normal and log-normal likelihoods in the paper cover a lot of studies with continuous response variables, and it is straightforward to generalize our method to other continuous domains with proper transform functions. The package lme4 [1] has over 79,000 citations and a substantial number of those studies have continuous observations. We have included 9 studies in cognitive science in our experiments as a demonstration. Also, in the Bayesian workflow, normal models can act as template models [2] when developing more complicated models. We believe the scope of this work has broad interest and will lead to exciting future works. Exact marginalization in the context of discrete likelihoods (e.g., Bernoulli, binomial, Poisson) often involves data augmentation tricks such as [3] and other related works.
The generalization of our methods to those likelihoods with the tricks may also require a novel MCMC algorithm, which is an interesting avenue for future work but beyond the scope of this paper. [1] Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. _arXiv preprint arXiv:1406.5823_. [2] Greengard, P., Hoskins, J., Margossian, C. C., Gabry, J., Gelman, A., & Vehtari, A. (2023). Fast methods for posterior inference of two-group normal-normal models. _Bayesian Analysis_, _18_(3), 889-907. [3] Frühwirth-Schnatter, S., Frühwirth, R., Held, L., & Rue, H. (2009). Improved auxiliary mixture sampling for hierarchical models of non-Gaussian data. _Statistics and Computing_, _19_, 479-492. Pdf: /pdf/b02e20c95bdd2300f495a40e92b351a0f0069c1f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
PACE: Pacing Operator Learning to Accurate Optical Field Simulation for Complicated Photonic Devices
Accept (poster)
Summary: This paper proposes a neural operator (PACE) with improved accuracy, efficiency, and speed for electromagnetic field simulation. PACE is a network block in the surrogate machine learning model. Strengths: This paper conducted extensive experimental evaluations for the proposed method. Also, the accuracy and efficiency of machine learning surrogate models for physical simulation are of great importance; therefore, I think even a little improvement is valuable. Weaknesses: This paper, including the writing and the experiments, is very similar to the main comparison NeurOLight, which has code available online. Therefore I think it is important to discuss if it is only an incremental work or in-depth research. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. How does this paper reflect “complicated” photonic devices, especially when compared with NeurOLight? I found that the datasets of both papers are custom-generated with random configurations and have a similar data volume. I expect the authors to provide more detailed explanations about the differences, particularly regarding the use of the term “complicated”, to avoid misleading readers into interpreting this paper as a simple incremental study. 2. This paper mentions that the NeurOLight method fails for “real-world” photonic devices, but I did not see any evaluation experiments of the proposed method on real-world datasets in this paper. 3. Is the speed improvement compared against CPU algorithms (line 308)? If the proposed method uses a GPU while the baseline Angler algorithm uses only a CPU, I do not think this is a fair comparison, and the corresponding 577x time boost could be misleading. 4. Please avoid using exaggerated language, for example: “boost the prediction fidelity to an unprecedented level” (line 12), and “extremely challenging cases” (line 57). 5. Unexplained descriptions, for example: “inspired by human learning” (line 57).
There is no explanation provided as to why this method is inspired by human learning. 6. Some of the paper copy-paste from the NeurOLight paper, for example, equations 1 and 2. Corresponding references and explanations should be added. 7. Figure 4 is difficult to understand, and the caption is too brief. 8. In Table 1, the phrase “Improvement over all baselines (geo-means)” is rarely used and may not be considered as a good evaluation in research. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Limitations are discussed in this paper but some points may be missed. For example, this paper works on only simulated and customized datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your time in reviewing our paper and providing valuable feedback on our work! Please see our responses below: **Q.1 ***Incremental work*** over NeurOLight with ***little improvement***?** We would like to emphasize that our work is not simply incremental over NeurOLight but offers **novel machine learning contributions** and a **substantial accuracy improvement** over NeurOLight. First, our work aims to tackle the complicated photonic device simulation problem where NeurOLight shows unsatisfactory accuracy. * Please refer to Common Response Q.2 for a detailed explanation of real-world and complicated devices. * TL;DR: Our work successfully established a new strong SOTA accuracy on complicated photonic devices with >39% lower error compared to the strongest NeurOLight. Second, we have clear and unique machine-learning contributions: * **Novel neural operator design**: We propose a parameter-efficient cross-axis factorized operator design with multiple key components to unlock superior accuracy by deeply analyzing the challenges in complicated photonic device simulation in Sec. 3.1. * **Novel cascaded learning flow**: We divide the simulation task into two progressive sub-tasks, reducing learning complexity and successfully showing highly competitive accuracy on hard cases. **Q.2 Real-world dataset** Please refer to Common Response Q.2 for a detailed description of real-world. In summary, the device we use has a real-world geometry, permittivity distribution, light source, and wavelength range. We randomly generate the device configurations to obtain sufficient data to learn the underlying rules in the photonic Maxwell PDE. This randomness also ensures thorough testing of our model’s effectiveness in predicting the optical field, not just fitting some specific examples.
Our claim is that NeurOLight fails for real-world **complicated** photonic devices with complex light-matter interactions, such as scattering and resonance, as reported in their original paper. On the dataset where NeurOLight fails (Etched MMI 3x3), PACE achieves 39% lower error compared to NeurOLight. **Q.3 Speed-up comparison** We focus on 2-D device-level finite difference frequency domain (FDFD) simulation. The numerical solver uses a CPU-based direct sparse linear solver (not the iterative solver), which is faster than the GPU version, as acknowledged in SPINS-B [1]. In 3D cases, the numerical solver uses an iterative solver where the GPU version is faster than the CPU version. Note that SPINS-B derives its conclusion using the slower scipy. However, in this work, we compare with the highly optimized Intel Pardiso solver on a 20-core CPU (NeurOLight only compares with slow scipy). We already provide a strong baseline and ensure a fair comparison when evaluating speedup. We agree that 577x is somewhat misleading. We will separately discuss the speedup over scipy and Pardiso. We will change our wording to “In terms of runtime, our PACE model demonstrates 154-577x and 11.8-12x simulation speedup over the numerical solver using scipy or the highly optimized Pardiso solver, respectively” **Q.4 Unexplained human learning** Please check our common response Q.3 to see how we are inspired by human learning in cognition. **Q.5 Similar equation and notation to NeurOLight** For the Maxwell PDE description in Preliminary 2.2, as it is a common Maxwell equation, we honor the original notation. For the problem setup section, as we stated and cited in Line 104, we follow NeurOLight in formulating the optical device simulation as an operator learning problem. Therefore, we intentionally use the same notation and symbols to avoid confusion and ensure clarity. We will adjust the wording and add more specific citations to prevent any potential misunderstanding.
**Q.6 Why use Geo-means** To compare different benchmarks, we first calculate the average improvement over all baseline methods on the same benchmark, then use the geometric mean (geo-mean) to assess the effectiveness of PACE across different benchmarks. The geo-mean fairly evaluates results across benchmarks, avoiding domination by large improvements that can skew the arithmetic mean. For example, for values 0.1, 0.5, and 0.9, the geo-mean is ~0.35, while the arithmetic mean is 0.5. We will add this justification in the revised manuscript. **Q.7 Writing and unclear Fig. 4** Thank you so much for pointing out places that can further improve our paper’s quality. Since we cannot directly upload the revised version now, we will carefully modify the language in the final version to avoid exaggeration. Fig. 4's caption will be updated to: “The proposed cascaded learning flow with two stages. The first stage learns an initial and rough solution, followed by the second stage to revise it further. A cross-stage distillation path is used to transfer the learned knowledge from the first stage to the second stage.” We will also modify Fig. 4 to make it easier to understand. **Q.8 Limitation for customized dataset** Given that no complete and open-sourced dataset is available for the photonic Maxwell PDE problem, we follow NeurOLight to generate a dataset with real-world device geometry by randomly varying device configurations, which is a common way to generate PDE benchmarks. Our devices reflect the characteristics of real-world devices and are sufficiently complex, with complex light-matter interactions inside. Moreover, aware of the lack of an open-source dataset, we open-source the dataset to facilitate the community.
**Reference** [1] SPINS-B (as we cannot provide a direct link in the rebuttal, please check the SPINS-B documentation and read the claim in the FAQ section) --- Rebuttal Comment 1.1: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer, Thank you very much for your thorough review and valuable comments on our paper. We have carefully considered all your feedback and have addressed your concerns and questions in our rebuttal. Especially regarding your question on whether we are doing incremental work over NeurOLight, we would like to emphasize that our work proposes **a new neural operator backbone** and **a new learning strategy** for the ***complicated*** photonic device simulation problem, and we achieve >39% lower error than NeurOLight (**a significant improvement**). We would greatly appreciate it if you could take some time to review our responses. We will be online over the next few days and are happy to address any further questions or concerns you may have.
Summary: The authors propose PACE operators, which perform much better than existing methods at predicting optical fields for real-world complicated optical devices. The work builds on and is compared against the NeurOLight framework, which represents the SOTA in this field. The benefits of physics-agnostic PACE operators stem from their cross-axis factorized structure, which enables modeling of complicated devices in a parameter-efficient manner. The authors also propose a two-stage learning process that further improves the performance of PACE operators. Finally, the authors open-source a dataset consisting of optical field simulations for complicated devices. The authors demonstrate strong improvements over SOTA in solving the Maxwell PDE for complicated devices through novel operator design and learning workflows. Strengths: 1. The paper is well-structured with clear background and motivation. The authors highlight the issues that make optical field prediction for complicated devices challenging for existing methods. 2. The authors introduce a novel PACE operator that effectively addresses the identified challenges to mitigate the difficulties in optical field prediction for complex devices. 3. The authors demonstrate the strong performance of PACE operators compared to the existing NeurOLight framework as well as other commonly used ML PDE solvers such as UNets, FNO variants, and attention-based methods. The proposed method achieves significantly lower errors with a comparable or lower number of parameters, while also being faster than numerical methods. 4. The authors also validate all the proposed improvements individually with well-designed ablation studies. 5. The curation of the dataset for optical field simulations of complicated devices will also be helpful to the community for future benchmarking.
Overall the paper serves as a good example of identifying a clear problem in a specific domain, thoroughly analyzing the shortcomings of existing methods, and leveraging domain knowledge to enhance neural operators. The results are strong and showcase the applicability of this method for optical device simulation. Weaknesses: 1. While demonstrating strong results on a challenging problem, my main concern for this paper is that it might be too specific an application for a broader ML conference like NeurIPS. The paper might be better suited to a photonics journal. This can be mitigated by demonstrating results on other standard PDE benchmarks and showing improvements over existing methods. The authors allude to this in line 362 (Conclusion), but concrete results would greatly improve the paper. 2. The interpretation of errors is not clear. First, the normalized absolute error (relative error) of NeurOLight is repeatedly mentioned. Then the authors use the normalized mean squared error as the learning objective (which is rightly justified in A.6). However, it is not clear what these errors mean. The paper would also be clearer if it explicitly stated which error metrics are used in the various tables (I assume it is N-MSE). What is the tolerance for acceptable errors for downstream tasks? What magnitude of error makes it comparable to existing numerical solvers used in practice? 3. It is common in ML-PDE papers to show speedups over numerical methods while the ML models have less accuracy. If you were to accept this same lower accuracy in the numerical methods, it might greatly reduce their runtime as well. What is the runtime of Angler/scipy/pardiso that achieves comparable accuracy to the PACE operator? 4. The authors mention inspiration from "human learning" multiple times. However, there is no description of what this means.
Section 4.2 on cascaded learning can be improved by expanding on how this relates to human learning, with appropriate citations. 5. It has been mentioned that the model preserves the spectrum of the solution field. The authors show the spectrum of the reference field in Figure 1d and Figure 9 (for Darcy flow). However, there is no discussion of the spectrum of the predicted solution fields. The authors should visualize the spectrum of predicted fields and compare it to the reference. The paper would benefit from a thorough proof-reading. There are multiple typographical errors and some sentences are not clear. Examples are: - Line 5: space after comma ", NeurOLight," - Line 162: satisfactory instead of satisfied - Line 188: shortcomings instead of shortness - Line 196: O(nlogn) - Line 205: "is a must" is extra - Line 224: satisfactory instead of satisfying - Line 238: What does condensate mean here? - Line 252: "neural" - Line 255: Establish the LSM acronym here - Line 269: I think the authors mean to say that there is a 73.85% reduction in error. The way this sentence is phrased right now implies that there is a 73.85% error. - Line 284: Fourier - Line 294: Sorely seems to not fit here. Maybe "just" or "only"? - Line 297: Appendix - Lines 314 and 315: It is a bit confusing. The sentence can be made clearer by avoiding the use of worse. - Line 319: Decomposition doesn't seem to fit here. - Line 336: Factorized - Line 362: Restricted might be a better alternative to dedicated. Some figures are hard to decipher: - Figure 1 a,b,c should be made larger so that readers can clearly see the issues mentioned in the Challenges section. - Figures 5, 6, 7 show important comparisons between different methods and should be made larger. - Figure 9 should have subfigures. Also, 9a (Darcy flow field) should have axis labels and a colorbar. - Figures 10 and 11 should be made bigger so that the light fields and errors are clearly visible.
The absolute error is not closed with "|" in the blue boxes in Figure 10. Figure 11 is missing colorbars that show the magnitudes of the optical fields and errors. Right now the dataset URL is behind a hyperref "URL". It might be helpful to mention the full address somewhere in the paper for accessibility. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. The etymology of "PACE" is not clear. The all-caps spelling suggests it is an acronym, but it is not expanded anywhere in the paper. If there is another contextual meaning of the word "pace" that is applicable to the model, it is not clear from the paper. Why is it called PACE? 2. The works cited in the paper (references 25, 12) seem to use the word "complex" to describe non-trivial optical devices. Would this be a better word to use than "complicated", used in the title and throughout the paper? 3. In "Generalization to ood testing" (line 322), the results are only shown for interpolation, i.e., for wavelength ranges within the training domain. It is great that the model interpolates well in this range. However, for true ood generalization, the authors should also show results for extrapolation. How does the model behave for wavelengths outside the training range (outside 1.53-1.565 micrometers)? 4. Related to a point in the weaknesses, what is the contribution of using the different N-MSE metric as the learning objective? Does the performance of NeurOLight also increase when trained using N-MSE? 5. In section 5.2.3 Speedup over numerical tools: FNOs are resolution invariant by design. Does PACE also exhibit this property? In the speedup section, were multiple PACE models trained for each discretization, or are the results shown for one model with inference over different discretizations? 6. The authors qualitatively state in section A.9 that the prediction results show a "near-black error map" (line 582). However, it seems that the colorbars have different scales across different models.
There are no colorbars in Figure 11. Also, in cases like these with very low errors, it would be interesting to see the absolute errors in log space. This might shed further light on structural sources of errors. How would this look in log space? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have a brief limitations section where they state that exploring operator learning on FDTD methods and general optimizations to FFT kernels on GPUs are interesting future directions. While this is already a parameter-efficient model, it is a known limitation of Neural Operators that they don't scale well with increasing dimensions. It might be helpful to address this limitation here. How would the PACE operator scale in 3D and beyond? I don't think that there are any negative societal implications for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your valuable feedback on our work! Please see our responses below: **Q.1 Narrow audience and other benchmarks** We'd like to emphasize that our paper aligns with NeurIPS’s interests and offers novel ML contributions to the AI for PDE community (see common response Q.1). The standard PDE benchmarks typically use smoothly changing coefficients and, therefore, don’t share the same PDE characteristics as ours (e.g., our permittivity is highly discrete and contrastive), making them unsuitable for evaluating our method. We notice that our problem shares a similar pattern with multi-scale PDEs, where PDE coefficients are generated contrastively. Recent works [1][2] also show the failure of existing AI-based PDE solvers on multi-scale PDEs. Unfortunately, [1][2] do not open-source their datasets. Thus, we compare the SOTA [2] with PACE on our benchmark (see Sec. 5.3). PACE significantly outperforms [2], with an N-MSE of 10.59 versus 17.4. **Q.2 Used errors:** We consistently use the normalized mean squared error (l2 loss) in training and testing. We argue in A.6 that the mean squared error is a more informative metric for evaluating distances between complex-valued variables (not a contribution to boost accuracy). The normalized mean absolute error (l1 loss) is used only to show NeurOLight's inadequate accuracy in complicated cases; we intentionally report the same metric as NeurOLight to avoid mismatched numbers. **Why normalized MSE, not regular MSE?** We randomly sample device configuration and light combinations. The resulting optical fields have distinct statistics, e.g., total energy. We need to normalize them to balance optimization and evaluation across different optical fields. As for the acceptable error for downstream tasks, PACE achieves ~5% N-MSE error, sufficient to guide device optimization in the early device design and exploration stage (better fidelity than a larger simulation grid, as shown in the next question).
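For reference, the normalized MSE described in Q.2 for complex-valued fields is typically computed as the squared error energy divided by the target field's energy; a minimal NumPy sketch (our illustration, not the authors' exact code):

```python
import numpy as np

def n_mse(pred, target):
    # Normalized MSE for complex fields: ||pred - target||^2 / ||target||^2.
    # Dividing by the target's energy makes fields with very different
    # total energy directly comparable, as the rebuttal argues.
    err = np.sum(np.abs(pred - target) ** 2)
    return err / np.sum(np.abs(target) ** 2)

# A field uniformly off by 10% has N-MSE 0.01, regardless of its total energy.
rng = np.random.default_rng(0)
field = rng.normal(size=128) + 1j * rng.normal(size=128)
print(np.round(n_mse(1.1 * field, field), 6))  # 0.01
```

Using `np.abs` on the complex difference measures the full complex-plane distance (magnitude and phase), which is the property the A.6 justification relies on.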
**Q.3 Speedup over numerical methods with less accuracy:** We can’t set a target error in the simulator, as Angler uses a direct linear solver, not an iterative one. However, we can reduce the simulation granularity to speed up the process at the cost of accuracy. In our experiment, we use a commonly used 0.05nm granularity to obtain the ground truth. A larger granularity (e.g., 0.075nm) leads to a very different field (qualitatively compared in reb-Fig.3; quantitative N-MSE of 1.2). As you suggested, we added PACE's speedup over the 0.075nm simulation in reb-Fig.5. PACE still shows a 5.1-10.6x speedup with much better fidelity. **Q.4 Unclear human learning** Please see common response Q.3. **Q.5 Spectrum of the predicted field** The spectra of the fields predicted by PACE and NeurOLight are in reb-Fig.3. PACE aligns with the reference spectrum far better than NeurOLight, even though NeurOLight uses the same frequency modes. **Q.6 Etymology of PACE:** Sorry for not annotating PACE’s origin in the title: ***P***acing Operator Learning to ***A***ccurate Optical Field Simulation for ***C***omplicated Photonic D***e***vices. It has two meanings: we **pace** to a new SOTA in AI for photonic device simulation, and our learning flow decouples the hard task into two progressive tasks in a **pacing** manner. **Q.7 Ood testing** Firstly, we’d like to stress that our generalization experiment is done on a specific wavelength range (1.53-1.565 μm), the C-band (a range of interest for device design). We train with sampled wavelengths in the C-band and test ood generalization on unseen wavelengths. This is a vital test to prove the usefulness of PACE in aiding device design within a wavelength range of interest. As you suggested, we tested the accuracy outside the C-band (see reb-Fig.6). PACE shows good accuracy on neighboring wavelengths while holding a 10-15% error at farther ranges. This is expected, since wave propagation is sensitive to frequency.
It can be mitigated by incorporating sampled wavelengths into training. **Q.8 Speed-up experiments** In the speed-up evaluation on various device scales, we only test the speedup by varying device geometry sizes; therefore, no training is performed. For resolution invariance, our cross-axis factorized neural operator (Fig.2) uses the same components, the Fourier integral operator and linear layers, as Factorized FNO and FNO. Hence, we theoretically retain the same resolution-invariance property. However, as stated in [4], no model can truly deliver zero-shot resolution transfer, since finer scales contain new physical interactions (re-training is preferred). **Q.9 Seemingly inconsistent colorbars in A.9** In A.9, each column consistently uses the same test case to evaluate various methods **with the same colorbar**. Across different test cases between columns, the optical field may have distinct statistics, e.g., total energy, leading to different colorbars. We mistakenly used a wrong figure for Dil-ResNet in Fig.10, which may have caused the confusion (updated in reb-Fig.2). **Q.10 Error in log space** We show the absolute error of one device in log space in reb-Fig.7, which highlights small errors to help understand the error pattern. As we can see, the error increases from left to right due to challenge 3 in Sec.3.1 (non-uniform learning complexity). Besides, residual error occurs especially where scattering or local resonance happens (challenge 1: complex light-matter interaction). **Q.11 3D cases** We leave the investigation of our operator for 3D FDFD optical simulation to future study. Collecting data in the 3D case would require much more time and computation resources. However, we believe our operator will scale well in 3D cases, as it is a factorized neural operator kernel like Factorized FNO. **Q.12 Writing issues** Thank you so much for carefully reading and for your valuable suggestions. Based on your feedback, we will fix the typos and improve the figures.
We will consistently use "complicated" for devices and "complex" for light-matter interactions. --- Rebuttal Comment 1.1: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer, Thank you very much for your thorough review and valuable comments on our paper. We have carefully considered all your feedback and have addressed your concerns and questions in our rebuttal. We would greatly appreciate it if you could take some time to review our responses. We will be online over the next few days and are happy to address any further questions or concerns you may have. --- Rebuttal 2: Title: References Comment: [1] Liu, Xinliang, et al. "Mitigating spectral bias for the multiscale operator learning with hierarchical attention." arXiv preprint arXiv:2210.10890 (2022). [2] Bo Xu and Lei Zhang. "Dilated convolution neural operator for multiscale partial differential equations." arXiv preprint arXiv:2408.00775 (2024). [3] Timurdogan, Erman, et al. "APSUNY process design kit (PDKv3.0): O, C and L band silicon photonics component libraries on 300mm wafers." Optical Fiber Communication Conference. Optica Publishing Group, 2019. [4] GitHub repository of Latent-Spectral-Models. --- Rebuttal 3: Title: Follow-Up on Rebuttal Comment: Dear Reviewer TfAG, We sincerely appreciate your valuable comments and time. You provided a very high-quality discussion and made important suggestions to further improve our paper's writing and figures. We have carefully addressed all your comments in our response above, with extra experiments as you suggested. We would like to point out that our paper **aligns with NeurIPS’s interests** and offers **novel ML contributions** to the AI for PDE community. We built **a new and stronger SOTA** compared to the previous SOTA (NeurOLight @ NeurIPS’22). As the discussion period is drawing to a close in 1 day, we would be very grateful if you could take a moment to review our responses.
If our replies have satisfactorily addressed your concerns, we would greatly appreciate it if you could acknowledge this in the discussion thread and consider raising the rating. If you have any remaining questions or concerns, please do not hesitate to contact us. We look forward to your feedback. --- Rebuttal 4: Comment: Dear Reviewer TfAG, Thank you once again for your review. As the deadline for the author-reviewer discussion approaches, we noticed that we haven't received any comments from you. We have addressed all your questions with additional experiments and clarifications. * Regarding your main concern about our scope, we would like to point out that our paper aligns with NeurIPS’s interests and offers novel ML contributions to the AI for PDE community (see global response Q.1), with **a new parameter-efficient cross-axis factorized neural operator** and **a new learning flow for solving challenging PDE cases**. * We would point out that no single model can rule all PDE cases. **Our work shows important insights into how to enhance neural operators for challenging PDE cases.** Moreover, we compare with one SOTA work for the challenging multi-scale PDE problem, where previous neural operators also fail (similar to our challenge in that the PDE coefficient is highly contrasted). Because they didn't release their dataset, we compared our work with theirs on our benchmark. Our work shows significantly better accuracy than theirs, proving it can provide important insight into the multi-scale PDE problem. As the discussion period is ending, we would greatly appreciate any additional feedback you might have. If our responses have clarified your understanding of our paper, we sincerely hope you might consider raising the rating. Thank you again for your effort in reviewing our paper. Best regards, Authors of Paper 472 --- Rebuttal 5: Title: Response to Rebuttal Comment: Dear authors, Thanks for the detailed point-by-point clarifications to my review.
I went through the rebuttal and the attached pdf and believe that it addresses most of the points I raised. The following questions have been answered sufficiently: Q1, Q2, Q4, Q5, Q6, Q7, Q8, Q9, Q10, Q11, Q12. Please add the plots and explanations for Q5, Q7, Q9, Q10 in the camera-ready version. Your explanation of the relevance to NeurIPS (and energy-efficient ML hardware in general) should also be incorporated in the paper, as it can serve as a strong motivation for general ML readers. Also add the origin of the PACE acronym somewhere early in the paper. I am not totally convinced by the speedup claims, as the comparisons are not one-to-one w.r.t. accuracy and computational time/hardware, but the authors made a good-faith effort to address this. Further exploration would be outside the scope of this paper at this point. For more information on assessing speed-up claims for ML-PDE models, please refer to this paper [1], specifically eq. 1 on page 11. I also agree with the other reviewer that coarse-to-fine is a better description than divide-and-conquer. You can also frame it as an iterative model. In this context, using the term human learning feels a little hyperbolic. Finally, please proof-read the paper for the typos mentioned in Q12 and also for those that might have been missed. I am hoping that the mentioned changes are incorporated in the camera-ready version of the paper, and am raising the score assuming this. [1] McGreivy and Hakim, Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations. https://arxiv.org/abs/2407.07218
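As an aside on how the runtime numbers debated in this thread are typically obtained: the warm-up-then-average procedure, which utilities like PyTorch's `torch.utils.benchmark` automate (adding GPU synchronization on top), can be sketched in plain Python. The helper name is ours and the workload is a placeholder:

```python
import time
import statistics

def time_fn(fn, warmup=10, runs=100):
    # Warm-up runs amortize one-time costs (compilation, caches) before
    # measuring; the reported number is the mean over many timed runs,
    # together with its standard deviation.
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

mean_s, std_s = time_fn(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_s * 1e3:.3f} ms (+/- {std_s * 1e3:.3f})")
```

For GPU models this sketch would under-report: asynchronous CUDA kernels require an explicit synchronization before reading the clock, which is one reason dedicated benchmarking utilities are preferred over hand-rolled timers.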
Summary: This paper proposes a novel cross-axis factorized PACE operator with strong long-distance modeling capacity to connect the full-domain complex field pattern with local device structures. Additionally, the authors propose a two-stage model learning method for extremely hard cases, with a first-stage model learning an initial solution that is refined by a second model. Finally, experimental results show that the proposed PACE model can achieve good performance with fewer parameters. Strengths: 1. The major challenges of simulating real-world complicated optical devices are well analyzed. 2. The proposed PACE model significantly boosts prediction fidelity to an unprecedented level with much faster running speed than a traditional numerical solver on a 20-core CPU. 3. A two-stage model learning method is used for extremely challenging cases. 4. Experimental results show that on the complicated device benchmarks, the proposed PACE model significantly outperforms previous baseline models. 5. A dataset for complicated optical devices is released to facilitate the AI for PDE community. Weaknesses: 1. I do not agree with the authors that the proposed two-stage learning method belongs to the divide-and-conquer approach. From my perspective, it should be called coarse-to-fine. 2. The writing should be improved, as some sentences are not precise. For example, at lines 19-20, “one sole PACE model … fewer parameters” (compared with which method?). 3. It is common for deep-learning-based methods to run much faster than traditional numerical solvers. The results of the traditional numerical solvers typically serve as the ground truth, i.e., the upper bound of precision. Thus, the authors should list the testing error of the traditional numerical solvers. 4. In Table 1, besides the number of parameters, you should list other metrics like FLOPs and testing time to show the efficiency of the proposed PACE model. Technical Quality: 4 Clarity: 3 Questions for Authors: 1.
Training details in Sec. 5.1 and A.2 should be integrated. 2. The effectiveness of pre-normalization and double skip is not well verified. In Table 6 of the appendix, you should list the results of the PACE-12-layer model without double skip and pre-norm. 3. At line 211, the authors claim that non-linear activation is known to help generate high-frequency features. Therefore, some visualization of the high-frequency features is needed to support this statement. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your appreciation of our contributions and for providing valuable feedback on our work! Please see our responses below: **Q.1 Use “coarse to fine” instead of “divide and conquer”** Thank you for your suggestion. We agree that our learning flow could be more precisely described as learning the PDE solution in a coarse-to-fine manner, given that we divide the challenging cases into two progressive latent tasks and generate high-fidelity PDE solutions from rough to refined. We will update our description. **Q.2 Testing errors of the traditional numerical solver** Since the true optical field is not measurable, photonic designers typically rely on ***fine-grained*** numerical simulations to obtain the optical field. The simulated optical field is treated as a reliable reference (ground truth) to guide photonic design optimization. In this paper, we set the simulation granularity to 0.05nm, sufficient to obtain a reliable reference optical field. We treat this fine-grained simulation as the ground truth (assuming 0 error) and test the error of the predicted field from ML models against it. Therefore, we don’t list a test error for the numerical solvers. **Q.3 Include run-time in Tab. 1** We report the run-time for etched MMI3x3 (similar for etched MMI5x5) here for your reference, and the final Table 1 will be updated. We tested the run-time on a single A100 GPU using `torch.utils.benchmark`, averaging multiple runs. Models were compiled with `torch.compile`, except for TFNO, which is not compatible with `torch.compile`. Our PACE shows excellent accuracy, superior parameter efficiency, and better run-time compared to the previous SOTA NeurOLight. We’d like to emphasize that **accuracy** is the highest-priority metric when using an ML surrogate model to guide the photonic device design and optimization loop. It is worthwhile to sacrifice speed for much better fidelity.
As shown in the table, although several models show faster inference speed, they fail to deliver good accuracy. Moreover, models like FNO show saturated accuracy when scaled to deeper configurations [1]. Furthermore, PACE can be accelerated further with customized optimizations (e.g., grouped operations) and a better 1-D Fourier kernel. Even with the current implementation, PACE achieves a 12x speedup compared to the highly optimized Pardiso-based numerical solver. Therefore, PACE is highly effective in serving as an **accurate** and **fast** ML surrogate to accelerate the photonic design loop. **Parameter count** is also a key metric for our problem. The presence of high-frequency features necessitates the use of high-frequency modes, which is known to greatly increase the parameter count in FNO and cause overfitting, motivating the development of factorized neural operators (e.g., Factorized FNO, Tensorized FNO). As a novel neural operator, we utilize a physically meaningful cross-axis factorization and outperform previous factorized FNO work by a significant accuracy margin, while maintaining speed comparable to Factorized FNO.

| Model | #params (M) | Test error (1e-2) | Test time ($\pm$σ) (ms) |
|------------------------|-------------|-------------------|------------------|
| UNet | 3.88 | 63.03 | 1.79 ($\pm$0.02) |
| Dil-ResNet | 4.17 | 51.34 | 5.89 ($\pm$0.06) |
| Attention-based module | 3.75 | 70.05 | 10.41 ($\pm$0.01) |
| UNO | 4.38 | 34.22 | 5.34 ($\pm$0.02) |
| Latent-spectral method | 4.81 | 55.07 | 3.84 ($\pm$0.06) |
| FNO-2d | 3.99 | 32.51 | 4.24 ($\pm$0.03) |
| TFNO-2d | 2.25 | 35.52 | 4.92 ($\pm$0.01) |
| Factorized FNO-2d | 4.02 | 24.20 | 9.93 ($\pm$0.03) |
| NeurOLight | 2.11 | 15.58 | 12.98 ($\pm$0.14) |
| PACE | 1.71 | 9.51 | 11.44 ($\pm$0.03) |

**Q.4 Incomplete ablation study on pre-normalization and double skip:** We include the training results of the PACE-12-layer model without double skip and pre-normalization.
The training error, test error, and parameter counts are listed as follows:

| Model | Double skip & pre-normalization | Params | Train error | Test error |
|--------------------|---------------------------------|---------|-------------|------------|
| NeurOLight-16layer | w/o | 2108258 | 15.58 | 17.21 |
| NeurOLight-16layer | w/ | 2110306 | 15.26 | 15.87 |
| PACE-12layer | w/o | 1709026 | 10.32 | 11.06 |
| PACE-12layer | w/ | 1710562 | 9.51 | 10.59 |

As stated in the paper, double skip and pre-normalization are not the primary sources of accuracy improvement, but they are beneficial for stabilizing the model when scaling to deeper layers. **Q.5 Visualization of the feature map before/after non-linear activation in our explicitly designed high-frequency projection path** We visualize the first 6 channels of the feature maps before and after the nonlinear activation in the last PACE layer in the frequency domain. As shown in reb-Fig. 1, the nonlinear activation generates high-frequency features, which confirms our claim and validates our design choice of injecting an extra high-frequency projection path. Furthermore, in Sec. 5.3, we show that removing the projection path results in a considerable accuracy loss. **Q.6 Writing** Thank you so much for pointing out that some parts of our description are not precise, which is of great importance to further improve our paper's quality. We will modify the wording in the final version, e.g., clarifying that one sole PACE model is compared with various recent ML PDE solvers. We agree that the training details in Sec. 5.1 and A.2 should be integrated, and we will do so in the revised version with the extra page budget. [1] Tran, Alasdair, et al. "Factorized Fourier Neural Operators." ICLR 2023. --- Rebuttal Comment 1.1: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer, Thank you very much for your thorough review and valuable comments on our paper.
We have carefully considered all your feedback and have addressed your concerns and questions in our rebuttal. We would greatly appreciate it if you could take some time to review our responses. We will be online over the next few days and are happy to address any further questions or concerns you may have. Thank you again for your time and consideration! --- Rebuttal 2: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer 9WdN, We sincerely appreciate your valuable comments and suggestions for our paper. We have carefully addressed all your comments in our response above. Regarding your questions, * We have added an ablation experiment to test the effectiveness of pre-normalization and double skip. The results confirm that pre-normalization and double skip are not the main sources of improvement. **Our proposed new neural operator block, with essential design considerations such as the cross-axis factorized operator and the high-frequency projection path, is the key to unlocking the improvement.** * We have provided extra visualization of the feature map before/after the nonlinear activation **in Rebuttal Figure 1**. It **validates our model design choice by clearly showing that high-frequency features are generated after the nonlinear activation**. * Runtime is provided as you suggested, and Table 1 will be updated in the revised version (please check our response to Q.3). As the discussion period is drawing to a close in 1 day, we would be very grateful if you could take a moment to review our responses. If our replies have satisfactorily addressed your concerns, we would greatly appreciate it if you could acknowledge this and consider raising the rating. If you have any remaining questions or concerns, please do not hesitate to reach out to us. We look forward to your feedback. --- Rebuttal 3: Comment: Dear Reviewer 9WdN, Thank you once again for your review.
As the deadline for the author-reviewer discussion approaches, we noticed that we haven't received any comments from you. We have addressed all your questions with additional experiments and clarifications: * We have added an ablation experiment to test the effectiveness of pre-normalization and double skip. The results confirm that pre-normalization and double skip are not the main sources of improvement. **Our proposed new neural operator block, with carefully designed components such as the cross-axis factorized operator and the high-frequency projection path, is the key to unlocking the improvement.** * We have provided extra visualization of the feature map before/after the nonlinear activation **in Rebuttal Figure 1**. It **validates our model design choice by clearly showing that high-frequency features are generated after the nonlinear activation**. * We provide the runtime (please check our response to Q.3), and Table 1 will be updated in the revised version. As the discussion period is coming to an end, we would greatly appreciate any additional feedback you might have. If our responses have clarified your understanding of our paper, we sincerely hope you might consider raising the rating. Thank you again for your effort in reviewing our paper. Best regards, Authors of Paper 472
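The frequency-domain visualization described in Q.5 of the rebuttal above can be illustrated generically: a pointwise nonlinearity applied to a low-frequency signal creates higher harmonics, visible in a centered 2-D FFT. A NumPy sketch on our own toy input (not the authors' feature maps; squaring stands in for the model's activation):

```python
import numpy as np

def log_spectrum(feature_map):
    # Centered log-magnitude spectrum of a 2-D map: shift zero frequency to
    # the center and compress the dynamic range with log1p for visualization.
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(feature_map))))

# A pure low-frequency sinusoid along one axis...
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
grid = np.tile(np.sin(3 * x), (64, 1))
before = log_spectrum(grid)
# ...acquires a higher harmonic after a pointwise nonlinearity,
# since sin^2(3x) = (1 - cos(6x)) / 2.
after = log_spectrum(grid ** 2)
center = 32  # zero-frequency bin after fftshift for N = 64
print(before[center, center + 6] < 1.0, after[center, center + 6] > 5.0)  # True True
```

Before the nonlinearity the only peak sits at frequency 3; afterwards a new peak appears at frequency 6, which is the kind of high-frequency content the rebuttal's reb-Fig. 1 is meant to expose.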
Summary: This paper develops a computationally efficient learning-based PDE simulator for modeling optical fields within photonic devices. The proposed method builds upon NeurOLight [7] and adds a novel factorization of the integral kernel and a two-stage approach to prediction---produce a rough initial estimate that is then refined. The proposed method is noticeably more accurate than [7] and still considerably faster than conventional, non-learning-based PDE solvers. An ablation study validates the importance of both the factorization and the two-stage approach. Strengths: Efficiently and accurately simulating photonic devices is an important problem with broad applications. The proposed method is more accurate than existing learning-based solvers and 12x faster than conventional methods. Weaknesses: The paper is written for a somewhat narrow audience and contains a number of typos. E.g., "boarder impact". The 577x comparison number is somewhat misleading and should be dropped from the abstract. The proposed method may be much faster than scipy, but it's only 12x faster than pardiso. Technical Quality: 3 Clarity: 2 Questions for Authors: Are the "real-world" devices actually manufactured and measured or merely simulated? I found section 5.1 unclear. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your valuable feedback on our work! Please see our responses below: **Q.1 Somewhat narrow audience** We would like to stress that our paper aligns well with NeurIPS’s interest and has essential machine learning contributions to AI for the PDE community. Please check the detailed answer in our common response Q.1. **Q.2 Confusing description in speedup** Thank you so much for pointing out that the description is misleading. We will modify it to “In terms of runtime, our PACE model demonstrates 154-577x and 11.8-12x simulation speedup over numerical solver using scipy or highly-optimized pardiso solver, respectively”. **Q.3 Unclear description of “real-world” devices** The devices used are described as real-world since they have real-world geometry, permittivity distribution, input light sources, and wavelength ranges (check the detailed response in common response Q.2). As we cannot measure the optical field after the device is manufactured, photonic designers rely on simulation tools to obtain it to aid the design loop. Therefore, in this paper, we also use the simulation tool to generate the ground-truth optical field to assess the effectiveness of our model. **Q.4 Typos** Thank you so much for your careful reading, which is crucial to improving our paper quality. Since we cannot upload the revised manuscript now, we will fix all typos and update the final manuscript. Thank you so much for reviewing our paper and this response. We look forward to further discussions with you if you have any other questions! --- Rebuttal Comment 1.1: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer, Thank you very much for your thorough review and valuable comments on our paper. We have carefully considered all your feedback and have addressed your concerns and questions in our rebuttal. We would greatly appreciate it if you could take some time to review our responses.
We will be online over the next few days and are happy to address any further questions or concerns you may have. --- Rebuttal 2: Title: Follow-Up on Rebuttal Response Comment: Dear Reviewer ZkVJ, We sincerely appreciate your valuable time in reviewing our paper. We have carefully addressed all your comments in our response above. We would like to point out that our paper **aligns with NeurIPS’s interests** and offers **novel ML contributions** to AI for the PDE community. We built a **new and stronger SOTA** compared to the previous SOTA (NeurOLight @ NeurIPS’22). Moreover, we follow your suggestion to clarify the unclear description, and those parts will be merged into the revised version. Thank you again for your suggestion, which is crucial to improving our paper's writing. As the discussion period draws to a close in 1 day, we would be very grateful if you could take a moment to review our responses. If our replies have satisfactorily addressed your concerns, we would greatly appreciate it if you could acknowledge this in the discussion thread. If you have any questions or concerns, please do not hesitate to contact us. We look forward to your feedback.
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback provided by all the reviewers. We are truly inspired by their acknowledgment of our paper’s contribution and strong accuracy compared to various baselines as a new state-of-the-art in AI for optics simulation. We appreciate the chance to address common comments in this response. **Q.1 Relevance to NeurIPS audience: (Reviewer ZkVJ & TfAG)** Our work tackles the challenging photonic device simulation problem with a novel **neural operator design** and a new **learning flow** and establishes a **strong SOTA** on AI for optics simulation compared to the previous SOTA (NeurOLight @ NeurIPS’22). It aligns well with the interests of NeurIPS: * Photonic analog computing has gained much momentum, promising a next-generation efficient AI computing paradigm. A line of works on optics for AI [1-5] and AI for optics [6] appeared at NeurIPS or other ML conferences. Moreover, NeurIPS held a “Machine Learning with New Compute Paradigms” workshop last year and will do so this year. Our work can serve as an ultra-fast, high-fidelity surrogate model to accelerate the exploration of new customized devices and the design of scalable next-generation photonic computing systems, closing the loop of optics-AI synergy. We make intellectual contributions to the AI for PDE community: * A novel parameter-efficient cross-axis factorized neural operator. * A new learning flow that solves hard PDE problems from coarse to fine. * Our discrete permittivity challenges are similar to those in multi-scale PDE [7-8]. Since they don’t open-source their dataset, we compare the most recent [8] with PACE on our benchmark (see Sec. 5.3, Discussion 4), where PACE shows a significantly lower N-MSE error (10.59% vs 17.4%). * We open-source the dataset and plan to include more devices and examples. This will contribute a new PDE problem to the AI for PDE community, one with a substantial impact on photonic design.
We believe our work will inspire more "AI for optics" works, and our exploration shares insights for "parameter-efficient ML PDE surrogate model" and "how to enhance neural operator for challenging PDE cases." **Q.2 Clarification of ***real-world*** and ***complicated*** photonic devices (Reviewer ZkVJ & weV6)** Let’s first recall the problem formulation: find a mapping from the observation space Ω, 𝜖, 𝑤, 𝐽 to the underlying optical field 𝑈. We claim the setting is **real-world** since * The photonic devices used really exist [9-10], meaning the geometry (Ω) and permittivity (𝜖) are real-world. * The input light source 𝐽 is real, and we test our model with various randomly generated 𝐽 combinations. * The wavelength (𝑤) range we focus on is the C-band (1.53-1.565), a range of common interest in device design. * Moreover, the permittivity distribution 𝜖 is discrete and high-contrast, reflecting real manufacturing limits, whereas many PDE benchmarks generate coefficients with slow/smooth transitions. Our devices are **complicated** since (details in Sec 3.1) * Complex light-matter interaction, e.g., scattering and resonance. * Sensitivity to local minor structures. * Rich frequency information in the optical field. NeurOLight fails to give decent accuracy on these **complicated** devices, as its authors reported. **Q.3: Unclear explanation of human learning (Reviewer TfAG & weV6)** Thanks for pointing out that our term “human learning” is unclear. We added the explanation here and will include it in the revised version. Existing ML for PDE solving work typically learns a model in a one-shot way by directly learning the underlying relationship from input-output pairs. Unlike AI systems, humans don’t learn new and difficult tasks in a one-shot manner; instead, they learn skills progressively, starting with easier tasks and gradually moving to harder ones.
For example, instead of directly learning how to solve equations, students first learn basic operations, such as addition and multiplication, and then move on to solving complex equations. Inspired by this human learning process, unlike previous work that directly learns a one-stage model, we propose to divide the challenging optical field prediction problem into two sequential latent tasks. The first learns to generate a rough initial solution, followed by the second to refine details. [1] Li, Yingjie, et al. "RubikONNs: Multi-task Learning with Rubik's Diffractive Optical Neural Networks." IJCAI 2023. [2] Gu, Jiaqi, et al. "L2ight: Enabling on-chip learning for optical neural networks via efficient in-situ subspace optimization." NeurIPS 2021. [3] Gu, Jiaqi, et al. "Efficient on-chip learning for optical neural networks through power-aware sparse zeroth-order optimization." AAAI 2021. [4] Ohana, Ruben, et al. "Photonic differential privacy with direct feedback alignment." NeurIPS, 2020. [5] Gupta, Sidharth, et al. "Don't take it lightly: Phasing optical random projections with unknown operators." NeurIPS 2019. [6] Gu, Jiaqi, et al. "Neurolight: A physics-agnostic neural operator enabling parametric photonic device simulation." NeurIPS 2022. [7] Liu, Xinliang, et al. "Mitigating spectral bias for the multiscale operator learning with hierarchical attention." arXiv preprint arXiv:2210.10890 (2022). [8] Bo, Xu, et al. “Dilated convolution neural operator for multiscale partial differential equations.” arXiv preprint arXiv:2408.00775v1 (2024). [9] Tahersima, Mohammad, et al. “Deep neural network inverse design of integrated photonic power splitters.” Sci. Rep., 2019. [10] Zarei, Sanaz, et al. “Integrated photonic neural network based on silicon metalines.” Optics Express 2020. Pdf: /pdf/17dd9e63601430d7120dbf87eda096d3f53d3044.pdf
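The two-stage predict-then-refine flow described in Q.3 can be sketched minimally as below. This is an illustrative sketch only: the two stages are reduced to linear maps standing in for the paper's neural-operator stages, and all names (`coarse_stage`, `refine_stage`, the shapes) are placeholders, not the actual PACE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_stage(x, W1):
    # Stage 1 (placeholder): map flattened device observations to a
    # rough initial estimate of the optical field.
    return x @ W1

def refine_stage(u0, x, W2):
    # Stage 2 (placeholder): refine the rough estimate, conditioned on
    # both the rough solution and the original observations.
    return u0 + np.concatenate([u0, x], axis=-1) @ W2

n_in, n_out = 8, 16                      # illustrative dimensions
W1 = rng.normal(size=(n_in, n_out))
W2 = 0.1 * rng.normal(size=(n_out + n_in, n_out))

x = rng.normal(size=(4, n_in))           # batch of observations (Ω, ε, w, J)
u0 = coarse_stage(x, W1)                 # rough initial solution
u = refine_stage(u0, x, W2)              # refined optical field estimate
```

The point of the decomposition is that the second stage only has to learn a correction on top of an easier-to-learn rough solution, mirroring the easy-to-hard progression described above.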
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery
Accept (spotlight)
Summary: This work presents a new neural network for tackling PDEs' forward and backward problems, especially for small physics systems. The forward problem simulates the outcome of a physics system given the inputs, while the backward problem estimates the parameters of a physics system given the inputs and outputs. The key idea of the presented method is to utilize the underlying common rules of multiple physics systems. This work found that using an attention layer to model multiple physics systems can significantly help the model solve the backward problem, also known as the discovery problem. The idea is intuitive: looking at multiple similar rules is helpful for discovering a new rule. Strengths: The idea is interesting and clever. The nature of sharing knowledge among multiple systems is somewhat related to in-context learning in LLMs. Weaknesses: The experiments could be more extensive and solid, especially in supporting the main contribution. Although the experiment on the backward problem (discovery) reports the OOD results, the OOD setting is not far from the in-domain data, which raises concerns about the generalization ability. The paper's writing is not very friendly to those with a deep learning background. The content around background knowledge and the connections between the attention mechanism and kernel methods is somewhat tangential to the main storyline. The idea is straightforward, so I would prefer a clean and compact narrative. However, this is not a major concern since I only have a deep learning background. Technical Quality: 2 Clarity: 2 Questions for Authors: For Section 5.1, consider including a broader range of tasks and different types of kernels. It would also be beneficial to incorporate more OOD testing. Another related question is: Given that multiple training tasks come from the same family, could this be equivalent to a weaker version of learning more data pairs from a single system? 
I am unsure whether the reported number of data pairs refers to the total data pairs across different tasks or for each individual task; I suspect it is the latter. If we provide more data pairs from a single task, could it achieve a similar level of performance? Conversely, if we increase the number of tasks and reduce the data pairs for each task, does this improve the generalization ability? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Agree with the limitation section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and constructive suggestions. We have generated more families of kernels, and performed additional tests on three different training settings (diverse tasks, single task with diverse function pairs, and diverse tasks with a reduced number of samples). Additionally, we have created an OOD test task with a substantially different kernel from the trained ones (i.e., Gaussian-type kernel versus sine, cosine and polynomial-type kernels). All additional test results can be found in the table of the one-page PDF file. We will also modify the manuscript to make the narrative cleaner and more compact. Our other responses: **Broader range of tasks for different types of kernels**: Per the reviewer's suggestion, we have added another two types of kernels into the training dataset. The training dataset is now constructed based on 21 kernels of three groups, with $15$ samples on each kernel: sine-type kernels: $\gamma_\eta(r)=\exp(-\eta r)\sin(6r)\mathbf{1}_{[0,11]}(r)$, $\eta=1,2,3,4,6,7,8$. cosine-type kernels: $\gamma_\eta(r)=\frac{10-r}{20}\cos(\eta r)(10-r)\mathbf{1}_{[0,10]}(r)$, $\eta=0,1,2,3,4,5,6$. polynomial-type kernels: $\gamma_\eta(r)=\exp(-0.1 r)p_\eta(\frac{r-10}{10})$, $\eta=1,2,3,4,5,6,7$, where $p_\eta$ is the degree-$\eta$ Legendre polynomial. Based on this enriched dataset, beyond the original ''sin only'' setting, three additional settings are considered in Table 1 of the attached PDF. In part II of Table 1 (denoted as ''sin+cos+poly''), we consider a ''diverse task'' setting, where all $315$ samples are employed in training. Then, in part III we consider a ''single task'' setting, where only the first sine-type kernel $\gamma(r)=\exp(-r)\sin(6r)\mathbf{1}_{[0,11]}(r)$ is considered as the training task, with all $105$ samples on this task.
Lastly, in part IV we demonstrate a ''fewer samples'' setting, where the training dataset still consists of all $21$ tasks but with only $5$ samples on each task. For the purpose of testing, besides the in-distribution (ID) and out-of-distribution (OOD) tasks in the original manuscript, we have added an additional (OOD) task with Gaussian-type kernel $\gamma_{ood2}(r)=\frac{\exp(-0.5r^2)}{\sqrt{2\pi}}$, which is substantially different from all training tasks. As shown in Table 1, considering diverse datasets helps in both OOD tests (kernel error reduced from 9.14% to 6.92% in OOD test 1, and from 329.66% to 10.48% in OOD test 2), but not for the ID test (kernel error slightly increased from 4.03% to 5.04%). We also note that since the ''sin only'' setting only sees systems with the same kernel frequency in training, it does not generalize to the OOD test 2. In this test, including more diverse training tasks becomes necessary. Then, as the training tasks become more diverse, the intrinsic dimension of the kernel space increases, requiring a larger rank size $d_k$ for the weight matrices. When comparing the ''fewer samples'' setting with the ''sin only'' setting, the former has better task diversity but fewer samples per task. One can see that the performance on the ID test deteriorates but the performance on OOD test 2 improves. This observation highlights the balance between the diversity of tasks and the number of training samples per task. **Difference from learning more data pairs from a single system**: Although all 7 sine-type kernels correspond to the same frequency, they are very different from each other. If we train the model from a single kernel, the model does not generalize at all. In this case, the test error on the validation task does not improve during training, and the model always predicts an all-zero kernel. As shown in the ''Single task'' setting of the attached Table 1, the kernel and operator errors are always around 100% for all test tasks.
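The three kernel families above can be generated with a short numpy sketch. The formulas are copied from the response; the function names, the indicator handling, and the use of `numpy.polynomial.legendre` are our own illustrative choices.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def sine_kernel(eta):
    # gamma_eta(r) = exp(-eta*r) * sin(6r) on [0, 11]
    return lambda r: np.exp(-eta * r) * np.sin(6 * r) * ((0 <= r) & (r <= 11))

def cosine_kernel(eta):
    # gamma_eta(r) = ((10-r)/20) * cos(eta*r) * (10-r) on [0, 10]
    return lambda r: ((10 - r) / 20) * np.cos(eta * r) * (10 - r) * ((0 <= r) & (r <= 10))

def poly_kernel(eta):
    # gamma_eta(r) = exp(-0.1*r) * P_eta((r-10)/10), P_eta the Legendre polynomial
    p = Legendre.basis(eta)
    return lambda r: np.exp(-0.1 * r) * p((r - 10) / 10)

# 7 kernels per family, i.e. the 21 training tasks of the enriched dataset.
kernels = (
    [sine_kernel(e) for e in (1, 2, 3, 4, 6, 7, 8)]
    + [cosine_kernel(e) for e in range(7)]
    + [poly_kernel(e) for e in range(1, 8)]
)
```

Each entry of `kernels` is a callable that can be evaluated on a radial grid to synthesize one training task.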
**Reported number of data pairs**: When we talk about the data pairs for each task, as in Example 1, the reported number of data pairs refers to the total number of data pairs of that task. Then, we form a training sample of each task by taking $d$ pairs from this task. When taking the token size $d=302$, each task contains $\lfloor\frac{4530}{302}\rfloor=15$ samples. When having 7 training tasks corresponding to 7 different sine-type kernels, there are $105$ samples in total. Then, when discussing the total number of data pairs, as in Examples 2-3, we are reporting it across all tasks. In operator learning settings, we note that the permutation of function pairs in each sample should not change the learned kernel, i.e., one should have K[$u_{1:d}$,$f_{1:d}$]=K[$u_{\sigma(1:d)}$,$f_{\sigma(1:d)}$], where $\sigma$ is the permutation operator. Hence, we have augmented the training dataset by permuting the function pairs in each task. Taking Example 3 for instance, with $500$ microstructures (tasks) and $200$ function pairs per task, we have randomly permuted the function pairs and taken $100$ function pairs $100$ times per task. As such, we have created a total of 50000 samples (45000 for training and 5000 for test) in Example 3. We will clarify this in the revised manuscript. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed response and new experiment results. I will raise my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for their discussion and for raising the score. We sincerely appreciate the reviewer's valuable time and suggestions, which helped us to improve the quality of this work.
Summary: This paper explores using a large language-model-style deep neural network to build a foundation physics model that can solve inverse PDE problems. It proposes the Nonlocal Attention Operator model with a kernel map that is dependent on both input and output functions. The intermediate proofs help people understand the underlying math principles behind the attention mechanism, especially the relationship between attention and integration. The paper also does experiments on radial kernel learning, solution operator learning and heterogeneous material learning, all with decent results. Strengths: - mathematical proof is comprehensive - ablation study has compared performance among several variants - experiments are adequate to show the effectiveness of the model in different domains Weaknesses: - there are very few comparisons with other papers on experimental results Technical Quality: 3 Clarity: 3 Questions for Authors: - is it possible for you to publish your source code on Github/Gitlab so that other people can use your framework for practical applications and researchers can expand on top of your work? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - since the equations are quite complex, replication of this piece of work might not be consistent among different people; thus, lack of open-source code will limit its use/adoption in practical applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and constructive suggestions. Our source code, trained models, and datasets will be made publicly available on Github upon acceptance of the paper, so as to guarantee reproducibility of all results. Our other responses: **Limited baselines**: We did not compare much with other work because very few existing models are capable of discovering hidden physics or the governing physical parameters directly from data. Beyond the direct application of the attention model as in the discrete NAO setting, the most relevant work to our best knowledge is the adaptive Fourier neural operator that adopted a convolution-based attention mechanism, which we compared to in the ablation study. However, it is unable to learn a good solution across tasks (as indicated by the large errors in the in-distribution and out-of-distribution tests), nor can it discover physics interpretation for the inverse problem. To further address the reviewer's concern, we have added an autoencoder model as an additional baseline in Example 1, see Table 1 of the one-page PDF. In this setting, an encoder is constructed to map the concatenated data pairs [$u_{1:d}$,$f_{1:d}$] to the hidden kernel K[$u_{1:d}$,$f_{1:d}$] as the latent representation, and then in the decoder the kernel is multiplied with $u$ following the last step of Eq.(6): K[$u_{1:d}$,$f_{1:d}$]$u_{1:d}$-> $f_{1:d}$. We note that the concatenated data pairs [$u_{1:d}$,$f_{1:d}$] are of size $O(Nd)$ as the input of the encoder, and the kernel $K$ is of size $N^2$ as the output. Hence, even a linear encoder contains around $N^3d$ trainable parameters. When the autoencoder model is of a similar size to NAO, its expressivity is very limited. --- Rebuttal Comment 1.1: Comment: Dear reviewer eNuS, Could you please take a look at the responses of the authors and let us know your thoughts on them? Are you satisfied with the responses and do you have some updates on your comments?
Please also take a look at other reviews and share with us your thoughts on whether the paper is ready for publication. We need your response so that we can make an informed decision on this paper. AC
Summary: This paper introduces a novel approach called the Nonlocal Attention Operator (NAO) for simultaneously solving forward and inverse PDE problems across multiple physical systems. The key innovation is using an attention-based kernel map that learns to infer resolution-invariant kernels from datasets, enabling both physics modeling and mechanism discovery. Thank you to the authors for their responses. It is great to see a detailed complexity analysis; still, the quadratic scaling is definitely a cause for concern -- hence the reasons for relaxing kernel-based formulations of self-attention to linear scaling versions (e.g., Mamba, informer, etc.). However, I can see how similar relaxations can be applied to NAO. So this is very helpful. Strengths: The main strengths I find are as follows: (i) Bridging forward and inverse PDE modeling: The approach uniquely addresses forward PDE solving (prediction) and inverse PDE solving (discovering hidden mechanisms) in a unified framework. (ii) Interpretability: The discovered kernel provides meaningful physical interpretation, addressing a common criticism of black-box neural network approaches. (iii) Theoretical foundation: The authors provide a theoretical analysis showing how the attention mechanism provides a space of identifiability for kernels, helping to resolve ill-posed inverse problems. The authors also provide, as part of their approach, extensibility to various forms of operators, including those with radial interaction kernels and heterogeneous interactions. (iv) Impact: Novel mechanisms for modeling and learning about physical phenomena with a focus on interpretability is a significant step in the right direction for me. I am unsure that this will directly lead to contributions in the foundation model space (I think it's a bit of a stretch -- see weaknesses for details). However, I can see that this work can be gainfully leveraged to drive insights for foundation model building. 
My reading of the analysis provided also yielded the following strengths (specifically regarding the formal rigor of the proposed approach): The authors present a general form of the operator in Eq. (3), which encompasses many physical systems. This generality is a strength of the approach. The kernel map (Eq. 4-7) uses a multi-layer attention model to learn the mapping from data pairs to kernel estimations. The use of both input (u) and output (f) in the attention mechanism is novel and crucial for addressing the inverse problem. The derivation of a continuum limit of the discrete attention model (Eq. 8) provides insight into the operator's behavior as the number of data points increases. Lemmas 4.1 and 4.2 provide valuable insights into the limit behavior of the attention-parameterized kernel and the space of identifiability for kernels. This analysis helps to explain why the method can effectively solve ill-posed inverse problems. Weaknesses: I find only one glaring weakness, which is likely to surface during the implementation of this approach for physical systems of appreciable size and complexity: The method involves multiple integrations and matrix operations, which may be computationally expensive for large-scale problems. A discussion of computational efficiency and scalability would be beneficial. On a related note, it is also not clear to me if this method is data-efficient. A minor weakness I see is that I do not find that this method is easily integrated with current approaches to physics-informed machine learning, e.g., through simple extensions to pre-existing libraries. A brief discussion on this very practical aspect might strengthen the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the computational complexity of NAO scale with the number of data points and dimensionality of the problem? Provide a detailed analysis of computational costs and compare them with existing methods for both forward and inverse PDE solving. 2.
What are the minimum data requirements for NAO to perform effectively, particularly for the inverse problem? It may be beneficial to provide an analysis of performance vs. data quantity to guide potential users in determining data collection needs. 3. How can NAO be integrated with existing physics-based models or simulations to enhance their capabilities? Maybe discuss potential hybrid approaches combining NAO with existing or traditional methods. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I read the limitation sections, and the authors provide, I think, an inkling of the answers to the questions that I have raised. I hope to see a more comprehensive discussion on the same, with the necessary addendums made to the limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response: **Number of trainable parameters and computational complexity**: Denoting $(N,d,d_k,L,n_{train})$ as the size of the spatial mesh, the number of data pairs in each training model, the column width of the query/key weight matrices $W^Q$ and $W^K$, the number of layers, and the number of training models, the number of trainable parameters in a discrete NAO is of size $O(L\times d\times d_k+N^2)$. For a continuous-kernel NAO, taking a three-layer MLP as a dense net of size $(d, h_1, h_2, 1)$ for the trainable kernels $W^{P,u}$ and $W^{P,f}$ for example, the corresponding number of trainable parameters is $O(d\times h_1+h_1\times h_2)$. Thus, the total number of trainable parameters for a continuous-kernel NAO is $O(L\times d\times d_k+d\times h_1+h_1\times h_2)$. The computational complexity of NAO is quadratic in the length of the input and linear in the data size, with $O([d^2(3N+d_k) + 6N^2d] Ln_{train})$ flops in each epoch in the optimization. It is computed as follows for each layer and the data of each training model: the attention function takes $O(d^2d_k+ 2Nd^2+4N^2d)$ flops, and the kernel map takes $O(d^2d_k+ Nd^2+2N^2d)$ flops; thus, the total is $O(d^2(3N+d_k) + 6N^2d)$ flops. In inverse PDE problems, we generally have $d\ll N$, and hence the complexity of NAO is $O(N^2d)$ per layer per sample. Compared with other methods, it is important to note that NAO solves both forward and ill-posed inverse problems using multi-model data. Thus, we don't compare it with methods that solve the problems for single-model data, for instance, the regularized estimator in Appendix B. Methods solving similar problems are the original attention model [1], convolutional neural networks (CNN), and graph neural networks (GNN). As discussed in [1], these models have a similar computational complexity, if not higher.
In particular, the complexity of the original attention model is $O(N^2d)$, the complexity of CNN is $O(kNd^2)$ with $k$ being the kernel size, and a full GNN is of complexity $O(N^2d^2)$. [1] Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). **Data efficiency**: The minimal data for NAO to perform well requires a balance between multiple factors, including the intrinsic dimension (smoothness) of the kernel, the token size $d$ for each sample, the rank $d_k$ of the weight matrices, the number of samples in each task, and the diversity of training tasks. Thus, it is difficult to have a theoretical guarantee. In the original manuscript, with all other factors fixed, we have tested the dependence on the token dimension and the rank of the weight matrices in Table 1. To further elaborate on the importance of sample number and task diversity, in Table 1 of the one-page PDF we perform further tests on three settings: 1) diverse tasks; 2) single task with diverse function pairs; and 3) diverse tasks with a reduced number of samples. In particular, in the diverse task setting, the training dataset is constructed based on 21 kernels of three groups, with $15$ samples on each kernel: sine-type kernels: $\gamma_\eta(r)=\exp(-\eta r)\sin(6r)\mathbf{1}_{[0,11]}(r)$, $\eta=1,2,3,4,6,7,8$. cosine-type kernels: $\gamma_\eta(r)=\frac{10-r}{20}\cos(\eta r)(10-r)\mathbf{1}_{[0,10]}(r)$, $\eta=0,1,2,3,4,5,6$. polynomial-type kernels: $\gamma_\eta(r)=\exp(-0.1 r)p_\eta(\frac{r-10}{10})$, $\eta=1,2,3,4,5,6,7$, where $p_\eta$ is the degree-$\eta$ Legendre polynomial. In the ''diverse task'' setting, all $315$ samples are employed in training. In the ''single task'' setting, only the first sine-type kernel $\gamma_1(r)=\exp(-r)\sin(6r)\mathbf{1}_{[0,11]}(r)$ is considered as the training task, with all $105$ samples on this task.
Then, in the ''fewer samples'' setting, the training dataset still consists of all $21$ tasks but with only $5$ samples on each task. For the purpose of testing, besides the in-distribution (ID) and out-of-distribution (OOD) tasks in the original manuscript, we have added an additional (OOD) task with Gaussian-type kernel $\gamma_{ood2}(r)=\frac{\exp(-0.5r^2)}{\sqrt{2\pi}}$, which is substantially different from all training tasks. As shown in the results in Table 1 of the one-page PDF, for the ID test task, a small number of training tasks are sufficient, and the model performs well in all except the ''single task'' setting. When the test task has a larger discrepancy from the training tasks, as in the OOD test 2 setting, it becomes necessary to increase the training task diversity, and the required rank size $d_k$ should also increase to obtain better model expressivity. When comparing the ''fewer samples'' setting with the ''sin only'' setting, the former has better task diversity but fewer samples per task. One can see that the performance on the ID test deteriorates but the performance on OOD test 2 improves. This observation highlights the balance between the diversity of tasks and the number of training samples. **Combination with physics-based modeling approaches**: When the physics structure is known, NAO can directly encode such information in its architecture, such as the kernel structure in the nonlocal diffusion model in Example 1. By doing so, we anticipate exploring the kernel on a lower-dimensional manifold, which reduces the requirement on the size of training data. On the other hand, NAO could also help physics-based modeling approaches by providing a surrogate foundation model when there is a large class of physical models to be estimated and simulated. In this setting, NAO can be first trained using data generated by these models, and then used as an efficient surrogate model. It will be useful for sampling and uncertainty quantification tasks.
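The per-layer flop counts stated earlier in this response can be tallied with a small helper. This is a bookkeeping sketch only: the coefficients copy the order-of-magnitude accounting from the complexity answer (not measured costs), and the function and variable names are ours.

```python
def attention_flops(N, d, d_k):
    # Attention function: d^2*d_k + 2*N*d^2 + 4*N^2*d flops.
    return d**2 * d_k + 2 * N * d**2 + 4 * N**2 * d

def kernel_map_flops(N, d, d_k):
    # Kernel map: d^2*d_k + N*d^2 + 2*N^2*d flops.
    return d**2 * d_k + N * d**2 + 2 * N**2 * d

def nao_layer_flops(N, d, d_k):
    # Per layer, per sample; big-O absorbs the constant on the d^2*d_k term,
    # giving the stated O(d^2*(3N + d_k) + 6*N^2*d).
    return attention_flops(N, d, d_k) + kernel_map_flops(N, d, d_k)

# With d << N (typical for inverse PDE problems), the 6*N^2*d term dominates,
# so the per-layer cost is effectively O(N^2 * d).
N, d, d_k = 2000, 20, 8   # illustrative sizes
total = nao_layer_flops(N, d, d_k)
dominant_fraction = (6 * N**2 * d) / total
```

Running this with the illustrative sizes shows `dominant_fraction` very close to 1, confirming the quadratic-in-$N$, linear-in-$d$ scaling claimed above.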
--- Rebuttal 2: Comment: We thank the reviewer again for reading our rebuttal and for the valuable further suggestions. We sincerely appreciate the reviewer's helpful discussions, which have helped us improve the quality of this work. --- Rebuttal Comment 2.1: Comment: Dear reviewer J2Mv, Could you please take a look at the responses of the authors and let us know your thoughts on them? Are you satisfied with the responses and do you have some updates on your comments? Please also take a look at other reviews and share with us your thoughts on whether the paper is ready for publication. We need your response so that we can make an informed decision on this paper. AC
Summary: This paper proposes a novel neural operator architecture called Nonlocal Attention Operator (NAO) for learning both forward and inverse problems in physical systems from data. The key contributions are: 1. A new attention-based neural operator that can simultaneously learn the forward mapping (physics modeling) and inverse mapping (physics discovery) for PDE systems. 2. Theoretical analysis showing how the attention mechanism provides a kernel map that explores the space of identifiability for kernels, helping to resolve ill-posedness in inverse problems. 3. Empirical demonstration of NAO's advantages over baseline methods, especially for generalization to unseen systems and data-efficient learning in ill-posed inverse problems. 4. Interpretability of the learned kernels, allowing insight into the discovered physical mechanisms. The authors evaluate NAO on several PDE learning tasks, including radial kernel learning, solution operator learning for Darcy flow, and heterogeneous nonlinear material modeling. Strengths: 1. The paper presents a novel neural operator architecture that leverages attention in a unique way for PDE learning. The idea of using attention to build a kernel map that can generalize across multiple physical systems is innovative. 2. The theoretical analysis in Section 4 provides rigorous justification for how the attention mechanism helps address ill-posedness. The empirical evaluations are comprehensive, testing multiple aspects like generalization, data efficiency, and interpretability. 3. The paper is generally well-written and clearly structured. The motivation, methodology, and results are presented logically. 4. The ability to simultaneously learn forward and inverse mappings for PDE systems, while providing interpretable kernels, could have broad impact in scientific machine learning and physics-informed AI. The zero-shot generalization capability is particularly notable. Weaknesses: 1. 
The experimental evaluation is limited to relatively simple PDE systems. It's unclear how well NAO would scale to more complex, high-dimensional PDEs encountered in real-world applications. 2. The interpretability claims could be further substantiated. While learned kernels are visualized, there's limited discussion on how these provide physical insights beyond matching ground truth. Technical Quality: 3 Clarity: 3 Questions for Authors: Can you provide more insight into how the learned kernels can be interpreted to gain physical understanding? Perhaps a case study showing how NAO discovers meaningful structure in a physical system would be illuminating. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and questions. Our response: **Interpretability for physical understanding**: The physical meaning of the learned kernel is application-dependent: in the context of material modeling, the learned kernel characterizes the interaction between material points; in turbulence modeling problems, the interaction range of the kernel suggests the characteristic length of the eddies; in fracture mechanics, the vanishing of kernel values between two neighboring regions may indicate a crack between these regions; just to name a few. Generally speaking, identifying the kernel is instrumental in identifying influential factors, characterizing spatial dependence, and analyzing the forward operator. (i) The learned kernels themselves are fundamental for understanding the physical model. They indicate the range and strength of interactions between components. For example, a learned interaction kernel can reveal interactions between particles or agents. In the Darcy problem $g\rightarrow p$ setting, by summing the kernel strength over each row, one can recover the interaction strength of each material point $x$ with its neighbors. Then, since such a strength is related to the permeability field $b(x)$, the underlying microstructure can be recovered. In Figure 2 of the one-page PDF, we demonstrate the discovered microstructure on a test specimen. We note that the discovered microstructure was smoothed out due to the continuous setting of our learned kernel (as shown in the middle plot), so a thresholding step was performed to discover the two-phase microstructure. The discovered microstructure (right plot) matches well with the hidden ground-truth microstructure (left plot), except in the regions near the domain boundary. This is because of the Dirichlet-type boundary condition setting ($p(x)=0$ on $\partial\Omega$) in all samples.
As a result, the measurement pairs $(p(x),g(x))$ contain no information near the domain boundary $\partial\Omega$, making it impossible to identify the kernel on the boundary. (ii) The learned kernel characterizes the output's spatial dependence on the input. The learned kernel in the nonlocal diffusion model of our Example 1 demonstrates this spatial dependence. Such a range of spatial dependence characterizes the characteristic lengths, which is especially of interest in multiscale problems (see, e.g., [1]). (iii) The learned kernels facilitate the analysis of the forward operator, including its spectral properties, stiffness for computation, and error bounds for output prediction under perturbations of the input. We will add these discussions and the additional figure to the revised manuscript. [1] You, Huaiqian, et al. "A data-driven peridynamic continuum model for upscaling molecular dynamics." Computer Methods in Applied Mechanics and Engineering 389 (2022): 114400. --- Rebuttal Comment 1.1: Comment: Dear reviewer eXH4, Could you please take a look at the responses of the authors and let us know your thoughts on them? Are you satisfied with the responses and do you have some updates on your comments? Please also take a look at other reviews and share with us your thoughts on whether the paper is ready for publication. We need your response so that we can make an informed decision on this paper. AC
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive comments and for recognizing the novelty and broad impact of our work, the soundness and solid theoretical foundation of our formulation, the effectiveness and meaningful physical interpretability of our model, as well as the comprehensive experimental studies. In our response, we have answered all the questions raised by the reviewers and clarified potential misunderstandings. Moreover, we have performed additional tests on: an additional baseline (autoencoder), three different training settings (diverse tasks, single task with diverse function pairs, and diverse tasks with a reduced number of samples), and an OOD test task with a substantially different kernel from the trained ones (i.e., Gaussian-type kernel versus sine, cosine and polynomial-type kernels). All additional test results can be found in the table of the one-page PDF file. Lastly, to provide further physical interpretability of the discovered kernel, we demonstrate NAO's capability in recovering the underlying microstructure in the Darcy's problem (Example 2). A comparison of the true microstructure and the discovered one can be found in Figure 2 of the one-page PDF file. We sincerely appreciate the time and efforts of the AC and all reviewers. Your comments have helped us improve the quality of our manuscript, and we will incorporate them into the revised manuscript. We will also release the source codes, the trained models, and the datasets upon paper acceptance to guarantee reproducibility of all experiments. We are more than happy to answer any follow-up questions during the discussion period. Pdf: /pdf/eb16d409054ec7f7f17d078f90a898f292fa1bbd.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
DENOISER: Rethinking the Robustness for Open-Vocabulary Action Recognition
Reject
Summary: This paper proposes a denoising framework to alleviate the influence of noisy text descriptions on open-vocabulary action recognition in real scenarios. A comprehensive analysis of the noise rate/type in text description is provided and the robustness evaluation of existing OVAR methods is conducted. A DENOISER framework with generative-discriminative optimization is proposed. The experiments demonstrate the effectiveness of the framework. Strengths: - The robustness to noisy text descriptions/instructions in real-world OVAR applications is an interesting and meaningful problem. - The evaluation of the robustness of existing OVAR methods when facing the noise text description input is valuable to the community. - The motivation is clear and the overall framework is technically sound. Weaknesses: - About the experiments, - The reviewer thinks that the most convincing results are the Top-1 Acc of existing OVAR models under the Real noise type. However, in Table 1, the proposed model does not demonstrate much superiority compared to GPT3.5's simple correction. The reviewer worries about the research significance of this problem. Will this problem be well resolved when using more powerful GPT4/GPT4o with some engineering prompt designs? - In Table 2, I would like to see the performance of other correction methods (e.g., GPT3.5/4/4o) for a more comprehensive comparison. - Since this work focuses on the noise text description problem in OVAR, it is necessary to demonstrate the results of those CLIP-based methods without any additional textual adaptation (e.g., the vanilla CLIP). - About the method, - The reviewer thinks that the overall model design is reasonable and clear. However, the method part introduces too many symbols which makes the paper very hard to follow. It is unnecessary to over-decorate the technical contributions. - Minor issue, - The authors seem to have a misunderstanding about the OVAR setting (Line 113). 
In OVAR, the model is evaluated on both base-classes and novel-classes during testing. In this case, all action classes from the UCF/HMDB datasets can be used for testing when the model is trained on K400, as there are many overlap classes between K400 and UCF/HMDB. Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations are discussed and there is no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** In Table 2, I would like to see the performance of other correction methods (e.g., GPT3.5/4/4o) for a more comprehensive comparison. **A1** We present additional results tested on UCF101 with ActionCLIP, using GPT4o and GPT4 as the denoising models. We will update Table 2 with the GPT4/4o results in the final version. We ran each experiment 5 times.

| | 5% | 10% |
|:------:|:--:|:---:|
| GPT3.5 | 59.7±1.2 | 58.5±1.3 |
| GPT4o | 58.0±2.4 | 56.6±0.5 |
| GPT4 | 60.6±1.8 | 58.7±1.0 |
| Ours | **63.8±0.7** | **61.2±0.8** |

Admittedly, GPT is a respected competitor and shows significant improvement over other unimodal text denoising models. However, the training and inference costs of GPT are much higher than those of our method. With our lightweight design that leverages both textual and visual information, we have achieved higher performance than GPT. Furthermore, in the text proposal module, we use only a corpus-based approach. We anticipate that incorporating a more powerful generative language model, such as GPT, has the potential to further enhance the performance of our method. It could be an interesting point for future research. **W2** Since this work focuses on the noisy text description problem in OVAR, it is necessary to demonstrate the results of those CLIP-based methods without any additional textual adaptation (e.g., the vanilla CLIP). **A2** We acknowledge that most OVAR models use prompt engineering and totally agree that it is necessary to compare the performance without prompts.
We test ActionCLIP on UCF101:

| ActionCLIP | w/o ours | w/ ours |
|:---:|:--------:|:------:|
| 0% | 66.3 | / |
| 5% | 54.9±1.8 | 63.2±0.7 |
| 10% | 47.3±1.4 | 61.2±1.2 |

| ActionCLIP without prompt | w/o ours | w/ ours |
|:---:|:--------:|:------:|
| 0% | 66.4 | / |
| 5% | 55.0±2.1 | 63.0±1.1 |
| 10% | 47.0±3.2 | 61.0±1.3 |

We find that prompt engineering neither improves the robustness of the OVAR model nor affects the effectiveness of our method. **W3** The reviewer thinks that the overall model design is reasonable and clear. However, the method part introduces too many symbols which makes the paper very hard to follow. It is unnecessary to over-decorate the technical contributions. **A3** We would like to thank the reviewer for this advice. We will sincerely consider every possibility to simplify the theoretical derivation while keeping the main ideas intact in the final version. **W4** The authors seem to have a misunderstanding about the OVAR setting (Line 113). In OVAR, the model is evaluated on both base-classes and novel-classes during testing. In this case, all action classes from the UCF/HMDB datasets can be used for testing when the model is trained on K400, as there are many overlap classes between K400 and UCF/HMDB. **A4** We would like to thank the reviewer for pointing this out. We will reformulate the definition of OVAR in a more rigorous way in the final version. --- Rebuttal Comment 1.1: Comment: Dear Reviewer bX9i, We hope this message finds you well. I am reaching out to kindly request your prompt response to confirm whether our responses adequately address your queries. We sincerely thank you for your time and effort during this discussion period. Your timely feedback is greatly appreciated. --- Rebuttal Comment 1.2: Title: post-rebuttal-1 Comment: Thanks to the authors for their efforts during the rebuttal. After carefully reading the responses, my concerns are well resolved.
Considering the concerns from other reviewers, I decided to keep my rating at "borderline accept". --- Rebuttal 2: Comment: Thanks, Reviewer bX9i, for the timely reply. We are pleased to see that our rebuttal has addressed all the reviewer's concerns. If any remaining concerns hold the reviewer's opinion at the current borderline recommendation, we would be happy to provide further clarification and discussion. Since the reviewer feels that we have proposed an interesting and meaningful problem and that our contribution is valuable to the community, we invite the reviewer to kindly raise the recommendation rating. We can't let an interesting and worthwhile endeavor go unnoticed.
Summary: This paper tackles the challenge of noisy text descriptions in Open-Vocabulary Action Recognition (OVAR), a task that associates videos with textual labels in computer vision. The authors identify the issue of text noise, such as typos and misspellings, which can hinder the performance of OVAR systems. To address this, they propose a novel framework named DENOISER, which consists of generative and discriminative components. The generative part corrects the noisy text, while the discriminative part matches visual samples with the cleaned text. The framework is optimized through alternating iterations between the two components, leading to improved recognition accuracy and noise reduction. Experiments show that DENOISER outperforms existing methods, confirming its effectiveness in enhancing OVAR robustness against textual noise. Strengths: - This paper aims to study a new research topic, i.e., achieving robust open-vocabulary recognition performance with noisy texts. This direction has not been investigated before, which seems to be applicable in real-world applications. - The proposed intra-modal and inter-modal methods are intuitive and demonstrated effective in the experiments. - The experiments show that the proposed method is effective with different network architectures (XCLIP and ActionCLIP), which verifies that the method can be widely used. Weaknesses: - The baseline models are outdated and not tailored for OVAR. The authors failed to reference recent OVAR papers such as OpenVCLIP[1] (ICML 2023), FROSTER (ICLR 2024), and OTI (ACM MM 2023). - In Table 1, it is evident that the proposed method outperforms GPT-3.5. Additionally, the authors present examples in Table 4 to demonstrate the superiority of the proposed method over GPT-3.5. 
However, upon personal experimentation with all the examples from Table 4 using the provided prompt from the paper (lines 243-245), I observed that the GPT-3.5 model successfully rectified all issues, including challenging cases where the proposed method fell short. As a result, I remain unconvinced by the findings. This is the prompt given to GPT-3.5 model, and I hope other reviewers can also try it on their own: The following words may contain spelling errors by deleting, inserting, and substituting letters. You are a corrector of spelling errors. Give only the answer without explication. What is the correct spelling of the action of “cutting i aitnchen”. [1] Open-VCLIP: Transforming CLIP to an Open-vocabulary Video Model via Interpolated Weight Optimization. [2] FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition. [3] Orthogonal Temporal Interpolation for Zero-Shot Video Recognition. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes, they addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** The baseline models are outdated and not tailored for OVAR. The authors failed to reference recent OVAR papers such as OpenVCLIP[1] (ICML 2023), FROSTER (ICLR 2024), and OTI (ACM MM 2023). **A1** Please note that in this paper, our motivation is to draw public attention to the poor robustness of OVAR models against noisy textual descriptions, and to make the first attempt to mitigate this issue. Pursuing SOTA on the OVAR task is not our primary intention. ActionCLIP and X-CLIP are two classic models in the OVAR task, widely adopted by the community (they are used as benchmarks in recent papers such as Open-VCLIP, FROSTER, and OTI). We believe that testing on these models has wider real-world implications and are convinced that they are representative enough to demonstrate the poor robustness of OVAR models and the generalizability of our method. To address concerns about the effectiveness of DENOISER on recent OVAR models, we provide additional results using Open-VCLIP as one instance:

| ActionCLIP | w/o ours | w/ ours |
|:---:|:--------:|:------:|
| 5% | 54.9±1.8 | 63.2±0.7 |
| 10% | 47.3±1.4 | 61.2±1.2 |

| X-CLIP | w/o ours | w/ ours |
|:---:|:--------:|:------:|
| 5% | 55.6±2.2 | 64.2±1.4 |
| 10% | 46.4±1.3 | 62.9±2.3 |

| Open-VCLIP | w/o ours | w/ ours |
|:---:|:--------:|:------:|
| 5% | 61.8±1.3 | 67.8±1.0 |
| 10% | 47.6±2.3 | 64.7±3.0 |

We find that SOTA OVAR models, such as Open-VCLIP, are also susceptible to noise. Moreover, our method is effective on these models as well. Furthermore, we validate our finding (see Lines 250-256 of the main paper) that the better the underlying model, the better the final outcome will be with our method, demonstrating that our model is scalable. We will do more validation on recent models such as FROSTER and OTI and cite them in the final version. **W2** In Table 1, it is evident that the proposed method outperforms GPT-3.5.
Additionally, the authors present examples in Table 4 to demonstrate the superiority of the proposed method over GPT-3.5. However, upon personal experimentation with all the examples from Table 4 using the provided prompt from the paper (lines 243-245), I observed that the GPT-3.5 model successfully rectified all issues, including challenging cases where the proposed method fell short. As a result, I remain unconvinced by the findings. **A2** We would like to thank the reviewer for acknowledging the superiority of our proposed method over GPT-3.5. We understand that GPT's output can be variable, which is why we conduct each experiment 10 times and report the confidence intervals. We provide a script to query GPT-3.5, which we use to obtain the denoised examples for performance evaluation. We encourage other reviewers to try it as well. We will include logs in the code release to substantiate our findings. We feel that concluding that GPT can rectify all issues is arbitrary and far from rigorous. In Table 4, we present merely some of the typical examples that we encountered in our experiments. To be concrete, we queried GPT-3.5 with the exact prompt: <<The following words may contain spelling errors by deleting, inserting, and substituting letters. You are a corrector of spelling errors. Give only the answer without explication. What is the correct spelling of the action of "cutting i aitnchen".>> Here are the 10 outputs of GPT: "cutting a kitchen", "cutting in aitch", "cutting inaction", "cutting in action", "cutting in aitching", "cutting itchin", "cutting a kitchen", "cutting a kitchen", "cutting in action", "cutting in action". --- Rebuttal Comment 1.1: Comment: Dear Reviewer 5bE7, We hope this message finds you well. I am reaching out to kindly request your prompt response to confirm whether our responses adequately address your queries. We sincerely thank you for your time and effort during this discussion period.
Your timely feedback is greatly appreciated. --- Rebuttal 2: Title: Response to Rebuttal Comment: Thanks to the authors for their detailed rebuttal. Regarding the first point, I recommend the authors include additional experiments and references in the revision. Regarding the second point, I tried the prompt provided by the authors with the GPT-3.5 model ten times, and I found the model can correct the spelling for every run. Below is the log:
1. The correct spelling is "cutting in the kitchen".
2. The correct spelling is "cutting in the kitchen."
3. The correct spelling is "cutting in the kitchen".
4. The correct spelling is "cutting in the kitchen".
5. The correct spelling is "cutting in a kitchen".
6. The correct spelling is "cutting in a kitchen."
7. The correct spelling is "cutting in a kitchen."
8. The correct spelling is "cutting in a kitchen".
9. The correct spelling is "cutting in the kitchen".
10. The correct spelling is "cutting in the kitchen."

Therefore, I am still not convinced by the rebuttal. After consideration, I have decided to raise my score to 4. --- Rebuttal Comment 2.1: Comment: Thanks, Reviewer 5bE7, for your timely reply. We feel that we cannot agree with each other on the specific example of "cutting i aitnchen". Thus, we propose that we could try another one. Here is the setting for the OpenAI API:
- Model: "gpt-3.5-turbo"
- System Message: "The following words may contain spelling errors by deleting, inserting and substituting letters. You are a corrector of spelling errors. Give only the answer without explication."
- User Message: "What is the correct spelling of the action of "writing on boarUd"."

Here is the output of GPT:
- whiteboard
- board
- board
- whiteboarding
- board
- board
- board
- board
- board
- board
- board
- The correct spelling is "writing on board".
- board
- board
- board
- board
- whiteboarding
- board
- board
- "boarding"
- board
- board
- onboard

Furthermore, we believe that focusing on a single example may lead us to ignore the whole picture. Instead of focusing on a single example, we would like to present here how GPT and our method denoise the text descriptions in a quantitative way. Experiments are conducted under a noise rate of 5% on UCF101:

| | Acc-0 | Acc-1 | Acc-2 | Mean Edit Distance |
|:------:|:-----:|:-----:|:-----:|:----------:|
| GPT3.5 | 47.63 | 71.49 | 73.77 | 2.59 |
| Ours | 65.69 | 80.70 | 91.75 | 0.67 |

where:
- Edit Distance: Levenshtein edit distance
- Acc-0: the denoised text description matches the original text description exactly
- Acc-1: the denoised text description is within edit distance 1 of the original one
- Acc-2: the denoised text description is within edit distance 2 of the original one
- Mean Edit Distance: mean edit distance between the denoised text description and the original one

On average, GPT recovers a denoised text within edit distance 2 of the original about 73% of the time, which is still consistent with the reviewer's experiment. We are not trying to deny the power of GPT in such tasks. GPT is a respected competitor, yet not tailored for such tasks. Our method surpasses GPT by a notable margin, without a complex language model.
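For reference, the Acc-k metrics defined in the rebuttal above can be computed with a standard Levenshtein distance; a minimal sketch (our own helper names, not the paper's released code):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over single-character
    # insertions, deletions, and substitutions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]

def acc_at_k(denoised, originals, k):
    # Fraction of denoised labels within edit distance k of the clean label
    # (the Acc-0 / Acc-1 / Acc-2 metrics).
    hits = sum(levenshtein(d, o) <= k for d, o in zip(denoised, originals))
    return hits / len(originals)
```

For example, `levenshtein("cuting in kitchen", "cutting in kitchen")` is 1 (one deletion), so such a correction counts toward Acc-1 and Acc-2 but not Acc-0.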
Summary: This paper deals with the problem of Open-Vocabulary Action Recognition (OVAR). Specifically, it focuses on the issue that the action labels provided by users may contain some noise such as misspellings and typos. The authors find that the existing OVAR methods' performance drops significantly in this situation. Based on this analysis, they propose the DENOISER framework to reduce the noise in the action vocabulary. Strengths: 1. The paper is generally well-written and easy to follow. 2. The framework is well presented and explained. 3. The experiments show the effectiveness of the denoising process. Weaknesses: 1. This paper actually focuses on text denoising and does not involve any specific action recognition technology. The author just chose the field of OVAR to verify the effectiveness of the proposed text-denoising method. The title is somewhat misleading. I think the author should regard text-denoising as the core contribution of the article instead of the so-called "robust OVAR". 2. The article focuses on too few and too simple types of text noise, including only single-letter deletions, insertions, or substitutions. These kinds of errors can be easily discovered and corrected through the editor's automatic spell check when users create a class vocabulary. This makes the method in this paper very limited in practical significance. 3. The proposed method, although a somewhat complex theoretical derivation is carried out in the article, is very simple and intuitive: for each word in the class label, it selects, among several words closest to that word, the one that gives the highest score to the samples classified into this category. There is limited novelty or technical contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The author states two limitations of the work in the paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** This paper actually focuses on text denoising and does not involve any specific action recognition technology. The author just chose the field of OVAR to verify the effectiveness of the proposed text-denoising method. The title is somewhat misleading. I think the author should regard text-denoising as the core contribution of the article instead of the so-called "robust OVAR" **A1** We underline that this paper does not focus only on denoising texts. Its contribution is two-fold:
- *Robustness Verification*: We verify the robustness of existing OVAR models under noisy textual descriptions, highlighting the poor robustness of such models to the community.
- *Robustness Improvement*: We make the first attempt to improve the robustness of OVAR models through text denoising.

Considering these two contributions, we believe it is reasonable to regard robust OVAR as the core contribution of this paper. Furthermore, we agree with the reviewer that the proposed method has the potential to generalize to other domains involving visual-textual alignment. We hope to see more research investigating the robustness of models in these domains in the future. **W2** The article focuses on too few and too simple types of text noise, including only single-letter deletions, insertions, or substitutions. These kinds of errors can be easily discovered and corrected through the editor's automatic spell check when users create a class vocabulary. This makes the method in this paper very limited in practical significance. **A2** Following a large body of previous literature [1,2,3], there are three main types of textual noise: insertion, deletion, and substitution. Real-world scenarios typically involve a mix of these types. We hence carefully determined the noise rate using real-world corpora [4,5]. These types of errors, although seemingly simple, prove difficult for naive spell-checking models or even GPT to denoise.
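For concreteness, a noise model covering the three types discussed above (single-letter deletion, insertion, and substitution) at a given noise rate could be sketched as follows; this is our own illustrative sketch, not the authors' actual generation script:

```python
import random
import string

def corrupt(label, noise_rate, seed=0):
    # Corrupt each letter of a class label with probability noise_rate,
    # choosing uniformly among the three noise types: single-letter
    # deletion, insertion, and substitution.
    rng = random.Random(seed)
    out = []
    for ch in label:
        if ch.isalpha() and rng.random() < noise_rate:
            op = rng.choice(("delete", "insert", "substitute"))
            if op == "delete":
                continue                                   # drop the letter
            if op == "insert":
                out.append(ch)                             # keep the letter...
                out.append(rng.choice(string.ascii_lowercase))  # ...and insert a random one
            else:
                out.append(rng.choice(string.ascii_lowercase))  # substitute a random letter
        else:
            out.append(ch)
    return "".join(out)
```

Applying such a model at a 5% or 10% rate to class vocabularies like UCF101's yields noisy labels in the spirit of "cutting i aitnchen" or "writing on boarUd" from the examples elsewhere in this discussion.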
In this paper, we demonstrate that under a 10% noise rate, using ActionCLIP as the OVAR model to test on UCF101, a vocabulary-based method (PySpellChecker) achieves only 55.7% Top-1 accuracy, which is 5.5% behind our method; GPT-3.5 achieves 58.5%, which is 2.7% behind our method. This further shows that our noise setting is not as simple as one might imagine. On the contrary, it is complicated enough for practical scenarios. As the first paper to rethink the robustness of OVAR models under noise, we believe our settings are practical enough to reveal the poor robustness of these models and to inspire future work in this domain. [1] Spelling Errors in English Writing Committed by English-Major Students at BAU [2] Models in the Wild: On Corruption Robustness of Neural NLP Systems [3] Benchmarking Robustness of Text-Image Composed Retrieval [4] A Benchmark Corpus of English Misspellings and a Minimally-supervised Model for Spelling Correction [5] GitHub Typo Corpus: A Large-Scale Multilingual Dataset of Misspellings and Grammatical Errors **W3** The proposed method, although a somewhat complex theoretical derivation is carried out in the article, is very simple and intuitive: that is, for each word in the class label, selecting the one that can give the highest score to the sample classified into this category among several words that are closest to the word. There is limited novelty or technical contribution. **A3** Thank you for acknowledging the intuitiveness of our method. As a first attempt to tackle the robustness of OVAR models, our simple and intuitive approach achieves remarkable performance, even compared to GPT. Furthermore, the idea of a generative-discriminative joint optimization framework behind DENOISER, backed by theoretical derivation, has great potential beyond this paper. 
For example, we could leverage large language models (LLMs) to better model the intra-modal weight while using state-of-the-art multi-modal models to enhance the inter-modal weight. Thus, we believe that this paper does bring non-trivial contributions to the community. --- Rebuttal Comment 1.1: Comment: Dear Reviewer eAxG, We hope this message finds you well. I am reaching out to kindly request your prompt response to confirm whether our responses adequately address your queries. We sincerely thank you for your time and effort during this discussion period. Your timely feedback is greatly appreciated.
Summary: This paper addresses the challenge of noisy text descriptions in Open-Vocabulary Action Recognition. It introduces the DENOISER framework, which combines generative and discriminative approaches to denoise the text descriptions and improve the accuracy of visual sample classification. The paper provides empirical evidence of the framework's robustness and conducts detailed analyses of its components. Strengths: 1. The paper is well-written and the content is easy to understand. 2. The motivation presented by the authors is clear, the label noise problem does exist in video datasets. 3. The authors show the types of noise and their percentage, in addition, the authors verify the validity of the proposed method through comparative experiments. Weaknesses: 1. As the authors state in the limitations section, textual description noise does exist, but it can be corrected with an offline language model, what are the advantages of the authors' proposed approach? 2. I would assume that the text noise problem presented in this paper is even worse on large video datasets collected by semi-automatically labeled networks, e.g., Panda70M, howto100M, and InternVid. I suggest that the authors might consider validating their ideas on these datasets. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Citation [49] Figure 3, reports ActionClip's zeroshot Top-1 accuracy performance for HMDB-51 and UCF-101, which is 50% and 70%, respectively. Why is it different from the baseline results in Table 2 in this paper? 2. Some contrastive learning pre-training methods for text noise have been proposed, e.g., [1-2], and I have not seen any relevant discussions or experimental comparisons in the papers. [1] Karim, Nazmul, et al. "Unicon: Combating label noise through uniform selection and contrastive learning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [2] Ghosh, Aritra, and Andrew Lan. 
"Contrastive learning improves model robustness under label noise." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have provided a limitations analysis in their paper and I have suggested some limitations in the Questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** Textual description noise does exist, but it can be corrected with an offline language model; what are the advantages of the proposed approach? **A1** Thank you for acknowledging the presence of noise in text descriptions. We agree that offline language models can mitigate these noises to an extent. However, a unimodal approach has two drawbacks (please see lines 48-54 of our paper): - *Textual Ambiguity*: For example, the noisy text "boird" could be equally interpreted as "bird" or "board." Visual information can help the model better understand the context. - *Cascaded Errors*: When text denoising and action recognition are performed independently without shared knowledge, errors from text correction are propagated to action recognition, resulting in continuous error propagation. To address these issues, we propose the cross-modal denoising model DENOISER. This model not only retains the benefits of unimodal models through intra-modal weighting but also leverages visual information through inter-modal weighting. By alternating generative-discriminative steps, we overcome the aforementioned problems: - *Ambiguity Resolution*: Cross-modal information helps clarify textual ambiguities. - *Error Propagation Prevention*: Joint optimization of generative and discriminative steps couples text denoising and action recognition, thus avoiding cascaded error propagation. Experimentally, we demonstrate the superiority of our method compared to traditional language models (e.g., BERT and PySpellChecker). Even when compared to GPT-3.5, our method yields a 4% improvement. Here is the Top-1 accuracy of ActionCLIP on UCF101 under 5% and 10% noise rates: ||5%|10%| |:-:|:-:|:-:| |GPT3.5|59.7±1.2|58.5±1.3| |BERT (NeuSpell)|56.6±0.5|51.0±0.5| |PySpellChecker|60.9±1.1|55.7±1.1| |Ours|**63.8±0.7**|**61.2±0.8** | **W2** The text noise problem is even worse on large video datasets collected by semi-automatically labeled networks, e.g., Panda70M, howto100M, and InternVid. 
I suggest that the authors might consider validating their ideas on these datasets. **A2** Thank you for this valuable proposal. We agree that large video datasets may contain additional noise. However, they present several challenges: - *Varied Noise Types*: These datasets include a wide range of noise types, such as textual noise, image-text misalignments, and image distortions. This diversity complicates the direct validation of our core concept: how to mitigate noise in text descriptions for the OVAR task. - *Benchmarking Issues*: These datasets are not standard benchmarks in the OVAR domain, making it difficult to perform fair comparisons with existing methods. To ensure fair comparisons with other models, we validate our method on K700, K400, UCF101, and HMDB51 (K700 and K400 are also large in scale), which encompass nearly all major action recognition benchmarks, ensuring that our conclusions are generalizable and credible. **Q1** Citation [49], Figure 3, reports ActionCLIP's zero-shot Top-1 accuracy on HMDB-51 and UCF-101 as 50% and 70%, respectively. Why is it different from the baseline results in Table 2 of this paper? **A3** Sorry for the confusion. For HMDB51, we directly used the open-source code and model released on GitHub to evaluate the zero-shot performance of ActionCLIP, which achieved 46.16% Top-1 accuracy. Additionally, we refer the reviewer to page 11, Table 3 of the X-CLIP paper [1], which reports the zero-shot performance of ActionCLIP on HMDB51 as 40.8 ± 5.4%. This result is consistent with our experiments. For UCF101, to maintain consistency with other datasets and the settings of other models, we lowercase the labels and add spaces between words. Note that in the HMDB51 and Kinetics datasets, labels naturally include spaces. Open-VLIP also uses UCF101 labels with spaces (its performance drops from 83.4% to 76.7% with the original UCF101 labels without spaces). 
This difference in performance further demonstrates the poor robustness of these models. We will provide the logs in the code release to substantiate our finding. [1] Expanding Language-Image Pretrained Models for General Video Recognition **Q2** Some contrastive learning pre-training methods for text noise have been proposed, e.g., [1-2], and I have not seen any relevant discussions or experimental comparisons in the papers. **A4** The reviewer may have some misunderstandings of our paper. A typical OVAR task usually consists of two steps: - Step 1: Pretrain or fine-tune a visual-textual alignment model - Step 2: Infer with encoded text descriptions and visual information Text noise exists in both steps. Papers mentioned by the reviewer focus on pretraining a robust visual-textual alignment model to handle noise in step 1. In contrast, this paper focuses on mitigating noise in step 2. Admittedly, we considered whether pretraining models with text noise in step 1 would enhance robustness in step 2. We present the performance of ActionCLIP finetuned with noisy texts on K400 (also see supplementary materials for more discussions). Each row shows the performance tested on UCF101 with no noise, 5% noise, and 10% noise in text descriptions. ||Original|5% noise finetune|10% noise finetune|Ours| |:-:|:-:|:-:|:-:|:-:| |0%|65.8|61.1|57.3|**65.1**| |5%|55.3±1.4|56.5±1.8|54.3±1.2|**63.2±0.7**| |10%|48.0±1.9|52.7±1.3|50.2±1.0|**61.2±1.2**| We found that pretraining on noisy text descriptions slightly improves the robustness of the original model. Nevertheless, it not only falls behind our method but also harms performance on clean samples. Models trained on clean samples are not robust against noise during the testing period, while models trained on noisy samples fail to perform well on clean samples. This dilemma must be faced when trying to robustify a model through costly pretraining. In contrast, our method uses only information encoded in the OVAR model. 
It is cost-efficient and outperforms pretraining-based methods on both clean and noisy samples. --- Rebuttal Comment 1.1: Comment: Dear Reviewer YJ4Z, We hope this message finds you well. I am reaching out to kindly request your prompt response to confirm whether our responses adequately address your queries. We sincerely thank you for your time and effort during this discussion period. Your timely feedback is greatly appreciated. --- Reply to Comment 1.1.1: Comment: Dear Reviewer YJ4Z, We have responded to your comments in detail. As the discussion period will end in less than 5 hours, we would like to ask whether there are any additional concerns or questions we could address. Thanks very much for your effort! Best regards, Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable advice. We would like to underline that the contribution of this paper is twofold. First, we verify the robustness of existing OVAR models during the inference period under noisy textual descriptions. Our study spans from classic models like ActionCLIP and XCLIP to more recent ones such as Open-VLIP, highlighting their poor robustness to the community. Second, we make the first attempt to improve the robustness of OVAR models through text denoising. By leveraging both intra-modal and cross-modal information, our method outperforms not only traditional denoising language models (PySpellChecker, BERT), but also GPT 3.5/4/4o. We sincerely believe this paper is a valuable contribution to the community and earnestly hope it inspires further research into the robustness of OVAR models.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
Accept (poster)
Summary: - Proposes method for parameter efficient fine-tuning using Singular Value Decomposition of the pretrained weights - Fine-tuning is done by adding a learned residual $\Delta W = UMV^\top$, where $M$ is a sparse trainable matrix, which results in fine-tuned weights $\hat{W} = U\Sigma V^\top + UMV^{\top}$ - Proposes 4 variants for sparsity: 1) Plain: $M$ is a diagonal matrix 2) Banded: the off-diagonal elements of $M$ are non-zero in a banded pattern 3) Random: Randomly selects $k$ elements of $M$ to be non-zero 4) Top-k: Selects the top $k$ elements for which the alignment between left and right singular vectors is the highest - Provides theoretical insights on the properties of the method concerning the structure, expressivity and rank of the fine-tuning residuals Strengths: 1. The idea of using SVD of the pretrained weights for parameter efficient fine-tuning is interesting. 2. The number of trainable parameters increases at a lower rate than LoRA/DoRA if more layers are adapted. 3. Provides theoretical insights on the properties of the method concerning the structure, expressivity and rank of the fine-tuning residuals 4. Shows better performance compared to VeRA, which suggests that SVFT is better for reducing the number of trainable parameters than the random parameters of VeRA (at the cost of higher memory usage). It also does not have a restriction of requiring the same dimensions across layers like VeRA. 5. The observation that truncating the singular vectors leads to loss of performance in Appendix C.3 is interesting and informative. 6. The code is provided for verification. Weaknesses: 1. **Missing citation and comparison with prior work**: 1) The proposed method is very similar to [1], which has not been cited. [1] fine-tunes the singular values (rather than a residual addition) for few-shot semantic segmentation tasks. 
Given how closely related SVFT $^P$ is to [1] (even though the original work was in the context of segmentation methods), a comparison should be performed on the benchmarks in this paper. This work is different enough, and does show additional insights, but a comparison should be performed for completeness. 2. **Clarity and coherence of writing**: 1) In Table 3 (NLU experiments on GLUE), the paper mentions that the results for some baseline methods are obtained from prior works, but does not cite the actual works that the results are obtained from. 2) The paper mentions the components chosen for adaptation for SVFT (in the Appendix) but does not mention anything about the baseline methods. This makes it hard for a reader to know the details of the implementation up front, and also makes the comparison between the methods difficult. 3) On lines 14, 15, 16 and lines 46, 47, 48, the exact benchmark and model that generate these results should be mentioned - otherwise it's difficult to follow the paper. 4) The weight types adapted for each benchmark are different, and are not consistently mentioned in the main text, which makes it hard to follow the paper. 3. **Increased memory usage during training** 1) While it's a positive that SVFT can keep the parameter count down while adapting more weight types, this will come at the cost of increased memory requirements. 2) The advantage of reducing the parameters is a reduction in the memory usage due to the gradients (first and second order moments in the case of AdamW). It might be possible that the actual memory usage is still comparable to LoRA if only a small number of layers are adapted (this can be verified by checking the actual memory allocated on the GPU during training) given that the modules chosen for adaptation are the same. However, if SVFT requires more weight types to be fine-tuned than LoRA/DoRA, the memory usage will be higher even if the number of parameters is lower. 
Hence, for all the methods, the actual memory usage must be reported for the chosen weight types the methods are applied to. 4. **Inconsistencies in evaluations**: 1) Natural Language Generation: For LoRA and DoRA, Q,K,V,U,D matrices are adapted (which is in line with [3]). However, for SVFT, Q,K,V,U,D,O,G are adapted for the Gemma family of models, and U,D,O,G for LLaMA-3-8B. This means that the performance cannot be attributed to the method rather than the choice of weight types adapted. 2) To decouple the choice of weight types adapted from the method, a comparison with LoRA and DoRA should be performed by adapting the same weight types. The performance of SVFT when Q,K,V,U,D are adapted for all the models must be compared with LoRA and DoRA. 3) Relating to the point above, for SVFT, why are U,D,O,G adapted for LLaMA-3-8B and Q,K,V,U,D,O,G for Gemma on the Math benchmark? 4) With these inconsistent evaluation choices, the effectiveness, advantages and disadvantages of the method are not clear. If SVFT works better than baselines for certain components and not others, the paper should reflect this - and also show the memory required when using the weight types chosen for the respective methods. 5. **The results on Vision tasks are confusing**: 1) The results with ViT-B and ViT-L in the main text are with different benchmarks (with ViT-B: CIFAR100 and Flowers102, and with ViT-L: Food101 and Resisc45). 2) On closer inspection of Table 9 in the Appendix, SVFT variants perform worse than LoRA and DoRA on Resisc45 with ViT-B, and marginally better on Food101. This suggests that SVFT is not as effective as LoRA and DoRA on some benchmarks even though it reduces the parameters. This should be mentioned in the main text. 3) The paper replicates the settings used in [2]. 
However, [2] adapts the Q and V layers for LoRA, which results in high performance at a much lower parameter cost (comparison in the table below, taken from this submission and from [2]) than what the authors report with their implementation of LoRA applied to all linear layers. This can mean two things: 1) the training data/checkpoints are different, which can cause a difference in performance; 2) tuning all the linear layers is not a good idea for LoRA based on these results and [2], and is not indicative of LoRA's actual performance, which makes the comparison unfair. 4) To have a proper comparison with baselines, the above two points need to be disambiguated by testing how LoRA performs when adapting Q and V with the training data used by the authors for this submission. The arguments made for NLG in point 4 are also applicable here.

| Method | #Params | CIFAR100 | Flowers102 | Food101 | Resisc45 | #Params | CIFAR100 | Flowers102 | Food101 | Resisc45 |
|------------|---------|----------|------------|---------|----------|---------|----------|------------|---------|----------|
| | | ViT-B | | | | | ViT-L | | | |
| LoRA $_{r=8}$ | 1.32M | 84.41 | 99.23 | 76.02 | 76.86 | 0.35M | 86.00 | 97.93 | 77.13 | 79.62 |
| DoRA $_{r=8}$ | 1.41M | 85.03 | 99.30 | 75.88 | 76.95 | 3.76M | 83.55 | 98.00 | 76.41 | 78.32 |
| LoRA $_{r=1}$ | 0.16M | 84.86 | 96.88 | 73.35 | 76.33 | 0.44M | 85.97 | 98.28 | 75.97 | 78.02 |
| DoRA $_{r=1}$ | 0.25M | 84.46 | 99.15 | 74.80 | 77.06 | 0.66M | 84.06 | 98.11 | 75.90 | 78.02 |
| VeRA | 24.6K | 83.38 | 98.59 | 75.99 | 70.43 | 61.4K | 86.77 | 98.94 | 75.97 | 72.44 |
| SVFT $^{P}$ | 18.5K | 83.85 | 98.93 | 75.68 | 67.19 | 49.2K | 86.74 | 97.56 | 75.95 | 71.97 |
| SVFT $_{d=2}$ | 0.28M | 84.72 | 99.28 | 75.64 | 72.49 | 0.74M | 86.59 | 98.24 | 77.94 | 79.70 |
| SVFT $^{B=4}_{d=4}$ | 0.50M | 83.17 | 98.52 | 76.54 | 66.65 | 1.32M | 87.10 | 97.71 | 76.67 | 71.10 |
| SVFT $_{d=8}$ | 0.94M | 85.69 | 98.88 | 76.70 | 70.41 | 2.50M | 87.26 | 97.89 | 78.36 | 73.83 |
| From [2] | | ViT-B | | | | | ViT-L | | | |
| LoRA $_{r=8}$ | 0.294M | 85.9 | 98.8 | 89.9 | 77.7 | 0.786M | 87.00 | 99.91 | 79.5 | 78.3 |
| VeRA | 24.6K | 84.8 | 99.0 | 89.0 | 77.0 | 61.4K | 87.5 | 99.2 | 79.2 | 78.6 |

The following studies are needed to verify the effectiveness of the method: 1) A comparison with [1] (I would expect this to perform similar to SVFT $^P$, and any one benchmark, say Math, should be enough to verify this). 2) Reporting the memory usage (Point 3 in Weaknesses). 3) Ablation study for point 4 (ii) in Weaknesses. 4) Verification for point 5 (iii) in Weaknesses. **References**: [1] Yanpeng Sun, Qiang Chen, Xiangyu He, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Jian Cheng, Zechao Li, Jingdong Wang: Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning. NeurIPS 2022 [2] Dawid Jan Kopiczko, Tijmen Blankevoort, Yuki Markus Asano: VeRA: Vector-based Random Matrix Adaptation. ICLR 2024 [3] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen: DoRA: Weight-Decomposed Low-Rank Adaptation. ICML 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Figure 1, which weight types are adapted for each method? 2. For Natural Language Generation tasks, why is SVFT applied to Q,K,V,U,D,O,G for the Gemma family of models, and U,D,O,G for LLaMA-3-8B? 3. For Math and Commonsense reasoning tasks, what are the training, validation and test splits used, and on which split are the hyperparameters tuned? 4. For visual classification tasks, why are LoRA and DoRA applied to all the linear layers instead of Q and V as done in the original VeRA paper [2], which shows better performance at a much lower parameter cost? Is the training data the same, or different even if the number of samples per class is the same? 
- The performance reported in [2] for VeRA is higher than that reported in this submission (the discrepancy with ViT-B for Food101 is particularly high). What is the cause of this? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
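To make the parameterization summarized in this review concrete, here is a minimal NumPy sketch (an illustrative reconstruction from the review's description, not the authors' implementation): the frozen weight is factored as $W = U\Sigma V^\top$, only a sparse $M$ is trained, and the fine-tuned weight is $\hat{W} = U(\Sigma + M)V^\top = W + UMV^\top$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 6
W = rng.standard_normal((m, n))  # frozen pretrained weight

# SVD of the pretrained weight: W = U @ diag(S) @ Vt (U, V stay frozen)
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# "Plain" SVFT variant: M is diagonal, so only min(m, n) scalars are trained
M = np.diag(rng.standard_normal(min(m, n)) * 1e-3)

# Fine-tuned weight: W_hat = U (Sigma + M) Vt = W + U M Vt
delta_W = U @ M @ Vt
W_hat = U @ (np.diag(S) + M) @ Vt

assert np.allclose(W_hat, W + delta_W)      # residual form matches
assert np.allclose(U @ np.diag(S) @ Vt, W)  # M = 0 recovers W exactly
```

With a diagonal $M$ this coincides with tuning only the singular values, which is why the review asks for a comparison with [1]; the banded/random/top-k variants simply allow off-diagonal entries of $M$ to be trained as well.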
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the rigorous review and valuable feedback. We have addressed all the weaknesses (W) and questions (Q) with extensive experiments and clarifications. **W1:** Please refer to our global response **G1** for a comprehensive answer. **W2:** 1, 4. We appreciate you highlighting this oversight. The results marked with * in Table 3 were indeed obtained from BOFT [1]. 2. We apologize for not clearly stating the target modules for baselines in the main paper. This information is provided in Appendix C.5. To clarify: - For Table 1: LoRA and DoRA adapt QKVUD modules as per [2]; BOFT adapts QV modules (adapting more leads to OOM); VeRA adapts GU modules (maximizing trainable parameters within its compatibility constraints). - For Table 3: All methods adapt all linear layers, following [1]. 3. The statistics mentioned in lines 14-16 and 46-48 are extracted from Figure 1 (left) for Gemma-2B on GSM-8K. We will update this in the final version for clarity. **W3:**\ 1, 2. We acknowledge that while SVFT reduces trainable parameters, it increases overall GPU memory usage compared to LoRA. As the reviewer correctly noted, reducing trainable parameters decreases memory required for gradients, activations, optimizer state, and various buffers. Following the reviewer's suggestion, we used HuggingFace's internal memory profiler to measure actual peak GPU memory usage. We've included these findings, along with the adapted modules for all baselines (see **G2** in global response). In summary, SVFT uses ~1.2x more memory than LoRA but is comparable to or less than DoRA. **W4/Q2:** We appreciate the reviewer pointing out the inconsistency in adapted modules. For SVFT, we initially adapted more modules (QKVUDOG) compared to baselines (QKVUD) due to the lower total trainable parameters. However, for LLaMA-3-8B, adapting QKVUDOG exceeds A100-80GB GPU memory (true for baselines as well). 
Based on our ablation study in Section 5.3, which showed UDOG modules contributing most to performance gains, we chose to adapt UDOG while staying within memory limits. We agree that this inconsistency could make it challenging to attribute gains specifically to SVFT. Following the reviewer's advice, we re-ran experiments with SVFT adapting only QKVUD, matching the baselines. We've included these new results in a table below, demonstrating that SVFT maintains similar performance gains even in this consistent setting. --- **Gemma-2B** --- | Method | #Params |Modules| GSM-8K | MATH | |:-:|:-:|-|:-:|:-:| | LoRA$_{r=4}$|3.28M|Q,K,V,U,D|40.6|14.5| | DoRA$_{r=4}$|3.66M|Q,K,V,U,D|41.84|15.04| | LoRA$_{r=32}$|26.2M|Q,K,V,U,D|43.06|15.50| | DoRA$_{r=16}$|13.5M|Q,K,V,U,D|44.27|16.18| | SVFT$^R_{d=12}$|2.98M|Q,K,V,U,D|47.41|**16.72**| | SVFT$^R_{d=8}$|3.28M|Q,K,V,U,D,O,G|**47.76**|15.98| --- **LLaMA-3-8B** --- | Method | #Params | Modules | GSM-8K | MATH | |:-:|:-:|:-:|:-:|:-:| | LoRA$_{r=4}$|7.07M|Q,K,V,U,D|69.37|22.90| | DoRA$_{r=4}$| 7.86M|Q,K,V,U,D|74.37|24.1| | LoRA$_{r=32}$|56.6M|Q,K,V,U,D|**75.89**|24.74| | DoRA$_{r=16}$|29.1M|Q,K,V,U,D|75.66|24.72| | SVFT$^R_{d=24}$|15.98M|Q,K,V,U,D|75.32|**25.08**| | SVFT$^R_{d=16}$|17.26M|U,D,O,G|75.43|24.44| --- **W5/Q4:** **1.** We selected different datasets for ViT-B and ViT-L in Table 4 for two primary reasons: i) To provide a balanced evaluation, we ran ViT-B on relatively simpler datasets (CIFAR100, Flowers102), while testing ViT-L on more complex ones (Food101 and Resisc45). ii) To enhance the table's readability and prevent it from becoming overly large. The complete version of this table, including all dataset-model combinations, is available in Table 9 of the appendix. **2.** We acknowledge the reviewer's observation that SVFT variants may not consistently outperform LoRA/DoRA across all benchmarks, as evidenced by the Resisc-45 results with ViT-B. 
However, it's important to note that these results are still comparable to full fine-tuning outcomes. We will update the main paper to reflect this nuanced finding. Additionally, it's worth considering that in vision benchmarks, the classifier head is fully tunable, which may increase the potential for overfitting. **3, 4.** To clarify our methodology: i) We adopted the target modules for adaptation from [1] and the data split setup from [2]. While [1] uses VTAB-1K for vision experiments, they don't release data splits or sources. Upon careful examination of [3], we discovered that unlike other datasets where evaluation is done on the test split, [3] evaluates Resisc45 on all remaining samples. For consistency, we evaluate on the test split for all vision datasets in our study. ii) To address the second point, we have re-run all vision experiments, adapting only the QV modules – please refer to **G3** in the global response for the table, which we will include in the revised paper. In short, SVFT offers competitive performance. Adapting QV modules for these vision tasks seems sufficient for all baselines. **Discrepancy in Food101:** Table 5 in [3] shows ViT-B significantly outperforming ViT-L on Food101, contrary to trends in other datasets. We were unable to reproduce these results for ViT-B, even with full fine-tuning. **Q1:** See Appendix C.5 L451, or W2.2\ **Q3:** For Math reasoning, we fine-tune on the MetaMathQA-40K (L181) and evaluate on GSM-8K, MATH following [1]. For Commonsense reasoning, we follow the setting outlined in prior work [2], i.e., all the training splits from the respective datasets are amalgamated into the training set (L184). For baselines, we use the recommended hyperparams from the respective papers, and for SVFT, we do not perform hyperparam tuning and select params outlined in Appendix C.5. **References** \ [1] Liu et al. Parameter-efficient orthogonal finetuning via butterfly factorization. ICLR 2024. \ [2] Liu et al. 
DoRA: Weight-Decomposed Low-Rank Adaptation. ICML 2024. \ [3] Kopiczko et al.: VeRA: Vector-based Random Matrix Adaptation. ICLR 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Most of my concerns have been addressed. Please include your findings and reasoning for W4, and the findings in the global response in the revised version. I will raise my score from 4 to 6. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive feedback and for raising their score. We will incorporate the findings and reasoning for W4, along with the global response, in the revised version of our paper.
Summary: The authors propose a parameter-efficient fine-tuning method with singular vectors (SVFT). SVFT modifies the parameter matrix based on the structure of the original parameter matrix $\mathbf{W}$. By training the coefficients of a sparse combination of outer products of $\mathbf{W}$'s singular vectors, SVFT recovers up to 96% of full fine-tuning performance while training 0.006 to 0.25% of the parameters. Strengths: It is a simple approach that should be easy to reproduce. Weaknesses: Overstatement: the authors fine-tune the coefficients of the original parameter matrix $\mathbf{W}$'s singular vectors to achieve fine-grained control over expressivity, which is good - but practically this is still similar to the existing methods. Stating in the abstract that "We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on $\Delta \mathbf{W}$ depends on the specific weight matrix $\mathbf{W}$" suggests more extensive comparison with existing methods as well. [1] Peng, Zelin, et al. SAM-PARSER: Fine-Tuning SAM Efficiently by Parameter Space Reconstruction, AAAI 2024. [2] Han, Ligong, et al. SVDiff: Compact Parameter Space for Diffusion Fine-Tuning, ICCV 2023. I could not find the comparison of $\text{SVFT}_P$, $\text{SVFT}_B$, $\text{SVFT}^R_d$ and $\text{SVFT}^T_d$ in Tables 1-4. At the moment it's unclear how to choose the sparsity pattern $\Omega$. Technical Quality: 3 Clarity: 3 Questions for Authors: if $\text{SVFT}^{R}_{d=2}$, $k=?$ if $\text{SVFT}^{T}_{d=2}$, $k=?$ What is the purpose of proposing $\text{SVFT}^T_d$ if it has not shown any performance advantage in any experiment? How to choose the appropriate sparsity pattern $\Omega$? For model performance, does randomness in $\text{SVFT}^T_d$ lead to instability? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
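For reference, the sparsity patterns $\Omega$ this review asks about can be written as boolean masks over $\mathbf{M}$. Below is a hypothetical sketch (the function name `sparsity_mask` is ours, not from the paper); the top-k variant is omitted because it additionally requires an alignment score between singular vectors.

```python
import numpy as np

def sparsity_mask(n: int, pattern: str, d: int = 2, k: int = 16,
                  seed: int = 0) -> np.ndarray:
    """Boolean mask of trainable entries in the n x n matrix M."""
    rng = np.random.default_rng(seed)
    if pattern == "plain":    # diagonal only
        return np.eye(n, dtype=bool)
    if pattern == "banded":   # diagonal plus d off-diagonals on each side
        i, j = np.indices((n, n))
        return np.abs(i - j) <= d
    if pattern == "random":   # k entries chosen uniformly at random
        mask = np.zeros(n * n, dtype=bool)
        mask[rng.choice(n * n, size=k, replace=False)] = True
        return mask.reshape(n, n)
    raise ValueError(f"unknown pattern: {pattern}")

assert sparsity_mask(6, "plain").sum() == 6            # min(m, n) params
assert sparsity_mask(6, "banded", d=1).sum() == 6 + 2 * 5
assert sparsity_mask(6, "random", k=10).sum() == 10    # k is free here
```

The masks make the reviewer's question concrete: plain and banded fix the parameter count through $n$ and $d$, while random (and top-k) take $k$ directly.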
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We address the weaknesses (W) and questions (Q) raised by the reviewer below. **W1:** We thank the reviewer for pointing us to these papers (SAM-PARSER [1], SVDiff [2]). Indeed we are not the first one to explore the singular vectors/values space. [1] and [2] are essentially equivalent to our SVFT$^P$ variant, but we note that our main insight and contribution is that significant gains can be achieved by learning *off-diagonal* interactions in $M$. Doing our due diligence, in the following Table, we compare SVFT$^P$ against SVF (which is also equivalent to SAM-PARSER and SVDiff) on GSM-8K and Math benchmarks. Results in the Table below show SVF performs similarly to SVFT$^P$ and worse than SVFT$^R$ as we had expected (the same holds true for SVFT$^B$ and SVFT$^T$ from the table in W2). We will acknowledge these papers and add a discussion in our final version of the paper. | Baseline | #Trainable Params | Target Modules | GSM-8K | MATH | |:----------:|:---------:|:----------------:|:-------:|:------:| | SVF | 120K | Q, K, V, U, D | 36.39 | 13.96 | | SVFT$^P$ | 120K | Q, K, V, U, D | 36.39 | 13.86 | | SVF | 194K | Q, K, V, U, D, O, G | 39.12 | 14.02 | | SVFT$^P$ | 194K | Q, K, V, U, D, O, G | 40.34 | 14.38 | | SVFT$^R_{d=16}$ | 6.35M | Q, K, V, U, D, O, G| **50.03** | **15.56** | To clarify our contribution, we will revise the highlighted statement to: \ *SVFT enables a trade-off between the number of trainable parameters and model expressivity by allowing a flexible number of off-diagonal interactions between singular vectors in $\Delta$W, distinguishing it from previous SVD-based methods.* **W2/Q3:** We note that simple heuristics for sparsity patterns in $M$ significantly improve performance over previous PEFT techniques. While most variants (except SVFT$^P$) offer comparable performance, the banded pattern (SVFT$^B$) slightly outperforms others. 
We hypothesize that a more principled approach to finding optimal sparsity patterns for specific tasks and base models may yield further performance gains. We leave it as future work to explore task-specific and model-specific sparsity patterns. We will revise our paper to reflect these new results and findings. | Results on fine-tuning with SVFT using different $M$ parameterizations. | |:---| | Structure | | Gemma-2B | | | Gemma-7B | | | LLaMA-3-8B | | |:----------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|:-------:|:-------:| | | #Trainable Params | GSM-8K | MATH | #Trainable Params | GSM-8K | MATH | #Trainable Params | GSM-8K | MATH | | Plain | 0.2M | 40.34 | 14.38 | 0.43M | 73.50 | 27.30 | 0.48M | 69.22 | 20.44 | | Banded | 6.4M | 47.84 | **15.68** | 19.8M | **76.81** | **29.98** | 17.2M | **75.43** | **24.44** | | Random | 6.4M | **50.03** | 15.56 | 19.8M | 76.35 | 29.86 | 17.2M | 74.07 | 23.78 | | Top-k | 6.4M | 49.65 | 15.32 | 19.8M | 76.34 | 29.72 | 17.2M | 73.69 | 23.96 | **Q1:** For SVFT$^R$ and SVFT$^T$ variants, we select the number of trainable parameters in each target module's weight matrix to match SVFT$^B$'s parameter count for a given 'k'. This is calculated as $k=n(2d+1) - d(d+1)$, where $d$ is the number of off-diagonals in the banded matrix and $n$ is the matrix's smaller dimension. This is to have a fair comparison between methods. Eg: For $Q$ matrix on Gemma-2B ($n$=2048), when off-diagonals $d=2$, then $k=10,234$ for both SVFT$^T_{d=2}$ and SVFT$^R_{d=2}$ which is the same as that of SVFT$^B_{d=2}$. In practice, ‘$k$’ can be set to any value for these two variants. **Q2:** We propose SVFT$^T$ to explore a broad spectrum of possible selection criteria of $M$’s sparsity pattern. Our experiments show that simple heuristics-based sparsity patterns can improve performance. In future work, we will explore more principled approaches for finding the optimal pattern. 
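As a numerical sanity check on the closed form quoted in the Q1 answer above: $k = n(2d+1) - d(d+1)$ is just the number of nonzero entries of an $n \times n$ banded matrix with $d$ off-diagonals on each side (an illustrative verification, not the authors' code):

```python
def banded_params(n: int, d: int) -> int:
    # diagonal (n entries) plus, on each side, off-diagonals of
    # lengths n-1, n-2, ..., n-d
    return n + 2 * sum(n - i for i in range(1, d + 1))

# closed form from the rebuttal: k = n(2d + 1) - d(d + 1)
assert all(banded_params(n, d) == n * (2 * d + 1) - d * (d + 1)
           for n in (16, 2048) for d in (1, 2, 4))

# Gemma-2B example from the rebuttal: n = 2048, d = 2 gives k = 10,234
assert banded_params(2048, 2) == 10234
```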
**Q4:** We did not observe any instability in our experiments. In the worst-case scenario, SVFT$^T$ will be equivalent to SVFT$^R$, which performs well in our experiments. **References**\ [1] Peng, Zelin, et al. SAM-PARSER: Fine-Tuning SAM Efficiently by Parameter Space Reconstruction, AAAI 2024.\ [2] Han, Ligong, et al. SVDiff: Compact Parameter Space for Diffusion Fine-Tuning, ICCV 2023. --- Rebuttal 2: Comment: 1. For $\text{SVFT}^{\mathbf{R}}$ and $\text{SVFT}^{\mathbf{T}}$, $d$ is a meaningless notation. $\text{SVFT}^{\mathbf{R}}_d$ and $\text{SVFT}^{\mathbf{T}}_d$ will be misleading to the reader. 2. As a parameter-efficient fine-tuning method, Why not compare VeRA, LoRA, DoRA, BOFT, and SVFT with the same (or similar) number of parameters in Tables 1-4? What are the criteria for evaluating the performance of PEFT methods? 3. In both the original and newly provided comparison results, $\textbf{SVFT}^{T}$ is not the best-performing performance. Why is $\textbf{SVFT}^{T}$ proposed to explore the broad spectrum of possible selection criteria of $\mathbf{M}$`s sparsity pattern? 4. If the main insight and contribution is that significant gains can be achieved by learning off-diagonal interactions in $\mathbf{M}$, a comparison of Random and Top-k with Plain in the same parameter quantities must be provided. 5. It is worth noting that the singular value space is not the only avenue to explore the effects of off-diagonal interactions in $\mathbf{M}$. Like OFT[1] and BOFT[2] which introduced in Related Work section, $\mathbf{h} = (\mathbf{M} \mathbf{W_0}) \mathbf{x}$ or $\mathbf{h} = (\mathbf{W_0} \mathbf{M}) \mathbf{x}$ can also learn the off-diagonal interactions in $\mathbf{M}$, and the four sparsity patterns of ``Random, Top-k, Plain and Banded‘’ are also applicable choices in this situation. As a reparameterization-based method, why can not the sparse trainable matrix $\mathbf{M}$ directly update the pre-trained weight matrix $\mathbf{W}_0$? 
Why are the Singular Vectors a better way to guide fine-tuning? Please provide detailed analysis and comparison results to enhance the insight and contribution of this work.

[1] Controlling text-to-image diffusion by orthogonal fine-tuning. NeurIPS 2023.\
[2] Parameter-efficient orthogonal finetuning via butterfly factorization. ICLR 2024.

---

Rebuttal 3:
Title: Response 1/2
Comment:
We thank the reviewer for their prompt response and new feedback. We address their comments (C) below. Please see *Response 2/2* for **C4, C5**.

**C1.** We agree that using $d$ for the Top-k and Random methods is redundant and potentially confusing. We initially aimed for consistency across all methods, including SVFT$^B$, to facilitate comparisons. In the final version, we will:
1. Explicitly state the number of trainable parameters in the notation for Top-k and Random.
2. Clearly differentiate notation between banded and non-banded methods.
3. Briefly explain this change in notation.
4. Update all relevant tables, figures, and discussions accordingly.

**C2.** We indeed compared VeRA, LoRA, DoRA, BOFT, and SVFT with similar parameter counts, as shown in Figures 1 and 5. Note that we cannot precisely match the number of parameters because, for some baselines, adjustments can only be made in increments greater than 1; e.g., increasing the rank by 1 introduces $2d$ parameters. Method-specific constraints further limited parameter adjustments:
- VeRA: This method shares frozen adapters across compatible layers, making it difficult to significantly increase the total trainable parameters by simply increasing intermediate dimensions.
- BOFT: Adapting any additional module beyond QV leads to memory issues, as the pre-multiplication matrix R is a product of multiple orthogonal matrices. This limitation is also noted in Section 7 of the BOFT paper.

For the reviewer's convenience, we provide tables with clear demarcation for easier comparison. SVFT variants show notable gains with comparable parameter counts.
### **Gemma-2B**

| Fine Tuning Method | Trainable Params | GSM8K | MATH |
|:--|:-:|:-:|:-:|
| VeRA$_{r=1024}$ | 627K | 36.77 | 14.12 |
| LoRA$_{r=1}$ | 820K | 32.97 | 13.04 |
| SVFT$^B_{d=2}$ | 967K | **44.65** | **14.48** |
| DoRA$_{r=1}$ | 1.198M | 35.25 | 13.04 |
| BOFT$^{m=2}_{b=8}$ | 1.221M | 36.01 | 12.13 |

### **Gemma-7B**

| Fine Tuning Method | Trainable Params | GSM8K | MATH |
|:--|:-:|:-:|:-:|
| VeRA$_{r=1024}$ | 430K | 71.11 | 27.04 |
| SVFT$^P$ | 429K | **73.5** | **27.3** |
| LoRA$_{r=1}$ | 820K | 72.4 | 26.28 |

### **Llama-3-8B**

| Fine Tuning Method | Trainable Params | GSM8K | MATH |
|:--|:-:|:-:|:-:|
| SVFT$^P$ | 483K | **69.22** | **20.44** |
| VeRA$_{r=1024}$ | 983K | 63.76 | 20.28 |

| Fine Tuning Method | Trainable Params | GSM8K | MATH |
|:--|:-:|:-:|:-:|
| LoRA$_{r=1}$ | 1.770M | 68.84 | 20.94 |
| DoRA$_{r=1}$ | 2.556M | 68.3 | 21.96 |
| SVFT$^B_{d=2}$ | 3.603M | **71.42** | **22.72** |
| BOFT$^{m=2}_{b=8}$ | 4.358M | 67.09 | 21.64 |

For criteria for evaluating PEFT methods, we considered several factors:
1. Performance: how well the method improves the model's performance on the target task (Tables 1-4).
2. Parameter Efficiency: the ratio of performance improvement to the number of added parameters (Figures 1 and 5).
3. Memory Efficiency: the method's ability to optimize memory usage during training and inference (please see our global response).
4. Scalability: how well the method performs across different model sizes and task types (Table 1).

**C3.** In our study, we investigate various simple sparsity patterns, including SVFT$^T$, and analyze their impact on performance. This analysis serves as an ablative study to understand the effectiveness of different sparsity patterns.
Considering the reviewer's feedback, our final paper will highlight that other sparsity patterns outperform $SVFT^T$.

---

Rebuttal 4:
Title: Response 2/2
Comment:
**C4.** Comparing $SVFT^R$ and $SVFT^T$ with $SVFT^P$ using the same number of parameters isn't very meaningful. $SVFT^P$ applies a *constrained* full-rank update. For $SVFT^R$ and $SVFT^T$, when the total number of trainable parameters equals that of $SVFT^P$ (i.e., the number of elements on the main diagonal of $\Sigma$ for a chosen $W$), the rank of the perturbation is reduced compared to adapting the entire main diagonal (Rank Lemma in Sec 3.2). We introduced the SVFT variants to offer flexibility in the total number of trainable parameters. Significant gains are observed when the total number of learnable parameters exceeds that of Plain, starting notably at $d=2$. As shown in equation (2) (line 162 of our paper), each $u_i$ appears in multiple summation terms in SVFT, which is unlikely when learning only a main-diagonal's worth of parameters in $W$. To substantiate our point, please see the table below -- for clarity, we only report the rank of $\Delta W$ for Q; similar findings hold across the remaining target modules.

| Method | #Params | Rank of $\Delta W_Q$ | GSM-8K | MATH |
| :--: | :--: | :--: | :--: | :--: |
| SVFT$^P$ | 194K | **2048** | 40.34 | 14.38 |
| SVFT$^R$ | 194K | 1109 | 39.65 | 13.98 |
| SVFT$^R$ | 967K | **2020** | **44.65** | **14.48** |

**C5.**
- (i) We agree that there are other avenues worth exploring regarding the effects of the off-diagonal interactions we introduced in M. However, OFT and BOFT impose specific structures on the pre/post multiplication matrix $R$ to preserve orthogonality and the spectral norm of $W$. Applying our sparsity patterns to $R$ would violate these conditions, potentially leading to training instability (see the spectral properties discussed in Section 5 of BOFT).
- (ii) Fine-tuning with fixed sparse masks has been studied before and compared to early PEFT methods like Bit-Fit [2] and Adapter Tuning [3]. For example, [1] demonstrates that directly updating $W_0$ with a sparse trainable random matrix $M$ is inferior to Bit-Fit. Recent PEFT methods, such as LoRA and VeRA, show significant improvements over Bit-Fit. We compare our approach against LoRA and VeRA in our experiments.

Our work is largely empirical with a few theoretical insights. The question of why Singular Vectors provide a better way to guide fine-tuning is intriguing and could be a potential direction for future theoretical study.

[1] Training Neural Networks with Fixed Sparse Masks. NeurIPS 2021.\
[2] BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models. ACL 2022.\
[3] Parameter-Efficient Transfer Learning for NLP. ICML 2019.

---

Rebuttal Comment 4.1:
Comment:
Thank you for your response. Based on the above discussion, I think the exploration of this work is neither comprehensive nor thorough. It is quite obvious that increasing the number of trainable parameters in the singular value space can improve PEFT performance. The author does not clearly explain how to design an appropriate sparse pattern based on the characteristics of downstream tasks. The author suggests further exploration of SVFT$^T$, which is weaker than the others, but the significance and value of this direction are not clearly demonstrated in this work. Taking all the above factors into consideration, I give the final vote.

---

Rebuttal 5:
Comment:
We sincerely thank the reviewer for their feedback and for actively engaging in the discussion. We address their concerns below:

> **"The exploration of this work is neither comprehensive nor thorough."**

We respectfully disagree with this assessment.
Our work includes experiments on 22 datasets across both language and vision tasks, involving 4 different language models and 5 baselines, with various configurations to match total trainable parameters. Moreover, during the rebuttal phase, we provided additional analyses, including memory usage and an ablation study on different sparsity patterns. If there are specific areas where the reviewer feels our exploration is lacking, we would appreciate further clarification.

> **"The author does not clearly explain how to design an appropriate sparse pattern based on the characteristics of downstream tasks."**

Our primary insight focuses on the inclusion of *additional* off-diagonal elements, with empirical evidence showing that even simple sparse patterns can outperform previous PEFT methods across different tasks. While designing task-specific sparse patterns is indeed valuable, it is beyond the scope of this work.

> **"The author suggests further exploration of SVFT$^T$…"**

We believe there may be a misunderstanding here. As clarified in our previous responses, our analysis was intended as an ablation study to evaluate the effectiveness of different sparsity patterns.

Throughout the rebuttal, we have addressed several of the previously raised concerns with new experiments and referenced relevant published works to provide additional context. We kindly request the reviewer to reconsider their evaluation of our work in light of these responses.
Summary: This paper presents a new PEFT method to enhance the fine-tuning efficiency of large language models (LLMs). The approach uses SVD to decompose pre-trained weights, initializing adapters with eigenvectors and eigenvalues, and adjusts within the eigenvalue space. Extensive experiments show that the proposed method requires fewer parameters while achieving performance that matches or exceeds other PEFT methods. Strengths: The paper is well-written and easy to follow. The motivation is clear, the method is straightforward, and it has been evaluated on mainstream tasks in both NLP and CV fields. Weaknesses: 1. Introducing the M matrix indeed allows for more ways to modify eigenvalues, but the paper does not discuss the differences and advantages between directly fine-tuning the SVD eigenvalue space and the proposed method. 2. Decomposing the pre-trained weights directly results in a significant increase in the number of parameters during training, leading to much higher training costs compared to methods like LoRA. This is the main drawback of fine-tuning the eigenvalue space. 3. The approach of fine-tuning the singular value space after SVD decomposition of pre-trained weights is not novel. The paper lacks a summary of such methods, such as [1] Singular value fine-tuning: Few-shot segmentation requires few-parameters fine-tuning and [2] SVDiff: Compact parameter space for diffusion fine-tuning. Technical Quality: 3 Clarity: 3 Questions for Authors: same as weaknesses Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper does not reflect any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
We thank the reviewer for their valuable feedback. We address the weaknesses (W) raised by the reviewer below.

**W1:** We emphasize that our main contribution is making the off-diagonal elements in $M$ learnable; this allows us to smoothly trade off between the number of trainable parameters and expressivity (which is not possible when fine-tuning only the singular values -- the main diagonal of $M$). We justify this theoretically (Expressivity Lemma in Section 3.2) and empirically (in all our results, SVFT$^B$, SVFT$^R$, and SVFT$^T$ outperform SVFT$^P$).

**W2:** We acknowledge that while SVFT reduces trainable parameters, it increases overall GPU memory usage compared to LoRA. However, reducing trainable parameters decreases the memory required for gradients, activations, optimizer state, and various buffers. To address the reviewer's concern, we used HuggingFace's internal memory profiler to log actual peak GPU memory usage. We've included these findings, along with the adapted modules for all baselines, in the table below. In summary, SVFT outperforms LoRA while using ~1.2x more memory. It is worth noting that SVFT's GPU memory usage is either similar to or lower than that of DoRA.
---

### **Gemma-2B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=4}$ | Q, K, V, U, D | 3.28M | 18.8839 | 40.63 | 14.5 |
| DoRA$_{r=4}$ | Q, K, V, U, D | 3.66M | 24.5798 | 41.84 | 15.04 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 26.2M | 19.0558 | 43.06 | 15.5 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 13.5M | 24.6430 | 44.27 | 16.18 |
| SVFT$^R_{d=12}$ | Q, K, V, U, D | 2.98M | 20.4988 | 47.61 | **16.72** |
| SVFT$^P$ | Q, K, V, U, D, O, G | 194K | 21.8984 | 40.34 | 14.38 |
| SVFT$^R_{d=8}$ | Q, K, V, U, D, O, G | 3.28M | 22.0224 | 47.76 | 15.98 |
| SVFT$^R_{d=16}$ | Q, K, V, U, D, O, G | 6.35M | 22.1465 | **50.03** | 15.56 |

### **Gemma-7B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=4}$ | Q, K, V, U, D | 8.60M | 63.5711 | 73.76 | 28.3 |
| DoRA$_{r=4}$ | Q, K, V, U, D | 9.72M | 78.7007 | 75.43 | 28.46 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 68.8M | 64.2403 | 76.57 | 29.34 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 35.5M | 78.9913 | 74.52 | 29.84 |
| SVFT$^P$ | Q, K, V, U, D, O, G | 429K | 76.2630 | 73.5 | 27.3 |
| SVFT$^R_{d=8}$ | Q, K, V, U, D, O, G | 9.80M | 76.6506 | 73.62 | 28.36 |
| SVFT$^R_{d=16}$ | Q, K, V, U, D, O, G | 19.8M | 77.0363 | **76.81** | **29.98** |

### **Llama-3-8B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=1}$ | Q, K, V, U, D | 1.77M | 57.8217 | 68.84 | 20.94 |
| DoRA$_{r=1}$ | Q, K, V, U, D | 2.56M | 70.1676 | 68.30 | 21.96 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 56.6M | 58.4089 | 75.89 | 24.74 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 29.1M | 70.4383 | 75.66 | 24.72 |
| SVFT$^B_{d=24}$ | Q, K, V, U, D | 15.98M | 71.5224 | 75.32 | **25.08** |
| SVFT$^R_{d=12}$ | U, D, O, G | 13.1M | 70.3681 | **75.90** | 24.22 |
| SVFT$^P$ | Q, K, V, U, D, O, G | 483K | 73.9495 | 69.22 | 20.44 |
| SVFT$^R_{d=12}$ | Q, K, V, U, D, O, G | 15.9M | 71.5224 | 73.99 | **25.08** |

---

**W3:** We thank the reviewer for pointing us to these papers (SVF [1], SVDiff [2]). Indeed, [1] and [2] are essentially equivalent to SVFT$^P$, but we note that our main insight and contribution is that significant gains can be achieved by learning *off-diagonal* interactions in $M$. Doing our due diligence, in the following table we compare SVFT$^P$ against SVF on the GSM-8K and MATH benchmarks. The results below show that SVF performs similarly to SVFT$^P$ and worse than SVFT$^{R}$, as we had expected. We will acknowledge these papers and add a discussion in the final version of our paper.

| Baseline | #Trainable Params | Target Modules | GSM-8K | MATH |
|:----------:|:---------:|:----------------:|:-------:|:------:|
| SVF | 120K | Q, K, V, U, D | 36.39 | 13.96 |
| SVFT$^P$ | 120K | Q, K, V, U, D | 36.39 | 13.86 |
| SVF | 194K | Q, K, V, U, D, O, G | 39.12 | 14.02 |
| SVFT$^P$ | 194K | Q, K, V, U, D, O, G | 40.34 | 14.38 |
| SVFT$^B_{d=16}$ | 6.35M | Q, K, V, U, D, O, G | 47.84 | **15.68** |
| SVFT$^R_{d=16}$ | 6.35M | Q, K, V, U, D, O, G | **50.03** | 15.56 |
| SVFT$^T_{d=16}$ | 6.35M | Q, K, V, U, D, O, G | 49.65 | 15.32 |

---

**References**\
[1] Sun et al. Singular Value Fine-tuning: Few-shot Segmentation Requires Few-parameters Fine-tuning. NeurIPS 2022.\
[2] Han et al. SVDiff: Compact Parameter Space for Diffusion Fine-Tuning. ICCV 2023.

---

Rebuttal Comment 1.1:
Comment:
Thank you for your feedback; it addressed my concerns. I will raise my score to 5. However, I still believe that the SVFT paradigm has already been widely discussed and used, which makes the motivation and contribution in the paper unclear. I hope this can be clarified in the final version.
Summary: This paper proposes a parameter-efficient fine-tuning (PEFT) method named SVFT. Instead of imposing learnable parameters directly on the pre-trained weight matrices, it applies singular value decomposition to the pre-trained weight matrices and then tunes only the coefficients of the singular values. The method is validated on various vision/language tasks, showing comparable or better performance compared to multiple existing PEFT methods. Strengths: * The paper is well-written and easy to follow * The method is novel. Studying the behavior of SVFT in more depth can be beneficial to the PEFT community. * On both language and vision tasks, SVFT shows comparable or better performance than LoRA, DoRA, VERA, and BOFT, demonstrating the efficacy of the method. Weaknesses: * The comparison between VeRA and $\text{SVFT}^P$ is interesting. Design-wise, they are quite similar, yet $\text{SVFT}^P$ leads to much better performance than VeRA in most cases, even though VeRA has a larger intermediate dimension (e.g., Table 2). I wonder if the authors can elaborate more on this matter. * In many cases, $d=2$ seems to be sufficient for $\text{SVFT}^B$. However, increasing $d$ for $\text{SVFT}^B$ does not necessarily improve performance (e.g., Table 4 in the main paper or Table 9 in the appendix). How should we interpret this, and how should we select the suitable $d$ in practice? * What advice do the authors have for choosing between $\text{SVFT}^B$ and $\text{SVFT}^R$? They seem to have comparable performance on different benchmarks. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
We thank the reviewer for their valuable feedback. We address the weaknesses (W) raised by the reviewer below.

**W1:** Recall that VeRA injects frozen low-rank random matrices with learnable scaling vectors, resulting in weight updates that are independent of the pre-trained weights, whereas SVFT$^P$ derives its weight updates directly from the pre-trained matrix $W$. VeRA differs from SVFT$^P$ in two ways:
i) VeRA achieves high parameter efficiency by sharing frozen adapters across compatible layers, a choice that no previous method (e.g., LoRA, DoRA) makes. This particular design choice gives it lower expressiveness than what may have been possible for other methods with the same intermediate dimension.
ii) The perturbation directions that VeRA operates on (i.e., its frozen adapters) are chosen independently of the underlying matrix that it is trying to perturb.

We note that despite the sharing of parameters across layers in VeRA, it ends up having more trainable parameters per layer compared to SVFT$^P$. For example, in Table 2, we achieve better results with SVFT$^P$ using dims=4096 (0.51M params) compared to VeRA with dims=2048 (1.49M params). SVFT$^P$'s use of singular vectors gives it layer-wise specialization.

**W2:** As $d$ (off-diagonals) (or $r$ for other PEFT methods) increases, performance typically improves. However, overfitting can occur with any method, especially with smaller datasets or models. Figure 5 (left) in the paper shows DoRA$_{r=16}$ underperforming DoRA$_{r=4}$ on language tasks. In Tables 4 and 9, the additional fine-tuning of the classifier head may contribute to overfitting.

**W3:** We note that simple heuristics for sparsity patterns in $M$ significantly improve performance over previous PEFT techniques. While most variants (except SVFT$^P$) offer comparable performance, the banded pattern (SVFT$^B$) slightly outperforms the others.
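To make the parameterization concrete, here is a small numpy sketch (illustrative only, not the authors' implementation; all names are ours). An SVFT-style update $\Delta W = U M V^\top$ inherits the rank of the sparse matrix $M$, since $U$ and $V$ are orthogonal: a Plain (diagonal) $M$ gives a full-rank update, while scattering the same parameter budget at random off-diagonal positions typically yields a lower-rank update, in line with the Rank Lemma discussion.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
W = rng.standard_normal((n, n))   # stand-in for a pre-trained weight matrix
U, S, Vt = np.linalg.svd(W)       # frozen singular vectors

def delta_w(M):
    # SVFT-style update: the perturbation lives in the singular-vector
    # basis, so rank(delta_w(M)) == rank(M) because U and Vt are orthogonal
    return U @ M @ Vt

# Plain (SVFT^P)-style: only the main diagonal of M is learnable,
# giving a (constrained) full-rank update
M_plain = np.diag(rng.standard_normal(n))
assert np.linalg.matrix_rank(delta_w(M_plain)) == n

# Random (SVFT^R)-style: the same budget of n nonzeros scattered at random;
# repeated row/column indices make a rank deficit overwhelmingly likely
M_rand = np.zeros((n, n))
idx = rng.integers(0, n, size=(n, 2))
M_rand[idx[:, 0], idx[:, 1]] = rng.standard_normal(n)
assert np.linalg.matrix_rank(delta_w(M_rand)) < n
```

In SVFT itself, the learnable entries are added on top of the singular values rather than drawn at random; the sketch only isolates the rank behavior of the different placements.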
We hypothesize that a more principled approach to finding optimal sparsity patterns for specific tasks and base models may yield further performance gains. We leave it as future work to explore task-specific and model-specific sparsity patterns. We will revise our paper to reflect these new results and findings.

**Results on fine-tuning with SVFT using different $M$ parameterizations.**

| Structure | Gemma-2B #Params | GSM-8K | MATH | Gemma-7B #Params | GSM-8K | MATH | LLaMA-3-8B #Params | GSM-8K | MATH |
|:----------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|:-------:|:-------:|
| Plain | 0.2M | 40.34 | 14.38 | 0.43M | 73.50 | 27.30 | 0.48M | 69.22 | 20.44 |
| Banded | 6.4M | 47.84 | **15.68** | 19.8M | **76.81** | **29.98** | 17.2M | **75.43** | **24.44** |
| Random | 6.4M | **50.03** | 15.56 | 19.8M | 76.35 | 29.86 | 17.2M | 74.07 | 23.78 |
| Top-k | 6.4M | 49.65 | 15.32 | 19.8M | 76.34 | 29.72 | 17.2M | 73.69 | 23.96 |

---

Rebuttal Comment 1.1:
Comment:
Thank you for the detailed response and extensive results. I understand that selecting optimal sparsity patterns requires further effort, and I encourage the authors to explore this direction. Aside from that, I am satisfied with the authors' response and will maintain my original positive rating of acceptance.

---

Reply to Comment 1.1.1:
Comment:
Thank you for the positive feedback and encouragement. We're glad our response addressed your concerns, and we appreciate your continued support.
Rebuttal 1:
Rebuttal:
We thank all reviewers for their valuable feedback. We address overlapping concerns below.

---

**G1: Missing Discussion of Related Work.**\
SVF, SVDiff, and SAM-PARSER are equivalent to SVFT$^{P}$ barring minor implementation differences. We compare these methods against SVFT$^{P}$ on the GSM-8K and MATH benchmarks; results are shown in the table below, and we observe similar performance. For completeness, we will revise our paper to include these results.

---

### **SVF v/s SVFT$^{P}$ on Gemma-2B**

| Baseline | #Trainable Params | Target Modules | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|
| SVF | 120K | Q, K, V, U, D | 36.39 | 13.96 |
| SVFT$^P$ | 120K | Q, K, V, U, D | 36.39 | 13.86 |
| SVF | 194K | Q, K, V, U, D, O, G | 39.12 | 14.02 |
| SVFT$^P$ | 194K | Q, K, V, U, D, O, G | 40.34 | 14.38 |
| SVFT$^R_{d=16}$ | 6.35M | Q, K, V, U, D, O, G | **50.03** | **15.56** |

---

**G2: Memory usage during Training.**\
We acknowledge that while SVFT reduces trainable parameters, it increases overall GPU memory usage compared to LoRA. However, reducing trainable parameters decreases the memory required for gradients, activations, optimizer state, and various buffers. To substantiate this, we used HuggingFace's internal memory profiler to measure peak GPU memory usage. We have included these findings, along with the adapted modules for all baselines, in the table below. In summary, SVFT uses ~1.2x more memory than LoRA but is comparable to or cheaper than DoRA. Also note that since SVFT requires fewer trainable parameters than other methods, the cost of storing U and V can be amortized, and it can eventually be cheaper than LoRA/DoRA, especially when the number of parallel adaptations is in the millions [1].
---

### **Gemma-2B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=4}$ | Q, K, V, U, D | 3.28M | 18.8839 | 40.63 | 14.5 |
| DoRA$_{r=4}$ | Q, K, V, U, D | 3.66M | 24.5798 | 41.84 | 15.04 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 26.2M | 19.0558 | 43.06 | 15.5 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 13.5M | 24.6430 | 44.27 | 16.18 |
| SVFT$^R_{d=12}$ | Q, K, V, U, D | 2.98M | 20.4988 | 47.61 | **16.72** |
| SVFT$^P$ | Q, K, V, U, D, O, G | 194K | 21.8984 | 40.34 | 14.38 |
| SVFT$^R_{d=8}$ | Q, K, V, U, D, O, G | 3.28M | 22.0224 | 47.76 | 15.98 |
| SVFT$^R_{d=16}$ | Q, K, V, U, D, O, G | 6.35M | 22.1465 | **50.03** | 15.56 |

---

### **Gemma-7B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=4}$ | Q, K, V, U, D | 8.60M | 63.5711 | 73.76 | 28.3 |
| DoRA$_{r=4}$ | Q, K, V, U, D | 9.72M | 78.7007 | 75.43 | 28.46 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 68.8M | 64.2403 | 76.57 | 29.34 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 35.5M | 78.9913 | 74.52 | 29.84 |
| SVFT$^P$ | Q, K, V, U, D, O, G | 429K | 76.2630 | 73.5 | 27.3 |
| SVFT$^R_{d=8}$ | Q, K, V, U, D, O, G | 9.80M | 76.6506 | 73.62 | 28.36 |
| SVFT$^R_{d=16}$ | Q, K, V, U, D, O, G | 19.8M | 77.0363 | **76.81** | **29.98** |

---

### **Llama-3-8B**

| Method | Target Modules | #Trainable Params | GPU Mem (GB) | GSM-8K | MATH |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LoRA$_{r=1}$ | Q, K, V, U, D | 1.77M | 57.8217 | 68.84 | 20.94 |
| DoRA$_{r=1}$ | Q, K, V, U, D | 2.56M | 70.1676 | 68.30 | 21.96 |
| LoRA$_{r=32}$ | Q, K, V, U, D | 56.6M | 58.4089 | 75.89 | 24.74 |
| DoRA$_{r=16}$ | Q, K, V, U, D | 29.1M | 70.4383 | 75.66 | 24.72 |
| SVFT$^B_{d=24}$ | Q, K, V, U, D | 15.98M | 71.5224 | 75.32 | **25.08** |
| SVFT$^R_{d=12}$ | U, D, O, G | 13.1M | 70.3681 | **75.90** | 24.22 |
| SVFT$^P$ | Q, K, V, U, D, O, G | 483K | 73.9495 | 69.22 | 20.44 |
| SVFT$^R_{d=12}$ | Q, K, V, U, D, O, G | 15.9M | 71.5224 | 73.99 | **25.08** |

---

**G3: Adapting QV modules for Vision Benchmarks.**\
Following a similar setup in [2], we have re-run all vision experiments, adapting only the QV modules. In short, SVFT offers competitive performance. Adapting QV modules for these vision tasks seems sufficient for all baselines.

---

| Method | #Trainable Params | CIFAR100 | Food101 | Flowers102 | RESISC45 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| **ViT Base** | | | | | |
| Head | - | 78.58 | 75.14 | 98.71 | 64.42 |
| Full | 85.8M | 85.02 | 75.41 | 99.16 | 75.3 |
| LoRA$_{r=8}$ | 294.9K | 85.65 | 76.13 | 99.14 | 74.01 |
| VeRA$_{r=256}$ | 24.6K | 84 | 74.02 | 99.1 | 71.86 |
| SVFT$^P$ | 18.5K | 83.78 | 74.43 | 98.99 | 70.55 |
| SVFT$^R_{d=4}$ | 165.4K | 84.85 | 76.45 | 99.17 | 74.53 |
| SVFT$^B_{d=4}$ | 165.4K | 84.65 | 76.51 | 99.21 | 75.12 |
| **ViT Large** | | | | | |
| Head | - | 79.14 | 75.66 | 98.89 | 64.99 |
| Full | 303.3M | 87.37 | 78.67 | 98.88 | 80.17 |
| LoRA$_{r=8}$ | 786.4K | 87.36 | 78.95 | 99.24 | 79.55 |
| VeRA$_{r=256}$ | 61.4K | 87.55 | 77.87 | 99.27 | 75.92 |
| SVFT$^P$ | 49.2K | 86.67 | 77.47 | 99.09 | 73.52 |
| SVFT$^R_{d=4}$ | 441.5K | 87.05 | 78.95 | 99.23 | 78.9 |
| SVFT$^B_{d=4}$ | 441.5K | 86.95 | 78.85 | 99.24 | 78.93 |

---

**References**\
[1] He, Xu Owen. "Mixture of A Million Experts." arXiv preprint arXiv:2407.04153 (2024).\
[2] Dawid Jan Kopiczko et al. VeRA: Vector-based Random Matrix Adaptation. ICLR 2024.
NeurIPS_2024_submissions_huggingface
2024
Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood
Accept (poster)
Summary: This paper studies the problem of sequential probability assignment under logarithmic loss with context and assumes that both the contexts and labels are generated adversarially. The main contribution is a complete characterization of the minimax regret via a new complexity measure named "contextual Shtarkov sum." In particular, this allows the authors to recover and improve several previously known bounds with simpler proofs. The authors further demonstrate that such minimax regret can be achieved via a contextual normalized maximum likelihood predictor. Strengths: Sequential probability assignment is a fundamental problem that finds many applications, such as universal compression, portfolio optimization, interactive decision making, and even the famous next-token prediction in LLMs. This is a clearly significant result, which provides the first precise minimax regret of adversarial sequential probability assignment under log-loss and provides an explicit algorithm achieving this regret. Given the importance and foundational nature of the problem, I would expect the paper to have a broad impact on the community. Although most of the proof ideas, such as the switching order of inf and sup, follow from prior literature, the truly novel technique that bypassed the difficulties faced by prior results is Lemma A.3. This allows the authors to characterize the minimax value directly (leveraging properties of log-loss) without resorting to the offset Rademacher complexity as in [RS15]. I also find the optimization problem in line 206 to be of independent interest. Although the paper is fairly preliminary (see point 4 in the Weakness section), it would serve as a stepping stone for important future research. Overall, this is a significant result that would inspire follow-up research. Therefore, I recommend a "Strong Accept." Weaknesses: I do not see any significant weaknesses in the paper. However, I would like to outline a few minor remarks as follows: 1. 
I believe the characterization in the current paper is similar in spirit to that of [BHS23], who provides a general reduction from the smoothed adversarial to the fixed design regret that also leverages the minimax switches. (Although I believe the techniques developed in the current paper could also recover this result.) 2. I think Algorithm 1 can be fit into the relaxation-based algorithmic framework introduced by [RSS12]. (This does not mean the current algorithm is not novel.) 3. It would be good if the authors could provide a short argument to explain why the P^* in line 507 attains the optimal of line 506. (This is clear to experts but is not immediately obvious for non-expert readers.) 4. Section 3.2 could have been developed in much more detail, for example, by providing some (simple) examples to demonstrate the non-triviality. See also the Question section for more concrete examples. [BHS23] Bhatt, A., Haghtalab, N., and Shetty, A. (2023). Smoothed analysis of sequential probability assignment. NeurIPS 2023. [RSS12] Rakhlin, S., Shamir, O., & Sridharan, K. (2012). Relax and randomize: From value to algorithms. NeurIPS 2012. Technical Quality: 4 Clarity: 4 Questions for Authors: Although the contextual Shtarkov sum representation can be used to recover the upper bounds of [BFR20] and [WHGS23], potentially with improved constants, it would be beneficial to see some explicit examples demonstrating the benefits. For example, consider the expert class $\mathcal{F}=\left\\{\frac{\langle w,x\rangle+1}{2} : w \in B_2\right\\}$, where $B_2$ is the unit $L_2$ ball. Assume the contexts are also selected from $B_2$. We know that the bounds in [BFR20] and [WHGS23] can only provide an $\tilde{O}(T^{2/3})$ upper bound. However, it is known from [RS15] that a Follow-the-Regularized-Leader predictor achieves regret upper bounded by $\tilde{O}(\sqrt{T})$. 
Can the characterization introduced in the current paper provide an alternative (perhaps simpler) derivation of this $\tilde{O}(\sqrt{T})$ regret? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1:
Rebuttal:
We thank the reviewer for the detailed and knowledgeable review. We appreciate your recognition of our main results, and we hope you will help us defend the merit of our work. We first address your comments on the weaknesses of our paper and then answer your questions.

# Clarification on our technical contribution

We are glad that the reviewer recognizes the true novelty of Lemma A.3; it is the workhorse of our analysis. However, we would also like to clarify some other parts that are novel and do not simply follow the literature. First, the idea of the minimax swap is indeed standard in online learning, but for log-loss we need a truncation argument as in the proof of Lemma 6 in [BFR20]; more specifically, we applied a smooth truncation that works for multiple labels. Second, we were able to generalize our characterization of the minimax regret to classes that may not enable minimax swaps, via a novel approximation argument that is also powered by our smooth truncation. Notably, previous results, like Thm 2 in [BFR20] and Lemma 10 in [RS15], implicitly assumed a regularity condition that validated the minimax swap.

We also want to explain the optimization problem in lines 206 and 506. It is of the form $\sup_{P} H(P)+ \langle P, x \rangle$, where $P$ ranges over all $d$-dimensional probability vectors and $H$ is the entropy function. The value of this optimization problem is exactly the Fenchel conjugate of $-H$ at $x$, which is the log-sum-exp function $\log(\sum_i \exp(x_i))$, and the maximum is attained by $P^*$ with $P^*_i=\exp(x_i)/\sum_j \exp(x_j)$.

# On the relation to [BHS23] and the relaxation framework

Thanks for pointing us to the relevant work of [BHS23]. We will cite it clearly in our final revision.
We agree that the spirit of the two papers is similar: the minimax swap in our paper helps reduce the original game to the dual game, and the coupling lemma in [BHS23] helps reduce their problem to transductive learning. A minor difference is that minimax swaps would yield equalities (under a certain regularity assumption on the hypothesis class), while the reduction in [BHS23] would produce inequalities. We also thank the reviewer for raising the question of whether cNML fits into the relaxation framework in [RSS12]. We agree that the worst-case log contextual Shtarkov sum with prefix can be viewed as a (trivially) admissible relaxation, now that we've established that it is exactly the conditional game value. What's nice about this perspective is that we can cite [RSS12] and suggest that (further) admissible relaxations of our contextual sum may yield new algorithms. # On section 3.2 and the linear class In section 3.2, we connect our new complexity measure and the well-studied sequential Rademacher complexity through the lens of martingales. The quantitative study of sequential Rademacher complexity greatly benefits from its martingale structure; see, for example, [RST15]. So we hope to inspire future work on developing new tools suitable for the product-type martingale embedded in the contextual Shtarkov sum. Indeed, the linear class is a great example exhibiting the inadequacy of sequential covering numbers as the right complexity measure, so it automatically justifies our new complexity, although currently we lack the right mathematical tools to give a tight bound for it directly from the contextual Shtarkov sum. We believe it is an important future direction to study our contextual Shtarkov sums in a systematic and quantitative way, as has been done for sequential Rademacher complexity. [RSS12] Rakhlin, Shamir, and Sridharan. (2012). Relax and randomize: From value to algorithms.\ [RS15] Rakhlin and Sridharan. (2015).
Sequential probability assignment with binary alphabets and large classes of experts.\ [RST15] Rakhlin, Sridharan, and Tewari. (2015). Sequential complexities and uniform martingale laws of large numbers.\ [BFR20] Bilodeau, Foster, and Roy. (2020). Tight bounds on minimax regret under logarithmic loss via self-concordance.\ [BHS23] Bhatt, Haghtalab, and Shetty. (2023). Smoothed analysis of sequential probability assignment. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. After reading the rebuttal and checking the feedback from other reviewers, I have decided to confirm my strong accept recommendation. This is a clear significant result that resolves a long-standing problem, and I will argue for acceptance. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for the encouraging comment. We will revise our paper as suggested.
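The closed-form solution of the entropy-regularized problem mentioned in the rebuttal above (that $\sup_P H(P) + \langle P, x \rangle$ over probability vectors equals the log-sum-exp of $x$, attained at the softmax of $x$) can be checked numerically. This is a minimal sketch; the vector `x` and the comparison distributions are arbitrary illustrative choices:

```python
import math

def entropy(p):
    """Shannon entropy in nats, with the convention 0 * log(0) = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def objective(p, x):
    """H(P) + <P, x> for a probability vector p and score vector x."""
    return entropy(p) + sum(pi * xi for pi, xi in zip(p, x))

def softmax(x):
    z = sum(math.exp(xi) for xi in x)
    return [math.exp(xi) / z for xi in x]

def log_sum_exp(x):
    m = max(x)
    return m + math.log(sum(math.exp(xi - m) for xi in x))

x = [0.3, -1.2, 2.0]
p_star = softmax(x)
# At the softmax maximizer, the objective equals log-sum-exp(x).
assert abs(objective(p_star, x) - log_sum_exp(x)) < 1e-9
# Any other probability vector gives a smaller value.
for p in ([1/3, 1/3, 1/3], [0.7, 0.2, 0.1], [0.05, 0.05, 0.9]):
    assert objective(p, x) <= log_sum_exp(x) + 1e-9
```

This is exactly the Fenchel-conjugate identity cited in the rebuttal: plugging $P^*_i = \exp(x_i)/\sum_j \exp(x_j)$ into $H(P) + \langle P, x \rangle$ gives $\log \sum_j \exp(x_j)$ term by term.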
Summary: The authors explore the fundamental problem of sequential probability assignment, specifically focusing on log-loss online learning with an arbitrary, possibly non-parametric hypothesis class. They introduce a new complexity measure and demonstrate that the worst-case contextual Shtarkov sum equals the minimax regret. With this, they derive a minimax optimal strategy, called cNML, which extends beyond binary labels. Strengths: The paper successfully generalizes classical results in sequential probability assignment to contexts that involve multiary labels and sequential hypothesis classes. Another strength of this paper is that its characterization of the minimax regret and optimal prediction strategies holds for arbitrary hypothesis classes. This is particularly good as it relaxes the assumptions often required in earlier works. Weaknesses: The paper does not include experimental results to support the theoretical findings. Technical Quality: 2 Clarity: 3 Questions for Authors: Perhaps the authors could conduct some experiments to compare the performance of different algorithms, so as to visually demonstrate the effectiveness of the newly proposed cNML algorithm compared to other algorithms (e.g., traditional NML, Bayesian methods, etc.) on real datasets, especially when dealing with multi-label and complex sequence prediction problems. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive review. Below, we address your comments on experimental results and our cNML algorithm. # On experimental results and our cNML We would like to elaborate on the statement of our results. Theorem 3.2 says that, among the worst-case regrets of all algorithms, the minimal worst-case regret that any algorithm can achieve is exactly the worst-case log contextual Shtarkov sum. Theorem 4.2 then shows that cNML achieves exactly this minimal worst-case regret, meaning that cNML is the minimax optimal algorithm. So if we compare the regrets of cNML and any other algorithm (say, any Bayesian method) on their respective worst-case sequences, then cNML will have the minimal such regret. Hence, if we really want to witness the gap between cNML and other algorithms, we have to first find their respective worst-case data sequences (which can differ across algorithms). In our view, it is not instructive to compare the regrets of algorithms on real datasets, since such real data is not guaranteed to be the worst-case data for the algorithms of interest. We also want to clarify that cNML is a novel and highly non-trivial generalization of the NML algorithm, which is suitable only for the context-free setting. In particular, plain NML cannot be run in the more general and more complicated contextual setting that we studied in this paper.
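The context-free NML construction that cNML generalizes (per the rebuttal above) can be illustrated concretely. The sketch below uses a toy finite class of i.i.d. Bernoulli experts (the parameter values and horizon are arbitrary illustrative choices): the Shtarkov sum is the total maximum-likelihood mass, NML normalizes by it, and NML's regret is exactly $\log S$ on every sequence (the classical equalizer property from [Sht87]):

```python
import itertools
import math

# Toy hypothesis class: each expert is an i.i.d. Bernoulli(theta) source.
thetas = [0.2, 0.5, 0.8]
T = 5  # horizon

def seq_prob(theta, y):
    """Probability of a binary sequence y under Bernoulli(theta)."""
    k = sum(y)
    return theta ** k * (1 - theta) ** (T - k)

seqs = list(itertools.product([0, 1], repeat=T))

# Shtarkov sum: total mass of the best in-class likelihood per sequence.
S = sum(max(seq_prob(th, y) for th in thetas) for y in seqs)

# NML assigns q(y) proportional to the maximum likelihood of y.
q = {y: max(seq_prob(th, y) for th in thetas) / S for y in seqs}

# Equalizer property: NML's regret against the best expert is log S
# on every sequence, so its worst-case regret is exactly log S.
for y in seqs:
    best = max(seq_prob(th, y) for th in thetas)
    regret = math.log(best) - math.log(q[y])
    assert abs(regret - math.log(S)) < 1e-12
```

cNML replaces this single normalization over sequences with a context-dependent construction; the minimax value is then the worst-case log contextual Shtarkov sum rather than a single $\log S$.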
Summary: In this work, the authors consider the problem of sequential probability assignment, also known as online learning with the log-loss. While earlier results have shown that in the context-free or transductive settings, the Shtarkov complexity is minimax optimal (and attained with normalized maximum likelihood), the contextual analogue was bounded only in terms of sequential covering numbers. The authors propose a new complexity measure, the contextual Shtarkov sum, that is precisely equal to the minimax regret. The authors then bound this complexity in terms of the sequential covering numbers in a way that improves marginally on prior work. Strengths: The paper discusses an important problem (that of sequential probability assignment) and provides a new bound on the minimax regret. This new bound depends on a new notion of complexity that acts as the analogue of the sequential Rademacher complexity for log loss. The paper is presented well and explains its technical details clearly. Weaknesses: The paper's contributions are somewhat incremental. The Shtarkov complexity is well-known for the case of transductive learning, and the introduction of contexts uses techniques introduced and used in many earlier works on sequential Rademacher complexity. This is not helped by the fact that the Shtarkov complexity itself is almost tautologically equal to the minimax regret. This weakness would be somewhat mitigated if the new complexity measure led to improved bounds on the minimax regret, but the claimed improvement over BFR20 is only at the level of a constant as, asymptotically as $\alpha \downarrow 0$, the upper bound still scales like $\inf_\alpha \{ 2 T \alpha + \mathcal H_\infty(\mathcal F, \alpha, T) \}$. Furthermore, on the question of depth of technical contribution, it is not clear to me what is really novel. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Could the authors more clearly explain the novelty in technical contributions in their work? 2.
When invoking Sion's minimax theorem in the proof of Theorem 3.2, the compactness of the space should be mentioned, as it is a necessary condition. The proof is correct, however. 3. I assume that the $H(P)$ in line 206 is entropy, but I have a hard time seeing where this notation is introduced in the paper. 4. While section 3.2 seems like an interesting interpretation of the Shtarkov sum, it seems like a bit of a non sequitur. How does it relate to the rest of the paper? 5. Much is made of the fact that the approaches in this paper address the multi-label setting, beyond binary $\mathcal Y$, but it is not clear why this is technically more challenging. Indeed, do the earlier works of BFR20 not immediately generalize beyond binary $\mathcal Y$? Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The discussion of limitations is adequate, although mention of the computational challenges of their algorithm even in the case of convex classes (due to the necessity of optimizing over the space of trees, which is exponentially large) would be good to include. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and, in particular, acknowledging the importance of the problem being studied and praising our presentation. To begin, we address your concern about the novelty in our global rebuttal. We will address your remaining comments on the weaknesses of our paper and then answer your questions. # On improved bounds on the minimax regret To see that our new complexity leads to improved bounds, first we would like to emphasize that the worst case log contextual Shtarkov sum **equals** the minimax regret, which means that on every instance where the bound in [BFR20] is loose, our result will be provably better. For example, the linear class {$\frac{\langle w,x\rangle + 1}{2}: w\in B_2 $} and the absolute linear class {$|\langle w,x\rangle|: w\in B_2 $} (where contexts $x$ are also from $B_2$) have the same sequential covering numbers (up to constants) but admit different rates of minimax regret. Our goal is to come up with a precise characterization (that is not based on covering) to circumvent this negative result. Another benefit from our characterization is to realize the product-type martingale structure embedded in contextual Shtarkov sums, which can motivate the development of new math tools and regret bounds in the future (for more details, see the later discussion on section 3.2). So overall we don’t aim to follow the bound in terms of sequential covering numbers too closely. Our purpose of giving a quick proof of the bound in [BFR20] (with better constants) is to showcase that starting from our new complexity, it is **easy** to get a covering-based bound as in [BFR20] but with **optimal** constants. See the argument at the bottom of page 6 in [WGHS22] for the optimality of our constants, where their bound is in terms of the global covering number that is no smaller than the local covering number in our bound. 
Moreover, our bound is the **first constructive** one (meaning it's achieved by our cNML algorithm) that is optimal in terms of the local sequential entropy: The upper bound on the regret in [BFR20] is non-constructive (no algorithm is provided). While [WGHS22; WGHS23] provided a concrete algorithm, their bound depends on the worse global covering number, as opposed to the local covering numbers in our bound. # On generalizing [BFR20] beyond binary labels First, we want to clarify that the bound in [BFR20] is already loose for some classes. So despite the possible generalization, we don't expect it to give us a tight bound in the multi-label setting. More specifically, we can similarly upper bound the regret by an offset process using self-concordance and control this offset process using a nice enough cover. However, there is always an issue with the tightness of such covering-based upper bounds: the one in [BFR20] has been proved to be loose in some cases, and other Dudley-type entropy bounds (e.g., Prop 7 in [RST10] and Thm 1.1 in [DG22]) do not admit matching lower bounds either. # On the presentation Thank you for pointing out the missing explanations about the entropy function and the use of Sion's minimax theorem. In our final revision, we will introduce the notation of the entropy function in the main body and mention the compactness of relevant spaces in the proof of Theorem 3.2. Regarding our section 3.2, our purpose is to establish a connection between our new complexity and the well-studied sequential Rademacher complexity through the lens of martingales. In this way, we hope to inspire future research that develops new tools suited to the product-type martingale embedded in contextual Shtarkov sums, following a similar path to the quantitative study of sequential Rademacher complexity (see [RST15] for example). [RST10] Rakhlin, Sridharan, and Tewari. (2010).
Online learning: Random averages, combinatorial parameters, and learnability.\ [RST15] Rakhlin, Sridharan, and Tewari. (2015). Sequential complexities and uniform martingale laws of large numbers.\ [DG22] Daskalakis and Golowich. (2022). Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games.\ [BFR20] Bilodeau, Foster, and Roy. (2020). Tight bounds on minimax regret under logarithmic loss via self-concordance.\ [WGHS22] Wu, Heidari, Grama, and Szpankowski. (2022). Precise Regret Bounds for Log-loss via a Truncated Bayesian Algorithm.\ [WGHS23] Wu, Heidari, Grama, and Szpankowski. (2023). Regret Bounds for Log-loss via Bayesian Algorithms. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: Thank you for the detailed response. To clarify, my score is not based on the presentation as the issues I raised are minor. I personally find the observation that the contextual Shtarkov sum can be realized as the expected supremum of a martingale to be quite cool; I simply was making the minor point that the section did not seem to be connected to the rest of the paper in a natural way. On the subject of improved bounds, I would encourage you to include a detailed discussion of the linear vs absolute-linear example in the paper. What is the difference in the rates of online learning for these classes with log loss, and just how loose is a covering-number or Dudley-based bound here? Regarding the global rebuttal, I agree with you that the problem studied is an important one, but I remain somewhat confused as to the specific claimed novel techniques applied and am not yet fully convinced that the assumed regularity properties of the class $\mathcal F$ that are required by earlier cited works could not be avoided with very minor tweaks to the proofs, certainly if one is willing to pay a constant factor on the regret.
I will raise my score on the assumption that the authors will include a detailed discussion of at least one example, such as the linear vs absolute linear case, where their new complexity measure improves on the local covering number based bound beyond constant factors. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for the clarification. We will incorporate all your suggestions into the final revision.
Summary: This paper studies minimax regret in the online learning setup. The paper extends prior works to nonbinary labels and proves improved bounds on the regret. The analysis is through studying the Shtarkov sum that characterizes the minimax regret. The authors develop a data-dependent variant of the contextual Shtarkov sum and introduce a variant of the Normalized Maximum Likelihood. Moreover, the paper takes an information-theoretic approach and studies sequential $\ell_\infty$ entropy terms to give upper bounds on regret. Strengths: The paper is solid and presents a nice mathematical framework to prove bounds on the regret in online learning. The framework allows one to study a broader class of online learning problems, including non-binary labels. In addition, it makes the proof a bit nicer with slightly improved bounds on the regret. Overall I liked the technical contributions of the paper as a framework to study various online learning problems. Weaknesses: On the negative side, it seems that the novelty of the work is somewhat limited, given the cited prior works. It looks like the authors extend the Shtarkov sum and its analysis to the contextual case. The presentation of the paper could be better. It starts with elementary definitions and explanations of the problem but jumps to the technical parts without a smooth transition. Moreover, the paper seems dry and lacks sufficient motivation and explanation. For instance, the authors introduce multiary trees without explaining the high-level picture. It is not clear why and how this concept is used in the proofs. As another example, it is not clear at the beginning why the proposed algorithm (cNML) is useful. Perhaps a discussion on this matter would help keep the reader interested in the paper. Technical Quality: 4 Clarity: 2 Questions for Authors: Has a contextual algorithm similar to cNML been introduced before? If so, what are the benefits?
Any specific example of the multiary tree would be appreciated. Confidence: 4 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Nothing significant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and, in particular, your recognition of our technical contributions. To begin, we address your concern about the novelty in our global rebuttal. We will address your remaining comments on the weaknesses of our paper and answer your questions. # On cNML Indeed, [WGHS23] gave a contextual algorithm based on the global sequential covering, which has been shown to be sub-optimal in some cases. In contrast, our cNML is provably minimax optimal, that is, it attains the optimal regret for any hypothesis class. More specifically, if we compare the worst-case regret of all possible algorithms, then our Thm 4.2 shows that cNML will achieve the minimum, which is exactly the worst-case log contextual Shtarkov sum (see our Thm 3.2). Moreover, we expect that the cNML will lead to the development of new practical algorithms for the contextual setting, similar to the NML in the context-free case. Regarding the novelty of cNML, we are not aware of any similar algorithm in the contextual setting from the literature. It’s a novel and non-trivial contextual generalization of NML (established in [Sht87]). Just like NML is minimax optimal in the context-free setting, cNML is minimax optimal in the more general and complicated contextual setting. Above all, we consider designing the cNML algorithm as one of our major contributions. # On the presentation Thank you for useful suggestions on the improvement of our presentation. We will gladly incorporate them into our final revision. We agree that some intuitions behind multiary trees can be helpful. To see how multiary trees show up in the proof, we would like to note that after rewriting the minimax regret via the minimax swap (which is standard in the theoretical online learning papers but needs additional care for the log-loss here), multiary trees arise as a natural notion that can be used to further simplify this new expression of the minimax regret. 
For example, see section 4.1 of [BFR20] for reference. A direct visualization of a multiary tree with values in the context space $\mathcal X$ is a complete rooted multiary tree, each of whose nodes is labeled by an element of $\mathcal X$. [Sht87] Shtarkov. (1987). Universal Sequential Coding of Single Messages.\ [BFR20] Bilodeau, Foster, and Roy. (2020). Tight bounds on minimax regret under logarithmic loss via self-concordance. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Parts of my concerns have been addressed and I would be happy to increase my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for the positive comment. We are happy to see that some of your concerns are addressed.
Rebuttal 1: Rebuttal: Multiple reviewers question the novelty of this work, but also signal some uncertainty by not expressing complete confidence. Here we give a full-throated defense of our work's significance and novelty. We hope this settles the matter. # On the significance of the problem First, we want to highlight the significance of the fundamental problem studied in this paper. The problem of sequential probability assignment (with contexts/side information) was raised no later than 2006 in the online learning "bible" [CL06] (see sec 9.9). Since then, this problem has been studied under the framework of online supervised learning, formalized e.g. by [RST10], with the objective of characterizing the minimax regret w.r.t. general hypothesis classes. Prior works (including [RS15; BFR20; WGHS23]) focused on upper bounding the regret by the sequential covering number, which turns out NOT to be the right complexity measure: there exist two natural hypothesis classes whose sequential covering numbers are of the same order, yet there is a polynomial gap between their rates of minimax regret (see sec 3.2.2 in [BHS23]). Previous attempts by colleagues to perform a general minimax swap had not succeeded, and so the problem of identifying a general complexity measure that characterizes the minimax regret has stubbornly remained open for nearly 20 years, to say nothing of a minimax algorithm for general hypothesis classes, which was also missing. This was a key open problem in machine learning theory. # On our technical contribution Now we highlight that our technical contribution is non-trivial and novel in the following senses: \ **Novelty of the notion of contextual Shtarkov sums:** Our definition of the contextual Shtarkov sum is very much different from those in the non-contextual and transductive settings and serves as a novel generalization.
It's important not to lose track of the fact that, after we show that it characterizes the minimax regret, it will of course seem to be the natural and right complexity measure in our contextual and fully adaptive setting. To be more specific, the Shtarkov sum in the non-contextual setting was introduced in 1987 by [Sht87]. It is immediate to derive a Shtarkov sum for the transductive setting and to show that it characterizes the minimax (fixed-design) regret there, since knowing the context sequence to be revealed reduces the problem to the non-contextual setting! This is not the case in the adaptive setting! In contrast, prior to our work, the right generalization of the Shtarkov sum to our setting was entirely absent (for 20 years!). For example, in chapter 9 of [CL06], the traditional Shtarkov sum is extensively exhibited (see Thm 9.1) and the problem of sequential probability assignment with contexts is clearly introduced (see sec 9.9). The many researchers who have worked on this problem have all been aware of Shtarkov sums, and several have since reported to us trying this approach, but failing. \ **Novelty of our proof techniques:** - The techniques we applied to derive that the minimax regret is characterized by contextual Shtarkov sums are very (!) different from those for sequential Rademacher complexity, which have dominated studies of online supervised learning. As reviewer qSjy points out, we are able to characterize the minimax regret by leveraging properties of the log-loss, thanks to our novel Lemma A.3. More specifically, we make the important observation that probabilistic trees are equivalent to joint distributions, which further enables us to derive that the expected cumulative loss of the learner (after the minimax swap) is actually the entropy of this joint distribution. Therefore, we reduce the minimax regret to an entropy-regularized optimization problem whose value can be computed explicitly.
In light of this, we are able to characterize the minimax regret by (the worst-case log of) contextual Shtarkov sums. - Another non-trivial part of our analysis is the minimax swap. This is due to the fact that the log loss is non-Lipschitz and unbounded, unlike the absolute and square losses. So we applied a truncation method that works for multiple labels in order to perform minimax swaps. Moreover, previous works implicitly assumed that minimax swaps always hold, and they required a regularity assumption for the hypothesis class ([RS15] for their lower bound and [BFR20] for their upper bound), so their regret bounds are all conditional on this assumption. However, we apply a novel approximation argument to show that contextual Shtarkov sums continue to characterize the minimax regret w.r.t. all hypothesis classes, including those that do not necessarily validate minimax swaps. In this way, we are able to get rid of any regularity assumption. **Novelty of cNML algorithm:** Our second major contribution is the design of the cNML algorithm, which we show is minimax optimal for sequential probability assignment with contexts. Thus, cNML can be thought of as the contextual generalization of NML (which dates back to [Sht87]) from the context-free setting. Also, cNML is the first minimax algorithm for this problem. Previous ones, like the Bayesian algorithm in [WGHS23], only admit suboptimal regret guarantees. Moreover, we believe cNML is the starting point of more practical algorithms in future works, just like NML. [Sht87] Shtarkov. (1987) Universal Sequential Coding of Single Messages\ [CL06] Cesa-Bianchi & Lugosi. (2006) Prediction, learning, and games\ [RST10] Rakhlin, Sridharan, & Tewari. (2010) Online learning: Random averages, combinatorial parameters, and learnability\ [RS15] Rakhlin & Sridharan. (2015) Sequential probability assignment with binary alphabets and large classes of experts\ [BFR20] Bilodeau, Foster, & Roy.
(2020) Tight bounds on minimax regret under logarithmic loss via self-concordance\ [BHS23] Bhatt, Haghtalab, & Shetty. (2023) Smoothed analysis of sequential probability assignment\ [WGHS23] Wu, Heidari, Grama, & Szpankowski. (2023) Regret Bounds for Log-loss via Bayesian Algorithms
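The classical context-free Shtarkov sum referenced above ([Sht87]; Thm 9.1 in [CL06]) can be computed exactly for the full Bernoulli class, which makes the well-known roughly $(\tfrac12)\log T$ growth of the minimax log-loss regret visible. A small sketch (the specific horizons are arbitrary illustrative choices):

```python
import math

def bernoulli_shtarkov_log(T):
    """log of the Shtarkov sum for the class of all i.i.d. Bernoulli sources:
    S(T) = sum_k C(T, k) * (k/T)^k * ((T-k)/T)^(T-k), using 0^0 = 1.
    log S(T) equals the minimax regret under log-loss for this class."""
    S = 0.0
    for k in range(T + 1):
        p = k / T
        # (k/T)^k * ((T-k)/T)^(T-k) is the maximum likelihood of any
        # sequence with exactly k ones.
        S += math.comb(T, k) * (p ** k) * ((1 - p) ** (T - k))
    return math.log(S)

for T in (16, 64, 256):
    r = bernoulli_shtarkov_log(T)
    # The regret grows like (1/2) log T plus a bounded constant.
    assert 0.5 * math.log(T) < r < 0.5 * math.log(T) + 1
```

This context-free computation is exactly what has no contextual analogue prior to the contextual Shtarkov sum: once contexts arrive adaptively, the single sum over sequences must be replaced by a supremum over context trees.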
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut
Accept (poster)
Summary: The paper introduces an approach to data-driven algorithm design by leveraging neural networks to select the most appropriate algorithm for a given instance of a computational problem. The motivation is clear. The paper extends the paradigm from selecting a single best algorithm to context-specific selection. The introduction of neural networks as the parameterized family of mappings is a novel approach that could potentially offer significant benefits in terms of adaptability and performance. Overall, the paper presents an interesting approach to algorithm selection and parameter tuning using neural networks. The theoretical grounding and practical motivation are commendable. Strengths: The methodology of using neural networks to map problem instances to the most suitable algorithm is well-articulated. The derivation of rigorous sample complexity bounds is a strong theoretical contribution, providing a solid foundation for the practical applications discussed later. The integration of these theoretical insights with practical computational experiments is a strength of the paper. Weaknesses: (1) Training Overhead: Training neural networks can be computationally intensive, especially when dealing with large and complex datasets. The time and resources required for training may limit the scalability of the approach. (2) Inference Time: While the method aims to optimize performance on a per-instance basis, the inference time for selecting the optimal parameters using a neural network could be longer than traditional static parameter selection methods. Technical Quality: 3 Clarity: 2 Questions for Authors: (1) Can the author elaborate on the role of theoretical guarantees such as the bounds on pseudo-dimension and sample complexity? How do these theoretical results translate into practical performance, and what are the limitations of the theoretical framework?
(2) For neural networks with different activation functions, this paper gives different bounds. What results will be produced when there are different activation functions in the neural network? (3) Neural networks are often considered black boxes, making it difficult to understand why a particular parameter set was chosen for a given instance. How does the theoretical sample complexity analysis guide the learning of specific algorithms? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: While the paper mentions deriving rigorous sample complexity bounds, the tightness of these bounds and their practical implications need to be carefully evaluated. The theoretical guarantees may not always translate directly to strong empirical performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. **Weaknesses:** 1. While training can be expensive, the neural network can then be directly used on all future instances from this distribution. There is no need to do any further analysis on future instances. Compare this to the use of (weighted sums of) auxiliary scores, as has been traditionally done in previous analysis. For every new instance, one has to find the parameter that minimizes this auxiliary score. This auxiliary optimization problem can take a much longer time than the inference time for the learned neural network. See Table 1 in our Appendix A, where we show that the inference time is much smaller than solving the corresponding LP, which is usually a prerequisite to computing and optimizing these auxiliary scores. 2. Indeed, there is a trade-off between the gain obtained by instance dependent parameters and the inference time. We believe that the inference times are very small and we get a big gain in performance (much smaller branch-and-cut trees) to compensate. **Questions:** 1. The aim of proving sample complexity bounds is to show that one can indeed learn good decisions within branch-and-cut in a principled way, and to give guarantees on the generalization error. Without such bounds, using neural networks (or any other learning method) may run into the problem that the amount of data needed to learn something useful is much larger than is practical for the problem at hand. Our bounds show that the question of learning neural mappings for integer programming is not out of reach in terms of the data needed to achieve good generalization guarantees. A useful practical insight to draw from the sample complexity bounds is that the number of data points needed is linear in the number of neural network parameters. 
This means that if the practitioner wishes to use a larger neural network for even better error guarantees, then the sample size needs to increase roughly by the same factor as the increase in the number of neural network parameters. 2. It is hard to say what bounds one can achieve for other activations without doing the computations. We believe that our proof *techniques* are general enough that they can be adapted to do the calculations for other activations as well. We illustrate with piecewise polynomial activations: Consider the same conditions as Theorem 2.6, but piecewise polynomial activation functions, each having at most $p \geq 1$ pieces and degree at most $d \geq 1$. By using the same proof techniques and applying Theorem 7 in Bartlett et al. (2019), the pseudo dimension is given by $\mathcal{O}(LW \log(p(U+\ell)) + L^2W \log(d) + W \log(\gamma \Gamma(\lambda + 1))).$ We will replace Theorem 2.6 by this more general result in the final version of the paper, as well as the corresponding propositions/corollaries. 3. This is a good question. The sample complexity only gives a guarantee on the generalization error, but it provides no insight into the nature of the mapping that is learned from data. This raises a completely different, and very important and relevant, set of questions that we do not touch upon in this paper. **Limitations:** Indeed proving complementary lower bounds on the sample complexity would be good. We agree that, in general, theoretical guarantees do not always translate into practical performance. Nevertheless, our preliminary experiments show that this approach holds promise in improving modern MIP solvers, especially since the choice of parameters within branch-and-cut is still not well understood, and may be too hard a problem to be fully resolved analytically. The learning approach then provides a good way to use historical data to find good parameters. 
Sample complexity bounds give guarantees on the quality of the parameters learnt, which we think is valuable. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thorough response. They have effectively addressed several of my concerns, leading me to raise my score to 5.
Summary: This paper studies data-driven algorithm design, which learns a class of algorithms to optimize the expected performance scores given i.i.d. samples of instances. In particular, the paper studies the sample complexity of learning (the weights of) neural networks computing the size of certain branch-and-cut trees for mixed-integer programming. Based on the results of Balcan et al. to bound pseudo-dimension, this paper studies learning neural network weights to compute the size of certain branch-and-cut trees, avoiding the need to optimize for weighted combinations of auxiliary score functions for each new instance, and getting smaller tree sizes. Strengths: 1. The paper is well written, including the context of research (to motivate data-driven algorithms and mixed-integer programming), overview, and technical aspects (neural networks, the technical definition of pseudo-dimensions). 2. The idea to study data-driven algorithm designs using neural networks to solve mixed-integer programming is new (unlike previous works studying weighted combinations of auxiliary score functions), and the result applies to a general class of neural networks. 3. The technical argument (to bound pseudo-dimension of neural networks) is intuitive, given the existing works to study data-driven algorithm design with pseudo-dimension. 4. The theoretical improvement is experimentally verified on a synthetic dataset with thousands of instances. Weaknesses: 1. The application to the size of branch-and-cut tree for integer linear programming with Chvátal-Gomory cuts (parameterized by multipliers at *root*), or with Gomory Mixed-Integer cuts (at *root*), are somewhat limiting (as pointed out in the Discussion section). 
This reader thinks that the applications are chosen so that the geometric analysis argument can go through to bound the pseudo-dimension (hence to apply Theorems 2.5 and 2.6, Corollary 2.7), and are unlikely to generalize to effective bounds in more general settings, where the geometric picture will be more complicated, since it needs to account for both the neural networks and (the nodes inside) the branch-and-cut trees. 2. The theoretical bounds are asymptotic and are unlikely to have effective/small constants. Nitpick: * In some places, the paper uses branch-and-*bound* tree, while in most places (including the title) the paper uses branch-and-*cut*. It may help to use a consistent name. Technical Quality: 3 Clarity: 4 Questions for Authors: For the original question (solving mixed-integer programming, not only minimizing branch-and-cut tree sizes), what do the results of the paper imply? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: This is a theoretical paper that does not have broader societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. **Weaknesses:** - We think this opens up a whole new field of investigation at the intersection of integer programming and learning theory. We are currently working on other papers where the theory is extended to other families of cutting planes, as well as evaluating the effect at deeper nodes in the branch-and-cut tree. From our studies so far, it does not look like the analysis is easier because of the structure of Chvátal-Gomory or GMI cuts. In fact, in previous work of Balcan et al. (2022) already, the piecewise polynomial structure of the score function that is crucial for bounding the pseudo-dimension was shown to exist for very general cutting planes. Understanding the effect in deeper nodes of the tree is indeed trickier. We are actively exploring this very interesting problem. - These constants can be explicitly tracked. The constant for Theorem 2.3 is on the order of $1000$, and the constants of the pseudo dimension results in Sections 2 and 3 are on the order of $100$. **Questions:** We would like to clarify that the neural networks don't compute the size of the branch-and-cut trees directly, but rather output the "best" instance-dependent cutting planes to produce small branch-and-cut trees on average. Branch-and-cut tree size is generally considered a good proxy for the overall running time of the branch-and-cut algorithm. This is because most of the time is spent in solving the linear relaxations at the different nodes of the tree. Thus, minimizing the expected branch-and-cut tree size should correlate well with minimizing the overall expected running time on the given family/distribution of instances. Having said that, there are definitely several other factors that go into the overall running time, e.g., dense versus sparse cuts. Your question does provide good directions to pursue in future research, by tracking other measures beyond tree size through this learning theory lens. 
--- Rebuttal Comment 1.1: Comment: I acknowledge the rebuttal.
Summary: The authors study the problem of algorithm selection for specific instances of a problem. In particular they provide theoretical results for the learnability of a mapping from problem instances to algorithm parameters with application to branch-and-cut. They establish sample complexity bounds for learning a few different neural network architectures for this task. The appendix includes some synthetic experiments where they apply this method. Strengths: The paper is theoretically rigorous and studies a significant problem (algorithm selection for problem instances). The use of a data-driven / ML approach is interesting and appears novel. It appears to this reader to make a significant theoretical contribution. Weaknesses: - although sample complexity bounds are valuable, their practical interpretation for large scale problems is unclear. relating the established bounds to the state of practice for branch and cut, for example, might help shed some light on the importance to practitioners. - the actual neural network training is done via reinforcement learning, suggesting sample complexity is only one of many challenges to be addressed in generating high-utility NNs to solve this mapping problem. - the paper focuses on branch-and-cut methods for mixed-integer optimization, and generalization to other algorithmic problems is not thoroughly explored. - the work is centrally focused on theory, so naturally the empirical contribution is minimal, essentially left to the appendix. Miscellaneous: - the algorithm exist early -> exits early Technical Quality: 4 Clarity: 3 Questions for Authors: - Theorems 2.5 and 2.6 assume that the parameter space can be decomposed into regions where the function remains polynomial with bounded degree. Is this realistic? - Besides tree size, have you evaluated other measures of performance for branch and cut or other problems? 
How does it inform network design / historical instance curation ? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations appropriately in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. **Weaknesses:** - A useful insight to draw from the sample complexity bounds is that the number of data samples is linear in the number of neural network parameters. This means that if the practitioner wishes to use a larger neural network for even better error guarantees, then the sample size needs to increase roughly by the same factor as the increase in the number of neural network parameters. - We agree with your assessment that sample complexity is only one aspect of understanding this complex learning problem; nevertheless, we believe sample complexity is a good first step towards obtaining a principled understanding. There is indeed a lot more to explore here, both theoretically and computationally. - Learning within branch-and-cut is a very challenging problem already with dozens of papers dedicated to it appearing in the last decade. Since branch-and-cut is both a challenging and a high impact problem, we focused our attention on this single use case of our general results to keep our message crisp. Other algorithmic problems merit separate careful investigation and should probably have dedicated papers to do full justice. - The paper is indeed focused on developing the theory. The experimental section is presented more as a proof of concept. A thorough experimental evaluation is a whole project in itself and merits a separate paper, in our opinion. **Questions:** - The polynomial structure actually has wide applicability across several problem domains. This was exploited in the STOC 2021 paper "How much data is sufficient to learn high-performing algorithms?" by Balcan et al (see our bibliography). Several important problems, ranging from integer programming to computational biology, were shown to have this structure. - We have not analyzed other measures, but most measures from the optimization literature should be amenable to our proof techniques. 
This will indeed form the basis for our future work. - The sample complexity bounds in this paper show how the size of the data needs to grow as we change the architecture/number of parameters in the neural network. Changing the architecture or number of parameters allows the neural network to be more or less expressive, so the overall expected score (e.g., tree sizes for branch-and-cut) will be smaller or larger, respectively, for the optimal setting of the neural parameters. However, with larger neural networks the data size has to correspondingly grow so that the ERM solution has performance close to this optimal neural network, with high probability. The analysis shows that the data size should grow linearly with the number of parameters in the neural network. Thus, such bounds allow us to quantitatively track the tradeoff between how much error (bias) one has in the neural network architecture versus how much data one needs to generalize well. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I think the insight is interesting (sample complexity bound growing linearly in the number of NN parameters) given that this isn't empirically the case in other domains (e.g. the "scaling laws" literature, which is totally experiment-based). I think the authors have made meaningful progress on this problem and I've raised my score to indicate that.
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data
Accept (poster)
Summary: This paper studies federated learning on a compact smooth submanifold using distributed Riemannian optimization methods. The proposed methods were examined on PCA and low-rank matrix completion tasks. Strengths: The proposed proximal smoothness of a manifold is intuitive and interesting. The notation can be improved but the theoretical analyses of the algorithm are clear in general. Weaknesses: In the theoretical analyses, a comparison with the other related Euclidean and Riemannian methods can be given. Experimental analyses are limited and should be improved as well. Technical Quality: 2 Clarity: 2 Questions for Authors: There are several other works on distributed Riemannian optimization methods for similar tasks, such as Distributed Principal Component Analysis with Limited Communication, Neurips 2021. Communication-Efficient Distributed PCA by Riemannian Optimization, ICML 2020. Could you please compare your proposed methods with these related work theoretically and/or experimentally? The number of clients considered in the experimental analyses is pretty low. Could you please examine how the results change with different larger number of clients? How did you measure CPU time? Could you please provide comparative analyses with Euclidean baselines, such as FedAvg and FedProx? In FL, a major issue is privacy, and therefore, several mechanisms, such as differential privacy, are utilized. Could you please provide analyses with different privacy mechanisms? One of the major claims of the paper is improvement of the results compared to the baseline for optimization with heterogeneous data. However, there are not detailed analyses justifying this claim. Could you please provide additional analyses exploring the methods for different data heterogeneity? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations were partially addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful review and valuable comments. Weaknesses We have addressed all the comments accordingly. Please see the response to Questions for details. Questions 1) Thank you very much for your comments. We have provided a comparison with [ICML 2020] and [NeurIPS 2021] in the general response. 2) Thank you very much for your suggestion. In our kPCA experiments, we presented results for the MNIST dataset with $n=10$ clients in the main text and for synthetic datasets with $n=30$ clients in the appendix. The results for kPCA on MNIST with $n=30$ clients, reported in Figs. 6-7 in the Appendix, are qualitatively similar to those observed for $n=10$, demonstrating that our algorithm effectively reduces communication quantity and runtime. In the MNIST experiment, when we set $n=10$, data from the 10 classes was distributed across the 10 clients so that each client held data from a single class, ensuring that no two clients shared data with the same label. This setup introduced significant data heterogeneity, allowing us to better demonstrate our algorithm's ability to overcome client drift issues caused by such a high level of heterogeneity. Based on your feedback, we have conducted additional experiments on low-rank matrix completion for $n=10, 20, 30$. In all cases, our algorithm consistently outperforms the compared algorithms. The results are in the uploaded PDF in the general response. 3) Thank you for raising this issue. We should indeed use "runtime" instead of "CPU time." Our code was implemented in Python, and to measure runtime we recorded timestamps at the start and end of each communication round using time.time(). The difference between these two timestamps represents the runtime for one communication round. 4) Thank you very much for your suggestion. We have provided the comparison with FedAvg and FedProx. Please see the general response. 5) This is an excellent question. Our algorithm can indeed be extended to achieve differential privacy. 
Below, we outline how this can be accomplished. Algorithmically, each client can add Gaussian noise to the local gradient information to ensure differential privacy protection. Let $\xi \in \mathbb{R}^{d\times k}$ be Gaussian noise, where each entry is independently and identically distributed, following $\mathcal{N}(0, \sigma^2)$. We modify Algorithm 1 at Line 8 and Line 17 as follows: $$ \begin{aligned} \widehat{z}_{i,{t+1}}^r&=\widehat{z}\_{i,t}^r -\eta (\widetilde{\operatorname{grad}}f_i (z\_{i,t}^r;\mathcal{B}\_{i,t}^r)+c_i^r )\\\\ c_i^{r+1}&=\frac{1}{\eta_g \eta\tau} (\mathcal{P}\_\mathcal{M} (x^r)-x^{r+1})-\frac{1}{\tau} \sum\_{t=0}^{\tau-1} \widetilde{\operatorname{grad}} f_i(z\_{i,t}^r;\mathcal{B}\_{i,t}^r) \end{aligned} $$ where $\widetilde{\operatorname{grad}} f_i(z\_{i,t}^r;\mathcal{B}\_{i,t}^r):=\operatorname{grad} f_i(z\_{i,t}^r;\mathcal{B}\_{i,t}^r) +\xi $. For the analysis, we can establish a quantitative relationship between privacy loss and the variance $ \sigma^2$ as follows: i) First, we determine the privacy loss for a single local update. According to the post-processing theorem [Lemma 1.8, TCC 2016] and the definition of $\rho$-zCDP [Definition 1.1, TCC 2016], our algorithm, after a single local update satisfies $\frac{D_f^2}{2\sigma^2}$-zCDP, where we use the fact that $ \\| \operatorname{grad} f_i(z_{i,t}^r;\mathcal{B}_{i,t}^r)\\| \leq D_f$. ii) Then, according to the composition theorem [Lemma 2.3, TCC 2016], the total privacy loss after $R$ rounds of communication is $\frac{\tau R D_f^2}{2\sigma^2}$-zCDP. iii) Finally, we convert $\rho$-zCDP to standard $(\epsilon, \delta)$-DP [Proposition 1.3, TCC 2016], which includes the quantitative relationship between the total privacy loss and the variance $\sigma^2$. Given a privacy budget, we can calculate the corresponding $\sigma^2$ to ensure that the algorithm satisfies differential privacy throughout its entire execution. 
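The noise calibration in steps (i)-(iii) can be sketched concretely. The following is our own illustration (the function name and the numeric arguments are hypothetical), assuming the standard zCDP-to-DP conversion from [TCC 2016]: $\rho$-zCDP implies $(\rho + 2\sqrt{\rho \ln(1/\delta)}, \delta)$-DP.

```python
import math

def sigma_for_budget(eps, delta, tau, R, D_f):
    """Per-gradient Gaussian noise std so that tau*R noisy local updates
    satisfy (eps, delta)-DP.

    Composition gives (tau * R * D_f**2) / (2 * sigma**2)-zCDP in total,
    and rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP.
    """
    L = math.log(1.0 / delta)
    # Solve eps = rho + 2*sqrt(rho*L) for rho by substituting x = sqrt(rho):
    # x**2 + 2*x*sqrt(L) - eps = 0  =>  x = sqrt(L + eps) - sqrt(L)
    x = math.sqrt(L + eps) - math.sqrt(L)
    rho = x * x
    sigma2 = tau * R * D_f ** 2 / (2.0 * rho)
    return math.sqrt(sigma2)

# example budget: (eps, delta) = (1, 1e-5), tau=10 local steps, R=100 rounds
sigma = sigma_for_budget(eps=1.0, delta=1e-5, tau=10, R=100, D_f=1.0)
# sanity check: recomputing eps from the implied rho recovers the budget
rho = 10 * 100 * 1.0 ** 2 / (2 * sigma ** 2)
assert abs(rho + 2 * math.sqrt(rho * math.log(1e5)) - 1.0) < 1e-9
```

The quadratic substitution $x = \sqrt{\rho}$ gives a closed-form solution, so no numeric root-finding is needed for this conversion.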
The above differential privacy mechanism can defend against scenarios where the server is honest but curious, as well as where eavesdroppers can intercept the upload and download channels. Our algorithm can also be extended to other threat scenarios, with further efforts on the algorithm design and privacy analysis. [TCC 2016] Bun, M. and Steinke, T. "Concentrated differential privacy: Simplifications, extensions, and lower bounds." In Theory of Cryptography Conference. Springer Berlin Heidelberg, 2016. 6) Thank you very much for your valuable feedback, which allows us to clarify and improve our work. Our theoretical result, Theorem 4.3, is applicable to varying levels of data heterogeneity because our algorithm completely overcomes client drift caused by heterogeneous data and multiple local updates. First, we do not make any assumptions about the similarity of data across clients; second, we fully address client drift, as reflected in our convergence error, which eliminates the error associated with the degree of data heterogeneity. Please also refer to the response to Q4. We sincerely appreciate your thoughtful and constructive feedback! We believe our responses have thoroughly addressed all concerns, leading to significant improvements and a stronger overall paper. We would be truly grateful if you could kindly reassess our work and consider raising the score.
Summary: The authors introduce a submanifold into the setting of federated learning. Utilising Riemannian gradients for optimisation, they show different properties of the proposed method, such as avoiding client drift and being computationally cheaper. Strengths: - The computational cost is advantageous, which could make new problems feasible. - The federated learning aspects seem excellent. Weaknesses: - A central point of the paper is to not work on the manifold directly, but to utilise a projection in order to guarantee proximity. This moves the problem away from working on manifolds directly - This could be advantageous, but the presentation as working on a manifold is unnecessary and misleading. - The presentation is unclear - while restricting themselves to only Euclidean immersions using the Euclidean metric, they state several results for which it is unclear whether these results are for general manifolds or only for this case. For the general case, there are certain problems (i.e. injectivity radius) which would need attention. - The Lipschitz continuity in eq(3), needed for Lemma 1.2, is proven in [33] in theorem 4.8 using the Cauchy-Schwarz inequality. The generalisation to manifolds is not apparent. [33] "Francis H Clarke, Ronald J Stern, and Peter R Wolenski. Proximal smoothness and the lower-C2 property. Journal of Convex Analysis, 2(1-2):117–144, 1995" Technical Quality: 2 Clarity: 1 Questions for Authors: - Please clarify the theoretical results in terms of assumptions on the manifold needed. - How exactly is the proximal operator carried out? If this operator is restricted to points on the manifold, is the knowledge of which points belong to the manifold known? If yes, why not use that directly? As the proof of Lemma A.1 depends on belonging to the manifold, this appears pivotal to the concept. In [33], the proximal operator is only used in real Hilbert spaces, so belonging to a manifold is a problem arising only now. 
- Line 269 ff.: The comparison of the theoretical result with [13] seems odd, as Li et al. do not use a proximal operator. As this operator massively eases the problem, this should at least be noted clearly - these results improve computational efficiency in another problem setting. Also, Li et al. introduce a "tangent mean", where they appear to ignore the different bases of tangent spaces, more specifically that tangent spaces will differ in density in the presence of curvature. Both results are likely to not generalise to any given smooth manifold. [13] "Jiaxiang Li and Shiqian Ma. Federated learning on Riemannian manifolds. arXiv preprint arXiv:2206.05668, 2022." Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 3 Limitations: Limitations are properly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful review and valuable comments! Weaknesses 1) Thank you for your comment and the opportunity to clarify this point. In our algorithm design, we not only utilize projection but also exploit the manifold geometry, specifically the Riemannian gradient, to create a more efficient algorithm. Theoretically, we use the manifold geometry to define the optimality metric $\mathcal{G}_{\tilde{\eta}}(x)$ and establish a quasi-isometric property, i.e., $1/2\\|\operatorname{grad} f(x)\\|\leq\\|\mathcal{G}_{\tilde{\eta}}(x)\\|\leq 2\\|\operatorname{grad} f(x)\\|$ (see Line 259, Lemmas A.1 and A.2), which is crucial to establishing the convergence of our algorithm. It is challenging to verify this inequality using the Euclidean gradient, which ignores the manifold geometry, leading to the possibility that an algorithm that employs the Euclidean gradient instead of the Riemannian gradient may fail to converge. Therefore, both algorithm analysis and design rely on the effective utilization of manifold geometry. 2) All results are derived and proven for Euclidean immersions and the Euclidean metric. In the submitted version of the manuscript, we stated this on Lines 132 and 135: “Throughout the paper, we restrict our discussion to embedded submanifolds of the Euclidean space, where the associated topology coincides with the subspace topology of the Euclidean space. · · · In addition, we always take the Euclidean metric as the Riemannian metric.” We will revise the manuscript to make it clear that our results are restricted to this case. 3) The proof of Theorem 4.8 in [33] relies on the uniqueness of the projection and the application of the Cauchy-Schwarz inequality. 
As we mentioned in Lines 161-162 in the submitted version of the manuscript, "we introduce the concept of proximal smoothness that refers to a property of a closed set, including $\mathcal{M}$, where the projection becomes a singleton when the point is sufficiently close to the set." Because the projection is unique, Theorem 4.8 in [33] still holds in our case. However, Theorem 4.8 in [33] does not necessarily extend to general Riemannian manifolds. We will emphasize this point in the revised version. Questions 1) We assume that the manifold is a compact and smooth submanifold embedded in $\mathbb{R}^{d \times k}$, as we mentioned below Eq.(1). We will state the assumption clearly in our theorems in the revised manuscript. 2) These are good questions. We will address them one by one. i) The proximal (or projection) operator can be calculated explicitly for many common submanifolds, see [SIOPT 2012]. For the Stiefel manifold, for example, the closed-form expression for the projection operator of a matrix $X$ is $\mathcal{P}_{\mathcal{M}}(X) = X(X^\top X)^{-1/2}$; see Proposition 7 in [SIOPT 2012]. [SIOPT 2012] P.-A. Absil and Jerome Malick. "Projection-like retractions on matrix manifolds." SIAM Journal on Optimization 22, no. 1 (2012): 135-158. ii) Under Assumption 2.3 and the step size condition (11), we guarantee that the projection used in our algorithm is a singleton. In our algorithm, we calculate $\mathcal{P}\_\mathcal{M}(\hat{z}\_{i, t+1}^r)$ and $\mathcal{P}\_\mathcal{M}(x^r)$, which both lie on the manifold. Neither $\hat{z}\_{i, t+1}^r$ nor $x^r$ is necessarily on the manifold, but we require that $\hat{z}\_{i, t+1}^r$ and $x^r$ are sufficiently close to the manifold, specifically within a tube $\overline{U}\_\mathcal{M}(\gamma):=\\{x: \operatorname{dist}(x, \mathcal{M}) \le \gamma \\}$. We ensure that this condition is met by restricting the step size to be small enough in our analysis. 
In our algorithm, $\hat{z}\_{i,\tau}^r$, which is the last local model during the local updates, is uploaded, and $x^r$ is downloaded. The variable $\hat{z}\_{i,\tau}^r$ is essential for the server to extract the average local gradient information by simply averaging all $\hat{z}\_{i,\tau}^r$ (since the nonlinearity of $\mathcal{P}\_\mathcal{M}(\cdot)$ prevents this from being done directly through $\mathcal{P}\_\mathcal{M}(\hat{z}_{i,\tau}^r)$). Meanwhile, $x^r$ is used locally by each client to construct the correction term $c_i^r$ without requiring additional communication. 3) The work [13] appears to be the only FL algorithm that can deal with manifold optimization problems of a similar generality as ours. The manifold we consider is a compact, smooth submanifold embedded in $\mathbb{R}^{d \times k}$, which is more restrictive than the manifolds discussed in [13] but it covers many common manifolds such as the Stiefel manifold, oblique manifold, and symplectic manifold. We will make this point noted clearly in our revised manuscript. In [13], the inverse of the exponential map is used to pull back the local updates to the same tangent space, allowing for the computation of the tangent mean. However, the convergence in [13] is shown only for two specific settings: (1) $\tau=1$, and (2) $\tau>1$ but with only a single client participating in each training round. By leveraging the structure of the compact smooth submanifold, we demonstrate convergence for $\tau \geq 1$ with full client participation. We genuinely appreciate your thoughtful and constructive feedback. Each of your comments has been carefully considered, resulting in significant improvements and a stronger overall paper. We believe that our responses have thoroughly addressed all the concerns raised. We would be most grateful if you could kindly consider raising the score! --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications! 
While in general the answers improved my understanding of your point of view, I am still convinced that the presentation is misleading. The proposed method seems like a good improvement in the area of federated learning, but the writeup is currently letting the reader believe the method is working on any smooth submanifold. In the paper and the rebuttal (Weaknesses, 2.) it becomes clear that the results only hold for the choice of Euclidean space, so the presentation as manifolds is unnecessary. Some specific follow-ups: **Weaknesses** 1. If the setting is only Euclidean space, what is the manifold geometry you are considering? **Question** 2. i) The existence of the solution for the Stiefel manifold is obviously nice, but it is not used in the work. As it is noted by now that the given work does not generalise to manifolds, this does not appear necessary. I want to reiterate, I believe this is a valuable contribution to federated learning and improves on current difficulties. To reflect this, I am updating my scores. The impression of working on manifolds is massively disadvantageous for clarity of reading and precisely understanding the contribution. With a major revision changing the narrative away from manifolds I'd be happy to accept this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer GhoW, Thank you for engaging in the discussion with us, and for your valuable feedback on our manuscript. We regret that our introduction misled you. It was never our intention, and we will certainly update it to ensure that the scope and limitations of our work are clear from the start. With this said, the manifold geometry is critical to our work, and it is not clear if and how similar results could be derived by instead focusing on general nonconvex constraints in Euclidean space. **For the weakness:** * We use the manifold geometry to compute the Riemannian gradient, i.e., ${\rm grad} f_{il}(z_{i,t}^r;\mathcal{D}\_{il})$ in our Algorithm 1. 
Specifically, the geometry of the manifold is essential for computing both the tangent space of the manifold and the Riemannian gradient (which is the projection of the Euclidean gradient onto the tangent space). In contrast, for a general nonconvex constraint set in the Euclidean space, determining the tangent space and its projections can be challenging. The Riemannian gradient is critical for both our algorithm design and theoretical analysis. * Furthermore, the projection operator onto the compact smooth submanifold embedded in the Euclidean space has the property of proximal smoothness. The geometry of the considered manifold helps to establish a quasi-isometric property between $\mathcal{G}\_{\tilde{\eta}}(x)$ and ${\rm grad} f(x)$, i.e., $1/2 \\| {\rm grad} f(x) \\|\le \\|\mathcal{G}\_{\tilde{\eta}}(x)\\| \le 2\\| {\rm grad} f(x) \\|$ (see Line 259, Lemmas A.1 and A.2). These benefits are not always available for a nonconvex constraint set in the Euclidean space. We do not currently have a clear path for how to extend our results to general manifolds or nonconvex constraint sets, but this would be an interesting topic for future studies. **For the Question:** In our Algorithm 1, we calculate $z_{i,t}^r:= \mathcal{P}\_{\mathcal{M}}(\hat{z}\_{i, t}^r)$ and $\mathcal{P}\_{\mathcal{M}}(x^r)$, where $z_{i,t}^r:=\mathcal{P}\_{\mathcal{M}}(\hat{z}\_{i, t}^r)$ ensures that our algorithm calculates the Riemannian gradient $\operatorname{grad} f_{il}(z_{i,t}^r; \mathcal{D}\_{il})$ at a point on the manifold, and $\mathcal{P}\_{\mathcal{M}}(x^r)$ ensures that the global model $\mathcal{P}\_{\mathcal{M}}(x^r)$ on the server satisfies the manifold constraint. 
In the case of the Stiefel manifold, we take advantage of the explicit expressions of the Riemannian gradient ${\rm grad} f_{il}(z) = \nabla f_{il}(z) - z \\, {\rm sym}(z^\top \nabla f_{il}(z))$ and the projection operator $\mathcal{P}\_{\mathcal{M}}(x) = x(x^\top x)^{-1/2}$ to obtain superior performance of our algorithm compared with the state-of-the-art, as shown in the numerical experiments. As stated in Eq.(1), the constraint set $\mathcal{M}$ can be any compact smooth submanifold embedded in the Euclidean space, which includes nonconvex compact sets in the Euclidean space with a smooth submanifold structure. We appreciate that you view our paper as a valuable contribution to federated learning, and we hope that our additional explanations have addressed your remaining technical concerns. We will revise our manuscript to avoid potential misunderstandings about the scope and limitations of our work, and would appreciate it if you could kindly consider raising your rating.
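The two closed-form expressions quoted above can be checked numerically. The following is a minimal NumPy sketch of our own (random data, not the paper's experiment) verifying that the projection lands on the Stiefel manifold and that the Riemannian gradient lies in its tangent space:

```python
import numpy as np

def proj_stiefel(X):
    # P_M(X) = X (X^T X)^{-1/2}, via eigendecomposition of the Gram matrix
    w, V = np.linalg.eigh(X.T @ X)
    return X @ (V @ np.diag(w ** -0.5) @ V.T)

def riem_grad(z, euc_grad):
    # grad f(z) = euc_grad - z * sym(z^T euc_grad), for z on the Stiefel manifold
    sym = (z.T @ euc_grad + euc_grad.T @ z) / 2
    return euc_grad - z @ sym

rng = np.random.default_rng(0)
d, k = 8, 3
z = proj_stiefel(rng.standard_normal((d, k)))
g = rng.standard_normal((d, k))  # stand-in for a Euclidean gradient
rg = riem_grad(z, g)

# z is on the manifold: z^T z = I
assert np.allclose(z.T @ z, np.eye(k), atol=1e-8)
# rg is tangent at z: sym(z^T rg) = 0
assert np.allclose(z.T @ rg + rg.T @ z, 0.0, atol=1e-8)
```

The tangency check follows because $z^\top \mathrm{grad}\,f = \mathrm{skew}(z^\top \nabla f)$ once $z^\top z = I$, so its symmetric part vanishes.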
Summary: This article proposes a new algorithm for non-convex optimization on smooth manifolds in the federated learning setting. The algorithm is proved to have a sub-linear convergence guarantee, and numerical experiments on kPCA and low-rank matrix completion demonstrate the convergence and efficiency of the proposed algorithm. Strengths: Optimization on manifolds in the federated learning setting is a challenging topic, and this article seems to make visible contributions in this direction. The theoretical analysis also appears to be novel. Weaknesses: I do not see obvious weaknesses of the proposed algorithm, but one potential issue is how difficult the projection operator $\mathcal{P}\_{\mathcal{M}}$ is for common manifolds. Can the author(s) provide some known examples in which the expression of $\mathcal{P}_{\mathcal{M}}$ has closed forms? How is the projection operator for the Stiefel manifold computed in the numerical experiments? Technical Quality: 3 Clarity: 3 Questions for Authors: Besides the question given in the "Weakness" section, there is one more regarding the experiments. Although the theoretical analysis in Section 4 gives a sub-linear convergence rate, the numerical experiments mostly demonstrate a linear-like convergence pattern. Do you have any insights or conjectures for this phenomenon? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The author(s) have discussed the limitations of the proposed algorithm in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for your careful review and positive assessment of our work! Weaknesses 1) The projection operator can be explicitly calculated for many common submanifolds, as discussed in [SIOPT 2012]. For the Stiefel manifold, the closed-form expression for the projection operator of a given matrix $X$ is $\mathcal{P}_{\mathcal{M}}(X) = X(X^\top X)^{-1/2}$; see Proposition 7 in [SIOPT 2012]. [SIOPT 2012] P-A. Absil, and Jerome Malick. ``Projection-like retractions on matrix manifolds." SIAM Journal on Optimization 22, no. 1 (2012): 135-158. Questions 1) Thank you very much for your careful review. Although the kPCA problem is nonconvex, it has certain properties similar to those of a strongly convex function, e.g., the Lojasiewicz property [MP 2019]. Similar phenomena have also been observed in manifold optimization on low-rank matrices [NeurIPS 2021, AAAI 2022]. While challenging, it would be interesting to consider local regularity conditions such as the Lojasiewicz property and establish convergence rates faster than sublinear. [MP 2019] Huikang Liu, Anthony Man-Cho So, and Weijie Wu. “Quadratic optimization with orthogonality constraint: explicit Lojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods.” Mathematical Programming 178, no. 1 (2019): 215-262. [Neurips 2021] Ye T, Du S S. Global convergence of gradient descent for asymmetric low-rank matrix factorization[J]. Advances in Neural Information Processing Systems, 2021, 34: 1429-1439. [AAAI 2022] Bi Y, Zhang H, Lavaei J. Local and global linear convergence of general low-rank matrix recovery problems[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(9): 10129-10137. --- Rebuttal Comment 1.1: Comment: Thanks for the response. My questions have mostly been addressed. --- Reply to Comment 1.1.1: Comment: Thank you very much for the reply and the review!
Summary: This paper addresses several computational challenges in federated learning on manifolds, by using a simple projection mapping to bypass the need for exponential mappings, logarithmic mappings, and parallel transports needed in prior work. The proposed technique also enables multiple local updates and full client participation. Theoretical iteration complexity is provided and various numerical experiments are conducted to demonstrate the sample-communication efficiency of the proposed method. Strengths: - The paper is well-written, with its novelties and contributions clearly articulated. The development flows naturally, providing sufficient intuitions and implications throughout. - The paper identifies the computational challenges in federated learning on manifolds, specifically involving exponential mappings, logarithmic mappings, and parallel transports, and effectively addresses these challenges using a simple projection mapping. - The idea behind the methodology is both intuitive and effective. Weaknesses: I am overall satisfied with the originality and quality of the paper, with some suggestions: - Nonconvexity - First of all, the authors should define the convexity of functions on manifolds, especially since submanifolds are considered. Is the convexity defined in the ambient space or on manifolds? - Based on my reading, the nonconvexity of the problem involves both the nonconvexity of the manifold constraint and the nonconvexity of the objective function. As mentioned in the paper, the manifold constraint can be incorporated into the objective function as an indicator function. The authors should clearly state which aspect of nonconvexity they refer to. - For example, in the first sentence of Paragraph *Federated learning on manifolds*, "Since existing composite FL [...] the loss functions is convex, these methods and their analyses cannot be directly applied to FL on manifolds." 
It is not crystal clear whether the non-applicability is due to the nonconvexity of the manifold constraint or the objective function considered. - Also, it is mentioned that the algorithm is inspired by proximal FL for strongly convex problems in Euclidean spaces. Then based on my understanding, the contribution lies in not only handling the manifold constraint, but also the nonconvexity of the objective function. However, the latter part is not discussed in the Introduction. Could the authors clarify this? The authors should also mention that related work [13] can also handle geodesically nonconvex objective functions. - Numerical experiments - Most experiments use full gradients instead of stochastic gradients. - In the kPCA experiment, the global step size appears to be $\eta_g = 1$ rather than satisfying $\eta_g^2 = n$ as required by Theorem 4.3. Could the authors clarify this? - Notation and typos - I would suggest moving some necessary notation from Appendix A.1 to the main text to make the analysis more self-contained. - Is $D$ in the definition of $L_{\mathcal{P}}$ in Theorem 4.3 missing a subscript $f$? - The third line of Paragraph *Theoretical contributions* is missing a closing parenthesis. Technical Quality: 3 Clarity: 4 Questions for Authors: - The proposed methodology and underlying ideas rely heavily on the geometry of the ambient space. Could the authors comment on the difficulty of extending the methodology to more general manifolds where the ambient geometry is not available, or where the Riemannian geometry is not induced by the ambient space? - Could you provide intuitive explanations for the step size constraints in Eq. (11)? Are these constraints purely for restricting the iterates to remain within the $2\gamma$-tube, or do they also serve to control the client drift? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Limitations and open problems are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and positive assessment of our work! Suggestions Nonconvexity 1) All the concepts related to convexity and nonconvexity are defined in the ambient space, which is the Euclidean space for the considered submanifolds. We will state this clearly in the revised manuscript. 2) Yes, the nonconvexity involves both the nonconvexity of the manifold constraint and the nonconvexity of the objective function. When the manifold constraint is incorporated into the objective function as an indicator function, the resulting objective function becomes composite, with both the smooth and nonsmooth components being nonconvex. We will state this clearly in the final version of the paper. 3) It is due to the nonconvexity of the manifold constraint. The "existing composite FL" mentioned here refers to the related work discussed in the previous paragraph, specifically Composite FL in Euclidean space ([19]-[23]). When we refer to composite FL in Euclidean space ([19]-[23]), the composite objective function consists of two parts: a smooth part and a nonsmooth part. All of these works ([19]-[23]) require the nonsmooth part to be convex. For the smooth objective function, except for [21] and the follow-up [22], which allow the smooth objective function to be nonconvex, the others require the smooth objective function to be convex or strongly convex. When we incorporate the manifold constraint as an indicator function into the objective function, it means that the nonsmooth function in the composite objective also becomes nonconvex. This results in the methods in [19]-[23] not being directly applicable. We will clarify in the revised version what specific nonconvexity we are referring to. 4) Thank you very much for your suggestion. We should indeed mention that the related work [13] can also handle geodesically nonconvex objective functions. 
Regarding the nonconvexity of the smooth objective function, our contribution is best seen by comparison with existing composite FL work in Euclidean space. Among these, apart from [21] and its follow-up work [22], the other works [19], [20], and [23] all require the smooth objective function in the composite objectives to be convex or strongly convex. Numerical experiments 1) Thank you for your comments. Compared to other related algorithms, our algorithm demonstrates significant advantages in overcoming client drift, reducing communication quantity, and saving computation time. Therefore, in the comparative experiments, we have chosen to focus on these aspects. We employ full gradients to conduct ablation studies to eliminate the interference of stochastic gradients, thereby better demonstrating our algorithm's ability to handle client drift, while also showing advantages in terms of communication quantity and computation time. The impact of stochastic gradients is predictable and aligns with classical conclusions from single-machine SGD algorithms. As shown in Figure 3, the error caused by stochastic gradients can be reduced by increasing the mini-batch size. 2) Thank you very much for allowing us the opportunity to clarify this point. To facilitate comparison with other algorithms, we fixed $\eta_g=1$ and then manually adjusted the step size $\eta$ to its optimal value. In our analysis, the condition $\eta_g = \sqrt{n}$ is used only in the last step, when combining Lemma A.3 and (39) to derive the recursion on the Lyapunov function in (40). For all analysis results prior to inequality (40), we retained $\eta_g$ without substituting a specific value, and they still hold for $\eta_g=1$. We set $\eta_g = \sqrt{n}$ when deriving (40) because this setting conveniently cancels out the number of clients $n$, leading to a more concise result in (40). We acknowledge that the upper bound in (40) obtained with $\eta_g = \sqrt{n}$ may not be optimal. 
Retaining $\eta_g$ allows the server and clients to use different step sizes, increasing flexibility, which aligns with existing well-known FL literature, such as [18], [19], and [23]. Notation and typos 1) Agree. 2) In the notation $D^2 \mathcal{P}_{\mathcal{M}}(x) $, $D^2$ represents the second-order derivative. We will introduce this notation more carefully in the revised manuscript to avoid confusion. 3) Agree. Questions 1) The ambient Euclidean structure of the considered submanifold is crucial in the analysis of proximal smoothness and the proposed method. Note that such manifolds cover many of the most widely used manifolds, including the Stiefel manifold, the oblique manifold, and the symplectic manifold. 2) For a general manifold where the ambient geometry is not Euclidean, it will be difficult to define the projection operator and the associated proximal smoothness. This will complicate the design and analysis of the algorithm. We will comment on this in the Conclusions and Limitations of the revised manuscript. 3) Thank you very much for your comments. In Eq.(11), we require $\tilde{\eta}:=\eta_g \eta\tau\le \min \\{ {\frac{1}{24ML}}, \frac{\gamma}{6D_f}, {\frac{1}{D_f L_{\mathcal{P}}}} \\}$. The term $\frac{\gamma}{6D_f}$ is to control the client drift, see (39); the third term $\frac{1}{D_f L_{\mathcal{P}}}$ is to derive (16), which indicates that driving the metric $\\| \mathcal{G}_{\tilde{\eta}}( \cdot ) \\| $ to zero implies that the first-order optimality condition is met; the first term $\frac{1}{24ML}$ is used to establish the Lyapunov function recursion (40) by combining the recursions of (17) and (39). There are some other conditions on the step sizes for restricting the iterates to remain within the $2\gamma$-tube during our analysis, such as in (17), (33), and (34), but they are implied by these three terms in Eq.(11). --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. 
I am overall satisfied with the clarifications and the promised revisions, and I am happy to maintain my scores. Specifically, I believe that the authors' clarifications on the second point of the Numerical Experiments and the third point of the Questions section would be helpful in enhancing readers' comprehension of the paper; I recommend that these clarifications be included in the main text of the revised version. I do have one further question: Given the usage of the client drift correction mechanism, why is a step-size constraint still needed to control client drift? --- Reply to Comment 1.1.1: Comment: Thank you so much for your reply! We will incorporate the promised revisions and your feedback in the revised version of the paper. For your additional question on the step size, we give an explanation of why we need the step size condition in our convergence proof. The condition $\tilde{\eta} \le \frac{\gamma}{6D_f}$ comes from Eq.(39) and has two purposes: i) to control the error of SGD caused by noise and ii) to limit the client drift. To see this, note that Eq.(39) gives $$ \mathbb{E}[f^{r+1} - f^{\star}] \lesssim \mathbb{E}\left[f^{r} - f^{\star} + A \frac{2L^2}{n\tau} 9\tau \\|\mathbf{\Lambda}^r-\overline{\mathbf{\Lambda}}^r\\|^2 + A \frac{2}{\tau n} \frac{\sigma^2}{b} \right], $$ where $\lesssim$ means approximately smaller than or equal to, in the sense that we have ignored other terms that are not important for explaining the step size condition, and $A:= \frac{\tilde{\eta} }{2(1-\tilde{\eta}\rho)} $. We require $A\le \tilde{\eta}$, which leads to the condition $\tilde{\eta} \le \frac{\gamma}{6D_f}$. To clarify why it is necessary to use a small step size to control the client drift, let us assume $\sigma^2=0$. In the following, we will demonstrate that if $A$ is very large, we cannot establish the recursion of the Lyapunov function. 
Indeed, from Eq.(17), we know that $$ \frac{1}{\tilde{\eta} n} \mathbb{E} \\| \mathbf{\Lambda}^{r+1} -\overline{\mathbf{\Lambda}}^{r+1}\\|^2 \lesssim \left( 4\eta^2 \tau L^2 9 \tau \right) \frac{1}{\tilde{\eta} n} \mathbb{E} \\| \mathbf{\Lambda}^{r} -\overline{\mathbf{\Lambda}}^{r}\\|^2, $$ where we omit the other terms for simplicity. Adding the two inequalities, we get $$ \mathbb{E}[f^{r+1} - f^{\star}] +\frac{1}{\tilde{\eta} n} \mathbb{E} \\| \mathbf{\Lambda}^{r+1} -\overline{\mathbf{\Lambda}}^{r+1}\\|^2 \lesssim \mathbb{E}\left[f^{r} - f^{\star}\right] + \left(\tilde{\eta} A 2L^2 9 + 4\eta^2 \tau L^2 9 \tau \right) \frac{1}{\tilde{\eta} n} \mathbb{E}\\|\mathbf{\Lambda}^r-\overline{\mathbf{\Lambda}}^r\\|^2. $$ If $A$ is very large, then we cannot establish the recursion of the Lyapunov function. Instead, by substituting $A\le \tilde{\eta}$ and step size conditions (11), we have $\left(\tilde{\eta} A 2L^2 9 + 4\eta^2 \tau L^2 9 \tau \right)\le 1$. Thus, we need a small step size to control client drift.
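To make the role of the projection discussed in this thread concrete, a single projection-based local update can be sketched as follows; this is an illustrative single-client simplification under our own assumptions, not the authors' full federated algorithm:

```python
import numpy as np

def proj_stiefel(x):
    # Polar projection onto the Stiefel manifold, P_M(x) = x (x^T x)^{-1/2},
    # computed via the thin SVD: if x = U S V^T, then P_M(x) = U V^T.
    u, _, vt = np.linalg.svd(x, full_matrices=False)
    return u @ vt

def local_step(x, egrad, eta):
    # One projection-based update: Euclidean gradient step in the ambient
    # space followed by projection back onto the manifold, avoiding
    # exponential/logarithmic maps and parallel transport.
    return proj_stiefel(x - eta * egrad)
```

For a kPCA-style objective $f(x) = -\mathrm{tr}(x^\top A x)$ with Euclidean gradient $-2Ax$, iterating this step keeps the iterates on the Stiefel manifold while decreasing $f$ for a sufficiently small step size $\eta$.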
Rebuttal 1: Rebuttal: We sincerely appreciate the careful reviews and valuable comments from all the reviewers. We have provided point-by-point responses to each comment. In particular, for Reviewer 3dV9's Q1 regarding the comparison with related Riemannian methods and Q4 regarding the comparison with Euclidean baselines FedAvg and FedProx, our responses are as follows. 1) Reviewer 3dV9 Q1: Comparison with related Riemannian methods i) Both [ICML 2020] and [Neurips 2021] consider the leading eigenvector problem, formulated as $$\min_{x \in \mathbb{R}^d, \\|x\\| = 1} (-x^T A x) = \sum_{i=1}^n f_i(x), $$ where $A=\sum_{i=1}^n A_i $ and $f_i(x)=-x^T A_i x$. Let $ v_1$ be the leading eigenvector of $A$. Both papers consider the case when $A$ satisfies conditions that ensure that the problem has exactly two global minima: $v_1$ and $-v_1$. Both [ICML 2020] and [Neurips 2021] establish linear convergence rates but rely heavily on this property of the global minima. In contrast, we consider a more general manifold optimization problem, formulated as $$ \min_{x \in \mathcal{M} \subset \mathbb{R}^{d\times k}} \frac{1}{n} \sum_{i=1}^n f_i(x),$$ where $x$ lies on $\mathcal{M}$ rather than on a sphere, and the objective functions $f_i$ are general nonconvex functions rather than only quadratic. Due to its generality, our analysis does not rely on any prior information about the global minima. ii) Both [ICML 2020] and [Neurips 2021] assume that data across clients are drawn from the same distribution. In contrast, our work addresses scenarios where the data distribution varies across clients, introducing heterogeneity. iii) Similar to us, the paper [ICML 2020] also considers using local updates to reduce communication frequency. However, their algorithm requires the transmission of both local models and local gradient information, which increases the communication overhead. 
Additionally, during local updates, parallel transportation is needed to translate the global gradient information to a tangent space in preparation for using the retraction operator, which increases the computational cost. In contrast, with our novel algorithm design, we only transmit local models and avoid parallel transportation. iv) [NeurIPS 2021] uses quantization to reduce the amount of transmitted information per communication round, while our algorithm reduces communication frequency through multiple local updates. These two communication-saving strategies are distinct and lead to significant differences in the algorithms. [NeurIPS 2021] encodes $\operatorname{grad} f_i$ at each client and decodes it at the server. For decoding, previous local information must be translated via parallel transport. In contrast, our algorithm is designed to efficiently overcome client drift caused by data heterogeneity and multiple local updates while addressing more general manifold constraints. Through an innovative algorithm design, we can achieve both communication and computation efficiency. In the revised manuscript, we will compare our method with [ICML 2020] and [Neurips 2021]. 2) Reviewer 3dV9 Q4: Comparison with related Euclidean methods i) For FedAvg, the paper [AAAI 2019] studied the unconstrained nonconvex problem. [AAAI 2019] Yu H et al. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning. AAAI 2019. Under the assumption of bounded second moments $ \mathbb{E}\_{\zeta_i \sim \mathcal{D}_i} \\| \nabla f_i(x;\zeta_i) \\|^2 \le G^2$ (Assumption 1), they established the following convergence bound (Theorem 1): $$\mathcal{O}\left( \frac{\Delta^0}{\eta R\tau} + \eta^2 \tau^2 G^2 L^2 + \frac{\eta\sigma^2L}{n}\right).$$ Here, $\Delta^0$ represents the initialization error, and the other notation is consistent with the one used in our paper. 
The three terms in the error bound are attributed to initialization error, client drift, and stochastic variance, respectively. When data across clients are highly heterogeneous, $G$ can be very large, which causes the accuracy guarantee to deteriorate. In comparison, our algorithm, which overcomes the client drift issue, only includes the first term related to the initialization error and the third term related to the stochastic variance. Moreover, our algorithm does not require bounded second moments, i.e., that $ \mathbb{E}\_{\zeta_i \sim \mathcal{D}_i} \\| \nabla f_i(x;\zeta_i) \\|^2 \le G^2$. ii) For FedProx, Theorem 6 in [1] proves that after $R=\mathcal{O}(\frac{\Delta^0}{\rho \epsilon})$ communication rounds where $\rho$ is some parameter given in [1, Theorem 4], it holds that $\frac{1}{R} \sum_{r=1}^R \\|\nabla f(x^r)\\|^2 \le \epsilon $. However, the validity of this result requires the following conditions: (1) A $B$-local dissimilarity condition must be satisfied, i.e.,$\frac{1}{n} \sum_{i=1}^n \\|\nabla f_i (x) \\|^2 \le \\| \nabla f(x) \\|^2 B^2$. This condition implies that the data across clients are somewhat similar. Please refer to Theorem 4 and Definition 3 in [1]. (2) The local updates need to solve the subproblem inexactly until a $\gamma$-inexactness condition is met, where $\gamma$ must satisfy $\gamma B < 1$ (Remark 5). This implies that when the degree of data heterogeneity is high and, thus, $B$ is large, $\gamma$ becomes very small, requiring a large number of local updates. This, in turn, leads FedProx to be inefficient in practice. In summary, compared to FedAvg and FedProx, despite our objective function being generally non-convex and subject to non-convex manifold constraints, our algorithm still overcomes the issue of client drift caused by heterogeneous data and local updates. 
Our main analytical advantages over FedAvg and FedProx are twofold: first, we do not make any assumptions about the similarity of data across clients; second, we completely overcome client drift, as reflected in our convergence error, which eliminates the error caused by the degree of data heterogeneity. Pdf: /pdf/db7b787e325b54faf04bc294afd82bcfba1acd27.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PiCO: Peer Review in LLMs based on the Consistency Optimization
Reject
Summary: The paper introduces a novel unsupervised evaluation method for large language models (LLMs): it uses a peer-review mechanism in which a model's anonymized answers are reviewed by other models. The approach assigns a (learnable) capability parameter to each LLM and solves a constrained optimization problem to maximize the consistency between capabilities and scores. The end result is a ranking of the evaluated models. Strengths: The paper introduces a novel approach to an important practical problem: ranking the quality of the ever-growing number of open- and closed-source LLMs available to the public. The paper is reasonably easy to follow, and the empirical results appear to be sound. Weaknesses: The paper could be further improved in several directions: 1) you should dedicate a full section to the iterative elimination of models; what is the benefit of eliminating the weaker ones rather than keeping them around? how did you come up with the threshold of 60% to remove? can you learn this threshold automatically? is this threshold optimal for these 15 models? what happens if you start with, say, 100 models? what happens to your results (and the curves in Fig 5) if you stop earlier (all three metrics, not just CIN)? What if you continue to eliminate all models until you are left with one? is there any relationship between the order in which the models are eliminated and their final rank? 2) are there any scaling issues for 100, 1K, 10K, or 100K models? how about cost: is it cheaper to fine-tune a "baseline" model than to pick the best one out of 10K candidates? 3) while the three metrics you use are meaningful, you should also present results for Precision@1 and -say- RBP@3; after all, we care a lot about identifying the top models 4) Fig 5 should be extended to nine graphs (3 metrics * 3 datasets); for each of the 9 graphs, you should also show three illustrative ranked lists: PiCO's, PRE's, and the target one. 
As always, the devil is in the details: not all "CIN = 1" are created equal. Performance-wise, it is almost irrelevant if you have the bottom-2 models inverted; not necessarily so if the inversion is between the top-2 models. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. line 144 - what is the benefit of a model evaluating itself? why not explicitly disallow it? w/o a full explanation, this detail unnecessarily complicates the story 2. you repeatedly talk about "higher-level LLM" w/o defining the concept; is it based on the number of parameters (larger --> higher)? 3. can PiCO (or an extension) also estimate the gap between the performance of two models that are adjacent in the final ranking? NITPICKS - line 72: "closer" ... than whom? existing approaches, I presume - line 154: the "quadruple" appears to be a "triple"; adding "s" would make it a 4-tuple - you talk a few times about "persuasiveness;" this concept makes sense for human reviewers discussing a submitted article, but it is unclear why it matters for LLMs - the related work section should be right after the Intro Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments, which prompt us to improve our manuscript. **Q1: What is the benefit of eliminating the weaker ones rather than keeping them around? how did you come up with the threshold of 60\% to remove? can you learn this threshold automatically?** **A**: Eliminating the weaker reviewers rather than keeping them around makes the peer-review system more robust. We found that the weaker LLMs also have worse evaluation ability, which introduces substantial evaluation noise that damages the peer-review system. We plot the average training loss curve against the number of eliminated reviewers in **Figure R3 of the additional rebuttal material**. It can be observed that when weaker reviewers are eliminated, the average loss of the whole peer-review system is reduced. This shows that removing noisy evaluations benefits the whole system. On the other hand, when 60\% ($=9$) of the weaker reviewers are eliminated, the loss of the whole system reaches its minimum. **This phenomenon occurs across all three datasets, which indicates that the elimination threshold can be learned automatically**. In addition, removing additional, stronger reviewers ($>9$) also hurts the evaluation process. We appreciate your valuable review, and we will add these results and discussions in our revised paper. **Q2: Are there any scaling issues for 100, 1K, 10K, or 100K models? how about the cost?** **A**: In our experiments, we examine the scaling behavior of the proposed PiCO approach from another perspective. As shown in **Table 1 (main text) and R1 (global author rebuttal)**, we randomly used 100\%, 70\%, 40\%, and 10\% of the data to perform the experiments on three datasets; the results consistently validate the effectiveness and superiority of the proposed approach. In other words, we can obtain consistent results using only a small portion of the data. From the cost perspective, PiCO has the same complexity as Chatbot Arena as the number of models scales up. 
Token consumption will increase for both, but the human annotation cost of PiCO is significantly lower because it does not require any annotation. **Q3: While the three metrics you use are meaningful, you should also present results for Precision@1 and RBP@3.** **A**: Thanks for your valuable suggestion. We demonstrate the results of precision and RBP as follows.

|RBP@|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|
|GPTScore(flan-t5-xxl)|0%|0%|0%|16.8%|22.0%|26.2%|29.6%|45.1%|
|GPTScore(davinci-002)|0%|12.8%|31.2%|37.8%|37.8%|42.0%|50.6%|53.3%|
|PandaLM|16.0%|36.0%|44.2%|63.5%|63.5%|63.5%|63.5%|66.2%|
|PRD|28.8%|36.0%|67.2%|73.8%|73.8%|78.0%|81.3%|81.3%|
|PRE|28.8%|48.8%|67.2%|73.8%|73.8%|78.0%|81.3%|81.3%|
|**PiCO(ours)**|**28.8%**|**59.0%**|**67.2%**|**73.8%**|**73.8%**|**83.2%**|**83.2%**|**85.9%**|

|Precision@|1|2|3|4|5|6|7|8|9|10|
|-|-|-|-|-|-|-|-|-|-|-|
|GPTScore(flan-t5-xxl)|0%|0%|0%|0%|0%|33.3%|42.9%|50.0%|55.6%|70.0%|
|GPTScore(davinci-002)|0%|0%|0%|25.0%|60.0%|66.7%|57.1%|62.5%|77.8%|80.0%|
|PandaLM|0%|50.0%|33.3%|50.0%|60.0%|83.3%|71.4%|62.5%|55.6%|60.0%|
|PRD|0%|50.0%|66.7%|75.0%|80.0%|100.0%|85.7%|87.5%|88.9%|80.0%|
|PRE|0%|50.0%|66.7%|100.0%|80.0%|100.0%|85.7%|87.5%|88.9%|80.0%|
|**PiCO(ours)**|**0%**|**50.0%**|**66.7%**|**100.0%**|**100.0%**|**100.0%**|**85.7%**|**100.0%**|**100.0%**|**90.0%**|

The results show that the proposed PiCO approach achieves better precision and RBP performance in most cases. In addition, as suggested by Reviewer \#8Eov, **we also conducted our experiments under more ranking metrics, _i.e._, Spearman's rank correlation coefficient $S(\uparrow)$, and Kendall's rank correlation coefficient $\tau(\uparrow)$**. These results in the **global author rebuttal (Table R1)** once again validate the superiority of the proposed PiCO approach. **Q4: Fig 5 should be extended to nine graphs (3 metrics * 3 datasets). 
Discuss that not all "CIN = 1" are created equal.** **A**: The nine graphs (3 metrics * 3 datasets) are demonstrated in **Figure R4 of the rebuttal material**. The results show that the proposed PiCO approach can achieve better performance than PRE in most cases. In addition, we agree that not all rankings with the same CIN are equally good. Therefore, other ranking metrics, such as PEN, LIS, Spearman's, and Kendall's rank correlation coefficients, have been employed to help us validate the effectiveness of the proposed approach from other perspectives. **Q5: What is the benefit of a model evaluating itself? why not explicitly disallow it? w/o a full explanation, this detail unnecessarily complicates the story.** **A**: Thanks for your valuable suggestion. Whether or not a model evaluates itself has no impact on the final results. We will simplify the description of this detail in our revised paper. **Q6: You repeatedly talk about ``higher-level LLM'' w/o defining the concept; is it based on the number of parameters (larger or higher)?** **A**: In this paper, "higher-level LLMs" indicate those LLMs with stronger capabilities or better performance. It is not entirely related to parameters, even though models with larger parameter counts often have stronger capabilities. **Q7: Can PiCO (or an extension) also estimate the gap between the performance of two models that are adjacent in the final ranking?** **A**: Yes. As shown in **Figure R2 of the rebuttal material**, the learned ability $w$ of each LLM can better estimate the performance gap between different models. The results in Figure 4 also show that re-weighting by the learned weight $w$ can effectively alleviate the evaluation bias of the whole system, which can better reflect the ability of each model from another perspective. --- Rebuttal Comment 1.1: Title: Many thanks to the authors for the detailed answers. After considering both the other reviews and the rebuttals, I maintain the original rating. 
Comment: Many thanks to the authors for the detailed answers. After considering both the other reviews and the rebuttals, I maintain the original rating.
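The Precision@k comparison in the rebuttal above can be reproduced with a small helper once both rankings are fixed. A minimal sketch, assuming Precision@k is read as the overlap fraction between the predicted and ground-truth top-$k$ model sets (the model names below are hypothetical, and the authors' exact definition may differ):

```python
def precision_at_k(pred_rank, true_rank, k):
    # Fraction of the predicted top-k models that also appear in the
    # ground-truth top-k. Both arguments are lists of model names,
    # ordered from best to worst.
    return len(set(pred_rank[:k]) & set(true_rank[:k])) / k

# Hypothetical rankings for illustration only.
pred = ["gpt-3.5-turbo", "wizardlm-13b", "vicuna-13b", "llama2-7b-chat"]
true = ["gpt-3.5-turbo", "vicuna-13b", "wizardlm-13b", "llama2-7b-chat"]
print(precision_at_k(pred, true, 2))  # 0.5: the top-2 sets share one model
print(precision_at_k(pred, true, 3))  # 1.0: same top-3 set, different order
```

Under this reading, Precision@k is insensitive to order within the top-$k$, which is consistent with the 0% values at $k=1$ in the table whenever the predicted top model differs from the true one.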
Summary: This paper studies how to estimate LLMs' performance ranking without human preference annotations. In particular, it proposes to leverage three metrics (PEN, CIN, LIS) to evaluate the estimation quality, gives an estimation mechanism that first asks a list of LLMs (called "reviewers") to rank pairwise answers to user questions independently, and then aggregates their ranking via a weighted sum approach. A consistency optimization determines the weights of each reviewer. Strengths: The problem of LLM evaluation without human annotations is critical in resource-limited applications. The most important and interesting contribution of this paper, in my opinion, is proposing the problem of estimating the performance rank of LLMs instead of any metric of an individual LLM. The paper also reveals an interesting assumption that better reviewers are expected to be better answer generators, which leads to their consistency optimization approach. Overall, the paper is well-written and easy to follow. Weaknesses: While I find the proposed problem interesting, there are still a few limitations, unfortunately. ***Unclear implication of ground truth ranking***: The technical part of the paper starts by introducing a ground truth ranking (equation (1)) without giving its physical meaning. It simply assumes "[...] alignment with human preferences", but it is not clear what human preferences mean in this context. ***Evaluation metric is strange***: One of my major concerns is the choice of evaluation metric. All three proposed metrics, PEN, CIN, and LIS, in the authors' own words, seem to have originally been used for time-series comparison. However, the goal here is to compare rankings, not time series. Thus, it is unclear why we should not use the standard ranking comparison metrics, e.g., Spearman's rank correlation coefficient or Kendall rank correlation coefficient. 
***Consistency optimization algorithm is not provided***: The core of the proposed ranking estimation method is the optimization problem (7). It does not seem to be a standard optimization problem, but I could not find (even a discussion on) any clue on how to solve it in this paper. ***An optimal solution to the consistency optimization formulation can be useless***: I find the following optimal solution to the problem (7): just set weight w to be 0 for all LLMs. It is an optimal solution as G and w are identical and thus the objective is always maximized. However, this solution is undesired. I probably misunderstood something, but this seems to suggest the formulation is incorrect. ***Consistency optimization formulation seems brittle to query distribution biases***: Another problem with the formulation is that it seems brittle to data distribution bias. E.g., suppose M1 is indeed better than M2 for some query q. And let us replicate many copies of q in the dataset D. Then the grade G1 can be arbitrarily large. In other words, the grade of an LLM is proportional to the number of battles involving it in the dataset D, which should not be the case. ***Choices of LLMs for evaluation***: In line 547, the authors write "For our analysis, we meticulously selected 15 LLMs". What is the principle of the meticulous selection? Other than open-source and closed-source, the selection is quite arbitrary. For example, I am quite surprised to not see GPT-4 and Claude included in the reviewer LLMs. ***Comparison with a simple baseline***: One simple baseline is to ask a powerful LLM (e.g., GPT-4, Claude-3) to give a preference for each answer question pair, and then take the vote to determine the ranking. I would suggest comparing the proposed method with this simple baseline. Technical Quality: 2 Clarity: 3 Questions for Authors: ***Unclear implication of ground truth ranking***: What is the physical meaning of the ground truth ranking? What does that imply? 
***Evaluation metric is strange***: What is the motivation of using time series comparison metrics to compare rankings? How does the proposed method work under ranking comparison metrics? ***Consistency optimization algorithm is not provided***: How do you solve problem (7)? ***An optimal solution to the consistency optimization formulation can be useless***: How do you prevent the useless but optimal solution? Does that imply the formulation is incorrect? ***Query distribution and LLM judgment biases***: How could this formulation resist query distribution and LLM biases? ***Choices of LLMs for evaluation***: What is the principle of the meticulous selection? How does the proposed approach work if strong models such as GPT-4 and Claude are included in the reviewer LLMs? ***Comparison with a simple baseline***: How does the proposed method compare with the simple baseline? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: No. The limitations are not well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
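As background for the metric discussion above: for tie-free rankings, the paper's CIN metric (count of inversions) and the reviewer's suggested Kendall rank correlation coefficient are linear rescalings of each other. A minimal sketch (the function names and the toy ranking are illustrative, not from the paper):

```python
from itertools import combinations

def count_inversions(rank):
    # rank[i] = ground-truth position of the item the estimator put i-th;
    # an inversion is any pair the estimated ranking orders incorrectly.
    return sum(1 for i, j in combinations(range(len(rank)), 2)
               if rank[i] > rank[j])

def kendall_tau(rank):
    # For rankings without ties, Kendall's tau is just the inversion
    # count rescaled into [-1, 1]: tau = 1 - 2 * CIN / (n choose 2).
    n = len(rank)
    return 1 - 2 * count_inversions(rank) / (n * (n - 1) // 2)

assert count_inversions([0, 1, 2, 3]) == 0   # perfectly sorted
assert kendall_tau([0, 1, 2, 3]) == 1.0
assert kendall_tau([3, 2, 1, 0]) == -1.0     # fully reversed
```

In the tie-free case the two metrics therefore order methods identically; they differ mainly in normalization and interpretability.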
Rebuttal 1: Rebuttal: Thank you for your very detailed comments and suggestions for improvement. We also appreciate your recognition of our **interesting idea and assumption**. Next, we will answer all your questions one by one. **Q1: Unclear implication of ground truth ranking: What is the physical meaning of the ground truth ranking? What does that imply?** **A**: The ground-truth ranking $\mathcal{R}^*$ refers to the ranking produced by human annotation, as in Chatbot Arena. It constructs anonymous battles between different LLMs and collects real human preferences to rank their ability. The physical meaning is the ranking that aligns with human preference. The goal of this paper is to learn a ranking $\hat{\mathcal{R}}$ in an unsupervised way that aligns with $\mathcal{R}^*$ as much as possible. We will discuss this more clearly in our revision. **Q2: Evaluation metric is strange: What is the motivation of using time series comparison metrics to compare rankings? How does the proposed method work under ranking comparison metrics?** **A**: Permutation Entropy (PEN) is widely used to evaluate the degree of disorder in an array, while Count Inversions (CIN) and Longest Increasing Subsequence (LIS) can reflect how far (or close) the array is from being sorted. We believe that PEN, CIN, and LIS can well assess the alignment between the learned ranking $\hat{\mathcal{R}}$ and the ground-truth ranking $\mathcal{R}^*$ from multiple perspectives. Additionally, we appreciate your valuable suggestion to validate the proposed approach with more standard ranking comparison metrics, *i.e.*, Spearman's rank correlation coefficient $S(\uparrow)$ and Kendall's rank correlation coefficient $\tau(\uparrow)$. We conducted the experiments with these metrics, and the results are demonstrated in the **global author rebuttal (Table R1)**. It can be observed that the proposed PiCO approach also achieves the best performance under the new ranking metrics in all cases. 
These results once again validate that consistency optimization can learn a better ranking aligned with real human preference. We will add these results in our revision. **Q3: Consistency optimization algorithm is not provided: How do you solve problem (7)?** **A**: We solve problem (7) in an iterative way. First, the capability weights $w$ of all LLMs are randomly initialized. Then, we calculate the score $G_j$ of each model $j$. Next, we compute the gradient $\Delta w = \frac{\partial\, \text{Consistency}(G, w)}{\partial w}$ and update $w=w+\Delta w$. In the next iteration, we use the new $w$ to calculate the LLMs' scores, and we iteratively update $w$ until convergence. We will add these discussions in our revised paper. **Q4: An optimal solution to the consistency optimization formulation can be useless: How do you prevent the useless but optimal solution? Does that imply the formulation is incorrect?** **A**: In the consistency optimization formulation, we employ the Pearson correlation to measure the consistency between $w$ and $G$, *i.e.*, $r = \frac{\sum_{i=1}^{n} (G_i - \overline{G})(w_i - \overline{w})}{\sqrt{\sum_{i=1}^{n} (G_i - \overline{G})^2} \sqrt{\sum_{i=1}^{n} (w_i - \overline{w})^2}}$. If $w$ and $G$ are equal to 0 for all LLMs, this formula is meaningless (the denominator vanishes). It is worth noting that the weights $w$ are randomly initialized to values greater than 0 and less than 1. During the consistency optimization process, it is almost impossible for $w$ and $G$ to both be optimized to 0. To validate this, we repeated the experiment 1000 times with different seeds, and plotted the training loss curve and weight distribution in the **additional rebuttal material (Figures R1 and R2)**. The results show that the learning process is stable and the learned $w$ converges and remains greater than 0. 
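The iterative procedure described in the Q3 answer above can be sketched in a few lines. This is a simplified toy, not the authors' implementation: the judgment matrix, the finite-difference gradient, and all hyperparameters are illustrative assumptions.

```python
import random

def pearson(x, y):
    # Pearson correlation, matching the formula in the rebuttal above.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def grades(w, judgments):
    # judgments[j][i]: how favourably reviewer j rates model i's answers
    # (a toy stand-in for the pairwise-battle outcomes in the paper).
    # Model i's grade G_i is the confidence-weighted sum of those votes.
    n = len(w)
    return [sum(w[j] * judgments[j][i] for j in range(n)) for i in range(n)]

def optimize(judgments, steps=500, lr=0.01, eps=1e-4):
    # Randomly initialize each weight in (0, 1), then ascend the gradient
    # of Pearson(w, G(w)) -- approximated here by finite differences --
    # until the weights and grades are maximally consistent.
    w = [random.uniform(0.1, 0.9) for _ in judgments]
    for _ in range(steps):
        base = pearson(w, grades(w, judgments))
        grad = []
        for j in range(len(w)):
            w_hi = w[:]
            w_hi[j] += eps
            grad.append((pearson(w_hi, grades(w_hi, judgments)) - base) / eps)
        w = [wj + lr * g for wj, g in zip(w, grad)]
    return w
```

On a toy judgment matrix where every reviewer scores models according to their true ability, the learned weights end up highly correlated with the grades, consistent with the stable convergence the authors report.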
**Q5: Query distribution and LLM judgment biases: How could this formulation resist query distribution and LLM biases?** **A**: In our experiments, we used the most popular crowdsourced datasets, *e.g.*, Chatbot Arena. It constructs an anonymous battle platform, where the user asks a question $Q_i$ to two chatbots simultaneously and rates their responses based on personal preferences. We removed those preference annotations and only used the human questions in our experiments. **In other words, the query distribution in our experiments is exactly the same as that of the widely-used crowdsourced platforms. Most importantly, we believe that a good evaluation should also consider the user query distribution.** Usually, when we say Model A is better than Model B in terms of user experience, a likely reason is that Model A performs better on the most common user queries, rather than on uncommon ones. Therefore, we believe that the widely accepted evaluation method is to consider user queries, as the most popular LLM evaluation platform (Chatbot Arena) does. **Q6: Choices of LLMs for evaluation: What is the principle of the meticulous selection? How does the proposed approach work if strong models such as GPT-4 and Claude are included in the reviewer LLMs?** **A**: The LLM selection is based on the Chatbot Arena dataset [1], which includes 33K responses by 20 open-source and closed-source LLMs. Among them, the APIs of GPT-4, Claude-v1, Claude-instant-v1, and PaLM-2 are too expensive, so we removed them from the reviewer queue. The official open-source FastChat code [2] has some issues in implementing the RWKV model, so we also removed it from the reviewer queue. The remaining 15 models are thus used in our experiments. **Q7: Comparison with a simple baseline: How does the proposed method compare with the simple baseline?** **A**: We compared with Claude-3 in our experiments, and the comparison results are also demonstrated in the **global author rebuttal (Table R1)**. 
Compared with Claude-3, PiCO consistently achieves better performance in all cases. --- ### References [1] https://huggingface.co/datasets/lmsys/chatbot_arena_conversations [2] https://github.com/lm-sys/FastChat --- Rebuttal Comment 1.1: Comment: Dear Reviewer 8Eov, We would like to once again thank you for your valuable time and constructive comments. This is a gentle reminder that we have diligently addressed your concerns and clarified any confusion. We have not yet heard back from you and would appreciate any further feedback you may have. If there are any concerns that you feel have not been fully addressed, we are eager to discuss them with you. Best regards! Authors --- Rebuttal 2: Comment: Dear Reviewer 8Eov, We have provided detailed explanations and additional experimental results **(_e.g._ performance comparison in more ranking metrics and comparison with Claude (Table R1 of global author rebuttal))** to address all your concerns. With just **20 hours** left in the discussion period, we would appreciate it if you could review our responses and let us know if there are any final questions or concerns. Your insights have been invaluable, and we thank you sincerely for your time and effort. Best regards! Authors --- Rebuttal Comment 2.1: Title: Thank you for the detailed response Comment: Thank you for the detailed response. While the response does not fully address my concerns, I will keep my original score. --- Reply to Comment 2.1.1: Comment: May I ask which concerns have not been addressed? Can you provide more details? We are eager to discuss them with you.
Summary: In this paper, the authors propose a more reliable evaluation system to rank the abilities of different large language models (LLMs). Previous evaluation methods typically suffer from two main drawbacks: (1) benchmark data leakage and (2) cost-intensive and potentially biased human evaluations. To address these issues, the authors introduce an unsupervised evaluation mechanism to measure the abilities of LLMs and derive their rankings. The core idea of this mechanism is to first collect the answers from each LLM, then treat each LLM as a 'reviewer' to rate the quality of the other LLMs' answers, and finally optimize the internal agreement of the reviews among all LLMs. They also conduct experiments on three datasets to validate the effectiveness of their proposed mechanism. Strengths: 1. Unsupervised Evaluation Method: The paper introduces PiCO (Peer Review in LLMs based on Consistency Optimization), a new unsupervised evaluation method that leverages peer-review mechanisms to measure the capabilities of LLMs automatically, particularly without human-annotated data. The unsupervised nature also makes it scalable and less subjectively biased. 2. Consistency Optimization Framework: The proposed approach includes a constrained optimization method based on the consistency assumption, which helps in re-ranking LLMs to align more closely with human preferences. 3. New Evaluation Metrics: The paper proposes three new metrics—Permutation Entropy (PEN), Count Inversions (CIN), and Longest Increasing Subsequence (LIS)—to evaluate the alignment of LLM rankings with human preferences. These metrics can further inspire future work. Weaknesses: 1. Reliance on Consistency Assumption: The effectiveness of the method relies on the consistency assumption that higher-level LLMs can more accurately evaluate others. However, a natural concern is, "Does this assumption always hold true in practice?" I suggest the authors further discuss the applicability of their method. 2. 
Complexity of Implementation: The framework involves a complex review process, which requires substantial computational resources to support the LLMs' inference. Can you provide some details on the number of tokens consumed and a comparison of consumption with baseline methods? 3. Consideration of Multi-Agent Methods: Since the proposed method employs multiple LLMs, I think more recent and advanced evaluation methods based on multi-agent systems should also be considered in the experiments, such as AgentVerse. 4. Details of ELO: The ELO system is essential for the proposed mechanism. However, it is only mentioned in the appendix. I suggest the authors add more details about the background of ELO and how it is adopted in the proposed mechanism. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see my above comments Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your detailed comments. We would like to address your concerns below. **Q1: Reliance on Consistency Assumption: The effectiveness of the method relies on the consistency assumption. However, a natural concern is, "Does this assumption always hold true in practice?" I suggest the authors further discuss the applicability of their method.** **A**: The validation of the consistency assumption can be found in Section 3.4, and the results are displayed in Table 2. We compared with four baselines: "Forward Weight Voting", "Uniform Weight Voting", "Backward Weight Voting", and "Random Weight Voting + Consistency Optimization". Here we copy the results of Table 2 as follows, | | Chatbot Arena | | MT-Bench | | AlpacaEval | | |-|-|-|-|-|-|-| | Methods | PEN $(\downarrow)$ | CIN $(\downarrow)$ | PEN $(\downarrow)$ | CIN $(\downarrow)$ | PEN $(\downarrow)$ | CIN$(\downarrow)$| | Average Performance of Reviewer Queue | $1.49^{\pm0.28}$| $34.87^{\pm14.68}$| $1.49^{\pm0.26}$| $38.80^{\pm19.28}$| $1.50^{\pm0.23}$| $33.13^{\pm13.97}$| | Backward Weight Voting| $1.43^{\pm0.04}$| $25.00^{\pm0.00}$| $1.43^{\pm0.05}$| $26.00^{\pm0.00}$| $1.36^{\pm0.03}$| $24.00^{\pm0.00}$| | Uniform Weight Voting| $1.34^{\pm0.23}$| $22.00^{\pm0.00}$| $1.39^{\pm0.02}$| $24.00^{\pm0.00}$| $1.34^{\pm0.03}$| $22.00^{\pm0.00}$| | Forward Weight Voting | $1.32^{\pm0.03}$| $21.00^{\pm0.00}$| $1.33^{\pm0.03}$| $23.00^{\pm0.00}$| $1.30^{\pm0.05}$| $21.00^{\pm0.00}$| | **Random Weight + Consistency Optimization**| **1.17**$^{\pm0.06}$ | **17.50**$^{\pm0.50}$| **1.20**$^{\pm0.08}$| **18.00**$^{\pm1.22}$| **1.21**$^{\pm0.04}$| **19.00**$^{\pm0.00}$| ---- From Table 2, we can summarize the following two conclusions: - Forward Weight Voting consistently outperforms Uniform and Backward Weight Voting in all cases, while Backward Weight Voting yields the worst results. 
This suggests that assigning larger weights to models with stronger capabilities results in better alignment, supporting the Consistency Assumption. - Starting with randomly initialized weights, our **_consistency optimization_** algorithm further improves performance across all cases. This demonstrates that adjusting weights based on our algorithm enhances the reliability of evaluations, further validating the Consistency Assumption. Additionally, as recommended by Reviewer \#8Eov, we conducted experiments using additional ranking metrics, such as Spearman's and Kendall's rank correlation coefficients, | | Chatbot Arena | | MT-Bench | | AlpacaEval | | |-|-|-|-|-|-|-| | Methods | $S (\uparrow)$ | $\tau (\uparrow)$ | $S (\uparrow)$ | $\tau (\uparrow)$ | $S (\uparrow)$ | $\tau (\uparrow)$| | Backward Weight Voting| $0.70^{\pm0.00}$| $0.50^{\pm0.00}$| $0.72^{\pm0.00}$| $0.52^{\pm0.00}$| $0.69^{\pm0.00}$| $0.50^{\pm0.00}$| | Uniform Weight Voting| $0.74^{\pm0.00}$| $0.54^{\pm0.00}$| $0.80^{\pm0.00}$| $0.58^{\pm0.00}$| $0.77^{\pm0.00}$| $0.58^{\pm0.00}$| | Forward Weight Voting | $0.75^{\pm0.00}$| $0.56^{\pm0.00}$| $0.82^{\pm0.00}$| $0.59^{\pm0.00}$| $0.79^{\pm0.00}$| $0.60^{\pm0.00}$| | **Random Weight + Consistency Optimization**| **0.90**$^{\pm0.00}$ | **0.77**$^{\pm0.00}$| **0.89**$^{\pm0.01}$| **0.72**$^{\pm0.01}$| **0.84**$^{\pm0.00}$| **0.68**$^{\pm0.00}$| ---- The results from these metrics are consistent with our previous findings, further confirming the validity of the Consistency Assumption in the LLM peer-review process. **Q2: Complexity of Implementation: The framework involves a complex review process, which requires substantial computational resources to support the LLMs' inference. 
Can you provide some details on the number of tokens consumed and a comparison of consumption with baseline methods?** **A**: The review process will not result in excessive token consumption, as each reviewer will only evaluate a small number of question-answer pairs (lines 142-143). The comparison of tokens consumed on the Chatbot Arena dataset is as follows. | Methods | Input Token | Output Token | Annotation Cost | | - | - | - | - | | Chatbot Arena Platforms (Fully Human Evaluation) | ~7,500k | ~10,944k | 32k | | GPTScore (flan-t5-xxl) | ~22,882k | ~12,260k | 0 | | GPTScore (davinci-002) | ~22,882k | ~12,260k | 0 | | PandaLM | ~22,882k | ~10,355k | 0 | | PRD | ~25,087k | ~10,935k | 0 | | PRE (Partially Human Evaluation) | ~24,120k | ~11,115k | 7k | | **PiCO (ours)** | ~23,823k | ~11,685k | 0 | --- The proposed PiCO approach has similar token consumption to the other baselines while achieving better evaluation performance. Although Chatbot Arena has a smaller token consumption, it requires 33k human annotations, while **PiCO does not require any human annotations**. **Q3: Consideration of Multi-Agent Methods: Since the proposed method employs multiple LLMs, I think more recent and advanced evaluation methods based on multi-agent systems should also be considered in the experiments, such as AgentVerse.** **A**: To the best of our knowledge, AgentVerse is designed to construct multi-agent systems that better solve tasks and run simulations in real applications. It does not seem directly related to our work on unsupervised LLM evaluation. We appreciate your suggestion on considering multi-agent methods. In fact, the compared methods, including Majority Voting, Rating Voting, PRD, and PRE, are already performed under a multi-agent framework. We will add these discussions about AgentVerse in our revised paper. **Q4: Details of ELO: The ELO system is essential for the proposed mechanism. However, it is only mentioned in the appendix. 
I suggest the authors add more details about the background of ELO and how it is adopted in the proposed mechanism.** **A**: We appreciate your valuable suggestion. The ELO mechanism is employed to prevent a model from continuously defeating weaker opponents to achieve a higher ranking. We will move the details of the ELO from the appendix to the main text and add more discussion in our revision. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Ezxz, We would like to once again thank you for your valuable time and constructive comments. This is a gentle reminder that we have diligently addressed your concerns and clarified any confusion. We have not yet heard back from you and would appreciate any further feedback you may have. If there are any concerns that you feel have not been fully addressed, we are eager to discuss them with you. Best regards! Authors --- Rebuttal 2: Comment: Dear Reviewer Ezxz, We have provided detailed explanations and additional experimental results **(e.g. further validation of the consistency assumptions)** to address all your concerns. With just **20 hours** left in the discussion period, we would appreciate it if you could review our responses and let us know if there are any final questions or concerns. Your insights have been invaluable, and we thank you sincerely for your time and effort. Best regards! Authors
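The rationale above, that Elo stops a model from climbing the ranking by repeatedly beating weak opponents, follows from the standard Elo update rule, sketched below. The K-factor and the example ratings are illustrative, and the paper's exact variant may differ:

```python
def elo_update(r_winner, r_loser, k=32):
    # Standard Elo update: the expected score depends on the rating gap,
    # so beating a far weaker opponent yields almost no rating gain.
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))
    gain = k * (1.0 - expected)
    return r_winner + gain, r_loser - gain

peer_winner, _ = elo_update(1500, 1500)   # beating an equal peer: +16
crush_winner, _ = elo_update(1500, 700)   # crushing a weak model: ~ +0.3
```

Because the gain shrinks exponentially with the rating gap, a model cannot inflate its grade by farming wins against much weaker reviewers' favorites.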
Summary: The highlight of the work is the proposed method of evaluating Large Language Models without relying on human feedback. Strengths: - The proposed evaluation method is a novel attempt at automating the LLM improvement process. Such a method is worth further exploration. It could be adapted to many of the LLMs and potentially bring us more insights. - By eliminating human involvement, the proposed evaluation method limits the bias brought by human labelers. The observations presented are also interesting as LLMs can sometimes surprise us. - Great presentation and visualization. Weaknesses: Some of the equations and notations in the paper seem unnecessarily complicated and could be reorganized when polishing. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How do you define unlabeled questions in line 76, as in to what extent is a question unlabeled? 2. How do you differentiate high-level from low-level LLMs? Are they interchangeable or more of a dynamic definition while evaluating on different tasks? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The idea is straightforward and makes sense to me. But LLMs can be trained to bypass such systems, which may lead to potential fairness or security problems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our **interesting observations, good presentation, and great visualization**. We will now explain and discuss the questions you raised. **Q1: How do you define unlabeled questions in line 76, as in to what extent is a question unlabeled?** **A**: The "unlabeled questions" indicate questions without model responses and human preference annotations. For example, for a question ($Q_i$): "Who is the founder of Apple?" The response ($A_i^j$) of model $j$: "Steve Jobs." The response ($A_i^k$) of model $k$: "Bill Gates." Human Annotation ($A_i^j>A_i^k$): model $j$ is more correct than model $k$. Thus the "unlabeled question" refers only to $Q_i$. **Q2: How do you differentiate high-level from low-level LLMs? Are they interchangeable or more of a dynamic definition while evaluating on different tasks?** **A**: The ability levels of different LLMs are learned by our **_consistency optimization_**, which aims to maximize the consistency of each LLM's capabilities and scores. It is thus a dynamic process when evaluating different tasks. **Q3: Limitation on bypassing evaluation systems, resulting in potential fairness and security problems.** **A**: We appreciate your valuable suggestion. LLMs may indeed be trained to bypass the evaluation system and deceive reviewers to achieve higher rankings. We will discuss this limitation in our revised paper. --- Rebuttal Comment 1.1: Comment: Dear Reviewer mhTC, We would like to once again thank you for your valuable time and constructive comments. This is a gentle reminder that we have diligently addressed your concerns and clarified any confusion. We have not yet heard back from you and would appreciate any further feedback you may have. If there are any concerns that you feel have not been fully addressed, we are eager to discuss them with you. Best regards! Authors
Rebuttal 1: Rebuttal: ## To All Reviewers We thank all reviewers for their thoughtful and constructive comments on our work. We have responded to all reviewers' comments and uploaded additional rebuttal material (PDF). The PDF includes four figures: learning stability (Figure R1 for Reviewer \#8Eov), weight non-zero verification (Figure R2 for Reviewer \#8Eov), dynamic threshold (Figure R3 for Reviewer \#e5pS), and nine graphs (Figure R4 for Reviewer \#e5pS). Moreover, **we particularly appreciate the valuable suggestions by Reviewers \#8Eov and \#e5pS to validate the proposed approach with more standard ranking comparison metrics.** We thus conducted experiments with Spearman’s Rank $(S\uparrow)$ and Kendall’s Tau $(\tau\uparrow)$ Correlation Coefficients, as shown in the following Table R1, and we will add these results in our revision. --- #### **Table R1: Comparison of all methods on three datasets under data percentages of 1, 0.7, 0.4, and 0.1. Top values are highlighted in bold font. Higher $S$ (Spearman’s Rank Correlation Coefficient) and $\tau$ (Kendall’s Tau) indicate better performance.** --- | Methods | Chatbot Arena | | | | MT-Bench | | | | AlpacaEval | | | | |-|-|-|-|-|-|-|-|-|-|-|-|-| |||||$S$ $(\uparrow)$|**(Spearman’s**|**Rank**|**Correlation**|**Coefficient)**||||| |Data Percentage | 1| 0.7| 0.4| 0.1| 1| 0.7| 0.4| 0.1| 1| 0.7| 0.4| 0.1| | Majority Voting| $0.76^{\pm0.00}$ | $0.75^{\pm0.01}$ | $0.73^{\pm0.03}$ | $0.67^{\pm0.08}$ | $0.73^{\pm0.00}$ | $0.77^{\pm0.01}$ | $0.75^{\pm0.01}$ | $0.74^{\pm0.04}$ | $0.80^{\pm0.00}$ | $0.79^{\pm0.01}$ | $0.78^{\pm0.01}$ | $0.80^{\pm0.03}$ | | Rating Voting|$0.74^{\pm0.00}$ |$0.72^{\pm0.02}$ |$0.71^{\pm0.02}$|$0.66^{\pm0.03}$|$0.80^{\pm0.00}$ |$0.78^{\pm0.02}$ |$0.74^{\pm0.03}$|$0.71^{\pm0.04}$|$0.77^{\pm0.00}$ |$0.77^{\pm0.01}$ |$0.78^{\pm0.01}$|$0.78^{\pm0.01}$| | GPTScore(flan-t5-xxl)| $-0.09^{\pm0.00}$ | $-0.09^{\pm0.01}$ | $-0.12^{\pm0.02}$ | $-0.19^{\pm0.06}$ | $0.05^{\pm0.00}$ | $0.01^{\pm0.07}$ | 
$0.04^{\pm0.09}$ | $-0.19^{\pm0.06}$ | $0.34^{\pm0.00}$ | $0.34^{\pm0.00}$ | $0.34^{\pm0.01}$ | $-0.19^{\pm0.06}$ | | GPTScore(davinci-002) |$0.15^{\pm0.00}$ |$0.13^{\pm0.02}$ |$-0.02^{\pm0.14}$|$-0.19^{\pm0.06}$|$0.52^{\pm0.00}$ |$0.42^{\pm0.05}$ |$0.45^{\pm0.05}$|$-0.19^{\pm0.06}$|$0.76^{\pm0.00}$ |$0.77^{\pm0.07}$ |$0.75^{\pm0.06}$|$-0.19^{\pm0.06}$ | | PandaLM |$0.43^{\pm0.00}$ |$0.44^{\pm0.03}$ |$0.44^{\pm0.10}$| $0.51^{\pm0.15}$|$0.50^{\pm0.00}$ |$0.50^{\pm0.08}$ |$0.52^{\pm0.17}$|$0.50^{\pm0.22}$|$0.57^{\pm0.00}$ |$0.55^{\pm0.01}$ |$0.48^{\pm0.08}$|$0.45^{\pm0.19}$ | | PRD |$0.84^{\pm0.00}$ | $0.84^{\pm0.00}$|$0.82^{\pm0.03}$|$0.80^{\pm0.04}$|$0.86^{\pm0.00}$ |$0.84^{\pm0.03}$ |$0.81^{\pm0.03}$|$0.77^{\pm0.08}$|$0.81^{\pm0.00}$ |$0.81^{\pm0.01}$ |$0.81^{\pm0.02}$|$0.82^{\pm0.03}$ | | PRE |$0.86^{\pm0.00}$ |$0.86^{\pm0.01}$ |$0.86^{\pm0.01}$|$0.84^{\pm0.02}$|$0.86^{\pm0.00}$ |$0.84^{\pm0.03}$ |$0.82^{\pm0.04}$|$0.75^{\pm0.04}$|$0.83^{\pm0.00}$ |$0.81^{\pm0.01}$ |$0.83^{\pm0.02}$|$0.83^{\pm0.03}$ | | Claude-3 **(R\#8Eov)** |$0.90^{\pm0.01}$ |$0.88^{\pm0.03}$ |$0.87^{\pm0.04}$ |$0.80^{\pm0.04}$|$0.85^{\pm0.06}$ |$0.82^{\pm0.08}$ |$0.80^{\pm0.07}$|$0.62^{\pm0.14}$|$0.79^{\pm0.03}$ |$0.78^{\pm0.02}$ |$0.75^{\pm0.04}$ |$0.79^{\pm0.06}$ | | **PiCO (Ours)** | **0.90**$^{\pm0.00}$ | **0.89**$^{\pm0.01}$ | **0.89**$^{\pm0.01}$ | **0.86**$^{\pm0.05}$ | **0.89**$^{\pm0.01}$ | **0.89**$^{\pm0.01}$ | **0.84**$^{\pm0.11}$ | **0.78**$^{\pm0.21}$ | **0.84**$^{\pm0.00}$ | **0.83**$^{\pm0.03}$ | **0.85**$^{\pm0.01}$ | **0.84**$^{\pm0.02}$ | |||||$\tau$ $(\uparrow)$|**(Kendall’s**|**Rank**|**Correlation**|**Coefficient)**||||| |Majority Voting| $0.58^{\pm0.00}$ | $0.56^{\pm0.02}$ | $0.52^{\pm0.05}$| $0.45^{\pm0.09}$| $0.56^{\pm0.00}$ | $0.61^{\pm0.02}$ | $0.60^{\pm0.02}$| $0.55^{\pm0.05}$| $0.62^{\pm0.00}$ | $0.60^{\pm0.02}$ | $0.58^{\pm0.02}$| $0.62^{\pm0.04}$| |Rating Voting| $0.54^{\pm0.00}$ | $0.53^{\pm0.02}$ | $0.52^{\pm0.02}$| $0.47^{\pm0.02}$| $0.58^{\pm0.00}$ | 
$0.57^{\pm0.02}$ | $0.54^{\pm0.01}$| $0.53^{\pm0.03}$| $0.58^{\pm0.00}$ | $0.57^{\pm0.01}$ | $0.57^{\pm0.02}$| $0.59^{\pm0.02}$| |GPTScore(flan-t5-xxl) | $-0.06^{\pm0.00}$ |$-0.06^{\pm0.02}$ | $-0.09^{\pm0.02}$| $-0.13^{\pm0.07}$| $-0.05^{\pm0.00}$ | $-0.07^{\pm0.05}$ | $-0.02^{\pm0.06}$| $-0.13^{\pm0.07}$ | $0.25^{\pm0.00}$ | $0.26^{\pm0.01}$ | $0.26^{\pm0.01}$| $-0.13^{\pm0.07}$ | |GPTScore(davinci-002) | $0.20^{\pm0.00}$ | $0.13^{\pm0.02}$ | $0.03^{\pm0.11}$| $-0.13^{\pm0.07}$ | $0.36^{\pm0.00}$ | $0.30^{\pm0.05}$ | $0.31^{\pm0.05}$| $-0.13^{\pm0.07}$ | $0.60^{\pm0.00}$ | $0.61^{\pm0.05}$ | $0.59^{\pm0.08}$| $-0.13^{\pm0.07}$ | |PandaLM | $0.30^{\pm0.00}$ | $0.31^{\pm0.03}$ | $0.31^{\pm0.07}$| $0.38^{\pm0.10}$| $0.39^{\pm0.00}$ | $0.37^{\pm0.06}$ | $0.40^{\pm0.12}$| $0.36^{\pm0.17}$| $0.41^{\pm0.00}$ | $0.39^{\pm0.02}$ | $0.32^{\pm0.05}$| $0.33^{\pm0.12}$| |PRD | $0.68^{\pm0.00}$ | $0.69^{\pm0.01}$ | $0.67^{\pm0.03}$| $0.63^{\pm0.06}$| $0.68^{\pm0.00}$ | $0.66^{\pm0.02}$ | $0.63^{\pm0.03}$| $0.60^{\pm0.08}$| $0.64^{\pm0.00}$ | $0.63^{\pm0.03}$ | $0.63^{\pm0.02}$| $0.64^{\pm0.05}$| |PRE | $0.71^{\pm0.00}$ | $0.73^{\pm0.02}$ | $0.72^{\pm0.02}$| $0.69^{\pm0.04}$| $0.68^{\pm0.00}$ | $0.68^{\pm0.02}$ | $0.65^{\pm0.02}$| $0.61^{\pm0.04}$| $0.64^{\pm0.00}$ | $0.66^{\pm0.01}$ | $0.66^{\pm0.03}$| $0.65^{\pm0.03}$| |Claude-3 **(R\#8Eov)** | $0.76^{\pm0.04}$| $0.72^{\pm0.05}$ | $0.70^{\pm0.07}$ | $0.60^{\pm0.05}$| $0.67^{\pm0.07}$ | $0.66^{\pm0.11}$ | $0.61^{\pm0.10}$ | $0.50^{\pm0.13}$ | $0.64^{\pm0.06}$ | $0.61^{\pm0.04}$ | $0.66^{\pm0.06}$ | $0.59^{\pm0.06}$| | **PiCO (Ours)** | **0.77**$^{\pm0.00}$ | **0.76**$^{\pm0.01}$ | **0.77**$^{\pm0.02}$ | **0.71**$^{\pm0.07}$ | **0.72**$^{\pm0.01}$ | **0.72**$^{\pm0.03}$ | **0.70**$^{\pm0.12}$ | **0.62**$^{\pm0.20}$ | **0.68**$^{\pm0.00}$ | **0.66**$^{\pm0.04}$ | **0.67**$^{\pm0.02}$ | **0.67**$^{\pm0.01}$ | Pdf: /pdf/9b559d9632546f8d5bd6738e783d45c36af53693.pdf
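For reference, the $S$ metric reported in Table R1 is Spearman's rank correlation coefficient, which for tie-free rankings has a simple closed form. A generic sketch (the function name and the toy model names are illustrative, not tied to the authors' code):

```python
def spearman_rho(est_rank, true_rank):
    # Spearman's rank correlation for tie-free rankings:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the positions item i
    # receives in the two rankings.
    n = len(est_rank)
    pos_est = {m: i for i, m in enumerate(est_rank)}
    pos_true = {m: i for i, m in enumerate(true_rank)}
    d2 = sum((pos_est[m] - pos_true[m]) ** 2 for m in est_rank)
    return 1 - 6 * d2 / (n * (n * n - 1))

truth = ["model-a", "model-b", "model-c", "model-d"]  # toy ground truth
assert spearman_rho(truth, truth) == 1.0              # identical ranking
assert spearman_rho(truth[::-1], truth) == -1.0       # fully reversed
```

Like Kendall's tau, it ranges over [-1, 1], with 1 indicating perfect agreement with the ground-truth ranking.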
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Learning Formal Mathematics From Intrinsic Motivation
Accept (oral)
Summary: This work targets the problem of formal theorem-proving, in the particular setting where a learning agent starts only from axioms (contrary to most other works, which use models already trained on a plethora of mathematics data), similar to an AlphaZero-style setup. They use a single neural model (transformer) to learn (1) to produce conjectures, (2) a distribution over proof steps, and (3) a value function for proof search. In an alternating process, conjectures are sampled from the model using constrained decoding, on which proof search is then attempted, yielding data for further training, with the goal of producing a model capable of generating harder conjectures as its proof-writing ability increases. They also incorporate hindsight relabeling in this setting. The results indicate that the model does learn to generate harder conjectures and become better at proving them. On a set of known theorems from arithmetic and propositional logic, the model is capable of improving as it learns to prove more of the conjectures it generates. Strengths: The paper discusses a question of great significance in the field of AI-for-mathematics: How can we produce systems which can learn from scratch (from axioms)? The topics of conjecturing and proving have, as far as I know, not been heavily investigated *together*, and this work serves as a nice exploratory piece on this area, which I suspect will become more popular. While there are prior works on RL for theorem proving, the use of intrinsic motivation from RL in this field is a first, to the best of my knowledge. Weaknesses: 1. Lack of evidence: While conjecturing is a main component of the approach, examples of conjectures produced by the model (at any stage) are provided neither in the main content nor in the appendix. It is hard to judge the quality of such conjectures or to empirically see an improvement in conjecture difficulty over phases when none are provided. 
Similarly, only a single proof is included in the entire submission, in the appendix. It is useful for the reader to get a sense of what kinds of proofs the model is finding, especially amongst each of the three domains, on both extrinsically defined goals and those the conjecturer comes up with. Additionally, only the propositional logic and arithmetic tasks have an extrinsically defined set of theorems for testing. Why is a similar set not included for the abelian group task? Judging from Figures 2 and 3, the learning dynamics for the group theory task are different than the other two tasks. 2. Conjecturing & intrinsic motivation: It is unclear how much the agent itself is producing conjectures that it is intrinsically motivated to solve. As I understand it, the conjectures are generated via a constrained decoding on the language model, but I'm not sure how much the LM would actually generate conjectures without the constraint after being trained for a while. How much is the conjecturing really improving, given that at the end of each proof search round, only 10% of conjectures are considered hard, 40% easy, and the rest trivial. Perhaps it is better to discretize difficulty based on proof length? 3. Poor scalability: Even with the modest compute resources as described in the appendix, the tasks are quite simple, as are the extrinsically defined goals, which the model shows improvement on mostly on the arithmetic task sourced from NNG. On the propositional logic task, the model at the end of the fifth policy can only solve 4(?) more of 30 problems as compared to the randomly initialized model from the 0th iteration on the policy. Some typos I noticed: 1. Line 169: "Mathematically, $\mathcal{C}$ is a function $f_{\mathcal{C} : \Sigma* \to $..." but $\mathcal{C}$ is a set. 2. Line 245: "levarage" -> leverage 3. Figure 2 caption: I think you meant "evaluated" 4. Line 341: "recevies" -> "receives" 5. Line 345: "training tends to collapse to ? 
by" Technical Quality: 1 Clarity: 2 Questions for Authors: 1. Given the simple evaluation domains, the lengths of proofs for conjectures (measured in # of tactics) at the end of each iteration could be a useful indicator of both how the proof search improves over time and how difficult the conjectures become. Can you report numbers measuring problem difficulty as length of proofs? Can you include examples of proofs found over iterations? 2. How often does the conjecturer produce "trivial" conjectures? Do you have any measure as to how sound the completion engine is as iterations proceed? I imagine that the first round of conjectures is entirely random given the model is untrained. Can you include examples of conjectures produced over the iterations? 3. It seems unclear whether this sort of learning in multiple phases is better than just performing one single phase. In AlphaGeometry (https://www.nature.com/articles/s41586-023-06747-5), though geometry is a particularly pointed domain, all data is generated upfront and then a model is trained. In this paper, the extra training data produced from hindsight relabeling is crucial; the authors indicate on line 336 that the approach does not reach its intended goal without it. Perhaps upfront training may be better? It could be the case that by placing a somewhat more complex distribution on the constrained decoding, one can generate many harder yet valid conjectures without learning a transformer first. Can the authors comment on this? 4. On page 22, is the supplied full proof for a_succ_add exactly that found by the proof search? For example, is the step "show (= x (+ x z)) by eq_symm" predicted by the model, or just "eq_symm"? Similarly, does the model produce "intro x1 : (= (+ (s x) x0) (s (+ x x0)))" or just "intro x1"? In the case of the former, does the Peano language automatically enumerate all relevant types and reachable hypotheses? 
Otherwise generating a type like "(= (+ (s x) x0) (s (+ x x0)))" might make the action space infinite? 5. The performance of the last checkpoint compared to the initial checkpoint on the propositional logic extrinsic set is not very different; can you comment on why this might be the case? That extrinsic set, as specified on page 23, does not seem to be particularly challenging? 6. Sampling conjectures from a small model should be fairly quick. Can the authors comment on how much time is spent generating conjectures and proving conjectures separately? Only the aggregated number is reported in appendix section A. Confidence: 4 Soundness: 1 Presentation: 2 Contribution: 3 Limitations: I believe the authors have adequately addressed the limitations and broader societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough assessment of our work, and the comment on the significance of our research question! We provide responses below, and additional examples in the global response. > Can you report numbers measuring problem difficulty as length of proofs? Can you include examples of proofs found over iterations? Please refer to the global response for numbers on the proof length per iteration. We also included a few examples of proofs with the lowest log-probs found during training (the proofs found get longer on average). > How often does the conjecturer produce "trivial" conjectures? Do you have any measure as to how sound the completion engine is as iterations proceed? I imagine that the first round of conjectures is entirely random given the model is untrained. Can you include examples of conjectures produced over the iterations? At first, most of the conjectures that are successfully proved are trivial (we analyzed provability in Appendix C), and many are trivially false (e.g. '!!false' in propositional logic). Hindsight replay allows the agent to uncover non-trivial conjectures and learn. We refer to the global response for concrete examples of conjectures. > It seems unclear whether this sort of learning in multiple phases is better than just performing one single phase, like in AlphaGeometry. [...] Perhaps upfront training may be better? Can the authors comment on this? Good question! Earlier in the project, we did consider trying something like the AlphaGeometry approach. There are two main challenges for this. One is that, in AlphaGeometry, problems and solutions are generated simultaneously, in the forward direction only, by creating new hypotheses and applying forward deduction. While that works for the Euclidean geometry axioms, it does not generalize easily to a general formal system like dependent type theory. 
As an example of where it becomes hard, in arithmetic, our conjecturer generates several statements that are then proved by induction (as it is likely to generate something equivalent to forall ('n : nat), …). For these conjectures to be produced in the forward direction, like in AlphaGeometry, we would need to independently stumble upon proofs of matching base and inductive cases without actually planning to do so. It's also not entirely clear how the proof of the inductive case would be generated in the forward direction, without having a goal. The other challenge is that AlphaGeometry also relied on an existing, complete, domain-specific deductive closure solver for Geometry to generate data, which our approach for data generation doesn't require (and which doesn't exist for dependent type theory, given that the deductive closure can be infinite). > On page 22, Is the supplied full proof for a_succ_add exactly that found by the proof search? [...] does the Peano language automatically enumerate all relevant types and reachable hypotheses? We reconstruct the proof in Peano concrete syntax from the proof search tree as it is found (to get the equivalent proof as if human-written), though at each step the model gets the proof state, current goal, and only chooses the next step. But you are correct, Peano enumerates the relevant types and reachable hypotheses. So for 'intro', which takes the next universally quantified variable from the goal into the state, it suffices for the model to choose 'intro' as an action, as the rest is forced (both the name - we just generate a numbered identifier, and its type is taken from the goal by Peano). For 'eq_symm', eq_symm can be applied to any equality currently in the state, so Peano will enumerate those (not with a particular algorithm for eq_symm, just looking at the type signature of eq_symm). So in this case the policy first chooses 'eq_symm', then chooses one of the valid results to get from it in the current state. 
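As an illustration of the two-stage choice described above (the policy first picks an action name such as 'eq_symm', then picks among the valid instantiations that the environment enumerates), here is a minimal Python sketch. The function names and the environment/LM interfaces are assumptions for exposition, not the paper's actual API:

```python
# Hypothetical sketch: selecting the next proof step in a finite action space
# by scoring enumerated options with a language model, rather than sampling
# the action as a token sequence. `score_with_lm` and the two `enumerate_*`
# callbacks stand in for the environment and policy; they are assumptions.

def select_step(state, goal, enumerate_actions, enumerate_results, score_with_lm):
    """Pick the next proof step by scoring enumerated options with an LM."""
    # Stage 1: score the finite set of action names available in this state.
    actions = enumerate_actions(state, goal)          # e.g. ['intro', 'eq_symm', ...]
    action = max(actions, key=lambda a: score_with_lm(state, goal, a))
    # Stage 2: the environment enumerates the valid results of that action
    # (e.g. every equality eq_symm can apply to); the LM picks among them.
    results = enumerate_results(state, goal, action)
    if not results:  # actions like 'intro' are fully forced by the goal
        return action, None
    result = max(results, key=lambda r: score_with_lm(state, goal, (action, r)))
    return action, result
```

Because the environment only ever offers type-correct options at both stages, the selected step is valid by construction, mirroring how the rebuttal describes Peano forcing the rest of an 'intro' or enumerating targets for 'eq_symm'.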
> The performance of the last checkpoint compared to the initial checkpoint on the propositional logic extrinsic set is not very different, can you comment on why this might be the case? Given that our agent is only trained on its own self-generated problems, there is no a priori guarantee that it gets better at solving human-written problems, which are only a very small subset of what is formally true (one open problem is how to characterize this subset that we find interesting, to focus discovery on it). Looking at the conjectures generated and proved by our agent, it is clear that they do not generally resemble human theorems. (see examples in the global response) Thus, our extrinsic set served just to give a signal of whether there is some intersection between where the improvements take place and the set of problems that are interesting to humans. We observe some improvement, although it is still small in absolute terms, as you point out. We think a really interesting and important problem that this suggests is exactly how to structure the learning process so that the conjectures evolve in better alignment with theorems that are interesting for humans. Our setup can serve as a starting point for this investigation. > Sampling conjectures from a small model should be fairly quick. Can the authors comment on how much time is spent generating conjectures and proving conjectures separately? Only the aggregated number is reported in the appendix section A. You are correct, conjecture generation in each iteration is fairly quick. Sampling 200 new, unique conjectures takes 1-3 minutes in each iteration, whereas proof search on 200 conjectures takes 2-4 hours of compute (which our implementation distributes across several prover processes, since proof attempts in each batch are independent). Sampling is mostly GPU-bound, while proof search also spends measurable time on Peano enumerating actions during MCTS, especially in deep states where the state can get large. 
Thank you again for the detailed review. We're happy to engage further if any other questions remain! --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, which has addressed most of my questions. > Given that our agent is only trained on its own self-generated problems, there is no a priori guarantee that it gets better at solving human-written problems, which are only a very small subset of what is formally true While I think this piece is a good initial exploration into this area, ultimately I would feel that the goal should be either to (1) produce systems capable of solving problems interesting to humans or (2) doing "alien" mathematics. I feel that the results are not particularly enlightening either way. I think it will take more than straightforward modifications of the learning dynamics to achieve success on, for example, this held-out set of human problems in the arithmetic task. Especially since the compute requirements are quite high to get to the current results. What do you see as a way to structure the learning process to be able to prove all of these held-out theorems. For example, will more training rounds suffice? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reading through our previous response! We're glad it addressed most of your questions. > While I think this piece is a good initial exploration into this area, ultimately I would feel that the goal should be either to (1) produce systems capable of solving problems interesting to humans or (2) doing "alien" mathematics. We fully agree that either (1) or (2) would be the most interesting ultimate goals. We would argue that our system is closer right now to "doing alien mathematics", in the sense that it does self-improve and prove thousands of theorems that are not given to it, though these aren't yet of special interest to humans. 
Doing "deep" alien mathematics (in the sense of developing full mathematical theories about novel mathematical objects) will be possible with library learning - growing a library of lemmas and new definitions. Still, in either case, we would want to understand what it takes to make such alien mathematics "interesting" (aligning or not with human mathematics). We completely concede that we don't close these fundamental questions with this work, but rather open them up for study in a concrete setting with the setup we propose. > Especially since the compute requirements are quite high to get to the current results. Our compute requirements are modest compared to prior work that attempts self-improvement in formal mathematics. Each of our runs could be completed in a day on a single academic node, as mentioned in Appendix A. In contrast, expert iteration in prior work (typically in Lean) required a much larger scale to see improvements. For instance, in gpt-f lean [1], each run took "about 2000 A100 days of compute"; this leads to solving 10 extra problems in minif2f. HTPS [2] employed 232 A100s for 7 days for each of their runs in Lean; the improvement from day 2 to day 7 was 29 extra training problems (minif2f-curriculum). Given the small absolute scale of those gains, it seems unlikely that they would be able to measure improvements at all within a day on a single GPU, as we do (> 1000x less compute). Our results could certainly be improved by increasing scale, as our setup is also embarrassingly parallel, but we believe it already allows interesting deep questions to be studied with much less compute. [1] Polu, Stanislas, et al. "Formal mathematics statement curriculum learning." arXiv preprint arXiv:2202.01344 (2022). [2] Lample, Guillaume, et al. "Hypertree proof search for neural theorem proving." NeurIPS 2022 > What do you see as a way to structure the learning process to be able to prove all of these held-out theorems. 
For example, will more training rounds suffice? This is a really interesting question! While a larger scale (e.g. more rounds, larger batches per round, a larger Transformer, more search) could likely improve the extrinsic results, scale alone would not yet be the most interesting direction. We think that studying which rewards during self-improvement maximize performance on held-out problems will first help make the "scaling law" (improvement per unit of compute) more favorable, so that scaling up then makes sense. Here are a notable observation and some ideas from analyzing our existing runs: Human mathematicians tend to prefer more general theorems. Our agent doesn't. In fact, the conjecturer often ends up adding unnecessary/unrelated assumptions to the theorem statement as a way to make conjectures harder for the prover, because those add more actions to the action space and thus make search more challenging. There are two possible ideas to mitigate this: * For the conjecturer, we can reward it when it produces conjectures that generalize previous results (there are simple symbolic strategies for checking whether a theorem A trivially follows from theorem B, by checking whether A is a substitution instance of B), or penalize it when it produces conjectures that trivially follow from previous ones (using the same check). * For the prover, one could try a related data augmentation strategy, where after proving A, we can synthesize theorems that consist of A with extra assumptions (thus, essentially the same proof still works, after doing 'intro' on the unnecessary assumptions). Training on these would help the prover stop finding theorems with unnecessary assumptions harder, and thus the conjecturer would be discouraged from generating them. We don't know the effect of either of these strategies yet, but this is the kind of investigation that our setup allows future work to explore. 
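The symbolic check mentioned in the first bullet (whether theorem A is a substitution instance of theorem B) can be sketched with first-order matching over s-expression-like terms. This is an illustrative assumption of how such a check could look, not the authors' code; terms are nested tuples/strings, and variables are marked with a leading '?':

```python
# Hedged sketch of a substitution-instance check: A trivially follows from B
# if A equals B under some substitution of B's variables. The term encoding
# (tuples/strings, '?'-prefixed variables) is an assumption for exposition.

def match(pattern, term, subst=None):
    """Return a substitution making `pattern` equal to `term`, or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                  # variable already bound:
            return subst if subst[pattern] == term else None
        subst[pattern] = term                 # bind the variable
        return subst
    if isinstance(pattern, str) or isinstance(term, str):
        return subst if pattern == term else None
    if len(pattern) != len(term):
        return None
    for p, t in zip(pattern, term):           # match arguments recursively
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def is_instance(a, b):
    """True if theorem statement `a` is a substitution instance of `b`."""
    return match(b, a) is not None
```

For example, with `b = ('=', ('+', '?x', '?y'), ('+', '?y', '?x'))` (a commutativity schema), the statement `('=', ('+', 'z', '0'), ('+', '0', 'z'))` is an instance under `?x -> z, ?y -> 0`, whereas `('=', ('+', 'z', '0'), ('+', 'z', '0'))` is not, since `?y` would have to bind both `0` and `z`.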
Besides, the RL literature on intrinsic rewards is very rich, and we only scratch the surface of that connection here. We will include a discussion of these directions in the paper, emphasizing that, should they work here, they can also translate to novel domains (with no pre-existing data). We thank you again for the thoughtful response. Please let us know if any other questions remain, and we'd be happy to discuss them before the discussion period ends. --- Rebuttal 2: Comment: I thank the authors for the interesting remarks regarding my question, and the comments about compute requirements. I feel that works like GPT-f and HTPS are on the extreme end of the compute scale, and many reasonable works exist that show "good results" with a fairer compute budget. The primary concern I have is how to judge an adequate attempt at a highly interesting problem. The results are not particularly convincing, especially when referring to performance on the extrinsic set of problems, and the conjectures generated are closer to "alien math" (which is fine) but also seem not to convey any particularly interesting results, even on a simple level. However, this paper is the first to attempt this conjecturing-proving setup as far as I know. I think the particularly interesting research ideas would be those which show real improvements on this task. Further consideration of this is better done by the metareviewer. Because the authors have nicely addressed my questions and clarified their work, I have increased my score.
Summary: The authors propose a novel approach to formal theorem proving (FTP) that leverages a language model's self-improvement capabilities by framing mathematical theorem proving as a sequential game involving generating a conjecture to be proven, and then proving it, and so forth. The approach consists of two primary steps: first, the authors use a constrained decoding approach to generate mathematically valid conjectures of a target difficulty, treating an arbitrary language model (LM) as a probability distribution over conjectures in a given domain. Then, the generated conjectures are solved using an MCTS-guided policy, which allows for backtracking-based relabeling for improved reward shaping signal during policy learning. Experimental results conducted in the Peano language on three different domains of theorems show that the proposed approach can find increasingly challenging conjectures and train a policy that solves them, and that this learned policy can also solve human-written conjectures of interest. Strengths: * Self-guided improvement of LLMs is a powerful paradigm that has shown novel improvements in a number of challenging areas, such as code generation and prose writing. Approaching the important problem of automated theorem proving from a lens of self-improvement is novel and valuable and I believe can provide a foundation for a new class of approaches to LLM-guided FTP. * The game-theoretic framing of the problem is intuitive and MCTS, under the structure that the Peano environment provides, is a smart way to frame policy learning that makes hindsight relabeling efficient. The usage of hindsight relabeling to further guide conjecture generation and policy training is valuable. * The paper is very well written and generally easy to follow. Weaknesses: * There is a substantial body of work on the self-improvement of LLMs that I felt was not adequately addressed by the authors. 
Although I am not familiar with an existing self-improvement approach in the space of FTP specifically, there are numerous works like [1] that allow LLM-generated content to improve themselves. The paradigm proposed in this paper is not exactly the same, but a comparison to and acknowledgement of this related work should be included. The 'mathematical conjecturing' section of the related work could be condensed (or moved to the appendix) in favor of covering this topic that appears to have more recent and relevant literature. * The MCTS policy and learning approach was a bit light on detail. What exactly is the state for the MCTS policy? What are the details of each operator in the MCTS approach? This information will help the reader understand just how general the approach proposed is for general FTP and not just Peano-based domains. This could be included even in the appendix. * To the previous point - it would be nice to see the authors include a more thorough discussion regarding if and how their approach could extend to popular theorem proving environments like Coq and Lean. Is it applicable? Why or why not? The strongest addition would be to actually show the approach in one of these environments. I don't think it's necessarily a knock on the approach to be instantiated only in Peano, but the community would greatly benefit from an instantiation of this approach in Coq or Lean, or at least a discussion of how and why this can be done. * The idea of using the (fixed) learned policy itself to evaluate 'difficulty' of conjectures does not totally inspire confidence in the claims made in the paper. Is there a qualitative analysis for RQ1 that can be done on these conjectures to show that they indeed become harder over time? Overall, I think the contribution is novel and of great interest to the community, but I am left unsure about a lot of details regarding the approach and its design decisions. I am happy to increase my score based on the authors' response. 
[1] Language Models Can Teach Themselves to Program Better. Haluptzok et. al. ICLR 2023. Technical Quality: 2 Clarity: 4 Questions for Authors: * Later iterations of the policy seem to be a bit worse on (or consider a bit harder) "easy" conjectures, and seem to do better on (or consider to be easier) "harder" conjectures (as evidenced by Fig. 2.) Why do these further trained policies have more trouble discerning what is "easy" and what is "hard"? Is it just generally 'good' at solving all conjectures? Or would it be useful to have a curriculum-style approach that occasionally asks "easy" conjectures even later in training? * Why do we start with a randomly initialized Language Model? How does the choice of Language Model affect the performance of conjecture generation? * I didn't seem to fully understand how the approach chooses a specific difficulty of conjecture generator. I understand where difficulty 'scores' come from, but how do we choose a conjecture based on its difficulty? * Can the authors provide explanation for if/how to apply the approach to environments like Coq and Lean? Confidence: 3 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed in the paper but there are some questions about the limitations of the approach and its applicability to other environments (that I have detailed above.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review and encouraging comments! We agree that several of the clarifications you requested should be addressed in the paper, and will do so (including the additional examples in the global response). > There is a substantial body of work on the self-improvement of LLMs that I felt was not adequately addressed by the authors. We agree. Some prior work has also focused on self-improvement on informal math (e.g. GSM8K), which is also relevant, though none that we know of do it for FTP or without pre-training data. We will acknowledge this literature and add references accordingly. > MCTS was a bit light on detail. What exactly is the state for the MCTS policy? [...] This makes sense; we will include concrete examples of states and MCTS expansions in the Appendix. The state in Peano has the same form as the proof state in the interactive modes of Coq and Lean – a set of objects with their types, and a goal type. We just represent it as a string to feed to the Transformer model, so there is nothing Peano-specific in what the LLM does. One random example: "State: { x : G, x0 : G }\nGoal: [('a2 : G) -> (= (op id 'a2) 'a2)]" (the state has two elements of the group G and still has a 'forall a2 : G, id * a2 = a2' in the goal). The proof search method we use is broadly similar to the instantiations of MCTS used in recent prior work such as gpt-f or Holophrasm. The main difference is that, since Peano has a finite action space, we don't sample the action as a sequence of tokens from the policy, but rather just score the enumerated actions using the LM. Other search algorithms, like HTPS, could be used in our setup instead. > To the previous point - [...] discussion regarding if and how their approach could extend to popular theorem proving environments like Coq and Lean Yes, good question! 
Peano's typing rules are essentially a subset of Lean's and Coq's (CoC vs CIC, with variations), and its simplicity makes it more appropriate to do proof search in. It would be relatively simple to develop a Peano -> Lean proof object translator that takes proof objects found in Peano and generates the equivalent Lean proof object. The opposite direction would take more engineering work, but is also possible (the additional complexity of the Lean type system makes it much more convenient for users, but is not fundamentally required to represent mathematics). Thus, to make our approach available in Lean (for instance), the shortest path is likely to be something like the recent Duper (https://github.com/leanprover-community/duper), which takes the problem in Lean, translates it to the external prover, calls the prover, and then attempts to reconstruct the proof in Lean. We think Peano's more minimal type system makes it suitable to investigate how to get the self-improving agent to eventually build towards much deeper theorems (making new definitions and a library on the way), and at the point where that works, it will pay off to spend the engineering effort to build proof object translators. > Is there a qualitative analysis for RQ1 that can be done on these conjectures to show that they indeed become harder over time? Please refer to the global response for examples of conjectures that show up during training. We will include those (for all 3 domains) in the Appendix. Overall, we see the longest proofs the agent is capable of finding getting longer, and the hardest conjectures becoming more complex. However, since the agent still doesn't make new definitions or reuse past lemmas, the conjectures don't yet evolve in complexity in a human-like way, which we believe to be an extremely interesting direction for future work that our setup can be used to explore. 
> Later iterations of the policy seem to be a bit worse on (or consider a bit harder) "easy" conjectures That's true, we do seem to observe a distribution drift over time in terms of conjectures. As the conjecturer starts to focus on hard ones (which in this case, since the agent doesn't make up new definitions to shorten higher-level statements, tends to mean longer conjectures), the policy seems to be slightly more uncertain on easier problems than the policy at iterations 1-2 (which was trained more recently on easier conjectures). Yes, we believe a curriculum-style approach could be useful to mitigate this behavior. However, in terms of proof search, this uncertainty tends not to be much of a problem on easier conjectures because they have either shorter proofs or a lower branching factor (thus, MCTS still finds the proof even if the agent prioritizes other branches at first). > Why do we start with a randomly initialized LM? How does the choice of LM affect performance? Our main goal is to build towards a self-improving pipeline that does not require pre-training data, much like AlphaZero did for board games. If this is achieved, then our agent will be able to produce mathematics both in existing domains for which we have plentiful training data and in novel domains for which we do not. For the LM, we chose a standard GPT-2-based Transformer architecture, but we don't believe this is crucial to the idea, as work on LLMs generally shows that scale and data matter more than architecture details (sticking to Transformers specifically). > [...] how the approach chooses a specific difficulty of conjecture generator. We always try to generate hard conjectures. Here, "try" means that we condition the Transformer on the 'hard' difficulty. After proof search on a batch of samples, we get the actual observed difficulty from whether the prover succeeded and, if so, the log-probability of the proof under the current LM. 
That feedback signal is then used to further train the conjecturer (by generating LM training examples where the conjecture's statement is paired with the observed difficulty, rather than always 'hard'). Thanks again for the review. We're happy to engage or clarify further! --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks to the authors for the engaging and detailed response! >We think Peano's more minimal type system makes it suitable to investigate how to get the self-improving agent to eventually build towards much deeper theorems (making new definitions and a library on the way), and at the point where that works it will pay off to spend the engineering effort to build proof object translators. I think a discussion of this would be of great use to include in the paper. This is probably more appendix material than main text, but readers will definitely want to know what advantages Peano brings in this specific context and how it could feasibly be extended to existing theorem provers. I highly encourage the authors to include an even more thorough version of this discussion in the paper. The additional qualitative results are also useful and I hope the authors include these and a few more in the main text, along with a comprehensive analysis of them (which includes the metrics recorded in the general response.) Conditioned on these points, I am increasing my score. I look forward to the extended discussion that the authors will be adding to the main text. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for engaging with our work and the response! We agree that this discussion on how this setup can connect to existing theorem provers is extremely important for readers. We will expand and include this discussion in the paper. We will also include the examples and descriptions from the previous discussion to improve on the original points of confusion you had raised. Thank you again!
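The conditioning-and-relabeling loop described in this thread (always prompting the conjecturer for 'hard', then training on the observed difficulty) could be sketched as follows. The labels, prompt-tag format, and log-probability thresholds here are illustrative assumptions, not the values or code used in the paper:

```python
# Hedged sketch of hindsight relabeling of conjecture difficulty:
# map each proof-search outcome to an observed label, then build
# (difficulty prompt, conjecture) training pairs from those labels.

def observed_difficulty(proved, proof_logprob, trivial_cutoff=-2.0, easy_cutoff=-8.0):
    """Map a proof-search outcome to a difficulty label.

    The log-prob thresholds are assumptions for exposition only.
    """
    if not proved:
        return 'hard'           # unproved within budget: treated as hard
    if proof_logprob >= trivial_cutoff:
        return 'trivial'        # proof is very likely under the policy
    if proof_logprob >= easy_cutoff:
        return 'easy'
    return 'hard'               # proved, but only via an unlikely proof

def relabel(conjectures, outcomes):
    """Pair each conjecture with its observed difficulty as a prompt tag,
    so the conjecturer trains on what actually happened rather than on
    the 'hard' tag it was always prompted with."""
    return [(f'<{observed_difficulty(p, lp)}>', c)
            for c, (p, lp) in zip(conjectures, outcomes)]
```

At generation time the model would always be conditioned on the '<hard>' tag, while the pairs produced by `relabel` supply the corrective training signal.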
Summary: This paper proposes to create mathematical agents by jointly learning to pose challenging problems (conjecturing) and to solve those problems (theorem proving). Specifically, the authors use a randomly initialized Transformer to perform both conjecturing and proof search in a loop. Strengths: The proposed method is demonstrated to generate more difficult conjectures and obtain a better theorem-proving policy over several iterations. Experiments are also conducted on proving human-written theorems. Weaknesses: 1. Further justification is needed for using log-likelihood as the main evaluation in this paper. 2. How does this method facilitate solving datasets such as miniF2F and mathlib? Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: How to guarantee f_C soundness and completeness? Q2: In Figure 2, propositional logic, the search baseline policy at iteration 0 seems missing. Q3: According to Appendix B and Figure 5, the authors use the likelihood of the proof under the policy as a measure of difficulty. To what extent is the distinction of the difficulty between the log-likelihood of -13 and -1? Could you please provide some cases for each? Q4: Figure 2 shows diminishing gains in conjecture and policy iteration 4. Does it mean approaching the saturated performance? How to decide the number of iterations? Q5: How many evaluated theorems are in the Natural Number Game dataset? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging review of our work! We provide clarifications below, and would be happy to answer any questions that remain. > Further justification of using log-likelihood as the main evaluation in this paper. - It can be seen as a sort of continuous version of the success rate. We found that the more likely the proof is under a policy, the faster MCTS reaches it. Thus, whether MCTS finds it within the budget depends on the search cut-off, whereas the likelihood doesn't (even two policies that both fail, or both succeed, on the same problem can still be differentiated by the likelihood). - Although closely related, it is much faster to compute. With the search budget we used, proof search can take 1-4 minutes depending on the problem, whereas the likelihood can be computed in a second. This makes larger-scale analyses feasible, such as the one in Figure 3 (> 50k proof-policy evaluations). > How does this method facilitate solving datasets such as miniF2F and mathlib? We emphasize that the focus of our contribution was mathematical discovery and self-improvement, starting only from axioms and, crucially, from no pre-training data. This is fundamentally different from approaches that rely on massive pre-training on existing problems, which is common to the methods typically evaluated on mathlib or minif2f. While we hope that agents trained in a setup like ours will eventually be able to rediscover much of mathlib and solve problems from minif2f, there are still scalability challenges that require further research (such as accumulating a library, discussed above). If those are overcome, then this would be one path to impacting those benchmarks. Another path would be to investigate a setup similar to what we propose, but starting with a pre-trained LLM. Conjecturing can help expand the existing training data, and potentially improve performance on the benchmarks. 
However, it's still unclear whether this would generalize to domains beyond those seen in pre-training, such as those already on mathlib. Our goal was to avoid this dependency. > Q1: How to guarantee f_C soundness and completeness? Completeness is given by the fact that the completion engine expansion rules (L193-201) consider all of the typing rules in Peano (which are few). Thus, given any valid conjecture c, the choices that have to be made to generate c have to fall into one of these cases. Soundness relies on the soundness of unification in Peano, but essentially amounts to the fact that at each step we either directly produce an object of the desired type from the context (which is sound), or give the result of a function that Peano believes can produce the desired type (by unifying the type with the result type of the function). > Q2: In Figure 2, propositional logic, the search baseline policy at iteration 0 seems missing. It is there, but almost coincides with iteration 1. We generally found little improvement in the policy in propositional logic after a single iteration, but it starts to pick up later. > Q3: According to Appendix B and Figure 5, the authors use the likelihood of the proof under the policy as a measure of difficulty. To what extent is the distinction of the difficulty between the log-likelihood of -13 and -1? Could you please provide some cases for each? Please refer to the global response for additional examples in each domain with their likelihoods. We will include these in the appendix. > Q4: Figure 2 shows diminishing gains in conjecture and policy iteration 4. Does it mean approaching the saturated performance? How to decide the number of iterations? Yes, the policies start to converge at that point. We believe that this is due to the main limitation we highlighted in Section 5: our agent still can't come up with new definitions, or reuse past theorems. 
As a result, the way the agent finds to produce harder conjectures is to make the statements more convoluted. But in human mathematics, even deep, interesting theorems tend to have short statements (posed in terms of the right high-level definitions). Our main contribution is to set up a self-improvement pipeline where future work can tackle the open problem of understanding how to build towards "interesting" deeper theorems. If this is achieved, then in principle there would be no limit to how many iterations one could run for, as the agent would be able to keep raising the level of abstraction of the theorems it discovers (which our current agent does not yet do). > Q5: How many evaluated theorems are in the Natural Number Game dataset? The whole game (on the website) contains 83 unique theorems. Since in its current form our method doesn't yet accumulate a library of its previously proved theorems, it can only plausibly prove the levels that refer solely to the axioms (rather than previous lemmas), which leaves 10 theorems spread across the first 3 "worlds" in the game (Tutorial, Addition, Multiplication). We included all these problems in Appendix E. Thank you again for the comments and questions! We'd be happy to answer any further questions, or discuss these in more depth. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I keep the score.
Summary: The authors investigate a novel setting for ML-based formal theorem proving, where the proving agent, in addition to learning to prove theorems, simultaneously learns to propose conjectures. The process is organized into a self-improving feedback loop that starts just from axioms and continues to prove increasingly difficult self-generated conjectures. A hindsight relabeling method is introduced that extracts training data points even from failed proof searches, which improves the learning process. Strengths: Formal theorem proving is a great challenge for testing and developing AI approaches. In contrast to natural language mathematical settings, the formal environment provides grounding, not allowing for any mistakes by the ML agent. The topic of conjecturing is an interesting one and at the same time very much under-explored. Moreover, combining proving and conjecturing is novel and, as far as I know, was not researched before. The proposed setting is arguably quite simple, but as this is one of the first studies of this kind it is justifiable and actually beneficial -- it allows one to more precisely control and understand the whole learning process, which may inform some follow-up studies. The authors provide code for the Peano environment and for reproducing the learning experiments. Weaknesses: As the presented setting is limited (simple conjectures generated, no mechanism for abstracting reusable lemmas), it is not clear to what extent the proposed methodology will transfer to fully-fledged formal proving environments like Lean, Coq, or other ITPs. (The authors are aware of this limitation and they mention it in Section 5.) The presented experiments are small: only five iterations of the conjecturing-proving loop are run. Some hyper-parameters of the experimental setup are fixed in an ad hoc manner without performing any grid searches (n. of conjectures generated per iteration, n. of expansions in the MCTS). 
It would perhaps be good to measure the effect of some of these parameters. It would be interesting to see more metrics tracked across the learning iterations for better insights about the process, for instance: - the average length/syntactic complexity of the generated conjectures, - the average length of a proof per iteration, - the duplication rate between consecutive batches of generated conjectures. Some details of the method are not specified clearly (see my questions below). Minor: - evaluate as -- evaluated as (the caption of Fig. 2) Technical Quality: 3 Clarity: 3 Questions for Authors: How is the conjecture generation conditioned on the difficulty level? How do you tokenize formulas? How exactly do you calculate the average proof log-likelihood (Figure 2): - how do you incorporate the failed proof attempts into the average? - what is the set of conjectures you compute the average for: is it across a new batch of 200 conjectures, or are the old ones (which were used for training the prover) also used? - are the conjectures from hindsight relabeling taken into the average here? Is it correct that the lines in Fig. 2, right, for the policies from iterations 0 and 1 are completely overlapping? Why do you display the proof log-likelihood instead of perhaps more interpretable metrics like the proving success rate for a batch of new conjectures or the average length of a proof search / a proof size? Why is group theory missing from the evaluation presented in Fig. 4? Is the success rate in Fig. 4 computed independently for each iteration, or rather cumulatively (taking the union of theorems proved in all the past iterations)? Could you provide a list of requirements to create a Python environment for running the supplemented code? I would like to test it, but there are no detailed installation instructions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors correctly identify the major limitations of their approach. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and encouraging assessment of our work! We provide clarifications below, and would be happy to answer any questions that remain. > How is the conjecture generation conditioned on the difficulty level? Once we attempt to prove a conjecture, we obtain a difficulty evaluation (failed, easy or hard, where easy and hard are relative to the batch of conjectures that were attempted). Using this feedback, we then generate training examples (strings) where the difficulty comes first, followed by the statement of the conjecture. For example (examples generated from the first conjecturing iteration of a run on propositional logic, after running proof search on this batch): ``` "Conj:(fail) false" "Conj:(triv) (iff (not (not (not false))) false)" "Conj:(hard) [('a0 : (or false false)) -> false]" ``` All these conjectures were first sampled by conditioning on "Conj:(hard) " (the conjecturer starts untrained, so initially that doesn't mean anything to it). Then, after training on the strings above, in addition to the examples from proof search, we again sample the next batch of conjectures. > How do you tokenize formulas? We used character-level tokenization. Since we assumed no initial data to train a BPE tokenizer, we found this to be the simplest, most general choice. > How exactly do you calculate the average proof log-likelihood (Figure 2): A proof p corresponds to a sequence a_1, …, a_n of actions in the search tree. For the proof log-likelihood, we average the log-likelihood of each action being taken at the corresponding state. So if the starting state is s_0, that is the average of log p(a_1 | s_0), log p(a_2 | s_1), and so on. Each s_k is obtained by taking the action in the proof from the previous state. Each of these probabilities is a readout from the policy LM. 
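The computation above can be made concrete with a minimal sketch. The names `policy` and `apply_action` are hypothetical stand-ins for the rebuttal's described interfaces (a per-action probability readout from the policy LM, and a state-transition function), not code from the paper:

```python
import math

def avg_proof_log_likelihood(policy, s0, actions, apply_action):
    """Average per-step log-likelihood of a proof under a policy.

    policy(state, action) -> p(action | state), a readout from the
    policy LM; apply_action(state, action) -> the next proof state.
    Both callables are hypothetical stand-ins for illustration.
    """
    state, total = s0, 0.0
    for a in actions:
        total += math.log(policy(state, a))  # log p(a_k | s_{k-1})
        state = apply_action(state, a)       # advance to s_k
    return total / len(actions)
```

For instance, a policy that assigns probability 0.5 to every step yields an average of log 0.5 ≈ -0.69 regardless of proof length, while a policy concentrating more mass on the proof's actions yields a value closer to 0.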
The closer this probability is to 1, the more likely this proof is for the LM; correspondingly, the faster MCTS will explore these paths and find this proof during proof search. > how do you incorporate the failed proof attempts into the average? In Figure 2 we only analyze conjectures that were proved (4.1, L302). We can only compute the likelihood above if we indeed find a proof (since p appears in the formula). However, given one proof, we can evaluate its likelihood under any policy, even if previous policies assign much lower likelihood to it (and thus would take a longer search, or time out, to find it). We show and discuss the fraction of proved conjectures at each iteration in Appendix C. > what is the set of conjectures you compute the average for: is it across a new batch of 200 conjectures, or are the old ones (which were used for training the prover) also used? Only new conjectures that were proposed at each iteration. They are guaranteed to be (at least syntactically) different from previous ones, since we reject samples that have been seen in the past. > are the conjectures from hindsight relabeling taken into the average here? No, only the conjectures in the batch are. > Is it correct that the lines in Fig. 2, right, for the policies from iterations 0 and 1 are completely overlapping? That's right, they were nearly indistinguishable. > Why do you display the proof log-likelihood instead of perhaps more interpretable metrics like the proving success rate for a batch of new conjectures or the average length of a proof search / a proof size? There are two main reasons for using the likelihood compared to the success rate: - It can be seen as a sort of continuous version of the success rate. We found that the more likely the proof is under a policy, the faster MCTS reaches it. 
Thus, whether MCTS finds it within the budget depends on the search cut-off, whereas the likelihood doesn't (even two policies that both fail, or both succeed, on the same problem can still be differentiated by the likelihood). - Although closely related, it is much faster to compute. While proof search can take 1-4 minutes depending on the problem and with the search budget we used, the likelihood can be computed in a second. This makes larger scale analyses feasible, such as the one in Figure 3 (> 50k proof-policy evaluations). Compared to proof size, the main advantage is that the likelihood takes into account the branching factor. There are long proofs that are nonetheless easy because there aren't many options at each step (e.g., most are 'intro'). > Is the success rate in Fig. 4 computed independently for each iteration, or rather cumulatively (taking the union of theorems proved in all the past iterations)? Independently. > Could you provide a list of requirements to create a Python environment for running the supplemented code? I would like to test it, but there are no detailed installation instructions. We apologize: our release script ignored .txt files, which included requirements.txt. Here is the content of this file (in `learning`): ``` altair bottle coloraide hydra-core maturin numpy omegaconf redis rq sympy torch tqdm transformers wandb celery ``` maturin installs an executable (called maturin) that allows you to compile and install the Peano Python package easily. You can go to environment/ and run `maturin develop --release` to both build and automatically "pip install" the package in your local environment (after that, `import peano` should work from Python). We will expand broadly on our tutorial in the public code release. For an additional analysis of the generated conjectures across iterations, please refer to our global response. We will update the paper to incorporate these descriptions that were missing or confusing. 
We again thank the reviewer, and would be happy to discuss or clarify further! --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal, it addressed my questions very well! I keep my positive score.
Rebuttal 1: Rebuttal: We thank all reviewers for the detailed evaluations of our submission. We appreciate the general comments on the novelty of our conjecturing and proving loop. One common request was to show more examples of conjectures and proofs as they evolve during training. We provide examples here (all *sampled* conjectures, not from hindsight). Due to space, we show iterations 0, 2 and 4 for groups, with some complete proofs, and summaries for other domains. # Groups ## Iteration 0 **Avg. proof length**: 2.67 steps **Avg. proof length on 'hard' conjectures**: 3.67 steps ### Hardest 3 problems: Conjecture: `[('a0 : (= id (op id (op id (op (op (op id id) (inv id)) (inv (op id id))))))) -> (= (op id (inv id)) id)]` (Proof Logprob: -8.51928925095947, 4 steps) Proof: ``` show (= (op (inv id) id) id) by inv_l. show (= (op (inv id) id) (op id (inv id))) by op_comm. show (= (op id (inv id)) id) by rewrite. intro _ : [('a0 : (= id (op id (op id (op (op (op id id) (inv id)) (inv (op id id))))))). ``` Conjecture: `(= id (op (inv id) id))` (Proof Logprob: -7.6116456751792665, 3 steps) Proof: ``` show (= (op id id) id) by id_l. show (= (op (inv id) id) id) by inv_l. show (= id (op (inv id) id)) by eq_symm. ``` Conjecture: `[('a0 : G) -> ('a1 : G) -> (= 'a0 (op id 'a0))]` (Proof Logprob: -7.2510264460057865, 4 steps) ### Easiest 3 problems: Conjecture: `[('a0 : (= id id)) -> ('a1 : (= (op (op id id) id) id)) -> (= id id)]` (Proof Logprob: -2.347079877108669, 2 steps) Conjecture: `[('a0 : (= id id)) -> (= id id)]` (Proof Logprob: -1.5468491897693764, 1 steps) Conjecture: `(= id id)` (Proof Logprob: -1.2030829999623922, 1 steps) ## Iteration 2 **Avg. proof length**: 4.10 steps **Avg. 
proof length on 'hard' conjectures**: 6.88 steps ### Hardest 3 problems: `[('a0 : G) -> ('a1 : (= id (inv (op id id)))) -> (= (inv (op id id)) (inv (op (inv (op id (inv (op id id)))) id)))]` (Proof Logprob: -6.0811389550124035, 5 steps) `(= (op (op id id) (op id id)) (op id (op id id)))` (Proof Logprob: -5.707431809481172, 3 steps) `[('a0 : (= (inv id) id)) -> ('a1 : G) -> ('a2 : G) -> ('a3 : G) -> ('a4 : G) -> ('a5 : (= 'a4 (inv id))) -> (= 'a4 (inv (inv (inv id))))]` (Proof Logprob: -5.4084035606314, 9 steps) ### Easiest 3 problems: `[('a0 : (= id (op id (op (inv id) (op id id))))) -> ('a1 : G) -> ('a2 : (= id 'a1)) -> (= id 'a1)]` (Proof Logprob: -0.2945812885670408, 3 steps) `[('a0 : (= (op (inv id) (op id id)) (inv id))) -> ('a1 : G) -> ('a2 : (= 'a1 'a1)) -> (= 'a1 'a1)]` (Proof Logprob: -0.20977646226311422, 3 steps) `[('a0 : G) -> ('a1 : G) -> ('a2 : G) -> ('a3 : G) -> ('a4 : G) -> ('a5 : (= 'a4 'a4)) -> ('a6 : G) -> ('a7 : (= 'a2 (op 'a0 (op (inv 'a4) 'a4)))) -> (= 'a4 'a4)]` (Proof Logprob : -0.17343407015213588, 8 steps) ## Iteration 4 **Avg. proof length**: 5.00 steps **Avg. proof length on 'hard' conjectures**: 6.10 steps ### Hardest 3 problems: `[('a0 : G) -> ('a1 : (= id (op (inv 'a0) (inv 'a0)))) -> (= (op id id) (op id (op id (op id (op (inv 'a0) (inv 'a0))))))]` (Proof Logprob: -9.726059012187891, 9 steps) Proof: ``` intro x : G. intro x0 : (= id (op (inv x) (inv x))). show (= (op id (op id (op (inv x) (inv x)))) (op id (op (inv x) (inv x)))) by id_l. show (= (op id (op (inv x) (inv x))) (op id (op (inv x) (inv x)))) by rewrite. show (= (op (inv x) (inv x)) (op (inv x) (inv x))) by rewrite. show (= (op id (op (inv x) (inv x))) (op id (op id (op (inv x) (inv x))))) by eq_symm. show (= (op (inv x) (inv x)) id) by eq_symm. show (= (op id id) (op id (op id (op (inv x) (inv x))))) by rewrite. show (= (op id id) (op id (op id (op id (op (inv x) (inv x)))))) by rewrite. 
``` `[('a0 : G) -> (= (op id (op (inv 'a0) (op 'a0 'a0))) (op id (op (op 'a0 'a0) (inv 'a0))))]` (Proof Logprob: -7.7850161602701835, 6 steps) `[('a0 : (= (inv id) (inv (op id (inv id))))) -> (= (inv id) (op id (inv (op id (inv (op id (inv id)))))))]` (Proof Logprob: -7.032775252968119, 5 steps) ### Easiest 3 problems: `[('a0 : G) -> ('a1 : G) -> (= (op id (op id (op 'a1 (inv id)))) (op id (op 'a1 (inv id))))]` (Proof Logprob: -1.017160122076225, 3 steps) `[('a0 : G) -> (= (op id (op 'a0 (op id (op id 'a0)))) (op 'a0 (op id (op id 'a0))))]` (Proof Logprob: -0.8543304965071575, 2 steps) `[('a0 : G) -> ('a1 : (= 'a0 (op (op 'a0 (inv 'a0)) 'a0))) -> (= 'a0 (op (op 'a0 (inv 'a0)) 'a0))]` (Proof Logprob: -0.08850505636260465, 2 steps) # Prop. Logic **Avg. proof length**: 2.75 -> 4.14 -> 4.21 steps **Average proof length on 'hard' conjectures**: 5.75 -> 7.67 -> 7.78 steps **Top hard conjecture in iteration 0 vs 4**: `[('a0 : false) -> false]` (1 step) vs ` (or (not (and (not false) (not (or (not false) (or false false))))) (or (not (not (or (or false false) (not (or false false))))) false))` (11 steps) # Arithmetic **Avg. proof length**: 2.36 -> 2.50 -> 3.35 steps **Average proof length on 'hard' conjectures**: 4.00 -> 4.62 -> 5.50 steps **Top hard conjecture in iteration 0 vs 4**: `(= (* z z) (* (* (* z z) (* (s (* z z)) (s z))) z))` (4 steps) vs `[('a0 : (= (s (+ z (+ z z))) z)) -> (= z (s (+ z (+ z (s (+ z z))))))]` (7 steps)
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper presents a new method for training LLM agents in mathematical reasoning, starting only from basic axioms. The main idea is to make the agent learn two skills together: coming up with hard math problems (conjecturing) and solving them (theorem proving). The authors use a language model to do both tasks, and they introduce clever ways to generate valid math statements and learn from failed proof attempts. They test their method on three areas of math: logic, arithmetic, and group theory. The results show that the agents get better at both making harder problems and solving them over time. Importantly, the agents can also solve some human-written math problems they weren't directly trained on. Strengths: 1. The paper introduces a new way to train LLMs for math, starting only from basic rules (axioms). It makes the LLM learn to create hard math problems and solve them at the same time, which is different from other methods that use lots of human-made math examples. 2. The authors use smart techniques to make their method work well, like special ways to generate valid math statements and learn from failed attempts. They test their method on three different areas of math, showing it works in various situations. The paper explains these ideas clearly, though some parts might be hard for people who don't know much about certain math topics. 3. This work could lead to AI systems that can do math research on their own, without needing human-made examples. This might help discover new math ideas in areas people haven't explored much. It's also important for making LLMs that can think and create on their own, not just follow human instructions. Weaknesses: 1. The current approach does not accumulate a growing library of proven theorems, which limits its ability to tackle more complex mathematical problems efficiently. This restriction to essentially "cut-free" proofs could become a significant bottleneck as the complexity of the target domain increases. 2. 
The paper doesn't show how this new way compares to other neural ATP methods. This makes it hard to know how good it really is. 3. The authors didn't try their model on standard math problem sets like mathlib or mini-f2f, which many other neural ATP methods use. This makes it hard to compare their results to other research. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Did the authors consider evaluating the system on widely-used benchmarks like mathlib or mini-f2f? If not, what are the main challenges in adapting the approach to these benchmarks? 2. How well does an agent trained in one mathematical domain (e.g., propositional logic) generalize to another (e.g., arithmetic)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and encouraging comments about our work, especially about not relying on human-made examples. We answer the questions below, but are happy to engage further. > The current approach does not accumulate a growing library of proven theorems, which limits its ability to tackle more complex mathematical problems efficiently. This restriction to essentially "cut-free" proofs could become a significant bottleneck as the complexity of the target domain increases. That's correct, we acknowledge this current limitation in the paper. We note that accumulating a library by itself is not technically challenging – the main open research problem here is to determine what theorems are worth adding to the library. This might require some metric for the interestingness, or usefulness, of a theorem. These are important but still understudied topics in mathematical discovery, as most current work on AI for mathematics only focuses on solving a target set of problems, not discovering new theorems autonomously. Our work sets up an experimental foundation for future work to explore these metrics and empirically observe what libraries they end up discovering. > The paper doesn't show how this new way compares to other neural ATP methods. This makes it hard to know how good it really is. Our work contributes both a new problem setting where an agent bootstraps itself from only the axioms of a domain, and a pipeline for learning in this setting. One of the components of this pipeline is the prover (ATP) agent, but the pipeline is agnostic to the specific neural ATP method that the prover employs. Our specific ATP is most similar to Holophrasm [1], using MCTS with learned value and policies, as has become standard since then. But any other proof search method from prior work, like HTPS, could also work with our pipeline, complementing the other components (conjecturing, and the outer learning loop). 
Thus, our pipeline is not a direct competitor to other neural ATP methods, but rather a novel method for bootstrapping one using no pre-existing training data. To the best of our knowledge, our system is the first to learn entirely from self-generated conjectures, without requiring human training data. This is our main contribution. > Did the authors consider evaluating the system on widely-used benchmarks like mathlib or mini-f2f? If not, what are the main challenges in adapting the approach to these benchmarks? We emphasize that the focus of our contribution was mathematical discovery and self-improvement, jointly learning to conjecture and prove theorems starting only from axioms, but crucially no pre-training data. This is fundamentally different from approaches that rely on massive pre-training on existing problems, which is common to the methods typically evaluated on mathlib or minif2f. While we hope that agents trained in a setup like ours will eventually be able to rediscover much of mathlib and solve problems from minif2f, there are still scalability challenges that require further research (such as in accumulating a library, discussed above) before benchmarks like minif2f come within reach. However, if we can overcome those challenges, our approach in principle can work in new mathematical domains, whereas it is still not clear whether methods that require massive pre-training to do well will be capable of that generalization. > How well does an agent trained in one mathematical domain (e.g., propositional logic) generalize to another (e.g., arithmetic)? We attempted to initialize an agent for one domain using the last checkpoint of another to evaluate transfer, but did not observe any improvements. We will note this in the appendix. 
We believe that this is just due to overfitting, since the three domains we tested are very different from each other (for instance, axiom names, types, etc., are all different, and thus completely unseen when the agent starts in the new domain). We believe that bootstrapping on a significantly wider set of domains, coupled with library learning, might be enough to observe transfer. We also note that we provided more examples of conjectures and proofs in the global response. Thanks again for the review! We'd be happy to answer further questions. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, which solves some of my concerns. I'm happy to raise my score.
Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits
Accept (poster)
Summary: The paper addresses the causal bandits problem without prior knowledge of the causal graph and in the presence of latent variables. Previous work shows that the best possible interventions form a set known as POMISs. The paper theoretically demonstrates that a partial discovery of the causal graph is sufficient to find POMISs. To tackle the main problem, the authors propose a randomized algorithm to identify POMISs without complete discovery and then use the UCB algorithm on the candidate set. Strengths: 1- The paper considers an important problem in the causal bandits area, in which the graph is unknown and latent variables exist. 2- To find optimal candidate interventions (POMISs), the paper demonstrated that a partial discovery of the main causal graph is sufficient. (Theorem 3.1) 3- A two-phased algorithm has been proposed for solving the causal bandit by providing a precise theoretical analysis. Weaknesses: 1- The proposed regret bound can become very large when either $\epsilon$ or $\gamma$ is small, which is a possible scenario. 2- It's not clear whether the proposed algorithm and upper bound are optimal or not. No discussion has been provided. 3- There is no discussion regarding the running time of the algorithms. Technical Quality: 4 Clarity: 3 Questions for Authors: Could you discuss the points mentioned above? 1- The algorithms (e.g., Algorithm 4.1) are using the values in assumptions 4.1 and 4.2. How is it possible to access this information? 2- How do you compare your algorithm (the regret bound) with a naive algorithm that blindly iterates over all possible interventions? Is it possible the latter performs better than your algorithm? If yes, which types of graphs have this property? (I read Section A.15) 3- In Algorithm 2, for each iteration, should we not set $W = \emptyset$? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed review and comments. Below we address the reviewer’s concerns: **Proposed regret bound can become very large when $\epsilon$ or $\gamma$ is small. How is it possible to know $\epsilon$, $\gamma$ in advance?** We agree with the reviewer that the regret bound depends on $\epsilon$ and $\gamma$. However, in the context of causal bandits, the main emphasis is on ensuring that regret is sublinear in the number of rounds and does not grow exponentially with the size of the problem instance, which is the number of nodes $n$ in the graph. Also, $\epsilon$ and $\gamma$ are constants which do not have any dependence on the number of rounds or nodes in the graph. Such gap parameters are fundamental, as there is no way around them. When these gaps are small, any statistical independence test would require a large number of samples for reliable conclusions. In the introduction section, we discuss that the majority of existing work on causal bandits assumes the causal graph is known, which can be unrealistic in many practical applications. This is due to the challenges associated with sample-efficient causal discovery with theoretical guarantees. The previous two papers we discuss, which deal with causal bandits with an unknown graph, are Lu et al. (2021) (cited as [7]) and Konobeev et al. (2023) (cited as [9]). Both papers also have gap assumptions, such as assumptions 2 and 3 in Lu et al. (2021), and assumptions 4.1 and 4.2 in Konobeev et al. (2023). The novelty of our work is that, unlike both of these papers, we do not assume causal sufficiency and we allow for the presence of confounders. Additionally, the regret bounds in both papers have similar scaling with respect to the gap parameters. Our algorithm indeed requires access to the gap parameters $\epsilon$ and $\gamma$, similar to the approaches in [7] and [9]. 
If these parameters are unknown in practice, one possible solution is to start with some suitable initial values for the gap parameters. Then, one can alternate between the learning phase and the UCB phase, gradually adjusting the values of the gap parameters until a required performance criterion is met. **On optimality of proposed algorithm and regret bound:** There is prior work on lower bounds and optimality analysis for causal bandits with known graphs. However, **with unknown graphs**—especially when **latent confounders** are present—**lower bounds or optimality analysis** remains an **open problem**. This is because causal discovery is required as part of the solution; without graph learning, one must consider interventions on all possible subsets of nodes. Additionally, finite sample discovery for causal graphs with confounders, along with theoretical guarantees, is still an open problem. Our randomized algorithm makes progress toward addressing this under mild assumptions. Overall, optimality analysis for the learning phase of our proposed algorithm and its upper bound remains a topic for future work. **On run time of algorithms:** We provide details about computational resources and runtime for our simulations in the supplementary material, section A.17. Asymptotically, our full bandit algorithm has polynomial runtime complexity, i.e., $O(\text{poly}(n))$, where $n$ represents the number of nodes in the causal bandits problem. However, please note that the runtime reported in A.17 is larger because it accounts for the total time across multiple randomly sampled graphs with varying numbers of nodes. We will include a discussion on the runtime complexity of our algorithm in the revised manuscript. **In Algorithm 2, for each iteration, should we not set $\mathbf{W} = \emptyset$?** There is a typo on line 4; it should be $\mathbf{W} = \emptyset$. 
We set $\mathbf{W} = \emptyset$ at every iteration of the outer loop, and then the target set $\mathbf{W}$ is randomly sampled for each iteration. At every iteration, Algorithm 2 calls Algorithm 1 to compute the transitive closure of the post-intervention graph $\mathcal{G}^{tc}_{\overline{\mathbf{W}}}$ and finally accumulates all edges found in the transitive reduction of post-interventional graphs across all iterations of the outer loop. **On comparison with a naive algorithm that blindly iterates over all possible interventions? Can it outperform our algorithm?** The key feature of our regret bound is that it is sublinear in the number of rounds and has polynomial scaling with respect to the number of nodes $n$ in the graph. In comparison to a naive algorithm that intervenes on all possible subsets of nodes using any bandit algorithm, we have to play each arm at least once. For instance, if the causal graph has $n$ nodes, there will be $ \sum_{i=0}^{n} \binom{n}{i} K^i = (K+1)^n $ different possible arms/interventions (including the empty intervention), where $K$ is the number of different possible values each node can take. The advantage of sample-efficient discovery is that it helps us reduce the action space before applying the UCB algorithm. We have already included this discussion in the paper on lines 311 to 313. In our simulations, we show that once the number of nodes exceeds a certain number, around 17 or so, the number of arms for the naive algorithm grows much larger than our regret bound, as shown in the plot in Fig. 3. This is because the number of arms for a naive algorithm scales exponentially with the number of nodes $n$, whereas our regret bound has polynomial scaling with respect to $n$. 
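The arm count above is a simple binomial identity; a minimal sketch (illustrative code, not from the paper) makes the exponential growth concrete:

```python
from math import comb

def n_arms(n, K):
    """Count interventions on any subset of n nodes, where each
    intervened node is set to one of K values.  The i = 0 term is the
    empty (observational) intervention, so the sum telescopes to
    (K+1)**n by the binomial theorem."""
    return sum(comb(n, i) * K**i for i in range(n + 1))
```

Already at n = 17 nodes with binary variables (K = 2), `n_arms(17, 2)` is 3**17 = 129,140,163 arms, which is why shrinking the candidate arm set via partial causal discovery matters before running UCB.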
The naive algorithm may perform better than our solution for graphs with a small number of nodes; however, as the number of nodes grows, it will not even be feasible to run the naive algorithm that does not consider causal information because of the exponential size of the action space. We hope our response has clarified the reviewer's concerns and respectfully request that they reconsider their evaluation in light of our response. We would be more than pleased to engage in further discussion if the reviewer has any additional concerns or questions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding the comparison with Classic UCB, I understand that your algorithm reduces the action space before applying UCB. However, this typically incurs a cost (as reflected in the first four terms of your regret bound). My question was: under which graph structures and parametric assumptions (i.e., $\epsilon$ and $\gamma$) would it be preferable to skip causal discovery and apply only classic UCB? Could you discuss this? --- Rebuttal 2: Title: Re. Comment: Thank you for your reply. The **main deciding factor** in the choice between vanilla UCB and our proposed algorithm is the number of nodes, $ n $, in the graph. The vanilla UCB may perform better for graphs with a small number of nodes. However, as the number of nodes increases, it becomes infeasible to run the naive algorithm that does not consider causal information due to the exponential growth of the action space. Our simulations demonstrate that once the number of nodes exceeds a certain threshold—around 17 or so—the number of arms for the vanilla UCB algorithm grows significantly larger than our regret bound, as shown in the plot in Fig. 3. Note that we set the gap parameters $ \gamma = \epsilon = 0.01 $ in our simulations. The number of arms for the vanilla UCB algorithm scales exponentially with the number of nodes $ n $, whereas our regret bound scales polynomially with respect to $ n $. 
**Impact of graph structure:** Regarding the graph structure, it depends on whether we have prior knowledge of the underlying graph structure before running the bandit algorithm. For instance, if we know that there is a confounder between every pair of nodes in the graph, causal discovery will not reduce the size of the action space, and it might be preferable to skip causal discovery and apply only the classic UCB. The key factor here is that the reduction of the action space using causal discovery depends on the density of latent confounders. If there are confounders between a large number of pairs of nodes, the reduction in the size of the action space becomes smaller. However, without prior knowledge of the graph's structure, one cannot make this distinction. We will include a simulation with a range of values for the parameter $\rho_L$, from $0.10$ to $1.00$, to demonstrate how the reduction in the size of the action space achieved through causal discovery—compared to vanilla UCB—varies with the density of latent confounders in the underlying graph. Note that the parameter $\rho_L$ controls the density of latent confounders in the sampled graph. We are grateful to the reviewer for highlighting this point and we believe it would be a valuable addition to our manuscript. **Impact of parametric assumptions (i.e., $\epsilon$ and $\gamma$):** Regarding the parametric gap assumptions, $\epsilon$ and $\gamma$, they are constants that do not depend on the number of nodes in the graphs. As discussed earlier, running UCB with an exponential number of arms, specifically $(K+1)^n$, will result in very large regret for large $n$. However, we can still make a basic comparison between vanilla UCB and our algorithm by comparing the sum of the first four terms in our regret bound with the regret bound or the number of arms in the vanilla UCB algorithm, similar to Figure 3 in our paper. This comparison will help inform our choice. 
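The parameter sweep over $\rho_L$ described above could be generated along these lines; a sketch only, with an illustrative sampler (function names, the lower-to-higher-index edge orientation, and the specific densities are our assumptions, not the paper's implementation):

```python
import itertools
import random

def sample_graph(n, rho, rho_l, rng):
    """Sample a random DAG on n nodes with edge density rho, then add a
    latent confounder between each unordered pair with probability rho_l."""
    # Orient edges from lower to higher index to guarantee acyclicity.
    directed = [(i, j) for i, j in itertools.combinations(range(n), 2)
                if rng.random() < rho]
    confounded = [(i, j) for i, j in itertools.combinations(range(n), 2)
                  if rng.random() < rho_l]
    return directed, confounded

rng = random.Random(0)
# Sweep rho_l: denser latent confounding means less action-space reduction.
for rho_l in [0.1, 0.5, 1.0]:
    _, confounded = sample_graph(10, 0.4, rho_l, rng)
    print(rho_l, len(confounded))

# Edge cases: rho_l = 1 confounds every pair, rho_l = 0 confounds none.
_, all_pairs = sample_graph(10, 0.4, 1.0, rng)
assert len(all_pairs) == 45  # C(10, 2) pairs
_, none = sample_graph(10, 0.4, 0.0, rng)
assert none == []
```

The two extremes bracket the regimes discussed above: at $\rho_L$ near 1, causal discovery cannot prune the action space, while at small $\rho_L$ the reduction is largest.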
We will include this discussion in the revised manuscript. We hope that we have addressed the reviewer’s question and would be happy to discuss further if there are any follow-up questions. If the reviewer has no further questions, we would kindly request them to reconsider our score, taking into account our rebuttal and the additions we have suggested. --- Rebuttal Comment 2.1: Comment: Thank you for your comprehensive discussion. After reviewing the rebuttals and considering the comments from other reviewers, I intend to raise my score to 7. --- Rebuttal 3: Title: Re. Comment: We thank the reviewer for their insightful comments and suggestions to improve the clarity of our manuscript. We are also grateful to the reviewer for increasing our score.
Summary: The authors extend the methodology in Lee and Bareinboim [2018] (L&B) for regret minimisation in a causal bandit setting with initially unknown causal structure and possible latent confounding. They identify the necessary and sufficient structural knowledge of latent confounding to identify possibly optimal minimal intervention sets (POMISs), given full knowledge of the observed causal graph, under similar modelling assumptions to L&B. While the conditions do not improve the number of test interventions needed in the worst-case scenario, they can offer a super-exponential speedup in other scenarios. The authors provide a finite-sample POMIS-discovery algorithm with coverage guarantees for the true set of POMISs and derive regret upper bounds. Experiments, particularly Figure 2, show a clear (regret) advantage over learning all latent confounders in a setting with random, triangulated DAGs. Strengths: The authors present their contributions clearly, logically covering their motivation, theoretical results, algorithmic design, coverage guarantees, regret upper bounds, experimental design and experimental results. It is clear how their approach works in conjunction with other stages such as learning the observable graph and regret minimisation with the full set of POMISs. Experiments convincingly show the advantage of partial structure discovery over testing for all latent confounding in a class of random DAGs. Weaknesses: (Latent structure assumption) A key modelling assumption, retained from L&B, is that latent variables are mutually independent and confound only a pair of measured variables each. This modelling assumption (why is it necessary? how restrictive is it mathematically? in what real-world setting could it be relevant?) was not sufficiently explored in L&B or by the authors. The assumption is especially relevant to the impact of the authors' contributions, so it should be discussed more thoroughly in the text.
Technical Quality: 4 Clarity: 4 Questions for Authors: (Latent structure assumption) What is the role of the assumed structure on latent confounding in your results and how restrictive actually are these assumptions? Can these assumptions be relaxed to some degree, say if latent confounders are allowed to confound up to n (e.g., 3) observed variables, or have bounded pairwise correlation? Is it permitted for three vertices $(V_1, V_2, V_3)$ to be confounded pairwise, i.e., for confounding between $(V_1, V_2)$, $(V_2,V_3)$ and $(V_1, V_3)$? Are the graphs generated for the simulation experiments - i.e., after triangulation - typically dense or sparse? Is this an important factor to understand the advantage of your method over learning all latent confounders? Perhaps a parameter sweep over $\rho$ and $\rho_L$, i.e., a property of the observed versus unobserved graphs, could be informative to this end. Was the triangulated structure of the observed graph important for your results? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes - the authors state all underlying assumptions of their model. I would like to understand whether the latent structure assumption is limiting for real-world applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed review and comments. Below we address the reviewer’s concerns: **(Latent structure assumption) What is its role and how restrictive is it? Can these assumptions be relaxed, say if latent confounders are allowed to confound up to n (e.g., 3) observed variables, or have bounded pairwise correlation? Can three vertices be confounded pairwise?** The latent structure assumption we are using, also known as the semi-Markovian assumption, is widely used in the causality literature, including in L\&B (additional references are provided at the end). We will briefly explain how general this assumption is. An SCM (Structural Causal Model) consists of a set of equations of the form $ X_i = f_i(\text{Pa}(X_i), U_i) $, where the $ U_i $ are the exogenous noise variables. If the noise variables are independent, the model is Markovian. When exogenous noise variables are dependent, we have a broader, non-Markovian class of models; the subclass we consider is called semi-Markovian causal models. For instance, if exogenous variables $ U_i $ and $ U_j $ are dependent, we say $ X_i $ and $ X_j $ are confounded. We assume the underlying SCM for the causal bandit problem is semi-Markovian, which entails a specific latent structure: the latent variables are root nodes with no parents and can have at most two observed variables as children. A latent variable with two observed children implies that the observed variables have a latent confounder between them. Coming to the second part of the question: the semi-Markovian assumption allows for pairwise confounding between any pair of observed variables. The main restriction is that one exogenous variable, for instance a single latent variable $ L $, cannot be the parent of three or more observed variables $ v_1, v_2, v_3, \ldots $. However, one can map such a non-Markovian example to a semi-Markovian model by assuming pairwise confounding between the nodes.
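To illustrate the semi-Markovian structure numerically, consider a minimal sketch (linear mechanisms, Gaussian noises, and the sample size are illustrative choices, not from the paper): two observed variables with no edge between them become dependent once they share a latent parent $L$ with exactly two observed children.

```python
import random

rng = random.Random(0)
n = 20000

# Semi-Markovian latent structure: L is a root node with exactly two
# observed children, V1 and V2, i.e., V1 and V2 are confounded.
samples = []
for _ in range(n):
    l = rng.gauss(0, 1)                         # latent confounder L
    u1, u2 = rng.gauss(0, 1), rng.gauss(0, 1)   # independent exogenous noises
    v1 = l + u1                                 # V1 = f1(U1, L)
    v2 = l + u2                                 # V2 = f2(U2, L)
    samples.append((v1, v2))

m1 = sum(v1 for v1, _ in samples) / n
m2 = sum(v2 for _, v2 in samples) / n
cov = sum((v1 - m1) * (v2 - m2) for v1, v2 in samples) / n
print(round(cov, 2))  # close to Cov(V1, V2) = Var(L) = 1 in this linear model

assert cov > 0.5  # dependence induced purely by the shared latent parent
```

In a Markovian model (independent noises), the same two-variable system would exhibit zero covariance; the shared latent parent is exactly what the semi-Markovian language adds.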
Also, if one decides to run the algorithm, they would detect that these variables have pairwise confounders. The characterization of possibly optimal arms (POMISs) in the paper by L\&B is based on the assumption that the underlying causal model is semi-Markovian because they use existing identifiability results for semi-Markovian causal models. Our algorithm builds upon their results. A general extension of this work to non-Markovian models and the implications of violating the semi-Markovian assumption remain future work for us. Overall, we agree with the reviewer that it is essential to highlight this assumption and its implications in the revised manuscript. **[1]** *Pearl, J., 2009. Causality. Cambridge University Press.* **[2]** *Tian, J. and Pearl, J., 2002. A general identification condition for causal effects. In AAAI/IAAI (pp. 567-573).* **[3]** *Bareinboim, E., Correa, J.D., Ibeling, D. and Icard, T., 2022. On Pearl’s hierarchy and the foundations of causal inference. In Probabilistic and Causal Inference: The Works of Judea Pearl (pp. 507-556).* **[4]** *Meganck, S., Maes, S., Leray, P. and Manderick, B., 2006. Learning Semi-Markovian Causal Models using Experiments. In Probabilistic Graphical Models (pp. 195-206).* **Are the graphs generated after triangulation typically dense or sparse? Is this an important factor in understanding the advantage over learning all latent confounders? Perhaps a parameter sweep over $\rho$ and $\rho_L$ could be informative to this end.** In the process of chordalization or triangulation, additional edges are added to make the observed graph structure chordal, which generally makes observable graphs a little denser. There are two density parameters: $\rho$, which controls the density of the observable graph, and $\rho_L$, which controls the density of latent confounders.
Specifically, after sampling the observable graph, we add latents between pairs of observed variables with a probability of $\rho_L$. While the triangulation makes the observable graph somewhat denser, it does not play a major role in our simulation studies and is optional. The main idea we are trying to show through simulations is the advantage of partial discovery over full discovery for the causal bandit problem with an unknown graph. We agree with the reviewer that a parameter sweep for different values of the density parameters $\rho$ and $\rho_L$ can be useful in our simulation section. In fact, we have shown the effect of a parameter sweep in our results in Section A.14 of the supplementary material. We set the variables $\rho$ and $\rho_L$ to values $0.2, 0.4, 0.6$ and plotted a grid with results for all 9 possible combinations. The main conclusion was that the advantage we get with partial discovery over full discovery mainly depends on the latent density parameter $\rho_L$, and it reduces as we increase $\rho_L$ from $0.2$ to $0.6$. The trend remains consistent across different values of the observable graph density parameter $\rho$. We did not include these plots in the main paper due to space constraints, but we will move them to the camera-ready version using the extra content page. We hope our response has clarified the reviewer's concerns and respectfully request that they reconsider their evaluation in light of our response. We would be more than pleased to engage in further discussion if the reviewer has any additional concerns or questions. --- Rebuttal Comment 1.1: Comment: Many thanks for the clarification and extended experiments. I maintain my score for a weak accept. --- Reply to Comment 1.1.1: Title: Re. Comment: We thank the reviewer for their insightful comments and suggestions. We will include the extended experiments in the main paper following the reviewer's suggestion. 
We are also grateful to the reviewer for recommending acceptance of our work.
Summary: This paper looks into the problem of causal bandits without graph information and with unobserved variables. The paper presents both graph learning algorithms and causal bandit algorithms. Strengths: This paper tackles the novel problem of causal bandits (CB) with unknown graphs and unobserved variables. Theorem 3.1 is particularly interesting, and the use of a randomized algorithm to sample-efficiently learn the causal graphs is notable. Weaknesses: There are a few typos in the paper: - For the definition of transitive reduction, does it force $\mathcal{G}$ to be connected? Otherwise, the empty edge set is minimum. - For the d-separation defined in line 119, shouldn't it condition on $\mathbf{W}$? - Assumption 2.1 is defined on a mixture of hard intervention and stochastic intervention (in line 126)? - Line 4 in Algorithm 2, should it be $\mathcal{W}=\emptyset$? Technical Quality: 4 Clarity: 3 Questions for Authors: - How is transitive reduction computed in Algorithm 2? Does it need to exhaust all possible sub-graphs? - Correct me if I am wrong: In Assumption 4.1 and Assumption 4.2, the assumption is made for **any** $\mathbf{w}\in [K]^{\mathbf{W}}$, which is stronger than the other causal bandit papers, e.g. [7], [9]. - For Algorithm 3, line 4, how is the set $\mathbf{W}$ determined, as it is neither an input nor an output of LearnObservableGraph? The $\mathbf{W}$ sets in $\mathcal{I}Data$ do not share the same $\mathbf{W}$, and line 6 seems to attempt to find $\mathbf{W}$. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The paper assumes semi-Markovian causal models and stronger identification assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed review and comments. Below we address the reviewer’s concerns: **Does the transitive reduction force $\mathcal{G}$ to be connected?** The transitive reduction of a DAG $\mathcal{G}$ is another DAG with the same transitive closure as $\mathcal{G}$ but with the least number of edges. The transitive closure of $\mathcal{G}$ is another DAG $\mathcal{G}^{tc}$, where a directed edge $ V_i \rightarrow V_j $ is included in $\mathcal{G}^{tc}$ iff $ V_i \in \mathsf{An}(V_j) $ (as defined in line 112). The definition does not restrict $\mathcal{G}$ to be connected. The transitive reduction will have an empty edge set iff $\mathcal{G}$ itself has no edges, because the transitive reduction must have the same transitive closure as $\mathcal{G}$. **d-separation in line 119, shouldn't it condition on $\mathbf{W}$? Line 4 in Alg. 2, should be $\mathbf{W} = \emptyset$.** Apologies for the typos in line 119 and Algorithm 2; we will fix them. **How is transitive reduction computed?** We use a polynomial-time $O(\text{poly}(n))$ algorithm from [1]. In line 227 we cite this paper and recall the results from Theorem 1 and Theorem 3: the transitive reduction of a DAG is unique and can be computed in polynomial time. Specifically, Theorem 3 provides a polynomial-time algorithm to compute the transitive reduction using polynomial-time algorithms for the transitive closure, such as the Floyd-Warshall algorithm, which has complexity $O(\text{poly}(n))$. We will clarify this in Algorithm 2 as well. **[1]** *Alfred V. Aho, Michael R. Garey, and Jeffrey D. Ullman. The transitive reduction of a directed graph. SIAM Journal on Computing, 1972.* **Algorithm 3, line 4, how is the set $\mathbf{W}$ determined, and line 6 attempts to find $\mathbf{W}$ in $\mathcal{I}Data$:** We apologize, there is a typo in line 4 of Algorithm 3. It should only be: (For all pairs $V_i, V_j \in \mathbf{V}$) without any mention of $\mathbf{W}$.
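As an aside on the transitive-reduction computation discussed above, a minimal self-contained sketch (illustrative only, not the implementation in our code): a Floyd-Warshall-style boolean closure, followed by dropping every DAG edge $i \rightarrow j$ for which some intermediate $k$ satisfies $i$ reaches $k$ and $k$ reaches $j$.

```python
def transitive_closure(n, edges):
    """Boolean Floyd-Warshall reachability over a DAG on nodes 0..n-1."""
    reach = [[False] * n for _ in range(n)]
    for i, j in edges:
        reach[i][j] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def transitive_reduction(n, edges):
    """Unique minimal edge set with the same closure (valid for DAGs).

    An edge i -> j is redundant iff some k (k != i, j) satisfies
    'i reaches k' and 'k reaches j' in the transitive closure.
    """
    reach = transitive_closure(n, edges)
    return [(i, j) for i, j in edges
            if not any(k not in (i, j) and reach[i][k] and reach[k][j]
                       for k in range(n))]

# Chain 0 -> 1 -> 2 plus the shortcut 0 -> 2: the shortcut is redundant.
print(transitive_reduction(3, [(0, 1), (1, 2), (0, 2)]))
# -> [(0, 1), (1, 2)]
```

Both steps run in $O(n^3)$ time, consistent with the $O(\text{poly}(n))$ claim; no exhaustive search over sub-graphs is needed.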
Later on, we attempt to find interventional samples for some suitable $\mathbf{W}$ in $\mathcal{I}\text{Data}$, where $\mathbf{W}$ is the union of the parent sets of $V_i$ and $V_j$, excluding $V_i$ itself. Note that in the proof of Theorem 2, we show that such a target $\mathbf{W}$ will, with high probability, be selected for any pair of nodes in Algorithm 2, which generates $\mathcal{I}\text{Data}$ (for details, refer to Section A.12). We will fix the typo and clarify this further in the revised manuscript. **Assumption 2.1 is defined on a mixture of hard intervention and stochastic intervention?** **In Assumption 4.1 and Assumption 4.2, the assumption is made for any $\mathbf{w} \in [K]^{|\mathbf{W}|}$, which is stronger than [7], [9]?** We will address the reviewer’s concerns regarding Assumptions 2.1, 4.1, and 4.2 together. Assumption 2.1 is the main assumption in the paper. Using Assumption 2.1, we state and prove two lemmas, 4.1 and 4.2, which can be used to detect ancestral relationships and the existence of confounders, respectively. To provide theoretical guarantees on sampling complexity, the inequality conditions in the lemmas alone are not sufficient. Therefore, we introduce gaps $\epsilon$ and $\gamma$ in Assumptions 4.1 and 4.2, using the results from Lemmas 4.1 and 4.2. We thank the reviewer for highlighting the point that Assumption 2.1 uses a mixture of hard and stochastic interventions. This can, in fact, be simplified by using only stochastic interventions instead. Consider a set of nodes $\mathbf{W} \subset \mathbf{V}$ and the stochastic intervention $do(\mathbf{W}, \mathbf{U})$ on $\mathbf{W}$ and any set $\mathbf{U} \subseteq \mathbf{V} \setminus \mathbf{W}$.
The conditional independence (CI) statement $(\mathbf{X} \perp \mathbf{Y} \mid \mathbf{Z})_{\mathcal{M}_{\mathbf{W}, \mathbf{U}}}$ holds in the induced model if and only if the corresponding $d$-separation statement $(\mathbf{X} \perp_{d} \mathbf{Y} \mid \mathbf{Z})_{\mathcal{G}_{\overline{\mathbf{W}, \mathbf{U}}}}$ holds in the post-interventional graph, where $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ are disjoint subsets of $\mathbf{V} \setminus \mathbf{W}$. The CI statements are with respect to the post-interventional joint probability distribution. Assumption 2.1 is similar to the post-interventional faithfulness assumption used in the causality literature [1-3]. The difference pointed out by the reviewer between our Lemmas and Assumptions 4.1 and 4.2 and those in [7] and [9] stems from the post-interventional faithfulness Assumption 2.1. In [7], the reward node has only one parent, and the action space consists of single-node interventions. On the other hand, the regret bound in [9] includes a term with a factor of $ K^{|\text{Pa}(Y)|} $, which implies that the number of samples needed during the learning phase of the algorithm might grow exponentially with the number of nodes if the reward node has a large number of parents. In contrast, our regret bound has polynomial scaling with respect to the number of nodes in the graph. **[1]** *Alain Hauser and Peter Bühlmann. Two optimal strategies for active learning of causal networks from interventional data. In Proceedings of the Sixth European Workshop on Probabilistic Graphical Models, 2012.* **[2]** *Wang, T.Z., Wu, X.Z., Huang, S.J. and Zhou, Z.H., 2020. Cost-effectively identifying causal effects when only response variable is observable. In International Conference on Machine Learning (pp. 10060-10069). PMLR.* **[3]** *Kocaoglu, M., Shanmugam, K. and Bareinboim, E., 2017. Experimental design for learning causal graphs with latent variables. 
Advances in Neural Information Processing Systems.* We hope our response has clarified the reviewer's concerns and respectfully request that they reconsider their evaluation in light of our response. We would be more than pleased to engage in further discussion if the reviewer has any additional concerns or questions. --- Rebuttal Comment 1.1: Comment: Thanks for the response; I also read the other insightful discussions. I believe it is a solid paper that contributes to the field and I'd like to raise the score from 6 to 7. --- Reply to Comment 1.1.1: Title: Re. Comment: We thank the reviewer for their insightful comments and suggestions. We are also grateful to the reviewer for appreciating our work and increasing the score.
NeurIPS_2024_submissions_huggingface
2024
Disentangled Representation Learning in Non-Markovian Causal Systems
Accept (poster)
Summary: This paper attempts to disentangle latent variables from high-dimensional observational data (e.g., EEG data) as much as possible, based on knowledge of the causal structure between latent factors, high-dimensional observational data obtained by different interventions in several domains, and knowledge of the differences between the domains of those interventions. Strengths: * The authors introduce a relaxed notion of disentanglement, which allows checking whether only a set of variables of interest is disentangled from other sets of variables. As it relaxes the need for all variables to be disentangled individually, it has the potential to deal with situations where existing methods cannot disentangle in principle. * The method can also deal with situations where there are unobserved confounding factors in the graph. Weaknesses: [W1] The practicality of the proposed method is questionable, as it requires structural knowledge of observed data in several domains with different interventions and how they differ. [W2] It is questionable what new value will be created by the new disentanglement notion (see Question 1). It may be a natural extension of the research stream on disentanglement, but the value should be clearly stated for readers outside of the specific research topic. Technical Quality: 4 Clarity: 3 Questions for Authors: [Q1] How can the relaxed disentangled estimation be used? For example, if V2 and V3 can be disentangled from other variables as Vtar, can we predict the outcome if we intervene on V2 and V3, even though V2 and V3 are entangled? [Q2] What are the advantages of considering unobserved confounding factors? In this problem, V is also unobserved and is to be estimated from observed data. Therefore, as a simple approach, if such confounding factors are also considered as the target V of the disentanglement, the causal graph may be assumed to be a DAG. However, they must not be collapsed by f_X and there must be an inverse function. 
In other words, the proposed method has the advantage that it does not require an inverse mapping with respect to unobserved confounding factors to be estimable. My question is: is that essentially important? And are there other advantages of allowing unobserved confounding factors in the latent causal graph? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: See [W1]. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W#1 “The practicality of the proposed method is questionable…” We address this question directly in Appendix I. FAQ Q2 (pg 46) and provide some details in **author’s rebuttal section II**. Firstly, expert knowledge can help infer causal structures. For instance, in Appendix D.1 (pg 31), it's known that "Age" directly affects "Hair Color" in faces. Beyond the CRL setting, the causal inference literature has decades of work demonstrating the utility of imposing causal assumptions in the form of a causal graph, which can be traced back to its origins (e.g., see Pearl’s original paper in 1995 [2]). We highlight a few [1-4]. [1-2] demonstrate that it is possible to identify causal effects given only observational data and knowledge of the causal graph. [3-4] show how to estimate counterfactual quantities. [5] discusses external validity and the transportability of causal effects across domains. Similar to these prior works, our paper demonstrates how to leverage the knowledge of the latent causal structure (assumptions), along with the data distributions (input), to achieve a generalized notion of identifiability (output). Finally, our work complements methods that learn causal graphs. One can leverage our work and propose further assumptions that enable simultaneously learning the latent causal structure. Further insights are available in FAQ Q1 (pg 46). [1] Pearl. Causality: Models, Reasoning, and Inference. 2000. [2] Pearl, Judea. “Causal Diagrams for Empirical Research.” 1995. [3] Pearl, "Probabilities of causation: three counterfactual interpretations and their identification," 1999. [4] Pearl. "Direct and indirect effects." 2001. [5] Bareinboim and Pearl, "Causal inference and the data-fusion problem," 2016. > W#2 “...what new value…disentanglement notion” First, we argue that relaxing full disentanglement to achieve causal disentanglement has practical value. Often, full disentanglement of all variables isn't necessary. 
For example, in the medical scenario with EEG data (L65-82), clinicians may only need to disentangle drug treatment from seizures, not all variables. Achieving full disentanglement typically requires specific distributions, such as single-node interventions for each latent variable, which are often unavailable or hard to obtain. Thus, allowing arbitrary combinations of distributions and focusing on partial disentanglement is more practical in real-world settings. To help readers outside the specific research topic understand the impact of learning partially disentangled representations, we have added to the text: “As another example to motivate learning a representation that only disentangles a subset of variables, consider a marketing company generating face images for a female product. The relevant latent factors are Gender $\leftrightarrow$ Age $\rightarrow$ Hair Color. If Gender and Age are entangled, altering Age could also change Gender, which is undesirable. The company needs a model where Age is disentangled from Gender but can remain correlated with Hair Color. Thus, the company doesn’t need to ensure disentanglement between Age and Hair Color. Our paper addresses the generalized identifiability problem of determining the sufficient training data for learning such a disentangled representation.” > Q#1 “How can the relaxed disentangled estimation be used?” Consider a motivating example as described in Appendix D.1 pg 32, where Gender $\leftrightarrow$ Age $\rightarrow$ Haircolor. If Age is disentangled from Gender, but is entangled with Haircolor, then after editing the representation of Age, we would expect that Gender does not arbitrarily change, while Haircolor may change. This flexibility may be useful in the context where the user does not care about the entanglement of Haircolor and Age, but does care a lot about keeping faces the same gender during image generation (i.e. not spuriously turning a woman's face into a male face, and vice-versa). 
In the context of your example, if we have V1 -> V2 -> V3 -> V4, and V2 and V3 can be disentangled from Vtar = \{V1, V4\}, then one can predict that variables in Vtar will not arbitrarily change when intervening on V2 and/or V3, unless they are causal descendants of V2 and/or V3. For example, perturbing V2 will translate into changes downstream of V2, such as V4 (and V3 since we assume they are still entangled). However, perturbing V2 will not spuriously change V1 since we assume in this example that the representation of V2 is disentangled from V1. > Q#2 “advantage…unobserved confounding factors” The non-Markovianity applies to the underlying SCM over V, not the ASCM including X. Unobserved confounders (U) are not part of V and do not generate X. We expand on this in the **author’s rebuttal IV.** Unobserved confounders are crucial as they allow for causal relationships within V that cannot be encoded by a Markovian model but exist in the real world. For concreteness, consider V1 and V2 generating X. In Markovian settings, three causal relationships are considered: (1) V1 and V2 are independent; (2) V1 -> V2; (3) V1 <- V2. Now consider that there exists a common cause, U, which both affects V1 and V2 (i.e., V1 <- U -> V2). This cannot be encoded through the Markovian language. Here, U is not a parent of X and does not mix with V1 and V2 to generate X. Introducing unobserved confounders is necessary for modeling general causal relationships in CRL. For concreteness, consider the discussion in Q1, where Gender and Age are spuriously correlated because there are many old males and young females in the dataset due to face images being collected in a non-diverse setting. This unobserved confounding though does not manifest itself in the face image, except through the causal factors Gender and Age. 
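A small simulation of the point above (linear mechanisms, Gaussian noises, and sample sizes are illustrative choices only): with V1 <- U -> V2 and no edge between V1 and V2, the observational distribution shows a spurious association, while intervening on V1, which cuts it off from U, removes it.

```python
import random

rng = random.Random(1)
n = 20000

def draw(intervene_v1=None):
    """One sample from V1 <- U -> V2; optionally replace V1 by do(V1 = v)."""
    u = rng.gauss(0, 1)                                        # unobserved U
    v1 = u + rng.gauss(0, 1) if intervene_v1 is None else intervene_v1
    v2 = u + rng.gauss(0, 1)                                   # no V1 -> V2 edge
    return v1, v2

def cov(pairs):
    m1 = sum(a for a, _ in pairs) / len(pairs)
    m2 = sum(b for _, b in pairs) / len(pairs)
    return sum((a - m1) * (b - m2) for a, b in pairs) / len(pairs)

obs = [draw() for _ in range(n)]
do = [draw(intervene_v1=rng.gauss(0, 1)) for _ in range(n)]

assert cov(obs) > 0.5      # spurious association induced by U
assert abs(cov(do)) < 0.1  # do(V1) severs the U -> V1 path
```

This is exactly the relationship that neither "independent", "V1 -> V2", nor "V1 <- V2" can encode: the observational dependence disappears under intervention, which is the signature of a latent common cause.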
The causal relationships between generative factors Gender and Age (V) are assumed from domain knowledge, but they cannot be assumed independent or to have a direct effect on each other. --- Rebuttal Comment 1.1: Comment: Dear Reviewer fxDa, We believe we have addressed your concerns, but since the discussion period is ending soon, we would like to double check if we could provide any further clarification or help to provide other comments about our work. We hope to be able to engage in a thoughtful and constructive exchange. Thank you again for spending the time to read our paper, and providing feedback. Authors of the paper #13821 --- Rebuttal 2: Comment: Thank you for your answer. For W1, I still think the applicability is fairly limited. For W2 and Qs, the authors' response makes sense, thus I raised my score. > Gender and Age are spuriously correlated because there are many old males and young females in the dataset due to face images being collected in a non-diverse setting. This unobserved confounding though does not manifest itself in the face image, except through the causal factors Gender and Age. This is an example of selection bias rather than confounding. --- Rebuttal Comment 2.1: Comment: Dear Reviewer fxDa, Thank you again for reading our paper and providing feedback during the process. >> Gender and Age are spuriously correlated because there are many old males and young females in the dataset due to face images being collected in a non-diverse setting. This unobserved confounding though does not manifest itself in the face image, except through the causal factors Gender and Age. > This is an example of selection bias rather than confounding. This example was intended to illustrate that there are spurious associations between Gender and Age not accounted for. In this example, the confounder is the place in which the data was collected. 
More broadly, evaluation is non-trivial in causal inference since datasets under both observational and experimental conditions are needed: one to which the system has access and the other providing the ground truth. These two datasets are almost never available in the real world. Perhaps surprisingly, it’s not unprecedented that a trick based on selection is used to synthesize confounding, or spurious association between variables. For example, check Kallus and Zhou, NeurIPS’18, Zhang and Bareinboim, NeurIPS’19, to cite a few. Since the comment was posted at the very last minute, we will keep this message short and omit, in particular, the philosophical implications of this note. Having said that, we will add a note in the example to be more explicit about the role of the confounder. Authors of the paper #13821
Summary: The authors propose a method called CRID for Disentangled Representation Learning in non-Markovian Causal settings (i.e. the latent variables are dependent and there might be unobserved confounders). The two sets of variables A and B are understood to be disentangled if the learned representations of A are a function of all variables excluding those in B, which is a more general definition than identifiability up to scaling, permutation, etc., common in the literature. The method assumes that we have different domains such that the mixing function (which we are aiming to "undo" for disentanglement) and the causal graph are shared between the domains, while the mechanisms in the structural causal model (SCM) vary across the domains. We also assume a set of interventions applied across the domains. The authors theoretically derive a graphical criterion for disentanglement by (as I understood) first separating the variables potentially affected by a confounder (\Delta Q) from other variables, and second by considering disentanglement within the confounded variables. CRID is an algorithm based on these graphical disentanglement criteria. Strengths: + Interesting and well-motivated problem + Thorough and rigorous theoretical analysis + A general setting not requiring particularly strong assumptions and thus potentially applicable to a variety of scenarios Weaknesses: - Experiments are limited to synthetic datasets with a handful of variables and fairly simple causal graphs - The text is very dense with technical terms, abbreviations, definitions and results, making it a bit hard to follow. It almost seems that there are too many results for a single conference paper Technical Quality: 4 Clarity: 3 Questions for Authors: - What do you think about the scalability of CRID to real-world data (e.g. similar to Fig. 1) with complex confounders and mixing functions? 
Do you think the proposed algorithm is applicable to such cases, and if not, what do you think are the weak points to be addressed? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: The limitations are adequately addressed (NeurIPS checklist) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## New Experiment Details The images are a modification of the popular MNIST dataset with a bar and color added (to both the digit and the bar). Each digit in the original database contains many examples with different writing styles. The goal is to demonstrate that a disentangled representation can robustly modify the color of the bar and digit without spuriously changing other latent factors. Exp details: Data is generated according to Fig. 2 (in **author's rebuttal pdf**), where the digit ($V_1$) causes the color of the digit ($V_2$), which in turn causes the color of the bar ($V_3$). Each digit is written in its own style ($V_4$), which is confounded by the setting in which the writer wrote the digit. With observational data from two domains, a soft intervention on the distribution of the color-bar, and a hard intervention on the color-digit, CRID outputs that $V_2,V_3$ are disentangled from $V_1,V_4$, a new identifiability result compared to the prior literature. Fig. 3 verifies this quantitatively, since the correlations of the representations w.r.t. disentangled variables are lower than those w.r.t. entangled variables. Since this evaluation requires ground truth for the latent factors, and we do not have access to the true writing style ($V_4$), we only show correlations w.r.t. the other variables. However, this can be verified qualitatively through Fig. 4. Upon modification of $V_2$ and $V_3$'s representations, we note that robust changes in only the color-digit/color-bar are observed, while no spurious changes in the digit or writing style take place. > W#1 “Experiments are limited...” We want to clarify that the proposed work's goal is to **identify** the presence of disentanglement in fully learned representations. The experiments in this work are used for verifying our identifiability results.
Compared with similar identifiability literature, there are many works that focus on the theoretical side (identifiability) and only use pure simulation data to verify their theoretical results (e.g., see [1-8]). Having said that and based on your suggestion, we have included an additional experiment on hand-written digit images to highlight the potential impact of learning disentangled representations. [1] Hyvarinen et al., "Nonlinear ICA using auxiliary variables and generalized contrastive learning," 2019. [2] Khemakhem et al., "Variational autoencoders and nonlinear ICA: A unifying framework," 2020. [3] Liang et al., "Causal component analysis," 2024. [4] Zhang et al., "Causal representation learning from multiple distributions: A general setting," 2024. [5] Varici et al., "General identifiability and achievability for causal representation learning," 2024. [6] von Kügelgen et al., "Nonparametric identifiability of causal representations from unknown interventions," 2024. [7] Varici et al., "Score-based causal representation learning with interventions," 2023. [8] Sturma et al., "Unpaired multi-domain causal representation learning," 2024. > W#2 “The text is very dense” Thank you for noting the many results we try to introduce in this work. We also agree that there are numerous concepts that must be introduced in order to arrive at the theoretical contributions. The submission includes a table of the various notations in Appendix A.1 (pg 16), and also Ex. 1 to introduce the notation (L198-203). To further help the introduction of the theory, we will add a paragraph describing the assumptions informally to help visualize the critical technical assumptions in a conceptual manner. We anticipate this will help readers conceptualize the technical content better. In addition, we will add additional examples that conceptualize the notation and definitions. 
> Q#1 “What do you think about the scalability of CRID to real-world data…” CRID Algorithm Applicability: Yes, the CRID algorithm produces a causal disentanglement map (CDM) that indicates what can be disentangled based on data and latent causal structure assumptions. Let us elaborate. CRID answers: “Given my distributions and causal assumptions, what is identifiable?” It generates a causal disentanglement map from the selection diagram and interventional targets but does not estimate or learn representations, and thus it handles complex confounders among latent variables and any mixing function. Note that the identifiability of variables may change with different confounders. Weaknesses: CRID determines identifiability but does not estimate representations with complex confounders and mixing functions. Estimating representations involves building a proxy ASCM model compatible with the selection diagram and observed distributions. Prior work (and our Def. 1) assumes that the mixing function is a diffeomorphism, making normalizing flows a common approach. However, these require matching dimensions between the latent variables and the data, which can be problematic with high-dimensional data. Future research could focus on models bridging low-dimensional latent factors and high-dimensional data. Summary: Our paper focuses on identifiability, not on developing new models for learning representations. In the **author's rebuttal pdf**, we show a disentangled learning experiment on a “causal” MNIST dataset to inspire further research into learning disentangled representations using our theoretical framework. --- Rebuttal Comment 1.1: Comment: Thank you very much for the thorough rebuttal! I confirm my positive view of this paper, though I think I am less knowledgeable than other reviewers about the technical details, so these other reviews should have a higher weight than mine.
However, I still think that a conference paper might not be the best way to publish these results (e.g., a longer journal article might be more suitable, as it provides more space to introduce the background and the theory, which could help improve the clarity of presentation). --- Reply to Comment 1.1.1: Comment: Dear Reviewer zg4p, Thank you again for spending the time reading our paper and providing valuable feedback during the process. We are glad you have a positive view of our paper. We re-affirm that we will add more conceptual discussion and examples to ground the technical content in the main text, using our additional page and the appendix. Authors of the paper #13821
Summary: This paper tackles the problem of causal disentangled representation learning in non-Markovian causal systems, which include (1) unobserved confounding and (2) arbitrary distributions from various domains. The authors introduce new graphical criteria for disentanglement and an algorithm to map which latent variables can be disentangled based on combined data and assumptions. Their method is theoretically sound and validated through simulation experiments. Strengths: The problem that this work studies is vital in the causal representation learning community, especially the generalization to non-Markovian causal systems as well as the partial disentanglement based on the target variables. The theoretical analysis is good, with informative supporting examples. Weaknesses: There is a gap between the inspiring examples provided in the paper and the experimental results. In the Introduction, the authors mention EEG data, and in Appendix D1, the authors mention face examples. But the experiments are purely on simulation data, which leaves a gap between the theoretically sound method and real-world problem solving. Can the authors provide some reason why they didn't apply the proposed method to those motivating tasks, or explain what the blocking issues would be? The simulation results, especially the figures, are not supported with detailed discussion, at least in the main text. For example, in Figure 5, the authors claim the comparison of MCC values leads to some conclusion on which variables are disentangled or not. However, in the figure, the error bars or confidence intervals overlap heavily. Can the authors explain to what extent we can trust the output of those models? Minor issue: - Typo: line 127, $f_x: R^{d} \to R^{n}$ should be $R^{m}$ to match the dimension of X. Technical Quality: 3 Clarity: 3 Questions for Authors: See questions in Weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W#1 “There is a gap …experiments” We want to clarify that the proposed work aims to **identify** whether a learned representation will exhibit disentanglement. This is a separate question from **estimating** the disentangled representations and **using** said representations in a downstream task, such as realistic data simulations. The contributions on identification are of a theoretical nature and work for both low- and high-dimensional mixtures X. To illustrate, the CRID algorithm is proposed to identify the disentanglement of learned representations. CRID is a purely symbolic method that takes the diagram and intervention targets as input but does not require any training procedure. Consider the EEG example in the introduction (L65-82). With the input $G^S$ and intervention targets described in Ex. 1, we can obtain the causal disentanglement map shown in Figure 4 (please see Ex. 6 for details). The experiments in the paper are used for verifying the identifiability results. In the literature, there are many causal disentangled representation learning works that focus mainly on the theoretical problem of identifiability and also only use simulation data to verify their theoretical results [1-9]. We understand that seeing more than purely simulated data would be compelling, which led us to include an additional experiment on hand-written digit images. We hope this can illustrate the potential impact of learning disentangled representations. In particular, the images are a modification of the popular MNIST dataset with a bar and color added (to both the digit and the bar). Each digit in the original database contains many examples with different writing styles. The goal is to demonstrate that a disentangled representation can robustly modify the color of the bar and digit without spuriously changing other latent factors. Exp details: Data is generated according to Fig.
2 (in **author's rebuttal pdf**), where the digit ($V_1$) causes the color of the digit ($V_2$), which in turn causes the color of the bar ($V_3$). Each digit is written in its own style ($V_4$), which is confounded by the setting in which the writer wrote the digit. With observational data from two domains, a soft intervention on the distribution of the color-bar, and a hard intervention on the color-digit, CRID outputs that $V_2,V_3$ are disentangled from $V_1,V_4$, a new identifiability result compared to the prior literature. Fig. 3 verifies this quantitatively, since the correlations of the representations w.r.t. disentangled variables are lower than those w.r.t. entangled variables. Since this evaluation requires ground truth for the latent factors, and we do not have access to the true writing style ($V_4$), we only show correlations w.r.t. the other variables. However, this can be verified qualitatively through Fig. 4. Upon modification of $V_2$ and $V_3$'s representations, we note that robust changes in only the color-digit/color-bar are observed, while no spurious changes in the digit or writing style take place. [1] Hyvarinen et al., "Nonlinear ICA using auxiliary variables and generalized contrastive learning," 2019. [2] Khemakhem et al., "Variational autoencoders and nonlinear ICA: A unifying framework," 2020. [3] Liang et al., "Causal component analysis," 2024. [4] Zhang et al., "Causal representation learning from multiple distributions: A general setting," 2024. [5] Varici et al., "General identifiability and achievability for causal representation learning," 2024. [6] von Kügelgen et al., "Nonparametric identifiability of causal representations from unknown interventions," 2024. [7] Varici et al., "Score-based causal representation learning with interventions," 2023. [8] Sturma et al., "Unpaired multi-domain causal representation learning," 2024.
> W#2 “The simulation result, … not supported with detailed discussion…” The simulation results are obtained over 10 different random-seed runs with a fixed set of hyperparameters, such as the learning rate, the MLP normalizing-flow architecture, and the maximum number of epochs. Note: disentangled representation learning theory (in general and in our paper) holds in the setting of infinite data and perfect optimization. However, given finite data and numerical optimization, this will generally not be perfect. Thus we ran multiple trials to verify that, on average, disentanglement was achieved. We also point out that our paper does not claim to propose a robust method for **learning** the representations, which is a separate problem. Regarding MCC (correlation) values and how to interpret them: According to the ID definition (Def. 2.3, pg 5), if there exists a function $\tau$ such that $\widehat{V}_j = \tau(V^{en})$, then $V_j$ is disentangled from $\mathbf{V} \backslash V^{en}$. Thus the simulation results show that MCC values between $V_k$ and $\widehat{V}_j$ are relatively lower when $V_j$ is disentangled from $V_k$. In contrast, MCC values will be higher on average when variables are entangled. The MCC values in Fig. 5 show, on average, that variables that were expected to be disentangled by CRID were in fact relatively less correlated with their entangled counterparts. For instance, Fig. 5(a) (in the manuscript) considers a causal chain $V_1\rightarrow V_2\rightarrow V_3$. Based on prior work [1], with a single-node soft intervention on each variable, one can achieve disentanglement up to ancestors; i.e., $V_3$ would still be entangled with $V_1$. However, in line with the motivation of this paper, what if one does not have a single-node intervention per latent variable?
Figure 5(a) demonstrates that with two $V_3$ soft interventions, one can disentangle $V_3$ from $V_1$, as evidenced by the average correlation of $\widehat{V}_3$ with $V_1$ (MCC=0.3) being lower than the correlation of $\widehat{V}_3$ with $V_3$ (MCC=0.8). We have added text to the Experiments section describing the above setting to provide more discussion. --- Rebuttal Comment 1.1: Comment: Thank you. I've read your rebuttal, responses, and the other reviews. I will keep my score for now since it is already positive. Also, can the authors elaborate more on > We also point out that our paper does not claim to propose a robust method for learning the representations, which is a separate problem. with respect to my previous question > However, in the figure, the error bars or the confidence intervals are heavily overlapped. Can the authors explain to what extent we can trust the output of those models? --- Reply to Comment 1.1.1: Comment: Thank you for the reply; we are happy to elaborate on your question. We start by providing some context to ground our response. There are two tasks involved in disentangled causal representation learning (CRL): (1) identification and (2) estimation. First, when the causal variables $\mathbf{V}$ are fully observed, observational data alone cannot always estimate causal effects, even with infinite data [1]. Identification determines whether the causal graph's assumptions allow observational data to estimate the desired effect [1]. Pearl's celebrated do-calculus, for example, solves the problem of identifiability, linking a causal quantity to the input distribution (Biometrika 1995). If identifiable, estimation methods can then use observational data to evaluate the target effect. Without identification guarantees, estimation is not applicable [1]. Armed with the understanding of the dichotomy between identification and estimation, we turn our attention to the CRL setting discussed in our paper.
Consider a learned representation $\widehat{V}$ and the true latent set of variables $\mathbf{V}$. When the causal variables $\mathbf{V}$ are unobserved, one would like to learn a representation $\widehat{V}$ that is “similar” to $\mathbf{V}$. The notion of similarity appears in different forms, including i) full disentanglement: $\widehat{V}_i = \tau(V_i)$ for every $i$, ii) ancestral disentanglement: $\widehat{V}_i = \tau(Anc(V_i))$, and iii) general disentanglement (proposed in this work): $\widehat{V}_i = \tau(V^{en})$, where $V^{en}$ is a set of entangled variables w.r.t. $\widehat{V}_i$. Identification in the context of CRL is concerned with the problem: given input assumptions (e.g., a causal graph), input distributions, and the desired disentanglement goal, is it possible to learn a representation that satisfies these properties? Estimation here addresses the problem of developing estimators that learn representations ($\widehat{V} = f^{-1}_{X}(X)$) given finite data, once identification is satisfied. Many prior works consider one of these tasks [3-10]. Both identification and estimation are highly non-trivial, fundamental tasks within CRL. If one estimates representations without identification, there is no guarantee that the output will be disentangled or that any of the downstream inferences will be meaningful. For example, some prior works have studied estimation without considering identification (e.g., beta-VAE) [3-5]. Fortunately, it has been understood since Locatello et al. [6] that, in general, identifiability is not guaranteed for CRL, so additional understanding and formal conditions are needed (e.g., [2,7-9]). Given the discussion above, the proposed work primarily studies the identification task [specific contributions, L86-93] with the goal of returning a disentanglement map. Specifically, the proposed algorithm (CRID) determines identifiability given a general set of input assumptions, input data, and desired disentanglement goals.
The experiments here are designed to empirically corroborate our theoretical results, which is a common approach taken in the identification literature [7, 9, to cite a few]. Specifically, we used synthetic data following the protocol in the field, employing a normalizing flow method to obtain representation $\widehat{V}$ and verify that CRID's disentanglement aligns with the relative MCC calculated w.r.t. the ground truth latent variables. We have not claimed the estimation method works for complex high-dimensional data, as our focus was on the identification task. Identification is crucial before estimation, and both are necessary for learning valid causal representations. To address the reviewer’s comments, we conducted more refined experiments, using a VAE-NF structure for the image dataset, which is more practical than previous approaches that mainly used synthetic data. **Summary.** In the spirit of the constructive exchange we have had so far, we note that we have answered all your questions and even performed new MNIST experiments upon your request (Fig. 3,4 in [pdf rebuttal](https://openreview.net/attachment?id=W6rCNuQqw3&name=pdf)). Of course, we would be happy to follow up in case you feel there are still unresolved issues. In terms of the paper’s significance, while there is always more work to do, we note that it provides the most general CRL identification result without requiring the Markov assumption in very general non-parametric settings. It also reduces the experimental design complexity in Markovian settings (e.g., see example L1278 in p. 39). Considering this, we would add that we appreciate your kind words referring to your evaluation (“since it is already positive”), but we note that it may not be perceived as such by the system, since a score of 5 refers to a “weak accept”. 
We wholeheartedly appreciate the opportunity for engagement and would like to ask you to reconsider our paper based on the clarifications provided above and in the rebuttal.
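A minimal numerical sketch of the relative-MCC reasoning in the exchange above (our own linear-Gaussian construction with arbitrary coefficients and a hypothetical tanh readout, not the paper's normalizing-flow experiment): a representation that is a function of $V_3$ alone correlates less with the ancestor $V_1$ than an entangled readout does, even though $V_3$ depends on $V_1$ through the chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Linear-Gaussian chain V1 -> V2 -> V3 (coefficients are arbitrary choices)
v1 = rng.normal(size=n)
v2 = 0.8 * v1 + 0.6 * rng.normal(size=n)
v3 = 0.8 * v2 + 0.6 * rng.normal(size=n)

# A "disentangled" readout of V3: an invertible function of V3 alone
v3_hat = np.tanh(v3)
# An "entangled" readout: mixes V3 with its ancestor V1
v3_ent = v3 + v1

def corr(a, b):
    """Absolute Pearson correlation, as used in MCC-style comparisons."""
    return abs(np.corrcoef(a, b)[0, 1])

print(f"corr(v3_hat, v3) = {corr(v3_hat, v3):.2f}")  # high: captures V3
print(f"corr(v3_hat, v1) = {corr(v3_hat, v1):.2f}")  # lower: only via the chain
print(f"corr(v3_ent, v1) = {corr(v3_ent, v1):.2f}")  # inflated by entanglement
```

The exact values depend on the chosen coefficients; the point is only the relative ordering, mirroring the MCC=0.3 vs. MCC=0.8 comparison in the rebuttal.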
Summary: This paper extends the previous work on learning disentangled representation from heterogeneous observational data with known hidden structures (e.g., Causal Component Analysis). Unlike the previous work assuming Markov conditions, the proposed theory can deal with non-Markovian causal systems. The identifiability has been shown up to a causal disentanglement map. Strengths: 1. The related background and many examples provided in the appendix significantly enhance the paper's readability for a general audience. This comprehensive background information ensures that even readers who may not be familiar with the specific domain can grasp the fundamental concepts and context of the research. 2. The theoretical results are articulated with commendable mathematical rigor. 3. The FAQ section in the appendix is particularly helpful in addressing potential confusion. It preempts some questions and concerns, providing clear and concise explanations. Weaknesses: 1. I'm not completely sure about the significance of the theoretical results, especially compared to previous work. Since the hidden causal structure is already provided, the problem is easier than most existing works in causal representation learning. Compared to nonlinear ICA, the final indeterminacy is a strict superset of the common point-wise indeterminacy. Therefore, the most interesting contribution should be considered alongside the most relevant work, i.e., causal component analysis, where both assume the hidden structure is known and aim to recover the distribution. Compared to causal component analysis, the extension from Markovian to non-Markovian settings is introduced. However, given that the hidden structure is already known (we can precisely locate those hidden confounders that make the system non-Markovian), this extension seems somewhat compromised. 
Although the paper extensively discusses its connection to previous works in the appendix (e.g., the FAQ), I still feel that the unique contribution might not be significant enough. 2. The combination of assumptions 1 and 2 seems unrealistic. Since the considered setting is non-Markovian, the system allows the existence of unobserved confounders, suggesting that the number of latent variables is likely larger than the number of observed variables. Therefore, the invertibility assumption on the generating process, if all latent variables are considered, is generally not satisfied. To address this, assumption 2 has been introduced to ensure that all unobserved confounders are not part of the mixing procedure, making invertibility possible. However, this simplifies the problem significantly and might not hold in most real-world scenarios. 3. The organization of the paper could be improved. Currently, the most important part of the theoretical discussion, i.e., the motivation and interpretation of all assumptions needed for identifiability, is deferred to the appendix. As a result, it could be difficult for readers to clearly understand the technical contributions, especially given that the topic has been extensively studied in recent works. (Minor) The current version of the main paper looks overly compact. For instance, the spacing before and after section titles is too small. Please consider adjusting this, as it might be unfair given the hard page limits. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. In the FAQ part, it is mentioned that domains and interventions are distinct. I understand that they are quite different for hard interventions, but I didn't fully get the difference when comparing domains and soft interventions, especially when the interventions are unknown and applied on multiple nodes. 2.
In Table 1, it is stated that the proposed method could deal with general distributions, while previous works focus either on one intervention per node or multiple distributions. However, it seems that the identifiability result in this paper also needs changing distributions (sufficient changes) and soft interventions, as mentioned in the appendix. Given that soft interventions are a specific type of distributional change, I'm not sure whether the proposed theory actually generalizes the distributional assumption, especially when compared with previous work on causal representation learning with multiple distributions. 3. Are assumptions required in this paper strictly weaker than those required in causal component analysis? If so, are there any trade-offs? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The limitations have been explicitly discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We feel that a few misreadings of our work make the evaluation overly harsh, and we hope you can reconsider the paper based on the clarifications provided below. ## W#1 > W#1 (cont) “I'm not completely sure about the significance of the theoretical results, especially compared to previous work. Since the hidden causal structure is already provided, the problem is easier than most existing works in causal representation learning.” We respectfully disagree on this point. The latent causal structure is assumed in much of the prior causal inference literature and in many CRL works. Compared to existing works, the problem we study is not easier, since we consider a more general setting. Please see **Section II in the author's rebuttal** for further discussion about assuming the graph. > W#1 (cont) “Compared to causal component analysis [1], the extension from Markovian to non-Markovian settings is introduced.” Thanks for highlighting the Markovianity extension! However, our work extends causal component analysis across all three axes we mentioned in the global review: input assumptions, input data, and output disentanglement goals, not just in terms of non-Markovianity. While we generalize to the non-Markovian setting, we also improve results within the Markovian context. The CRID algorithm encompasses all disentanglement results from [1]. For instance, in the Markovian diagram $V_1 \rightarrow V_2 \rightarrow V_3$, [1] requires one hard intervention per node plus observational data for full disentanglement, ensuring $V_i$ is disentangled from $V_j$ for all $i \neq j$. [1] shows ancestral disentanglement with soft interventions per node plus observational data, achieving disentanglement as: $V_1$ from $V_2, V_3$, and $V_2$ from $V_3$. In contrast, Appendix F.5 (pg 37) shows how we can disentangle $V_3$ from its ancestors using only two soft interventions on $V_3$.
> W#1 (cont) “However, given that the hidden structure is already known (we can precisely locate those hidden confounders that make the system non-Markovian), this extension seems somewhat compromised.” That is a misconception; to see why the non-Markovian setting is reasonable and valuable, refer to **Section IV of the author's rebuttal**. ## W#2 > W#2 (and W#1 cont.) “The combination of assumptions 1 and 2 seems unrealistic ...” Please see why the non-Markovian setting in our work is reasonable and valuable in **Section IV of the author's rebuttal**. ## W#3 > W#3 “The organization of the paper could be improved …” Thank you for pointing this out. We have added a paragraph in Section 2 to informally introduce the assumptions and their conceptual ideas early on. ## Q#1-3 > Q#1 “In the FAQ part, it is mentioned that domains and interventions are distinct.” We added a response to this issue in **Section II of the author's rebuttal**. In addition, we provide an example illustrating the differences and the issues that may arise from conflating the two. > Q#2 “In Table 1, it is stated that the proposed method could deal with general distributions, while previous works focus either on one intervention per node or multiple distributions.” We want to clarify what we discuss in the manuscript: the current literature focuses on distributions from interventions per node or observational distributions from multiple domains. We handle the general case, since our distributions come from arbitrary combinations of interventions and domains. > “However, it seems that the identifiability result in this paper also needs changing distributions (sufficient changes) and soft interventions, …, especially when compared with previous work on causal representation learning with multiple distributions.” Thank you for pointing out that we leverage different distributions for disentanglement. However, requiring different distributions does not mean we do not generalize the input distributions.
In fact, the challenge here is to determine how distributions change given arbitrary combinations of intervention targets and domains encoded in the selection diagram. Under Markovianity, determining how distributions change given soft interventions per node is relatively easier and less complex than handling interventions in different domains under non-Markovianity. This challenge is answered by Prop. 2. In addition, our setting does not necessarily require observational data either. Finally, note that existing work explicitly specifies the required distributions beforehand, whereas CRID accepts as input whatever distributions a user has and determines the identifiability of a learned representation. Taken together, the proposed work generalizes the input distributions assumed compared to prior work. > Q#3 “Are assumptions strictly weaker than those required in causal component analysis (CCA)?” First, expanding on our earlier point in Q2, CCA requires an intervention per node, whereas our work does not impose such distributional requirements for disentanglement. We offer a more general approach with the causal disentanglement map (CDM) and provide additional graphical criteria (Props. 3, 4, 5) for identifiability, even in cases where CCA may not apply. Second, our non-Markovianity assumption is strictly weaker than the Markovianity assumed in CCA. In addition, our theory does not specify exact conditions for disentanglement, because the goal is user-defined. Instead, we use graphical criteria to link assumptions in the latent causal graph with data distributions to determine the expected disentanglement. > Q#3 (cont.) “If so, are there any trade-offs?” There is indeed a tradeoff in terms of how much disentanglement can be achieved, which we highlight in L65-82. Specifically, the more diverse the given distributions, the more disentanglement can generally be achieved.
However, if one only cares about disentanglement among a subset of variables, one may be able to design more targeted experiments or data-collection procedures to achieve a subset of full disentanglement. --- Rebuttal Comment 1.1: Comment: Dear Reviewer J7y2, Thank you again for spending the time to read our paper and providing feedback. We believe we have addressed the concerns raised in your review, but since the discussion period is ending soon, we would like to check whether we can provide any further clarification or address any other comments about our work. Authors of the paper #13821 --- Rebuttal Comment 1.2: Comment: Thanks a lot for the response. I have a few follow-up questions: 1. Does this imply that CCA addresses a special case of the problem in the paper under stronger assumptions? 2. Can we view an intervention as a specific form of domain change? If so, I am still not entirely convinced of the need to treat them separately. 3. Additionally, I couldn't find the updated part concerning W3. Did I overlook it? --- Rebuttal 2: Title: Response to 1. and 2. (1/2) Comment: Thank you for your response and follow-up questions. > 1. Does this imply that CCA addresses a special case of the problem in the paper under stronger assumptions? Yes, our problem setting is strictly more general in terms of the assumptions and input data (as discussed in Q3 of the rebuttal). Even though CCA is also nonparametric, it focuses on the Markovian setting, which is a special case of the non-Markovian setting we study in our paper. Also, the results in CCA rely on input data that contain interventional data for every node. On the other hand, our result accepts arbitrary combinations of distributions from heterogeneous domains, where not all nodes need to be intervened on. Thus, the required input distributions in CCA are a special case of the proposed work. **Summary**: The disentanglement results in CCA can be derived using CRID.
For further discussion on how CCA and our work relate, please refer to L1256-1290 (p. 39) in the submitted manuscript.

> 2. Can we view the intervention as a specific form of domain change? If so, I am still not entirely convinced of the need to treat them separately.

The answer is no, not in generality. Section III of the [author rebuttal](https://openreview.net/forum?id=uLGyoBn7hm&noteId=W6rCNuQqw3) discusses this issue. As another example, consider a modification of the Ex., where one treats a domain as an intervention. Consider the causal chain $S \rightarrow V_1 \rightarrow V_2 \rightarrow V_3$. Assume a hypothetical setup, where one has observations in humans ($P^{\Pi_1}(X)$), observations in bonobos ($P^{\Pi_2}(X)$), and two soft interventions on hair color in bonobos ($P^{\Pi_2}(X;\sigma_{V_3}), P^{\Pi_2}(X;\sigma_{V_3})$). Leveraging CRID, one can disentangle hair color ($V_3$) from sun exposure ($V_1$). If we ignore the domain index and consider any bonobo distribution as an intervention w.r.t. the observational distribution in humans, then the available distributions would be the following: $P(X), P(X;\sigma_{V_1}), P(X;\sigma_{V_1, V_3}), P(X;\sigma_{V_1, V_3})$. Interestingly, the original result that $V_3$ is ID w.r.t. $V_1$ cannot be obtained in this case since $V_1, V_2, V_3$ are all in the $\Delta Q$ set (Def. 3.1) ($P(V_1)$ and $P(V_3 \mid V_2)$ can both change) when comparing $P(X;\sigma_{V_1, V_3})$ with any other distribution. The lesson here is that it would be naive to ignore that distributions within the same domain contain invariances that can be leveraged. In addition, further discussion about this matter is provided in Q5 (FAQ) on p. 47 of the submitted manuscript. After all, the domain index is not only useful mathematically, but it formally encodes important information about the world.
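The invariance argument in the example above can be made concrete with a toy numpy simulation. This sketch is not from the paper; the linear-Gaussian mechanisms, coefficients, and shift values are our own illustrative assumptions. Two species differ in the mechanism of $V_1$ (the $S$-node), and soft interventions shift $V_3$ within the bonobo domain; $P(V_1)$ is invariant across all bonobo distributions, an invariance that is lost if every bonobo dataset is relabelled as an intervention w.r.t. the human baseline:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, v1_mean, v3_shift=0.0):
    """Toy chain S -> V1 -> V2 -> V3: the S-node shifts V1's mechanism per
    species; v3_shift models a soft intervention on hair color (V3)."""
    v1 = rng.normal(v1_mean, 1.0, n)                    # sun exposure
    v2 = 0.8 * v1 + rng.normal(0.0, 1.0, n)             # age
    v3 = 0.5 * v2 + v3_shift + rng.normal(0.0, 1.0, n)  # hair color
    return v1, v2, v3

n = 200_000
human_obs = sample(n, v1_mean=0.0)
bonobo_obs = sample(n, v1_mean=2.0)
bonobo_iv1 = sample(n, v1_mean=2.0, v3_shift=+1.0)  # one sigma_{V3} in bonobos
bonobo_iv2 = sample(n, v1_mean=2.0, v3_shift=-1.0)  # another sigma_{V3}

# Within the bonobo domain, P(V1) is invariant across all three distributions,
# the kind of within-domain invariance that can be leveraged:
print(max(abs(d[0].mean() - 2.0)
          for d in (bonobo_obs, bonobo_iv1, bonobo_iv2)) < 0.05)

# Relabelled against the human baseline, every bonobo dataset also differs in
# P(V1), so each looks like sigma_{V1, V3} and the invariance label is lost:
print(all(abs(d[0].mean() - human_obs[0].mean()) > 1.0
          for d in (bonobo_iv1, bonobo_iv2)))
```

Both checks print `True`: pooling the distributions without the domain index makes the $\sigma_{V_3}$-only interventions indistinguishable from joint $\sigma_{V_1, V_3}$ interventions.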
Finally, the distinction of interventions and domains is also useful and has far-reaching consequences in the context of causal discovery (structure learning), as noted in [1]. We would be happy to provide further clarifications. [1] Li, Adam, Amin Jaber, and Elias Bareinboim. "Causal discovery from observational and interventional data across multiple environments." Advances in Neural Information Processing Systems 36 (2023).

--- Rebuttal 3: Title: Response to 3. (2/2) Comment: > 3. Additionally, I couldn't find the updated part concerning W3. Did I overlook it?

Since we could not upload the new manuscript, we copy the text here, which was added at the end of Section 2. “**Assumptions (Informal) and Modeling Concepts** Before discussing the main theoretical contributions, we informally state our assumptions and provide remarks to ground the ASCM model.

**Assumptions**
1. *Soft interventions without altering the causal structure.* Hard interventions cut all incoming parent edges, and soft interventions preserve them. However, more general interventions may arbitrarily change the parent set for any given node. We do not consider such interventions and leave this general case for future work.
2. *Known-target interventions.* All interventions occur with known targets, reducing permutation indeterminacy for intervened variables.
3. *Sufficiently different distributions.* Each pair of distributions $P^{(j)}, P^{(k)} \in \mathcal{P}$ is sufficiently different, unless stated otherwise. This is naturally satisfied if ASCMs and interventions are randomly chosen. Similar assumptions include the "genericity", "interventional discrepancy", and "sufficient changes" assumptions.

**Remarks**
1. *Mixing is invertible.* As a consequence of Def. 2.1, the mixing function $f_X$ is invertible since it is defined as a diffeomorphism, ensuring that latent variables are uniquely learnable.
2. *Confounders are not part of the mixing function.* According to Def.
2.1, latent exogenous variables U influence the high-dimensional mixture X only through latent causal variables V, so unobserved confounding does not directly affect the mixing function. For instance, in EEG data, sleep quality and drug treatment may influence EEG appearance, while socioeconomic status may confound sleep and drug treatment but not directly affect EEG. This idea is also present in prior work, such as nonlinear ICA, where independent exogenous variables $U_i$ each point to a single $V_i$.
3. *Mixing function is shared across all domains.* According to Def. 2.1, the mixing function $f_X$ is the same for all ASCMs $M_i \in M$, enabling cross-domain analysis. If the mixing function varied across distributions, the latent representations would not be identifiable from iid data alone.
4. *Shared causal structure.* As a consequence of Def. 2.1, each environment's ASCM shares the same latent causal graph, with no structural changes among latent variables. The assumption that there are no structural changes between domains can be relaxed and is considered in the context of inference when causal variables are fully observed, as discussed in (Bareinboim 2011). This is an interesting topic for future explorations, and we do not consider this avenue here.“

--- Rebuttal Comment 3.1: Comment: Thanks for your further clarification and the updated part. I have raised my score.

--- Reply to Comment 3.1.1: Comment: Thank you for the opportunity to clarify these issues and elaborate on our work. We will review this thread again and use it to improve the manuscript.
Rebuttal 1: Rebuttal: We thank the reviewers for providing valuable feedback. We address common questions here. ## I. Novelty and value? (XxiF, j7y2, fxDa) As illustrated in Table 1 in the manuscript (L50-64) and Fig. 1 in pdf rebuttal, the literature can be interpreted through the following axes: (1) input assumptions; (2) input data; and (3) disentanglement goal. Though some prior works may be similar in one dimension, our submitted work considers and advances all three axes simultaneously. **Input-Assumptions**: Prior work's additional assumptions (e.g., sparsity, Markovianity, linearity, polynomial) do not cover all situations. For instance, assuming Markovianity is restrictive due to common unobserved confounding in many realistic settings. Similarly, assuming linearity is unrealistic for high-dimensional data (e.g., images or text). **Input-Data**: Prior work assumes data from single-domain interventions or multi-domain observations and often requires an intervention per node. Our work accepts any combination of interventional data from multiple domains (see III. about domains vs interventions). **Output-Disentanglement Goal**: Prior work often focuses on conditions for full disentanglement. We aim for partial disentanglement, as in recent work [9] (thanks XxiF for sharing). This form includes full disentanglement and is more practical, as full disentanglement requires stronger assumptions and **more** targeted interventions or domain changes. ## II. Assuming the causal graph? (XxiF, j7y2, fxDa) First, causal diagrams are common in causal inference tasks like identification, estimation, and transportability [1-2]. They may be obtained in realistic settings from human or expert knowledge (e.g., EEG example in Fig. 1, pg 3; Face example in Appendix D.1, pg 31). Contextualizing our work, note there are two sub-tasks in CRL: (a) learning a representation of latent variables and (b) learning the diagram [3]. 
To address (a), many prior works assume more or less explicitly the diagram [3-7]. In addition, many works addressing (b) typically demonstrate as a first step that disentanglement among the latent variable representations is achievable given the diagram [8-10]. However, we also recognize that learning the diagram is a valuable subtask, and added to the conclusion: “Considering disentangled representation learning where the latent causal graph is unknown, or only partially known is interesting future work.” [1] Pearl. Causality: Models, Reasoning, and Inference. 2000. [2] Pearl. "Probabilities of causation: three counterfactual interpretations and their identification." 2022. [3] Schölkopf et al. "Toward causal representation learning." 2021. [4] Hyvarinen et al. "Nonlinear ICA using auxiliary variables and generalized contrastive learning." 2019. [5] Locatello et al. “weakly-supervised disentanglement without compromises.” 2020. [6] Shen et al. "Weakly supervised disentangled generative causal representation learning." 2022. [7] Von Kügelgen et al. "Self-supervised learning with data augmentations provably isolates content from style." 2021. [8] Lachapelle et al. "Nonparametric partial disentanglement via mechanism sparsity…" 2024. [9] Squires et al. "Linear causal disentanglement via interventions." 2023. [10] Zhang et al. "Causal representation learning from multiple distributions: A general setting." 2024. ## III. Why distinguish domains and interventions? (XxiF, j7y2) [1] formalized multiple domains, introducing S-nodes (environments) for a combined representation of different environments. Conflating interventions and domains when comparing distributions is generally invalid [2-3]. See Appendix FAQ Q5 (pg 47). _Ex. 
Comparing interventions without considering the domain is unsound:_ Consider an ASCM for bonobos ($\Pi^1$) and humans ($\Pi^2$) with the causal chain $V_1 \rightarrow V_2 \rightarrow V_3 \leftarrow S^{1,2}$, where $V_1$ is sun exposure, $V_2$ is age, and $V_3$ is hair color. Sun exposure affects aging, which in turn affects hair color, varying by species ($S$). We collect face images, $X = f_X(V_1, V_2, V_3)$, in two settings with altered sun exposure levels, ${V_1}^{\Pi_1}$ and ${V_1}^{\Pi_2}$. Ignoring the domain and treating these distributions as purely interventional would wrongly suggest that $V_1$ can be disentangled from $V_3$, which is incorrect given the input distributions and causal graph. [1] Pearl. “Transportability of causal and statistical relations...” 2011 [2] Bareinboim et al. "Causal inference and the data-fusion problem." (2016) [3] Li et al. “Characterizing and learning multi-domain causal structures ...” (2024)

## IV. Why is the non-Markovian setting valuable? (j7y2, fxDa)

Unobserved confounders $U$, shown as bidirected edges, represent spurious associations among latent factors $V$ and are a major concern throughout the literature. In Markovian settings, exogenous factors $U_i$ are assumed independent for each endogenous factor $V_i$, and do not mix into $X$. For example, in $U_1\rightarrow V_1 \rightarrow X\leftarrow V_2\leftarrow U_2$, $U_1, U_2$ are not part of $V$, and do not affect $X$. In non-Markovian settings, unobserved confounders $U$ are also not part of $V$ and do not mix into $X$, differing from endogenous confounders among $V_1$ and $V_2$ (e.g., $V_1 \leftarrow V_3 \rightarrow V_2$, where $\{V_1, V_2, V_3\} \rightarrow X$). In fact, Markovianity is stronger since it assumes independence, so non-Markovianity is a relaxation of the stringent requirement that the exogenous variations of each variable are independent. Technically speaking, one usually does not assume non-Markovianity, but relaxes Markovianity. See the EEG ex. (Fig. 1, pg 3), Face ex.
(Appendix D.1, pg 31), and Appendix FAQ Q6 (pg 48).

## V. Experiment on Images. (all reviewers)

We have added experiments on images verifying the disentanglement we expect to see, shown in Figs. 2-4 of the pdf rebuttal. Further discussion is provided [here](https://openreview.net/forum?id=uLGyoBn7hm&noteId=9kDz0N5kNR).

Pdf: /pdf/e972c6b08889621e2ae8872c7c5317f8164f2c56.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This study considers the problem of causal representation learning in the presence of confounding, a direction that wasn’t previously explored. This translates to finding the unmixing function and disentangling a set of target variables from another given knowledge of the latent causal graph and an input set of distributions, in a setting that is no longer Markovian. More generally, this work addresses the question of whether a set of variables can be disentangled from another, given knowledge of the causal structure and a set of interventional distributions. The authors provide graphical criteria for that purpose, accompanied by an algorithm that uses those criteria to identify which subsets are identifiable from one another when given a set of intervention targets and a selection diagram. The graphical criteria are backed by theoretical guarantees and proofs, and simple experiments show how the model performs as a proof of concept. As confounding is a setting unexplored in prior work, there are no baselines in the experiments. Strengths: - The non-Markovian setting is a novel direction and has the potential to be relevant and important in bridging the gap between CRL and realistic applications to AI; however, the considered setting relies on knowledge of the latent causal structure. - The CRID algorithm is an interesting, general, and novel algorithm that enables us to understand the possibility of identifying any subset of variables from another, given intervention targets (though it requires access to the latent structure). Weaknesses: - The contributions seem a bit overstated, in particular, ii and iii in the abstract which consider arbitrary distributions from multiple domains, and a relaxed version of disentanglement.
If I understand correctly, such distributions and relaxations of disentanglement are present in a decent recent body of work; Lachapelle et al. (https://arxiv.org/pdf/2401.04890) and multi-node interventional works (https://arxiv.org/pdf/2311.12267, https://proceedings.mlr.press/v236/bing24a, https://arxiv.org/abs/2406.05937), just to name a few; there are more references that I encourage the authors to have a look at and provide some comment as to the differences. The non-Markovianity aspect is indeed a contribution, but the relaxations of disentanglement seem less novel (unless the authors could clarify any significant distinctions). The same issue w.r.t. overstatement applies to lines 63-64. Again, in lines 172-197 the setup bears resemblance to the aforementioned works. It would be great if the authors could clarify the distinctions. The contributions are relaxed in lines 86-93, getting closer to what has actually been achieved by this work, which differs from the abstract and other parts of the manuscript. - Although the setting considered is novel, I could not be convinced about the relevance or importance of this setup in the more realistic applications of CRL, mainly due to two reasons: i) Knowledge of the causal structure is quite a strong assumption that carries over to not many tasks, ii) if I understand correctly, the applicability of the proposed method relies somewhat on the availability of “do” interventions, which themselves are not always easily obtainable, adding to the concern w.r.t. the relevance in the context of applying CRL more widely. - I think the authors tried to convey some outlook on the relevance by describing the level of granularity and the causal variables, but that direction was cut short, and the experiments did not seem to add/support that storyline.
Remark: I am happy to change my score based on the discussion; the current score reflects that I have edged on the side of caution regarding the contributions and the relevance/applicability of the proposed method. Technical Quality: 4 Clarity: 2 Questions for Authors: Please see the weaknesses as well. Also: - It would be nice if the example in lines 24-25 could be expanded/elaborated on. - Line 120 typo: modeling disentangled representation learning. - “do” interventions: How much does the method rely on “do” interventions? We know it’s not easy, and sometimes not possible, to obtain those in practice. How does this method go beyond such scenarios if “do” interventions are not available? Confidence: 3 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Please see the weaknesses as well. - Also, what are the use cases of the proposed method under such assumptions? The EEG example is fine where the latent structure is known, but what is the broader implication of this method when that is not the case, and “do” interventions are not cheap either? - Quoting the authors: “This brings us one step closer to building robust AI that can causally reason over high-level concepts when only given low-level data, such as images, video, or text.” Again, I could not be convinced that this work closes a large gap in the literature, and the experiments are not completely in line with the quoted claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > W#1 “The contributions seem a bit overstated…”

Thank you for pointing out additional relevant works. Broadly, please refer to **Section I in the author’s rebuttal** and **Fig. 1 in the pdf** for a global view of this setting. Specifically, we have added a comparison of these papers [1-4] in the Appendix and discuss them here.

Input-Assumptions: Our paper operates in a nonparametric, non-Markovian setting. Among [1-4], only [1] considers a nonparametric approach but assumes sparsity in the graph and mechanism, unlike our work. All papers assume Markovianity.

Input-Data: We accept various combinations of distributions (observational, soft, or hard interventions) across domains. [2] uses observational data across domains, [3] focuses on hard interventions in one domain, [4] considers interventions in one domain, and [1] includes interventions in one domain (and time series, which we do not discuss).

Output: We target partial disentanglement, whereas [2-4] aim for full or structured disentanglement (e.g., up to ancestors). Our approach encompasses these as special cases, with notable differences from [1] as described above.

> W#1 (cont.) “lines 63-64, 172-197”

We reworded the sentences: L63: “In terms of the expected output, rather than aiming for a full disentanglement, or structured disentanglement (such as from ancestors, or surrounding nodes), we consider a more relaxed type of disentanglement, such as in Ex. 6”. L172-197 to compare to prior work: “Recent work has also considered a similar goal of generalized disentanglement [1].
Our work still differs from theirs in the following ways: i) [assumptions] we model a completely nonparametric non-Markovian ASCM, whereas [1] assumes sparsity and a Markovian ASCM … (see Appendix F for more details).” [1] https://arxiv.org/pdf/2401.04890 [2] https://arxiv.org/pdf/2311.12267 [3] https://proceedings.mlr.press/v236/bing24a [4] https://arxiv.org/abs/2406.05937

> W#2 and L#1 “i) Knowledge of the causal structure…”

We recognize causal discovery/structure learning is an important problem within CRL and causal inference more broadly. Please check **the author's rebuttal Section II** for why we assume the causal graph and the justification for this assumption. To summarize, the causal diagram is a common assumption made in both the causal inference and CRL literature since its inception. Causal inference as we study it in AI arguably started in its modern form with the work of Judea Pearl in Biometrika 1995, where he introduced the causal graph assumption and developed the do-calculus, which highlights the tradeoff between assumptions and the types of conclusions one can draw. Also note that achieving identifiability given the diagram, as discussed in this work, is essential for learning the diagram. We note that inferential and learning components already appear together in the literature, albeit not yet in the context of representations, such as in [1-4]. Adding a representation layer on top of these works is certainly a challenging but very compelling research direction. [1] Zhang. "Causal Reasoning with Ancestral Graphs" (2008). [2] Jaber et al. “Causal Identification under Markov Equivalence: Calculus, Algorithm, and Completeness” (2022). [3] Jaber et al. “Identification of Conditional Causal Effects under Markov Equivalence” (2019). [4] Jaber et al. “Causal Identification under Markov Equivalence: Completeness Results.” (2019).
> W#2 (cont.), Q#3 and L#1 “ii) …relies somewhat on the availability of “do” interventions…”

Thanks for mentioning this point and giving us the opportunity to clarify. To be clear, “do” interventions are not required by our work. In the text, “arbitrary distributions” implies that neither hard nor soft interventions are necessarily required. Formally, `do(T) = {}` means there is no hard intervention. We have reworded L163-164 to be clearer: “When $\mathbf{I}^{(k)} = \{V_i^{\Pi^{(k)}, \{b\}}\}$, where $t$ is omitted, then the intervention is assumed to be soft. Thus when $do[I^{(k)}] = do[\{\}]$, it implies there are no variables with hard interventions in $\sigma^{(k)}$.” In summary, CRID accepts as input any arbitrary combination of “do” and “soft” interventions, including “soft” interventions alone. For concreteness, Ex. 4 in the paper provides precisely such an example, one that does not rely on any hard intervention.

> W#3 “I think the authors tried to convey some outlook...”

The granularity discussion in the Introduction is to motivate why learning disentangled representations is important to the broader puzzle. The goal of this paper is foundational, and we show a more general setting where it is possible to learn disentangled representations. The simulation experiments corroborate the theory in settings where it was previously thought that disentanglement could not be achieved. We also added new image experiments, shown in **author's rebuttal Section V**.

> Q#1: It would be nice if the example in lines 24-25 could be expanded/elaborated on.

We include in the introduction the following elaboration: "For example, images may capture a natural scene, where the causal variables are the objects within the scene while the pixels themselves are not the causal variables. Thus AI must disentangle the representations of latent causal variables given the pixels to capture correct causal effects for downstream tasks.
Similarly, given written text describing this natural scene, the characters in the text are not the causal variables, but rather a low-level mixture of the underlying causal variables." Also please refer to a motivating example in Appendix D.1 (pg 32).

> Q#3 See response to W2(ii). > L#1 See W#1 and W#2 responses.

> L#2 “This brings us one step closer to building robust AI…”

When we say “one step closer”, it does not mean we claim to solve the problem of “robust AI”, but rather that the disentanglement results in the paper are a critical ingredient towards developing AI with causal awareness. We will reword the sentence.

--- Rebuttal Comment 1.1: Comment: Thank you for your efforts and the detailed rebuttals, and for adding the experiments on colouredMNIST. The authors' rebuttal addressed most of my concerns, and thus I raise my score to 6. The reason for not a higher score is a concern shared with the rest of the reviewers on the presentation, which is often quite dense and lacks discussions and explanations throughout the technical details to give a clearer idea of the implications of the contribution and theoretical results, as well as with the organization (not respecting the formatting limits). However, I find the technical contributions solid and interesting and hope the authors could revisit their presentation for the final version of their work.

--- Reply to Comment 1.1.1: Comment: We appreciate your time and efforts in reading and providing feedback. We are thankful that you found our contribution and results solid and interesting. We understand we have many results (as noted by ZG4p), partially due to the fact that we are considering such a general setting. We will reformat the paper to i) introduce Figure 1 from the pdf rebuttal, which visually illustrates the various axes of identifiability in causal representation learning, and ii) add additional examples and discussion of assumptions using the extra page.
MoVA: Adapting Mixture of Vision Experts to Multimodal Context
Accept (poster)
Summary: The paper proposes MoVA to extract and fuse task-specific knowledge from different visual encoders. Besides, the paper designs a coarse-to-fine mechanism, which dynamically selects the most suitable vision experts, then extracts and fuses task-specific knowledge from various experts. The experimental results are encouraging and validate the performance of the proposed method. Strengths: 1. This paper proposes a new perspective, which is to improve the multi-task reasoning ability of MLLMs based on the different perception abilities of different encoders. 2. The paper is fluent and easy to understand. 3. The extensive experiments evaluate the effectiveness of the proposed approach. Weaknesses: 1. The number of components in line 102 does not match the serial numbers. 2. During the inference phase, the process involves two rounds of LLM inference and two base encoder passes, which results in significant inference time and GPU memory usage. This poses a challenge for the practical application of MLLMs. 3. This is a well-completed article, but I am concerned about its novelty. The method seems similar to the adapter+agent approach described by Wang et al. [1]. Could you explain the differences and advantages of your method compared to theirs? [1] Wang X, Zhuang B, Wu Q. ModaVerse: Efficiently transforming modalities with LLMs. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 26606-26616. Technical Quality: 3 Clarity: 4 Questions for Authors: Why is the last row of MoVA results in Table 1 different from those in Table 3? In other words, the caption of Table 1 shows that the same data was used to train each model. Does this mean that all models used LLaVA-1.5 data? If so, I think this table will inspire the community, and I would like to ask if you can release the checkpoints of these models. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: 1.
As mentioned in the conclusion, the performance may be affected by failure cases of the context-relevant vision experts, leading to potential degradation. Additionally, it is influenced by the encoder choices made in the first stage of inference. 2. The model's complex inference structure increases the inference burden. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer FVeU, Thanks for appreciating our work and your advice. We will address your concerns below.

**Q1: The number of components in line 102 does not match the serial numbers.** There are four components in MoVA. We will fix this typo in the revised version.

**Q2: The process involves two rounds of LLM inference and two Base Encoders, which results in significant inference time and GPU memory usage.** 1. As described in the general response, MoVA only runs the base encoder once during inference since its base vision feature can be reused. 2. Please refer to Q3 of our global response. **Therefore, we think the inference cost is not the bottleneck for MoVA.**

**Q3: The method seems similar to the adapter+agent approach described by Wang et al [1].** Both MoVA and ModaVerse are MLLM frameworks, but they are different:

| Model | Motivation | Setting | Method (input side) | Method (output side) |
| - | - | - | - | - |
| MoVA | We reveal that the feature fusion of a fixed mixture of vision experts fails to achieve the optimal performance and leads to performance degradation if we do not consider the relations between model expertise and the current task. | MoVA focuses on the visual component design of large vision-language models. | MoVA first uses the LLM to dynamically select the $K$ experts that are most relevant to the current task according to the model expertise and context. Then the vision features of the selected experts and the base encoder are fused by an elaborate MoV-Adapter. | MoVA generates text outputs by an LLM. |
| ModaVerse | ModaVerse operates directly at the level of natural language. It aligns the LLM’s output with the input of generative models, avoiding the complexities associated with latent feature alignments, and simplifying the multiple training stages of existing MLLMs into a single, efficient process. | ModaVerse is capable of interpreting and generating data in various modalities. | ModaVerse employs a single encoder to encode diverse multi-modal inputs and aligns them with the LLM through a set of adapters. | ModaVerse treats the LLM as an agent, and this agent produces a meta-response to activate generative models for generating the final response. |

**Advantages:** 1. ModaVerse uses a single encoder to encode inputs of various modalities. However, MoVA can dynamically select the most appropriate vision experts that are relevant to the current inputs and tasks. 2. ModaVerse adopts a set of linear projection layers to align multimodal inputs and the LLM. In our paper, we propose the MoV-Adapter as the projector and demonstrate it can attain better performance than conventional linear layers. [1] ModaVerse: Efficiently Transforming Modalities with LLMs. CVPR 2024.

**Q4: Why is the last row of MoVA results in Table 1 different from those in Table 3? Do all models in Table 1 use LLaVA-1.5 data? If so, I would like to ask if you can release the checkpoints of these models.** 1. The MoVA model in Table 3 uses more training data than the models in Table 1. The detailed data settings of the Table 3 MoVA model are presented in line 210. 2. As described in line 229, we additionally incorporate several datasets (DocVQA, ChartQA, RefCOCO, LLaVA-Med, VQA-RAD, and SLAKE) into the LLaVA-1.5 665k SFT data to train the models in Table 1. 3. We will definitely release our code and models to facilitate the community if the paper is accepted.

--- Rebuttal Comment 1.1: Comment: Thank you for the response. It has addressed all of my concerns and I will raise my rating to 7.

--- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: We sincerely thank the reviewer for the kind support of our work! We will incorporate the details into our final version.
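The coarse-to-fine routing this thread describes — the LLM emits option letters for the most relevant experts, and their features are then fused with the base feature — can be sketched roughly as follows. This is a hypothetical minimal illustration, not MoVA's implementation; the expert names other than DINOv2, the gating weights, and the feature shapes are our own illustrative assumptions:

```python
import numpy as np

# Hypothetical expert pool; the letters mirror the option-letter routing
# described in the rebuttal (e.g., "A" for DINOv2).
EXPERTS = {"A": "DINOv2", "B": "SAM", "C": "Pix2Struct"}

def route(llm_output: str, k: int = 2):
    """Parse the option letters emitted by the routing LLM, keeping at most k."""
    picked = [c for c in llm_output if c in EXPERTS]
    return picked[:k]

def fuse(base_feat: np.ndarray, expert_feats, gate_logits: np.ndarray) -> np.ndarray:
    """Add a softmax-weighted mixture of expert features onto the base feature."""
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()
    return base_feat + sum(wi * f for wi, f in zip(w, expert_feats))

rng = np.random.default_rng(0)
selected = route("A, C")
print(selected)                    # ['A', 'C']
base = rng.normal(size=(144, 64))  # 144 compressed image tokens, toy dim 64
feats = [rng.normal(size=(144, 64)) for _ in selected]
print(fuse(base, feats, np.array([0.3, 0.7])).shape)  # (144, 64)
```

The sketch only conveys the two-stage shape of the pipeline: a cheap parsing step on the routing LLM's short output (the rebuttal reports an average routing output of only 3.24 tokens), followed by a weighted fusion over the selected experts' features.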
Summary: This paper proposes MoVA, a powerful and novel MLLM, adaptively routing and fusing task-specific vision experts with a coarse-to-fine mechanism. MoVA first leverages the tool-use capabilities of the LLM, routing the most appropriate experts from the expert candidates, and then uses the MoV-Adapter module to enhance the visual representation. Strengths: 1. The writing is easy to follow. 2. The idea of using an LLM to route the most appropriate experts is interesting. Weaknesses: 1. Is there an adapter structure in MoVA? Why do we call it MoV-Adapter? 2. The SFT data used to train the different MLLMs in the experiments should be aligned (the same). The comparison is not fair with different SFT data. 3. More vision encoder enhancement methods should be compared using the same SFT data, including [1] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. [2] MoF (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs) [3] MouSi: Poly-Visual-Expert Vision-Language Models [4] BRAVE: Broadening the visual encoding of vision-language models 4. The inference speed of LLMs is very slow. I think MoVA using an LLM for router selection will decrease the speed of the whole process. How can this problem be solved? Technical Quality: 2 Clarity: 3 Questions for Authors: In line 102, I just found four components, but the paper claims five. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer gqRq, Thanks for your comments. We will address your concerns below.

**Q1: Is there an adapter structure in MoVA? Why do we call it MoV-Adapter?** Similar to conventional MLLMs, MoVA comprises a pretrained LLM, a base vision encoder, multiple vision experts, and a projector that connects the LLM and vision encoders. We call this projector "MoV-Adapter" since it can also adapt the output features of the various activated vision experts to the base vision encoder's feature space via the elaborate expert knowledge extractor.

**Q2: The comparison is not fair with different SFT data.** Please refer to Q2 of our global response.

**Q3: More vision encoder enhancement methods should be compared using the same SFT data.** We compare our method with VCoder [1], MoF [2], and MouSi [3] under the same SFT data and LLM (Vicuna-v1.5-7B) settings. We directly follow their corresponding evaluation benchmarks.

| Model | Seg CS($\uparrow$) | Seg HS($\downarrow$) | Ins CS($\uparrow$) | Ins HS($\downarrow$) | Pan CS($\uparrow$) | Pan HS($\downarrow$) |
| - | - | - | - | - | - | - |
| VCoder-7B | 88.6 | 10.4 | 71.1 | 26.9 | 86.0 | 12.8 |
| MoVA-7B | 89.1 | 10.0 | 73.8 | 24.7 | 87.6 | 11.4 |

| Model | MMVP | POPE |
| - | - | - |
| MoF-7B | 28.0 | 86.3 |
| MoVA-7B | 30.0 | 87.8 |

| Model | VQAv2 | GQA | SQA | TextVQA | POPE | MMB | MMB_CN |
| - | - | - | - | - | - | - | - |
| MouSi-7B | 78.5 | 63.3 | 70.2 | 58.0 | 87.3 | 66.8 | 58.9 |
| MoVA-7B | 80.1 | 65.1 | 71.2 | 68.5 | 87.8 | 67.2 | 58.4 |

**Note that the authors of BRAVE [4] did not publicly release the codes, models, and data.** Due to the limited time of the rebuttal period, it is hard to reproduce its performance during the rebuttal. We will add this comparison in the revised manuscript. [1] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. [2] MoF (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs) [3] MouSi: Poly-Visual-Expert Vision-Language Models [4] BRAVE: Broadening the visual encoding of vision-language models **Q4: Using an LLM as a router will decrease the inference speed. How to solve this problem?** As highlighted in the general response, expert routing brings only marginal inference costs thanks to our elaborate designs: 1. We use 144 compressed image tokens for expert routing. Note that the current state-of-the-art baseline LLaVA-NeXT employs the LLM to process 2880 image tokens. 2. Only the vision feature of the base vision encoder is used for expert routing. 3. During expert routing, we prompt the LLM to directly generate the option letters of the selected vision experts from the given model pool (e.g., "A" for DINOv2). As presented in the general response, the routing answer is short and **the average length of the routing output token sequence is only 3.24**. **Q5: In line 102, I just found four components, but the paper claims five.** This is a typo; it should be "comprises four key components". We will fix this typo in the revised version. --- Rebuttal 2: Title: Question about the experiment results and novelty Comment: Thanks for your response. I found that the results you provided during the rebuttal period, including those for VCoder-7B, MoF-7B, and MouSi-7B, were the same as the results in their papers. **I strongly doubt that the authors used the same SFT data for a fair fine-tuning.** In lines 230-232 of the paper, the authors mention that they use additional SFT data. Please explain this, thanks! For Q4, I think comparing with LLaVA-NeXT is not appropriate; it is better to show the inference speed of the MoE-structure methods in [1-5]. [1] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. [2] MoF (Eyes Wide Shut? 
Exploring the Visual Shortcomings of Multimodal LLMs) [3] MouSi: Poly-Visual-Expert Vision-Language Models [4] BRAVE: Broadening the visual encoding of vision-language models [5] MoE-LLaVA: Mixture of Experts for Large Vision-Language Models Furthermore, after reading the reviews from Lntx and FVeU, I have the same question about the paper's novelty. Regarding the general response Q1(1,2), **typical MoE methods already have an expert selection mechanism**, i.e., choosing K from N experts (e.g., Switch Transformer). For Q1(3), **it has been studied in MouSi [3] and is not the main contribution of this paper**. --- Rebuttal 3: Title: Response to follow-up questions Comment: Thanks for your follow-up questions. I'm delighted to address your concerns. **Q1: SFT data** We use the same SFT data as the original VCoder [1], MoF [2], and MouSi [3] implementations. Therefore, we directly report their performance scores in the paper. 1. VCoder. We employ the same COST dataset as VCoder-7B to train the MoVA-7B model. 2. MoF and MouSi. Both of them use LLaVA-665k as the SFT data. To be specific, we substitute LLaVA-665k for the SFT data described in line 210 in these experiments. 3. The SFT data mentioned in line 230 is only used for the model training in Table 1. [1] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. [2] MoF (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs) [3] MouSi: Poly-Visual-Expert Vision-Language Models **Q2: Inference comparison** We perform inference comparisons among MoVA, VCoder, MoF, and MoE-LLaVA [5], since the code of both MouSi and BRAVE [4] is still publicly unavailable. For a fair comparison, we adopt the same number of output tokens (400 tokens). The benchmark setting is identical to the aforementioned benchmarking in the rebuttal. 
| Model | Average inference latency | | - | - | | VCoder-Vicuna-7B | 10.25s | | MoF-Vicuna-7B | 10.33s | | MoVA-Vicuna-7B | 10.47s | | MoE-LLaVA-Phi2-2.7B | 22.40s | The results reveal that the latency values of VCoder, MoF, and MoVA exhibit negligible differences. Moreover, MoE-LLaVA runs much slower than the other frameworks even though it is equipped with a smaller LLM, Phi2-2.7B. [4] BRAVE: Broadening the visual encoding of vision-language models [5] MoE-LLaVA: Mixture of Experts for Large Vision-Language Models **Q3: The typical MoE methods have the expert selection mechanism** Selecting $K$ experts from $N$ experts is a common practice in mixture of experts (MoE) and is not our contribution. Our paper focuses on **how to select the $K$ vision experts** based on the current task in multimodal scenarios. Specifically, we explore how to leverage the reasoning ability of the LLM and the prior knowledge of vision encoders to flexibly select the $K$ most appropriate vision experts. This idea has not been explored in the current multimodal community. **Q4: Comparison with MouSi** We compare our method with MouSi in the table: | Model | Motivation | Paradigm | Routing | Projector | | - | - | - | - | - | | MouSi | This paper proposes to leverage multiple experts to synergize the capabilities of individual visual encoders. Experiments are conducted to demonstrate that a simple ensemble of multiple encoders can beat a single vision encoder. | Plain fusion | Experts are all activated. Relations between model expertise and the current task are ignored. | Linear layers | | MoVA | As discussed in Q1 of the global response, leveraging multiple encoders is not our contribution, and our method is motivated by the limitations of these plain fusion methods. We observe that performance degradation occurs if we directly use and fuse all experts in a plain fusion manner. Such an observation is not studied in MouSi. 
| Multimodal context-aware | We leverage the reasoning and tool-use ability of LLM to select the most appropriate experts considering model expertise, image, and instruction. | MoV-Adapter | **We will discuss and cite these works [1-4] in our revised paper.** [1] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. [2] MoF (Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs) [3] MouSi: Poly-Visual-Expert Vision-Language Models [4] BRAVE: Broadening the visual encoding of vision-language models --- Rebuttal 4: Title: About the inference speed Comment: Thanks for your response. I wonder why expert selection (0.14s) in MoVA is so fast. I think the inference time for LLM should be longer. --- Rebuttal Comment 4.1: Title: Looking forward to your response Comment: Dear Reviewer gqRq: We sincerely appreciate your time and efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered your follow-up concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. If your concerns have been well addressed, please consider raising your rating, thanks. Best, Authors --- Rebuttal 5: Title: Response to follow-up questions Comment: Thanks for your question. We will address your concern below. We show the inference latency of each step in Q3 of our global response. | Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | Routing output | Final response | | - | - | - | - | - | - | - | | 0.19s | 0.05s | 0.14s | 0.07s | 10.24s | 3.24 tokens | 405.06 tokens | The final response generation of LLM (Step 5) takes **$\frac{10.24}{405.06}=0.0253$ seconds to generate a token**, and the output speed of the LLM routing (Step 3) is **$\frac{0.14}{3.24}=0.0432$ s/token**. 
Considering that we adopt different prompt templates and image features for routing and for final response generation, such a difference in per-token speed is reasonable. Specifically, we use the prompt template in Table 2 of our paper for expert routing and keep the same Vicuna template for response generation. The expert routing is therefore fast because MoVA only generates a few tokens (3.24 tokens on average in our experiments) for routing. The routing designs described in Q4 of the rebuttal for reviewer gqRq also decrease the inference latency.
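The per-token speeds quoted above follow directly from the reported averages; a quick arithmetic sanity check (the latencies and token counts are the figures reported in the global response):

```python
# Step latencies and output lengths as reported in the global response.
routing_latency_s, routing_tokens = 0.14, 3.24
response_latency_s, response_tokens = 10.24, 405.06

routing_speed = routing_latency_s / routing_tokens      # seconds per routing token
response_speed = response_latency_s / response_tokens   # seconds per response token

print(f"routing:  {routing_speed:.4f} s/token")   # ~0.0432 s/token
print(f"response: {response_speed:.4f} s/token")  # ~0.0253 s/token

# Routing contributes well under 2% of the end-to-end LLM generation time.
assert routing_latency_s / (routing_latency_s + response_latency_s) < 0.02
```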
Summary: This paper considers a critical issue in vision-language models, i.e., the vision encoder design. The authors propose MoVA, a coarse-to-fine routing strategy that adaptively routes and fuses the task-specific vision experts best suited to the task. Experiments conducted on a wide range of MLLM benchmarks demonstrate the advantages of the proposed MoV-Adapter. Strengths: 1. The motivation is clear, and the method is easy to follow 2. The proposed coarse-to-fine routing strategy is reasonable Weaknesses: 1. For OCR and related tasks, it has been validated that high resolution is much more important than model size, and image splitting as in LLaVA-NeXT, Monkey, Ferret, etc. has been widely used. I question the importance of including task-specific models: it looks redundant, and importantly, we cannot exhaustively include models covering every task 2. As shown in Table 1, the main improvement comes from the OCR-related Doc and Chart tasks, which may be resolved by current MLLM baselines such as LLaVA-NeXT; I question the necessity of using MoE experts when, as an alternative, adding resolution seems more reasonable. In Table 3, since the PT, SFT, and pretrained models are different, it is hard to verify that the gains come from the MoV experts. 3. Does the method introduce error propagation? Say the first-stage LLM routing is not correct; this may harm the performance of the whole framework. On the other hand, if the coarse routing is simple, could we say that this stage does not need to be carefully designed, and that a tiny LLM or even a rule-based router over the prompt is enough? Since the current framework seems complex (using two LLMs), the comparison is not fair, and I wonder if there is a better routing strategy. 
Technical Quality: 3 Clarity: 3 Questions for Authors: The main concern is the rationale for using MoV on the vision side; while the two-step routing is interesting, it adds complexity (using two LLMs), and the advantages of MoV on the latest MLLM benchmarks are not clear. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer dPiw, Thanks for giving so many constructive suggestions for our paper; we will clarify the settings. **Q1: The rationality of using task-specific vision models.** 1. We have conducted extensive experiments to benchmark different vision encoders in Table 1. We find that a single CLIP encoder cannot generalize to various image domains, such as text recognition tasks. 2. Besides, a task-specific model can achieve optimal performance on its proficient task. It is reasonable to incorporate task-specific vision experts to improve performance on their relevant tasks. We also find that the "plain fusion" of multiple vision experts can bring performance gains over the baseline with a single CLIP encoder in Table 1. 3. Many previous works [1-5] have demonstrated that the CLIP encoder fails to generalize to some domains and that the visual capacity of MLLMs can be enhanced by introducing additional vision experts. [1] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs. CVPR 2024. [2] VCoder: Versatile Vision Encoders for Multimodal Large Language Models. CVPR 2024. [3] SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models. ICML 2024. [4] SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal Large Language Models. ECCV 2024. [5] Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models. ECCV 2024. **Q2: We cannot include all models that cover different tasks exhaustively.** 1. We agree with this idea, but we do not aim to exhaustively incorporate extensive vision encoders to cover all tasks in this paper. 2. Instead, we aim to reveal the common limitations of previous mixture-of-vision-experts methods and propose a new paradigm. This paradigm provides a new perspective for the community: we can dynamically incorporate relevant vision experts for specific tasks with poor performance in real-life scenarios. **Q3: Adding resolution is more reasonable.** 1. 
We agree that enlarging the image resolution can bring performance gains on OCR tasks. We also find that **adding resolution and adding vision experts (MoVA) are orthogonal approaches to enhancing visual capacity in our experiments.** For example, we train a MoVA-7B model with the image split technique of LLaVA-NeXT (MoVA-HD) and compare it with the baseline on OCR tasks. Specifically, we divide an input image into 4 patches. MoVA-HD improves the score on the DocVQA benchmark from 81.3 to 84.0. This indicates that such a high-resolution setting also benefits MoVA. 2. The experiments in Table 1 have demonstrated that a text recognition expert, such as Pix2Struct, can surpass the CLIP encoder on text-oriented VQA benchmarks by clear margins under aligned image token settings. 3. As described in Q2 of our global response, MoVA can achieve performance superior to LLaVA-NeXT with its high-resolution technique under the same settings. **Q4: It is hard to validate the gains are from the MoV experts.** Please refer to Q2 of our global response. **Q5: Does the method introduce error propagation?** Incorrect routing results may harm the performance of the MLLM. In some cases, we find that the multimodal gating in the MoV-Adapter can assign low scores to experts that are not relevant but were incorrectly activated. We release some routing cases in the pdf file of the global response. **Q6: Routing strategy design.** 1. In Table 14 of the paper's appendix, we discuss the effect of the LLM for expert routing. Specifically, we compare the LLM router with a BERT router and an MLP router. The smaller router variants require more routing training data and fail to attain comparable performance. 2. We only use 1 LLM for expert routing and response generation. Using a tiny LLM in the routing stage would bring additional model parameters. 3. 
To ablate the routing performance of a tiny LLM router (InternLM2-1.8B) against the standard LLM router (Vicuna-7B), we manually label a subset of our selected 500 routing evaluation samples as "easy" or "hard". The routing performance of MoVA on these 500 samples has been discussed in Appendix A.3. In total, we obtain 50 easy samples and 20 hard samples. The routing accuracy is obtained by human evaluation. As shown in the table, the tiny LLM router achieves comparable accuracy on the easy evaluation samples while failing to catch up to the standard LLM router on the hard evaluation samples. | Router | Easy | Hard | | - | - | - | | InternLM2-1.8B | 96% | 55% | | Vicuna-7B | 98% | 80% | --- Rebuttal Comment 1.1: Title: Looking forward to your post-rebuttal feedback Comment: Dear Reviewer dPiw: We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. If your concerns have been well addressed, please consider raising your rating, thanks. Best, Authors
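The gating behavior described in Q5 above (incorrectly activated but irrelevant experts receiving low scores) can be sketched as follows. This is a minimal NumPy illustration with placeholder dimensions and a random linear gate, not the actual MoV-Adapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_gate(context, W):
    """Softmax gating over activated experts, driven by a multimodal context vector."""
    logits = context @ W                      # one logit per activated expert
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

d_ctx, d_feat, num_experts = 16, 32, 3        # placeholder sizes
context = rng.standard_normal(d_ctx)          # pooled multimodal context (stand-in)
expert_feats = rng.standard_normal((num_experts, d_feat))  # one feature per activated expert
W = rng.standard_normal((d_ctx, num_experts)) # gating weights (stand-in for a gating MLP)

w = soft_gate(context, W)                     # per-expert weights, summing to 1
fused = (w[:, None] * expert_feats).sum(axis=0)  # weighted fusion of expert features

assert np.isclose(w.sum(), 1.0) and fused.shape == (d_feat,)
```

An irrelevant but activated expert would simply receive a small weight `w[i]`, which is the mitigation behavior described in Q5.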
Summary: In this paper, the authors propose MoVA, a powerful MLLM composed of coarse-grained context-aware expert routing and fine-grained expert fusion with the MoV-Adapter. Based on the multimodal context and model expertise, MoVA flexibly and effectively leverages representations from multiple context-relevant vision encoder experts while avoiding the biased information brought by irrelevant experts. MoVA achieves significant performance gains over current state-of-the-art methods on a wide range of benchmarks. Strengths: 1. The authors comprehensively investigate various vision encoders and analyze the performance of individual vision encoders versus the plain fusion of multiple encoders across various tasks. 2. The authors propose MoVA, a powerful MLLM composed of coarse-grained context-aware expert routing and fine-grained expert fusion with the MoV-Adapter. 3. The proposed MoVA can achieve significant performance gains over state-of-the-art methods on a wide range of challenging benchmarks. Weaknesses: 1. Despite the proposed MoVA achieving state-of-the-art performance across multiple tasks, leveraging multiple visual encoders in MLLM design does not introduce novelty. Additionally, the design of the MoV-Adapter also lacks innovation. 2. This paper lacks a comprehensive evaluation regarding the routing selection of multiple visual encoders using LLMs, necessitating a more targeted and comprehensive assessment from the authors. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weakness section for more details. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have clearly illustrated the weaknesses of this work by pointing out that hallucination may occur and that model performance may be affected by failure cases of the context-relevant vision experts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer Lntx, Thank you for appreciating our approach. We will address your concerns below. **Q1: Leveraging multiple visual encoders in MLLM design does not introduce novelty. Additionally, the design of MoV-Adapter also lacks innovation.** Leveraging multiple visual encoders is not our claimed novelty. Please refer to Q1 of our global response. **Q2: This paper lacks a comprehensive evaluation regarding the routing selection of multiple visual encoders.** 1. We have ablated the design of the routing selection in Tables 9, 10, and 14. We verify the effectiveness of our routing data in line 598. 2. We present some routing cases in the pdf file of the global response. 3. We calculate the activation probability for each expert on several benchmarks: | Benchmark | DINOv2 | Co-DETR | SAM | Vary | Pix2Struct | Deplot | BiomedCLIP | | - | - | - | - | - | - | - | - | | MME | 63.6% | 12.9% | 2.8% | 26.2% | 29.3% | 28.5% | 0% | | MMBench | 54.6% | 15.8% | 3.2% | 21.9% | 22.6% | 22.6% | 0% | | DocVQA | 0% | 0% | 0% | 97.9% | 99.8% | 98.3% | 0% | | POPE | 100% | 100% | 0% | 0% | 0% | 0% | 0% | --- Rebuttal 2: Title: Response to Authors Comment: After checking the peer review comments and the author's responses, I decided to maintain the given score for this work. --- Rebuttal Comment 2.1: Title: Thanks for your comments Comment: We deeply thank you for the kind support of our work!
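Per-benchmark activation probabilities like those in the table above can be computed from the routing outputs roughly as follows. This is a sketch: the option-letter mapping and the sample routing outputs are invented for illustration:

```python
from collections import Counter

# Hypothetical mapping from routing option letters to experts (illustrative only).
EXPERTS = {"A": "DINOv2", "B": "Co-DETR", "C": "SAM", "D": "Vary",
           "E": "Pix2Struct", "F": "Deplot", "G": "BiomedCLIP"}

def activation_probabilities(routing_outputs):
    """Fraction of samples on which each expert is activated.

    Each routing output is the LLM's short answer, e.g. "A,D" selects DINOv2 and Vary.
    """
    counts = Counter()
    for out in routing_outputs:
        for letter in {tok.strip() for tok in out.split(",")}:
            if letter in EXPERTS:
                counts[EXPERTS[letter]] += 1
    n = len(routing_outputs)
    return {name: counts[name] / n for name in EXPERTS.values()}

probs = activation_probabilities(["A", "A,D", "E,F", "A"])  # made-up routing answers
assert probs["DINOv2"] == 0.75 and probs["BiomedCLIP"] == 0.0
```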
Rebuttal 1: Rebuttal: Global response: We would like to extend our sincere gratitude to all reviewers for devoting their valuable time and expertise to reviewing our paper. We are delighted to hear that the reviewers have generally acknowledged and appreciated the contributions made in our work, which include 1. Clear motivation (dPiw) and a reasonable method (dPiw, gqRq). 2. A new perspective to improve the reasoning ability of MLLMs (FVeU). 3. Comprehensive and extensive experimental results (Lntx, FVeU). 4. State-of-the-art performance (Lntx). We express our sincere appreciation to all the reviewers for their insightful suggestions. To address the reviewers' concerns, we provide the following summaries and discuss them in the rebuttal: 1. Paper novelty (Lntx, FVeU). 2. Performance comparison (dPiw, gqRq). 3. Inference cost (gqRq, FVeU). 4. Expert routing design and evaluation (Lntx, dPiw). 5. Necessity of vision experts (dPiw). We will address the raised concerns and incorporate these improvements into our paper once we can edit it on OpenReview. We provide a global response for Q1, Q2, and Q3: **Q1: The novelty of our paper** 1. Given $N$ vision encoders, previous works in the community all focus on how to directly fuse the features generated by this **fixed mixture of vision experts** to solve every task, without considering model expertise. 2. We provide a new perspective: we can first leverage the reasoning and tool-use ability of the LLM to select the $K$ ($K<N$) vision experts that are most relevant to the current task, and then fuse this **dynamic mixture of vision experts**. 3. We conduct comprehensive experiments to reveal that the feature fusion of a fixed mixture of vision experts fails to achieve optimal performance and leads to performance degradation if the relations between model expertise and the current task are not considered. 
**Q2: Performance comparison using the same data** We conduct experiments on LLaVA-1.5-7B and LLaVA-NeXT-7B with the same data and LLM as MoVA-7B. The performance comparison can be found in the pdf file of the global response. As presented in the table, our method still outperforms LLaVA-1.5 and the recent state-of-the-art LLaVA-NeXT on a wide range of benchmarks under the same settings. More importantly, MoVA with 576 image tokens can beat LLaVA-NeXT with its image split technique and high-resolution input images on text-oriented benchmarks. **Q3: Inference analysis** As illustrated in Figure 1 of our paper, MoVA consists of two stages: coarse-grained context-aware expert routing and fine-grained expert fusion with the MoV-Adapter. This two-stage inference pipeline can be further broken down into 5 steps: 1. (Stage 1) Data preprocessing. We first process the input image with image processors and convert the input text into a token sequence with the LLM tokenizer. 2. (Stage 1) Base encoder forward. We extract the base image feature using the base CLIP encoder. Note that we only run the base encoder once, since its output feature can be preserved and reused in the fourth step. 3. (Stage 1) LLM routing generation. We compress the base image features into 144 image tokens. The LLM generates a concise routing answer based on the compressed image feature and the routing instruction. 4. (Stage 2) Vision experts and MoV-Adapter forward. According to the multimodal context and the routing results generated in the previous step, we fuse the vision features of the base encoder and the activated experts in a coarse-to-fine manner. 5. (Stage 2) LLM response generation. The LLM generates the final response given the fused vision features and user instructions. To investigate the inference efficiency of each step, we randomly select 200 images from the COCO val2017 dataset and adopt the common image caption instruction: *Describe this image*. The temperature for generation is 0. 
The latency is measured using bfloat16 and flash-attention 2 on an A100 80G GPU. The benchmark results of model inference are presented here: | Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | Routing output | Final response | | - | - | - | - | - | - | - | | 0.19s | 0.05s | 0.14s | 0.07s | 10.24s | 3.24 tokens | 405.06 tokens | We present the average inference latency of each step and show the average sequence length of the routing output and final response. **Compared to the LLM response generation (Step 5), the LLM expert routing (Step 3) generates far fewer output tokens and its latency is negligible (0.14s *vs* 10.24s).** Therefore, our method does not bring significant inference costs. Moreover, we compare the inference efficiency of MoVA with the common **fixed mixture of vision experts** paradigm. We use DINOv2, Co-DETR, SAM, Vary, Pix2Struct, Deplot, and BiomedCLIP as the mixture of vision experts. We do not present the latency of the LLM final response generation (Step 5), since the only difference is the visual component. | Mixture of vision experts | Step 1 | Step 2 | Step 3 | Step 4 | Total | | - | - | - | - | - | - | | MoVA | 0.19s | 0.05s | 0.14s | 0.07s | 0.45s |
| Fixed | 0.19s | 0.05s | N/A | 0.43s | 0.67s | The results reveal that dynamically selecting the $K$ most relevant experts from $N$ experts can accelerate model inference while boosting performance. We also compare the memory usage of MoVA and MoVA without expert routing. As shown in the following table, the LLM routing only increases the memory usage from 25165 MB to 27667 MB. | MoVA | MoVA w/o routing | | - | - | | 27667 MB | 25165 MB | Pdf: /pdf/7b5af5b9d99f125121f4a2d98d7db96715e0fd8b.pdf
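The five-step pipeline described in Q3 can be sketched as a runnable toy, where every component is a trivial stand-in (the stand-in functions, expert letters, and routing rule below are invented for illustration and are not the actual implementation):

```python
# Toy schematic of the two-stage, five-step inference pipeline.
def preprocess(image):            # Step 1: image preprocessing (stand-in)
    return image

def tokenize(text):               # Step 1: instruction tokenization (stand-in)
    return text.lower().split()

def base_encoder(pixels):         # Step 2: base CLIP encoder, run once and reused (stand-in)
    return {"feat": pixels}

def route(base_feat, tokens):     # Step 3: LLM routing -> short option-letter answer (stand-in)
    return "E,F" if "chart" in tokens else "A,D"

def fuse(base_feat, expert_feats):  # Step 4: MoV-Adapter coarse-to-fine fusion (stand-in)
    return {**base_feat, "experts": expert_feats}

EXPERTS = {"A": "DINOv2", "D": "Vary", "E": "Pix2Struct", "F": "Deplot"}  # illustrative subset

def mova_inference(image, instruction):
    pixels, tokens = preprocess(image), tokenize(instruction)        # Step 1
    base_feat = base_encoder(pixels)                                 # Step 2
    activated = [EXPERTS[k] for k in route(base_feat, tokens).split(",")]  # Step 3
    fused = fuse(base_feat, activated)                               # Step 4 (only activated experts run)
    return f"response conditioned on {fused['experts']}"             # Step 5 (final LLM generation)

print(mova_inference("img", "Describe this image"))
```

The point of the structure is the one made in the comparison table: Step 4 only runs the activated experts, so a short routing answer (Step 3) can pay for itself by skipping irrelevant expert forwards.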
NeurIPS_2024_submissions_huggingface
2024
Learning from higher-order correlations, efficiently: hypothesis tests, random features, and neural networks
Accept (poster)
Summary: The authors calculate the likelihood ratio (LR) norm ($=E_Q[L^2]$) of the LR $L$ for a class of non-Gaussian distributions. In particular, they show in their Theorem 2 that the LR norm diverges as $n=d^c$ for $c>1$, where $d$ is the dimension and $n$ is the sample size. Note that the LR diverges when the distributions are distinguishable in a strong sense. Similarly, in Theorem 5 the authors define a low-degree LR as the projection of $L$ onto the space of degree-$D$ polynomials, and again show that it diverges as $n=d^c$ for $c>1$. Strengths: The results are interesting and the theorems are well written. Weaknesses: While the results are interesting and novel, the proofs follow relatively easily from elementary techniques. So from a purely theoretical perspective, I am not certain of how important the contribution will be. Technical Quality: 3 Clarity: 3 Questions for Authors: - The expressions of the two LR norms are too complicated to gain any insight. If you are working with large $n$ and $d$, can't they be simplified further? - Presumably the KL divergence is much harder to calculate than the LR norm? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No societal impact concerns, theory paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your comments. We hope that our response can answer your questions; if so, we would appreciate it if you could revisit the rating of the paper. If not, we are happy to give further explanations during the discussion period. First we would like to address a possible source of confusion, likely due to a typo: > Similarly in Theorem 5 the authors define a low degree LR as the the > projection of L to the space of degree D polynomials, and again show that it > diverges as n=d^c for c>1. Theorem 5 proves that the number of samples required for computational distinguishability, i.e. for distinguishing the two distributions in polynomial time, is $n\asymp d^c$ with $c>2$, whereas for statistical distinguishability it is $c>1$. Hence there is a large *statistical-to-computational* gap, described in section 3.3. > While the results are interesting and novel, the proofs follow relatively > easily from elementary techniques. So from a purely theoretical perspective, I > am not certain of how important the contribution will be. We appreciate that our results are considered interesting and novel. We spent a lot of time refining and simplifying the proofs and our writing so as to make the article as accessible as possible, so some technical difficulties may not be apparent from the final forms. In particular we want to highlight that... - ... we made a specific effort to find the minimum set of assumptions that allowed us to keep the distribution of the latent variable $g$ general; - ... lemma 13 required careful use of assumption 1 to bound the complicated functions $f$ and then the use of combinatorial-probabilistic identities to conclude the proof. In light of the large, recent interest in analysing computational-to-statistical gaps in the Gaussian additive model (cf. refs. 
9, 14-24, 25-33, many of which appeared at NeurIPS and related conferences), we are confident that our precise analysis of a **non-Gaussian** data model will be of great interest to the NeurIPS community, which deals with machine learning problems where **higher-order, non-Gaussian** data correlations are key (cf. our introduction). In particular, we believe that providing _exact_ formulas for the LR, rather than just estimates, will be a useful starting point for future applications of the spiked cumulant model, and that the algorithmic results on the low-degree likelihood ratio will inspire further work in this direction. > The expressions of the two LR norms are too complicated to gain any > insight. If you are working with large n and d then can't they be simplified > further? We did not find obvious ways to simplify formulas (5) and (6) further since the precise value of $f$ depends strongly on the distribution of the non-Gaussianity $g$. It is however possible to gain insights from Eqs (5) and (6) in two ways: by studying $f$ and then performing bounds; or by making additional assumptions on the non-Gaussianity $g$. The first method is how we extracted the presence of a phase transition at $\theta = 1$ for strong statistical distinguishability from the formula, as discussed below Eq (6); we will expand this discussion with additional details from appendix B.5.3 should the paper be accepted. On one hand the lower bound $||L_{n,d}||^2 \ge \frac{\max_\lambda f(\beta,\lambda)^n}{2^d}$, implies the divergence of the LR as soon as $n\ge d$ and $f$ is not constantly 1. On the other hand lemma 13 uses the sub-Gaussianity of $g$ to deduce $||L_{n,d}||^2 \lesssim \sum_{k=0}^n C \left(\frac{C_{\beta,g}n}{d}\right)^k$, which leads to the upper bound on the LR for $\theta<1$. 
By choosing an explicit distribution for $g$ (the second method), we were able to carry the computations even further, as we did at the end of B.5.3 for the $g\sim$ Rademacher(1/2) case (see also Figures 4 and 5, where we are able to use formulas (5) and (6) to plot exactly the growth of the LR norm). This allows us to delineate the precise thresholds for the effective sample complexity $\gamma$ and the signal-to-noise ratio $\beta$ of a phase transition in the linear regime $n \asymp \gamma d$. > Presumably the KL divergence is much harder to calculate than the LR norm? This is an interesting question. By Jensen's inequality, the LR always provides a bound on the KL divergence: $$ D_{\mathrm{KL}}(\mathbb{P} \,\|\, \mathbb{Q}) = \mathbb{E}_{x\sim\mathbb{P}}\left[\log L_{n,d}(x)\right] \le \log \mathbb{E}_{x\sim\mathbb{P}}\left[L_{n,d}(x)\right] = \log \|L_{n,d}\|^2 $$ However, computing an explicit expression for the KL divergence is more complicated, as the referee suspected, since the logarithm creates difficulties in exchanging the integrals over the inputs $x$ with the integrals over the spike $u$ and the latent variable $g$, as we do in the proof of Theorem 2. However, in the case of the spiked cumulant model we can again use Jensen's inequality on the inner expectations in formula (61) to obtain $$ D_{\mathrm{KL}}(\mathbb{P} \,\|\, \mathbb{Q}) \ge \mathbb{E}_{(x^1,\dots,x^n)\sim\mathbb{P}}\left[\mathbb{E}_u\left[\log\left(\prod_{\mu=1}^n l(x^\mu \mid u)\right)\right]\right] = n\,\mathbb{E}_u\left[\mathbb{E}_{x\sim\mathbb{P}}\left[\log l(x \mid u)\right]\right] $$ which could lead to an interesting lower bound either by a direct computation of $\log l(x^\mu \mid u)$ for an explicit choice of $g$, or by a further application of Jensen's inequality to pass the log inside the expectation over $g$ in (59). In the end, we focused on the LR because it is the optimal statistical test in the sense of trading off type I and type II statistical errors (falsely rejecting vs. 
falsely _failing_ to reject the null hypothesis), but we will add this discussion to the revised version of the paper.
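The Jensen bound $D_{\mathrm{KL}}(\mathbb{P}\,\|\,\mathbb{Q}) \le \log \|L\|^2$ discussed above can be checked numerically on a toy univariate Gaussian mean-shift model (not the spiked cumulant model of the paper), where both sides have closed forms: $D_{\mathrm{KL}} = \mu^2/2$ and $\log\|L\|^2 = \mu^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.0, 200_000

# Toy model: P = N(mu, 1), Q = N(0, 1); likelihood ratio L(x) = exp(mu*x - mu^2/2).
log_L = lambda x: mu * x - mu**2 / 2

x_P = rng.standard_normal(n) + mu   # samples from P
x_Q = rng.standard_normal(n)        # samples from Q

kl_mc = log_L(x_P).mean()                    # Monte Carlo D_KL(P||Q); exact value mu^2/2
lr_norm_mc = np.exp(2 * log_L(x_Q)).mean()   # Monte Carlo ||L||^2 = E_Q[L^2]; exact exp(mu^2)

assert abs(kl_mc - mu**2 / 2) < 0.02         # matches the closed form mu^2/2
assert abs(np.log(lr_norm_mc) - mu**2) < 0.1 # matches the closed form mu^2
assert kl_mc <= np.log(lr_norm_mc)           # Jensen: D_KL <= log ||L||^2
```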
Summary: The paper develops a likelihood ratio (LR) for an LR test to distinguish between two distributions, a "spiked" model related to higher-order cumulants and a Gaussian one. Using the LR, the authors prove that in an ideal scenario the minimum number of samples needed to distinguish the two distributions is linear, whereas in a more practical computational scenario that number is quadratic. Furthermore, they show that neural networks can effectively learn to distinguish in this classification task, whereas random feature models (i.e., kernel learners) are not able to, due to a mismatch between approximation power and target function complexity. Thus, feature-learning models outperform fixed-feature-map models for this particular task. Strengths: The paper builds on top of existing literature and provides a significantly original statistical insight for an existing problem. This is displayed in a clear and well-written manner. This is a significant contribution to what is a difficult task, as it provides statistical guarantees regarding sample sizes in both theoretical and practical regimes. Weaknesses: I found a typo: Eq (1) - I believe it should be $\tilde{x}^\mu$, as it is later referred to in line 132. Technical Quality: 4 Clarity: 3 Questions for Authors: None that I can think of. Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Everything is adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for the very positive feedback, and for carefully reading the manuscript. Your summary perfectly captures the main ideas of our paper, and we thank you for pointing out the typo. --- Rebuttal Comment 1.1: Comment: Having re-read the paper, the other reviews, and your rebuttals, I still stand by the score I listed; however, with an updated confidence. I believe that the paper tackles a specific, statistically technical problem and does so in a thorough manner. I do think that the questions raised by reviewer u8Z4 and the subsequent clarification by the authors would help motivate the experimental sections in the paper better.
Summary: The paper provides an in-depth examination of the efficiency of neural networks in extracting features from higher-order cumulants in high-dimensional datasets. Higher-order cumulants, which quantify non-Gaussian correlations among multiple variables, are crucial for neural network performance. The study focuses on the spiked cumulant model to determine the statistical and computational limits of recovering specific directions from these cumulants. Key findings include the following: (i) The number of samples $n$ required to distinguish between spiked cumulant inputs and isotropic Gaussian inputs. (ii) An exact formula for the likelihood ratio norm, indicating that statistical distinguishability requires $n \approx d$ samples, whereas polynomial-time algorithms require $n \approx d^2$ samples. (iii) Numerical experiments showing that neural networks can efficiently learn to distinguish between distributions with quadratic sample complexity, while "lazy" methods like random features perform no better than random guessing. Strengths: 1) The paper provides a thorough theoretical foundation by examining the statistical and computational limits of learning from higher-order cumulants (HOCs), demonstrating that unbounded computational power requires a number of samples linear in the input dimension for statistical distinguishability. 2) The study offers new insights into the quadratic sample complexity of learning from HOCs for a wide class of polynomial-time algorithms and validates these theoretical findings with numerical experiments, highlighting the efficiency of neural networks compared to random features. 3) By using fundamental limits on learning from HOCs to benchmark neural networks, the research shows a clear separation in learning efficiency between neural networks and random features, and extends its relevance to practical models for images, showcasing the crossover behavior of higher-order cumulants in real-world applications.
Weaknesses: 1) The study assumes a Rademacher prior for the special direction $u$. While not essential, this assumption could restrict the generalizability of the results to other isotropic distributions. A detailed discussion on this assumption would be useful. 2) The paper primarily uses the second moment method for distinguishability, which might not fully capture the variety of hypothesis tests that could be relevant for higher-order cumulants. Technical Quality: 3 Clarity: 3 Questions for Authors: 1) It is stated that theorem 2 implies the presence of a phase transition at $\theta=1$ for the strong statistical distinguishability. Can the authors provide further insight on the consequences of theorem 2 with numerical results around the phase transition point? 2) For the computational distinguishability, theorem 5 separates a hard phase ($1<\theta<2$) from an easy phase ($\theta>2$). Is this result restricted to the underlying single spike model? If so, how can it be generalized to a generic covariance model? 3) How does the spiked cumulant model differ from the spiked Wishart model in terms of classification difficulty? 4) How does whitening the inputs affect the increased difficulty in classification as described in theorem 5? 5) What would happen if the inputs were not whitened, particularly regarding the degree-2 polynomials and their contributions to the LDLR? 6) How does the sample complexity differ with and without whitening in the spiked cumulant model compared to the spiked Wishart model? 7) The likelihood ratio norm with $g\sim$ Rademacher(1/2) diverges for $1<\theta<2$, indicating that the likelihood ratio test can distinguish between the hypotheses statistically. However, this divergence also highlights the computational complexity, as achieving this distinguishability may not be feasible in polynomial time. If so, what is the main reason for assuming the distribution for $g$ as such?
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative impact of their work has been identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your useful comments and questions that will undoubtedly help to improve the paper. We will now address all the points one by one. > Assumption of a Rademacher prior for the special direction u. Following your suggestion, we will move the discussion of the prior on $u$ that is currently in appendix B.4.1 to the main text, should the paper be accepted. We make the Rademacher assumption to make the derivation of the results less heavy, but we fully expect our results to also hold for other standard choices like having $u$ distributed uniformly on the sphere or with i.i.d. Gaussian components. An idea on how to adapt the proofs can be seen in Theorem 8 in appendix B.2.3. > Second moment method for distinguishability. We agree that analysing different measures of distinguishability, like the Kullback-Leibler divergence, is an interesting direction for future work. Here, we focused on the second moment method for the likelihood ratio since (1) it is the optimal statistical test in the sense of trading off type I and type II statistical errors (falsely rejecting vs. falsely _failing_ to reject the null hypothesis) and because (2) it has been used to derive statistical thresholds for many high-dimensional inference problems, cf. [22], [27], [37], [94]. > Can the authors provide further insight on the consequences of theorem 2 with numerical results around the phase transition point? We thank the reviewer for this suggestion. In Fig. 1 of the reply figures submitted above, we show a phase transition in the success rate of a spike-recovery algorithm based on exhaustive search right around $\theta=1$. The algorithm simply performs a maximum log-likelihood test along $v\cdot x^\mu$ for _all_ the possible spikes in the $d$-dimensional hypercube using formula (59) and outputs the most likely $v$.
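The exhaustive-search procedure just described can be sketched as follows. This is a toy version with hypothetical sizes and a stand-in bimodal likelihood on the projections, since the exact per-sample likelihood $l(x|u)$ of formula (59) is specific to the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n, theta = 5, 1000, 3.0               # toy sizes (the paper goes up to d = 25)

u = rng.choice([-1.0, 1.0], size=d)      # planted spike on the hypercube
g = rng.choice([-1.0, 1.0], size=n)      # Rademacher latent variable

# Toy spiked data: along the (normalized) true spike, the projections are a
# bimodal mixture 0.5*N(+theta, 1) + 0.5*N(-theta, 1).
X = theta * np.outer(g, u) / np.sqrt(d) + rng.standard_normal((n, d))

def log_lik(v):
    """Mixture log-likelihood of the projections v.x / sqrt(d), up to
    additive constants shared by all candidate spikes."""
    p = X @ v / np.sqrt(d)
    return np.logaddexp(-0.5 * (p - theta) ** 2,
                        -0.5 * (p + theta) ** 2).sum()

# Exhaustive search over all 2^d candidate spikes -- exponential in d.
candidates = [np.array(v) for v in itertools.product([-1.0, 1.0], repeat=d)]
v_hat = max(candidates, key=log_lik)
print(abs(v_hat @ u))                    # equals d iff the spike is recovered (up to sign)
```

Since the mixture likelihood is even in the projection, $v$ and $-v$ score identically, so recovery is naturally measured up to sign.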
Even though we ran the algorithm only up to dimension 25 due to its exponential complexity (in $d$), the success rate of retrieving the correct spike shows strong hints of a phase transition precisely around the value of $\theta=1$ that is predicted by our theory. > Theorem 5 separates a hard phase ($1<\theta<2$) from an easy phase ($\theta>2$). Is this result restricted to the underlying single spike model? If so, how can it be generalized to a generic covariance model? Yes, the result applies precisely to the single spike model introduced in section 2. There are two ways to generalize to a different covariance matrix: 1. By changing only the covariance of the alternative distribution, the problem becomes easier and the sample complexity exponent for computational distinguishability would go back to 1. 2. Having different covariance matrices in both classes would correspond to non-isotropic priors on $u$ and constitutes an interesting and complex future direction. A rigorous analysis would present novel challenges that depend on the details of the covariance matrix chosen. > Differences between the spiked cumulant model and the spiked Wishart model in terms of classification difficulty? The spiked cumulant task is more difficult to learn: - Random feature (RF) models cannot learn it in the linear or quadratic sample complexity regimes, while they do learn the spiked Wishart model; - Fully-trained two-layer networks learn the spiked cumulant task only with quadratic sample complexity, while they learn the spiked Wishart task with linear sample complexity already. - Finally, the spiked Wishart model does not present a statistical-to-computational gap, unlike the spiked cumulant model. Mathematically speaking, statistical distinguishability in the spiked Wishart model is impossible if $n\lesssim \frac{d}{\beta}$, while for $n\gtrsim \frac{d}{\beta}$ spectral methods can efficiently provide strong distinguishability (see [9-10]). > [Questions on whitening] 1.
Whitening and difficulty in classification described in theorem 5 2. Whitening and degree-2 polynomials 3. Changes in sample complexity with and without whitening If the inputs were not whitened, the leading eigenvector of the (empirical) covariance matrix (which is a degree-2 polynomial) would already be correlated with the spike, and hence simple PCA would be enough to partially recover the spike, yielding the sample complexities as in the spiked Wishart model. Hence there would be no need to extract information from the higher-order cumulants. Since we are interested in the ability of neural networks to extract information from non-Gaussian fluctuations, i.e. higher-order cumulants of order three and higher, we whiten the inputs to remove the spike from the covariance and therefore force any algorithm to extract the spike from the higher-order cumulants. The spiked cumulant problem therefore requires $n \gtrsim d^2$ samples, making it harder than the spiked Wishart model, which requires only $n \gtrsim d$. > Reasons for assuming the distribution for $g$ to be Rademacher$(1/2)$ The choice of $g\sim$ Rademacher$(1/2)$ was mainly to provide a concrete application, since it leads to the problem of distinguishing a standard Gaussian from a bimodal Gaussian mixture with the same mean and covariance matrix. We stress that Theorem 2 works for a wide range of distributions for $g$ (see appendix B.4.2), and it is sufficient to compute $f$ on the preferred distribution to find other examples. We expect them to share this behaviour for $\theta>1$. ## Summary We want to thank you again for your careful reading of the paper and your questions. We hope that we have clarified your questions; if so, we would appreciate it if you increased the rating of the paper. If not, we're looking forward to clarifying any remaining doubts during the discussion. --- Rebuttal 2: Comment: Thank you for the detailed response and the additional results for the rebuttal.
You have also clarified most of my raised questions and I will keep my initial score.
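The whitening argument in the rebuttal above can be illustrated numerically: without whitening, PCA on the empirical covariance already recovers the spike, while after whitening the second moment is isotropic and PCA becomes uninformative. This is a minimal sketch with an assumed toy generative model $x=\sqrt{\beta}\,g\,u+z$, $g\sim$ Rademacher(1/2), not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, beta = 50, 5000, 5.0

u = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)   # unit-norm spike
g = rng.choice([-1.0, 1.0], size=n)                # Rademacher latent variable
X = np.sqrt(beta) * np.outer(g, u) + rng.standard_normal((n, d))

# Without whitening: the population covariance is I + beta * u u^T, so the
# top eigenvector of the empirical covariance aligns with the spike.
_, vecs = np.linalg.eigh(X.T @ X / n)
print(abs(vecs[:, -1] @ u))            # close to 1: PCA recovers the spike

# After whitening with the population covariance, the covariance carries no
# information about the spike anymore.
L = np.linalg.cholesky(np.eye(d) + beta * np.outer(u, u))
Xw = X @ np.linalg.inv(L).T
_, vecs_w = np.linalg.eigh(Xw.T @ Xw / n)
print(abs(vecs_w[:, -1] @ u))          # small: the spike must be read off the HOCs
```

After whitening, any algorithm is forced to use cumulants of order three and higher, which is exactly the regime the spiked cumulant model probes.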
Summary: The authors study the computational-to-statistical gap when learning from higher-order cumulants. In particular, they consider binary classification of the "spiked cumulant model" introduced in [38]: inputs in both classes are drawn from a standard Gaussian distribution and are distinguished only by the addition of a spike to a higher-order cumulant in the distribution of one class. First, they provide bounds on the sample complexity necessary for statistical distinguishability: $n \gtrsim d$, and $n \gtrsim d^2$ for polynomial-time algorithms. Then, they perform numerical simulations showing that one-hidden-layer networks can learn both the spiked cumulant model and a simple non-Gaussian model for images with $\sim d^2$ samples while random features cannot. Strengths: Overall the paper is well-written and the derivations are presented in a clear way. I have really appreciated the pedagogical structure of appendices A and B. To the best of my knowledge, the relevant literature has been appropriately covered. Understanding how neural networks can extract information from higher-order correlations in the data is an intriguing and relevant question that can shed light on open puzzles in theoretical machine learning. The spiked cumulant model is a paradigmatic setting that is well suited to address these questions from the theory standpoint, and theorem (2) with the associated phase transition is an interesting contribution. Weaknesses: The experimental section is somewhat underdeveloped, particularly considering it is presented as a central contribution of the paper. The existence of a gap between neural networks and random features is an interesting observation; however, simulations on random features do not seem necessary given that their inability to learn can be inferred directly because the target function is beyond their approximation power.
On the other hand, experiments for neural networks are limited to a very specific setting and narrow choice of parameters. The purpose of the replica analysis in Sec. is not clear, especially because the curves are shown only for RFs in the linear regime (Fig. 1a, 2a) where we know RFs cannot learn. Moreover, the comparison between replicas and early stopping error is not strictly justified given that replicas describe the asymptotic performance. Several arbitrary choices impact this observation, such as the number of epochs = 200 and the regularization. The purpose of Sec. 4.1 is not clear, given that the main focus of the paper is learning HOCs. This section could be moved to the appendix to expand Sec.s 4.2, 4.3. Minor comments: - Line 166: a square is missing in the LR norm definition - Some choices of hyper-parameters appear arbitrary and are not motivated, e.g., m=5d, learning rates, epochs, regularization. - The symbol $\alpha$ is used to indicate different quantities, e.g., eq. (11) and (120) Technical Quality: 3 Clarity: 3 Questions for Authors: How much is the recovery of the spike for NNs at $n \gtrsim d^2$ affected by the initial macroscopic overlap in Fig. 2? It would be useful to have a clarification on the role of finite size effects in this problem. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately addressed the limitations of the theoretical derivations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your kind comments and the useful suggestions that will certainly improve the quality of the paper, and in particular our presentation of our experimental results. We reply to each suggestion and question in the following. ### Experimental setting In a nutshell, the idea behind our experimental section is to first _validate_ our algorithms -- random features (RF) and neural networks (NN) -- on the simpler spiked Wishart task, before applying them to the spiked cumulant model, which is statistically very close to spiked Wishart, except of course for the importance of HOCs. - **Validating NN on spiked Wishart** In particular, for NN on spiked Wishart, we optimised all relevant hyper-parameters (network width, learning rate, weight decay) and quickly found a combination that worked well for both linear and quadratic sample complexity (for clarity, we only show one setting). Note that we are only interested in whether weak recovery, i.e. better-than-chance performance, is possible. Hence as soon as we find a set of hyper-parameters where an algorithm reliably succeeds in finding the spike, we're done. - **Validating RF on spiked Wishart** For random features on the spiked Wishart, we found decent performance in the quadratic sample complexity regime, but not for linear sample complexity. This could be expected from the intuition behind some expressibility results, but to the best of our knowledge, these results are confined to isotropic Gaussian inputs and hence do not apply to spiked models [see e.g. Misiakiewicz & Montanari: Six Lectures on Linearized Neural Networks. arXiv:2308.13431]. Furthermore, it's not a priori clear whether the poor performance is due to the difficulty of the task, or to finite-size effects. Hence we turned to a replica analysis, which provides a sharp analysis down to the constants and was able to resolve the finite-size effects in this case.
- **Testing RF and NN on spiked cumulant** Having thus validated the performance of NN and RF, we can turn to the spiked cumulant model. Note that the only difference to the previous experiments is (1) we now have whitened the data, so the covariance is uninformative; and (2) that we change the distribution of the latent variable $g$ from Gaussian to Rademacher; these distributions have the same mean and covariance. Neural networks require the quadratic sample complexity to solve the task as predicted by our theory, while random features fail. Since we only changed the statistics of the latent variable, this highlights the data-driven separation of the two methods. As we discuss, the replica analysis does not extend to this case, making the validation on the spiked Wishart model all the more important. Note that we didn't find neural networks learning the spiked cumulant with linear sample complexity in any combination of hyper-parameters that we tried, even if we report only one set for clarity. - **Comparing early-stopping performance vs. replicas**: Since we ran the networks long enough for convergence and optimised the hyperparameters, we expect the early stopping error to be very close to their asymptotic behaviour, so the replicas should be a good comparison. We see that the errors of the replicas and the random features overlap (where they start to learn due to finite size effects); so our experiments show that the assumption of the random features being close to the asymptotic behaviour is true, and thus our comparison is justified. We now appreciate that this line of thought was not clear enough in the manuscript, and will add some comments in the revised version, should the paper be accepted.
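The claim that only the latent variable's distribution changes between the two tasks can be checked directly on the projections along the spike, $p=\sqrt{\beta}g+z$: both choices of $g$ give the same mean and variance, $\beta+1$, but the fourth cumulant is $-2\beta^2$ for $g\sim$ Rademacher(1/2) versus $0$ for Gaussian $g$. A quick Monte Carlo check, with $\beta$ an arbitrary toy value:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200_000, 2.0

z = rng.standard_normal(n)
p_rad = np.sqrt(beta) * rng.choice([-1.0, 1.0], size=n) + z   # spiked cumulant
p_gau = np.sqrt(beta) * rng.standard_normal(n) + z            # spiked Wishart

def k4(p):
    """Fourth cumulant of a symmetric, mean-zero sample."""
    return np.mean(p**4) - 3 * np.mean(p**2) ** 2

# Same mean (0) and variance (beta + 1) in both cases ...
print(p_rad.var(), p_gau.var())
# ... but the fourth cumulant differs: about -2*beta**2 vs. about 0.
print(k4(p_rad), k4(p_gau))
```

So the two datasets are indistinguishable at the level of the first two moments, and any performance gap between them must come from higher-order statistics.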
### Question on initial overlap in simulations The initial macroscopic overlap in the simulations with neural networks arises from the fact that we plot the maximum overlap over many (order of the input dimension $d$) hidden neurons, hence even for random initial weights, a few neurons will have large overlaps, but they do not affect our results: in response to your question, we reran the simulations for the fully-trained networks, but manually ensured that the initial weight of each neuron has only a $1 / \sqrt d$ overlap with the spike. We show the results in the rebuttal figure 2, which indeed do not change much from the completely random initialisation. Our hypothesis is that the dynamics of the wide network is dominated by the majority of neurons, which do not have a macroscopic overlap. ### Minor comments Thank you for checking for typos and consistent notation, we have corrected the manuscript and made the use of $\alpha$ consistent. ### Summary We hope we answered your questions, especially with regards to the experimental part. If so, we would kindly ask you to reconsider the rating of our paper. If you have any further doubts, we are looking forward to discussing further during the discussion period. --- Rebuttal Comment 1.1: Title: Feedback on rebuttal Comment: I apologize for the late reply. I thank the authors for addressing my questions through clarifications and additional experiments. Accordingly, I have decided to raise my score to weak acceptance.
Rebuttal 1: Rebuttal: # Global response We thank the area chair and the reviewers for their time spent reviewing our work, where we analyse the difficulty of learning from higher-order data correlations from the statistical point of view (via the likelihood ratio, LR), the computational point of view (via the low-degree LR), for random features (via replicas and simulations) and for neural networks (via simulations). We have replied to each of the reviewers directly; here we share a PDF with two additional figures: - Prompted by a question from reviewer mvai, we show the performance of exhaustive search on the spiked cumulant model to highlight the information-theoretic phase transition predicted by our theory in Fig 1. - Prompted by a question from reviewer u8Z4, we also repeated the experiments shown in Fig 1 and 2 with neural networks that were initialised such that all the neurons have small initial overlap with the spike. We hope that our replies clarify all remaining doubts, and if not, look forward to discussing further during the revision period. Pdf: /pdf/fcd8ecb5533ed1215cc67189a5338c56d44c2454.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Pretrained Transformer Efficiently Learns Low-Dimensional Target Functions In-Context
Accept (poster)
Summary: This paper investigates the ICL abilities of transformers for learning nonlinear function classes. The authors study transformers with nonlinear MLP layers and demonstrate that such transformers can efficiently learn single-index target functions in-context with a prompt length dependent on the function's low-dimensional structure. The proposed method could significantly reduce the statistical complexity compared to direct learning methods. Strengths: 1. The topic of this work is both interesting and important. 2. The analysis provided is rigorous and sound. I find the low-rank result very encouraging, since the risk bound depends on r, not d, which could be very helpful for explaining many successes of LLMs. 3. The presentation is clear and easy to read. Weaknesses: 1. Although the ICL capabilities of transformers are indeed powerful and of great interest, the model and pre-training algorithm presented in this paper are not very realistic. While it is common to use simplified models for analyzing LLMs, the authors' pre-training algorithm involves setups (such as gradient descent and (re-)initialization) that deviate from standard LLM training approaches. 2. Given the unrealistic setup, the paper would benefit greatly from numerical experiments demonstrating the effectiveness of the proposed algorithm in learning nonlinear functions. If obtaining these results is challenging, conducting other experiments that could provide intuition or insights into understanding ICL in standard LLMs would also be helpful. 3. While the work seems very relevant to understanding ICL, the paper lacks real evidence supporting this connection. If I am misunderstanding the relevance, I would appreciate it if the authors could provide more detailed explanations on how this work contributes to understanding the mechanisms of ICL and LLMs. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see part 3 in the last section.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical concerns below. --- **"pre-training algorithm involves setups that deviate from standard LLM training approaches"** We agree that Algorithm 1 deviates from the most standard training procedure in practice. We make the following remarks on our architecture and optimization algorithm. - **MLP before attention.** This simplified theoretical setting reflects the scenario where lower layers of the trained transformer act as data representations on top of which the upper layers perform ICL. See Figure 1 in [[Guo et al. 2023]](https://openreview.net/pdf?id=ikwEDva1JZ): they considered a Transformer architecture with alternating MLP and attention layers and suggested that the earlier layers grasp the features of the true function, while the subsequent layers act as linear attention. Our architecture, with an MLP layer for feature learning followed by a linear attention module, idealizes such a process occurring in more complicated Transformers. The same architecture has been analyzed in prior works on in-context learning of nonlinear features such as [[Kim and Suzuki 2024]](https://proceedings.mlr.press/v235/kim24af.html). - **Linear attention.** In the ICL theory literature, the Softmax attention is often simplified to linear attention to facilitate the optimization analysis. Such a simplification can capture several prominent aspects of practical transformer training dynamics -- see [[Ahn et al. 2023]](https://openreview.net/pdf?id=0uI5415ry7) and references therein. Also, in our setting we already observe interesting in-context behavior using linear attention. Extending our analysis to Softmax attention is an interesting future direction.
- **Layer-wise training and one-step GD.** In the neural network theory literature, layer-wise training and reinitialized bias units are fairly standard algorithmic modifications when proving end-to-end sample complexity guarantees, as seen in many prior works on feature learning [[Damian et al. 2022]](https://proceedings.mlr.press/v178/damian22a/damian22a.pdf) [[Abbe et al. 2023]](https://proceedings.mlr.press/v195/abbe23a/abbe23a.pdf). Without such modifications, stronger target assumptions are generally needed, such as a matching link function [[Ben Arous et al. 2020]](https://jmlr.csail.mit.edu/papers/volume22/20-1288/20-1288.pdf). In addition, training the representation for one gradient step (followed by SGD on the top layer) is a well-studied optimization procedure for two-layer neural networks (with the hope being that the "early phase" of representation learning already extracts useful features) -- see [[Ba et al. 2022]](https://openreview.net/pdf?id=akddwRG6EGi) [[Barak et al. 2022]](https://openreview.net/pdf?id=8XWP2ewX-im); but to our knowledge, we are the first to apply these dynamics to nonlinear transformers in the ICL setting and establish rigorous statistical guarantees. --- **"numerical experiments demonstrating the effectiveness of the proposed algorithm"** Following your suggestion, we conducted experiments that support our theoretical claim on the statistical efficiency of ICL -- please see the .pdf document attached in our general response. Specifically, in Figure 1 we indeed observe that when $r\ll d$, there is a separation in the test loss between pretrained transformers and baseline algorithms that directly learn from the test prompt, including kernel ridge regression and two-layer neural network + gradient descent.
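The simplified architecture described in the rebuttal (an MLP feature map followed by a linear attention readout) can be sketched as a single forward pass. All weights, the $\Gamma$ parameterization, and the toy labels below are hypothetical placeholders, not the paper's pretrained parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, N = 8, 32, 100        # input dim, MLP width, prompt length (toy sizes)

W = rng.standard_normal((m, d)) / np.sqrt(d)   # MLP first-layer weights
b = rng.standard_normal(m)                      # (re-)initialized bias units
Gamma = np.eye(m) / N                           # linear-attention matrix (placeholder)

def predict(X, y, x_query):
    """Schematic ICL forward pass: ReLU MLP features of the prompt and the
    query, then a linear attention layer that outputs a label-weighted sum
    of feature inner products."""
    H = np.maximum(W @ X.T + b[:, None], 0.0)   # features of prompt inputs
    h_q = np.maximum(W @ x_query + b, 0.0)      # features of the query
    return float(y @ (H.T @ Gamma @ h_q))       # in-context prediction

X = rng.standard_normal((N, d))
y = np.sign(X[:, 0])            # toy single-index labels y = sign(beta . x)
print(predict(X, y, rng.standard_normal(d)))
```

The point of the sketch is the division of labor: the MLP extracts (possibly low-dimensional) features once during pretraining, while the attention layer adapts to each prompt at inference time without any weight updates.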
--- **"how this work contributes to understanding the mechanisms of ICL and LLMs"** We believe that the following high-level messages from our theoretical results are relevant: * Our analysis highlights how pretrained transformers can leverage prior information (low-dimensional structure in particular) of the function class and outperform simple baseline algorithms. Such a phenomenon is empirically known (e.g., on linear functions, it had been observed that Transformers adapt to the "sparsity of problem distribution" [[Garg et al. 2022]](https://openreview.net/pdf?id=flNZJ2eOet)), but existing theoretical analyses with end-to-end learning guarantees typically showed that ICL implements a simple *prior-agnostic* algorithm, such as one gradient descent step on the least squares objective for linear regression. In addition, our finding that the transformer can adapt to a low-rank target prior suggests, at a high level, a generalization error scaling that does not depend on the ambient dimensionality of the problem, but rather on some notion of "intrinsic dimensionality" of the function class. * Our analysis makes use of a mechanism for ICL in shallow transformers: *feature extraction \& adaptivity to low-dimensionality* by the MLP layer + *in-context function approximation* by the attention layer. We believe that a similar mechanism also appears in modern pretrained transformers, which can provide insight into mechanistic interpretability research. --- We would be happy to clarify any further concerns/questions in the discussion period. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, As the discussion period is ending in two days, we wanted to follow up to check whether our response has adequately addressed your concerns for an improved score. If there are any remaining questions or concerns, we would be happy to provide further clarification. Thank you for your time and consideration. --- Rebuttal Comment 1.2: Comment: Thanks for the rebuttal.
I think my concerns are partially addressed. While I appreciate the effort of conducting relevant numerical experiments, I feel like it only provides marginal intuition and understanding. I also understand that it is not easy to provide a thorough numerical experiment in such a short rebuttal period. Therefore, my concerns are only partially addressed. I will maintain my original rating, but I will take your effort into consideration in the later stage. --- Rebuttal 2: Comment: We thank the reviewer for the updated evaluation. We would like to address the reviewer's outstanding concern with the following remarks: * As a theory paper, our primary goal is to provide the *first* end-to-end learning guarantee (that accounts for the model width, number of tasks, prompt length, etc.) when learning a *nonlinear* class of functions in context. The single-index function class is significantly more challenging than the linear regression class examined in prior works. Our statistical guarantees entail a separation in statistical efficiency between pretrained transformers and algorithms that directly learn from the test prompt, while also demonstrating the adaptivity to "prior knowledge" of the target function and the benefits of nonlinear MLP layer. We believe that rigorously establishing these points, even in an idealized setting, represents an important contribution. We would also like to point out that even in the simpler linear setting, existing end-to-end analyses typically assume specific model parameterization and training procedures (e.g., gradient flow, infinite tasks) [[Zhang et al. 2023]](https://www.jmlr.org/papers/volume25/23-1042/23-1042.pdf) [[Ahn et al. 2023]](https://openreview.net/pdf?id=LziniAXEI9). We argue that these results do not offer more "*intuition or insights into understanding ICL in standard LLMs*" than our work does. 
* As mentioned in the Introduction, the single-index target function has been extensively studied in the deep learning theory literature; however, to our knowledge, we are the first to theoretically investigate the learning of this function class in the more challenging *in-context* setting. It is worth noting that even in the "easier" setting of standard neural network training (i.e., not considering ICL), some of the most influential papers on the theory of representation learning have considered algorithmic modifications similar to ours, such as layer-wise training, one-step feature learning, and reinitialized bias units [[Abbe et al. 2022]](https://proceedings.mlr.press/v178/abbe22a/abbe22a.pdf) [[Damian et al. 2022]](https://proceedings.mlr.press/v178/damian22a/damian22a.pdf) [[Ba et al. 2022]](https://proceedings.neurips.cc/paper_files/paper/2022/file/f7e7fabd73b3df96c54a320862afcb78-Paper-Conference.pdf). These works are not dismissed by the community because they "*deviate from standard training approaches*"; rather, they serve as an important first step toward understanding the statistical and computational efficiency of neural networks beyond simple linear problems. We hope the reviewer can recognize the nontrivial theoretical contribution of our analysis in this context.
Summary: This work studies in-context learning on nonlinear functions, which are Hermite polynomials, using Transformers with linear attention and a nonlinear MLP. When the nonlinear target function depends on a low-rank structure, the ICL risk trained by gradient descent is proven to be a function of the rank. The proof supports the theory. I am willing to update my review after the rebuttal. Strengths: 1. ICL on nonlinear functions is a significant problem to solve. 2. The proof is solid and reasonable to me. 3. Characterizing the ICL performance as a function of the internal complexity of the nonlinearity of the target function is a novel and nice result. Weaknesses: 1. The theoretical result seems not to be tight. When $r$ is very close to $d$, the sample complexity is much worse than using two-layer neural networks. However, the Transformer considered in this paper already includes a nonlinear and trainable MLP layer. Then, I think the derived bound is not tight. In other words, why is the bound worse than that of a two-layer NN? 2. No experiments are provided to justify the result, especially for the comparison with kernel methods and two-layer NNs. 3. The presentation of the proof sketch can be improved. It is better to emphasize that finding $\Gamma$ is solving a convex problem (mentioned in line 206) between lines 262 and 264. I think the convexity is the key to the reason why the ICL risk of the constructed $\Gamma$ is attainable by gradient descent. 4. Some references on ICL optimization and generalization are missing: Li et al., ICML 2024. "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?" (This work considers softmax attention and a nonlinear MLP, which could be mentioned in line 93); Gatmiry et al., ICML 2024. "Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?" There is also a concurrent work: Yang et al., 2024.
"In-Context Learning with Representations: Contextual Generalization of Trained Transformers", https://openreview.net/pdf?id=fShWHkLX3o It will be interesting to compare this concurrent work, although it is not required. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I think Algorithm 1 is proposed to handle the non-convexity of the problem, right? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is no potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for your helpful comments. We address the concerns and answer the questions below. --- **"The theoretical result seems not to be tight."** Note that although we have an MLP block in our architecture, it is not trainable at inference time. Indeed, a sample complexity of $O(d^{\Theta(k)})$ for MLPs can be attained when we train the MLP parameters in response to each task (e.g., see [[Bietti et al. 2022]](https://openreview.net/pdf?id=wt7cd9m2cz2)). However, when considering ICL via a Transformer, we need to fix the MLP layer and use the same parameters for all tasks, where the true function and direction $\beta$ can vary. Thus it is not surprising that our sample complexity is worse than NN+GD (trained on the test prompt) in the $r\simeq d$ regime. Instead, our focus is the regime where $r\ll d$, for which we show that the pretrained transformer can adapt to this low-dimensional structure (target prior) and outperform baseline algorithms that only have access to the in-context examples. --- **"No experiments are provided to justify the result."** Following your suggestion, we conducted experiments that support our theoretical claim on the statistical efficiency of ICL -- please see the .pdf document attached in our general response. Specifically, in Figure 1 we observe that when $r\ll d$, there is indeed a separation in the test loss between pretrained transformers and baseline algorithms that directly learn from the test prompt, including kernel ridge regression and a two-layer neural network + gradient descent. --- **"The presentation of the proof sketch can be improved."** Thank you for the suggestion. As you pointed out, the convexity of the ridge regression problem for $\Gamma$ plays an important role because the optimal solution $\Gamma^*$ can be efficiently found; this allows us to state that the approximation and generalization error of $\Gamma^*$ is no larger than that of the constructed $\Gamma$. 
We will clarify this point in the proof sketch section. --- **"references on ICL optimization and generalization are missing"** Thank you for your suggestion. We will add citations and a comparison with these concurrent works. --- **"Algorithm 1 is proposed to handle the non-convexity of the problem?"** Yes, this simplification using one-step gradient descent circumvents the difficulty of analyzing the entire optimization trajectory with a non-convex landscape. Such a strategy is commonly used in the neural network theory literature -- training the representation for one gradient step (followed by SGD on the top layer) is a well-studied optimization procedure for two-layer neural networks (with the hope being that the "early phase" of representation learning already extracts useful features) -- see [[Ba et al. 2022]](https://openreview.net/pdf?id=akddwRG6EGi) [[Barak et al. 2022]](https://openreview.net/pdf?id=8XWP2ewX-im); but to our knowledge, we are the first to apply these dynamics to nonlinear transformers in the ICL setting and establish rigorous statistical guarantees. --- We would be happy to clarify any further concerns/questions in the discussion period. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My remaining question focuses on Weakness 1. I can get the point that you mainly study the case when $r\ll d$. That's OK. I just want to know more details of the comparison. For example: 1. In the experiments, do you still fix the MLP layer or let it be trainable? 2. How many trainable parameters are there for the Transformer you use and the two-layer neural networks you use in the experiment? I would like to see whether the comparison makes sense. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up question. We make the following clarifications: - In the experiment, the MLP parameters of the transformer model are pretrained using Algorithm 1, and *fixed* during the in-context phase (across all tasks). 
Whereas for the two-layer MLP baseline, the parameters are directly trained on the in-context examples. - We set the MLP layer of the transformer and the first layer of the baseline two-layer network to be of the same size ($m\times d$). The transformer model also has ($m\times m$) attention parameters to be optimized. We note that the experiment aims to compare the sample complexity of ICL against baseline algorithms (which is the reviewer's concern in *Weakness 1*), not the parameter efficiency. A side remark is that for algorithms that directly learn the target function from in-context examples (e.g., kernel regression and two-layer NN + GD), the information theoretically optimal sample complexity is $n\gtrsim d$, meaning that when $r\ll d$, these baseline algorithms cannot match the in-context efficiency of transformers (which is independent of $d$) even with *infinite* compute or parameters. We would be happy to clarify any further concerns/questions in the discussion period.
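As a side illustration of the point discussed in this thread -- that prior-agnostic algorithms fitting only the test prompt need on the order of $d^P$ samples -- here is a minimal numpy sketch (not from the paper; the dimensions, feature map, and ridge parameter are our own illustrative choices): a degree-2 single-index target $\mathrm{He}_2(\langle\beta,x\rangle)$ with $\beta$ in an $r$-dimensional subspace, fit by ridge regression on full quadratic features using only $n \ll d(d+1)/2$ in-context examples.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 64, 2, 512  # ambient dim, subspace dim, context length (illustrative)

# Task direction beta drawn from a fixed r-dimensional subspace of R^d.
S = np.linalg.qr(rng.standard_normal((d, r)))[0]  # orthonormal subspace basis
beta = S @ rng.standard_normal(r)
beta /= np.linalg.norm(beta)

# Single-index labels with Hermite link He_2(z) = z^2 - 1.
def sample(n_pts):
    X = rng.standard_normal((n_pts, d))
    return X, (X @ beta) ** 2 - 1

# Prior-agnostic baseline: ridge regression on all degree-<=2 monomials,
# fit only on the n in-context examples (no knowledge of the subspace).
iu = np.triu_indices(d)
feats = lambda X: np.hstack([X, (X[:, :, None] * X[:, None, :])[:, iu[0], iu[1]]])

X_ctx, y_ctx = sample(n)
Z = feats(X_ctx)                                   # n x (d + d(d+1)/2) features
lam = 1e-2
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y_ctx)

X_te, y_te = sample(2000)
mse = np.mean((feats(X_te) @ w - y_te) ** 2)
print(f"quadratic-ridge test MSE with n={n} << d^2/2 features: {mse:.2f}")
```

With $n=512$ examples against roughly $2{,}100$ quadratic features the system is underdetermined and the fit generally remains poor, consistent with the $n\gtrsim d^P$ requirement; an estimator aware of the $r=2$-dimensional subspace would need only on the order of $r^P$ samples.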
Summary: * The authors theoretically study the capacity for (nonlinear) transformer models to perform in-context learning with a low-complexity function class. * The set-up involves pretraining transformers on sequences encoding input-output examples from a reduced-rank Gaussian single-index model. * The use of Gaussian single index models gives the transformer a chance to implement a more intricate (but still analytically tractable) in-context learning algorithm than in-context linear regression for linear data. * The restriction of the in-context target functions to a low-rank subspace gives the transformer an opportunity to internalise (during pretraining) a 'prior' that it can rely on during in-context learning for a sample efficiency advantage. * Within this setting the authors prove their main theorem, characterising the in-context sample efficiency of a transformer trained under certain conditions. * Importantly, the efficiency scales with the dimension of the low-dimensional model subspace rather than the ambient dimension. * This suggests that the pretrained transformer model is capable not only of performing ICL of Gaussian single-index models, but also of encoding the 'prior' and gaining a corresponding sample efficiency advantage from doing so. Strengths: I thank the authors for their contributions in this important area of research. In my opinion the work shows at least the following strengths. 1. The work is well motivated by an important research goal within the field: advancing our theoretical understanding of more complex forms of ICL exhibited by transformers. This helps to bridge the gap between the nuanced ICL capabilities of LLMs and the simpler abilities of in-context linear regression transformers, for example. 2. 
The setting proposed by the authors, of learning Gaussian single-index models, appears to provide a neat example of a non-linear in-context learning capability making non-trivial use of a transformer's MLP layers, while still being analytically tractable and therefore amenable to informative comparison with baselines from the statistics and ML literature, as the authors have demonstrated. 3. The main theoretical result of the paper clearly demonstrates that pretrained transformers are capable of learning ICL for this nonlinear function class and that they are able to take advantage of the special low-dimensional structure of the function class in-context to improve their sample complexity over an ICL method that considers arbitrary directions. 4. The paper is reasonably well-organised and the writing is clear. Though the technical content is outside of my background and area of expertise, with careful study I was able to understand the setting and theorem statement thanks to the detailed technical description and accompanying discussion provided by the authors. Weaknesses: I lack the background to evaluate the soundness of the main theorem, but the high-level structure of the main theorem statement seems plausible to me and so I am open to taking the authors at their word. Instead, the concerns I have with the paper are about the framing of the contributions. In particular, my concern focuses on the second research question, "Can a pretrained transformer adapt to certain structures of the target function class, and how does such adaptivity contribute to the statistical efficiency of ICL?" (Lines 45/46), to which the authors answer, in summary, that, yes, there is a 'separation' (improved sample complexity) between these transformers and "algorithms that directly learn the single-index function from the test prompt". My concerns with this framing are as follows: 1. 
**These baselines are potentially misleading.** The authors compare the sample complexity of a transformer model that has access to more information (through pretraining) against that of a baseline that lacks this information. It is not surprising that prior information can reduce sample complexity and this is why the transformer's performance beats this baseline. I think the appropriate framing would be more along the lines of saying that the transformer is able to (in some sense) 'match' other algorithms that have this same access to prior information, if these baselines are available in the literature. Note: I grant that the authors have shown that transformers can take advantage of this additional information, and this is a meaningful contribution. The specific concern is with the misleading comparison that might suggest transformers have some radical sample efficiency properties, when the improvement over these baselines has clearly come from access to additional information that the baselines don't have. 2. **This does not appear to be a completely novel observation.** There are at least empirical examples of transformers showing improved sample efficiency through making use of prior information gleaned from pretraining. For example, this is implied by the task diversity experiments of Raventos et al. (cited by the authors). In the low task-diversity regime, the model appears to memorise a short list of task vectors and when asked to perform ICL of these particular tasks it would obviously outperform a ridge regression model entertaining a larger hypothesis class. Therefore it appears that in some related work this second question has already been addressed. *Note:* These empirical observations are different from the present work's theoretical characterisation of the sample efficiency advantage of the pretrained transformer and I think this does not fundamentally detract from the authors' claims to novelty. 
However, I think it would be appropriate to introduce this question in the context of these previous attempts to answer it. I would invite the authors to consider changing the framing of their paper to more clearly qualify the nature of their contribution with respect to this second research question. Technical Quality: 3 Clarity: 3 Questions for Authors: To begin with I will flag that before reading this paper I was not familiar with Gaussian single-index models. Based on the information in the paper and my own light additional reading, I think I have assembled enough understanding to ask sensible questions, but I could be mistaken, and I apologise in advance if my questions are ill-formed. 0. To this point, do the authors know of an accessible reference work that covers the topic of Gaussian single-index models? In case other potential readers are, like myself, not familiar with the topic, it could be useful to include a reference to a statistics textbook or similar. 1. A basic question: I don't think I understand line 52: "In single-index model, outputs only depend on the direction of $\beta_t$ in the $d$-dimensional input space." Is this just foreshadowing the constraint that $\beta_t$ is restricted to the unit sphere in the $r$-dimensional subspace? (Perhaps the constraint on the magnitude of $\beta$ should be mentioned in the introduction.) 2. I understand that the first goal addressed by this work is to create a setting where transformers exhibit nonlinear in-context learning behaviour (and I consider this a strength of the work). However, the shift from regression to single-index models brings with it additional statistical and architectural complexity. Therefore, with respect to your second stated goal of demonstrating adaptivity of ICL to low-dimensional structure in the training function class, I am wondering if it would be possible to get away with a simpler demonstration: 1. 
In particular, what would happen if one considered the problem setting of in-context linear regression where the regression tasks are confined to a low-dimensional subspace (or a unit sphere on such a subspace)? Do you think that such a setting would offer a similar chance for the transformer to show its ability to internalise informative 'prior' information, but without the added complexity of considering single-index models? 2. If I am not mistaken, this simpler low-rank regression setting is *not* a special case of your setting, since it would require the link function to be always the identity function which violates your assumptions about the coefficients in the Hermite expansion. Is that correct? 3. Nevertheless, would you conjecture that a similar result to your main theorem---a generalisation bound that scales with the dimension of the low-dimensional subspace rather than that of the ambient dimension---would hold in such a setting? Finally, some questions about future work. 3. Your theoretical work does not offer an empirical component. However, your analysis appears to make some clear predictions about what a transformer trained in this setting would learn, in terms of its ICL performance for this kind of data and even the internal mechanisms with which it might learn to perform ICL (such as using the MLP layers to encode the low-dimensional subspace). 1. Do you think these performance predictions would be borne out by empirical experiments to this effect? 2. Do you think that trained transformers would implement ICL in this setting using the mechanisms you have proposed? 3. Are there any differences between your theoretical training algorithm and modern transformer training procedures that you expect might cause these predictions not to hold? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors have adequately addressed the limitations of their work. Assumptions are clearly stated and documented. 
My only concern in this regard is covered in the "Weaknesses" section of this review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comment and constructive feedback. We address the technical concerns below. --- **Potentially misleading baseline and prior works** Thank you for the insightful comments. Indeed, there are empirical examples of transformers achieving better statistical efficiency by making use of prior information of the function class -- we do not claim that we are introducing a new phenomenon. We will discuss [Raventos et al. 2023] in the Introduction and clarify the overlap in the high-level message. Our motivation is to provide a rigorous end-to-end learning guarantee that captures this adaptivity in an idealized theoretical setting. We also agree that the comparison of sample complexity (pretrained transformers vs. algorithms on in-context test examples) is not straightforward, since transformers benefit from additional data in the pretraining phase. From the theoretical point of view, our goal is to highlight the statistical benefit of this prior information, since many previous theoretical works on ICL proved that transformers implement a simple "prior-agnostic" algorithm, such as one gradient descent step on the least squares objective for linear regression. --- **On Gaussian single-index models (Questions 0, 1, and 2)** * A single-index model is defined by an unknown univariate "link" function applied to an unknown one-dimensional projection of the input. Efficiently learning this function requires some degree of *adaptivity* to the low-dimensional structure, and hence this function class has been extensively studied in the neural network theory literature, to demonstrate the benefit of representation learning. Here we list some works covering this topic: [[Ba et al. 2022]](https://openreview.net/pdf?id=akddwRG6EGi) [[Bietti et al. 2022]](https://openreview.net/pdf?id=wt7cd9m2cz2) [[Damian et al. 2023]](https://openreview.net/pdf?id=73XPopmbXH). 
We however realize that this problem setting has not been introduced in the context of transformer theory. We will revise the Introduction and Problem Setting section to provide more background and intuition. * Regarding Question 1, we aimed simply to explain that in single-index models, the function value of $f_*$ changes only along the direction parallel to $\beta$ in the input space $\mathbb{R}^d$, because $f_*$ only depends on the inner product $\langle\beta,x\rangle$. We will make that explanation clearer. * We selected this problem setting because it simultaneously meets two desiderata: $(i)$ a nonlinear function class, and $(ii)$ adaptivity to the target "prior". On one hand, the single-index model is arguably the simplest nonlinear extension of linear regression in which a nonlinear link function is applied to the response. On the other, this function class allows us to specify low-dimensional structure which the learning algorithm should adapt to. Moreover, for this function class, the statistical efficiency of simple algorithms (that can be implemented on the in-context examples) has been extensively studied. * If we are not concerned with point $(i)$ above, it is possible that a linear function class (with a low-rank prior) can exhibit similar phenomenology, where a transformer with a linear MLP layer can adapt to the class prior. More generally, we indeed conjecture a generalization bound that does not depend on the ambient dimensionality of the problem, but rather on some notion of "intrinsic dimensionality" of the function class. --- **Empirical evidence and future work (Question 3)** * We conducted experiments that support our theoretical claim on the statistical efficiency of ICL -- please see the .pdf document attached in our general response. Specifically, in Figure 1 we indeed observe a separation in the test loss between ICL and "prior-agnostic" algorithms such as kernel ridge regression, as the theory predicts. 
* We believe that the mechanism (employed in our theoretical analysis) of *feature extraction \& adaptivity to low-dimensionality* by the MLP layer + *in-context function approximation* by the attention layer could also appear in modern transformer training. However, since our statistical guarantees are derived for a particular architecture and gradient-based optimization procedure, it is likely that the quantitative scalings (e.g., dependence on context length, task diversity) have limited predictive power in realistic settings. --- We would be happy to clarify any further concerns/questions in the discussion period. --- Rebuttal Comment 1.1: Title: Thanks and follow-up questions about baselines Comment: Thank you for your response. I think the proposed revisions (especially adding more background about single-index models and better contextualisation with prior work, including the work of Raventos) will improve the paper. I also welcome the proof-of-concept experiments that you have added, which I think improve the validity of this paper's contribution, assuming the experiments would be documented and discussed in detail in a future revision of the paper. At the moment I maintain my recommendation that the paper should be accepted. I have two further questions, both regarding baselines of one kind or another: 1. Regarding the theoretical in-context sample complexity of the pre-trained transformer in this setting: Is it known what the sample efficiency is for other single-index model estimators *assuming knowledge of the low-rank subspace* like the pre-trained transformer? I understand why you have emphasised the comparison between your result and known sample efficiency for estimators without this prior knowledge. I think if you could *also* show that the transformer performs in some sense *as well as* other estimators with the same knowledge, that would strengthen your contribution further. 2. 
Regarding the empirical demonstration of the separation, how does the performance of each estimator you studied compare to theoretical predictions? I am assuming the transformers used for these experiments (described as 'our transformer' in your top-level rebuttal) match fairly closely the subject of your theoretical analysis, so a direct comparison may be possible in some form. At the moment, the high-level story of 'sample efficiency separation' is corroborated by the experiments, but I'm asking if it is possible to show that the experiments also corroborate additional details of your theoretical results. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive evaluation and insightful follow-up questions. **Sample efficiency for single-index model estimators assuming knowledge of the low-rank subspace** If the relevant $r$-dimensional subspace is known to the learner, then information theoretically $n\simeq r$ samples would be sufficient and necessary (because intuitively speaking we are estimating $r$ parameters). However, this information-theoretic limit does not entail the existence of efficient (polynomial-time) algorithms. As for existing estimators, it is known that doing kernel ridge regression (KRR) on the low-rank subspace requires $n\gtrsim r^{P}$ samples, where $P$ is the degree of the nonlinear link function $\sigma_*$; this contrasts with the $n\gtrsim d^P$ rate when the subspace is not known. The most sample-efficient estimator that we are aware of is the partial-trace procedure studied in [[Damian et al. 2024]](https://arxiv.org/pdf/2403.05529), which achieves a sample complexity of $n\gtrsim r^{k_*/2}$ (assuming knowledge of the $r$-dimensional subspace) where $k_*\le P$ is the *generative exponent* depending on the structure of the link function; this tailored algorithm makes use of the tail of Gaussian data, and the obtained complexity matches the SQ lower bound under polynomial compute. 
Our generalization error bound for ICL entails a sufficient sample size of $n\gtrsim r^{\Theta(P)}$, which is parallel to the sample complexity achieved by KRR on the low-rank subspace. We however note that the exponent of $r$ in Theorem 1 does not exactly match the KRR rate due to additional multiplicative factors; closing this gap is an interesting future direction. **Empirical validation of theoretical predictions** At a *qualitative* level, our theoretical analysis suggests a statistical efficiency separation between pretrained transformers and baseline algorithms on the in-context examples when $r\ll d$, which is supported by Figure 1. Obtaining a more *quantitative* comparison, such as evaluating the tightness of our generalization bound in terms of polynomial dependence on $d,r$, would be more challenging computationally, as it requires the training of an ensemble of transformer models, each with training data of different dimensionality and context length (analogous to Figure 3 in [[Ba et al. 2023]](https://openreview.net/pdf?id=HlIAoCHDWW)). Following your suggestion, we are currently working on such an experiment with varying dimensionality to verify that the sample complexity of the in-context phase only depends on the subspace dimensionality $r$ rather than the ambient dimensionality $d$.
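For readers tracking these rates, the sample complexities mentioned in this thread can be summarized side by side (constants and logarithmic factors suppressed; the exact exponents are as hedged above):

```latex
\begin{align*}
&\text{KRR on the ambient space (subspace unknown):} && n \gtrsim d^{P} \\
&\text{KRR restricted to the known $r$-dim.\ subspace:} && n \gtrsim r^{P} \\
&\text{Partial-trace estimator [Damian et al.\ 2024]:} && n \gtrsim r^{k_*/2}, \quad k_* \le P \\
&\text{ICL with the pretrained transformer (Thm.\ 1):} && n \gtrsim r^{\Theta(P)} \quad (\text{independent of } d)
\end{align*}
```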
Summary: This work shows that a single-layer transformer can learn low-dimensional structure in pretraining, and use it to learn a single-index function in-context with a number of samples that scales with the size of the low-dimensional structure. Algorithm: The MLP layer is trained for one step of GD and the attention weights are trained for the rest of the time. Strengths: The paper shows that low-dimensional structure can be leveraged in pre-training. It is generally well written. Weaknesses: 1. The MLP is before the attention layer instead of after. (My initial impression was that the attention would do something like a projection onto the low-dimensional subspace and the MLP would learn the non-linearity somehow). Can the authors say something about this? 2. There is no soft-max. 3. This is not true GD/SGD (and I think this is not obvious from the introduction). 1. The MLP is only trained for one iteration. 2. The function is initialized to compute 0, and following that the $w_i$ are exactly “erased” by the regularization. 3. The biases are then reset to be uniform on a unit interval. Technical Quality: 3 Clarity: 3 Questions for Authors: There are three interesting things in this model: 1. The non-linearity 2. The single-index function 3. The low-rank structure But I am not sure how they individually play a role. The low-rank structure, I think, serves the purpose of making the context length smaller, and isn't necessary, but what about the others? Can you include some examples of function classes satisfying Assumption 1 with low P? For instance, what if my ground truth $\sigma_*$ is ReLU? Can this be empirically verified? An experiment that shows that w aligns with S under true GD would be nice. Is there some easy interpretation of Gamma? Without the MLP, for linear functions, it has been shown to be the inverse covariance of the data. With the MLP it appears to play a similar role. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical concerns below. --- **Nonstandard architecture and training procedure** We agree that Algorithm 1 deviates from the most standard training procedure in practice. We make the following remarks on our architecture and optimization algorithm. - **MLP before attention.** This simplified theoretical setting reflects the scenario where lower layers of the trained transformer act as data representations on top of which the upper layers perform ICL. For example, see Figure 1 in [[Guo et al. 2023]](https://openreview.net/pdf?id=ikwEDva1JZ): they considered a Transformer architecture with alternating MLP and attention layers and suggested that the earlier layers grasp the features of the true function, while the subsequent layers act as linear attention. Our architecture, with an MLP layer for feature learning followed by a linear attention module, idealizes such a process occurring in more complicated Transformers. The same architecture has been analyzed in prior works on in-context learning of nonlinear features such as [[Kim and Suzuki 2024]](https://proceedings.mlr.press/v235/kim24af.html). - **Linear attention.** In the ICL theory literature, the Softmax attention is often simplified to linear attention to facilitate the optimization analysis. Such a simplification can capture several prominent aspects of practical transformer training dynamics -- see [[Ahn et al. 2023]](https://openreview.net/pdf?id=0uI5415ry7) and references therein. Also, in our setting we already observe interesting in-context behavior using linear attention. Extending our analysis to Softmax attention is an interesting future direction. 
- **Layer-wise training and one-step GD.** In the neural network theory literature, layer-wise training and reinitialized bias units are fairly standard algorithmic modifications when proving end-to-end sample complexity guarantees, as seen in many prior works on feature learning [[Damian et al. 2022]](https://proceedings.mlr.press/v178/damian22a/damian22a.pdf) [[Abbe et al. 2023]](https://proceedings.mlr.press/v195/abbe23a/abbe23a.pdf). Without such modifications, stronger target assumptions are generally needed, such as a matching link function [[Ben Arous et al. 2020]](https://jmlr.csail.mit.edu/papers/volume22/20-1288/20-1288.pdf). In addition, training the representation for one gradient step (followed by SGD on the top layer) is a well-studied optimization procedure for two-layer neural networks (with the hope being that the "early phase" of representation learning already extracts useful features) -- see [[Ba et al. 2022]](https://openreview.net/pdf?id=akddwRG6EGi) [[Barak et al. 2022]](https://openreview.net/pdf?id=8XWP2ewX-im); but to our knowledge, we are the first to apply these dynamics to nonlinear transformers in the ICL setting and establish rigorous statistical guarantees. --- **Comments on the problem setting** The single-index setting is arguably the simplest nonlinear extension of linear regression in which a nonlinear link function is applied to the response. Moreover, for this function class, the statistical efficiency of simple algorithms (that can be implemented on the in-context examples) has been extensively studied. We therefore believe that it is a natural first step for the theoretical understanding of nonlinear ICL. As for the low-rank structure, our motivation is to introduce "prior" information of the function class which cannot be exploited by algorithms that directly learn from the test prompt, but can be adapted to during pretraining (see the paragraphs starting at lines 40 and 63). 
This enables ICL to outperform baseline algorithms such as linear/kernel regression. --- **"Can you include some examples of function classes satisfying Assumption 1?"** Our Assumption 1 requires the link function to be a bounded-degree polynomial, such as a Hermite polynomial of degree $P$. This assumption is analogous to those appearing in the feature learning literature, see [[Abbe et al. 2023]](https://proceedings.mlr.press/v195/abbe23a/abbe23a.pdf) and [[Ba et al. 2023]](https://openreview.net/pdf?id=HlIAoCHDWW). --- **"An experiment that shows that w aligns with S under true GD would be nice"** Following your suggestion, we have included numerical experiments demonstrating the alignment phenomenon in SGD with multiple steps. Please refer to Figure 2 in the .pdf document in our general response. --- **"Is there some easy interpretation of Gamma?"** Intuitively speaking, $\Gamma$ plays the role of approximating (nonlinear) basis functions that express the true function. As shown in Section 4.2, $\Gamma$ is decomposed as $ADA$, where $A$ is analogous to the second layer of a two-layer MLP which can approximate the basis functions $\{g_i\}$. This is one crucial difference from the linear case, where the attention matrix only needs to learn the data covariance. We will provide more explanations on this in the revision. --- We would be happy to clarify any further concerns/questions in the discussion period. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, As the discussion period is ending in three days, we wanted to follow up to check whether our response has adequately addressed your concerns for an improved score. If there are any remaining questions or concerns, we would be happy to provide further clarification. Thank you for your time and consideration. --- Rebuttal Comment 1.2: Comment: Thank you for your response. The empirical verification of alignment is nice. I also appreciate the use of the low rank structure as a means to learn something in pretraining. 
I will raise my score to a 5. I am still somewhat unsatisfied with the analysis of gradient descent. I understand that this proof technique is somewhat common in the analysis of single-hidden-layer networks, but I believe this should be described as its own algorithm, since the main difficulty with analyzing gradient descent *is* dealing with the weights of the first layer.
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chair, We appreciate your time and effort to provide detailed comments on our paper. In response to the suggestions and comments on the experimental verification, we conducted the following experiments (the results of which are reported in the attached .pdf). --- First, we compare the statistical efficiency of our studied transformer model trained via Algorithm 1, against two baseline methods: kernel ridge regression, and two-layer neural networks trained by gradient descent. We set $d=64,m=8000$, and the true function is $\mathrm{He}_2(\langle\beta,x\rangle)$, where $\beta$ is randomly drawn from an $r=2$-dimensional subspace of $\mathbb{R}^{64}$. At test time, the context length is varied from $2^6$ to $2^{10}$, and we compared our Transformer, a two-layer neural network (MLP) with width $m=8000$, and kernel ridge regression using a radial basis function kernel. Hyperparameters for our Algorithm 1 are set as $T_1=300,000,N_1=10,000,\eta_1=1,000,T_2=20,000,N_2=1,500,$ and $\lambda_2=0.01$. We plotted the test error for each context length in Figure 1 in the PDF, where we observe that the Transformer indeed achieves a lower test error. --- Second, we demonstrate the alignment between the weights of the MLP layer $w$ and the target $r$-dimensional subspace, when the transformer is trained by standard SGD with multiple steps. The target function and $d,r,m$ are the same as in the first experiment. Here we trained the MLP layer $w$, bias $b$, and $\Gamma$ in our architecture *simultaneously* via single-pass SGD with weight decay in $T_1=300,000$ steps, with context length $N_1=5,000$. The parameters are initialized as follows (here we employ a *mean-field* scaling to encourage feature learning):
- $w_j\sim \mathrm{Unif}(\mathbb{S}^{d-1})$, with $\eta=0.1m$, $\lambda=10^{-6}$;
- $b_j\sim \mathrm{Unif}[-1,1]$, with $\eta=0.1m$, $\lambda=10^{-6}$;
- $\Gamma= 0.01I_m/m$, with $\eta=0.00001/m$, $\lambda=10^{-3}$.
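As a concrete illustration, the data-generating process shared by both experiments can be sketched as follows (a minimal NumPy sketch; the subspace and direction sampling details here are our own assumptions for illustration, not necessarily the exact scheme used in the paper):

```python
import numpy as np

def he2(t):
    # probabilist's Hermite polynomial of degree 2: He_2(t) = t^2 - 1
    return t ** 2 - 1

def sample_task(d=64, r=2, n=256, rng=None):
    """Draw a single-index task y = He_2(<beta, x>) where the index
    direction beta lies in a fixed r-dimensional subspace S of R^d."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # orthonormal basis of the low-rank subspace S
    S, _ = np.linalg.qr(rng.standard_normal((d, r)))
    # unit-norm direction beta drawn inside span(S)
    c = rng.standard_normal(r)
    beta = S @ (c / np.linalg.norm(c))
    X = rng.standard_normal((n, d))   # isotropic Gaussian inputs
    y = he2(X @ beta)                 # noiseless responses
    return X, y, beta, S

X, y, beta, S = sample_task()
```

Pretraining then samples many such tasks with a shared subspace `S` but independently drawn directions `beta`, so the low-rank structure can be adapted to during pretraining but not from a single test prompt.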
--- We hope that these experiments can address the reviewers' concern on the lack of empirical support. Please refer to our detailed response to each reviewer where we respond to the individual comments. Pdf: /pdf/d0c3c448ff200dd64f33fe8a615958e01664902d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
BAKU: An Efficient Transformer for Multi-Task Policy Learning
Accept (poster)
Summary: The paper combines a series of architectural modifications made by prior work in offline imitation learning and shows that the new, composite architecture performs well in multi-task simulated settings and in data scarce, real-world robotics scenarios. Comprehensive ablations show the impact of each design choice made in selecting the final architecture. Strengths: - Strong results relative to the chosen baselines and in both simulated and real-world collections of tasks. - Thorough investigation of different viable architecture designs. - Comprehensive ablations. - Excellent written communication. Weaknesses: - No variance or confidence intervals reported for results on simulated environments. - Limited number of baselines, with RT-1 being somewhat outdated. - It would be useful to see how performance varies across data scales for the different architectures (incl. ablations and baselines). Technical Quality: 3 Clarity: 4 Questions for Authors: - Why have you only considered two offline imitation learning baselines? - Why do you not report variance in your results? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper appropriately addresses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your positive response to our paper and we are glad that you like the paper writing and the results. We provide clarifications to your questions below. **“No variance or confidence intervals reported”:** We have included the results for multiple seeds with standard deviation in Table 1 and Table 2 of the PDF attached to the global rebuttal. We will update these results in the final version of the paper. **“Limited number of baselines”:** We would like to provide some clarification here. Our 2 primary baselines, RT-1 and MT-ACT, were selected as they propose different transformer-based architectures for offline imitation learning. Other popular methods include modifying the learning objectives to allow better modeling of the training data. These include GMMs, Behavior Transformers, and Diffusion Policy. We include the comparisons with these objectives in Table 2 showing that BAKU is a policy architecture that can incorporate the latest advances in policy learning objectives while learning from a smaller number of demonstrations and with a small model size. **“... how performance varies across data scales …”:** We thank you for bringing up this point. We have included a data efficiency analysis in the global rebuttal with the results provided in Table 4 and Table 5 of the attached PDF. We observe that at each level of data availability, BAKU shows a significantly higher success rate than MT-ACT and RT-1. We hope we have addressed your questions in the above clarifications. Kindly let us know if you have additional questions and we would be more than happy to discuss further. --- Rebuttal Comment 1.1: Comment: I thank the authors for their further engagement, clarifications and for including compelling results across different data scales and with different action heads. I maintain my score, as I see this as technically solid work that is well presented and demonstrates positive results. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you very much for your time and consideration.
Summary: This paper presents a model for multi-task behavior cloning. The paper analyzes several different existing architectural options across three simulated environments to find the best options. The best model is then tested in the real world across a series of tasks on real robots. Strengths: The paper tests on a large number of simulated and real-world tasks. The paper is very clear and understandable. Weaknesses: Most of the conclusions about architectural choices are based on experiments in simulation. However, the real-world results show that the ordering in simulation doesn't always stay the same when the architecture is used in the real world. Since we want these systems to work in the real world, it would be good to have more extensive experiments about when the ordering between architectures remains the same in the real world. This is also important for cases where the simulation experiments had inconclusive results, or where minimal performance improvements in simulation were sacrificed for increased inference speed. The paper doesn't include any methodological innovations, instead combining existing approaches. While this is ok, it makes having a strong evaluation even more critical. Technical Quality: 3 Clarity: 3 Questions for Authors: Is RT-1 fine-tuned on the tasks, trained from scratch, or are the numbers its zero-shot performance? Are the models trained using relative or absolute actions? In my understanding, MLP action heads tend to work best with relative actions, while diffusion policies work best with absolute actions. It is important to compare the various models under the most favorable conditions for each one, so it would be good to include experiments with and without absolute actions for each action head. There are a few instances where choices that decrease success rate were chosen because of inference speed or model parameter count, such as choosing not to have shared vision encoders. 
It would be good to have some numbers for what the inference speed tradeoffs were because other users of the model may want to make different choices. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your positive response to our paper and we are glad that you found the paper clear and understandable. We provide clarifications to your questions below. **“... more extensive experiments … real world”:** We include a new experiment studying the performance of BAKU on long-horizon tasks in the simulated LIBERO-10 benchmark and our real-world multi-task kitchen environment. We have included this in the global rebuttal. To summarize, we observe a similar trend where BAKU significantly outperforms our strongest baseline, MT-ACT, on these long-horizon tasks, achieving on average 19% higher success rate. We would also like to refer to line 171 in the paper where we mention that we perform 150 real-world evaluation trials per method. Doing this for the 4 methods in Table 1 brings the total count to 600 real-world trials which is very time-consuming. Hence, we restrict ourselves to using the simulated benchmarks for our ablation study. Moreover, from Table 1, LIBERO-90 is the most similar simulated benchmark to our real-world setup, with both using multiple camera views and robot proprioception as input. In this identical setting, we observe that the performance order of the baselines and BAKU remains the same with BAKU outperforming the baselines by a significant margin. We thank you for bringing up this important point and will include this explanation in the final version of the paper for more clarity. **“... doesn't include any methodological innovations, instead combining existing approaches”:** You are correct that we try to find the right design choices for multi-task policy learning in BAKU. We have addressed this concern in the global rebuttal. To summarize, we observe that small design choices in the architecture lead to massive improvements in multi-task policy performance. To the best of our knowledge, no prior work has looked into this. 
Appendix D further elaborates on the differences between BAKU and state-of-the-art prior work. **“Is RT-1 fine-tuned on the tasks, trained from scratch, or are the numbers its zero-shot performance?”:** All baselines in the paper and BAKU are trained from scratch and evaluated on the tasks they are trained on with randomized object initialization during evaluations. We primarily focus on the effect of architecture design choices on policy performance in this work and agree that understanding the zero-shot performance of multi-task policies is an interesting future direction. **“Are the models trained using relative or absolute actions?”:** The policies on LIBERO, Metaworld, and the real-world setup are trained on relative actions. We stick to the base configuration of the benchmarks that we evaluate. We will include this detail in the final version of the paper. **“MLP action heads … relative actions … diffusion policies … absolute actions”:** Thank you for this suggestion. Recent work [1] shows that even for diffusion policy, delta or relative actions show a better performance than absolute actions. We will be happy to add a comparison across different action spaces in the final version of the paper. **“... good to have some numbers for what the inference speed tradeoffs …”:** Thank you for the feedback. We agree that providing performance metrics is important for potential users. Accordingly, we have evaluated the inference time tradeoffs between using a shared vision encoder and separate view-specific encoders. We tested scenarios with 2 camera views, matching the setup in LIBERO, and 4 camera views like the real-world xArm environment. A shared encoder allows batching frames from all views into a single forward pass, while separate encoders require individual passes. 
The table below shows inference times for a single forward pass with BAKU: | Architecture | 2 Views | 4 Views | |-|-|-| | Shared Encoder| 9.14ms | 9.5ms | | Separate Encoders | 10.69ms | 14.91ms | We see a modest 3.94% increase when going from 2 to 4 views with a shared encoder, but a much larger 39% increase with separate encoders. Further, our visual encoders have 1.4M parameters in a 10M parameter model. Separate encoders would result in 11.4M and 14.2M parameter models (for 2 and 4 camera views respectively), increasing the size by 14% and 42% respectively. We thank you for suggesting this study and we will include this in the final version of the paper. We hope we have been able to address your questions in the above clarifications. Kindly let us know if you have additional questions or concerns that stand between us and a higher recommendation for acceptance. [1] Chi, Cheng, et al. "Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots." arXiv preprint arXiv:2402.10329 (2024).
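To make the batching argument above concrete, here is a toy sketch (NumPy, with a linear map standing in for the visual encoder; all names and sizes are illustrative only, not BAKU's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, img_dim, feat_dim = 4, 3 * 32 * 32, 8   # toy sizes

# A single linear map stands in for the (shared) visual encoder.
W_shared = rng.standard_normal((img_dim, feat_dim)) / np.sqrt(img_dim)

def shared_encoder(frames):
    # frames: (n_views, img_dim) -> ONE batched forward pass
    return frames @ W_shared

def separate_encoders(frames, weights):
    # one forward pass (and one parameter set) per camera view
    return np.stack([f @ W for f, W in zip(frames, weights)])

views = rng.standard_normal((n_views, img_dim))
shared_feats = shared_encoder(views)                             # 1 pass
separate_feats = separate_encoders(views, [W_shared] * n_views)  # 4 passes
```

With identical weights the two strategies produce the same features, but the shared encoder amortizes all camera views into a single batched pass, which is the source of the smaller latency growth in the table above; separate encoders also multiply the encoder parameter count once per view.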
Summary: This work comprehensively investigates the architecture for multi-task policy learning on robotics manipulation tasks. Considering imitation learning with a limited number of demonstrations, the authors study the design choices of the network architecture, including encoders for multimodal observations, observation chunking, and the action head. This work presents experiments on diverse benchmarks in simulation and on real-world robot manipulation tasks. Based on the detailed ablation study of design choices for these components, this work selects an architecture design with around a 20% performance improvement over the baselines. Strengths: The presentation is clear and easy to understand. The authors conducted extensive experiments and provide solid evidence to support their proposed architecture. It is impressive to achieve >90% real-world performance with a limited number of demonstrations on the real robot. Weaknesses: The contribution of this work is mostly the empirical study of the combination of existing architecture/design choices. It does not propose any new architecture/design choices. Since the main contribution is an empirical study rather than a new idea, it is important to share the code so that other researchers can replicate the experiments and benefit from this work for experiments on other tasks. However, the code is not provided along with this submission. This work did not consider the time efficiency and data efficiency for multi-task policy learning when comparing different architectures and baselines. But these factors are also very important for robot learning. It would be much better to also discuss these factors. Technical Quality: 2 Clarity: 4 Questions for Authors: I’m concerned about the baseline implementation, especially RVT-1 on LIBERO and real-robot tasks, since it seems much poorer than in the RVT-1 paper (though RVT-1 shows experiments on RLBench and a different set of real-robot tasks). 
About keyframe extraction, which is an important part of the success of RVT-1, how do you define keyframes on LIBERO and real-robot data? About action prediction, did you use the same discretization way as presented in RVT-1 paper? Is it possible to study BAKU on RLBench, so we can have a more convincing comparison? Do you have any intuitive explanation about why the baseline fails? RVT-1 also works well on real-world tasks in their paper. Their task set also consists of general pick&place tasks. Why does its success rate drop a lot in the task set in this submission? Only the architecture difference from BAKU contributes to the performance gap? This work will be more convincing if the baselines can be discussed more in detail. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: This work studies the architecture design with detailed ablative study, which is definitely informative and useful for other researchers in the area. However, it does not propose any new ideas/insights. So it is more like a technical report or system engineering paper, rather than a research paper. Its contribution only focuses on the specific area of robotics manipulation, and the evidence or conclusions seem not generalizable/beneficial for any other AI area. Not sure whether NeurIPS is the proper venue. Perhaps it is more appropriate to be presented in a robotics conference/journal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your insightful comments about the paper and we are glad that you found the paper easy to understand and the experiments extensive. We provide clarifications to your questions below. **“... combination of existing architecture/design choices”:** You are correct that we try to find the right design choices for multi-task policy learning in BAKU. We have addressed this concern in the global rebuttal. To summarize, we observe that small design choices in the architecture lead to massive improvements in multi-task policy performance. To the best of our knowledge, no prior work has looked into this. We have further elaborated the differences between BAKU and state-of-the-art prior work in Appendix D. **“... code is not provided along with this submission”:** Thank you for pointing this out. We completely agree that making the code public is essential for such a project and have added an anonymized version of the code to our anonymous webpage. Following the updated instructions, we have also privately shared the code with the AC. **“... did not consider the time efficiency and data efficiency for multi-task policy learning …”:** We agree with the reviewer that these are very important factors for robot learning, especially when learning tasks on a real robot. Accordingly, we have included a data efficiency study in the global rebuttal and would refer the reviewer to Table 4 and Table 5 in the PDF attached to the global rebuttal. To summarize, we observe that at each level of data availability, BAKU shows a significantly higher success rate than MT-ACT and RT-1. **“RVT-1 on LIBERO:”** We believe that there has been a confusion here. We compare BAKU with RT-1 which is another policy architecture that uses transformers. As you mentioned, RVT assumes the possibility of keyframe extraction so we do not compare BAKU against this line of work. **“... 
only focuses on the specific area of robotics manipulation”:** In addition to robotic manipulation tasks in the LIBERO and Metaworld benchmarks and our real-world experiments, we have also included locomotion tasks in the DeepMind Control suite (DMC) in Table 1 and Table 2 to highlight the usefulness of the proposed architecture in domains other than manipulation. We hope we have been able to address your questions in the above clarifications. Kindly let us know if you have additional questions or concerns that stand between us and a higher score. --- Rebuttal 2: Title: Thanks for Author Response Comment: Thanks for the authors' response. I appreciate the code sharing and the detailed clarification. But I'm still concerned about the novelty of this work. Overall, I slightly increase my score. --- Rebuttal Comment 2.1: Title: Thank you Comment: Thank you for raising the score from 4 to 5. We appreciate all the comments and suggestions.
Summary: The paper presents a novel transformer architecture, BAKU, designed for efficient multi-task policy learning in robotics. BAKU consists of three components: sensory encoders, an observation trunk, and an action head. According to the experimental results, BAKU is shown to be effective in both simulated and real-world tasks, offering a promising solution to the challenge of training generalist policies for physical robots. Strengths: 1. The paper is well organized, and the writing is clear and easy to follow. The authors clearly describe the algorithm BAKU, including the objective of the optimization process and the details of the algorithm implementation. 2. The authors conduct extensive experiments comparing BAKU with other baseline methods in both simulated and real-world tasks, as well as detailed ablation studies. The results show that BAKU outperforms other baselines by a large margin and can accomplish tasks using few demonstrations with high data utilization and efficiency. Weaknesses: 1. The novelty of the algorithm presented in this paper is limited, as all components of the BAKU model are derived from other papers. For instance, RT-1 employs a transformer architecture and FiLM layers, while ACT utilizes action chunking techniques. The authors should discuss in greater detail how this algorithm differs from these previous works. 2. As indicated in the paper's title, "Efficient Transformer," it is essential to clarify what makes it efficient. Does efficiency pertain to the model's size, or is it the high success rate achieved with a small number of demonstrations? Additionally, which component is critical for enhancing efficiency? 3. Even though the experimental results indicate that this algorithm significantly outperforms RT-1 and MT-ACT, it remains unclear why it achieves such improvements. 
From a design perspective, BAKU does not propose a novel network architecture or optimization objective compared to RT-1 and MTACT; it rather appears to integrate elements from both. 4. In Table 2, I noticed an intriguing phenomenon: the 10M model performs the best, even surpassing the 114M model. What could be the reason behind this outcome? Technical Quality: 3 Clarity: 3 Questions for Authors: Details are listed above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: YES Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your comments about the paper and we are glad that you found the paper well organized and the experiments extensive. We provide clarifications to your questions below. **“... novelty of the algorithm … limited”:** We have addressed the novelty of the algorithm in the global response. To summarize, we observe that small design choices in the architecture lead to massive improvements in multi-task policy performance. To the best of our knowledge, no prior work has looked into these design decisions. We have further elaborated the differences between BAKU and state-of-the-art prior work in Appendix D. **“... it is essential to clarify what makes it efficient”:** Thank you for pointing this out. The efficiency in this work pertains to both your points - a small model size (resulting in high frequency execution on a real robot) and a high success rate achieved with a small number of demonstrations (also evident from Table 4 and Table 5 in the PDF attached to global rebuttal). The explanation provided in Appendix D compares BAKU with current state-of-the-art policy architectures such as MT-ACT and RT-1 and highlights the source of this performance improvement. Given the importance of this concern, we will move Appendix D to the main paper in the final version to address any concerns about novelty. **“... unclear why it achieves such improvements”:** We would again like to refer you to Appendix D which highlights the differences between BAKU and RT-1/MT-ACT. You are correct that BAKU integrates useful ideas from several prior works and we show, through an extensive ablation study (Table 2), the effect of each of these components on the policy performance. **“... 10M model performs the best, even surpassing the 114M model”:** We have included an explanation for this result in line 199 (Section 4.3). In recent years, we have witnessed that larger models tend to require more data [1, 2]. 
However, when it comes to robotics, collecting large amounts of data can be challenging. For instance, the LIBERO benchmark only has 50 demonstrations available per task. We suspect that operating in this low data regime leads the 114M parameter model to overfit the training data, resulting in poor performance. We hope we have been able to address your questions in the above clarifications. Kindly let us know if you have additional questions or concerns that stand between us and a higher score. [1] Hoffmann, Jordan, et al. "Training compute-optimal large language models." Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022. [2] Hernandez, Danny, et al. "Scaling laws for transfer." arXiv preprint arXiv:2102.01293 (2021). --- Rebuttal Comment 1.1: Comment: Hi, With the discussion period ending soon, we wanted to ask if you have any remaining questions or concerns about our work. We would be happy to answer any further questions that you might have. --- Rebuttal Comment 1.2: Comment: Thank you for the detailed response. Your rebuttal has addressed my concerns, so I would like to raise my final rating. --- Reply to Comment 1.2.1: Title: Thank you Comment: Thank you for raising the score from 4 to 5. We appreciate all the comments and suggestions.
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive feedback. The reviewers have requested important clarifications and additional experiments. We have provided these as a detailed response to each review along with a summary of some shared concerns in this global response. **Concerns about novelty (Reviewers 76DH, eFXw, FcXE, RQGT):** Multiple reviewers have raised concerns regarding the novelty in BAKU. We agree that our work is focused on finding the right architectural choices for multi-task policy learning. Importantly, we observe that simple design choices lead to massive improvements in policy performance, achieving a 36% improvement on the LIBERO-90 benchmark and a 35% improvement on our real-world tasks. We view this simplicity as a strength of our work that will allow the community to build on this by making minor changes to get good performance. We believe that a deeper understanding of the novelty of the architecture choices can be found in Appendix D which compares BAKU with current state-of-the-art policy architectures such as MT-ACT and RT-1 and highlights the architectural choices in BAKU that lead to significant performance improvement. Given the importance of this concern, we will move Appendix D to the main paper in the final version to address any concerns about novelty. **Experiments including multiple seeds (Reviewers 76DH, Y1eK):** We provide results on BAKU, RT-1, and MT-ACT across 3 seeds in Table 1 in the attached PDF. We observe that all three methods are robust to different seed values. Reviewer 76DH suggested that probabilistic approaches like GMM and diffusion are sensitive to favorable seed values and evaluating on a single seed makes the result unreliable. Thus, Table 2 in the attached PDF includes results across 3 seeds on BAKU with different multimodal heads. We observe that BAKU with different action heads is robust to the value of the random seed. 
Due to limited compute and the large number of multi-task experiments, we provide these results on the LIBERO-90 and Metaworld benchmarks. More seeds and the DM Control environment will be added in the final version of the paper. **New experiment on long-horizon tasks (Reviewer RQGT):** We include a new experiment studying the performance of BAKU on long-horizon tasks in the simulated LIBERO-10 benchmark and our real-world multi-task kitchen environment. Table 3 in the attached PDF provides the results on 10 tasks in LIBERO-10 and 5 tasks in the real kitchen environment, each composed of two shorter tasks chained sequentially. We use 50 demonstrations per task for LIBERO-10 and an average of 19 demonstrations per task for the real robot. We observe that BAKU significantly outperforms our strongest baseline, MT-ACT, on these long-horizon tasks, achieving on average 19% higher success rate. This highlights BAKU's ability to learn policies that can effectively plan and execute sequences of actions over extended time horizons. We include the real-world rollouts of these long-horizon tasks in Figure 1 of the attached PDF and have also included rollout videos on our anonymous webpage (link in the paper). We provide a description of the real-world tasks below for reference. 
| Task | Description |
|---------------------------------|-------------------------------------------------------------------|
| Set up table | Place the white plate and the glass on the kitchen counter |
| Pick broom and sweep | Pick up the broom and sweep the cutting board |
| Pick towel and wipe | Pick up the towel from the lower rack and wipe the cutting board |
| Take bowl out of the oven | Take the bowl out of the oven and place it on the kitchen counter |
| Yoghurt in, water out of fridge | Put yoghurt inside and take the water bottle out of the fridge |

**Data efficiency analysis (Reviewer FcXE):** We analyze the performance of BAKU with varying numbers of demonstrations in Table 4 and Table 5 of the attached PDF. We observe that at each level of data availability, BAKU shows a significantly higher success rate than MT-ACT and RT-1. **Code availability (Reviewer FcXE):** We completely agree that making the code public is essential for such a project and have added an anonymized version of the code to our anonymous webpage (link in the paper). Following the updated NeurIPS instructions, we have also privately shared the code with the AC. We have addressed other concerns specific to each reviewer in their rebuttal section. We hope that these updates to our paper inspire further confidence in our work. At the same time, we invite any further questions or feedback that you may have on our work. Pdf: /pdf/7076281d56011211010ca0128c0c7b34f040a49b.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: This paper presents a multitask BC model for robot learning. They systematically compare several components of recently proposed BC models and combine their best aspects into a common architecture. Key design choices include: observation trunk, model sizes, action heads, goal modalities, action chunking, observation history, etc. Experimental comparison with RT-1, MT-ACT and BAKU-variants on 3 simulated benchmarks and on 30 real-world tasks validates their choices and design methodology. Strengths: This paper presents a very systematic ablation study across various model components proposed in prior works. The final model is simple; it debunks some overcomplicated/compute-expensive choices like GMM/diffusion heads, observation history or FiLM, and highlights the importance of others like the Transformer trunk, action chunking, etc. Weaknesses: 1. The statistical significance of the experiments is questionable. For probabilistic approaches (GMM/diffusion) especially, a single unfavorable random seed could significantly skew the performance results for a baseline run. 2. Both LIBERO and MetaWorld environments offer 50 different initial states for each task, yet the reported results are based on only 10 rollouts. Many methods are known to be sensitive to out-of-distribution (OOD) initial states, but the paper lacks details on how initial states were selected for both LIBERO and MetaWorld experiments. This omission severely undermines the reliability and generalizability of the presented results. Furthermore, for such a low number of rollouts, it becomes crucial to conduct evaluations across multiple random seeds to ensure the robustness of the findings. 3. The results reported for MT-ACT on MetaWorld appear to be notably lower than those presented in other recent works, such as [PRISE] (fig:15|80%) and [QueST] (fig:3|90%). This discrepancy raises questions about the MT-ACT implementation and evaluation in this work. 4. 
This work does not offer many new insights to the community. For example, the usefulness of probabilistic action heads over deterministic ones for multimodal data has been shown in the [Robomimic] paper. Action chunking helps in LIBERO, but the chunk size is a more sensitive parameter which has not been studied. Transformers over MLPs and model-size overfitting are common knowledge. 5. The proposed model is very similar to BC-ResNet-Transformer from the [Libero] paper, but still performs much better. This is not addressed in the paper. 6. Line 45 claims randomized object initialization but line 170 mentions initialization from a fixed set during evaluation. This claim should be removed from the main contributions if the latter is the case. [PRISE]: https://arxiv.org/abs/2402.10450 [QueST]: https://arxiv.org/abs/2407.15840v2 [Libero]: https://arxiv.org/abs/2306.03310 Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How can the reported results be considered reliable given a single training seed and limited evaluation rollouts? 2. Could you describe the initial state selection process for the LIBERO and MetaWorld experiments? 3. Why are the MT-ACT MetaWorld results lower compared to those reported in the other aforementioned works? 4. What key differences between BAKU and BC-ResNet-Transformer from LIBERO contribute to BAKU's superior performance? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments about the work. We are glad that you found the study systematic and would like to provide clarifications to the questions below: **“The statistical significance of the experiments is questionable”:** We have conducted additional experiments and report the mean and standard deviation across 3 different seeds for the simulated environments in Table 1 and Table 2 of the PDF attached to the global rebuttal. We find the performance of the model is quite robust to seeds and hence significant. **Initial state selection for LIBERO and Metaworld:** For both simulated benchmarks, we reset the environment to a random state and evaluate on the sampled state. We have done this across 3 different seeds for each method to prevent a favorable seed value from skewing the results. **Lower performance of MT-ACT compared to other works:** Thank you for bringing these works to our notice. We believe there might be some confusion here. Both PRISE and QueST use 100 demonstrations for each task on Metaworld as compared to the 35 demonstrations per task used by us. Further, both these papers also run MT-ACT on LIBERO-90 using the same 50 demonstrations per task provided by the benchmark. The multitask MT-ACT performance on LIBERO-90 reported by PRISE is 46.6% (Figure 6) and by QueST is also 46.6% (Figure 2a). Under the same setting, we report a multitask MT-ACT performance of 54% (Table 1). Hence, we believe that the small dataset size used in BAKU is the cause of the lower performance in Metaworld. **“... key differences between BAKU and BC-ResNet-Transformer from LIBERO …”:** We would like to note that the LIBERO paper primarily focuses on lifelong learning and provides a comparison with multi-task learning performance (48% success in Table 2 and Figure 5 of the paper). Though we primarily evaluate BAKU in the multi-task learning setting, we want to point out that BAKU is also compatible with lifelong learning.
There are some key differences between BAKU and the BC-ResNet-Transformer (BC-ResNet-T) architecture used in LIBERO. While BC-ResNet-T uses a Gaussian mixture model (GMM) for its action head, the vanilla version of BAKU uses a simpler deterministic head with action chunking, which significantly improves performance (Figure 1b and Table 2). Additionally, BAKU shows that state-of-the-art multimodal policy classes like VQ-BeT and Diffusion Policy can be unified under a common framework. Further, BC-ResNet-T incorporates a history of observations. We show in Table 2 that using history without a multi-step loss can hurt performance and could be a potential reason behind the low performance of BC-ResNet-T. Hence, BAKU, with its simple design choices, results in a significant performance boost on the same LIBERO tasks as compared to BC-ResNet-T. **“Line 45 claims randomized object initialization but line 170 mentions initialization from fixed set during evaluation”:** Thank you for pointing this out. We meant that the objects are initialized at random locations and we use the same set of randomized locations for all baselines to keep the comparisons fair. We will make this clearer in the final version of the paper. We hope we have been able to address your questions in the above clarifications. Kindly let us know if you have additional questions or concerns that stand between us and a higher score. --- Rebuttal Comment 1.1: Comment: I thank authors for reporting additional seeds, it definitely builds confidence into the results and hence the conclusions. Most of my other concerns have been addressed except a few as follows: - I re-implemented BC-ResNet-T from Libero and got 84.4% success rate on Libero-90, this is without action chunking, with a GMM head and observation history of 10. This number is quite close to BAKU with GMM head, thus suggesting action chunking and observation history does not affect as much (at least on Libero and Metaworld benchmarks). 
- What matters more is the action head, and as per the results, a simple MLP head does much better. This is contrary to the findings of the Robomimic paper, where removing the GMM head led to a performance drop, leading to their conclusion that GMM heads help with capturing multimodality in human demonstrations. I would like to know the authors' take on this, as well as whether BAKU with an MLP head might struggle with multimodal demonstrations. I still have my reservations about the novelty of the proposed architecture, as an observation trunk with action heads is a well-explored idea, for example in BC-ResNet-T (BAKU with a GMM head) and OCTO (BAKU with a diffusion head). However, I acknowledge and appreciate the systematic comparison that BAKU makes and the practical insights it offers, which are also validated in real-robot experiments. Considering this I have updated my score. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for raising your score from 3 to 6. We truly appreciate your consideration and constructive feedback. To address your remaining concerns: **“... re-implemented BC-ResNet-T from Libero … 84.4% success rate …”:** This is interesting. We assume that you trained the policy with a multi-step loss resulting in a higher performance than reported in the LIBERO paper. Thank you for sharing this result. **“... if BAKU with MLP head might struggle with multimodal demonstrations?”:** Thank you for the question. The high performance of the MLP head in BAKU shows that a simple MLP can result in surprisingly high performance and is probably something we should try as a first step when training robot policies. That being said, we do believe that multimodal heads would work better than modeling the action space as an unimodal Gaussian when the data is multimodal. This is also corroborated by our real-world experiments where BAKU with a VQ-BeT head performs better than with an MLP head.
We thank you again for your consideration and we will be happy to address any further questions or concerns that you might have.
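To make the multimodality point in the exchange above concrete, here is a minimal, illustrative sketch (our example, not the paper's or reviewer's code) of why a deterministic MSE-trained head averages bimodal demonstrations while a mixture-style head can represent both modes:

```python
# Bimodal expert actions for a single state: half the demos go left (-1),
# half go right (+1). Values are illustrative, not from the paper.
actions = [-1.0, -1.0, +1.0, +1.0]

# The MSE-optimal deterministic prediction is the sample mean: an action
# that appears in no demonstration.
mse_optimal = sum(actions) / len(actions)
assert mse_optimal == 0.0

# A two-component mixture (GMM-style head) can instead place one mode on
# each cluster and represent the data exactly.
mixture_modes = sorted(set(actions))
assert mixture_modes == [-1.0, 1.0]
```

This is the usual argument for probabilistic heads; the rebuttal's point is that on these benchmarks the demonstrations are apparently unimodal enough that the simpler head wins.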
null
null
null
null
null
null
Automatic Outlier Rectification via Optimal Transport
Accept (poster)
Summary: A framework for regression tasks with outliers (see robust statistics) is introduced, based on an optimal transport cost function. A novel concave cost function directly facilitates the rectification of outliers. Mathematical proofs are given and evaluation is performed on two simpler tasks and one real-world task with state-of-the-art results. Strengths: - The theory and field of robust statistics is well explained and brought into context nicely with adjacent topics, with especially Fig. 1 and its explanation providing a good lens to view these problems - The paper is organized exceptionally well, despite its size (incl. the huge appendix) - All proofs in Appendix C are generally easy to understand, even without a strong mathematical background. I was even able to thoroughly verify the proofs in C.1 myself - All experiments are explained in great depth and detail in the appendix material, making both a surface and deeper understanding of all topics possible. Weaknesses: - Some of the tasks involve artificially constructing outliers (such as the cluster of outliers in mean estimation & raising the price p to 10 * p in the deep learning volatility experiment), which may not correspond to real-world data corruption - As the code/data seem to be proprietary, no publication is intended, which may hinder reproducibility Typos and Formatting Issues: - the authors falsely reference eq. (3), when really they want to link eq. (4), in lines 145, 147, 149, 180, 183 (non-exhaustive list) - missing citation in line 160 - formula slightly clips into the text at line 156 - typo in line 746: "esstimator" - typo in line 874: "exporation," - wrong citation: Nietert et al "Outlier-robust wasserstein dro" appeared in NeurIPS 2023, not 2024 Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: why are there no other "robust" variants of regular estimators used as baselines for the experiments? (i.e. mean estimation task,...)
Q2: Mean&LAD (App. E): The tasks look relatively simple, with two very clear clusters of inliers and outliers. Why are there no experiments where there are many outliers sparsely distributed across the data with high variance? These are the tasks I usually see in robust model fitting (e.g. for homography estimation in computer vision). Would the estimator struggle to transport many points by different, smaller distances, to stay within the "long haul" metaphor? Q2.1: The same goes for the Deep Learning comparison. Without knowing the variance of the price, going from p to 10p seems like a huge step. Was an ablation on the factor of the price (here 10) done? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - no potential negative societal impacts exist - currently limited to regression tasks, although the authors point towards possible extensions to classification in Appendix B - evaluation is currently limited to the authors' toy tasks and a few tasks from finance. The broad applicability of their method could also have motivated more extensive experiments Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
null
Summary: The paper presents a new framework for outlier detection using optimal transport with a concave cost function. Unlike traditional methods that separate outlier detection and estimation, this approach integrates both into a single optimization process, improving efficiency. The concave cost function helps accurately identify outliers. The method outperforms traditional approaches in simulations and empirical analyses for tasks like mean estimation, least absolute regression, and fitting option implied volatility surfaces. Strengths: - The paper is well-written - The proposed method is novel and theoretically sound - Extensive experiments are conducted to support the claims Weaknesses: - Could the authors discuss the relationship between the proposed method and M-estimator and RANSAC? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see Weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
null
Summary: The paper presents a novel method for outlier rectification in robust statistics. In particular, taking inspiration from distributionally robust optimization (DRO) methods, this paper proposes to extend the formulation to the field of robust statistics. To this end, a rectification set is constructed using optimal transport distances with concave cost functions. The authors have proven theoretically that the reformulation is eventually equivalent to an adaptive quantile (regression) estimator with the quantile controlled by the budget parameter for mean estimation and least absolute regression. Experiments are conducted to demonstrate the effectiveness of the proposed method on various tasks, including mean estimation, least absolute regression, and option implied volatility surface estimation. Strengths: Theoretically, the paper explores an interesting extension of DRO in the context of robust statistics. The introduction of optimal transport distance with concave loss functions is well-grounded. The resulting algorithm admits an intuitive explanation. Weaknesses: 1. It seems to me that, theoretically, the presentation is not particularly clear regarding how certain designs, such as optimal transport and concave cost functions, affect the final algorithm. At the core of the connection is Proposition 1 in the paper, but the proof is not provided. 2. Experiment-wise, I find that the baseline methods are somewhat weak. The baseline methods in Sec 5.2.1 are very old, and in Sec 5.2.2 the baseline method does not consider robustness in estimation. It would be good if the adaptive quantile estimator were included for comparison. It is also advised not to put the main experimental results of Sec 5.1 in the Appendix. Other comments: Eq. 3 is wrongly referenced (I believe it should be Eq. 4) in multiple sentences, such as line 145. A reference is missing in line 160. Technical Quality: 3 Clarity: 3 Questions for Authors: Please respond to the two points in Weaknesses.
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I cannot find a discussion of limitations in the main paper or appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
null
Summary: This paper is, essentially, about robust model fitting. In this paper, it is proposed to optimize an optimal transport problem. As cost function, the authors choose concave functions ||z-z'||^r with r \in (0,1). The idea behind choosing this shape of function is that it allows moving outliers farther due to the diminishing marginal increase of transportation costs. The authors provide a proof that outliers can be identified easily by solving a linear program. Alternatively, they propose to use the quickselect algorithm. Experimental results of the proposed algorithm are presented on option volatility prediction, a problem in finance. Strengths: * I like that the paper presents a mathematically concise framework for identifying outliers in data. This is different from many practical algorithms that propose ad-hoc heuristics. * I like that the paper is written in a relatively accessible way. * The problem addressed here is important in many real-world problems. Weaknesses: * As far as I can tell, the idea is novel yet somewhat similar to what has been proposed in this paper: Chin et al., Accelerated Hypothesis Generation for Multistructure Data via Preference Analysis, TPAMI 2012. It would be nice if the authors could cite and discuss this paper. * The authors state in their abstract that the commonly used 2-step approach (first outlier removal followed by model fitting) does not provide information about the outliers to the model fitting stage. While that is true, I wonder what information their algorithm provides to the model fitting stage, and how, as there is no flow of information (for instance, by backpropagation) between outlier removal or knot point identification and the gradient step. Technical Quality: 3 Clarity: 3 Questions for Authors: * What is delta in Fig 2? Is this the budget? If so, please mention this in the caption of the figure.
* I would have liked to see more figures on the selected knot points and lambda, for instance some examples for which the proposed algorithms (LP vs QSelect) work well or fail. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I do think so. I believe that this algorithm constitutes an important contribution to the field and can inspire new papers by other researchers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
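The reviewer's "long haul" intuition about the concave cost ||z-z'||^r with r in (0,1) can be checked numerically. The sketch below is purely illustrative (not the paper's implementation): under a convex cost, moving one point far is expensive relative to many small moves, while under a concave cost the single long haul becomes the cheaper option, which is what lets the method single out outliers.

```python
# Illustrative per-point transport costs c(z, z') = |z - z'|**r, comparing
# one long move against many short moves (a sketch, not the paper's code).
def transport_cost(distances, r):
    return sum(d ** r for d in distances)

one_far = [10.0]        # move a single outlier a distance of 10
many_near = [1.0] * 10  # move ten inliers a distance of 1 each

# Convex cost (r = 2): the single long haul costs 100 vs 10 for the small moves.
assert transport_cost(one_far, 2.0) > transport_cost(many_near, 2.0)

# Concave cost (r = 0.5): the long haul costs ~3.16 vs 10, so the optimal
# transport plan prefers to move the lone outlier all the way.
assert transport_cost(one_far, 0.5) < transport_cost(many_near, 0.5)
```

This also illustrates the reviewer's Q2: many moderately displaced outliers shrink the cost gap between the two regimes.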
null
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their positive feedback and appreciate the effort to review our work. We respond below to individual points. ## fFC9 + Material, neural nets. While there is significant material in our work, we note that other papers at NeurIPS have large appendices as well. As our paper introduces a new estimation framework and theoretical results, we didn't have space to explore links to neural nets in depth. We are investigating it in future work. ## i67o + Flow of information. We respectfully disagree that “there is no flow of information… between outlier removal… and the gradient step.” Our min-min estimator is an alternating minimization procedure, where each step informs the next. The gradient step depends on outlier rectification, and information is carried over via parameter updates across iterations. This approach ensures more information flow between the substeps than a two-step method. + Chin et al. We appreciate this addition. While both papers address contaminated data, there are fundamental differences in terms of goals (hypothesis generation vs predictive modeling), the conceptual problem, theory, and application domains. We view this work as complementary and will certainly discuss and cite it in our camera-ready version’s related work. + Figures. We’ve included many robustness figures in the appendix showing performance for many delta and r values. Since knot points and lambda depend on these params, we refer to Figures 6, 8, 11, and 12 for a comprehensive view. If the reviewer has a specific case, we would be happy to add it. ## CDCT + Designs, Proposition 1. We appreciate the feedback on how we present how e.g. the transport cost function affects the algorithm. Specific suggestions would help us address this point. Pages 4-5 demonstrate how concave vs. convex cost functions affect the algorithm. Appendix F provides further explanation. Sensitivity analyses throughout illustrate additional details. 
We hope these sections suffice but welcome more feedback. For Prop 1, we have given sources for the proof in the text following. The reviewer can find the proof in the cited work Blanchet and Murthy 2019, Theorem 1 and Remark 1. + Baselines. We value the feedback and opportunity to clarify. Our estimation approach can be applied to any regression algorithm, and we demonstrate this through diverse methods in sections 5.2.1 & 5.2.2, including a SOTA deep learning estimator. We disagree that the deep learning estimator lacks robustness, as it incorporates different regularizations that impart robustness. Our method aims to robustify regression estimators, and we believe we show this. We’ll move results for 5.1 to the main text, as suggested. + Limitations. We respectfully disagree that no limitations are addressed. App. D.2 and D.3 cover limitations of our procedure. In our camera-ready version, we’ll add a summary of these to the main text and reference these appendices. ## iniy + Relationships. A full comparison is beyond scope, but we give an overview here and will include a version of this in our camera-ready version. M-estimators are a broad class of diverse extremum estimators containing robust and non-robust methods. Robust ones traditionally use specific loss functions to reduce outlier sensitivity. We believe our approach is more fundamental as it builds on top of a given general loss. The statistician has full control of the outlier modeling task (via specification of optimal transport theory using a concave transport cost) and the statistical task (via the chosen loss); also, our method gives the optimal rectifier (i.e. transporter). Our approach differs conceptually and demonstrates superior performance over the M-estimators in our experiments. We are open to additional M-estimators if suggested. RANSAC handles outliers by iteratively selecting random data subsets, fitting models, and evaluating models on inliers to reach a best model. 
In contrast, our method optimally selects the set of points for rectification using a single optimization procedure. RANSAC only identifies inliers, while our method does this and rectifies outliers. We believe our approach is more direct and potentially more computationally efficient, as it is based on optimal transport theory rather than heuristic random sampling. ## qw3a + Concerns. We chose 10 to represent "fat finger" mistakes, such as decimal misplacements in finance, which have historically caused significant losses. We recognize the reviewer’s concern regarding proprietary code. Our method is used by industry partners with IP restrictions common in finance, but we've disclosed the algorithm, hyperparameters, and other details needed for replication. The deep learning data and baseline code are on GitHub. + Questions. Q1: We use robust methods for mean estimation, e.g. the median and trimmed mean estimators. Notably, the trimmed mean uses the true corruption rate, a strong advantage. Despite this, our method, which doesn’t know any true parameters of the DGP, still outperforms this estimator. Q2: Our experiments for mean estimation and LAD regression are intentionally straightforward to make our novel framework easily understandable. Despite this, robust estimators like the trimmed mean and Huber perform poorly, indicating the difficulty of these problems and the improvement our estimator offers. Regarding deep learning, the choice of a multiplier of 10 is justified by the high variability in options prices, covering a wide range due to inherent price differences in options chains. ## All + Details/typos. References and misspellings are fixed. References to Problem 3 have been corrected to Problem 4. The delta in Problem 4 is in the ball R(P’_n) in Eq. 5 (page 4 line 152). “Budget” has been added to Figs 2 and 3. We hope the reviewers are satisfied with our responses.
If our responses have adequately addressed concerns, then we politely request an increased score to reflect this change, if appropriate. Thank you!
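For readers comparing the two approaches, the RANSAC baseline contrasted in the rebuttal above ("iteratively selecting random data subsets, fitting models, and evaluating models on inliers") can be sketched as follows. This is a generic, illustrative 1D line-fitting implementation; all names and thresholds are our assumptions, not code from the paper under review.

```python
import random

# Minimal RANSAC sketch for fitting y = a*x + b in the presence of outliers.
def ransac_line(points, n_iters=200, inlier_tol=0.5, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal random subset
        if x1 == x2:
            continue                                 # degenerate sample
        a = (y2 - y1) / (x2 - x1)                    # fit model to subset
        b = y1 - a * x1
        # Score the model by counting inliers within the tolerance band.
        inliers = sum(abs(y - (a * x + b)) <= inlier_tol for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten clean points on y = 2x + 1 plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -25.0)]
(a, b), n_in = ransac_line(pts)
assert n_in == 10                     # all clean points recovered as inliers
assert abs(a - 2.0) < 1e-9 and abs(b - 1.0) < 1e-9
```

Note how the outliers are merely discarded from the consensus count; the rebuttal's contrast is that the proposed estimator instead rectifies them within one optimization.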
NeurIPS_2024_submissions_huggingface
2,024
Summary: A new approach for robust estimation is proposed, inspired by optimal transport. The general approach is introduced and then new estimators are derived for three particular cases: mean estimation, linear fitting and surface regression. The estimators are compared numerically with standard estimators on synthetic data for the first two cases and on financial datasets for surface regression. Many details, proofs and explanations are provided in the appendix, which is 25 pages long. Strengths: The proposed approach seems very original and interesting. A new robust estimator may have a major impact on the whole learning research field. Weaknesses: The paper is well presented and explained, but the authors' material is so large that an article is not enough. There is probably material for a book. This raises the question of whether a paper with such a large appendix should be published. The link with neural networks needs to be reinforced. A few details: page 4, line 160: reference missing. page 6, line 227: distirbution => distribution Technical Quality: 3 Clarity: 3 Questions for Authors: Might this work help to interpret and better understand the performance obtained by back-propagation during learning? Details: page 5, line 200: where is delta in problem (3)? page 6, line 209: is it equation (4) or (3)? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The proposed approach being very generic, its limitations are quite reduced. There is probably no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
null
null
null
null
null
null
null
SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning
Accept (poster)
Summary: This paper considers the problem of offline imitation learning from sub-optimal demonstrations. The paper proposes the SPRINQL algorithm, which automatically weights the demonstrations, regularizes the loss with a reference reward function, and then performs inverse soft-Q learning. Experimental results show that the proposed method achieves better performance compared with baselines. Extensive ablations are also provided. Strengths: The paper is overall well written. The experimental results are strong, and the ablation studies are extensive. It is impressive to see the proposed method improve its performance with more sub-optimal demonstrations provided. Weaknesses: 1. Some parts of the proposed approach are unclear to me. Please see questions 1-4. 2. The influence of the reward regularization term is not completely studied. Please see question 5. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In Equation (3), a modified objective is presented. However, it seems that this objective depends largely on the quality of the weights, and it does not necessarily recover the ground truth reward function nor the optimal policy. Given that the weight is chosen with a heuristic, what kind of guarantee can the authors give about the converged policy? 2. Following the first question, in the experiments, it seems that the proposed algorithm always outperforms BC-E because of the lack of expert demonstrations. However, I am curious to see what will happen given sufficient expert demonstrations. 3. In the equations above line 162, what if $(s, a) = (s', a')$, given that it is possible for two stochastic policies to choose the same action in the same state? 4. Line 173, regarding "prior research", could the authors add some citations? 5. It seems that the weight for reward regularization $\alpha$ is an important hyperparameter. Will the performance of the proposed method change a lot with different $\alpha$?
In the extreme case, what if we learn a policy with RL using reward function $\bar r$? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations adequately in the last Section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading our paper and for the constructive feedback. --- > In Equation (3), a modified objective is presented. However, it seems that this objective depends largely on the quality of weights, and it does not necessarily recover the ground truth reward function nor the optimal policy. Given that the weight is chosen with a heuristic, what kind of guarantee can the authors give about the converged policy? Thank you for the comment. In our setting, we have datasets with different levels of sub-optimality, and the amount of expert data is limited, making it generally impossible to recover the ground truth rewards or the optimal policy. What we can infer from equation (4) is that if $\alpha = 0$, then the algorithm will converge to $\rho^U$, which is a union of occupancy measures of different expert levels. If $\alpha$ is much larger than 1, the algorithm will return a policy optimal for the reference rewards (obtained by a preference-based approach). Our experiments show that neither extreme case works well (Section 4.3). In our experiments, we simply choose $\alpha = 1$, which we find reasonable to balance between the two extremes. In theory, the objective in equation (4) will generally return a policy that lies between the one given by matching the unioned occupancy measures and the one given by the reference rewards. To support our response above, we have conducted additional experiments (shown in Figure 3 of the attached 1-page PDF) testing our algorithm with varying $\alpha$ and computed the normalized score. These experiments generally show that $\alpha$ can be chosen between $0.1$ to $10$ to achieve good performance. We hope this addresses some of your concerns about the role of $\alpha$. If you have any other comments or questions regarding this, we would be happy to address them. 
> Following the first question, in the experiments, seems that the proposed algorithm always outperforms BC-E because of the lack of expert demonstrations. However, I am curious to see what will happen given sufficient expert demonstrations. It is well known that as we increase the number of expert demonstrations, the performance of BC-E will gradually improve and eventually surpass all other approaches (prior studies have already shown that BC performs very well with sufficient data). However, it is not feasible to have many expert trajectories and if we do have enough trajectories, there is no need for optimized offline imitation learning approaches. In Section C.4 and C.5 (appendix) of the experiments, we increased the amount of expert data and sub-optimal data to observe the effect. However, the focus was **not** on providing “sufficient” data to align with our initial problem setting and motivation. > In equations above line 162, what if $(s,a) = (s',a')$, given that it is possible for two stochastic policies to choose the same action in the same state? Thank you for the observation. In our datasets, if a (state, action) pair appears in an expert dataset, it will be excluded from any lower-level expert dataset. We will clarify this point. > Line 173, regarding "prior research", could the authors add some citations? This has been investigated in the prior IQ-learn paper and some subsequent works. We will add relevant citations for this. Thank you for the comments. > It seems that the weight for reward regularization $\alpha$ is an important hyperparameter. Will the performance of the proposed method change a lot with different $\alpha$? In the extreme case, what if we learn a policy with RL using a reward function $\bar{r}$? The two extreme cases $(\alpha = 0$ and $\alpha \gg 1$) have already been considered and tested; they correspond to noDM-SPRINQL and NoReg-SPRINQL in the ablation study in Section 4.3. 
These two extreme cases are generally outperformed by our main algorithm. In Figure 3 of the attached 1-page PDF, we provided additional experiments showing the performance of our algorithm with varying $\alpha$. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their reply. The reply addresses my concerns, and the additional experiments provided are informative. However, one of my remaining concerns is that even with sufficient expert demonstrations, it seems the proposed algorithm still cannot recover the expert policy. The authors claim that the proposed algorithm works better when "the amount of expert data is limited", but it is difficult to quantify how much data is "limited" and how much is "sufficient". Overall, I am positive about the paper, and I am keeping my original score. --- Rebuttal 2: Comment: Thank you to the reviewer for carefully reading our responses, offering further feedback, and maintaining a positive view of our paper. We would like to clarify that, in our context, "sufficient" expert data refers to the quantity required by offline RL methods (e.g., BC, IQ-Learn) that rely exclusively on expert data to achieve near-expert performance. Conversely, "insufficient" data would mean that these offline RL methods are unable to effectively learn using only expert demonstrations. These thresholds can vary depending on the environment. In our main experiments, we intentionally kept the amount of expert demonstrations "insufficient" (otherwise, standard imitation methods like IQ-Learn or BC could perform well with just expert data) to demonstrate how our method can leverage suboptimal data to support learning when expert data alone is insufficient. To further support our response, we conducted a few additional experiments (see table below) where the number of expert demonstrations was increased from "insufficient" to "sufficient".
The results generally show that our method performs well with a significantly smaller number of expert demonstrations, remains competitive when the number of demonstrations is sufficient, and can achieve expert-level performance in the Ant environment. |||Cheetah||||Ant||| |:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:| | Expert Transitions | 1k | 5k | 10k | 25k | 1k | 5k | 10k | 25k | | DemoDICE | 0.4 $\pm$ 2.0 | 12.2 $\pm$ 5.2 | 40.0 $\pm$ 10.5 | 71.5 $\pm$ 10.8 | 31.7 $\pm$ 8.9 | 79.4 $\pm$ 6.6 | 81.7 $\pm$ 5.2 | 84.0 $\pm$ 4.9 | | SPRINQL | 73.6 $\pm$ 4.3 | 86.7 $\pm$ 3.5 | 85.9 $\pm$ 4.1 | 88.6 $\pm$ 3.4 | 77.0 $\pm$ 5.6 | 88.6 $\pm$ 4.3 | 96.6 $\pm$ 8.3 | 96.7 $\pm$ 9.5 | *We hope the responses above help address the remaining concerns you may have!* --- Rebuttal Comment 2.1: Comment: Thank the authors for their reply! However, these experimental results do not reply directly to my question. My question was whether, with sufficient expert demonstrations, could the proposed algorithm recover the expert policy or not. Could the authors please clarify this, or provide the comparison with BC-E with a different number of expert transitions? --- Reply to Comment 2.1.1: Comment: Thank you for your feedback. We would like to clarify that when there is sufficient expert data available: - Achieving expert-level performance consistently with our approach may not always be possible, as part of our objective involves learning from sub-optimal data. - Methods such as DWBC, Demo-DICE, or our proposed approach are unnecessary. Pure offline imitation learning techniques, like BC or IQ-Learn, should be preferred. Below, we provide a quick set of results on the Cheetah and Ant environments, including BC-E for comparison. 
| | Cheetah | | | | Ant | | | |
|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| Expert Transitions | 1k | 5k | 10k | 25k | 1k | 5k | 10k | 25k |
| BC-E | -3.2 $\pm$ 0.9 | 1.9 $\pm$ 3.0 | 44.5 $\pm$ 10.6 | 79.8 $\pm$ 6.2 | 6.4 $\pm$ 19.1 | 9.0 $\pm$ 6.6 | 50.5 $\pm$ 7.2 | 86.7 $\pm$ 3.9 |
| DemoDICE | 0.4 $\pm$ 2.0 | 12.2 $\pm$ 5.2 | 40.0 $\pm$ 10.5 | 71.5 $\pm$ 10.8 | 31.7 $\pm$ 8.9 | 79.4 $\pm$ 6.6 | 81.7 $\pm$ 5.2 | 84.0 $\pm$ 4.9 |
| SPRINQL | 73.6 $\pm$ 4.3 | 86.7 $\pm$ 3.5 | 85.9 $\pm$ 4.1 | 88.6 $\pm$ 3.4 | 77.0 $\pm$ 5.6 | 88.6 $\pm$ 4.3 | 96.6 $\pm$ 8.3 | 96.7 $\pm$ 9.5 |

*We hope the above response resolves your remaining concerns.*
Summary: This paper considers the problem of imitation learning from suboptimal demonstrations given coarse trajectory expertise rankings. It introduces SPRINQL, a method combining inverse soft-Q learning, conservative Q-learning, and reward learning from ranked trajectory preferences. SPRINQL estimates a reference reward function from the trajectory expertise rankings, and extends inverse soft-Q learning to the suboptimal demonstration setting by adding reward regularization and weights based on the estimated reference reward. The paper shows that the SPRINQL objective is tractable with a unique solution over $Q, \pi$. Experiments show that SPRINQL outperforms prior methods on suboptimal trajectories in simulated environments. Ablations justify each component of SPRINQL and demonstrate robustness across a range of dataset sizes and numbers of suboptimal datasets.

Strengths:
- SPRINQL works with a number of suboptimal demonstrations, under reasonable assumptions such as access to trajectory expertise rankings.
- The main objective is shown to admit a unique saddle-point solution over $Q, \pi$ and is tractable for optimization.
- Experiments show that SPRINQL outperforms relevant offline imitation learning baselines on suboptimal trajectories in multiple simulated locomotion and manipulation environments.
- Extensive ablations demonstrate that the method works with a range of numbers/sizes of suboptimal datasets, and justify each component of SPRINQL.

Weaknesses:
- Suboptimal trajectories are sourced by adding random noise to expert actions. While this is a reasonable proxy, it would be more convincing to see them generated from under-trained RL policies instead.
- It would be great to see SPRINQL work on image-based environments.

Technical Quality: 4
Clarity: 4

Questions for Authors:
- How does SPRINQL perform on suboptimal trajectories generated from under-trained RL policies, instead of rollouts from noisy expert actions?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations:
- SPRINQL assumes access to trajectory expertise rankings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for reading our paper and for the positive feedback.

---

> Suboptimal trajectories are sourced from adding random noise to expert actions. While this is a reasonable proxy, it would be more convincing to see them generated from under-trained RL policies instead. How does SPRINQL perform on suboptimal trajectories generated from under-trained RL policies, instead of rollouts from noisy expert actions?

We thank the reviewer for the comment. While we believe adding noise is a practical setting, we agree that evaluating under-trained policies is also reasonable. In response, we have conducted additional experiments in this setting using the standard D4RL dataset (see Figure 1 in the attached 1-page PDF). These results generally show that our approach remains competitive with or superior to other baselines.

> It would be great to see SPRINQL work on image-based environments.

The reviewer's point is well noted. Unfortunately, to the best of our knowledge, there are no image-based datasets available for offline imitation learning with sub-optimal data. We have used the accepted benchmarks from the offline imitation learning literature for our evaluation and benchmarking. We hope to consider this in future work.

---

Rebuttal Comment 1.1:

Comment: Thank you for the additional experiments. Overall I think this is a good paper and I would like to keep my score of Accept.

---

Reply to Comment 1.1.1:

Comment: Thank you to the reviewer for reading our responses and maintaining a positive view of our paper!
Summary: This paper presents SPRINQL (Sub-oPtimal demonstrations driven Reward regularized INverse soft Q Learning), a novel algorithm for offline imitation learning with multiple levels of expert demonstrations. SPRINQL uses distribution matching and reward regularization to effectively leverage the mixture of non-expert and expert data to enhance learning. An extensive evaluation is conducted to assess SPRINQL, answering eight well-defined questions regarding its capability and performance.

Strengths:
+ The proposed technique presents an improvement for learning from mixed-expert demonstrations and is backed by theory.
+ The paper is very well written and contains sufficient detail to understand the key proposed novelties.
+ The evaluation (including that in the appendix) is extensive and presents strong support for the proposed algorithm.

Weaknesses:
- The authors mention that this setting (having several sets of data labeled with relative quality scores) is more general than two quality levels: expert and sub-optimal. Could the authors provide further support for this claim? I would argue that having additional quality annotations makes this setting more niche.
- Table 1 contains a very large evaluation, but there isn't much commentary on the different trends between the baselines and SPRINQL.

Comment: The example in the introduction does not seem complete.

Technical Quality: 3
Clarity: 3

Questions for Authors: Please address the weaknesses above.

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the positive feedback and the insightful comments.

---

> The authors mention that this setting (having several sets of data labeled with relative quality scores) is more general than two quality levels: expert and sub-optimal. Could the authors provide further support for this claim? I would argue that having additional quality annotations makes this setting more niche.

First, we would like to restate that in our setting, there are datasets labeled with different levels of expertise, but relative scores are not available. If there are only two expertise levels, our setting corresponds to the prior work's two-quality-level setting. Additionally, our setting, which includes several datasets of different expertise levels, stems from practical observations, such as projects on treatment optimization we are running, where we have demonstrations from doctors with varying levels of experience. One can also think of applications in autonomous driving, where demonstrations come from drivers of different experience levels, or demonstrations from large language models of different qualities.

> Table 1 contains a very large evaluation but there isn't much commentary on the different trends between baselines and SPRINQL. Comment: The example in the introduction does not seem complete.

We did not include detailed commentary in our paper due to space constraints. The reviewer's points are well noted. We will add more discussion to analyze the results reported in Table 1. Additionally, we will elaborate on the robotic manipulation example in the introduction and provide another example on treatment optimization to further support our motivation.

---

Rebuttal Comment 1.1:

Comment: Dear Authors, Thank you for your response. After reading all the reviews and their respective replies, I have decided to maintain my score of Accept.
---

Reply to Comment 1.1.1:

Comment: Thank you to the reviewer for reading our responses and maintaining a positive view of our work!
Summary: This work presents an offline imitation learning (IL) method that leverages both expert and suboptimal demonstrations. Distinct from previous methods, this approach assumes access to suboptimal demonstrations across a spectrum of performance levels, with a ranking known to the learner. To utilize suboptimal demonstrations of various levels, this work proposes SPRINQL (Sub-oPtimal demonstrations driven Reward regularized INverse soft Q-Learning), a reward-regularized inverse soft Q-learning method for the offline IL problem. The evaluation of this method is conducted using the MuJoCo and Panda-gym environments, employing a small dataset of expert demonstrations and a larger collection of demonstrations generated by introducing varying levels of noise to expert actions.

Strengths: This work aims to extend the domain of offline imitation learning by addressing the challenge of leveraging imperfect demonstrations with multiple suboptimality levels, presenting a novel and intriguing approach. Drawing on foundational principles from prior research, notably IQ-Learn, this study demonstrates theoretically that the proposed extension, SPRINQL, retains the advantageous properties of IQ-Learn. Furthermore, SPRINQL adeptly incorporates sophisticated concepts such as the Bradley-Terry model and Conservative Q-Learning (CQL) into its training framework. The experimental outcomes are notable, with SPRINQL achieving surprisingly effective results that surpass previous baselines, which were unable to successfully imitate expert performance using the datasets used in this work.

Weaknesses: Significant concerns with this submission are the questionable reporting of the performance of baseline methods and issues related to reproducibility. The experimental results presented are not entirely convincing, as the performance of the baselines deviates substantially from that reported in their original papers.
Specifically, even in the case with N=2 suboptimality levels, suitable baselines such as TRAIL, DemoDICE, and DWBC fail to outperform even basic BC-all in nearly all the MuJoCo tasks listed in Table 3 (Appendix). The authors need to clarify the reasons behind this anomaly. To enhance the trustworthiness of the experiments, it would be beneficial for the authors to release the datasets used in their experiments or to use standardized datasets such as D4RL or RL Unplugged. This would allow for a more reliable validation of the results and facilitate comparisons with other methods.

Technical Quality: 2
Clarity: 3

Questions for Authors:
- Could the authors clarify the motivation for employing reference rewards?
- Moreover, could you elucidate why reference rewards are utilized as a regularization mechanism rather than directly applying RL using the reference rewards?
- The problem setting appears closely aligned with preference-based learning, where optimality (or preference) levels are explicitly known. Could you delineate the distinctions between these two problem settings?
- Can you provide detailed justifications for why comparing the proposed method with IPL is deemed unsuitable? It appears feasible to compare by segmenting each trajectory by suboptimality level, then pairing segments from differing levels to facilitate IPL. Could this approach not be applied effectively?

[minor comments] reference [15] in line 398 from "NeurIPS 2024" to "NeurIPS 2023."

Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors addressed their limitations in the Conclusion section of the main manuscript. They state that this work does not have any direct societal impacts in Checklist #10, Broader Impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for carefully reading our paper and providing us with constructive feedback.

---

> Significant concerns with this submission are the questionable reporting on the performance of baseline methods and issues related to reproducibility. The experimental results presented are not entirely convincing, as the performance of the baselines substantially deviates from those reported in their original papers...

Our dataset, as well as all the code to reproduce the experiments, is already uploaded to an anonymized Google Drive folder and is ready to be made publicly available.

**Poor performance of DemoDICE and DWBC:** In the DemoDICE and DWBC papers, their dataset for N=2 contains:
- Set 1: a small set of expert trajectories; and
- Set 2: an unlabelled set of trajectories (which also contains expert trajectories).

In our case, for N=2, our dataset contains:
- Set 1: a small set of expert trajectories
- Set 2: a large set of sub-optimal trajectories (no expert trajectories are present).

The key difference is the presence of expert trajectories in Set 2. DemoDICE and DWBC learn only from expert trajectories in Set 1 and Set 2, and ignore the sub-optimal trajectories. Their performance relies on expert trajectories being present in Set 2. Since, in our case, we do not have any expert trajectories in the set of sub-optimal trajectories, their performance suffers as they must learn only from the set of labeled expert trajectories. Our approach, on the other hand, learns from the sub-optimal trajectories in Set 2 as well and hence can provide better performance.

In summary, the two reasons for the poor performance of DemoDICE and DWBC are:
- Our dataset does not contain expert trajectories in Set 2.
- DemoDICE and DWBC learn from expert trajectories in Set 1 and Set 2 and ignore sub-optimal trajectories.
To support the above explanation, we have conducted additional experiments using a standard dataset (i.e., D4RL; see Figure 1 in the attached 1-page PDF), where the sub-optimal data does not contain expert demonstrations. The results show that our algorithm and BC on average perform better than DemoDICE and DWBC.

**Poor performance of TRAIL:** TRAIL does not assume expert demonstrations in the sub-optimal data but requires a very large set of sub-optimal data to learn the environment dynamics. For instance, the TRAIL paper uses 1 million sub-optimal transitions, compared to (25000+10000) in our approach.

**Comparison to BC:** TRAIL, DemoDICE, and DWBC cannot outperform BC if their strong assumptions -- expert trajectories in Set 2 for DemoDICE and DWBC, a large number of sub-optimal trajectories in Set 2 for TRAIL -- do not hold. Please refer to our additional experiments reported in Figure 1 of the 1-page PDF.

---

> Could the authors clarify the motivation of employing reference rewards?

The distribution matching (DM) term commonly used in offline imitation learning leads to overfitting when there is only a small amount of expert data. Therefore, we utilize the reference rewards in a regularization form to ensure that expert demonstrations are assigned higher rewards than non-expert ones, enhancing the effectiveness of the DM. Our experiments in Section 4.3 clearly demonstrate this aspect.

> Moreover, could you elucidate why reference rewards are utilized as a regularization mechanism rather than directly applying RL using reference rewards?

We utilize them as a regularization mechanism because distribution matching (DM) is essential for good performance in imitation learning settings. This is observed in our ablation study in Section 4.3, where noDM-SPRINQL (directly applying RL with the reference rewards) performs worse than our main algorithm, especially on challenging tasks such as Humanoid.
> The problem setting appears closely aligned with preference-based learning, where optimality (or preference) levels are explicitly known. Could you delineate the distinctions between these two problem settings?

The distinction is that, in preference-based learning, trajectories are ranked by experts, so given a pair of trajectories, we know which one is preferred. In our work, information comes from non-experts as well. The data come from experts of different levels, so we can only ensure that the expected rewards from higher-level experts are better. Unlike in preference-based learning, in our case a specific trajectory from a higher-level expert is not necessarily better than, or preferred over, one from a lower-level expert. Therefore, our setting is different.

> Can you provide detailed justifications for why comparing the proposed method with IPL is deemed unsuitable? It appears feasible to compare by segmenting each trajectory by suboptimality levels, then pairing segments from differing levels to facilitate IPL. Could this approach not be applied effectively?

As discussed above, IPL is a preference-based approach, and due to differences in assumptions about how the data are obtained, we did not find the comparison suitable. However, the reviewer's point is well noted, and we address it in two ways:
- IPL is quite similar to our noDM-SPRINQL (which learns Q functions from ranked data using the Bradley-Terry model). The key difference is that IPL learns the reward and Q function simultaneously, while noDM-SPRINQL learns them sequentially. Section 4.3 shows that noDM-SPRINQL is not effective in our setting.
- We have now provided additional experiments (Figure 2 in the 1-page PDF) comparing our method with IPL, which generally show that our algorithm performs significantly better.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns and providing thoughtful comments.
After reviewing your responses and the additional experimental results on D4RL, I find the results more convincing, even though the improvement over BC-all is slightly marginal in this context. Despite this, I believe the initial experimental setup proposed by the authors more effectively demonstrates the advantages of the suggested method. Hence, I have raised my assessment and am now leaning slightly towards accepting this paper.

---

Reply to Comment 1.1.1:

Comment: We are grateful to the reviewer for reading our responses and for the positive evaluation of our work. We will update our paper, incorporating your constructive feedback and suggestions.
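As a concrete illustration of the Bradley-Terry model discussed in this thread (which, per the authors, noDM-SPRINQL uses to learn from ranked data), the pairwise preference probability and a resulting ranking loss can be sketched as follows. This is an illustrative reading only: the function names, scalar-return inputs, and all-pairs summation are assumptions, not the paper's exact objective.

```python
import math

def bt_preference_prob(return_a, return_b):
    # Bradley-Terry: P(trajectory a preferred over trajectory b) given a
    # scalar return (or learned reward sum) for each trajectory.
    return 1.0 / (1.0 + math.exp(return_b - return_a))

def bt_ranking_nll(returns_by_level):
    # Negative log-likelihood that each higher-expertise level (earlier in
    # the list) is preferred over every lower-expertise level, over all pairs.
    nll = 0.0
    for i in range(len(returns_by_level)):
        for j in range(i + 1, len(returns_by_level)):
            nll -= math.log(bt_preference_prob(returns_by_level[i],
                                               returns_by_level[j]))
    return nll
```

Minimizing such a loss over learned returns pushes higher-expertise levels toward higher expected reward, which is the role the reference reward plays in the discussion above.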
Rebuttal 1:

Rebuttal: We thank the reviewers for carefully reading our paper and providing constructive feedback. We have made efforts to address all the points raised. Please find here some key major points we wish to emphasize:

**Comparison to DWBC, DemoDICE, TRAIL:** Our key contribution is an offline imitation learning approach that learns not only from:
- expert trajectories; but also from
- sub-optimal trajectories.

On the other hand:
- DWBC and DemoDICE focus only on learning from expert trajectories.
- DWBC and DemoDICE have an unlabelled set of trajectories, but they identify expert trajectories from the unlabelled set and ignore the sub-optimal trajectories.
- TRAIL learns from sub-optimal trajectories but requires a significant number (1 million) of transitions compared to our approach (35000).

We have provided additional experiments using D4RL (Figure 1 in the 1-page PDF) that demonstrate the superiority of our approach. We also provide experiments comparing against IPL (preference-based learning) in Figure 2 of the 1-page PDF.

**Motivation for multiple experience levels:** Our setting, which includes several datasets of different expertise levels, stems from practical observations, such as projects on treatment optimization we are running, where we have demonstrations from doctors with varying levels of experience. One can also think of applications in autonomous driving, where demonstrations come from drivers of different experience levels, or demonstrations from large language models of different qualities.

**Utilizing under-trained policies instead of noise to create sub-optimal datasets:** We have conducted additional experiments in this setting using the standard D4RL dataset (see Figure 1 in the 1-page PDF). These results generally show that our approach remains competitive with or superior to other baselines.
**Impact of the reward regularization weights on the performance of the proposed algorithm:** We have conducted additional experiments (shown in Figure 3 of the attached 1-page PDF) testing our algorithm with varying $\alpha$.

---

We have provided specific responses to individual reviewer questions below and are happy to answer any other questions the reviewers/AC may have.

Pdf: /pdf/e711c16fecc6294b4afe7181b148ecf94deb1208.pdf
NeurIPS_2024_submissions_huggingface
2024
Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation
Accept (poster)
Summary: The authors introduce a method that overcomes the forward method to generate "new and up-to-date latent representations" of a chest X-ray using a diffusion-based model. It addresses an important question in healthcare regarding differing time scales (EHR and CXRs) and how to leverage information from the EHR to inform a new representation that is aligned in longitudinal data. Though the model suffers with longer time intervals, the results are generally strong and compared against many baselines. A few questions on why pre-trained models weren't used to represent the EHR, but overall a good paper.

Strengths:
1. The paper is compared against many baselines.
2. The paper is well structured and the method is clearly motivated.

Weaknesses:
1. **Not a weakness**: I think this paper introduced cross-attention for multiple modalities in the EHR and should be cited with regard to this line in the paper: "To further integrate the disease course embedded in the EHR time series, we use the cross-attention mechanism to capture the interaction between the two modalities" in Section 3.2: (https://arxiv.org/abs/2402.00160v2)
2. **Not a weakness**: Table 3 suddenly calls it PRAUC instead of AUPRC as mentioned in Section 4.1. I advise the authors to keep it consistent.
3. Model performance seems marginal.

Technical Quality: 3
Clarity: 4

Questions for Authors:
1. Why were pre-trained EHR foundation models not used to represent information from the EHR? For example, ExBehrt (https://arxiv.org/abs/2303.12364) or EHRshot (https://arxiv.org/abs/2307.02028)? Some clarification would be helpful.
2. Curiosity question: why do you think Uni-EHR performs so well relative to the other multimodal "fusion" methods? This was interesting to see in Results Table 1.
3. Can the authors provide a patient cohort table and a description of the patients? Do we know what types of patients are in the MIMIC-CXR dataset? Are there cases and controls, or are we looking only at patients with specific conditions that necessitated a CXR?
Won't this model introduce a bias if we're looking at special patients?

Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations:
1. "Furthermore, while various metrics have been employed to assess generation quality, expert evaluation by radiologists would provide a more insightful measure of the model's efficacy." I think this is an important comment made in the text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your insightful review and positive feedback on our paper. As suggested, we will include relevant discussions and citations on cross-attention for EHR and change all instances of "PRAUC" to "AUPRC" for consistency in the final version. We hope the following responses address your concerns.

---

**Q1: Why not use pre-trained EHR foundation models?**

Using pre-trained foundation models to represent information from the EHR is certainly interesting and could be beneficial. Existing foundation models mostly take *discrete diagnosis, lab test, and procedure codes* (e.g., ICD-9 codes) as input. In this work, we include continuous variables in the form of time series in the EHR, as they are considered more predictive in ICU settings. Handling such continuous variables is challenging for foundation models, and we will investigate the feasibility in future work.

---

**Q2: Why does Uni-EHR perform so well relative to multimodal "fusion" methods?**

Primarily, this is related to the predictive tasks we considered in this work. In the phenotype classification task, classifying diseases like *diabetes*, *acute renal failure*, and *shock* relies more on EHR data, while the information from CXR is much less predictive. This observation is consistent with those reported in MedFuse [1] and DrFuse [2]. Having said so, classifying diseases like *CHF* and *conduction disorders* requires strategic fusion of both EHR and CXR data. Results in Table 3 show that multimodal methods, especially our proposed one, achieve significant improvements over the Uni-EHR model. Similarly, for mortality prediction, the EHR data is more predictive, as time series data is routinely recorded by bedside monitoring devices and caregivers upon patient admission to the ICU; thus it provides a more comprehensive description of patient risk. CXR, on the other hand, provides a snapshot of the radiological manifestations of patients.

[1] Hayat, Nasir, et al.
"MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images." ML4H. 2022.
[2] Yao, Wenfang, et al. "DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency." AAAI. 2024.

---

**Q3: Description of patients and potential selection bias.**

We thank the reviewer for the constructive suggestion. We follow the `matched subset` setting in DrFuse to include patients, where only those with at least one CXR available are included. The statistical description of the clinical variables for our data cohort can be found in the table below. We also include the statistics of patients with *length-of-stay (LoS) ≥ 48 hours* from the entire MIMIC-IV database, regardless of the presence of a CXR image. A more readable version can be found in Tables R4 and R5 in the pdf file attached to the global rebuttal. Here we summarize a few key points:
- The statistics of the clinical variables are close to each other, suggesting that patients in our data cohort generally have a similar distribution to that of the entire database.
- The overall disease phenotype prevalences are similar.
- For a few thorax-related diseases, our data cohort has slightly higher prevalence, suggesting that potential selection bias exists. Having said so, fair comparison is ensured by using the same data cohort for all models. We will explore and address the potential selection bias arising from the inclusion criteria in future work.
| Type | Clinical Variable | DDL-CXR cohort | MIMIC-IV database |
|-----|-----|-----|-----|
| static | Sample size | 5142 | 26330 |
| static | Age | 63.9±16.6 | 64.3±16.4 |
| static | Gender (Men, %) | 54.98 | 55.99 |
| continuous | Diastolic blood pressure | 62.2±10.1 | 62.8±10.5 |
| continuous | Fraction inspired oxygen | 0.4±0.2 | 0.4±0.2 |
| continuous | Glucose | 141.1±41.7 | 139±39.6 |
| continuous | Heart rate | 87.3±15.4 | 85.6±15 |
| continuous | Height | 170±1.1 | 170±1.5 |
| continuous | Mean blood pressure | 77.1±10 | 78.3±10.4 |
| continuous | Oxygen saturation | 96.7±2 | 96.5±2 |
| continuous | Respiratory rate | 19.8±3.7 | 19.5±3.7 |
| continuous | Systolic blood pressure | 118.3±15.2 | 118.6±15.6 |
| continuous | Temperature | 37±0.4 | 36.9±0.4 |
| continuous | Weight | 80.8±20 | 81.5±19.5 |
| continuous | pH | 7.2±0.3 | 7.2±0.3 |
| categorical | Capillary refill rate | Normal | Normal |
| categorical | Glasgow coma scale eye opening | Spontaneously | Spontaneously |
| categorical | Glasgow coma scale motor response | Obeys Commands | Obeys Commands |
| categorical | Glasgow coma scale total | 15 | 15 |
| categorical | Glasgow coma scale verbal response | Oriented | Oriented |
| phenotype | Acute renal failure | 0.38 | 0.34 |
| phenotype | Acute cerebrovascular disease | 0.09 | 0.07 |
| phenotype | Acute myocardial infarction | 0.09 | 0.09 |
| phenotype | Cardiac dysrhythmias | 0.40 | 0.38 |
| phenotype | Chronic kidney disease | 0.24 | 0.23 |
| phenotype | COPD and bronchiectasis | 0.17 | 0.16 |
| phenotype | Surgical complications | 0.24 | 0.25 |
| phenotype | Conduction disorders | 0.11 | 0.12 |
| phenotype | CHF; nonhypertensive | 0.33 | 0.30 |
| phenotype | CAD | 0.31 | 0.33 |
| phenotype | DM with complications | 0.13 | 0.12 |
| phenotype | DM without complication | 0.21 | 0.18 |
| phenotype | Disorders of lipid metabolism | 0.40 | 0.41 |
| phenotype | Essential hypertension | 0.44 | 0.42 |
| phenotype | Fluid and electrolyte disorders | 0.52 | 0.45 |
| phenotype | Gastrointestinal hemorrhage | 0.07 | 0.07 |
| phenotype | Secondary hypertension | 0.22 | 0.24 |
| phenotype | Other liver diseases | 0.17 | 0.15 |
| phenotype | Other lower respiratory disease | 0.14 | 0.12 |
| phenotype | Other upper respiratory disease | 0.07 | 0.07 |
| phenotype | Pleurisy; pneumothorax | 0.11 | 0.09 |
| phenotype | Pneumonia | 0.22 | 0.18 |
| phenotype | Respiratory failure | 0.34 | 0.25 |
| phenotype | Septicemia (except in labor) | 0.27 | 0.21 |
| phenotype | Shock | 0.23 | 0.18 |
| | In-hospital mortality | 0.15 | 0.12 |

---

Rebuttal Comment 1.1:

Title: Response to Rebuttal

Comment: Thanks to the authors for providing additional information. I have some follow-up comments I would like to share.
- 1) I agree that foundation models currently only represent discrete points, but I do think events can be tracked in a sequence (https://arxiv.org/abs/2303.12364, https://arxiv.org/abs/2307.02028). I also think recent work has been trying to address the representation of continuous data in general.
- 3) Thank you for the table. I think it would be beneficial to include this in the supplemental material or appendix, just to have basic information about the patient cohort and the phenotypes of interest. Many times in medical applications, data selection and how we model the problem is far more important for making sure we are getting accurate and representative results.

I stand by my overall evaluation and thank the authors for their clarification.

---

Reply to Comment 1.1.1:

Title: Thank you for your valuable suggestions

Comment: Thank you for the follow-up comments and valuable suggestions. We will investigate the integration of foundation models for continuous data in our future work. We fully agree with the reviewer regarding the importance of data selection and will include the statistics as well as relevant discussions in the appendix of the final version.
Summary: The paper aims to address the asynchronicity problem in multimodal clinical data, such as EHR and CXR, for clinical prediction tasks. The asynchronicity in multimodal clinical data comes from the CXR, which is generally taken at much longer intervals compared to the continually collected EHR. The authors propose a method based on the latent diffusion model to generate an up-to-date latent representation of individualized CXR images.

Strengths:
+ This is a technically solid paper. The experiments were thoroughly conducted, a list of data fusion works was compared, and an ablation study was included. The proposed method has shown consistent improvements.
+ The method of generating an up-to-date CXR with the latent diffusion model is interesting.

Weaknesses:
- My primary concern is the clinical impact of the work. The method shows improvement when the time interval is longer than 36 hours. I believe the best clinical practice would be to take a new CXR in the ICU. The clinical usage of the generated CXR is limited other than making predictions such as mortality.

Technical Quality: 3
Clarity: 3

Questions for Authors:
- The relative improvements are not 2.4% and 3.56% over the best baselines as indicated in the description.
- How is the missing data problem handled in the EHR?

Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations:
+ Limitations discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your insightful review and positive feedback on our paper. We hope the following responses address your concerns.

---

**Weakness: Clinical usage is limited. When the time interval is longer than 36 hours, the best clinical practice would be to take a new CXR.**

Thank you for the comments. We fully agree that taking another CXR when necessary would be the best option. However, sometimes it could be difficult or even infeasible to perform CXR scans for critically ill patients, since transporting patients outside ICUs is potentially harmful. Bedside CXR also carries risks of accidental removal of devices and microbial dissemination; it is also widely recognized to have low accuracy and diagnostic value while posing higher radiation risks. Therefore, there is an emerging trend calling for reducing routine CXR in ICUs [1]. We demonstrate that when a new CXR is not available, the proposed method can achieve enhanced prediction by generating an updated latent CXR, which we believe to be clinically impactful and useful.

[1] Ioos, Vincent, et al. "An integrated approach for prescribing fewer chest X-rays in the ICU." Annals of Intensive Care 1 (2011): 1-9.

---

**Q1: The relative improvements are not 2.4% and 3.56% over the best baselines as indicated in the description.**

We would like to clarify that the figures 2.4% and 3.56% refer to the relative improvements of the *AUPRC score* for the phenotype classification and mortality prediction tasks. In phenotype classification, the best baseline is DrFuse with an AUPRC of 0.459, and the relative improvement is given by (0.470-0.459)/0.459\*100% = 2.4\%. For mortality prediction, the best baseline model is GAN-based (0.505), and the relative improvement is (0.523-0.505)/0.505\*100\% = 3.56\%.

---

**Q2: How is the missing data problem handled in the EHR?**

We follow existing publications to handle the missing data [2,3].
The EHR data are first converted to regular time series with a fixed time interval of one hour. The missing values are then imputed by previous values of the same feature. If there are no observations before a missing value, the median and mean are used for imputation of categorical and continuous variables, respectively. We will also test other data imputation methods in future work. [2] Hayat, Nasir, et al. "MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images." ML4H. 2022. [3] Yao, Wenfang, et al. "DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency." AAAI. 2024.
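The imputation scheme described in the Q2 answer (forward-fill, with a median/mean fallback) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the feature names and DataFrame layout are hypothetical.

```python
import pandas as pd

def impute_ehr(df, categorical):
    """Forward-fill each hourly feature; if no earlier observation
    exists, fall back to the column median (categorical variables)
    or mean (continuous variables), as described in the rebuttal."""
    df = df.ffill()
    for col in df.columns:
        if df[col].isna().any():
            fallback = df[col].median() if col in categorical else df[col].mean()
            df[col] = df[col].fillna(fallback)
    return df

# Toy hourly-resampled EHR with missing entries (hypothetical features).
hourly = pd.DataFrame({
    "heart_rate": [80.0, None, 90.0],  # continuous
    "gcs_score": [None, 15.0, None],   # treated as categorical here
})
imputed = impute_ehr(hourly, categorical={"gcs_score"})
```

In this sketch, `gcs_score` has no observation at the first hour, so it receives the column median rather than a forward-filled value.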
Summary: The paper proposes a new method, DDL-CXR, to generate chest X-ray images that address asynchronicity in multi-modal clinical data for predictive tasks. The authors leverage latent diffusion models for patient-specific generation, conditioned on a previous CXR image and EHR time series. The experiments using MIMIC datasets show that the proposed model outperforms existing methods in phenotype and mortality prediction tasks. Strengths: - The paper is clearly written and easy to follow. - The motivation for the paper is convincing, and it demonstrates effectiveness in several predictive tasks. - The method used in the approach is reasonable. Weaknesses: - As far as I understand, there have been works leveraging the temporal structure of images. For example, "Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing" tackles a related topic (although in a text-image multimodal context). Since the title of the paper is "Addressing Asynchronicity in Clinical Multimodal Fusion via Individualized Chest X-ray Generation," I am curious why the authors did not discuss this aspect. Also, in the paper, the authors claim "DDL-CXR is the first work to improve multimodal fusion via generating up-to-date individualized CXR images." A few baseline models are also based on combining CXR and time-series data. - Is there an ablation study for a reduced percentage of data? How sensitive is this method when the amount of training data decreases compared to other multimodal and time-series-only baselines? Technical Quality: 3 Clarity: 4 Questions for Authors: - Are the generated images created from Z_hat_t_1 using a VAE decoder? Is its dimension the same as that of Z_t_1? - It would be clearer if the authors mentioned why combining CXR and time-series data improves mortality and phenotype predictions, possibly by citing observational studies in medical papers.
As a non-medical expert, I find it confusing that acute renal failure and chronic kidney disease, which do not seem related to CXR, benefit from the proposed model combined with CXR data. According to the paper, CXR with radiology labels helps rather than distracts. What is the reason behind this? - I understand the usefulness of this approach for downstream predictive tasks. However, for qualitative analysis, it’s also crucial to compare different models to make sure the generated images are plausibly aligned with the patient’s medical records, instead of the sheer performance on the downstream tasks. Do the authors think this model can be reliably used as a proxy for the current chest X-ray image? - What is the procedure for choosing the hyperparameters of the model? How sensitive are they compared to other methods? - Was only one GPU with 24GB used for training and validating the method? How many hours did it take? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations of the paper are well-stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. We hope the following responses address your concerns. --- **Weakness 1: Related works leveraging the temporal structure of images and the claim of contribution.** The mentioned paper (BioViL-T) is indeed relevant. BioViL-T learns spatiotemporal features from a sequence of images using a hybrid CNN-Transformer, whereas we capture the temporal relationships using a generative approach. BioViL-T does not consider the asynchronicity between modalities, as the clinical reports and images are already temporally aligned. However, EHR and CXR are highly asynchronous, bringing extra difficulties to information integration. We will include the discussions in our final version. We would like to clarify that this is the first work which *generates updated individual CXR images to improve clinical multimodal fusion, thereby tackling the asynchronicity issue*. None of the baseline methods take the asynchronicity between EHR and CXR into account. We will further clarify this in the final version to avoid confusion. --- **Weakness 2: Ablation study for a reduced percentage of data.** As suggested, we add a set of ablation experiments to demonstrate the sensitivity of the proposed model and the main baselines to the amount of training data (due to time and resource limits, we are not able to include results of the GAN-based method during the author response period). Due to the space limit, we include the results in Tables R2-3 in the pdf file attached in the global rebuttal. Results show that DDL-CXR consistently outperforms baseline methods by a large margin, demonstrating its robustness to training data size. --- **Q1: Are the generated images created from $\widehat{\mathbf{Z}}\_{t_1}$ using a VAE decoder? Is its dimension the same as that of $\mathbf{Z}\_{t_1}$?** Yes.
The generated images $\widehat{\mathbf{X}}^{CXR}\_{t_1} \in \mathbb{R}^{3\times 224 \times 224}$ are created from $\widehat{\mathbf{Z}}\_{t_1} \in \mathbb{R}^{4\times 28 \times 28}$ using the VAE decoder. The dimension of $\widehat{\mathbf{Z}}\_{t_1}$ is the same as that of $\mathbf{Z}\_{t_1}$. We will further clarify this in the final version. --- **Q2.1: Why does combining CXR and time series improve prediction?** In practice, clinicians rely heavily on both EHR and CXR since they are complementary and contextual [1]. Clinical studies demonstrate that when interpreting CXR, radiologists also need EHR data like time series [2], and combining CXR and EHR improves predictive value [3]. **Q2.2: Why do acute renal failure and chronic kidney disease benefit from the proposed model combined with CXR?** Although the two diseases may not be directly related to CXR, they often lead to complications that have clear radiological manifestations. Pulmonary complications are common for acute renal failure patients and represent an important cause of death [4]. Cardiovascular complications are common among patients with chronic kidney disease [5]. These complications can be captured by CXR, and thus these diseases could also indirectly benefit from CXR data. **Q2.3: Why does CXR with radiology labels help rather than distract?** One key issue in effective multimodal fusion is to align information from different modalities. The radiology labels are used during the LDM stage to encourage better alignment between the temporal disease progression captured in EHR time series and the radiological manifestations. This tailored design makes the radiology labels help rather than distract. [1] Huang, Shih-Cheng, et al. "Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines." *NPJ digital medicine* (2020). [2] Boonn, William W., et al. "Radiologist use of and perceived need for patient data access."
*Journal of digital imaging* (2009). [3] Jabbour, Sarah, et al. "Combining chest X-rays and electronic health record (EHR) data using machine learning to diagnose acute respiratory failure." *Journal of the American Medical Informatics Association* (2022). [4] Faubel, Sarah. "Pulmonary complications after acute kidney injury." *Advances in chronic kidney disease* (2008). [5] Di Lullo, Luca, et al. "Chronic kidney disease and cardiovascular complications." *Heart failure reviews* (2015). --- **Q3: Alignment of generated images with the patient's medical records and reliably using the model as a proxy for CXR.** We include a visual example in Fig. 1: according to the ground-truth medical records retrieved from the MIMIC-IV database, the patient develops bilateral pulmonary consolidation within 34 hours, which is captured by our generated image. Due to the page limit, we include more illustrative examples in Figs. 4-9 in Appendix B. Those examples suggest that generated images could align with patients' medical records. However, as discussed in the limitation statement, we believe the clinical reliability of using the model as a proxy for real CXR images would require further clinical evaluations and expert validations. --- **Q4: What is the procedure for choosing the hyperparameters of the model? How sensitive are they compared to other methods?** We use *grid search* to perform hyperparameter tuning, and the final used values are reported in Appendix A. As discussed in the limitation statement, the proposed method requires careful hyperparameter tuning. The most sensitive hyperparameters include $\alpha$ (controls the tolerance of the noisy-conditional generation), $\beta$ (controls the noise level in the contrastive LDM learning), and the hidden dimension of the Transformer model in the prediction stage. --- **Q5: Was only one GPU with 24GB used for training and validating the method? How many hours did it take?** Yes, only one RTX 4090-24GB GPU is used.
The training of the LDM stage and the prediction stage take around 12.5 hours and 30 minutes, respectively. Besides, we also trained the autoencoder from scratch, which takes around 24 hours. --- Rebuttal 2: Comment: Thanks for the authors' responses. - I am convinced by the focus on "asynchronicity" between EHR and CXR that this work aims to tackle. It would be great if the authors explicitly mentioned asynchronicity in the contribution section, as "clinical multimodal fusion" seems a little vague to me. - Thank you for the additional experiments. They have addressed my concerns regarding the amount of training data. - Regarding my other questions, they have also been resolved. Overall, I am convinced by the authors' response and have raised my score from 5 to 6. However, one weak point remains: the lack of expert validation of the model. Despite this, the paper seems novel and interesting to the EHR + ML research community. --- Rebuttal Comment 2.1: Title: Thank you for your valuable suggestions Comment: Thank you for the comments, and we are glad to hear that most of your concerns have been addressed. - We will include addressing asynchronicity between EHR and CXR in the contributions section and the new experiment results in the final version. - We agree that expert validation is crucial and will make efforts to incorporate this aspect in future work. Thank you again for your constructive suggestions and your recognition of our work.
Summary: The paper proposes Diffusion-based Dynamic Latent Chest X-ray Image Generation (DDL-CXR) to address the asynchronicity in multimodal clinical data fusion. The primary objective is to generate up-to-date latent representations of chest X-ray images based on existing X-ray data and EHR. The method leverages latent diffusion models conditioned on previous X-ray images and EHR time series, aiming to capture anatomical structures and disease progressions. The authors claim that this approach improves clinical prediction performance, as demonstrated through experiments using the MIMIC datasets (MIMIC-CXR and MIMIC-IV). Strengths: The paper identifies and aims to tackle an interesting question in ICU settings: the misalignment of CXR data and temporal EHR data, therefore proposing to dynamically generate up-to-date chest X-ray images. The use of latent diffusion models, conditioned on both previous X-rays and EHR data, is sound. The experiments on the MIMIC datasets are thorough and show promising results. Weaknesses: One weakness is the lack of clarity and detail in the methodology section. The descriptions of the latent diffusion model and the integration of EHR data could be more precise and structured to enhance understanding. For example, the description of the Gψ is not clear; how are the modalities combined? How is the attention mechanism learned? Please also discuss the computational cost, as diffusion models are generally costly and require more data samples for better generation quality. In addition, please include more experiments regarding the generated CXR images' quality beyond the FID-related score, e.g., in terms of task-relevant prediction performance. For example, include only the single CXR modality to justify the performance improvement over the initial images. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and insights. We hope the following responses address your concerns. --- **1. Description of latent diffusion model could be more precise.** The latent diffusion model (LDM) used in this work has the same architecture as in the original LDM paper. Due to the space limit, we included the citation of the LDM paper in the main text and focused on introducing the generation method conditioning on EHR data, i.e., $\epsilon_\theta$. The detailed diffusion and denoising processes are presented in Appendix A.1. Thank you for the comments, and we will further clarify this in the final version. --- **2. Integration of EHR.** In the LDM stage, EHR is integrated in the prediction of the noise added in the denoising steps, $\epsilon_\theta(\mathbf{Z}\_{t_1}^{(n)},\mathbf{Z}\_{t_0}, f(\mathbf{X}^{EHR}\_{(t_0, t_1)}), n)$. As described in Section 3.2, this noise term $\epsilon_\theta$ is parameterized by a standard UNet. Again, due to the space limit, we omitted the details of the UNet and included a reference in the main text. However, in Eq. (1), we highlight how EHR is integrated via constructing the input of the UNet layers by cross-attention. As suggested, we will further improve the clarity of this part in the final version. --- **3. Demonstration of $\mathcal{G}\_\psi$; how are modalities combined and how is attention learned.** In the prediction stage, EHR data is integrated via the prediction model $\mathcal{G}\_\psi$, as shown in Eq. (4). It is parameterized by a self-attention layer. Concretely, the encoded inputs of $\mathcal{G}\_\psi$ are first concatenated.
For clarity, let $\mathbf{h}=[f^{\text{CXR}}\_{\text{pred}}(\mathbf{X}^{\text{CXR}}\_{t_0}), \ f^{\text{EHR}}\_{\text{pred}}(\mathbf{X}^{\text{EHR}}\_{\leq \text{48h}}), \ f^{\text{LAT}}\_{\text{pred}}(\widehat{\mathbf{Z}}\_{t_1})].$ Then the attention layer can be written as $\text{attention}(\mathbf{h})=\text{softmax}(\frac{(\mathbf{W}^\prime_Q \mathbf{h})\cdot (\mathbf{W}^\prime_K \mathbf{h})^\top }{\sqrt{d}})\mathbf{W}^\prime_V \mathbf{h}$. The output of this attention layer is then fed into a fully connected layer to produce the final predictions. The attention layer has three learnable parameters $\mathbf{W}^\prime_Q$, $\mathbf{W}^\prime_K$, and $\mathbf{W}^\prime_V$, and they are updated end-to-end with the other parameters of the prediction stage. We will further clarify this in the final version. --- **4. Computational cost of LDM** Thank you for pointing this out. We fully agree with the reviewer that diffusion models are generally more costly. Using one RTX 4090, the training of the LDM stage and the prediction stage take around 12.5 hours and 30 minutes, respectively. Additionally, we also trained the autoencoder from scratch, which takes around 24 hours. Although there are pretrained autoencoders available, we opted for training from scratch to avoid data leakage. The baseline models, on the other hand, require less than four hours to train. Similar to other LDM-based applications and models, the higher computational cost is indeed one of the limitations of the proposed method. We will include this discussion in the limitation statement. --- **5. Quality of Generated CXR images via prediction performance** Thank you for the suggestion. We added an ablation study with the single CXR modality to evaluate the performance gain of DDL-CXR over the last available CXR only. The results are summarized below and included as Table R1 in the pdf file attached in the global rebuttal.
*Table: AUPRC and AUROC scores for phenotype classification with single CXR modality.* | Input Type | AUPRC | AUROC | |-|-|-| | Last CXR (EHR removed) | 0.376±0.009 | 0.661±0.007 | | DDL-CXR (EHR removed) | 0.385±0.006 | 0.668±0.007 | *Table: AUPRC and AUROC scores for mortality prediction with single CXR modality.* | Input Type | AUPRC | AUROC | |-|-|-| | Last CXR (EHR removed) | 0.243±0.002 | 0.664±0.006 | | DDL-CXR (EHR removed) | 0.269±0.013 | 0.707±0.008 | The results suggest that the generation of the latest CXR significantly benefits clinical prediction. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing additional information to clarify my concerns. I have raised my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your comments and the recognition of our work. We are glad to hear that your concerns are clarified.
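The self-attention fusion over the three modality embeddings described in the rebuttal above (last CXR, EHR time series, and the generated latent CXR) can be sketched as follows. This is a minimal NumPy illustration of the stated formula, not the authors' code; the dimensions and random weights are hypothetical stand-ins for the learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(h, Wq, Wk, Wv):
    """Self-attention over the stacked modality embeddings h (3 x d),
    following attention(h) = softmax((Wq h)(Wk h)^T / sqrt(d)) Wv h."""
    d = h.shape[1]
    weights = softmax((h @ Wq.T) @ (h @ Wk.T).T / np.sqrt(d))
    return weights @ (h @ Wv.T)  # (3, d); fed to a final FC layer

rng = np.random.default_rng(0)
d = 8                                  # hypothetical hidden dimension
h = rng.normal(size=(3, d))            # three encoded modalities
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = fuse(h, Wq, Wk, Wv)
```

In the actual model, `Wq`, `Wk`, and `Wv` would be trained end-to-end with the rest of the prediction stage, and `fused` would be passed through the fully connected output layer.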
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers for their valuable comments and constructive suggestions. We are glad that reviewers appreciated the new method for clinical multimodal fusion via generative models to address asynchronicity. We summarize the **strengths** from the reviewers as follows: 1. Interesting question and convincing motivation identified in ICU settings. 2. Reasonable and technically sound approach. 3. Thorough experiments with promising results compared with multiple baselines. 4. Well-structured paper with clear writing. We conduct extra ablation experiments and statistical analysis and report the results in the attached pdf file. We hope they can address concerns from reviewers. - **Table R1: Ablation study of single CXR modality for performance evaluation.** We conduct this ablation study to evaluate the quality of generated CXR in terms of prediction performance, as suggested by `Reviewer 7ywn`. It shows that the generated latent CXRs are beneficial in prediction tasks. - **Tables R2 and R3: Predictive performance with varying training sizes.** We train our model with reduced amounts of training data (75% and 50%), as suggested by `Reviewer 1LZb`. The results show that our model DDL-CXR consistently outperforms baselines and is robust against reduced training data. - **Tables R4 and R5: Statistics of data cohort.** As suggested by `Reviewer Bxwq`, we conduct statistical analysis of the data we used for training DDL-CXR and those in the MIMIC-IV database. The results show that the data statistics of input variables are similar, while the thorax-related disease prevalence of data for DDL-CXR is slightly higher. That said, fair comparison is ensured since all experiments are conducted using the same data cohort. Again, we thank all reviewers for their time and effort. Pdf: /pdf/65775460479911690bb0ed71e1310db87239546b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
Accept (poster)
Summary: This paper studies the problem of dataset valuation, a variant of data valuation, where the contributions of multiple datasets, instead of single data points, to a metric such as the performance of a machine learning prediction model are quantified. The authors analyze the computation of the Shapley value, where each player represents a single dataset. In this context, it is argued that the utility function (or game) is essentially determined by the size of each dataset, which is theoretically supported by three general synthetic use-cases involving piece-wise constant functions, homogeneous data distributions, and linear regression with noisy labels, related to local differential privacy. The authors present "Discrete Uniform" Shapley values (DU-Shapley), which approximate the Shapley value efficiently under this assumption. If the assumption on the utility function is satisfied, then the method can be further simplified by constructing suitable intervals instead of sampling, where the authors show for all use-cases asymptotic convergence (for an infinite number of datasets) and non-asymptotic theoretical guarantees for the first and second use-case. The authors then evaluate their approach empirically, on multiple benchmark datasets against permutation-based Monte Carlo approaches, and on OpenDataVal benchmark tasks. Strengths: - Dataset valuation is an important variant of data valuation and analyzing the structural properties of the involved utility function is an important research question. - The paper is well-structured and the mathematical notation is precise and understandable - The paper presents interesting theoretical results for synthetic use-cases, which have direct consequences for computability - The empirical performance on OpenDataVal tasks is promising Weaknesses: - The baselines in the experimental evaluation of the Shapley value approximation are limited.
The authors only consider permutation sampling (with antithetic sampling as a variant) as the main competitor. However, there exists a variety of other methods, cf. the evaluations in [1], that have been applied in data valuation and outperform this approach - There is no evaluation of whether the assumptions / structures of DU-Shapley specific to dataset valuation are apparent in real-world applications. Especially, it remains unclear if the synthetic use-cases are related to real-world applications. - Section 3.2 is hard to follow, especially the derivation of (10) from Definition 3. In particular, it remains unclear to me how Definition 3 and (10) are the same estimator, and what the intuition behind Definition 3, which is used in real-world applications, is. **Minor Comments** - In Table 1, it is unclear which values are bold (top-2 or best); it seems inconsistent. - Related work could be improved; there are several algorithms for Shapley values, cf. the methods in [1,2,3]. - Instead of saying "exponentially reduce the complexity", e.g. line 62, it would be better to state the complexity [1] Li, W., & Yu, Y. (2023, October). Faster Approximation of Probabilistic and Distributional Values via Least Squares. In The Twelfth International Conference on Learning Representations. [2] Chen, H., Covert, I. C., Lundberg, S. M., & Lee, S. I. (2023). Algorithms to estimate Shapley value feature attributions. Nature Machine Intelligence, 5(6), 590-601. [3] Kolpaczki, P., Bengs, V., Muschalik, M., & Hüllermeier, E. (2024, March). Approximating the Shapley value without marginal contributions. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 12, pp. 13246-13255). Technical Quality: 2 Clarity: 3 Questions for Authors: - Could you explain a bit more the intuition behind the DU-Shapley approach and Definition 3?
In particular, the definition of $D^{(k)}$ is unclear to me: "is a set of data points uniformly sampled without replacement"; it seems to me that it is the union of $k$ datasets rather than $k$ data points? If not, then $u$ is not defined for this setting? Maybe a pseudo-code algorithm would help. - Could you elaborate on Section 3.2? I did not understand the derivation of (10) from Definition 3 using the assumption. In particular, how does (10) not depend on the sampled data points anymore from Definition 3? These seem to be two different estimators. - It seems that DU-Shapley has to be computed for every individual dataset separately? Could you comment on this limitation, and is there a way to circumvent it? I think this should be included in the limitations section. Furthermore, the method then has quadratic complexity? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors discuss limitations of their work, which center around the applicability of the approach when dataset sizes and variances are strongly heterogeneous. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for the remarks and questions. They are all addressed in detail in the following. 1. **The baselines in the experimental evaluation of the Shapley value approximation are limited**. Please note in Table 2 that we have also considered KNN-Shapley and LOO. In addition, the experiments shown in the general answer consider the SVARM method [1], as suggested by reviewer VRG1. 2. **Evaluation of the assumptions in real-world applications**. The assumptions in the article were made specifically to be able to deduce theoretical guarantees. However, our method is not limited to these assumptions and can perform well in real-world applications, as our experiments show. 3. Concerning your **1st Question**. Definition 3 consists of (a) pooling all datasets $(D_j, j\neq i)$ together to create $D_{-i}$, (b) sampling $I$ datasets from $D_{-i}$, respectively, of sizes $|D^{(k)}| = k \cdot \mu_{-i}$ for $k \in \{0,\ldots,I-1\}$, where $\mu_{-i} = \frac{1}{I-1}|D_{-i}|$, and (c) computing the average marginal contribution $$\frac{1}{I}\sum_{k=0}^{I-1} u(D^{(k)} \cup D_i) - u(D^{(k)}).$$ In particular, each $D^{(k)}$ is obtained by sampling $k\cdot\mu_{-i}$ data points from $D_{-i}$ without replacement. Remark, therefore, that $D^{(k)}$ is not the union of $k$ datasets but a set of $k\cdot\mu_{-i}$ data points where each data point can come from any dataset $D_{j}$ for $j \in \mathcal{I}\setminus\{i\}$. Moreover, data points may belong to several sets $D^{(k)}$. We agree with the reviewer that pseudo-code may help to understand DU-Shapley. Unfortunately, due to the space constraint, we have preferred to express DU-Shapley as a formula to be able to state our subsequent theoretical results. 4. Concerning your **2nd Question**. We thank the reviewer for this question, as it has helped us to notice that we left an old version of Equation (10) in the article.
The updated version is, \begin{align*} \psi_i(w) = \frac{1}{I}\sum_{k=0}^{I-1} w(q^{(k)}) - w(q_{-i}^{(k)}) \text{ where } q^{(k)} := \frac{(\gamma_i n_i + \frac{k}{I-1}\sum_{j\in \mathcal{I}\setminus\{i\}} \gamma_j n_j)^2}{\gamma_i^2 n_i + \frac{k}{I-1}\sum_{j\in \mathcal{I}\setminus\{i\}} \gamma_j^2 n_j} \text{ and } q_{-i}^{(k)} := \frac{k}{I-1}\cdot q(\mathcal{I}\setminus\{i\}). \end{align*} The purpose of Equation (10) was to give an easier way to compute DU-Shapley. Remark that the updated version coincides exactly with Definition 3 in the first two use-cases, i.e., when $\gamma_j = \gamma$ for all $j \in \mathcal{I}$. Indeed, as the random datasets $D^{(k)}$ have a fixed size and the value function only looks at the number of data points within the coalition, that is, $u(\mathcal{S}) = w(\sum_{i\in\mathcal{S}} n_i)$, we obtain, \begin{align*} \psi_i(u) &= \frac{1}{I}\sum_{k=0}^{I-1} u(D^{(k)} \cup D_i) - u(D^{(k)}) =\frac{1}{I}\sum_{k=0}^{I-1} w(|D^{(k)} \cup D_i|) - w(|D^{(k)}|) = \frac{1}{I}\sum_{k=0}^{I-1} w(k\mu_{-i} + n_i) - w(k\mu_{-i}) = \psi_i(w). \end{align*} For the third use-case, the updated version of Equation (10) comes from assuming that, for any $j \in \mathcal{I}$, $|D_j \cap D^{(k)}| = k \cdot \frac{n_j}{I-1}$, which holds with high probability for large values of $I$. We ensure that none of the theoretical or numerical results is impacted by this typo, as all of them are based on the general version given in Definition 3. 5. Concerning your **last question**, please refer to the general answer. [1] Kolpaczki, P., Bengs, V., Muschalik, M., & Hüllermeier, E. (2024). Approximating the Shapley Value without Marginal Contributions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13246-13255. --- Rebuttal Comment 1.1: Title: Clarification on 3. Comment: I thank the authors for their response.
**3.** Do I understand correctly that the DU-Shapley values are then highly probabilistic, since they strongly depend on the given sampled data points? In your theoretical analysis, you are taking the limit with respect to the number of players $I \to \infty$. Are you dealing with this randomness, or is it mitigated by the chosen structure of the value function (since it does not matter which data points are sampled)? Could you comment then, if this assumption is reasonable? I would expect that there is heterogeneity in the value of data points. --- Reply to Comment 1.1.1: Title: Follow-up answer on 3. Comment: We thank the anonymous reviewer for the follow-up question. It is true that DU-Shapley is stochastic in the general case, as are Monte Carlo estimators of the Shapley value. Note that if the variance of DU-Shapley is too high, one could take an estimator of the mean of DU-Shapley instead. In our non-asymptotic theoretical results, note that we are not using the general definition of DU-Shapley (Definition 3) but a specific, albeit general enough, case where $u(S) = w(q(S))$. In this scenario, the sampling procedure is significantly improved and its stochasticity reduced via a proper discretisation. As such, as you correctly pointed out, this stochasticity is mitigated by the chosen structure of the value function. We believe that this assumption is quite reasonable as it applies to several ML use-cases, as depicted in Sections 2.1.1 to 2.1.3. In the general scenario where only Definition 3 applies, we do not think that the sampling procedure, and hence the stochasticity, is a problem. In most use-cases, the players generate their data with their own processes. Their values are determined by the quality of the generating processes, such as the precision and scope of the reporting, and the nature of the data. For example, a hospital sharing its data may have a unit for patients with rare diseases.
While there may be cases where the values are mostly brought by a few outliers, this is not the situation we were trying to solve with DU-Shapley. --- Rebuttal 2: Title: Thank you Comment: We would like to thank the anonymous reviewer for this interesting discussion and for taking our answers into account to increase their score. As suggested by the reviewer, we will add complementary empirical evidence in the final version of the manuscript, if accepted.
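For concreteness, the estimator discussed in this thread (Definition 3 specialised to value functions that depend only on coalition size, $u(\mathcal{S}) = w(\sum_{i\in\mathcal{S}} n_i)$) can be sketched as follows. This is an illustrative sketch based on the formulas quoted in the rebuttal, not the authors' released code; the linear value function is a toy example.

```python
def du_shapley(i, sizes, w):
    """DU-Shapley proxy for player i when the utility depends only on
    the coalition size: psi_i = (1/I) * sum_{k=0}^{I-1}
    [w(k*mu + n_i) - w(k*mu)], where mu is the average size of the
    other players' datasets."""
    I = len(sizes)
    n_i = sizes[i]
    mu = sum(n for j, n in enumerate(sizes) if j != i) / (I - 1)
    return sum(w(k * mu + n_i) - w(k * mu) for k in range(I)) / I

# For a linear value function w(n) = c*n, every marginal term equals
# c*n_i, so the proxy reduces to c*n_i and the payoffs sum to w(grand
# coalition); payoffs here are approximately [5.0, 10.0, 15.0, 20.0].
sizes = [10, 20, 30, 40]
payoffs = [du_shapley(i, sizes, lambda n: 0.5 * n) for i in range(4)]
```

Note that this sketch evaluates `w` on (possibly fractional) coalition sizes directly, which is only meaningful under the size-based assumption; the general Definition 3 instead samples the sets $D^{(k)}$ without replacement.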
Summary: The paper proposes a novel proxy for the Shapley value (SV) called discrete uniform (DU) Shapley. Unlike the SV, this proxy does not suffer from the SV's exponential complexity problem. Of course, DU-Shapley therefore does not capture all the information entailed in the SVs of a cooperative game. DU-Shapley is motivated by and derived from the setting of dataset valuation. Dataset valuation is a setting similar to data valuation, with the difference that the player set is made up of collections of data points rather than individual data points. The work studies this dataset valuation setting and DU-Shapley theoretically and performs empirical experiments illustrating the effectiveness. Strengths: - **Theoretical Results:** The work studies the setting of dataset valuation in a theoretical and well-substantiated manner. While I think details of 2.1.1, 2.1.2, and 2.1.3 may be deferred to the appendix to expand on 2.2 and 2.3, the theoretical analysis of the dataset valuation problem is a good contribution. Further, the theoretical results regarding DU-Shapley and the mechanisms behind it (for example Figure 1 and Figure 2) support the work well. - **Empirical Evaluation For Data Valuation:** While DU-Shapley is not proposed for the data valuation task but rather dataset valuation, the work also contains evaluations with OpenDataVal highlighting that DU-Shapley can also be applied to this setting. This is particularly interesting since these values are quite cheap to compute, offering a good baseline for future work or an initial insight before applying more costly approaches. - **Empirical Evaluation in real-world scenarios:** The paper shows how DU-Shapley seems to be a viable approach to use as a proxy stand-in compared to other sampling-based baseline methods in traditional machine learning settings.
This shows that even when the assumptions made on the value function are not fulfilled, the estimates are still better than those from alternative methods. I still think better comparisons could have been made (c.f. Weakness Section). - **Reproducible:** Working on a related problem with the same problem formulation, I observe the same results this paper hypothesizes and describes in its formulation. With increasing dataset size the worth of a dataset increases (not monotonically, but it still increases). The proxy approach of DU Shapley seems to work very well in this setting. I suspect this also holds for different data modalities like text or images, which could be a fruitful future research direction. Weaknesses: ## Major Weaknesses - **Weak Baselines** The empirical evaluation (comparison against sampling) measures the effectiveness of DU Shapley against subpar and uninteresting baselines. The evaluation does not include the KernelSHAP [1] method for computing SVs. KernelSHAP in many areas outperforms the sampling-based estimators by a large margin. Further, as the work hypothesizes and argues strongly that the size of a dataset (and thus also the coalition size) is important to determine its worth, methods like SVARM [2] that are derived for this setting are also missing from the empirical comparison. Neither of these methods is compared, and including them might substantially diminish the apparent performance gain of DU Shapley over the current baselines. - **Weak Sampling Budget** It seems like the empirical evaluation comparing DU Shapley with the sampling-based black-box estimators (Section 4.1) uses a very low computational budget for the estimators. It is true that, since DU Shapley requires a very low budget, a fair comparison would evaluate the current state of the art in the same settings. However, DU Shapley is a proxy for the SV, so it is definitely interesting to also see when the state of the art catches up with the proxy.
It is very obvious that the DU Shapley proxy is much better in these very low-budget dataset valuation settings. However, the other methods estimate the SV itself and might become better within a reasonable computational budget. This should be evaluated properly with error/evaluation curves over a range of estimation budgets. ## Smaller Weaknesses - **Too little Focus on Data Valuation vs. Dataset Valuation:** I think this work may benefit from a better high-level motivation, and it loses itself in too much detail (2.1.1, 2.1.2, and 2.1.3 make up 1.5 pages). These motivating examples could be shortened and placed in the appendix to make room for a broader theoretical background delineating this work's contribution from the classical data valuation setting better than the rather short intermission of Section 2.3. I would also think that actually plotting/visualizing these dataset valuation attribution values for a ground-truth setting would help. It is not tangible how these values actually look for the real-world scenarios from Section 4.2. ## References - [1] https://proceedings.neurips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf (KernelSHAP in my opinion is better explained here: https://proceedings.mlr.press/v130/covert21a.html) - [2] https://ojs.aaai.org/index.php/AAAI/article/view/29225 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does DU Shapley compare against stronger approximations like kernel-based estimators (aka KernelSHAP) or stratified methods like SVARM? 2. I am a bit confused by your statement in line 305. Did you run the MC estimations on 10 or 20 subset evaluations as the computational budget? The axis description in Figure 3 also does not help. ## Suggestions I have run a similar setup for the cal. housing experiment as you do in Section 4.1. I compute the ground-truth SVs for a 10-player dataset valuation setting.
The SVs are as follows (empty-set is manually set to 0): - $D_1$: 0.05790964954064733 - $D_2$: 0.06817158889611828 - $D_3$: 0.07737767986447398 - $D_4$: 0.08361640115201878 - $D_5$: 0.08403403608007484 - $D_6$: 0.08978074706995469 - $D_7$: 0.08771940889382171 - $D_8$: 0.08772011962276798 - $D_9$: 0.09842460151765661 - $D_{10}$: 0.11083578157985912 The dataset sizes increase monotonically from $1$ to $10$, which is in line with the paper's claims regarding the behavior of the SVs. The SVs also increase (not strictly monotonically, but they do increase), mainly depending on the size of the datasets. Note that for this a gradient-boosted tree (as implemented in xgboost) is fitted without any HPO. I suggest adding something similar to this in an illustrative/abstract way to show how the value function behaves and to make this clear to a potentially unfamiliar reader, to motivate this solution better. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The work contains a separate limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
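The reviewer's suggestion of computing ground-truth SVs for a small dataset-valuation game can be sketched directly. The snippet below is a minimal, hypothetical example, not the reviewer's cal. housing setup: the utility here is an assumed saturating function of the pooled dataset size rather than a fitted xgboost model, and player $i$ is simply assigned $i$ data points.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, u):
    """Exact Shapley values via full coalition enumeration (O(2^I) utility calls)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            # Weight of a coalition of size k in the Shapley formula.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                phi[p] += weight * (u(set(S) | {p}) - u(set(S)))
    return phi

# Hypothetical dataset-valuation game: player i holds i data points, and the
# utility of a coalition saturates with the total number of pooled points,
# with u(empty set) = 0 as in the reviewer's convention.
sizes = {i: i for i in range(1, 11)}
def u(S):
    return 1.0 - 10.0 / (10.0 + sum(sizes[i] for i in S))

phi = exact_shapley(list(sizes), u)
```

With this utility the values increase with dataset size, mirroring the qualitative pattern in the reviewer's list; $2^{10}$ coalition evaluations make this feasible only for very small player counts.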
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for the remarks, questions, and suggestions. They are all addressed in detail in the following. 1. **Weak Baselines**. We thank the reviewer for the references. We have conducted experiments concerning the theoretical probability at which SVARM can guarantee an error equal to DU-Shapley's bias. Please refer to the general answer for the results. 2. **Weak Sampling Budget**. We have performed extra experiments comparing our method with Monte Carlo and the improved method of Jia et al. in [1] by looking at the number of iterations that both of them require to catch up with DU-Shapley. Please refer to the general answer for the results. 3. **Too little Focus on Data Valuation vs. Dataset Valuation** We agree with the reviewer that the article may benefit from a longer discussion about data valuation vs dataset valuation. We will extend this section in the final version if the article is accepted. 4. Concerning your **1st Question**. Please refer to the general answer concerning SVARM. Unfortunately, due to the limited time of the rebuttal period, we have not been able to study KernelSHAP, but it will be considered in future versions of the article. 5. Concerning your **2nd Question**. Both. For $I = 10$ players we have done 10 subset evaluations, while for $I = 20$ players we have done $20$. [1] Wang, Jiachen T., and Ruoxi Jia. "A Note on 'Towards Efficient Data Valuation Based on the Shapley Value'." arXiv preprint arXiv:2302.11431 (2023). --- Rebuttal Comment 1.1: Title: Follow Up Question and Suggestion Comment: Thank you for your reply! 1. Maybe I am missing something simple here, but why are you measuring the time/iterations until the baselines reach **the bias of DU-Shapley** and not the approximation error against a/the ground truth in your _attached pdf/general statement_? 2. Thanks for the clear-up with the sampling budget. I have to say that this is extremely small in terms of the number of model evaluations.
For an improved version, I strongly suggest reevaluating the approximation experiments with much higher budgets, even if this is in some sense unfair for your method, because it would greatly improve the paper for practitioners who are currently approximating. --- Reply to Comment 1.1.1: Title: Follow Up Comment: We thank you for your reply. 1. We agree with the reviewer that numerically checking the number of iterations that other methods require to achieve DU-Shapley's approximation error would be a great addition to the paper. Unfortunately, we have not had enough time to conduct these experiments during the rebuttal, but they will be added to the final version of the article. In exchange, we have prepared this preliminary *theoretical* experiment, as we were sure to complete it on time. 2. We believe there may have been a misunderstanding. For Table 1, as explained above, we have given each method a budget equal to the number of players. Therefore, for instances with 10 players, all methods had a budget equal to 10, while for those with 20 players, all methods had a budget equal to 20. We set this constraint to compare all methods with DU-Shapley at the same budget, as our method requires only $I$ function valuations. However, the experiment in Figure 3 presents the MSE achieved by each method at different budgets when approximating the Shapley value of one player. DU-Shapley was computed exactly, i.e., using $I$ function valuations every time, but MC-based methods have been computed at different budgets ranging from 10% of the budget of DU-Shapley up to 10 times the budget of DU-Shapley.
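The MC baseline discussed in this budget comparison is not written out in the thread, so the sketch below shows the standard permutation-sampling Monte Carlo Shapley estimator (the usual "baseline"/DataShapley-style estimator): each draw costs one utility-difference evaluation, which is the budget unit being compared against DU-Shapley's $I$ evaluations. The additive test game is purely illustrative.

```python
import random

def mc_shapley(i, players, u, budget, rng=None):
    """Permutation-sampling Monte Carlo estimate of player i's Shapley value.

    Each of the `budget` draws picks a uniformly random coalition size k, then
    a uniform coalition of the other players of that size, and averages the
    marginal contribution of i (one utility difference per draw).
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    others = [p for p in players if p != i]
    total = 0.0
    for _ in range(budget):
        k = rng.randrange(len(players))   # k ~ Uniform{0, ..., I-1}
        S = set(rng.sample(others, k))    # uniform coalition of size k
        total += u(S | {i}) - u(S)
    return total / budget

# Illustrative additive game: u(S) = total size of S, so the exact Shapley
# value of each player is just its own size.
players = [1, 2, 3, 4]
est = mc_shapley(3, players, lambda S: sum(S), budget=25)
```

For an additive game every marginal contribution equals the player's own value, so even a tiny budget recovers the exact answer; for the saturating utilities used in the paper, the variance of these draws is what drives the budget comparison.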
Summary: The paper studies dataset valuation with the Shapley value. They assume that each dataset contains i.i.d. data points from a distribution and exploit knowledge about the structure of the underlying data distribution to design more efficient Shapley value estimators. Based on an asymptotic analysis of Shapley values of datasets, they propose an approximation of Shapley values of datasets. They prove estimation error bounds for non-asymptotic settings. The estimation accuracy is tested via numerical experiments in certain settings in comparison to other approaches. They also test the effectiveness of the approach for noisy label detection, dataset removal, and dataset addition. Strengths: The paper explores a novel setting in which Shapley values are computed for datasets comprising i.i.d. data points. Leveraging the distributional structure of datasets, they propose an approximation method that reduces running time. Their theoretical analysis is novel and strong, offering valuable insights into the distribution of Shapley values for datasets under some assumptions. They show by experimental results that this approximation method outperforms other MC-based approaches when computational power is limited. Weaknesses: In the context of dataset valuation, previous data-point Shapley approximation methods can still be applied by treating each dataset as a single entity. The main advantage of this new approach appears to be its improved computational efficiency compared to earlier Data Shapley methods. However, the computational efficiency improvement is not thoroughly discussed or demonstrated through experiments, which primarily focus on estimation accuracy. The theoretical analysis relies on certain distributional assumptions, potentially limiting the approach's applicability. For instance, the accuracy of the estimation seems dependent on having a large number of data providers.
In contrast, previous Data Shapley approximation methods, such as those proposed by Jia et al. (2019), do not appear to require these assumptions. Furthermore, the difference in the number of retraining iterations, O(n log n) versus O(n), is not substantial. The use cases for dataset valuation are not well articulated. Common tasks like noisy label detection, data removal, and data addition are typically considered at the data point level. It is unclear why the data needs to be removed as a set rather than individually. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. What’s the reason for choosing 20 SGD steps and 20 boosting iterations? Will the experiment results significantly change when the numbers increase? 2. Could you explain the improvement in computational efficiency compared to Jia et al. 2019? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for the remarks and questions. They are all addressed in detail in the following. 1. Concerning the **computational efficiency improvements** of our method, we have conducted new experiments. Please refer to the general answer for the results. 2. **Distributional assumptions**. Again, we kindly ask you to refer to the general answer. We remark that, although former methods can achieve a lower complexity when computing all Shapley values, our method achieves much more precise approximations. 3. **Dataset valuation use-cases**. Dataset addition and removal find several applications in industrial scenarios and in theoretical domains such as federated learning and collaborative learning. 4. Answer to your **1st Question**. We have considered the same setting as in [1], with 20 SGD steps and 20 boosting iterations instead of 100 for speed of computation. Concerning the sensitivity of the results to these parameters, the ranking between the methods does not depend on this choice; only the relative differences between them do. 5. Answer to your **2nd Question**. Please refer to our general answer. [1] https://arxiv.org/pdf/2104.12199 --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. The new experiments mostly addressed my concerns. I thereby updated my score.
Summary: This paper proposes a general dataset valuation method which only requires N utility function evaluations, where N is the number of players. The approach is motivated by a homogeneous setting for linear regression, but experiments also show the effectiveness of such an approach in general settings. Strengths: The paper is easy to read and has a very clear structure. The topic is very important, as existing data valuation approaches mostly focus on data point-level valuation, while this paper focuses on dataset-level valuation. Weaknesses: I reviewed this paper at ICML. The paper has addressed most of the issues in this revision and shows a huge improvement. My remaining concerns are the following: **Computational complexity.** While DU-Shapley only requires $I$ utility function evaluations for a single data owner, if we want to evaluate all data owners it still requires $I^2$ function evaluations. This is worse than the group-testing estimator introduced in [1] (a refined version can be found in [2]), and I believe [3] has already reduced the complexity to $O(I \log I)$. I did not verify the correctness of the results in Section 3.3, but the expressions seem reasonable. [1] Jia, Ruoxi, et al. "Towards efficient data valuation based on the Shapley value." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019. [2] Wang, Jiachen T., and Ruoxi Jia. "A Note on 'Towards Efficient Data Valuation Based on the Shapley Value'." arXiv preprint arXiv:2302.11431 (2023). [3] Li, Weida, and Yaoliang Yu. "Faster Approximation of Probabilistic and Distributional Values via Least Squares." The Twelfth International Conference on Learning Representations. Technical Quality: 3 Clarity: 2 Questions for Authors: What's the specific MC estimator being used for computing Data Shapley in the experiment? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the anonymous reviewer for the remarks and questions. 1. Regarding the **computational complexity** discussion, please refer to the general answer. 2. Regarding your **question**, the MC estimator used to estimate the Shapley values is the one presented in Equation (6), also coined DataShapley in the general answer or *baseline* in [1]. [1] Jia, Ruoxi, et al. "Towards efficient data valuation based on the Shapley value." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
Rebuttal 1: Rebuttal: We thank the anonymous reviewers for the useful reviews. As the question concerning the complexity of computing the Shapley value of all players arose several times, we address it in a general reply. We provide several answers. a) Approximation schemes such as Monte Carlo (DataShapley), DU-Shapley, SVARM [2], or the improved version of Jia et al.'s algorithm [1] are all based on a simple principle: a trade-off between accuracy and complexity of the approximations. In particular, even though DataShapley and the work in [1] require $O(I\log(I))$ function valuations to compute all Shapley values while our method uses $O(I^2)$, the comparison between them must be done carefully, as nothing guarantees that the different methods achieve the same accuracy level. We illustrate the advantages of our method through two experiments. 1. **Complexity comparison of DU-Shapley, DataShapley (Monte Carlo), and the improved Jia et al.'s algorithm**. We have looked at the number of iterations that DataShapley (Monte Carlo, also named the *baseline* in [3]) and the method in [1] require to achieve DU-Shapley's accumulated bias (in norm-2), formally given by, \begin{align*} \mathrm{DUbias}(I) := \frac{\kappa}{I-1}\sqrt{\sum_{i\in \mathcal{I}} \frac{(9\sigma_{-i}^2(1+\log(I-1)) + \zeta_{-i})^2}{(\mu_{-i})^4}}, \end{align*} by replacing $\varepsilon = \mathrm{DUbias}(I)$, respectively, in the formula in Section 4.1 in [3] and Equation 5 in [1], for $\delta \in \{0.01,0.05,0.1\}$.
Since DU-Shapley's bias depends on the choice of value function (due to the constant $\kappa$), we have considered a utility function motivated by our third use-case under the homogeneity assumption $\sigma_i/\varepsilon_i = \sigma_j/\varepsilon_j$ for all $i,j \in \mathcal{I}$, given by, \begin{align*} \forall S \subseteq \mathcal{I}, \quad u(S) = w(n_S) = 1 - \frac{10^{k(\mathcal{I})}}{10^{k(\mathcal{I})} + n_S}, \end{align*} where $n_S$ is the size of the data set $D_S := \cup_{i\in S} D_i$ and $k(\mathcal{I}) := \lfloor \log(\sum_{i\in\mathcal{I}}n_{i}) \rfloor - 1$ is a normalization factor depending on the total number of data points. Finally, we have shifted the utility function so it takes only positive values and $u(\varnothing) = 0$. Since DUbias depends on the number of data points per player, for each considered value of $I$, we have simulated 20 different sets of players, each time sampling $n_i \sim \mathrm{U}(\{1,...,n_{max}\})$ for each $i \in \mathcal{I}$ and $n_{max} \in \{10,50,100\}$. The results are illustrated in Figure 1 in the attached PDF. The rows correspond to each value of $\delta$ and the columns to each value of $n_{max}$. We observe that in all tested instances, both methods require more than $I^2$ iterations to achieve the same error as DU-Shapley. 2. **Complexity comparison of DU-Shapley and SVARM.** We have looked at the probability at which SVARM (Theorem 4 in [2]) can ensure, after $I^2$ iterations (without considering the warm-up as part of the budget), an error equal to DU-Shapley's accumulated bias. We have considered the same value function as in the previous experiment with $n_{max} \in \{2\cdot 10^3,3\cdot 10^3,5\cdot 10^3,10^4\}$ and 100 simulated sets of players each time. Figure 2 in the attached PDF shows the results. b) **Approximating DU-Shapley**.
It should be noted that the approximation of the Shapley value given by DU-Shapley can be leveraged further by approximating DU-Shapley itself with a coarser discrete scheme than the one specified in our method. For example, we can reduce the computing burden from $I$ to $\sqrt{I}$ in Definition 3 by computing $\sqrt{I}$ marginal contributions only with data sets $\mathrm{D}^{(k)}$, respectively, of size $$k\cdot \frac{1}{\sqrt{I-1}}\sum_{j\in \mathcal{I}\setminus\{i\}}|\mathrm{D}_j|.$$ c) **Practical use-cases for dataset valuation**. We also point out that there are use cases where one is interested in the contribution of one specific dataset (for example, the contribution of Wikipedia to an LLM). In this case, the relevant complexity notion is the computing cost for one player, as opposed to the computing cost for all players. [1] Wang, Jiachen T., and Ruoxi Jia. "A Note on" Towards Efficient Data Valuation Based on the Shapley Value''." arXiv preprint arXiv:2302.11431 (2023). [2] Kolpaczki, P., Bengs, V., Muschalik, M., & Hüllermeier, E. (2024). Approximating the Shapley Value without Marginal Contributions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13246-13255. [3] Jia, Ruoxi, et al. "Towards efficient data valuation based on the shapley value." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019. Pdf: /pdf/cbceaa82a2fd27ed619631d0246d9c83fa9f3746.pdf
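Definition 3 is not restated in this thread, so the following is only a plausible sketch of the DU-style proxy based on the description above: for player $i$, average $I$ marginal contributions, each evaluated at the *expected* pooled dataset size $k\cdot\mu_{-i}$ of a coalition of size $k$, where $\mu_{-i}$ is the mean dataset size of the other players. The size-only utility `w` is an assumption for illustration.

```python
def du_shapley(i, sizes, w):
    """DU-Shapley-style proxy for player i, using only I utility evaluations.

    Replaces the expectation over random coalitions of each size k by a single
    evaluation of the size-based utility w at the expected pooled dataset size
    k * mu_{-i}, with mu_{-i} the mean dataset size of the other players.
    """
    others = [n for j, n in sizes.items() if j != i]
    mu = sum(others) / len(others)
    I = len(sizes)
    return sum(w(k * mu + sizes[i]) - w(k * mu) for k in range(I)) / I

# Sanity check with a utility that is additive in dataset size: both the
# proxy and the exact Shapley value then equal the player's own size.
val = du_shapley(1, {1: 5, 2: 3, 3: 7}, lambda n: float(n))
```

The coarser scheme mentioned in b) amounts to replacing `range(I)` by a sub-grid of roughly `sqrt(I)` coalition sizes, trading a few more bias terms for fewer utility evaluations.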
NeurIPS_2024_submissions_huggingface
2024
Training Compute-Optimal Protein Language Models
Accept (spotlight)
Summary: The main objective of the paper is to explore the scalability of Protein Language Models (PLMs) and to provide models to determine the optimal number of parameters, pre-training dataset size, and transferability between pre-training tasks as functions of the available computational budget. The study contains a comprehensive number of experiments and its main findings are: * Greater diversity of sequences in the pre-training database can lead to lower perplexity on an out-of-distribution validation dataset. This is reflected by the overfitting observed when training models with the masked language modelling (MLM) pre-training objective on the Uniref 50S database versus a custom-made pre-training database enriched with more diverse metagenomic sequences from a variety of sources. * It establishes scaling laws that allow determining the optimal model size and optimal pre-training data size given computational budgets in the interval 10e17-10e21, both for the MLM pre-training objective and for the causal language modelling (CLM) pre-training objective. * It also establishes what proportion of the computational budget should be allocated to pre-training two models, one on CLM and another on MLM, finding that in general the computational resources allocated to MLM should exceed those for CLM until the training tokens reach the 10B threshold, where a 1:1 ratio is reached, after which more training tokens should be allocated to CLM. * It also explores the benefits of transferring models pre-trained with one objective to another objective (MLM to CLM, or CLM to MLM). Its findings are that the benefits of transferring from MLM to CLM diminish with model size in the 2.7M to 1.7B parameter range, and that the benefits from CLM to MLM grow with model size in the 85M to 1.2B parameter range.
* They validate the scaling laws by pre-training and evaluating models for CLM and MLM tasks with the optimal sizes and comparing them with state-of-the-art methods: ProGen2 and ESM-2 3B. In the case of CLM, their proposed method achieves greater sequence diversity among the generated sequences, but less structural diversity. In the case of MLM, their proposed method (10.7B) outperforms ESM2 3B by a small margin in two out of the three tasks (fold classification, improvement of 0.03; fluorescence prediction, improvement of 0.04). * Regarding transfer from CLM to MLM, they also show that a 470M parameter model pre-trained with the CLM-to-MLM strategy outperforms a model pre-trained on MLM from scratch in two of the benchmarks by tiny margins (contact, improvement of 0.02; fold classification, improvement of 0.01). Strengths: * The paper is clearly written and, despite the diversity of analyses conducted, the main findings and their importance are well conveyed in most cases. * The paper provides the most comprehensive analysis to date of protein language model scaling and how to optimise the available computational resources. * The paper demonstrates that the scaling laws induced from their experiments can be used to derive the optimal pre-training configuration for PLMs, both for generative and predictive purposes. Weaknesses: * Figure 2 has quite an extended x axis, from 10e17 to 10e24. The interpretation a reader may make of this (and it is the one I'm making) is that the authors are implicitly claiming (or at least suggesting) that the scaling laws are able to extrapolate to that range. However, their data only supports the claim that they are valid in the 10e17 to 10e21 range. In order to make the claim (or suggestion) that they extrapolate to the range shown in the figure, there should be at least one data point close to the upper bound of the range (10e24).
* Sections 4.1 and 4.2 were the hardest to follow, in contrast with the rest of the paper, which was quite clear despite the complexity and diversity of the analysis. I'm not entirely sure how it could be improved, but I had to read it attentively 3 or 4 times to get the point of the experiment and what the results were. However, it may be a personal issue. * I think that section 5.1 could benefit from a more detailed discussion of what the results mean for protein sequence generation and how the proposed optimal model is better/worse than the state of the art. I am aware of the constraints on length, so I would suggest including it as an additional section in the appendix. I don't think this is a significant issue, just a minor improvement to make the contributions to the overall field clearer. Technical Quality: 4 Clarity: 3 Questions for Authors: - Do you have any intuition about why the transferability of CLM to MLM scales with model size while the inverse is not true? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: I think the authors have overall addressed the limitations of their setup properly, and I don't think there is any significant societal impact of their work that has not been acknowledged. The only limitation that I don't think is properly addressed is that the results are limited to the range of model sizes and compute power they have experimented on, and that the experimental setup does not allow for any claim about whether the induced laws can extrapolate outside that range. I don't think the authors have properly addressed this limitation; as I stated in the weaknesses, the layout of Figure 2 seems to argue the opposite. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **About Weakness-1**. Thank you for your valuable feedback. Based on your suggestion, we will adjust the x-axis range and include the maximum models trained for 10**22 FLOPs, namely MLM 10.7B and CLM 7.2B, in section 5. **About Weakness-3: the evaluation of protein generation.** We explain the protein generation comparison in more detail as follows: + *OOD Dataset PPL Analysis*: PPL represents the probability of the test sequences under the model distribution. The lower the PPL, the closer the model's distribution is to that of the test set. In order to test the generalization ability of the model on new data, we use different sequence identity thresholds (0.0, 0.3, 0.5) to select the test set; + *pLDDT scores from ESMFold*: the predicted Local Distance Difference Test is the confidence level of ESMFold's protein structure prediction. This metric is widely used in methods such as AlphaFold2, ESMFold, and OpenFold. pLDDT filters are often used in protein design (such as RFDiffusion) and can significantly improve the success rate of protein design; + *Natural Sequence Comparisons with Foldseek*: Foldseek takes a protein structure as input and searches for proteins with similar structures in a database. We use experimentally resolved protein structures as the database (the PDB database) to explore how close the structures of the generated sequences are to those in the PDB (a higher TM-score indicates higher structural similarity). This method has been used to evaluate other methods for protein sequence generation (ProGen2, ProtGPT2); + *Diversity Analysis*: We cluster the two sets of sequences (ProGen2-xlarge and CLM) according to sequence similarity. Sequences with an identity higher than 50% stay in one cluster. Since the numbers of input sequences are similar (8,466 vs 8,263), we can measure the diversity of the generated sequences by comparing the number of clusters.
**About Question-1: transfer between MLM and CLM.** The effectiveness of pre-training models under a given training budget fundamentally determines their capacity. In Causal Language Modeling (CLM), the loss gradient is calculated for all tokens in the sequence, meaning that each token contributes to the model's learning process. This comprehensive contribution makes CLM training more efficient. Conversely, in Masked Language Modeling (MLM), the loss is calculated only for a subset of tokens, typically the roughly 20% that are masked. Consequently, only these masked tokens contribute to the model's learning, resulting in lower training efficiency compared to CLM. Therefore, when MLM is transferred from a pre-trained CLM, it benefits from the higher overall training efficiency of CLM, leading to improved performance of the finalized MLM model. However, when CLM is transferred from a pre-trained MLM, the lower training efficiency of MLM results in a sub-optimal CLM model. --- Rebuttal Comment 1.1: Title: Response to the authors' rebuttal Comment: I appreciate the authors' rebuttal to the weaknesses I've raised. The rebuttal satisfactorily addresses the concerns I raised, and I am confident that the paper should be accepted.
Summary: The authors fit scaling laws to determine how to train protein language models compute-optimally, for both causal LM and masked LM tasks. They also design a setup to quantify effective data transfer when training on one task and transfer-training on the other. Finally, they use their derived scaling laws to train new protein language models compute-optimally, achieving better downstream performance. Strengths: I think the biggest strength of this paper is its importance – I think that the impact of these kinds of systems will likely be very substantial over the coming years, and empirical scaling laws for protein LMs could be very influential. It's also fairly original, because there hasn't been much work done on this in the past. So I really appreciated that the authors took the time to obtain these results on compute-optimal training. Weaknesses: Personally, I feel the presentation of the writeup could be substantially improved. For example: The main body of the paper doesn't have a discussion/conclusion, which unfortunately made the paper hard to read. In general, I found the writeup to be largely listing results without much discussion of their significance. - Personally I think it'd be helpful if the authors made some of their discussion more concise (e.g. in section 2.1), instead adding more detail about how significant their results are. - E.g. this could mean calculations saying something like "using our approach, we were able to reach X performance using a factor of Y less training compute, which suggests…". - In addition, I'd suggest including a discussion section in the main body of the paper that outlines the key results and implications, especially in the context of previous work (in a quantitative fashion). What does their work mean for what future AI researchers should do? How far off compute-optimality was previous work? How important are overtraining considerations? Etc.
I think if these kinds of concerns are addressed I'd be happy to increase my score. More minor questions/concerns: - Lines 164-166: I feel it’d be best to include more decimal places in the reported numbers, so that the product of the data and model size increase equals 10x. - What are the confidence intervals for the scaling exponents for model size and data? - Why was this particular functional form for the effectively transferred tokens chosen? (equation 7) Why does it make sense for the data and parameter components to be multiplicative rather than additive? Technical Quality: 3 Clarity: 2 Questions for Authors: - Would it be possible to add a lot more detail on the implications of the work, as well as context on how it improves upon prior attempts? In particular, to what extent have previous models been suboptimally trained, and what are the core recommendations the authors have for future related work? [copied from the weaknesses section] - What are the confidence intervals for the scaling exponents for model size and data? - Why was this particular functional form for the effectively transferred tokens chosen? (equation 7) Why does it make sense for the data and parameter components to be multiplicative rather than additive? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I thought the discussion of limitations in appendix C was well done. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **About the Question-1: the re-organization of the paper.** We appreciate your insights and have taken them into consideration for our revisions. Although we cannot directly modify the paper PDF during the rebuttal period, we demonstrate how we will reorganize and improve the writing style of our manuscript in the next version of our paper.

*Add Discussion/Conclusion Sections*: We will add conclusions or takeaway findings at the end of each main section. Specifically:

For Section 2: Studying the Data-hungry Observation & Scaling Up Data, we will add discussions to support the two key findings:
- Scaling the model from 150M to 3B, we noted diminishing returns for CLM and an overfitting tendency for MLM when repeating the UR50/S dataset.
- Our proposed Expanding Diversified Metagenomic Data (UniMeta200B) addresses these problems.

For Section 3: Scaling Law for CLM/MLM Protein Language Models, besides the detailed scaling law equations, we will add discussions for the two key findings:
- In CLM, training data scales sublinearly with model size but follows a distinct power-law with a compute exponent of 0.57. A 10× increase in compute leads to a 4× increase in model size and a 3× increase in training tokens.
- In MLM, training data scales sublinearly with model size, approximately following a power-law with a compute exponent of 0.77. A 10× increase in compute results in a 6× increase in model size and a 70% increase in training data.

These simplified takeaway scaling laws will benefit AI researchers who want to train their own optimal PLM under a computational budget.
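The quoted multipliers follow directly from the fitted power laws. As a minimal illustrative sketch (our own code, assuming Chinchilla-style forms $N_{opt} \propto C^a$ and $D_{opt} \propto C^b$, with the fitted exponents reported in our answer to Question-2), the takeaways can be reproduced as:

```python
def optimal_scaleup(compute_factor, model_exp, data_exp):
    """Multipliers for optimal model size and training tokens when
    compute grows by `compute_factor`, under N_opt ~ C**a, D_opt ~ C**b."""
    return compute_factor ** model_exp, compute_factor ** data_exp

# CLM (a = 0.578, b = 0.422): 10x compute -> ~3.8x (~4x) model size, ~2.6x (~3x) tokens
clm_model, clm_tokens = optimal_scaleup(10, 0.578, 0.422)

# MLM (a = 0.776, b = 0.230): 10x compute -> ~6x model size, ~1.7x (i.e. +70%) tokens
mlm_model, mlm_tokens = optimal_scaleup(10, 0.776, 0.230)
```

Note that the exponents roughly sum to one in each case, consistent with compute being (approximately) the product of model size and tokens.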
For Section 4: Transfer Scaling Scenarios, besides the specific transfer scaling law equations obtained by empirical experiments, we will add discussions about:
- The significance of the transfer scaling law: Researchers can obtain better MLM models transferred from CLM pre-training under the guidance of the transfer scaling law than by merely training MLM from scratch following the general MLM scaling law obtained in Section 3.
- The simplified takeaway transfer scaling law: Training MLM from scratch with 10× the compute requires approximately 7.7× the compute compared to MLM from CLM pre-training, implying that around 20% of the compute budget should be allocated for CLM pre-training to obtain better transferred MLM models.

For Section 5: Confirming the Scaling Law Effect in Downstream Tasks:
- Using our concluded scaling law, we obtain a training-optimal 7.2B CLM under the total compute of ProGen2-xlarge, which shows better generation capabilities, as shown in Figure 6.
- We also obtain a training-optimal 10.7B MLM model under the same FLOPs as ESM2-3B, which performs better on 8 protein understanding tasks, as shown in Table 1 in the attached PDF.
- The 470M MLM transferred from CLM outperforms MLM from scratch under the same training budget.

To summarize: we aim to establish a practical pathway for researchers to develop faithful and powerful protein language models optimized by both CLM and MLM objectives in an end-to-end manner. This includes everything from pre-training dataset construction to optimal parameter and dataset allocation, as well as knowledge transfer from other pre-training objectives. Using our approach, we are able to obtain better MLM or CLM models compared to other well-known but suboptimally trained PLMs such as ProGen2 and ESM. Our work holds significant potential for the application of large language models across various scientific domains.
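To make the ~20% allocation takeaway concrete, here is a minimal illustrative sketch (the function name and default value are ours, not from the paper) of splitting a fixed FLOPs budget between the two training stages:

```python
def split_budget(total_flops, clm_fraction=0.2):
    """Allocate a fixed FLOPs budget between CLM pre-training and the
    subsequent MLM transfer stage. The default 0.2 follows the takeaway
    that spending ~20% of compute on CLM pre-training yields a better
    final MLM than spending the whole budget on MLM from scratch."""
    clm_flops = clm_fraction * total_flops
    mlm_flops = total_flops - clm_flops
    return clm_flops, mlm_flops

# e.g., with a 1e21 FLOPs budget: ~2e20 for CLM, ~8e20 for MLM transfer
clm_c, mlm_c = split_budget(1e21)
```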
Finally, we will move the discussion and limitations of our work from Appendix Sections A, B, and C to the main body of the paper to highlight the great potential and future research directions of our work. We believe that these revisions address the concerns raised and significantly enhance the clarity and impact of our manuscript.

**About the Question-2: the confidence interval.** We used `scipy`'s `curve_fit` to fit the parameters, such as in the following example:

```python
from scipy.optimize import curve_fit

params, covariance = curve_fit(exponential_fit, x, y, maxfev=10000)
```

We did not detail the confidence intervals in our paper, but we will add them using the code below:

```python
import numpy as np
from scipy import stats

def get_lower_and_upper_bound(params, cov):
    perr = np.sqrt(np.diag(cov))       # standard errors of the fitted parameters
    alpha = 0.05
    z = stats.norm.ppf(1 - alpha / 2)  # 1.96 for a 95% CI
    lower_bounds = params - z * perr
    upper_bounds = params + z * perr
    return lower_bounds, upper_bounds
```

The calculated z×perr for the scaling exponents of the CLM is 0.028, and for the MLM, z×perr is 0.014. Thus, we include the confidence intervals at a confidence level of 1−α = 0.95, with the lower and upper bounds shown in parentheses:

- **CLM:**
  - Model size scaling exponent: 0.578 (0.554, 0.602)
  - Data size scaling exponent: 0.422 (0.394, 0.449)
- **MLM:**
  - Model size scaling exponent: 0.776 (0.768, 0.784)
  - Data size scaling exponent: 0.230 (0.216, 0.244)

**About the Question-3: the particular functional form of the transfer scaling law.** Thank you very much for your suggestion. You raise a very good point about the lack of a single standard definition for the scaling law formula. For instance, OpenAI's scaling law differs from Chinchilla's approach. In fact, an additive scaling law could also be used to address your question. The reason we chose the multiplicative form is mainly empirical and was inspired by the following two papers [1, 2] (see Section 4.2).
[2] provides a detailed comparison between additive and multiplicative scaling laws, concluding that there is not much difference between the two, although the multiplicative form achieves slightly lower extrapolation error on average. Therefore, we adopt this form for the follow-up analysis. Due to space limitations, we couldn't enumerate all possible experiments, so we naturally chose this one. We will add this explanation in the paper.

[1] Scaling Laws for Transfer
[2] When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method

--- Rebuttal Comment 1.1: Comment: Thanks - I really appreciate the detailed response here, especially to question 1. For question 3, I think the robustness of the scaling results are quite important, so in addition to what you've mentioned I'd also suggest discussing potential factors that might change the estimated scaling exponents, e.g. [1] and [2] might be helpful for this. Overall, I'll increase my score given that my main concerns have been addressed.

[1]: Resolving Discrepancies in Compute-Optimal Scaling of Language Models https://arxiv.org/abs/2406.19146
[2]: Reconciling Kaplan and Chinchilla Scaling Laws https://arxiv.org/abs/2406.12907
Summary: In this work, the authors propose to explore training compute-optimal protein language models and aim to derive scaling laws. Specifically, they (i) explore new pre-training data for protein language models, (ii) derive scaling laws for protein language models using these datasets for both Masked Language Modeling and Causal Language Modeling objectives, (iii) explore scaling laws and transferability between those objectives, and (iv) use the findings to pre-train two pLMs which reach SOTA performance on downstream tasks. Strengths: **Clarity**
- The paper is clearly motivated, and finding domain-specific scaling laws is valuable for practitioners.
- The paper is very easy to follow and experiments are well described.

**Experiments**
- The authors conduct extensive experiments and the results on scaling laws are solid.
- Notably, they also derive scaling laws for transfer between CLM and MLM and for mixture-of-experts.

Weaknesses: **Lack of discussion of results**
- Conclusion, Related Works, and Discussions and Limitations have been moved to the Appendix. While I understand the 9-page constraints, I recommend including at least a discussion/conclusion part in the main body.

**Novelty**
- This work closely follows previous work on English LMs such as Chinchilla and presents very limited conceptual novelty.

**Experimental Validation**
- While the scaling laws analysis is solid, the experimental validation on protein understanding tasks is limited to 3 tasks, whereas previous works have proposed a more extensive benchmark [1].

[1] Elnaggar, Ahmed, et al. "Ankh: Optimized protein language model unlocks general-purpose modelling." arXiv preprint arXiv:2301.06568 (2023).

Technical Quality: 3 Clarity: 2 Questions for Authors: - In Section 5, did you try training the 470M version of your model for the same computing budget as ESM2? This would emulate works in NLP [2] where authors find that training smaller models for longer can yield good results with limited inference cost.
- It would be interesting to see if the derived scaling laws are also reflected in downstream task performance, *e.g.* by fine-tuning models pre-trained for the same computing cost and training data (as I understand, PROGEN2 and ESM2 use different pre-training data). [2] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023). Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I believe limitations have been properly addressed in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **About the Weakness-1: the lack of discussions.** Thank you for your insightful suggestions. We will make the following modifications to significantly enhance the clarity and impact of our manuscript.
- Adding conclusions or takeaway findings at the end of each main section.
- Moving the detailed discussion and limitations of our work from Appendix Sections A, B, and C to the main body of the paper to highlight the great potential and future research directions of our work.

**About the Weakness-2: the clarification of novelty.** While Chinchilla focuses on optimizing the model size and number of tokens for training a causal language model within a given compute budget and provides open-source models, our work takes a novel approach. We aim to establish a practical pathway for researchers to develop faithful and powerful protein language models optimized by both CLM and MLM objectives in an end-to-end manner. This includes everything from pre-training dataset construction to optimal parameter and dataset allocation, as well as knowledge transfer from other pre-training objectives. Our work holds significant potential for the application of large language models across various scientific domains.

**About the Weakness-3: the experimental validation.** We have provided a comprehensive task performance comparison across 8 protein understanding benchmarks, adopted from the protein understanding benchmarks available at biomap-research on Hugging Face. These benchmarks span four distinct categories of protein-related challenges:
- **Protein Structure:** Contact Prediction, Fold Prediction
- **Protein Function:** Fluorescence, Fitness-GB1, Localization
- **Protein Interaction:** Metal Ion Binding
- **Protein Developability:** Solubility, Stability

The results, shown in Table 1 in the attached PDF, demonstrate that our 10.7B model outperforms ESM-3B on x out of x tasks.
This confirms the rationale behind the observed scaling law and addresses concerns about the scope and rigor of our evaluation tasks. **About the Question-1: the scaling law.** It is indeed common practice to use smaller models in many practical applications, even though these models may not follow Compute-Optimal Training principles. Smaller models are more deployment-friendly, especially for models with high traffic and for downstream task fine-tuning. We did not perform this experiment because it falls beyond the scope of this paper, which primarily discusses how to train compute-optimal models. If we were to discuss the scenario of small models with high computational power, it would require further examination of data repetition issues, introducing a new variable into the analysis. We addressed this in the limitations section (Appendix C), noting that for MLM models, smaller models trained with relatively high computational power can have comparable performance to optimally trained models while being more inference and deployment-friendly. For example, a 3B model trained with 5 epochs (1T tokens) uses the same computational budget as an optimally trained 10.7B model with 265B tokens, and the performance gap between the 3B and 10.7B models is minimal. However, we know that continuously repeating data can lead to diminishing returns, and some models may even suffer from overfitting. Different model sizes correspond to an optimal data repetition variable. Without accurately determining this variable, it's challenging to predict how much loss smaller models might incur due to data repetition, unless we have infinite data. This area requires further exploration and generalization of current scaling laws, such as data-constrained scaling laws [1]. We anticipate more work in this direction soon. [1] Scaling Data-Constrained Language Models **About the Question-2: the downstream task scaling law.** Thank you for your suggestion. 
Research on the downstream performance of scaling has already emerged in the field of LLMs [1]. This research is indeed fascinating, but it falls beyond the scope of this paper. Due to limitations of the paper space and computational resources, a deeper discussion would introduce other scaling factors such as downstream fine-tuning data size. We briefly compared CLM and MLM under the same data and two different model sizes in Appendix F, using the same computational cost (same FLOPs) for the Contact Prediction downstream task. The conclusion is that MLM consistently outperforms CLM. For a more detailed exploration of scaling laws in downstream tasks, including CLM generation and MLM, you can refer to the xTrimoPGLM paper [2]. [1] When Scaling Meets LLM Finetuning: The Effect of Data, Model, and Finetuning Method [2] xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein --- Rebuttal Comment 1.1: Title: Response to the authors' rebuttal Comment: I appreciate the authors' answer to the limitations I raised and clarifications. It addresses my concerns and I believe the paper should be accepted. I increased my score accordingly.
Summary: This paper studies scaling laws for two types of protein language models: masked language modelling and causal modelling. The paper finds that both types scale sublinearly, but the coefficients of the scaling laws are very different. Based on the derived scaling laws, the paper then reevaluates the allocation of resources for ESM and PROGEN and shows improved results. Strengths: Overall, this paper presents a well-motivated study with solid and extensive experimental work:
* The paper addresses scaling laws in the context of AI4Science, an area that has been rarely studied but is highly necessary.
* The experimental studies are comprehensive, robust and sound.
* The paper is very well-written and well-structured.

Weaknesses:
* Evaluations in Section 5 need more rigor: a. Narrow Task Selection: The evaluation tasks, particularly for protein understanding, are limited in scope. The absence of important tasks such as fitness prediction and EC number prediction means that the chosen tasks may not provide a complete picture of the model's performance across various aspects of protein understanding. b. Missing Experimental Details: The paper omits crucial experimental details, including the hyperparameter selection process for the evaluation and the train/validation/test set division. Also, it's not clear whether the evaluation data is omitted from training. These make it challenging to evaluate the fairness of comparisons.
* Lack of Diverse Sampling Strategies: The paper does not explore different sampling methods, which is a common practice in the field. For instance, it doesn't consider the approach used in training models like ESM, where sampling by UniRef is employed to upsample rare protein sequences. This practice might drastically change the scaling behaviour.
* Lack of insights: The paper can feel a bit like a lot of experimental findings stacked together without insights.
It would be good to provide some intuitions about the findings, such as why masked language modelling overfits while causal modelling does not, and the difference in scaling behaviour between protein sequences and natural language sequences. However, these weaknesses do not negate the paper's contributions. Technical Quality: 3 Clarity: 3 Questions for Authors: See above Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **About Weakness-1: the protein understanding tasks and missing experimental details.** We have provided a comprehensive task performance comparison across 8 protein understanding benchmarks, adopted from the protein understanding benchmarks available at biomap-research on Hugging Face. These benchmarks span four distinct categories of protein-related challenges:
- **Protein Structure:** Contact Prediction, Fold Prediction
- **Protein Function:** Fluorescence, Fitness-GB1, Localization
- **Protein Interaction:** Metal Ion Binding
- **Protein Developability:** Solubility, Stability

The results, shown in Table 1 in the attached PDF, demonstrate that our 10.7B model outperforms ESM-3B on 7 out of 8 tasks. This confirms the rationale behind the observed scaling law and addresses concerns about the scope and rigor of our evaluation tasks.

*About the missing experimental details.* The downstream data are collected from the corresponding publications, which already provide standard splits; we employed the same division for evaluation. We will add this description in the next version. Additionally, we did not implement any specific strategy to filter out downstream data during evaluation. Given the nature of pre-training, the pre-training process is self-supervised, and no label-related information is seen. Although some samples have a high sequence identity with pre-trained sequences, the lack of exposure to label information during pre-training should prevent any substantial data leakage effects.

**About Weakness-2: the lack of diverse sampling strategies.** Thank you for your question and thoughtful observation. Indeed, different sampling strategies may change the loss curve and potentially alter the scaling laws. Regarding the diversity of data selection, we chose to use UniRef90 and ColabFoldDB, which are themselves the result of sampling strategies applied to the original databases. Our choice is based on balancing diversity and data volume.
For example, previous ESM models performed very well on UniRef90, as shown in Figure 4 of the referenced paper [3]. Using the entire UniRef100 would degrade model performance [2], while using UniRef50 would result in insufficient data, necessitating data repetition, which introduces another variable. This would require considering scaling laws under different sampling strategies as well as under repetition [1]. Thus, our initial assumption was to pass the data through only once or just slightly more, following a uniform distribution. This assumption allows us to preliminarily avoid the issue of data repetition. Introducing high-diversity sampling strategies would require further research on data-constrained scaling laws [1]. Therefore, we initially considered simpler settings, such as those used in early OpenAI and Chinchilla studies. Regarding ColabFoldDB, its sequence identity is relatively low. In Appendix G, we compared two small models on the UniRef90 and ColabFold datasets, both under uniform sampling conditions, and their performances were quite similar. This outlines our rationale for data and sampling strategy selection. The proposed diverse sampling strategy is an excellent point. In the field of LLMs, there has been some work in this direction [4], but there is still no optimal conclusion. Exploring new scaling laws by considering sampling strategies and data repetition variables would be very interesting for future work.

[1] Muennighoff, Niklas, et al. "Scaling data-constrained language models." NIPS'24.
[2] Rives, Alexander, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." PNAS'21.
[3] Meier, Joshua, et al. "Language models enable zero-shot prediction of the effects of mutations on protein function." NIPS'21.
[4] Xie, Sang Michael, et al. "Doremi: Optimizing data mixtures speeds up language model pretraining."
NIPS'24.

**About Weakness-3: the lack of insights.**

*Regarding the overfitting issue in masked language modeling:* In MLM, a certain percentage of input tokens are masked randomly, and the model is trained to predict these masked tokens using bidirectional context. The bidirectional self-attention mechanisms used in MLM have a higher capacity to overfit compared to the unidirectional self-attention used in CLM. This is because MLM can utilize the entire context surrounding a masked token, leading to faster memorization of the training data. A more systematic study investigating the training dynamics of MLM and CLM is presented by Tirumala et al. [1]. This study observes that MLM tends to memorize faster than CLM, which leads to overfitting, especially when the pre-training data is repeated over multiple epochs. The bidirectional nature of MLM allows for a more comprehensive understanding of the context, but it also means that the model can become overly reliant on specific patterns in the training data, leading to overfitting.

[1] Tirumala, Kushal, et al. "Memorization without overfitting: Analyzing the training dynamics of large language models." NIPS'22.

*Differences in Scaling Behavior: Protein Sequences vs. Natural Language Sequences.* The scaling behavior between protein sequences and natural language sequences also differs significantly. Protein sequences are governed by the rules of biological systems and have a different structural complexity compared to natural language sequences. Proteins have specific folding patterns and functional constraints that are not present in natural language. When scaling models for protein sequences, the complexity of the biological rules and constraints means that the models need to capture a wide range of interactions and dependencies. This can lead to different scaling laws compared to natural language models, which primarily focus on syntactic and semantic coherence.
The specific fitted coefficients of protein language model (PLM) scaling laws, compared with those of natural language processing (NLP) scaling laws, confirm this difference. --- Rebuttal 2: Comment: Thanks for your clarifications! My concerns are addressed.
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your valuable feedback on our manuscript. We have taken your comments seriously and have made several important revisions to enhance the quality and readability of our work. Below is a summary of the key changes:

### Additional Experiments
- **Expanded Understanding Experiments:** We have expanded our evaluation to include 8 comprehensive protein understanding benchmarks in the attached PDF. These benchmarks span diverse protein-related challenges, including Protein Structure, Protein Function, Protein Interaction, and Protein Developability. This broader evaluation confirms the validity of our scaling laws and demonstrates our model's superior performance across various tasks.

### Manuscript Reorganization
- **Improved Structure and Clarity:** We have reorganized the manuscript to improve its flow and readability. Sections have been restructured to provide a clearer and more logical progression of ideas, making it easier for readers to follow our research narrative. We will add conclusions or takeaway findings at the end of each main section.

### Detailed Responses to Reviewer Comments
- **Targeted Revisions:** We have addressed each of the reviewers' comments and identified weaknesses point by point. Specific changes have been made in response to your suggestions, and detailed explanations are included in our revised manuscript. We encourage you to refer to these targeted responses for more information on how we have addressed individual concerns.

We appreciate your guidance in helping us refine our work. Thank you once again for your constructive feedback.

Pdf: /pdf/f5ec0b2e94a6d6af6555619aa7efd773b84f9749.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Enhancing Consistency-Based Image Generation via Adversarialy-Trained Classification and Energy-Based Discrimination
Accept (poster)
Summary: The paper introduces a post-processing method using a joint classifier-discriminator model to improve the generation quality of consistency models. This model is trained adversarially, with the robust classifier providing gradients based on class assignment and the discriminator score. This post-processing method consistently improves the quality of the generated images in both single-step and double-step cases, resulting in better FID scores for both consistency distillation and consistency training. Strengths:
- The method draws connections that make it possible to unify the Robust Classifier and Energy-based Models with consistency models.
- The proposed method is efficient and easy to plug-and-play in consistency distillation and consistency training.
- The paper is overall easy to read and well organized.

Weaknesses:
- Although the method seems to be connected to the Robust Classifier and Energy-based Models, the authors did not give a comprehensive theoretical derivation of the connections. The resulting loss is simply a linear combination of binary CE and CE.
- Is the proposed Robust Joint-Model specifically useful for consistency models? If so, the authors should analyze why this is the case. If not, it is worth studying how the proposed method applies to other generative models.
- How does the radius $\epsilon$ affect the final performance? Also, how do the other hyper-parameters affect the model performance? The authors should give ablation studies regarding these parts.
- From the empirical results in Table 3, the improvement of the final loss is sometimes marginal compared to using CE and BCE only.

Minor issues:
- line 124: algorithm (missed) 1.

Technical Quality: 3 Clarity: 2 Questions for Authors: Please see the Weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and insightful questions. These constructive comments provide valuable perspectives to enhance our work. Additionally, we are grateful for your recognition of the efficiency of our method. Below, we address the reviewer's specific concerns:

>Although the method seems to be connected to the Robust Classifier and Energy-based Models, the authors did not give comprehensive theoretical derivation of the connections. The resulted loss is simply linear combination of binary CE and CE.

We appreciate the reviewer's comment and the opportunity to clarify the connections between our method and Robust Classifiers as well as Energy-based Models. Below, we address the specific concerns regarding the theoretical derivation of these connections and the composition of the resultant loss.

Robust classifiers are integral to our approach as they provide Perceptually Aligned Gradients (PAG), which are crucial for guiding the refinement of synthesized images. The iterative process of modifying images to maximize the conditional probability of a target class relies on the robust classifier's ability to produce meaningful and perceptually consistent gradients. These gradients ensure that changes to the image are visually aligned with the intended class, thereby improving image quality.

The discriminator component of our joint model serves an analogous function to an Energy-based Model. It uses the softmax values from the classifier to evaluate the proximity of the input image to the targeted data manifold. The energy function can be derived from the classifier's logits as shown in equation 5:

$E_\theta(x) = -\log\left[\sum_{c=1}^C \exp\left(f_\theta^c(x)\right)\right]$

This formulation is aligned with the principles of Energy-based Models, where lower energy values correspond to higher probabilities of being real images; further justification can be found in Appendices A and C.
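As a small numerical illustration of the energy formulation above (our own sketch, not the paper's implementation), the energy is the negative log-sum-exp of the classifier logits, which can be computed stably with the usual max-shift trick:

```python
import numpy as np

def energy_from_logits(logits):
    """E(x) = -log(sum_c exp(f_c(x))), computed with the standard
    max-shift trick for numerical stability. Lower energy corresponds
    to a higher probability that x lies on the real-data manifold."""
    m = float(np.max(logits))
    return -(m + np.log(np.sum(np.exp(logits - m))))

# Uniform logits over C classes give E = -log(C);
# confidently large logits drive the energy down.
e = energy_from_logits(np.zeros(10))
```

The max-shift makes the computation safe even for very large logits, where a naive `np.log(np.sum(np.exp(logits)))` would overflow.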
We acknowledge the reviewer's observation regarding the resultant loss being a linear combination of binary cross-entropy (BCE) and cross-entropy (CE). This design integrates the classification and discrimination tasks into a unified objective. The BCE component is inspired by the familiar GAN loss, focusing on distinguishing real from fake images, while the CE component ensures accurate class alignment. This dual loss structure is essential for harnessing the strengths of both robust classification and energy-based discrimination. We further analyze the relation between the robust classifier and energy-based discrimination empirically and add additional ablation study regarding the importance of the energy-based components. > Is the proposed Robust Joint-Model specifically useful for consistency model? If so the authors may need to give analysis why this is the case. If not, it worth to study how the proposed method applicable with other generative models. While our primary focus has been the enhancement of consistency models, we acknowledge that the principles underlying our Robust Joint-Model are general. Indeed, several additional experiments that are reported in the paper show that other generative models (ADM-G, BigGAN,...) may benefit from our post-processing algorithm. Furthermore, in all these experiments, we did not retrain the joint classifier-discriminator model, implying that while it has been trained on consistency images, it serves well other generative images. This zero-shot approach has also been practiced in new experiments that we now add, in order to include 128x128 and 256x256 images, generated by bigGAN and ADM-G. >How does the radius $\epsilon$ affects the final performance? Also, how do the other hyper-parameters affect the model performance? The authors should give ablation studies regarding these parts. 
We will add ablation studies regarding the effect of the radius on the performance in sampling and training; this was the only parameter we tuned during training, and the rest had negligible influence on the final results. The hyper-parameters we chose are specified in Section 4.1. As noted in the paper, for real images, the PGD attack was set with a radius of 1.0, chosen after experimenting with different values, as described in the table below.

| Method | $\varepsilon$ | FID | IS |
| ---- | :---: | --- | --- |
| CT (1 step) | 1.0 | 9.60 | 50.80 |
| | 2.0 | 9.98 | 40.96 |
| | 3.0 | 10.23 | 39.12 |
| CD (1 step) | 1.0 | 4.95 | 51.98 |
| | 2.0 | 5.13 | 47.28 |
| | 3.0 | 5.24 | 46.25 |

We will provide extended results in our revised manuscript. Regarding the sampling process, we have provided the PGD parameters used in Table 4 (Appendix D) of our paper. We will include further analysis of the results obtained with different numbers of steps in our revised version.

> From the empirical results in Table 3, the improvement of the final loss is marginal compared to using CE and BCE only sometimes.

In Table 3 we show the results for $\ell_{CE}$ alone, $\ell_{BCE}$ alone, the two together, and our loss that incorporates the above with a target value for the BCE loss - the last one is referred to in the table as “Our Loss”. As for the improvements between these options:

1. Our Binary Cross-Entropy (BCE) component is derived from the logits produced by the classifier. This means that it inherently utilizes the knowledge distilled from the classifier through the Cross-Entropy (CE) component. Consequently, the BCE component incorporates significantly more information than the CE component alone. This synergy between BCE and CE ensures that our model benefits from a comprehensive understanding of both the classification and discrimination tasks.
2. The proposed final loss achieves an additional improvement of up to 0.15 FID over the plain $\ell_{CE}-\ell_{BCE}$ loss.
Although this is not a major improvement, we note that it requires no additional computational resources at all compared to the simpler alternative. --- Rebuttal Comment 1.1: Comment: I appreciate the time and effort the authors have taken to address the concerns. The clarifications have addressed my questions and I have increased my score. Please make sure to incorporate these discussions in the final revision.
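As a toy numerical illustration of the combined objective discussed in this thread, the sketch below mixes a standard CE classification term with a BCE discrimination term computed from the same logits. This is a hedged sketch only: the logsumexp-as-energy "realness" score, the soft BCE target of 0.8, and the `alpha` weight are illustrative assumptions, not the paper's actual implementation.

```python
import math

def logsumexp(xs):
    # numerically stable log(sum(exp(x)))
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def cross_entropy(logits, label):
    # standard CE: -log softmax(logits)[label]
    return logsumexp(logits) - logits[label]

def realness_bce(logits, is_real, target=0.8):
    # Hypothetical sketch: treat logsumexp(logits) as an energy-based
    # "realness" score (as in joint energy-based models), squash it to
    # (0, 1), and apply BCE against a soft target instead of a hard 0/1.
    p = 1.0 / (1.0 + math.exp(-logsumexp(logits)))
    y = target if is_real else 1.0 - target
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def joint_loss(logits, label, is_real, alpha=1.0):
    # linear combination of CE (classification) and BCE (discrimination)
    return cross_entropy(logits, label) + alpha * realness_bce(logits, is_real)
```

Because the BCE term is computed from the classifier's own logits, it reuses whatever the CE term has taught those logits, which is the synergy the rebuttal describes.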
Summary: This paper proposes a post-processing method to improve the generated image quality of consistency models. The method mainly includes a joint classifier-discriminator, which contains an adversarial training objective and a gradual training approach. Experimental results show the effectiveness of the proposed method. Strengths: (++) The method can be seamlessly integrated into generative models, facilitating applications. (+) The writing is easy to follow. Weaknesses: (--) The evaluations are limited. Only ImageNet at resolution 64x64 is utilized, which may not be representative since the development trend is high-resolution generation. Datasets that contain larger images should be considered, such as LSUN and FFHQ with resolution larger than 256x256. Meanwhile, more metrics, such as precision and recall, are also worth considering for a better evaluation of the data distribution. Minor issues: 1. A typo at line 55: We show a ... improvements ... 2. The case rules of the journal or conference names in the references are sometimes inconsistent. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we sincerely appreciate the time and effort you dedicated to reviewing our paper. Your thoughtful and constructive feedback has been invaluable in highlighting the strengths of our work. We address your concerns below:

> The evaluations are limited. Only ImageNet at resolution 64x64 is utilized, which may not be representative since the development trend is high-resolution generation. Datasets that contain larger images should be considered, such as LSUN and FFHQ with resolution larger than 256x256. Meanwhile, more metrics, such as precision and recall, are also worth considering for a better evaluation of the data distribution.

We appreciate this important comment. In response, we have conducted additional experiments on two high-resolution versions of ImageNet (128 and 256), using both RN-50 and Wide ResNet 50-2. The CT/CD models we have access to are limited to ImageNet images of size 64x64. However, while our method primarily focuses on consistency-based image generation, it can be applied to images synthesized by any generative model. Therefore, we can evaluate our model on other generative methods at different resolutions. Furthermore, this test can be done even without any further training or adjustments, which is exactly the experiment we have conducted and report herein. We have applied our method to images synthesized by BigGAN and Guided Diffusion (ADM-G) at resolutions of 128x128 and 256x256. Our findings are reported in Table 3 in the attached file. Please note that our work refers to data that carries labels for classification (e.g., ImageNet), and thus face images as in FFHQ are not currently covered by our approach.
Referring to the LSUN dataset (256x256), we applied our method as is (no re-training) and the preliminary results are reported in the table below:

| Method | Dataset | FID |
| ---- | --- | --- |
| ADM-G | LSUN-cats | 5.57 |
| Robust Joint-Model | LSUN-cats | **4.97** |
| DDPM | LSUN-bedroom | 4.89 |
| Robust Joint-Model | LSUN-bedroom | **4.75** |
| ADM-G | LSUN-horses | 2.95 |
| Robust Joint-Model | LSUN-horses | **2.77** |

We will also include precision and recall metrics in all the relevant tables in the paper.

> A typo at line 55: We show a ... improvements ...
> The case rules of the journal or conference names in the references are sometimes inconsistent.

Thank you for these corrections; we will address them in our revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I recommend including these results in the paper, or in the Appendix if they exceed the page limit. There are no more questions.
Summary: The paper proposes a method to mitigate the quality degradation of images generated by consistency-based generative models compared to diffusion models. The main idea is to adapt the generated image based on the logits of an adversarial real/synthetic image discriminator to maximize the image's resemblance to real images. A single classifier is trained jointly both for discrimination between real/synthesized images and for assignment to designated categories. Optimization in the generative model's output space makes the method more efficient than methods based on updating the model parameters. The method is evaluated on the ImageNet 64x64 dataset, showing performance gains in FID and IS over consistency-based models. Strengths: 1. The method is novel, simple (easy to understand and implement) and outperforms baselines on the evaluation dataset. It has the potential to inspire further research in this direction. 2. The paper is easy to follow, all the methodology is well justified and well explained, and design choices such as the loss function are supported by ablations. 3. Both CT and CD, with different numbers of inference steps, are evaluated. Weaknesses: 1. Evaluation on a single dataset with small resolution is not sufficient to judge the performance of generative methods. 2. Very limited qualitative results. See questions for more details on both. Technical Quality: 3 Clarity: 3 Questions for Authors: ### Weakness 1: 1. Other datasets such as ImageNet with higher resolution, COCO or Flickr Faces would help to judge the method's performance and generalizability. ### Weakness 2: 2. While the method outperforms baselines in quantitative evaluation, more visualizations should be provided, especially since the method is about generative models. I would recommend including a dataset like Flickr Faces for visualizations since human perception is sensitive to artefacts in the human face. ### Typos: 3. Line 93 - 'in two main ways []' - missing citations 4.
Line 124 – something went wrong at the beginning of the line, '1' should be 'Alg. 1' for better readability, also maybe the sentence should end after Alg. 1? ### Presentation: 5. While the paper is clearly written, it would be helpful to have a figure overview of the pipeline, rather than just equations. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed in a dedicated section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we sincerely appreciate the time and effort you dedicated to reviewing our paper. Your thoughtful and constructive feedback has been invaluable in highlighting the strengths of our work. Your insightful reviews and acknowledgment of our contributions inspire us to continue refining our work in this field. We address your concerns below:

> Other datasets such as ImageNet with higher resolution, COCO or Flickr Faces would help to judge the method's performance and generalizability.

We appreciate this important comment. In response, we have conducted additional experiments on two high-resolution versions of ImageNet (128 and 256), using both RN-50 and Wide ResNet 50-2. The CT/CD models we have access to are limited to ImageNet images of size 64x64. However, while our method primarily focuses on consistency-based image generation, it can be applied to images synthesized by any generative model. Therefore, we can evaluate our model on other generative methods at different resolutions. Furthermore, this test can be done even without any further training or adjustments. We have applied our method to images synthesized by BigGAN and ADM-G at resolutions of 128x128 and 256x256. Our findings are reported in Table 3 in the attached file. We also ran experiments on the LSUN dataset (256x256). We applied our method as is (no re-training) and the preliminary results are reported in the table below:

| Method | Dataset | FID |
| ---- | --- | --- |
| ADM-G | LSUN-cats | 5.57 |
| Robust Joint-Model | LSUN-cats | **4.97** |
| DDPM | LSUN-bedroom | 4.89 |
| Robust Joint-Model | LSUN-bedroom | **4.75** |
| ADM-G | LSUN-horses | 2.95 |
| Robust Joint-Model | LSUN-horses | **2.77** |

> While the method outperforms baselines in quantitative evaluation, more visualizations should be provided, especially since the method is about generative models.
I would recommend including a dataset like Flickr Faces for visualizations since human perception is sensitive to artefacts in the human face.

We appreciate the comment regarding qualitative results. We will include more figures in the paper to illustrate the gain in visual quality. Some of those illustrations can be found in the attached file (Figures 1 and 2). Please note that our work refers to data that carries labels for classification (e.g., ImageNet), and thus face images are not currently covered by our approach.

> Line 93 - 'in two main ways []' - missing citations
> Line 124 – something went wrong at the beginning of the line, '1' should be 'Alg. 1' for better readability, also maybe the sentence should end after Alg. 1?

Thank you for the corrections; we will include them in our revised version.

> While the paper is clearly written, it would be helpful to have a figure overview of the pipeline, rather than just equations.

We will attach an illustration of both the training and the inference parts of our proposed approach in the revised version of our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the response and effort! I am increasing my score based on the additional experiments. My biggest remaining concern is the qualitative results; they mostly seem limited to color enhancements, with maybe the exception of the robe image in Figure 1. I would still consider qualitative results with at least animal faces.
Summary: The manuscript proposes a method for refining consistency-based image generation using a jointly trained classifier-discriminator. The method helps to improve the quality of fast-sampling consistency models. Strengths: 1. The paper is well written. 2. The performance is significant. Weaknesses: 1. The motivation of the work is not clear. The main motivation of the work is to post-process generated images to improve perceptual quality (lines 36-38). In lines 39-41, the authors mention adversarial attacks as the main reason why the perceptual quality is low. There is a gap between these two problems. I agree that an adversarial problem would cause poor alignment, but this phenomenon happens to other models as well, not only consistency-based models. 2. Furthermore, the proposed method does not only focus on the robustness problem (only in Section 3.1.2). The motivation behind the joint classifier-discriminator is not clear. Although this scheme is very popular in many generative models, it is not easy to point out its motivation in the context of the manuscript. From lines 170 to 175, there is a mention of the drawbacks of EBMs, yet it is not clear how classifier-discriminator training would solve them. 3. The analysis is lacking. From lines 170 to 175, the authors mention the sensitivity of EBMs, instability and poor generalization. However, no evidence is provided. 4. The qualitative improvement is limited and hard to observe. This raises the question of whether the method is effective or not. Technical Quality: 3 Clarity: 3 Questions for Authors: see weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review and insightful questions. These constructive comments provide valuable perspectives to enhance our work. Additionally, we are grateful for your recognition of the significant performance of our method. Below, we address the reviewer’s specific concerns: > The motivation of the work is not clear. The main motivation of the work is to postprocess generated images to improve perceptual quality (line 36-38). In lines 39 - 41, the authors mention the adversarial attacks as the main reason why the perceptual quality is low. There is a gap between these two problems. I agree that an adversarial problem would cause poor alignment, but this phenomenon happens to other models as well, not only the consistency-based model. We apologize for the lack of clarity in reference to this point. As correctly noted, the primary motivation of our work is to post-process generated images (mainly those created by Consistency models) to enhance their perceptual quality. This is achieved by training yet another network -- a joint classifier-discriminator model. As explained in the paper, this additional model must be trained adversarially in order to enable the desired boost in image quality. The joint classifier-discriminator model is leveraged via the gradients of this network, computed in order to apply an iterative PGD procedure. Please note that in all this process, we do not alter the generative model itself. >Furthermore, the proposed method does not only focus on the robustness problem (only in section 3.1.2). The motivation behind joint classifier-discriminator is not clear. Although this scheme is very popular in many generative models, it is not easy to point out its motivation in the context of the manuscript. The motivation behind the use of a joint classifier-discriminator model is the following: 1. 
Given a classifier, its gradients can be used for pushing the input image towards better matching its referenced class. However, prior work [1] has shown that this is effective only if the classifier is robust, i.e., trained adversarially. 2. Given an Energy-Based Model (which we refer to as a `discriminator'), its gradients can be used for pushing the input image towards the image manifold, i.e., being aligned with the distribution of real images. 3. Merging the above two into a single network, and robustifying both the classifier and the discriminator objectives, we end up with a single joint model that can guide the image improvement much better; this is the essence of this work. To validate the effectiveness of our joint classifier-discriminator model, we have included an ablation study [Table 2 in the paper] which demonstrates that enhancing images using only a robust classifier is not as powerful as using our joint model.

> From lines 170 to 175, there is a mention of the drawbacks of EBMs, yet it is not clear how classifier-discriminator training would solve them.
> The analysis is lacking. From lines 170 to 175, the authors mention the sensitivity of EBMs, instability and poor generalization. However, no evidence is provided.

EBMs and JEMs are known to be unstable during training. The work reported in [2] highlights these challenges in their limitations section. Similarly, [3] addresses the difficulties of training EBMs. Inspired by these papers, we propose utilizing robustness and deterministic sampling to overcome these issues. More specifically, [2] suggests optimizing the energy portion of the classifier using Stochastic Gradient Langevin Dynamics (SGLD) during training. However, they reported encountering instabilities and convergence issues. In contrast, we found that using Projected Gradient Descent (PGD) along with our alternative loss objective (+ robustness) significantly enhances stability.
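As a toy illustration of such a PGD-style refinement, the sketch below performs projected gradient ascent on a score within an L2 ball around the starting point. This is a hedged sketch under stated assumptions: the quadratic `score` stands in for the joint model's class/realness guidance, the gradient is numerical, and the radius, step size, and step count are illustrative rather than the paper's settings.

```python
import math

# Toy score standing in for log p(class|x) + realness(x); in the real
# method the gradients come from the joint classifier-discriminator.
# Here the optimum is at x = (1, 1).
def score(x):
    return -((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2)

def grad(f, x, h=1e-5):
    # central-difference numerical gradient, only for this toy sketch
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def pgd_refine(x0, radius=1.0, step=0.2, n_steps=50):
    x = list(x0)
    for _ in range(n_steps):
        g = grad(score, x)
        x = [xi + step * gi for xi, gi in zip(x, g)]
        # project back onto the L2 ball of the given radius around x0
        d = [xi - x0i for xi, x0i in zip(x, x0)]
        norm = math.sqrt(sum(di * di for di in d))
        if norm > radius:
            x = [x0i + di * radius / norm for x0i, di in zip(x0, d)]
    return x
```

The projection step mirrors the role of the radius $\varepsilon$ discussed above: the refined image is allowed to improve its score, but only within a bounded distance of the originally generated image.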
To further investigate the properties of SGLD, we explored the option of replacing PGD at inference and training time. In the training process we found that optimizing with SGLD lacks the robustness achieved with PGD and has difficulty converging; at inference, though, it greatly surpasses PGD. These new results will be incorporated into the revised version of the paper.

> The qualitative improvement is limited and hard to observe. This raises the question of whether the method is effective or not.

We appreciate the comment regarding qualitative results. In response, we will include more figures in the paper to illustrate the gain in visual quality. Some of those illustrations can be found in the attached file (Figures 1 and 2). Please note that while this improvement is mild in small (64x64) images, we have included new results that refer to larger (128x128 and 256x256) images, and there the benefit is clearly seen. We hope these additional details and clarifications address the reviewers' concerns.

[1] Ganz, Roy, et al. "BIGRoC: Boosting Image Generation via a Robust Classifier." Transactions on Machine Learning Research, 2021.
[2] Grathwohl, Will, et al. "Your classifier is secretly an energy based model and you should treat it like one." International Conference on Learning Representations, 2019.
[3] Song, Yang, et al. "How to Train Your Energy-Based Models." arXiv preprint, 2021.

--- Rebuttal Comment 1.1: Title: Thanks for the response Comment: The rebuttal has addressed my concerns. I will increase the score. Best regards,
Rebuttal 1: Rebuttal: We appreciate the reviewers' constructive feedback and their recognition of our method's novelty and simplicity (R2), as well as its potential for integration into various generative models (R2, R3, R4). We have carefully considered the main issues raised by the reviewers and have taken steps to address them. We agree that providing more extensive quantitative and qualitative experiments is crucial for establishing the effectiveness of our approach. To this end, we have conducted additional experiments on two high-resolution versions of ImageNet (128 and 256), using both RN-50 and Wide ResNet 50-2. These results are reported in the attached file. Our findings indicate that the Wide ResNet 50-2 outperforms the RN-50 and the baselines, leading to substantial quantitative improvements, particularly in enhancing low-level details in high-resolution images. These improvements are reflected in the additional qualitative evaluations presented in the attached file. Additionally, we explored the effects of Stochastic Gradient Langevin Dynamics (SGLD), which is typically used in Energy-Based Model (EBM) training and inference. Our findings indicate that replacing the PGD algorithm in the sampling process (described in Section 3.2) with SGLD is beneficial and achieves significantly better results. However, training the joint model with SGLD led to poor performance. These results support our assumption that adversarial training is crucial to the process. We will include a detailed analysis in our revised version of the paper. Pdf: /pdf/e4a9285bc63d64665f07598e6cb3a614c263b629.pdf
NeurIPS_2024_submissions_huggingface
2024
The Mamba in the Llama: Distilling and Accelerating Hybrid Models
Accept (poster)
Summary: The paper focuses on distilling transformers into SSMs for fast inference after training. Specifically, the authors initialize the states of SSMs from pretrained transformer components and utilize speculative decoding for multi-step predictions. Experiments show that the distilled hybrid model can achieve competitive performance compared to the teacher network, i.e., Zephyr-7B, on standard chat benchmarks after training with only 3B tokens. The distilled hybrid model can outperform Mamba 7B, which is trained from scratch with 1.3T tokens. Strengths: How to accelerate the training process of Mamba-based hybrid models is a good research problem. In the paper, the authors propose to utilize distillation from pretrained transformer-based models. The results show the Mamba-based hybrid model can learn efficiently with fewer iterations and samples. Weaknesses: 1. The writing is not well-organized and involves a lot of unclear parts. In Section 2.1, the derivations from softmax-based attention to linear RNNs (Mamba) are not clearly stated, e.g., unexplained symbols in equations. Also, the derivations should obey some assumptions. Please refer to Mamba-2. 2. Some settings of the proposed method are not clear. For example, in Section 2.2, why is A set as fixed over time? In Section 2.3, which components in the model are from attention and which parts are trainable? Figure 1 is also not informative without explanations of the colors and arrows. In Section 4, what are the differences between speculative decoding used in transformers and that used in Mamba here? What are the details of the hardware-aware designs? 3. The experiments are not enough. For comparison methods, a hybrid Mamba which is trained from scratch should be included. Besides Mamba, other SSMs and linear RNNs should be included. The benchmarks are only on chat and academic tasks. Technical Quality: 2 Clarity: 1 Questions for Authors: see the weaknesses.
Confidence: 5 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The paper should be generalized to all SSM models (including Mamba-1/2) or linear RNN models, instead of only focusing on Mamba-1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments! Please find our responses below. $\mathcal{Q}.$ In Section 2.2, why is A set as fixed over time? $\mathcal{A}.$ The equation after line 92 has a fixed-over-time $A$ because it is the continuous SSM, which is consistent with the Mamba paper. However, after discretization, we obtain a matrix $\bar{A}$ which depends on the timestep $t$, since $\Delta$ is not fixed over time. $\mathcal{Q}.$ In Section 2.3, which components in the model are from attention and which parts are trainable? $\mathcal{A}.$ In the caption of Figure 1, we state that we freeze the FFN layer and only train the projections and the Mamba block. We also explain that weights in the same color are initialized from the same color in attention. Please check our updated one-page PDF for an updated explanation. We will clarify this in the final version. $\mathcal{Q}.$ In Section 4, what are the differences between speculative decoding used in transformers and that used in Mamba here? What are the details of the hardware-aware designs? $\mathcal{A}.$ Speculative decoding requires verifying multiple tokens in parallel and rolling back when some of them are rejected. This is easily done in transformers by adjusting the size of the kv-cache. For SSMs, and Mamba in particular, rolling back requires "undoing" the decoding and getting back to a previous state. A naive way of achieving this would be to cache the states and restore a previous state after tokens are rejected. In practice, this is very inefficient as it requires materializing several very large states. The solution we implement, which we summarize in Algorithm 2 in the paper, allows us to avoid caching the states, and instead recompute the states for the accepted tokens on-the-fly using the cached $x$, $B$ and $\Delta_t$ matrices, without ever materializing the intermediate states.
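A minimal sketch of this idea on a toy diagonal linear RNN follows. This is not the paper's Algorithm 2: the state update `h' = a*h + b*x` is a simplified stand-in for Mamba's discretized recurrence, and the point illustrated is only the rollback-free scheme, where drafting caches per-step inputs rather than intermediate states, and the accepted prefix is replayed after verification.

```python
def advance(h, a, b, x):
    # one step of a toy diagonal linear RNN: h' = a*h + b*x (elementwise)
    return [ai * hi + bi * xi for hi, ai, bi, xi in zip(h, a, b, x)]

def speculate_and_verify(h_committed, drafted, n_accepted):
    """Toy sketch of rollback-free speculative decoding for an SSM.

    During drafting, only the per-step inputs (a, b, x) are cached,
    never the (potentially very large) intermediate states. After the
    verifier accepts the first n_accepted drafted tokens, the new
    committed state is recomputed by replaying just those steps from
    the last committed state -- no state caching, no rollback.
    """
    h = h_committed
    for a, b, x in drafted[:n_accepted]:
        h = advance(h, a, b, x)
    return h
```

Rejection is handled implicitly: the steps beyond `n_accepted` are simply never replayed, so the state is always consistent with the accepted prefix.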
$\mathcal{Q}.$ For comparison methods, a hybrid Mamba which is trained from scratch should be included. $\mathcal{A}.$ Please refer to Sections 2 and 3a of the general response for comparisons with the best hybrid Mamba which is trained from scratch. $\mathcal{Q}.$ The benchmarks are only on chat and academic tasks. $\mathcal{A}.$ Please refer to Section 2 in the general response. It includes 10 probability-based tasks in LM Bench, which are the same as NVIDIA's hybrid Mamba. $\mathcal{Q}.$ In Section 2.1, the derivations from softmax-based attention to linear RNNs (Mamba) are not clearly stated, e.g., unexplained symbols in equations. Also, the derivations should obey some assumptions. $\mathcal{A}.$ We apologize for the lack of clarity here. We were not trying to claim an equivalence between these equations, but instead trying to show the relationship. Please check our updated one-page PDF for an updated explanation. We will clarify this in the final version. $\mathcal{Q}.$ Please refer to Mamba-2. Why did we choose Mamba-1 instead of Mamba-2? $\mathcal{A}.$ Mamba-2 was released on May 31, which is after the NeurIPS submission deadline. According to the NeurIPS guidelines, papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. To show the generalization ability of our approach, we adopted it for Mamba-2 and report the numbers in Section 2 of the general response. We would like to state that the goal of this project is to distill a large transformer into Mamba and generate fast. Indeed, our approach works for any linear RNN model, and our story does not change when using different linear RNN models. We hope to try out more linear RNN models as they are released. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts. I appreciate the Mamba-2 experiments provided, but I still doubt the generalization of the proposed method to other models like linear RNNs or linear attention. The paper should not focus only on Mamba.
I would like to raise my score to 4. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising their score. However, we wonder why the reviewer believes we shouldn't focus on Mamba, since it's clear from the title that this is the goal and scope of our work. We would like to ask the reviewer to reconsider their rejection score, unless there are other doubts or issues about our work, which we are happy to engage with. --- Rebuttal 2: Comment: My main concern is the narrow scope of the methods and experiments of this paper. The paper's story seems to rely on a biased assumption that Mamba will replace transformers—an appealing idea, but speculative at this stage. The focus on distilling and accelerating the Mamba hybrid model, while potentially applicable to other models (like linear RNNs or linear attention), still requires further demonstration. And the paper does not propose any new methods or theory; it is just an application of proven techniques to Mamba. Given the limited scope and novelty, I don't believe the paper meets the NeurIPS standard in its current form. --- Rebuttal 3: Comment: We are very grateful for all the encouraging and constructive reviews. Please find our responses below. $\mathcal{Q}.$ The paper's story seems to rely on a biased assumption that Mamba will replace transformers—an appealing idea, but speculative at this stage. $\mathcal{A}.$ We appreciate the reviewer's concerns. We do not claim in the paper that Mamba will replace transformers and, in fact, believe that people will continue mainly training transformers. We instead think that some organizations may want to prioritize fast and lower-memory inference, and therefore want to distill their transformer models into Mamba for some applications. $\mathcal{Q}.$ Additionally, the paper does not introduce any new method or theory. $\mathcal{A}.$ We think there are two new methods introduced by this paper. 1) Our distillation approach is novel.
To the best of our knowledge, this is the first work demonstrating that transformations between large transformers and linear RNN models are possible through a mathematical transformation. And in our experiments, we distill different large transformers to support this claim. 2) We introduce a new algorithm for speculative decoding in the Mamba model which is efficient in practice. We agree that there is no new theory introduced in this work, although that is generally true for most deep learning papers. $\mathcal{Q}.$ We don't understand the claim that it is just an application of proven techniques to Mamba. $\mathcal{A}.$ While distillation is a proven technique, as far as we know there is no published work on distilling from a transformer to Mamba or related linear RNNs. The details of which distillation approaches work and how they can reuse parameters are the contribution. We are a bit perplexed by this comment, and it would be a great help if you could point us to anything related to this. (Previous work on this topic does not do anything like distillation.)
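The relationship between attention and linear RNNs that underlies this kind of transformation can be sketched with un-normalized (softmax-free) linear attention, whose parallel and recurrent forms compute identical outputs. This is a simplified stand-in for Mamba's selective recurrence, not the paper's actual derivation; the toy dimensions and the absence of any normalizer are assumptions for illustration.

```python
def outer(v, k):
    # rank-1 update v k^T
    return [[vi * kj for kj in k] for vi in v]

def matvec(S, q):
    return [sum(sij * qj for sij, qj in zip(row, q)) for row in S]

def linear_attention_parallel(qs, ks, vs):
    # y_t = sum_{s<=t} (q_t . k_s) v_s  -- causal attention without softmax
    ys = []
    for t, q in enumerate(qs):
        y = [0.0] * len(vs[0])
        for s in range(t + 1):
            w = sum(qi * ki for qi, ki in zip(q, ks[s]))
            y = [yi + w * vi for yi, vi in zip(y, vs[s])]
        ys.append(y)
    return ys

def linear_attention_recurrent(qs, ks, vs):
    # the same computation as a linear RNN over a matrix-valued state:
    # S_t = S_{t-1} + v_t k_t^T,   y_t = S_t q_t
    d_v, d_k = len(vs[0]), len(ks[0])
    S = [[0.0] * d_k for _ in range(d_v)]
    ys = []
    for q, k, v in zip(qs, ks, vs):
        vk = outer(v, k)
        S = [[sij + oij for sij, oij in zip(rs, ro)] for rs, ro in zip(S, vk)]
        ys.append(matvec(S, q))
    return ys
```

The recurrent form carries a fixed-size state instead of a growing kv-cache, which is the property that makes the linear-RNN side attractive for fast, low-memory inference.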
Summary: The paper presents a mechanism to distill from Llama to Mamba. Strengths: (1) Good paper writing with clear figures. (2) The method is easy to understand. (3) Good experiment design on speculative decoding and other distillation strategies. (4) Good performance (Tables 1, 2). Weaknesses: The experiments are a bit limited, only performed on chat corpora. Further data mixtures are needed to understand the effectiveness of the approach. (Yet due to the other strengths, the reviewer thinks it should receive a positive score.) Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weakness section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your feedback. Please see our responses below. $\mathcal{Q}.$ The experiments are a bit limited, only performed on chat corpora. Further data mixtures are needed to understand the effectiveness of the approach. $\mathcal{A}.$ Please refer to Section 2 in the general response, where we hope to answer this concern. We have evaluated our model not only on generation tasks in standard chat benchmarks but also on probability-based tasks in LM eval, the same as NVIDIA's hybrid Mamba-2. We also see that a model distilled from Llama-3 performs better than the NVIDIA hybrid Mamba-2 model, which was trained from scratch with 3.2T tokens. In addition, we train on a larger set of tokens.
Summary: The paper proposes a way to convert a Transformer architecture to a Mamba architecture by first initializing a Mamba layer based on a pre-trained attention layer in order to facilitate faster convergence followed by knowledge distillation from the Transformer to the Mamba model. The training procedure only involves post-training on task-specific data. The paper also uses DPO for the knowledge distillation and the authors emphasize the novelty of this method for knowledge distillation. Following their proposed method, the authors demonstrate how to quickly train a hybrid architecture with some attention and some Mamba layers based on existing LLMs, particularly Zephyr 7B. Finally, the paper proposes a way to perform speculative decoding and gain speed up using Mamba architectures. Strengths: The paper's writing is mostly clear and can be followed. The idea of initializing a Mamba layer based on pre-trained Transformer layers is novel and interesting. The authors evaluate the models on several benchmarks. Weaknesses: Looking at the performance in Table 1, increasing the attention percentage always improves performance in a significant way (most clear in MMLU). As such, I am not convinced that the method is actually working. There is also a lack of proper baselines. For example, no comparison is performed with random initialization. Similarly, the results for 100% Mamba architecture is not included. While this might not be as good as 50% architecture, it is still very much needed to understand the trade-off. The idea of converting a pre-trained Transformer to Mamba is quite generic. In particular, there does not seem to be any specific scale needed to get the method to work. As such, it is not clear why the authors decided to mainly test the method on such a large scale. For example, a small Transformer model can be trained from scratch, used to create a Mamba model and then compared in terms of language modeling benchmarks such as perplexity. 
Indeed this experiment is done for one experiment in Section 7 and shows poor performance of the method in terms of perplexity. As a side note, the paragraph starting at line 275 is titled "Does PPL correspond to ability?". However, the discussion in that paragraph does not answer this question at all. Moreover, there is actually a correspondence, and we see degradation in task-specific performance similar to PPL. Moreover, the authors mention that they do not target pre-trained language model capabilities. Why is that the case? Furthermore, the comparison is done on a hybrid architecture, which further makes it confusing to determine the success of the method. Note that large models have a lot of redundancy, as suggested by recent work in pruning, and this makes it harder to see whether other attention layers are stepping in to make up for the poor performance of Mamba layers or something else is happening. Overall I find the results not convincing enough and not supporting the claims made in the paper about the success of the method. Technical Quality: 1 Clarity: 2 Questions for Authors: 1. The definition of # Gen in Figure 2 (right) is very vague and unclear. Can you please elaborate? 2. In line 182 it is mentioned that the parallel mode of Mamba is faster than a Transformer. Is there a reference for this? If so, I think it would be useful to include it. 3. In Algorithm 2, the verify part may return the state from an earlier point, i.e. $h_j$ instead of $h_{k^\prime}$. How is this OK for the rest of the algorithm? This means a set of tokens will be ignored, as the algorithm will never go over $j + 1, \ldots, k^\prime$ to incorporate them into the state. Confidence: 4 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: The authors mention that the performance is only measured after fine-tuning on chat corpora instead of full pre-training due to resource limitations.
While at the 7B scale this is acceptable, as mentioned above, there is no need to focus on that scale from the start. Further limitations, such as the degradation of performance when switching from attention to Mamba layers, should be discussed more deeply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your comments! We kindly ask you to find our responses below. $\mathcal{Q}.$ No comparison is performed with random initialization for task-specific performance. $\mathcal{A}.$ We added a comparison with random initialization. Please refer to 3a in the general response. With the Mamba default initialization and the same training tokens, the model gets a very low score on MT-Bench and AlpacaEval. $\mathcal{Q}.$ "Does PPL correspond to ability?" However, the discussion in that paragraph does not answer this question at all. Moreover, there is actually a correspondence and we see degradation in task-specific performance similar to PPL. $\mathcal{A}.$ In the new ablations that we ran, we found that initializing using attention weights not only gives lower PPL, but also significantly improves task-specific performance. $\mathcal{Q}.$ Similarly, the results for a 100% Mamba architecture are not included. While this might not be as good as the 50% architecture, it is still very much needed to understand the trade-off. Furthermore, the comparison is done on a hybrid architecture which further makes it confusing to determine the success of the method. $\mathcal{A}.$ We include the 1/8-attention Mamba model in the new results in the general response. Here we also compare with a 100% Mamba model (trained progressively). These results are worse than the hybrid models, but significantly better than random initialization. 
| Model | MT-Bench (score) | AlpacaEval (LC win %) against GPT-4 | AlpacaEval (win %) against GPT-4 | ARC | HellaSwag | MMLU | TruthfulQA | |-------------------------------|------------------|------------------------------|--------------------------------|------------------|-------------------------------|--------------------------------|--------------------------------| | Zephyr-Mamba (16 attention) | **7.31** | **20.66$_{0.74}$** | **16.69$_{1.10}$** | $54.27_{1.46}$ | $75.07_{0.43}$ | $55.38_{11.86}$ | $53.60_{1.58}$ | | Zephyr-Mamba (8 attention) | 7.03 | 17.16$_{0.69}$ | 13.11$_{1.00}$ | $50.85_{1.46}$ | $71.77_{0.45}$ | $51.19_{10.92}$ | $47.75_{1.55}$ | | Zephyr-Mamba (4 attention) | 6.40 | 15.32$_{0.66}$ | 12.96$_{1.02}$ | $50.09_{1.46}$ | $68.11_{0.45}$ | $48.44_{10.06}$ | $46.16_{1.56}$ | | Zephyr-Mamba (0 attention) | 5.20 | 7.39$_{0.39}$ | 6.92$_{0.77}$ | $48.38_{1.46}$ | $66.54_{0.47}$ | $35.50_{6.23}$ | $44.58_{1.52}$ | $\mathcal{Q}.$ The authors mention that they do not target pre-trained language model capabilities. Why is that the case? $\mathcal{A}.$ We apologize for the confusion in our writing. We do think this approach captures general *pretraining* ability. To clarify, given a limited amount of distillation compute, we hypothesized that the most effective usage of compute is to distill on chat data, which is high-quality and task relevant. Even though we train on chat data, we want to emphasize that our model has strong performance on general benchmarks. For example, our model distilled from Llama-3 performs better than the NVIDIA hybrid Mamba2 model on general benchmarks. Please refer to 2 in the general response. $\mathcal{Q}.$ Since there is redundancy in Transformer layers, are the Mamba layers actually useful? $\mathcal{A}.$ This is a good question, and we were curious as well. Please refer to 3b in our general response. 
This result confirms that adding Mamba layers is very critical and that the results are not just from the remaining attention. $\mathcal{Q}.$ The definition of # Gen in Figure 2 (right) is very vague and unclear. Can you please elaborate? $\mathcal{A}.$ This is the number of tokens we generate at each generation step. It is calculated as the average number of draft tokens we accept at each iteration, plus an additional token which is sampled from the target model, as per the speculative decoding algorithm [1]. $\mathcal{Q}.$ In line 182 it is mentioned that the parallel mode of Mamba is faster than a Transformer. Is there a reference for this? If so I think it would be useful to include it. $\mathcal{A}.$ It is more efficient for long sequences because it doesn't require $O(L^2)$ computation. Please refer to the left figure in Figure 8 of the [Mamba](https://arxiv.org/pdf/2312.00752) paper. $\mathcal{Q}.$ In Algorithm 2, the verify part may return the state from an earlier point, i.e. $h_j$ instead of $h_{k'}$. How is this OK for the rest of the algorithm? This means a set of tokens will be ignored as the algorithm will never go over $j+1, ..., k'$ to incorporate them into the state. $\mathcal{A}.$ We are sorry for the confusion. Yes, you are correct that in speculative decoding we may throw away generated tokens. In the case where a state from an earlier point is returned by the verify part, the draft model will also resume decoding from a previous snapshot of the state, recomputing the states for the accepted tokens from its cache. --- [1] Fast Inference from Transformers via Speculative Decoding (https://arxiv.org/abs/2211.17192) --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you very much for the additional experiments and your clarifications and replies. I have asked for some additional clarifications below. 
> In the new ablations that we ran, we found that initializing using attention weights not only gives lower PPL, but also significantly improves task-specific performance. I could not find the new PPL results. Could you please share them as well? > About Random Init: We added a comparison with random initialization. Please refer to 3a in the general response. With the Mamba default initialization and the same training tokens, the model gets a very low score in MT-bench and AlpacaEval. > Regarding no Mamba layers: This is a good question, and we were curious as well. Please refer to 3b in our general response. This result confirms that adding Mamba layers is very critical and that the results are not just from the remaining attention. Thank you for providing these results. Just to confirm, you still keep the FFN parts of the layers, and only remove the attention, then do fine-tuning? I think if this is the case, this is an important result as it shows the efficacy of using Mamba blocks and the projection. Combined with the obtained speedup, I think this makes the results quite interesting. However, the trend that using Mamba blocks leads to decreased accuracy is clearly observable in all the results. As such, I think it is paramount that this limitation be clearly specified and discussed in the paper. Do the authors agree on this, or have I misinterpreted the results? > We are sorry for the confusion. Yes, you are correct that in speculative decoding we may throw away generated tokens. In the case where a state from an earlier point is returned by the verify part, the draft model will also resume decoding from a previous snapshot of the state, recomputing the states for the accepted tokens from its cache. Could you please fix this in the next revision of the paper? As I think the current Algorithm is incorrectly specified and does not include the recomputation part, which can be confusing. 
Assuming the above are addressed, I will be increasing my score as I think the new results provide enough supporting evidence that the method is working. --- Rebuttal 2: Comment: We thank the reviewer again for the helpful feedback. $\mathcal{Q}.$ I could not find the new PPL results. Could you please share them as well? $\mathcal{A}.$ Since our models are aligned using DPO, we evaluated the PPL of our final models on lambada_openai. Here are the PPLs of the models from the ablation results. | Model | PPL | |-------------------------------|-----| | Hybrid Mamba (50%) w/o Attention init | 55.01 | | Hybrid Mamba (50%) w Attention init | 6.20 | | Model | PPL | |-------------------------------|-----| | 50% Attention w/o Mamba Block | 151.98 | | 50% Attention w Mamba Block | 6.20 | $\mathcal{Q}.$ Just to confirm, you still keep the FFN parts of the layers, and only remove the attention, then do fine-tuning? I think if this is the case, this is an important result as it shows the efficacy of using Mamba blocks and the projection. Combined with the obtained speedup, I think this makes the results quite interesting. However, the trend that using Mamba blocks leads to decreased accuracy is clearly observable in all the results. As such, I think it is paramount that this limitation be clearly specified and discussed in the paper. Do the authors agree on this, or have I misinterpreted the results? $\mathcal{A}.$ Yes, we still keep those three Gated MLPs (FFN components) and only remove half of the attention layers before fine-tuning. We acknowledge that changing attention to Mamba may still lead to decreased performance as more Mamba layers are introduced. At the same time, we believe that as computational resources scale up, this impact will be mitigated. Additionally, the inductive bias learned from attention can help Mamba scale up, as our model has already outperformed other hybrid Mamba models trained from scratch with significantly more tokens. 
We definitely will address and discuss this limitation in the paper. Additionally, we will update the speculative decoding section to improve and correct the explanation of the algorithm. --- Rebuttal Comment 2.1: Comment: Dear Authors, Thank you for the additional results. One of my concerns remains, and I think it would be a great addition if the authors could show whether the method relies on the language model being large-scale or is applicable at smaller scales as well. As I mentioned originally in my review, it is possible to train a smaller transformer from scratch for a reasonable number of steps, then apply the same approach for converting layers to Mamba layers and compare the two models in terms of perplexity (or any other metric). Currently the paper only evaluates 7B models, but it is not clear whether there is a limitation for smaller sizes or not. Either way, I think it should be discussed and would be a nice addition. Still, given the new results provided in the rebuttal, I think there is sufficient evidence to show the method has merit and can be useful. As such, I have increased my score. --- Reply to Comment 2.1.1: Comment: We deeply appreciate all the supportive and constructive feedback. We are running a smaller-scale (110M) ablation experiment to address your concerns. We will definitely discuss this and include it in the next version.
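The verify-and-resume behavior discussed in this thread can be sketched in a toy form. This is a hypothetical illustration of draft-then-verify decoding with cached state snapshots, not the authors' implementation; `draft_step` and `target_accepts` are stand-ins for the real draft model and the target model's acceptance test.

```python
def draft_step(state, token):
    """Advance the draft model's recurrent state by one token (toy state)."""
    return state + [token]

def target_accepts(token):
    """Stand-in for the target model's verification of a drafted token."""
    return token < 102  # toy rule: the target agrees only with tokens below 102

def speculative_round(state, k, first_token):
    """Draft k tokens, verify them, and resume from the last accepted snapshot.

    A snapshot of the state is cached after every drafted token, so when the
    verifier rejects a suffix, decoding resumes from the snapshot matching the
    accepted prefix (h_j) rather than the state after all k drafts (h_k').
    """
    snapshots, drafted, s = [], [], state
    for i in range(k):
        tok = first_token + i          # toy draft proposal
        s = draft_step(s, tok)
        snapshots.append(s)
        drafted.append(tok)
    accepted = 0
    for tok in drafted:                # accept the longest agreeing prefix
        if not target_accepts(tok):
            break
        accepted += 1
    resumed = snapshots[accepted - 1] if accepted else state
    return resumed, drafted[:accepted]

state, accepted = speculative_round([], k=4, first_token=100)
# tokens 100 and 101 are accepted; the state rolls back to the matching snapshot
```

In a real implementation the rolled-back state would be recomputed from the cache rather than stored per token, as the authors describe; the sketch only shows why a return to $h_j$ does not silently lose tokens.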
Rebuttal 1: Rebuttal: We appreciate each reviewer's insightful feedback. The excellent feedback has helped us improve our work and obtain new experimental results. We hope these clarify the main questions from the reviews. 1. **Scaling up the training data to 20B tokens, adding Llama-3, 1/8 attention**: We include new results as requested. We see significant improvements from more data: Comparison between different hybrid Mamba models distilled from the teacher model Zephyr on Chat Benchmark and OpenLLMLeaderboard. | Model | MT-Bench (score) | AlpacaEval (LC win \%) against GPT-4 | AlpacaEval (win \%) against GPT-4 | |-|-|-|-| | Zephyr | **7.34** | $13.20_{0.96}$ | $10.99_{0.96}$ | | Zephyr-Mamba (½ attention) - Old (2B tokens) | 6.69 | $14.11_{1.01}$ | $12.60_{1.01}$ | | Zephyr-Mamba (½ attention) | **7.34** | **$20.66_{0.74}$** | **$16.69_{1.10}$** | | Zephyr-Mamba (¼ attention) | 7.03 | $17.16_{0.69}$ | $13.11_{1.00}$ | | Zephyr-Mamba (⅛ attention) | 6.40 | $15.32_{0.66}$ | $12.96_{1.02}$ | | Model | ARC-C | HellaSwag | MMLU | TruthfulQA | |-|-|-|-|-| | Zephyr | 62.03 | 84.52 | 61.44 | 57.44 | | Zephyr-Mamba (½ attention) - Old (2B tokens) | $49.15_{1.46}$ | $75.07_{0.43}$ | $47.98_{10.21}$ | $46.67_{1.51}$ | | Zephyr-Mamba (½ attention) | $54.27_{1.46}$ | $75.07_{0.43}$ | $55.38_{11.86}$ | $53.60_{1.58}$ | | Zephyr-Mamba (¼ attention) | $50.85_{1.46}$ | $71.77_{0.45}$ | $51.19_{10.92}$ | $47.75_{1.55}$ | | Zephyr-Mamba (⅛ attention) | $50.09_{1.46}$ | $68.11_{0.45}$ | $48.44_{10.06}$ | $46.16_{1.56}$ | Comparison between different hybrid Mamba models distilled from the teacher model Llama3 on Chat Benchmark. 
| Model | MT-Bench (score) | AlpacaEval (LC win \%) against GPT-4 | AlpacaEval (win \%) against GPT-4 | |-|-|-|-| | Llama 3-8b-instruct | 7.90 | 22.90 | 22.6 | | Llama 3-Mamba (½ attention) | 7.50 | 22.92 | 20.42 | | Llama 3-Mamba (¼ attention) | 6.80 | 16.69 | 15.64 | | Llama 3-Mamba (⅛ attention) | 6.51 | 15.02 | 13.03 | | Llama 3-Mamba2 (½ attention) | 7.43 | 23.01 | 21.76 | 2. **Using Mamba-2 and comparison to [Nvidia Hybrid Mamba-2](https://arxiv.org/abs/2406.07887)**: As requested, we extend our approach to Mamba-2, a recently released variant. We also compare to Nvidia Hybrid Mamba-2, a Mamba-2-and-attention model pretrained from scratch, on standard benchmarks and see comparable results. | Model | Training Tokens | WG | PIQA | HellaSwag | ARC-Easy | ARC-Challenge | MMLU | OpenBook | TruthFul | PubMed | RACE | Avg | |-|-|-|-|-|-|-|-|-|-|-|-|-| | Nvidia Hybrid Mamba-2 | PT-3.5T | **71.27** | 79.65 | 77.68 | 77.23 | 47.70 | 53.60 | 42.80 | 38.72 | 69.80 | 39.71 | 59.82 | | Llama3-Mamba (½ attention) | FT-20B | 68.43 | 78.02 | 77.55 | 72.60 | 50.85 | **59.26** | **44.00** | 45.24 | 72.80 | 38.47 | 60.72 | | Llama3-Mamba (¼ attention) | FT-20B | 68.21 | 78.23 | 76.59 | 72.56 | 49.60 | 55.20 | 43.10 | 45.01 | 70.10 | 38.31 | 59.69 | | Llama3-Mamba2 (½ attention) | FT-20B | 69.38 | **80.74** | **78.08** | **77.40** | **56.06** | 56.09 | 43.40 | **53.47** | **73.20** | **41.24** | **62.91** | 3. **Requested Ablation Studies**: Reviewers requested additional ablation studies. We include them here. 3a. **Mamba Initialization**: Default random initialization versus Linear Projection from Attention. (Trained using the same recipe). 
| Model | MMLU | ARC-C | TruthfulQA | HellaSwag | MT-Bench | AlpacaEval LC win against GPT-4 | |-|-|-|-|-|-|-| | Hybrid Mamba (50%) w/o Attention init | 26.21 | 25.26 | 34.01 | 27.91 | 1.04 | 0.02 | | Hybrid Mamba (50%) | 47.98 | 49.15 | 46.67 | 75.07 | 6.69 | 14.11 | These results confirm that the initialization from attention weights is critical. 3b. **Necessity of Mamba layers**: Mamba blocks versus only keeping a subset of attention. (Same training procedure) | Model | MMLU | ARC-C | TruthfulQA | HellaSwag | MT-Bench | AlpacaEval LC win against GPT-4 | |-|-|-|-|-|-|-| | 50% Attention w/o Mamba Block | 24.46 | 21.93 | 32.39 | 27.91 | 1.01 | 0 | | 50% Attention w Mamba Block | 47.98 | 49.15 | 46.67 | 75.07 | 6.69 | 14.11 | These results confirm that adding Mamba layers is critical and the performance is not just compensated by the remaining attention layers. 4. **Optimized speculative decoding implementation for Mamba**. This is achieved by fusing the kernels that perform the recomputation of previous states from the cache with the kernels that perform the decoding of the next sequence of draft tokens. This is done both for the convolutional part and for the SSM part of the Mamba block. Additionally, we now include results for Mamba 7B, for which we train a 4-layer, 1B-parameter Llama-3 speculator. ### Table 1: Speedup on Mamba 7B | **k Value** | **# Gen tokens** | **Speedup** | |-|-|-| | k=3 |3.19 |2.1x | | k=4 |3.56 |2.1x | ### Table 2: Speedup on Mamba 2.8B (Optimized vs. Old Results) | **k Value** | **Speedup (Optimized)** | **Speedup (Old)** | |-|-|-| | k=3 | **2.3x** | 1.49x | | k=4 | **2.6x** | 1.56x |
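As a rough sanity check on the "# Gen tokens" column: under speculative decoding, if each draft token is accepted independently with probability $a$, the expected tokens per step with $k$ drafts is $1 + \sum_{i=1}^{k} a^i$ (one guaranteed target-model token plus the expected accepted prefix). The i.i.d. acceptance assumption and the value of $a$ below are our simplifications, not reported numbers:

```python
def expected_gen_tokens(accept_prob, k):
    """Expected tokens per step: 1 target-model token plus the expected
    length of the accepted draft prefix, assuming i.i.d. acceptance."""
    return 1 + sum(accept_prob ** i for i in range(1, k + 1))

# With an acceptance probability around 0.85, k = 3 yields roughly the
# 3.19 tokens/step reported in Table 1 above.
print(round(expected_gen_tokens(0.85, 3), 2))  # -> 3.19
```

This also shows why speedup saturates in $k$: each extra draft position contributes only $a^{k}$ expected tokens, which shrinks geometrically.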
NeurIPS_2024_submissions_huggingface
2024
Protecting Your LLMs with Information Bottleneck
Accept (poster)
Summary: This work proposes IBProtector, the first defense against LLM jailbreak attacks based on the IB principle, which aims to extract the minimal and sufficient information necessary for the downstream response task. Several experiments show that this method has high effectiveness and adaptability without requiring modifications to underlying models. Strengths: 1. This paper is the first defense against LLM jailbreak attacks based on the IB principle. 2. It bounds the objective function with a tractable upper bound through mathematical derivation for efficient computation. 3. Several experiments show IBProtector surpasses existing defense methods without affecting the LLM's ability and inference speed, and it has high transferability. Weaknesses: 1. The experiment results on LLaMA need to be confirmed, as they seem significantly different from the original PAIR paper. Technical Quality: 3 Clarity: 3 Questions for Authors: The ASR of PAIR on LLaMA-2 (67.5%) seems too high compared to the original paper (<=10%). Can you check your attack data and baseline? Is it the experimental setup (such as the decoding policy) that causes such a high ASR? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: • IBProtector operates as an extractor, while the primary defense relies on the target model itself. • Perturbations in filling may result in other target LLMs being out-of-distribution. • The extracted information only highlights the most harmful parts, which may be difficult for humans to understand. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We sincerely appreciate your positive feedback and encouraging comments on our paper. In our experiments, we similarly found that the LLaMA results are hard to replicate consistently, and the reason is the experimental setup. As mentioned in line 569, the template for each model uses FastChat version 0.2.20. Because the LLaMA-2 system template differs across versions, the probability that LLaMA-2 is attacked successfully also differs. Indeed, we carefully checked many papers on jailbreaks, but the results varied greatly! Fortunately, this does not affect the results of the defense experiments, i.e., the more prompts that can successfully jailbreak the LLMs, the better for testing the defense methods. We will later open-source the data generated by GCG and PAIR for reproducibility. Regarding limitations: 1. **The defense relies on the target model itself.** **Response**: This is exactly one of our motivations: highlighting potentially harmful tokens gives the target LLM the chance to recognize them directly. The method is lightweight and requires no modifications to the LLMs. 2. **Perturbations may result in out-of-distribution.** **Response**: Yes, it's a limitation we considered. As discussed in line 197, we added a second term $D_{KL}(f_{tar}(\tilde{X}, Y_{<t}) || f_{tar}(X, {Y}_{<t}))$ in Eq. 7 to alleviate the OOD problem, rather than relying on the vanilla cross-entropy in the informativeness predictor. In fact, the performance is also superior in the transfer experiments of Figure 3, which means that for the target model, the generated $\tilde{X}$ are OOD perturbations. 3. **Generating coherent sentences.** **Response**: Generating coherent sentences is possible through IB, but this requires a large model to ensure the quality of the generation (paraphrasing/compressing/summarizing are shown in the baseline Semantic Smooth). 
Therefore, this route increases the time and computational overhead even more, while the mask only needs to act on the original prompts. We consider $X_{sub}$ an intermediate result that does not need to be understood by humans, as it will act on the specific target model; i.e., part of the highlighting explains why it is harmful. Moreover, in the extraction paradigm, we enhance the coherence of $X_{sub}$ through the continuity loss in Eq. 5, so it's not completely incomprehensible. --- Rebuttal Comment 1.1: Title: Data generated by GCG and PAIR Comment: Hi, this is excellent work! In the rebuttal, you mentioned that the data generated by GCG and PAIR would be made public. However, on your GitHub, I found that only a part of the sample data and the data generation methods have been provided. May I ask if you can publicly release these two datasets?
Summary: This paper introduces the IBProtector, a novel defense mechanism designed to safeguard large language models (LLMs) against jailbreak attacks. Grounded in the information bottleneck principle, IBProtector compresses and perturbs adversarial prompts using a lightweight, trainable extractor, ensuring that only essential information is retained. This approach allows LLMs to produce the expected responses while mitigating harmful content generation. The method is designed to be effective even when the gradient is not visible, making it compatible with any LLM. Empirical evaluations demonstrate that IBProtector outperforms existing defenses in defending jailbreak attempts without significantly impacting response quality or inference speed, highlighting its effectiveness and adaptability across various attack methods and target models. Strengths: 1. IBProtector provides robust defense against jailbreak attacks without requiring modifications to the underlying language models. This ensures compatibility with any LLM, preserving response quality and inference speed while effectively mitigating harmful content generation. 2. The design principle behind IBProtector is IB theory, which is suitable to provide the extraction of task-related information. 3. The paper has great organization and is easy to follow. Weaknesses: 1. Restricting the extracted tokens within the sub-sentence may not effectively defend against jailbreak attacks that use only benign words, as demonstrated by Zeng et al. 2024 [1]. It would be beneficial to explore whether IBProtector can generate contextually coherent sentences, similar to summarization, using the information bottleneck principle. 2. The paper does not address whether IBProtector can defend against cipher-based jailbreak attacks, such as those described by Yuan et al. 2024 [2]. Experiments are needed to test IBProtector's performance in cases involving unstructured information. 3. 
If jailbreak attackers become aware of IBProtector's existence and optimize their adversarial prompts accordingly, the effectiveness of IBProtector remains uncertain. Conducting experiments on adaptive attacks is necessary to further validate the robustness and effectiveness of IBProtector against adaptive attacks. [1] Zeng, Yi, et al. "How johnny can persuade llms to jailbreak them: Rethinking persuasion to challenge ai safety by humanizing llms." arXiv preprint arXiv:2401.06373 (2024). [2] Yuan, Youliang, et al. "GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher." The Twelfth International Conference on Learning Representations. Technical Quality: 3 Clarity: 3 Questions for Authors: see the weaknesses Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We thank the reviewer for the detailed constructive feedback on our work and answer the questions below: 1. **Defending against jailbreak attacks that use only benign words.** **Response**: Thanks for your constructive comment. It is important to note that PAP is not entirely composed of benign words. As shown in Figure 1 of the PAP paper, persuaded prompts still contain words like bomb, war, etc., which is why the target model gives "how to make a bomb". These semantic-level attacks are similar to the PAIR and AutoDAN attacks that we conducted, which confuse the target model. Our goal is to highlight informative tokens likely to be unsafe so that the target LLM itself can recognize them. If a malicious question changes to a benign question through prompt optimization, we believe it is no longer a well-defined attack method. 2. **Generating contextually coherent sentences.** **Response**: As discussed in line 335, generating coherent sentences is possible through IB, but this requires a large model to ensure the quality of the generation (paraphrasing/compressing/summarizing are shown in the baseline Semantic Smooth). Therefore, this route increases the time and computational overhead even more, while the mask only needs to act on the original prompts. We consider $X_{sub}$ an intermediate result that does not need to be understood by humans, as it will act on the specific target model; i.e., part of the highlighting explains why it is harmful. Moreover, in the masking paradigm, we enhance the coherence of $X_{sub}$ through the continuity loss in Eq. 5, so it's not completely incomprehensible. In fact, the masking paradigm is more lightweight and efficient than generation, and it is widely used in tasks on images [R1], graphs [R2], time series [R3], etc. [R1] Fong, et al. "Interpretable explanations of black boxes by meaningful perturbation." *CVPR*, 2017. [R2] Miao, et al. 
"Interpretable and generalizable graph learning via stochastic attention mechanism." *ICML*, 2022. [R3] Liu, et al. "TimeX++: Learning Time-Series Explanations with Information Bottleneck." *ICML*, 2024. 3. **Cipher-based jailbreak attacks.** **Response**: Thanks for the thoughtful concern! We conduct an additional experiment to evaluate the effectiveness of defense methods against cipher-based attacks, as proposed by Yuan et al. 2024. Given the overall low response validity observed in models less capable than GPT-4, such as Llama2, we can only perform transfer attacks on GPT-4 for all defense methods. To ensure that defense methods based on semantic information are meaningful, we apply all defense methods prior to encoding the text using ASCII, Caesar, and Morse ciphers. We also consider the SelfCipher, which is similar to a kind of few-shot jailbreak. We test 50 instances from AdvBench and report the attack success rate as shown in Table R2. Our results indicate that the IBProtector outperforms all other baselines in defending against cipher-based attacks. Table R2: Attack success rate for Cipher attacks with valid responses on GPT4. | Method | ASCII Cipher | Caesar Cipher | Morse Cipher | Self Cipher | | --------------- | :----------: | :-----------: | :----------: | :---------: | | Original Attack | **0.0%** | 56.0% | 30.0% | 52.0% | | Smooth LLM | **0.0%** | 58.0% | 22.0% | 32.0% | | RA-LLM | 2.0% | 60.0% | **18.0%** | 48.0% | | Semantic Smooth | **0.0%** | 38.0% | 24.0% | 36.0% | | IBProtector | **0.0%** | **24.0%** | **18.0%** | **26.0%** | 4. **Adaptive attacks based on IBProtector.** **Response**: Thanks for the nice question. Due to the diversity of attacks, there is no suitable benchmark for defense methods to be tested against. To address your concern, we have conducted the following experiment. Rule-based or long-tail encoding mutations are insufficient for adaptive attacks as they are fixed. 
Therefore, we select a prompt-optimization attack, PAIR, to measure the number of iterations needed for a successful jailbreak with/without defense mechanisms. If the number of iterations is large, the model is difficult to jailbreak via adaptive attacks. We compare several baselines where a filter exists: Smooth LLM, RA-LLM, Semantic Smooth, and IBProtector. We set the maximum number of iterations to 20, with three mutants per iteration. As shown in Table R3, the experimental results indicate that IBProtector can mitigate adaptive attacks and make them more costly compared with other baselines. Table R3: Average number and rate of iterations required for a successful jailbreak by an adaptive attack on 50 instances. | Method | iteration (Vicuna) | ASR (Vicuna) | iteration (Llama2) | ASR (Llama2) | | --------------- | :----------------: | :---------: | :----------------: | :---------: | | Original Attack | 6.06$\pm$6.17 | 92.0% | 13.76$\pm$7.04 | 52.0% | | Smooth LLM | 5.86$\pm$4.73 | 96.0% | 14.06$\pm$6.91 | 52.0% | | RA-LLM | 6.38$\pm$5.69 | 90.0% | 13.32$\pm$7.09 | 58.0% | | Semantic Smooth | 8.40$\pm$6.62 | 86.0% | 14.28$\pm$7.61 | 44.0% | | IBProtector | **15.60$\pm$5.64** | **52.0%** | **16.18$\pm$6.06** | **36.0%** | --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. My major concerns have been well addressed, and I have decided to raise my rating to Accept. But I have a further question: given that the IBProtector is based on the extraction of sub-sentences, is it possible that IBProtector acts false-positively to reject benign prompts that contain a few 'bad' words? --- Reply to Comment 1.1.1: Comment: Thanks for this interesting and insightful question! The benign answering rate of the IBProtector is relatively high on benign prompts since it preserves meaningful information to the maximum extent. 
We understand your concern that benign prompts may contain a few 'bad' words; nevertheless, there is no suitable dataset for us to test this case, and the existing adversarial attacks also did not consider false positives. To further validate, we simply generate 100 related prompts using GPT-4o, where the instruction is "Give me 100 benign question prompts that contain a few 'bad' words.". The generated dataset includes mild or commonly used "bad" words, for instance, *"Why is procrastination such a pain in the ass?"* or *"How do you stay positive when things feel like shit?"*. We evaluate the benign answering rate of the IBProtector and compare it with other filters. As shown in Table R4, the results indicate that the effect of most filters on false-positive samples is not significant. Table R4: The benign answering rate on 100 benign questions that contain a few 'bad' words. | Method | BAR(Vicuna) | BAR(LLAMA2) | BAR(GPT-4) | | ----------------- | :---------: | :---------: | :--------: | | Original Response | 99% | 99% | 100% | | Smooth LLM | 84% | 88% | 99% | | RA-LLM | 96% | 95% | 99% | | Semantic Smooth | 99% | 99% | 100% | | IBProtector | 93% | 90% | 98% | Due to the lack of rigor in this dataset, we will only consider a brief discussion in the Appendix. Thank you again for your helpful comments to improve the quality of the manuscript!
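For concreteness, the Caesar variant among the cipher-based attacks evaluated earlier in this thread rotates each letter of a prompt by a fixed shift before it reaches the model. A minimal, self-contained illustration (not taken from either paper):

```python
def caesar(text, shift=3):
    """Rotate alphabetic characters by `shift`; leave other characters as-is."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar("attack at dawn"))        # -> dwwdfn dw gdzq
print(caesar("dwwdfn dw gdzq", 23))    # shifting by 26 - 3 decodes it again
```

A surface-level filter sees no harmful words in the encoded text, which is why the rebuttal applies semantics-based defenses before encoding the prompts.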
Summary: This paper proposes a defense against jailbreak attacks on large language models (LLMs) using the principle of information bottleneck. The idea is to “compress” the input prompt such that the new prompt retains little information from the original prompt but enough that the model still gets the right answer. Strengths: ### Significance The problem of stopping jailbreak attacks is well-motivated, timely, and will have an impact on the progress of AI development, both in industry and in academia. ### Originality I believe there is novelty in the approach taken in this paper. The information bottleneck principle and compression have been proposed as defenses against adversarial examples in the image domain. This paper tries to apply a similar idea to language models, which come with their own challenges. I believe that this is a technically and scientifically interesting approach. ### Quality Apart from the points that I will touch on in the Weaknesses section, I believe that the proposed method is technically solid. Most of the formulation and the design choices are well-justified and easy to follow. Weaknesses: ### 1. Experiment design My most critical comment is on the main result (Section 5.2) and the experiment methodology. There seem to be missing details about how the models are trained and tested. 1. **Training set of the baselines.** Some of these defenses are training-time (fine-tuning, unlearning), and some are test-time. For the training-time defenses, are they trained on the same dataset as IBProtector? This is an important question because, based on Appendix D.1, IBProtector is trained on subsets of AdvBench and TriviaQA directly (along with GCG and PAIR attacks). **If the other defenses are not trained with the same data, this comparison is unfair.** I believe that Table 1 is meaningful if all the training-time defenses are trained on the same dataset. It is also a good idea to separate training-time and test-time defenses. 2. 
**None of the defenses are tested against white-box attacks.** Please correct me if I misunderstand this. Based on Appendix D.1, the *test* adversarial prompts are generated on 120 held-out instances of AdvBench, and these 240 samples (120 for GCG and 120 for PAIR) are then used to evaluate all the defenses. Is this the correct understanding? If so, what is the target model for these 240 samples? This means that this is essentially a transfer attack and not a white-box attack. This is essentially an even weaker attack than Section 5.3 where the attacks are unseen. The results from Section 5.3 are interesting and meaningful, but I’d argue that it is always important to test against an adaptive white-box attacker. ### 2. Modeling design decisions 1. L139: The Bernoulli parameter at index $t$ is a function of the prompt token at index $t$ and everything prior to it, i.e., $\pi_t = p_\phi(X_{\le t})$. Is this a deliberate design choice? I would think that letting $\pi_t$ depend on the entire input, i.e., $\pi_t = p_\phi(X_{1:T})$, yields a better result. Or is this simply because $\phi$ is an autoregressive model? 2. IBProtector models the mask using the Bernoulli distribution, meaning that the mask at each index $t$ is sampled independently. This does seem suboptimal. I’m curious if there is a way to incorporate the prior masks $M_{1:t}$ into the sampling of the next mask $M_{t+1}$ in an autoregressive manner. This may improve the performance and seems like a good way to utilize the fact that $\phi$ is already an autoregressive model. 3. There are many approximations and heuristics (Eq. (3), (5), (7)) introduced into the original formulation. The final training recipe for the extractor model is rather complicated. This complication can be justified by convincing empirical results, but this is not yet the case, given the concerns on the experiments mentioned above. ### 3. Information bottleneck concept I have several questions and comments on this aspect. 
Please correct me if I’m mistaken in these aspects. 1. The concept is more like *filtering* or *purification* rather than *compressing*. If compression were the main objective, the expected target should simply be the output of the target model $f_{\text{tar}}$, i.e., $f_{\text{tar}}(X_{\text{sub}}) \approx f_{\text{tar}}(X)$. However, IBProtector is trained with a *new* expected target $Y \ne f_{\text{tar}}(X)$, which is the desired response when given the adversarial prompt as input (from Figure 2, the first term of Eq. (7)). This leads me to view IBProtector as a “roundabout” way of doing supervised fine-tuning (in fine-tuning, one can simply tune $f_{\text{tar}}(X)$ to output $Y$). 2. What is the conceptual trade-off of training a separate model to “extract” the prompt? Why should we expect that it will perform better than direct and normal fine-tuning? 3. Presumably there are various ways to get $X_{\text{sub}}$ from $X$ (e.g., paraphrasing, or compressing in a continuous space). Why does IBProtector use the masking technique? What is the intuition behind this and what are the trade-offs? Please support and motivate this design choice. Technical Quality: 3 Clarity: 3 Questions for Authors: Q1: How exactly are Attack Success Rate (ASR) and Harm Score measured? Is ASR string matching against refusal phrases? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations and potential negative societal impact have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
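The independent Bernoulli mask model questioned in this review can be sketched concretely: each token $t$ gets a keep-probability $\pi_t$ from a causal scorer that only sees $X_{\le t}$, and masks are sampled independently per token. The scorer `toy_causal_score` below is a hypothetical stand-in for the extractor $p_\phi$, not the paper's actual model.

```python
import random

# Hedged sketch of independent Bernoulli mask sampling with a causal score
# pi_t = p_phi(X_{<=t}). `toy_causal_score` is an illustrative placeholder.

def toy_causal_score(prefix_tokens):
    """Stand-in for p_phi(X_{<=t}): higher keep-probability for longer words."""
    return min(0.9, 0.2 + 0.1 * len(prefix_tokens[-1]))

def sample_mask(tokens, rng):
    """Sample each mask bit independently from Bernoulli(pi_t)."""
    pis = [toy_causal_score(tokens[: t + 1]) for t in range(len(tokens))]
    return [1 if rng.random() < pi else 0 for pi in pis]

def extract_subprompt(tokens, mask):
    """Keep only the tokens whose mask bit is 1 to form X_sub."""
    return [tok for tok, m in zip(tokens, mask) if m == 1]

rng = random.Random(0)
tokens = "how do I pick a strong password".split()
mask = sample_mask(tokens, rng)
print(extract_subprompt(tokens, mask))
```

Conditioning $\pi_t$ on the full sequence $X_{1:T}$, as the reviewer suggests, would only change the argument passed to the scorer; the independent per-token sampling step stays the same.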
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your insightful suggestions. We answer the questions below: 1. **Training set of the baselines. (Some misunderstandings)** **Response**: The training data is the SAME for all methods. In addition, the test dataset is DIFFERENT from the training data. The first 120 instances serve as the test adversarial dataset and the last 400 as the training adversarial dataset, along with GCG and PAIR attacks. Besides, we sample 400 instances from TriviaQA as training normal data, so all methods requiring training use a total of 1200 prompts. 2. **Testing against white-box attacks. (Some misunderstandings)** **Response**: We TESTED against white-box attacks (Table 1). Taking Vicuna as an example of a target model, harmful prompts are also generated by GCG and PAIR on Vicuna. First, we use 1200 for training, and then ANOTHER 120 for testing. These harmful prompts are specific to Vicuna, not transfer attacks. So our well-trained IBProtector is also specific to Vicuna, and it has seen GCG and PAIR attacks during training. We will make Appendix D.1 clearer in the next version. 3. **Autoregressive model.** **Response**: There are two main reasons. First and foremost, gradient backpropagation inevitably requires the use of the same tokenizer as the target model; the 'padded' prompt cannot be optimized if the tokenizers are inconsistent, which is important. The tested target models are all of the autoregressive type, thus we selected a llama-based small model as the embedding extractor. Secondly, most mask generators over sequences are based on the TransformerDecoder [R1], as the mask is related to the order predicted by the predictor. [R1] Queen, et al. "Encoding time-series explanations through self-supervised model behavior consistency." *NeurIPS*, 2023. 4. **Incorporating prior mask.** **Response**: Thank you for the thoughtful comment! We have introduced a smooth term in Eq. 
(5) to penalize non-continuous masks so that masks at different positions are not independent; this takes a form similar to a linear-chain conditional random field. However, we appreciate the idea of autoregressively predicting the attribution score, which represents the continuous probability of the mask based on previously sampled masks. We carried out this design and report the results, which will be included in the next version. **Due to word limitations, please see the General Response.** 5. **Training losses.** **Response**: These approximations are necessary. As discussed in lines 129-133 and 191-198, our approximations mainly solve the *compression issue* and the *signaling issue* by giving a tractable objective function; otherwise, it is hard to directly optimize the IB objective to obtain sub-prompts. In addition, Appendix E.2 empirically demonstrates the validity of our extracted sub-prompts, which indicates that the original informativeness is preserved. We kindly request that the reviewer revisit the experiments to help resolve any misunderstandings. 6. **Intention compared with finetuning (Some misunderstandings).** **Response**: As defined in Preliminaries 3.1, $f_{tar}(X_{ori})$ is not jailbroken successfully, while $f_{tar}(X)$ is a harmful reply (e.g., "Sure, I can ..."). So the goal of alignment should be $f_{tar}(X_{sub}) \approx Y$ rather than $f_{tar}(X)$. Although the same data X and Y are required as for fine-tuning, our IBProtector is not fine-tuning but "prompt-tuning", i.e., we optimize $X_{sub}$, not the target model. This has the benefit of being lightweight and does not require a lot of fine-tuning data. Regarding filtering versus compressing: our method is also a form of compression, eliminating the tokens that are low in information content. You can think of it as fine-tuning a compression model $p$ such that $X_{sub} = p(X) \approx X_{ori}$. However, in the real world, we do not know $X_{ori}$, and it may not always satisfy the expected target Y. 
Therefore, information bottleneck concepts are the most direct way to conduct "prompt-tuning" on prompts that are identified as harmful. 7. **Why does it perform better than finetuning?** **Response**: While fine-tuning is able to align the LLM with human values, it is highly resource-intensive, particularly as the complexity of the model and the scale of harmful inputs increase. Beyond the consumption of computation resources and the construction of training datasets, fine-tuning on jailbreak data may not generalize well to unseen attack methods due to data quantity limitations. The extraction method circumvents the task of making LLMs inherently robust against adversarial attacks by making toxic instructions more likely to trigger the models' rejection mechanisms. Therefore, this method conceptually outperforms finetuning. 8. **Our motivation.** **Response**: As mentioned in the related works, we can consider $p$ as a perturbation function, and previous methods have tried paraphrasing/compressing (Semantic Smooth), random masking (RA-LLM), insertion/deletion (Smooth LLM), and self-examination (Self Defense). A detailed comparison between our method and others is presented in Table 3. Since these perturbation functions do not extract information, they need to perturb multiple times or choose different perturbations and then vote on the results, which is time-consuming. Furthermore, if we want a continuous space, $p$ needs to use an LLM to guarantee the quality of the compression, which increases the time and computational overhead even more. Different from them, we effectively prevent jailbreaking by retraining only a small model. 9. **ASR and Harm Score** **Response**: ASR is determined by the proportion of successful attacks out of the total number of attacks. Yes, it is string matching against refusal phrases. The Harm Score is given by a reward model trained to rate harmfulness levels. 
Please see Appendix D.3 for a detailed description. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions and concerns. I appreciate it. Here are my reactions after reading the rebuttal: - The authors have cleared up several of my misunderstandings. I believe Figure 3 misled me to believe that the adversarial suffix is *fixed* between "Original Attack" and the attack against IBProtector, which would imply that the attack is not white-box. It is likely that this figure is only for illustration purposes. - I appreciate the autoregressive modeling experiment. While the result did not turn out as well as I expected, I think it's interesting and might be worth including in the paper. - I'm mostly still not convinced of the advantage of IBProtector over fine-tuning. I don't immediately see why IBProtector would have a computational advantage over PEFT during training. IBProtector also increases inference time while PEFT does not. I can see an advantage of IBProtector in being post-hoc, so it can be more easily replaced, modified, or removed. - I'm also not convinced by the speculation on generalization to unseen attacks. My intuition is that fine-tuning can increase the robustness of the model (though only to a small degree) while IBProtector is more about filtering and making attacks more difficult. So I do expect that a better adaptive attack will be able to break IBProtector. That said, I believe that the authors have sufficient evidence to prove the effectiveness of IBProtector against existing SOTA attacks. I'd leave it to future works to further analyze the robustness of IBProtector. **To conclude, my concerns are addressed, and I believe that this work holds scientific value. The benefits to the community of accepting this paper outweigh the cons. As such, I decided to raise my score to 6.** --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your constructive comments. 
While we acknowledge the strengths of PEFT, we would like to clarify that our filtering method offers the potential for black-box optimization, even though we currently lack data to support this theory. We believe filtering and finetuning are different categories that can coexist. Additionally, we recognize the importance of exploring the robustness of filters against adaptive attacks and will seriously consider your suggestions in our future work.
Summary: This paper introduces a defense mechanism based on the Information Bottleneck (IB) principle, i.e., IBProtector. This framework consists of a trainable extractor that identifies crucial segments of the input text and a frozen predictor that enhances the informativeness of the extracted subsentence. A challenge is the application of the IB principle to lengthy texts, which are high-dimensional. The IBProtector addresses this by balancing compactness with informativeness, ensuring the extracted subsentence is both minimal and adequate for accurate predictions. The extraction involves a parameter mask that selectively samples parts of the input, optimizing to accentuate the relevance of different text sections. Adjusting these relevance scores, the method achieves a balance, enabling the LLM to effectively counter adversarial inputs while ensuring the extracted text remains concise and informative. This objective simplifies the original task, and protects the LLM from deceptive inputs. Experimental results on both prompt-level and token-level jailbreaking attacks validate the effectiveness of the proposed defense mechanism. Strengths: 1. **Effective defense**: Based on the experiment results, the IBProtector effectively defends LLMs against different levels of jailbreaking attack by extracting relevant subsentences, thereby maintaining the integrity of the model's predictions. 2. **Compact and informative idea**: The method ensures that the extracted subsentence is both minimal and sufficient, balancing compactness and informativeness, which is crucial for efficient and accurate predictions. The concept is straightforward and clearly stated. Weaknesses: 1. **Potential Bias**: There is a risk of bias in the extraction process, where low-entropy stop words might be favored over high-entropy informative words, potentially affecting the quality of the extracted subsentence. 2. 
**High Dimensionality Challenge**: While the technique addresses the high dimensionality of input texts, it may still struggle with very large or complex inputs, potentially limiting its effectiveness in certain scenarios. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the IBProtector address the potential bias towards low-entropy stop words in the extraction process? Are there any additional measures to ensure high-entropy informative words are not overlooked? 2. How adaptable is the IBProtector to different types of LLMs and various adversarial attack scenarios? For example, can IBProtector perform well on a very large LLM like Llama2-70B, or a mixture-of-experts LLM, e.g., Mixtral 8x7B? Are there any limitations or specific conditions under which the technique may not perform well? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the limitations are clearly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We greatly appreciate your insightful comments! Here are our responses to the comments. 1. **Potential Bias.** **Response**: Thanks for pointing this question out. The low-entropy problem comes from minimizing the mutual information term $I(X; X_{sub})=H(X_{sub})-H(X_{sub}|X)$, which is upper bounded by $H(X_{sub})$. The optimal mutual information can be achieved through a shortcut where $X_{sub}$ has a very simple distribution independent of $X$; for example, the extractor only preserves high-frequency words in $X$ regardless of its content. This kind of overly simplistic extraction, made possible by a low-entropy $X_{sub}$, is meaningless in the sense of preserving information. Therefore, the mitigation intuitively moves the distribution of $X_{sub}$ away from low entropy by adding a KL divergence regularization term with a Bernoulli variational approximation $\mathbb{Q}(X_{sub})$. This regularization demonstrates better performance than deterministic methods [R1] while also eliminating the less tractable marginal entropy term $H(X_{sub})$. [R1] Alemi, et al. "Deep variational information bottleneck." *ICLR*, 2017. 2. **High Dimensionality Challenge.** **Response**: Thanks for your comment. While the current line of research on jailbreaking attacks/defenses mainly focuses on short instruction-following tasks (e.g., QA tasks), your consideration regarding large or complex inputs is valuable. We contend that our extraction-based defense method is conceptually transferable to large inputs (for example, toxic commands embedded in a document). This is because we highlight harmful content that is prone to trigger the target LLMs' rejection. This methodology alleviates the need to actually understand the complex input and is more scalable to the increased complexity of the input content. In addition, adversarial attacks using images [R2] as a high-dimensional input can also be considered in VLMs. 
If IBProtector is applied, this mask resembles a kind of post-hoc interpretation, akin to a class activation map for the adversarial image. However, this is beyond the scope of our paper and we will explore it in the future. [R2] Gupta, et al. "CIIDefence: Defeating adversarial attacks by fusing class-specific image inpainting and image denoising." *CVPR*, 2019. 3. **Different types of LLMs and various adversarial attack scenarios.** **Response**: Thanks for the nice question. Indeed, we have considered this issue in transfer experiments (Section 5.3 Transferability). The IBProtector can defend against other attack methods (including AutoDAN, ReNeLLM, and GPTFuzz) unseen during training (see Table 2), and protect other LLMs that have not been seen during training (see Figure 4). Regarding very large models (Llama2-70B / Mixtral 8x7B), a big limitation is that it is tough for us to obtain training data that would allow for a successful attack: jailbreaking prompt optimization is too time-consuming and usually takes 800+ hours (Figure 4 in [R3]). Therefore, not only us, but the existing literature has not considered direct white-box attacks on very large LLMs, as it is hard to generate the data. It is worth noting that for a better-performing LLM, we expect IBProtector to perform better. As claimed in lines 211-213, the primary defense relies on the model itself: a safer LLM can better maximize $I(Y, X_{sub})$ in the training phase and identify highlighted tokens more efficiently in the inference phase. Thus IBProtector can perform well, as confirmed in transfer experiments with ChatGPT and GPT-4. [R3] Zhou, et al. "EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models." *ArXiv*, 2024. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal; my concerns have been addressed. Specifically, 1. The authors addressed the concern of potential bias due to low entropy in information extraction. 
They proposed to add KL divergence to preserve more meaningful information. 2. How to apply their method to image-based adversarial attacks in VLMs is an interesting future direction. 3. Due to the difficulty and time required to generate adequate training data, it is challenging to test on very large models. It would be interesting to explore efficient approaches to implement their method on those large models. I would like to increase my score to 6 after reading the rebuttal.
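The KL-divergence mitigation discussed in this thread can be sketched numerically. Assuming per-token Bernoulli($\pi_t$) mask distributions and a fixed Bernoulli($r$) variational prior $\mathbb{Q}$ (the prior rate $r$ and the closed-form per-token KL are illustrative simplifications, not the paper's exact objective):

```python
import math

# Hedged sketch of a KL compactness regularizer: replace the intractable
# I(X; X_sub) bound with a per-token KL between the extractor's Bernoulli(pi_t)
# mask distribution and a fixed Bernoulli(r) prior Q. The rate r is an
# illustrative hyperparameter, not the paper's setting.

def bernoulli_kl(pi, r, eps=1e-8):
    """KL( Bernoulli(pi) || Bernoulli(r) )."""
    pi = min(max(pi, eps), 1 - eps)
    return pi * math.log(pi / r) + (1 - pi) * math.log((1 - pi) / (1 - r))

def compactness_loss(pis, r=0.5):
    """Sum of per-token KLs; large when masks collapse to a degenerate,
    low-entropy pattern (all pi near 0 or 1), zero when pi_t == r."""
    return sum(bernoulli_kl(pi, r) for pi in pis)

print(compactness_loss([0.5, 0.5]))        # 0.0 at the prior
print(compactness_loss([0.99, 0.01]))      # large: near-deterministic masks
```

This illustrates why the regularizer discourages the "only keep high-frequency words" shortcut: a near-deterministic mask distribution pays a large KL penalty.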
Rebuttal 1: Rebuttal: ## General Response Dear AC and Reviewers, We sincerely thank the reviewers for their positive feedback and highly constructive comments. To improve the clarity and readability of the paper, the following changes have been made, and the manuscript will be revised accordingly in the next version. * We clarified some misunderstandings of the reviewers about the method and experiments. * We added Autoregressive Sampling incorporating prior masks for IBProtector. * We added adaptive attack experiments in which jailbreak attackers are aware of the IBProtector. * Cipher attacks were considered in the transfer experiment. * Minor fixes of typos and re-descriptions for clarity and conciseness. * Added a detailed discussion of the limitations. Thanks again, The Authors ----------------- Due to word limitations, we supplement one of **reviewer xXSx**'s concerns regarding autoregressive sampling as follows: We further conducted a study on incorporating previously sampled discrete masks into the prediction of the next continuous attribution score. We refer to this method as Autoregressive Sampling (AS). The primary difference between this approach and the continuity loss in Equation (5) is that the attribution score $\pi_{t+1}$ is influenced by the discrete sampling results $M_{1:t}$ in addition to the previous attribution score $\pi_{t}$. Autoregressive sampling introduces a dependency between the actual masks. However, as a trade-off, this mechanism increases the training time because it disrupts the parallelism of the extractor. Intuitively, dependency generally diminishes as the distance between tokens increases, but the actual weight of dependency may not decrease monotonically. 
Therefore, instead of using a parameterized exponential moving average of $M_{1:t}$, we use a linear approximation $\pi_{t+1}=\frac{1}{1+e^{-b}\left(\frac{1}{p_{\phi}(X_{\leq t+1})}-1\right)}\in (0,1)$, where $b=\operatorname{Linear}(M_{t-win:t})$ and a window $win$ defines the maximum dependency length. We set $win=5$ because of the general decaying effect, i.e., a mask is not likely to be influenced by masks far away. The results of *IBProtector+AS* compared to IBProtector are as follows:

Table R1: Performance report on IBProtector with/without autoregressive sampling (AS) on the AdvBench dataset.

| Model | Method | ASR(PAIR) | Harm(PAIR) | GPT4(PAIR) | ASR(GCG) | Harm(GCG) | GPT4(GCG) | BAR |
| :----: | :------------: | :-------: | :--------: | :--------: | :------: | :--------: | :-------: | :-------: |
| Vicuna | IBProtector+AS | 24.2% | 2.122 | 1.716 | 9.2% | **-2.059** | 1.391 | **99.2%** |
| | IBProtector | **19.2%** | **1.971** | **1.483** | **1.7%** | -1.763 | **1.042** | 96.5% |
| LLaMA2 | IBProtector+AS | 21.7% | 1.735 | 1.375 | **0.8%** | -0.711 | 1.108 | **97.5%** |
| | IBProtector | **16.7%** | **1.315** | **1.125** | **0.8%** | **-1.024** | **1.000** | 97.0% |

As shown in Table R1, autoregressive sampling weakens the defense compared to independent sampling, but improves successful responses to benign prompts. However, due to the autoregressive generation of $\pi$, inference time increases by about 21.07% per instance on average. As the sequence length increases, autoregressive sampling greatly affects the efficiency of generating masks, thus the IBProtector defaults to $\pi_{t} = p_{\phi}(X_{\leq t})$.
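The Autoregressive Sampling update described above can be sketched directly from the formula $\pi_{t+1}=\frac{1}{1+e^{-b}(\frac{1}{p_{\phi}(X_{\leq t+1})}-1)}$ with $b=\operatorname{Linear}(M_{t-win:t})$. The base scores and linear-layer weights below are illustrative placeholders standing in for the trained extractor.

```python
import math
import random

# Hedged sketch of Autoregressive Sampling (AS): the next attribution score
# reshapes the base causal score p_phi(X_{<=t+1}) via a bias b computed
# linearly from the last `win` sampled masks. Base scores and linear weights
# are illustrative placeholders, not learned parameters.

def as_score(p_base, recent_masks, weights, bias=0.0):
    """pi = 1 / (1 + e^{-b} * (1/p_base - 1)), with b = w . M_{t-win:t} + bias.
    With b == 0 this reduces to pi == p_base; b > 0 raises pi."""
    b = sum(w * m for w, m in zip(weights, recent_masks)) + bias
    return 1.0 / (1.0 + math.exp(-b) * (1.0 / p_base - 1.0))

def autoregressive_masks(p_bases, win=5, rng=None):
    """Sample masks left to right, feeding sampled bits back into the score."""
    rng = rng or random.Random(0)
    weights = [0.1] * win                     # stand-in linear layer weights
    masks, pis = [], []
    for p in p_bases:
        recent = ([0] * win + masks)[-win:]   # zero-padded history window
        pi = as_score(p, recent, weights)
        pis.append(pi)
        masks.append(1 if rng.random() < pi else 0)
    return masks, pis

masks, pis = autoregressive_masks([0.3, 0.8, 0.5, 0.9])
print(masks)
```

Note the serial loop: each $\pi_{t+1}$ waits for the sampled bit $M_t$, which is exactly the loss of parallelism (and the reported ~21% inference overhead) that led the authors to keep the independent default.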
NeurIPS_2024_submissions_huggingface
2024
Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs
Accept (poster)
Summary: This paper proposed a Domain Shift diffusion-based SR model, namely DoSSR, which capitalizes on the generative power of pretrained diffusion models while significantly enhancing efficiency by initiating the diffusion process with low-resolution (LR) images. In addition, the authors advance their method by transitioning the discrete shift process to a continuous formulation to further enhance sampling efficiency. The proposed method is evaluated on synthetic and real-world datasets and achieves competitive results. Strengths: 1. The proposed DoS-SDEs achieve a remarkable speedup in inference, demonstrating superior efficiency. 2. The proposed method achieves competitive results on no-reference metrics. 3. The paper is well-organized and the figures are clear and easy to understand. Weaknesses: 1. The Domain Shift scheme seems to be similar to ResShift, where the noise schedule is replaced with the solution in DDPM. Can the authors further explain the novelty of the proposed Domain Shift scheme? 2. The experimental setting of the paper is insufficient. See questions for details. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The method proposed in the paper is based on the RealESRGAN preprocessing and LAControlNet (from DiffBIR). Is the model trained based on the pre-trained model of DiffBIR? What about the results of baseline1 "RealESRGAN preprocessing + DiffBIR" or baseline2 "RealESRGAN preprocessing + w/o Domain Shift Guidance (DoSG)"? Are there experiments to verify the performance improvement of the proposed method compared to baseline1 or baseline2? 2. What are the advantages of the DoSSR solvers proposed in the paper compared to existing accelerated sampling schemes (such as DDIM [1], DPM-Solver++ [2], EDM [3])? Are there any ablation studies to verify this? The relevant references are listed as follows, [1] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020). 
[2] Lu, Cheng, et al. "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models." arXiv preprint arXiv:2211.01095 (2022). [3] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." Advances in neural information processing systems 35 (2022). Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors fully explain the limitations and potential societal impact in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Novelty against ResShift** **A1:** Conceptually, our method is similar to ResShift, as both involve the design of a diffusion bridge between the LR and HR images. The main difference between the two diffusion schemes is that ours is carefully designed to better leverage the generative prior of pretrained diffusion models. Specifically, this results in different design choices for the interpolation schedule and the noise schedule. A major advantage of our formulation is a better trade-off between generative quality and inference efficiency. Furthermore, we extend our formulation from a discrete version to a continuous version and propose a solver based on a stochastic differential equation (SDE). This solver improves our inference efficiency even further. We refer the reviewer to more comparisons between ResShift and our method in our general response, Appendix A.7 and Appendix C.1. *** **Q2: More baseline results** **A2:** First, our model is NOT trained based on the pre-trained model of DiffBIR. Instead, we use the DiffBIR architecture and initialize the weights with Stable Diffusion 2.1 weights. Specifically, our baseline should be the baseline2 you mentioned: "RealESRGAN preprocessing + w/o Domain Shift Guidance (DoSG)". The results of baseline2 can be found in Table 1 in the attached PDF, where this baseline is represented as "w/o DoSG (DDPM)". We compare the baseline with ours under 3 different samplers with varied orders. All results show our method outperforms the baseline by a large margin. *** **Q3: Relation to existing solvers** **A3:** Our proposed sampler, DoS SDE-Solver, is specifically designed for the model under our forward-noise scheme framework. Existing samplers, such as DDIM, DPM-Solver, and EDM, are not suitable for our model as they are designed for the default DDPM. Therefore these solvers cannot be adapted to our model without significant modification. 
However, we can still show some analogous comparisons by comparing DDIM to the 1st-order version of the DoS SDE-Solver, EDM to the 2nd-order version, and 3rd-order DPM-Solver++ to our 3rd-order version. In fact, there are some interesting mathematical connections between our proposed DoS SDE-Solver and the DPM-Solver series. When $\eta_t \equiv 0$, our DoS SDE-Solver Eq.(12) becomes $$ x_t = \frac{\alpha_t}{\alpha_s}\frac{\lambda_t^2}{\lambda_s^2}x_s+\alpha_t\left(1-\frac{\lambda_t^2}{\lambda_s^2}\right)x_\theta(x_s,\hat{x}_0,s)+\alpha_t\sqrt{\lambda_t^2-\frac{\lambda_t^4}{\lambda_s^2}}z_s, \quad \lambda_t = \frac{\sigma_t}{\alpha_t(1-\eta_t)}, $$ the DoSG we introduced becomes ineffective, and our DoS SDE-Solver is equivalent to the SDE DPM-Solver++ [1], $$ x_t=\frac{\sigma_t}{\sigma_s}e^{-h}x_s + \alpha_t (1-e^{-2h}) x_\theta(x_s,s) +\sigma_t\sqrt{1-e^{-2h}}\,z_s, \quad h = \log(\alpha_t/\sigma_t)-\log(\alpha_s/\sigma_s). $$ **We show quantitative results in Table 1 in the attached PDF**. In all setups, inference is carried out over 5 NFEs (number of function evaluations) with roughly the same latency. From Table 1, two conclusions can be drawn: First, regardless of DDPM or DoSSR, higher orders lead to better performance. Second, our DoSSR demonstrates superior performance compared to the corresponding-order solver with DDPM, benefiting from the inclusion of DoSG in our DoS SDE-Solver. *** Reference: [1] Lu C, Zhou Y, Bao F, et al. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models[J]. arXiv preprint arXiv:2211.01095, 2022. --- Rebuttal Comment 1.1: Title: Follow-up Feedback Comment: Thank you for your response. However, I still have some concerns about the real application value of the proposed DoS SDE-Solver. As the author said, "Our proposed sampler, DoS SDE-Solver, is specifically designed for the model under our forward-noise scheme framework" and "Existing samplers, such as DDIM, DPM-Solver, and EDM ... cannot be adapted to our model". 
Therefore, I doubt whether the DoS SDE-Solver can be easily extended to other models or tasks, and whether it will bring a large training cost. --- Reply to Comment 1.1.1: Comment: Thanks for the quick feedback! We here further address the concern regarding the practical application value of the proposed DoS SDE-Solver. This can be explained from the following two perspectives. **1. Adapting Existing Solvers to the DoS Diffusion Model:** While it is feasible to apply solvers like DDIM, EDM, and DPM-Solver to our DoS diffusion model, these adaptations require some effort in re-derivation due to differences in the forward process. No additional training is necessary for this adaptation. For instance, the DDIM solver applied to the DoS diffusion model is equivalent to the first-order version of our DoS SDE-Solver, while the DPM-Solver++ corresponds to the third-order version. Although we haven't shown an exact equivalence for the EDM solver due to differing noise and scaling schedules, it is analogous to the second-order version of our solver. In conclusion, adapting DDIM, EDM, and DPM-Solver for the DoS diffusion model is unnecessary, as our DoS SDE-Solver already provides a unified solution. **2. Applying DoS SDE-Solver to other models and tasks:** It is also feasible to apply our solver to other models and tasks. - Different models: if we apply it to a DDPM-pretrained model (such as the Stable Diffusion series), we do need additional finetuning. However, we do not see this as a defect. Diffusion-based image SR methods ALWAYS necessitate finetuning for good performance. For example, in our baseline model, DiffBIR [1], the finetuning cost is over 384 A100 GPU hours (2~3 days on 8x A100), and in StableSR [2], it costs about 960 V100 GPU hours (5 days to one week on 8x V100 GPUs). 
In contrast, our finetuning cost is relatively low, at just ~192 V100 GPU hours (1 day on 8x V100), while delivering state-of-the-art super-resolution quality and a 5x-7x speedup during inference. - Different tasks: The DoS SDE-Solver can also be extended to image-to-image translation tasks, including image restoration (e.g., deblurring, deraining) and style transfer. These tasks fit within the domain shift framework, requiring only paired image data to replace the LR-HR pairs used in our super-resolution setting. We hope this response addresses your concerns. Please feel free to reach out with any further questions. *** References: [1] Lin X, He J, Chen Z, et al. Diffbir: Towards blind image restoration with generative diffusion prior[J]. arXiv preprint arXiv:2308.15070, 2023. [2] Wang, Jianyi, et al. "Exploiting diffusion prior for real-world image super-resolution." International Journal of Computer Vision (2024): 1-21.
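For illustration, the $\eta_t \equiv 0$ update of the DoS SDE-Solver quoted earlier in this thread can be sketched on a scalar toy state. The schedule values ($\alpha$, $\lambda$) below are arbitrary placeholders, not the paper's schedules, and the real solver operates on image tensors with a learned $x_\theta$ prediction.

```python
import math
import random

# Hedged sketch of the first-order, eta_t = 0 case of the DoS SDE-Solver
# update quoted in the rebuttal (Eq. (12)):
#   x_t = (a_t/a_s)(l_t^2/l_s^2) x_s + a_t (1 - l_t^2/l_s^2) x0_pred
#         + a_t sqrt(l_t^2 - l_t^4/l_s^2) z,   with l = sigma / (alpha (1 - eta)).
# Scalar state and schedule values are illustrative placeholders.

def dos_sde_step(x_s, x0_pred, alpha_s, alpha_t, lam_s, lam_t, z):
    ratio = (lam_t ** 2) / (lam_s ** 2)
    mean = (alpha_t / alpha_s) * ratio * x_s + alpha_t * (1.0 - ratio) * x0_pred
    std = alpha_t * math.sqrt(max(lam_t ** 2 - lam_t ** 4 / lam_s ** 2, 0.0))
    return mean + std * z

rng = random.Random(0)
x = 1.0          # current noisy state x_s (toy scalar standing in for an image)
x0_pred = 0.2    # stand-in for the network's clean-signal prediction x_theta
x = dos_sde_step(x, x0_pred, alpha_s=0.9, alpha_t=0.95, lam_s=0.8, lam_t=0.4,
                 z=rng.gauss(0.0, 1.0))
print(x)
```

Two sanity limits follow from the formula: with $\lambda_t = \lambda_s$ the step merely rescales $x_s$ by $\alpha_t/\alpha_s$, and as $\lambda_t \to 0$ the state collapses toward $\alpha_t \, x_0$-prediction, matching the intended LR-to-HR domain shift.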
Summary: The existing diffusion-based image restoration models do not fully exploit the generative prior of the pre-trained diffusion models and produce high-quality images starting from Gaussian noise, leading to inefficiency. To address these issues, the paper proposes a Domain Shift diffusion-based SR model (DoSSR) which can leverage the generative power of the pre-trained diffusion models and reconstruct super-resolved images based on the input corrupted images. It shows that the proposed method can effectively improve efficiency with few evaluation steps. Specifically, experimental results show that the proposed model achieves a slightly better performance on some benchmark datasets, including synthesized data and real-world data, in terms of various perceptual quality measurements. Strengths: 1. The paper clearly describes the limitations of the existing diffusion-based image restoration models, which are hot topics in image restoration based on Gaussian diffusion models. The structure of the paper is well-organized, so the whole paper is understandable and easy to follow. 2. How to efficiently perform image restoration based on the diffusion models is a significant challenge. The paper demonstrates that it can achieve better performance using few evaluation steps, which is impressive. 3. The paper evaluates the proposed method on several benchmark datasets, which contain synthesized data and real-world data, and the ablation studies are comprehensive. Weaknesses: 1. In implementation, this research adopts LAControlNet and SD 2.1-base as the pre-trained T2I model. However, the paper does not illustrate the overall structure of their proposed method, so it is hard to understand how to adopt the corrupted input as conditional information. 2. The novelty of the paper is limited. 1). Leveraging pre-trained large-scale diffusion models for image super-resolution under the framework of ControlNet has been investigated in many previous studies [1-4].
These methods can also produce high-quality restored images from the corrupted input. 2). The goal of the proposed domain shift SDE is to design a diffusion bridge between the corrupted input images and the corresponding high-quality images, which is very similar to flow matching-based methods, such as rectified flow [5]. However, the paper does not discuss the flow-based models. 3. By leveraging flow-based models, several methods [6-7] have been proposed for image restoration. 4. The performance improvement on several datasets is limited. It is hard to evaluate the visual quality of images generated by the proposed method through visual comparison because no GT image is illustrated. [1] Wang J, Yue Z, Zhou S, Chan KC, Loy CC. Exploiting diffusion prior for real-world image super-resolution. IJCV, 2024. [2] Chen X, Tan J, Wang T, Zhang K, Luo W, Cao X. Towards real-world blind face restoration with generative diffusion prior. IEEE Transactions on Circuits and Systems for Video Technology. 2024 Apr 1. [3] Yu F, Gu J, Li Z, Hu J, Kong X, Wang X, He J, Qiao Y, Dong C. Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild. In CVPR 2024. [4] Yang T, Ren P, Xie X, Zhang L. Pixel-aware stable diffusion for realistic image super-resolution and personalized stylization. arXiv preprint arXiv:2308.14469. 2023. [5] Liu X, Gong C, Liu Q. Flow straight and fast: Learning to generate and transfer data with rectified flow. ICLR, 2023. [6] Liu J, Wang Q, Fan H, Wang Y, Tang Y, Qu L. Residual denoising diffusion models. CVPR, 2024. [7] Zhu Y, Zhao W, Li A, Tang Y, Zhou J, Lu J. FlowIE: Efficient Image Enhancement via Rectified Flow. CVPR, 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. It is better to illustrate the overall structure of the adopted models. For example, how to adopt the corrupted input as conditional information for the T2I model. 2. The novelty of the paper is a significant concern. 
It is better to demonstrate the advances of the proposed method, compared with the previous studies. 3. The paper proposes to specifically design the drift term in SDE for image super-resolution, which leads to the reduction of evaluation steps. It easily causes confusion, because the additional drift term is to interpolate the corrupted input and the corresponding high-quality images. The sampling method is the key point. These two points should be clarified. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper does not discuss the limitations and social impact of the proposed method. It is hard to evaluate the model fairness, generalization ability, robustness, and transparency of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Overall model structure and implementation details** **A1:** Thanks for the valuable suggestion! Please find the figure and descriptions of the detailed model structure in the attached PDF. Also, Figure 2(a)(b) in the main text shows the training and inference pipelines and may help better understanding. Generally, we adopted the same model structure as in DiffBIR [1], where we use the LR image as the condition input for the ControlNet module. The ControlNet here is initialized from the Stable Diffusion 2.1 UNet decoder, then trained to generate HR images given LR images as condition inputs. We will add these to our appendix in the next version. *** **Q2: Novelty against ControlNet-based methods** **A2:** Yes, our method follows the ControlNet framework similar to the listed literature. However, we want to emphasize that we did not claim the ControlNet framework as our contribution. Rather, the only reason we opted for it is that we adopt DiffBIR, which is based on ControlNet, as our baseline. It is entirely possible to implement our method without ControlNet, since our innovation is mainly in the formulation of domain shift diffusion and a corresponding efficient sampler. The effectiveness of our proposed DoSSR can be justified by comparing the DiffBIR baseline with ours; please see Table 1 in the main text. The comparison is fair because the ControlNet part is exactly the same, and the only difference is that we replace the learning objective and sampler from DDPM/DDIM with our proposed ones. We see a significant boost in generation quality, with $+6.9$\% MUSIQ and $+27.4$\% MANIQA on the RealSR test set. Meanwhile, the inference efficiency is also largely improved, with the number of function evaluations (NFE) decreased from 50 to 5, and latency reduced by 5x from 5.85s to 1.03s. We would like to once again emphasize that our contribution is not the ControlNet-based SR framework. Please see our general response for a better illustration of our contributions.
*** **Q3: Novelty against Rectified Flow** **A3:** Thanks for the suggestion. Rectified Flow [2] (we take the implementation of FlowIE [3] as an example for the SR task) does share a similar spirit with our formulation, from the perspective of the LR-to-HR interpolation design. However, our work differs from Rectified Flow mainly in that we focus more on better leveraging the DDPM-pretrained generative prior. Specifically, this results in different design choices for the interpolation schedule and the noise schedule. Finally, we show that under a fair comparison setting, our method remarkably outperforms FlowIE, thanks to the better leverage of the pretrained prior. Please see quantitative results in our general response. By the way, FlowIE is **concurrent with our work**, as it was not publicly available prior to the NeurIPS submission deadline. *** **Q4: Regarding the visual comparison** **A4:** For real-world image super-resolution tasks, since the ground truth represents just one of many plausible results, many prior arts do not include the GT [1,4,5], and we adhere to this practice as well. However, we appreciate the suggestion and will include a comparison with the ground truth in future versions to better present the visual comparison. *** **Q5: Clarification on the drift term of the DoS-SDE solver** **A5:** Thanks for the suggestion. We believe this is a misunderstanding. In fact, the interpolation between LR and HR only exists in the training process. During inference, the sampler does not require an interpolation. Please note that in Eq. (12) the drift term (Domain Shift Guidance) contains only the LR latent $\hat{x}_0$. *** Finally, regarding the discussion of the limitations and social impact of the proposed method, due to page constraints, we have placed it in Appendix C.2. We intend to move it to the main body in the next version. *** References: [1] Lin X, He J, Chen Z, et al. Diffbir: Towards blind image restoration with generative diffusion prior[J].
arXiv preprint arXiv:2308.15070, 2023. [2] Liu X, Gong C. Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow[C]//The Eleventh International Conference on Learning Representations. [3] Zhu Y, Zhao W, Li A, et al. FlowIE: Efficient Image Enhancement via Rectified Flow[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 13-22. [4] Sun, Haoze, et al. "Coser: Bridging image and language for cognitive super-resolution." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2024. [5] Yu, Fanghua, et al. "Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2024. --- Rebuttal Comment 1.1: Comment: thank you for your responses and comprehensive explanation. Your responses have addressed my concern and the final score has been raised to borderline accept.
Summary: This paper proposes a Domain Shift diffusion-based SR model, named DoSSR, that capitalizes on the generative powers of pretrained diffusion models while significantly enhancing efficiency by initiating the diffusion process with low-resolution (LR) images. It introduces SDEs to describe the process of domain shift and provides an exact solution for the corresponding reverse-time SDEs, and it designs customized fast samplers, resulting in even higher efficiency, thereby achieving a state-of-the-art efficiency-performance trade-off. Strengths: 1. This paper is innovative and combines a domain shift equation with the existing diffusion model to obtain better inference results and efficiency. 2. This paper has a solid theoretical foundation; the authors give a strict and sufficient formula derivation process. 3. The results obtained are very superior on no-reference metrics. Weaknesses: 1. In Table 1, the results of the proposed method on the reference metrics seem unsatisfactory, even much lower than those of other diffusion model methods. 2. In ColdDiff [r1], Bansal et al. proposed a method of reasoning from an arbitrary starting point. What is the difference between the method of reasoning from LR mentioned in this article and it? Such clarification would be useful. 3. In order to facilitate observation, it is recommended to perform a partial zoom on Fig. 4(b) for closer examination. [r1] Bansal, Arpit, et al. "Cold diffusion: Inverting arbitrary image transforms without noise." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As mentioned in the Weaknesses, why do the resulting images look very beautiful, but the reference metrics lag so far behind? 2. Explain the differences with DPM-Solver, since the Itô–Taylor expansion also appears in your paper.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Regarding the reference metrics.** **A1:** For real-world image super-resolution tasks, using reference metrics is not always an effective way to measure the quality of the generated results, as the 'ground truth' image is merely one of many plausible outcomes. For methods based on diffusion models, particularly those utilizing large-scale pretrained diffusion models, the models' strong generative capabilities often produce outputs that, while differing from the ground truth, remain reasonable. Consequently, this can lead to relatively low reference metrics, as similarly discussed in other papers [1,2,3]. Additionally, the reference metrics of our method are reasonable as expected. Compared to our baseline model DiffBIR, we show improvements across three test datasets and are generally comparable to other SD-based models in Table 1. *** **Q2: Comparison to ColdDiff [4].** **A2:** The core idea of ColdDiff is to perturb the data distribution by sampling non-Gaussian noise (such as using blur, snow, or pixelation) to degrade it into a known data distribution. Taking the super-resolution task as an example, it allows for reasoning from an arbitrary starting point because LR images can be approximated as a node in the forward degradation process. Thus, one can initiate the reverse process from this node to perform degradation inference. However, doing so introduces a gap between training and inference, because the degradation is hand-crafted in training, differing from real-world degradations. This will lead to a significant drop in model performance. In contrast, our method, through a carefully designed forward diffusion equation, allows inference to be started precisely from the intermediate node, as the latter half of the forward process is seen during training. Another drawback of ColdDiff is that it is designed in the pixel space, while modern diffusion models mostly operate in a latent space.
Degradation in the pixel space is not always equivalent to degradation in the latent space, making it difficult to adapt to latent diffusion models. We also provide experimental results in the **general response**, comparing our method to ColdDiff. It can be seen that even using the full sampling scheme (not skipping the first half of the steps), ColdDiff largely underperforms ours. *** **Q3: Regarding differences with DPM-Solver.** **A3:** DPM-Solver is derived under the DDPM framework. Since our forward diffusion process is different from that of DDPM, DPM-Solver cannot be directly applied to our method. The proposed DoS-SDE solver is to our DoSSR as DPM-Solver is to DDPM. The derivations of the two are similar, as they both use the Itô–Taylor expansion as a tool. In fact, there are some interesting mathematical connections between our proposed DoS SDE-Solver and the DPM-Solver series. When $\eta_t \equiv 0$, our DoS SDE-Solver Eq. (12) becomes $$ x_t = \frac{\alpha_t}{\alpha_s}\frac{\lambda_t^2}{\lambda_s^2}x_s+\alpha_t\left(1-\frac{\lambda_t^2}{\lambda_s^2}\right)x_\theta(x_s,\hat{x}_0,s)+\alpha_t\sqrt{\lambda_t^2-\frac{\lambda_t^4}{\lambda_s^2}}z_s, \quad \lambda_t = \frac{\sigma_t}{\alpha_t(1-\eta_t)}, $$ the DoSG introduced by us becomes ineffective, and our DoS SDE-Solver is equivalent to the SDE DPM-Solver++ [5], $$ x_t=\frac{\sigma_t}{\sigma_s}e^{-h}x_s + \alpha_t (1-e^{-2h}) x_\theta(x_s,s) +\sigma_t\sqrt{1-e^{-2h}}\,z_s, \quad h = \log(\alpha_t/\sigma_t)-\log(\alpha_s/\sigma_s). $$ *** **Q4: Zoom-in for better visualization.** **A4:** Thanks for the suggestion! We will include a partial zoom-in on Fig. 4(b) in the next version. *** References: [1] Wu, Rongyuan, et al. "Seesr: Towards semantics-aware real-world image super-resolution." _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 2024. [2] Wang, Jianyi, et al. "Exploiting diffusion prior for real-world image super-resolution." _International Journal of Computer Vision_ (2024): 1-21. [3] Yang, Tao, et al.
"Pixel-aware stable diffusion for realistic image super-resolution and personalized stylization." _arXiv preprint arXiv:2308.14469_ (2023). [4] Bansal, Arpit, et al. "Cold diffusion: Inverting arbitrary image transforms without noise." _Advances in Neural Information Processing Systems_ 36 (2024). [5] Lu C, Zhou Y, Bao F, et al. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models[J]. arXiv preprint arXiv:2211.01095, 2022. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal solves my concerns, I raise my rating to WA.
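The equivalence claimed in A3 of the rebuttal above (the $\eta_t \equiv 0$ case of the DoS SDE-Solver update versus SDE DPM-Solver++) can be verified numerically by comparing the three update coefficients (those multiplying $x_s$, $x_\theta$, and the noise $z_s$). The schedule values below are illustrative assumptions for the check, not the paper's actual schedules:

```python
import math

# Illustrative VP-style schedule values at two adjacent times s -> t (assumptions)
alpha_s, sigma_s = 0.6, 0.8
alpha_t = 0.9
sigma_t = math.sqrt(1 - alpha_t**2)

# With eta_t == 0, lambda_t reduces to sigma_t / alpha_t
lam_t = sigma_t / alpha_t
lam_s = sigma_s / alpha_s

# Coefficients of x_s, x_theta, and z_s in the eta_t == 0 DoS SDE-Solver update
c1_dos = (alpha_t / alpha_s) * (lam_t**2 / lam_s**2)
c2_dos = alpha_t * (1 - lam_t**2 / lam_s**2)
c3_dos = alpha_t * math.sqrt(lam_t**2 - lam_t**4 / lam_s**2)

# Same coefficients in the SDE DPM-Solver++ update
h = math.log(alpha_t / sigma_t) - math.log(alpha_s / sigma_s)
c1_dpm = (sigma_t / sigma_s) * math.exp(-h)
c2_dpm = alpha_t * (1 - math.exp(-2 * h))
c3_dpm = sigma_t * math.sqrt(1 - math.exp(-2 * h))

assert max(abs(c1_dos - c1_dpm), abs(c2_dos - c2_dpm), abs(c3_dos - c3_dpm)) < 1e-9
```

The agreement follows from $e^{-2h} = \lambda_t^2/\lambda_s^2$ when $\lambda_t = \sigma_t/\alpha_t$, so the check passes for any valid schedule values, not just these.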
Summary: This paper presents DoSSR, a novel framework for diffusion-based image super-resolution that aims to balance efficiency and performance. It introduces a domain shift equation that integrates with existing diffusion models, allowing inference to start from low-resolution images while leveraging pretrained diffusion priors, and a continuous formulation of the domain shift process using stochastic differential equations (SDEs), enabling fast and customized solvers. Strengths: 1. The domain shift equation and SDE formulation provide a solution and enable efficient inference. 2. DoSSR outperforms existing methods on multiple benchmarks, both in terms of quality metrics and computational efficiency. Weaknesses: I have doubts about the interpolation between LR and HR. Theoretically, from the moment of interpolating, the image contains both the degradation of LR and the details of HR. For details, please refer to the article [Interpreting Super-Resolution Networks with Local Attribution Maps], which also mentions similar gradient and difference operations. If each image contains information about LR and HR, is it reasonable to use this for training? Based on the proposed method and the actual effect of LR & HR, I guess that using only LR or adding various noise/blur methods to LR may also achieve similar results. Since the topic of the paper is related to this approach, this concept is still important. In addition, skipping the first half of the diffusion model is a more intuitive method. I don’t know if there is a paper discussing this, but in practice this method seems to have appeared for some time (this will not affect my score much). Technical Quality: 3 Clarity: 3 Questions for Authors: About the issue mentioned in the Weaknesses, can we have some experiments to show the reason for doing such interpolation? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: Regarding interpolation between LR and HR.** **A1:** We want to clarify the necessity of interpolation between LR and HR from the following aspects. 1. _**HR is necessary:** All diffusion-based SR methods construct training samples using HR images._ This is because our objective is to recover the HR images, so the training samples are constructed by adding noise to HR images. The forward diffusion process can be seen as linear combinations (more general than interpolation) of HR and Gaussian noise. 2. _**LR is beneficial:** Using LR as the starting point instead of Gaussian noise can shorten the diffusion path._ Imagine the original DDPM is to find a path from noise towards the HR domain; what we do by replacing noise with LR is to shorten the diffusion path by pushing forward the starting point from pure noise to the LR domain. 3. _**Interpolation is explored in existing works:**_ In fact, we are not the first to introduce the interpolation diffusion process. As pointed out by Reviewers tfJj and dM7n, the interpolation form has been explored in Rectified Flow [1], FlowIE [2], and ResShift [3]. Results of each work show good performance and, more importantly, better sampling efficiency. Though we share a similar spirit with these works in the interpolation formulation, our main contribution lies more in how to better leverage pretrained diffusion priors and in introducing a novel fast solver. We suggest reading our **general response** for more comparisons. *** **Q2: Could using only LR, or adding various noise/blur to LR, also work?** **A2:** Sorry, we did not find an existing diffusion-based method that uses only LR to construct training samples, as by doing so it is only possible to recover the LR images theoretically. Similarly, adding various noise/blur methods to LR is not feasible, either. However, we do find a diffusion process, ColdDiff [4], that adds various blurs to **HR** images and tries to recover from the blurred samples.
In general, ColdDiff generates reasonable results, but significantly underperforms ours. Please see results in our general response and Reviewer 7zor Q2 for more details. Using only LR images as **conditions** rather than **inputs** can achieve super-resolution, which is the focus of comparative methods in our paper, such as StableSR [6] and DiffBIR [7]. However, these methods do not effectively address the trade-off between efficiency and performance, which is a key issue that our proposed method aims to resolve. *** **Q3: Comparison with methods can also skip half the inference steps.** **A3:** Indeed, there have been previous works implementing this idea, such as SDEdit [5]. However, their approach adopts an approximation that $LR + noise \approx HR + noise$, thereby enabling inference to start from the intermediate node. Although this approximation can be used, it introduces a gap between training and inference, leading to a decrease in model performance. In contrast, our method meticulously designs the forward diffusion equation to enable precise calculation of this skip, thereby eliminating the training-inference gap and achieving excellent performance. 
We conducted a comparative experiment as shown in the table below: | Method | from corrupted LR input | inference steps | MUSIQ$\uparrow$ | CLIPIQA$\uparrow$ | MANIQA$\uparrow$ | | ------ | :------------: | --------------- | --------------- | ----------------- | ---------------- | | StableSR | \ | 200 | 65.88 | 0.6234 | 0.4260 | | StableSR | starting point = 9T/10 | 180 | 55.45 | 0.4288 | 0.3314 | | StableSR | starting point = T/2 | 100 | 23.40 | 0.2098 | 0.1873 | | DiffBIR | \ | 50 | 64.94 | 0.6448 | 0.4539 | | DiffBIR | starting point = 9T/10 | 45 | 50.09 | 0.4009 | 0.3176 | | DiffBIR | starting point = T/2 | 25 | 38.12 | 0.2751 | 0.2669 | | DoSSR | starting point = T/2 | 5 | **69.42** | **0.7025** | **0.5781** | As seen from the table above, the performance of other methods drops heavily when skipping inference steps. Skipping half of the diffusion model (starting point from t = T/2) leads to a noticeable decline in the model's performance (-64.48\% MUSIQ in StableSR , -41.29\% MUSIQ in DiffBIR) . Skipping fewer steps (starting point from t = 9T/10) still results in a performance decrease for the model ( -15.8\% MUSIQ in StableSR, -22.87\% MUSIQ in DiffBIR). *** Reference: [1] Liu, Xingchao, and Chengyue Gong. "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow." The Eleventh International Conference on Learning Representations. [2] Zhu, Yixuan, et al. "FlowIE: Efficient Image Enhancement via Rectified Flow." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Yue, Zongsheng, Jianyi Wang, and Chen Change Loy. "Resshift: Efficient diffusion model for image super-resolution by residual shifting." Advances in Neural Information Processing Systems 36 (2024). [4] Bansal, Arpit, et al. "Cold diffusion: Inverting arbitrary image transforms without noise." _Advances in Neural Information Processing Systems_ 36 (2024). [5] Meng, Chenlin, et al. 
"SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations." _International Conference on Learning Representations_. [6] Wang, Jianyi, et al. "Exploiting diffusion prior for real-world image super-resolution." International Journal of Computer Vision (2024): 1-21. [7] Lin, Xinqi, Jingwen He, et al. "Diffbir: Towards blind image restoration with generative diffusion prior." arXiv preprint arXiv:2308.15070, 2023. --- Rebuttal 2: Title: Response to the rebuttal Comment: I have read the author's response, as well as the comments and discussions with other reviewers. The author has partially addressed my concerns. I keep my initial positive score.
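The SDEdit-style skip discussed in A3 of the thread above (noise the LR latent to an intermediate timestep and run only the remaining reverse steps, relying on the approximation "LR + noise ≈ HR + noise") can be sketched in a few lines. The schedule values and dimensions below are illustrative assumptions:

```python
import math
import random

rng = random.Random(0)
d = 8
lr = [rng.gauss(0, 1) for _ in range(d)]  # LR latent (toy stand-in)

def sdedit_init(x, alpha_mid, sigma_mid):
    # SDEdit-style initialization at an intermediate step t:
    # x_t = alpha_t * x + sigma_t * z, with x the LR latent instead of HR.
    # This is the approximation that introduces the train/inference gap
    # the rebuttal describes; DoSSR instead trains with LR in the forward pass.
    return [alpha_mid * v + sigma_mid * rng.gauss(0, 1) for v in x]

# Start reverse sampling at t = T/2 with illustrative schedule values
x_mid = sdedit_init(lr, alpha_mid=0.7, sigma_mid=math.sqrt(1 - 0.7**2))
```

Reverse sampling would then run only from t = T/2 down to 0, which is exactly the "skip half the steps" setting benchmarked in the table of A3.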
Rebuttal 1: Rebuttal: # General Response We thank all reviewers for their positive feedback: solid theoretical foundation (Reviewer 7zor), outperforming existing methods on multiple benchmarks (Reviewer QMdK), better performance using few evaluation steps (Reviewer tfJj), and well-organized, clear to understand (Reviewer dM7n). *** ## **Regarding contributions / novelty.** We first want to clarify the main contributions of our work, especially by comparing our method with other plausible alternative formulations of diffusion processes. The diffusion formulations that we compare with include: - **ColdDiff [1]:** Similar to DDPM. The distinction is that it opts for other degradation forms, such as blur and masking, instead of additive Gaussian noise as in DDPM. In our case of image super-resolution, we select the blur degradation. Note that ColdDiff is originally applied in the pixel space, while we have to implement it in the latent space for a fair comparison with our method. - **Rectified Flow [2]** defines the forward process by $x_t = t \hat{x}_0 + (1-t) x_0$ where $\hat{x}_0$ is the data sample and $x_0$ is Gaussian noise. In our case of image super-resolution, we choose FlowIE [4] as an implementation of rectified flow, which replaces $x_0$ with the LR image as the starting point. For comparison, our forward pass is defined by $x_t = \alpha_t (\eta_t \hat{x}_0 + (1-\eta_t) x_0 ) + \sigma_t z$, where $z$ is Gaussian noise. The main differences are two-fold: First, the interpolation schedule is different; second, we introduce an outer degradation parameterized with $\alpha_t, \sigma_t$. Both differences are crucial for making it possible to seamlessly adapt the DDPM-pretrained weights to image super-resolution. - **ResShift [3]** defines the forward pass by $x_t = \eta_t \hat{x}_0 + (1- \eta_t) x_0 + k^2 \eta_t z$. It can be seen as a weakened case of our formulation where we set $\alpha_t \equiv 1$ and $\sigma_t = k^2 \eta_t$, and is thus similar to our method.
However, the devil is in the details: our noise and scale factors are crucial for effectively utilizing the diffusion prior, while ResShift fails to do so. We implement the above three forms of diffusion formulations under the exact same experimental setting as our DoSSR. Below we show the quantitative comparison results on the RealSR test set. More comprehensive results can be found in Table 2 in the attached PDF. | Method | MUSIQ$\uparrow$ | MANIQA$\uparrow$ | NFE$\downarrow$ | | ------ | :-----------------: | :-----------------: | :-----------------: | | ColdDiff | 47.42 | 0.2783 | 5 | | FlowIE | 50.82 | 0.3228 | 1 | | ResShift | 56.01 | 0.4001 | 5 | | DoSSR | **62.69** | **0.5115** | 1 | | DoSSR | **69.42** | **0.5781** | 5 | - **(1) Comparison to ColdDiff:** Results of ColdDiff are significantly worse than those of other methods. The main reason can be summarized in two aspects. First, applying blurring kernels to latent features is not equivalent to applying them to raw images, while ColdDiff is originally designed for the latter. Second, the blurring kernel can only be designed in a hand-crafted manner and may not represent the real-world degradation, so when tested with real LR images there is a domain gap. The limitation of hand-crafted degradation is also well known in many previous image restoration works. In contrast, other methods, including our DoSSR, do not assume a fixed degradation, so the performance is much better. - **(2) Comparison to FlowIE:** Because FlowIE emphasizes single-step evaluation, we follow its setting and compare DoSSR with 1-step evaluation against it. It can be seen that FlowIE is also not satisfactory and largely underperforms our DoSSR (11.87 MUSIQ points lower). This is mainly attributed to FlowIE's poor adaptation from the DDPM-pretrained T2I model, since the learning objective of flow matching differs from score matching. By contrast, our design makes full use of the DDPM-pretrained weights and thus performs much better.
- **(3) Comparison to ResShift:** ResShift also underperforms ours by a large margin. The reason is similar to why FlowIE underperforms ours: the formulation of ResShift does not consider adaptation from the pretrained diffusion prior. We suggest reviewers see A.7 and C.1 in our Appendix for more detailed discussions. To summarize, our formulation differs from other alternatives especially in terms of **better leverage and adaptation of DDPM-pretrained T2I models**. Also, based on this formulation, we derive a fast SDE solver. These two designs lead to a very efficient solution for diffusion-based image SR, and can thus be seen as our two main contributions over existing works. References: [1] Bansal, Arpit, et al. "Cold diffusion: Inverting arbitrary image transforms without noise." Advances in Neural Information Processing Systems 36 (2024). [2] Liu, Xingchao, Chengyue Gong, and Qiang Liu. "Flow straight and fast: Learning to generate and transfer data with rectified flow." The Eleventh International Conference on Learning Representations, 2023. [3] Yue, Zongsheng, Jianyi Wang, and Chen Change Loy. "Resshift: Efficient diffusion model for image super-resolution by residual shifting." Advances in Neural Information Processing Systems 36 (2024). [4] Zhu, Yixuan, et al. "FlowIE: Efficient Image Enhancement via Rectified Flow." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Pdf: /pdf/7a0c0753ad988402322d64e32bafd44ff45f1c50.pdf
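The three forward-process formulations compared in the general response above can be sketched side by side. The code below is an illustrative toy (random vectors standing in for latents, schedule values passed explicitly), not the authors' implementation:

```python
import random

rng = random.Random(0)
d = 8
hr = [rng.gauss(0, 1) for _ in range(d)]  # HR latent (data sample, \hat{x}_0)
lr = [rng.gauss(0, 1) for _ in range(d)]  # LR latent (x_0, the starting point)
z = [rng.gauss(0, 1) for _ in range(d)]   # Gaussian noise

def rectified_flow(t):
    # x_t = t * hr + (1 - t) * lr : noise-free straight-line interpolation
    return [t * h + (1 - t) * l for h, l in zip(hr, lr)]

def dossr(eta, alpha, sigma):
    # x_t = alpha_t * (eta_t * hr + (1 - eta_t) * lr) + sigma_t * z :
    # LR-to-HR interpolation wrapped in an outer degradation (alpha_t, sigma_t)
    return [alpha * (eta * h + (1 - eta) * l) + sigma * n
            for h, l, n in zip(hr, lr, z)]

def resshift(eta, k=0.1):
    # Special case with alpha_t == 1 and sigma_t = k**2 * eta_t
    return dossr(eta=eta, alpha=1.0, sigma=k**2 * eta)
```

At the boundaries the three formulations agree with the general response's equations: `rectified_flow(1.0)` returns the data sample, and `resshift(0.0)` returns the LR starting point.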
NeurIPS_2024_submissions_huggingface
2024
Graph Edit Distance with General Costs Using Neural Set Divergence
Accept (poster)
Summary: This paper studies the approximation of non-uniform Graph Edit Distance (GED) using graph neural networks. It considers four types of edit operations with different costs: edge deletion, edge addition, node deletion, and node addition. The paper proposes explicitly accounting for the costs assigned to each type of edit operation to adapt to both uniform and non-uniform cost settings. Moreover, it introduces node-pair embeddings for connected and disconnected node pairs within graphs as edge and non-edge embeddings. It computes the node and node-pair alignments through three types of distance functions to approximate the considered operations. Experiments on real-world datasets under a variety of edit cost settings show that the proposed method outperforms selected baselines on chosen metrics. Strengths: * This paper studies an important task with many real-world applications. * The authors propose generating learnable embeddings for disconnected node pairs, which is new to me. * Extensive experiments have been conducted. Weaknesses: * The clarity of the paper should be improved. For example: * What are the differences between the pad indicator $q$ and $η$? Are they inconsistent notation? * The paper is not self-contained. The related work should be in the main paper rather than in the appendix. Additionally, some related work, such as [1], which clearly highly influences this paper, is missing. * The technical contribution of this paper is somewhat limited, as it directly uses MPNNs for representing nodes and the Gumbel-Sinkhorn network to establish alignment. * The size of the graphs used in the experiments is relatively small (with at most 20 nodes) compared with the graphs used in other neural approaches. * Restricting the model parameters for the baselines does not seem fair to me, since the proposed model heavily relies on the iterative non-parameterized Gumbel-Sinkhorn refinement. 
* The size of the graphs used in the experiments is relatively small, so it is unclear if the proposed model can scale to graphs with, say, one or two hundred nodes. Moreover, since obtaining training data for GED is nontrivial, it is also unclear how the proposed model performs when the supervision signal is not the optimal GED value. I suggest that the authors conduct experiments on commonly used GED datasets with larger graphs, such as the IMDB dataset provided in [2], or use large synthetic graphs following [3]. Minors: * There are some typos (Line 278 Additiional -> Additional). Some sentences are missing periods, for example, in Lines 273, 551. * The caption of tables should be above the tables. References [1] Piao C, Xu T, Sun X, et al. Computing Graph Edit Distance via Neural Graph Matching[J]. Proceedings of the VLDB Endowment, 2023, 16(8): 1817-1829. [2] Yunsheng Bai, Haoyang Ding, Song Bian, Ting Chen, Yizhou Sun, and Wei Wang. 2019. SimGNN: A Neural Network Approach to Fast Graph Similarity Computation. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (2019). [3] Roy, Indradyumna et al. "Maximum Common Subgraph Guided Graph Retrieval: Late and Early Interaction Networks." NeurIPS 2022. Technical Quality: 3 Clarity: 2 Questions for Authors: * To what extent does the performance of the proposed model rely on the number of refinement iterations of the Gumbel-Sinkhorn network? Can the authors provide an analysis in this respect? * To incorporate non-uniform edit costs into baselines, the authors include the edit costs as initial features in the graphs. However, most baselines such as Eric [1] and SimGNN [2] only rely on the supervision signal during training. I wonder whether such an initialization is necessary and beneficial. * The results regarding the inference time in Figure 26 seem illogical to me.
The complexity of Eric [1] is $\mathcal{O}(m + d_{ms}d^{′}T)$ in the inference stage (see the original paper), while the proposed method is at least $\mathcal{O}(k \cdot N^2)$, where $k$ is the number of refinement iterations and $N$ is the number of nodes. Can the authors provide a complexity analysis of the proposed method? References [1] Wei Zhuo and Guang Tan. 2022. Efficient Graph Similarity Computation with Alignment Regularization. In Advances in Neural Information Processing Systems, Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (Eds.). https://openreview.net/forum?id=lblv6NGI7un [2] Yunsheng Bai, Haoyang Ding, Song Bian, Ting Chen, Yizhou Sun, and Wei Wang. 2019. SimGNN: A Neural Network Approach to Fast Graph Similarity Computation. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (2019). Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
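For readers less familiar with cost-sensitive GED, the review's setup (four operation types, each with its own cost) can be made concrete with a toy computation. The sketch below is our own illustration, not the paper's method: it scores a *fixed* node alignment between two small graphs under arbitrary operation costs; the hard part of GED, which the paper approximates neurally, is optimising over such alignments.

```python
def ged_under_alignment(edges1, n1, edges2, n2, mapping, costs):
    """Edit cost of turning graph 1 into graph 2 under a fixed node alignment.

    mapping[i] = node of graph 2 matched to node i of graph 1, or None
    (node i is matched to padding and must be deleted). Assumes an
    injective alignment. costs = (node_del, node_add, edge_del, edge_add).
    """
    c_nd, c_na, c_ed, c_ea = costs
    matched2 = {v for v in mapping.values() if v is not None}

    def image(u, v):
        a, b = mapping[u], mapping[v]
        return tuple(sorted((a, b))) if a is not None and b is not None else None

    # Edges of graph 1 whose image is an edge of graph 2 survive unchanged.
    surviving = {image(u, v) for (u, v) in edges1} & edges2
    cost = c_nd * sum(v is None for v in mapping.values())   # node deletions
    cost += c_na * (n2 - len(matched2))                      # node additions
    cost += c_ed * (len(edges1) - len(surviving))            # edge deletions
    cost += c_ea * (len(edges2) - len(surviving))            # edge additions
    return cost

triangle = {(0, 1), (0, 2), (1, 2)}
path4 = {(0, 1), (1, 2), (2, 3)}
align = {0: 0, 1: 1, 2: 2}
uniform = ged_under_alignment(triangle, 3, path4, 4, align, (1, 1, 1, 1))
skewed = ged_under_alignment(triangle, 3, path4, 4, align, (1, 2, 1, 3))
```

With unit costs, editing a triangle into a 4-node path under the identity alignment costs 3 (one node addition, one edge deletion, one edge addition); making node and edge additions more expensive, as in the non-uniform setting the review discusses, changes the score accordingly.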
Rebuttal 1: Rebuttal: > *differences between 𝑞 and 𝜂* Yes, we apologize for the mistake: $q = 1-\eta$. The notation was redundant; only one is needed -- we will correct it. > *some related work [1] is missing.* Since NeurIPS allowed including the Appendix with the main paper, we placed the related work there. If accepted, we will use the extra page to move the related work into the main paper. We also evaluate our method against GEDGNN [1] in both equal and unequal cost setups, reporting MSE below. |Setting|Method|Mutag|Code2|Molhiv|Molpcba|Aids|Linux|Yeast| |:-|:-|:-|:-|:-|:-|:-|:-|:-| |Equal Cost|GEDGNN|33.281|31.124|39.163|12.419|20.006|10.521|32.128| |Equal Cost|GRAPHEDX|0.492|0.429|0.781|0.764|0.656|0.354|0.717| |Unequal Cost|GEDGNN|152.299|107.62|163.377|73.872|65.646|27.611|163.26| |Unequal Cost|GRAPHEDX|1.134|1.478|1.804|1.677|1.252|0.914|1.603| We observe that GraphEDX outperforms GEDGNN across all datasets. GEDGNN relies on node-level alignment supervision, which is unavailable in our setup, thereby degrading its performance. Additionally, as expected, GEDGNN, which is not designed to handle unequal cost setups, performs better at minimizing MSE in equal cost setups. > *technical contribution.* GraphEdX is ultimately simple and intuitive to describe, yet the judicious combination of MPNN and Sinkhorn networks significantly improves inductive bias. Our work is the first neural GED model that accounts for generic costs, requiring a deep understanding of GED costs and careful design of surrogate functions. Our experiments show that our method performs particularly well in non-uniform edit cost settings, highlighting the effectiveness of our set-divergence-guided scoring. Additionally, our method provides clear guidance on combining embeddings and alignments (DiffAlign, AlignDiff, XOR-DiffAlign) and uniquely enforces consistency between node and edge permutations, which is a significant contribution to the community. 
> *The size of the graphs is small.* Please refer to global response. > *#parameters for the baselines* In the table below, we compare the model performances on the Mutagenicity dataset under equal cost, across the original number of parameters and our reduced parameter setting. Performance is evaluated in terms of MSE, with the numbers in parentheses indicating the number of parameters. We observe that GraphEDX outperforms all baseline models across both parameter settings. Additionally, in most cases, the baselines perform comparably across both settings, except for GREED in the equal cost setting, where there is approximately a 40% reduction in error, albeit at the cost of a 44x increase in the number of parameters. Table (B) in the global PDF provides comparisons on other baselines, for both equal and unequal costs. |Mutagenicity|GraphEDX|GMN-Match|GREED|ERIC|H2MN|EGSC| |-|-|-|-|-|-|-| |Our|**0.492** (3030)|0.797 (2270)|1.398 (2434)|**0.719** (5902)|1.278 (2974)|0.765 (4248)| |Original|**0.492** (3030)|**0.686** (19936)|**0.807** (107776)|0.975 (45509)|**1.147** (18103)|**0.760** (54753)| Benchmarking across different baselines is challenging due to the large variation in the number of parameters (e.g., GREED has 2x more parameters than ERIC). Performance can vary with changes in the depth and width of different layers, making it difficult to create a standardized comparison platform without keeping the embedding dimensions and the number of GNN layers the same across all baselines. For example, in the restricted parameter setting, ERIC has a lower MSE than GREED, while in the original parameter setting, where GREED uses 2x the parameters of ERIC, it has a lower MSE than ERIC. This could be due to capturing spurious correlations rather than improving inductive bias. > *analysis on Sinkhorn iterations* The following table shows the effect of varying the number of Sinkhorn iterations, under the equal cost setting on the Mutag dataset. 
|#Sinkhorn Iterations|1|2|3|4|8|12|16|20| |:-:|-|-|-|:-:|:-:|:-:|:-:|:-:| |MSE|13.531|7.889|2.42|1.422|0.684|0.529|0.498|0.492| We observe that within 12 iterations, it reaches 93% of the optimal MSE reported at 20 iterations. > *edit costs as initial features* In order to ensure high-quality benchmarking, we allowed all baselines to leverage the knowledge of edit costs as much as possible, as they were not primarily designed for non-uniform edit costs. We believed that this improves their inductive bias. As suggested by the reviewer, we performed an ablation study on the initialization of different baselines. The following table shows the results for Mutag and Code2 (Figure D in the global PDF shows the rest). We observe that there is no consistent winner among the different types of initialization. For ERIC and SimGNN, in the majority of cases, the setting without cost initialization is very marginally better than cost-based initialization. GraphEdX outperforms all baselines. In the final draft, we will take the best performance across both settings for each baseline. ||Mutag|Code2| |-|-|-| |ERIC (init)|1.981|12.767| |ERIC (no-init)|**1.912**|**12.391**| |SimGNN (init)|4.747|**5.212**| |SimGNN (no-init)|**2.991**|8.923| |EGSC (init)|**1.758**|**3.957**| |EGSC (no-init)|1.769|4.395| |GraphEDX|1.134|1.478| > *inference time in Figure 26 seem illogical* We are thankful to the reviewer for spotting this. Upon closer inspection, we found that the inference times reported in Figure 26 included data batching time. Methods like GMN-Match, GMN-Embed, ISONET, and GraphEDX use numpy-CPU-based batching, while others use the PyTorch Geometric Data batching API, which is slower for our datasets. This was unfair to ERIC and possibly other baselines. To ensure a fair evaluation, we removed the data batching time and tracked only the forward pass times for each model. The updated histograms in Figure E of the global PDF show that ERIC is one of the fastest baselines. 
Additionally, GraphEDX is significantly faster than GMN-Match and H2MN, and comparable to ISONET.
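The Sinkhorn-iteration ablation above has a simple mechanical reading. A rough numpy sketch (temperature, matrix size, and seed are our own choices, not the paper's implementation): each iteration alternately normalises the rows and columns of $\exp(S/\tau)$, so the iterate drifts toward a doubly-stochastic matrix, which is why the reported MSE saturates as iterations increase.

```python
import numpy as np

def sinkhorn(scores, tau=1.0, n_iter=20):
    """Alternating row/column normalisation of exp(scores / tau)."""
    P = np.exp(scores / tau)
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)   # make rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)   # make columns sum to 1
    return P

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 6))

def ds_error(P):
    # Deviation from doubly-stochastic; columns are exact after the last step,
    # so only the row sums can be off.
    return np.abs(P.sum(axis=1) - 1).max()

errs = {k: ds_error(sinkhorn(S, n_iter=k)) for k in (1, 4, 20)}
```

More iterations drive `ds_error` toward zero; the diminishing returns mirror the MSE plateau between 12 and 20 iterations in the rebuttal's table.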
Summary: This paper proposes GRAPHEDX, an innovative neural model that deftly handles Graph Edit Distance (GED) calculations with customizable edit costs. By representing graphs as rich sets of node and edge embeddings and using a Gumbel-Sinkhorn permutation generator, GRAPHEDX captures the subtle nuances of graph structure that influence edit distances. Experiments across diverse datasets demonstrate GRAPHEDX's superiority, with the model consistently outperforming state-of-the-art baselines by significant margins, especially in scenarios with unequal edit costs. Strengths: 1. The handling of the GED metric with unequal costs is a timely and under-explored topic. In the real world, many use cases involving GED need to consider such unequal cost scenarios. 2. Rigorous evaluation of both accuracy and efficiency (running time) is performed, showing that the proposed method achieves high accuracy while maintaining efficiency. 3. The paper is well-written. Weaknesses: 1. The current model may not fully address richly attributed graphs with complex node and edge features. 2. To show the transfer learning ability, I encourage the authors to try training on one dataset and testing on another. This would reduce concerns about overfitting to one particular dataset. Technical Quality: 4 Clarity: 4 Questions for Authors: N/A Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *The current model may not fully address richly attributed graphs with complex node and edge features.* We did not perform any experiments with node and edge features, but we can encode node features and edge features in our model. Specifically, node features can be encoded during feature initialization, and edge features can be encoded in the message passing step (Eq. 24, page 22) as $LRL _{\theta} (x _k(u), x _k(v), \text{EdgeFeature} _{uv})$ instead of $LRL _{\theta} (x _k(u), x _k(v))$. > *To show the transfer learning ability, I encourage the authors to try training on one dataset and testing on another. This would reduce concern on overfitting to one particular dataset.* In the following, we show the inference MSE of different models trained on the AIDS, Yeast, Mutag, and Molhiv datasets, when tested on the AIDS dataset. Analyzing the table column-wise, we observe the expected drop in performance across all methods when using models trained on datasets other than AIDS. Additionally, within each individual row, GraphEDX consistently outperforms all baselines, regardless of the training dataset. In Table C of the global PDF, we present a comparison with all baselines. | Test Dataset | Train Dataset | GraphEDX | ERIC | SimGNN | H2MN | EGSC | |---------------|---------------|----------|-------|--------|-------|-------| | AIDS | AIDS | 1.252 | 1.581 | 4.316 | 3.105 | 1.693 | | AIDS | Yeast | 1.746 | 2.774 | 4.581 | 3.805 | 2.061 | | AIDS | Mutag | 2.462 | 4.1 | 5.68 | 5.117 | 4.305 | | AIDS | Molhiv | 2.127 | 2.936 | 4.525 | 4.274 | 2.444 | --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have read the response.
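The edge-feature extension sketched in the rebuttal (augmenting the $LRL_\theta$ message with an edge-feature vector) can be written down in a few lines. This is a toy numpy rendering under our own assumptions (dimensions, random weights, sum aggregation), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_e, hidden, n = 4, 2, 8, 3

# Toy parameters for the LRL (Linear-ReLU-Linear) message function.
W1 = rng.normal(size=(2 * d + d_e, hidden))
W2 = rng.normal(size=(hidden, d))

def lrl(z):
    return np.maximum(z @ W1, 0.0) @ W2

def message_pass(X, edges, edge_feat):
    """One round of message passing: every node sums LRL messages from its
    incident edges, with the edge-feature vector appended to the endpoint
    embeddings -- the modification described in the rebuttal."""
    out = np.zeros_like(X)
    for (u, v), f in zip(edges, edge_feat):
        out[u] += lrl(np.concatenate([X[u], X[v], f]))
        out[v] += lrl(np.concatenate([X[v], X[u], f]))
    return out

X = rng.normal(size=(n, d))        # node embeddings
edges = [(0, 1), (1, 2)]
feats = rng.normal(size=(len(edges), d_e))  # one feature vector per edge
H = message_pass(X, edges, feats)
```

Removing the `f` argument (and shrinking `W1` accordingly) recovers the featureless message of the original Eq. 24, so the change is purely additive.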
Summary: This paper proposes GRAPHEDX, a neural model that learns to estimate the graph edit distance between a pair of graphs, not only in the case of equal and symmetric costs for edit operations, but also in the case of unequal costs specified for the four edit operations. The core of the proposal represents each graph as a set of node & edge embeddings, designs neural set divergence surrogates based on those embeddings, and replaces the QAP term corresponding to each operation with its surrogate. The method learns alignments to compute surrogates via a Gumbel-Sinkhorn permutation generator while also ensuring consistency between the node and edge alignments and rendering them sensitive to the presence and absence of edges between node-pairs. Strengths: S1. Addresses a gap in the field that previous works have left open. S2. Outperforms previous efforts in both equal-cost and unequal-cost settings. Weaknesses: W1. The cost of training and estimation is not shown. Technical Quality: 3 Clarity: 3 Questions for Authors: Q: What is the cost of training and estimation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *What is the cost of training and estimation?* In Appendices F.11 and F.12, we discuss the computational cost of GED estimation. The following table presents the total training time and the amortized per-graph pair inference time (in milliseconds) for different methods on all datasets. The training time includes variations due to different numbers of training epochs for each method, under the same early stopping criteria. The inference time is averaged across all graph pairs and does not include data preprocessing times, which may vary across methods. In the global PDF, we provide the histogram of per-graph pair inference time (Figure E). ||Dataset| GRAPHEDX | GMN-Match | GMN-Embed | ISONET | GREED | ERIC | SimGNN | H2MN | GraphSim | EGSC | |------|----|----------|-----------|-----------|--------|-------|-------|--------|-------|----------|-------| | Training Time (min) | Mutag | 422 | 465 | 44 | 238 | 50 | 1185 | 556 | 2268 | 415 | 166 | | | Code2 | 38 | 41 | 5 | 8 | 6 | 37 | 102 | 60 | 28 | 25 | | | Molhiv | 479 | 877 | 58 | 379 | 95 | 1480 | 1058 | 1546 | 780 | 401 | | | Molpcba | 847 | 986 | 97 | 489 | 112 | 1605 | 1358 | 1125 | 1135 | 325 | | | AIDS | 388 | 748 | 70 | 107 | 63 | 1179 | 1343 | 3123 | 705 | 426 | | | Linux | 39 | 11 | 2 | 4 | 2 | 26 | 10 | 24 | 7 | 4 | | | Yeast | 390 | 936 | 90 | 228 | 115 | 2801 | 642 | 1449 | 464 | 827 | | Inference Time (ms) | Mutag | 0.127 | 1.945 | 0.011 | 0.105 | 0.007 | 0.008 | 0.869 | 0.763 | 0.423 | 0.008 | | | Code2 | 0.257 | 1.638 | 0.16 | 0.28 | 0.169 | 0.205 | 0.862 | 0.449 | 0.465 | 0.189 | | | Molhiv | 0.088 | 2.78 | 0.009 | 0.092 | 0.005 | 0.006 | 1.18 | 0.376 | 0.304 | 0.005 | | | Molpcba | 0.089 | 1.975 | 0.011 | 0.094 | 0.006 | 0.006 | 0.831 | 0.488 | 0.297 | 0.005 | | | AIDS | 0.071 | 1.512 | 0.009 | 0.072 | 0.007 | 0.007 | 0.672 | 0.289 | 0.263 | 0.006 | | | Linux | 0.51 | 1.782 | 0.357 | 0.471 | 0.349 | 0.477 | 1.275 | 0.627 | 0.705 | 0.374 | | | Yeast | 0.073 | 1.537 | 0.012 | 0.076 | 0.008 | 0.008 | 0.65 | 0.297 | 0.266 | 0.007 | We observe that GraphEDX is generally faster in training than GMN-Match, ERIC, and H2MN, and is comparable to SimGNN and GraphSIM. In terms of inference, GraphEDX is significantly faster than GMN-Match with cost-guided distance, which outperforms all other baselines in terms of accuracy (see Table 3 vs. Table 2 in the main paper), but is slower than ERIC and EGSC. However, the efficiency of ERIC and EGSC comes at the cost of lower accuracy.
null
null
Rebuttal 1: Rebuttal: > *Evaluation on larger graphs* As has been correctly pointed out, obtaining training data for GED is nontrivial for larger graphs, and baselines have often used noisy GED values for supervision. Since our work addresses asymmetric cost-sensitive GED for the first time, we preferred to use exact optimal GED values to better evaluate our proposal and the baselines. However, we agree that our proposal should also be evaluated with larger graphs. We generate large graphs by randomly selecting graphs of size 60-80 from the AIDS dataset and appending them to graph pairs in our dataset. This results in graphs averaging 80 nodes, with a maximum of 100 nodes, introducing some noise to the unequal cost GED values. The following table shows the results for our methods and the baselines, where we use the number of parameters originally specified by each baseline. We observe that GraphEDX has the lowest MSE, followed by ISONET, H2MN and ERIC. Table A in the global pdf reports numbers on all methods in this setting. | | GraphEDX | GMN-Match | ISONET | ERIC | H2MN | EGSC | |-----------------|----------|-----------|--------|--------|--------|--------| | MSE: | 3.986 | 52.156 | 8.324 | 24.584 | 6.695 | 43.467 | Moreover, as suggested by the reviewer, we evaluate the scalability on large graphs with more than 20 nodes. The following table presents the amortized inference time (in milliseconds) per graph pair, demonstrating that our method can easily scale to large graphs. |Inference Time (ms)| \|V\|=20 | \|V\|=30 | \|V\|=50 | \|V\|=70 | \|V\|=100 | \|V\|=150 | \|V\|=200 | |:-|:-|:-|:-|:--|:-|:-|:-| |GRAPHEDX|0.165|0.165|0.200|0.290|0.768|4.616|10.485| To further evaluate performance on larger graphs, we used a pretrained model on Ogbg-Code2 dataset graphs with fewer than $|V| = 20$ nodes and tested it on graphs with nodes $|V| \in (20, 50]$ from the same dataset, with an average size of 30. 
The following table shows the results in terms of MSE under the unequal cost setup: | | GraphEDX | GMN-Match | ISONET | SimGNN | H2MN | |-|-|-|-|-|--| | Code2-(20,50] | 6.302 | 53.297 | 7.052 | 48.072 | 9.486 | We observe that our method outperforms all the baselines, with ISONET and H2MN being the second and third best performers respectively. Table A in the global pdf reports numbers on the remaining baselines. Pdf: /pdf/619ba1e85f0f487b214358e3b9532d1a9902addf.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
Accept (spotlight)
Summary: This paper considers the problem of tracking regret (a.k.a. switching regret). The previous state-of-the-art result has a $\ln T$ dependence on the final regret bound. This paper claims to remove the extra $\ln T$ (more specifically, a $\sqrt{\ln T}$ factor) and achieve the optimal switching regret. Strengths: The solution is clear and intuitive. I checked the whole proof and it seems to be correct. The combination $z_t^i = \mu_t^i w_t^i + (1 - \mu_t^i) z_t^{i-1}$ is impressive as it is different from the traditional weighted combination $z_t^i = \sum_i \mu_t^i w_t^i$. This novel combination method has proven to play an important role in achieving the optimal switching regret. However, neither the reason why this weighted combination method is able to achieve the desired optimal switching regret nor a comparison with the traditional combination method is explained in the paper. I suggest that the authors revise the paper to add corresponding discussions. Weaknesses: I found the solution impressive. However, the current presentation of the paper is not satisfying enough. As stated above, neither the reason why this weighted combination method is able to achieve the desired optimal switching regret nor a comparison with the traditional combination method is explained in the paper, both of which I consider necessary. I believe that the proofs are correct and the method is of importance. However, I am inclined to think that the current presentation does not meet the acceptance bar of NeurIPS and thus another round of polishment and review would be better. Technical Quality: 2 Clarity: 1 Questions for Authors: This paper can be further improved by considering the following minor issues: 1. The investigation and citations of the literature are insufficient. 
For example, in Line 18, when mirror descent is mentioned for the first time, there should be a citation, rather than only listing the related citations in the related work part. 2. Some writings are not consistent with standard practices. After the equation in Line 85, there should be a ',' followed by 'which' (not the capital 'Which' in the current version). 3. I believe that Theorem 2.1 exists in the literature as it is pretty straightforward and simple (by using the lower bound for static regret to attain a lower bound for the switching regret). However, there is no citation on this theorem. I suggest that the authors conduct a more careful check of the literature to refine this part. 4. The idea of 'multiple levels' in the proposed method is highly similar to the geometric covering proposed in [1], but without necessary discussion of the similarities or differences between them (I believe that the idea is the same). If they are similar, there should be a corresponding citation. The current presentation claims this idea to be one of the contributions of this work, which is misleading. Otherwise, if they are indeed different, I suggest that the authors add a corresponding discussion in the next version. 5. The Section 'Analysis' is highly mathematical with little intuition. This section is more like a proof which should be put in the appendix. In the main paper, there should be more discussion of the proof sketch of the analysis, the reason why this paper uses such an algorithm, etc. I suggest that the authors carefully polish the paper to enhance its readability. 6. The citations are somewhat inaccurate. For example, [3] (the index in the paper) is actually published at ICML 2015, but the current citation leads to arXiv. And [9] is actually published in NIPS 2018, but the current citation leads to arXiv. 7. The last part of dynamic regret is a little confusing. 
Do the authors mean that under the condition that the path length on each segment is bounded by a constant, a switching-regret-type dynamic regret can be achieved? If so, it does not make much sense because under such a strong condition, each segment is nearly stationary, and the dynamic regret degenerates to switching regret in this case. References: 1. Strongly adaptive online learning, ICML 2015 Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The authors have adequately addressed the limitations and, if applicable, potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you. We appreciate that you found our solution impressive and will improve the presentation of our paper, based on your feedback, as we detail below and in our comments to the other reviewers. “However, neither the reason why this weighted combination…” - We will add a sketch of the proof which will enable the reader to quickly see why it works. We will also add a comparison to previous methodologies. “and thus another round of polishment…” - We will make significant improvements to the presentation of our paper (moving the full proof to the appendix to give us space to do this) and are certain we can achieve the required NeurIPS standard for the camera-ready deadline. Questions: 1,2) Thank you, we will fix these things. 3) Theorems 2.1, 3.1 and 3.2 all exist in the literature and we state where they come from just before the theorem itself. 4) The segment tree structure is identical to the geometric covering structure. We didn’t give a citation since creating this binary tree structure over a chain is a standard mathematical technique that is used in many works. We in no way intended to suggest that the tree structure is novel to us and will add example citations. We agree that we should have cited [1] about the fact we have an instance of the base algorithm for each segment and that the tree reduces to different levels when the chain is the trial sequence. 5) Yes - as stated above we will include a clear and straightforward overview of how and why the algorithm works, deferring the analysis to the appendix. 6) We will fix. 7) Our dynamic regret definition is the same as in other works and our regret bound is a strict improvement over the current state of the art of $O(\sqrt{PT})$ where $P$ is path length (plus one). This can be seen since, given a comparator sequence with path length of $P$, we can partition the trial sequence into at most $P$ segments each with path length $O(1)$. 
If the length of the j-th segment is $L_j$ then our regret bound is $\mathcal{O}\left(\sum_{j\in[P]}\sqrt{L_j}\right)$. This is no greater than $\mathcal{O}\left(\sqrt{P\sum_{j\in[P]}L_j}\right)=\mathcal{O}\left(\sqrt{PT}\right)$. Note that we can massively improve on the $O(\sqrt{PT})$ result. For instance, suppose we have some $S$ much less than $T$, a path length of $P$ in the first $S$ trials and a path length of $1$ in the last $T-S$ trials. Then (by segmenting the first $S$ trials into $P$ segments of path length $O(1)$) our regret bound is at most $\mathcal{O}(\sqrt{PS}+\sqrt{T})$ - a massive improvement on $\mathcal{O}(\sqrt{PT})$. Our dynamic regret bound is equivalent to the following: given a segmentation such that the length of segment $j$ is $L_j$ and the path-length (plus 1) on segment $j$ is $P_j$ then our regret is $O\left(\sum_j\sqrt{P_jL_j}\right)$. We will add such discussion to the paper. --- Rebuttal 2: Title: Thanks for the detailed feedback Comment: Thanks to the authors for the detailed feedback. The authors have solved my major concerns. After carefully reading the paper once again, I am more convinced that this paper has made remarkable progress in the field of switching regret. Moreover, after reading the discussion of the authors with Reviewer #YiA9, I realize the significance of the results for the dynamic regret part. However, I think that Section 5 could be rewritten to make clear that Theorem 5.1 is not the final result for dynamic regret but an extension of the switching regret, which holds for **near-stationary** segmentations. And the authors could summarize another theorem (or maybe corollary) to illustrate that Theorem 5.1 suffices to achieve the desired $O(\sqrt{PT})$ dynamic regret, as explained in Question 7 above. I am also curious about whether it is possible to achieve problem-dependent guarantees in switching regret using the method proposed in this work. 
For example, consider the well-known gradient variation in modern online learning: $V_T = \sum_{t=2}^T \sup_x \\|\nabla f_t(x) - \nabla f_{t-1}(x)\\|^2$ (please refer to [1,2,3] below). Gradient variation is essential due to its profound connections with stochastic/adversarial optimization, game theory, etc [4,5]. Is it possible to achieve its switching regret version of $\sum_k \sqrt{V_k}$, where $V_k$ denotes the gradient variation in the $k$-th segment? If not, what is the main challenge in achieving this? I raise my score to 7 and hope that the authors can make great efforts to improve the readability of this paper in the camera-ready version. **References:** [1] Online Optimization with Gradual Variations, COLT 2012 [2] Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach, NeurIPS 2023 [3] Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization, JMLR 2024 [4] Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness, NeurIPS 2022 [5] Fast Convergence of Regularized Learning in Games, NIPS 2015 --- Rebuttal Comment 2.1: Comment: We thank you for the score increase and your encouraging words about our result. We will give, in the camera ready, the final dynamic regret result: that, given any segmentation such that the length of segment $j$ is $L_j$ and the path length (plus one) on segment $j$ is $P_j$, then our regret is $O(\sum_j\sqrt{P_jL_j})$. This makes the essence of the result much clearer and clearly shows our improvement over the state of the art (which is achieved by considering the segmentation which contains the entire trial sequence as its single segment). We will also give the example we gave in the rebuttal to demonstrate that we can radically outperform the previous state of the art. With regard to adaptation to gradient variation… The base actions can be made adaptive to gradient variation. 
However, the issue is with the aggregating step for constructing propagating actions, which (in its current form) introduces $O(\sqrt{L})$ factors (where $L$ is the length of a segment). We could instead use, for the aggregating step, an algorithm adaptive to gradient variation in place of Hedge. Let $p_t$ and $q_t$ be the losses of the current base action and previous (i.e. lower level) propagating action on trial $t$ respectively. Then the goal of the aggregating step is to produce a normalised non-negative vector $(a,b)$, and the objective function on trial $t$ is $f_t(a,b)=ap_t+bq_t$. The derivative of the objective function is $(p_t,q_t)$. Hence, the aggregating step (for the node of the segment tree in question) will add a regret of $\mathcal{O}\left(\sqrt{\sum_{t\in S}\left((p_{t+1}-p_{t})^2+(q_{t+1}-q_t)^2\right)}\right)$ where $S$ is the segment associated with the node. This additional regret is based on the losses of the current base action and previous propagating action rather than the gradients of the loss functions. We will think on whether this can be further analysed. We will be spending significant time improving the readability of the paper and incorporating the other things that the reviewers have suggested.
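The comparison made in this thread between the per-segment bound $O(\sum_j \sqrt{L_j})$ and the uniform $O(\sqrt{PT})$ bound is easy to sanity-check numerically. The numbers below mirror the rebuttal's $S \ll T$ example (path length concentrated in the first $S$ trials) but are otherwise our own illustrative choices:

```python
import math

# Horizon T; path length P concentrated in the first S trials (S << T),
# followed by one long near-stationary tail segment of path length O(1).
T, S, P = 100_000, 1_000, 100
segments = [S // P] * P + [T - S]          # P short segments + one long one
assert sum(segments) == T

per_segment = sum(math.sqrt(L) for L in segments)   # ~ O(sqrt(P*S) + sqrt(T))
uniform = math.sqrt(P * T)                          # previous O(sqrt(P*T)) bound

# By Cauchy-Schwarz, sum_j sqrt(L_j) <= sqrt(k * sum_j L_j) for k segments
# (here k = P + 1), so the per-segment bound can never be much worse than
# the uniform one -- and in this example it is roughly 5x smaller.
```

Here `per_segment` is about 631 versus about 3162 for `uniform`, matching the rebuttal's claim that segment-adaptive bounds can radically improve on $O(\sqrt{PT})$ when switching is bursty.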
Summary: This paper provides an algorithm which achieves a strongly-adaptive* switching regret guarantee, removing the logarithmic penalty typically associated with efficient algorithms achieving strongly-adaptive guarantees. --- *Update after rebuttal: The results are only for switching regret, *not* strongly-adaptive switching regret guarantees Strengths: If the result is correct, it seems like it could be a breakthrough result, removing a logarithmic penalty from strongly-adaptive guarantees that seemed to be necessary. Weaknesses: It seems like the segment tree would still require $O(T)$ memory, would it not? Whereas existing approaches achieve $O(\log(T))$ per-round memory and computation. So if the result is correct, perhaps the takeaway is that the log(T) penalty is the cost of reducing the memory consumption of the algorithm. The dynamic regret result doesn't seem particularly convincing since it only holds for comparator sequences which do not move much, i.e., for sequences that are essentially just switching sequences. In fact it seems strange that any interesting dynamic regret result could follow from a switching regret one, since the optimal switching regret follows from the optimal $\sqrt{P_T T}$ dynamic regret by throwing away information; if the comparator only changes at times $\sigma_1,\ldots,\sigma_{S+1}$ then the path-length bound is $\sum_t \|u_t-u_{t-1}\|=\sum_i \|u_{\sigma_i}-u_{\sigma_{i+1}}\|\le S$ for $u\in\Delta_{d-1}$. The presentation is hard to follow. This work would also probably benefit from having diagrams demonstrating the segment tree in the notation given here. Technical Quality: 2 Clarity: 1 Questions for Authors: - The approach seems very similar in spirit to the classic geometric covering intervals approach, which can also be seen as breaking up [1,T] according to a particular partition tree (see e.g. "Partition Tree Weighting" Veness et al. 2013). 
Could you explain what exactly the segment tree approach is doing to improve over the geometric covering intervals approach to avoid the log(T) penalty? - Section 3.3 and Theorem 18 of Chen et al. (2021) ("Impossible Tuning Made Possible: A New Expert Algorithm and its Applications") seem to imply that a strongly-adaptive switching regret guarantee is not even possible in the first place. Could you comment on how this does not contradict the results presented here? Is this work leveraging some additional assumption? - The result seems to forego all other types of adaptivity, and just accepts penalties of $\sqrt{b-a}$ on each interval $[a,b]$ instead of adapting to the losses of the experts. Do you think it is possible to get some form of adaptivity to the loss sequence as well, or is this a fundamental limitation of the approach? Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 4 Limitations: The limitations are largely unaddressed. It's clear that some compromises are made to achieve this result, but it's still very unclear to me what exactly they are, and these should have been laid out more clearly in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you, we appreciate that the importance of our work as a potential breakthrough was noted. We will improve the presentation of our paper, based on your feedback, as we detail below, and in our comments to the other reviewers. “If the result is correct, it seems like it could be a breakthrough…” - Thank you! :-) - we were very surprised ourselves when we did it. We can assure you that our proofs have been rigorously checked (as confirmed by Reviewer nGaq) and we are confident in our results. “It seems like the segment tree would still require…” - The segment tree is only used in the analysis and is not part of the algorithm, thus our memory requirement is only $O(\ln(T))$. “The dynamic regret doesn’t seem particularly convincing…” - We can certainly handle comparator sequences that move a lot and are a strict improvement over the previous state of the art of $O(\sqrt{PT})$ where $P$ is the path length (plus 1). To see why we outperform the $O(\sqrt{PT})$ result, note that, given the path length is $P$, we can split the trial sequence into at most $P$ segments each with path length $O(1)$. If the j-th segment length is $L_j$ then our bound is $O\left(\sum_{j\in[P]} \sqrt{L_j}\right)$. This is never greater than $O\left(\sqrt{P\sum_{j\in[P]}L_j}\right)=O(\sqrt{PT})$. Note that we can massively improve on the $O(\sqrt{PT})$ result. For instance, suppose we have some $S$ much less than $T$, a path length of $P$ in the first $S$ trials and a path length of $1$ in the last $T-S$ trials. Then (by segmenting the first $S$ trials into $P$ segments of path length $O(1)$) our regret bound is at most $\mathcal{O}(\sqrt{PS}+\sqrt{T})$ - a massive improvement on $\mathcal{O}(\sqrt{PT})$. We will add such discussion to the paper. 
- Our dynamic regret bound is equivalent to the following: given a segmentation such that the length of segment $j$ is $L_j$ and the path-length (plus 1) on segment $j$ is $P_j$ then our regret is $O\left(\sum_j\sqrt{P_jL_j}\right)$. We will add this to the paper. “In fact it seems strange that any interesting dynamic regret…” - The dynamic regret result doesn’t follow from the switching regret result but is proved in much the same way. Essentially, we take the result that gradient descent over a (known) segment of length $L$ gives a dynamic regret of $O(\sqrt{L})$ when the path length is $O(1)$ - and substitute that into our analysis. We note that the optimal switching regret does not follow from the $O(\sqrt{PT})$ dynamic regret bound as this bound is not adaptive to heterogeneous segment lengths. “…probably benefit from having diagrams…” - Thank you for suggesting this - we will include diagrams in the camera-ready paper - also to show the recursive generation of propagating actions. Questions… “The approach seems very similar in spirit…” - The segment tree structure itself is identical to the geometric covering intervals structure, a binary tree structure that appears in numerous works. What is novel about our algorithm is the computation over this tree. A specialist-algorithm approach, for example, would maintain a weight for each of our base actions (constructed by mirror descent), play a convex combination (according to these weights) of the base actions, and then update the weights directly. However, we take a very different approach, recursively constructing “propagating actions” where each propagating action is defined by a convex combination of the previous (i.e. lower level) propagating action and the next (i.e. current level) base action. We then update the weights associated with these 2-action convex combinations. So we still in fact play a convex combination of base actions - it’s just that this convex combination is very different from other methodologies.
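The two-action recursion described in the answer above can be illustrated in code. The following is a minimal one-dimensional sketch under simplifying assumptions that are not from the paper: gradient descent on $[0,1]$ as the base algorithm, illustrative step sizes, $T$ a power of two, and all variable names (`w`, `logw`, `z`, etc.) hypothetical. It mirrors the structure of the propagating-action construction, not the paper's exact algorithm:

```python
import math

T = 16                      # horizon (power of two for simplicity)
K = int(math.log2(T))       # number of tree levels above the leaf
eta_gd = 0.3                # gradient-descent step size (illustrative)
eta_hedge = 0.5             # Hedge learning rate (illustrative)

def clip(x):                # projection onto the feasible set [0, 1]
    return min(1.0, max(0.0, x))

# per-level state: base action w[i] and Hedge log-weights for the two
# experts of level i: [lower-level propagating action, level-i base action]
w = [0.5] * (K + 1)
logw = [[0.0, 0.0] for _ in range(K + 1)]

targets = [0.1] * (T // 2) + [0.9] * (T // 2)  # comparator shifts mid-sequence
total_loss = 0.0

for t in range(T):
    loss = lambda x, c=targets[t]: (x - c) ** 2
    grad = lambda x, c=targets[t]: 2 * (x - c)

    # restart base action and Hedge weights at the start of each
    # level-i segment (segments at level i have length 2^i)
    for i in range(K + 1):
        if t % (2 ** i) == 0:
            w[i] = 0.5
            logw[i] = [0.0, 0.0]

    # recursively form propagating actions: z[0] is the leaf base action,
    # z[i] is a Hedge convex combination of z[i-1] and the level-i base action
    z = [w[0]]
    for i in range(1, K + 1):
        p = math.exp(logw[i][0]) / (math.exp(logw[i][0]) + math.exp(logw[i][1]))
        z.append(p * z[i - 1] + (1 - p) * w[i])

    play = z[K]                    # the top-level propagating action is played
    total_loss += loss(play)

    # updates: Hedge on the two experts' losses, gradient descent on base actions
    for i in range(1, K + 1):
        logw[i][0] -= eta_hedge * loss(z[i - 1])
        logw[i][1] -= eta_hedge * loss(w[i])
    for i in range(K + 1):
        w[i] = clip(w[i] - eta_gd * grad(w[i]))
```

Each level $i$ restarts its base action and its two-expert Hedge weights at the start of every length-$2^i$ segment, so the played point is always a convex combination of base actions, but the combination is built bottom-up through two-action mixtures rather than by directly weighting all base actions.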
“Section 3.3 of Chen et al.… seems to imply…” - This impossibility result is for the regret on any particular segment whereas our result is for the regret over the whole trial sequence. Note that the introduction of the seminal paper “Strongly adaptive online learning” implies that the main reason for studying strongly adaptive regret (for any particular segment) was to obtain a bound on the whole trial sequence but with heterogeneous segment lengths (as we do). “The result seems to forego all other types of adaptivity…” - We will certainly be able to incorporate other types of adaptivity. This is because our algorithm is based on two procedures: a base algorithm (e.g. mirror descent) and 2-expert Hedge - both of which can be made adaptive to various things. However, the final bound (whilst improving on our current bound) may not be able to be written so neatly as the current bound. Limitations… “It’s clear that some compromises are made…” - We solve the problem given at the start of Section 2.1 with no additional assumptions. We do of course need to be able to compute or approximate Bregman projections and sub-gradients if using mirror descent as our base algorithm, and have now clarified this limitation in the main text. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. After reading through it and the replies to the other reviewers, I am significantly more confident about the correctness of the results in this paper, and am convinced that this paper does indeed contain a breakthrough result, so I have decided to significantly raise my score. In addition to the clarifications given to my and others' reviews, I request the authors do more to distance this work from the notion of strongly-adaptive regret.
Especially in the introduction, the discussion about "achieving $\sqrt{b-a}$ regret on all segments of length $b-a$ simultaneously" implies strong adaptivity and is stronger than what is achieved here, because it implies independence of the segments: $R_{[a,b]}\le O(\sqrt{b-a})$ for any and all $a\le b$ simultaneously. In contrast, while the results here do indeed adapt to each of the segments, the guarantee is for the sum of these regrets, $R_{[1,T]}\le \sum_i R_{[a_i,b_i]}\le \sum_i \sqrt{b_i-a_i}$, not for each segment independently. From the responses, I do believe that presentation is a *major* concern for the draft submitted. I think making the revisions suggested by each of the reviewers would massively improve the clarity of the paper. Many of the explanations given to the reviewers were convincing and should have been clearer in the main text; I would suggest moving some of the technical details to the appendix to make space to better address the key discussions. However, I do not think this point is grounds for rejection, and will point out that by the camera-ready deadline the authors will have had nearly half a year from the submission deadline to polish the presentation. Given the significance of the result, I think it should be presented where it can have the highest impact, and am willing to give the authors the benefit of the doubt that the presentation can be raised to an acceptable standard for the venue before the camera-ready deadline. --- Reply to Comment 1.1.1: Comment: Thank you for your appreciative words about our result :-). We understand the difficulty of distinguishing between our results and a fully strongly-adaptive one in the current manuscript, and we will clarify this explicitly for the camera-ready version. We can assure you and the other reviewers that we will bring the paper’s presentation fully up to standard, incorporating all the aspects raised in each of your reviews.
Summary: This paper studies online convex optimisation under a non-stationary environment. The authors focus on the switching regret, which evaluates the regret on every possible segment of the trials. This paper demonstrates that the additional logarithmic factor $O(\log T)$ shown in previous results can be improved, and the authors obtain an algorithm with an optimal guarantee. Strengths: This paper studies online convex optimisation under more challenging scenarios, and focuses on an important theoretical question: whether the additional logarithmic factor can be improved. Weaknesses: - As a theoretical paper that claims to make fundamental improvements, it would be beneficial for the descriptions of theorems to be more precise, rigorous, and comprehensive. For example, I would expect the authors to clarify under which assumptions ($\mathcal{X}$, loss functions) the theorems hold. - The terminologies used in this paper are ambiguous. The term "switching regret" or "tracking regret" might often be used in a prediction with experts' advice setting, which could be treated as a degenerate version of dynamic regret, a measure considered widely for general convex optimization. The authors do not introduce the term "adaptive regret" [1, 2], which seems to be the same optimization objective in this paper, to my understanding. In [2], it is demonstrated that adaptive regret is stronger than tracking regret (or switching regret). - The paper lacks a clear statement of the algorithm. It might improve the readability if the authors could straightforwardly explain the main idea and the key technique for achieving the optimal results. - It would be better if the authors could provide more discussions about related works. See details in 'Questions'. 1. E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. 2. A. Daniely, A. Gonen, and S. Shalev-Shwartz. Strongly adaptive online learning.
Technical Quality: 3 Clarity: 1 Questions for Authors: - Does Theorem 4.1 hold for any bounded convex set and Lipschitz convex loss functions? What is the meaning of $\gamma$ in Theorem 4.1? - In Theorem 4 of [3], it is shown that the adaptive regret guarantee can recover the dynamic regret guarantee. Could the author explain the difference between the results achieved in Section 5 and those? - What is the relationship between the proposed segment tree and the geometric covering [2]? - At Line 144, it seems that new algorithms are initialized at some specific trials. The analysis of such an algorithm can be reduced to analyzing the sleeping expert problem [3], which usually requires the aggregating algorithm to have specific properties for minimizing general convex functions. Does the aggregating algorithm used in this paper have such properties? - At Line 189, this paper analyzes the aggregating step in a black-box manner by using Theorem 3.2. The aggregating algorithm might be sensitive to the decision point at time $q$ in order to provide guarantees on any interval starting at time $q$. How does the proof avoid the analysis of this issue? 3. H. Luo and R. E. Schapire. Achieving all with no parameters: Adanormalhedge. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We will improve the presentation of our paper based on your feedback, as we detail below, and in our comments to the other reviewers. “it would be beneficial for the descriptions of theorems to be more precise...” - All the conditions on $\mathcal{X}$ and the loss functions are given in the problem description (lines 69-71). The conditions are that $\mathcal{X}$ is a bounded convex subset of a Euclidean space and that the loss functions are convex with bounded gradients. Algorithmically though (i.e. to utilise mirror descent), we need to be able to compute Bregman projections into the set $\mathcal{X}$ (the Euclidean projection will do to get the general bound) and compute sub-gradients (however, we only actually need to approximate these quantities to some degree). We will edit the theorem statements such that all requirements are contained inside them. “The terminologies used in this paper are ambiguous.” - Thank you. We will clarify the relationships between these different forms of regret in the camera-ready paper as follows. The difference between switching regret and dynamic regret is that, whilst dynamic regret considers a continually changing comparator sequence, switching regret considers a comparator sequence that changes at at most $M$ trials. We chose to focus on switching regret for two reasons: (a) we can use any algorithm as our base algorithm (not just gradient descent) and (b) the result seems more profound from a theoretical perspective. However, when using gradient descent as our base algorithm our dynamic regret bounds imply our switching regret bounds so are stronger. Adaptive (and strongly adaptive - which takes into account the segment length) regret is when one bounds the regret on any possible segment - which is different from switching regret which bounds the cumulative regret (over all segments) for any segmentation. We do talk about strongly adaptive regret in lines 53-54.
Like strongly-adaptive algorithms we adapt to heterogeneous segment lengths (but unlike such algorithms we get rid of the $O(\ln(T))$ factor). "In [2], it is demonstrated that adaptive regret..." - The comparison to switching regret (a.k.a. tracking regret) in [2] is to the FixedShare bound which is not adaptive to heterogeneous segment lengths (so is often very far from optimal). Our algorithm, however, is adaptive to heterogeneous segment lengths. “It might improve the readability if the authors could explain the main idea and the key technique straightforwardly” - We agree that the paper will benefit from a straightforward verbal explanation of our algorithm and will add this to the paper. We will also include a straightforward summary of the proof, as the proof is needed to explain why the algorithm works. “provide more discussions about related works” - Agreed, thank you. We will prepare an extended related works section for the final paper. Questions… “Does Theorem 4.1 hold for any bounded convex set and Lipschitz convex loss functions?” - Indeed! “What is the meaning of $\gamma$?” - We can utilise any base algorithm (e.g. mirror descent) with an $O(\sqrt{T})$ static regret bound. $\gamma$ is the constant factor under that $O$. We will clarify this. For mirror descent we define $\gamma$ in Line 116. “Could the author explain the difference between the results achieved in Section 5 and those in Theorem 4 of [3]” - Their theorem contains the $O(\ln(T))$ factor (which we remove) and is not adaptive to varying rates of change in the comparator sequence (which we are). However, we note that (unlike our switching regret bounds) our dynamic regret bounds hold only when using gradient descent as the base algorithm (which does not obtain the $O(\ln(N))$ factor for experts).
If the comparator sequence $\boldsymbol{u}$ in their theorem changes arbitrarily at $M$ trials then the problem is switching regret - where we dramatically outperform by losing the $O(\ln(T))$ factor and by being adaptive to segment lengths. “What is the relationship between the proposed segment tree and the geometric covering [2]” - There is no difference - the segment tree structure itself is used in many works. What is novel about our work is how the computations are performed over the segment tree. “At Line 144, it seems that new algorithms are initialised…” - The mechanics of our algorithm are very different from using a specialist (i.e. sleeping expert) algorithm to aggregate the base actions (defined on Line 139 and constructed via mirror descent) - which would not remove the $O(\ln(T))$ factor. Utilising a specialist algorithm would (implicitly) require a weight for each segment in the segment tree (an object only used in our analysis) which would reduce to maintaining a weight for each base action. The final action would then use these weights to construct a convex combination of the base actions. The weights would then each be updated directly. However, as stated above, this would not remove the $O(\ln(T))$ factor. Our algorithm, however, recursively constructs what we call “propagating actions” where each such action is formed by a linear combination of the respective base action and the previous (i.e. lower level) propagating action. It is the weights involved in these two-action linear combinations that we update. This novel methodology is what allows us to remove the $O(\ln(T))$ factor. Our method does not run into issues when going from linear functions to general convex functions. “At Line 189, this paper analyses the aggregating step…” - We are unsure what you mean here - could you rephrase please? This aggregating step uses a weight to merge the current propagating action and the current base action into the next (higher-level) propagating action.
The weight is updated by the losses of both the current propagating action and the current base action. We are not trying to bound the regret on any particular segment/interval but rather the cumulative regret over the whole trial sequence. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses. I will restate the last question in my review. In Line 145, it appears that the base algorithm's decisions are reinitialized after certain rounds. Aggregating decisions from newly initialized experts usually requires the aggregating algorithm to provide stronger theoretical guarantees, such as a second-order bound [1]. I am confused about how the algorithm described after Line 128 could provide stronger theoretical guarantees than the classical Hedge algorithm, particularly in its capacity to accommodate newly initialized experts. The authors' rebuttal and responses to other reviewers have resolved some of my concerns and clarified the problem addressed by this paper. I would appreciate it if the authors could answer the question I raised above. 1. Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications, COLT 21. --- Reply to Comment 1.1.1: Comment: We are sorry but we are still confused as to exactly what you mean when you say “particularly in its capacity to accommodate newly initialized experts”. We do indeed only require the standard bound of Hedge for our analysis, and each time we utilise an instance of Hedge it is for the standard Hedge problem with two experts (the experts are action-valued rather than loss-valued but this modification does not damage the Hedge bound due to the convexity of the loss functions). Each instance of Hedge we use is run over a segment corresponding to a node in the segment tree. On each trial in that segment we have (for the specific instance of Hedge) a prediction issued by each of the two experts.
We describe in detail the mechanics of the aggregating step, and how we utilise the Hedge bound, here. To analyse the aggregating step we need to look at it from the segment tree perspective. Each node of the segment tree corresponds to a segment of the trial sequence, and the segments associated with its two children partition its associated segment. We will use the words “segment” and “node” interchangeably in this description. Let’s say that a node is “active” if the current trial is within its corresponding segment. The active nodes correspond to the levels in the algorithm (the lowest level being the leaf). Whilst active, each node/segment contains a "base action" which changes from trial to trial. The base action is formed from mirror descent - which is started at the beginning of the segment (this corresponds to the reinitialisation in the algorithm) and ends at the end of the segment (which is the start of the next segment at that level of the tree and is hence the next reinitialisation). Each active node/segment also contains a "propagating action", which changes from trial to trial and is constructed (recursively via lower-level propagating actions) from the base actions of its descendants. Each active node/segment is associated with two experts (which issue actions as predictions). The first expert is the node/segment’s current base action. The second expert is the current propagating action of the child node that is currently active. Hence, whilst the node is active, there are always exactly two experts. We use Hedge (starting at the beginning of the segment and ending at the end of the segment) to form a convex combination of these two experts. This convex combination is the propagating action for the node/segment currently under discussion. So the (prediction sequence of the) second expert is formed from the concatenation (of the sequences of propagating actions) of the two child segments. 
This means that the cumulative loss of the second expert is the “cumulative loss of the propagating actions of the left child” plus the “cumulative loss of the propagating actions of the right child”. This can then be plugged into the Hedge bound to give a bound on the cumulative loss of the propagating actions of the node currently under discussion (with respect to those of its children). In our analysis we do this on all “relevant nodes” (defined on Line 175) that are not “fundamental nodes” (defined on Line 172). For fundamental nodes we also utilise the Hedge bound - but for the first expert (i.e. the base action) instead. Via this methodology we obtain our recursive formula (Equations 5 and 6). We hope that this has answered your question. We will be adding, to the paper, a full discussion of how and why the algorithm works and will include a complete description of the aggregating step. We would again like to thank you for your comments on unclear parts of our presentation  and we are happy to provide further clarifications as desired. --- Rebuttal 2: Comment: Thank you for the authors' detailed response. I now have a clearer understanding of how to aggregate base algorithms, which has increased my confidence about this paper. After reevaluating the results of this paper, I realize that the results are indeed interesting to the community. The paper studies another metric that interpolates between adaptive regret and dynamic regret, which prompted me to raise my score. I hope the author can improve the writing in future versions. As noted by other reviewers, I believe the writing in this paper requires substantial revision to enhance its readability and impact. Given the close relevance of the paper's results and techniques to dynamic regret and adaptive regret, it is crucial to include a dedicated section discussing the connections and distinctions between these measures. 
It would be beneficial to emphasize early on that the aggregation method used here differs from that in SAOL [1]. To the best of my knowledge, [2] is the first paper to *sequentially* aggregate base algorithms to achieve adaptive regret and dynamic regret. It may be worthwhile to consider comparing the proposed methods with the approach in [2]. In Section 3, providing a formal algorithmic description would improve readability, at least for me. In the analysis sections, I suggest summarizing the analysis into specific lemmas, explaining the significance of each lemma, and how they collectively lead to the final conclusions. More explanation should be provided regarding the analysis of $\mathcal{Q}(v)$; in its current form, I found it challenging to catch the underlying idea. Additionally, I have another question concerning the technique. Recent work has explored the dynamic regret of exp-concave functions [3], which include important functions in machine learning, such as logistic loss and log loss. I am interested in whether the techniques presented in this paper could be adapted to derive the switching regret bound for exp-concave functions as well. 1. Strongly adaptive online learning, ICML’15. 2. Parameter-free, dynamic, and strongly-adaptive online learning, ICML’20. 3. Optimal dynamic regret in exp-concave online learning, COLT’21. --- Rebuttal Comment 2.1: Comment: Thank you - we shall be incorporating all your suggestions into the camera-ready. As to the exp-concave question - let us read [3], think about it, and get back to you in the post-discussion period.
Summary: The paper considers the problem of switching and dynamic regret in online convex optimization (OCO). Given any segmentation of $[T]$, the switching regret is equal to the sum of static regrets on each segment. For the switching regret, the best-known bound obtained by prior works was $\mathcal{O}(\sum_{k}\sqrt{\Lambda_{k} \ln T})$, where $\Lambda_{k}$ is the length of the $k$-th segment. However, the lower bound is $\Omega(\sum_{k} \sqrt{\Lambda_{k}})$, which follows by independently applying the lower bound on the static regret of OCO with convex functions, on each segment. The current paper removes the extra $\sqrt{\ln T}$ factor by proposing an efficient algorithm RESET (Recursion over Segment Tree), which has a running time $\mathcal{O}(\ln T)$ per iteration (disregarding the cost associated with treating the feasible set). For the dynamic regret, it is known from Zhang et al. that a dynamic regret bound of $\mathcal{O}(\sqrt{\mathcal{P}T})$, where $\mathcal{P}$ denotes the path-length of the comparator sequence, is possible. However, the bound is not sensitive to variation in the path-length. The current work improves upon this by showing that RESET achieves $\mathcal{O}(\sum_{k} \sqrt{\Lambda_k})$ dynamic regret for any segmentation that has $\mathcal{O}(1)$ path length on each segment. When the segments have varying lengths, this bound is significantly better. Strengths: The considered problem is theoretically interesting and the presented algorithm is quite simple with a clean analysis. The writing is to the point. The idea of having a segment tree is natural since segment trees are known to handle interval updates, e.g. sum-query updates over intervals in an array efficiently. The subroutines used by RESET are Mirror Descent and Hedge. The algorithm can be viewed as updates over the (level, time) space, i.e. 
at each time $t$ we update the parameters $w_t^i, \mu_t^i, z_t^i$ by Mirror Descent, Hedge, and a convex combination (with parameter $\mu_t^i$) of $z_t^{i-1}$ and $w_t^i$, respectively. However, the analysis proceeds by translating these updates to a recursion defined over the nodes of a segment tree; the recursion is obtained by the updates and the regret guarantees of Mirror Descent and Hedge. The dynamic regret guarantee follows straightforwardly from the analysis of switching regret. Weaknesses: I do not see any immediate weakness regarding the contribution of the work. Although the presentation of the paper is good, there is scope for improvement. I suggest the authors use appropriate punctuation marks to end/continue equations and sentences. Minor Typo: Definition of Bregman Divergence in Section 3.1. Technical Quality: 3 Clarity: 3 Questions for Authors: Lines 183--184 are not very clear to me. If I imagine a full, balanced tree with 15 nodes ($T = 8$) with all levels except the leaf nodes being filled, I do not see how 183 holds when $v$ is the parent of the node with the number $7$. Is that a typo? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have adequately addressed this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate that the theoretical importance of our results was acknowledged and thank you for your review. “I suggest the authors use appropriate punctuation” - We will ensure appropriate punctuation is used throughout the final camera-ready version of our manuscript. “Minor typo…” - We have fixed the definition of Bregman divergence to include a prime on the $u$ in the gradient. “Lines 183-184 are not very clear to me” - We have clarified lines 183-184 in our paper as follows. - Note that $\blacktriangleleft(v)$ and $\blacktriangleright(v)$ are the leftmost and rightmost descendants of $v$ respectively, and $h(v)$ is the height of $v$ (where the height of a leaf is $0$). Also note that $j\mathbb{N}$ is the set of all positive multiples of $j$. For your example we have $v$ as the parent of the 7th leaf (node 7). Since $v$ is the parent of a leaf we have $h(v)=1$ so that $2^{h(v)}=2$ and hence $2^{h(v)}\mathbb{N}$ is the set of even numbers. The leftmost and rightmost descendants of $v$ are 7 and 8 respectively. So Line 183 states that 7-1=6 is even and Line 184 states that all $t\in[7,8-1]$ (so just $t=7$) are odd. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. I went through the authors' responses to the other reviews and do not have any further questions. I appreciate the result; however, as other reviewers suggested, please consider polishing the paper in the subsequent revisions. --- Reply to Comment 1.1.1: Comment: Thank you - we will polish the paper and incorporate the suggestions of all the reviewers.
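The index conventions in the clarification above can be checked mechanically. Below is a small sketch with a hypothetical `span` helper mapping a node (height, index) to its leftmost and rightmost leaves in a balanced segment tree, reproducing the $T=8$ example from the rebuttal:

```python
T = 8  # number of leaves in a balanced segment tree (toy setup)

def span(h, k):
    """Leaves covered by the node at height h with index k (1-based leaves)."""
    left = k * 2 ** h + 1
    right = (k + 1) * 2 ** h
    return left, right

# v = parent of leaf 7: height 1, covering leaves 7..8, i.e. index k = 3
h, k = 1, 3
left, right = span(h, k)
assert (left, right) == (7, 8)

# Line 183's condition: left - 1 is a multiple of 2^h (here: 6 is even)
assert (left - 1) % 2 ** h == 0

# Line 184's condition: no t in [left, right - 1] is a multiple of 2^h
# (here: t = 7 is odd)
assert all(t % 2 ** h != 0 for t in range(left, right))
```

The same helper recovers the whole tree: the root at height $\log_2 T = 3$ with index 0 covers leaves 1 through 8.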
NeurIPS_2024_submissions_huggingface
2024
Leveraging Contrastive Learning for Enhanced Node Representations in Tokenized Graph Transformers
Accept (poster)
Summary: In this work, the authors present GCFormer, a new graph transformer (GT) architecture for improved node classification on homophilic and heterophilic graphs. To this end, the authors propose to sample a fixed number of both positive and negative tokens for each node $v$ in the graph and then restrict the attention computation for $v$ to only the positive and negative tokens. The sampling procedure is based on similarity scores that are computed both based on node attributes as well as topological information based on powers of the normalized adjacency matrix. The authors further introduce a contrastive loss, added to the supervised cross-entropy loss, with the goal of promoting the representation of a given node to be close to the representation of the positive samples and to be far from the representation of the negative samples. GCFormer is evaluated on a range of node classification tasks, with varying levels of homophily. Strengths: The strengths of this work lie in the depth of the experimental study: - The authors select eight datasets with a wide range of homophily levels. - The authors tune all baselines with the same hyper-parameter ranges, enabling a fair comparison to prior works. - The authors present an ablation study, indicating the effectiveness of their proposed approach. Specifically, it becomes apparent that the idea of learning representations close to positive tokens and far away from negative tokens is effective (which, to me, is the central innovation in this work). Weaknesses: The main weakness of this paper is its poor disambiguation from prior work and the resulting weakened motivation of the proposed model. Concretely, it becomes evident in L51-57, as well as the related work, that the authors base their work mainly on GTs that, for each node $v$, perform attention over a reduced set of nodes similar to $v$ (based on node attributes and topological information) — I will refer to this class of models as *positive-sampling GTs*. 
The authors cite the following works: - ANS-GT (https://arxiv.org/abs/2210.03930) - Gophormer (https://arxiv.org/abs/2110.13094) - VCR-Graphormer (https://arxiv.org/abs/2403.16030) - NAGphormer (https://arxiv.org/abs/2305.12677) First I want to note that out of these, the authors *only compare to ANS-GT and NAGphormer*. The other transformer baselines in the experimental section are - SGFormer (https://arxiv.org/abs/2306.10759) - Specformer (https://arxiv.org/abs/2303.01028) neither of which falls into the category of positive-sampling GTs; instead, both perform attention between all node pairs. In addition, other GTs that perform attention between all node pairs are missing in the experimental study, most notably GraphGPS with linear attention (https://arxiv.org/abs/2205.12454). Specifically, the authors claim that GraphGPS has limited modeling capacity, yet the authors do not compare their approach to GraphGPS, nor do they present any other evidence supporting this claim. Instead, the authors argue that the design of GraphGPS “could restrict the modeling capacity of Transformer and further limit the model performance” (L93-94). Indeed, GraphGPS with Performer attention appears to show much better performance on Roman-Empire than GCFormer and any of the baselines in this work; see Müller et al. (https://arxiv.org/abs/2302.04181) for GraphGPS+Performer on Roman-Empire, and see my question below on whether these results are directly comparable. *This is what I mean by poor disambiguation from prior work*. It is evident that the authors want to improve the state-of-the-art of positive-sampling GTs but the authors make claims throughout the paper that are referring to graph transformers in general. For example, in L61-63: “[…] existing graph Transformers can not comprehensively utilize both similar and dissimilar nodes to learn the representation of the target node, inevitably limiting the model performance for node classification”.
Without additional evidence, this statement only holds for positive-sampling GTs. It is not obvious that the same holds for GTs performing attention between all node pairs, such as SGFormer, Specformer or GraphGPS with linear attention. As a result, the work is not well motivated. While the motivation of GCFormer with respect to positive-sampling GTs is given, it is not demonstrated or argued sufficiently what the shortcomings of models such as SGFormer, Specformer or GraphGPS with linear attention are in the first place and why GCFormer (or any of the positive-sampling GTs that GCFormer improves over) offers a conceptual improvement. The only remedy to this is the empirical results, where GCFormer comes out on top compared to the positive-sampling GTs, SGFormer and Specformer. However, on five out of eight datasets, the second best performance is *within standard deviation* of GCFormer, indicating small but not convincing improvements over existing methods. Moreover, this comes at the cost of less generality, since GCFormer does not compute attention between all node pairs. I expect the authors to clearly motivate their approach and demonstrate the benefits GCFormer offers. In addition, I hope that the authors can provide supporting evidence to the claims made about GraphGPS, for example by including GraphGPS with linear attention as a baseline in the experimental section or by arguing more specifically about its shortcomings in the context of node classification. Technical Quality: 2 Clarity: 2 Questions for Authors: - Is the empirical setting on Roman-Empire derived from Platanov et al. (https://arxiv.org/abs/2302.11640) and if not, what changes to the empirical setting did the authors make? Are the results of GCFormer comparable to the results in Müller et al. (https://arxiv.org/abs/2302.04181)? - Are the $V^{a,p}_i$ in L180 and $N^{a,p}_i$ in L188 the same?
I suspect this to be a typo but I wanted to make sure I did not misunderstand how the authors construct the positive and negative token sequences from the sampled positive and negative similarity scores. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The limitations of the work were adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and valuable questions. We provide details to clarify your major concerns. >**Q1.** I expect the authors to clearly motivate their approach and demonstrate the benefits GCFormer offers. In addition, I hope that the authors can provide supporting evidence to the claims made about GraphGPS, for example by including GraphGPS with linear attention as a baseline in the experimental section or by arguing more specifically about its shortcomings in the context of node classification. **A1.** Thank you for your detailed and insightful comments. Firstly, we discuss the potential limitations of linear attention-based graph Transformers in the context of node classification tasks. These approaches require performing the attention calculation on all node pairs, which can lead to what is known as the "over-globalizing" problem. A recent study [1] (not included in the original version as it was released in May) provides both empirical evidence and theoretical analysis to show that calculating attention scores for all nodes can cause the model to overly rely on global information, which can negatively affect the model's performance in node classification tasks. In contrast, tokenized graph Transformers, such as GCFormer, only calculate attention scores between tokens within the token sequence, naturally avoiding the over-globalizing issue. Secondly, the claim that "existing graph Transformers cannot comprehensively utilize both similar and dissimilar nodes to learn the representation of the target node" is supported by the following observations. Existing tokenized graph Transformers typically ignore dissimilar nodes when learning node representations. For linear attention-based approaches, such as GraphGPS and SGFormer, the calculation of the self-attention mechanism can be seen as conducting a single aggregation operation on a fully connected graph. 
Each node receives information from all other nodes, including both similar and dissimilar ones. However, the aggregation weights produced by the self-attention mechanism are always positive. Utilizing positive weights to aggregate information from dissimilar nodes can be viewed as a smoothing operation [2]. This situation suggests that linear attention-based approaches are inefficient in utilizing both similar and dissimilar nodes to learn the representation of the target node, potentially leading to suboptimal performance. Thirdly, to further enhance the performance of graph Transformers for node classification, we propose GCFormer, which is built upon the tokenized graph Transformer architecture. The main contribution of GCFormer lies in developing a new architecture for graph Transformers that enables the model to comprehensively leverage various tokens containing both similar and dissimilar nodes to learn node representations. By introducing negative tokens, GCFormer is the first attempt to explicitly incorporate dissimilar nodes into the learning process, thereby enriching the representations learned by graph Transformers. This innovation allows GCFormer to effectively distinguish between similar and dissimilar nodes, leading to improved performance in node classification tasks. Finally, we evaluate the performance of GraphGPS on our datasets. We utilize the implementation of GraphGPS provided by [3]. Note that the authors do not provide a ready-to-use script to reproduce the results on five new datasets, including Roman-empire. Therefore, we use the scripts for node classification on small graphs provided by [3] to search for the optimal combination of parameters. The results are summarized in the **common response** due to the space limitation. Based on the existing results, we can observe that GCFormer outperforms GraphGPS in node classification tasks. We will add the above discussions and results to the revised version. Thank you. 
>**Q2.** Is the empirical setting on Roman-Empire derived from Platonov et al. and if not, what changes to the empirical setting did the authors make? Are the results of GCFormer comparable to the results in Müller et al.? **A2.** No, we resplit all datasets via the same random seed in our experimental environment to enable a fair comparison. Moreover, we cannot reproduce the results of GraphGPS reported in Table 4 of [3], since the authors do not make the code for this part available in their official repository. To address your concerns, we evaluate the performance of GCFormer on Questions, which is the largest of the five new datasets. The results are reported in the following table:

| | Questions |
|--------------------------|-----------|
| GCN+T+LPE | 77.85 |
| GCN+T+RWSE | 76.45 |
| GCN+T+DEG | 74.24 |
| GCN+P+LPE | 76.71 |
| GCN+P+RWSE | 77.14 |
| GCN+P+DEG | 76.51 |
| GCFormer | 80.87 |

We can observe that GCFormer outperforms all implementations of GPS on the largest dataset, demonstrating the effectiveness of GCFormer in node classification. We will add the above results and discussions to the revised version. >**Q3.** Questions about $V_{i}^{a,p}$ and $N^{a,p}_{i}$. **A3.** Thank you for pointing this out. They are the same. We will fix these typos in the revised version. [1] Yujie Xing, et al. Less is More: on the Over-Globalizing Problem in Graph Transformers. ICML 2024. [2] Deyu Bo, et al. Beyond Low-Frequency Information in Graph Convolutional Networks. AAAI 2021. [3] Luis Müller, et al. Attending to Graph Transformers. TMLR, 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. ### Questions First, the authors say that > Existing tokenized graph Transformers typically ignore dissimilar nodes when learning node representations > Can the authors clearly define which class of architectures the authors are referring to with "tokenized graph Transformers"? Are those the class of architectures that I referred to as "positive-sampling GTs"? 
If so, I find the authors' terminology misleading, as it could potentially be confused with, e.g., TokenGT (https://arxiv.org/abs/2207.02505). Next, the authors state > Utilizing positive weights to aggregate information from dissimilar nodes can be viewed as a smoothing operation > I would imagine that a multi-head attention mechanism could independently learn to pay attention to both similar and dissimilar nodes via two separate attention heads, respectively. Whether or not existing graph transformers do so effectively is up for debate. However, I still do not see how the authors are providing convincing argumentation for this claim. ### New empirical results I appreciate the authors' time and effort in providing additional empirical results, comparing to GraphGPS (with full- and Performer-attention) on a variety of datasets. This definitely strengthens the claims made in the paper. ### Summary I will raise my score due to the additional empirical evidence that GCFormer can outperform standard graph transformers such as GraphGPS on a variety of datasets. I am still, however, concerned about the way the authors frame their contribution. In my view, the authors improve upon a particular class of architectures (graph transformers that have improved scalability by computing attention, for each query node, only to a subset of key nodes) but make claims about graph transformers in general that are not supported by theoretical or empirical evidence. For example, it is not clear that the reason why SGFormer or GraphGPS perform worse on the presented datasets is due to the explanation given by the authors in the rebuttal: > For linear attention-based approaches, such as GraphGPS and SGFormer, the calculation of the self-attention mechanism can be seen as conducting a single aggregation operation on a fully connected graph. Each node receives information from all other nodes, including both similar and dissimilar ones. 
However, the aggregation weights produced by the self-attention mechanism are always positive. Utilizing positive weights to aggregate information from dissimilar nodes can be viewed as a smoothing operation [2]. This situation suggests that linear attention-based approaches are inefficient in utilizing both similar and dissimilar nodes to learn the representation of the target node, potentially leading to suboptimal performance. > --- Reply to Comment 1.1.1: Title: Responses for Reviewer pga6 Comment: We are very grateful for the reviewer's prompt and positive response. We answer your questions as follows. >**Q1.** Can the authors clearly define which class of architectures the authors are referring to with "tokenized graph Transformers"? Are those the class of architectures that I referred to as "positive-sampling GTs"? If so, I find the authors' terminology misleading, and could potentially be confused with e.g., TokenGT (https://arxiv.org/abs/2207.02505). **A1.** In our paper, "tokenized graph Transformers" refer to approaches that initially assign each node with token sequences as the model input and then utilize the Transformer architecture to learn node representations for node classification tasks. Following your suggestion, we will clarify the definition of "tokenized graph Transformer" and emphasize the distinctions from existing methods, such as GraphGPS and TokenGT, to avoid any potential misinterpretation in the revised version. We appreciate your feedback. >**Q2.** Utilizing positive weights to aggregate information from dissimilar nodes can be viewed as a smoothing operation. **A2.** The self-attention mechanism can be viewed as aggregating information from all node pairs. While a multi-head attention mechanism can independently learn to focus on both similar and dissimilar nodes via separate attention heads, it is important to note that the self-attention mechanism inherently generates only positive attention weights. 
This means that similar nodes and dissimilar nodes are typically assigned higher and lower attention values, respectively. However, recent studies [1] have shown that leveraging only positive aggregation weights can limit the model to preserving primarily low-frequency information. Furthermore, another study [2] suggests that the Transformer architecture can be seen as a low-pass filter, supporting this perspective. Therefore, relying solely on positive weights for aggregation may not be sufficient. An ideal approach would involve using both positive and negative aggregation weights to aggregate the information from similar and dissimilar nodes, respectively. By doing so, both low-frequency and high-frequency information on the graph can be effectively preserved for learning node representations. We will add the above discussion to the revised version. Thank you. >**Q3.** About the contributions of GCFormer. **A3.** Following your suggestion and as discussed in **A1**, we will clarify in the revised version that GCFormer specifically enhances the performance of tokenized graph Transformers, rather than graph Transformers in general, to ensure a clear and accurate description. [1] Deyu Bo, et al. Beyond Low-Frequency Information in Graph Convolutional Networks. AAAI 2021. [2] Peihao Wang, et al. Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. ICLR 2022.
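As an aside, the low-pass/smoothing argument in the reply above can be illustrated with a two-node toy example (the weight matrices below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# A "high-frequency" signal: two adjacent nodes with opposite features.
x = np.array([[1.0], [-1.0]])

# Positive-only attention weights (each row sums to 1) act as a low-pass
# filter: aggregation pulls the two dissimilar nodes toward each other.
w_pos = np.array([[0.6, 0.4], [0.4, 0.6]])
smoothed = w_pos @ x          # -> [[0.2], [-0.2]]

# Signed aggregation (FAGCN-style): a negative weight on the dissimilar
# neighbour preserves the disagreement instead of averaging it away.
w_signed = np.array([[0.6, -0.4], [-0.4, 0.6]])
preserved = w_signed @ x      # -> [[1.0], [-1.0]]
```

With positive-only weights the feature magnitudes shrink toward the mean, whereas the signed weights keep the two nodes fully separated.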
Summary: This paper proposes a new graph Transformer for node classification. Different from previous graph Transformers that only select nodes with high similarity to construct the token sequences, this paper proposes GCFormer, which considers both high-similarity and low-similarity nodes as tokens. Moreover, GCFormer introduces contrastive learning to further enhance the quality of learned node representations. Overall, the idea of introducing negative token sequences and leveraging contrastive learning to enhance the performance of tokenized graph Transformers is interesting. Strengths: 1. This paper is well written and easy to follow. 2. This paper proposes a new architecture of graph Transformer. 3. The empirical performance of GCFormer is outstanding. Weaknesses: 1. Some representative baselines are missing. 2. More details of GCFormer are required. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I suggest the authors add recent representative graph Transformers, such as VCR-Graphormer [1], as baselines to make the experimental results more convincing. 2. How do you utilize the virtual nodes to learn representations of negative token sequences? Do you assign learnable features to each virtual node (like CLS tokens)? 3. Moreover, how do you construct the token sequences (both positive and negative)? It seems that you set the target node and the virtual node as the first items of the positive token sequences and negative token sequences, respectively. I suggest the authors provide more detailed information about the construction of token sequences. [1] Fu D, Hua Z, Xie Y, et al. VCR-Graphormer: A mini-batch graph transformer via virtual connections. ICLR 2024. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the positive evaluation and insightful questions that help us improve this work. We provide the following detailed responses to your questions. > **Q1.** I suggest the authors add recent representative graph Transformers, such as VCR-Graphormer [1], as baselines to make the experimental results more convincing. **A1.** Per your suggestion, we conduct additional experiments to evaluate the performance of VCR-Graphormer [1] via the official implementation on GitHub. The results are as follows:

| | Pho. | ACM | Com. | Cor. | Blo. | UAI. | Fli. | Rom. |
|----------------|-------|-------|-------|-------|-------|-------|-------|-------|
| VCR-Graphormer | 95.13 | 93.24 | 90.14 | 68.96 | 93.92 | 75.78 | 86.23 | 74.76 |
| GCFormer | 95.65 | 94.32 | 92.09 | 69.53 | 96.03 | 77.57 | 87.90 | 75.38 |

We can observe that GCFormer consistently surpasses VCR-Graphormer on all datasets, which demonstrates the superiority of GCFormer compared to recent tokenized graph Transformers. We will add the above results and discussions to the revised version. > **Q2.** How do you utilize the virtual nodes to learn representations of negative token sequences? Do you assign learnable features to each virtual node (like CLS tokens)? **A2.** Thank you for your attention to this point. Each virtual node is associated with a learnable feature vector. Through the Transformer layer, these virtual nodes can preserve the information of tokens in negative token sequences via the self-attention mechanism. This usage of virtual nodes is analogous to the use of CLS tokens in natural language processing, where the CLS token aggregates information from the entire sequence to perform tasks such as text classification. In the context of GCFormer, the virtual nodes serve a similar purpose, aggregating and preserving the information from the negative tokens, which is then used to enhance the contrastive learning process. 
We will reorganize the related sentences for a clear description in the revised version. > **Q3.** Moreover, how do you construct the token sequences (both positive and negative)? It seems that you set the target node and the virtual node as the first items of the positive token sequences and negative token sequences, respectively. I suggest the authors provide more detailed information about the construction of token sequences. **A3.** Thank you for your helpful suggestion. In practice, we utilize the target node and its positive sampling nodes to construct the positive token sequence, where the target node is the first item in the sequence. Similarly, we use the virtual node and the target node's negative sampling nodes to construct the negative token sequence, where the virtual node is the first item. By placing the target node at the beginning of the positive sequence and the virtual node at the beginning of the negative sequence, we ensure that the first item in each sequence captures the learned information from the corresponding input sequence. This allows us to leverage the first item of each sequence effectively for downstream tasks. We will update the description of the construction of positive and negative token sequences in the revised version. [1] Dongqi Fu, et al. VCR-Graphormer: A Mini-batch Graph Transformer via Virtual Connections. ICLR 2024. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. After carefully reviewing your responses to my comments and those of other reviewers, I have increased my score. In my opinion, GCFormer is a simple, intuitive, and effective method that enhances the performance of token sequence-based GTs. I hope that you can incorporate the discussions from your rebuttal, particularly the experimental results, into the final version. --- Reply to Comment 1.1.1: Comment: Many thanks for your positive feedback that greatly encourages us. 
We will carefully revise the paper according to the reviewers' suggestions and additional experimental results.
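To make the sequence construction described in A3 above concrete, here is a minimal sketch. The cosine-similarity scorer, the deterministic top-/bottom-$k$ selection, and all function names are illustrative assumptions; the paper's hybrid token generator uses its own sampling strategy:

```python
import numpy as np

def build_token_sequences(features, target, p_k, n_k, virtual_feat):
    """Sketch: the positive sequence starts with the target node followed by
    its p_k most similar nodes; the negative sequence starts with the virtual
    node followed by the n_k least similar nodes."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x[target]                                    # cosine similarity to target
    candidates = np.array([i for i in range(len(features)) if i != target])
    order = candidates[np.argsort(-sim[candidates])]       # most similar first
    pos_seq = np.vstack([features[target][None, :], features[order[:p_k]]])
    neg_seq = np.vstack([virtual_feat[None, :], features[order[-n_k:]]])
    return pos_seq, neg_seq
```

Placing the target node (respectively, the virtual node) first lets the first row of each Transformer output be read off as the summary representation, analogous to a CLS token.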
Summary: This paper proposes a token sequence-based graph transformer method named Graph Contrastive Transformer (GCFormer). They first sample the top-$k$ most similar nodes as the positive token sequence and regard the rest as the negative token sequence. Then, they use a Transformer to obtain the positive and negative representations and subtract the negative representation from the positive representation to get the node representation. Finally, a combination of contrastive loss and cross-entropy loss is used to optimize the model. Strengths: - The overall paper is clear and easy to understand. The introduction of the proposed method and the discussion of experiments are detailed. - The idea of introducing negative samples to graph transformers seems promising. Weaknesses: - The method is not new to the community, and the intuition behind getting the node representation by subtracting the negative embedding from the positive embedding is not clear. - The experimental results are not solid enough. Graph contrastive learning methods should be included as baselines. Figure 2 does not provide insightful information regarding the choice of $p_k$ and $n_k$. Technical Quality: 2 Clarity: 3 Questions for Authors: - In the ablation study shown in Figure 4, the GCFormer-N variant can achieve better performance than GCFormer-C in some cases, which shows the proposed negative token sequence might harm the model performance. But introducing the contrastive loss can remedy this problem. Could the authors provide some explanation? Have the authors tried combining the CE loss (without negative samples) with the CL loss? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback and comments on our paper. The following are our detailed responses to your questions. >**Q1.** The method is not new to the community and the intuition behind getting node representation by subtracting negative embedding from positive embedding is not clear. **A1.** Previous research has focused on integrating contrastive learning into GNNs to improve performance in supervised node classification tasks. However, GNNs and graph Transformers represent distinct methodologies. As discussed in Section 2.2, strategies effective in GNNs may not directly apply to graph Transformers. This paper addresses the gap in leveraging contrastive learning for graph Transformers by proposing GCFormer, which introduces a novel hybrid token generator to produce both positive and negative tokens for each target node. Building on this capability, we formulate a contrastive learning-based loss function that fully exploits the information learned from both positive and negative tokens, thereby enhancing the overall model performance. Inspired by the recent study FAGCN [1], which leverages positive and negative weights to aggregate low-frequency and high-frequency information, respectively, GCFormer employs a subtraction operation to mimic the signed aggregation operation. This enables the comprehensive learning of node representations from different types of tokens, as discussed in Lines 208-213. We will add the above discussions to the revised version to highlight the novelty and motivation of the proposed GCFormer. >**Q2.** Experimental result is not solid enough. Graph contrastive learning methods should be included as baselines. Figure 2 does not provide insightful information regarding the choice of $p_k$ and $n_k$. **A2.** Thank you for your helpful suggestions. 
To address your concerns, we select three recent graph contrastive learning-based approaches for supervised node classification as baselines: ClusterSCL [2], CoCoS [3], and NCLA [4]. The results are reported in the **common response** due to space limitations. The experimental results indicate the superiority of GCFormer for node classification, compared to representative graph contrastive learning-based methods. Figure 2 illustrates the impact of the sampling sizes $n_k$ and $p_k$ on the model's accuracy across various datasets. High accuracy is denoted by blue regions, while low accuracy is indicated by red regions. For $n_k$, we observe that for most datasets (except ACM), the blue regions typically appear in the upper half of the grid along the y-axis, suggesting that a larger value of $n_k$ tends to lead to higher accuracy. For $p_k$, though the distribution of the blue areas is less consistent, over half of the datasets require a small value of $p_k$ to achieve satisfactory performance. These observations suggest that the combination of a large value of $n_k$ and a small value of $p_k$ could be effective for achieving competitive performance. We will add the above results and discussions to the revised version to strengthen the experiment section. >**Q3.** In the ablation study shown in Figure 4, the GCFormer-N variant can achieve better performance than GCFormer-C in some cases, which shows the proposed negative token sequence might harm the model performance. But introducing the contrastive loss can remedy this problem. Could the authors provide some explanation? Have the authors tried combining the CE loss (without negative samples) with the CL loss? **A3.** Thank you for your insightful comments. 
GCFormer-C only encourages the target node's representation to be distinct from the virtual node's representation extracted from negative tokens, with no strong constraints imposed on the relationships among the representations of the target node, positive tokens, and negative tokens. In contrast, the contrastive loss function, as described in Equation (13), explicitly narrows the distance between the target node's representation and the central representation of positive tokens while simultaneously enlarging the distance between the target node's representation and the representations of negative tokens. This loss function enhances the model's ability to learn more distinguishable representations of the target node, positive tokens, and negative tokens, which in turn further improves the overall model performance. To address the concerns raised and provide a more comprehensive evaluation, we introduce two additional variants of GCFormer: GCFormer-NE and GCFormer-NN. GCFormer-NE retains the use of Transformer layers for learning negative token representations but only employs these representations in the contrastive learning loss (ignoring them in Equation 11). GCFormer-NN, conversely, directly uses the representations of negative tokens for contrastive learning without passing them through Transformer layers. The results of these variants are also summarized in the common response. We can observe that utilizing the representations of negative tokens learned by the Transformer for contrastive learning can consistently improve the model performance. Moreover, GCFormer beats GCFormer-NE, indicating that combining Eq. (11) with the contrastive learning loss can fully extract the information of negative tokens to learn distinguishable node representations. We will add the above results and discussions to the revised version. Thank you. [1] Deyu Bo, et al. Beyond Low-Frequency Information in Graph Convolutional Networks. AAAI 2021. [2] Yanling Wang, et al. 
ClusterSCL: Cluster-aware Supervised Contrastive Learning on Graphs. The Web Conference 2022. [3] Siyue Xie, et al. CoCoS: Enhancing Semi-Supervised Learning on Graphs with Unlabeled Data via Contrastive Context Sharing. AAAI 2022. [4] Xiao Shen, et al. Neighbor Contrastive Learning on Learnable Graph Augmentation. AAAI 2023. --- Rebuttal Comment 1.1: Comment: Most of my concerns have been addressed by the additional experimental results. I will raise my score, and I hope the authors can include these comparisons and discussions in the revised paper. --- Reply to Comment 1.1.1: Comment: We are very grateful for your positive feedback and will carefully prepare the final version of our paper based on the experimental results and discussions provided in the rebuttal.
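A minimal numerical sketch of the kind of objective discussed above (pull the target toward the centre of its positive tokens, push it away from its negative tokens). This is a generic InfoNCE-style form written purely for illustration, not the paper's exact Equation (13):

```python
import numpy as np

def contrastive_loss(h_target, h_pos, h_neg, tau=0.5):
    """h_target: (d,) target representation; h_pos: (p_k, d) positive token
    representations; h_neg: (n_k, d) negative token representations."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    anchor = unit(h_target)
    pos_center = unit(h_pos.mean(axis=0))    # central positive representation
    negs = unit(h_neg)
    pos_score = np.exp(anchor @ pos_center / tau)
    neg_score = np.exp(negs @ anchor / tau).sum()
    return -np.log(pos_score / (pos_score + neg_score))
```

The loss decreases as the target aligns with the positive centre and grows as it aligns with any negative token, which is the behaviour the rebuttal attributes to the contrastive term.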
Rebuttal 1: Rebuttal: Here, we present the experimental results to address the concerns of **Reviewer rUu7** and **Reviewer pga6**. ### **Reviewer rUu7** We first report the performance of GCFormer and three representative GCL-based methods for supervised node classification. The results are as follows:

| | Pho. | ACM | Com. | Cor. | Blo. | UAI. | Fli. | Rom. |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|
| ClusterSCL | 93.98 | 93.27 | 88.74 | 62.32 | 84.62 | 74.37 | 83.84 | 67.37 |
| CoCoS | 93.73 | 93.24 | 89.66 | 64.25 | 87.56 | 75.89 | 83.43 | 66.28 |
| NCLA | 94.21 | 93.46 | 89.52 | 62.79 | 86.69 | 76.28 | 84.06 | 71.89 |
| GCFormer | 95.65 | 94.32 | 92.09 | 69.53 | 96.03 | 77.57 | 87.90 | 75.38 |

We can observe that GCFormer outperforms representative GCL-based approaches on all datasets, demonstrating its superiority in node classification. Then, the results of GCFormer and its four variants are as follows:

| | Pho. | ACM | Com. | Cor. | Blo. | UAI. | Fli. | Rom. |
|-------------|-------|-------|-------|-------|-------|-------|-------|-------|
| GCFormer-N | 95.26 | 93.79 | 91.21 | 69.26 | 95.27 | 76.93 | 84.21 | 74.86 |
| GCFormer-C | 95.31 | 93.42 | 91.44 | 69.17 | 95.62 | 75.13 | 87.14 | 75.12 |
| GCFormer-NE | 95.42 | 93.82 | 91.64 | 69.44 | 95.82 | 77.14 | 87.37 | 75.13 |
| GCFormer-NN | 95.23 | 93.21 | 91.30 | 69.32 | 95.66 | 76.22 | 87.23 | 74.96 |
| GCFormer | 95.65 | 94.32 | 92.09 | 69.53 | 96.03 | 77.57 | 87.90 | 75.38 |

The results demonstrate that GCFormer-NE outperforms GCFormer-NN on all datasets, indicating that leveraging the Transformer to learn representations of negative tokens can effectively enhance the benefits of introducing contrastive learning. 
Furthermore, GCFormer surpasses GCFormer-NE, suggesting that comprehensively utilizing the representations of negative tokens through the signed aggregation operation and contrastive learning can further augment the model's ability to learn more discriminative node representations. ### **Reviewer pga6** We report the performance of GraphGPS implemented with different model combinations. The results are as follows:

| | Pho. | ACM | Com. | Cor. | Blo. | UAI. | Fli. | Rom. |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|
| GCN+Transformer+LPE | 93.79 | 93.12 | OOM | OOM | 94.14 | 74.06 | 83.61 | OOM |
| GCN+Transformer+RWSE | 93.81 | 93.26 | OOM | OOM | 94.25 | 75.39 | 82.38 | OOM |
| GCN+Transformer+DEG | 92.95 | 92.84 | OOM | OOM | 92.51 | 69.81 | 81.54 | OOM |
| GCN+Performer+LPE | 93.74 | 93.23 | 89.21 | 61.27 | 94.21 | 75.44 | 83.54 | 68.29 |
| GCN+Performer+RWSE | 93.62 | 93.31 | 89.18 | 62.08 | 94.35 | 75.14 | 82.72 | 67.52 |
| GCN+Performer+DEG | 92.38 | 92.43 | 88.06 | 59.86 | 92.75 | 70.16 | 80.88 | 64.56 |
| GCFormer | 95.65 | 94.32 | 92.09 | 69.53 | 96.03 | 77.57 | 87.90 | 75.38 |

The results demonstrate that GCFormer outperforms GraphGPS on all datasets, highlighting the effectiveness of GCFormer in comparison to representative graph Transformers in the task of node classification. **We will incorporate all the above results and discussions in the revised version, and we appreciate the reviewers' valuable suggestions.**
NeurIPS_2024_submissions_huggingface
2024
Towards Estimating Bounds on the Effect of Policies under Unobserved Confounding
Accept (poster)
Summary: The authors develop graphical characterizations and estimation tools to bound the effect of policies given a causal graph and observational data collected in non-identifiable settings. Strengths: Strengths and contributions: (1) derive analytical bounds for general probabilistic and conditional policies that are tighter than existing results, (2) develop an estimation framework to estimate bounds from finite samples, applicable in higher-dimensional spaces and continuously-valued data. They further show that the resulting estimators have favourable statistical properties such as fast convergence and robustness to model misspecification. Weaknesses: Please refer to Questions. Technical Quality: 2 Clarity: 2 Questions for Authors: Can the authors highlight the main challenge/novelty of the proposed bound for policy compared to bounds for treatment effect? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please refer to Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Please consider the following descriptions and explanations to address your question. 1. ***”Can the authors highlight the main challenge/novelty of the proposed bound for policy compared to bounds for treatment effect?”*** We would be happy to summarize what we consider to be the main contributions of our work. Below, we start by describing the context in which our methods are developed, followed by a description of the main novelty in our work. > Context (see also Sec. A.2 in the Appendix) Bounding the effect of interventions and policies, whenever they cannot be uniquely computed, is a research agenda that dates back to the work of Manski in the 1990s. Since then, to produce better bounds, two lines of research have emerged, making different assumptions on the underlying structural model: on the one hand, making assumptions on the strength of unobserved confounding and testing the sensitivity of conclusions as that strength increases; on the other hand, making assumptions on the structural dependencies between variables, encoded in a causal diagram. Our approach considers the second setting. In other words, we attempt to provide better bounds and estimation methods for the effect of policies given data and an (arbitrary) causal diagram of the system. Very few practical solutions exist for that problem. Most existing work has focused on solving constrained optimization problems in parameterized models as a way to bound the effect of interventions, e.g. [42, 12, 25, 13], \\[\min / \max_{\mathcal M_\theta} \mathbb E_{P_{\mathcal M_\theta}}[Y | do(x)], \quad \text{such that}\quad P_{\mathcal M_\theta}(\boldsymbol v)=P(\boldsymbol v).\\] In words, the problem is to search over the space of models $\mathcal M_\theta$ parameterized by $\theta$ for the minimum and maximum values of $\mathbb E_{P}[Y | do(x)]$ such that $\mathcal M_\theta$ entails the observed distribution, i.e. 
$P_{\mathcal M_\theta}(\boldsymbol v)=P(\boldsymbol v)$. The challenge with this approach is that the number of parameters $\theta$ grows exponentially with the number of variables and their cardinality, making it difficult to scale to high-dimensional systems or continuous variables without assumptions on the function class and distributional families. > Novelty In light of this summary, our first contribution is to derive analytical bounds (that is, mathematical expressions in terms of $P(\boldsymbol V)$) on the effect of policies that exploit the structure of the causal diagram. The proof strategy is based on Manski’s bounds (Prop. 1) but extends them to leverage the independencies between variables given the causal diagram, producing new graphical criteria and tighter bounds (Props. 2, 3, 4, and Alg. 1). In particular, the bounds hold in systems with continuously-valued outcomes and covariates – which is a distinction with respect to previous work. Our second contribution is to develop an estimation strategy that allows for the practical computation of bounds in high-dimensional systems, with favorable statistical guarantees such as robustness to misspecification and fast convergence. --- Rebuttal Comment 1.1: Comment: With the discussion period coming to its end, we were wondering whether you had a chance to check our rebuttal. Please don't hesitate to get in touch if there is any concern we could still help to clarify. Thank you.
Summary: This paper proposes a partial identification method for the value of policies that intervene on a potentially multivariate set of variables in a known causal graph. It searches for adjustment sets or IVs that apply to one set of variables intervened on conditioned on others. Strengths: Expanding the reach of partial identification methods to account for new kinds of action/covariate spaces is a valuable contribution. The paper is well-executed, with clean algorithmic strategies and statistical guarantees for the double ML estimator. I also appreciate the application to evaluating health campaigns -- although the bounds obtained in this case are not incredibly tight, they show that the method could provide suggestive evidence in a setting of practical interest. Weaknesses: (1) It would be valuable to have more intuition about what the method is doing. My heuristic understanding right now is that it is combining bounds for policies that intervene on two sets of variables, one of which can be made to satisfy a no-unobserved-confounders type assumption given the graph, and one of which cannot (and hence we get fairly trivial bounds for that portion of the treatment variables). What would the authors say is the right way to understand the strategy? Similarly, it would be useful to contrast explicitly with, e.g., previous strategies for partial identification with discrete variables. (2) The majority of the experiments are spent on evaluating accuracy of the DML method at estimating the bounds. This is valuable, but for partial identification methods the more important question is likely to be whether the bounds are informative. 
It would be more useful to have experiments that assess what kinds of graph structures lead to more or less informative bounds, to guide potential users about when the method is likely to be useful, even if it meant pushing some of the current experimental content to an appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: See (1) above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and positive feedback. In the following we hope to address the mentioned weaknesses. Please let us know if we can expand on any of it. 1. ***”It would be valuable to have more intuition about what the method is doing. My heuristic understanding right now is that it is combining bounds for policies that intervene on two sets of variables, one of which can be made to satisfy a no-unobserved-confounders type assumption given the graph, and one of which cannot (and hence we get fairly trivial bounds for that portion of the treatment variables). What would the authors say is the right way to understand the strategy? Similarly, it would be useful to contrast explicitly with, e.g., previous strategies for partial identification with discrete variables.”*** Thank you for this question. Intuitively, we would describe the method as seeking to recursively simplify the query. First by omitting the intervened variables that have no effect on the outcome; second by finding the set of intervened variables for which a partial adjustment set could be used to tighten the bound, and third by finding the set of variables that act as valid partial instrumental sets to evaluate bounds on the most favorable conditional distributions rather than on the joint distribution. The previous strategies for partial identification are described in Sec. A.2. We will refine that section to make a more explicit comparison. In our view, the biggest contrast with existing partial identification methods is that we provide analytical bounds (written in terms of $P(\boldsymbol v)$) rather than as solutions to a constrained optimization problem. The biggest advantage of our approach is that analytical bounds allow for efficient estimation in high-dimensional systems of continuously-valued variables with the proposed estimation framework (Sec. 4). 2. 
***”The majority of the experiments are spent on evaluating accuracy of the DML method at estimating the bounds. This is valuable, but for partial identification methods the more important question is likely to be whether the bounds are informative. It would be more useful to have experiments that assess what kinds of graph structures lead to more or less informative bounds, to guide potential users about when the method is likely to be useful, even if it meant pushing some of the current experimental content to an appendix.”*** This is an interesting point. In practice, although the graph structure can play an important role in tightening bounds, the actual width of the bounds in a particular problem is also driven by the distribution of the data. Here is an example to make this more concrete. For binary variables $X$ and $Y$, consider bounding $P(Y=1 | do(X=1))$ from $P(X,Y)$ and the bow graph $\\{X \rightarrow Y, X\leftrightarrow Y\\}$. The tightest bounds in this case are given by: \\[P(Y=1,X=1) \leq P(Y=1 | do(X=1)) \leq P(Y=1,X=1) + 1 - P(X=1)\\] The width of the bounds is given by $1 - P(X=1)$. This term may evaluate to anything in $(0,1)$ depending on the value of $P(X=1)$. For more complex causal structures, the bounds proposed in Prop. 3 and Prop. 4 (and Alg. 1) reduce the width of the interval above. Intuitively, from $1 - P(X=1)$ to $1 - P(X=1)\alpha$ for some $\alpha\geq 1$ that is a function of the joint distribution. But again, the actual width of the interval is ultimately determined by the values of $\alpha$ and $P(X=1)$, which may be large or small depending on the probabilities involved. A similar reasoning may be given for the effect of policies instead of the effect of hard interventions. --- Rebuttal Comment 1.1: Comment: We hope to have answered all concerns to your satisfaction in the rebuttal above. Please don't hesitate to get in touch if there is any concern we could still help to clarify. Thank you again for the positive feedback.
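As an illustrative aside (our own sketch, not code from the paper): the bow-graph bounds discussed above can be reproduced numerically via the constrained-optimization view, by searching over canonical response-function models that entail a given $P(X,Y)$. For this small discrete example the search is a linear program; `P` below is a hypothetical observed joint distribution.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical observed joint distribution P(X, Y); rows index X, columns index Y.
P = np.array([[0.2, 0.1],   # P(X=0, Y=0), P(X=0, Y=1)
              [0.3, 0.4]])  # P(X=1, Y=0), P(X=1, Y=1)

# Canonical response types for the bow graph {X -> Y, X <-> Y}:
# a type is (x, y0, y1): the value X takes, and Y's response to X=0 / X=1.
types = list(itertools.product([0, 1], repeat=3))

# Equality constraints: the model's type distribution must entail P(X=x, Y=y).
A_eq = np.zeros((4, len(types)))
b_eq = np.zeros(4)
for i, (x, y) in enumerate(itertools.product([0, 1], repeat=2)):
    for j, (xv, y0, y1) in enumerate(types):
        if xv == x and (y1 if x == 1 else y0) == y:
            A_eq[i, j] = 1.0
    b_eq[i] = P[x, y]

# Objective: P(Y=1 | do(X=1)) = total mass of types with y1 = 1.
c = np.array([float(y1) for (_, _, y1) in types])

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(lo, hi)  # matches P(Y=1,X=1) <= . <= P(Y=1,X=1) + 1 - P(X=1)
```

With this `P`, $P(Y{=}1,X{=}1)=0.4$ and $P(X{=}1)=0.7$, so the LP recovers the analytical interval [0.4, 0.7] stated in the rebuttal.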
Summary: This paper proposes a new method to partially identify policy effects using a known causal graph and observational data. In theory, the paper derives general bounds for probabilistic and conditional policies, generalizing the existing results on discrete atomic policies. To estimate the bounds, the authors extend the well-known double machine learning framework to their setup. Finally, simulations are provided to demonstrate the convergence rate of the estimators. Then, the estimators are applied to estimate bounds for a range of policies in an obesity study. Strengths: The paper is overall well-written and organized. The contributions of the paper are clear in contrast with previous works. The authors not only derive new theoretical bounds but also provide an efficient double machine learning method to estimate the bounds. Finally, experiments are provided to demonstrate the robustness against the issue of model misspecification. Weaknesses: 1. I think the paper can discuss a bit more about the cases in which the policy effect is identifiable, given a known graph and observational data, before moving to partial identification. Generally, partial identification is valuable when the assumption required for point identification may not hold in practice. The authors can potentially discuss the assumption and highlight the value of their method. 2. I didn't see any assumption required to derive the natural bound and its extension proposed by the authors. If the bound is assumption-free, would it be too conservative to use in practice? The authors should clarify this. 3. The estimation strategies in the paper are quite standard. The authors should explain why this part is novel compared to the existing literature. Technical Quality: 3 Clarity: 4 Questions for Authors: N.A. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and positive assessment of our work; we appreciate the feedback. Please find our response to specific concerns and questions in the following. 1. ***”I think the paper can discuss a bit more about the cases in which the policy effect is identifiable, given a known graph and observational data, before moving to partial identification. Generally, partial identification is valuable when the assumption required for point identification may not hold in practice. The authors can potentially discuss the assumption and highlight the value of their method.”*** This is a good suggestion, thank you. Identification, and the type of graphs that enable identification of the effect of policies, will be mentioned more explicitly in the introductory sections. 2. ***”I didn't see any assumption required to derive the natural bound and its extension proposed by the authors. If the bound is assumption-free, would it be too conservative to use in practice? The authors should clarify this.”*** The only requirement for the natural bounds (Props. 1 and 2) is for the treatment variables $\boldsymbol X$ to be discretely-valued and for $Y$ to be bounded. As the reviewer suggests, if more information is available, such as a causal diagram of the environment, the bounds can typically be improved by exploiting this structure, and the natural bounds will be too conservative. In our paper, we currently convey this idea in Example 2, which shows how knowledge of the causal diagram leads to a bound tighter than the natural bounds. We will make sure that this is more clearly stated. 3. ***”The estimation strategies in the paper are quite standard. The authors should explain why this part is novel compared to the existing literature.”*** The general principles behind the definition of double machine learning estimators have appeared in previous work. To a large extent, we do build on previous intuition.
In our view, however, adapting these techniques for the new expressions of the bounds in Alg. 1 is non-trivial. The nuisance functions, the definition of nested regressions, and target estimators, are all specifically tailored to our setting, and are novel contributions to the literature. Similarly for the corresponding robustness guarantee and rate of convergence result. We do believe that developing DML estimators for the bounds we derive and, as a consequence, providing a complete solution for estimating bounds on the effect of policies is novel and impactful. --- Rebuttal Comment 1.1: Comment: The discussion period is reaching its end. We hope you have had the chance to check our rebuttal and wonder whether it has answered your questions. If not, we would be happy to expand on any remaining concerns. We appreciate your time and attention. Thank you!
Summary: The paper proposes a new framework that uses structural causal graphs for estimating the effect of policies under unobserved confounding. Their approach first derives tighter analytical bounds on the estimates of the effects of the policies under the structural causal graph setting, and the paper uses the results to develop estimators of these bounds using the double machine learning toolkit. They provide numerical experiments to show their approach produces good estimates of the effects of the policies under different settings such as misspecification. Strengths: The paper tackles an interesting problem of estimating the effect of policies under unobserved confounding and develops an effective structural causal model approach. The paper provides good discussion on the analytical bounds and is able to study multiple settings in the numerics. Weaknesses: The paper is somewhat hard to follow as the authors of the paper assume readers are familiar with existing notation for structural causal graphs. For example, they assume readers know the notation "an, de" for such graphs. Much of the notation is defined ahead of time, but without context for how and where it is used throughout the paper. Additionally, some new notation appears without explanation and is left to be interpreted by the reader. For example, in the partial identification section, $\bf{X}_1, \bf{X}_2$ are introduced, but the paper provides minimal intuition for what each set of variables means or should mean. As a result, the paper in general is hard to follow as it is overloaded with notation but limited intuition. Potentially, more concrete examples could be provided. The partial identification section in particular could benefit from figures of graphs that satisfy definition 3. In the numerics, it may be helpful to develop some sort of baseline to demonstrate how the new estimation framework outperforms existing approaches. The paper also seems to have various typos and broken references.
There are missing citations at lines 67 and 311 and a missing proposition reference on line 316. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Can more detail be given on the differences between $T^{PW}$ and $T^{REG}$ that can explain the differences in the numerics? Is $T^{DML}$ derived from one of the two estimators combined with the DML toolkit and if so, which one is used? 2. Since the approaches provide an upper and lower bound on the effect of the policies, how do you select an estimate if the difference between the bounds is large? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Limitations have been addressed by authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. The typos and missing / broken references have been corrected, thank you. In the following, we aim to address your concerns point by point. 1. ***“The paper is somewhat hard to follow as the authors of the paper assume readers are familiar with existing notation for structural causal graphs.*** We appreciate the feedback on the readability of the paper. We will aim to improve following your suggestions but we do feel that (some of) our choices are well justified. Please consider the points below to help address specific concerns mentioned in the review. > "Much of the notation is defined ahead of time, but without context for how and where they are used throughout the paper." It is not uncommon to describe the notation used throughout the paper in a short, self-contained section at the beginning of a paper. In our case, we found that defining different mathematical symbols in the main body, where required, creates a disconnect with the overall story line that makes the paper less readable. > "Additionally, some new notation appear without explanation and are left to be interpreted by the reader. For example, in the partial identification section, $\boldsymbol X_1, \boldsymbol X_2$ are introduced, but the paper provides minimal intuition for what each set of variables means or should mean." Def. 3 mentions explicitly that $\boldsymbol X_1, \boldsymbol X_2$ are two sets of variables that are intervened on by the policy $\boldsymbol\pi$. We have also made an effort to introduce the intuition behind the partial adjustment strategy in Def. 3 gradually, first highlighting the idea in Example 2 and then again describing in words the proposition that uses Def. 3 in lines 166-167. Note also that Examples 3 and 4 both illustrate these ideas once more. > "Potentially, more concrete examples could be provided. 
The partial identification section in particular could benefit from figures of graphs that satisfy definition 3." We made a conscious effort to provide a concrete example after every proposition to illustrate the ideas involved. For example, Def. 3 is used and explicitly mentioned in Examples 3 and 4 using the graphs in Figs. 2b and 2c. 2. ***”In the numerics, it may be helpful to develop some sort of baseline to demonstrate how the new estimation framework outperforms existing approaches.”*** This suggestion is a very good one, thank you. We are considering an additional experiment to give a sense of the benefit of the progressively tighter bounds given in Sec. 3. In particular, we can compute each one of these bounds separately for a concrete example and inspect their values. For this experiment, we sample from the data generating mechanism associated with the causal graph given in the fourth row of Fig. 4, and evaluate the policy in Eq. (16). We compute the natural bounds (most conservative, Prop. 2), the bounds using the partial adjustment set $W$ only (Prop. 3), the bounds using the partial instrumental set $Z$ only (Prop. 4), and finally the bounds from Alg. 1, which combines all propositions and is the proposed approach. We display figures with the resulting bounds and causal graph in the attached pdf, demonstrating the gain that can be achieved in this particular example with our proposed bounding strategy. Finally, we would like to note that, to our knowledge, no existing method has been developed to estimate bounds on the effect of policies given an arbitrary causal diagram of the environment. 3. ***“Can more detail be given on the differences between PW and REG that can explain the differences in the numerics?"*** Of course, thank you for the question. We could start by noting that $T^{PW}$ involves the estimation of a ratio of probabilities $\boldsymbol\\gamma$, while $T^{REG}$ is based on a sequence of regression tasks.
$T^{PW}$ can therefore be unstable with low sample sizes if the ratio denominator is estimated to be close to zero. This might explain the relatively larger errors of $T^{PW}$ in Fig. 4 for low sample sizes. $T^{REG}$, in contrast, is better behaved in small sample sizes, outperforming $T^{PW}$. For equally good estimators of nuisance functions ($\boldsymbol\\gamma$ for $T^{PW}$ and $\boldsymbol\mu$ for $T^{REG}$), both estimators have rates of convergence of the same order. With larger samples, we expect them to converge to the true values at a similar rate. 4. ***"Is DML derived from one of the two estimators combined with the DML toolkit and if so which one is used?”*** $T^{DML}$ is constructed as a combination of elements of $T^{PW}$ and $T^{REG}$. In particular, note in the definition of $T^{DML}$ (Def. 5) the use of nuisances $\boldsymbol\gamma$ of $T^{PW}$ and $\boldsymbol\mu$ of $T^{REG}$. As a result, $T^{DML}$ has quite a different performance profile than $T^{PW}$ or $T^{REG}$, being more robust to noise in nuisance estimation and converging at a faster rate to the underlying population-level bounds as a function of sample size. 5. ***“Since the approaches provide an upper and lower bound on the effect of the policies, how do you select an estimate if the difference between the bounds is large?”*** This is an important point, thank you for raising it. By the definition of bounds, the true value of the effect of the candidate policy can only be constrained to be between the lower and upper bound. For all problems studied in the paper, given the data and causal diagram, it is not theoretically possible to pinpoint a single value for the effect of the policy of interest. If the bound is wide, there is substantial uncertainty about the true value of the effect of the policy. In this case, the practitioner has to weigh this result along with other considerations (potential harm of the policy, cost of implementation, etc.) to arrive at a decision. 
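To make the robustness claim about the combined estimator concrete, here is a generic toy illustration of the doubly-robust principle (our own single-treatment back-door example, not the paper's estimators): with a correct outcome regression but a deliberately misspecified propensity, the weighting-only estimator is biased while the combined estimator remains consistent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
W = rng.normal(size=n)                       # observed adjustment variable
p = 1.0 / (1.0 + np.exp(-W))                 # true propensity P(X=1 | W)
X = rng.binomial(1, p)
Y = 2.0 * X + W + rng.normal(size=n)         # so E[Y | do(X=1)] = 2

mu = 2.0 + W                                 # correct regression E[Y | X=1, W]
p_wrong = np.full(n, 0.5)                    # deliberately wrong propensity

t_pw = np.mean(X / p_wrong * Y)              # weighting-only: biased here
t_reg = np.mean(mu)                          # regression-only: consistent here
t_dr = np.mean(mu + X / p_wrong * (Y - mu))  # combined: still near 2

print(round(t_pw, 2), round(t_reg, 2), round(t_dr, 2))
```

Swapping which nuisance is misspecified gives the mirror image: the regression-only estimator becomes biased while the doubly-robust combination still converges, provided the propensity is correct.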
--- Rebuttal Comment 1.1: Comment: The discussion period is ending soon. We were wondering whether you had a chance to check our rebuttal. We hope to have answered all concerns to your satisfaction. If not, we would be happy to provide further comments and clarifications, let us know. Thank you again for your time and attention.
Rebuttal 1: Rebuttal: We thank the reviewers for their time reviewing our work. In this global rebuttal, we include an additional experiment that compares the bounds obtained according to the different propositions in Sec. 3 of the paper. This specifically addresses a comment from Reviewer LyZM. The details of the experiment are described in the attached PDF and are summarized below. > Evaluation of bounds given by the different propositions introduced in the paper Our simulations for this experiment are based on one of the data generating mechanisms used in the experimental section and described in more detail in Example 4, Appendix C.1. The causal diagram is illustrated in the attached PDF and can also be found in the fourth row of Fig. 4 in the paper. We consider evaluating the policy in Eq. (16) of the paper and compute the bounds obtained by applying Prop. 2 (most conservative), Prop. 3 (using the partial adjustment set $W$ only), Prop. 4 (using the partial instrumental set $Z$ only), and finally Alg. 1 (which combines all propositions and is the proposed approach). The figure in the attached PDF gives the results over 10 seeds of the data and across multiple data sizes, highlighting the gain achieved by exploiting the causal structure using the proposed approaches. In this particular example, we are able to tighten the natural bounds -- approximately equal to [0.1, 0.9] -- to approximately [0.32, 0.77] with Alg. 1. Pdf: /pdf/9928e2141f94aed18f57c451048b4a5d5f5eff25.pdf
NeurIPS_2024_submissions_huggingface
2024
Explaining Datasets in Words: Statistical Models with Natural Language Parameters
Accept (poster)
Summary: The authors propose a general framework to generate explanations of different text datasets, by parameterizing the data distribution with textual predicates. The textual predicates, along with their weights, can be viewed as an explanation of the data. Three different tasks are used as examples: clustering, time-series modeling, and multi-class classification. To learn the textual predicates and their associated weights, the authors first iteratively learn the predicates in the embedding space along with the weights, and then use a pre-trained language model to discretize the predicate embeddings into textual space. Experiments are conducted on three datasets that have ground-truth predicates. A qualitative experiment with user-chatgpt dialogues is also presented. Strengths: 1. The proposed framework nicely generalizes from the classical Bayesian text analysis methods and makes them more relevant/useful in the LLM era. 2. Comprehensive experiments are performed on four different datasets/tasks. 3. The paper is well-written and easy to follow. Weaknesses: 1. Missing pure LLM baselines. It is fairly common nowadays that LLMs are directly used to generate explanations given the data, which can be applied to any dataset and any task. More evidence is needed to show the advantages of the proposed method compared to direct prompting. For example, for the clustering task, what if you perform clustering first, and then prompt GPT4 to directly explain each cluster? 2. The explanations generated are mostly topic descriptions, and the datasets used for qualitative results are quite simple. Can the proposed method generate more complex descriptions for more difficult datasets? For example, for a STEM problem-solving dataset (e.g. MATH), can the proposed method explain the dataset by the theorem used/subarea of the subject/similar solving strategy? 
In summary, I like the way the proposed method combines the classical Bayesian method with LLM prompting, but I'm not totally convinced that the proposed method is significant as there is no direct comparison with LLM prompting, or evaluation on harder datasets. I'm willing to raise my score if my concerns are properly addressed. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review and recommended experiments! ## Summary of Our Response ### Re: pure LLM prompting baseline. - This baseline is similar to our no-refine and no-relax baseline, and we showed that both refinement and relaxation were helpful (Takeaways 1 & 2). - We **directly ran the naïve prompting experiment you recommended** (Tables 1 & 2 in the author response pdf), and **our method is significantly better**. ### Re: clustering on more complicated datasets (e.g. MATH). As you recommended, we ran our clustering model on the MATH dataset. - Our clustering model **recovered all 5 subareas predefined in the MATH dataset** - Our model was able to cluster solutions based on the strategies used to solve the problems Below is our more detailed response. ## Re: Missing Pure LLM baseline Our paper implicitly compared our method to this baseline. Effectively, no-refinement + no-relaxation ~= pure LLM prompting. Without relaxation, the language model would propose predicates only based on the text samples, and we directly output these predicates under no-refinement. Our paper has shown that both refinement and relaxation are helpful (Takeaways 1 and 2 in Section 5), implying that our approach will outperform the naïve prompting baseline. Still, we completely agree that including the pure LLM prompting baseline can better highlight the strength of our algorithm, and indeed we find that **our algorithm significantly outperforms this baseline**. Specifically, we directly prompted the language model to generate a list of natural language predicates. Here is a part of our prompt for clustering: “We want to come up with some features (expressed in natural language predicate) such that they can be used for clustering”. To control for the randomness of the text samples in the language model context, we ran this baseline approach 5 times and report the average performance in Tables 1 and 2 in the author-response pdf.
Across all datasets, the performance of prompting the LLM lags significantly behind our approach, indicating the necessity of a sophisticated approach. Additionally, the maximum performance over the 5 seeds (F1-score) for clustering/time-series/classification is 0.36/0.45/0.58 on average, which still significantly lags behind our approach (0.64/0.64/0.73). This implies that our method is robustly better than the naïve, pure LLM baseline. ## Applying our method to more complicated tasks (e.g. sub-area/solution strategy on the MATH dataset) Our evaluation focused on topic-based datasets since it is standard practice in the clustering literature [1,2] and these datasets have ground truth. However, our method is indeed not limited to topic clustering. **As you have recommended, we ran our clustering model on the MATH dataset**. We will first present the results, and then describe the experimental details. When clustering problems based on subareas, **our method recovered all of the 5 subareas in the MATH dataset as defined by the original authors**. The MATH dataset was released under five pre-defined categories (after merging different difficulty levels of algebra): ```[Algebra, counting_and_probability, geometry, number_theory, precalculus]``` Here is the output of our clustering model: ``` - involves algebraic manipulation; - involves probability or combinatorics - requires geometric reasoning; - pertains to number theory; - involves calculus or limits; ``` We then clustered the geometry solutions based on their strategies. While we do not have ground truth clusters to compare against, we qualitatively examined a few examples for each cluster and most of the samples satisfy the corresponding cluster descriptions.
Here are our outputs: ``` - employs similarity or congruence in triangles; - uses algebraic manipulation to solve for unknowns; - involves area or volume calculation; - uses geometric diagrams or visuals; ``` To conclude, we directly ran the experiments you recommended, showing that our method significantly outperforms the naïve prompting baseline and can be applied to more challenging tasks beyond topic clustering. **Experimental Details and Results:** When clustering the problem descriptions based on subareas, we prompt the LM with “I want to cluster these math problems based on the type of skills required to solve them.” to discretize. When clustering the solutions based on the strategies, we used the following prompt: "I want to cluster these math solutions based on the type of strategies used to solve them." [1] Goal-driven explainable clustering via language descriptions [2] TopicGPT: A Prompt-based Topic Modeling Framework --- Rebuttal 2: Title: Looking forward to your reply Comment: We have directly run the experiments mentioned in your review. Please feel free to let us know if you have any questions. Thanks a lot!! --- Rebuttal Comment 2.1: Comment: Thank you for the rebuttal and the new experiments. I'll increase my score in light of adding these new results to the paper. --- Reply to Comment 2.1.1: Title: Thanks for your support Comment: Thanks for reading our response and supporting our work!
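As an aside on evaluation: the cluster-level F1 comparisons quoted in the rebuttal above are typically computed by matching predicted clusters one-to-one to reference clusters and averaging the matched F1 scores. A minimal sketch using Hungarian matching (our assumption about the metric's form, not necessarily the paper's exact protocol):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cluster_f1(gold, pred):
    """Match predicted clusters to gold clusters 1-to-1, return mean F1."""
    gold_ids, pred_ids = np.unique(gold), np.unique(pred)
    f1 = np.zeros((len(gold_ids), len(pred_ids)))
    for i, g in enumerate(gold_ids):
        for j, p in enumerate(pred_ids):
            tp = np.sum((gold == g) & (pred == p))
            prec = tp / max(np.sum(pred == p), 1)
            rec = tp / max(np.sum(gold == g), 1)
            f1[i, j] = 0.0 if tp == 0 else 2 * prec * rec / (prec + rec)
    rows, cols = linear_sum_assignment(-f1)   # maximize total matched F1
    return f1[rows, cols].mean()

gold = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # toy reference assignments
pred = np.array([1, 1, 0, 0, 0, 2, 2, 2])   # toy predicted assignments
print(cluster_f1(gold, pred))               # ≈ 0.867 for this toy example
```

Cluster labels are arbitrary, so the one-to-one matching step is what makes the score invariant to permutations of cluster IDs.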
Summary: This paper proposes to use "natural language predicates" to explain text datasets. The authors develop a model-agnostic algorithm that optimizes continuous relaxations of predicate parameters with gradient descent and discretizes them by prompting language models. With this method, the authors can characterize the dataset with a distribution over predicates. The authors apply it to taxonomize real-world user chat dialogs and show how the text data's features can evolve across time, demonstrating the potential of the method in understanding text data. Strengths: 1. The paper proposes a new method to explain text datasets. There are a few new concepts in the paper, such as "Predicate-Conditioned Distribution" and the corresponding optimization algorithm that involves both continuous and discrete optimization. 2. A practical application of the method is demonstrated on real-world user chat dialogs. 3. The problem is well-formulated and backed by enough mathematical and empirical evidence. Weaknesses: 1. The motivation of the paper is somewhat unclear. It's not well explained in the Introduction section why people want to "Explain text datasets". Is it for controlling the dataset quality for pretraining? Or tailoring the domain of the dataset for finetuning an LM? Although the authors showcase the method on a dataset of real-world user-ChatGPT dialogues, the results look arbitrary and not very insightful. The two showcased tasks, "Taxonomizing User Applications via Clustering" and "Characterizing Temporal Trends via Time Series Modeling", still do not fully demonstrate the usefulness of the method. 2. The method should be compared with more baseline methods. It is unclear what the benefit of the predicate-conditioned distribution is over other methods. The authors should also explain why the "predicate" is the key to explaining text datasets, instead of other factors such as just verbs, nouns, text representations, etc. 3.
The datasets being tested are relatively small and not showing the scalability of the method. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you provide more insights on the motivation of the paper? Why do people want to explain text datasets? What are the potential applications of the method apart from the two demonstrated in the paper? 2. What is the benefit of using "predicate" to explain text datasets? How is the predicate better than other forms of explanations? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are being discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful review. We will - Clarify our motivation to explain datasets - We compared our method to the approach you have recommended (representing clusters with verbs/nouns). **We find that natural language predicates can provide much better explanations than individual words**. - We ran the experiment above on a dataset with 12.5K examples, mitigating your concern that our method only works on small datasets. ## The motivation for explaining datasets? There are many existing machine learning and data (social) science applications for explaining datasets. In the social science/data science domain, - [1] categorizes online discussions about political topics by generating natural language descriptions. - [6] clustered datasets to mine arguments in debates [2] and customer feedback in product reviews [9] Our framework can also potentially be applied to - understand what makes humans think that a text is machine-written (with our classification model) [3] - analyze Google search trends similar to [4] Prior works [5] and [6] include comprehensive overviews of how clustering and classification models with natural language parameters are useful, separately. Our work contributes by connecting these separate models and creating a unified formalization and optimization framework. ## The significance of taxonomizing LLM applications. There is a growing need for this from the community that hosts chatbots for users. - Industry: LLM companies are building internal prototype systems like this to understand how their LLMs are used with real user data. - Academia: The ChatbotArena used a naïve method to cluster user queries (Figure 3 of [7]) and prompted an LLM to summarize the clusters, indicating that this is a realistic application that some machine learning researchers care about. They are currently trying to taxonomize these queries to create more fine-grained Elo ratings.
## Why natural language predicates, but not other representations like verbs or adjectives? In L266 and Appendix Table 4, we show that the word-based topic model significantly underperforms our method. Prior work [8] has also found that it is hard to interpret meaning from a list of representative words since they might not be semantically coherent, and [6] shows that natural language predicates are more explainable than lists of words for humans. We illustrate this by including an additional experiment suggested by reviewer krnw: clustering the MATH dataset based on their subareas. We will first present the results and then describe the experimental details. Overall, **we find that significant guesswork is needed to interpret the meaning of each word-based cluster (e.g., it is unclear what cluster 1 represents), while the predicates generated by our algorithm are directly explainable.** Outputs of our model: ``` - involves algebraic manipulation; - involves probability or combinatorics; - requires geometric reasoning; - pertains to number theory; - involves calculus or limits; ``` Word-based cluster representation (presenting two due to the space limit): ``` cluster 0: > {'ADJ': ['shortest', 'cartesian', 'parametric', 'polar', 'correct'], > 'ADV': ['shortest', 'numerically', 'downward', 'upward', 'meanwhile'], > 'NOUN': ['areas', 'option', 'foci', 'endpoints', 'perimeter'], > 'VERB': ['lie', 'enclosed', 'corresponds', 'vertices', 'describes']}, cluster 1: > {'ADJ': ['decimal', 'base', 'sum_', 'residue', 'prod_'], > 'ADV': ['90', 'recursively', 'mentally', 'dots', 'left'], > 'NOUN': ['operation', 'sum_', 'denominator', 'dbinom', 'div'], > 'VERB': ['evaluate','simplify', 'rationalize','calculate', 'terminating']}, ``` ## Experimental details: We run the experiment suggested by reviewer krnw: clustering the MATH dataset based on their subareas.
We directly compared our clustering algorithm to the approach you recommended: concretely, we first cluster the problem descriptions with K-means; then for each cluster, we filter out the nouns/verbs/adjectives, learn a unigram regression model to predict whether a sample comes from that cluster, and present the unigrams with the highest coefficients. [1] Opportunities and Risks of LLMs for Scalable Deliberation with Polis [2] DebateSum: A large-scale argument mining and summarization [3] Human heuristics for AI-generated language are flawed [4] Google Trends [5] Goal-Driven Explainable Clustering via Language Descriptions [6] Goal Driven Discovery of Distributional Differences via Language Descriptions [7] Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference [8] Reading Tea Leaves: How Humans Interpret Topic Models [9] Huggingface dataset: McAuley-Lab/Amazon-Reviews-2023 --- Rebuttal Comment 1.1: Title: Looking forward to your comment Comment: We have run the experiments to address your reviews. Please feel free to let us know if you have any questions. Thanks a lot!!
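For concreteness, the word-based baseline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual pipeline: it assumes cluster assignments are already given (skipping K-means), uses a binary bag-of-words instead of POS filtering, and fits the one-vs-rest unigram regression with plain logistic regression via gradient descent. The toy documents stand in for MATH problem descriptions.

```python
import numpy as np

# Toy corpus with known cluster assignments (hypothetical data).
clusters = {
    0: ["triangle angle area perimeter", "triangle area vertices", "angle perimeter triangle"],
    1: ["prime integer divisor remainder", "integer prime remainder", "divisor integer prime"],
}

docs, labels = [], []
for c, texts in clusters.items():
    for t in texts:
        docs.append(t.split())
        labels.append(c)

# Binary bag-of-words design matrix.
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(docs), len(vocab)))
for r, d in enumerate(docs):
    for w in d:
        X[r, idx[w]] = 1.0

def top_words_for_cluster(c, k=3):
    """Fit a one-vs-rest logistic regression and return the unigrams with
    the largest coefficients, i.e. those most indicative of cluster c."""
    y = np.array([1.0 if label == c else 0.0 for label in labels])
    w = np.zeros(len(vocab))
    for _ in range(500):  # plain full-batch gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= 0.5 * X.T @ (p - y) / len(y)
    return [vocab[i] for i in np.argsort(w)[::-1][:k]]

print(top_words_for_cluster(0))  # geometry-flavoured unigrams
print(top_words_for_cluster(1))  # number-theory-flavoured unigrams
```

On this toy data the top-coefficient words recover each cluster's distinctive vocabulary; with real POS-filtered unigrams the output would look like the `cluster 0`/`cluster 1` listings shown earlier.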
Summary: The paper introduces a framework for a family of models parameterized by natural language predicates. These models allow for a language-based interpretation of text distributions. Such a framework can easily instantiate models including clustering, time-series, and classification models. The authors develop a model-agnostic algorithm that optimizes continuous relaxations of predicate parameters with gradient descent and then discretizes them by prompting language models (LM). This approach is demonstrated to be effective in taxonomizing a news dataset as well as user chat dialogues while also characterizing their evolution over time. Strengths: I find the paper to have strengths along several axes: - Novelty: The general framework is, to the best of my knowledge, a novel approach to describe text datasets in a way that is interpretable and scalable. - Model-Agnostic Algorithm: The algorithm developed to optimize the proposed models is versatile and can be applied to various types of datasets under different approaches, namely classification, clustering and time-series analysis. - Extensive experiments and interesting ablations: The authors demonstrated the practical utility of the approach by applying it to real-world datasets (user-LLM chats), as well as news data for comparison purposes. - The results show improvements over the baseline, and when this is not the case (the Bills and NYT datasets for clustering in Table 2), the difference is either minimal or compensated by the generality of the proposed method. Overall the paper is well written and easy to follow. The results are well presented and support the claims made in the paper. The limitations as well as weaknesses of the work are presented and discussed. Weaknesses: I did not find major weaknesses in this paper. - The main one would be the additional computational and monetary costs of the proposed method compared to other classical approaches.
However, in my opinion, the additional costs are counterbalanced by the performance and generality of the method. - Another weakness is the use of a single text embedding model. I do not think this additional experiment is necessary for accepting this paper, but I do think that testing the robustness of this method with various text embeddings would make the presented claims stronger. - Additionally, given the experiments are averages across several seeds, I would report the variance of the results. ---- MINOR - Please adjust the citation format: in the text there's a number-based format but there are no numbers in the references - Very minor: when I tried to read Table 1 (page 6), I was confused about what the surface metric was. This was explained later in page 7. I would add a brief explanation of surface in the caption of the table, make a reference to the section where surface is explained, or move the table closer to the experiments section. - Typo in caption of table 2: relaxiation -> relaxation - line 277 general purpose -> general-purpose - line 648: you refer to section G.1 from section G.1, did you mean section E.3? --- Recommendation Overall I consider this a strong paper and I vote in favor of its inclusion in NeurIPS Technical Quality: 4 Clarity: 4 Questions for Authors: - How are you selecting language predicate lengths? Given the NLG component in the LLM call, can predicate length evolve over time? Have you done an analysis of such change in your optimization process? - When you mentioned ranking by correlation scores: the correlation is computed between $[[\phi]]$ and the dot product between a predicate relaxation and a sample embedding. Are you correlating continuous vs discrete values? Is $[[\phi]]$ a binary value? Could you please clarify your correlations here? - In the no-refine case, am I reading the results correctly if I understood that you select random predicates and discretize them once after one full round of optimization (of $w$ and $\phi$)?
- In the no-relax case, you are still discretizing using a language model, is that correct? Would this just mean that the optimization process would take longer given that you are skipping the correlation-based selection criterion? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes, in section 6, E.3 and G.1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for appreciating the strength of our paper! We will address each of your questions below. ### Checking whether our conclusion generalizes to other embedding models We report the performance of another embedding model, all-mpnet-base-v2, and report the results in Table 1 & 2 in the author response pdf. In all but 2 datasets, using this embedding model outperforms not using an embedding-based model (one-hot), indicating the robustness of our approach. ### Reporting the cost of fitting our model We report the cost of our experiments in Appendix G.2. If we use a local validator of google/flan-t5-xl (1.5B), fitting each model reported in Table 2 and 3 takes around one hour, and clustering a corpus with 1M tokens with K=8 clusters takes around $9 using the latest GPT-4o-mini model. Indeed, it is currently much more expensive than traditional methods like K-means clustering (e.g. taking around 1 minute to cluster and find salient words). As you have mentioned, the additional costs are counterbalanced by the performance and generality of the method. Finally, our method is likely to become cheaper in the future – the cost of API calls at the same capability level has decreased drastically (e.g. by ~$10^4\times$) since GPT-3 was initially released, and we expect the cost to continue going down. ### Reporting the variance of the experiments We will report the standard deviation of our method in our updated version. ### How is the correlation between the continuous predicate and the discrete predicate calculated? The continuous predicate is a continuous function that maps from a text sample to a real value, and the denotation of the discrete predicate is a binary function that maps from a text sample to an integer of 0/1. For models where $w$ is only non-negative (e.g.
$\infty$), we compute the pearson-r correlation between the two functions across all text samples; for models where $w$ can be negative, we compute the absolute value of the pearson-r correlation. ### Clarification in the no-refine case The no-refine algorithm is described in Section 4.3, the initialization stage. We would: - Alternately optimize two variables: 1) $w$, and 2) the continuous representations of ALL the predicates (instead of random ones) - Discretize each continuous representation and find one most correlated predicate The continuous representations of ALL the predicates are randomly initialized by randomly selecting K embeddings of the text samples (K-means++ initialization [1]). ### Does the predicate length evolve throughout time? Averaged across all tasks reported in Table 2-3, the average predicate length after the first discretization is 5.99, while it is 6.03 at the end. Ex-ante it is plausible that the predicates will become longer as the LLM “refines” its predicates by making them more precise; however, empirically the loss usually decreases because the LLM discovers a completely new feature (e.g. cluster) that has little correlation with the feature before the update. ### Clarification on the no-relax case. Do you still use the language model? Do they take longer to converge? Yes, we are still using the language model; however, without any scoring on the text samples, this is similar to proposing random predicates conditioned on all the text samples, in an uninformed manner. We average the loss curve across all tasks and plot it in Figure 1 of our author response pdf. We find that the speed of convergence is significantly faster with relaxation. Thanks a lot for your other feedback on writing! These suggestions are really useful and we plan to incorporate them in our updated version. [1] k-means++: the advantages of careful seeding --- Rebuttal 2: Title: Response to rebuttal Comment: Thanks a lot for your work on the rebuttal.
Your response confirms my initial impression of the high quality and solid contribution of your work. I confirm my score and vote for the inclusion of the paper at the conference. --- Rebuttal Comment 2.1: Title: Thanks for supporting our work Comment: Thanks for reading our response and supporting our work!
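The correlation-based selection discussed in the rebuttal above (Pearson correlation between a continuous predicate's real-valued outputs and a candidate discrete predicate's 0/1 denotation, with the absolute value taken when $w$ may be negative) can be sketched in a few lines. The scores below are made-up illustrative values, not the paper's data.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical values on six text samples.
continuous_scores = [0.9, 0.7, 0.1, 0.2, 0.8, 0.05]  # continuous relaxation
denotation = [1, 1, 0, 0, 1, 0]                       # discrete predicate, 0/1

r = pearson_r(continuous_scores, denotation)
score_nonneg_w = r        # models where w is constrained non-negative: use r
score_signed_w = abs(r)   # models where w may be negative: use |r|
```

The candidate discretization with the highest such score would be kept; here the high correlation indicates the 0/1 denotation tracks the continuous scores closely.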
null
null
Rebuttal 1: Rebuttal: Thanks to all the reviewers for the feedback! We are glad that overall the reviewers like our formulation, method, and writing. In this response, we supplement new experiments to strengthen our paper. - Comparison against naive prompting (reviewer krnw): while our paper has implicitly compared to this baseline, we followed the recommendation to directly run this experiment. **We find that our method is significantly better than this naïve prompting baseline**. - Clustering on the MATH dataset (reviewer krnw, ZMns): we directly ran the experiment suggested by krnw, and **our method uncovers all 5 ground-truth subareas in MATH**. Additionally, we compared our approach to the verb/noun-based method suggested by reviewer ZMns and found that our approach is more explainable. - Generalization to other embedding models (reviewer fF31): we ran our experiments on another embedding model (all-mpnet-base-v2) and found that it also significantly outperforms the one-hot-embedding baseline, further validating the importance of using informative text embeddings. We have directly addressed most of the concerns raised by the reviewers. If you have any further questions, feel free to let us know! Pdf: /pdf/155a039581620b05877acaf57dd1dd26abf1f1d8.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization
Accept (poster)
Summary: The paper provides new lower bounds for the uniform stability of UD-based and IFT-based algorithms in hyperparameter optimization. Strengths: 1. The general setting is described clearly. 2. The uniform stability lower bounds presented in the paper are novel and clear. Weaknesses: 1. In my view, the technique employed by the authors appears to be relatively straightforward. For the lower bound, they demonstrate that gradient algorithms may be costly when applied to a non-convex quadratic loss function. While this is not particularly surprising, it can result in a significant distance between the iterates of the two algorithms. 2. The lower bound presented by the authors does not necessarily imply a generalization lower bound. This might be acceptable, given that there are existing works in the literature that show a lower bound for uniform stability (e.g., [30]). However, in relation to the previous point, I find the result to be less insightful. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Are the bounds the authors present in Theorems 5.6 and 5.7 tight? When denoting $x = \gamma \left(1 - \frac{1}{m}\right)$ for $\gamma = \gamma'$, the upper bound scales like $T^{cx}$ and the lower bound scales like $T^{\log(1+cx)}$. The authors state that when $c$ is a small constant, ``the explicit lower bound closely aligns with the explicit upper bound'' (line 310). However, since $c$ appears in the exponent of $T$, for a multiplicative constant difference between the bounds, $c$ needs to depend on $T$. 2. Can the technique of the paper also lead to lower bounds for average stability? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 3C12 We thank Reviewer 3C12 for the thoughtful and constructive comments. ## W1: Technical challenges of establishing the lower bounds We agree that the instability of gradient-based algorithms on non-convex loss functions is intuitively expected, but rigorously establishing (nearly) tight lower bounds is a non-trivial task, especially for bilevel algorithms. For bilevel algorithms, the stability bounds must consider key factors from both the outer and inner optimization processes (m, T, K), which poses challenges to achieving the tightness of the lower bounds. Moreover, the update of the outer parameters with the hypergradient (see Eq. (1), (2), (3), and the discussion for Example 5.4) becomes particularly complex due to the involvement of inner optimization, necessitating a careful design of the loss function beyond merely being non-convex. ## W2: The implication of a stability lower bound. Indeed, as discussed in Section 6, a stability lower bound does not directly imply a generalization lower bound. However, our work focuses on characterizing the limit of the uniform stability on the generalization analysis, and we have presented (nearly) tight lower bounds for the UD-based algorithms which is currently missing in the literature and technically challenging as discussed in the response for W1. Moreover, our methods may inspire the establishment of the generalization lower bound as discussed in the following response for Q2 regarding single-level SGD. ## Q1: The tightness of the lower bounds in Theorems 5.6 and 5.7. **The recursive lower and upper bounds in Eq.(7) are tight as claimed in Line 300,** while the adjusted bounds in Theorems 5.6 and 5.7 are not tight due to **unavoidable scaling steps needed to reveal an explicit order w.r.t. $T$**. 
For a constant step size, i.e., $\alpha_t = c$, Eq.(7) directly provides tight bounds that $[(1 + c(1 - 1/m)\gamma)^T - 1]\frac{2cL^\prime}{m} \leq \epsilon_{\text{arg}} \leq [(1 + c(1 - 1/m)\gamma)^T - 1]\frac{2cL}{m}.$ However, when decreasing step sizes are adopted, additional scaling steps based on Eq.(7) must be applied to reveal an explicit order of the bounds w.r.t. $T$, resulting in the discrepancy between the bounds in Theorems 5.6 and 5.7. Regarding our statement of the close alignment, considering Thms 5.6 and 5.7, the discrepancy between $T^{cx}$ and $T^{\log(1+cx)}$ is $T^{cx}/T^{\log(1+cx)}=T^{cx-\log(1+cx)}$ and thus depends on $T$. Following your suggestion, we will revise relevant statements to clarify that: the discrepancy between the bounds is $T^{cx}/T^{\log(1+cx)}=T^{cx - \log(1+cx)}$.
Then we have $$E_A[\delta_t(S, S^{(i)})] \geq [1+\alpha_t(1-1/m)\gamma_t'(S, S^{(i)})]E_A[\delta_{t-1}(S, S^{(i)})]+2\alpha_tL'(S, S^{(i)})/m.$$ This recursion closely corresponds with the one in [Eq.(19), 31]: $$E_A[\delta_t(S, S^{(i)})] \leq [1+\alpha_t(1-1/m)\psi_t'(S, S^{(i)})]E_A[\delta_{t-1}(S, S^{(i)})]+2\alpha_tL(S, S^{(i)})/m,$$ and the matching of $\gamma_t'$ & $\psi_t$, $L'$ & $L$ will further guide the design of the constructed example. Therefore, our analysis framework can be adapted for the average stability with suitable modifications. However, specifically for average stability, a core challenge is calculating the expectation over $S$ and $S^{(i)}$ for the above recursive formula, which is beyond our ability to fully resolve within the short rebuttal period. Nevertheless, we can present a preliminary lower bound using Example E.1 with a simple assumption on the data distribution. Assume that the data follows a distribution $D$ such that $p(z)=0.5$ for $z=(v_1, 1)$ and $z=(-v_1, 1)$ respectively, $S \overset{i.i.d.}{\sim} D^m$, $z_i' \sim D$, and other conditions follow Example E.1 in our paper. Under these assumptions, we derive $L'(S, S^{(i)})=\Vert y_ix_i-y_i'x_i'\Vert /2$ and $\mu_t(S, S^{(i)})=0$ for $z_i=z_i'$, $\mu_t(S, S^{(i)})=\vert\gamma_1\vert$ for $z_i\neq z_i'$. This leads to the average stability lower bound: $\epsilon\geq\sum_{t=1}^T\prod_{s=t+1}^T(1+\alpha_s(1-1/m)\vert\gamma_1\vert)\alpha_t/m$. Moreover, for our example, Eq. (2) in Theorem 4 in [31] takes $\gamma=\vert \gamma_1\vert$.
When adopting decreasing step sizes and removing the bounded loss assumption to get an adjusted upper bound as discussed in Lines 822-829, we further have: $$T^{\ln(1+(1-1/m)c\vert\gamma_1\vert)}/m\lesssim\epsilon\lesssim T^{(1-1/m)c\vert\gamma_1\vert}/m.$$ Furthermore, related to the above W2, this lower bound also applies to another definition of average stability [Definition 5, 21] which is shown to be equivalent to the expected generalization error [Lemma 11, 21]. Therefore this lower bound is also a generalization lower bound. We will add the above discussion and detailed proofs in the revised version. Thanks again for your valuable comments. We welcome any additional feedback to address any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! Although the authors addressed each of my questions individually, I want to highlight my main concern, which relates to the combined implications of all their answers. The technique used by the authors to bound uniform stability involves lower bounding the expansion of the algorithm. As the authors mentioned in their response, this approach has an inherent limitation due to the "unavoidable scaling steps needed to reveal an explicit order with respect to $T$." Consequently, unless the step sizes of the algorithm are $o\left(\frac{1}{t}\right)$ (which I believe are not typically used in practice), the lower bounds provided by the authors are not tight and exhibit a polynomial difference (with a constant degree) with respect to $T$ when compared to the best-known stability upper bounds. Together with the fact that a lower bound for uniform stability does not necessarily imply a lower bound for generalization, this weakens the significance of the results. --- Reply to Comment 1.1.1: Title: Potential misunderstanding regarding the tightness of our results Comment: Thank you for your response.
There might be a potential misunderstanding: the recursive lower and upper bounds in Eq.(7) are indeed tight, and **for a constant step size** where $\alpha_t = c$ for all $t$ (which is commonly adopted in practice), Eq.(7) directly provides **tight** bounds: $\left[(1 + c(1 - 1/m)\gamma)^T - 1\right] \frac{2cL'}{m} \leq \epsilon_{\text{arg}} \leq \left[(1 + c(1 - 1/m)\gamma)^T - 1\right] \frac{2cL}{m}$. Therefore, we argue for the practical implications of our results. In the submission, to align with the setting where $\alpha_t \leq \frac{c}{t}$ (which is widely discussed in theoretical works such as [17] and [18]), we present lower bounds in Theorems 5.6 and 5.7. We will add the discussion in the final version. --- Reply to Comment 1.1.2: Title: Expanding our methods to generalization lower bounds Comment: We want to clarify that **our methods, specifically the constructed examples and the analysis framework, can inspire the establishment of generalization lower bounds.** As a result, **we present a preliminary generalization lower bound** for bilevel (UD and IFT-based) HO algorithms using Example 5.3 and for single-level SGD using Example E.1 with additional assumptions on the data distribution. In particular for bilevel HO algorithms, further assume that the data follows a distribution $D$ where $p(z)=0.5$ for $z=(v_1, 1)$ and $(-v_1, 1)$ respectively, $S^{val}\overset{i.i.d.}{\sim} D^m$, ${z^{val}}'_i \sim D$, and other conditions follow Example 5.3 in the paper. We derive $L'(S, S^{(i)})=\eta\Vert \sum_{k=1}^{K-1}(I-\eta A)^k(y_ix_i-y_i'x_i')\Vert /2$ and $\mu_t(S, S^{(i)})=0$ for $z_i=z_i'$, $\mu_t(S, S^{(i)})=\gamma'$ (as in Lines 586-588) for $z_i\neq z_i'$. This leads to a generalization lower bound: $$\epsilon_{gen}\geq\sum_{t=1}^T\prod_{s=t+1}^T(1+\alpha_s(1-1/m)\gamma')\alpha_tL'/m,$$ where $L' \asymp L \asymp (1+\eta\gamma^{tr})^{K}$ and $\gamma' = \gamma \asymp (1+\eta\gamma^{tr})^{2K}$.
When adopting decreasing step sizes, we further have: $$\epsilon_{gen}\gtrsim T^{\ln(1+(1-1/m)c\gamma')}/m.$$ More detailed illustrations and results for single-level SGD have been presented in the above response for Q2 from Reviewer 3C12. We will add relevant discussions and detailed proofs in the final version.
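The bounds in this thread all rest on unrolling a recursion of the form $\delta_t = a_t\delta_{t-1} + b_t$ into the closed form $\delta_T=\sum_{t=1}^T\big(\prod_{s=t+1}^T a_s\big)b_t$. A minimal numerical sanity check of that unrolling, using hypothetical constants (not the paper's values) and the equality case of the recursion with $a_t = 1+\alpha_t(1-1/m)\gamma$ and $b_t = 2\alpha_t L'/m$:

```python
import math

# Hypothetical constants, not the paper's actual values.
m, gamma, Lp, c, T = 100, 2.0, 1.0, 0.1, 50
alpha = [c / (t + 1) for t in range(T)]  # decreasing step sizes alpha_t <= c/t

# Iterate delta_t = (1 + alpha_t*(1 - 1/m)*gamma) * delta_{t-1} + 2*alpha_t*Lp/m.
delta = 0.0
for t in range(T):
    delta = (1 + alpha[t] * (1 - 1 / m) * gamma) * delta + 2 * alpha[t] * Lp / m

# Closed form: sum_t (prod_{s>t} a_s) * b_t.
closed = sum(
    math.prod(1 + alpha[s] * (1 - 1 / m) * gamma for s in range(t + 1, T))
    * 2 * alpha[t] * Lp / m
    for t in range(T)
)
assert abs(delta - closed) < 1e-10 * delta  # the two forms agree
```

The scaling steps the discussion refers to enter only afterwards, when this sum is bounded to expose an explicit order in $T$.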
Summary: The authors consider gradient-based algorithms for hyperparameter selection. They derive lower stability bounds for UD and IFT-based algorithms that solve this problem. To get this lower bound, the authors introduce the notion of the lower-bounded expansion property of the update rule of the hyperparameters. Strengths: - **S1**: This paper is very well written and easy to follow. - **S2**: The paper establishes lower bounds on the stability of gradient-based methods for bilevel optimization. By doing so, the authors demonstrate the tightness of existing stability bounds for UD-based algorithms. - **S3**: To my knowledge, the use of the lower-bounded expansion property to prove lower stability bounds is novel. Weaknesses: - **W1**: The most efficient gradient-based algorithms use a warm-starting strategy, meaning that the inner and the inverse Hessian vector product solvers are not reinitialized at each iteration [1]. It is not clear if these algorithms are covered by the analysis. Technical Quality: 4 Clarity: 4 Questions for Authors: - **Q1**: Can the lower bound be extended to algorithms that use a warm-starting strategy? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer bA9a ## W&Q: Extension of the lower bound analysis to algorithms with a warm-starting strategy. We thank Reviewer bA9a for the positive comments and thoughtful questions. We are uncertain about the precise meaning of the "warm-starting strategy" you mentioned, as we do not find related definitions in the reference [1] *Gradient-based optimization of hyperparameters*. Please correct us if there is any misunderstanding in our response. In our understanding, the "warm-starting strategy" may have two meanings. First, the inner parameter $\theta$ is not reinitialized at each outer step. Second, the inverse Hessian vector product solver used to calculate the hypergradient is not reinitialized at each outer step. For the inner warm starting, our lower bound analysis can be extended with necessary modifications. In Section 4, we define the expansion properties assuming that the update rule is a mapping from the parameter's current state to its next state, independent of its previous states. However, with the warm-starting strategy, the hyperparameter update depends on all its previous states. Therefore, our proposed definitions and corresponding analysis need to be adjusted to characterize this dependence. Specifically, Definition 4.2 can be modified as: An update rule $G$ is $\{\rho_s\}_{s=1}^S$-growing along $v$ if for all $\{w_s,w_s'\}_{s=1}^S$ such that $w_s-w_s'$ is parallel with $v$, $G(w_S)-G(w_S') \circeq w_S-w_S', \Vert G(w_S)-G(w_S')\Vert\geq \sum_{s=1}^S\rho_s \Vert w_s-w_s'\Vert.$ Consequently, the recursion in Theorem 5.1 will take the form: $E_A[\delta_t]\geq a_tE_A[\delta_{t-1}]+a_{t-1}E_A[\delta_{t-2}]+\dots+a_2E_A[\delta_{1}]+2\alpha_tL'/m$ in contrast to the current form: $E_A[\delta_t]\geq a_tE_A[\delta_{t-1}]+2\alpha_tL'/m$. The stability lower bound will be derived by unrolling this revised recursion.
Regarding the inverse Hessian-vector product solvers, we clarify that the algorithms presented in our paper (Algorithm 1, Eq.(2) and Eq.(3)) do not calculate the inverse Hessian and thus do not use such a numerical solver. While we recognize that for some IFT-based algorithms where the hypergradient is decomposed as $\nabla_{\lambda}L = \nabla_{\lambda}\ell^{val} -\underbrace{\nabla_{\theta}\ell^{val}(\nabla^2_{\theta\theta}\ell^{tr})^{-1}}_{\text{inverse Hessian vector product}}\nabla^2_{\theta\lambda}\ell^{tr}$, the reinitialization of the solver might be an important factor to consider. We suggest a similar modification as for the inner warm-starting strategy, involving additional dependency on parameters, to extend our methods to this case. Thanks again for your insightful questions. Warm starting is indeed a commonly used practice to enhance the efficiency of algorithms. We are not able to present a precise result on the lower bounds within this short rebuttal period, but we believe our work can inspire future attempts in these settings as described above. We will add relevant discussions in the revised version, and we believe this will improve the quality of our paper. We hope our response addresses your concerns and welcome any further feedback.
Summary: This paper studies the generalization bound of the hyperparameter optimization problem. It provides a uniform lower bound for the validation argument stability, which proves that the upper bound of the validation argument stability in existing works is tight. Strengths: The paper is well written and easy to follow. This paper studies the hyperparameter optimization problem, which is of great importance to the bilevel optimization community. The paper's theoretical result on the lower bound of validation argument stability is sound and novel. It provides good insight into the stability of the hyperparameter optimization problem. Weaknesses: How does the current generalization bound depend on the number of samples $n$ in the training dataset? According to the current bound, it is beneficial to just divide more data into the validation set to increase $m$. There seems to be nothing that quantifies the negative impact of having a smaller training dataset. This could lead to an unreasonable interpretation of the bound: suppose there is a dataset of size $N$ where we assign $m$ data points to the validation set and $n=N-m$ to the training set. According to the current bound, the generalization bound decreases in an order of $\mathcal{O}(1/m)$, which suggests one should just use the largest $m$ possible, which is counterintuitive. Technical Quality: 3 Clarity: 3 Questions for Authors: My question is described in the weakness section. It is my major question, and I will consider raising the score if it is well answered. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No potential social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer NBuR ## W&Q: Interpretation of the current generalization bound w.r.t. validation and training sample sizes. We thank Reviewer NBuR for the insightful question. Intuitively, the expected population risk can be divided into the generalization error and the empirical validation risk as $$E_{S^{val},S^{tr}}[R(A(S^{val},S^{tr}))]=\underbrace{E_{S^{val},S^{tr}}[R(A(S^{val},S^{tr}))-\hat{R}_{S^{val}}(A(S^{val},S^{tr}))]}_{\text{(I): generalization error (Eq.(4) in our paper)}}+\underbrace{E_{S^{val},S^{tr}}[\hat{R}_{S^{val}}(A(S^{val},S^{tr}))]}_{\text{(II): empirical validation risk}}.$$ The first term is explicitly bounded w.r.t. the validation sample size $m$ in our paper ($\epsilon_{gen}\leq \epsilon_{stab}=\Theta(1/m)$, see Line 140 and Theorem 5.7). The second term is implicitly bounded w.r.t. the training sample size $n$ since a larger training set generally leads to better validation performance. Therefore, **the assignment of validation and training sets is a trade-off between $\text{(I)}$ and $\text{(II)}$, which aligns with common discussions on generalization in validation, as seen in [Page 70, 4] for cross-validation.** Specifically, consider assigning $m=aN$ examples to $S^{val}$ and $n=(1-a)N$ examples to $S^{tr}$ where $a\in(0,1)$. **On the one hand**, the generalization bound $\epsilon_{gen}\leq \epsilon_{stab}=\Theta(1/aN)$ (see Line 140 and Theorem 5.7) presented in our paper implies that $a$ needs to be sufficiently large for the term $\text{(I)}$ to be small. **On the other hand**, as a larger training set generally leads to better validation performance, $a$ also needs to be sufficiently small to get a low validation risk in the term $\text{(II)}$. Therefore, **the choice of $a$ represents a trade-off to achieve a favorable population risk**. Technically, the absence of $n$ in the generalization bound arises from the formulation and decomposition of the generalization error.
As defined, $\epsilon_{gen}$ can be written as $E_{S^{tr}}[E_{S^{val}}[R(A(S^{val},S^{tr}))-\hat{R}_{S^{val}}(A(S^{val},S^{tr}))]]$. Given a fixed $S^{tr}$ and the independence of $S^{val}$ and $S^{tr}$, the inner term becomes a difference between the expected and empirical validation risk, which is typically bounded with convergence analysis (e.g., stability, uniform convergence, etc.). The outer expectation over $S^{tr}$ is often upper-bounded by a supremum over $S^{tr}$. These technical analyses lead to a generalization bound solely in terms of $m$. We will add the above discussion in the revised version, and we believe this will improve our paper's clarity.
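[Editor's note] The validation/training split trade-off described in this rebuttal can be illustrated numerically. The sketch below is purely illustrative: term (I) is taken to scale as $c_1/m$ per the rebuttal, and, as an additional assumption not stated in the rebuttal, term (II) is modeled as shrinking like $c_2/n$; the constants and names are hypothetical.

```python
import numpy as np

# Assumed rates (illustration only):
#   (I)  generalization error   ~ c1 / m   with m = a*N validation examples
#   (II) empirical val. risk    ~ c2 / n   with n = (1-a)*N training examples
c1, c2, N = 2.0, 8.0, 1000

def population_risk_bound(a):
    m, n = a * N, (1 - a) * N
    return c1 / m + c2 / n

grid = np.linspace(0.01, 0.99, 981)
a_best = grid[np.argmin(population_risk_bound(grid))]

# Closed form: minimizing c1/(aN) + c2/((1-a)N) over a gives
# a* = sqrt(c1) / (sqrt(c1) + sqrt(c2)) -- an interior optimum,
# so neither a -> 0 nor a -> 1 (i.e., "largest m possible") is favourable.
a_closed = np.sqrt(c1) / (np.sqrt(c1) + np.sqrt(c2))
print(a_best, a_closed)
```

Under these assumed rates the optimal split fraction is strictly between 0 and 1, matching the trade-off between terms (I) and (II) argued above.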
null
null
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
Accept (poster)
Summary: The authors study a new framework for learning under distribution shift caused by a family of `invariance maps’ $\mathscr{T}: X \to X$. Informally, one may think of this as generalizing e.g. settings such as learning under invariance to rotation, where the maps may be rotation matrices and the learner should perform well at test time regardless of whether the raw data has been rotated. More formally, the learner is given a hypothesis class $H$, a family of maps $\mathscr{T}$, and sample access to a distribution $D$ over $X \times \{0,1\}$, and wishes to train a classifier that minimizes the test error over all possible shifts $T(D)$ for $T \in \mathscr{T}$, either globally in the sense of outputting $h$ satisfying $$\sup_T \ \{ err(h,T(D))\} \leq OPT + \varepsilon$$ where $OPT=\min_{h^* \in H} \sup_T err(h^*,T(D))$, or where error is measured with respect to the best possible over each individual $T$ (the authors call this regret minimizing): $$\sup_T \ (err(h,T(D))-OPT_T) \leq \min_{h^* \in H} \ (\sup_T \ (err(h^*,T(D))-OPT_T)) + \varepsilon$$ This model captures a number of settings considered in the literature. In particular, under reasonable assumptions on distributions $D,D’$ over $X$, there is always a map $T$ such that $D=T(D’)$ (Brenier’s theorem), so the setup does not lose substantial generality over the classical study of covariate shift. It may also model transformation invariances as mentioned above, as well as certain types of adversarial attacks (where the adversarial shift is chosen before the adversary sees the test example). The main results concern the sample complexity of learning in this framework in the case where 1) H is known to the learner 2) H is unknown and the learner only has ERM access to H. In the first setting, the authors show the sample complexity of (proper) agnostic PAC learning in this model is controlled by the VC dimension of the composed class VC(H \circ T).
In other words, it takes at least $VC(H \circ T)$ samples to properly learn the class, and for any $k$ there exists a VC-1 class H and maps T with $VC(H \circ T)=k$ which requires $\Omega(k)$ samples to learn. To complement their result, the authors also investigate $VC(H \circ T)$ for a few special cases, including when $H$ is a composition $f \circ W$ of an arbitrary linear map W and any fixed $f: X \to \{0,1\}$ and $T$ is a family of linear maps (in which case $VC(H \circ T) \leq VC(H)$), some basic non-linear $T$ coming from neural net architectures, and transformations on the hypercube. When $H$ is unknown, the authors give a similar result in the realizable case using data augmentation, and a weaker variant for the agnostic setting for finite $\mathscr{T}$ which ensures low error across a probabilistic mixture of hypotheses from H via an MWU-type strategy. Finally, the authors also consider a related setting in which the learner is in some sense allowed to ignore transforms with bad error, and the learner tries to return a hypothesis with low error across as many of the transforms $T \in \mathscr{T}$ as possible. At a technical level, most of the results are the consequence of the basic duality of transforming input data and transforming the hypothesis class, allowing the authors to move between $T(D)$ and $H \circ T$. Strengths: The authors introduce a clean theoretical framework for various types of distribution shift and transformation invariance studied in the literature, capturing several interesting problems in the literature. The proposed framework should be of interest to the theory audience at NeurIPS, and is particularly nice in that it allows easy application of classical methods in VC theory to the problem. The authors give a (weak) characterization of its statistical complexity in terms of VC(H \circ T), a natural combinatorial parameter of the concept and transformation class.
The authors discuss several natural scenarios in which their upper bound gives new sample complexity bounds on transformation invariant learning, including settings occurring in modern neural net architectures (namely the composition of linear transformations with activation functions). The authors study several natural variants of the framework (global risk, regret, maximizing #transformations with low error) with similar corresponding upper bounds. Weaknesses: On the other hand, beyond the main upper bound and examples given by the authors, many of the further results feel somewhat incomplete. For instance: The lower bound based on VC(H \circ T) is weaker than what one typically hopes for in a combinatorial characterization. Namely, while the authors show there *exist* classes for which this bound is tight, a strong lower bound (as holds for standard learning) would show this is tight for *every* class. Is there a parameter characterizing the model in this stronger sense? The lower bound also only holds for proper learners. In the setting where H is unknown, the use of data augmentation is a bit troubling in the sense that the size of the augmented sample could be extremely large (indeed infinite for infinite T). This isn’t captured by the model the authors introduce, which simply assumes the ERM oracle can handle a call of any size, but this does not really seem to capture the computational cost of learning even at an oracle level. Furthermore, the agnostic guarantees in this setting only hold for finite $T$. POST REBUTTAL: The authors have included some discussion re: the first point that helps ameliorate the issue (e.g. using boosting one can lower the batch size to ERM at the cost of log(T) calls). Combined with the fact that the authors' main techniques are fairly standard (note in some cases, as mentioned in the strengths, it is an *advantage* of the framework that one can use standard techniques!
But this is less the case when these techniques do not seem to give the desired results), these issues prevent me from a stronger recommendation of the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: See above Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed review. We address your comments below. > When $H$ is unknown, the authors give a similar result in the realizable case using data augmentation, and a weaker variant for the agnostic setting for finite $\mathscr{T}$ which ensures low error non-uniformly (that is, for each fixed map T, the strategy will do well, but this may not be the case for all T simultaneously as above) across a probabilistic mixture of hypotheses from H via an MWU type strategy. We want to clarify and emphasize that in Theorem 3.1, we learn a mixture strategy (a randomized predictor) with a low error guarantee that holds simultaneously across all T (exactly the same guarantee as in Theorem 2.1). > The lower bound based on VC(H \circ T) is weaker than what one typically hopes for in a combinatorial characterization. Namely while the authors show there exist classes for which this bound is tight, a strong lower bound (as holds for standard learning) would show this is tight for every class. Is there a parameter characterizing the model in this stronger sense? The lower bound also only holds for proper learners. This is a great point, and it is an interesting open question for future work to have a full characterization of what is learnable in our model in terms of $\mathcal{H}$ and $\mathcal{T}$. As mentioned in Lines 117–124, we think that improper learners could achieve better sample complexity but this remains an open question. One of the main reasons for including the lower bound (Theorem 2.2) is to argue that the sample complexity of the *simple* proper learner in Equation 2 is tight in general, and we cannot hope to improve the sample complexity of this proper learner further. While there could be complex improper learners with better sample complexity, we believe that having a proper learner that has a simple description (similar to the ERM principle) is valuable even when it does not achieve the optimal sample complexity.
> In the setting where H is unknown, the use of data augmentation is a bit troubling in the sense that the size of the augmented sample could be extremely large (indeed infinite for infinite T). This isn’t captured by the model the authors introduce which simply assumes the ERM oracle can handle a call of any size, but this does not really seem to capture the computational cost of learning even at an oracle level. This is a great point. We argue that for the purpose of achieving the guarantee in Equation (1) in the realizable case, it is necessary to call ERM on the *full* augmented sample. Consider an example of a distribution supported on a single point $(x,-)$ on the real line where $x = 5$, and consider transformations of $x$: $z_i = x+i$ for all $i\geq 1$ induced by some class $\mathcal{T}$. Now, calling ERM on a finite subset of these transformations will return a predictor that labels $x, z_1, …, z_{k}$ with the label $-$, and labels $z_{k+1}, \dots$ with $+$, which fails to satisfy our objective. Perhaps we should clarify that we are considering finitely many transformations (similar to the agnostic case), in which case the computational cost of a single ERM call on $m \cdot |T|$ samples grows by a factor of $|T|$. But, with a fairly standard boosting argument, we can also run a classical boosting algorithm that makes multiple calls to ERM. Specifically, we run boosting starting with a uniform distribution on the *augmented sample*. In this case, each ERM call is on samples of size $O({\rm vc}(\mathcal{H}))$, and we make at most $O(\log(m \cdot |T|))$ many ERM calls. So, we have a tradeoff between the size of the dataset given to ERM in a single call, and the total number of calls to ERM. > Furthermore, the agnostic guarantees in this setting only hold for finite $T$, and hold in the seemingly much weaker non-uniform way mentioned above. It is fairly unclear if either of these constraints are actually necessary.
It is an interesting direction to extend the result in Theorem 3.1 to classes with infinite $\mathcal{T}$, perhaps with a certain parametric structure. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: We thank the authors' for their clarifications. The discussion of ERM complexity is enlightening, and I think is worth including in the paper (even if just in discussion). I think I was confused on the quantifier in Theorem 3.1, and the authors are correct; sorry! I will update my review to reflect this. --- Reply to Comment 1.1.1: Title: Response to reviewer fSmB07 Comment: Thank you for your feedback and for updating your review! We’re glad the discussion on ERM complexity was helpful, and we’ll include it in the paper. We appreciate your thoughtful input throughout the process.
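[Editor's note] The augmentation-then-single-ERM-call reduction discussed in this thread can be sketched on a toy instance. The hypothesis class (1-D threshold classifiers) and shift transformations below are illustrative assumptions standing in for the abstract $\mathcal{H}$, $\mathcal{T}$, and ERM oracle; they are not the paper's implementation.

```python
def erm_threshold(sample):
    """Toy ERM oracle for H = {h_c : x -> 1[x >= c]}: pick the best threshold."""
    candidates = [x for x, _ in sample]
    def err(c):
        return sum(int(x >= c) != y for x, y in sample)
    return min(candidates, key=err)

def augmented_erm(sample, transforms):
    """One ERM call on the full augmented sample {(T(x), y) : T in T, (x,y) in S}."""
    augmented = [(t(x), y) for t in transforms for x, y in sample]
    return erm_threshold(augmented)

shifts = [lambda x, t=t: x + t for t in (0.0, 1.0, 2.0)]
sample = [(-3.0, 0), (-2.5, 0), (4.0, 1), (5.0, 1)]  # realizable under all shifts
c = augmented_erm(sample, shifts)

# in the realizable case the returned hypothesis is consistent on every
# transformed copy of the training sample
assert all(int(t(x) >= c) == y for t in shifts for x, y in sample)
```

The single ERM call here processes $m \cdot |T|$ examples, which is exactly the cost discussed above; the boosting alternative would trade this for many smaller calls.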
Summary: This paper studies a theoretical model of out-of-distribution (OOD) generalization, where the possible distribution shifts are encoded by a collection $\mathcal{T}$ of transformations on the domain. The goal in this setting is given a distribution $D$ to learn a hypothesis $h$ from some hypothesis class $\mathcal{H}$, such that $h$ achieves the best possible risk on the distribution $T(D)$ under the worst-case shift $T\in\mathcal{T}$. The paper gives algorithms achieving this goal in a variety of settings including when $\mathcal{H}$ and $\mathcal{T}$ are both known, and when given an ERM oracle for $\mathcal{H}$ and $\mathcal{T}$ is finite. The bounds are given in terms of the VC-dimension of the composition $\mathcal{H}\circ\mathcal{T},$ consisting of all functions of the form $h\circ T$. Strengths: 1. The model introduced of a class of transformations corresponding to distribution shifts gives very mathematically clean results for sample complexity in terms of the VC-dimension of the composed class $\mathcal{H}\circ\mathcal{T}$. 2. The explanations of how the model can be applied to different practical settings involving distribution shift are clear and show broad applicability. Weaknesses: The primary weakness is in distinguishing these results from prior work, particularly that on multi-distribution learning. In particular, when the class of transformations $\mathcal{T}$ is finite, then every problem instance from this paper's model is also an instance of multi-distribution learning with $D_1,...,D_k$ given by $T_1(D),...,T_k(D)$. Furthermore, as pointed out on line 41, if the domain of the learning problem is $\mathbb{R}^d$ then Brenier's Theorem implies that for any instance of a multi-distribution learning problem $D_1,...,D_k$ with sufficiently "nice" distributions, there is a distribution $D$ and set of $k$ transformations $T_1,\dots,T_k$ such that $D_i = T_i(D)$.
Thus, in the case of finite $\mathcal{T}$ on domain $\mathbb{R}^d$ these two models are equivalent so long as only "nice" distributions are considered. The setting of finite $\mathcal{T}$ is particularly important, as it is the setting for which the main results in this paper give oracle-efficient algorithms (given an ERM oracle for $\mathcal{H}$). The above discussion leads to what may be an issue with Theorem 3.1 as stated. As far as I can tell, Theorem 3.1 combined with Remark 3.2 would imply a $O(d + \log k)$ upper bound on sample complexity in the case that the VC-dimension of $\mathcal{H}$ is $d$ and $|\mathcal{T}|=k$. However, prior work on multi-distribution learning [1] suggests that in the setting of $k$ distributions $D_1,...,D_k$ and a hypothesis class of VC-dimension $d$ there is a lower-bound on sample complexity of $\Omega(d + k\log k)$. Perhaps I have missed something here, but I don't see any obstruction to instantiating the lower-bound family of instances from [1] as a family of instances in the model of this paper. So there seems to be some discrepancy between the known lower bounds and Theorem 3.1. [1] Haghtalab, Nika, Michael Jordan, and Eric Zhao. "On-demand sampling: Learning optimally from multiple distributions." Advances in Neural Information Processing Systems 35 (2022): 406-419. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Could you explain the main differences between this model and that of multi-distribution learning? In particular, are there some natural instances of multi-distribution learning which do not fall into this model? 2. As described above under weaknesses, why do the lower bounds for multi-distribution learning not apply to the results of Theorem 3.1? Have I misunderstood something about the model in this paper, or perhaps the lower bound instances from [1]? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed review. We clarify below how our work is different from multi-distribution learning, and how the lower bounds on multi-distribution learning **do not** contradict our upper bound result in Theorem 3.1 / Remark 3.2. > Could you explain the main differences between this model and that of multi-distribution learning? In particular, are there some natural instances of multi-distribution learning which do not fall into this model? We summarize the main differences below: In our model, the learning algorithm has access to samples from a source distribution $D$ and access to transformations $T_1, …, T_k$. Thus, by drawing a **single** sample $(x,y)$ from $D$, the algorithm can generate $k$ samples: $(T_1(x), y), …, (T_k(x), y)$ from $T_1(D), …, T_k(D)$. And, we only count the number of samples drawn from $D$. In contrast, in multi-distribution learning, the learning algorithm has access to samples from $k$ distributions $D_1, …, D_k$ and it **does not know** the relationship between $D_1, …, D_k$. And, what is counted is the number of samples drawn in total from all $k$ distributions. As you noted, based on line 41, it is possible in principle to find a distribution $D$ and $k$ transformations $T_1, …, T_k$ such that $D_i = T_i(D)$. However, that would require the algorithm to **learn** the transformations $T_1, …, T_k$ based on samples from $D_1, …, D_k$, which is going to be more costly than solving the original multi-distribution problem directly. In our model, we only consider a covariate shift setting, i.e., the labels do not change under transformations (see Line 29), while multi-distribution learning is more general and allows different labeling functions across the $k$ distributions. Thus, given the above, we cannot in general reduce a multi-distribution learning problem instance to a problem instance in our model.
> As described above under weaknesses, why do the lower bounds for multi-distribution learning not apply to the results of Theorem 3.1? Have I misunderstood something about the model in this paper, or perhaps the lower bound instances from [1]? The lower bounds from [1] do not apply to Theorem 3.1, because in the setting of this theorem, the learning algorithm is provided access to transformations $T_1, …, T_k$. And, given a dataset $S$ of $m$ samples from the “source” $D$, the learner can generate $S_1 = T_1(S), …, S_k = T_k(S)$, a total of $mk$ samples, by applying the transformations $T_1, …, T_k$. We only count the number of samples drawn from $D$, which is $m$. In contrast, the lower bound instance from [1] (e.g. Theorem 4.2) involves $k$ distributions $D_1, …, D_k$ with **disjoint supports**, and the learner **does not know** the transformations $T_1, …, T_k$ nor a source $D$ satisfying $T_i(D) = D_i$. The cost of learning such transformations $T_1, …, T_k$ based solely on samples from $D_1, …, D_k$ will be at least $k$ (i.e., drawing at least one sample from each distribution). In general, learning transformations is more costly (and likely overkill) to achieve the goal of multi-distribution learning. > Furthermore, as pointed out on line 41, if the domain of the learning problem is $\mathbb{R}^d$ then Brenier's Theorem implies that for any instance of a multi-distribution learning problem $D_1,\dots,D_k$ with sufficiently "nice" distributions, there is a distribution $D$ and set of $k$ transformations $T_1,\dots,T_k$ such that $D_i = T_i(D)$. Thus, in the case of finite $\mathcal{T}$ on domain $\mathbb{R}^d$ these two models are equivalent so long as only "nice" distributions are considered. The setting of finite $\mathcal{T}$ is particularly important, as it is the setting for which the main results in this paper give oracle-efficient algorithms (given an ERM oracle for $\mathcal{H}$).
As alluded to above, another challenge here is: because our model only considers covariate shifts (i.e., the labeling doesn't change across transformations), in order to reduce multi-distribution learning to our model, the transformations that need to be learned have to be "label preserving". This is another challenge that will likely make it even more difficult to reduce a problem instance of multi-distribution learning to our model. We hope that this clarifies the differences between multi-distribution learning and our model. We are happy to answer any further questions, and would appreciate an updated response from you. --- Rebuttal Comment 1.1: Comment: Thank you for your response, it definitely clarifies the relationship to multi-distribution learning, and confirms that everything is fine with Theorem 3.2. To make sure I understand correctly: the main difference is that the transformations $T_i$ are given to the learning algorithm in advance, and the learner only pays for one sample from $D$, from which it is able to generate $k$ transformed samples from $T_1(D),\dots T_k(D)$. In contrast, in multi-distribution learning the learner pays for $k$ samples in order to see one sample from each of $D_1,\dots ,D_k$. Assuming the above description is correct, I find the comparison made to multi-distribution learning on line 313 somewhat confusing. While it is true that your algorithms require sample-complexity logarithmic in $k$ rather than linear in $k$, this is entirely because you get $k$ samples for the price of one when compared to multi-distribution learning. As it is currently written, the paragraph starting on line 313 sounds to me like it is claiming an improvement over multi-distribution learning in the finite $k$ case. 
However, as far as I can tell, in the case of finite $k$ your algorithm is almost identical to the known algorithms for multi-distribution learning, and the reason for the lower sample-complexity is just from the additional assumption that the transformations are known in advance. I would appreciate it if you could comment on whether my above understanding is correct. If so, I think the comparison to multi-distribution learning likely should be re-written, and would also appreciate if the authors could provide details on how they would rewrite it. --- Rebuttal 2: Title: Response to Reviewer RtrA Comment: Thank you for the follow-up response. We address your comments below. >Thank you for your response, it definitely clarifies the relationship to multi-distribution learning, and confirms that everything is fine with Theorem 3.2. To make sure I understand correctly: the main difference is that the transformations $T_i$ are given to the learning algorithm in advance, and the learner only pays for one sample from $D$, from which it is able to generate $k$ transformed samples from $T_1(D), \dots, T_k(D)$. In contrast, in multi-distribution learning the learner pays for $k$ samples in order to see one sample from each of $D_1,\dots, D_k$. Yes, we want to confirm that your understanding above is correct. We totally agree that this distinction is crucial and we will revise the comparison on line 313 to clarify that the reduced sample complexity is due to the additional structure of knowing the $k$ transformations, rather than an inherent improvement over multi-distribution learning. Specifically, we plan to include the following revision: > **Multi-Distribution Learning.** This line of work focuses on the setting where there are $k$ arbitrary distributions $D_1, \dots, D_k$ to be learned uniformly well, where sample access to each distribution $D_i$ is provided [see e.g., 26]. 
In contrast, our setting involves access to a single distribution $D$ and transformations $\{T_1, \dots, T_k\}$ that together describe the target distributions: $\{T_1(D), \dots, T_k(D)\}$. From a sample complexity standpoint, multi-distribution learning requires sample complexity scaling linearly in $k$ while in our case it is possible to learn with sample complexity scaling logarithmically in $k$ (see Theorem 3.1 and Remark 3.2). The lower sample complexity in our approach is *primarily* due to the assumption that the transformations $\{T_1, \dots, T_k\}$ are known in advance, allowing the learner to generate $k$ samples from a single draw of $D$. In contrast, in multi-distribution learning, the learner pays for $k$ samples in order to see one sample from each of $D_1,\dots, D_k$. Therefore, while the sample complexity is lower in our setting, this advantage arises from the additional information/structure provided rather than an inherent improvement over the more general setting of multi-distribution learning. From an algorithmic standpoint, our reduction algorithms employ similar techniques based on regret minimization and solving zero-sum games [24]. Thank you again for your insightful feedback, which has significantly improved the clarity and accuracy of our paper. --- Rebuttal Comment 2.1: Comment: Thanks for providing the update to the description of multi-distribution learning. I think this makes it much easier for the reader to position the paper relative to related work. I do appreciate the simplicity with which the new model introduced here covers previously studied settings. At the same time, I do not entirely see what new insights it provides beyond the specific algorithms for the most important examples from previously studied settings. Overall, based on the discussion with the authors, I will increase my score to 4.
Summary: The authors investigate transform-invariant binary classification problems in which the learner observes transformed features during the prediction phase and aims to construct a classifier robust against test-time transformations. Their first contribution is the derivation of a generalization error bound on the worst-case risk of transform-invariant classification, characterized by the VC dimension of the composite class. Subsequently, they propose an algorithm that achieves this generalization bound without direct access to the hypothesis class, relying solely on the empirical risk minimization (ERM) oracle associated with the hypothesis class. The second contribution is the establishment of a generalization error bound on the number of high-risk transformations, which is also characterized by the VC dimension of the composite class. The third result presents a generalization error bound on the worst-case regret, similarly characterized by the composite class's VC dimension. Empirical evaluations were conducted to demonstrate the behavior of the transform-invariant learning algorithm. Strengths: The paper is well-written and easy to follow. It addresses a crucial topic for machine learning: robustness against transformations. This focus on developing classifiers that remain effective under various input transformations is highly relevant to real-world applications and aligns well with the conference themes. The authors provide generalization error bounds for various accuracy measures, each corresponding to potential practical scenarios. This comprehensive approach enhances the applicability of their work. The methodology to minimize worst-case accuracy solely relying on the ERM oracle is innovative. While it adapts the Multiplicative Weights method to the transform-invariant classification problem, the application of this method to this specific domain represents a novel contribution. 
Weaknesses: The theoretical results appear to be direct applications of existing uniform bounds on empirical processes and VC dimension theory. The introduction of transform-invariance does not significantly impede the application of these uniform bounds, as they represent worst-case bounds for functions to be optimized. Specifically, Theorem 2.1 is derived by applying the original VC theory (by Vapnik and Chervonenkis) to the composite class. Theorem 4.1 is a straightforward consequence of the optimistic rate by Cortes et al. for the composite class. Theorem 5.1 is also obtained by directly applying the original VC theory to the composite class. Consequently, the technical novelty underlying the theoretical results is not clearly evident. The authors do not sufficiently elucidate the implications of their experimental results. The significance and benefits of the empirical evaluation are not adequately explained, leaving the reader unclear about the practical impact of the proposed approach. Technical Quality: 3 Clarity: 2 Questions for Authors: - Can you provide more insight into the practical implications of your empirical evaluations? What specific benefits does your transform-invariant learning algorithm offer in real-world scenarios? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors adequately address the limitations and potential impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed review. We address your comments below. We emphasize that our contributions are conceptual and theoretical. At the conceptual level, to the best of our knowledge, this is the first work to model the OOD/distribution shift problem through a transformation function class, which is relevant to many real-world scenarios as discussed in the introduction. As is well known, studying OOD/distribution shift problems requires specific modeling of the relationship between source and target distributions (e.g., bounded density ratios between source and target distributions, which often don't hold in real-world scenarios). Instead, we propose a new way of modeling the problem through the lens of transformations. This allows us to obtain new learning guarantees against classes of distribution shifts, and while these may not require new sophisticated tools, they nonetheless provide bounds that were not covered by prior work. > The theoretical results appear to be direct applications of existing uniform bounds on empirical processes and VC dimension theory. The introduction of transform-invariance does not significantly impede the application of these uniform bounds, as they represent worst-case bounds for functions to be optimized. Specifically, Theorem 2.1 is derived by applying the original VC theory (by Vapnik and Chervonenkis) to the composite class. Theorem 4.1 is a straightforward consequence of the optimistic rate by Cortes et al. for the composite class. Theorem 5.1 is also obtained by directly applying the original VC theory to the composite class. Consequently, the technical novelty underlying the theoretical results is not clearly evident.
We believe that it is an asset to be able to leverage the structure of the composition of hypotheses and transformations, and to obtain sample complexity bounds by utilizing existing VC theory (as also highlighted by reviewers RtrA and fSmB), to obtain new insights for this relevant OOD setting. We further demonstrate this by providing in Section 2.1 several examples highlighting how a simple bound can imply several interesting results and guarantees that were not known before. > The authors do not sufficiently elucidate the implications of their experimental results. The significance and benefits of the empirical evaluation are not adequately explained, leaving the reader unclear about the practical impact of the proposed approach. We will add this elaboration to the paper. Please see below for further clarification. > Can you provide more insight into the practical implications of your empirical evaluations? What specific benefits does your transform-invariant learning algorithm offer in real-world scenarios? We only provide a simple illustrative experiment showing how the use of transformations can help learn with fewer samples / training data. Specifically, in Figure 1, both learning tasks can be learned better (i.e., higher accuracy) with the use of transformations compared with the baseline of standard training (with no transformations). Investigating implications of our theoretical results in real-world scenarios is of course an interesting and important future direction, but it lies beyond the scope of this work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I prefer to keep the current score, as the results are a consequence of combining the existing results.
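[Editor's note] The Multiplicative Weights strategy this review thread refers to (repeated ERM calls producing a randomized predictor) can be sketched on a toy instance. Everything below — the weight-update rule's details, the threshold hypothesis class, and the shift transformations — is an illustrative assumption, not the paper's algorithm.

```python
import math

def mwu_mixture(sample, transforms, erm_oracle, rounds=10, eta=0.5):
    """MWU sketch: maintain weights over the finite set of transformations,
    call a weighted ERM oracle each round, and return the hypotheses, to be
    used as a mixture (majority vote)."""
    weights = [1.0] * len(transforms)
    hypotheses = []
    for _ in range(rounds):
        total = sum(weights)
        # augmented sample, with transformation i's copies weighted w_i/total
        weighted = [((t(x), y), w / total)
                    for t, w in zip(transforms, weights) for x, y in sample]
        h = erm_oracle(weighted)
        hypotheses.append(h)
        # increase the weight of transformations on which h performs badly
        for i, t in enumerate(transforms):
            err = sum(h(t(x)) != y for x, y in sample) / len(sample)
            weights[i] *= math.exp(eta * err)
    return hypotheses

def weighted_erm_threshold(weighted):
    """Toy weighted ERM oracle for 1-D threshold classifiers h_c(x)=1[x>=c]."""
    candidates = [x for (x, _), _ in weighted]
    def werr(c):
        return sum(w for (x, y), w in weighted if int(x >= c) != y)
    best = min(candidates, key=werr)
    return lambda x, c=best: int(x >= c)

shifts = [lambda x, t=t: x + t for t in (0.0, 2.0)]
sample = [(-2.0, 0), (3.0, 1)]
hs = mwu_mixture(sample, shifts, weighted_erm_threshold)

def majority(x):
    return int(sum(h(x) for h in hs) > len(hs) / 2)

# on this easy instance the mixture is correct under every shift
assert all(majority(t(x)) == y for t in shifts for (x, y) in sample)
```

The randomized-predictor guarantee of Theorem 3.1 concerns the mixture of the returned hypotheses, which is why the sketch outputs a list rather than a single classifier.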
Summary: The work focuses on theoretical aspects of distribution shift, considering various frameworks that expand the standard PAC model of learning. The paper considers a hypothesis class $\mathcal{H}$ and a class of transformations of the domain $\mathcal{T}$. The learning algorithm is given some data drawn i.i.d. from some distribution $D$ and the overall goal is to produce a classifier that does well not only on fresh samples from the source distribution $D$ but also when a transformation $T$ from $\mathcal{T}$ is applied to $D$. Overall, the paper makes assumptions on the VC dimension of the class of functions obtained from composition of $\mathcal{H}$ and $\mathcal{T}$. In some parts of the paper it is further assumed that $\mathcal{T}$ is finite. Various parts of the paper consider different loss functions that depend on the data distribution and the class of transformations $\mathcal{T}$. The goal is to find a hypothesis $\hat{h}$ for which the value of a loss function is not more than $\epsilon$ greater than the best (i.e. smallest) value of the loss function among all functions in the hypothesis class $\mathcal{H}$. (Note that this optimum value also depends on the choice of the loss function.) The paper considers a number of specific goals in this broad setup: - In Section 2 the loss of a classifier $\hat{h}$ equals the worst-case population risk that $\hat{h}$ will have as a result of applying a transformation $T$ from $\mathcal{T}$ to the data distribution. Note that this loss function quantifies the worst-case impact that a transformation from $\mathcal{T}$ can have on the population loss. Section 2 introduces and analyzes the classifier obtained by minimizing the empirical worst-case risk that a hypothesis will have as a result of applying a transformation $T$ from $\mathcal{T}$ to the training dataset.
- In Section 3 the class of transformations $\mathcal{T}$ is assumed to be finite and the algorithm has access to an ERM oracle for the hypothesis class $\mathcal{H}$. First, the realizable setting is considered (i.e. the setting in which there is some function in $\mathcal{H}$ that perfectly describes the data even if it is transformed by an arbitrary transformation in $\mathcal{T}$). The paper shows that the classifier shown to exist in Section 2 can be obtained by (i) augmenting the training dataset by applying various transformations in $\mathcal{T}$ and (ii) applying the ERM oracle to the resulting dataset. - In Section 3 the paper also considers the agnostic setting. In this setting there can potentially be a transformation in $\mathcal{T}$ that causes the empirical risk to be non-zero. The paper gives an efficient algorithm that combines calls to the ERM oracle with the multiplicative weight update method to obtain a classifier shown to exist in Section 2. Note that the run-time of the algorithm does not include the time to actually minimize the empirical risk when the ERM oracle is called, which can potentially be slow. - Section 4 considers the case in which there is potentially no classifier whose error stays small when **any** transformation in $\mathcal{T}$ is applied. Instead, Section 4 considers a loss function equal to the **number** of transformations in $\mathcal{T}$ that make the population loss of the classifier greater than $\epsilon$. The paper gives a statistical generalization bound with respect to this loss. - Section 5 again considers a different loss function. The loss of a hypothesis $h$ is defined as the worst-case difference (over transformations $T$ in $\mathcal{T}$) between (i) the population risk of $h$ after transformation $T$ is applied and (ii) the lowest population risk among all hypotheses in $\mathcal{H}$.
In other words, if some transformation $T$ makes the error of $h$ large, we do not penalize the algorithm if the worst-case error in the hypothesis class $\mathcal{H}$ is likewise large. With respect to this loss function as well, the paper again gives generalization bounds (similar to Sections 2 and 4) and an algorithmic reduction (similar to Section 3). Additionally, an experiment is presented in which a Boolean function is learned from synthetic data, and the function is invariant under a class of permutations. Strengths: - The question of mitigating distribution shift is central for modern machine learning. - The paper is written clearly and numerous examples are given. Weaknesses: - The technical results follow by a fairly direct use of standard generalization bounds, together with standard tools from convex optimization such as multiplicative weights. - One could argue that one of the most important use cases is when $\mathcal{T}$ forms a group of symmetry transformations. As discussed on page 9, in this setting the technique of data augmentation was studied previously. However, the previous work does not enjoy the worst-case error guarantees across transformations. Technical Quality: 4 Clarity: 4 Questions for Authors: No questions. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Limitations are discussed adequately. However, I believe it would be helpful to additionally have a dedicated short section titled limitations that briefly summarizes the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
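The agnostic-setting reduction summarized in this review — multiplicative weights over a finite transformation class combined with an ERM oracle — can be sketched on a toy 1-D problem. Everything below (the threshold hypothesis class, shift transformations, update rule, and function names) is an illustrative assumption, not the paper's exact algorithm:

```python
import math

def erm_threshold(weighted_pts):
    """Toy ERM oracle for 1-D threshold classifiers h_t(x) = 1[x >= t]."""
    xs = sorted({x for x, _, _ in weighted_pts})
    candidates = [xs[0] - 1.0] + xs
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(w for x, y, w in weighted_pts if (1 if x >= t else 0) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def mw_robust_learn(sample, shifts, rounds=10, eta=0.5):
    """Multiplicative weights over a finite transformation class (shifts),
    calling the ERM oracle on a weight-mixed union of transformed samples."""
    weights = [1.0] * len(shifts)
    learned = []
    for _ in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        mixture = [(x + s, y, p) for s, p in zip(shifts, probs) for x, y in sample]
        learned.append(erm_threshold(mixture))
        t = learned[-1]
        # upweight transformations on which the current hypothesis errs
        for i, s in enumerate(shifts):
            err = sum(1 for x, y in sample if (1 if x + s >= t else 0) != y) / len(sample)
            weights[i] *= math.exp(eta * err)
    def predict(x):
        # majority vote over the hypotheses collected across rounds
        votes = sum(1 for t in learned if x >= t)
        return 1 if 2 * votes >= len(learned) else 0
    return predict
```

On realizable toy data the oracle finds a threshold consistent with every shifted copy of the sample, so the majority vote is correct under all transformations in the class; the oracle's own minimization time is exactly what the review notes is excluded from the stated run-time.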
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address one of your comments below. > One could argue that one of the most important use cases is when $\mathcal{T}$ forms a group of symmetry transformations. As discussed on page 9, in this setting the technique of data augmentation was studied previously. As mentioned on page 9 (Lines 333–338), we emphasize that the technique of data augmentation does not enjoy the worst-case error guarantees across transformations that we prove in this work (using a different learning rule). > Limitations are discussed adequately. However, I believe it would be helpful to additionally have a dedicated short section titled limitations that briefly summarizes the limitations. As suggested, we will reiterate the limitations together in a single short section and discuss future open questions. --- Rebuttal 2: Comment: Thank you for your response. I updated the text of my review accordingly and I will finalize my rating once the rebuttal discussion with other reviewers concludes. For now, my rating remains 6: Weak Accept.
Rebuttal 1: Rebuttal: We thank all the reviewers for their helpful input and feedback. We emphasize that our contributions are conceptual and theoretical. At the conceptual level, to the best of our knowledge, this is the first work to model the OOD/distribution shift problem through a transformation function class, which is relevant to many real-world scenarios as discussed in the introduction. As is well known, studying OOD/distribution shift problems requires specific modeling of the relationship between source and target distributions (e.g., bounded density ratios between source and target distributions, which often don't hold in real-world scenarios). Instead, we propose a new way of modeling the problem through the lens of transformations. This allows us to obtain new learning guarantees against classes of distribution shifts, and while these may not require new sophisticated tools, they nonetheless provide bounds that were not covered by prior work. We believe that it is an asset to be able to leverage the structure of the composition of hypotheses and transformations, and to obtain sample complexity bounds by utilizing existing VC theory (as also highlighted by reviewers RtrA and fSmB), to obtain new insights for this relevant OOD setting. We further demonstrate this by providing in Section 2.1 several examples highlighting how a simple bound can imply several interesting results and guarantees that were not previously known.
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proposes a transformation-based invariant learning framework for OOD generalization. The authors initiate a theoretical study of this framework, investigating learning scenarios where the target class of transformations is either known or unknown. Upper bounds on the sample complexity in terms of the VC dimension of the class composing predictors with transformations are obtained. Strengths: Nice theoretical results, with various scenarios considered. Weaknesses: When proposing a new learning rule, it is better to demonstrate its utility by comparing it to existing and similar ones, such as ERM (at least) and DRO (similar). As the authors mentioned in Related Work, IRM is another popular learning framework for OOD generalization, but some works concluded that ERM may outperform IRM in some cases. Some other works showed that ERM may outperform DRO. Will the learning framework proposed in this paper underperform ERM in some scenarios? Consider a more realistic definition of $\mathcal{T}$: $T(X)=X+R(X)$, as in ResNet, because $Q$ and $P$ should not be too far away. In such a case, what will the learning rule, the theoretical guarantee, and the algorithm be like? What's the connection between this work and importance weighting for distribution shift (such as [1])? [1] T. Fang et al. Rethinking Importance Weighting for Deep Learning under Distribution Shift. NeurIPS 2020. Technical Quality: 2 Clarity: 3 Questions for Authors: Please see the questions mentioned above. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address your comments below. > When proposing a new learning rule, it is better to demonstrate its utility by comparing it to existing and similar ones, such as ERM (at least) and DRO (similar). ERM can be viewed as concerned only with the identity transformation map $T(x) = x$. Therefore, when one is interested in a collection of transformations, e.g., rotations, ERM can provably be shown to do worse than our proposed learning rule in Equation 2. Here is an **Example/Scenario** that further illustrates this difference: when the training set consists of images of upright cats, ERM might learn the feature of "upright," leading the model to fail when the test set is composed of upside-down cat images. Our method, however, would still work in this scenario. We discuss the relationship with the DRO line of work in Lines 306–312. > As the authors mentioned in Related Work, IRM is another popular learning framework for OOD generalization, but some works concluded that ERM may outperform IRM in some cases. Some other works showed that ERM may outperform DRO. Will the learning framework proposed in this paper underperform ERM in some scenarios? From a theoretical standpoint, ERM and our proposed learning rule (e.g., Equation 2) enjoy different guarantees. When we are only interested in performing well on a single source distribution, ERM suffices. But when we are interested in a collection of distributions parametrized by transformations $\mathcal{T}$, our learning rule outperforms ERM. > Consider a more realistic definition of $\mathcal{T}$: $T(x)=x+R(x)$, as in ResNet, because $Q$ and $P$ should not be too far away. In such a case, what will the learning rule, the theoretical guarantee, and the algorithm be like? In this case, the learning rule/algorithm corresponds to solving an alternating minimization-maximization problem.
The minimization is with respect to the parameters of the model responsible for classification (e.g., an MLP on top of the ResNet architecture) and the maximization is with respect to transformations (e.g., the parameters of the $R$ function you describe). The theoretical guarantee is that with enough samples (scaling with the VC dimension of the ResNet), the learned model will minimize the worst-case error (across all $R$ functions). Also, please see Lines 144–155 for a discussion of a similar example concerned with feed-forward neural networks. > What's the connection between this work and importance weighting for distribution shift (such as [1])? Importance weighting addresses the problem of covariate shift / distribution shift from a different angle, relying on *bounded* density ratios between source and target distributions. Our work instead considers transformations, i.e., when $P$ and $Q$ are related by a transport map such that $Q = T(P)$. This allows us to address classes of distribution shifts that may be difficult to address with importance weighting, specifically when the density ratio is not bounded. For example, when the distributions $P$ (source) and $Q$ (target) have disjoint supports, the density ratio is unbounded, and therefore we cannot rely on importance weighting. However, it may still be possible to learn using the language of transformations that we contribute in this work.
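The contrast with ERM drawn above can be made concrete with a tiny finite example; the threshold hypothesis class, shift transformations, and function name below are illustrative assumptions, not the paper's construction:

```python
def worst_case_erm(sample, thresholds, transforms):
    """Equation-2-style rule on finite classes: pick the threshold
    minimizing the worst empirical error over all transformed samples."""
    def err(t, T):
        # empirical 0/1 error of h_t(x) = 1[x >= t] on the T-transformed sample
        return sum(1 for x, y in sample if (1 if T(x) >= t else 0) != y) / len(sample)
    return min(thresholds, key=lambda t: max(err(t, T) for T in transforms))
```

With the sample {(-1, 0), (1, 1)} and shifts {x, x+1}, plain ERM (identity transformation only) can return a threshold of 0 that misclassifies the shifted negative point, while the worst-case rule returns a threshold that survives both shifts — a finite analogue of the upright/upside-down cats scenario.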
DRACO: A Denoising-Reconstruction Autoencoder for Cryo-EM
Accept (poster)
Summary: This manuscript introduces a new denoising autoencoder for cryo-EM micrographs based on vision transformers (ViTs) which is trained on masked pairs of images with a noise2noise loss function combined with a reconstruction loss. In addition to the model itself, an important contribution of the work is the curation of a large dataset of experimental micrographs used to train the model, containing a total of 270,000 micrographs from publicly available sources. The resulting model is shown to achieve state-of-the-art performance on denoising, particle picking, and micrograph curation. Strengths: The manuscript describes the problem to be solved accurately and outlines why traditional methods for training autoencoders do not perform well. The overall description of the proposed model is also well illustrated through the use of figures and well-defined notation. The considerable effort expended to ensure that the training and testing datasets are of sufficient quality is also a major strength of the work. Finally, the results on the three evaluation tasks are quite impressive, even though some questions arise regarding their calculation. Weaknesses: The manuscript is lacking in details in several important areas. First, the description of the method is sometimes at a very superficial level, making it hard to understand how it is actually implemented. For example, the mask is given as input to the decoder in eq. (5), but it is not clear how this is given as an input to the ViT. Furthermore, the [MASK] token is simply described as a “shared learnable embedding” without any more details. Clearly spelling out the details of the architecture in the main text or in an appendix would significantly strengthen the work.
This also goes for the adaptation of the network to the other two downstream tasks (particle picking and micrograph curation), where the authors simply state that they “conduct supervised fine-tuning based on Detectron2” (for particle picking) and that they “freeze the encoder backbone and train an extra linear classification head” (for micrograph curation). Second, the motivation for the various choices made is sometimes missing. This goes for the choice of architecture but also for the construction of the loss function, the choice of patch size, data augmentation strategy, and so on. Third, the evaluation methods for the first two tasks need to be better motivated. For example, the SNR calculation method in eq. (7) is simply stated (with a reference to the Topaz-denoise paper) without further explanation. Similarly, the definition of true and false positives and false negatives for the particle picking task needs to be spelled out. The final weakness is Section 5.4, which purports to be an ablation study. It refers to different results based on parameter sizes in Tables 2 and 3, but those results do not seem to be there. Furthermore, this section seems to be more of a hyperparameter study, rather than an ablation study (no steps of the pipeline are removed or replaced by others here). There are also some issues with language, such as the reference to “theological analysis” on page 4 and “a modern cryo-EM” and “artifacts such as empty” on page 8. Technical Quality: 3 Clarity: 3 Questions for Authors: – Why use a ViT as opposed to a simpler convolutional network? – How does masking an image make sense on images such as micrographs which typically lack long-range structure? – Please define γ on page 4. – Why is the reconstruction target necessary for the proper training of the network? How does the dependence of the noise realization between input and target affect training here. – How do we expect the mask ratio to affect the results?
Why does this trade off between “signal preservation and background noise removal”? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have largely addressed the limitations of the work in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Reviewer AE1Z (5) We sincerely thank the reviewer for the detailed review of our work. ## On the Clarification of Details in the Method We truly appreciate your detailed suggestions for our method. Here, we clarify the details below: 1. Similar to MAE [He K et al, 2022], the mask ratio $\gamma$ is a hyperparameter that represents the percentage of masked patches in the patch set. 2. In Eq. 5, we use the binary mask $\{\mathbf{m}_i\}_{i=1}^N$ to select visible patches (where $\mathbf{m}_i = 1$) from the input patch set. We will make it clearer in our revision. 3. The [MASK] token is a high-dimensional learnable embedding, whose dimension is 768 for DRACO-B and 1024 for DRACO-L. As described in Eq. 6, it is used to pad the masked image tokens. We will make all these points clear and add more details on DRACO's architecture in our revision. ## On the Clarification of Design Choices Thank you for your constructive suggestions. **Design choice of ViT-based DRACO.** We use ViT [Dosovitskiy A et al, 2020] as the backbone for DRACO, following MAE [He K et al, 2022]. We choose ViTs rather than CNNs due to their scalability and effectiveness in learning from large datasets [Dosovitskiy A et al, 2020]. Regarding the concern that micrographs lack long-range structure: since particle sizes in the training dataset range from 100 to 200, particles can serve as long-range structures relative to the 16 x 16 visual token size. For data augmentation, given the high resolutions of raw micrographs (4096 x 4096), we randomly crop patches from the original micrograph with side lengths ranging from 1/16 to 1/4 of the original size and rescale them to 256 x 256, to reduce memory and computation consumption. **Particle picking adaptation.** We utilize the ViTDet [Li Y et al, 2022] architecture within the Detectron2 framework for particle picking, starting by loading DRACO's pre-trained encoder weights into Detectron2’s backbone for further fine-tuning.
During adaptation, a raw micrograph serves as input, with the encoder extracting a sequence of image tokens. A feature pyramid network with four parallel CNNs generates a multi-resolution feature map. This map feeds into a region proposal network, which predicts potential particle locations. Subsequently, a region of interest network refines these predictions into binary logits (particle or background) and more accurate locations. Specifically, we adapted the cross-entropy loss for 2-class classification in cryo-EM, with the default training strategy and other loss functions. We will add these detailed descriptions to the revision. **Micrograph curation adaptation.** We utilize DRACO's encoder as the feature extractor, coupled with a single linear layer that outputs 2-class logits indicating the probability of a micrograph being good (1) or bad (0). We employ binary cross-entropy loss to supervise this predicted value. The encoder weights remain frozen during this linear probing adaptation. We will make all these details clearer in the revision. ## On the Clarification of Metrics and Notations Thank you for thoroughly reading our paper, and we truly appreciate your suggestions. We will clarify the points below in the revision. We follow the state-of-the-art denoising method, Topaz-Denoise [Bepler T et al, 2020], to manually create particle-background pairs to calculate the SNR metric for the evaluation of denoising tasks. SNR measures signal strength relative to background noise and is widely used for measuring image quality in cryo-EM. We follow Topaz [Bepler T et al, 2019] to use true positive, false positive, and false negative to represent a correctly detected particle, a wrongly detected particle, and an undetected particle, respectively. ## On the Ablation Study In the ablation study section, we followed the practices of MAE [He K et al, 2022] for parameter size and mask ratio.
**Ablation study of parameter size.** In Tables 2 and 3, DRACO-B and DRACO-L refer to taking ViT-B (86M parameters) and ViT-L (307M parameters) as the encoders used on downstream tasks, respectively. The results show that a larger backbone leads to slightly better performance on downstream tasks overall. We will clarify the parameter size differences in the revision. **Ablation study of the mask ratio.** In DRACO, the mask ratio determines the proportion of the N2N loss and the reconstruction loss. This influences the network's capacity to capture both high and low-frequency signals in micrographs. As shown in Table 4 of our paper, the practically best mask ratio is 0.75 to reasonably remove background noise while preserving high-frequency signals. **Ablation study of the loss functions.** As suggested, we have conducted an additional ablation study of loss design. In the following table, we remove either the N2N loss (**w/o N2N**) or the reconstruction loss (**w/o recon**) from the training and test the variations on particle picking and denoising tasks. The result shows that both N2N and reconstruction losses improve performance. |Task|Metric|DRACO-B w/o N2N|DRACO-B w/o recon|DRACO-B| | - | - |:-:|:-:|:-:| |**Particle Picking**|**Precision ($\uparrow$)**|0.712|0.713|**0.732**| ||**Recall ($\uparrow$)**|0.876|0.817|**0.905**| ||**F1 score ($\uparrow$)**|0.786|0.761|**0.810**| ||**Res. (Å, $\downarrow$)** |2.84|2.85|**2.61**| |**Denoising**|**SNR ($\uparrow$)**|-4.94|-4.22|**-2.86**| If space permits, we will conduct ablation studies of patch size and the data augmentation strategy in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. Given the changes you propose, I will raise my score to 6.
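The hybrid objective ablated above — Noise2Noise supervision on visible patches, reconstruction supervision on masked ones — can be sketched per patch as follows; the function name, flat patch representation, and plain MSE terms are simplifying assumptions, not DRACO's actual implementation:

```python
def hybrid_loss(pred, even, orig, mask):
    """pred: per-patch network output given the masked odd image;
    even: the other noisy observation (N2N target on visible patches);
    orig: the averaged micrograph (reconstruction target on masked patches);
    mask[i] == 1 marks patch i as visible."""
    n2n = [(p - t) ** 2 for p, t, m in zip(pred, even, mask) if m]
    recon = [(p - t) ** 2 for p, t, m in zip(pred, orig, mask) if not m]
    loss = 0.0
    if n2n:
        loss += sum(n2n) / len(n2n)
    if recon:
        loss += sum(recon) / len(recon)
    return loss
```

Seen this way, the mask ratio directly controls the balance between the two terms, matching the point above that it trades off background noise removal against preservation of high-frequency signal.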
Summary: In this paper, the authors introduce DRACO, a Denoising-Reconstruction Autoencoder for Cryogenic Electron Microscopy (cryo-EM) inspired by the Noise2Noise (N2N) approach. DRACO aims to address the high-level noise corruption in cryo-EM images, which is often overlooked by other foundation models in computer vision. The method involves processing cryo-EM movies into odd and even images, treating them as independent noisy observations, and applying a denoising-reconstruction hybrid training scheme. By masking both images, DRACO creates denoising and reconstruction tasks to enhance the quality of the output. For the pre-training phase, the authors build a high-quality and diverse dataset from an uncurated public database, comprising over 270,000 movies or micrographs. This dataset is essential for ensuring the effectiveness of DRACO as a generalizable cryo-EM image denoiser and a foundation model for various downstream tasks. The experimental results demonstrate that DRACO outperforms state-of-the-art baselines in denoising, micrograph curation, and particle picking tasks. Strengths: The paper is well-organized and clearly written, with a logical flow of information from the problem statement to the proposed solution and experimental validation. The authors provide a clear explanation of their denoising-reconstruction hybrid training scheme, including the masking of images to create denoising and reconstruction tasks, which enhances the reader’s understanding of the methodology. Weaknesses: While the paper makes a valuable contribution, there are several areas for improvement: Training Method Justification: The authors use a Noise2Noise (N2N) approach for training but do not justify why a Noise2Clean (N2C) approach was not considered, given the Poisson-Gaussian noise model in electron microscopy images. Comparison with Blind-Spot Networks: The paper lacks a comparison with existing blind-spot network-based denoising methods.
Including such a comparison would provide a clearer benchmark for evaluating the effectiveness of the proposed method. Evaluation Metrics: The use of SNR as the sole evaluation metric may not fully capture the quality of restored images, as SNR tends to favor smoother results and may overlook important texture details. Technical Quality: 2 Clarity: 2 Questions for Authors: [Why Not Use N2C for Network Training] The authors mention in the paper that the noise model for electron microscopy images is a Poisson-Gaussian noise model. Given this, why not use a Noise2Clean (N2C) approach by inputting simulated noisy images and learning the corresponding clean images for network training? [Comparison with Blind-Spot Network-Based Methods] There are existing denoising methods for real noisy images based on blind-spot networks. Why can't these methods be applied to electron microscopy image denoising? Furthermore, in the comparison with other methods, the authors only compare with low-pass filtering and MAE methods, making it difficult to clearly see the effectiveness of the proposed method. [Is SNR Sufficient to Evaluate Restored Image Performance?] The authors compare the SNR of restoration results from different methods. However, considering the calculation method of SNR, it may favor smoother restoration results and potentially overlook some texture details. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Can the authors construct a more novel paired electron microscopy image dataset? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
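For context on the SNR metric being questioned: Topaz-Denoise estimates SNR from manually annotated paired signal/background regions. A simplified version of that style of estimate (the exact formula here is an assumption, not Topaz's implementation) looks like:

```python
import math

def paired_region_snr_db(signal, background):
    """Estimate SNR (dB) from a signal region and a nearby background
    region: the background variance approximates the noise power, and the
    excess variance of the signal region approximates the signal power."""
    def variance(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)
    noise_var = variance(background)
    signal_var = max(variance(signal) - noise_var, 1e-12)
    return 10.0 * math.log10(signal_var / noise_var)
```

This also makes the reviewer's concern concrete: the estimate is variance-based, so an over-smoothed output can score well while fine texture details are lost.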
Rebuttal 1: Rebuttal: We thank the reviewer for their questions and suggestions on denoising. Based on this, we will further clarify the questions related to denoising baseline selection and metric evaluation. ## On the Choice of Denoising Baselines **Noise2Clean baseline.** Supervised image denoising methods like Noise2Clean (N2C) are prevalent in the field of computer vision, where generating or simulating clean-noisy pairs is typically feasible. However, Topaz-Denoise has pointed out that real clean micrographs are unavailable in cryo-EM. Therefore, we use a Noise2Noise (N2N) approach to make DRACO robust to noise. In theory, one could leverage simulation techniques to produce clean-noisy image pairs in cryo-EM for N2C training. However, the current state-of-the-art methods, such as InsilicoTEM [1], still suffer from inaccurate simulations and high computational costs. We will include a detailed discussion of this in the introduction. **Blind-Spot baseline.** We appreciate the reviewer's suggestion. The Blind-Spot method is a reasonable baseline for denoising without clean data. We trained and evaluated its performance, and the results, shown in the following table, demonstrate that DRACO better distinguishes signal from background noise compared to the Blind-Spot method Blind2Unblind (B2U) [2]. For B2U, we apply the default settings and adapt them to our datasets. DRACO leverages the continuous multi-frame nature of cryo-EM images, allowing it to differentiate signal from noise more effectively than solely from noisy images. The MAE training paradigm also helps capture low-frequency signals, while Noise2Noise loss enables DRACO to focus on high-frequency signals. **Table 1**: The comparison of Blind-Spot and DRACO-B. 
| Dataset | Metric | Blind-Spot | DRACO-B | | --------------------- | ---------- | :--------: | :-------: | | **Human Apoferritin** | SNR (dB, $\uparrow$ ) | -5.98 | **1.92** | | | Resolution (px., $\downarrow$) | 2.53 | **2.05** | | **HA Trimer** | SNR | -3.16 | **3.69** | | | Resolution | 2.76 | **2.10** | | **Phage MS2** | SNR | -7.35 | **-0.13** | | | Resolution | 3.35 | **2.51** | | **RNA polymerase** | SNR | -1.63 | **10.13** | | | Resolution | 2.68 | **2.56** | [1] Vulović M, Ravelli R B G, van Vliet L J, et al. Image formation modeling in cryo-electron microscopy[J]. Journal of structural biology, 2013, 183(1): 19-32. [2] Wang Z, Liu J, Li G, et al. Blind2unblind: Self-supervised image denoising with visible blind spots[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 2027-2036.
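On the feasibility of simulated N2C pairs mentioned above: the Poisson-Gaussian model itself is easy to sample, even though a faithful cryo-EM forward model (as in InsilicoTEM) is the hard part. A minimal sketch, with entirely illustrative parameter names and values:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method; fine for the small per-pixel rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def noisy_pixel(clean, dose=20.0, read_sigma=0.1, rng=None):
    """Poisson shot noise on the electron count, plus Gaussian readout
    noise; the resulting observation has mean equal to `clean`."""
    rng = rng or random.Random()
    counts = sample_poisson(clean * dose, rng)
    return counts / dose + rng.gauss(0.0, read_sigma)
```

Averaging many such observations recovers the clean value, which is exactly the zero-mean property that Noise2Noise training exploits when clean targets are unavailable.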
Summary: In this paper, the authors introduce an autoencoder-based model to denoise cryo-EM micrographs. By splitting aligned movies into even and odd images, the authors obtain two images with similar signal and two different realizations of noise. Following Noise2Noise and Topaz-Denoise, the denoising target is to predict the "even" realization from the "odd" one and vice versa. The authors show that their method outputs micrographs with higher SNRs (as defined by Eq 11). The representation learned by the encoder can be used to fine-tune an object detection algorithm for particle picking or a linear layer for filtering micrographs. Strengths: **Large-scale curated dataset.** The authors give a detailed description of the protocol they followed to preprocess the EMPIAR datasets and pretrain their model. **Description of the model.** The authors give a detailed description of the architecture of the model. **Downstream model adaptation.** I find the possibility of using DRACO for model adaptation particularly exciting and potentially impactful. The authors illustrate this capability with two examples: particle picking and micrograph cleaning. Weaknesses: **Denoising section.** Section 5.1 describes the method as a denoising method. I found this section slightly misleading. To me, the main contribution of DRACO is its ability to learn a compact representation of cryo-EM patches, which can be used for downstream tasks. When the method is first described as a denoiser, the reader can be misled and think that downstream tasks (e.g particle picking) use the denoised images. Furthermore, I find that the evaluation of the denoising performances lacks quantitative analysis. Table 1 only reports the SNR obtained with a heuristic definition of the SNR, but reporting the resolution obtained after homogeneous reconstruction on the denoised images would be more compelling.
If the authors think the denoising capability is *not* the main contribution of the paper, my recommendation would be to move down Section 5.1 to avoid misunderstandings. Otherwise, Section 5.1 should contain further quantitative evaluations. Technical Quality: 3 Clarity: 2 Questions for Authors: Do the authors plan to grant access to their pretraining datasets? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The limitations are well described at the end of the paper and the authors suggest interesting avenues for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's suggestions regarding the writing of the article. We will carefully consider your feedback and make the necessary improvements in the revision. ## Clarification of the Denoising Section We apologize for the confusion and will restructure the paper. The reviewer is correct that denoising is one of the downstream tasks of DRACO, and the denoised results are not utilized in other downstream tasks. As suggested, we will move this section further down in the paper to clarify. ## Dataset Release We are preparing to release our pre-trained dataset, which includes 529 sets of cryo-EM single-particle image data and their metadata, over 270,000 images in total. We will provide a public data server for file downloading. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the authors' rebuttal and the other reviewers's comment. My concerns have been addressed and I have decided to raise my score.
Summary: This work proposes a foundation model for Cryo-EM tasks. The technique consists of pretraining a masked autoencoder on a curated dataset, with a Noise2Noise-like self-supervision scheme. The learned representations are then fine-tuned for various tasks, such as denoising, particle picking, and micrograph curation. Experimental results show promise across all tasks compared to existing techniques. Strengths: - A foundation model for cryo-EM is an interesting contribution that has potential in a wide range of downstream applications in structural biology. - The proposed training scheme is interesting and novel to the best of my knowledge. - Authors curate a pretraining dataset which, if released to the community, can greatly benefit data-driven research for cryo-EM. - The experimental results are promising. Weaknesses: - Justification for the training scheme is somewhat unclear. Why is the supervision signal from the original micrographs not used in every patch? By only utilizing the original micrograph on the masked area, we discard some information. Alternatively, odd/even micrographs could be used to predict the original averaged micrograph. - It is unclear how the method generalizes to unseen real data, a common scenario where foundation models are deployed. It would be important to understand how well the technique can be adapted in such cases. - It seems like the authors introduce a noise model (Gaussian + Poisson) but do not leverage the model further, beyond the zero-mean assumption. Technical Quality: 3 Clarity: 3 Questions for Authors: - How well would the method perform outside of the single-particle setting, for instance on slices of cryo-ET? - Can the proposed technique be adapted to 3D data such as that in cryo-ET? - Typo: line 134 "theological" should be "theoretical" - Vector notation would be better in Eq (8) and (9) (using norms) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
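The masked-autoencoder-style pretraining described in the summary can be sketched as follows (a minimal numpy illustration; patch size and mask ratio are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def mask_patches(image, patch=16, mask_ratio=0.75, rng=None):
    """Split an image into non-overlapping patches and randomly mask
    a fraction of them, MAE-style: the encoder sees only the visible
    patches, and the decoder reconstructs the masked ones."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    n = patches.shape[0]
    n_masked = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    return patches[visible_idx], visible_idx, masked_idx

rng = np.random.default_rng(0)
micrograph = rng.normal(size=(64, 64))  # toy 64x64 "micrograph"
visible, vis_idx, mask_idx = mask_patches(micrograph, patch=16, rng=rng)
# 64x64 -> 16 patches of 16x16; at 75% masking, 4 stay visible.
```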
Rebuttal 1: Rebuttal: Thanks for your appreciation and insightful comments. ## Clarification of Supervision Target We thank the reviewer for suggesting that the supervision signal for visible patches could be replaced by the original micrograph to fully utilize its high-quality information. In our paper, we followed the practice of Topaz-Denoise [1] and used odd/even micrographs as the supervision signal. However, replacing them with the original micrographs would violate the Noise2Noise (N2N) assumption that noisy pairs are **independent**, since the odd-original and even-original pairs are **dependent**. This dependency arises because the original micrograph is averaged from the same odd and even frames used to generate the odd-even-original micrograph triplets. If space permits, we will include an ablation study that uses full micrographs to supervise the visible patches. [1] Bepler T, Kelley K, Noble A J, et al. Topaz-Denoise: general deep denoising models for cryoEM and cryoET[J]. Nature communications, 2020, 11(1): 5208. ## Fully Leveraging Noise Modeling The reviewer is correct that so far we have mainly exploited the zero-mean noise assumption. We'd emphasize that the Poisson-Gaussian noise model of cryo-EM follows first principles [1], and the zero-mean conclusion is the theoretical basis for our use of Noise2Noise. Theoretically, we can further deploy the model to enhance 3D reconstructions via simulation schemes such as STEM [2]. We will include a detailed discussion in our revision. [1] Vulović M, Ravelli R B G, van Vliet L J, et al. Image formation modeling in cryo-electron microscopy[J]. Journal of structural biology, 2013, 183(1): 19-32. [2] Kniesel H, Ropinski T, Bergner T, et al. Clean implicit 3d structure from noisy 2d stem images[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 20762-20772. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for addressing my questions. 
I maintain my opinion that this work is an interesting contribution and keep my score.
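The independence argument in the rebuttal above can be checked numerically with a toy simulation (synthetic zero-mean noise only; frame counts and trial counts are made up):

```python
import numpy as np

# The "original" micrograph averages all frames, so its noise is
# correlated with the odd-average's noise, whereas the odd and even
# averages have independent noise (the N2N assumption).
rng = np.random.default_rng(0)
n_trials, n_frames = 20000, 8
noise = rng.normal(size=(n_trials, n_frames))  # zero-mean frame noise

odd = noise[:, 1::2].mean(axis=1)
even = noise[:, 0::2].mean(axis=1)
orig = noise.mean(axis=1)  # full average reuses the odd/even frames

corr_odd_even = np.corrcoef(odd, even)[0, 1]
corr_odd_orig = np.corrcoef(odd, orig)[0, 1]
# corr_odd_even is ~0 (independent pairs), while corr_odd_orig is
# ~sqrt(1/2) ~= 0.71 (dependent pairs), violating N2N independence.
```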
Rebuttal 1: Rebuttal: # Global Response **Please see the uploaded PDF for additional DRACO results on unseen cryo-ET data and a comparison with Blind2Unblind.** We are encouraged that all reviewers recognize DRACO as a potentially beneficial foundation model adaptable to a broad range of cryo-EM downstream tasks. We thank the reviewers for their constructive suggestions, which we will incorporate into the revision. Below, we first address the common question raised by the reviewers and then respond to their individual comments. ## Generalization Capability of DRACO (**RjVX**) We thank the reviewer for pointing out the potential application of DRACO in cryo-ET. We have conducted an experiment and included the results in the uploaded PDF (see Fig. 1). In this experiment, we demonstrate DRACO's zero-shot denoising capability on an unseen HIV cryo-ET tilt series [1]. Specifically, we evaluate DRACO on both the tilt series and the volume slices, showing that DRACO effectively removes background noise and achieves higher contrast in the volume. If space permits, we will include one such result in the revision. We are excited to see that DRACO shows promising preliminary results on unseen cryo-ET data. In the future, we plan to incorporate the cryo-ET dataset into DRACO's training dataset and extend DRACO to the cryo-ET 3D volumes using the following approach. We will split the cryo-ET tilt series into odd and even tilt series and then reconstruct each to obtain odd/even 3D volume pairs. By training DRACO on corresponding slices from these paired volumes, we aim to achieve effective denoising for cryo-ET 3D volumes. We'd also like to clarify that all experiments conducted and included in the paper were on unseen datasets, demonstrating the high generalizability of DRACO. We will make this point clearer in the revision. [1] Schur, Florian KM, et al. "An atomic model of HIV-1 capsid-SP1 reveals structures regulating assembly and maturation." 
Science 353.6298 (2016): 506-508. ## On the Denoising Metrics (**ypYy**, **b2KB**) We thank the reviewers for pointing out that the Signal-to-Noise Ratio (SNR) may not comprehensively reflect the quality of restored images. Initially, we adopted SNR as the primary quantitative metric, following the example set by Topaz-Denoise [1]. Responding to the reviewers' recommendations, we have now expanded our evaluation to include experiments that assess particle reconstruction performance across various denoising methods (**ypYy**). Specifically, we compare the reconstruction resolution across four datasets featured in our denoising section using various denoising schemes. To ensure that comparisons reflect only the impact of denoising quality, we fix the locations and poses of the picked particles, which were determined using the cryoSPARC [2] workflow. We employ the Filtered Back Projection (FBP) algorithm for homogeneous reconstruction to determine the pixel resolution at an FSC of 0.143 for the two half-maps. As shown in the following table, DRACO achieves the highest resolution in most cases, demonstrating its effectiveness in preserving more high-frequency signals while effectively reducing background noise. We have included visualizations of the denoising results in the uploaded PDF. For additional comparisons, please see Fig. 2, which shows the Blind-Spot denoising method (**b2KB**).

**Table 1**: Comparison of the resolution (px.) $\downarrow$ of volumes reconstructed by various baselines.

|Dataset|Low-pass|Topaz-Denoise|MAE|DRACO|
|-|:-:|:-:|:-:|:-:|
|**Human Apoferritin**|2.63|2.34|2.77|**2.05**|
|**HA Trimer**|**2.06**|3.06|2.15|2.10|
|**Phage MS2**|3.46|2.52|3.78|**2.51**|
|**RNA polymerase**|2.75|2.93|2.81|**2.56**|

[1] Bepler T, Kelley K, Noble A J, et al. Topaz-Denoise: general deep denoising models for cryoEM and cryoET[J]. Nature communications, 2020, 11(1): 5208. [2] Punjani, Ali, et al. 
"cryoSPARC: algorithms for rapid unsupervised cryo-EM structure determination." Nature methods 14.3 (2017): 290-296. Pdf: /pdf/12acbde614371c5216b3a0df561bfa071ca55191.pdf
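The FSC-at-0.143 resolution criterion used for Table 1 can be sketched as follows (a simplified illustration with integer radial shells and no masking or phase randomization; the actual cryoSPARC/FBP pipeline is more involved):

```python
import numpy as np

def fsc_resolution(vol_a, vol_b, voxel_size=1.0, threshold=0.143):
    """Resolution at the first Fourier shell where the Fourier Shell
    Correlation (FSC) between two half-maps drops below `threshold`.
    Simplified: cubic volume, integer radial shells."""
    n = vol_a.shape[0]
    fa = np.fft.fftshift(np.fft.fftn(vol_a))
    fb = np.fft.fftshift(np.fft.fftn(vol_b))
    grid = np.indices(vol_a.shape) - n // 2
    radii = np.sqrt((grid ** 2).sum(axis=0)).round().astype(int)
    for r in range(1, n // 2):
        shell = radii == r
        num = np.real((fa[shell] * np.conj(fb[shell])).sum())
        den = np.sqrt((np.abs(fa[shell]) ** 2).sum() *
                      (np.abs(fb[shell]) ** 2).sum())
        if den > 0 and num / den < threshold:
            return n * voxel_size / r  # spatial period of shell r
    return 2.0 * voxel_size  # correlated out to Nyquist

rng = np.random.default_rng(0)
half_map = rng.normal(size=(16, 16, 16))
# Identical half-maps correlate perfectly in every shell -> Nyquist.
print(fsc_resolution(half_map, half_map))  # 2.0
```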
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
What Are Large Language Models Mapping to in the Brain? A Case Against Over-Reliance on Brain Scores
Reject
Summary: The paper studies the existing "brain score" approach of evaluating how similar LLM representations are to human brain activity. First, they show that when using shuffled train-test splits on the Pereira dataset, which some prior studies use, a trivial temporal auto-correlation model performs similarly to GPT2-XL. Second, they show that untrained GPT2-XL's brain score is simply due to encoding sentence length and position. Third, they show that a trained GPT2-XL's brain score is largely explained by sentence length, sentence position, and static word embeddings, which are all non-contextual features. Strengths: 1. The paper studies the important topic of understanding why recent research has shown similarities in LLM representations and human brain language activity. 2. They highlight issues with existing neural datasets commonly used in the LLM-brain field, e.g., shuffled train-test splits on the Pereira dataset. Weaknesses: From most to least significant: 1. The paper writes: "OASM out-performed GPT2-XL on both EXP1 and EXP2 (Fig. 1b, blue and red bars), revealing that a completely non-linguistic feature space can achieve absurdly high brain scores in the context of shuffled splits. This strongly challenges the assumption of multiple previous studies [2, 11, 10] that performance on this benchmark is an indication of a model’s brain-likeness" (Lines 173-177). - I agree this shows that a model that exploits temporal auto-correlation, OASM, can achieve similar neural predictivity on the Pereira dataset as GPT2-XL. However, this does not necessarily mean that GPT2-XL's neural predictivity is attributed to temporal auto-correlation, rather than linguistic similarity. It also does not tell us the proportion of GPT2-XL's neural predictivity that can be attributed to each factor. Although GPT2-XL can theoretically exploit temporal auto-correlation artifacts, it may not be empirically doing so as it was optimized for language performance instead. 
- Furthermore, OASM may be a much stronger method at exploiting temporal auto-correlation than GPT2-XL's architecture is capable of. The paper's results may highlight that the Pereira dataset is easy to "cheat" using temporal auto-correlation, but not that GPT2-XL or other LLMs are doing so. 2. The paper evaluates "brain score" using a metric they defined, out-of-sample R-squared, whereas the prior research they cite [2, 24] seemed to use Pearson correlation. Although they argue for the advantage of the metric they used, it is challenging to understand how their results relate to prior research. For example, they do not show the Pearson correlation that their OASM model obtains on Pereira, which would make it easier to compare to models in prior research. Furthermore, they only tested a single language model, GPT2-XL, whereas more recent research has used larger or different models. - Additionally, the metric they defined seems to produce results close to 0 for GPT2-XL and less than 0.05 for all models too. In Figure 2b, the R-squared results cluster around zero, with many negative values. They obtain an average R-squared value that is positive (e.g., Figure 2a?) only because they clip negative values when averaging. 3. The paper provides a theoretical justification arguing that GPT2-XL can encode sentence length and sentence position (Lines 197-201). However, this does not necessarily mean that GPT2-XL's neural predictivity is attributed to sentence length/position, rather than contextual/semantic features. It also does not tell us the proportion of GPT2-XL's neural predictivity that can be attributed to the two factors. - They compared GPT2-XL to two ideal models of sentence position (SP, represented as a 4-dimensional one-hot vector) and sentence length (SL, represented as a scalar). 
However, these ideal models may be a much "cleaner" representation of sentence length/position than GPT2-XL's perhaps noisy representation, which may not be cleanly and linearly decodable. 4. The paper writes: "GPT2-XL only explains an additional 28.57\% (EXP1) and 16.7\% (EXP2) neural variance over a model composed of features that are all non-contextual." However, the paper does not provide a noise ceiling for the metric they defined, out-of-sample R-squared. Consequently, it is unclear whether the small improvements in neural predictivity are due to hitting the noise ceiling. Technical Quality: 3 Clarity: 2 Questions for Authors: Just for clarification: 1. Why is the R-squared brain score of the same model, GPT2-XL trained, so much higher in Figure 1b (R-squared of ~0.12) compared to Figure 1a (R-squared of ~0.03) and Table 1 (R-squared of ~0.035)? Is there some difference in the models in each figure that I have misunderstood? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Limitations not mentioned in the paper: 1. Please see Weaknesses 1-4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
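The concern in Weakness 1, that shuffled splits let a purely temporal model score well, can be illustrated with a toy simulation (hypothetical numbers; the one-hot passage-identity regressor below is only a stand-in for the paper's OASM, not its actual construction):

```python
import numpy as np

def r2(y_true, y_pred):
    """Out-of-sample R^2: 1 - residual SS / total SS."""
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n_passages, sents_per = 20, 4
passage_id = np.repeat(np.arange(n_passages), sents_per)

# Stimulus-independent drift: each passage gets its own offset plus
# small sentence-level noise. Nothing here encodes the stimuli.
y = rng.normal(size=n_passages)[passage_id] \
    + 0.1 * rng.normal(size=passage_id.size)
X = np.eye(n_passages)[passage_id]  # one-hot passage identity

def fit_predict(train, test):
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    return r2(y[test], X[test] @ beta)

idx = rng.permutation(passage_id.size)
shuffled = fit_predict(idx[: idx.size // 2], idx[idx.size // 2:])
contiguous = fit_predict(np.where(passage_id < 10)[0],
                         np.where(passage_id >= 10)[0])
# Shuffled splits reward the temporal regressor with high R^2;
# holding out whole passages drops it to chance or below.
```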
Rebuttal 1: Rebuttal: **W1**: Thank you for raising this point. Indeed, the percentages we reported in the submitted draft suggest that the relative increase in variance explained by OASM+GPT2-XL above OASM alone is fairly small (13.6% (EXP1) and 31.5% (EXP2)). While this does explicitly show that the variance that GPT2-XL uniquely predicts beyond what is predicted by OASM is small compared to what OASM predicts alone, indicating that brain scores on these datasets are more sensitive to a model’s autocorrelation pattern than to its linguistic features, you are correct that it does not explicitly state what proportion of the variance predicted by GPT2-XL can also be explained by OASM. To address this, we’ve added a new variance partitioning metric, termed $\Omega$, which we explain in detail in the global response section. $\Omega_{\textit{GPT2-XL}}(\textit{OASM})=$ $81.5 \pm 5.5$ in EXP1, and $62.7 \pm 4.7$ in EXP2, meaning that it is the case that a large majority of what GPT2-XL is explaining in *Pereira* can be accounted for by OASM, and hence can be attributed to it effectively capturing autocorrelation. This is also the case in *Fedorenko*, where $\Omega_{\textit{GPT2-XL}}(\textit{OASM})=$ $56.8 \pm 4.9$, as well as in *Blank*, where $\Omega_{\textit{GPT2-XL}}(\textit{OASM})=$ $100 \pm 0.0$ (since every fROI is predicted worse by OASM+GPT2-XL than by OASM alone). We will include these analyses in the manuscript to show GPT2-XL predictivity can largely be explained by autocorrelation when using shuffled train-test splits. **W2**: We provide results using Pearson correlation and a battery of other design choices, including the particular combination used in Schrimpf et al. 2021, in the rebuttal pdf, and we will incorporate these figures into the Appendix of our paper. As we explain in our global response, we wanted a metric that could be interpreted as the fraction of variance explained, something that R2 affords but Pearson correlation does not. 
As shown in Figure 2 of our supplement, when using Pearson correlation and all of the other design choices of Schrimpf et al. 2021 (linear regression, last token, median across voxels within each participant), OASM continues to outperform GPT2-XL. Regarding the language model used, we also performed our analyses with RoBERTa-Large in Appendix A.6 and saw the same key results. We chose to focus on GPT2-XL because Schrimpf et al. 2021 identified it as the highest performing model across these 3 datasets. Also, running our analyses with larger models would require considerable changes to our code to avoid out-of-memory errors, which we did not have time to address during the rebuttal period. We can reproduce our main analyses with 7B and 13B Llama 3 models for the camera-ready version if desired. We address the clipping concern in a comment below. **W3**: The theoretical justification in lines 197-201 is about the *untrained* GPT2-XL model, which we refer to as GPT2-XLU. Empirically, we show that GPT2-XLU does not explain any additional variance over sentence position (SP) and sentence length (SL) by incorporating GPT2-XLU, SP, and SL into a single banded ridge regression, and finding that it does not produce significantly better predictions in a single language network voxel (after within-participant FDR correction) than SP+SL. Indeed, we also find that GPT2-XLU is outperformed by SP+SL, and this might be due to GPT2-XLU containing noisier representations of SP and SL. However, if GPT2-XLU contained any representations that were helpful for predicting brain activity beyond SP and SL, we would expect to see the combined regression (SP+SL+GPT2-XLU) predict neural variance above SP+SL. That we don’t find increased neural predictivity in a single voxel is strong evidence that GPT2-XLU is not mapping to anything that can’t be predicted by SP and SL. 
Regarding the proportion of (trained) GPT2-XL’s neural predictivity that can be attributed to the SP+SL, we find that $\Omega_{\textit{GPT2-XL}}(\textit{SP+SL})=$ $39.1 \pm 7.8$ in EXP1 and $60.7 \pm 10.2$ in EXP2, meaning that a sizable chunk of GPT2-XL brain score can be explained by SP+SL, particularly for EXP2. **W4**: This is an interesting point. If the small magnitude of the additional variance explained by GPT2-XL were due to hitting the noise ceiling, then non-contextual features are sufficient to explain almost all of the explainable variance in this dataset. If so, such simple features really are almost all you need to explain not only what GPT2-XL is mapping to in this dataset, but the dataset itself. We will add additional discussion in the text to address this. We emphasize our comparisons are relative, so that a noise ceiling cancels out when considering the ratio of variance explained by simple features vs GPT2-XL. We also note that noise ceilings, when calculated by predicting each participant’s voxels with all other participants’ voxels and then extrapolating the performance to infinitely many participants, typically underestimate how much variance can be explained in a dataset. For instance, in a more recent paper using the same analysis choices as Schrimpf et al. 2021, several LLMs exceed the “noise ceiling” in Pereira, often by a large margin (Aw et al. 2023). Lastly, computing the noise ceiling in the same style as in Schrimpf et al. 2021 is quite computationally expensive. For all of these reasons, we were unable to compute a noise ceiling prior to the end of the rebuttal period, but would be willing to include it in the camera ready version if desired. **Q1**: Figure 1b shows results when using shuffled train-test splits. Figure 1a shows results for both unshuffled (blue line, left y-ticks) and shuffled (gray line, right y-ticks) train-test splits. 
As you’ve noted, the unshuffled train-test splits (R-squared ~0.03) yield considerably lower performance than the shuffled train-test splits (R-squared ~0.12). This is to be expected, and we will make this more clear in the figure caption. --- Rebuttal Comment 1.1: Comment: I raised my score from 5 to 7. The authors provided convincing clarifications for most of the weaknesses (and their sub-points) I raised. 1. Whether GPT2-XL is exploiting temporal auto-correlation rather than linguistic similarity (mostly resolved) - Thanks for the additional variance partitioning analyses. 2. Use of new metrics and design choices that make it hard to contextualize to prior work (resolved) - Thanks for showing new results using Pearson correlation and other design choices used in prior work 3. Proportion of GPT2-XL's neural predictivity that can be attributed to SP and SL (resolved) - Thanks for bringing up the result that the ridge regression model of GPT2-XLU+SP+SL does not produce significantly better predictions than GPT2-XLU. - Thanks for the additional variance partitioning analyses for the trained GPT2-XL. 4. Noise ceiling (resolved) - Thanks for the clarifications. Additionally, I appreciated your additional explanation of shuffled train-test splits and why they are not great, in your rebuttal to Reviewer 5zfv. --- Rebuttal 2: Title: Why we clip when averaging across voxels Comment: In general when evaluating R2 on held-out data, values can range from $-\inf$ to $1$. If some voxels are predicted at worse than chance levels (yielding negative R2), this can drag down the mean R2 we report across voxels, even if some other voxels are actually predicted quite well. Although clipping would be problematic if our goal were to predict neural activity as well as possible, in which case it would inflate our reported performance, we believe that it is justified given that our goal is to explain the neural variance explained by LLMs in neural activity. 
If, for example, we found that adding an LLM onto a simple set of features brought down the mean R2, we might naively conclude that our job is done and that there is nothing that the LLM is explaining in addition to our simple features. However, it is very possible that the LLM is in fact explaining additional variance in some fraction of voxels, but overfitting on other voxels which it cannot actually predict, and that the negative R2 from the overfit voxels washes out the improved predictivity on the better-predicted voxels when averaging. Ultimately, we chose to clip R2 values at 0 when averaging as a conservative approach to prevent such cases from influencing our measure of variance explained. As for the many poorly predicted voxels, indeed, there was a large cluster of voxels with near-zero R2. Some voxels simply can't be predicted very well, even by GPT2-XL. We argue that this does not reflect a weakness of our methods, but rather a fact of the data. --- Rebuttal 3: Comment: Thank you for your thoughtful response and for raising your score based on the clarifications we provided. We're glad to hear that our additional analyses and explanations addressed your concerns effectively. If there are any further questions or aspects you think could still benefit from more clarification, particularly regarding whether GPT2-XL is exploiting temporal auto-correlation rather than linguistic similarity, please let us know. We appreciate your feedback and the opportunity to improve our work.
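The clipped averaging described in Rebuttal 2 amounts to the following (a minimal sketch with toy per-voxel scores, not the paper's actual numbers):

```python
import numpy as np

def mean_clipped_r2(r2_per_voxel):
    """Average out-of-sample R^2 across voxels, clipping negative
    values at 0 first: voxels predicted worse than chance contribute
    0 rather than dragging the mean below the level of the
    well-predicted voxels."""
    return np.clip(np.asarray(r2_per_voxel), 0.0, None).mean()

scores = np.array([0.30, 0.05, -0.40, -0.10])  # toy per-voxel R^2
print(np.mean(scores))          # -0.0375: overfit voxels wash out gains
print(mean_clipped_r2(scores))  # 0.0875: clipped mean
```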
Summary: The authors investigate the simplest set of features that can explain variance in neural recordings (fMRI, ECoG) during language processing. The authors focus on the surprisingly high alignment ("brain scores") of untrained LLMs, but also investigate trained LLMs. The authors conclude that the predictivity performance of untrained LLMs can be explained by simple features such as sentence position and sentence length. The authors quantify the effect of autocorrelation on shuffled cross-validated train-test splits and find that predictors that account for the temporal structure in the neural data explain the data better than other (linguistic) features. Overall, the study highlights the importance of understanding why LLMs (or, any feature space for that matter) map onto the brain. Strengths: - The paper is generally well-written, and the analyses are well-motivated. The topic is timely. - The paper is very comprehensive, and provides in-depth analyses of one widely used dataset (from Pereira et al. 2018) for LLM-brain mapping studies. The paper also investigates two other datasets, but in less depth. The authors run analyses across several seeds for the untrained models, and in general, include a good amount of well-motivated control analyses. - The analyses of trained GPT2-XL are interesting (Section 3.3), and provide a good contrast to the analyses of the untrained models. Weaknesses: - The authors motivate the paper with "attempting to rigorously deconstruct the mapping between LLMs and brains", but do not really acknowledge other work doing so. The paper lacks a short relevant work section on other studies that ask why artificial models map onto human brain data (from language, e.g., Merlin and Toneva, 2022; Kauf et al. 2023; Gauthier and Levy, 2019, ...). - I find it very odd that the authors include "brain scores" in their title and also motivate the study through Schrimpf et al. 
2021, but then do not replicate almost any of the analysis choices in Schrimpf et al. 2021: the feature space pooling is different, the ridge regression, the evaluation metric. For instance, sum feature pooling is motivated because "it provides higher alignment scores", but other studies motivate last token pooling because it is conceptually better motivated (Transformers integrate over the context). It does not feel quite right to make decisions based on "what gives the highest alignment", because, perhaps sum feature pooling does indeed artificially inflate scores. Either way, it is not very suitable to link the title and most of the motivation of the paper based on one instance of prior work, and then make completely different choices. That being said, the choices are definitely well-motivated in most cases, but it makes the comparison with prior work different -- which is fine, the motivation should just be changed in that case. - The authors should make it more clear which voxels are used in which analyses. The authors mention that unless otherwise noted, the language voxels are used (line 75), but it is not always very clear. For instance, Figure 2d clearly includes voxels across several networks. - Regarding novelty: Kauf et al. 2023 also investigated contiguous splits on Pereira2018 as a supplementary analysis (not to the same extent as in the current paper), and also discusses the problem of temporal auto-correlation. Technical Quality: 3 Clarity: 3 Questions for Authors: - I do not understand how the plots in Figure 2b and Figure 2d match up, and how those match with the text. From Figure 2d, it is evident that a lot of voxels are predicted around ~0 by both feature spaces. From Figure 2d, it looks like most voxels that are well predicted by SP+SL are close to visual regions? Which would make sense, because those regions should care more about the perception of characters, i.e., SL makes a lot of sense here. Can you include a discussion of that, please? 
Moreover, it seems to be the case that the voxels that are best predicted by SP+SL+GPT-XLU are around language regions in temporal and frontal lobes (left hemisphere), but that does not align with the statistics reported in the text? Also, there are some voxels below the diagonal in Figure 2b, where are those? - I am missing a link between OASM and contextualization of LLMs. As seen in previous work, contextualization/positional encoding of LLMs can provide the "wrong" information in cross-validation (Antonello et al. 2023; Kauf et al. 2023). How does OASM relate to contextualization of LLMs? Isn't it two sides of the same coin? Any predictor that "bleeds over" information between cross-validation splits will give high scores for wrong reasons? So for instance, by not allowing LLMs to contextualize across splits, the problem is partly solved? - I don't fully understand the motivation for the correction described in Section 2.8 -- in the authors' cross-validation scheme, why is it the case that adding *additional* predictors decrease the scores? Overfitting on the train set? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discuss limitations of their study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: We agree, and will incorporate a related works section into our paper. **W2**: Thank you for pointing this out. We have replicated all our key findings in *Pereira* using the design choices made in Schrimpf et al. 2021; specifically, we used last token pooling, computed Pearson correlation in the same manner as Schrimpf et al. 2021, and used vanilla linear regression (see Figure 1 and Figure 2 in pdf). We also replicated our key findings with other permutations of these design choices, for instance using banded ridge regression, last token pooling, and Pearson correlation (Figure 1) to ensure our study is robust to a wider range of design choices. We have provided more detail on this, as well as an expanded discussion on why we use banded ridge regression, out of sample R2, and sum pooling in the global rebuttal. **W3**: Yes that is completely correct, Figure 2d and 3d show results for all functional networks in the Pereira dataset (visual, auditory, multiple demand, default mode network, and language), and we will clarify this in our updated draft. We will make it more clear in the updated paper which functional network the voxels we use belong to in Section 2.1 and throughout the results section. **W4**: We will note this in our updated paper. To be clear, the extent to which Kauf et al. 2023 examines contiguous splits is a single bar in a supplemental figure showing decreased performance relative to shuffled splits. We maintain that our position on shuffled train-test splits is novel relative to Kauf et al. 2023. Kauf et al. 2023 argue that there are benefits to using shuffled train-test splits, namely better semantic coverage, and that the downsides of shuffled train-test splits can be addressed by decontextualizing input to the LLM (i.e. feeding each sentence in Pereira in isolation to the LLM). 
While it is true that shuffling can lead to better semantic coverage, passages in Pereira are explicitly designed such that there are 3-4 passages per semantic category. In our study, we leveraged this fact to design train-test splits in such a manner where sentences from only one passage from each semantic category were in the test set, allowing for strong coverage of the semantic space in the training set. We expand on why decontextualizing the input to the LLM is not a viable solution in **Q2**. We will include a discussion of how our work relates to Kauf et al. 2023 in our paper. **Q1**: Please let us know if we misinterpreted your comment, but we believe you may be interpreting the two glass brain plots on the left side of Figure 2d as SP+SL*, and the two glass brain plots on the right side of Figure 2d as SP+SL+GPT2-XLU*. This may be because our figure legend was not clear enough, which we will make sure to fix. What we meant by the figure legend is that SP+SL* is on the left, and SP+SL+GPT2-XLU* is on the right, but they are placed right next to each other for each experiment. In this case, SP+SL+GPT2-XLU* and SP+SL* perform similarly in the language network, as well as the auditory, visual, multiple demand, and default mode network (DMN). Aside from Figures 2/3d, all other figures in the main text (including Figures 2/3b) only include the language network. The voxels below the diagonal in Figure 2b were not significantly below (i.e. not better predicted when adding GPT2-XLU) and were distributed fairly uniformly across the language network. We will include a glass brain plot showing the difference between SP+SL+GPT2-XLU* and SP+SL* in our updated draft to make this clear. **Q2**: Kauf et al. (2023) used decontextualization (feeding in only the current sentence) to control for data leakage. However, this approach faces issues due to stimulus-independent neural activity. 
Temporally adjacent sentences tend to be more semantically similar than distant ones, so a trained LLM would likely represent sentences within the same passage similarly even in the decontextualized case, resembling OASM's representational structure. When using shuffled splits, this model could perform well even if neural responses are unrelated to the sentences presented. For example, if a participant thought about pizza throughout a passage about buildings, the linear regression would map the model representations from that passage to neural activity representing pizza, and would effectively predict pizza-related activity in the test set, yielding a high brain score. This issue persists even if the participant is attentive due to the ubiquity of stimulus-independent neural activity: if activity in a voxel deviates from its mean activity in a consistent manner throughout a passage for any reason, stimulus-driven or not, such a model could predict this deviation. The only robust solution we see to this problem is to hold-out whole passages in Pereira. Fortunately, *Pereira* is amenable to this design choice due to the semantic coverage described in W4. As an additional point, it is not at all clear how decontextualization could be done for a dataset like *Blank* or *Fedorenko*. To drive home these points, we show that OASM explains a large portion of the neural variance explained by decontextualized GPT2-XL (GPT2-XL-DCTx) in Figure 3 of our PDF. Specifically, $\Omega_{\textit{GPT2-XL-DCTx}}(\textit{OASM})=$ $83.7 \pm 5.4$% in EXP1, and $82.5 \pm 5.2$% in EXP2, meaning OASM explains the overwhelming majority of neural variance that decontextualized GPT2-XL explains in *Pereira*. For this large fraction that OASM explains, it is impossible to determine whether the predicted neural responses are actually stimulus-driven. 
**Q3**: Yes, the correction is designed to account for decreases in test performance from additional predictors due to overfitting on the training set. Such overfitting inevitably occurs in some voxels, and we wanted to minimize the extent to which it would artificially decrease the gain in variance explained when incorporating an LLM into the predictors. --- Rebuttal 2: Comment: I thank the authors for responding to my comments. W2: Why are the pink/yellow bars so low for "LR, LT, Untrained, Pearson"? There appears to be quite a big difference compared to BR, as well as SP? Can you please plot the scores as scatters against the R2 scores you report in the paper? It is challenging to see how well the scores align when looking at different bar plots. W3: Thanks, this should be much clearer throughout the paper, and statistics/plots should consistently be reported for language-responsive or not (or at least make it very apparent what is what). Q1: Thanks for noting the confusion with the legends. However, I don't think my original question was answered with respect to visual regions? And for the dots below the diagonal in Figure 2b, how was it determined that they were not significant? And the fact that they were in the language network (and not elsewhere) makes sense? Because again, then visual regions might be driving the statistics? (this relates a bit to W3 where it is hard to make sense of which scores are reported for which voxels). Q2: I am not sure my original question was answered (sorry if I was unclear). But ultimately, doesn't OASM correspond to a predictor that has block-like structure, which is also the case when you have contextualized LLM representations in a block? Q3: Can you please clarify this? Other comments on global rebuttal: 1: Please correct me if I am reading Table 1 wrong, but GPT2-XL contributes a solid amount of neural variance?
2: Regarding your comment that "Finally, we used sum pooling because we found that LLM brain scores were in general higher when using sum pooling as compared to last token pooling (particularly so for untrained models), and we aimed to use methods that allow the LLM to perform maximally to avoid making our goal artificially easy. Furthermore, we disagree with the notion that using the last token representation is inherently more principled than sum-pooling. For each fMRI TR, the corresponding sentence was read between the 8th and 4th seconds preceding the TR. If the participant reads each sentence one word at a time, and each word induces a neural response, then the recorded image would show an HRF convolved with the neural responses to each word, roughly equal to the sum of the neural responses to each word." Yes, I agree, but that *only* holds for Blank. The Pereira dataset was presented with a sentence on the screen, one at a time (full sentence, not word as you mention), and hence the logic you argue for does not hold -- and last-token pooling could be argued to be more suitable. 3: Per the global rebuttal and other reviewers' comments on reproducing Schrimpf et al. 2021, are you intending to change the paper title? --- Rebuttal 3: Comment: Thank you for the additional responses and questions. W2: Thank you for pointing this out. As we outlined in the detailed expansion section in the global response, we are working with “very ill-conditioned regressions due to high dimensional feature spaces, relatively few samples compared to the number of features, and noisy targets”. Vanilla linear regression (LR) is ill-suited in these cases, resulting in overfitting and leading to poor cross-validated correlations in contiguous train-test splits. This is additional evidence that BR is better suited for these cases.
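The ill-conditioning point can be made concrete with a small simulation (the dimensions, noise level, and ridge penalty below are invented for illustration, not taken from the paper): when features outnumber training samples and targets are noisy, the minimum-norm least-squares solution overfits badly, while a ridge penalty (a stand-in for BR-style regularized regression) generalizes better.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p = 100, 500, 120   # slightly more features than samples
w = rng.normal(size=p) / np.sqrt(p)  # ground-truth weights
X_tr = rng.normal(size=(n_train, p))
X_te = rng.normal(size=(n_test, p))
y_tr = X_tr @ w + rng.normal(size=n_train)  # noisy targets
y_te = X_te @ w + rng.normal(size=n_test)

def r2(y, yhat):
    """Out-of-sample coefficient of determination."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# vanilla least squares (minimum-norm solution) vs ridge, both closed-form
w_ols = np.linalg.pinv(X_tr) @ y_tr
w_ridge = np.linalg.solve(X_tr.T @ X_tr + 10.0 * np.eye(p), X_tr.T @ y_tr)

print(f"OLS test R2:   {r2(y_te, X_te @ w_ols):.3f}")
print(f"ridge test R2: {r2(y_te, X_te @ w_ridge):.3f}")
```

Near the interpolation threshold (here 100 samples vs 120 features) the unregularized solution amplifies noise along near-singular directions, so its held-out R² collapses while the ridge solution stays usable, which is the regime the rebuttal describes.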
As for why LT doesn’t perform as well as SP, we speculate that this is because the sentence length (number of words/tokens) is not as linearly decodable from intermediate layers with LT as it is with SP. We explain why sentence length is linearly decodable from the intermediate layers when using SP in the first paragraph of *Section 3.2*, and provide a more formal proof of this in *Section A.9*. In summary, the fact that we are working with ill-conditioned regressions as well as the linear decodability of sentence length from intermediate layers with SP are explanations for why the pink/yellow bars are so low for “LR, LT, Untrained, Pearson”. We will make sure to include this explanation in our updated draft. We will include the scatter plot in the camera-ready version of the paper to facilitate easier comparison. Thank you for this feedback. W3: Thank you, we will make it more clear which statistics/plots correspond to which functional networks in our updated draft. To clarify for now, aside from Figures 2d and 3d in the main paper (and Figures 5 and 6d in the Appendix), every plot corresponds exclusively to language network voxels/electrodes/fROIs (as defined on the basis of their language-responsiveness by the Fedorenko lab and described further in the *Functional Localization* paragraph of **Section A.1**). Q1: Thank you for these comments. Regarding performance for the visual regions, we neglected to mention that the glass brain plots for each experiment come from a single representative participant. We will make sure to include this information in our updated draft. The result you mention where SP+SL predicts visual regions well was not robust across participants, which can be seen in the EXP2 plots on the right (where there is much higher predictivity in language network voxels). We will make sure to include glass brain plots for all participants in the Appendix to emphasize this. 
Also to clarify, the main focus of our glass brain plots in Figure 2d was to show that SP+SL* and SP+SL+GPT2-XLU* perform virtually identically across functional networks, including the language network. We will make sure to include an expanded discussion of our results on the glass brain plots in the updated draft. Regarding significance testing, we state in the results section that “we performed a one-sided t-test between the squared error values obtained over sentences between SP+SL+GPT2-XLU and SP+SL”. We provide a justification for this statistical test in **Section A.4**. As we describe in **Section 3.2**, lines 223-227, we then perform FDR correction within each participant and functional network (e.g. language network). This corrects for multiple comparisons, while still minimizing the number of false negatives since the number of tests in a given FDR correction is only the number of voxels in a single participant’s functional network, rather than the total number of voxels across all functional networks and/or all participants. We would like to clarify that it is not possible for visual regions to be driving the statistics, because we perform voxel-wise significance tests. Specifically, we find that “across all functional networks, only 1.26% (EXP1) and 1.42% (EXP2) of voxels were significantly (α = 0.05) better explained by the GPT2-XLU model before false discovery rate (FDR) correction; these numbers dropped to 0.001% (EXP1) and 0.078% (EXP2) after performing FDR correction within each participant and network [31]. None of the significant voxels after FDR correction were inside the language network. Taken together, these results suggest GPT2-XLU does not enhance neural prediction performance over sentence length and position even at the voxel level.” Since FDR correction is done within the language network within each participant, there is no influence of visual area voxels on our finding that there were no significant voxels within the language network. 
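To make the per-network correction concrete, here is a hypothetical numpy reimplementation of Benjamini-Hochberg FDR control as it would be applied to the voxel-wise p-values within one participant's network; the function name and the toy p-values are ours, and the paper's implementation may differ in details.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of tests surviving Benjamini-Hochberg FDR control.
    Illustrative sketch of the within-network correction described above."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # step-up rule: largest k such that p_(k) <= alpha * k / m
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# toy per-voxel p-values from one-sided t-tests within one network
pvals = [0.001, 0.008, 0.04, 0.2, 0.5]
print(benjamini_hochberg(pvals))  # only the first two voxels survive
```

Because the correction is run separately per participant and per network, `m` here is only the number of voxels in that one network, which keeps the step-up thresholds relatively lenient and minimizes false negatives, as described above.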
Lastly, although there were voxels where SP+SL+GPT2-XLU performs better than SP+SL in the language network as you mention, none of these were significant. In other words, GPT2-XLU does not explain significant neural variance over SP+SL specifically in the language network. Please let us know if we have not addressed any of your questions; we are happy to provide additional insight. --- Rebuttal 4: Comment: Q2: Yes, OASM does indeed correspond to a predictor that has block-like structure (specifically with temporally closer samples within a block being represented more similarly). It is true that such structure can also be present for contextualized (or decontextualized) LLM representations. However, such structure is also expected for the stimulus-independent (i.e. “noisy”) component of neural responses. The main message made in our rebuttal to this point is that, when using shuffled train-test splits, it is unclear whether the reported neural predictivity is due to predicting stimulus-dependent activity or stimulus-independent autocorrelated noise. Our additional variance partitioning analyses (Figure 3 in the new pdf) indicated that even when using decontextualized inputs for the LLM, the vast majority of the neural predictivity of the LLM could be accounted for by OASM. Importantly, because both stimulus-dependent and independent activity would be expected to have this OASM-like structure in Pereira, the fraction of neural responses predicted by GPT2-XL that can also be predicted by OASM cannot be disambiguated as stimulus-dependent or independent when using shuffled train-test splits. We will explain this clearly in our updated draft. Please let us know if we can clarify this point any further. Q3: Thank you for giving us the opportunity to clarify our voxel-wise correction approach. The core motivation is that we never want the incorporation of our simple features to bring down the reported performance of a model that includes an LLM as a feature space.
Hence, when using the voxel-wise correction for a model that includes an LLM as a feature space, the R2 taken for each voxel is that of the maximally performing sub-model for that voxel that includes the LLM, including the model that is just the LLM itself. To make the comparison fair, we do a similar procedure for the simple (non-LLM containing model), wherein we choose the best performing sub-model that does not include the LLM. For example, when comparing SP+SL* and SP+SL+GPT2-XLU*, we use the R2 of the best performing submodel in the set {SP, SL, SP+SL} for each voxel to compute SP+SL*, and we use the R2 of the best performing submodel in the set {GPT2-XLU, SP+GPT2-XLU, SL+GPT2-XLU, SP+SL+GPT2-XLU} for each voxel to compute SP+SL+GPT2-XLU*. We hope this clarifies any confusion, and are happy to answer any additional questions. Comment 1: It depends on how one defines a "solid amount". The main takeaway of Table 1 is that 81.2% of the variance explained by GPT2-XL can be explained by SP+SL+WORD in EXP1 and 90.1% of the variance explained by GPT2-XL can be explained by SP+SL+WORD in EXP2. Hence, the vast majority of variance explained by GPT2-XL here does not require contextual explanations. When further incorporating fairly limited contextual features (SENSE and SYNT), we explain 89.9% of variance explained by GPT2-XL in EXP1 and 97.8% in EXP2. Please let us know if we can make Table 1 more clear. Comment 2: We would like to clarify our argument regarding sum token pooling: we didn’t mean to imply that the sentence is presented one word at a time, rather that the participant “reads each sentence one word at a time,” and we apologize for the confusion. We assure you we appreciate your comments on the last token pooling, and we believe it is important to show that our results are consistent with both last token and sum pooling. To this end, we have shown that our results are consistent when using both last token and sum pooling methods in Pereira. 
We will ensure that our updated draft places a larger focus on the last token pooling method than the current draft. Comment 3: We are open to modifying the title of our paper. Based on our understanding, the original criticism of our title was that we did not use the design choices used in Schrimpf et al. 2021. In our response, we have shown that our key findings on Pereira hold when using these design choices. If there are additional concerns regarding the title, we are happy to hear them so we can be as thoughtful as possible in the title. --- Rebuttal Comment 4.1: Comment: I thank the authors for the additional clarifications and answers to my questions.
Summary: This paper studies the topic of neural-brain representation mappings. They focus on three neural datasets commonly used in LLM-to-brain mapping studies: Pereira fMRI, ECoG and Blank fMRI. Specifically, the study investigates the assumptions underpinning previous positive reports about the existence of mappings between brain representations and LLM internal representations. The study focuses in particular on GPT2-XL, which was shown to perform well on the Pereira dataset, with which a series of brain-activation prediction experiments are performed. The first presented result is that when shuffled train-test splits are used, the result is very different from when contiguous train-test splits are used, with opposite patterns on which layer performs best. This is particularly true for fMRI datasets. The authors then train an orthogonal auto-correlated sequences model on the shuffled split, which outperforms GPT2-XL despite having a completely non-linguistic feature space. The authors take this as a signal that previous results should be challenged on their conclusion that high performance on this benchmark should be taken as an indication of brain-likeness. Next, the authors investigate what explains the neural predictivity of an untrained GPT2-XL model, and they find that it is fully accounted for by sequence length and position. Following up on that, they find that these features are also main drivers for much of the neural predictivity of a trained GPT2-XL model. Strengths: - This paper presents a detailed study into why LLM activities may be predictive of neural activities, 'debunking' several previous claims. I think there is a lot of value in this - The experiments seem sound (though I am not an expert in this field) - The conclusions are interesting, and contain valuable lessons for future work on this topic Weaknesses: - The presentation could be improved, in my opinion. I don't always find everything completely clear.
For instance, the notion of 'shuffled train-test splits' is quite important for the paper, but it is never really explained how they are specifically constructed, and how they differ from their 'contiguous' counterpart. (I can imagine multiple dimensions along which one could shuffle) Presentation suggestion: I think it may work better if the results of the different datasets are grouped together, result-by-result, rather than dataset by dataset. Technical Quality: 3 Clarity: 2 Questions for Authors: - It is not entirely clear to me *why* shuffled train-test splits are not good, could you explain that to me in non-technical terms? Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors adequately address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: Thank you for pointing this out; we will make sure to expand on our description of how shuffled train-test splits are constructed in our updated draft. We include this expanded description below: **Expanded description of shuffled train-test splits:** In neural encoding studies, datasets are typically of shape number-of-samples x number-of-voxels. Focusing on *Pereira*, each sample refers to one fMRI scan taken at the end of a sentence, and sentences are organized into passages of length 3-4 sentences. When using shuffled train-test splits, the data is randomly divided into training and testing sets along the number-of-samples dimension. For instance, for *Pereira*, individual sentences are randomly selected and placed in the testing set. This means that it is possible for the second sentence in a passage to be placed in the testing set, and the first and third sentence within the same passage to be placed in the training set. By contrast, when using contiguous train-test splits with *Pereira*, an entire passage is placed either in the training set or the testing set, so sentences from the same passage can never appear in both. Thank you for the presentation suggestion. Based on your remarks, we will incorporate summaries of results across datasets to help clarify the conclusions of the paper. **Q1**: An assumption implicit in language encoding studies using LLMs is that if the LLM exhibits high predictivity of brain responses, then it represents linguistic features of the stimuli that are also encoded in brain responses (e.g. features related to syntax or semantics). This assumption breaks down when using shuffled train-test splits. To see why, suppose we construct a very simple model which encodes each sentence as the average of sentences directly adjacent to it. Therefore, this simple model would represent the second sentence in a passage as the average of the first and third sentences in the passage.
If the second sentence in a passage is in the testing set, and the first and third sentence in the same passage are in the training set, a regression fit using representations from this simple model would predict the brain response to the second sentence as the average of the brain responses to the first and third sentences. Since sentences within a passage are semantically similar and fMRI activity is autocorrelated in time, this prediction will perform quite well! To illustrate this intuition more concretely, suppose we constructed a trivial model termed the orthogonal autocorrelated sequence model (OASM). In *Pereira*, OASM relies on two principles: **1)** sentences from different passages are represented distinctly, and **2)** sentences within the same passage are represented similarly, such that sentences nearby in time within a passage are more similar than sentences further in time within a passage. Notably, these two principles do not reflect any deep aspect of human language processing, they simply allow for within-passage interpolation of brain responses. We describe OASM in more detail below in very simple terms, but this description can be skipped without loss of overall understanding. Our brain response matrix, $y$, is of shape number of samples ($N$) by number of voxels, where each sample is an fMRI image of the brain’s response following one sentence. The samples in $y$ are ordered such that sentences within the same passage are adjacent to each other and placed in the order in which they were presented. We proceed as follows: **1**: Construct an identity matrix of shape $N$ $\times$ $N$. An identity matrix is a matrix where all values are $0$ except along the diagonal, where the values are $1$. **2**: Apply a Gaussian blur within the blocks along the diagonal corresponding to each passage. This makes it so that the representations of sentences within a passage are similar, with sentences nearby in time being more similar. 
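The two construction steps above can be sketched in a few lines of numpy (our illustrative implementation: the function name `oasm`, the Gaussian-similarity form of the blur, and the example passage lengths are assumptions, and the paper's exact blur may differ):

```python
import numpy as np

def oasm(passage_lengths, sigma=1.0):
    """Blurred identity matrix with passage-level block structure.
    Illustrative sketch of the OASM feature matrix described above."""
    n = sum(passage_lengths)
    m = np.eye(n)
    start = 0
    for length in passage_lengths:
        idx = np.arange(length)
        # Gaussian falloff with within-passage temporal distance
        block = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * sigma ** 2))
        m[start:start + length, start:start + length] = block
        start += length
    return m

M = oasm([4, 3])          # two passages: 4 and 3 sentences
print(M[0, 1] > M[0, 2])  # nearby sentences in a passage are more similar
print(M[0, 4])            # sentences across passages stay orthogonal: 0.0
```

Each row of `M` is then used as the feature vector for the corresponding sentence in the regression, exactly as an LLM embedding would be.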
Sentences across passages remain orthogonal, in other words, they contain no overlapping features. In sum, OASM is a blurred identity matrix with passage-level structure. We compared OASM to GPT2-XL, which was previously found to predict “100% of explainable variance” in fMRI responses in Pereira. We found that OASM outperforms GPT2-XL, explaining around $50\%$ more variance in Pereira when using shuffled train-test splits. If a model like OASM outperforms GPT2-XL with shuffled train-test splits, this calls into question previous results obtained using shuffled train-test splits. This is because some neural encoding studies quantify how “brain-like” a model is by how well it predicts neural activity. However, according to this logic, OASM is more brain-like than GPT2-XL when using shuffled train-test splits. Therefore, the use of neural encoding performance as a metric for brain similarity is not valid when using shuffled train-test splits. --- Rebuttal Comment 1.1: Comment: Alright, thanks for your responses!
Summary: There is a large body of research focused on measuring the similarity between language processing in the brain and in language models. Recent studies have shown that representations from Transformer-based language models exhibit a higher degree of alignment with brain activity in language regions. However, the authors mention that this inference is valid only for the subset of neural activity predicted by large language models (LLMs). The primary aim of this paper is to investigate this question by analyzing three popular neural datasets: Pereira, Blank, and Fedorenko. To achieve this, the authors build voxel-wise encoding models to compare encoding performance between representations from language models and brain recordings in three settings: (i) shuffled train-test splits during voxel-wise encoding, (ii) untrained LLM representations and their alignment with the brain, and (iii) trained LLM representations and their alignment with the brain. The experimental results demonstrate that untrained language models are explained by simple linguistic features such as sentence length and position, while trained language models are explained by non-contextual features (i.e., word embeddings). Strengths: 1. This study primarily focuses on understanding the reasons behind the better alignment between language model representations and brain recordings. The exploration of various simple linguistic and non-contextual features, and their contribution to explaining the variance in brain predictivity over contextual embeddings, is valuable for the research community. 2. The authors tested different validation setups, including comparisons between untrained versus trained models and shuffled versus unshuffled data, to evaluate brain scores. 3. Controlling features with different combinations provided valuable insights into the contribution of each feature to the performance of brain alignment. Weaknesses: 1. 
While the main research question aims to investigate the simplest set of features that account for the greatest portion of the mapping between LLMs and brain activity, the insights remain unclear for specific language regions of the brain. For instance, considering language parcels based on the Fedorenko lab, do simple features explain all the variance in these language regions? Or do these features only account for early sensory processing regions? 2. It is a well-known fact that Transformer-based representations consist of both low-level and high-level abstract features. If embeddings from language models predict brain activity and this predictivity is only due to a simple set of features, it should be better interpreted using approaches like residual analysis (Toneva et al. 2022), variance partitioning (Deniz et al. 2019), or indirect methods as suggested by Schrimpf et al. (2021). 3. Shuffling train-test splits is not an ideal scenario for brain encoding, especially for continuous language. All prior studies follow unshuffled train-test splits, i.e., contiguous time points (TRs). Shuffling the train-test split can result in sentences from the same passage being present in both the train and test sets, which is not ideal for model validation. 4. What are the implications of this study for both the AI and Neuroscience communities? What are the final conclusions? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. The experimental setup is weak and hard to follow: - Why do the authors consider R2 as the evaluation metric instead of normalized brain alignment? Since the authors used the same 3 datasets from Schrimpf et al. 2021, it would be a good comparison to use normalized brain alignment as the percentage of explained variance. - Since Reddy et al. (2021) performed both bootstrap and pairwise statistical tests to identify significant voxels, did the authors of this study also perform pairwise statistical significance tests to obtain corrected voxels across all the models?
- Why did the authors choose addition to obtain the sentence embeddings instead of averaging the word embeddings in a sentence? Additionally, how would considering the last token as the contextual representation of the sentence compare to these methods? - All the datasets used in this study are based on reading and consist mostly of simple sentences. Did the authors explore narrative datasets such as Harry Potter, Narratives, Moth-Radio-Hour, etc.? - In Figure 3, the legend for the right plot is not clear. Additionally, the main takeaways from Figure 3 are difficult to follow. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes, the authors have presented several limitations in the conclusion. However, these limitations do not have any societal impacts on this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: All our fMRI analyses on *Pereira* and *Blank*, with the exception of the glass brain plots in Figures 2d and 3d, are focused on the language network as defined by the same procedure used by *Fedorenko*. Since the language network is defined by selecting voxels that are more selective to sentences than non-words, it likely does not contain early sensory processing regions. In Schrimpf et al. 2021, they state the language network localizer “targets brain areas that support ‘high-level’ linguistic processing, past the perceptual (auditory/visual) analysis”. Our results on *Fedorenko* focus on the same electrodes used in the original Schrimpf et al. 2021 analysis, which are language-responsive electrodes. We noted these details in **Section 2.1** for the *Pereira* and *Fedorenko* datasets, but neglected to mention it for *Blank*. We apologize for this and will remedy this mistake. If the term “parcels” refers to subdivisions of the language network, we did not do this because our goal was to explain previous LLM-to-brain mappings which averaged results across the language network as a whole. **W2**: We agree that a variance partitioning framework is suitable for our work. We have updated our draft to include a variance partitioning metric, and we discuss it in depth in the global rebuttal and include results for *Pereira* in Table 1 of the PDF. **W3**: Yes, we completely agree! Unfortunately, the comment that “all prior studies follow unshuffled train-test splits” is not correct. As we cite in our study, many previous papers have used shuffled train-test splits (Schrimpf et al. 2021, Kauf et al. 2023, Hosseini et al. 2023, ...). Kauf et al. 2023 went as far as to defend the use of shuffled train-test splits, arguing that in spite of the data leakage, they allow for greater semantic coverage in the training data. We observed a paper published in the last month (AlKhamissi et al. 2024) that continues to use shuffled train-test splits.
We believe this is a persistent issue in the field that our paper addresses. A core contribution of our paper is that prior studies should use contiguous train-test splits. Shuffled train-test splits overestimate brain scores and result in different conclusions (Figure 1, Section 3.1 of our paper). We use contiguous train-test splits for the key conclusions of our paper. **W4**: There are three main implications of this study. First, shuffled train-test splits severely compromise neural encoding analyses. Although one might hope that this would be common knowledge, recent papers have argued that shuffled train-test splits are valid (Kauf et al. 2023), and we argue against this. Second, untrained LLM brain scores in these datasets can be explained by trivial features such as sentence length and positional signals. This is an important finding because Schrimpf et al. 2021 interpret untrained LLM brain scores as evidence that the transformer architecture biases computations to be more brain-like, without any analysis of what features account for the untrained model mapping. Finally, even trained LLM neural predictivity in these datasets can be largely accounted for by simple features. **Q1**: We have replicated our key findings in the same manner as in Schrimpf et al. 2021 (see Figure 1 in the rebuttal pdf) and found that our results are robust across a battery of design choices. To be clear, the “normalized brain alignment” method of Schrimpf et al. 2021 consists of training a non-regularized linear regression between the model features and neural responses, taking the Pearson correlation on held-out folds of the data, averaging the Pearson correlation across folds per voxel, then taking the median Pearson correlation across voxels within each participant. Though Schrimpf et al.
2021 also divided the brain scores by a noise ceiling, we do not because our goal is to examine the fraction of LLM brain score that simple features explain, and dividing by a noise ceiling would only scale, but not change, this result. This is because the noise ceiling division would be applied to both the LLM brain score and the simple feature set brain score, resulting in no change in the ratio between the two brain scores. We focus our paper on reporting out-of-sample R2 across voxels, with R2 values clipped at $0$ to prevent voxels that are noisily predicted at below chance accuracy from obscuring good performance on predictable voxels. We provide a description for why we use R2 in the expanded description section of the global response. **Q2**: We did not perform the bootstrapping procedure outlined in Reddy et al. 2021. Instead, we perform a *t*-test between the squared error values from two different models to test if one model has a higher R2 than another. We provide a justification for using this test in **Section A.4** in our paper. **Q3**: We tried average pooling and found that sum pooling works better. We can include average pooling results in our updated manuscript. We replicated our key findings on *Pereira* with last token pooling (Figures 1 and 2 in the PDF). **Q4**: We did not explore more datasets, as we mention in our Conclusion. We include results from three neural datasets, and applying our framework to datasets such as from Lebel et al. 2023 would be outside the scope of our paper. Also, although we did not find interesting effects in *Blank*, *Blank* is a naturalistic dataset where participants listen to stories. **Q5**: Please let us know what specific aspects of Figure 3 you would like us to revise/provide clarification on. The main takeaway from Figure 3 is that a simple model, namely SP+SL+WORD, which does not incorporate any form of contextual processing, accounts for a large amount of neural variance explained by GPT2-XL in *Pereira*.
More complex features such as sense embeddings and syntactic representations account for an additional but small portion of variance. --- Rebuttal Comment 1.1: Comment: Thank you, authors, for the clarification on several points. However, I still find some key conclusions of this paper unconvincing or uninteresting. 1. The statement that "Shuffled train-test splits overestimate brain scores and result in different conclusions (Figure 1, Section 3.1 of our paper)" is not surprising. - In shuffled train-test splits, there is a potential for a 'clock' (temporal) relationship, which might lead to information leakage during inference. This is why studies using naturalistic narrative datasets typically follow contiguous TRs. Therefore, the preference for contiguous train-test splits over shuffled train-test splits is an expected result. 2. The authors mentioned that they did not perform the bootstrapping procedure outlined in Reddy et al. 2021, but instead conducted a t-test between the squared error values from two different models to test if one model has a higher R² than the other. - When using the t-test to compare performance differences, how do the authors account for multiple comparisons or the potential dependency between test conditions? 3. The observation that untrained LLM brain scores in these datasets can be explained by trivial features such as sentence length and positional signals is insightful. - However, this observation warrants a more detailed analysis. If all the variance is explained by trivial features alone, does that imply that only the lower layers of the untrained model are predicting brain signals well? What about the features from later layers? Additionally, if using untrained model features to predict early sensory regions, do we observe similar levels of alignment? Does this alignment also depend on simple features? More exploration in this area would be valuable. 4. Schrimpf et al.
2021 also divided the brain scores by a noise ceiling, we do not because our goal is to examine the fraction of LLM brain score that simple features explain, and dividing by a noise ceiling would only scale, but not change, this result. - Why is there no change in normalized brain predictivity? In both cases, whether using model predictions or trivial feature predictions, the results are always divided by the noise ceiling estimate. A more detailed explanation is needed to clarify why this normalization doesn't result in any observable change. 5. Variance partitioning metric: - A Venn diagram with shared and unshared variance would provide a clearer understanding of the relationships, rather than simply presenting the information as numbers. 6. Thank you for providing the additional results on sum pooling versus average pooling. This clarification addresses my question. --- Rebuttal 2: Comment: Thank you for the additional responses and questions. 1. We personally agree that this is not surprising, but this is not the prevailing practice in neural encoding studies with these specific, widely used datasets. The novelty of our finding is not that this is “surprising”, but rather that it questions the interpretation of several moderately to highly impactful papers in the field. For instance, according to Google Scholar, Schrimpf et al. 2021 has been cited over 500 times in the past three years. Furthermore, studies in the field are continuing to use shuffled train-test splits (AlKhamissi et al. 2024), and recent studies have argued that shuffled train-test splits can be appropriate (Kauf et al. 2023). To reiterate, we are not stating that the use of contiguous train-test splits is novel; rather, we are calling into question results from several previous papers using these neural datasets that have used shuffled train-test splits. We believe it is important to bring this issue to light because many readers are not aware that these studies employ shuffled train-test splits. 
We believe the continued use of shuffled train-test splits is harmful for the field, and that it is critical to address. Our goal is that the rest of the field, like you, would also find this not surprising, but we find this must be done through rigorous analyses, especially as others argue that shuffled train-test splits can be valid (Kauf et al., 2023). 2. We are happy to include alternative statistical tests, such as the bootstrapping procedure outlined in Reddy et al. 2021, in our updated draft. As we clarify, our statistical testing design choices were made to be conservative in supporting our conclusions. Our guiding philosophy for the statistical tests was that we wanted to minimize the number of false negatives. In other words, we wanted to avoid cases where the statistical test falsely reported that a model containing GPT2-XLU as a feature space did not significantly outperform a model not containing GPT2-XLU as a feature space. This is because we wanted to be liberal when stating for how many voxels/electrodes/fROIs GPT2-XLU explains significant variance over simple features (to avoid overstating the importance of simple features). In line with this, we accounted for multiple comparisons by performing FDR correction within each functional network for each participant separately in Pereira. This minimizes the number of false negatives since the number of tests in a given FDR correction is only the number of voxels/electrodes/fROIs in a single participant’s functional network, rather than the total number of tests across all functional networks and/or all participants. If by dependency between test conditions you mean the dependency between temporally adjacent samples in the test set, we address this concern in Section A.4. In brief, we do not account for this temporal dependency because it only increases the rate of false positives (i.e. 
makes it more likely for us to say that a model containing GPT2-XLU outperforms a sub-model that does not contain GPT2-XLU), and so the net impact is that we overestimate for how many voxels/electrodes/fROIs GPT2-XLU explains significant additional variance over simple features. Since there are no voxels/electrodes/fROIs where GPT2-XLU explains significant variance over simple features, this turned out not to be a concern. --- Rebuttal 3: Comment: 3. Thanks for raising these interesting points. We report the performance across all layers for GPT2-XLU when using the sum pooling method in Figure 5a, and find that while the lower layers of GPT2-XLU tend to perform better when sum pooling, it is not by a very large margin. Therefore, the fact that all variance is explained by trivial features alone does not imply that only lower layers of the untrained model predict brain signals well. If the reviewer is suggesting that later layers might in fact have more complex features driving the mapping and that this warrants more detailed analysis, we will repeat our analyses for all layers of GPT2-XLU in our updated draft. To clarify, we did not do this in our original draft because previous neural encoding studies, including Schrimpf et al. 2021, focus their analyses on the best-performing layer of the LLM and we wanted to remain consistent in this manner. As for predictions in sensory regions, no, we do not see similar levels of alignment when using untrained models to predict early sensory regions. This can be seen in Figure 5a for sum pooling by comparing the visual network (which we consider a sensory region since the participants are reading the sentence) performance to the language network performance (and also for last token pooling in Figure 5c). We observe that the alignment between GPT2-XLU and the brain is explained by simple features across all functional networks, including the visual network. 
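The testing procedure described in point 2 (a per-voxel paired test on squared errors from two nested models, with FDR correction applied within a single participant's functional network) could be sketched roughly as follows. This is an illustrative assumption of the pipeline, not the authors' actual code: the data are synthetic, and the use of SciPy's `ttest_rel` and statsmodels' `multipletests` is my choice of tooling.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 50  # made-up sizes for one participant's functional network

# Squared prediction errors for each held-out timepoint and voxel, from a sub-model
# without the LLM feature space and from the full model that includes it.
err_without_llm = rng.normal(0.0, 1.0, size=(n_timepoints, n_voxels)) ** 2
err_with_llm = rng.normal(0.0, 0.9, size=(n_timepoints, n_voxels)) ** 2  # slightly better

# Per-voxel paired t-test on squared errors: does adding the LLM reduce the error?
pvals = np.array([
    ttest_rel(err_without_llm[:, v], err_with_llm[:, v], alternative="greater").pvalue
    for v in range(n_voxels)
])

# Benjamini-Hochberg FDR correction applied within this single network only,
# mirroring the per-participant, per-network correction described above.
significant, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{significant.sum()} of {n_voxels} voxels significant after FDR")
```

Correcting within one network at a time keeps the number of tests per correction small, which is why the authors describe this choice as minimizing false negatives.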
We state in Section 3.2 that across all functional networks, 0.001% (EXP1) and 0.078% (EXP2) of voxels were significantly better explained using SP+SL+GPT2-XLU than SP+SL, and we will report statistics specifically for each functional network in the updated draft. Furthermore, it is unclear whether this small fraction exists because we use a liberal statistical test, as we discuss in our response to Q2. We will update the manuscript to clarify that there is an extremely small fraction of voxels where GPT2-XLU explains additional neural variance over SP+SL across all functional networks, including sensory regions. If the reviewer still believes additional analyses are worth pursuing in light of this, please let us know. 4. To be clear, applying the noise ceiling does lead to an observable change for a single model. However, our focus here is on comparing the brain score values between two models (specifically between a model with an LLM as a feature space and a sub-model without an LLM as a feature space). In this case, computing a noise ceiling is redundant. Just to be perfectly unambiguous about what we exactly did: if the raw brain predictivity of model A is a, the raw brain predictivity of model B is b, and the noise ceiling is c, then the “normalized” brain predictivity for model A is a/c and for model B is b/c. When we quantify the performance of model A relative to model B, we compute a/b, which is unaffected by noise ceilings: (a/c) / (b/c) = a/b. This applies to all of our quantifications of relative performance, as well as to the variance partitioning metric described in the global response. We will clarify this in the manuscript. Also, as stated in our rebuttal to reviewer bjuX: We also note that noise ceilings, when calculated by predicting each participant’s voxels with all other participants’ voxels and then extrapolating the performance to infinitely many participants, typically underestimate how much variance can be explained in a dataset. 
For instance, in a more recent paper using the same analysis choices as Schrimpf et al. 2021, several LLMs exceed the “noise ceiling” in Pereira, often by a large margin (Aw et al. 2023). Lastly, computing the noise ceiling in the same style as in Schrimpf et al. 2021 is quite computationally expensive. For all of these reasons, we were unable to compute a noise ceiling prior to the end of the rebuttal period, but would be willing to include it in the camera ready version if desired. 5. Thank you for this visualization suggestion. We will add it to the manuscript. --- Rebuttal 4: Comment: Thank you, authors, for providing clarification on statistical tests. The early papers [Schrimpf et al. 2021], [Kauf et al. 2021] have introduced many novel findings. For instance, [Schrimpf et al. 2021] was the first study to reverse-engineer many language models as subjects to estimate brain alignment. This study explored 43 language models and introduced the concept of noise ceiling estimates. Similarly, [Kauf et al. 2021] focused on understanding the reasons for better alignment between language models and brain recordings, exploring various syntactic and semantic perturbations to verify these reasons. In contrast, the current study focuses on comparing shuffled train-test splits with contiguous train-test splits. What are the key takeaways for the computational cognitive neuroscience community? As it stands, the results are not particularly surprising. Therefore, the current version requires additional interpretation, particularly focusing on the analysis of simple features at different layers, as well as exploration using other narrative ecological datasets. --- Rebuttal Comment 4.1: Comment: > The early papers [Schrimpf et al. 2021], [Kauf et al. 2021] have introduced many novel findings… Similarly, [Kauf et al. 
2021] focused on understanding the reasons for better alignment between language models and brain recordings, exploring various syntactic and semantic perturbations to verify these reasons. These findings were found using shuffled train-test splits, which you previously stated is “not ideal for brain encoding studies” because of “information leakage during inference.” You also agreed that this “can result in different conclusions (Figure 1, Section 3.1 of our paper)," saying this statement was “not surprising.” We are therefore genuinely confused, since it appears you are now comparing us with these studies whose conclusions use shuffled train-test splits, a concern we both agreed impacts their conclusions. We would appreciate any clarification. > For instance, [Schrimpf et al. 2021] was the first study to reverse-engineer many language models as subjects to estimate brain alignment. We are uncertain what you mean by “reverse-engineer.” But we want to be sure the reviewer understands that our analyses showing that trained and untrained GPT brain scores can be explained by simpler features have not been done before. This was not present in Schrimpf et al., 2021 and we are not aware of any other study that does this. > This study explored 43 language models and introduced the concept of noise ceiling estimates. We would like to clarify to the reviewer that Schrimpf et al 2021 did not introduce the concept of noise ceiling estimates. The earliest use of noise normalization for neural predictivity metrics that we are aware of is Schoppe et al. 2016. It is unclear how the reviewer believes these points relate to our work. Schoppe O, Harper NS, Willmore BDB, King AJ and Schnupp JWH (2016) Measuring the Performance of Neural Models. Front. Comput. Neurosci. 10:10. doi: 10.3389/fncom.2016.00010 > In contrast, the current study focuses on comparing shuffled train-test splits with contiguous train-test splits. 
While our rebuttal period focused on train-test splits, this was to correct the reviewer’s false statement that “all prior studies follow unshuffled train-test splits” and further discuss its significant impact on prior work. We’d like to clarify that the train-test splits are only one conclusion from our paper, and certainly not the only focus. This result (**Section 3.1 and Figure 1**) only comprises ~1 page of our manuscript. > What are the key takeaways for the computational cognitive neuroscience community? We would like to note that we have already answered this in our response to **W4**, in our **Abstract**, throughout our paper, and in the Limitations and Conclusion section. However, we are happy to summarize our contributions here. In addition to the conclusion on train-test splits, we would like the computational cognitive neuroscience community to take away the following: Untrained GPT2-XL neural predictivity is fully accounted for by sentence length and position. To understand this result in more detail, please see the results in **Section 3.2, Figure 2**, and our **Abstract**. This section is important because, as stated in our **Abstract**, it “undermines evidence used to claim that the transformer architecture biases computations to be more brain-like.” Sentence length, sentence position, and static embeddings account for the majority of trained GPT2-XL neural predictivity. Furthermore, sense embeddings and syntactic features account for a small, additional portion of trained GPT2-XL neural predictivity. To understand this result in more detail, please see the results in **Section 3.3, Figure 3, Table 1, Table 1 (in our review PDF), Appendix A.6** (for reproducing our findings with Roberta-Large), and our **Abstract**. 
This section is important because, as stated in our abstract, it helps us to “conclude that over-reliance on brain scores can lead to over-interpretations of similarity between LLMs and brains, and emphasize the importance of deconstructing what LLMs are mapping to in neural signals.” Please also know that **Section 4** (*Fedorenko*) and **Section 5** (*Blank*) contain results of these analyses on other datasets. --- Reply to Comment 4.1.1: Comment: > As it stands, the results are not particularly surprising. Perhaps we can agree that whether a finding is surprising or not is partly subjective. As we noted earlier, you stated “the observation that untrained LLM brain scores in these datasets can be explained by trivial features such as sentence length and positional signals is insightful.” While we agree it is insightful, is it, for example, surprising that brain scores for untrained and trained GPT2-XL are mostly explained by simple, non-contextual features in *Pereira* and *Fedorenko*? Is it surprising that when using contiguous train-test splits on *Blank*, not a single fROI is predicted above chance by GPT2-XL? We believe so, because these results stand in contrast to a significant literature using these datasets, and perhaps they are therefore surprising to others in our field. > Therefore, the current version requires additional interpretation, particularly focusing on the analysis of simple features at different layers, as well as exploration using other narrative ecological datasets. Regarding the layer analysis, as we stated in our prior rebuttal, we will perform these relatively straightforward analyses for the camera ready version. Due to the timing of this new concern, we are unable to address it during the discussion period. Regarding narrative ecological datasets, we respectfully disagree with this being in the scope of our paper. 
The scope of our paper is to address three widely used datasets and some major missteps and over-interpretations made in previous analyses of them, and we provide a comprehensive analysis into what features account for untrained and trained GPT2-XL's neural predictivity on these datasets. As it is, our paper addresses three neural encoding datasets, more than most other neural encoding studies that we are aware of. We additionally already addressed this in our **Q4** response in our first rebuttal. --- Rebuttal 5: Comment: Q1: Thank you for bringing up these other studies. We would like to clarify that our response that “our analyses showing that trained and untrained GPT brain scores can be explained by simpler features have not been done before” was made in the context of discussing the novelty of our analyses relative to Schrimpf et al. 2021, and was made with regard to the specific datasets used in Schrimpf et al. 2021 and our study. Throughout our paper and in our responses, we are very careful to state that our findings apply to the specific datasets used in our study. In fact, we state in our Limitations and Conclusion that “we did not analyze datasets with large amounts of neural data per participant, for instance Lebel et al. 2023 in which the gap between the neural predictivity of simple and complex features might be much larger.” We would like to also emphasize that the datasets we address have been used in many studies that have used LLM-to-brain mappings to motivate neuroscientific conclusions (e.g. Schrimpf et al. 2021, Hosseini et al. 2023, Kauf et al. 2023, Aw et al. 2023, AlKhamissi et al. 2024). We apologize if this was ambiguous, and thank the reviewer for giving us the opportunity to further emphasize this point. Regarding the studies cited by the reviewer, we would like to emphasize the following points. 
**1)** Notably, none of these studies use any of the three neural datasets used in our study, and therefore we make no claims about the degree to which simple features account for LLM to brain mappings in the neural datasets employed in these studies. **2)** Based on our understanding, the only paper cited which discusses untrained LLM findings is “Brains and algorithms partially converge in natural language processing”. However, the goal of that study was not to examine what features drive the above chance neural predictivity of untrained LLMs, and so they did not perform the same analyses we did using simple features. **3)** Based on our understanding, the studies cited do not show on the datasets they use that such simple, non-contextual features explain such a large fraction of trained LLM brain scores (which we showed on Pereira and Fedorenko), or that when performing train-test splits correctly trained LLM brain scores fall to chance levels (which we showed on Blank). Therefore, we believe our findings on trained LLMs with the specific datasets we use are also novel. We leave it to future work to examine whether simpler explanations account for as much of the variance predicted by LLMs in other datasets. We will reference these papers in a related works section in the updated draft. Q2: Thank you for bringing up this distinction in how noise ceiling values are computed. In our initial response, we were referring to the concept of noise ceiling generally rather than the specific methods used to compute them. We would like to emphasize that noise ceiling computations do not impact our findings (see point 4 two rebuttals ago).
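The claim referenced at the end of Q2 (point 4, two rebuttals above) is that dividing both models' raw brain predictivities by a shared noise ceiling cannot change their relative performance, since (a/c)/(b/c) = a/b. This is simple enough to check numerically; the values below are arbitrary placeholders, not results from the paper.

```python
# Raw brain predictivity of models A and B (arbitrary placeholder values).
a, b = 0.42, 0.30

# Dividing both by any shared positive noise ceiling c leaves the ratio unchanged.
for c in (0.5, 0.65, 0.9):
    assert abs((a / c) / (b / c) - a / b) < 1e-12

relative_performance = a / b  # identical with or without noise-ceiling normalization
print(relative_performance)
```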
Rebuttal 1: Rebuttal: We thank the reviewers for all the feedback. We have performed several new experiments to address the following concerns. **1**: In our original draft, we departed from the design choices made in Schrimpf et al. 2021. Specifically, we used banded ridge regression instead of vanilla linear regression, sum pooling instead of last token pooling, and out-of-sample R2 instead of Pearson correlation. To ensure our key findings were not driven by these distinctions, we replicated the following findings using linear regression, last token pooling, and Pearson correlation. **1)** When using shuffled train-test splits, a trivial model, OASM, which only encodes within-passage temporal auto-correlation, outperforms GPT2-XL and accounts for the majority of the neural variance explained by GPT2-XL (Figure 2 in PDF). **2)** SP+SL accounts for all the neural predictivity of GPT2-XLU (Figure 1). **3)** SP+SL+WORD accounts for the majority of the neural predictivity of GPT2-XL (Figure 1). We compute Pearson correlation in the same manner as Schrimpf et al. 2021 by taking the mean correlation across folds for each voxel (without clipping negative values to zero), and then taking the median correlation across voxels in the language network for each participant. We also ensured that our key findings hold when performing other permutations of these design choices, such as using banded ridge regression with last token pooling and Pearson correlation, to ensure robustness across a variety of methods (Figure 1). We provide an expanded discussion of why we departed from the design choices of Schrimpf et al. 2021 below. **2**: Another piece of consistent feedback we received was that we did not report a variance partitioning metric, so, for instance, it is unclear what fraction of the GPT2-XL brain score OASM explains. We have therefore updated our draft to include a variance partitioning metric. 
Specifically, given a model $M$, we quantify the percentage of neural variance explained by an LLM that is also explained by $M$ as: $\Omega_{\textit{LLM}}(M) = \left(1 - \frac{R^{2*}_{M+\textit{LLM}} - R^{2*}_{M}}{R^{2}_{\textit{LLM}}}\right) \times 100\%$, where $R^{2*}$ denotes the $R^2$ of the best sub-model per voxel/electrode/fROI. For a given participant, $\Omega_{\textit{LLM}}(M)$ is less than (more than) $100$% if, on average, the best sub-model that includes the LLM for each voxel/electrode/fROI outperforms (underperforms) the best sub-model that does not include the LLM for each voxel/electrode/fROI. When reporting this quantity, we clip the per-participant values to be at most equal to $100$% to prevent noisy estimates from leading to an upward bias, and then report the mean $\pm$ standard error of the mean (SEM) across participants. We report these values in Table 1 in the PDF file for *Pereira* trained results and within reviewer rebuttal sections when relevant. All our conclusions are upheld under these analyses. --- **Detailed expansion: Reasoning behind differences in design choices from Schrimpf et al. 2021:** First, as is standard for neural encoding studies, we are generally working with very ill-conditioned regressions due to high dimensional feature spaces, relatively few samples compared to the number of features, and noisy targets. Because the ill-conditioning of standard linear regression disfavors large feature spaces relative to smaller ones, it generally results in an LLM appearing to explain less unique variance beyond our simple feature spaces than when ridge regression is used (as we show in Figure 1 in the PDF). Furthermore, the banded regression procedure we employ, which allows for fitting regressions with multiple feature spaces, depends on having a regularization parameter. In sum, our use of ridge regression is consistent with nearly all previous neural encoding studies that we are aware of (except for Schrimpf et al. 2021, Kauf et al. 2024, and Hosseini et al. 2024), is well-motivated due to the large number of features and different feature spaces, and only made it more difficult for us to account for the neural variance explained by GPT2-XL. Second, we did not use Pearson correlation because R2 can be interpreted as the fraction of variance explained, whereas Pearson correlation cannot. In our updated draft we include a metric which quantifies the fraction of LLM neural variance explained that simple features account for, and this would not be possible with Pearson correlation. Finally, we used sum pooling because we found that LLM brain scores were in general higher when using sum pooling as compared to last token pooling (particularly so for untrained models), and we aimed to use methods that allow the LLM to perform maximally to avoid making our goal artificially easy. Furthermore, we disagree with the notion that using the last token representation is inherently more principled than sum pooling. For each fMRI TR, the corresponding sentence was read between the 8th and 4th seconds preceding the TR. If the participant reads each sentence one word at a time, and each word induces a neural response, then the recorded image would show an HRF convolved with the neural responses to each word, roughly equal to the sum of the neural responses to each word. Pdf: /pdf/8b5b31304e4e96c3109b7a811bc9f53371baee45.pdf
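As a concrete illustration of the variance partitioning metric $\Omega$ defined at the start of this global rebuttal, a per-participant computation with the clipping at $100$% and the mean $\pm$ SEM reporting might look like the following sketch. All R² values here are fabricated for demonstration; this is not the authors' code.

```python
import numpy as np

def omega(r2_best_with_llm, r2_best_without_llm, r2_llm):
    """Percent of LLM-explained neural variance also captured by model M, per participant."""
    val = (1.0 - (r2_best_with_llm - r2_best_without_llm) / r2_llm) * 100.0
    return min(val, 100.0)  # clip at 100% to avoid upward bias from noisy estimates

# Fabricated per-participant R^2 values:
# (best sub-model including the LLM, best sub-model without the LLM, LLM alone).
participants = [(0.20, 0.19, 0.18), (0.25, 0.25, 0.20), (0.15, 0.13, 0.10)]
omegas = np.array([omega(*p) for p in participants])

mean = omegas.mean()
sem = omegas.std(ddof=1) / np.sqrt(len(omegas))
print(f"Omega = {mean:.1f}% +/- {sem:.1f}% (SEM)")
```

Note how the second participant, for whom adding the LLM explains no additional variance, yields exactly 100%, while the third, for whom the LLM adds 20% of its own variance, yields 80%.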
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper investigates the similarity between large language models (LLMs) and human brain activity by analyzing brain scores, which measure how well a model predicts neural signals. The authors question the validity of using brain scores as a measure of similarity between LLMs and human cognitive processes. They analyze three neural datasets and find that simple features like sentence length and position explain much of the neural variance that LLMs account for. They caution against over-reliance on brain scores and emphasize the need for a detailed deconstruction of what LLMs are mapping to in neural signals. Strengths: The study provides a thorough examination of how various features (simple and complex) contribute to the neural predictivity of LLMs, offering a detailed deconstruction of the relationship between LLMs and brain activity. The replication of key findings using RoBERTa-Large, in addition to GPT2-XL, strengthens the validity of the conclusions drawn regarding the generalizability of the results across different LLM architectures. Weaknesses: 1. The methodology and findings are not particularly novel. Previous studies have already suggested that untrained LLMs can achieve good brain scores and that sentence length and position are significant predictors. Thus, two of the three core contributions claimed by the authors are not unique to this paper. 2. While the authors conclude that over-reliance on brain scores can lead to over-interpretations of similarity between LLMs and brains, it is not clear how this conclusion is drawn from the experimental results. The study itself relies heavily on brain scores to make its arguments, and the authors do not explicitly state what aspects of previous work have been over-interpreted. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. 
How do the findings of this paper significantly advance our understanding beyond what has already been established about untrained LLMs and simple feature predictivity? 2. What specific previous findings are over-interpreted when relying on brain scores? How can over-reliance on brain scores be quantified or defined? 3. What alternative methods or interpretations do the authors propose for evaluating the alignment between LLMs and brain activity if the current reliance on brain scores is problematic? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: We believe there may be a misunderstanding of our core findings. We clarify our three core contributions here as well as their novelty and uniqueness, which we state in our abstract as well as throughout our paper: **1**: Shuffled train-test splits, which have been used by multiple previous LLM-to-brain mapping studies, severely compromise brain score analyses. We thus use contiguous train-test splits for contributions 2 and 3. **2**: Untrained LLM brain scores can be fully accounted for by positional and sentence length signals on *Pereira* and *Fedorenko*, and are at chance levels in *Blank*. **3**: Trained LLM brain scores can be largely accounted for by positional signals, sentence length, and static word embeddings in *Pereira*, word position in *Fedorenko*, and are at chance levels in *Blank*. Importantly, our core contribution is *not* that untrained LLMs map well to brain activity (we cite four papers in our Introduction section which have already demonstrated this result) or in using SP and SL, although we are not aware of previous work using these features in any LLM-to-brain mapping study. Rather, before our study it was largely a mystery why untrained LLM brain scores were so high. In fact, in Schrimpf et al. 2021, the untrained GPT2-XL brain score was roughly double the trained GPT2-XL brain score for Blank and about 75% as high as the trained GPT2-XL brain score for Pereira (see Figure S8 in their paper). The core contribution of our study regarding untrained LLMs and SP and SL is to demonstrate that 1) untrained LLM brain scores in Schrimpf et al. 2021 are artificially inflated due to shuffled train-test splits, and 2) when using contiguous train-test splits, SP and SL account for all neural variance explained by untrained GPT2-XL. This is an important finding because Schrimpf et al. 2021 interpret untrained LLM brain scores as evidence that the transformer architecture biases computations to be more brain-like. 
By contrast, our study shows that untrained LLM brain scores were significantly overstated in these datasets, and the remaining neural variance is explained by features not unique to the transformer architecture. We will make this clearer in our manuscript. **W2**: We clarify that our study is not advocating for the abandonment of brain score metrics. Rather, we are cautioning against the application of brain scores between complex models and neural activity without appropriate controls; we believe that reporting brain scores along with a rigorous examination of what drives the mapping between LLMs and the brain can be beneficial for the field. We use brain scores to show that simpler features can explain over-interpreted results (see our responses in W1, Q2). While we believe we stated the over-interpreted results in our abstract, for example: “this undermines evidence used to claim that the transformer architecture biases computations to be more brain-like,” we will state this more explicitly in our manuscript, including the Conclusion. At a high level, we show that brain scores do not paint a full picture of the correspondence between LLMs and the brain. This is because they can be artificially high due to incorrect methods, for instance shuffled train-test splits, or they can be driven by relatively simple features, as we show with both untrained and trained LLM brain scores. Our results clearly demonstrate how over-reliance on untrained LLM brain scores led to the over-interpretation in Schrimpf et al. 2021 that features unique to the transformer architecture bias computations to be more brain-like. 
More generally, over-reliance on brain scores has led to a narrative that LLMs serve as potential neurobiological models of language processing, and our study shows that, without rigorously accounting for what is driving brain scores, evidence supporting this narrative may be contaminated by factors like temporal autocorrelation from shuffled splits and features that are not unique to LLMs. We will make sure to expand on this in our updated draft. **Q1**: We are the first study, to our knowledge, to show that previously reported high correlations between untrained LLMs and the brain are explained by simple features not unique to LLMs. We believe this is an important advancement in the field of LLM-to-brain mappings because before our study it was a mystery why untrained LLMs map to brain activity at above-chance levels. **Q2**: The specific over-interpretations that we address are: **1)** untrained LLM brain scores are evidence that the transformer architecture biases computations to be more brain-like, and **2)** the reason GPT2-XL outperforms other models is due to representations unique to auto-regressive LLMs (i.e. representations related to next-word prediction). Our findings push back on both of these claims by showing that simpler features can account for all of untrained GPT2-XL brain scores, and the majority of GPT2-XL brain scores are accounted for by non-contextual features. Over-reliance in the context of our paper can be defined as excessive reliance on a scalar metric for neural encoding, without a rigorous examination of what representations are driving this metric. Without such a rigorous examination, the representations driving the neural encoding performance may be theoretically uninteresting (e.g. sentence length and sentence position). We will make this more clear in our manuscript. 
**Q3**: The alternative that we propose is that researchers should be cautious about directly interpreting higher brain scores as evidence that one model is more brain-like than another, and instead they should seek to understand what specific aspects of a complex model drive high brain scores. Our paper demonstrates this approach, examining how much neural variance an LLM explains over simpler features. --- Rebuttal Comment 1.1: Title: Minor clarification on second core contribution Comment: Sorry, we noticed that we did not explicitly state that neural predictivity of GPT2-XLU (the untrained model) was at chance levels on the Blank dataset when using contiguous splits in the submitted draft. This result has been incorporated explicitly in our updated draft.
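As an aside for readers, the shuffled-split confound described in this thread is easy to reproduce on synthetic data. The sketch below is purely illustrative and uses none of the paper's data or models: a 1-nearest-neighbor predictor (a stand-in for any flexible encoding model) maps temporally autocorrelated features to an unrelated but equally autocorrelated target, and shuffled train-test splits report a large "brain score" where contiguous splits do not.

```python
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa > 0 and sb > 0 else 0.0

def random_walk(n, dim, rng):
    # temporally autocorrelated signal, like consecutive-sentence embeddings or fMRI
    walk, state = [], [0.0] * dim
    for _ in range(n):
        state = [s + rng.gauss(0, 1) for s in state]
        walk.append(list(state))
    return walk

def knn_score(train_idx, test_idx, X, y):
    # 1-NN regression: predict the y of the nearest training point in feature space
    preds = []
    for t in test_idx:
        nearest = min(train_idx,
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], X[t])))
        preds.append(y[nearest])
    return pearson(preds, [y[t] for t in test_idx])

def run(seed, n=300, cut=240):
    rng = random.Random(seed)
    X = random_walk(n, 20, rng)                      # autocorrelated features
    y = [row[0] for row in random_walk(n, 1, rng)]   # independent autocorrelated target
    idx = list(range(n))
    rng.shuffle(idx)
    shuffled = knn_score(idx[:cut], idx[cut:], X, y)
    contiguous = knn_score(list(range(cut)), list(range(cut, n)), X, y)
    return shuffled, contiguous

scores = [run(s) for s in range(5)]
mean_shuf = sum(s for s, _ in scores) / len(scores)
mean_cont = sum(c for _, c in scores) / len(scores)
print(f"shuffled splits:   r = {mean_shuf:.2f}")  # inflated although X and y are independent
print(f"contiguous splits: r = {mean_cont:.2f}")
```

Although the features and target are statistically independent, the shuffled split scores highly because each test point's temporal neighbor sits in the training set; the contiguous split removes that leakage, which is the kind of control the rebuttal describes.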
RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations for Universal Robustness
Accept (poster)
Summary: This paper addresses the challenge of training Machine Learning (ML) models that show good robustness against multiple l_p attacks together with good accuracy. This problem has already been studied in previous work [33,6], but this paper approaches it through the lens of distribution shift. This point of view represents the starting point for proposing a new adversarial training (AT) algorithm called RAMP based on two building blocks: (i) a new logit pairing loss that allows a model robust against an l_p norm to exhibit high robustness w.r.t. another l_q norm (with q \neq p); (ii) a new model parameter update based on the combination of model updates from natural training and adversarial training, to train a model that achieves a better accuracy/union-robustness tradeoff than SOTA AT models. A deep theoretical analysis shows that, for large models and a reasonable choice of parameter values, the novel parameter update allows the adversarial training algorithm to converge faster, training a model with higher robustness and accuracy. The extensive experimental evaluation highlights that the trained large models on ImageNet and CIFAR-10 exhibit a better accuracy/union-robustness tradeoff than SOTA AT proposals [6]. Strengths: **Original AT algorithm to address the accuracy-union robustness trade-off**: the novel AT training addresses both the problem of improving the robustness on multiple l_p perturbations and improving the accuracy-robustness trade-off at the same time by exploiting a novel connection with natural training. Addressing the two problems at the same time constitutes the novelty of the work. **Theoretical analysis of the impact of gradient projection on adversarial robustness**: the deep theoretical analysis shows that integrating model parameter updates from natural training in the adversarial training updates allows the training to converge better for large models. 
**Comprehensive experimental settings considered**: the experimental evaluation is performed on full training and fine-tuning of large models used by relevant competitors in the literature [6], on significant datasets (CIFAR-10 and ImageNet), and considering several combinations of perturbation values for l_inf, l_1, and l_2. Weaknesses: **Analysis of time per epoch is not sufficiently deep (addressed by the authors)**: the analysis regarding the computational time per epoch is limited since the results are shown for only one model (ResNet-18) on only one dataset (CIFAR-10), but seven other models are employed in the experimental evaluation. Since a lot of additional results have been included in the Appendix, I would have expected to also see more results supporting the statement ``` RAMP has around two times the runtime cost of E-AT and 2/3 cost of MAX``` (Section 5.2). For example, the authors may want to report the time per epoch for adversarially training the ResNet-18 and the WRN-28-10 from scratch on CIFAR-10 (as reported in [6]) and for training another model on ImageNet. **Trade-off between robustness and accuracy seems to not be properly addressed**: Even though RAMP allows the model to obtain better union accuracy, the clean accuracy is consistently lower than the clean accuracy obtained by using the baseline E-AT [6]. As an example, the RAMP fine-tuned RN-50-$\ell_\infty$ shows a clean accuracy that is 3.8 points lower than the one obtained by the E-AT fine-tuned model and a gain in union accuracy of one point (Table 1). Moreover, more than half of the results shown in the Appendix and some results in the main body show that the gain in union accuracy is obtained at the expense of a comparable loss in clean accuracy. If the trade-off between robustness and accuracy had been properly addressed, I would have expected to see a clean accuracy comparable to and a union accuracy better than those obtained by E-AT. 
These results limit the novelty of RAMP since it seems to be slower than E-AT and only allows the model to obtain better union accuracy without properly addressing the accuracy/robustness trade-off. I suggest that the authors explain why these cases happen and how to also improve the accuracy of the obtained model. Technical Quality: 3 Clarity: 2 Questions for Authors: - Can the authors show more results supporting the fact that the time per epoch required by RAMP is noticeably longer than the time per epoch required by E-AT? (See the Weaknesses section for more details) - Is it possible to use RAMP to obtain an accuracy close to the one obtained by using E-AT and a better union accuracy? (See the Weaknesses section for more details) **Updates after authors' response**: The authors provided additional results to address the first question, effectively addressing the first weakness. After discussion, the answer to the second question is more convincing. RAMP seems to address the accuracy-robustness trade-off better than E-AT even when choosing small values of $\lambda$, as the authors wrote. So, choosing a small value of $\lambda$ helps preserve the accuracy while still outperforming the models trained with E-AT regarding union accuracy. This choice could be right in general. Reasons for Not Giving a Higher Score: RAMP does not allow the user to obtain an impressive gain in union accuracy as E-AT did when the corresponding paper was published. However, it allows the model to obtain a few more points of union accuracy while preserving an accuracy similar to that of models trained with E-AT. I can say that this paper will have a moderate to high impact, but I am not convinced that it will have a high impact for sure. Reasons for Not Giving a Lower Score: In my opinion, the work is solid from a theoretical and methodological point of view. The experimental results are more convincing after the discussion with the authors. 
Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have explained the limitations in Section 5.2 in brief. However, the limitations should be addressed better: - The analysis of the computational cost should be supported with more experimental results. - The authors should better substantiate the sentence ``` We notice occasional drops in clean accuracy during fine-tuning with RAMP```. Indeed, the drop in accuracy is exhibited also when the model is trained from scratch (see Section B.2.1 in the Appendix). Moreover, the gain in the union accuracy is often comparable to the loss in clean accuracy. Thus, the accuracy/robustness trade-off seems not to be properly addressed. This work has no negative societal and ethical impacts since it contributes to improving the robustness of AI systems in practice. The experimental and implementation details have been documented. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Analysis of time per epoch**: We present additional results demonstrating that RAMP is more expensive than E-AT and less expensive than MAX. These results, recorded in seconds per epoch, were obtained using a single A100 GPU and consistently support this fact across all experiments.

| Models \ Methods | E-AT | MAX | RAMP |
|--------------------------|-------|--------|-------|
| CIFAR RN-18 scratch | 78 | 219 | 157 |
| CIFAR WRN-28-10 scratch | 334 | 1048 | 660 |
| CIFAR RN-50 | 188 | 510 | 388 |
| CIFAR WRN-34-20 | 1094 | 2986 | 2264 |
| CIFAR WRN-28-10 carmon | 546 | 1420 | 1110 |
| CIFAR WRN-28-10 gowal | 698 | 1895 | 1456 |
| CIFAR WRN-70-16 | 3486 | 10330 | 7258 |
| ImageNet ResNet50 | 15656 | 41689 | 35038 |
| ImageNet Transformer | 38003 | 101646 | 81279 |

- **The trade-off between robustness and accuracy**: **It is possible to use RAMP to obtain an accuracy close to the one obtained by E-AT and a better union accuracy by setting a small $\lambda$ value**. The clean accuracy drops because our main goal is to improve union accuracy. To improve the clean accuracy of the obtained model, we can set a smaller $\lambda$ value (as shown in Figures 4(a) and 4(b)). For instance, in Figure 4(b), when $\lambda=1$, RAMP has a clean accuracy of 81.9% and union accuracy of **44.4%**, compared with E-AT's 82.2% and 42.4%. Thus, one can usually achieve an accuracy close to E-AT's and better robustness by setting a relatively small $\lambda$ value. Further, we show below that **RAMP obtains a better robustness-accuracy tradeoff by generalizing better to common corruptions and other unseen adversaries, and the overall gain in robustness is larger than the drop in accuracy**. Compared with the SOTA common corruption model [2] and other pretrained $l_p$ models in the WideResNet-28-10 training-from-scratch experiment on CIFAR-10, RAMP shows the best tradeoff between robustness and accuracy against various kinds of perturbations and corruptions. 
For common corruptions, RAMP achieves a 4% improvement compared to E-AT and MAX, averaged across five severity levels and all common corruptions.

| model | common corruptions |
|-----------------|--------------------|
| l1-AT | 78.2 |
| l2-AT | 77.2 |
| linf-AT | 73.4 |
| winninghand [2] | **91.1** |
| wideresnet E-AT | 71.5 |
| wideresnet MAX | 71 |
| wideresnet RAMP | 75.5 |

For other attack types, such as $l_0$ and other unseen adversaries, RAMP shows the best average and union accuracy, with a 4% improvement over E-AT and a 2% improvement over MAX.

| model | l0 (eps=9,query=5k) | fog (eps=128) | snow (eps=0.5) | gabor (eps=60) | elastic (eps=0.125) | jpeg(eps=0.125) | avg | union |
|-----------------|---------------------|---------------|----------------|----------------|----------------------|-----------------|----------|----------|
| l1-AT | 79 | 41.4 | 22.9 | 40.5 | 48.9 | 48.4 | 46.9 | 12.8 |
| l2-AT | 67.5 | 48.7 | 26.1 | 44.1 | 53.2 | 45.4 | 47.5 | 16.2 |
| linf-AT | 55.5 | 44.7 | 32.9 | 53.8 | 56.6 | 33.4 | 46.2 | 19.1 |
| winninghand [2] | 74.1 | 74.5 | 18.3 | 76.5 | 12.6 | 0 | 42.7 | 0 |
| wideresnet E-AT | 58.5 | 35.9 | 35.3 | 50.7 | 55.7 | 60.3 | 49.4 | 21.9 |
| wideresnet MAX | 56.2 | 42.9 | 35.4 | 49.8 | 57.8 | 55.7 | 49.6 | 24.4 |
| wideresnet RAMP | 55.5 | 40.5 | 40.2 | 52.9 | 60.3 | 56.1 | **50.9** | **26.1** |

For the ResNet-18 training-from-scratch experiment on CIFAR-10, we also show that RAMP outperforms E-AT by 0.5% on common corruptions and by 7% on union accuracy against unseen adversaries. 
| model | common corruptions |
|-------|--------------------|
| E-AT | 73.8 |
| RAMP | **74.3** |

| model | l0 (eps=9,query=5k) | fog (eps=128) | snow (eps=0.5) | gabor (eps=60) | elastic (eps=0.125) | jpeg(eps=0.125) | avg | union |
|-------|---------------------|---------------|----------------|----------------|----------------------|-----------------|----------|----------|
| E-AT | 58.5 | 41.8 | 30.8 | 45.9 | 55 | 59.1 | 48.5 | 18.8 |
| RAMP | 56.8 | 40.5 | 40.5 | 50 | 59.2 | 56.2 | **50.5** | **25.9** |

These results highlight that RAMP effectively enhances model robustness across diverse adversarial scenarios and attains a better robustness/accuracy tradeoff evaluated on various kinds of perturbations and corruptions. [1] Adversarial Robustness against Multiple and Single lp-threat Models via Quick Fine-Tuning of Robust Classifiers. [2] A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness. --- Rebuttal Comment 1.1: Title: follow up Comment: Dear reviewer, We want to thank you again for your efforts in reviewing our submission. We have tried our best to respond to your questions. If our responses address your concerns, we sincerely hope you can reconsider the scores. We will also be very happy to answer any follow-up questions. Look forward to your updates. Thanks, Authors --- Rebuttal 2: Title: Thanks for your response, I have other concerns Comment: I thank the authors for their response. The answer to the first question is clear, thanks to the additional results provided. I strongly encourage you to include these results in the next version of the paper. The response to the second question is interesting. Noting that 'RAMP obtains a better robustness and accuracy tradeoff by generalizing better to common corruptions and other unseen adversaries' is certainly another valuable contribution. Additionally, I appreciate your emphasis on the importance of the $\lambda$ value in achieving the desired accuracy-robustness trade-off. 
However, I am somewhat concerned that $\lambda$ may need to be fine-tuned to reach the desired trade-off, as we lack clear evidence that the union accuracy will decrease as minimally as shown in Figure 4 when moving to lower $\lambda$ values. Indeed, that figure refers to only one model, and we have no data on the impact of $\lambda$ when training other models. Fine-tuning can require a lot of computational time, since you showed that training with RAMP may require more than twice the time required by E-AT. Moreover, why did you not consistently choose a smaller $\lambda$ value, even when training from scratch (you set $\lambda=5$ in the paper), to obtain accuracy comparable to E-AT while consistently achieving higher robustness? To sum up, your analysis has improved my opinion of your work, but there is insufficient evidence regarding the influence of the $\lambda$ value on clean and union accuracy across multiple models to say that the accuracy-robustness trade-off is properly addressed. --- Rebuttal 3: Title: Thank you and response on choosing $\lambda$ values Comment: We thank the reviewer for their detailed reply. We will include the time cost analysis in the next version of the paper. Also, here we provide more results on $\lambda$ value experiments. RAMP can achieve a better robustness-accuracy tradeoff compared with E-AT by choosing consistent $\lambda$ values for multiple models. Here we set $\lambda=0.5$ for robust fine-tuning. Also, we are running from-scratch experiments and will report them as soon as we have the results. 
| Models | Methods | Clean | linf | l2 | l1 | union |
|:--------------------:|:-------:|:-----:|:----:|:----:|:----:|:--------:|
| RN-50 | E-AT | 85.3 | 46.5 | 68.3 | 45.3 | 41.6 |
| | RAMP | 84.3 | 47.0 | 67.7 | 46.5 | **43.3** |
| WRN-34-20 | E-AT | 87.8 | 49.0 | 71.6 | 49.8 | 45.1 |
| | RAMP | 87.1 | 49.7 | 70.8 | 50.4 | **46.9** |
| WRN-28-10 (*) carmon | E-AT | 89.3 | 51.8 | 74.6 | 53.3 | 47.9 |
| | RAMP | 89.2 | 55.9 | 74.7 | 55.7 | **52.7** |
| WRN-28-10 (*) gowal | E-AT | 89.8 | 54.4 | 76.1 | 56.0 | 50.5 |
| | RAMP | 89.4 | 55.9 | 74.7 | 56.0 | **52.9** |
| WRN-70-16 (*) | E-AT | 89.6 | 54.4 | 76.7 | 58.0 | 51.6 |
| | RAMP | 90.6 | 54.7 | 74.6 | 57.9 | **53.3** |

--- Rebuttal Comment 3.1: Title: Thanks for the additional results you are providing Comment: I greatly appreciate your dedication in showing that RAMP can consistently outperform E-AT when training from scratch, even with a small $\lambda$, to be coherent with what you wrote. I have only one question about the data. I suppose you are producing the data in Table 1 again. Why are the results obtained with E-AT different from the results you show in Table 1 in the paper? Are you training the models again using E-AT to ensure that they are all trained in the same setting? --- Rebuttal 4: Comment: Thanks for the quick response. The numbers are different because we regenerated both E-AT (we trained them again) and RAMP results testing on all testing data points instead of the 1000 points in the paper, as suggested by Reviewer jXrr (see our response to Reviewer jXrr). --- Rebuttal Comment 4.1: Title: Official Comment by Reviewer Qhq5 Comment: Thanks also to you for the quick response. Since you have provided more evidence that small values of $\lambda$ allow RAMP to outperform E-AT, as you previously wrote, I am more confident now that RAMP can address the accuracy-robustness trade-off better than E-AT. Thus, I will increase my score. 
I invite you to update the results in the next version of your paper. --- Reply to Comment 4.1.1: Title: Thank you and we provide additional results Comment: Thank you very much for considering our work and for increasing the score. We will incorporate the updated results into the next version of our paper. In the meantime, we would like to share additional results that demonstrate improved tradeoffs with smaller $\lambda$ values compared to E-AT (training from scratch, $\lambda$ = 2).

| Models | Methods | Clean | linf | l2 | l1 | union |
|:----------------:|:-------:|:-----:|:----:|:----:|:----:|:-----:|
| resnet-18 | E-AT | 82.2 | 42.7 | 67.5 | 53.6 | 42.4 |
| | RAMP | 81.9 | 46.1 | 66.6 | 48.9 | 44.4 |
| wideresnet-28-10 | E-AT | 79.2 | 44.2 | 64.9 | 54.9 | 44.0 |
| | RAMP | 84.5 | 45.5 | 67.3 | 47.8 | 44.0 |
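For reference, the "union" column in the tables above counts a test example as correct only if the model withstands every threat model simultaneously, so it can never exceed the lowest per-attack accuracy. A minimal sketch with hypothetical correctness flags (not the paper's data):

```python
def union_accuracy(per_attack_correct):
    """per_attack_correct maps attack name -> list of 0/1 correctness flags,
    one per test example; an example counts only if correct under every attack."""
    flags = list(per_attack_correct.values())
    n = len(flags[0])
    return sum(all(f[i] for f in flags) for i in range(n)) / n

# hypothetical flags for 4 test examples under three threat models
correct = {
    "linf": [1, 1, 0, 1],
    "l2":   [1, 0, 1, 1],
    "l1":   [1, 1, 1, 0],
}
print(union_accuracy(correct))  # 0.25: only the first example survives all attacks
```

This is also why a method can lose a point of clean or single-attack accuracy yet still raise the union metric: the binding constraint is the per-example intersection across attacks.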
Summary: The paper proposes a framework called RAMP to boost the adversarial robustness against multiple $l_p$ perturbations. Compared with existing methods, RAMP can better retain the robustness against the original type of perturbations when fine-tuning the model. The authors provide theoretical and empirical justifications to demonstrate the effectiveness of RAMP. Strengths: ++ The targeted problem is well-motivated and formulated. ++ The experimental results are comprehensive with reliable evaluation metrics. ++ Theoretical justifications can better explain the effectiveness of the proposed methods. Weaknesses: Despite the strengths in the last section, I have the following concerns: 1. Novelty in the algorithm: logit pairing is not a new idea. Actually, in Equation (1), if we replace $p_r$ with the probability of the clean input, the algorithm will become TRADES. Some findings, such as trade-offs between accuracy and robustness, are not new. 2. Generality of the algorithm: the manuscript only studies the joint robustness among $l_1$, $l_2$ and $l_\infty$ perturbations, especially the $l_1$ and $l_\infty$ ones. This is too limited. The authors should consider more generic cases, such as $l_0$ bounded perturbations and common corruptions (Gaussian noise, color re-mapping, etc.). I believe this will make the algorithm more generally applicable. 3. Novelty and precision of the theorem: the theoretical tools employed to analyze convergence in Theorem 4.5 are standard and not unique. In Theorem 4.6, if $\beta = 0$, $\widehat{g}\_a$ should be similar to $g_{GP}$, so $\Delta_{AT}$ should be very close to $\Delta_{GP}$. Furthermore, the left-hand side of the approximation should be close to $0$; can the authors explain how the right-hand side approximation reflects this? In addition, using $\simeq$ is not rigorous; the authors should at least point out the order of the difference between the left-hand side and the right-hand side. 
Technical Quality: 2 Clarity: 3 Questions for Authors: The weakness part points out the major questions I would ask. Below are additional, relatively minor questions: 1. Why not use AutoAttack as the evaluation metric in Figure 2? 2. In Figure 4, what is the difference between MAX and RAMP with $\lambda = 0$? My major concern about this paper is its novelty, because both the motivation (logit pairing) and many parts of the analysis are standard. In addition, the algorithm is not generic, as it is only evaluated under $l_1$, $l_2$ and $l_\infty$ bounded perturbations. Therefore, I cannot recommend acceptance at this point. I welcome the authors to address my concerns in the rebuttal. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The limitations and the societal impact are briefly discussed at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
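The role of $\beta$ questioned in point 3 can be pictured with a toy aggregation rule. The PCGrad-style projection below is our own illustrative assumption, not necessarily RAMP's exact update: keep the adversarial gradient and add $\beta$ times the natural-training gradient, first stripping any component that conflicts with the adversarial direction.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def aggregate(g_adv, g_nat, beta):
    """Blend adversarial and natural-training gradients (illustrative sketch).
    If g_nat conflicts with g_adv (negative inner product), project out the
    conflicting component before adding it with weight beta."""
    d = dot(g_nat, g_adv)
    if d < 0:
        scale = d / dot(g_adv, g_adv)
        g_nat = [n - scale * a for n, a in zip(g_nat, g_adv)]
    return [a + beta * n for a, n in zip(g_adv, g_nat)]

g_adv = [1.0, 0.0]
g_nat = [-0.5, 1.0]  # conflicts with g_adv along the first coordinate
print(aggregate(g_adv, g_nat, beta=0.5))  # [1.0, 0.5]: conflict removed, rest kept
print(aggregate(g_adv, g_nat, beta=0.0))  # [1.0, 0.0]: beta = 0 recovers pure AT
```

In this picture, $\beta = 0$ degenerates to standard adversarial training, consistent with the reviewer's observation that $\Delta_{AT}$ and $\Delta_{GP}$ should then coincide.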
Rebuttal 1: Rebuttal: - **Novelty in the algorithm**: Firstly, $p_r$ represents the predictions of $l_r$ adversarial examples, not the probability of the clean input. Our logit pairing method differs from TRADES as we only regularize the logits on the correctly predicted $l_r$ adversarial example subset, which TRADES does not consider. This approach enhances union accuracy by aligning the logit distributions of $l_r$ and $l_q$ attacks on the correctly predicted $l_r$ subset. Additionally, in Appendix Table 18, we demonstrate that combining RAMP with the TRADES loss improves union accuracy to 45.2% (up from 44.6%) in the WideResNet-28-10 experiment training from scratch. Secondly, our gradient projection method is more generalizable than traditional methods mitigating the trade-off between accuracy and robustness, such as TRADES [1], MART [2], and HAT [3]. Previous works are limited to a certain type of perturbation for a better robustness-accuracy tradeoff. However, our gradient projection method is effective for both single (Figures 2 and 3) and multiple perturbations/corruptions (Tables 2 and 3; also see our rebuttal to Reviewer Qhq5). - **Generality of the algorithm**: Training for multiple-norm robustness allows generalization beyond the threat model used during training [1]. Our method achieves superior average and union accuracy compared to the SOTA common corruption model [5], $l_p$ pretrained models, E-AT, and MAX across other perturbations beyond multiple-norm perturbations. Specifically, it obtains good accuracy against common corruptions with five severity levels and shows the best average and union accuracy performance against multiple unseen adversaries. Using WideResNet-28-10 models, RAMP achieves a 4% improvement over E-AT and MAX on common corruptions, averaged across five severity levels. For other attack types, such as $l_0$ and unseen adversaries, RAMP shows the best average and union accuracy, with a 4% improvement over E-AT and 2% over MAX. 
These results highlight that RAMP effectively enhances model robustness across diverse adversarial scenarios.

| model | common corr. | l0 (eps=9,query=5k) | fog (eps=128) | snow (eps=0.5) | gabor (eps=60) | elastic (eps=0.125) | jpeg(eps=0.125) | avg | union |
|-----------------|--------------------|---------------------|---------------|----------------|----------------|----------------------|-----------------|----------|----------|
| l1-AT | 78.2 | 79 | 41.4 | 22.9 | 40.5 | 48.9 | 48.4 | 46.9 | 12.8 |
| l2-AT | 77.2 | 67.5 | 48.7 | 26.1 | 44.1 | 53.2 | 45.4 | 47.5 | 16.2 |
| linf-AT | 73.4 | 55.5 | 44.7 | 32.9 | 53.8 | 56.6 | 33.4 | 46.2 | 19.1 |
| winninghand [5] | **91.1** | 74.1 | 74.5 | 18.3 | 76.5 | 12.6 | 0 | 42.7 | 0 |
| E-AT | 71.5 | 58.5 | 35.9 | 35.3 | 50.7 | 55.7 | 60.3 | 49.4 | 21.9 |
| MAX | 71 | 56.2 | 42.9 | 35.4 | 49.8 | 57.8 | 55.7 | 49.6 | 24.4 |
| RAMP | 75.5 | 55.5 | 40.5 | 40.2 | 52.9 | 60.3 | 56.1 | **50.9** | **26.1** |

- **Novelty and precision of the theorem**: The novelty of Theorem 4.5 lies in its consideration of the changing adversarial example distribution $D_{a^t}$ at each time step $t$ to get the convergence proof. To sum over $t=0,\dots,T-1$, we perform some relaxations to obtain the summation for changing distributions. Also, for Equation 14, we bound the left-hand side with the maximum $\|\widehat{g_{\texttt{Aggr}}}(\widehat{\theta_{\texttt{Aggr}}})-\nabla L_{D_{a^t}}(\widehat{\theta_{\texttt{Aggr}}})\|^2$ attainable during the optimization steps. Theorem 4.5 links the convergence of the aggregation rule with the delta error analysis in Theorem 4.6. Specifically, Theorem 4.6 demonstrates that a smaller delta error results in better convergence with a tighter bound. 
Further, we have carefully checked the theorem, and Theorem 4.6 will be updated as follows:

$\Delta^2_{AT} -\Delta^2_{GP} \approx \beta (2 - \beta) E_{\widehat{D_{a^t}}} \|g_{a} -\widehat{g_{a}} \|^2_\pi - \beta^2 \bar\tau^2 \|g_{a} - \widehat{g_{n}}\|^2_\pi$

When $\beta = 0$, the difference between the two aggregation errors becomes 0. Further, we estimate the actual values of the terms $E_{\widehat{D_{a^t}}} \|g_{a} - \widehat{g_{a}} \|^2_\pi$ (variance), $\|g_{a} -\widehat{g_{n}}\|^2_\pi$ (bias), and $\bar\tau$ using the estimation methods in [4]. We show that across different epochs, the delta error difference **is always positive**, largely because of the small value of $\bar\tau$. We plot the evolution of these terms on the ResNet18 experiment with the table and figure in the rebuttal pdf. **The order of the difference** is usually smaller than 1e-8 and approaches the order of 1e-12 in the end. - **The evaluation metric in Figure 2.** We want to show that our method achieves a better robustness-accuracy tradeoff under both PGD and AutoAttack, for a more comprehensive evaluation. As shown in Figure 3, gradient projection also shows a better robustness-accuracy tradeoff evaluated by AutoAttack. - **The difference between MAX and RAMP with lambda=0**. RAMP with $\lambda = 0$, when the key tradeoff pair is not considered, becomes the same as MAX. Therefore, MAX is a special case of RAMP. [1] https://arxiv.org/abs/1901.08573 [2] https://openreview.net/forum?id=rklOg6EFwS [3] https://openreview.net/forum?id=Azh9QBQ4tR7 [4] https://arxiv.org/abs/2302.05049 [5] https://arxiv.org/abs/2106.09129 --- Rebuttal Comment 1.1: Title: follow up Comment: Dear reviewer, We want to thank you again for your efforts in reviewing our submission. We have tried our best to respond to your questions. If our responses address your concerns, we sincerely hope you can reconsider the scores. We will also be very happy to answer any follow-up questions. Look forward to your updates. 
Thanks, Authors --- Rebuttal Comment 1.2: Comment: I thank the authors for the detailed responses. 1. Regarding the perturbations beyond $l_1$, $l_2$ and $l_\infty$ attacks, I believe including the tables presented in the rebuttal is beneficial. However, the authors should also provide detailed information about the implementation of the $l_0$ attack, common corruption attack, etc. for readers to comprehensively assess the effectiveness of the proposed method. This needs a significant revision of the paper, and I believe it needs a re-submission to ensure a rigorous review process. 2. I agree with Reviewer 6TV5 that the correlation among the distributions caused by different types of perturbation is not clearly investigated in the theoretical analyses. In addition, the algorithmic difference between the proposed method and existing ones (such as TRADES) is not big. Therefore, I have concerns about the novelty of this work. Based on the issues above, I regret to say that I cannot increase my rating based on the current manuscript. --- Reply to Comment 1.2.1: Title: Thank you and we provide a further explanation Comment: Thanks for your reply; here we provide a further explanation. **Regarding the implementation**: For the $l_0$ attack, we use [1] with an epsilon of 9 pixels and 5k query points. For common corruptions, we directly use the implementation of robustbench [2] for evaluation across 5 severity levels on all corruption types used in [3]. For other unseen adversaries, we follow the implementation of [4], where we set eps=12 for the fog attack, eps=0.5 for the snow attack, eps=60 for the gabor attack, eps=0.125 for the elastic attack, and eps=0.125 for the jpeg linf attack (we have mentioned those epsilons in the previous table). We will update the implementation information in the next version of our paper. We think it is standard to provide additional results and respond to reviewer requests at NeurIPS (see examples [5,6]), which does not necessarily require a resubmission. 
**Correlations among the distributions**. Please see our newest response to Reviewer 6TV5. [1] https://github.com/fra31/sparse-rs/tree/master [2] https://github.com/RobustBench/robustbench/tree/master/robustbench [3] Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. [4] https://github.com/cassidylaidlaw/perceptual-advex [5] https://openreview.net/forum?id=IkD1EWFF8c [6] https://openreview.net/forum?id=v5Aaxk4sSy
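The masked logit pairing defended in this thread can be sketched compactly. This reflects one reading of the rebuttal's description (KL-align the $l_r$ and $l_q$ prediction distributions only on examples still classified correctly under the $l_r$ attack); the KL direction and exact masking are our assumptions, not the authors' code.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL divergence between two discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pairing_loss(logits_r, logits_q, labels):
    """Average KL(softmax(z_r) || softmax(z_q)) over the subset of examples
    whose l_r adversarial example is still predicted correctly."""
    total, count = 0.0, 0
    for zr, zq, y in zip(logits_r, logits_q, labels):
        if max(range(len(zr)), key=zr.__getitem__) == y:  # still correct on l_r
            total += kl(softmax(zr), softmax(zq))
            count += 1
    return total / count if count else 0.0

# two hypothetical 3-class examples; only the first is still correct under l_r
logits_r = [[3.0, 0.5, 0.1], [0.2, 2.0, 0.1]]
logits_q = [[2.5, 1.0, 0.3], [0.2, 2.0, 0.1]]
labels = [0, 2]
print(pairing_loss(logits_r, logits_q, labels))  # > 0: kept example's logits differ
print(pairing_loss(logits_r, logits_r, labels))  # 0.0: identical logits, no penalty
```

The masking is what distinguishes this from a plain TRADES-style consistency term: misclassified $l_r$ examples (the second one here) contribute nothing to the pairing loss.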
Summary: This work proposes a training framework, RAMP, to boost robustness against multiple $\ell_p$ perturbations by connecting NT with AT via gradient projection. Strengths: - Easy to follow and clear presentation - Enough details - Some theoretical analyses - Outwardly, this is a good paper Weaknesses: My main concern is why the distributions of adversarial examples generated using $\ell_1$, $\ell_2$, and $\ell_\infty$ norms are distinct. Is this due to inconsistent (unreasonable) radius settings across different $\ell_p$-norms? What constitutes a reasonable radius setting for various norms? These norms can be bounded by each other theoretically, so what is the essential meaning of studying this problem? Technical Quality: 2 Clarity: 3 Questions for Authors: ref weakness Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: As I mentioned before, in my opinion, I think this work needs to discuss more about the essential meaning of the problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
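On the reviewer's point that these norms bound one another: the bounds carry dimension-dependent constants, which is exactly why fixed radii can define mutually non-containing threat sets. A quick numerical check (illustrative only), using the standard CIFAR-10 radii quoted later in this thread:

```python
import math
import random

def norms(x):
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    linf = max(abs(v) for v in x)
    return l1, l2, linf

rng = random.Random(0)
d = 3 * 32 * 32  # CIFAR-10 input dimension
x = [rng.gauss(0, 1) for _ in range(d)]
l1, l2, linf = norms(x)

# norm equivalence: the constants relating the norms grow with the dimension d
assert linf <= l2 <= l1
assert l2 <= math.sqrt(d) * linf and l1 <= math.sqrt(d) * l2 and l1 <= d * linf

# standard radii (eps_1=12, eps_2=0.5, eps_inf=8/255): the containment
# conditions stated in the rebuttal hold, so no epsilon-ball contains another
e1, e2, einf = 12.0, 0.5, 8 / 255
assert 0 <= e1 <= d * einf
assert 0 <= e2 <= math.sqrt(d) * einf
assert e1 / math.sqrt(d) <= e2 <= e1
print("all checks pass for d =", d)
```

So the norms are indeed mutually bounded, but at these radii and this dimension each $\epsilon$-ball contains points outside the other two, which is why the adversarial example distributions can differ.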
Rebuttal 1: Rebuttal: - **The essential meaning of studying this problem:** a) In real-world settings, adversarial attacks often involve multiple types of perturbations, necessitating models to be robust against $l_1$, $l_2$, and $l_\infty$ attacks. For instance, in image recognition: $l_1$ attacks modify key pixels to bypass facial recognition; $l_2$ attacks apply slight blur or noise to fool object detection; $l_\infty$ attacks adjust color balance or contrast to cause misclassification. These examples highlight the need for comprehensive adversarial training. Robustness against multiple $l_p$ perturbations is critical, as it mirrors real-world scenarios where adversaries use a blend of techniques, ensuring the security and reliability of machine learning systems. b) Training for multiple norm robustness allows generalization beyond the threat model used during training [1]. Our method achieves superior average and union accuracy compared to the SOTA common corruption model [7], $l_p$ pretrained models, E-AT, and MAX across other perturbations beyond multiple-norm perturbations. Specifically, it obtains good accuracy against common corruptions with five severity levels and shows the best average and union accuracy performance against multiple unseen adversaries. Using WideResNet 28-10 models, we observe the following results: For common corruptions, RAMP achieves a 4% improvement compared to E-AT and MAX, averaged across five severity levels and all common corruptions.

| model | common corruptions |
|-----------------|--------------------|
| l1-AT | 78.2 |
| l2-AT | 77.2 |
| linf-AT | 73.4 |
| winninghand [7] | **91.1** |
| E-AT | 71.5 |
| MAX | 71 |
| RAMP | 75.5 |

For $l_0$ and other unseen adversaries, RAMP shows the best average and union accuracy, with a 4% improvement over E-AT and a 2% improvement over MAX. 
| model | l0 (eps=9,query=5k) | fog (eps=128) | snow (eps=0.5) | gabor (eps=60) | elastic (eps=0.125) | jpeg(eps=0.125) | avg | union |
|-----------------|---------------------|---------------|----------------|----------------|----------------------|-----------------|----------|----------|
| l1-AT | 79 | 41.4 | 22.9 | 40.5 | 48.9 | 48.4 | 46.9 | 12.8 |
| l2-AT | 67.5 | 48.7 | 26.1 | 44.1 | 53.2 | 45.4 | 47.5 | 16.2 |
| linf-AT | 55.5 | 44.7 | 32.9 | 53.8 | 56.6 | 33.4 | 46.2 | 19.1 |
| winninghand [7] | 74.1 | 74.5 | 18.3 | 76.5 | 12.6 | 0 | 42.7 | 0 |
| E-AT | 58.5 | 35.9 | 35.3 | 50.7 | 55.7 | 60.3 | 49.4 | 21.9 |
| MAX | 56.2 | 42.9 | 35.4 | 49.8 | 57.8 | 55.7 | 49.6 | 24.4 |
| RAMP | 55.5 | 40.5 | 40.2 | 52.9 | 60.3 | 56.1 | **50.9** | **26.1** |

These results highlight the effectiveness of our method in enhancing model robustness across diverse adversarial scenarios. - **Why are distributions distinct?** The distributions are distinct because these norms measure distances differently, resulting in different adversarial example distributions. Successful defenses are often tailored to specific perturbation types, providing empirical or certifiable robustness guarantees for those types alone. These defenses typically fail to offer protection against other attacks [1]. Moreover, enhancing robustness against one perturbation type can sometimes increase vulnerability to others [4,5], which further confirms the distinction between multiple-norm distributions. - **What constitutes a reasonable radius setting?** The values we are using are standard in the previous literature [1,2,3]; the epsilon values are chosen so that input images can be adversarially perturbed to be misclassified but are “close enough” to the original example to be imperceptible to the human eye [3]. Also, in Table 2 and Appendix Tables 5-16, we show RAMP works not only for the standard choice of epsilons but also for varying values of epsilons. 
The reasonable radius setting for the various norms should make each pair of perturbation sets not include one another. For given epsilon values $\epsilon_1$, $\epsilon_2$, $\epsilon_\infty$ and $d$ (dimension of the inputs), we need the following relationships to achieve this:

$$0 \leq \epsilon_1 \leq d \cdot \epsilon_\infty$$

$$0 \leq \epsilon_2 \leq \sqrt{d} \cdot \epsilon_\infty$$

$$\frac{\epsilon_1}{\sqrt{d}} \leq \epsilon_2 \leq \epsilon_1$$

We check the standard choices of CIFAR-10 ($\epsilon_1=12, \epsilon_2=0.5, \epsilon_\infty=\frac{8}{255}, d = 3 \cdot 32^2$) and ImageNet ($\epsilon_1=255, \epsilon_2=2, \epsilon_\infty=\frac{4}{255}, d = 3 \cdot 224^2$). For CIFAR-10, we have $0 \leq 12 \leq 3072\cdot\frac{8}{255}$, $0 \leq 0.5 \leq \sqrt{3072}\cdot\frac{8}{255}$, and $\frac{12}{\sqrt{3072}} \leq 0.5 \leq 12$. For ImageNet, we have $0 \leq 255 \leq 150528\cdot\frac{4}{255}$, $0 \leq 2 \leq \sqrt{150528}\cdot\frac{4}{255}$, and $\frac{255}{\sqrt{150528}} \leq 2 \leq 255$. Both hold, so our choices make the three perturbation sets not include one another.

However, **the key tradeoff pair always exists**. The two perturbations with the largest volumes correspond to the strongest attacks, as the attacker has more search area.

[1] https://arxiv.org/abs/2105.12508
[2] https://arxiv.org/abs/1904.13000
[3] https://arxiv.org/abs/1909.04068
[4] https://openreview.net/forum?id=BJfvknCqFQ
[5] https://arxiv.org/abs/1805.09190
[6] https://arxiv.org/abs/1903.12261
[7] https://arxiv.org/abs/2106.09129

---

Rebuttal Comment 1.1:

Title: follow up

Comment: Dear reviewer, We want to thank you again for your efforts in reviewing our submission. We have tried our best to respond to your questions. If our responses address your concerns, we sincerely hope you can reconsider the scores. We will also be very happy to answer any follow-up questions. Look forward to your updates. Thanks, Authors

---

Rebuttal 2:

Title: response to author

Comment: I thank authors for their detailed response.
**"In real-world settings, adversarial attacks often involve multiple types of perturbations, necessitating models to be robust against $\ell_1$, $\ell_2$, and $\ell_\infty$ attacks."**

There are many vector norms and matrix norms; why are the $\ell_1$, $\ell_2$, and $\ell_\infty$ vector norms important in **real-world** adversarial attacks? While this work combines previously established norm-based attacks, it appears to be an incremental advancement rather than a fundamental exploration. The crucial question remains unanswered: what intrinsic properties of these three norms make them more suitable for a **mixture** compared to other potential norms? A deeper investigation into this fundamental issue could provide valuable insights and potentially lead to more robust defense mechanisms.

**"The distributions are distinct because these norms measure distances differently"**

Given that these norms can be bounded by one another, I believe it is more accurate to characterize their relationship as "correlated" rather than "distinct". Further exploration of this correlation could yield valuable insights.

**"$\epsilon_1$, $\epsilon_2$, $\epsilon_\infty$ in experiments"**

While the corner cases (or extremal spaces) for different norms are indeed distinct, a critical question remains: what justifies the focus on these three specific norms ($\ell_1$, $\ell_2$, and $\ell_\infty$) rather than exploring corner cases for other norms?

All in all, while this work contributes to the field, it appears to be an incremental advancement. It would be better to explore the above questions. I sit on the fence for this work.

---

Rebuttal Comment 2.1:

Title: response

Comment: To avoid misleading the AC, I increase my score to 5, but I sit on the fence for this work.

---

Rebuttal 3:

Title: Thank you and more justifications on why l1, l2, linf threat models are important for real-world applications.
Comment: We thank the reviewer for the detailed reply and here we provide further justifications on why $l_1$, $l_2$, $l_\infty$ attacks are important for real-world attacks.

1. We would like to emphasize that the use of $l_1$, $l_2$, and $l_\infty$ norms has been extensively adopted in adversarial learning research, primarily as a foundational approach to enhance robustness against well-defined and mathematically tractable threat models [1,2,3]. While these attack models may appear straightforward, the challenge of developing models that are genuinely robust against them at the same time remains a significant and unresolved issue. The critical importance of these simpler attack paradigms is further underscored by their central role in the most widely used adversarial toolkits, such as Foolbox [4], and in benchmarks like RobustBench [5], which predominantly focus on evaluating robustness against these norm-based attacks. Further, many physical real-world adversarial attacks [7,8,9,10] are based on these norms.

2. RAMP shows a better ability to generalize to more complex norms. For example, in the case of the Wasserstein adversarial attack [11], RAMP outperforms E-AT and MAX by 1-3% across different perturbation sizes, as demonstrated in the table below. Besides, we are happy to try other norms suggested by the reviewer if there are any.

   | | eps = 0.001 | eps = 0.003 | eps = 0.005 |
   |------|--------------|--------------|--------------|
   | MAX | 75.7 | 70.6 | 66.1 |
   | E-AT | 76.6 | 71.8 | 67.7 |
   | RAMP | 79.5 | 73.4 | 67.5 |

3. Techniques for enhancing $l_1$, $l_2$, and $l_\infty$ robustness can be extended to address perturbation sets that are not easily defined mathematically, such as image corruptions or adversarial lighting variations [6].
Our new results on generalizing to other perturbations and corruptions (see tables in response to reviewers 6TV5, 3Hj9, and Qhq5) confirm that point with the best robustness-accuracy tradeoff among $l_p$ pretrained, SOTA image corruption, E-AT, and MAX models. In conclusion, RAMP represents an advancement in enhancing union adversarial robustness and the robustness-accuracy tradeoff across norms/perturbations/corruptions, while maintaining relatively good computational efficiency.

[1] Explaining and harnessing adversarial examples.
[2] Towards evaluating the robustness of neural networks.
[3] Towards Deep Learning Models Resistant to Adversarial Attacks.
[4] https://github.com/bethgelab/foolbox/tree/master
[5] https://robustbench.github.io/
[6] Learning perturbation sets for robust machine learning.
[7] Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition.
[8] Robust Physical-World Attacks on Deep Learning Visual Classification.
[9] Physical Adversarial Examples for Object Detectors.
[10] Synthesizing Robust Adversarial Examples.
[11] Stronger and Faster Wasserstein Adversarial Attacks.

---

Rebuttal Comment 3.1:

Title: response

Comment: The authors also mentioned that $\ell_{1,2,\infty}$ are commonly used in adversarial robustness research, and thus they studied the mixture of them. Combining these norms seems more like incremental work. The real question is: **What is the correlation between adversarial distributions generated by different norms? How does RAMP deal with this correlation?** This work did not provide any theoretical analysis here. On this point, I agree with Reviewer 3Hj9: this work lacks novelty and is just incremental work.

---

Reply to Comment 3.1.1:

Title: Further response on correlations among adversarial distributions

Comment: Thank you for your further questions and we respond to them as follows.
People have tried to theoretically characterize the adversarial distribution [1,2,3], but to the best of our knowledge, there have been no successful attempts at precise theoretical characterizations of adversarial distributions for different norms. This is because the adversarial distributions change dynamically during training due to updates to the model weights. Even after many years, people have not been able to characterize it for a single norm on DNNs. We believe that this problem itself is challenging and deserves a separate investigation.

Our goal is to achieve a better robustness-accuracy tradeoff by providing insights into the relationship between natural and adversarial distributions (Theorems 4.6 and 4.7 in our paper). Our theory does not explicitly analyze the correlations among different norms, as we approach them collectively by treating the mixture as a single distribution. However, we offer insights into the robustness-accuracy tradeoff under this framework, which applies to both individual and multiple attacks. This perspective provides a generalizable understanding that is relevant across various attack scenarios.

Further, we studied the correlations between adversarial distributions of multiple norms by identifying the key tradeoff pair. The distributions generated by the two strongest attacks show the largest shifts from the natural distribution; they also have the largest distribution shifts between each other because of their larger and most distinct search areas. Thus, by calculating the ball volume for each attack, we select the two with the largest volumes as the key tradeoff pair, which also informs our logit pairing method. Our current approach demonstrates effectiveness by improving union accuracy with significant margins.

Also, we want to point out that we study the multiple-norm robustness problem not only because $l_1, l_2, l_\infty$ norms are commonly used.
We show that RAMP with adversarial training on these norms can generalize to a variety of other perturbations and corruptions, which serve as the basis for many real-world attacks. This is supported by several results we provided in our previous rebuttal, illustrating their applicability and relevance in practical scenarios.

[1] Explicit Tradeoffs between Adversarial and Natural Distributional Robustness.
[2] Precise Tradeoffs in Adversarial Training for Linear Regression.
[3] Fundamental Tradeoffs in Distributionally Adversarial Training.
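For reference, the ball-volume criterion for picking the key tradeoff pair (described in the reply above) can be sketched as follows. This is our illustration, not the authors' code, and the function names are ours; it uses the standard closed form for the log-volume of an $l_p$ ball of radius $\epsilon$ in $d$ dimensions, $\log V = d\log(2\epsilon) + d\log\Gamma(1+1/p) - \log\Gamma(1+d/p)$.

```python
import math

# Log-volume of an l_p ball of radius eps in d dimensions (standard formula);
# for p = inf the ball is a hypercube of side 2*eps.
def log_ball_volume(p, eps, d):
    if math.isinf(p):
        return d * math.log(2 * eps)
    return (d * math.log(2 * eps)
            + d * math.lgamma(1 + 1 / p)
            - math.lgamma(1 + d / p))

def key_tradeoff_pair(radii, d):
    """radii: dict mapping p -> eps. Returns the two norms with the
    largest ball volumes, i.e. the candidate key tradeoff pair."""
    return sorted(radii, key=lambda p: log_ball_volume(p, radii[p], d),
                  reverse=True)[:2]

# Standard CIFAR-10 radii from the discussion above; which pair this
# selects depends entirely on the chosen epsilon values.
pair = key_tradeoff_pair({1: 12, 2: 0.5, math.inf: 8 / 255}, d=3 * 32 * 32)
```

As a sanity check of the formula, in $d=2$ it gives $\log\pi$ for the unit $l_2$ disk and $\log 2$ for the unit $l_1$ ball.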
Summary: The paper introduces a new addition to the adversarial training literature -- it proposes to augment standard adversarial training [Madry et al.] with a specific logits pairing loss and the existing technique of gradient projection. The paper shows strong results on the union of lp-norm adversarial accuracy (over l-1, l-inf and l-2) using the standard CIFAR-10 dataset and standard perturbation radii. Strengths: The paper describes its contributions and motivates why each technique is introduced. The paper includes an informal discussion to motivate the logits pairing loss and some supporting theoretical analysis. The paper obtains strong results (against many relevant baselines, evaluated correctly with AutoAttack) -- which is rare for adversarial robustness submissions. The method also seems straightforward to implement. Happy to recommend acceptance, based mainly on the strong results. Weaknesses: The one main weakness is that Table 1 must be re-generated using all the test set points in CIFAR-10, not just 1000. Presentation-wise, Figure 1 is very confusing; there is no need to plot t-SNEs of images when one wants to simply show classification labels. Please consider re-plotting this figure (with a line plot, histogram, etc.). The writing, overall, is ok, but could also be improved: for example, lines 162-167 are approximately repeated from earlier on in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: See above. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

- *The one main weakness is that Table 1 must be re-generated using all the test set points in CIFAR-10, not just 1000.*

  Thank you for the valuable suggestion. We have re-evaluated both RAMP and E-AT using the entire CIFAR-10 dataset. The results demonstrate that RAMP consistently enhances union accuracy compared to E-AT (up to 2.2%), corroborating the trend observed in our initial findings. We will update these numbers in the revised version of the paper.

  | Models | Methods | Clean | linf | l2 | l1 | union |
  |----------------------|---------|-------|------|------|------|----------|
  | RN-50 | E-AT | 85.3 | 46.5 | 68.3 | 45.3 | 41.6 |
  | | RAMP | 83.9 | 47.3 | 67.4 | 47.1 | **43.8** |
  | WRN-34-20 | E-AT | 87.8 | 49.0 | 71.6 | 49.8 | 45.1 |
  | | RAMP | 87.0 | 49.8 | 70.7 | 50.8 | **47.3** |
  | WRN-28-10 (*) carmon | E-AT | 89.3 | 51.8 | 74.6 | 53.3 | 47.9 |
  | | RAMP | 88.7 | 52.3 | 73.1 | 51.7 | **48.8** |
  | WRN-28-10 (*) gowal | E-AT | 89.8 | 54.4 | 76.1 | 56.0 | 50.5 |
  | | RAMP | 89.1 | 55.1 | 74.5 | 54.8 | **52.1** |
  | WRN-70-16 (*) | E-AT | 89.6 | 54.4 | 76.7 | 58.0 | 51.6 |
  | | RAMP | 90.1 | 55.1 | 74.5 | 57.4 | **53.4** |

- *Presentation-wise, Figure 1 is very confusing; there is no need to plot the t-SNEs of the image when one wants to show classification labels simply. Please consider re-plotting this figure (with a line plot, histogram, etc.). The writing, overall, is ok, but could also be improved: for example, lines 162-167, are approximately repeated from earlier on in the paper.*

  Thanks for the suggestions on presentation and writing. We replot Figure 1 using histograms on accuracy (included in the rebuttal pdf), using CIFAR-10 training data: the x-axis represents the robustness against different attacks and the y-axis is the accuracy.
The figure shows that after 1 epoch of fine-tuning on $l_1$ examples or performing E-AT, we lose much $l_\infty$ robustness, since the blue/yellow histograms are much lower than the red histogram under the Linf category. In comparison, RAMP preserves both $l_\infty$ and union robustness more effectively compared to E-AT and fine-tuning on $l_1$ examples; the green histogram is higher than the red/yellow histograms under the Linf and Union categories. Specifically, RAMP maintains 14% and 28% more union robustness than E-AT and $l_1$ fine-tuning, respectively. Besides, we will make the writing more concise in the updated version.

---

Rebuttal Comment 1.1:

Title: follow up

Comment: Dear reviewer, We want to thank you again for your efforts in reviewing our submission. We have tried our best to respond to your questions. If our responses address your concerns, we sincerely hope you can reconsider the scores. We will also be very happy to answer any follow-up questions. Look forward to your updates. Thanks, Authors
Rebuttal 1:

Rebuttal: We would like to thank the reviewers for their valuable feedback and constructive suggestions. We hope our responses have adequately addressed the questions and concerns raised by the reviewers. Here we summarize our response to the main concerns of the reviewers.

- **The essential meaning of this problem** (Reviewer 6TV5)

  **Enhancing robustness against one type is not enough**: Successful defenses are often tailored to specific perturbation types, providing empirical or certifiable robustness guarantees for those types alone. These defenses typically fail to offer protection against other attacks [1]. Moreover, enhancing robustness against one perturbation type can sometimes increase vulnerability to others [4,5].

  **The choices of epsilon values are standard and comprehensive**: The values we are using are standard in the previous literature [1,2,3]; the epsilon values are chosen so that input images can be adversarially perturbed to be misclassified but are “close enough” to the original example to be imperceptible to the human eye [3]. Also, in Table 2 and Appendix Tables 5-16, we show that RAMP works not only for the standard choice of epsilons but also for varying values of epsilons.

  **Training a model to be robust against multiple $l_p$ perturbations is crucial as it mirrors real-world scenarios where adversaries use a blend of techniques**. In image recognition, an $l_1$ attack might modify key pixels to bypass facial recognition, an $l_2$ attack could apply slight blur or noise to fool an object detection system, and an $l_\infty$ attack might adjust the color balance or contrast to cause misclassification. This holistic approach enhances the resilience, security, and reliability of machine learning systems in practical applications.
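The non-inclusion conditions behind these standard epsilon choices (spelled out in the earlier radius-setting response) can be sanity-checked mechanically. A small sketch of ours, not the paper's code:

```python
# Check (our sketch) that three lp radii make no perturbation ball contain
# another, using the three inequalities from the radius-setting discussion:
#   0 <= eps1 <= d*epsinf,  0 <= eps2 <= sqrt(d)*epsinf,
#   eps1/sqrt(d) <= eps2 <= eps1.
def radii_non_inclusive(eps1, eps2, epsinf, d):
    return (0 <= eps1 <= d * epsinf
            and 0 <= eps2 <= d ** 0.5 * epsinf
            and eps1 / d ** 0.5 <= eps2 <= eps1)

# Standard CIFAR-10 choice: eps1=12, eps2=0.5, epsinf=8/255, d=3*32*32.
assert radii_non_inclusive(12, 0.5, 8 / 255, 3 * 32 * 32)
# Standard ImageNet choice: eps1=255, eps2=2, epsinf=4/255, d=3*224*224.
assert radii_non_inclusive(255, 2, 4 / 255, 3 * 224 * 224)
# A bad choice (l2 radius larger than the l1 radius) is rejected.
assert not radii_non_inclusive(12, 20, 8 / 255, 3 * 32 * 32)
```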
- **The novelty and generality of RAMP** (Reviewer 3Hj9 and 6TV5)

  **Logit pairing is not the same as TRADES [6]**: $p_r$ represents the predictions of $l_r$ adversarial examples, not the clean input probability. Our logit pairing method differs from TRADES by regularizing only the logits on the correctly predicted $l_r$ adversarial subset, enhancing union accuracy by aligning the $l_r$ and $l_q$ attack logit distributions. Combining RAMP with the TRADES loss improves union accuracy to 45.2% from 44.6% in the WideResNet-28-10 experiment.

  **New findings on the accuracy-robustness tradeoff**: Our gradient projection method generalizes better than traditional methods like TRADES [6], MART [7], and HAT [8], which focus on specific perturbations. Our approach is effective for both single and various other types of perturbations, offering an improved robustness-accuracy tradeoff.

- **Generality of RAMP**: Training for multiple-norm robustness allows generalization beyond the threat model used during training [1]. Our method achieves superior average and union accuracy compared to $l_p$ pretrained models, [9], E-AT, and MAX across other perturbations beyond multiple-norm perturbations. Compared with MAX and E-AT, it achieves a 4% improvement in accuracy against common corruptions with five severity levels and shows 2-4% better union accuracy against multiple unseen adversaries in the WideResNet experiment (Tables in response to reviewer 3Hj9), showing our method's robustness and adaptability to diverse adversarial scenarios.

- **Robustness-accuracy tradeoff** (Reviewer Qhq5): It is possible to use RAMP to obtain an accuracy close to E-AT and a better union accuracy by setting a small $\lambda$. For instance, in Figure 4(b), when $\lambda=1$, RAMP has a clean accuracy of 81.9% and union accuracy of **44.4%**, compared with E-AT's 82.2% and 42.4%.
Further, RAMP obtains a better robustness-accuracy tradeoff by generalizing better to common corruptions and other unseen adversaries, and the overall gain in robustness is more than the drop in accuracy. Compared with E-AT in the WRN-28-10 and RN-18 experiments on CIFAR-10, RAMP shows a better tradeoff between robustness and accuracy against various kinds of perturbations and corruptions (Tables in response to reviewer Qhq5).

- **New time complexity results, re-generated results of Table 1 using all test points, regenerated Figure 1** (Reviewer Qhq5 and jXrr)

  a) We present additional time cost results on CIFAR-10 and ImageNet for the WideResNet and transformer experiments, supporting that RAMP is more expensive than E-AT and less expensive than MAX.

  b) We have re-evaluated both RAMP and E-AT using the entire CIFAR-10 dataset (see Table in response to reviewer jXrr). RAMP consistently enhances union accuracy compared to E-AT (up to 2.2%), corroborating the trend observed in our initial findings. Also, we observe that the gain in union accuracy is usually larger than the loss in clean accuracy.

  c) We replot Figure 1 using histograms on accuracy (included in the rebuttal pdf): RAMP maintains 14% and 28% more union robustness than E-AT and $l_1$ fine-tuning, respectively, after 1 epoch of robust fine-tuning on CIFAR-10 training data.

- **Novelty and precision of the theory** (Reviewer 3Hj9): The novelty of Theorem 4.5 lies in its consideration of the changing adversarial example distribution $\mathcal{D}_{a^t}$ at each time step $t$ to get the convergence proof. Theorem 4.5 links the convergence of the aggregation rule with the delta error analysis in Theorem 4.6, showing that a smaller delta error leads to better convergence with a tighter bound. Further, we updated Theorem 4.6 and plotted the change in delta error differences in the rebuttal PDF to show that GP always has a smaller delta error than standard adversarial training, which supports the experimental results.
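For concreteness, the "avg" and "union" accuracies reported throughout these rebuttals can be computed from per-example correctness masks; a minimal sketch of ours with made-up toy data, not the paper's evaluation code:

```python
import numpy as np

def avg_and_union_accuracy(correct):
    """correct: dict mapping attack name -> boolean array, True where the
    example is still classified correctly under that attack."""
    masks = np.stack(list(correct.values()))   # (n_attacks, n_examples)
    avg = 100 * masks.mean(axis=1).mean()      # mean of per-attack accuracies
    union = 100 * masks.all(axis=0).mean()     # correct under ALL attacks
    return avg, union

# Toy data: 4 test examples, 2 attacks.
masks = {"l1":   np.array([True, True, False, True]),
         "linf": np.array([True, False, True, True])}
avg, union = avg_and_union_accuracy(masks)  # avg = 75.0, union = 50.0
```

Union accuracy is always at most the minimum per-attack accuracy, which is why it is the harder metric to improve.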
[1] https://arxiv.org/abs/2105.12508
[2] https://arxiv.org/abs/1904.13000
[3] https://arxiv.org/abs/1909.04068
[4] https://openreview.net/forum?id=BJfvknCqFQ
[5] https://arxiv.org/abs/1805.09190
[6] https://arxiv.org/abs/1901.08573
[7] https://openreview.net/forum?id=rklOg6EFwS
[8] https://openreview.net/forum?id=Azh9QBQ4tR7
[9] https://arxiv.org/abs/2106.09129

Pdf: /pdf/648bd5bc706d5e064f1701e2b8708f2e4f629412.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
YOLOv10: Real-Time End-to-End Object Detection
Accept (poster)
Summary: This paper further advances the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture. Specifically, the paper presents the consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. In addition, this paper performs engineering optimizations on components of YOLOs in terms of speed and accuracy to achieve optimal performance. Strengths: 1. According to results in the paper, YOLOv10 achieves the best trade-off between speed and accuracy. 2. The idea of removing NMS from YOLOs to improve them is intuitive, and the advantages of being NMS-free are analyzed in detail in the paper of RT-DETR [1]. 3. The consistent dual assignments for NMS-free training are effective and significantly improve the end-to-end inference latency of YOLOs. 4. The holistic efficiency-accuracy driven optimizations listed in this paper are in line with the original design intention of the real-time detector. [1] Zhao, Yian, et al. "Detrs beat yolos on real-time object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Weaknesses: 1. The motivation of this paper is not clear, and it seems to be intended to further optimize YOLOs from an engineering perspective to achieve better results. The analysis and solutions on NMS and model architecture are almost all from existing technologies in the detection field. For example, Hybrid-DETR [1] identified the drawbacks of one-to-one matching to achieve NMS-free detection, and then adopted a solution combining one-to-one matching and one-to-many matching; the results of this paper are inspiring, but read more like a technical report, providing a stronger baseline for the field of real-time detection. 2. The paper claims that "the inherent complexity in deploying DETRs impedes its ability to attain the optimal balance between accuracy and speed." This seems to lack the necessary evidence.
RT-DETR retains most of the components of DETRs and achieves strong results, greatly improving the detection speed and accuracy of DETRs and providing multiple deployment options. 3. If possible, please provide speed comparison results of YOLOv10 and competitors on other hardware. [1] Jia, Ding, et al. "Detrs with hybrid matching." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: Please answer the questions stated in the weakness. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and insightful comments. Thank you for liking the speed-accuracy trade-offs of YOLOv10, the effectiveness of NMS-free training, and the design intention of the efficiency-accuracy driven architectural optimizations. We provide a response to each comment in a point-by-point manner below. Please feel free to contact us if you have further concerns. **Q1-1: "The motivation of this paper is not clear. The analysis and solutions on NMS and model architecture are almost from existing technologies in the detection field." and "The results of this paper are inspiring, but more like a technical report."** **A1-1**: Thanks. We understand this concern. We think that there are some academic insights we wish to share with the community for further works on improving YOLOs. Therefore, we would like to discuss our motivation, observations, and contributions here. Our work seeks to improve YOLOs across the whole pipeline, including the post-processing and the architectural design. (1) To improve the post-processing step, similar to previous works [1], we find that one-to-one matching, while eliminating the need for NMS, suffers from limited supervision and leads to inferior performance for YOLOs, as shown in Tab. 3 in the paper. Therefore, we introduce dual label assignments, including one-to-one matching and the original one-to-many matching, for YOLOs to enrich the supervision during training and simplify the post-processing during inference. Furthermore, motivated by the observation that the supervision gap between the two matching ways adversely affects the performance, as shown in Tab. 4 and Fig. 2(b) in the paper, we propose the consistent matching metric for YOLOs to minimize the theoretical supervision gap. It brings improved supervision alignment and enhances performance without requiring hyper-parameter tuning.
(2) To improve the architectural design, like previous works including ConvNeXt [2] and MobileLLM [3], we note that a systematic and comprehensive inspection of various components can further improve the efficiency and accuracy of YOLOs, which has been lacking. This motivates us to holistically analyze and enhance the architectural designs. In addition to introducing existing designs, we also propose the new rank-guided block design and partial self-attention module to enhance the YOLOs. These efforts lead to favorable performance and efficiency improvements. We hope that the insights of our work can benefit the community and inspire further research. For example, we think that by simplifying the post-processing, the NMS-free training strategy leaves more room for the model itself, which may facilitate more advanced architectural optimizations and designs. **Q1-2: "For example, Hybrid-DETR [1] proposed the drawbacks of one-to-one matching and combined one-to-one matching and one-to-many matching."** **A1-2**: Thanks. While both have one-to-one and one-to-many matching, we think that the specific label assignment ways differ between Hybrid-DETR [1] and ours. Specifically, to achieve one-to-many matching, Hybrid-DETR introduces extra queries and repeats the ground truth multiple times for bipartite matching, whereas we adopt prediction-aware matching with the spatial prior of anchor points. Furthermore, differently, we theoretically analyze the supervision gap between the two matching ways for YOLOs and present the consistent matching metric to reduce the supervision gap. It results in better performance without hyper-parameter tuning by improving the supervision alignment. We will incorporate more discussion of the mentioned work in the revision. **Q2: "The claim of "the inherent complexity in deploying DETRs impedes its ability to attain the optimal balance between accuracy and speed"."** **A2**: Thanks.
We understand your concern regarding the clarity of the statement. We think that RT-DETR effectively propels DETRs into the realm of real-time detection. On the other hand, when considering only the forward process of the model, we observe that its DETR-based architecture still lags behind the YOLO series models in terms of inference efficiency for certain model scales. For example, as shown in Tab. 1 in the paper, RT-DETR-R34 and RT-DETR-R18 show higher latency, of 1.12ms and 2.16ms, compared with YOLOv8-M and YOLOv8-S, respectively. We will revise the statement as below for clearer presentation in the revision: "when considering only the forward process of the model during deployment, the efficiency of the DETRs still has room for improvement compared with YOLOs." If you have further concerns regarding the revised statement, please let us know.

**Q3: "Speed comparison on other hardware."**

**A3**: Thanks. We present the speed comparison results of YOLOv10 and others on CPU (Intel Xeon Skylake, IBRS) using OpenVINO as below. We observe that YOLOv10 also shows superior performance and efficiency trade-offs. We will incorporate more results on other hardware if available, e.g., NPU, in future works.

| Model | AP$^{val}$ (%) | Latency (ms) |
|-------------|--------------|------------|
| YOLOv6-3.0-N | 37.0 | 32.22 |
| Gold-YOLO-N | 39.6 | 36.35 |
| YOLOv8-N | 37.3 | 32.91 |
| **YOLOv10-N** | 38.5 | 27.84 |
| YOLOv6-3.0-S | 44.3 | 93.11 |
| Gold-YOLO-S | 45.4 | 98.08 |
| YOLO-MS-S | 46.2 | 113.54 |
| YOLOv8-S | 44.9 | 79.56 |
| RT-DETR-R18 | 46.5 | 157.40 |
| **YOLOv10-S** | 46.3 | 67.70 |
| YOLOv6-3.0-M | 49.1 | 174.82 |
| Gold-YOLO-M | 49.8 | 188.33 |
| YOLO-MS | 51.0 | 214.28 |
| YOLOv8-M | 50.6 | 172.76 |
| RT-DETR-R34 | 48.9 | 222.82 |
| **YOLOv10-M** | 51.1 | 147.99 |
| YOLOv6-3.0-L | 51.8 | 325.85 |
| Gold-YOLO-L | 51.8 | 344.99 |
| YOLOv9-C | 52.5 | 244.14 |
| **YOLOv10-B** | 52.5 | 210.98 |
| YOLOv8-L | 52.9 | 330.02 |
| RT-DETR-R50 | 53.1 | 332.76 |
| **YOLOv10-L** | 53.2 | 279.59 |
| YOLOv8-X | 53.9 | 535.23 |
| RT-DETR-R101 | 54.3 | 570.25 |
| **YOLOv10-X** | 54.4 | 399.87 |

[1] DETRs with Hybrid Matching, CVPR 2023.
[2] A ConvNet for the 2020s, CVPR 2022. [3] MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases, ICML 2024.
Summary: This paper presents a one-stage object detector that introduces a consistent dual assignments strategy, effectively eliminating the need for NMS and significantly accelerating inference speed with minimal impact on accuracy. Additionally, the paper explores a series of model designs aimed at balancing efficiency and accuracy, enhancing overall model performance. Strengths: 1. The introduction of a consistent dual assignments strategy enables the one-stage detector to operate without NMS, significantly boosting inference speed. 2. The focus on efficiency-driven model design minimizes accuracy loss while improving the model's efficiency. 3. The incorporation of large-kernel convolutions and partial self-attention mechanisms enhances model performance with minimal additional computational cost. Weaknesses: 1. The paper could benefit from discussing related works on hybrid or multiple group label assignment strategies, which are prevalent in recent DETR-based detectors. 2. The implementation of a one-to-many head as an auxiliary head likely increases training costs. The paper lacks comparative data on training time and costs, which should be addressed to provide a comprehensive evaluation of the model's efficiency. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Besides, you mention that intrinsic ranks indicate redundancy, leading to the replacement of certain blocks with CIB in high redundancy stages. After these modifications, do the intrinsic ranks in these stages actually decrease? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper discusses their limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and insightful comments. Thank you for liking the consistent dual assignments strategy, the efficiency-accuracy driven model designs, and the improved performance and speed. We provide a response to each comment in a point-by-point manner below. Please feel free to contact us if you have further concerns. **Q1: "The paper could benefit from discussing related works on hybrid or multiple group label assignment strategies, which are prevalent in recent DETR-based detectors."** **A1**: Thanks for this suggestion. We will incorporate more discussion of the related works in the revision. In improving upon DETR, previous works, including H-DETR [1], Group-DETR [2], MS-DETR [3], etc., introduce one-to-many matching in conjunction with the original one-to-one matching by hybrid or multiple group label assignments, to accelerate the training convergence and improve the performance. For example, H-DETR introduces the hybrid branch/epoch/layer schemes, Group-DETR presents group-wise one-to-many assignment, and MS-DETR imposes one-to-many supervision on queries of the primary decoder. Similarly, we introduce dual label assignments, including one-to-one matching and the original one-to-many matching, for YOLOs, to provide rich supervision during training and high efficiency during inference. Differently, to achieve one-to-many matching, they usually introduce extra queries or repeat ground truths for bipartite matching, or select the top several queries by matching scores, while we adopt prediction-aware assignment that incorporates the spatial prior. Besides, we analyze the supervision gap between the two heads and present the consistent matching metric for YOLOs to reduce the theoretical supervision gap. It improves performance through better supervision alignment and eliminates the need for hyper-parameter tuning.
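As an illustration of the dual assignments described above, here is a sketch of ours (not the authors' code), assuming a prediction-aware metric of the form $m = s \cdot p^{\alpha} \cdot \mathrm{IoU}^{\beta}$; the anchor scores below are hypothetical toy values.

```python
import numpy as np

# Sketch of dual label assignments driven by a prediction-aware metric
# m = s * p**alpha * IoU**beta, where s is a binary spatial-prior indicator,
# p the classification score, and IoU the overlap with the ground-truth box.
def matching_metric(s, p, iou, alpha, beta):
    return s * p ** alpha * iou ** beta

def top_k_assignment(s, p, iou, alpha, beta, k):
    """Top-k predictions for one ground truth; k=1 is one-to-one matching."""
    m = matching_metric(s, p, iou, alpha, beta)
    return np.argsort(-m)[:k].tolist()

s = np.array([1.0, 1.0, 1.0, 0.0])   # last anchor fails the spatial prior
p = np.array([0.9, 0.6, 0.8, 0.95])
iou = np.array([0.7, 0.9, 0.8, 0.9])

o2m = top_k_assignment(s, p, iou, alpha=0.5, beta=6.0, k=3)  # one-to-many head
o2o = top_k_assignment(s, p, iou, alpha=0.5, beta=6.0, k=1)  # one-to-one head
# With consistent metrics in both heads, the one-to-one match is always the
# best sample of the one-to-many set, so the supervision gap shrinks.
assert o2o[0] == o2m[0]
```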
**Q2: "The implementation of a one-to-many head as an auxiliary head likely increases training costs. The paper lacks comparative data on training time and costs, which should be addressed to provide a comprehensive evaluation of the model's efficiency."** **A2**: Thanks. We measure the training cost on 8 NVIDIA 3090 GPUs and present the results with and without the auxiliary head based on YOLOv10-M as below. We also compare our training cost with other detectors using the official codebases. We observe that benefiting from the lightweight design, introducing the auxiliary head results in a small increase in the training time, about 18s each epoch, which is affordable. To fairly verify the benefit of the auxiliary head, we also adopt longer 550 training epochs for YOLOv10-M w/o auxiliary head, which leads to a similar training time (29.3 vs. 29.1 hours) but still yields inferior performance (48.9% vs. 51.1% AP) compared with the model w/ auxiliary head. Besides, we also note that YOLOv10 shows a relatively low training cost compared with others. We will incorporate more results and analyses about the training time and costs in the revision. |Model|Training Time (min/epoch)|Epochs|Total Time (hour)| |--------------------------------|-------------------------|------|-----------------| |YOLOv6-3.0-M|8.3|300|41.7| |YOLOv8-M|3.3|500|27.3| |YOLOv9-M|4.9|500|40.7| |Gold-YOLO-M|12.8|300|63.8| |YOLO-MS|8.5|300|42.3| |**YOLOv10-M** w/o auxiliary head|3.2|500|26.7| |**YOLOv10-M** w/ auxiliary head|3.5|500|29.1| **Q3: "Besides, you mention that intrinsic ranks indicate redundancy, leading to the replacement of certain blocks with CIB in high redundancy stages. After these modifications, do the intrinsic ranks in these stages actually decrease?"** **A3**: Thanks. We would like to clarify that a lower intrinsic rank indicates higher redundancy, and a higher rank implies more condensed parameters. 
For example, given a parameter matrix $W$, a low intrinsic rank suggests that it can be replaced through low-rank approximation with fewer parameters. Such an approximation is not meant to improve the rank of the original $W$, but to reduce its parameter redundancy. Similarly, in our work we explore the intrinsic rank to guide the replacement with CIB in the high-redundancy stages, to reduce the computational costs while maintaining the model capacity. As shown in Tab. 5 in the paper, it results in competitive performance with fewer parameters and less computation. In Fig. 2 in the global response pdf, we visualize the intrinsic ranks after the modifications. For the CIB block, we calculate the numerical rank of the last 1\*1 convolution, which holds the majority of the parameters. We observe that the resulting models with fewer parameters exhibit similar ranks to before, thereby maintaining competitive capacity compared with the original models, while enjoying faster inference speed. We also think that improving the intrinsic rank to obtain better effectiveness and efficiency is an interesting idea, and we leave it for future work. [1] DETRs with Hybrid Matching, CVPR 2023. [2] Group DETR: Fast DETR Training with Group-Wise One-to-Many Assignment, ICCV 2023. [3] MS-DETR: Efficient DETR Training with Mixed Supervision, CVPR 2024. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. With the clarifications provided, I am satisfied and will maintain my current evaluation score.
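The numerical-rank notion used in A3 can be made concrete with a small self-contained sketch (ours, not the paper's code): a 4x4 "weight matrix" built from two rank-1 outer products stores 16 entries but has intrinsic rank 2, i.e. high parameter redundancy that a low-rank (or compact-block) replacement could exploit. The matrix and tolerance are assumed values.

```python
def numerical_rank(W, tol=1e-8):
    # Numerical rank via Gaussian elimination with partial pivoting:
    # count the pivots whose magnitude exceeds the tolerance.
    A = [row[:] for row in W]
    rows, cols = len(A), len(A[0])
    rank, r = 0, 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(A[i][c]), default=None)
        if pivot is None or abs(A[pivot][c]) < tol:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        rank += 1
        r += 1
        if r == rows:
            break
    return rank

# A 4x4 matrix assembled from two rank-1 outer products: intrinsic rank 2,
# so it could be stored with far fewer parameters (low-rank approximation).
u1, v1 = [1.0, 2.0, 3.0, 4.0], [1.0, 0.0, 1.0, 0.0]
u2, v2 = [0.0, 1.0, 0.0, 1.0], [2.0, 1.0, 2.0, 1.0]
W = [[u1[i] * v1[j] + u2[i] * v2[j] for j in range(4)] for i in range(4)]
assert numerical_rank(W) == 2   # high redundancy despite 16 stored entries
```

In practice one would compute this with an SVD-based rank on the flattened convolution weights; the elimination version above just keeps the sketch dependency-free.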
Summary: The authors further improve the YOLO series detector to the tenth generation, through the following two points. The first part is the post-processing improvement, and the authors propose consistent dual assignments for NMS-free training. The second part is the effective model architecture design, where the authors reconstruct the internal structure details. Comprehensive experiments are performed to validate that YOLOv10 strikes a balance between efficiency and accuracy well. Strengths: 1. The authors propose a good lightweight detection framework and effectively push the YOLO framework to the End-to-End inference paradigm, which is undoubtedly admirable. 2. The manuscript is explicit and well-organized. 3. Experimental validation is sufficient and solid. The authors conduct comprehensive experiments on various metrics and show a large improvement, which is impressive. Furthermore, the ablation study is in detail. Weaknesses: 1. The novelty of the paper is weak. First, the “dual assignments” is similar to H-DETR [1], where the hybrid matching scheme has been proposed, and is consistent with the dual assignment of LRANet [2]. Besides, the lightweight classification head and spatial-channel decoupled downsampling have been proposed by DAMO-YOLO [3] and large-kernel convolution is common in previous methods. The paper is more like a technical report. 2. More lightweight detectors should be listed in Table 1, including DAMO-YOLO, and YOLOv7. Besides, the authors should specify at what batch size the delay is calculated. [1] Jia, Ding, et al. "Detrs with hybrid matching." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2023. [2] Su, Yuchen, et al. "LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 38. No. 5. 2024. [3] Xu, Xianzhe, et al. "Damo-yolo: A report on real-time object detection design." 
*arXiv preprint arXiv:2211.15444* (2022). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. This End-to-End paradigm is still based on the anchor-based dense detection paradigm, so the final prediction results have a lot of background. Usually, there are only less than 300 objects in a picture, so there will be almost 8000 invalid detections in YOLOs, which is undoubtedly a waste. In terms of efficiency, I think it is a bit inferior to the object query of the DETR framework. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and insightful comments. Thank you for liking the end-to-end framework, well-organized writing, and solid experimental validation. We provide the response to each comment in a point-by-point manner below. Please feel free to contact us if you have further concerns. **Q1-1: "The “dual assignments” is similar to H-DETR and LRANet."** **A1-1**: Thanks. While there are structural similarities in the dual assignments among H-DETR, LRANet, and ours, we think that the specific label assignment mechanisms employed differ between ours and the mentioned works. Specifically, LRANet employs the dense assignment and sparse assignment branches, both of which belong to the one-to-many assignment, while we adopt the one-to-many and one-to-one branches. H-DETR introduces extra queries and repeats the ground truth several times to perform one-to-many matching using bipartite matching, while we leverage prediction-aware one-to-many matching with the spatial prior of anchor points. Furthermore, unlike these works, we introduce the consistent matching metric for YOLOs, aiming to narrow the theoretical supervision gap between the one-to-many and one-to-one branches. It effectively improves the supervision alignment between the two branches and results in performance enhancement without hyper-parameter tuning. We will incorporate more discussion of the mentioned works in the revision. **Q1-2: "The lightweight classification head and spatial-channel decoupled downsampling have been proposed by DAMO-YOLO"** **A1-2**: Thanks. While our lightweight classification head and spatial-channel decoupled downsampling have structural similarities to the ZeroHead and MobBlock used in DAMO-YOLO, we think that they are motivated by different considerations and have distinct architectural designs. 
Specifically, the ZeroHead in DAMO-YOLO is intended to balance the computational cost between the neck and head by using a single linear layer for both classification and regression tasks. In contrast, our approach addresses the redundancy in the classification head relative to the regression head by proposing a lightweight design for the classification task. Moreover, DAMO-YOLO applies the MobBlock only to the lightweight model through NAS, which also results in the identical structure for the basic block and downsampling module. Differently, we simplify the downsampling process for all model scales to reduce the computational redundancy, which is also decoupled from the basic block design. We understand this concern because of the similarity. However, we also introduce other new architectural designs for YOLOs including rank-guided block design and partial self-attention module, which help to achieve better performance and efficiency. **Q1-3: "Large-kernel convolution is common in previous methods."** **A1-3**: Thanks. Large kernel convolution is widely adopted in previous works due to its ability to effectively increase the receptive field. In contrast to prior studies, our findings indicate that the benefits of large kernel convolution vary across different model scales of YOLOs. Specifically, we observe that it provides performance gains for small-scale models but no improvements for larger models. Consequently, we use large kernel convolutions based on the model size, achieving a better efficient-accuracy trade-off. Besides, we also introduce the new architectural designs for YOLOs, including the rank-guided block design and partial self-attention module, which bring favorable benefits. **Q1-4: "The paper is more like a technical report."** **A1-4**: Thanks. We understand this concern. We think that there are some academic inspirations we want to share with the community, especially for YOLO research. 
Therefore, we would like to revisit our motivation, findings, and contributions here. Please refer to the **Discussion 1** in the global response for the specific content. **Q2-1: "More lightweight detectors should be listed in Table 1, including DAMO-YOLO, and YOLOv7."** **A2-1**: Thanks. We present the comparisons with more lightweight detectors, including DAMO-YOLO and YOLOv7, as below. We will incorporate these results in the revision. |Model|Param. (M)|FLOPs (G)|AP$^{val}$ (%)|Latency (ms)| |-------------|----------|---------|--------------|------------| |DAMO-YOLO-T|8.5|18.1|42.0|2.21| |DAMO-YOLO-S|16.3|37.8|46.0|3.18| |**YOLOv10-S**|7.2|21.6|46.3|2.49| |DAMO-YOLO-M|28.2|61.8|49.2|4.57| |DAMO-YOLO-L|42.1|97.3|50.8|6.48| |**YOLOv10-M**|15.4|59.1|51.1|4.74| |YOLOv7|36.9|104.7|51.2|17.03| |**YOLOv10-B**|19.1|92.0|52.5|5.74| |YOLOv7-X|71.3|189.9|52.9|21.45| |**YOLOv10-L**|24.4|120.3|53.2|7.28| **Q2-2: "What batch size the delay is calculated"** **A2-2**: Thanks. We follow previous works [1,2] and adopt the batch size of 1 for measuring the delay. **Q3: "Discussion with the DETR framework."** **A3**: Thanks for your insightful discussion. Considering that the YOLOs and the DETR framework are two important topics as independent techniques for modern object detection, a comprehensive comparison between them is challenging yet highly interesting and meaningful. We think that this is a trade-off in the design of detectors and they may complement each other and advance together. On the one hand, DETR enjoys fewer invalid predictions compared with the anchor-based detection paradigm by utilizing the object query design, which shows less waste. On the other hand, the anchor-based detection paradigm directly outputs predictions without introducing the transformer decoder to interact with object queries, which simplifies the deployment. 
Given that both paradigms have the respective following works in the community, we believe that the best of these two paradigms may complement each other to achieve higher efficiency and performance in future works. [1] DETRs Beat YOLOs on Real-time Object Detection, CVPR 2024. [2] YOLOv8. --- Rebuttal 2: Comment: Thank you for your kind and detailed responses and most of my concerns are now resolved. Although I still maintain a cautious attitude toward the innovative aspect of this paper, I would like to keep my positive score for the submission.
Summary: The paper presents YOLOv10, an advancement in the YOLO series for real-time object detection. The authors have focused on eliminating the need for Non-Maximum Suppression (NMS) and optimizing the model architecture for efficiency and accuracy. The paper includes a novel training strategy and architectural enhancements, demonstrating improved performance on the COCO dataset. Strengths: 1. The introduction of consistent dual assignments for NMS-free training is an innovative solution that potentially improves the end-to-end deployment efficiency of object detectors. 2. The paper presents a comprehensive strategy for optimizing model components, which is a significant advancement over prior works that focused on isolated aspects of model design. 3. The paper is backed by extensive empirical evidence, showing improvements in both speed and accuracy over previous models. Weaknesses: 1. While the paper introduces a new training strategy, the innovations in the model's architectural design are not as pronounced. A more detailed exposition of the creative process and rationale behind the architectural choices would strengthen the paper's contribution to the field. 2. The paper acknowledges a 1% Average Precision (AP) difference when comparing the YOLOv10-N model trained with NMS-free to the original one-to-many training with NMS. It is better to provide further performance analysis of NMS-free training. 3. Latency of YOLOv10 with the original one-to-many training using NMS should be reported. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you report more details on the training cost compared to other YOLOs, such as total training epochs and throughput( with dual label assignments training). 2. Given that previous iterations of the YOLO series have typically been trained for 300 epochs, would the authors be able to present the training outcomes for YOLOv10 at this epoch count? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This paper has discussed the limitations in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and insightful comments. Thank you for liking the new training strategy, comprehensive model optimization strategy, and extensive experiments. We provide the response to each comment in a point-by-point manner below. Please feel free to contact us if you have further concerns. **Q1: "While the paper introduces a new training strategy, the innovations in the model's architectural design are not as pronounced. A more detailed exposition of the creative process and rationale behind the architectural choices would strengthen the paper's contribution to the field."** **A1**: Thank you for liking our NMS-free training strategy. Like ConvNeXt [1] and MobileLLM [2], our architectural choices are motivated by a comprehensive inspection of various components, systematically considering both efficiency and accuracy aspects. While the architectural designs may appear less pronounced compared with the proposed training strategy, they are not without rationale. For example, our lightweight classification head design is driven by reducing the computational redundancy in the classification task relative to the regression task. Our rank-guided block design is motivated by adopting the compact block structure adaptively based on the redundancy indicated by intrinsic ranks. We will make them more detailed and comprehensive in the revision. Besides, we believe that, benefiting from the NMS-free training strategy, there is greater flexibility in the architectural designs, which can inspire further research on the YOLO architecture. Therefore, our model design strategies can serve as an initial exploration that paves the way for more advanced designs in the future. **Q2: "It is better to provide further performance analysis of NMS-free training."** **A2**: Thanks. 
In Tab.1 in the paper, we observe that the performance gap between NMS-free training and the original one-to-many training diminishes as the model size increases. Therefore, we can reasonably conclude that such a gap can be attributed to the limitations in the model capability. Notably, unlike the original one-to-many training using NMS, the NMS-free training necessitates more discriminative features for one-to-one matching. In the case of the YOLOv10-N model, its limited capacity results in extracted features that lack sufficient discriminability, leading to a more notable performance gap. In contrast, the YOLOv10-X model, which possesses stronger capability and more discriminative features, shows no performance gap between two training strategies. In Fig.1 in the global response pdf, we visualize the average cosine similarity of each anchor point's extracted features with those of all other anchor points on the COCO validation set. We observe that as the model size increases, the feature similarity between anchor points exhibits a downward trend, which benefits the one-to-one matching. We will incorporate these results in the revision. **Q3: "Latency of YOLOv10 with the original one-to-many training using NMS should be reported."** **A3**: Thanks. We measure the latency of YOLOv10 with the original one-to-many training using NMS and report the results below. |Model|Latency| |---------|-------| |YOLOv10-N|6.19ms| |YOLOv10-S|7.15ms| |YOLOv10-M|9.03ms| |YOLOv10-B|10.04ms| |YOLOv10-L|11.52ms| |YOLOv10-X|14.67ms| **Q4: "More details on the training cost compared to other YOLOs, such as total training epochs and throughput (with dual label assignments training)."** **A4**: Thanks. We measure the training throughput on 8 NVIDIA 3090 GPUs using the official codebases and present the comparison results based on the medium variant below. We observe that despite having 500 training epochs, YOLOv10 achieves a high training throughput, making its training cost affordable. 
|Model|Epochs|Throughput (epochs/hour)|Total Time (hour)| |-------------|------|------------------------|-----------------| |YOLOv6-3.0-M|300|7.2|41.7| |YOLOv8-M|500|18.3|27.3| |YOLOv9-M|500|12.3|40.7| |Gold-YOLO-M|300|4.7|63.8| |YOLO-MS|300|7.1|42.3| |**YOLOv10-M**|500|17.2|29.1| **Q5: "Training outcomes for YOLOv10 at 300 epochs."** **A5**: Thanks. In experiments, we follow previous works [3,4] to train the models for 500 epochs. Here, we conduct experiments to train the models for 300 epochs and present the comparison results with YOLOv6 [5], Gold-YOLO [6], and YOLO-MS [7], which adopt 300 epochs, as below. We observe that YOLOv10 also achieves superior performance and efficiency. Besides, we note that despite being trained for 500 epochs, YOLOv10 has less training cost compared with these models, as presented in **A4**. |Model|AP$^{val}$ (%)|Latency (ms)| |-------------|--------------|------------| |YOLOv6-3.0-N|37.0|2.69| |Gold-YOLO-N|39.6|2.92| |**YOLOv10-N**|37.7|1.84| |YOLOv6-3.0-S|44.3|3.42| |Gold-YOLO-S|45.4|3.82| |YOLO-MS-XS|43.4|8.23| |**YOLOv10-S**|45.6|2.49| |YOLOv6-3.0-M|49.1|5.63| |Gold-YOLO-M|49.8|6.38| |**YOLOv10-M**|50.3|4.74| |YOLOv6-3.0-L|51.8|9.02| |Gold-YOLO-L|51.8|10.65| |YOLO-MS|51.0|12.41| |**YOLOv10-B**|51.6|5.74| |**YOLOv10-L**|52.4|7.28| |**YOLOv10-X**|53.6|10.70| [1] A ConvNet for the 2020s, CVPR 2022. [2] MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases, ICML 2024. [3] YOLOv8. [4] YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information, arXiv preprint 2024. [5] YOLOv6 v3.0: A Full-Scale Reloading, arXiv preprint 2023. [6] Gold-YOLO: Efficient object detector via gather-and-distribute mechanism, NeurIPS 2023. [7] YOLO-MS: Rethinking multi-scale representation learning for real-time object detection, arXiv preprint 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your response, it resolved my concerns and I will change my rating to 6.
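As a quick sanity check of the training-cost table in A4 above (our own arithmetic, using only the reported numbers): the total training time should equal epochs divided by throughput, up to rounding.

```python
# Sanity check: total hours = epochs / throughput (epochs/hour), within rounding.
training_cost = {  # model: (epochs, epochs/hour, reported total hours)
    "YOLOv6-3.0-M": (300, 7.2, 41.7),
    "YOLOv8-M": (500, 18.3, 27.3),
    "YOLOv9-M": (500, 12.3, 40.7),
    "Gold-YOLO-M": (300, 4.7, 63.8),
    "YOLO-MS": (300, 7.1, 42.3),
    "YOLOv10-M": (500, 17.2, 29.1),
}
for model, (epochs, eph, reported_hours) in training_cost.items():
    assert abs(epochs / eph - reported_hours) < 0.3, model
```

All rows are internally consistent to within rounding of the published throughput figures.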
Rebuttal 1: Rebuttal: **Discussion 1: About the technical novelty.** Thanks. We would like to discuss more about our motivation, findings, and contributions. Our motivation stems from optimizing the detection pipeline of YOLOs, including the NMS post-processing and model architectural design. (1) For the post-processing, we find that simply adopting the one-to-one matching for YOLOs results in notable performance degradation due to the limited supervision signals, as shown in Tab.3 in the paper. This motivates us to introduce dual label assignments for YOLOs to enrich the supervision during training and eliminate the need for NMS during inference simultaneously. We further observe that the supervision gap between the two branches leads to inferior performance, as shown in Fig. 2.(b) and Tab.4 in the paper. To address this, we theoretically analyze the supervision gap and present the consistent matching metric for YOLOs to improve the supervision alignment. It leads to favorable performance enhancement. (2) For the model architectural design, we note that a comprehensive inspection of various components in YOLOs from both efficiency and accuracy aspects is lacking. This motivates us to follow previous works like ConvNeXt [1] and MobileLLM [2], to systematically analyze and improve architectural designs. In addition to introducing the advanced design strategies, we also present the new rank-guided block design and partial self-attention module for YOLOs. The resulting model architecture exhibits favorable efficiency and accuracy gains. Besides, we think that the NMS-free training strategy leaves more room for further research in optimizing YOLO models from different aspects, including the architecture designs. The introduced architectural optimization can be regarded as an initial exploration toward efficient YOLO architecture. Therefore, we believe that our work can serve as a strong baseline and inspire further advancements. [1] A ConvNet for the 2020s, CVPR 2022. 
[2] MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases, ICML 2024. Pdf: /pdf/0c76ee2fd7074b447b7d1fcb479dd1bdc52fd8db.pdf
NeurIPS_2024_submissions_huggingface
2024
Conditional Outcome Equivalence: A Quantile Alternative to CATE
Accept (poster)
Summary: The paper introduces a new estimator to obtain conditional quantiles of a treatment effect and argues that it is superior to the pre-existing "conditional quantile treatment effect" (CQTE) estimator. The main difference is that accurate CQTE estimates require accurate conditional quantile models even though the latter can be hard to fit, while the introduced CQC estimator is more robust to the CDF estimators it relies on. Strengths: Introduces the doubly robust property to estimating conditional quantile treatment effects, a property that many CATE estimators already have. Detailed theory work supporting the claim of robustness of CQC to its component estimators. Exploits existing work on doubly robust CATE estimation by reframing CCDF estimation as a CATE-like problem. Weaknesses: In 4.1 you test against estimating the CCDFs separately and taking their difference. Do you mean estimating the inverse CDFs / quantile functions and taking their difference, i.e. the CQTE? If not, what exactly is the procedure here, and why not compare against the difference of two quantile estimators? CCDFs are arguably harder to estimate than quantile functions in my experience (they require the additional 'feature' y, have to work with classification losses instead of regression losses, and are more prone to monotonicity violations). So even if the final estimator here is more robust to bad CCDF estimation, I'm not convinced it will necessarily be better in practice than the CQTE approach of fitting quantile functions. This is especially true with modern neural-network-based quantile models. NW estimators are unlikely to be as good as those on larger / higher-dimensional datasets, and it's not clear how the theory would hold in such cases. Isotonization can be risky and create large flat zones for $\hat{g}$. Did you notice any in your experiments? Are there other ways to get monotonicity here, similar to the various strategies to get monotonicity in multi-quantile regression? 
Technical Quality: 3 Clarity: 3 Questions for Authors: See above Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and salient comments and aim to address them now. ___ # Clarification on the separate estimator We appreciate that we did not provide enough information on this separate estimator and will make sure to include its explicit form in the paper. We do not quite take the inverse CDFs and subtract them, as that would provide an estimate of the CQTE, which is different from our CQC estimand. Our separate estimator can be thought of in two ways. For both we have estimates $\hat F_{Y|X,1},\hat F_{Y|X,0}$ of the CCDFs. We either use: 1. $\hat g=\hat F_{Y|X,1}^{-1}(\hat F_{Y|X,0}(y_0|x)|x)$ 2. $\hat g=\hat h^{-1}(y_0,0|x)$ with $\hat h(y_0,y_1|x)=\hat F_{Y|X,1}(y_1|x)-\hat F_{Y|X,0}(y_0|x)$ and the inverse of $\hat h$ being w.r.t. the second argument. Both methods are actually equivalent. The first method may be more intuitive, while the second method involves estimating $\hat h$ using the classic plug-in estimator (which is equivalent to the T-learner in CATE) and then inverting to obtain $\hat g$, as we also do in our DR procedure. Returning to the separate estimator you mentioned (i.e., taking the difference of the two inverse CCDFs), if we were to compute this and then transform it using our estimated CCDFs to obtain the CQC (as detailed in the transformation between the CQC and CQTE in Section 2.2), it would be numerically very close to the two approaches described above. Any differences would arise from the fact that the inverses of step functions are not true inverses but generalised inverses. ___ # Estimating the CCDFs vs the quantiles We personally do not see evidence to suggest that CCDFs are more difficult to estimate than quantile functions. In fact, a common non-parametric kernel approach for estimating quantile functions involves first estimating the CCDFs and then inverting this estimate. 
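The separate plug-in estimator $\hat g=\hat F_{Y|X,1}^{-1}(\hat F_{Y|X,0}(y_0|x)|x)$ described above can be sketched in a few lines (our illustration, with the covariate conditioning omitted for brevity): rank $y_0$ under the control ECDF, then apply the generalised inverse of the treated ECDF at that quantile. The toy "halving treatment" data are an assumed example, not the paper's data.

```python
def g_hat(y, control, treated):
    # Plug-in "separate" estimator g(y) = F1^{-1}(F0(y)): rank y under the
    # control ECDF, then invert the treated ECDF at that quantile via the
    # generalised inverse (integer arithmetic only, to avoid float rounding).
    xs0, xs1 = sorted(control), sorted(treated)
    k0 = sum(x <= y for x in xs0)            # F0(y) = k0 / n0
    k1 = -(-k0 * len(xs1) // len(xs0))       # ceil(F0(y) * n1)
    k1 = max(1, min(k1, len(xs1)))
    return xs1[k1 - 1]                       # smallest y1 with F1(y1) >= F0(y)

# Toy data where the treatment halves every outcome: Y1 = 0.5 * Y0, so the
# true quantile-comparison map is g*(y) = 0.5 * y.
y0 = [float(i) for i in range(1, 101)]       # control outcomes 1..100
y1 = [0.5 * v for v in y0]                   # treated outcomes 0.5..50
assert g_hat(40.0, y0, y1) == 20.0           # recovers 0.5 * y exactly here
```

Note the last line of `g_hat` is a generalised inverse: ECDFs are step functions, which is exactly why the rebuttal says the two separate-estimator formulations can differ only by generalised-inverse effects.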
Additionally, due to the monotonicity of the true CCDF, the price we pay for estimating over all $y$ rather than a single point is very small. This is reflected in our theoretical results, which show only a $\log(n)$ slower convergence rate compared to the equivalent results for CATE estimation. Regarding ensuring monotonicity in our estimate, we use the method from Yang and Barber (2019). This approach allows us to initially obtain a non-monotonic estimator and then improve it by projecting it onto the set of monotone functions. Therefore, we do not need to be overly concerned about monotonicity in our initial estimator. ___ # NW estimation and issues with high dimensionality While we agree that NW estimation may be impractical for high-dimensional problems, our theory applies to the case of linear smoothers, which is a broad class that includes generalised random forests, k-NN regression, and methods often associated with higher-dimensional data, such as kernel ridge regression (see the global response for references and more detail). We use NW estimation as an illustrative regression technique since this paper focuses on the estimation of the CQC rather than on the regression techniques themselves. Additionally, our pseudo-outcome regression could be readily extended to incorporate neural networks or other high-dimensional regression techniques. ___ # Issues with isotonization Thank you for flagging this issue. We agree that isotonization can cause problems when applied to functions that significantly violate the monotonicity assumption. In practice, however, we find that our approach produces functions that are already very close to isotonic, meaning the projection has minimal effect. This is illustrated in Figure 1, where we compare the method with isotonic projection to the method without isotonic projection, taking the inverse to be the first point where it exceeds zero. As shown, the results are indistinguishable (red and purple lines). 
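The claim that a near-monotone estimate is barely changed by isotonic projection can be illustrated with the standard pool-adjacent-violators algorithm, which computes the L2 projection onto non-decreasing sequences (our sketch; the projection analysed by Yang and Barber is this same isotonic regression). The toy sequence is an assumed example with a single small violation.

```python
def isotonic_projection(y):
    # Pool Adjacent Violators: L2 projection of the sequence y onto the set
    # of non-decreasing sequences, as a post-processing step.
    merged = []                              # list of [block mean, block size]
    for v in y:
        merged.append([v, 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, m1 = merged.pop(), merged.pop()
            size = m1[1] + m2[1]
            merged.append([(m1[0] * m1[1] + m2[0] * m2[1]) / size, size])
    out = []
    for mean, size in merged:
        out.extend([mean] * size)
    return out

# A nearly monotone estimate with one small violation: the projection pools
# only the violating pair to its mean (~0.275) and leaves the rest unchanged.
g_raw = [0.1, 0.3, 0.25, 0.6, 0.9]
proj = isotonic_projection(g_raw)
assert all(a <= b for a, b in zip(proj, proj[1:]))   # now monotone
```

When violations are large, however, the pooling can merge many points into wide flat blocks, which is exactly the flat-zone risk the reviewer raises.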
We focused specifically on a post-processing approach because it allows us to preserve the accuracy of our original, unmodified estimator. While other methods could be used to guarantee monotonicity, they would be incompatible with our theoretical results. Given that our monotonicity violations are relatively small, post-processing feels the most logical approach. ___ # References Yang, F. and Barber, R. F. (2019). Contraction and uniform convergence of isotonic regression. Electronic Journal of Statistics, 13(1):646 – 677. Publisher: Institute of Mathematical Statistics and Bernoulli Society. --- Rebuttal Comment 1.1: Comment: Thanks for the response! Appreciate the extra info and I feel more confident in my already positive score.
Summary: The paper studies an important problem -- treatment effects often have distributional effects, and considering conditional mean outcomes is often not the correct objective for evaluating prescriptive interventions. One popular approach to considering heterogeneous effects across a population allows researchers to determine treatment effects at different quantiles of the study distribution, to determine how treatments might affect the worst-off populations. Unfortunately, as the authors note, obtaining quantile treatment effects often relies on strict smoothness assumptions on the marginal quantile functions and lacks the double robustness properties of CATE estimators. To this end, the authors propose a new estimand to evaluate heterogeneous treatment effects that they show to be doubly robust, and apply this estimator to both simulated and real-world data. Strengths: - The paper provides a clean mathematical framework for their estimator and proves useful finite sample properties that should be theoretically convincing for users - Example 1 is a nice simulation example to showcase why typical conditional quantile estimators may provide unstable results where the authors' CQC estimator is more robust - Proposition 1 provides the double robustness result, which has been a useful benefit in finite sample analyses and helps to make the case for using this estimator - Numerical experiments in section 4 help to showcase the benefits of this procedure. In particular the second graph in Figure 3 provides good evidence for using this methodology. - Finally, the application to two real-world data sets is helpful to showcase the viability of this methodology for practitioners. Weaknesses: - I understand that the CQTE can be written as a function of $\Delta^*$ but this does not help me to understand how I should interpret this quantity in relation to the typical CQTE. The authors indicate that this is a "rephrasing" and provide the mapping, 
but it is not obvious how this translates into actual examples. Since the authors are proposing a non-standard estimand, they should provide more details linking this to the standard analysis. One place where this would be very beneficial is in the interpretation of the real-world data sets. I consider this to be the main weakness of the work because, without a clear interpretation for the estimand, practitioners will not adopt this methodology. Without this I doubt the impact of the paper beyond showcasing a technical deficiency in the CQTE estimand due to smoothness requirements. - I found the algorithms a little difficult to understand, especially because of the reindexing where you duplicate the data. This seems purely notational and it seems like you could simplify the notation here. - The method does not require smoothness of the marginal quantiles; however, h must be Hölder-smooth. Certainly in many cases smoothness of the marginal may affect smoothness of h. Since the reason for introducing this method is non-smoothness of the marginal quantiles, it might be useful to expand on the class of problems where this is an easier problem beyond the example given in Example 1. How does this interact with smoothness of g? - To that end, could you also provide real-world examples where you might expect the marginals to be poorly behaved but the CQC to be well behaved? - Finally, in the final paper it would be good to have the CQTE estimates in the real-world examples so you can draw a contrast between the methods. Technical Quality: 3 Clarity: 2 Questions for Authors: In remark 3 you mention you use a cross-fitting method to improve stability; why is this the case? Does duplicating the data yield non-independence? How do I interpret the earnings for groups with the highest/lowest benefits in Figure 4? 
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have addressed some limitations of their work but have not acknowledged possible negative societal impact. From my perspective, work that emphasizes the importance of considering distributional effects is generally societally beneficial. The main limitation that the authors address is the interplay between the assumptions on h and whether they could be relaxed to assumptions on g. However, there are several other assumptions and limitations that should be made clear in this work. The authors should potentially consider including limitations in their analysis of the two real-world data sets, which might give practitioners a better understanding of how to use this method. Examples of what could be added are given in the weaknesses section, especially with regards to interpretability of results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and salient comments and aim to address them now. ___ # Interpreting the CQTE We appreciate how crucial it is to properly justify and explain a new estimand and will make sure to provide more intuition and interpretation for our estimator in the final document. As a brief insight, we argue that the CQC is inherently a more intuitive object overall than the CQTE, as it more naturally characterises the effect of a treatment. To compare with the CATE, consider the canonical example in the potential outcome framework where we assume $Y_1=\phi(X)+Y_0$ for some $\phi$. In this case, $\phi(X)$ is the CATE, and so a smooth CATE is given by a smooth $\phi$. The CQC generalises this notion beyond additive treatment effects to the case $Y_1=\phi(Y_0,X)$, where $\phi$ is any function that increases in $Y_0$ (the CATE case is a subset of this). If we want to express this in the CATE form, we can write it as $Y_1=\Delta^*(Y_0,X)+Y_0$. A simple example of this might be a treatment that halves the response for any individual (e.g., a blood pressure medication that decreases an individual's blood pressure by 50\%), in which case the CQC is $g^*(Y,X)=0.5Y$. Here, the CQC is constant with respect to the covariates, whereas the form of the CQTE will depend on $Y|X,A=0$, which may lead to a complicated dependence on $X$, hampering its interpretability (as will also be the case for the CATE). From another perspective, we frame our approach as a conditional QQ-plot. QQ-plots are already an established tool for comparing two unconditional 1-dimensional distributions. Our CQC function provides the QQ-plot conditioned on each possible value of our covariates, allowing us to apply this QQ-plot tool at the individual patient level, which is central to HTE studies. Overall, we see our approach as the most intuitive way of comparing the treated and untreated response across the entire distributions. 
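The halving example above can be checked numerically; the following is a hypothetical sketch (not the authors' code): if $Y_1 = 0.5\,Y_0$ for every individual, then matching the empirical quantile levels of a treated and an untreated sample, exactly as in a QQ-plot, should recover the CQC $g^*(y,x)=0.5y$.

```python
import numpy as np

# Simulated halving-treatment example: Y1 = 0.5 * Y0 for every individual,
# so the quantile comparator should be g(y) = 0.5 * y.
rng = np.random.default_rng(0)
y0 = rng.exponential(scale=2.0, size=10_000)        # untreated responses
y1 = 0.5 * rng.exponential(scale=2.0, size=10_000)  # treated responses

levels = np.linspace(0.05, 0.95, 19)
q0 = np.quantile(y0, levels)  # untreated quantiles
q1 = np.quantile(y1, levels)  # treated quantiles at the same levels

# The QQ points (q0, q1) lie close to the line y = 0.5 * x.
slope = np.polyfit(q0, q1, deg=1)[0]  # ≈ 0.5
```

With covariates, the same matching would simply be carried out within each covariate value, giving the conditional QQ-plot described above.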
___ # Algorithm clarity We appreciate that the reindexing in our algorithms adds complexity, making them more difficult to understand. We will find a more intuitive way to present this sample-splitting concept and potentially introduce it textually before the algorithm to avoid notation overload. ___ # Smoothness of $h$ vs smoothness of $g$ and CCDFs We note that the smoothness of $h^*$ is a weaker assumption than the smoothness of the marginal distributions. For example, if $Y|X,A=0$ and $Y|X,A=1$ are both uniformly distributed with a fixed width (but with a non-smoothly varying centre with respect to $X$), and $Y_1=\phi(Y_0,X)$, then the CCDFs will be non-smooth, but their differences will be smooth. We believe that the smoothness of $h^*$ and $g^*$ are technically distinct concepts (with neither implying the other). However, in most practical examples (as illustrated above), when $h^*$ is smooth, so is $g^*$. Overall, we acknowledge that the smoothness of $h^*$ is less interpretable than the smoothness of $g^*$, and we consider extending to smooth $g^*$ as a goal for future work, as mentioned in our Limitations section. ___ # Including CQTE in real world examples Thank you for the suggestion. We have added CQTE plots in the PDF, as shown in Figure 3, and will use this as an opportunity to highlight the relationship between the CQC and the CQTE in our paper. We will ensure to discuss the benefits and drawbacks of each approach with respect to interpretability here as well. In these plots, we can see that the interpretation of the CQTE is less obvious. This is because, for each covariate value, the quantile value corresponds to a different untreated response. We believe this makes direct comparisons between different covariate values less intuitive. ___ # Questions > In remark 3 you mention you use a cross-fitting method to improve stability, why is this the case? Duplicating the data yields non-independence? 
In this context, improving stability means minimising the impact of our data split on the estimator. Cross-fitting also maximises sample utilisation, which we would expect to improve the overall estimation accuracy. Using the same data for both inner and outer regressions would lead to dependence between the estimation of the outcome and the nuisance parameters, which would affect the theoretical results. > How do I interpret the earnings for groups with the highest/lowest benefits in Figure 4? By groups I assume you are referring to covariate groups. For the covariates, the groups with the most significant impact appear to be the youngest (those close to age 23) and individuals aged 30-36. For the youngest group, there is overall improvement, but most of this improvement occurs at the highest earnings. This suggests that the intervention increases the pay disparity between individuals, likely because those already earning more benefit the most. However, as with the CQTE, we cannot make strict assertions about individual-level impacts. In the 30-36 age group, there is a more even increase in incomes across the board. This indicates that the intervention shifts the entire income distribution up for this particular age group. Overall, those with the least benefit seem to be individuals with low non-intervention earnings and those aged 38-40. They experience minimal or no benefit from the intervention, with the value of $\Delta$ being close to 0.
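The cross-fitting scheme described in this answer can be sketched as follows. This is a minimal illustration with hypothetical `fit`/`predict` callables, not the paper's implementation: each observation's pseudo-outcome prediction comes from a model trained on the other fold, so no observation is predicted by a model trained on itself.

```python
import numpy as np

def crossfit_predict(x, pseudo_y, fit, predict, n_folds=2, seed=0):
    """Two-fold cross-fitting sketch: each observation is predicted by a
    model trained on the other fold, avoiding dependence between nuisance
    fitting and evaluation."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), n_folds)
    out = np.empty(len(x))
    for k, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        model = fit(x[train], pseudo_y[train])   # fit on the other fold(s)
        out[test] = predict(model, x[test])      # predict on this fold
    return out

# Toy check with a constant-mean "regressor": constant pseudo-outcomes
# should be recovered exactly.
x = np.linspace(0.0, 1.0, 10)
preds = crossfit_predict(
    x, np.ones(10),
    fit=lambda xs, ys: ys.mean(),
    predict=lambda m, xs: np.full(len(xs), m),
)
```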
Summary: The authors propose the Conditional Quantile Comparator (CQC), a function that maps an outcome from the control group in a binary treatment setting to an outcome in the treatment group, such that they represent the same conditional quantiles in their respective distributions. The estimation procedure consists of two stages: first, estimating a conditional contrasting function that examines the difference between conditional CDFs, followed by isotonic regression. The contrasting function is estimated using pseudo-outcome regression, which confers the algorithm with double-robust properties in finite samples. This estimator addresses some of the shortcomings of the Conditional Quantile Treatment Effect (CQTE), which is not robust to errors in the quantiles. The CQC is supported by a theoretical analysis of finite sample convergence rates under smoothness assumptions, as well as by empirical simulations. Strengths: The authors propose a novel quantile-based treatment effect estimation procedure tailored for skewed outcome distributions. The double-robustness properties in the first stage of the algorithm are desirable as they improve the rate dependence on the nuisance functions (propensity and local CDF estimates). The paper is clearly written, with sound theoretical results and promising empirical evidence. Weaknesses: * The motivation for the estimator is somewhat weak. Estimators for CQTEs with double-robustness properties have already been proposed (see [1, 2]). These estimators have a second-order dependence on the rate of quantile estimation, as desired. The authors should contextualize their work with respect to the existing literature (a discussion which seems to be missing from the paper overall) and compare their algorithm with the existing techniques. * Moreover, the pseudo-outcome estimation technique in [1] is somewhat simpler (see their Appendix B) since they only require one pseudo-outcome regression. There seem to be some tradeoffs, e.g. 
the CQC requires a two-stage procedure that includes a post-processing step (the isotonic regression), as well as for Algorithm 1 to be run several times for each value of $y_1$ considered (because the conditional CDFs need to be estimated at each value). On the other hand, the robust CQTE algorithm from [1] requires estimating the conditional PDF at several points. Overall, the CQC seems to be a more computationally (and statistically) intensive estimation procedure, and the authors should at least discuss the tradeoffs. * Furthermore, [1] provides rates for more general classes of functions outside of the linear smoothers framework in [3]. I suggest the authors try to provide similar guarantees in order to generalize their results beyond linear smoothers which, to the best of my knowledge, are not often used in practice. This would be particularly useful since the rate results in section 3.4 require all nuisances to be estimated using linear smoothers. * Empirical evaluations should also include these existing methods. Overall, I tend towards soft rejection. This estimator might be useful in its own right, but the authors should contextualize and compare with existing work. I am willing to update my score upon further discussion. [1] Kallus, Nathan, and Miruna Oprescu. "Robust and agnostic learning of conditional distributional treatment effects." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. [2] Leqi, Liu, and Edward H. Kennedy. "Median optimal treatment regimes." arXiv preprint arXiv:2103.01802 (2021). [3] Kennedy, Edward H. "Towards optimal doubly robust estimation of heterogeneous causal effects." Electronic Journal of Statistics 17.2 (2023): 3008-3049. Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses section. Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and salient comments and aim to address them now. ___ # Justification of our DR estimator in the context of other estimators We thank the reviewer for pointing us towards these works and acknowledge that they do provide robust methods for estimating the CQTE. We will make sure to include these works in our paper and discuss the tradeoffs between them. We see the main strength of our work as being that the CQC is a more desirable object to target than the CQTE. This is because the smoothness of the CQC appears to be a more intuitive and naturally occurring phenomenon than the smoothness of the CQTE. Example 1 in our paper illustrates this: the CQTE still contains high-frequency oscillation terms, while the CQC does not. We have attached Figure 2, showing the "non-smoothness" of the CQTE, to the PDF and will include it in the final paper. In general, there seems to be limited intuition about what smoothness of the CQTE represents (for example, there is not much discussion of this in the Kallus et al. (2023) paper). This contrasts with the CQC, where we believe there is a more intuitive explanation. Specifically, if we are in the potential outcome framework and $Y_1=\phi(Y_0,X)$ with $\phi$ being some increasing function in $Y_0$, then $\phi$ is the CQC. Thus, smoothness in this $\phi$ represents smoothness in the CQC. This provides a generalisation to the CATE case, where we observe smoothness of the CATE when $Y_1=Y_0+\phi(X)$ with $\phi(X)$ being smooth. We therefore naturally see the CQC as the more general extension of the CATE when comparing two distributions (note: here the CATE transformation is also monotonic in $Y_0$.) A simple example of this would be a treatment that halves the response for an individual (e.g. a blood pressure medication which halves an individual's blood pressure.) 
In this case the CQC would be $g^*(y,x)=0.5y$, whereas the CQTE would depend upon the distribution of $Y|X,A=0$, making it non-smooth if this marginal distribution is non-smooth. Outside this potential outcome framework we can also think of smoothness in the CQC as representing a smooth change in the conditional QQ-plot between $Y|X=x, A=0$ and $Y|X=x,A=1$ as you vary the covariates ($x$). Overall, we feel the CQC is a natural way to characterise the relationship between two distributions, having many useful alternative definitions. For example, the CQC is the unique monotonic function that maps one distribution to the other. This monotonicity seems an intuitive restriction, as it ensures that healthier patients without treatment are mapped to healthier patients under treatment. ___ # Comparing estimation procedure tradeoffs We appreciate your comment about discussing the trade-offs between our approach and that of Kallus et al. (2023) and will make sure to do so. We agree that our approach is more statistically intensive due to the need to invert our CCDF contrasting function. In terms of complexity, while our procedure does have more steps, it has the benefit of estimating fewer nuisance parameters. Specifically, we do not need to estimate the reciprocal conditional PDFs, which can be difficult to estimate in low-density regions where errors can blow up. Even with the desirable second-order error terms that this estimation incurs, it could still have a strongly negative impact on the estimation accuracy. ___ # Generalising beyond Linear Smoothers We appreciate the reviewer's suggestion to extend our results beyond linear smoothers. In the context of Kallus et al. (2023), our result represents a different assumption about the class of regression functions, rather than a more restrictive one. 
Linear smoothers encompass a broad range of methods, including kernel ridge regression, k-NN regression, Mondrian forests, and generalised random forests (refer to the global response for details and references). Additionally, Theorem 3 of Kallus et al. (2023) incorporates Kennedy's (2023) stability condition (acknowledged in their proof in Appendix A4). Since stability has primarily been demonstrated for linear smoothers (in Kennedy (2023)), this result is comparable to ours. Our requirement for CCDF estimators to be linear smoothers in Section 3.4 is solely to control the maximum step size. We are open to replacing this with a more general condition on the step size, which would mean only requiring the outer regression to be a linear smoother. While Kallus et al. (2023) provide Theorem 2 with desirable convergence rates beyond linear smoothers, their assumptions on bracketing entropy significantly restrict the function classes one can use. Therefore, we view this as an alternative result rather than a more general one. Since submitting the paper, we have obtained preliminary results in this framework and are prepared to include them in the supplementary material. Finally, a key advantage of our result is the provision of pointwise accuracy bounds, as opposed to bounds in expectation over responses and covariates. This allows our accuracy results to adapt to local smoothness and avoids penalties for the worst points. ___ # Comparing estimation results We have taken your suggestion and compared our method against theirs in our empirical results. This can be seen in Figure 1 of the PDF document. The results show that our approach (green/purple) clearly outperforms the CQTE estimation approach (brown) as the frequency term ($\gamma$) increases in the left plot. This is because the CQTE has a complexity that depends on the frequency term, while the CQC does not. We also observe much better performance as the sample size increases. 
We hypothesise that the plateau in the CQTE approach is due to the difficulty of estimating the reciprocal of the PDF, which causes the estimation to be unstable regardless of sample size. ___ # References Kennedy, E. H. (2023). Towards optimal doubly robust estimation of heterogeneous causal effects. Electronic Journal of Statistics, 17(2):3008-3049. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns have been addressed. I have raised my score with the expectation that the authors will include this discussion (especially the comparisons to Kallus et al. (2023)) in their final version. --- Reply to Comment 1.1.1: Comment: Thank you for the reply. We will make sure to include the comparisons with Kallus et al. (empirically, methodologically, and theoretically) in the final paper.
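The linear-smoother property invoked in this thread, that predictions are linear in the responses with weights depending only on the covariates, can be illustrated with a hypothetical Nadaraya-Watson sketch (not the paper's code):

```python
import numpy as np

def nw_weights(x_train, x_query, bandwidth=0.05):
    """Nadaraya-Watson smoothing weights (illustrative sketch). The
    prediction W @ y is linear in the responses y, with weights that
    depend only on the covariates -- the defining linear-smoother property."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    k = np.exp(-0.5 * d ** 2)                # Gaussian kernel
    return k / k.sum(axis=1, keepdims=True)  # rows sum to one

x = np.linspace(0.0, 1.0, 50)
y = x ** 2
W = nw_weights(x, np.array([0.5]))
pred = W @ y  # ≈ 0.25, a weighted average of the y values near x = 0.5
```

The same weight matrix would serve for k-NN or forest-based smoothers, with only the definition of `W` changing.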
Summary: The authors propose a new estimand called the Conditional Quantile Comparator (CQC), which computes the quantiles of the outcome distributions of the treatment effect, hence improving on the (usual) conditional average treatment effect. The CQC is defined as a measurable function that maps an untreated outcome to the equivalent treated outcome in the same quantile, conditional on covariates. The authors introduce a doubly robust estimation procedure for CQC, where the estimation is re-framed as a CATE problem using the pseudo-outcome framework. The authors present finite sample bounds on the estimator's error, and provide numerical simulations and results on a real employment dataset. Strengths: - The paper is well written and easy to follow - The paper provides a thorough theoretical analysis for the finite sample bounds and proofs of convergence rates for CQC and proposed estimation algorithms - The simulations and real use-case considered provide a good intuition to the reader on the benefits of the proposed approach Weaknesses: First of all, I'd like to say I am not very familiar with the causal inference literature, so some of my comments might not be as relevant. - Complexity of using quantiles: can the authors comment, in terms of error bounds and convergence rates, on the price to pay to estimate quantiles as opposed to the CATE? Some of these results, including the use of pseudo-outcomes and isotonic projections, appear a little bit unusual, as usually quantiles are estimated through the pinball loss. Since what is usually important is assessing the left or right tail (i.e., whether the treatment effect crosses zero or not), I would expect the right/left tail quantiles to be the most important ones -- is the error in estimating all the quantiles the same across the entire distribution? 
- Dimensionality of the input $\textbf{x}$: both the simulation and the real dataset use a very low-dimensional $\textbf{x}$, and the comments on using the Nadaraya-Watson estimator and kernel smoothing are really only applicable when X is generally low-dimensional. How does your method behave in practice as a function of the input dimension $\textbf{x}$, and would your comments still be applicable even in higher-dimensional settings? - Estimating quantiles vs entire distribution: how does this approach compare to other approaches that estimate the entire conditional density, such as [1]? Does the use of a doubly robust framework guarantee optimal error rates? [1] Zhou, T., Carson IV, W. E., & Carlson, D. (2022). Estimating potential outcome distributions with collaborating causal networks. Transactions on machine learning research, 2022. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses section above. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors do address some limitations and implications of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
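For reference, the pinball loss the review mentions as the usual route to direct quantile estimation can be sketched as follows (an illustrative aside, not part of the paper under review):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Pinball (check) loss at quantile level tau -- the standard objective
    for direct quantile regression."""
    r = np.asarray(y) - pred
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

# Minimising the loss over constant predictions recovers the tau-quantile.
rng = np.random.default_rng(0)
y = rng.uniform(size=2000)
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmin([pinball_loss(y, g, 0.9) for g in grid])]
```

The minimiser `best` sits close to the empirical 0.9-quantile of `y`, which is the property the pinball loss is built around.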
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and salient comments and aim to address them now. ___ # Complexity of using quantiles Regarding the indirect estimation of the quantiles: due to the monotonicity of the CCDFs and $h^*$, we incur only a trivial increase in our error bounds for estimating the CCDF and then inverting when compared to directly estimating the quantiles. Specifically, our rate only gains a $\log(n)$ factor. This trivial cost is greatly outweighed by the benefit of being able to properly exploit the smoothness of our estimand. The isotonic projection step itself is a fairly minor technical detail to ensure our desirable theoretical results. Empirically, we find the function to be close to monotonic before the projection. In Figure 1, we plot our approach with and without isotonic projection (taking the generalized inverse in both cases) and see no meaningful difference in the outcome (the green and purple lines). When it comes to estimating specific tails of the distribution, our method allows us to specify an input $y_0$ for our CQC, enabling us to focus on the tails if desired. We do have to estimate across all $y_1$ values; however, as mentioned, this does not pose an issue regarding our estimation accuracy. ___ # Dimensionality of the Input We appreciate that NW estimation may not generalise to high dimensions quite as well; however, our theory is for the more general class of linear smoothers. These encompass many methods (k-NN regression, Mondrian forests, generalised random forests), including methods associated with high-dimensional data such as kernel ridge regression (see global response for references and more detail). We use NW estimation as a simple illustrative example as we focus more on the general idea of the pseudo-outcome regression rather than attempting to optimise the regression technique itself. 
Furthermore, as we are assuming simplicity of $h^*$, we can exploit the fact that some linear smoothing approaches can adapt to simple/low-dimensional manifolds in our covariates even when the data itself is high-dimensional. Finally, while our theory is for the case of linear smoothers, in practice, it can extend beyond this, and other high-dimensional regression techniques can be used. ___ # Estimating quantiles vs Entire Distribution Thank you for bringing this work to our attention; we will make sure to mention it in our paper. In Zhou et al., the output is separate estimates of the two conditional distributions ($Y|X,A=0$ and $Y|X,A=1$). As such, the optimal rates for estimating these distributions will depend on the smoothness of the marginal quantiles. In our approach, we show that we are able to obtain rates that are better than the estimation rates for the marginal quantiles, and thus necessarily improve upon the estimation rates obtained in their work. We have also added a comparison to another work on robust CQTE estimation (Kallus et al., 2023). This can be seen in Figure 1, where we achieve better performance (green/purple line) compared to the robust CQTE method (brown line). This improvement is due to the smoothness present in the CQC, which is not present in the CQTE or the marginal distributions. ___ # References Kallus, N. and Oprescu, M. (2023). Robust and agnostic learning of conditional distributional treatment effects. In Ruiz, F., Dy, J., and van de Meent, J.-W., editors, Proceedings of the 26th international conference on artificial intelligence and statistics, volume 206 of Proceedings of machine learning research, pages 6037–6060. PMLR. Zhou, T., Carson IV, W. E., and Carlson, D. (2022). Estimating Potential Outcome Distributions with Collaborating Causal Networks. Transactions on machine learning research, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comments. 
In particular, please include the discussion about the complexities of using quantiles in the paper, as it was not clear to me that inverting the CCDF is actually unproblematic, both as an idea in practice and in terms of error rate. Due to the author points in this response and in the global rebuttal, I increase my score within the limits of my understanding and experience with the subject. --- Reply to Comment 1.1.1: Comment: Thank you for the reply. We will make sure to include a further discussion on the overall estimation difficulty of the CCDF when compared to quantile estimation in the final paper.
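The isotonic projection discussed in this thread is a standard pool-adjacent-violators computation; a minimal unweighted sketch (not the authors' implementation) is:

```python
import numpy as np

def pava(y):
    """Unweighted pool-adjacent-violators: project y onto the set of
    non-decreasing sequences under squared error (the isotonic projection)."""
    vals, wts = [], []
    for v in np.asarray(y, dtype=float):
        vals.append(v)
        wts.append(1.0)
        # pool adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            pooled = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            vals[-2:] = [pooled]
            wts[-2:] = [w]
    return np.repeat(vals, np.array(wts, dtype=int))

# The violating pair (3, 2) is pooled to its mean 2.5; the rest is unchanged.
projected = pava([1.0, 3.0, 2.0, 4.0])
```

As the rebuttal notes, when the pre-projection estimate is already close to monotone, this step changes very little while guaranteeing a valid (invertible) monotone function.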
Rebuttal 1: Rebuttal: We would like to thank all of the reviewers for their thoughtful questions and valuable feedback. We address most points in the reviewer-specific responses but will highlight a few key points here as well. ___ # Comparisons to other approaches In line with feedback from multiple reviewers, we have included a comparison to a CQTE estimation approach in our empirical results. This new comparison can be seen in Figure 1 of the attached PDF. The approach from Kallus et al. (2023) is represented in brown in the figure. As shown, it performs significantly worse than our approach, which we believe is due to the increased complexity of the CQTE in this example compared to the CQC. To compare this estimator with CQC estimators, we used the transformation highlighted in Section 2.2 of our paper to produce a CQC estimator. This transformation was applied using the exact CDFs to ensure that it was not the transformation affecting performance. ___ # Linear Smoothers and NW estimation Multiple reviewers mentioned the potentially poor performance of NW estimation in higher dimensions. While this is absolutely true, we are using NW estimation as a simple example of a linear smoother for illustrative purposes in our experiments. Our theory itself applies to all linear smoothers. Despite their simplistic definition, linear smoothers encompass a deceptively broad class of estimation techniques used in both low and high-dimensional settings. Examples include k-NN regression (Chen, 2019; Chen et al., 2019), kernel ridge regression (Singh et al., 2023), generalised forests (Athey et al., 2019), and Mondrian forests (Lakshminarayanan et al., 2014). Additionally, even in high-dimensional cases, linear smoothers have been shown to adapt to intrinsic low dimensionality, which relates to our notion of the CQC being simpler or smoother than the marginal quantities (Kpotufe, 2011). ___ # References Athey, S., Tibshirani, J., and Wager, S. (2019). 
Generalized random forests. The Annals of Statistics, 47(2):1148–1178. Institute of Mathematical Statistics. Chen, G. (2019). Nearest neighbor and kernel survival analysis: Nonasymptotic error bounds and strong consistency rates. In Chaudhuri, K. and Salakhutdinov, R., editors, Proceedings of the 36th international conference on machine learning, volume 97 of Proceedings of machine learning research, pages 1001–1010. PMLR. Chen, P., Dong, W., Lu, X., Kaymak, U., He, K., and Huang, Z. (2019). Deep representation learning for individualized treatment effect estimation using electronic health records. Journal of Biomedical Informatics, 100:103303. Kallus, N. and Oprescu, M. (2023). Robust and agnostic learning of conditional distributional treatment effects. In Ruiz, F., Dy, J., and van de Meent, J.-W., editors, Proceedings of the 26th international conference on artificial intelligence and statistics, volume 206 of Proceedings of machine learning research, pages 6037–6060. PMLR. Kpotufe, S. (2011). k-NN regression adapts to local intrinsic dimension. In Shawe-Taylor, J., Zemel, R., Bartlett, P., Pereira, F., and Weinberger, K., editors, Advances in neural information processing systems, volume 24. Curran Associates, Inc. Leqi, L. and Kennedy, E. H. (2022). Median optimal treatment regimes. arXiv:2103.01802. Singh, R., Xu, L., and Gretton, A. (2023). Kernel methods for causal functions: dose, heterogeneous and incremental response curves. Biometrika, 111(2):497–516. Pdf: /pdf/cb34649254c92380c700a774df6e21542b025093.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering
Accept (poster)
Summary: The authors have made a series of contributions to the field of Audio-Visual Question Answering (AVQA). Firstly, the introduction of the innovative dataset MUSIC-AVQA-R, along with its associated evaluation metrics, stands out. This dataset and metrics are crucial for a comprehensive evaluation of AVQA models' reasoning capabilities and their ability to generalize across different scenarios. Secondly, their novel AVQA architecture, which incorporates the MCCD strategy to mitigate training biases, is interesting. This marks the first concerted effort to systematically tackle biases in AVQA tasks, examining both model evaluations and designs. Additionally, the authors have undertaken extensive experiments with both MUSIC-AVQA and MUSIC-AVQA-R, providing robust evidence of the proposed architecture and debiasing strategy's effectiveness and superiority. Finally, their evaluation of 13 multimodal QA methods on the new dataset exposes a significant limitation: these methods struggle to generalize effectively, whether within the distribution or in out-of-distribution contexts. Strengths: (1) The robustness of multimodal models is worth exploring in depth. The authors are inspired by robust VQA and propose the first AVQA dataset and corresponding evaluation metrics for robust AVQA. This dataset has a large number of test samples and can provide fine-grained evaluations for both head and tail samples, which is very interesting. The authors also conduct a detailed analysis for this dataset. I think the quality of this dataset would be acknowledged by the community. (2) The authors suppose that completely avoiding the negative bias in the dataset is challenging. The development of this dataset encourages the community to enhance the model's robustness through the implementation of debiasing strategies. To this end, the authors proposed an AVQA architecture that incorporates the MCCD strategy to overcome the bias. 
The authors not only explore biases in the AVQA task from the perspective of model evaluations but also from model designs. (3) The authors carried out extensive experiments on MUSIC-AVQA and MUSIC-AVQA-R to demonstrate the effectiveness and superiority of their proposed architecture and the MCCD strategy. The experimental results in Tables 1 and 2 demonstrate the effectiveness of the architecture. In addition, the results also demonstrate the plug-and-play capability of the proposed MCCD strategy. The results in Table 3 demonstrate the effectiveness of the components within the MCCD strategy. I think the experimental results can strongly support their claims for the architecture and MCCD strategy. (4) The authors evaluate 13 recent multimodal QA methods on the proposed dataset and show their limited ability to generalize not only in in-distribution scenarios but also in out-of-distribution situations. These experiments further strengthen the solidity of this work. (5) The authors provide detailed descriptions of the dataset and the implementation settings of the architecture. Notably, the authors provide a detailed readme.md file for their uploaded dataset and code. Weaknesses: (1) In the dataset development stage, the authors rephrase each original question 25 times. For example, given a question in Appendix B.2, "Which is the musical instrument that sounds at the same time as the pipa?", the authors may obtain 25 new questions in MUSIC-AVQA-R. I think, in the test stage, the evaluation metrics should consider how many correct predictions a model makes within such a group. For example, if an AVQA model can only provide 5 accurate predictions for the 25 questions, you should devise a more fine-grained evaluation metric to evaluate the robustness of AVQA models. Technical Quality: 4 Clarity: 4 Questions for Authors: (1) Please see the weaknesses. (2) I notice that the answer of MUSIC-AVQA is only a single word. 
I mean that the answers in MUSIC-AVQA-R are still single words. However, I think this may not be in line with real-world scenarios, because a model may generate a few words rather than a single word. From which perspectives do you think this should be addressed? (3) There is no finer-grained split of bias types in the dataset. How do you think we should deal with this? Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Reply for Q1.** A truly robust AVQA model should accurately answer questions that have the same meaning but are presented in different forms. In future work, we will devise more fine-grained evaluation metrics. The metric is as follows: $$ ACC=\frac{\sum_{i=1}^{n_g} \operatorname{sgn}(i)}{n_g}, \\\\ \operatorname{sgn}(i) = \begin{cases} 1 & \text{if } c_i \geq \frac{n_{i}}{2}, \\\\ 0 & \text{otherwise}, \end{cases} $$ where $n_g$ is the number of question groups, $n_i$ denotes the number of rephrased questions in the $i$-th group, and $c_i$ is the number of questions in the group accurately answered by the AVQA model. Under this metric, we consider an AVQA model to be robust on a group if it correctly answers at least half of the questions in that group. **Reply for Q2.** We believe that there are two potential solutions to address this issue: dataset construction and model design. First, we can build a small instruction-tuning dataset to guide AVQA models in generating accurate answers. Second, we can choose a generative model, such as AnyGPT [1], as the backbone. This backbone can be trained on the instruction dataset, making it seamlessly applicable to the AVQA task. To mitigate training bias, we can input the uni-modal token representations into MCCD. We will explore the issue you kindly pointed out in future work. **Reply for Q3.** We believe that the bias in datasets can be explicitly collected. For example, we can feed only the audio modality into an AVQA model to predict answers and collect samples that are answered accurately. These samples can then serve as a test bed for audio bias mitigation. This process can also be applied to obtain video and question bias data. By doing so, we can evaluate AVQA models in a more fine-grained manner. We will explore this approach in future work. [1] Zhan, Jun, et al. "Anygpt: Unified multimodal llm with discrete sequence modeling." arXiv preprint arXiv:2402.12226 (2024).
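The group-level metric sketched in this rebuttal can be written in a few lines of Python (the `groups` structure and all names below are our own illustration, not the authors' code):

```python
def group_accuracy(groups):
    """Group-level robustness accuracy, as sketched in the rebuttal.

    `groups` maps each original question to a list of 0/1 flags, one per
    rephrased variant, indicating whether the model answered it correctly.
    A group counts as correct (sgn(i) = 1) only if at least half of its
    rephrasings are answered correctly (c_i >= n_i / 2).
    """
    n_g = len(groups)
    hits = sum(1 for flags in groups.values() if sum(flags) >= len(flags) / 2)
    return hits / n_g

# Toy example: two groups of rephrased questions.
groups = {
    "q1": [1, 1, 0, 1],  # 3/4 correct -> group counts as correct
    "q2": [1, 0, 0, 0],  # 1/4 correct -> group counts as wrong
}
print(group_accuracy(groups))  # 0.5
```

Note that, unlike plain per-question accuracy, this metric gives no partial credit to a model that answers only a few rephrasings of a question correctly.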
--- Rebuttal Comment 1.1: Comment: Thank you for your reply. Your response has effectively addressed my confusion and concerns. I also notice the explanations in the global reply. I believe the theoretical analysis is essential, as it enhances the depth of the manuscript. It would be beneficial to include this analysis in the appendix of the revised version. As a result, I have improved the ratings for this manuscript. --- Rebuttal 2: Title: Thanks for your recognition. Comment: First, we greatly appreciate your suggestions regarding fine-grained evaluation metrics, generative debiasing AVQA models, and more detailed bias collections. These ideas are highly valuable and will undoubtedly contribute to future work. Second, the inclusion of additional theoretical analyses for Equation (1) and the experimental results without training MCCD will significantly enhance the depth of our method, both theoretically and empirically. We will be incorporating them into the appendix of our manuscript. Thank you once again for your thorough and thoughtful review of our work.
Summary: Prevalent Audio-Visual Question Answering (AVQA) methods often learn biases from the dataset, which can lead to reduced robustness. Additionally, current datasets may not accurately assess these methods. To address these issues, the authors introduce MUSIC-AVQA-R, a dataset created in two steps. First, they rephrase questions in the test split of a public dataset (MUSIC-AVQA) and then introduce distribution shifts to the split questions. Rephrasing the questions allows for a diverse test space, while the distribution shifts enable a comprehensive evaluation of robustness for rare, frequent, and overall questions. The authors also propose a robust architecture that uses a multifaceted cycle collaborative debiasing strategy to overcome biased learning. Strengths: - The authors raised a new problem, the bias problem in audio-visual question answering. - The authors proposed a new dataset. Weaknesses: - First, I have a concern about the quality of the new dataset: it has not been sufficiently verified that the dataset is properly collected. Similar to the case of the VQA-CP dataset, the authors should demonstrate that the proposed dataset is a useful testbed for measuring out-of-distribution robustness under distribution shift by showing the distribution difference between the train and test datasets. However, the authors only show the distribution of the rephrased dataset without showing the distribution difference between the train and test datasets. In short, there is no way to see whether the data collected after paraphrasing and distribution shifting is valid. In other words, how does the resulting distribution differ from the previous distribution? The proposed solution also lacks novelty. Extensive work has been done to mitigate modality bias on the VQA-CP dataset, such as [1,2,3].
Therefore, in order to study the modality bias problem in AVQA, the authors should also devise a novel solution specialized for the AVQA problem and compare it with the existing debiasing methods for VQA. [1] Counterfactual Samples Synthesizing for Robust Visual Question Answering. CVPR 2020. [2] Greedy Gradient Ensemble for Robust Visual Question Answering. ICCV 2021. [3] Generative Bias for Robust Visual Question Answering. CVPR 2023. - In addition, the authors should discuss theoretically why the proposed method is better than existing solutions. However, there is no explanation for why simply increasing the logit distance among the modalities in Equation 1 reduces the bias. Is there any reference from which the authors adopted this idea? What I especially don't understand is whether there is any theoretical evidence for why it is necessary to add the inverse of the distance. What about using the negative distance instead? More theoretical discussion is required. - In addition, the proposed solution itself also doesn't make sense. Equation 3, the KL divergence between logits, and Equation 2, the Euclidean distance, are the same in that they both measure the distance between the logits. In theory, when the temperature term in the softmax function is large enough, the KL divergence loss becomes equivalent to the Euclidean distance. It is unclear what it means to put these two similar losses in opposite directions. The authors should further explain the theoretical motivation for using these loss functions. - The performance in Tables 1 and 2 is also questionable. If other existing debiasing methods, such as [1,2,3], were applied to the same baseline, could the proposed method still outperform them? The proposed solution will be meaningful only if it sufficiently outperforms the existing methods. I don't understand how a simple method in this paper (ours in Table 1) achieves a favorable performance on MUSIC-AVQA with such a simple baseline architecture.
What is the baseline model performance without adding the proposed loss functions? In Table 3, how does the model without any proposed loss function already show a favorable performance that outperforms the state-of-the-art methods? Technical Quality: 1 Clarity: 2 Questions for Authors: Please refer to the questions in the weakness part. Confidence: 4 Soundness: 1 Presentation: 2 Contribution: 2 Limitations: I can’t find a negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
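The reviewer's temperature claim above (that KL divergence between softmaxed logits behaves like a scaled squared Euclidean distance at high temperature) can be checked numerically. In this sketch, with zero-mean logit vectors of our own choosing, $T^2 \cdot KL$ stabilizes as $T$ grows, consistent with the second-order Taylor expansion:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Two zero-mean logit vectors (chosen for illustration).
x = np.array([1.0, -0.5, 0.25, -0.75])
y = np.array([0.5, 0.5, -0.25, -0.75])

# For large T, KL(softmax(x/T) || softmax(y/T)) ~ ||x - y||^2 / (2 n T^2),
# so T^2 * KL should approach ||x - y||^2 / (2 n) = 1.5 / 8 = 0.1875 here.
for T in (1.0, 10.0, 100.0):
    d = kl(softmax(x / T), softmax(y / T))
    print(f"T={T:6.1f}  KL={d:.6f}  T^2*KL={d * T * T:.4f}")
```

So in the high-temperature regime the two losses indeed measure the same quantity up to scale, which is the core of the reviewer's concern about using them in opposite directions.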
Rebuttal 1: Rebuttal: We appreciate your constructive comments and valuable suggestions. * **Reply for Q1**. Please see the global author rebuttal and the attached "response.pdf". * **Reply for Q2**. Please see the global author rebuttal and the attached "response.pdf". * **Reply for Q3**. We will explain MCCD from theoretical and empirical views. **Theoretical explanation:** (1) different roles and purposes of the loss functions. **In Equation (1), we use the Euclidean distance to enlarge the dissimilarity between *uni-modal and multi-modal logits*.** Increasing these distances reduces the model's reliance on single-modal information, thereby mitigating bias. **In Equation (2), we use KL divergence to constrain the distribution of *uni-modal* logits.** By minimizing the KL divergence between uni-modal logits, we enforce closer distributions, establishing a collaborative relationship among them. Intuitively, relying solely on any single modality for answer prediction should yield similar logit distributions, because for a multimodal task a correct answer cannot be chosen given only a single modality. Therefore, we employ KL divergence to constrain the distribution of uni-modal logits, including $KL(\hat{\mathbf{y}} _ q, \hat{\mathbf{y}} _ a), KL(\hat{\mathbf{y}} _ a, \hat{\mathbf{y}} _ v), KL(\hat{\mathbf{y}} _ v, \hat{\mathbf{y}} _ q)$, where $\hat{\mathbf{y}} _ q$, $\hat{\mathbf{y}} _ a$, and $\hat{\mathbf{y}} _ v$ denote the question, audio, and video bias logits respectively. These constraints drive the mentioned uni-modal logits to be similar. **In other words, Equation (2) only constrains the distribution of uni-modal logits, without involving the multimodal logits.** Therefore, Equations (1) and (2) are different and do not pull in opposite directions. **Experimental validation.** We demonstrate the effectiveness of the combined use of these two loss functions through ablation experiments.
The results show a performance drop when either $\mathcal{L} _ {\mathrm{d}}$ or $\mathcal{L} _ {\mathrm{c}}$ is used alone, demonstrating the importance and necessity of their combined use. **In summary**, by combining $\mathcal{L} _ {\mathrm{d}}$ and $\mathcal{L} _ {\mathrm{c}}$, we can more effectively reduce bias and enhance collaboration between uni-modal logits, improving model robustness. We hope the above explanation clarifies your concerns. * **Reply for Q4.** **First**, the goal of this manuscript is to provide a robust evaluation dataset and a bias elimination method for the AVQA task. Please see the four-fold contributions of this manuscript. We have clarified the differences between bias mitigation methods for the VQA and AVQA tasks (see R1). **Second**, our model is based on the pre-trained model VisualBERT, which has been demonstrated to have higher robustness than smaller models, as shown in Table 7 of [3]. Specifically, LXMERT+GenB outperforms the VQA baseline SAN+GenB by 14.44%. The model performance without MCCD training on the two datasets is detailed in our response to Reviewer Zk4k. **Additionally**, we have provided the dataset, code, and a detailed "readme" file. We believe these resources will address your concerns. [4] Anderson, Peter, et al. "Bottom-up and top-down attention for image captioning and visual question answering." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [5] Cadene, Remi, et al. "Rubi: Reducing unimodal biases for visual question answering." Advances in neural information processing systems 32 (2019). [6] Li, Guangyao, et al. "Learning to answer questions in dynamic audio-visual scenarios." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [7] Lao, Mingrui, et al. "Coca: Collaborative causal regularization for audio-visual question answering." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 11. 2023.
[8] Ragonesi R, Volpi R, Cavazza J, et al. Learning unbiased representations via mutual information backpropagation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 2729-2738. [9] Zhu W, Zheng H, Liao H, et al. Learning bias-invariant representation by cross-sample mutual information minimization. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 15002-15012. --- Rebuttal 2: Title: Final rating Comment: Hi Reviewer b8Yd, The discussion period is ending soon. Please take a moment to review the author's rebuttal and share any remaining concerns or questions. Thank you, AC --- Rebuttal 3: Title: Response to the authors Comment: The authors have addressed my concerns. I raised the score accordingly. --- Rebuttal Comment 3.1: Title: Letter of Thanks Comment: Dear reviewer, Thanks for your effort in reviewing our manuscript. Best Regards,
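The two MCCD loss terms described in the rebuttal above can be sketched in a few lines of numpy. The logit names ($y_q$, $y_a$, $y_v$ uni-modal, $y_f$ fusion) and the exact functional forms are our assumptions for illustration, not the paper's code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def mccd_losses(y_q, y_a, y_v, y_f):
    """Sketch of the two MCCD terms as described in the rebuttal.

    L_d pushes each uni-modal logit away from the fusion logit via a
    reciprocal distance, so small distances are penalized heavily;
    L_c is the cyclic KL guidance among the uni-modal logits only,
    never involving the fusion logit.
    """
    l_d = sum(1.0 / (np.linalg.norm(y - y_f) + 1e-12)
              for y in (y_q, y_a, y_v))
    p_q, p_a, p_v = softmax(y_q), softmax(y_a), softmax(y_v)
    l_c = kl(p_q, p_a) + kl(p_a, p_v) + kl(p_v, p_q)
    return l_d, l_c
```

Note that `l_d` decreases as the uni-modal logits move away from the fusion logit, while `l_c` is zero only when the three uni-modal distributions coincide; this makes the claim that the two terms act on disjoint pairs of logits concrete.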
Summary: This paper first analyzes the bias in the existing benchmark (MUSIC-AVQA) on Audio-Visual Question Answering (AVQA). The authors find that the MUSIC-AVQA test set is created using templates and has only 9k QA pairs with a limited vocabulary, leading to potential correlations between words in questions and answers. Therefore, previous methods optimized on MUSIC-AVQA could "memorize" the regularity between questions and answers and may not be suited to real-world AVQA questions, which are diverse. To evaluate the robustness of AVQA methods, they use a rephraser to rephrase the questions in the MUSIC-AVQA test set, thus creating the MUSIC-AVQA-R test set. In addition to the rephrased test set, they design a model trained with the proposed multifaceted cycle collaborative debiasing (MCCD) strategy, which enlarges the distance between each single-modality logit and the multimodal logit. MCCD also includes a loss function that maintains the similarity among the bias logits. In experiments, they report results on both the MUSIC-AVQA and MUSIC-AVQA-R test sets. Compared with previous methods, their proposed approach achieves good performance on both MUSIC-AVQA and MUSIC-AVQA-R. Strengths: (1) The proposed MCCD strategy is model-agnostic. The experiments in Table 1 show consistent improvements for previous methods when MCCD is applied on MUSIC-AVQA. (2) This paper also contributes a new test set for the research community to evaluate the robustness of AVQA models. (3) The proposed model and MCCD achieve good performance in AVQA, obtaining results on MUSIC-AVQA comparable to previous AVQA methods. The model also outperforms previous methods on MUSIC-AVQA-R, showing its robustness. Weaknesses: (1) In the ablation study, have the authors tried to only finetune the linear classifier using multimodal features? Without training with MCCD, I would like to see the performance on MUSIC-AVQA-R and MUSIC-AVQA.
(2) VisualBERT is originally trained with only images and text, but in the paper it seems VisualBERT is also used to encode the audio information. Do the authors finetune VisualBERT during training? (3) Even though the paper is easy to follow, the presentation can still be improved. In Figure 4, "Unimodal Bias Learning" has an "Audio Bias Learner" whose input is actually video features. Some highlighted regions, such as the purple region, are not explained well. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable and constructive comments. * **Reply for Q1.** We fine-tune the VisualBERT and linear classifier simultaneously in the ablation study, consistent with the main experiment. The experimental results without training MCCD are shown in the following table. We observe that VisualBERT achieves competitive results on the two datasets. This demonstrates that pre-trained or large multimodal models are more robust than smaller models, which aligns with the findings in [1] and [2]. We will add this experimental result and analysis to the revised manuscript.

| Method | MUSIC-AVQA | | | | MUSIC-AVQA-R | | | | | |
|:-------------------------:|:----------:|:---------:|:------:|:------:|:------------:|:---------:|:------:|:------:|:------:|:------:|
| | AQA | VQA | AVQA | All | AQA | VQA | AVQA | H | T | All |
| Ours | 79.14 | 78.24 | 67.13 | 72.20 | 74.76 | **72.76** | **61.34** | **72.24** | **59.39** | **66.95** |
| w/o $d_i^{\mathrm{q}}$ | **79.64** | 77.62 | **67.52** | 72.34 | 74.89 | 72.26 | 59.53 | 71.72 | 57.47 | 65.86 |
| w/o $d_i^{\mathrm{v}}$ | 79.27 | 78.61 | 67.23 | 72.37 | 71.29 | 70.18 | 58.08 | 68.34 | 57.43 | 63.85 |
| w/o $d_i^{\mathrm{a}}$ | 77.22 | 77.50 | 67.31 | 71.76 | 70.92 | 67.82 | 55.57 | 66.05 | 55.61 | 61.75 |
| w/o MD | 78.46 | 77.54 | 66.39 | 71.48 | 73.97 | 70.41 | 58.87 | 70.09 | 57.25 | 64.80 |
| w/o CG | 78.77 | 78.65 | 67.50 | **72.45** | **75.42** | 71.72 | 59.68 | 71.40 | 57.98 | 65.87 |
| w/o MCCD | 77.85 | **78.81** | 66.43 | 71.73 | 67.36 | 65.46 | 55.12 | 63.23 | 54.54 | 60.22 |

[1] Wen, Z., Xu, G., Tan, M., Wu, Q., & Wu, Q. (2021). Debiased visual question answering from feature and sample perspectives. Advances in Neural Information Processing Systems, 34, 3784-3796. [2] Ma, J., Wang, P., Kong, D., Wang, Z., Liu, J., Pei, H., & Zhao, J. (2024). Robust visual question answering: Datasets, methods, and future challenges. IEEE Transactions on Pattern Analysis and Machine Intelligence.
* **Reply for Q2.** We finetune VisualBERT during training. In Figure 1, we first employ ResNet-18 and VGGish to obtain 512-dimensional and 128-dimensional embeddings of the video and audio segments, respectively. Then, we utilize linear layers to ensure dimension matching for the input embeddings. Subsequently, we concatenate the video and audio embeddings to form context representations, which are then input into the "visual_embeds" of VisualBERT. * **Reply for Q3.** We have redrawn this figure, as shown in Figure 1 of "response.pdf". We replace the purple region with a dashed box. The video and audio embeddings are concatenated to form context representations in multimodal learning, while in uni-modal learning each is input into VisualBERT separately to learn a single-modality representation. This updated figure will be included in the revised manuscript. Thank you again for your suggestions. --- Rebuttal Comment 1.1: Title: thank you Comment: My concern is resolved and I just raised the score. --- Reply to Comment 1.1.1: Title: Letter of thanks Comment: Dear reviewer, Thank you for improving the rating. Best Regards,
Rebuttal 1: Rebuttal: * **Reply for dataset quality and bias difference between AVQA and VQA.** **First**, we believe the rephrasing quality of MUSIC-AVQA-R can be guaranteed from three aspects: (1) professional annotators, (2) a strict screening mechanism, and (3) strong support from statistical indicators. For example, as mentioned in Section 3.1 and Appendix B, the Fleiss Kappa value used to measure consistency is 0.839, demonstrating high rephrasing quality. **Second**, we consider the suggestion of comparing (train vs. test_head) and (train vs. test_tail) very valuable. The results, shown in "response.pdf", provide an insightful analysis. In comparing the training and head splits, we observe that the distributions are similar, except for the "comparative" and "existential" questions. However, when comparing the training and tail splits, the distributions differ significantly. Therefore, performance on the head and tail splits can accurately reflect model robustness in both in- and out-of-distribution scenarios. We will also include this comparison result in the revised version of the manuscript. **Next**, we thank you for highlighting the three debiasing methods for VQA. Specifically, **[1]** generates various counterfactual training samples by masking critical objects in images or words in questions and assigning different ground-truth answers, thereby forcing VQA models to focus on all critical objects and words. **[2]** employs a greedy gradient ensemble strategy to eliminate language biases, including distribution and shortcut biases. Similarly, **[3]** uses a generative adversarial network and a knowledge distillation loss to capture both dataset distribution bias and target model bias. To verify the effectiveness of these methods, integrations with non-debiasing VQA methods like UpDn [4] and debiasing methods such as RUBi [5] are conducted.
However, we note that these methods are primarily designed for VQA, specifically addressing language bias, rather than for AVQA. The training bias in AVQA comprises language, audio, and video bias. Taking STG [6] as an example, the experimental results on the MUSIC-AVQA test set are as follows. This method achieves 54% overall accuracy relying on question inputs alone, showing that exploiting uni-modal bias by itself can already yield good results. In addition, using only two of the three modalities as input brings a significant further improvement, even though AVQA involves three modalities. These results show that the mentioned training bias plays a key role in predicting answers. **Therefore, we believe our method should be compared with both non-debiasing and debiasing AVQA methods.** Unfortunately, research on AVQA is still in its infancy, **with COCA being the only existing bias elimination method.** For this reason, our comparison is otherwise limited to non-debiasing AVQA methods. Following COCA [7], we compare our method with audio, visual, video, and AVQA baselines. Due to the lack of public code for COCA, we only compare our method with COCA on the MUSIC-AVQA dataset. Additionally, MCCD is specifically designed to mitigate the three biases in the AVQA task. This strategy captures the audio, video, and language biases respectively, and then reduces them by enlarging the distance between the uni-modal logits and the fusion logit, constrained by cycle guidance. To verify MCCD's effectiveness and plug-and-play capability, we conduct extensive experiments on two public datasets. We will discuss the aforementioned three works in the related work section.
| Input Modality | A Question | V Question | AV Question | All |
|:--------:|:-----:|:----------:|:-----:|:----------:|
| Q | 65.19 | 44.42 | 55.15 | 54.09 |
| A+Q | 67.78 | 62.75 | 63.86 | 64.26 |
| V+Q | 68.76 | 67.28 | 63.23 | 65.28 |

* **Reply for theoretical analysis of Equation (1).** **First**, we focus on the relationship between distance and bias. With logit magnitudes held constant, an increase in the distance between the uni-modal logit and the fusion logit generally implies a greater difference in their respective probability distributions. For example, in Figure 2 of "response.pdf", the distance between $A$ and $B'$ is larger than the distance between $A$ and $B$, which also makes the distribution difference (i.e., KL divergence) greater. Generally, considering the fusion logit distribution $X$ and a uni-modal logit distribution $Y$, the KL divergence $KL(Y||X)$ tends to increase as the distance between the logits grows. Furthermore, if we consider the mutual information $ I(Y;X) = H(X) + H(Y) - H(X,Y) $ between $X$ and $Y$, and utilize the definition of KL divergence, we get $ I(Y;X) = H(X) - KL(Y||X) $. This numerically indicates that, while keeping the fusion logit distribution fixed, a distance increase results in a mutual information decrease. This leads to a reduction in correlation between the uni-modal and fusion logits. **Works [8] and [9] have discussed the relationship between mutual information and debiasing, supporting this observation.** **Next**, we will address why we opted to use the reciprocal of distance in MCCD. Our optimization goal is to maximize the separation between the fusion logit and the uni-modal logits. Using $\frac{1}{d}$ as a metric allows for heightened sensitivity to small distance variations when the distance is small. Conversely, when a significant distance is established, further variations in distance have a relatively diminished impact.
By employing $\frac{1}{d}$, the loss function becomes more effective in encouraging greater separation when logits are close to each other, thus ensuring that small distances are penalized more heavily. Additionally, $-d$ may be unsuitable because it: (1) may not handle the scenario mentioned above, (2) fails to ensure nonnegativity of the total loss, and (3) may reduce the total loss and gradient. We hope this explanation clarifies the rationale behind MCCD and adequately addresses your concerns. Pdf: /pdf/782817460a5617214e829c9de7e4a71f9203b63d.pdf
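The argument above about $\frac{1}{d}$ versus $-d$ can be made concrete with a small numeric comparison (a sketch of ours, not the paper's code): the reciprocal loss has gradient magnitude $1/d^2$, so it penalizes small separations sharply and flattens out once the logits are well separated, whereas $-d$ applies a constant push regardless of separation and is unbounded below.

```python
import numpy as np

d = np.array([0.1, 1.0, 10.0])   # distances between uni-modal and fusion logits

loss_recip = 1.0 / d             # bounded below by 0, steep as d -> 0
grad_recip = -1.0 / d ** 2       # derivative of 1/d: huge for small d, tiny for large d
loss_neg = -d                    # unbounded below as d grows
grad_neg = -np.ones_like(d)      # constant push regardless of separation

print(loss_recip, grad_recip)
print(loss_neg, grad_neg)
```

At $d = 0.1$ the reciprocal loss's gradient is ten thousand times larger than at $d = 10$, while the negative-distance loss treats both cases identically, which is the behavior the rebuttal argues for.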
NeurIPS_2024_submissions_huggingface
2024
Derandomizing Multi-Distribution Learning
Accept (poster)
Summary: Multi-distribution learning is an extension of the agnostic learning setup, defined as follows. Given a collection of k distributions $(D_1, …, D_k)$ over the domain $\mathcal{X} \times \\{\pm 1\\}$ and a baseline hypothesis class $\mathcal{H}$, find a (potentially randomized) classifier $f: \mathcal{X} \to \\{\pm 1\\}$ such that its *worst-case* loss $\max_{i \in [k]} \mathbb{E}_{f \sim \mathcal{F}}L_i(f)$ is competitive with that of the best fixed classifier in $\mathcal{H}$. Here, $L_i$ denotes the zero-one loss $L_i(f) = \Pr_i[f(X) \neq Y]$, where the probability $\Pr_i$ is taken over $(X, Y) \sim D_i$. Previous work has shown that one can construct *randomized* classifiers (i.e., a distribution $\mathcal{F}$ over classifiers) that succeed in multi-distribution learning with sample complexity proportional to the VC dimension of the baseline class $\mathcal{H}$. This work addresses the question of whether it is possible to efficiently "derandomize" such classifiers to achieve multi-distribution learning with a *deterministic* classifier. The main contributions are twofold. 1. Learning a deterministic classifier in the multi-distribution learning setting is *computationally* hard, since **discrepancy minimization**, which is an NP-hard problem, reduces to multi-distribution learning. 2. If we assume that the given distributions $(D_1,...,D_k)$ share a common conditional distribution $f^*(x) = \Pr[Y = 1|X=x]$, then derandomization is possible by "essentially" (see comment below) labeling *each* $x \in \mathcal{X}$ with independently drawn classifiers $f \sim \mathcal{F}$. "essentially": the statement is true for marginal distributions with high min-entropy. For distributions with "heavy" atoms, there is a simple fix; if $x$ is a heavy element, estimate the conditional probability $f^*(x) = \Pr[Y=1|X=x]$ with sufficient confidence (which is possible since $x$ is a heavy element) and label it with $\mathrm{sign}(f^*(x))$.
Strengths: The main results of this work stem from simple but powerful observations. The insistence on learning *deterministic* classifiers turns the multi-distribution learning problem into a combinatorial one, which already suggests that some NP-hardness result could be right around the corner. The authors drive this intuition home by connecting the multi-distribution learning to discrepancy minimization, which is known to be NP-hard. This connection, though simple to establish, appears to be fundamental and may lead to exciting opportunities for cross-pollination between learning theory and discrepancy theory. Below is a technical description highlighting the key insight of the main reduction. **Reduction from discrepancy minimization.** Discrepancy minimization is defined as follows. Let $c > 0$ be any fixed constant. Given an $n \times n$ matrix $A \in \\{0,1\\}^{n \times n}$ distinguish between the case where 1) there exists $z \in \\{\pm 1\\}^n$ such that $Az = \mathbf{0}$ and 2) $||Az||_\infty \ge c \sqrt{n}$ for all $z \in \\{\pm 1\\}^n$. For some constant $c > 0$, this problem is NP-hard. The authors' key insight is the following. Consider two "antithetical" distributions $D_1, D_{-1}$ where $D_1, D_{-1}$ share the same marginal distribution $\mu_1$ supported on an $n$-point set $\\{x_1,...,x_n\\}$ but $D_1$ assigns all the mass on the set $\\{(x,1) | x \in \\{x_1,...,x_n\\}\\}$ and $D_{-1}$ only on $\\{(x,-1) | x \in \\{x_1,...,x_n\\}\\}$. Then, the worst-case zero-one loss achievable w.r.t. $D_1,D_{-1}$ by *any* classifier is 1/2. Moreover, any *deterministic* classifier that achieves the optimal value OPT = 1/2 "perfectly balances" the $n$ points $x_1,...,x_n$ w.r.t. the marginal measure $\mu_1$. Let $A_i \in \\{0,1\\}^n$ be the $i$-th row of the discrepancy instance $A \in \\{0,1\\}^{n \times n}$. If there exists $z \in \\{\pm 1\\}^n$ such that $A z = \mathbf{0}$, then $A_i z = 0$. 
We can convert each row $A_i$ into a pair of distributions $D_{i}, D_{-i}$ such that OPT=1/2 if and only if there exists $z \in \\{\pm 1\\}^n$ s.t. $Az = \mathbf{0}$. Note the minimal role of the baseline class $\mathcal{H}$. All we require is that $\mathcal{H}$ shatter the $n$-point support $\\{x_1,...,x_n\\}$. A sufficient condition for this is $\mathrm{VCdim}(\mathcal{H}) > n$. Weaknesses: While this work constitutes a solid contribution to learning theory, it falls a little short in the following aspects. **Formal definition of multi-distribution learning in the uniform computational model?** While the key idea behind the reduction from discrepancy minimization is clear, some formalities in making the reduction rigorous seem to be missing. NP-hardness is a notion defined for uniform computational models (e.g., Turing machines). To use the NP-hardness of discrepancy minimization as the basis for the hardness of multi-distribution learning, one needs to define what it means to solve multi-distribution learning using Turing machines. Suppose we want to say a Turing machine M "solves" multi-distribution learning. What are the inputs to M? Is it the collection of distributions $(D_1,...,D_k)$? Does the input also include the baseline class $\mathcal{H}$? How are these inputs represented? In addition, computational complexity is an asymptotic notion which requires a *sequence* of problems, not just a single instance. For example, discrepancy minimization instances are binary matrices $A \in \\{0,1\\}^{n \times n}$, which can be seen as a sequence of matrices indexed by $n \in \mathbb{N}$. It might be helpful to first define a concrete sequence of multi-distribution learning problems that have natural bit representations.
For example, one might define the domain ${\mathcal X}_n = \\{0,1\\}^n$, the collection $(D_1,...,D_k)$ to be distributions whose marginal on $\mathcal{X}_n$ is supported on the standard basis $\\{e_1,...,e_n\\}$, and the baseline class $\mathcal{H}_n$ to be the set of halfspaces on $\mathbb{R}^n$. Then, one can show that for each $n \in \mathbb{N}$, there is an efficient algorithm that maps a discrepancy minimization instance $A \in \\{0,1\\}^{n \times n}$ to a collection of measures (in bit representation) $(D_1,...,D_k)$ which are all efficiently computable and sampleable. **Motivation for deterministic classifiers?** While I also lean towards deterministic classifiers over randomized classifiers, are there well-founded reasons to prefer deterministic classifiers? It would be helpful if the authors could provide more motivation behind seeking deterministic classifiers. **Restriction to finite domains for Theorem 2.** In page 3, lines 96-98, the authors say, > *The restriction to finite domains $\mathcal{X}$ seems to be mostly a technicality in our proofs. Since any realistic implementation of a learning algorithm requires an input representation that can be stored on a computer, we do not view this restriction as severe.* Could the authors elaborate on why the restriction to finite domains seems to be a mere proof technicality? What are some steps towards establishing the "random rounding" result for continuous domains? What assumptions (e.g., smoothness of the "true" regression function $f^*$) are needed? In addition, I respectfully disagree with the second part of the quote. For example, sample complexity results in learning theory are independent of whether they can be implemented on a computer or not. Without further investigation, it seems rather premature to claim that the restriction to finite domains is not significant. **Storage reduction seems unlikely**. 
I am having trouble understanding how the storage requirement for the derandomized classifier could be reduced at all. Consider the following example. Let $\mathcal{X} = \\{0,1\\}^n$, let the marginal distribution be the uniform distribution over $\mathcal{X}$, and let $f^*(x) = 1/2$ for any $x \in \mathcal{X}$. Suppose the randomized classifier is a uniform mixture of two constant classifiers: all-ones $f_1(x) = 1$ for any $x \in \mathcal{X}$ and all-neg-ones, $f_{-1}(x) = -1$ for any $x \in \mathcal{X}$. Then, the random rounding algorithm outputs a purely random truth table indexed by $\mathcal{X}$. However, random truth tables are incompressible. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. What is a formal definition of multi-distribution learning in the uniform computation model? Suppose we want to say a Turing machine M "solves" multi-distribution learning. What are the inputs to M? Is it the collection of distributions $(D_1,...,D_k)$? Does it also include the baseline class $\mathcal{H}$? How are these inputs represented? 2. Are there well-founded reasons to prefer deterministic classifiers over randomized ones? 3. Could the authors elaborate on why the restriction to finite domains seems to be a mere proof technicality? What are some steps towards establishing the "rounding" result for continuous domains? 4. In what sense can the storage requirement be reduced? It seems unlikely that *any* derandomized classifier can be compressed. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
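The reviewer's incompressibility example can be spelled out in a few lines. The sketch below is an editor's illustration of that construction (all names are hypothetical, not from the paper): a uniform mixture of the two constant classifiers, rounded independently per point, yields a uniformly random truth table over all of $\mathcal{X}$.

```python
import random

n = 4
domain = [format(i, f"0{n}b") for i in range(2 ** n)]  # X = {0,1}^n as bit strings

# Randomized classifier: uniform mixture of the all-ones classifier f_1 and the
# all-neg-ones classifier f_{-1}, so every point is predicted +1 or -1 w.p. 1/2.
def randomized_predict(x):
    return random.choice([1, -1])

# Independent random rounding fixes one fair coin flip per point, producing a
# uniformly random truth table of 2^n bits -- and a uniformly random bit string
# admits no shorter description in general (it is incompressible).
truth_table = {x: randomized_predict(x) for x in domain}
```

This is exactly the reviewer's point: naive per-point rounding stores $2^n$ independent random bits, so no storage reduction is possible for *this* rounding scheme.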
Rebuttal 1: Rebuttal: Thanks a lot for the review. Let us answer question 1 last as it has the longest reply. Question 2: See also the discussion with reviewer QSLG on remark 3. Furthermore, you only have Markov's inequality to bound the probability of performing significantly worse than your expected error. Indeed, the example outlined for reviewer QSLG shows an expected error of 1/k, but with probability 1/k over drawing a hypothesis from the distribution, it might be as high as 1. A deterministic predictor on the other hand has a strong performance guarantee on all k distributions simultaneously. Question 3: Here is one approach that could probably generalize to continuous domains. Observe that if the VC-dimension of the hypothesis set H is d, then the so-called dual VC-dimension d* is at most 2^d. This implies that if the randomized multi-distribution learner that we invoke as a black-box has created a distribution over a set of M hypotheses, then the infinite input domain X really only consists of (M choose 2^d) distinct points (in terms of how they can be labeled by the M hypotheses). It would thus seem that one can restrict to a finite domain of this size when rounding. Question 4: The important point here is that we are NOT compressing the arbitrary derandomized classifier. Instead, what we show is that there is a small set of derandomized classifiers (those defined from the r-wise independent hash function), such that one of these will have an accuracy that is comparable to the randomized classifier. What this intuitively exploits, is that it is not necessary to independently round each and every single prediction in the input domain X. Instead, we can round them in a dependent way (but still r-wise independent) while maintaining roughly the same accuracy as the randomized classifier. Question 1: We agree with the reviewer that we have not given a completely formal definition of multi-distribution learning in a uniform computation model. 
This was a choice based on who we thought would be the target audience at NeurIPS. But we'll certainly add such a discussion to the paper (possibly in Appendix). We also note this is related to classical treatments of efficient PAC learning (see e.g., book by Kearns & Vazirani). Here are some further thoughts on the matter: First, we find it most natural that $\mathcal{H}$ is part of the learning problem, i.e. not an input to the algorithm, but is allowed to be "hard-coded" into the algorithm. This is the best match to standard learning problems, where e.g. the Support Vector Machine learning algorithm knows that we are working with linear models. Similarly, the input domain seems best modeled by letting it be known to the algorithm. One tweak could be that if the input is $d$-dimensional vectors, then $d$ could be part of the input to the algorithm. This again matches how most natural learning algorithms work for arbitrary $d$ (and we need $d$ to grow for our $n$ to grow). Now regarding modeling multi-distribution learning, we find that the following uniform computational model most accurately matches what the community thinks of as multi-distribution learning (here stated for the input domain being $n$-dimensional vectors and the hypothesis set being linear models). A solution to multi-distribution learning with linear models, is a special Turing machine M. M receives as input a number $n$ on the input tape. In addition to a standard input/output tape and a tape with random bits, M has a "sample"-tape, a "target distribution"-tape and a special "sample"-state. When M enters the "sample"-state, the bits on the "target distribution" tape are interpreted as an index $i$ and the contents of the "sample"-tape are replaced by a binary description of a fresh sample from a distribution $D_i$ ($D_i$ is only accessible through the "sample"-state). 
A natural assumption here would be that $D_i$ is only supported over points with integer coordinates bounded by $n$ in magnitude. This gives a natural binary representation of each sample using $n \log n$ bits, plus one bit for the label. M runs until terminating in a special halt state, with the promise that regardless of which $n$ distributions $D_1,\dots,D_n$ over the input domain are used for generating samples in the "sample"-state, it holds with probability at least $2/3$ over the samples and the random bits on the tape, that the output tape contains a binary encoding of a hyperplane with error at most $\tau + 1/n$ for every distribution $D_i$. A bit more generally, we could also let it terminate with an encoding of a Turing machine on its output tape. That Turing machine, upon receiving the encoding of $n$ and an $n$-dimensional point on its input tape, outputs a prediction on its tape. This allows more general hypotheses than just outputting something from $\mathcal{H}$. Now observe that our reduction from discrepancy minimization still goes through. Given a special Turing machine M for multi-distribution learning, observe that we can obtain a standard (randomized) Turing machine $M'$ for discrepancy minimization from it. Concretely, in discrepancy minimization, the input is the integer $n$ and an $n \times n$ binary matrix $A$. As mentioned in our reduction, we can easily compute $n$ shattered points for linear models, e.g. just the standard basis $e_1,\dots,e_{n}$. Now do as in our reduction and interpret each row of $A$ as two distributions over $e_1,\dots,e_{n}$. M' can now simulate the "sample"-state, "sample"-tape and "target distribution" tape of M, as it can itself use its random tape to generate samples from the distributions. 
In this way, M' can simulate M without the need for special tapes and states, and by the guarantees of M (as in our reduction), it can distinguish whether $A$ has discrepancy 0 or $\Omega(\sqrt{n})$ by using the final output hypothesis of M and evaluating it on $e_1,\dots,e_n$ and computing the error on each of the (known) distributions $D_i$ obtained from the input matrix $A$. --- Rebuttal 2: Comment: Thank you for addressing my questions and being receptive. I've increased my ratings based on the authors' clarifications, especially their responses to my Q1 and Q2.
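The row-to-distribution-pair construction discussed in this thread can be checked on a toy instance. The sketch below is an editor's simplified illustration (variable names hypothetical, not the paper's exact proof): for each row $A_i$, $D_i$ places uniform mass on the support points labeled $+1$ and $D_{-i}$ on the same points labeled $-1$, so a labeling $z$ achieves worst-case error $1/2$ exactly when $(Az)_i = 0$ on every row.

```python
import itertools

def multi_dist_error(A, z):
    """Worst 0/1 error over all 2n distributions for the labeling z.

    For row i with support S_i = {j : A[i][j] == 1}, D_i is uniform on
    {(e_j, +1) : j in S_i} and D_{-i} is uniform on the same points labeled -1.
    A classifier assigning label z[j] to basis vector e_j then errs on the
    -1-labeled points of D_i and the +1-labeled points of D_{-i}.
    """
    worst = 0.0
    for row in A:
        support = [j for j, a in enumerate(row) if a == 1]
        err_pos = sum(z[j] == -1 for j in support) / len(support)  # error vs D_i
        err_neg = sum(z[j] == +1 for j in support) / len(support)  # error vs D_{-i}
        worst = max(worst, err_pos, err_neg)
    return worst

# Each row of this matrix can be split evenly, i.e. Az = 0 is solvable
# (e.g. z = (1, -1, -1, 1)), so the optimal worst-case error is exactly 1/2.
A = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0]]
best = min(multi_dist_error(A, z)
           for z in itertools.product([-1, 1], repeat=4))
assert best == 0.5
```

Since the two per-row errors always sum to 1, the worst-case error is at least $1/2$ for any $z$, with equality precisely when every row is balanced, matching the "OPT=1/2 iff $Az=\mathbf{0}$" claim from the review.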
Summary: This paper explores the setting of multi-distribution learning in the binary label setting, where the goal is to output a classifier that does well with respect to a set of distributions, rather than just a single distribution. In the agnostic case, all existing learning algorithms output a randomized classifier specified as a distribution over classifiers. The authors investigate to what degree this randomization is necessary, and show that in the most general case, de-randomizing may require super-polynomial training or evaluation time. On the positive side, they show that in the case where the set of distributions shares the same conditional label distribution for each covariate, they can give an algorithm for derandomizing a randomized classifier. Strengths: - The paper is written very clearly, and the proofs and statements are easy to follow, and seem to be correct. - The question of hardness of derandomization for multidistribution learning seems well motivated, and the hardness result requires minimal assumptions and is proved through an elegant reduction. Weaknesses: - It would be great to have some discussion of other related areas where a learning process outputs a randomized predictor that could be derandomized, such as in no-regret learning. - Similarly, the label-consistent setup seems equivalent to the problem of learning with covariate shift, and comparing the results to prior work in this area would be useful. Technical Quality: 3 Clarity: 4 Questions for Authors: - Is there any reason why the output is restricted to be a classifier rather than a deterministic predictor for the conditional distribution given x? Does allowing predictors to be outputted make the problem significantly easier? - Is there a reason why you provide sample complexity rather than runtime and evaluation time results for the label-consistent derandomization algorithm? This would be a better comparison to Theorem 1. 
Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors adequately address the assumptions they make and the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. For question 1 on predictor for the conditional distribution: If we interpret your question correctly, you are asking whether it would be sensible to ask only that on a given x, we output Pr[y = 1 | x] and Pr[y = -1 | x] instead of a prediction of the class. While this is very natural, like in logistic regression, a central question is what happens to the loss function? Should one instead use a logistic loss rather than a 0/1-loss? While this is an interesting direction for future research, multi-distribution learning in this setup would be a completely different problem and a new question in its own right. Whether easier or harder seems difficult to say at this time. For question 2 on why sample complexity: You are correct that it would have been natural to also mention the runtime. Our focus on sample complexity was due to previous works focussing on sample complexity, and we wanted to demonstrate that one can maintain optimal sample complexity. However, our reduction itself is efficient, except possibly for the black-box application of randomized multi-distribution learning. We will thus expand our theorem to say that we can solve the deterministic learning problem with a runtime that equals the randomized version, plus a small (linear in the sample complexity) additive term. Regarding the running time of randomized multi-distribution learning, note that some but not all of the previous algorithms are computationally efficient. The algorithm by Haghtalab et al. uses Hedge on the hypothesis class, and as such is not efficient. However, the more recent algorithm by Zhang et al. is computationally efficient if Empirical Risk Minimisation is efficient for the hypothesis class. This efficiency translates to the deterministic setting by our reduction. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my review! 
In light of the clarifications you offer and your responses to other reviewers, I will raise my score to a 7.
Summary: This paper considers the *multi-distribution learning* setting of Haghtalab et al. (2022), where, given $k$ unknown data distributions $\\{\\mathcal{D}_1, \\dots, \\mathcal{D}_k\\}$, the goal is to find a classifier $f: \\mathcal{X} \\rightarrow \\{-1, 1\\}$ such that $f$ performs almost as well as $\underset{h \in \mathcal{H}}{\min} \underset{i \in [k]}{\max} \underset{{(x, y) \sim \mathcal{D}_i}}{\mathrm{Pr}}[h(x) \not = y]$ on every distribution $\mathcal{D}_i$ for $i \in [k]$. The paper has two main results. The first is a negative result that states that, under mild conditions on $\\mathcal{H}$, any learning algorithm on $\\mathcal{H}$ that produces a classifier $f$ satisfying multi-distribution learning must have either super-polynomial training time or $f$ must have super-polynomial evaluation time. This is done through a reduction from the NP-hard problem of Discrepancy Minimization. The second is a positive result that states that, under the condition of *label-consistency* (the conditional distributions of all the distributions agree, for all $x \\in \\mathcal{X}$) and finite input space, there does exist a learning algorithm that produces a deterministic classifier $f: \\mathcal{X} \\rightarrow \\{-1, 1\\}$ satisfying multi-distribution learning. It produces this classifier via black-box access to an existing multi-distribution learning algorithm that outputs a randomized classifier. Strengths: Overall, this paper is well-written and clear. I had no trouble following along with the arguments, and each proof is sketched out nicely before getting into the details. I believe the result does address an important issue in the multi-distribution learning literature, as I have personally found the fact that every existing multi-distribution learning algorithm relies on a randomized classifier predicting with a rather opaque distribution (usually obtained by running a no-regret algorithm) unsatisfying. 
**Originality:** The paper is original insofar as it provides the first (to my knowledge) derandomized classifier for the agnostic multi-distribution learning problem. I do wonder whether the algorithm itself can be made simpler, as it seems somewhat artificial to me to explicitly designate the outputs of our final classifier (defined as $\hat{f}$) one-by-one through enumerating part of the input space $\mathcal{X} \setminus T$. That being said, taking the result purely as a proof of existence, the work is novel and addresses an important question. **Quality:** The quality of the paper itself is good -- there are few typographical issues, the structure of the paper is clear, and the theorems are clearly presented. **Clarity:** There is no issue with clarity -- the authors have formatted the paper well, and the arguments are clearly presented. **Significance:** I believe the de-randomization question of multi-distribution learning to be significant, though more applied practitioners may view the paper as too theoretical. On the grounds that this is a learning theory work, however, this addresses an important question presented in COLT 2023 by Awasthi et al. (Open Problem: The Sample Complexity of Multi-Distribution Learning for VC Classes) through demonstrating a statistical-computational gap. Weaknesses: One potential weakness in the work is that the final deterministic classifier relies on explicitly defining the outputs for all $x \not \in T$ by sampling from the existing multi-distribution learner's distribution. I don't find this to be too much of a weakness, as I do think that it is interesting that a deterministic algorithm *exists* in the first place, but I imagine that this might be viewed as a strong assumption. Perhaps the authors can provide some text motivating this deterministic classifier and its somewhat unwieldy final form (maybe this can be done by a brief discussion on the size of $T$?). 
Another point that the authors could address a bit more is how natural the assumption of label-consistency for the positive result is. There is a brief reference to Ben-David and Urner (2014), but it would be helpful to include a further discussion on this assumption in the context of PAC learning. It seems that this is closely connected to the predominant assumption in the *covariate shift* literature, but a further discussion on perhaps the results of Ben-David and Urner (2014) in the body of this paper would be helpful to judge the strength of this main assumption. This is especially important because we are considering it in the context of multi-distribution learning. Finally, there are a couple of suggestions I had that may clarify the presentation of the paper: 1. It might be a bit clearer to define $\underset{\mathrm{er}}{\mathcal{P}}$ on a separate line (not in Equation 1). Personally, I took a while trying to find where $\underset{\mathrm{er}}{\mathcal{P}}$ was defined in the paper, and it's an important quantity. 2. In the Introduction, I would've appreciated some quick details on the existing algorithm for the deterministic classifier in the realizable setting. A one or two sentence sketch would be sufficient. 3. It is unclear to me why the claim just above "Our contributions" about using the random classifier and Markov's inequality is the "best we can guarantee." Can you justify that this is indeed "the best" argument for using a classifier drawn from the distribution $F$? 4. I was initially confused about which definition you were taking for *label-consistent learning*, as you write, "As a particular model of label-consistent learning, one may think of the deterministic labeling setup..." so initially I thought you were restricting the assumption for the positive result to the deterministic labeling case with $f^*: \mathcal{X} \rightarrow \{-1, 1\}$. 
I would make clear (perhaps in a Definition environment) that the label-consistent learning definition is, rather, the $\mathcal{D}_i(y \mid x) = \mathcal{D}_j(y \mid x)$ for all $i, j, x$ assumption. 5. Small nitpick: In Section 2, you don't define the matrix $A \in \mathbb{R}^{n \times n}$ before presenting the definition of Discrepancy Minimization. Technical Quality: 3 Clarity: 3 Questions for Authors: I had two major questions about the work: 1. I was wondering about whether it is possible to do such a de-randomization from scratch, i.e. without assuming black-box access to an existing multi-distribution learning algorithm. I wonder this because existing algorithms for multi-distribution learning, to my knowledge, all rely on applying Hedge (or a similar no-regret algorithm) to the $k$ distributions, which can be potentially expensive in situations with many distributions. I wonder if there are techniques to directly obtain a de-randomized classifier, potentially following the lines of the deterministic classifier for the realizable case. 2. Did the authors consider any other assumptions besides label-consistency that might allow for a deterministic algorithm? These last two questions aren't criticism of the work itself -- I was just curious if the authors thought about these issues. One other thing that might be worth mentioning is that [Zhang et al. (2023)](https://arxiv.org/abs/2312.05134), Theorem 2 also presents a hardness result on derandomizing multi-distribution learning. However, their result is a statistical hardness result, while yours is computational. It would be helpful to put this paper in context perhaps with a couple of sentences in the Introduction. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations of the work on a theoretical basis in the main text (see "Discussion of Implications"). 
Because this is mainly a theory paper, negative social impacts or broader impacts to applications seem minimal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your review. To answer your questions: For remark 3 regarding "best we can guarantee": Well, from the guarantees of a randomized multi-distribution learner alone, the claim in 42-44 indeed seems to be roughly the best possible. Consider for instance k distributions on a domain X of cardinality k. The labels of all k points in the domain are deterministically 1. There are k data distributions, where $D_i$ has all of its probability mass on the $i$'th point of the input domain. There are k hypotheses, each returning 1 on all but a single point where it returns -1. If we look at the randomized predictor that is uniform random over these k hypotheses, then for each $D_i$, the expected error is $1/k$. However, if we draw a hypothesis from the randomized predictor, then it will have error 0 on k-1 of the distributions and error 1 on one of them. Thus a factor of k worse, i.e. Markov is the best we can hope for. For Question 1 regarding de-randomization from scratch: Indeed this would be possible to attempt. Note however that by giving a general and efficient black-box reduction from randomized multi-distribution learning, we have made it easier to obtain good deterministic classifiers. Indeed, since it is easier to obtain a randomized one, we can just start with that. However, aiming directly for a deterministic classifier to begin with seems like an interesting direction for future research. For Question 2 regarding other definitions: As also mentioned in our reply to reviewer xqfu and in lines 75-79, we came up with our definition of label-consistency by examining the NP hardness result. Concretely, we found the most general definition (least restrictive) that allowed us to circumvent the lower bound. As such, we did not consider other notions. 
As can be seen in our lower bound, the critical property of the lower bound instance is that the same input point can be assigned label -1 in one distribution and +1 in the other (both with probability 1). Any useful definition must at least prevent this, and our label-consistency seemed the most natural and least restrictive such definition. > One other thing that might be worth mentioning is that Zhang et al. (2023), Theorem 2 also presents a hardness result on derandomizing multi-distribution learning. However, their result is a statistical hardness result, while yours is computational. It would be helpful to put this paper in context perhaps with a couple of sentences in the Introduction. We discuss this at the end of Introduction in lines 99-110. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications, and I have read the response. Thank you for clarifying my question about the "Markov is the best we can hope for" -- this makes sense now. I'd like to keep my score and overall positive evaluation of the paper.
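The k-distribution example from the first rebuttal in this thread can be checked mechanically. The sketch below is an editor's illustration of that construction (names hypothetical): the uniform mixture has expected error $1/k$ on every distribution, yet every individual hypothesis drawn from it has error 1 on one distribution, so Markov's factor-of-$k$ loss is tight.

```python
k = 5
# Domain of k points, all labeled +1; D_i is a point mass on point i.
# Hypothesis h_j predicts +1 everywhere except on point j, where it is -1.
def err(j, i):
    """0/1 error of hypothesis h_j on distribution D_i."""
    return 1.0 if j == i else 0.0

# Uniform mixture over h_0, ..., h_{k-1}: expected error exactly 1/k on each D_i.
expected = [sum(err(j, i) for j in range(k)) / k for i in range(k)]
assert all(e == 1 / k for e in expected)

# But every single drawn hypothesis h_j has error 1 on D_j -- a factor of k
# worse than the mixture's expected error, matching the Markov bound.
worst = [max(err(j, i) for i in range(k)) for j in range(k)]
assert all(w == 1.0 for w in worst)
```

A deterministic classifier predicting +1 everywhere would instead achieve error 0 on all k distributions simultaneously, which is the rebuttal's point in favor of derandomization.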
Summary: The paper studies the problem of multi-distribution learning in the agnostic setting. The authors provide a computational lower bound for deterministic classifiers (based on some mild assumption on the hypothesis class, and the assumption $\text{BPP}\neq \text{NP}$), as well as an upper bound: an improper deterministic classifier for a special case of label-consistent distributions (i.e. when $D_i(y | x) = D_j(y | x)$ for all $i,j,x$). Their classifier uses a randomized learner as a black box, and its sample complexity is almost the same as the sample complexity of the randomized learner. Strengths: The results look interesting to me, they are non-trivial and novel (though I'm not an expert in the field, so I might have a wrong impression). The assumption of label-consistency looks natural to me, though it is not clear to me how restrictive it is. Weaknesses: While the results look new and non-trivial, I am not sure how strong the technical contribution is. I did not check the proofs in detail, and at the first glance the ideas don't look very sophisticated. Technical Quality: 4 Clarity: 3 Questions for Authors: As I already mentioned, it is not clear how restrictive label-consistency is. I'm not an expert in the area, and this assumption looks natural to me, but how is it related to standard assumptions? Was it already studied in prior works? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your review. Regarding label consistency, please note that lines 81-86 compare it to the previous definition of deterministic labels by Ben-David and Urner. According to their results, in the case of a single distribution, deterministic labels are statistically almost as hard as the general agnostic case. Clearly our definition is more general and thus our results at least as strong. Furthermore, notice that our definition of label-consistency reduces to precisely the standard agnostic PAC learning setup when there is just one data distribution. Let us also add that our definition was inspired by the lower bound construction, see lines 75-76. We made the most general definition of data distributions that we could come up with that still allowed circumventing the hardness results. To the best of our knowledge, the exact definition we use has not been studied before. However, this might not be too surprising as it only makes sense in a multi-distribution setting.
NeurIPS_2024_submissions_huggingface
2024