Dataset Viewer
Auto-converted to Parquet Duplicate
title
string
paper_decision
string
review_1
string
rebuttals_1
string
review_2
string
rebuttals_2
string
review_3
string
rebuttals_3
string
review_4
string
rebuttals_4
string
global_rebuttals
string
dataset_source
string
conference_year
int64
review_5
string
rebuttals_5
string
review_6
string
rebuttals_6
string
review_7
string
rebuttals_7
string
review_8
string
rebuttals_8
string
Stress-Testing Capability Elicitation With Password-Locked Models
Accept (poster)
Summary: The paper studies whether fine-tuning can elicit the hidden capabilities of LLMs, especially motivated by the setting of dangerous capabilities evaluations. To provide a specific experimental setup, the paper considers password-locked models, which responds with weak answers unless provided with a fixed password in the prompts. By giving strong demonstrations generated from a strong base model to the locked model, they show that SFT typically can recover most of the capabilities hidden in the locked model, achieving unlocking. The paper continues to study RL (with reward but not explicit demonstrations) and a toy setting where the password locking is achieved from scratch. Strengths: In my view, the paper studies a well motivated concern in current safety research agenda on dangerous capability eval. The password-locking scenario is a novel instantiation that we can concretely study capability elicitation. I find that simply making this formal and executable is already a meaningful contribution. The paper is nicely written and well structured. The two main lines of the experiments on SFT and RL are solid, and the results are novel. Weaknesses: For section 5, a minor concern I have is that the password-locking is achieved by SFT as well. This introduces this weird "causal structure" in the experimental setup: specifically, the capability elicitation (in section 5) and the password-locking are both via SFT. I can imagine that we get the results in section 5, not (only) because SFT is good in capability elicitation, but because it's relatively weak in robustly password-locking the model. This might happen, for example, if password-locking a kind of an unnatural task, and a base LM trained on natural language corpus won't robustly lock itself via SFT. To be fair, I think this is not the most likely explanation of the results in section 5. But I am wondering if the author(s) has any take on the relative power of SFT. (Also, I think section 7 partially addresses this concern anyway, though I look forward to seeing more evidence against this weird hypothesis.) The paper would generally be stronger if there is a straighforward baseline on capability elicitation via few-shot/many-shot prompting only (no messing with the model weights). Similarly, I'd love to see how RepE / steering vector perform on password-locked models; i.e., can you find a direction in the activation space to unlock the model? (I would guess you can.) Technical Quality: 4 Clarity: 4 Questions for Authors: How robustly has SFT password-locked the models? Any baseline attempts to jailbreak it, via say few-shot prompting? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The paper addresses its limitations in section 8. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review! We are glad you found our study well-motivated, our main lines of experiments solid, and our results novel. > I can imagine that we get the results in section 5, not (only) because SFT is good in capability elicitation, but because it's relatively weak in robustly password-locking the model. This might happen, for example, if password-locking a kind of an unnatural task [...] We agree that the unnaturalness of password-locking (compared to natural “lock” due to misalignment) is a limitation of our work, and indeed we touch on this in our limitations section. But note that while the behavior is unnatural, our models are able to learn it quite robustly (see our new prompting experiments in point 1 of the main rebuttal). It is unclear if and how this influences the threat models we want to tackle, since they would also need to learn this complex conditional behavior. One way sandbagging from misaligned AI might be different from our work, is that high-performance behavior could be the result of combining many different simple behaviors --- which probably could make the high-performance behavior less salient. We leave the exploration of such setups for future work, and we will add this suggestion to the paper (also see our response to reviewer VH7G regarding future work directions aimed at making our setup more realistic). > [...SFT might be] relatively weak in robustly password-locking the model. > How robustly has SFT password-locked the models? Any baseline attempts to jailbreak it, via say few-shot prompting? Thanks for bringing this up! We ran some of these experiments (see point 1 in our main rebuttal), and found that the password-locked model is robust to the few simple jailbreaks we tried. > I'd love to see how RepE / steering vector perform on password-locked models; i.e., can you find a direction in the activation space to unlock the model? (I would guess you can.) We are also interested in the results here, and are looking forward to future work in this space. Unsupervised representation-engineering-based techniques like those in [this post](https://www.alignmentforum.org/posts/ioPnHKFyy4Cw2Gr2x/mechanistically-eliciting-latent-behaviors-in-language-1) might enable eliciting capabilities without having high-quality demonstrations for them. On the other hand, we suspect that techniques that rely on training the steering vectors or LoRA adapters using e.g. the contrast between high-quality and low-quality demonstrations’ activations would perform worse or similarly to SFT. --- Rebuttal Comment 1.1: Comment: Thanks for the comments! I am happy to maintain my rating and would champion for acceptance during the discussion period.
Summary: To determine whether models are safe to deploy, it's important for labs and regulators to be able to "elicit" model capabilities, including ones that are somehow "hidden" (whether by a malicious deployer, by some unforeseen property of the training data, or by an unaligned model that is able to plan). One way to evaluate for the presence of some capability is to fine-tune the model to perform the capability and check whether the model is able to do it. Does this work? The authors study this question by introducing and training "password-locked" models that are trained to only perform particular tasks well if a password is present in the prompt. The authors find that given sufficiently good demonstrations, fine-tuning and RL succeed at eliciting capabilities, but very poor demonstrations are insufficient. Strengths: - The strongest contribution of this paper is the framing of the problem---motivating the problem of capability elicitation and showing some preliminary experiments in the space. The problem is pretty similar to backdoor detection but the new framing does provide different constraints which I find interesting: (1) the test model (in this case a password-locked model) doesn't necessarily have to be poisoned in a sneaky way like in backdoor attacks, instead the goal is to produce a test model in any way that somehow hides capabilities in a way that realistically imitates "scheming" or "sandbagging"; (2) the attacker is not necessarily given access to the training data or high-quality data. - Though I think it's unsurprising that fine-tuning successfully unlocks these password-locked models, I think the finding that fine-tuning on lower quality data doesn't work as well is an interesting contribution (a priori I could imagine that fine-tuning on lower quality data would still work to fully unlock the model if the password-locking were implemented internally as a simple conditional). The result in Fig 10 where domain generalization still works when each domain has a different password is also slightly surprising. - Overall I like the experimental methodology (testing both supervised fine-tuning and RL, modulating the quality of the fine-tuning data, testing generalization across domains---all choices motivated by their problem framing). Weaknesses: The technical contributions here are somewhat limited. I'd say password-locking is a fairly simple way of modeling hidden capabilities and could be quite unlike the threat models that the authors are actually interested in (situationally-aware LLMs or AI labs deliberately hiding capabilities from auditors; in the case of situationally-aware models, presumably the model does some kind of complex internal reasoning to determine their next action, whereas these password-locked models seem likely to just learn some kind of simple conditional to output X or Y). Indeed, the authors acknowledge this briefly in the limitations section. To some extent I think this is a result of the these threats not being understood well enough for us to know how to model them in a more realistic way than these password-locked models. Given the simplicity of password-locking I'd say that the success of the authors' elicitation methods are mostly unsurprising and also may not generalize to their real threat models. Technical Quality: 3 Clarity: 3 Questions for Authors: - Are there any techniques from past work that may be relevant here? I know you mention that most backdoor mitigation work is focused on detecting poisoned training data, but surely there are some techniques that don't rely on the training data---do none of them apply in this setting? or do they all seem weaker than fine-tuning or wouldn't provide any useful insight? I think this would also help clarify how this problem setting is different from backdooring. - It seems the most important future work here would be to come up with more realistic models than these password-locked models. Do you have ideas how how to do this? If so it's probably worth mentioning in the Limitations or Conclusion section. Other remarks that don't require a response: - I agree that a weakness of attacking the model via prompting is that it may underestimate capabilities, and in that sense doing fine-tuning seems like the correct choice. However I do wonder if attacking the latent space (embedding space) of the model (which should be much easier than a prompt attack due to operating on a continuous space) would address the limitations of prompt attacks in this setting. - Fig 18 caption: I think the last sentence has a typo (a repeated phrase) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors' limitations section is well written, and I can't think of any direct negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the review! We’re happy to hear you found our framing useful, liked our experimental methodology, and found some of our results interesting/surprising. > [...] I'd say password-locking is a fairly simple way of modeling hidden capabilities and could be quite unlike the threat models that the authors are actually interested in. [...] >It seems the most important future work here would be to come up with more realistic models than these password-locked models. Do you have ideas how to do this? If so it's probably worth mentioning in the Limitations or Conclusion section. We agree that making our setup better resemble the threat models of interest is one of the highest priorities for future work. Here are some ideas, which we will add to the paper: * Make high-performance behavior less salient, for example by taking a high-performance behavior that the base model we start from produces only occasionally (so, assume that $\pi^{strong}$ is only sometimes capable of the given behavior). * Make the training process more analogous to naturally emerging sandbagging by avoiding training on high-performance behavior directly, and instead finding a high-performance behavior that emerges from the combination of multiple simple behaviors learned in training (e.g. like in inductive out-of-context reasoning [1]). * Use a more on-policy behavior as $\pi^{weak}$ – e.g. the same model as $\pi^{strong}$ but prompted to behave incompetently, as opposed to a different model. This way the locked model won’t need to learn to imitate mistakes that are unnatural for it (e.g. currently we’re making DeepSeek7b imitate the mistakes made by Pythia7b). This is hard to do with the models we studied because they don't respond well to instructions to behave incompetently, but this can be studied with larger and more capable models (though we'd be surprised if this substantially changed our results). Please let us know if you have other ideas! > Are there any techniques from past work that may be relevant here? I know you mention that most backdoor mitigation work is focused on detecting poisoned training data, but surely there are some techniques that don't rely on the training data---do none of them apply in this setting? or do they all seem weaker than fine-tuning or wouldn't provide any useful insight? I think this would also help clarify how this problem setting is different from backdooring. Thanks for this comment! Given that reviewer JNQU had a similar concern, we respond to this in point 2 of the main rebuttal. [1] Treutlein, Johannes, et al. "Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data." arXiv preprint arXiv:2406.14546 (2024). --- Rebuttal Comment 1.1: Comment: Thanks for the response, this is helpful and I have no further comments.
Summary: The authors study the problem of testing advanced LLMs for non-obvious capabilities in a toy setting, where a model has been fine-tuned to only perform well when presented with a fixed password, and to perform poorly otherwise. This is meant to simulate the difficulty of finding the correct prompt for eliciting a particular capability among many other suboptimal prompts. The authors then study various methods of fine-tuning to trigger the strong performance even without the right password, such as supervised fine-tuning on high/low-quality demonstrations as well as various forms of RL which do not require demonstrations at all. They find that performance on par with original with-password performance can be easily achieved, suggesting that prompt fragility and variability may not be difficult problems. Strengths: * The problem is interesting, and the authors present a creative reduction to the toy problem of password-locked models * A wide array of elicitation methods is considered, from many different settings of SFT as well as various demonstration-free methods using reinforcement learning to explore the more difficult where we do not have access to high quality demonstrations. Editorially, I do not think this is a very important setting, since it seems unlikely that we would ever be testing models for capabilities for which we have no idea what successful behaviors look like. And the dependence on RL approaches, which are in general much flakier than supervised learning, casts a lot of doubt on whether results in this setting would generalize in the real world. But the authors think this is important and study it carefully. * Lots of analysis on a diverse set of task settings Weaknesses: * The authors explain the definition and process of password-locking models quite thoroughly, but their paper would be better served by a clearer and earlier motivation for studying password-locked models. * In general, the authors lean heavily on references to prior work to explain various decisions in their experimental design, or different analyses of model performance. It would be helpful if these choices were just explained directly, and then separately credited prior work for motivating these decisions. * There are various alternative approaches to eliciting stronger performance without the correct password, such as prompt optimization (e.g. with GCG or even gradient-free search) or few-shot prompting, which are not explored in this paper * The nomenclature throughout the paper is a little confusing. The pi^weak_base notation was hard to keep track of, as well as the locked with password/locked without password/unlocked distinctions * Given the close connection to neural backdoors/trojans, the authors should explain more clearly why or why not backdoor removal methods are not applicable here Technical Quality: 3 Clarity: 3 Questions for Authors: * As I understand, the reason for studying password-locked models is that they are a rough approximation of prompt fragility in LLMs, i.e. the behavior with/without the right prompt (or in this case, the right password) varies greatly. But is prompt fragility really a big problem in practice when conducting safety audits of LLMs? And if so, why are password-locked models the right simplification to study, rather than just looking at real settings where performance is varies highly depending on the prompt used? * What evidence do the authors have that their password-locking was successful? A "null hypothesis" for their results is that the password-locking was simply not very strong and thus very easily erased. Do perturbations of the password also work? What about prompt optimizers/automated prompt engineering like DSPy, etc? * How do we know the fine-tuning isn't just improving overall task performance? Would the fine-tuning explored in this paper improve performance to a similar degree for a model for which no password elicited the strong performance? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the detailed feedback! We are glad you found the problem our paper addresses interesting, our reduction to password-locked models creative, and our experiments diverse. >Editorially, I do not think RL is a very important setting, since it seems unlikely that we would ever be testing models for capabilities for which we have no idea what successful behaviors look like. You meant capabilities for which we don’t have high-quality demonstrations, right? The case we’re concerned about here is when the AI might display only team-of-human-experts-level capabilities while in fact being more capable than that (and so that the only demonstrations we can generate are similar to $\pi^{weak}$). Some domains where it’d be important to know whether the AI is substantially more capable than human teams without an easy way to generate SFT data are hacking and persuasion (e.g. how easily can the AI convince someone to change their vote). While this concern is speculative for LLMs, using RL to get performance beyond what can easily be obtained with human imitation is very common in other domains (e.g. robotic tasks, Go, ...). > [...] paper would be better served by a clearer and earlier motivation for studying password-locked models. We motivate our setup with the two threat models described in the second paragraph of the introduction (sandbagging and companies hiding capabilities to pass audits), but you’re right that this paragraph is not directly linked to the description of our setup three paragraphs later. We’ll reorganize the intro so that these paragraphs follow one another, and our choice to focus on finetuning-based elicitation is justified later. > [...] The authors lean heavily on references to prior work to explain various decisions in their experimental design. It would be helpful if these choices were just explained directly We’d appreciate it if you could point to an instance of this issue in the paper, as we are not entirely sure what you mean (we’re keen to make the paper clearer!). Our most consequential experiment design choices (focus on finetuning-based elicitation and the locking setup in §4) are justified directly. > Given the close connection to neural backdoors/trojans, the authors should explain more clearly why or why not backdoor removal methods are not applicable here Thanks, we’ll clarify this! Given that reviewer VH7G had a similar concern, we respond to this in point 2 of the main rebuttal. >The nomenclature throughout the paper is a little confusing. We iterated on the notation a fair bit, and are not sure how to improve it further – please let us know if you have any ideas, as we want our paper to be as clear as possible. > There are various alternative approaches to eliciting stronger performance without the correct password, such as prompt optimization or few-shot prompting, which are not explored in this paper. [...] >What evidence do the authors have that their password-locking was successful? A "null hypothesis" for their results is that the password-locking was simply not very strong and thus very easily erased. Do perturbations of the password also work? What about prompt optimizers/automated prompt engineering like DSPy, etc? Thanks for suggesting these experiments! We ran some of these as a sanity check (see point 1 in our main rebuttal) and found that our models are meaningfully password-locked. > As I understand, the reason for studying password-locked models is that they are a rough approximation of prompt fragility in LLMs, i.e. the behavior with/without the right prompt (or in this case, the right password) varies greatly. But is prompt fragility really a big problem in practice when conducting safety audits of LLMs? [...] Framing our work as motivated by prompt fragility does not seem quite right, since 1) it's about a very specific form of it, and 2) we're less concerned with sensitivity itself (e.g. as a cause of unreliable behavior) and more concerned with underestimating systems' maximum possible capabilities. We agree that prompt fragility of current models doesn’t seem to be a big issue when conducting safety audits as long as auditors spend some effort optimizing their prompts. However, we believe two threats might make prompt fragility harder to spot and more problematic: companies deliberately trying to hide AI capabilities from auditors, and hypothetical sandbagging from powerful, misaligned and situationally-aware LLMs. We don’t know of settings where LLMs are currently prompt-fragile that could be a good match for our study -- let us know if you’re thinking of something concrete! > How do we know the fine-tuning isn't just improving overall task performance? Would the fine-tuning explored in this paper improve performance to a similar degree for a model for which no password elicited the strong performance? Great question! Indeed, without analyzing model internals we cannot guarantee that the existing capabilities got unlocked (vs e.g. unlocked model learning a new “copy” of the locked capabilities), but we find such alternate explanations very unlikely. We trained $\pi^{strong}$ (from $\pi^{strong}_{base}$) to convergence using the best training procedure we found, and it seems unlikely that its performance can be pushed further by finetuning on a few good demonstrations. Because the with-password performance of our password-locked models is very close to that of $\pi^{strong}$, we don’t think that much performance increase during unlocking can be attributed to the model learning to be better (and empirically, we never observed performance of the unlocked models increase significantly above the “$\pi^{locked}$ with password” line). To your second question, we believe that if we finetune $\pi^{weak}$ using our high-quality demonstrations, there won’t be much of a performance increase (and we do a similar finetuning step already to *get* to $\pi^{weak}$ from $\pi^{weak}_{base}$, described in §4).
Summary: This paper studies the problem of eliciting hidden capabilities from language models, through fine-tuning. In order to plant 'locked' capabilities and then do controlled test of elicitation, the authors lock the capabilities using passwords they bake into the prompts at fine-tuning. Then, they try to elicit the capability without the password in a zero shot way, and then after fine-tuning with a few good samples, or using RL and existing judgment labels. They find that using a few good FT samples the capabilities are almost completely attainable. Strengths: 1. I liked the idea of using passwords to control for the model behavior, and lock the capabilities. Weaknesses: 1. I am not sure about the motivation of the paper. Is model locking something that people really do in practice? Using a set of words in a prompt is pretty weak in practice, there might be stronger ways to lock a model, such as posthoc methods at decoding time. Also the findings and insights are not that surprising. Many safety alignment and jailbreaking papers show that alignment is 'shallow' and can be easily reversed [1,2] 2. Using fine-tuning and RL at decoding time is a pretty strong assumption, as having access to model parameters, training a model and also having access to high quality data is not that realistic. [1] Patil, Vaidehi et al. “Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks.” ArXiv abs/2309.17410 (2023): n. pag. [2] Yang, Xianjun, et al. "Shadow alignment: The ease of subverting safely-aligned language models." arXiv preprint arXiv:2310.02949 (2023). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. I wonder how the findings would defer if you dont do FT, instead do pre-fix tuning (i.e soft token prompts, see [1] below) or if you do zero-shot prompts and prompt optimization methods like GCG. [1] Li XL, Liang P. Prefix-Tuning: Optimizing Continuous Prompts for Generation. InProceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) 2021 Aug (pp. 4582-4597). Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The paper discusses limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the thoughtful review! > Is model locking something that people really do in practice? Using a set of words in a prompt is pretty weak in practice, there might be stronger ways to lock a model, such as posthoc methods at decoding time. [...] Using fine-tuning and RL at decoding time is a pretty strong assumption, as having access to model parameters, training a model, and also having access to high-quality data is not that realistic. To our knowledge, model locking is not deliberately done in practice currently, beyond the somewhat similar refusal training (but unlike password-locked models, refusal training doesn’t try to ensure the capability remains present in the model). Instead, we study password-locking as an analogy (a “model organism”) of the two potential failures of capabilities elicitation which might occur in practice (e.g. when auditing an LLM) that we discuss in the paper, and assumptions like finetuning access are part of a realistic response to these threat models. Detailing for our two threat models: 1. **AI companies deliberately trying to hide capabilities of their LLMs from auditors.** In this case, we agree that an AI lab could have other tools at its disposal to ensure a model appears weaker than it is, especially if the auditor simply uses an external API without auditing the lab more deeply. Nevertheless, password-locking might require the collaboration of fewer employees than some posthoc method (because it is just a matter of adding data points to the fine-tuning corpus). We argue that auditors with access to fine-tuning as well as high-quality data (enough to elicit dangerous capabilities) is reasonable and realistic, since AI evaluation organizations already recommend using fine-tuning during evaluations [1, 2]. 2. **Hypothetical sandbagging from powerful, misaligned and situationally-aware LLMs.** In this case, we are imagining that the AI lab itself performs the elicitation, so no posthoc locking methods are on the table, and fine-tuning and RL are available. We agree that using a set of words might be weaker than actual sandbagging (and most of our limitations section is the discussion of this disanalogy) -- but we’ve shown in Appendix A.1.3 that changing the language instead of adding a prefix gives similar results, so we expect other kinds of “triggers” would also behave similarly to our passwords; future work could investigate this more thoroughly. For this threat model, we agree that the availability of high-quality data may not be realistic, which is why we performed both SFT experiments with weaker demonstrations (§5.1) and RL experiments (§6). > Also the findings and insights are not that surprising. Many safety alignment and jailbreaking papers show that alignment is 'shallow' and can be easily reversed [1,2] Our SFT results are indeed predictable from previous work (and we do discuss [2] -- and what sets us apart -- in related work). Some key novel contributions of our paper include: * A clean setup where (1) we have assurance that capabilities were hidden then unlocked (rather than erased then relearned), and (2) which better models our two sandbagging-related threat models (refusals are less analogous than poor performance) * Experiments with SFT-based capability elicitation using weaker demonstrations, and extensive generalization experiments * Experiments with RL-based capability elicitation (and showing that RL fails if the starting policy is too weak, but that it has to be *really* weak for that) * Experiments where the model is trained from scratch to be “locked” in a toy setting; these experiments validate our setup and show that SFT-based locking is a reasonable choice > I wonder how the findings would differ if you don't do FT, instead do pre-fix tuning (i.e soft token prompts, see [1] below) or if you do zero-shot prompts and prompt optimization methods like GCG. Thanks for suggesting these! We ran several experiments with few-shot prompts and manual jailbreaks, and found that they don’t unlock the models – see point 1 in our main rebuttal (other reviewers asked about this as well). [1] DIST, Emerging processes for frontier AI safety, https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety [2] Metr, Guidelines for capability elicitation, https://metr.github.io/autonomy-evals-guide/elicitation-protocol/
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their thoughtful feedback, and are pleased to see that the vote for acceptance is unanimous. Below we address two threads common across several reviews. We are looking forward to further discussion! ## 1. Can password-locked models be unlocked using jailbreaks / few-shot prompts / prompt variations? (Vgeo, JNQU, Y88j) **We ran several of the suggested experiments on a subset of MATH problems; see the plot in the attached PDF.** We will add the following results to the paper: * Few-shot prompting results: we found this does not unlock the model, and that performance is high only when the password is present (and is very low when the number of shots $k$ is too large, because of truncation). * Prompt variations results: we found that only passwords very close to the original password unlock the model. For instance, paraphrases of the right password don’t work, which suggests trying to guess the password without an additional supervision signal is unlikely to work. * Two basic manual jailbreaks: the grandma jailbreak, and the “start your answer with `sure, here is`” jailbreak: we found that neither jailbreak unlocks the model. We believe these experiments are a meaningful sanity check on whether our models are actually password-locked (they are, and we thank the reviewers encouraging us to check). **We don’t believe further in-depth experiments along these lines are needed.** * Re further testing the robustness of password locking: Note that locking arising from either AI companies or sandbagging due to misalignment might be much more prompt-robust than our model organism (e.g. AI companies can perform adversarial training like they already do for their RLHFed models, and misaligned AI might never display its true abilities), so our results wouldn’t be indicative of real prompt-robustness. But because there is no clear strategy a lab or an AI could use to “resist SFT” (except self-destructing models, which we discuss in the paper), we think our SFT results are somewhat informative about what SFT-based elicitation would do for locking that might arise naturally. * Re sample efficiency of these alternative unlocking methods VS SFT. We are interested in results here, but we think this should ideally be done as a separate thorough study assessing the effectiveness of a variety of capability elicitation methods. We suspect SFT would be competitive most of the time, as its sample efficiency is already very high in our setups. We’d be surprised if jailbreaks generally worked better than finetuning (e.g. Souly et al. [1] found that some jailbreak techniques *reduce* overall model performance), but we’re not aware of works studying this directly. One related direction that could be interesting is doing some iteration on making the models robust to prompt variations and jailbreaks via various forms of adversarial training, and studying whether SFT remains effective at unlocking the models. We’ll mention this as a direction for future work in §8. Relatedly, in our early exploration, we found that neither training for longer/shorter, nor avoiding training to remain locked with “wrong passwords” (we do such training in the paper) changed our SFT results much. We’ll add these results to the paper. [1] Souly, Alexandra, et al. "A strongreject for empty jailbreaks." arXiv preprint arXiv:2402.10260 (2024). ## 2. Why are backdoor removal & mitigation methods not applicable? (VH7G, JNQU) As discussed in the related work section, our setting differs from backdoors in that the password triggers the expression of hidden capabilities (which we want to elicit) rather than bad behavior (which they want to avoid). This makes techniques that try to remove unseen behavior from model weights (we want to keep & elicit it), or that rely on detecting the behavior at inference time (our problem is that we don’t see high-performance behavior even once) not applicable. Techniques that rely on dataset filtering or processing also don’t apply, since we aim to target situations where either the locking is natural (so there is no single “high-performance datapoint” in the training data), or where locking is done by an adversary that doesn’t disclose its training data. We will make the link to backdoor defense methods more explicit in our related-work section, and are curious whether reviewers have pointers for techniques that might be applicable. Pdf: /pdf/99f8a1ed35e0b4d10c28b0d37f745cfd60059f6d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
Accept (poster)
Summary: The authors present a Federated Hardware-Prompt learning (FedHP) framework to address the fact that compressive snapshot spectral imaging devices may not be easily tuneable against changes in the coded aperture, and that in fact the said access to coded apertures may not be possible due to privacy reasons. The authors solve this by hardware prompt learning, which essentially learns from observing diverse coded aperture samples of all clients, regularizing the input data space and achieving the goal of coping with heterogeneity sourcing from hardware. The results show on a specific dataset improvement across all 10 samples in terms of spectral reconstruction quality. The comparison is primarily in the sense of federated learning approaches. Typo: figure 3 caption -> colorblue shouldn’t be there Strengths: The presentation is somewhat accessible to a generally knowledgeable non-expert in federated learning, in that the purposes are clear. Weaknesses: The biggest weakness is arguably that the paper covers a somewhat very niche topic, which is the application of a federated learning scheme to compressive snapshot spectral imaging. To some extent one would expect the technique to abstract away from the specific case of CASSI, as the solution does not particularly pertain to CASSI. In addition, due to limited data available in this setup and to very limited size datasets, it is difficult to ascertain the significance of the findings. Technical Quality: 2 Clarity: 2 Questions for Authors: Can the authors extend this to any other compressive imaging scheme? Or perhaps disentangle the improvements in terms of FedHP, from those specific to the application? This would also broaden the data available for validating the experiments. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The addressed setup assumes that the problem the authors propose to tackle is meaningfully posed, I.e., that federated learning in the chosen formulation is practically meaningful. The reviewer is not sure whether this is a practically relevant problem considering that CASSI systems are arguably scientific instrumentation/experimental devices whose calibration is likely done per case anyways. In addition the topic would appear to be more meaningful for publications that cover CASSI systems such as IEEE TGARRS or the like. It is hard for this reviewer to disentangle the margin of novelty of this paper in terms of federated learning approach vs. impact on the target application. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `aeby` provides valuable comments and finds the method is designed with clear purpose. `R4.1`: The biggest weakness is arguably that the paper covers a somewhat very niche topic, which is the application of a federated learning scheme to compressive snapshot spectral imaging. To some extent one would expect the technique to abstract away from the specific case of CASSI, as the solution does not particularly pertain to CASSI. | Metrics | FedAvg | FedHP | |:-------------:|:-------------:|:--------------------------:| | PSNR | $27.35\pm1.22$ | $27.87\pm0.89$ | | SSIM | $0.9174\pm0.0046$ | $0.9192\pm0.0047$ | **Table T1. Comparison between FedAvg and FedHP on CACTI (Client:3)** `A4.1`: Our key technical contribution is to provide a new multi-hardware optimization framework adapting to hardware shift by only accessing local data. The principle underlying the proposed FedHP can be potentially extended to broad SCI applications. However, due to the practical cost of data acquisition and building optics systems, this study explores one specific direction, where we focus on spectral SCI and collecting optical masks and real data from multiple hardware. Exploiting the hardware collaboration of computational imaging systems is still in an early stage. This work serves as a proof of concept to inspire future endeavors in a more general scope. We list several potential related applications that might benefit from the proposed method, such as Lensless camera [1], LiDAR [2], HDR camera [3], or CT-Reconstruction [4], cooperating multiple imaging systems via aligning forward models, etc. Per reviewer's inspiration, we tried our best to launch additional experiments by applying FedHP into another prevalent SCI system of Coded Aperture Compressive Temporal Imaging (CACTI). The results in Table T1 above present a performance boost of FedHP over FedAvg baseline (under the same setting as Table 1), demonstrating that the proposed FedHP does not particularly pertain to CASSI. Due to the limited rebuttal time and the heavy workload in deploying real CACTI systems, we can only obtain the intermediate results of both methods (e.g., $1.2\times10^4$ out of $4\times 10^4$). We are committed to updating the latest results during the discussion stage. We will add the above discussion in the manuscript. [1] A simple framework for 3D lensless imaging with programmable masks. CVPR 2021. [2] LiDAR-in-the-loop Hyperparameter Optimization. CVPR 2023. [3] End-to-end high dynamic range camera pipeline optimization. CVPR 2021. [4] DOLCE: A model-based probabilistic diffusion framework for limited-angle ct reconstruction. ICCV 2023. [5] Deep unfolding for snapshot compressive imaging. IJCV 2023. `R4.2`: In addition, due to limited data available in this setup and to very limited size datasets, it is difficult to ascertain the significance of the findings. `A4.2`: This work has constructed the Snapshot Spectral Heterogeneous Dataset (SSHD), which includes multiple practical spectral SCI systems. This dataset will be released to support further research and validation. Despite the current dataset size, our experiments demonstrate consistent performance improvements using FedHP, even under challenging heterogeneous settings. This provides a strong indication of the robustness and potential of the proposed method. `R4.3`: Can the authors extend this to any other compressive imaging scheme? Or perhaps disentangle the improvements in terms of FedHP, from those specific to the application? This would also broaden the data available for validating the experiments. `A4.3`: As shown in `Table T1` and `A4.1`, we tried our best to launch additional experiments by extending FedHP into another prevalent SCI system of Coded Aperture Compressive Temporal Imaging (CACTI), which is used to compress and recover temporal information. By comparison, FedHP outperforms FedAvg by 0.52dB/0.0018 in terms of the PSNR/SSIM despite both only training for $1.2\times10^4$ iterations out of $4\times 10^4$. The results indicate that FedHP demonstrates the generalization ability on new SCI systems. We are committed to updating the latest results during the discussion stage. We will add the above discussion in the manuscript. `R4.4`: The addressed setup assumes that the problem the authors propose to tackle is meaningfully posed, I.e., that federated learning in the chosen formulation is practically meaningful. The reviewer is not sure whether this is a practically relevant problem considering that CASSI systems are arguably scientific instrumentation/experimental devices whose calibration is likely done per case anyways. `A4.4`: While CASSI systems are specialized scientific instruments, the challenge of hardware heterogeneity and data privacy remains significant. Training models for each new hardware configuration can be impractical due to data-starving clients and high computational costs, and the cross-silo cooperation can be impractical without solving privacy concerns for the hardware. FedHP provides a solution and can facilitate broader and more practical deployment of SCI systems in diverse settings. `R4.5`: In addition the topic would appear to be more meaningful for publications that cover CASSI systems such as IEEE TGARRS or the like. It is hard for this reviewer to disentangle the margin of novelty of this paper in terms of federated learning approach vs. impact on the target application. `A4.5`: Our work intersects federated learning and computational imaging, addressing challenges in both fields. The novelty of our approach lies in the integration of a hardware prompt network within a federated learning framework to handle hardware heterogeneity and data privacy issues. This contribution is relevant to both federated learning and SCI communities, and we believe it provides valuable insights and advancements applicable to various imaging systems. --- Rebuttal Comment 1.1: Comment: The authors have addressed all of my questions with their view on why FedHP is a substantial improvement for CASSI. This reviewer would like to thank them for the time spent in providing further experiments on the CACTI case, as well as the time spent to prepare the replies. Focusing on the new Table T1, it appears that FedAvg and FedHP on CACTI perform very closely in PSNR and SSIM, notably in PSNR one could argue that the two distributions overlap. Have the authors performed an analysis of the residuals so that they would be able to ascertain whether the resulting models are statistically distinguishable from their residuals? I.e., not looking only at the mean and standard deviation but at whether the resulting distribution of residuals from FedHP is similar to that of FedAvg. It appears that the results are in a close tie on the current dataset and, not having immediate clarity of how sensitive the dataset is, it is difficult to ascertain whether the proposed technique is sufficiently of impact. The authors report they believe that their technique could be generalized to "several potential related applications that might benefit from the proposed method, such as Lensless camera ,LiDAR, HDR camera, or CT-Reconstruction". This reviewer fully agrees that additional evidence on other modalities, potentially with larger datasets to ascertain the margin between the baseline and FedHP, could increase very significantly the strength and quality of the present paper submission. Due to this, the reviewer would lean to leave my previous rating unaltered. --- Reply to Comment 1.1.1: Title: Response to Reviewer aeby Comment: We appreciate the reviewer's constructive comments and the recognition of our rebuttal. To address the concern regarding the performance comparison between FedHP and FedAvg, we conducted a statistical analysis using a paired t-test to compare the PSNR and SSIM values from FedHP and FedAvg. Specifically, we define the hypotheses as follows: (1) Null hypothesis ($H_0$): there is no significant difference in the PSNR and SSIM values between FedAvg and proposed FedHP. (2) Alternative hypothesis ($H_a$): there is a significant difference in the PSNR and SSIM values between FedAvg and proposed FedHP. We calculated the differences based on the averaged PSNR and SSIM values for each scene from both FedAvg and FedHP, resulting in ten differences values for PSNR ($d_{PSNR}$) and SSIM ($d_{SSIM}$). We performed the paired t-test using $t = \frac{\bar{d}}{s\_d/\sqrt{n}}$, where $\bar{d}$ denotes the mean of the difference values for either PSNR ($d_{PSNR}$) or SSIM ($d_{SSIM}$), $s\_d$ is the standard deviations, and $n$ is the number of the paired observations (e.g., $10$). We calculated the p-value upon the t-distribution for a two-tailed test using the formula p-value$= 2 \times P(T>|t|)$, where $P(T>|t|)$ denotes the probability that a t-distributed random variable with $n-1$ degrees of freedom exceeds the absolute value of the observed t-statistic. For PSNR, we observe $t=2.50$ and p-value$=0.034$. Since the p-value is less than the typical significance level of $0.05$. Therefore, we reject the null hypothesis ($H_0$) and conclude that there is a statistically significant difference between the PSNR values of FedAvg and FedHP. For SSIM, we observe $t=7.39$ and p-value$=0.00004$. The p-value of is significantly less than $0.05$, indicating a very strong statistically significant difference between the SSIM values of FedAvg and FedHP. The test results in PSNR and SSIM confirms that the performance gap between FedHP and FedAvg is statistically significant. We thank the reviewer for the valuable feedback, which has enhanced the quality of our submission. We will add the above discussions into the final version and look forward to any further suggestions from the reviewer. We sincerely appreciate the reviewer’s time and effort. --- Rebuttal 2: Comment: The reviewer is willing to acknowledge the amount of work done by the authors to prove their point, and has decided to update the recommendation. Please update, however, your references to mention that significant future work will be done in the direction of lensless cameras, and CT reconstruction, i.e., in the presence of hardware-defined forward models. This must also include thoroughly all hyperspectral and multispectral compressive imaging strategies that include, among others, random masks applied in a number of ways, including those by random convolution, if the authors see them as feasible. In absence of other forward models in the current paper, it is important that the authors update the manuscript accordingly with a comprehensive set of references on what forward models could be studied in future work, that should be supported by experimental evidence. Thanks again for addressing my concerns. --- Rebuttal Comment 2.1: Title: Response to Reviewer aeby Comment: We thank the reviewer for acknowledging the response and for updating the recommendation. In line with the reviewer’s suggestions, we will revise the manuscript to outline potential future work in areas such as lensless cameras, LiDAR, HDR cameras, and CT reconstruction, especially concerning hardware-defined forward models. We will also include a comprehensive set of references on related hyperspectral and multispectral imaging strategies, incorporating forward models that could be explored in the future work. We much appreciate the reviewer's insightful suggestions, which have significantly contributed to the enhancement of the paper's quality and scope!
Summary: The paper addresses the challenges faced in snapshot compressive imaging (SCI) systems due to hardware shifts and the need for adaptability across multiple hardware configurations. By introducing a hardware-prompt network and leveraging federated learning, the framework enhances the adaptability and performance of SCI models across different hardware configurations. Strengths: 1. The manuscript is well-organized with a clear and logical structure that enhances the readability of the content. 2. The paper provides a detailed background on SCI and FL. The planned release of the Snapshot Spectral Heterogeneous Dataset (SSHD) will significantly aid future research. 3. Using different coded apertures for different clients closely mirrors real-world scenarios, adding significant practical relevance to the study. Weaknesses: 1. The literature review on federated learning (FL) heterogeneity in the Introduction section lacks comprehensiveness. There are numerous recent papers addressing heterogeneity in FL that are not cited here. Additionally, the references included are somewhat outdated. Including more current and diverse references would strengthen the review and provide a more accurate context for the study. 2. the manuscript explains that the coded apertures for each client follow a specific distribution Pc, it does not provide further details about the exact nature or type of this distribution. 3. There are many ways to partition data to construct heterogeneous scenarios, such as practical and pathological methods. The approach of equally splitting the training dataset according to the number of clients is not very convincing. The authors should try different partitioning methods. 4. It is unclear which datasets were used to obtain the experimental results in Tables 1 and 2. The authors did not specify this, which creates confusion in the experimental analysis. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. What is the rationale for using adaptors, and what is their function? 2. What network models are used in the comparison methods? It is necessary to clearly state the fairness of the validated methods. 3. The explanation for Figure 3 is not detailed enough. For example, what is "Patch"? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. In the "Discussion of the client number" section, the number of clients increases very little, and the metrics slightly decline. However, the authors conclude that the performance is stable with the change in the number of clients. The small variation in the number of clients is unconvincing. A larger difference in the number of clients should be set to demonstrate this more effectively. 2. The authors mention the "presence of data privacy" in the contributions, but there is no further discussion or experimental comparison regarding data privacy in the subsequent sections. This makes it difficult to validate their contribution to data privacy protection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `1X5M` provides valuable comments and finds the proposed work have significant practical relevance to the study and the collected SSHD dataset can benefit the future research. We will release the dataset and the training/testing code. `R3.1`: The literature review on federated learning (FL) heterogeneity in the Introduction section lacks comprehensiveness. Additionally, the references included are somewhat outdated. `A3.1`: We appreciate the reviewer's commitment in helping improve our draft. We will include more current and diverse references to align with the latest development. `R3.2`: the manuscript explains that the coded apertures for each client follow a specific distribution Pc, it does not provide further details about the exact nature or type of this distribution. `A3.2`: We visualize the distributions of the real-world masks collected from real CASSI systems in Fig. 5$\sim$7. The coded apertures for each client follow specific distributions and can reflect the practical variations, including perturbations, shaking, and potential replacements. `R3.3`: There are many ways to partition data to construct heterogeneous scenarios, such as practical and pathological methods. The approach of equally splitting the training dataset according to the number of clients is not very convincing. The authors should try different partitioning methods. `A3.3`: We'd like to clarify that our approach already incorporates both practical and pathological methods to construct heterogeneous scenarios by considering heterogeneity stemming from both the dataset and hardware variations. * Practical scenario (Table 1): Different clients have non-overlapping masks sampled from the same distribution. This simulates real-world hardware perturbations or imperfections within the same system design, which is a common practical scenario. * Pathological scenario (Table 2): Each client has masks sampled from a specific distribution. This represents a more extreme case of significant hardware variations or replacements across institutions, which can be considered pathological. Besides, the equal splitting of the training dataset across clients, combined with these hardware variations, creates a comprehensive heterogeneous environment. `R3.4`: It is unclear which datasets were used to obtain the experimental results in Tables 1 and 2. The authors did not specify this, which creates confusion in the experimental analysis. `A3.4`: All our experiments use the CAVE dataset for the training and the KAIST dataset for testing, both of which are prevalent benchmarks in the field. We will make it clear in the manuscript about the dataset settings. `R3.5`: What is the rationale for using adaptors, and what is their function? `A3.5`: The adaptor in FedHP (1) allows efficient fine-tuning of pre-trained backbones, enhancing the model's ability to adapt to new clients. (2) The adaptor significantly reduces communication costs in the FL system by only requiring the transmission of adaptors rather than entire model backbones. As shown in Table 3, the adaptor brings a 0.07dB improvement in PSNR and 0.0029 boost in SSIM. The adaptor is a CONV-GELU-CONV structure governed by a residual connection (L207). `R3.6`: What network models are used in the comparison methods? It is necessary to clearly state the fairness of the validated methods. `A3.6`: All compared methods use the same network architecture MST [R1] for the reconstruction backbones and a SwinIR [R2] block for the prompt network (see L240\~L241). The key differences lie in how each method handles the federated learning aspect and hardware variations. In Table 3, we demonstrate the comparable complexity and cost between FedHP and FedAvg. [R1] Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In CVPR 2022. [R2] Swinir: Image restoration using swin transformer. In ICCV 2021. `R3.7`: The explanation for Figure 3 is not detailed enough. For example, what is "Patch"? `A3.7`: The ``Patch'' corresponds to the small regions (such as a, b in Fig.3 top-left) that are randomly cropped for the spectral accuracy evaluation. We will provide more detailed explanations in Figure 3. `R3.8`: In the "Discussion of the client number" section, the number of clients increases very little, and the metrics slightly decline. However, the authors conclude that the performance is stable with the change in the number of clients. The small variation in the number of clients is unconvincing. A larger difference in the number of clients should be set to demonstrate this more effectively. `A3.8`: While the performance slightly declines with more clients, FedHP maintains its advantage over FedAvg (0.21dB improvement for C=5 and 0.19dB for C=10). We conjecture that the slight descent tendency might attribute to more complex corporations among the clients and less sufficient training samples for each client. It is non-trivial to collect real hardware systems due to the privacy concern and the cost. FedHP is dedicated to facilitate this process by offering a way of cross-silo cooperation. We are also working on collecting larger-scale data and real hardware. `R3.9`: The authors mention the "presence of data privacy" in the contributions, but there is no further discussion or experimental comparison regarding data privacy in the subsequent sections. This makes it difficult to validate their contribution to data privacy protection. `A3.9`: Thanks for the valuable comment. While we don't explicitly discuss data privacy in our experiments, it is inherent to the federated learning framework we employ. FedHP, like other FL methods, ensures that raw data never leaves the local clients, addressing privacy concerns by design. We will add a dedicated subsection to discuss how our method preserves data privacy and compare it with centralized approaches in terms of privacy protection. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my concerns. --- Reply to Comment 1.1.1: Title: Response to Reviewer 1X5M Comment: We appreciate the reviewer's valuable comments. We thank the reviewer's recognition of our rebuttal! --- Rebuttal 2: Comment: The authors have addressed most of my concerns; however, two issues remain unresolved, so I will keep my rating unchanged. 1) Regarding the discussion of the client number, Table 4(a) in the paper shows that when C=5, FedHP outperforms FedAvg by 0.27 dB. However, in the rebuttal (Section A3.8), this performance gap is reported as 0.21 dB. The authors have not provided an explanation for this discrepancy. 2) The authors mentioned in the rebuttal that they would add a detailed description of Privacy Protection, but this has not been presented. --- Rebuttal Comment 2.1: Title: Response to Reviewer 1X5M Comment: We appreciate the reviewer's constructive comments and the recognition of our rebuttal. The presentation of *0.21dB improvement for C=5* in `A3.8` was a typographical error and should be consistent with the manuscript, which correctly states a *0.27 dB improvement for C=5*. Our intention was to highlight the advantage of the proposed FedHP and FedAvg as presented in Table 4 (a) for *C=5*. Additionally, we provide a detailed description of the privacy protection inherent in our approach as follows. FedHP inherently addresses privacy from different perspectives. (1) **Hardware decentralization**: In the FedHP framework, real hardware configurations (e.g., real masks) remain confidential to the local clients. This design makes it difficult to reverse-engineer the pattern or values of the real mask without direct sharing. (2) **Raw data decentralization**: FedHP maintains a private hyperspectral dataset for each client. The hyperspectral images are processed locally (e.g., encoding or data augmentation) and never leaves the client, thereby minimizing the risk of exposure. (3) **Training process decentralization**: FedHP only collects the local updates from the prompt network, which are then shared with the central server. The local updates are anonymized and aggregated without accessing underlying data, preventing any tracing back to the data source and thus protecting confidentiality. In Table 3, we quantitatively compared the performance of the proposed “FedHP” and “FedHP w/o FL” under privacy-constrained environments. FedHP demonstrates a $0.6$ dB average improvement (e.g., $31.35$ *v.s.* $30.75$), showcasing its robust model performance and offering a significant privacy advantage that aligns with regulations restricting data sharing. We will add the above discussions into the manuscript. We sincerely expect our response can help solve the reviewer’s concern and expect a future discussion with the reviewer!
Summary: Most existing reconstruction models in snapshot compressive imaging systems are trained using a single hardware configuration, making them highly susceptible to hardware variations. Previous approaches attempted to address this issue by centralizing data from multiple hardware configurations for training, but this proved difficult due to hardware heterogeneity across different platforms and privacy concerns. This paper proposes a Federated Hardware-Prompt Learning (FedHP) framework, which aligns data distributions across different hardware configurations by correcting the data distribution at the source, thereby enabling the trained model to adapt to multiple hardware configurations. The performance on existing datasets shows an improvement compared to previous popular training frameworks. Additionally, the authors have released their own created dataset and code. Strengths: 1.Previous work focused on the data itself, directly correcting various types of data through network models. In contrast, the authors of this paper focus on the root cause of the differences—hardware. They address the issue from the perspective of learning the differences in hardware. 2.The method proposed by the authors has achieved excellent performance compared to existing mainstream methods, and the average performance has also improved. Weaknesses: 1.The number of clients used in the experiments is still relatively small. Although a simple comparison of the impact of different numbers of clients was made, there is not much difference in performance compared to other methods when the number of clients is larger. 2.Although good results were reported on simulated data, more results on real data should be included to evaluate the effectiveness of the proosed method. Technical Quality: 3 Clarity: 3 Questions for Authors: Why does the prompter lead to such a significant improvement, while the effect of the adaptor is not as pronounced? Please provide an in-depth analysis. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The generalization to different hardware systems is crucial for deep learning based methods. The current form of this manuscript only reported on a small scale real dataset captured by several systems. A larger dataset captured by more systems is necessary to evaluate the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `Jxiy` provides valuable comments and finds the proposed method addresses the issue from the new perspective of hardware with good performance. `R2.1`: The number of clients used in the experiments is still relatively small. Although a simple comparison of the impact of different numbers of clients was made, there is not much difference in performance compared to other methods when the number of clients is larger. `A2.1`: We find FedHP maintains its performance advantage even with increased client numbers (C=5 and C=10), outperforming FedAvg by 0.21dB and 0.19dB respectively as shown in Table 4 (a). These results demonstrate FedHP's scalability and consistent performance improvements across varying numbers of clients. Besides, it is non-trivial to collect real hardware systems due to the privacy concern and the cost. We are dedicated to facilitating this process by offering a way of cross-silo cooperation using FedHP. We are still working on collecting larger-scale data and real-world hardware systems. `R2.2`: Although good results were reported on simulated data, more results on real data should be included to evaluate the effectiveness of the proposed method. `A2.2`: We provide additional comparisons on real data in Fig. 7$\sim$8. We are still working on collecting more data. `R2.3`: Why does the prompter lead to such a significant improvement, while the effect of the adaptor is not as pronounced? Please provide an in-depth analysis. `A2.3`: Thanks for the valuable comment. The prompter plays a crucial role in capturing hardware-specific characteristics and aligning inconsistent data distributions across clients. It directly addresses the heterogeneity rooted in the input data space, which is a key challenge in federated learning for SCI systems. By comparison, the adaptor, while important for efficient fine-tuning, has a more subtle effect on performance. Its primary role is to reduce communication costs in the FL system by allowing us to use pre-trained backbones and only communicate adaptors. `R2.4`: The generalization to different hardware systems is crucial for deep learning based methods. The current form of this manuscript only reports on a small scale real dataset captured by several systems. A larger dataset captured by more systems is necessary to evaluate the method. `A2.4`: We kindly illustrate that current setup can consider a large amount of the real-hardware under two challenging scenarios. * Same distribution: As shown in Table 1, we sample non-overlapping masks from the same distribution for different clients. This simulates hardware perturbations or imperfections within the same system design. * Different distributions: In Table 2, we explore a more challenging scenario where each client has masks sampled from a specific distribution. This closely mimics diverse real-world scenarios, including hardware replacement or significant variations across institutions. These experiments, conducted on real hardware masks with noise (Fig. 9-11 in supplementary), demonstrate FedHP's ability to generalize across various hardware configurations. We appreciate the reviewer's insights on collecting larger dataset. This work laid the foundation for future extension by collecting a Snapshot Spectral Heterogeneous Dataset built upon multiple practical SCI systems for the first time. We are still working on collecting more data and real systems for better evaluation. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations. Concerns like real-system evaluation cannot be addressed in such a short period. I keep my initial rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer Jxiy Comment: We appreciate the reviewer's recognition of our response and support for our work!
Summary: The paper introduces FedHP, a reconstruction method for snapshot compressive imaging systems, which addresses the challenge of cross-hardware learning by proposing a federated learning approach. The key contribution lies in using a hardware-conditioned prompter to align data distributions across different hardware configurations, thereby enhancing the adaptability of pre-trained models without compromising data privacy. Strengths: 1. The writing of the paper is good, making it easy to read and follow with clear arguments. 2. The problem defined in the paper is novel with a clear motivation, providing good inspiration for solving the issue of inconsistent device configurations in snapshot compressive imaging. 3. The proposed method is clear and the conclusions are relatively convincing. Overall, it is an interesting work. Weaknesses: 1. There are some typos in the writing. For example, the caption of Figure 3 and the bold parts in the second row of Table 1 and the eighth row of Table 2 are confusing. 2. The proposed FedHP method is relatively straightforward and lacks deeper insights. Moreover, it does not show a significant performance improvement compared to FedAvg. 3. The experiments are not comprehensive enough. Given that this work aims to address the snapshot compressive imaging (SCI) problem, I suggest adding experiments to test the applicability of other SCI systems, such as Coded Aperture Compressive Temporal Imaging (CACTI). 4. There is a lack of sufficient real-world experiments. It would be beneficial to set up multiple independent SCI systems to test the algorithm's performance. Including reconstruction results obtained from these real-world systems is recommended. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. All the experiments in this paper are based on the SD-CASSI model. Can the same FedHP model be simultaneously applicable to both DD-CASSI and SD-CASSI architectures, which have significantly different designs? 2. Although the proposed method outperforms other algorithms in terms of performance metrics, there are still many artifacts in the reconstructed images. While I understand that this is maybe due to the precision issues of the CASSI system, it is crucial for evaluating the practical usability of the algorithm. Additionally, I am not sure whether the spectral accuracy of the reconstructed images is also optimal in statistical terms, which is vital for spectral imaging systems. 3. Furthermore, if possible, I hope the authors can also address the concerns I raised in the Weaknesses section. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We much appreciate that the Reviewer `BdgG` provides valuable comments and finds the problem novel and the method convincing. `R1.1`: There are some typos in the writing. For example, the caption of Figure 3 and the bold parts in the second row of Table 1 and the eighth row of Table 2 are confusing. `A1.1`: We will correct the caption of Fig. 3 and the bold parts in Tables 1 and 2 to improve clarity. We will thoroughly proofread the manuscript to eliminate other errors. `R1.2`: The proposed FedHP method is relatively straightforward and lacks deeper insights. Moreover, it does not show a significant performance improvement compared to FedAvg. `A1.2`: FedHP offers insightful contributions: * It explicitly models hardware variations in computational imaging systems, which is non-trivial and addresses a key practical challenge. * The plug-and-play prompt network allows learnable hardware configurations, enabling hardware-software co-optimization. * The combination of prompt learning and adaptors reduces communication costs in federated learning systems. We empirically find that FedHP enables a consistent performance boost over FedAvg on different number clients, demonstrating the scalability of the proposed method. Considering that FedAvg is a strong baseline that even outperforms prevalent FL methods such as FedProx and SCAFFOLD, it is non-trivial to achieve performance boost over FedAvg. `R1.3`: I suggest adding experiments to test the applicability of other SCI systems, such as Coded Aperture Compressive Temporal Imaging (CACTI). | Metrics | FedAvg | FedHP | |:-------------:|:-------------:|:--------------------------:| | PSNR | $27.35\pm1.22$ | $27.87\pm0.89$ | | SSIM | $0.9174\pm0.0046$ | $0.9192\pm0.0047$ | **Table T1. Comparison between FedAvg and FedHP on CACTI (Client:3)** `A1.3`: We launch additional experiments by applying FedHP to the real CACTI systems. The results in Table T1 above present a better reconstruction performance of FedHP over FedAvg baseline (under the same setting as Table 1), demonstrating the generalizability ability of FedHP over a new SCI architecture. We will add the above discussion in the manuscript. Due to the limited rebuttal time and the heavy workload in deploying real CACTI systems, we can only obtain the intermediate results of both methods (e.g., $1.2\times 10^4$ out of $4\times 10^4$). We are committed to updating the latest results during the discussion stage. `R1.4`: There is a lack of sufficient real-world experiments. It would be beneficial to set up multiple independent SCI systems to test the algorithm's performance. `A1.4`: We kindly illustrate that all experiments are performed upon multiple independent real-world SCI systems. For both training and testing, we not only consider sampling non-overlapping masks from the same mask distribution, but also explore a challenging scenario where each client can have real masks sampled from a specific distribution, mimicking the diverse real-world scenarios including hardware perturbation and replacement (L27). `R1.5`: All the experiments in this paper are based on the SD-CASSI model. Can the same FedHP model be simultaneously applicable to both DD-CASSI and SD-CASSI architectures, which have significantly different designs? `A1.5`: It is challenging to apply FedHP simultaneously to both DD-CASSI and SD-CASSI architectures due to their different optical designs and forward models. DD-CASSI uses two dispersive elements and a coded aperture, while SD-CASSI uses a single dispersive element and a coded aperture, resulting in distinct measurement formation processes. These differences lead to incompatible forward models, making it difficult to unify them under a single reconstruction model. However, we agree this is an interesting direction for future research. `R1.6`: There are still many artifacts in the reconstructed images. While I understand that this is maybe due to the precision issues of the CASSI system, it is crucial for evaluating the practical usability of the algorithm. `A1.6`: Assessing the practical usability of SCI algorithms is inherently difficult due to the complex interplay of hardware limitations, reconstruction algorithms, and application-specific requirements. The proposed FedHP method itself is actually a step towards uncovering and tackling these practical challenges. By explicitly modeling hardware variations and enabling cross-hardware learning, FedHP aims to address real-world issues that arise when deploying SCI systems across multiple devices or institutions. `R1.7`: I am not sure whether the spectral accuracy of the reconstructed images is also optimal in statistical terms, which is vital for spectral imaging systems. | Methods | Scene 1 | Scene 2 | Scene 3 | |----------|----------|----------|----------| | FedAvg | 0.9816 | 0.8905| 0.8634 | | FedHP | **0.9901** | **0.9523** | **0.8851** | **Table T2. Spectral accuracy comparison on three testing scenes.** `A1.7`: We provide the spectral accuracy comparison in Fig.3, 5, and 6. Specifically, we randomly select a visually informative region (patch) on the hyperspectral images and compute the density values for a specific wavelength (i.e., dividing pixel values on this specific wavelength by the summarized pixel values along all the wavelengths). We statistically measure the spectral accuracy by computing the correlation of the density values between the prediction and the ground truth. FedHP enables better spectral accuracy by comparison. We further perform a statistical experiment on three simulation data, on which we randomly choose 10 spatial regions for the averaged correlation computation. As shown in Table T2 above, FedHP consistently enables a higher averaged correlation over FedAvg, indicating a better spectral accuracy. We will add the above discussions into the final version. --- Rebuttal Comment 1.1: Comment: The author has addressed most of my concerns. I will keep my score unchanged. --- Reply to Comment 1.1.1: Title: Response to Reviewer BdgG Comment: We appreciate the reviewer's approval for our response and recognition of our work!
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation
Accept (poster)
Summary: The paper proposes a framework that integrates large multimodal language models (MLLMs) and diffusion models to enable holistic language planning and vision planning for long-horizon robotic manipulation tasks with complex instructions. The authors jointly train the MLLM and diffusion model for language reasoning and visual imagination through latent image token generation. An explicit consistency loss aligns the reasoned instructions with the imagined subgoal images. Strengths: 1. Novel motivation for integrating of multiple modalities for providing better guidance. 2. Principled design of the framework components like the encoding-side alignment and the latent image token generation approach. Weaknesses: 1. Weak experimental evaluation (see below questions). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. While the authors acknowledge that training and inference costs are significant, the current draft lacks a more in-depth analysis of these problems. In particular, what are the various tradeoffs associated with different MLLMs that can be used, taking into consideration training time/FLOPs/MACs? How does varying these choices impact performance? Experiments answering these questions are equally as important as the ablations being run on training design choices (e.g. alignment loss). 2. Lack of real-world evaluation. Many works ([1], [2]) in this problem setting leveraging foundation models for robotic manipulation demonstrate the advantages of these large MLLMs/generative models in real-world settings, where the distribution of objects is extremely long-tailed. Can the authors show that PERIA can operate with similar success in this regime? [1] [Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning](https://arxiv.org/abs/2311.17842) [2] [Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models](https://arxiv.org/abs/2310.10639) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Q1 More analysis of Training Cost & Performance between LLM backbone Thanks for the insightful suggestions! Considering most PERIA's computational cost is to align between vision and language, and bridge the gap between pretrained LLMs and image editing models in general domains versus robotics manipulation domain, we conduct further investigation by substituting enhanced model backbones for both the visual encoder and LLM. We compared the training time and performance until convergence and utilized CalFlops, an open-source tool designed for calculating FLOPs and MACs for neural networks, with results summarized as follows: |Model Variant|Training Time (Perception) ↓|Training Time (Reason+Imagine) ↓|FLOPs ↓|MACs ↓|Success Rate ↑| |--|--|--|--|--|--| |PERIA(ViT+Vicuna7B,reported in paper)|8 hrs|42 hrs|1815.38|907.3|68.2| |PERIA(ViT+Vicuna 13B)|13 hrs|66 hrs|3679.82|1810.7|71.9| |PERIA(ViT+llama2 7B)|8 hrs|41 hrs|**1740.89**|**874.8**|69.0| |PERIA(ViT+llama3 8B)|7 hrs|39 hrs|1926.17|962.2|**73.1**| |PERIA(LIV+Vicuna 7B)|5 hrs|38 hrs|1804.60|899.1|69.2| |PERIA(LLaVA1.5 7B)|**4.5 hrs**|**32 hrs**|1814.59|903.0|70.3| **Key Findings:** - **LLM Capabilities Impact Performance and Efficiency:** Stronger LLMs generally improve PERIA's performance. Replacing Vicuna-7B with models like Vicuna-13B, LLaMA2-7B, and LLaMA3-8B led to varying improvements. Training time is influenced by parameter numbers and inherent LLM capability. Despite its larger size, Vicuna-13B underperformed LLaMA3-8B in both performance and training efficiency due to higher parameters and lower capability. Conversely, LLaMA3-8B, with more parameters than Vicuna-7B, showed reduced training time, likely due to its stronger general-domain knowledge facilitating easier fine-tuning. - **Pretrained MLLM and Visual Encoder Enhances Performance and Efficiency:** Pre-trained MLLMs, such as LLaVA 1.5, which have undergone general domain vision-language alignment, obviate the need for alignment LLM and visual encoder from scratch. PERIA trained on LLaVA 1.5 can significantly reduce training time and improve the overall performance. Similarily, LIV, a robotics-specific representation with fewer parameters than ViT-B-32, leveraged its pre-training on robotics datasets to achieve vision-language alignment, alleviating further alignment efforts. **Conclusion:** Our findings suggest balancing stronger LLMs' performance benefits against the computational costs of larger parameters. We recommend prioritizing capable LLMs within similar size constraints and favoring MLLMs or models pre-aligned with the robotics domain. Future work will explore **more powerful model backbones** and **efficient fine-tuning techniques** to reduce computational costs. Additionally, we aim to investigate **lightweight subgoal modalities** (such as object masks, bounding boxes, or keypoint tracking) to balance cost and guidance. We also hope PERIA can serve as a **foundational model** for MLLM-based embodied manipulation research, offering a more efficient starting point than learning from scratch with non-robotics-tuned MLLMs. # Q2 Real Robotics Evaluation Good suggestions! We highly agree on the importance of demonstrating PERIA's capabilities in real-world settings, but we face limitations due to the absence of robotics in our lab. To address this, we use the BridgeData v2 dataset, which features real-world manipulation videos and corresponding action annotations, to evaluate PERIA's potential through: 1. **Language Planning Accuracy**: Assessing action prediction accuracy against provided annotations. 2. **Visual Planning Fidelity**: Evaluating instruction-following ability by generating goal images from given action annotations. This approach allows us to isolate and assess PERIA's high-level cognitive and planning abilities in real-world scenarios, excluding low-level policy execution. The results are as follows (We want to clarify that SuSIE is already included as a key baseline of visual planning method in our main paper. ViLA is also mentioned in related work section but not as baseline before rebuttal due to its unavailability of code and its similar training-free paradigm with the PAR approach baseline, differing primarily in the LLM backbone used. For this, we implement the base version of ViLA with GPT4V based on PAR for the limited timewindow): ||**Language Accuracy↑**|**Visual Fidelity↓**| |--|--|--| |ViLA|0.69|-| |SuSIE|-|21.2| |**PEIRA**|**0.76**|**17.5**| Due to the free-form nature of language annotations in BridgeData v2, we calculate language accuracy with semantic similarity rather than token accuracy, as both metrics show consistent trends in the simulator results presented in Tab.2 of main paper. The visualization example can be found at Fig.6 in uploaded PDF. More details of related work can also be found at Tab.2 in uploaded PDF. **Key Findings:** - Using the same backbone Instructpix2pix as SuSIE, PERIA generated more coherent and task-relevant images, highlighting the importance of visual tokens and the integration of MLLMs in our approach. - Language accuracy decreased for open-vocabulary tasks in real domains compared to simulator performance for the Sim2Real gap, but PERIA still outperforms ViLA for the enhanced perception pretraining, which improves its ability to correctly interpret and describe complex manipulation tasks in real-world scenarios. **Conclusion:** We will conduct comprehensive real-world evaluations as soon as resources permit and we may try SimpleEnv as a temporal substitute of real-robot before ready. Additionally, we will focus on collecting and incorporating more real-robot datasets for pretraining, enhancing PERIA's adaptability to real-world scenarios. --- **Sincere thanks for your insightful review! We hope our response can address the concerns.** --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and comments. Thank you for your detailed response. I maintain my positive rating. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your recognition of our work! We will incorporate the corresponding details into the updated version. The constructive suggestions really help us improve the quality of the paper! Please let us know if you need any further information or clarification.
Summary: The paper tackles the problem of long-horizon task planning on pick-and-place tasks in the Ravens domain. Given a dataset of trajectories, it first learns the projection to align the vision and language encoder for a multimodal LLM. Then it finetunes both the multimodal LLM and a diffusion model to generate a step action in language, where the diffusion model is used to generate a conditioning subgoal image, which is proposed as an intermediate step that helps with the step action generation in language. Strengths: - The paper is overall well-written and the figures are helpful for understanding the method. Weaknesses: - It is unclear, at least from the experiments in the paper, that the diffusion model is actually useful, especially when the output is still in language space. For example, it seems that the tasks studied in the paper can be easily tackled by a modern multimodal language model (likely even the open-sourced ones), by simply providing the the initial image and appropriate prompting. However, this is missing as an important baseline in the paper (and this does not require additional training data). Furthermore, to demonstrate the effectiveness of an image subgoal in addition to a language subgoal, the evaluation would have to be done on tasks that have subgoals that are difficult to describe in language but easy to describe in visual space, but all the evaluated tasks are the contrary. - A related work “Video Language Planning” also seems to be missing from the paper, despite it might involve closed-sourced models. However, the idea seems quite relevant and it’s unclear if the paper provides additional insights for the community. Technical Quality: 3 Clarity: 3 Questions for Authors: See "weaknesses" section above. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The limitations are described in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Q1 Effectiveness of Diffusion model & MLLMs & Tasks better described in image Thank you for insightful questions! We appreciate the opportunity to respond point by point: 1. **Tasks better described in image** We highly agree that tasks with subgoals better described in visual space are crucial. Our evaluation spans three task types. For Blocks & Bowls and Letters, the textures, colors, and backgrounds are relatively simple and straightforward to describe verbally. However, from VIMA-BENCH, which is specialized with multimodal instructions (interleaved image and language), we selected 8 long-horizon tasks categorized into three types: - **Rearrange**: requires precise absolute positioning, which is challenging to specify accurately through qualitative language alone, image with subgoal location is more clear. - **Constraint**: involves explicit intermediate or outcome constraints. Language instruction are often lengthy and unclear. - **Follow:** infers related objects from given example and reason. Object shapes, colors, and textures are often difficult to describe verbally, especially with multiple distractors. We believe these tasks are challenging to clearly described in language only and more suitable for image subgoals. For illustrating examples, please refer to Fig.2 in uploaded PDF. 2. **Effectiveness of Diffusion model:** - **Providing holistic planning capabilities:** Evaluations across all task types demonstrate the performance gains of holistic planning over language planning methods, with improvements of +19.4% in Blocks & Bowls and +17.4% in Letters. The advantages of holistic planning are even more pronounced in VIMA-BENCH with +37.1%, highlighting the difficulty of adequately describing subgoals using language alone, as shown in Tab.1 of main paper. PERIA's own ablation studies on subgoal modality also support similar conclusions, shown in Fig.3 in uploaded PDF. More detailed subgoal guidance always works. - **Enhancing language planning accuracy:** Diffusion model, serving as a visualization module, introduces pixel-level supervision from groundtruth subgoal images to the MLLM. This integration, along with a consistency loss, forms an intermodal bridge that enables language planning to benefit from pixel-level guidance, thus avoiding isolated learning within language supervision.Our ablation studies, which excluded diffussion model and entirely removed visual supervision from PERIA, resulted in a decrease in language planning accuracy, shown in Tab.2 of uploaded PDF. This demonstrates that image generation as an auxiliary prediction task enhances the MLLM's understanding of task instructions and scenes, thereby improving reasoning capabilities, proving the phrase "What I cannot create, I do not understand". 3. **Capability of pre-trained MLLMs:** Good idea! We tested several popular MLLMs, including GPT4V, Claude 3.5 and InternVL-2 with web interfaces. And we conducted a comprehensive evaluation of GPT-4V key and LLaVa 1.5. The results are shown in Tab.2 of uploaded PDF. Due to domain gaps between robotics and general domain, these models often misidentified scene object properties, directly impacting reasoning, shown in Fig.5 in uploaded PDF. While GPT-4V demonstrated superior accuracy among MLLMs, it still underperformed compared to the fine-tuned EmbodiedGPT and PERIA. We do not expect to achieve superiority over pretrained MLLMs in general domain. Rather, we highlight that deploying MLLMs in robotics manipulation requires substantial domain-specific data fine-tuning and foundational capability enhancements (see LLaVa 1.5 after finetune). # Q2 Discussion with VLP Good suggestions! We now make a detailed comparison between PEIRA with VLP. **Similarities:** 1. **Motivation:** Leverage a generative model to generate subgoal with modalities beyond language to offer more sufficient guide 2. **Overall pipeline**: VLM/MLLM + generative model + low-level policy 3. **Necessity for fine-tuning:** VLP finetunes PALM-e 12B and PERIA finetunes Vicuna 7B both using addtional robotics domain dataset. **Key Differences:** 1. **Training Paradigm:** VLP employs decoupled training for reasoning and imagination, with generated videos subject to VLM evaluation before adoption. In contrast, PERIA utilizes joint training of MLLM and diffusion model, aligning their latent spaces and employing a consistency loss to encourage semantically matching language and visual planning. 2. **Subgoal Modality:** VLP generate short-horizon video as subgoal, while PERIA generate keyframe images, offering a more lightweight representation. 3. **Low-Level Policy:** VLP offers three methods, including UniPi's inverse dynamics model and two variants of policy conditioned on either last frame or every frame of generated videos. PERIA's approach is most similar to VLP's last-frame conditioned policy, while avoiding the high fidelity and continuity requirements of the video generation, particularly the inverse dynamics approach. **Further Discussion:** The core motivation of both video and image subgoals is to provide sufficient guidance for action prediction. These subgoals can be represented through various modalities, including language, images, or videos, each offering different trade-offs between expressiveness and computational efficiency. Broadening our perspective, can we explore more lightweight modalities to describe these subgoals, such as **object&place mask, object bbox** or **keypoint tracking**? These approaches could potentially reduce computational cost by focusing prediction on relevant areas but preserve essential semantic information for sufficient guidance to low-level policy. We think this is an intriguing direction for future work, offering opportunities to balance off between computational efficiency and guidance richness of subgoal modality. --- **Sincere thanks for your insightful review! We hope our response can address the concerns.** --- Rebuttal Comment 1.1: Comment: Dear Reviewer tWdi: Thanks again for your valuable comments and constructive suggestions, which are of great help to improve the quality of our work. We sincerely hope that our additional experiments and analysis can properly address the concerns. As the end of the discussion period is approaching, we are keen to read any further valuable feedback after reviewing our rebuttal. If there are any further concerns, questions, or suggestions, please feel free to ask and discuss with us at any time. We are more than willing to response any of them. **Thanks to your hard work and insightful suggestions!** Best regards, Authors --- Rebuttal Comment 1.2: Title: Response Comment: Thank you for the detailed response, and I appreciate the efforts for the additional experiments, which I think have greatly enhanced the paper. I have raised my recommendation accordingly. --- Reply to Comment 1.2.1: Title: Sincere Thanks for your Time and Effort! Comment: We are deeply grateful for your recognition and invaluable feedback. The constructive suggestions have significantly helped us improve the quality of our paper, and we will incorporate the corresponding details into the updated version. Your positive recognition means a great deal to us, and we truly appreciate it. Thanks for your time and efforts again!
Summary: The paper proposes a holistic vision-language planning method for long-horizon robot manipulation, by learning a multi-modal large language model (MLLM). The MLLM generates interleaved language actions and keyframe images based on language goal and the initial image. Each pair of generated language and keyframe image is used as conditioning of a learned motion policy for robot manipulation. Based on a pretrained MLLM model, the paper first learns a projector to align visual encoding to with language on image captioning tasks tailored to robot manipulation. Then it applies instruction tuning to fine-tune the MLLM, an output projector, and a diffusion model to generate interleaved language and images. Additional, the authors propose another training objective to align the generated language and images. All large models are fine-tuned with LoRA. On simulated robot manipulatio benchmarks, the proposed method outperforms imitation learning, language planning, and vision planning methods. The paper also systematically evaluates capabilities of the MLLM along different axes, and justifies the benefits introduced by each loss design via ablation studies. Strengths: - The paper tackles the important challenge of robot long-horizon planning. The proposed method plans jointly in the language and image space, providing rich information for the low-level policy to condition on. - The paper exploits the capabilities of MLLM to generate language and images for robot manipulation, used with a separate low-level policy. I think this is good practice as MLLM is not naturally suitable to generate robot motion. - The experiments are comprehensive and provide useful information on understanding the capability of the trained MLLM. - The paper is in general well-written and easy to follow. Weaknesses: - The explanation of low-level policy is missing from the main paper. This part is very important - the MLLM outputs language and images only, and it's not clear how these modalities are bridged with robot motion. - The contribution of the alignment loss between generated image and language is not sufficiently justified in the experiment. It will be helpful if the authors can provide the task success rate when the loss is absent. Technical Quality: 3 Clarity: 3 Questions for Authors: - I wonder which of the three pretraining tasks is the most important for vision-language alignment in the context of robot manipulation. It will be interesting if the authors can show some ablation studies on this. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Q1 Details of Low-level Policy Sorry for the confusion. Due to limited space, we placed the training details of low-level policy in Appendix E. Thanks for bringing the attention to critical importance of this section for a comprehensive understanding of the PERIA architecture and we plan to incorporate this information into the updated version of main paper. Here we want to make more explanation about the low-level policy. - **Architecture**: To enable multi-task learning, we adopt CLIPort, a wildly-used data-efficient end-to-end multi-task learning algorithm for Ravens that leverages an SE(2) action space. CLIPort introduces several variants and we utilize two version that conditioned on image and language respectively. Both utilize CLIP's visual and language encoders to extract embeddings, which are then fused with observation inputs via element-wise product. We directly train these variants with stepwise language sub-instructions and coherent keyframes as inputs, as **language conditioned version** and **image conditioned version** respectively. Moreover, to accommodate the simultaneous presence of stepwise language sub-instructions and coherent keyframes from vision planning and language planning, we design a variant that is conditioned on both image and sub-instruction simultaneously, noted as the **language-and-image conditioned version**. We maintain the output prediction architecture and just modifying the input processing for fair comparison. Specifically, we fuse the language sub-instruction and corresponding subgoal image through a cross-attention block comprising a 4-layer lightweight attention layer with 4 cross-attention heads. The fused embedding is combined with the observation via element-wise product, maintaining the subsequent modules of the original architecture. Detailed comparisons of the three policy architecture versions are presented in Fig. 4 of the uploaded PDF. - **Findings**: - We investigated performance across tasks with varying horizon lengths of subgoals. Results in Fig. 5(b) in main paper demonstrate that the integration of both modalities provides more comprehensive guidance, significantly enhancing execution accuracy in long-horizon scenarios. - Furthermore, we evaluated our model across diverse task types. As illustrated in Fig. 3 of the uploaded PDF, the additional guidance proves beneficial across all task categories. Notably, tasks from VIMA-BENCH with multi-modal instructions, which are challenging to be sufficient described by language only, exhibited particularly significant performance improvements with the incorporation of subgoal images. Enhancing subgoal guidance through richer modality consistently improves the semantic clarity and accuracy s across diverse task domains. In future work, we aim to investigate more effective policy architectures as low-level policy backbones, such as diffusion policy, to further enhance the PERIA framework's performance and capabilities. # Q2 Contribution of Alignment loss Good suggestions! We conduct more analysis experiments to verify the effectiveness of alignment loss, including the lanugage accuracy, visual fidelity and success rate. The results can be found in Tab.2 in the uploaded PDF. - **The bridge between two modality**: The consistency loss, an auxiliary regularization term besides the supervision loss, serves as a bridge to enable language planning benefit from pixel-level guidance with subgoal images while allowing visual planning to be influenced by semantic-level supervision from language instructions, thus avoiding isolated learning within each modality. The ablation of the consistency loss degrades performance in both language accuracy and visual fidelity, demonstrating the effect of intermodal connection via consistency loss. - **Alleviate conflicts in holistic planning**: By incorporating the alignment loss, we strengthen the synergy between visual and language planning within the MLLM, mitigating the risk of the diffusion model and MLLM working in isolation. The generalization evaluation across three levels demonstrate that the integration is particularly beneficial for unseen tasks, where each modality are more likely to output semantics in isolation, potentially resulting in semantic conflicts and task failure. # Q3 Importance of Three Pretraining Tasks Thanks for this awesome suggestions! Our pretraining strategy comprises three tasks, each targeting specific capabilities: 1. Scene Description (SD): static understanding of single frame, including object understanding and spatial relationships. 2. Action Recognition (AR): dynamic understanding of subsequent images and semantic correlation between language instructions and subgoal images. 3. Video Understanding (VU): continuous comprehension across successive subgoal images and mitigating temporal hallucinations for long-horizon. We conducted an ablation study with variants: PERIA (w/o pretrain), PERIA (w/ SD), PERIA (w/ SD + AR), and PERIA (w/ VU) followed as: ||Language Accuracy ↑|Visual Fidelity ↓|Succees Rate(<= 8steps)↑|Success Rate(> 8steps)↑| |--|--|--|---|--| |PERIA (w/o pretrain) |80.2|16.8|55.5|42.4| |PERIA (w/ SD)|89.3|15.7|63.0|54.2| |PERIA (w/ SD + AR)|92.6|13.6|68.1|59.3| |PERIA (w/ VU)|84.2|15.4|61.0|57.2| |PERIA (w/ SD + AR + VU, Ours) |**97.6**|**12.3**|**71.2**|**66.1**| **Key Findings:** - SD is most important as the fundamental static comprehension task for other two pretrain tasks. - SD+AR achieve similar performance to the default version, indicating that the SD+AR pairing can develop certain VU capabilities. Conversely, VU as the sole pretraining task performs poorly may because it is too challenging to learn without sufficient foudnation capabilities from SD and AR. - Incorporating VU on top of SD+AR significantly improves the success rate for long-horizon tasks exceeding 8 subgoal steps. --- **Sincere thanks for your insightful review! We hope our response can address the concerns.** --- Rebuttal 2: Comment: Thank you for the rebuttal. I'm satisfied with the ablation studies that show the advantages introduced by the chosen low-level policy architecture and the alignment loss. It's also great to see how the three pretraining tasks contribute differently to the performance of PERIA. Good work! --- Rebuttal Comment 2.1: Title: Sincere thanks for the valuable recognition of our work! Comment: **We sincerely thank you for your recognition of our work!** We will incorporate the corresponding details into the updated version. The constructive suggestions really help us improve the quality of the paper! Please let us know if you need any further information or clarification.
Summary: This paper focuses on robotic manipulation with complex instructions. It proposes PERIA, a framework that integrates MLLM and diffusion models to incorporate both language planning and visual planning for long-horizon language-instructed manipulation tasks. Specifically, PERIA first performs a lightweight multi-modal alignment to consolidate the multi-modal perception capabilities. Then, PERIA performs multi-modal instruction tuning, where it outputs both subgoal language descriptions and visual tokens, both of which are fed to a diffusion model to generate subgoal images. PERIA introduces an additional consistency loss between and generated subgoal image and language descriptions. Experimental results demonstrate that PERIA significantly outperforms competitive baselines. Strengths: • This work follows a natural and reasonable pipeline to tackle the manipulation tasks with complex language instructions. Combining language planning and visual generation for manipulation is a sound approach. • The alignment stage empowers the overall capabilities, as demonstrated in the experimental part. • PERIA achieves convincing experimental results compared with previous works. The authors also conduct extensive ablative study to mine more insights. Weaknesses: • End-to-end learning for such a large system requires considerable cost. Such a comprehensive framework may lead to powerful performances but the resources may be a limitation. This paper does not present how much resources PERIA uses or related experiments to address such potential concerns. • One of my concerns is that the consistency objective, which forces the MLLM to output subgoal language descriptions, may suffer from accumulative error. This is because when the generated subgoal image is not the desired image but is a natural image that can be reached within one-step action, the MLLM would learn an incorrect subgoal description. • More literature references and related baselines should be incorporated. • The ablation in visual planning lacks an experiment where PERIA generates subgoal images with either subgoal descriptions or generated visual tokens, which should reveal more insights into what leads to the improvements in visual planning. Technical Quality: 3 Clarity: 3 Questions for Authors: • You generate subgoal images with subgoal descriptions and generate visual tokens. Why not use 1) subgoal descriptions and observation or 2) generated visual tokens alone? The former resembles a world model, and the latter sounds like a decoding of an imagined visual subgoal, both of which sound more natural. I guess you have tried the latter but found it was not as good as adding subgoal language. • What LLM do you use? It is possible that a powerful LLM accounts for superior performance to some extent. Have you compared the LLMs of different works? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors address the limitations at the end of the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Q1 Computation resources Sorry for the ambiguity arising from distributed presentation of computational resource requirements across Appendix. The computational cost of PERIA across three primary stages: Perceive (8 V100 GPUs * 8 hours ), Reason & Imagine (8 V100 GPUs * 42 hours), and Act (single V100 GPUs * 12 hours). For more details of model architecture and training hyperparameters, please refer to Appendix D and E in main paper. **More Analysis:** Considering most PERIA's computational cost is to align between vision and language, and bridge the gap between pretrained LLMs and image editing models in general domains versus robotics manipulation domain, we conduct further investigation by substituting enhanced model backbones for both the visual encoder and LLM. We compared the training time and performance until convergence, follows as: |Model|Training Time(Perception)|Training Time(Reason + Imagine)|Success Rate| |--|--|--|--| |PERIA (ViT + Vicuna 7B, reported in paper)|8hrs|42hrs|68.2| |PERIA (ViT + llama2 7B)|8 hrs|41 hrs|69.0| |PERIA (ViT + llama3 8B)|7 hrs|39 hrs|73.1| |PERIA (LIV + Vicuna 7B)|5 hrs|38 hrs|69.2| |PERIA (LLaVA1.5 7B)|4.5 hrs|32 hrs|70.3| **Key Findings:** a) **More Powerful LLM**: LLaMA-3-8B improved performance and facilitated easier training due to its superior common sense understanding. b) **Specialized Visual Encoder**: LIV, a robotics-specific representation with fewer parameters than ViT-B-32, leveraged its pre-training on robotics datasets to achieve vision-language alignment, alleviating further alignment efforts. c) **Pretrained MLLM:** Pretrained MLLMs such as LLaVA 1.5, having already undergone general domain vision-language alignment, obviate the need for fine-tuning from scratch, thus expediting adaptation to robotics tasks. **Summary:** We will continue to explore **more powerful model backbones** and **more efficient fine-tuning techniques** to alleviate the computational cost. Additionally, we are interested in investigating **more lightweight subgoal modalities** (maybe obj mask, bbox or keypoint tracking) to balance computational cost and sufficient guidance in future work. We also hope PERIA can serve as a **potential base foundation model** for other MLLM-based embodied manipulation research to reduce computational costs, offering a more efficient starting point compared to learning from scratch using MLLMs not fine-tuned with robotics data. # Q2 Consistency loss& Accumulative Error Good questions! We structure our response into three key points. - The case that generated subgoal image is a one-step goal, pointed out by the reviewer, is **Granularity Error**, which also includings repetitions, skipping, or even backtracking to visited subgoals. We expect PERIA to learn appropriate task decomposition granularity with the **supervision loss** from groundtruth labels for both language planning and visual planning. When visual and language planning produce semantically consistent errors (e.g., images and language in one step) at the same time, the supervision loss imposes significant penalties still discouraging such outputs even when consistency loss is low. - The MLLM's subgoal description ability mainly stems from action recognition pretraining in the perceive stage, not solely from consistency loss. Instead, we utilize this ability to implement consistency loss as a self-supervised mechanism during the reason and imagine stages, encouraging semantic alignment between generated instructions and subgoal images within MLLM's unified latent space. - The consistency loss, an auxiliary prediction task besides the supervision loss, serves as a bridge to enable language planning benefit from pixel-level guidance while allowing visual planning to be influenced by semantic-level supervision, thus avoiding isolated learning within each modality. This intermodal connection fosters a more holistic and coherent learning process of PERIA. For more details of visual supervise guidance and consistency, please refer to Fig.5(a) in main paper and Tab.1 in the uploaded PDF. # Q3 More Related Work Thanks for the suggestions! We additionally included comparisons based on existing pretrained MLLM methods and video planning approaches in Tab.2 of the uploaded PDF. We will add theses in the revised version of main paper. # Q4 & Q5 Condition Input for Visual Generation Awesome question! We conduct the detailed ablation studied on three generation mechanisms with language subgoal descriptions, generated visual tokens, or a combination of both as condition. The results can be found at Fig. 1 of uploaded PDF. **Key Findings:** - More tokens as condition input for visual generation is always better but improvement gets marginal when token reachs a threshold. - Visual tokens only lack sufficient semantic, potentially leading to overfitting or training collapse, especially with a limited number of tokens. Increasing the number of visual tokens can alleviate this but it is essentially equivalent to incorporating language tokens. **Discarding the inherent high-level semantics of language tokens and add more visual tokens to learn these semantics from scratch is inefficient, requiring more training time and wastefully ignoring exsited language planning output.** - Language tokens contains rich semantics but can also get benefit from more visual tokens with visual details that are challenging to describe accurately in language. The combinatorial fusion version can achieve comparable performance with fewer visual tokens, striking an efficient balance between semantic richness and visual precision. # Q6 Different LLMs Sorry for the confusions. We use Vicuna-7B as our LLM backbone. The reviewer's insight is correct - the powerful LLM can bring performance increase and the comparisions can be found at Q1 and the Appendix F.3. --- **Sincere thanks for your insightful review! We hope our response can address the concerns.** --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. Though my concerns have not been resolved, I appreciate the efforts in such a detailed rebuttal. I am still positive and will keep my score. --- Reply to Comment 1.1.1: Title: Sincere Thanks for your Time and Effort! Comment: We sincerely thank you for recognizing our work! We will continue to deeply investigate the role of the condition mechanism in image generation and provide more comparative visualizations of the consistency loss ablation. Additionally, we will incorporate detailed visualizations and corresponding analysis of the granularity error into the updated version. We sincerely appreciate your constructive suggestions, which have significantly helped us improve the quality of our paper!
Rebuttal 1: Rebuttal: # **General Response** --- **Sincere thanks to all the Reviewers for the valuable suggestions and recognition of our work!** We sincerely appreciate all reviewers' time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our key contributions and clear presentation of our paper: **Method:** **Novel motivation** for integrating of multiple modalities for providing better guidance [Reviewer WmQe]. The proposed approach follows **a natural and reasonable** pipeline to tackle manipulation tasks with complex language instructions and combining language planning and visual generation is **sound** approach [Reviewer HJPE]. The method plans jointly in the language and image space and is **a good practice** [Reviewer C2qN]. **Experiments:** The paper achieves **convincing** experimental results compared with previous works and conducts extensive ablative studies to **mine more insights** [Reviewer HJPE]. The experiments are **comprehensive** and provide useful information on understanding the capability of the trained MLLM [Reviewer C2qN]. **Clearness:** The paper is generally **well-written** and **easy to follow** [Reviewer C2qN, Reviewer tWdi]. The **figures are helpful** for understanding the method [Reviewer tWdi]. We also thank all reviewers for their insightful and constructive suggestions, which helped a lot in further improving our paper. In addition to the pointwise responses below, we summarize supporting experiments added in the rebuttal according to reviewers' suggestions. **New Experiments:** 1. Additional ablation studies on image conditional generation mechanisms, LLM backbone, consistency loss, pretraining tasks. 2. Additional comparisons with pretrained MLLM in robotics manipulation scenario. 3. Additional evaluations in real-robot datasets BridgeData v2. --- We hope these new additions help address reviewers' concerns around computational resources, effectiveness of different components, potential usage in real-world scenarios, and comparison with existing methods. **We thank the reviewers for their time and feedback in improving the quality of our work, and we hope the revisions further highlight the contributions made. Please let us know if any clarification or additional experiments would further strengthen the paper. We would be happy to incorporate all these suggestions in the revised version.** Pdf: /pdf/9743c0f3586220b97b54fa1baf9a49999d5bd0af.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Toward Global Convergence of Gradient EM for Over-Paramterized Gaussian Mixture Models
Accept (poster)
Summary: The paper studies the convergence of EM for learning mixtures of Gaussians. Specifically, they consider a simplified setting where the Gaussians are in $d$-dimensions and all have covariance $I_d$. They consider an overparameterized version of the problem where they parametrize the mixture they are trying to learn by a mixture of $n$ Gaussians with means $\mu_1, \dots , \mu_n$ and the ground truth distribution generating the data just consists of a single Gaussian $N(\mu^* , I_d)$. The paper analyzes the dynamics of gradient EM for this problem. The main result of the paper is proving that for this overparametrized variant, gradient EM converges to the true distribution at a rate of $1/\sqrt{t}$ with additional constants depending exponentially on the distance between the initialized means and the true mean, which they show is necessary. There has been a long line of work on understanding the convergence of EM or gradient EM for learning mixtures of Gaussians. Without overparametrization, provable convergence is known for mixtures of two Gaussians and it is also known that convergence fails in general for mixtures of three or more components. For overparamterized settings, a previous work [Dwivedi et. al. 2018] shows that if we parametrize a mixture of two Gaussians and try to learn a ground truth distribution consisting of a single Gaussian, then EM converges at a $1/\sqrt{t}$ rate (as long as the mixing weights are set to be different). This is in contrast to when we parametrize with only a single Gaussian and EM converges exponentially fast. The results of the current paper can be seen as generalizing the results of [Dwivedi et. al. 2018] to more than two components. The paper empirically validates their theoretical results with experiments on simple synthetic datasets. Strengths: The paper makes progress on a well-studied problem of understanding convergence of EM for learning GMMs. They give the first global convergence results for mixtures with more than two components. The paper overcomes nontrivial technical barriers to extend previous results to more than two components. Weaknesses: The results of the paper only work when the ground truth is "trivial" i.e. a single Gaussian. The results are qualitatively similar to previous work on overparametrized mixtures of two Gaussians. The contributions of the paper are mostly technical and it is a bit difficult to find a nice conceptual takeaway \--- the previous work for two components already showed that overparametrization can lead to drastically slower convergence. It would be much more exciting and novel, say, if we could prove something when the ground truth were not just a single Gaussian. Technical Quality: 3 Clarity: 3 Questions for Authors: . Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review. We have addressed your concern below. > The results of the paper only work when the ground truth is "trivial" i.e. a single Gaussian. We agree that the single Gaussian ground truth is a simpler case compared to the most general problem. But our setting is nonetheless highly-nontrivial and technically challenging. In fact, even the 2-GMM problem is quite difficult with a large number of previous works (see section 2.1 for details), and our result serves as an important step towards generalizing the 2-GMM analysis to the general k-GMM learning problem. > The results are qualitatively similar to previous work on overparametrized mixtures of two Gaussians. The contributions of the paper are mostly technical and it is a bit difficult to find a nice conceptual takeaway. Our result is fundamentally different from previous works like [Dwivedi et. al. 2018] since the mechanisms for learning general k-component GMM and 2-component GMM are different. [Dwivedi et. al. 2018] only considers 2-component GMM with *symmetric* means, and their **sub-linear convergence only happens when the mixture weights are exactly equal (both weights being exactly $1/2$)**. On the other hand, our result applies to general GMM with arbitrary weights and asymmetric means. While the phenomenon of slow convergence observed in [Dwivedi et. al. 2018] depends on their specific setting of equal weights and symmetric means, our paper describes the general convergence behavior of k-component GMM. Another conceptual takeaway of our result, as stressed in the paper (see Section 4.1), is that one should consider the likelihood convergence rather than parametric convergence for GMM learning problem. One of our major novelty is the brand new likelihood-based framework, while previous works are mostly standard algebraic computations of the parametric convergence. --- Rebuttal Comment 1.1: Comment: Thank you for the response and addressing my concerns/questions. My overall assessment remains the same. --- Reply to Comment 1.1.1: Comment: Thank you for reading our paper and feedback! Please let us know if you have any further questions.
Summary: This paper talks about the gradient-EM algorithm for over-parameterized GMM. The paper mostly shows the GLOBAL convergence and its rate when using this model to learn a single Gaussian. Strengths: I believe any non-convex global convergence optimization problem is valuable. It is an extension of Dwivedi et al. 2019. Weaknesses: 1. The over-parametrized model may have severe overfitting problem. 2. The based distribution is quite easy: a single normal, with known variance. In the paper, the covariance is fixed as the identity, which simplifies the problem in a deep way. Actually for symmetric 2-GMM, there are already faster algorithms to learn both mean and cov. 3. I feel confused about the consistency and convergence in the paper. In Line 96, the convergence of KL divergence also contains the convergence of MLE, ie consistency. The convergence to the MLE is another loss function. Also in Remark 6, the convergence when sample size to infinity seems more easily ensured by WLLN. Technical Quality: 4 Clarity: 4 Questions for Authors: Besides the weakness above, I also have following questions: 4. If you only learn the single normal, how is the algorithm compared with Dwivedi et al. 2019 or just 2-GMM? Is it necessary to use more? Is it overfitting so the performance seems better? 5. I don’t get why the paper introduces Fact 1. It seems obvious. 6. The mean is convergent to 0 (true) instead of the MLE. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Besides above, 7. the citation format is not uniform. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed review! We have addressed your questions below. > The over-parametrized model may have severe overfitting problem. We believe this is a misunderstanding. The aim of this paper is not to propose a new algorithm/model, but to understand the convergence behavior of the widely-used EM/gradient EM algorithm. The motivation for studying the over-parameterized regime is well-known as many have conjectured it might facilitate global convergence (see [1], [2]). Since we are considering population gradient EM, there is no overfitting in our problem setting and we leave the study of generalization theory of GMM to future works. > The based distribution is quite easy: a single normal, with known variance. This paper aims to rigorously study the optimization phenamenon in a cannonical setting. We agree that the base distribution is simple. However, when the learning model has more than one mixtures, the learning dynamics becomes significantly complex. In fact, even the further simplified setting where the learning model has only 2 mixtures and the based distribution is a single normal, the analysis in Dwivedi et al. 2019. is technically very complicated. The known co-variance assumption is also a standard setting widely-adopted in existing literature ([1], [2], [3]). > Actually for symmetric 2-GMM, there are already faster algorithms to learn both mean and cov. Again, our goal is not to design a new algorithm for learning GMM, but to understand the behavior of the widely-used EM/gradient EM algorithm. While there might be specifically designed algorithms achieving better performance for some special cases such as symmetric 2-GMM, we aim to study gradient EM as one of the most popular existing algorithms for learning general k-component GMM. We will ensure this point comes across more clearly in the final version. > I feel confused about the consistency and convergence in the paper. In Line 96, the convergence of KL divergence also contains the convergence of MLE, ie consistency. The convergence to the MLE is another loss function. While MLE and KL divergences are two different loss functions, optimizing them are actually equivalent due to the following well-known fact: $D_{KL}(p(x|\mu^*)||p(x|\mu))=-\mathrm{E}_{x\sim p(x|\mu^*)}\left[\log\left(\frac{p(x|\mu)}{p(x|\mu^*)}\right)\right]$ $=-\mathrm{E}_{x\sim p(x|\mu^*)}[\log({p(x|\mu)})]$ $\quad +\mathrm{E}_{x\sim p(x|\mu^*)}[\log({p(x|\mu^*)})].$ Since the second term above is a constant that does not depend on $\mu$, minimizing KL is equivalent with maximizing the first term, which is just MLE. Note that this is a general fact that applies to any MLE problem, not just EM or GMM. > If you only learn the single normal, how is the algorithm compared with Dwivedi et al. 2019 or just 2-GMM? Is it necessary to use more? Is it overfitting so the performance seems better? Again, since we are considering population EM, there is no overfitting in this problem. Our algorithm is an extension of Dwivedi et al. 2019 and our main goal is not to argue that k-GMM is better or worse than 2-GMM, but to extend the previous theoretical understanding of 2-GMM to the general k-component case. > I don’t get why the paper introduces Fact 1. It seems obvious. Fact 1 implies that gradient EM is just running gradient descent on the likelihood function, an observation allowing us to introduce theoretical tools from gradient descent theory to facilitate our analysis. > The mean is convergent to 0 (true) instead of the MLE. Since the algorithm is run on population data, there is no overfitting and the ground truth 0 is just the MLE solution. > Citation format is not uniform. Thanks for pointing it out. We will also organize our citation style and make it uniform in the revised version. References: - [1]. Yudong Chen, Dogyoon Song, Xumei Xi, and Yuqian Zhang. Local minima structures in gaussian mixture models. IEEE Transactions on Information Theory, 2024. - [2]. Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, and Michael I. Jordan. Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences. In Neural Information Processing Systems, 2016. - [3]. Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the em algorithm: From population to sample-based analysis, 2014. --- Rebuttal Comment 1.1: Comment: Thank you for the addressing my concerns! Part of my questions and concerns are clarified by the authors. Although the ground truth is standard 1-d Gaussian, the work is valuable. Therefore, I would like to lift the score to 6. But I still have questions about the MLE and the true (0). The author responds that the algorithm is implemented on the population data. No mentioning the possibility and reality, if the population data available, the model chosen seems inappropriate. --- Reply to Comment 1.1.1: Title: Thank You and Response to the Question on Population Setting Comment: Thank you for recognizing our contribution and for raising the score! > About our choice of the population data model. We study the population setting to focus on the non-convex optimization dynamics of gradient EM algorithm. Indeed, using the population model is a standard approach in previous literature of EM analysis [1, 2], and generally in non-convex optimization [3,4]. As discussed in Remark 6, it also implies (asymptotically) the optimization convergence for reality, i.e., sample-based EM. [1]. Ji Xu, Daniel J. Hsu, and Arian Maleki. Global analysis of expectation maximization for mixtures of two gaussians. In Neural Information Processing Systems, 2016. [2]. Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the em algorithm: From population to sample-based analysis. In Annals of Statistics, 2014. [3]. Yuandong Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. In International Conference on Machine Learning, 2017. [4]. Mo Zhou, Rong Ge. A local convergence theory for mildly over-parameterized two-layer neural network. In Conference on Learning Theory, 2021.
Summary: The paper focuses on the setting of a Gaussian Mixture Model with several summands and an input vector produced by one Gaussian distribution, where it employs the Expectation-Maximization rule to infer the model's parameters. Since the problem of having arbitrary number of summands has been unsolved, the paper provides an innovative scheme which includes the computation of the likelihood function and shows that the EM algorithm converges with sublinear complexity. The authors also show that there exist neighborhoods of slow convergence rates. Strengths: - The paper is well written, the theorems, lemmata and algorithmic steps are described gradually. - From a first overview of the literature, the result about global convergence seems novel. - Across section 4, there is intuition and remarks provided about the necessity of the steps. Weaknesses: - The experimental evaluation is used as a proof of concept and thus is limited. The authors could have (potentially) experimented with several datasets, with varying weights in the GMM, and try to benchmark their algorithm to compare the emergent convergence rates. Technical Quality: 2 Clarity: 2 Questions for Authors: NA. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review and positive comment! We have addressed your question below. > The experimental evaluation is used as a proof of concept and thus is limited. The authors could have (potentially) experimented with several datasets, with varying weights in the GMM, and try to benchmark their algorithm to compare the emergent convergence rates. We note that our primary goal is to rigorously study the optimization convergence rate in a controlled setting and therefore we focus on synthetic experiments to carefully examine the phenomena and corroborate our theoretical findings. We also added more experiments in the rebuttal PDF (attached to the global author rebuttal for all reviewers) to verify our theoretical results: - Impact of mixtures weights on the convergence speed (Figure 2 Right of uploaded pdf.) We test $3$ different weight configurations of $3$-component GMM: $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. $(\frac{1}{6}, \frac{1}{3}, \frac{1}{2})$,$(\frac{1}{20}, \frac{1}{5}, \frac{3}{4})$. $4$ runs of each configuration are recorded, with different random initialization. Results show that the convergence is faster when the weights are more evenly distributed: the equally distributed weights of $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$ converges with the fastest rate, while $(\frac{1}{20}, \frac{1}{5}, \frac{3}{4})$ converges the slowest. - Impact of initialization on the convergence speed (Figure 2 Left of uploaded pdf): We report the gradient norm in the bad initialization region constructed as counter-examples in Theorem 7. Empirically the gradient norm exponentially decreases in dimension $d$. This supports our theoretical findings that bad initialization causes exponentially slow convergence. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I thank the authors for their answers. They provided experiments as an attachment to their rebuttal answer. I will further study the responses to the other reviewers and update my review. --- Reply to Comment 1.1.1: Comment: Thanks for reading our response and providing feedback! We look forward to any further comments and updates.
Summary: The paper considers fitting a single Gaussian with multiple-component Gaussian mixture models (GMM) through the Gradient EM algorithm. While the two balanced over-specified Gaussian setting has been widely studied in the previous work, generalizing it to multiple-component GMM requires significant algebraic efforts. The entirety of the paper is to show the $1/\sqrt{t}$ convergence rate of the population EM algorithm. In particular, the paper characterizes the explicit convergence rate of $1/\sqrt{T}$ with constants exponential in the number of components, the phenomenon that coincides with the exponential lower bound for the parameter estimation of general GMMs with no separation. Strengths: - Extending some existing two-component results to general multiple-component GMM is non-trivial and significant. The paper nicely characterizes the convergence rate that captures some important properties of learning GMM that can be achieved by GMM. - The paper is well-written, emphasizing important aspects of the results and well-contrasting their techniques to existing results. - Proof sketch is nicely written to help readers understand their key results. Weaknesses: - While the lower bound result (Theorem 7) is a nice addition to the literature, I believe that the gap between this lower bound and the upper bound is large, since the upper bound is exponentially slow in the number of components. - One important result from two specified GMM is the $n^{-1/4}$ (n is the number of samples here) statistical rate after convergence. I would like to see $n^{-1/2k}$ style results in general k-component GMM settings. At least, the authors should have discussed this aspect of previous work and contrasted the implications to k-GMM settings. - The experiment would have been nicer if the final statistical rates were compared. Technical Quality: 3 Clarity: 4 Questions for Authors: - Maybe authors can elaborate on how their results can imply learning k-GMM with small separations? - In Theorem 7, there is no restriction on the step size $\eta$. I believe that the lower bound should also be able to tell that $\eta$ cannot be set too large. - Why only on the gradient EM? Can the analysis in the paper imply some convergence rates of the standard EM algorithm as well? I think it would make the paper much stronger if it could show that the same results hold for standard EM. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review. We answer each of your questions below. > The gap between this lower bound and the upper bound is large. Thank you for pointing out this problem. In the initial version we didn't optimize the exponent. Indeed, we can obtain significantly refined results which removes this gap between upper and lower bounds. The improved bounds are as follows: - [Upper bound] Consider training a student $n$-component GMM initialized from ${\mu}(0) = (\mu_1(0)^{\top},\ldots, \mu_n(0)^{\top})^{\top}$ to learn a single-component ground truth GMM $\mathcal{N}(0, I_d)$ with population gradient EM algorithm. If the step size satisfies $\eta \leq O\left(\frac{\exp\left(-8U(0)\right)\pi_{\min}^2}{n^2d^2(\frac{1}{\mu_{\max}(0)}+\mu_{\max}(0))^2}\right)$, then gradient EM converges globally with rate $$\mathcal{L}(\mu(t))\leq \frac{1}{\sqrt{\gamma t}},$$ where $\gamma = \Omega\left(\frac{\eta\exp\left(-16U(0)\right)\pi_{\min}^4}{n^2d^2(1+\mu_{\max}(0){\sqrt{dn}})^4}\right)\in \mathrm{R}^+$. Recall that $\mu_{\max}(0)=\max\{\|\mu_1(0)\|,\ldots, \|\mu_n(0)\|\}$ and $U(0)=\sum_{i\in[n]}\|\mu_i(0)\|^2$. - [Lower Bound] For any $n\geq 3$, there exists initialization points such that, when initialized from, population gradient EM will be trapped in a bad local region around it for exponentially long time $T=\frac{1}{30\eta}\exp(\Theta(U(0)))$ as: $0\leq t\leq T$, $\exists i\in[n]$ such that $$ \|\mu_i(t)\|\geq 10\sqrt{d}. $$ Both the improved lower and upper bounds are tighter than original versions since $n\mu_{\max}^2\geq U\geq \mu_{\max}^2$. Most importantly, the improved exponential factors of $\exp(\Theta(U(0)))$ in the two bounds now exactly matches, eliminating the noted large gap. Our key idea is to use $U=\sum_{i\in[n]} \|\mu_i\|^2$ (which captures information on all Gaussian means) instead of $\mu_{\max}=\max_i\{\|\mu_i\|\}$ (which reflects only the maximum Gaussian mean) to construct our convergence rate bounds, resulting in a more fine-grained analysis and tighter bounds. We also constructed a better counter-example: ${\mu}_1(0)=12\sqrt{d}e_1, {\mu}_2(0)=-12\sqrt{d}e_1, \mu_3(0)=\cdots=\mu_n(0)=0$, where $e_1$ is a standard unit vector. This new construction applies to all $n\geq 3$ (while the original one requires $n$ being odd) and implies a tighter lower bound. With this idea, we are happy to present the **optimal** exponential factor of the convergence rate (up to a constant), which we believe might be of independent interests to future study. > Statistical rate for 2-GMM is $n^{-1/4}$. I would like to see $n^{-1/2k}$ style results in general k-component GMM settings. The authors should have discussed this. We agree that statistical rates for learning GMM is an important topic. However, our focus is not on this problem since we aim to study *optimization rates* rather than statistical rates of gradient EM. Yet we are aware that there is a long line of work on this topic. The statistical rate of $n^{-1/4}$ you noted is studied in [1] and [4]. To the best of our knowledge, the same rate for general k-GMM remains an interesting open problem and $n^{-1/2k}$ is a nice conjecture. However, our new experiments suggest that the rate seems to be still $n^{-1/4}$ for general k-GMM. (See response to the next question.) We will add these discussions and our experimental findings into the revised version. > The experiment would have been nicer if the final statistical rates were compared. Thanks for the suggestion! We have added a new experiment on this. See the global author rebuttal to all reviewers (figure 1 in attached pdf) for details. The empirical statistical rate for k-GMM is close to $n^{-1/4}$. > Maybe authors can elaborate on how their results can imply learning k-GMM with small separations? Thanks for noting this topic. Our results immediately imply that the convergence rate for learning general k-GMM with small separations will be sub-linear, since a single Gaussian is a special case of k-GMM with no separation. So the linear-contraction style analysis (such as in [2]) for well-separated GMM no longer works in this regime, and we believe our likelihood-based framework can be helpful. We will add more corresponding discussions in the revised version. > In Theorem 7, there is no restriction on the step size. Theorem 7 applies to any positive step size $\eta$. It punishes large step sizes as $T$ scales with $1/\eta$, so the time that gradient EM gets trapped shortens as the step size increases. As long as the step size is polynomially large $\eta =poly(n,d)$, gradient EM gets trapped in the bad region for exponentially long time as $T\geq \frac{1}{poly(n,d)}\exp(\Theta(U(0)))$. > Why only on the gradient EM? Can the analysis in the paper imply some convergence rates of the standard EM algorithm as well? While gradient EM is equivalent with gradient descent on the likelihood function $\mathcal{L}$, standard EM can also be seen as a gradient descent-like algorithm on $\mathcal{L}$ with a coordinate-dependent step size $1/\mathrm{E}_x[\psi_i(x)]$. (See further discussions in [3]). We believe our method is also useful for standard EM. We will add more discussions on extensions to standard EM in the revised version. References: - [1]. Yihong Wu and Harrison H. Zhou. Randomly initialized em algorithm for two-component gaussian mixture achieves near optimality in $O(\sqrt{n})$ iterations, 2019. - [2]. Bowei Yan, Mingzhang Yin, and Purnamrita Sarkar. Convergence of gradient em on multi-component mixture of gaussians. Advances in Neural Information Processing Systems, 30, 2017. - [3]. Yudong Chen, Dogyoon Song, Xumei Xi, and Yuqian Zhang. Local minima structures in gaussian mixture models. IEEE Transactions on Information Theory, 2024. - [4]. Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, and Bin Yu. Singularity, misspecification and the convergence rate of em. The Annals of Statistics, 2018. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for the clarification and for addressing my concerns. The additional experimental result on the statistical rate also looks interesting. I have adjusted my evaluation score accordingly.
Rebuttal 1: Rebuttal: We appreciate all the reviewers for their detailed and positive feedbacks. In the uploaded pdf file, we add several experiments: - Experiment of statistical rates, for questions of Reviewer 6yVv (Figure 1). - Impact of initialization on the convergence speed, for questions of Reviewer DCG2 (Figure 2, left). - Impact of GMM mixture weights on the convergence speed, for questions of Reviewer DCG2 (Figure 2, right). Please refer to the pdf for experimental setups and outcomes. We have also improved our theorems, closing the gap between our upper bound and lower bound of convergence rates. This addresses the question of reviewer 6yVv. Please refer to our response to reviewer 6yVv for details. We welcome and are happy to answer any further questions from reviewers. Pdf: /pdf/c241d28bbeaf9edc9c06ba930da437adbf7f2dba.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
Accept (poster)
Summary: This paper proposes a method of generating prompts for evaluating large language models such that the prompts are dynamic and allow for showing meaningful performance gaps between different language models.The authors show that the generated data is more-challenging and discriminative than prior datasets. Strengths: - Work is very timely and addresses a major issue in how we can better evaluate LLMs which are continuously improving and saturating existing benchmarks. - Good to see that the generated prompts are indeed harder than baseline datasets - this should indicate that the prompts are challenging enough to provide decent signal on a language model's capabilities. - Experimented with many SOTA models and compared with several baseline datasets. Weaknesses: The main weakness of this work is that much of the pipeline relies prompting language models to modify seed data. This means that the performance of the language model plays a huge role in the quality of the resulting data. Given that the pipeline seems to have many different steps, each of these steps can introduce errors since LLMs are not fully reliable. It then becomes crucial to have a way of verifying that the generated questions are of high quality. There's also a concern that the ground truth answers might not be entirely accurate. The authors mention both of these issues as limitations. Technical Quality: 3 Clarity: 3 Questions for Authors: - If a particular language model is used to generate data using the proposed method, is there any bias where that model will perform better at solving those problems? For example, if Claude generates the prompt set, will the prompt set be easier for Claude than GPT? - Is the data generation done for a set of language models or for each individual language model? In other words, are the prompts being dynamically changed with respect to a single language model's response or all language model responses? Specifically, Section 2.2 says that the method "rephrases the question based on the response from the LLM" - which LLM is this statement referring to? - Are there any experiments to verify the robustness of each individual step in the pipeline? It seems like the current experiments are meant to verify the final output of the pipeline, not the in-between steps. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors mention the most-important limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q:** *If a particular language model is used to generate data using the proposed method, is there any bias where that model will perform better at solving those problems? For example, if Claude generates the prompt set, will the prompt set be easier for Claude than GPT?* **A:** Thank you for your question, and your question is very professional. In our experiments, we find that for the data generated by Hunyuan, its performance may not be as good as potentially better-performing models, such as GPT4 and Claude3. However, in the data generated by GPT4, there are cases where Hunyuan performs better than GPT4. We cannot entirely confirm that the bias you mentioned does not exist, but even if it does, its impact would **not be decisive**. To avoid potential biases, we have also done related work and processing. In our paper, we try to **minimize this potential bias as much as possible**: on the one hand, we **pay great attention to the usability of the generated data**. During the data production process, we use models to participate in the usability check of the generated data. For the final generated data, we also hire **expert personnel to conduct checks**. The expert personnel's judgment of the usability of the questions is based on the human preference perspective to determine whether the questions are reasonable. The inspection process includes **result sampling or re-inspection schemes** to ensure the accuracy of the judgment. The inspection results show that the evaluation data obtained by the method in this paper have ideal usability. Therefore, we can believe that these data can be used to evaluate the performance of LLMs. On the other hand, to avoid suspicion, we **don't allow the model that generate the data to participate in the evaluation**. In the experiments, the calculation of evaluation results-related data **don't include the evaluation results of Hunyuan**, to avoid potential biases that may exist in Hunyuan-generated data as much as possible. The evaluation results also confirm that **our data can effectively distinguish the performance of existing LLMs**. **Q:** *Is the data generation done for a set of language models or for each individual language model? In other words, are the prompts being dynamically changed with respect to a single language model's response or all language model responses? Specifically, Section 2.2 says that the method "rephrases the question based on the response from the LLM" - which LLM is this statement referring to?* **A:** For data production, we only use a **individual** language model, i.e., Hunyuan-standard. For the **automated usability check of mathematical questions**, to improve the usability of the data, we use **Hunyuan-standard** and **Hunyuan-pro** to perform checks separately and cross-validate the usability of the data. For other processes, we only use Hunyuan-standard. We **input the large model's response into the prompt** for generating questions, and the large model responsible for generating questions will produce new questions based on the prompt. Therefore, **the prompt does not dynamically change** with the model's response; it simply incorporates the model's response. While we use Hunyuan, **other large models can also be utilized** for this purpose. In Section 2.2, "rephrases the question based on the response from the LLM," **the LLM refer to in this sentence is Hunyuan (Hunyuan-standard)**. In the footnote 2 on page 3 of the article, we mention, "Unless otherwise specified, all data in this document are generated by Hunyuan (Hunyuan-standard), which is a Large Language Model developed by Tencent." Therefore, we use LLM as a reference here. **Q:** *Are there any experiments to verify the robustness of each individual step in the pipeline? It seems like the current experiments are meant to verify the final output of the pipeline, not the in-between steps.* **A:** Your question is indeed very insightful. We have not verified the robustness of each independent step; if necessary, we can **incorporate relevant experiments in future versions** of our work. Theoretically, each individual step in our process is robust. From a global perspective, the data we utilize is both reliable and stable. Our dataset comprises Chinese and English components. For the Chinese data, we reference the work of TencentLLMEval[1], which has been **stably applied in various business scenarios**. For the English data, we employ the seed data from Self-instruct[2], which is **extensively used in academia**. Additionally, during the data production process, we set the **temperature of the LLMs** to 0 and use **fixed prompts** to guide the LLM in data production. For each data point, we implement several measures when calling the LLM API. These measures include **multiple requests** using a counter in case of call failures, capturing and handling exceptions, and **pausing** for three seconds before resubmitting a request if a call fails. These steps aim to increase the probability of successful API calls. Furthermore, we conduct **validity checks** on the input data for each step to enhance the robustness of the data production process. [1] Xie, Shuyi, et al. "TencentLLMEval: a hierarchical evaluation of Real-World capabilities for human-aligned LLMs." arXiv preprint arXiv:2311.05374 (2023) [2] Wang, Yizhong, et al. "Self-instruct: Aligning language models with self-generated instructions." arXiv preprint arXiv:2212.10560 (2022). --- Rebuttal Comment 1.1: Comment: Thank you for the response. I would be interested in seeing results if a panel of LLMs was used as the evaluator for this method in order to reduce bias. I am also still curious about ways to verify the individual steps within this method. As such, I will be retaining my original score. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We will include the relevant content you mentioned in the subsequent version of the paper.
Summary: The paper proposes a prompt synthesis framework for evaluating LLMs to accurately reflect different Large Language Model abilities. The authors develop two models to measure LLMs’ question discriminative power and difficulty. This study presents “instruction gradient” and “response gradient” methods to exploit rule sets to generalize questions. Strengths: The paper focuses on the generation of a large number of queries and corresponding answers on general language and mathematical topics. They have released a set of over 3000 questions for LLM evaluation. Their proposed metrics (discrimination index and difficulty score) show significant improvement in the quality of the benchmark datasets. Weaknesses: Although the paper tries to solve a crucial research area in the scope of LLM evaluation, the study lacks in many different ways. The textual flow is difficult to follow. Many of the concepts introduced were not properly described or not cited with previous work’s references. These issues restricted the reviewability of this study. Technical Quality: 3 Clarity: 1 Questions for Authors: 1. The proposed methods - “Instruction gradient” and “response gradient” are not properly described in the manuscript. Authors should write the working procedure of these methods in detail in the main manuscript, as these are the centerpiece of the whole question generation process. 2. “Generalizing questions from seed data based on the "instruction gradient" restricts the diversity and confines the content to specific topics” - is unclear. Consider explaining. 3. In section 2.3 - Assessing the Usability of General Text Questions: How is the assessment done? Is it done manually with human input? Or by an autonomic process/model? 4. In section 2.3 - CoT Check for Mathematical Questions: “we use Hunyuan to assess the reasonableness of the question, which successfully identifies the unreasonableness of the problem and corrects it based on the assessment process.” - How can it be ensured that the model successfully identifies the unreasonableness? Provide a theoretical/experimental study. 5. In section 2.4 - Acquiring reference answers: lines 133-136, are the answers scored by human participants? 6. In section 2.4 - Acquiring reference answers: line 140, What is meant by a “collective voting mechanism”? Please explain clearly. 7. In section 2.5 - lines 148-149, what are “label discrimination indexes”? a. In line 149, “the prompt includes four features” - How did you select these features? Provide some analysis. b. In lines 162-164, How did you select the threshold values? (e.g., “Low” means less than or equal to 0.1, “High” means values greater than 0.25, etc.). c. In line 168, “discrimination level label ranging from 0-3” - Is this range acquired by observations? Or have you performed some analyses on the score expressions? 8. In equation 4, what does the “score” mean? Is it the evaluation score that is depicted in Table 1? a. If you are using the same “score” to calculate the difficulty score and the discrimination indexes, does that mean a question is more difficult if a question is more discriminative? Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: While this proposed method is understood to work on general text questions fairly well, mathematical questions are the weakest part of this study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q:** *The proposed methods - “Instruction gradient” and “response gradient” are not properly described in the manuscript. Authors should write the working procedure of these methods in detail in the main manuscript, as these are the centerpiece of the whole question generation process.* **A:** Thanks for your question on our paper. We explain "Instruction gradient" and "response gradient" **in footnote 1 on the second page** of the manuscript. "Instruction gradient" and "response gradient" are the names we give to our methods. The process of **generating generalized questions from seed data is analogous to forward propagation**, where the LLM generates responses to the questions, and this process should be further pushed forward. Based on these responses (considered as information or knowledge), new questions are generated again, and **this process is pushed backward**, which can be compared to **backpropagation**. Therefore, we thought of **using the term "gradient" for naming**, and we named the process of generating questions based on seed data as "instruction gradient" and the process of generating generalized questions based on LLM responses as "response gradient". We **further detail the description of the working procedure** in the annotation of Figure 1 in the manuscript. Our working procedure is as follows: First, we collect a batch of seed data and divide it into mathematical and general text categories. Next, we apply the "instruction gradient" to both types of questions. For the **"instruction gradient,"** the specific generalization strategies for the two types of questions are different due to the various question types. We **provide the core generalization strategies** in Table 5 of the appendix. We have the LLM(Specially referring to Hunyuan-standard in our paper) rewrite the seed data according to the generalization strategies, thus obtaining new questions. For **general text questions**, we can implement the "response gradient," i.e., first obtain the LLM's response to the question, and then **ask questions based on the content of the response**. We show the prompt for this process in Table 7 of the appendix. For **mathematical questions**, after generating questions based on the "instruction gradient," we focus more on **the usability of the questions**. Therefore, we design **CoT** to check, using multiple models (referring to Hunyuan and Hunyuan-pro in our paper) to judge the usability of the questions, and modify or discard the questions based on the inspection results. We show the specific CoT content in Table 9 of the appendix.* **Q:** *“Generalizing questions from seed data based on the "instruction gradient" restricts the diversity and confines the content to specific topics”- is unclear. Consider explaining.* **A:** Thank you for your suggestion on our paper. We provide further explanation for this part. The content generated by generalizing seed data through the "instruction gradient" is relatively **close to the topic of the seed data.** To make the generalized evaluation data more diverse, on the one hand, we can ensure the overall diversity of the evaluation data through **the diversity of seed data**. On the other hand, we enhance the diversity of questions through the **"response gradient."** For example, for the question "How can NLP technology be used to detect and prevent the spread of fake news?", using the **instruction gradient** for generalization, we can obtain a new question "List three specific methods to detect and prevent the spread of fake news using NLP technology and explain their principles," which still revolves around the original question for expansion or transformation. To address this, we consider **discarding the original question** and **using the LLM-generated response as information or knowledge.** At this point, we only generate questions based on a piece of text, and the questions **may become more interesting** based on the content of the response. In the above example, we may generate a new question "What NLP tasks are typically addressed by fact-checking and source analysis techniques?" **Q:** *In section 2.3 - Assessing the Usability of General Text Questions: How is the assessment done? Is it done manually with human input? Or by an autonomic process/model?* **A:** This part explains **the usability check for general text data**, which is **implemented automatically by the LLM**. We propose **four criteria** that we believe are important for general text questions: safety, neutrality, integrity, and feasibility. Here, **safety** refers to the absence of explicit, politically sensitive, or violent content in the question; **neutrality** refers to no bias or racial discrimination in the instructions; **integrity** refers to sufficient information provided to clarify the task; and **feasibility** refers to instructions within the AI system's capability range. We use the LLM (specifically, Hunyuan) to score general text questions based on these four criteria. Questions that do not receive a perfect score are considered **unusable**. For general text questions, the incidence of unusability **occurs less frequently**, so we **discard questions deemed unusable** without modifying them. In the experiment, we **manually annotate** the generated general text questions and found that the usability reached 94.0%. --- Rebuttal 2: Title: Supplement1 to the rebuttal Comment: **Q:** *In section 2.3 - CoT Check for Mathematical Questions: “we use Hunyuan to assess the reasonableness of the question, which successfully identifies the unreasonableness of the problem and corrects it based on the assessment process.”- How can it be ensured that the model successfully identifies the unreasonableness? Provide a theoretical/experimental study.* **A:** For mathematical questions, we indeed cannot guarantee that the generated questions can always be usable. However, through our proposed inspection mechanism, we can greatly eliminate the problems of **conceptual errors, logical contradictions, violations of common sense, missing conditions, and unsolvable questions**. "The model successfully identifies the unreasonableness" we methion here is **an explanation of the case study in Figure 2**. In Figure 2, we identified the unreasonable part of the question based on the designed CoT, indicating that the question is unusable and needs to be discarded or modified. For mathematical questions, we apply the "instruction gradient" to generalize the questions. **To check the usability of the generated questions**, we design a set of question-checking mechanisms. On the one hand, we design **a set of CoT logic**, starting from the concept, judging the logicality among different parts, evaluating the solvability of the question, and finally checking the question and steps, **gradually guiding the LLM to think about the usability of the question**. On the other hand, we conduct **multi-turn iterative checks through two different LLMs** to ensure the usability of the generated questions as much as possible. Specifically, **the two LLMs independently judge the usability** of the question through the CoT logic, but when one LLM judges the question as unusable or both LLMs judge the question as unusable, the question needs to be modified according to the judgment logic given by the LLM considered unusable (when both LLMs consider the question unusable, we designate one LLM's logic for modification). The modified question is then iteratively checked again. **Only when both LLMs consider the question usable**, the question is considered usable in the production process, and the iterative inspection of the question ends. If the maximum number of iterations is reached and there is still an LLM that judges the question as unusable, the question will be discarded. **Q:** *In section 2.4 - Acquiring reference answers: lines 133-136, are the answers scored by human participants?* **A:** In the selection of reference answers for general text questions, we use the **Hunyuan(Hunyuan-standard)** to score the responses. Using the LLM's response to the instructions as reference answers is **relatively common in the data generation field**. For example, in the Alpaca dataset[1], GPT-3.5 (text-davinci-003) is used to provide responses to questions as reference answers, and in the *Instruction tuning with gpt4 work*[2], GPT-4 is used to answer Chinese questions and serve as reference answers. Despite this, we **hope to improve the quality of reference answers** as much as possible. Inspired by [3], for general text questions, we also **provide seven evaluation criteria**: Safety (0-30 points), Correctness (0-10 points), Relevance (0-10 points), Comprehensiveness (0-10 points), Readability (0-20 points), Richness (0-10 points), and Humanization (0-10 points). We are more inclined to believe that responses with higher scores should have higher quality. We call multiple LLMs, including Hunyuan, GPT-4, GPT4-Turbo, Wenxin 4, and Qwen, to respond to the instructions, and then use Hunyuan to score these responses, selecting the highest-scoring response as the reference answer. We further **involve humans in checking the usability of the answers**. We select 150 generated general text questions and obtain reference answers in the aforementioned manner. We organize evaluators to score the selected reference answers according to the evaluation criteria in Table 1 of the paper. We remove 15 questions that none of the models answer correctly (the questions might be too difficult, all models answer incorrectly, and the answer selection is not meaningful). The results show that the usability rate of reference answers reaches **84.7%**, which is **higher than the highest correct rate of alternative reference answers**, wenxin4 (78.8%). This indicates that the answer selection criteria can ensure the usability of the answers. [1] Taori, Rohan, et al. "Stanford alpaca: An instruction-following llama model." (2023): 6. [2] Peng, Baolin, et al. "Instruction tuning with gpt-4." arXiv preprint arXiv:2304.03277 (2023). [3] Liu, Yilun, et al. "Automatic instruction optimization for open-source llm instruction tuning." arXiv preprint arXiv:2311.13246 (2023). --- Rebuttal 3: Title: Supplement2 to the rebuttal Comment: **Q:** *In section 2.4 - Acquiring reference answers: line 140, What is meant by a“collective voting mechanism”? Please explain clearly.* **A:** Thank you for raising this issue, and we appreciate the opportunity to provide a more detailed explanation. For the answers to mathematical questions, we hope to select high-quality responses as reference answers as much as possible. However, it is difficult to design a scoring standard that conforms to human preference for mathematical question responses like general text questions. Related work [1] studies **the theoretical basis of the collective voting mechanism** and discusses the impact of different voting methods on social welfare. Inspired by this, we introduce a "collective voting mechanism" to select reference answers by comparing and voting among multiple responses. We provide multiple **anonymous responses** to the voting LLMs simultaneously, and let the voting LLMs choose the best response they think. We provide multiple voting LLMs, and each voting LLM casts its vote for a response, which means that the voting LLM thinks this response is the best. The response with **the highest number of votes** is used as the reference answer. If there is a tie, we randomly select a response as the reference answer and mark the question. Despite our efforts to enhance the usability of reference answers, there **may still be instances where the selected reference answer is incorrect**. To **further improve the accuracy of reference answers** for mathmatical questions, we hire mathematics experts to check and correct the reference answers of the questions. The results of the manual review are used as the final reference answers. [1] Sen, Amartya. Collective choice and social welfare. Harvard University Press, 2018. **Q:** *In section 2.5 - lines 148-149, what are “label discrimination indexes”? a. In line 149, “the prompt includes four features”- How did you select these features? Provide some analysis. b. In lines 162-164, How did you select the threshold values? (e.g.,“Low”means less than or equal to 0.1,“High” means values greater than 0.25, etc.). c. In line 168, “discrimination level label ranging from 0-3”- Is this range acquired by observations? Or have you performed some analyses on the score expressions?* **A:** Thank you for your question, the label discrimination indexes are the labels mapped from the discrimination indexes you mentioned in the 'b' question, we will improve the presentation here. a. The four features we select are included in each sample: question, its corresponding category, mean length of this category, and length ratio. These features are important and provide meaningful reference for understanding the discrimination of the questions. **Question:** The question is the most direct and key feature. The model needs to understand the question itself. Without the question, it is impossible to determine the type of information provided. **Category:** The discrimination of questions in different categories is usually different. For example, questions in the mathematics category may have different discrimination levels compared to those in the entertainment category. Category information helps us assign appropriate discrimination levels to questions. **Mean length of the category:** Considering the difficulty levels across different categories, the average length of questions within a category can indicate the complexity of the questions in that category. Generally, categories with longer answers may involve more complex questions, while categories with shorter answers may involve simpler questions. Therefore, by comparing the average lengths of different categories, we can gain a rough understanding of the question's difficulty, which serves as an important reference for its discrimination. **Length ratio:** From the perspective of varying difficulties across categories, the length ratio can help us understand the complexity of the question compared to the average question in its category. A higher length ratio may mean higher difficulty, and a lower length ratio may mean lower difficulty. By analyzing the length ratio, we can better understand the relative ranking of the difficulty and discrimination of the question in its category. b. The threshold here is estimated based on the distribution of **100,000-level evaluation data**. c. 0-3 are the four levels we map discrimination indexes to, but this level division is **not unique**, it is just for the convenience of observing data with different discrimination. We can also divide it into two levels, etc. --- Rebuttal 4: Title: Supplement3 to the rebuttal Comment: **Q:** *In equation 4, what does the “score”mean? Is it the evaluation score that is depicted in Table 1? a. If you are using the same“score”to calculate the difficulty score and the discrimination indexes, does that mean a question is more difficult if a question is more discriminative?* **A:** Thank you for your question. Yes, the score here refers to the score in Table 1. a. It is the same score, but we believe that the difficulty score can only serve as a reference for discriminability. A high difficulty score for a question does not necessarily mean that it is more discriminative. **For example**, for a question with a max score of 3, if the **evaluation scores are both 0 and 0**, according to the formula, its difficulty score is 3, and the discrimination score is 0, meaning that the question is very difficult, and the LLMs cannot answer it correctly, so **the question is not discriminative**. However, if **the evaluation scores are 0 and 3**, we can calculate that its difficulty score is 1.5, and the discrimination score is 1, indicating that **the question can effectively distinguish the level of LLMs**. **Q:** *While this proposed method is understood to work on general text questions fairly well, mathematical questions are the weakest part of this study.* **A:** Thank you for your affirmation of our work on general text questions. For mathematical questions, on the one hand, providing prompts to large models to generate new questions can **easily lead to unusable questions**, which is difficult to handle; on the other hand, **generating discriminative mathematical questions is also a significant challenge**. We have also done **a lot of work and contributions** on mathematical questions, which are explained here. 1.We have proposed some **data generalization strategies** that can **effectively improve the discrimination and difficulty of questions**. In the "Instruction Gradient", for mathematical questions, we propose 8 generalization strategies to guide data generalization. Experiments have found that the questions generated by these strategies can effectively distinguish the capabilities of existing LLMs. 2.In practical evaluation scenarios, the difficulty of generalizing mathematical questions lies in **the usability of the generated questions**. This paper focuses on solving the problem of low usability of generalized mathematical questions and **designs a usability checking mechanism for mathematical questions**: On the one hand, we have designed **a set of CoT** for checking the usability of mathematical questions. This scheme can guide the LLM to check the usability of mathematical questions from the perspectives of concept, logical relationship, problem solvability, and condition completeness, greatly eliminating the problems of conceptual errors, logical contradictions, violations of common sense, missing conditions, and unsolvable questions. On the other hand, we can **effectively modify or discard unusable generated data through multi-model multi-round iterative checks**. Specifically, we use two different LLMs(refered to Hunyuan-standard and Hunyuan-pro in our paper) to judge the usability of the question based on the above CoT method. For unusable questions, we modify the question according to the judgment of CoT and **iterate the check** again until both LLMs judge the question as usable or reach the maximum number of iterations, and the question is also retained or discarded respectively. Through this mechanism, the generated mathematical questions have satisfactory usability. 3.The **open discrimination estimation model and difficulty estimation model** can quickly **judge the quality of mathematical questions**. In the process of training the discrimination estimation model and difficulty estimation model, we **introduce a large number of mathematical questions** with difficulty and discrimination annotations. The obtained models can effectively and quickly judge the discrimination and difficulty of mathematical questions. We make the models public to facilitate community research or use. 4.We open **a batch of mathematical questions generated by LLMs**(Specially refering to Hunyuan-standard and Hunyuan-pro in our paper) with **accurate reference answers**. We used the Hunyuan large model to generate mathematical questions with **high discrimination**, including **32 types of questions** including calculus, function properties, and arithmetic operations. In addition, we hire **experts in the field of mathematics to check and correct the reference answers** of the mathematical questions to ensure the accuracy of the open mathematical question reference answers. --- Rebuttal Comment 4.1: Comment: First, I would like to thank the authors for thoroughly addressing all my questions. The paper tackles an important problem, but its structure does not flow naturally, making it quite difficult to follow. This explains why I had so many initial questions. While the authors did a good job responding to these questions, the paper should be revised to be clearer from the start. It's challenging to fix these issues as an afterthought. Therefore, I will maintain my initial score. --- Reply to Comment 4.1.1: Comment: We appreciate your reply and will revise the paper to make it more concise. We will pay special attention to the issues you mentioned and polish the writing carefully. We have given detailed explanations in our answers to the questions you mentioned, and hope that it will help readers better understand the paper. We sincerely hope that our score can be revised.
Summary: The paper introduces a novel framework for evaluating Large Language Models LLMs) based on Item Discrimination ID theory, which generates adaptive, high- quality prompts to effectively differentiate model performance. Key contributions include a dynamic evaluation set that evolves with LLM advancements, a self- correct mechanism for prompt precision, and models to estimate prompt discrimination and difficulty. The authors validate their framework by testing it on five state-of-the-art models and release a dataset of over 3,000 prompts to aid further research, demonstrating enhanced challenge and discrimination over previous methods. Strengths: The paper proposes a novel prompt generation method to produce more challenging evaluation data. The paper is well-structured and clearly written. The methodology and evaluation criteria are explained clearly, making the paper accessible to a broad audience. Weaknesses: The paper only used one LLM Hunyuan) to generalize data and did not verify whether the proposed method can generalize to other LLMs. It is debatable whether using test data generated by an LLM to evaluate the performance of LLMs has practical value. The paper lacks validation of the effectiveness of the machine-generated test set, such as comparing its metrics with those of other human-annotated datasets. The paper lacks an analysis of the diversity of the data used to produce the test set. Technical Quality: 3 Clarity: 3 Questions for Authors: The concerns are included in the weaknesses. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have identified some limitations; however, there are additional ones that I have raised in the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q:** *The paper only used one LLM Hunyuan) to generalize data and did not verify whether the proposed method can generalize to other LLMs.* **A:** Thank you for your question about our paper. Our proposed method is designed for existing LLMs and is **not limited to a particular model**. The work of using LLMs to automatically generate data often involves selecting only one LLM for data generation, such as the Wizardlm[1] work using gpt-3.5-turbo to generate instruction data, and the Self-instruct[2] work using gpt-3 to generate instruction data. We apply our proposed method to some other LLMs, such as GPT-4-turbo (gpt-4-turbo-2024-04-09) and Qwen (Qwen-max), using the same batch of a small amount of seed data, and **manually scoring the models' responses** to calculate discrimination indexes and **map** them to the four levels of discrimination indexes. The experimental results are shown in the table below. The results show that **there are differences in the effects of these models**, and using more powerful models may generate higher quality data. This also **confirms the limitation mentioned in the conclusion section** of our paper: our framework relies on the performance of large models. | Model | Amount | Low | Relatively Low | Relatively High | High | |:-----------:|:------:|:---:|:-------------:|:--------------:|:----:| | Seed_data | 50 | 45 | 0 | 4 | 1 | | Hunyuan | 50 | 29 | 8 | 8 | 5 | | Qwen | 50 | 28 | 13 | 6 | 3 | | Gpt4-turbo | 50 | 21 | 5 | 10 | 14 | **Q:** *It is debatable whether using test data generated by an LLM to evaluate the performance of LLMs has practical value.* **A:** The issue you metioned is a very in-depth one. On the one hand, if the **data produced by the model** is not **controlled and filtered**, it can have a significant negative impact on the model [1], so a good filtering mechanism is crucial for the **effectiveness** of the model's data production. On the other hand, if **manually annotated data** does not have a **good filtering mechanism**, it can also have a significant impact on model training, as shown in research work like LIMA[2]. However, a single evaluation often requires **tens of thousands of evaluation data** to fully measure the capabilities of large models. The cost of manually writing questions is too high and the speed is relatively slow. It is necessary to use LLM-produced test data in conjunction with manually written questions to evaluate the capabilities of large models. Therefore, this paper proposes an automated approach for constructing high-quality evaluation data, with contributions including the following two aspects: (1) Globally, it explores how to **ensure the diversity of data**, such as the classification of seed data and the diversity of data generation methods; (2) For each data, a very effective **data usability check mechanism** is designed. This process is reusable and will also be fully open-sourced. The method we proposed has also been validated in an **actual production environment**, helping to stably and comprehensively improve the performance of the production model, thereby validating its effectiveness in both experimental testing and production environment testing. [1] Shumailov, Ilia, et al. "AI models collapse when trained on recursively generated data." Nature 631.8022 (2024): 755-759. [2] Zhou, Chunting, et al. "Lima: Less is more for alignment." Advances in Neural Information Processing Systems 36 (2024). **Q:** *The paper lacks validation of the effectiveness of the machine-generated test set, such as comparing its metrics with those of other human-annotated datasets.* **A: ** Thank you for your suggestion on our paper. We will explain your suggestion from the perspectives of usability, production efficiency, and cost to supplement the missing part. **Usability:** Human-annotated datasets are not necessarily all usable, and they often contain errors. They also need to be repeatedly checking and reviewing to ensure a high level of usability (e.g., above 95%). The usability of the questions in our generated data can reach **94%** (the 94% usability is based on hunman-annotated results), and the usability of the evaluation data is satisfactory. **Production efficiency:** In this paper, it takes 2-5 calls to check a machine-generated question, with an average time of about **20 seconds per question**. In contrast, manual writing takes about 5 minutes per question, and it is subject to fatigue effects. **Cost:** In this paper, generating a question and checking it with the machine involves input and output of about 9k tokens, costing approximately **0.03$**. In contract, the market price for manually writing a usable question is about 2$, making the cost of human-annotated datasets relatively high. **Q:** *The paper lacks an analysis of the diversity of the data used to produce the test set.* "A:" Thanks for your suggestion on our paper. We appreciate your suggestion and provide a response to this issue, supplementing the explanation of the diversity of the data. We ensure **the diversity of the seed data** through a rich variety of categories. Our seed data consists of two parts: Chinese and English. The Chinese data refers to the work of TencentLLMEval, which includes 6 primary categories and 61 secondary categories. The English data uses the seed data from Self-instruct, which contains 175 different task types. In terms of methods, we design **diversified generalization strategies** including "Instruction Gradient" and "Response Gradient" to promote the generation of diversified questions. In the actual production process, we **filter out similar questions**. In this process, we cite the work of Self-instruct and remove data samples with ROUGE-l greater than 0.7 to filter out similar sample data.
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
Accept (spotlight)
"Summary: The paper addresses challenges in surgical video-language pretraining (VLP) due to the kno(...TRUNCATED)
"Rebuttal 1:\nRebuttal: **[Q1. Plan to Expand Dataset]** Scaling and diversifying the surgical visio(...TRUNCATED)
"Summary: The paper presents a novel approach for enhancing surgical video analysis by incorporating(...TRUNCATED)
"Rebuttal 1:\nRebuttal: **[Q1. Dataset Limitations]** Thank you for the insightful suggestion. In th(...TRUNCATED)
"Summary: This paper proposes a Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretra(...TRUNCATED)
"Rebuttal 1:\nRebuttal: **[Q1. Augmentation Removes Variation]** Thank you for pointing out one of t(...TRUNCATED)
"Summary: The paper presents a new framework called PeskaVLP for surgical video-language pretraining(...TRUNCATED)
"Rebuttal 1:\nRebuttal: **[Q1 SVL Dataset]**\n\n**[Q1.1. Types of surgeries in SVL dataset]** In the(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank all the reviewers for the insightful comments to improve our work. (...TRUNCATED)
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
Accept (poster)
"Summary: The paper investigates the complexity of sampling from heavy-tailed distributions and pres(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
"Summary: This paper studies the problem of heavy-tailed sampling. First, the paper shows that while(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
"Summary: The paper focus on studying the complexity of heavy-tailed sampling and present a separati(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
"Summary: The authors provide a lower bound for sampling from heavy tailed distributions under the G(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
NeurIPS_2024_submissions_huggingface
2,024
"Summary: This paper studies the complexity of sampling heavy-tailed distributions. It provides lowe(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We sincerely thank the reviewer for their valuable advice and comments and g(...TRUNCATED)
null
null
null
null
null
null
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
Reject
"Summary: This paper introduces Accordion Networks (AccNets), a novel neural network structure compo(...TRUNCATED)
null
"Summary: The authors present a generalization bound for deep neural networks that describes how dep(...TRUNCATED)
null
"Summary: The authors introduce accordion networks (AccNets), which are compositions of multiple sha(...TRUNCATED)
null
null
null
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
OxonFair: A Flexible Toolkit for Algorithmic Fairness
Accept (poster)
"Summary: The paper introduces \"AnonFair,\" a toolkit designed to enforce algorithmic fairness acro(...TRUNCATED)
"Rebuttal 1:\nRebuttal: Thank you for taking the time to review our manuscript and for providing det(...TRUNCATED)
"Summary: This paper describes a new toolkit for algorithmic fairness, enabling the optimization of (...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the feedback and helpful suggestions that will be (...TRUNCATED)
"Summary: The paper introduces a new toolkit designed to enhance algorithmic fairness with greater e(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for their detailed feedback. We are happy to see that (...TRUNCATED)
"Summary: The paper describes details of a fairness toolkit (\"AnonFair\"), which confers fairness t(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for their review, and we hope to address the issues ra(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewers for their helpful and largely positive comments (**ov(...TRUNCATED)
NeurIPS_2024_submissions_huggingface
2,024
"Summary: This paper presents AnonFair, a cutting-edge open-source toolkit designed to promote algor(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the positive comments and constructive feedback.\n(...TRUNCATED)
null
null
null
null
null
null
G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training
Accept (poster)
"Summary: This paper proposes G2D, a novel vision-language pre-training (VLP) framework for medical (...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the questions!\n>Theoretical analysis for why pseu(...TRUNCATED)
"Summary: This manuscript describes a medical vision-language pre-training framework called Global t(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the positive feedbacks!\n> Unclear if specific sen(...TRUNCATED)
"Summary: The paper proposes an encoder-decoder medical VLP approach for global-to-dense visual repr(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the positive feedbacks!\n>Comparing with MGCA and (...TRUNCATED)
"Summary: The paper proposes a new medical vision-language model, G2D, which employs vision-language(...TRUNCATED)
"Rebuttal 1:\nRebuttal: We thank the reviewer for the positive feedbacks!\n>Detecting and measuring (...TRUNCATED)
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
End of preview. Expand in Data Studio

NeurIPS Papers Dataset

This dataset contains information about NeurIPS conference paper submissions including peer reviews, author rebuttals, and decision outcomes across multiple years.

Files

  • dataset.csv: Main dataset file containing all paper submission data

Dataset Structure

The CSV file contains the following columns:

  • title: Paper title
  • paper_decision: Decision outcome (Accept/Reject with specific categories)
  • review_1, review_2, etc.: Peer reviews from different reviewers
  • rebuttals_1, rebuttals_2, etc.: Author rebuttals responding to reviews
  • global_rebuttals: Overall author responses
  • dataset_source: Source of the data
  • conference_year: Year of the conference

Usage

import pandas as pd

# Load the dataset
df = pd.read_csv('merged_neurips_dataset.csv')

# Example: Print first paper title
print(df['title'].iloc[0])

# Example: Filter accepted papers
accepted_papers = df[df['paper_decision'].str.contains('Accept', na=False)]
print(f"Number of accepted papers: {len(accepted_papers)}")

# Example: Analyze decision distribution
decision_counts = df['paper_decision'].value_counts()
print(decision_counts)

Sample Data Structure

Each row represents a paper submission with associated reviews and rebuttals:

title: "Stress-Testing Capability Elicitation With Password-Locked Models"
paper_decision: "Accept (poster)"
review_1: "Summary: The paper studies whether fine-tuning can elicit..."
rebuttals_1: "Rebuttal 1: Thanks for the review! We are glad you found..."
...

Data Statistics

  • File size: ~287MB
  • Format: CSV with comma-separated values
  • Encoding: UTF-8
  • Contains: Paper reviews, rebuttals, and metadata from NeurIPS conferences

Use Cases

This dataset is valuable for:

  • Peer review analysis: Study patterns in academic peer review
  • Natural language processing: Train models on academic text
  • Research evaluation: Analyze correlation between reviews and acceptance
  • Academic writing: Understand successful paper characteristics
  • Sentiment analysis: Analyze reviewer sentiment and author responses

Citation

If you use this dataset in your research, please cite appropriately and ensure compliance with NeurIPS terms of service.

License

This dataset is released under the MIT License. Please ensure you have appropriate permissions to use this data and comply with NeurIPS's terms of service.

Downloads last month
10