Dataset schema (one record per paper):

  paper_id            stringlengths  19–21
  paper_title         stringlengths  8–170
  paper_abstract      stringlengths  8–5.01k
  paper_acceptance    stringclasses  18 values
  meta_review         stringlengths  29–10k
  label               stringclasses  3 values
  review_ids          list
  review_writers      list
  review_contents     list
  review_ratings      list
  review_confidences  list
  review_reply_tos    list
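The schema above can be sketched as a single record in plain Python, using values copied from the first row of the dataset. One caveat: the interpretation that a rating or confidence of -1 marks a forum reply (e.g. an author response) rather than a top-level review is an assumption inferred from the data, not documented anywhere in the dump.

```python
# Sketch of one record under the schema above (values taken from the first row).
# Assumption: entries with rating/confidence -1 are forum replies, not reviews;
# only non-negative entries are actual review scores.
record = {
    "paper_id": "nips_2022_PPjSKy40XUB",
    "paper_acceptance": "Accept",
    "label": "train",
    "review_writers": ["official_reviewer", "official_reviewer", "author",
                       "author", "author", "author", "official_reviewer",
                       "official_reviewer", "official_reviewer",
                       "official_reviewer"],
    "review_ratings": [-1, -1, -1, -1, -1, -1, 7, 7, 6, 7],
    "review_confidences": [-1, -1, -1, -1, -1, -1, 3, 3, 5, 4],
}

# Keep only real review scores and average them.
scores = [r for r in record["review_ratings"] if r >= 0]
mean_rating = sum(scores) / len(scores)
print(mean_rating)  # 6.75
```

Note that under this assumption every non-negative rating lines up with an "official_reviewer" entry in `review_writers`, which is consistent with the rows shown below.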
nips_2022_PPjSKy40XUB
End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking
Machine learning systems perform well on pattern matching tasks, but their ability to perform algorithmic or logical reasoning is not well understood. One important reasoning capability is algorithmic extrapolation, in which models trained only on small/simple reasoning problems can synthesize complex strategies for large/complex problems at test time. Algorithmic extrapolation can be achieved through recurrent systems, which can be iterated many times to solve difficult reasoning problems. We observe that this approach fails to scale to highly complex problems because behavior degenerates when many iterations are applied -- an issue we refer to as "overthinking." We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten. We also employ a progressive training routine that prevents the model from learning behaviors that are specific to iteration number and instead pushes it to learn behaviors that can be repeated indefinitely. These innovations prevent the overthinking problem, and enable recurrent systems to solve extremely hard extrapolation tasks.
Accept
This paper proposes two ideas to improve recurrent networks that learn algorithmic tasks, particularly targeting algorithmic extrapolation, i.e. generalizing to much larger instances than seen during training. The two ideas are (1) feeding the original input to each recurrent step and (2) a new learning algorithm that encourages the network to predict the output correctly regardless of which recurrent step it is at. The paper is well written; the ideas are simple, but the experimental results clearly demonstrate their usefulness. All the reviewers are positive about the paper, and I’d recommend accepting it. I also want to point out a related work, “Strong Generalization and Efficiency in Neural Programs” by Li et al., which, although different in many places, has a few nice properties similar to the proposed work, including (1) strong generalization, or algorithmic extrapolation, where training on small instances learns neural models that generalize to almost arbitrarily sized data; (2) computation that scales with the difficulty of the problem instance rather than being tied to the size of the data instance; (3) a neural model with no explicit memory of which step of computation it is at; and (4) actually learning or “synthesizing” algorithms.
train
[ "lzH_OHhbqa", "Srr--0bx8ux", "CdlRsk3Uc7y", "5DAgahS3bVr", "jeHYRltJJgJ", "gYRXAj2zM6Z", "M2NkcCEZpS", "ddLA7U85YYa", "xoPj1WarP2", "H_6jwdufnUZ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the replies.", " Thanks to the authors for this reply. My concerns regarding the discrepancies with the previous manuscript are all addressed. I am going to increase my score accordingly, but I still think that a broader comparison with other baselines would be more informative. All of t...
[ -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "CdlRsk3Uc7y", "5DAgahS3bVr", "H_6jwdufnUZ", "xoPj1WarP2", "ddLA7U85YYa", "M2NkcCEZpS", "nips_2022_PPjSKy40XUB", "nips_2022_PPjSKy40XUB", "nips_2022_PPjSKy40XUB", "nips_2022_PPjSKy40XUB" ]
nips_2022_MZmv_B1DM3
Optimal Scaling for Locally Balanced Proposals in Discrete Spaces
Optimal scaling has been well studied for Metropolis-Hastings (M-H) algorithms in continuous spaces, but a similar understanding has been lacking in discrete spaces. Recently, a family of locally balanced proposals (LBP) for discrete spaces has been proved to be asymptotically optimal, but the question of optimal scaling has remained open. In this paper, we establish, for the first time, that the efficiency of M-H in discrete spaces can also be characterized by an asymptotic acceptance rate that is independent of the target distribution. Moreover, we verify, both theoretically and empirically, that the optimal acceptance rates for LBP and random walk Metropolis (RWM) are $0.574$ and $0.234$ respectively. These results also help establish that LBP is asymptotically $O(N^\frac{2}{3})$ more efficient than RWM with respect to model dimension $N$. Knowledge of the optimal acceptance rate allows one to automatically tune the neighborhood size of a proposal distribution in a discrete space, directly analogous to step-size control in continuous spaces. We demonstrate empirically that such adaptive M-H sampling can robustly improve sampling in a variety of target distributions in discrete spaces, including training deep energy based models.
Accept
The reviewers and I agree that the contributions of the paper are of interest and useful addition to the literature. Therefore, I recommend accepting the paper. Please consider the reviewers' comments when preparing the camera-ready version.
train
[ "uskqE-Hlf1", "Diw1OZ1E7Fs", "yYUSV_6l_8", "A1n1iYY4OmU", "4wTb_uupf2m", "GXy5tSDkzYU", "P0JsxF-519x", "pJVucs8Bc1n", "3sUIuQk1roe", "St6qS9U07F", "R1INsKQT9Dr", "3fpLXXHGHw", "3pvWP1_dh76", "Lh9oqGPE4p", "XJsEY1G5miq", "iuDRjKSfOzU" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional experiments. I am satisfied with all the answers.", " We updated the comparison table with an extra column for GPU memory usage. Overall ALBP (ours) enjoys similar memory usage as other methods. Note that the memory usage of baseline methods might be dependent on the implementation. Al...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "yYUSV_6l_8", "4wTb_uupf2m", "A1n1iYY4OmU", "R1INsKQT9Dr", "St6qS9U07F", "pJVucs8Bc1n", "pJVucs8Bc1n", "3fpLXXHGHw", "nips_2022_MZmv_B1DM3", "iuDRjKSfOzU", "iuDRjKSfOzU", "XJsEY1G5miq", "Lh9oqGPE4p", "nips_2022_MZmv_B1DM3", "nips_2022_MZmv_B1DM3", "nips_2022_MZmv_B1DM3" ]
nips_2022_VY1dqOF2RjC
ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings
We present a scalable approach for learning open-world object-goal navigation (ObjectNav) – the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., “find a sink”). Our approach is entirely zero-shot – i.e., it does not require ObjectNav rewards or demonstrations of any kind. Instead, we train on the image-goal navigation (ImageNav) task, in which agents find the location where a picture (i.e., goal image) was captured. Specifically, we encode goal images into a multimodal, semantic embedding space to enable training semantic-goal navigation (SemanticNav) agents at scale in unannotated 3D environments (e.g., HM3D). After training, SemanticNav agents can be instructed to find objects described in free-form natural language (e.g., “sink,” “bathroom sink,” etc.) by projecting language goals into the same multimodal, semantic embedding space. As a result, our approach enables open-world ObjectNav. We extensively evaluate our agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe absolute improvements in success of 4.2% - 20.0% over existing zero-shot methods. For reference, these gains are similar or better than the 5% improvement in success between the Habitat 2020 and 2021 ObjectNav challenge winners. In an open-world setting, we discover that our agents can generalize to compound instructions with a room explicitly mentioned (e.g., “Find a kitchen sink”) and when the target room can be inferred (e.g., “Find a sink and a stove”).
Accept
This work introduced a method for object-goal navigation in open-world settings. The crux of the approach is to use a pre-trained vision-and-language model CLIP to associate embeddings of image goals and textual descriptions of the objects such that the model can perform zero-shot navigation given textual goals. This paper received mixed reviews from four expert reviewers, ranging from Borderline Reject to Strong Accept. The reviewers appreciated the problem motivation, the novel method of using large-scale pre-trained models for embodied AI, and the demonstrated zero-shot navigation performances. Nonetheless, concerns have been raised by the reviewers, including the limited evaluations and unfair baseline comparisons. The authors did a good job easing many of the concerns raised in the initial reviews. They successfully swayed Reviewer mStZ to update their rating to Borderline Accept and Reviewer tra9 to Borderline Reject. In addition, Reviewer J2xC mentioned that they will update their rating to Weak Accept if the authors include the results in the new revision, which the authors promised to do. The only reviewer who held a negative opinion after the discussion period is Reviewer tra9, who argued against the authors' claim that their approach reduces the data labeling burden (by using pre-trained CLIP) and pointed out that different baselines are used for different tasks. The AC read the paper, the reviews, and the reviewer-author discussions carefully and found that the authors made convincing arguments in response to Reviewer tra9's criticisms. Given that the majority of the reviewers held positive opinions on this work and no major flaw had been identified, the AC would recommend accepting this work at NeurIPS and letting the community judge the merit of this work. The AC would strongly suggest the authors incorporate all reviewers' comments in the final version.
train
[ "P5Igm2NFHdI", "nN4cfQNMgwI", "xDqEz6_eZhf", "YsFq7rSaHIl", "hWbUDo9Ckx0", "14fJa3awViN", "eo2Ug6rLmMK", "sLZbW6USEb", "4KnAw77srMn", "QUh_EK8iLkQ", "8AJJaoizoq", "u0ehkZhcUSn", "VTyZXZBB-Y", "4BrR2D_EcEN", "ZrqFln-ZY0T", "38-NrMxOvc", "KnhBCpIcNr" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Yes, we will include those experimental results in the paper.", " Thank you for your replies,\n\nMy concerns are almost resolved with demonstrated interesting experimental results. Will you update the manuscript and include these results of the semantic-nav agent in the conventional object-goal navigation (#1 a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 5 ]
[ "nN4cfQNMgwI", "8AJJaoizoq", "8AJJaoizoq", "14fJa3awViN", "sLZbW6USEb", "u0ehkZhcUSn", "u0ehkZhcUSn", "4KnAw77srMn", "QUh_EK8iLkQ", "KnhBCpIcNr", "38-NrMxOvc", "ZrqFln-ZY0T", "4BrR2D_EcEN", "nips_2022_VY1dqOF2RjC", "nips_2022_VY1dqOF2RjC", "nips_2022_VY1dqOF2RjC", "nips_2022_VY1dqOF2...
nips_2022_DpxXyntc12v
Training Subset Selection for Weak Supervision
Existing weak supervision approaches use all the data covered by weak signals to train a classifier. We show both theoretically and empirically that this is not always optimal. Intuitively, there is a tradeoff between the amount of weakly-labeled data and the precision of the weak labels. We explore this tradeoff by combining pretrained data representations with the cut statistic to select (hopefully) high-quality subsets of the weakly-labeled training data. Subset selection applies to any label model and classifier and is very simple to plug in to existing weak supervision pipelines, requiring just a few lines of code. We show our subset selection method improves the performance of weak supervision for a wide range of label models, classifiers, and datasets. Using less weakly-labeled data improves the accuracy of weak supervision pipelines by up to 19% (absolute) on benchmark tasks.
Accept
This paper examines the question of whether it is best to use all the available weakly labeled data to train a model. Contrary to usual practice, it finds that it is often best to filter that data. The creativity of the paper lies in using a statistic called the cut statistic to select a high-quality subset of the data. It uses a graph in which nearest neighbors of examples are connected using the distance induced by some representation. The approach is very elegant and can be plugged into many existing approaches. Experiments on the WRENCH benchmark for weak supervision show that this selection method consistently improves five different approaches to creating weak labels. The reviewers all agreed that the paper makes a strong contribution, is clear and well written, and provides an interesting analysis of why the method works.
train
[ "mpmMNaivfD_", "rky_K_ainfQ", "sI4USNtjNve", "bz3J6whLFL", "pmKOEwWK-i_", "dXpVOxMgFKn", "PaZvW0D-1OG_", "NgjBiPE5iXH", "Duui48mkhPk", "6GZZ7dhCp-", "3AJyEIPMmPc", "VLlfp1a1Bjk", "7AbQvMN04h" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your helpful comments on our submission. We hope we addressed your concerns about better contextualizing our approach compared to recent state-of-the-art joint approaches to weak supervision like COSINE and ASTRA. We plan to include the COSINE results reported by Zhang et al. to better contextual...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "NgjBiPE5iXH", "pmKOEwWK-i_", "PaZvW0D-1OG_", "dXpVOxMgFKn", "7AbQvMN04h", "VLlfp1a1Bjk", "3AJyEIPMmPc", "Duui48mkhPk", "6GZZ7dhCp-", "nips_2022_DpxXyntc12v", "nips_2022_DpxXyntc12v", "nips_2022_DpxXyntc12v", "nips_2022_DpxXyntc12v" ]
nips_2022_7cL46kHUu4
Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting
In consequential decision-making applications, mitigating unwanted biases in machine learning models that yield systematic disadvantage to members of groups delineated by sensitive attributes such as race and gender is one key intervention to strive for equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal jackknife-based approach. The dropping of training points is done in principle, but in practice does not require the model to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
Accept
This paper presents a tool for mitigating the influence of biased training data without retraining a model. The identified shortcomings of the paper include the restriction to the binary classification setting. The meta-reviewer recognizes this caveat but feels the focus aligns with existing fairness definitions and the literature broadly. Still, the authors are encouraged to make the paper’s limitations regarding this binary-class restriction explicit and to clarify how the analysis and approach can generalize to the multi-class case. Prior to the rebuttal, concerns were raised about the calculation of the Hessian inverse and the generalizability to larger models. The authors addressed both during the rebuttal. Please add the new results to the final version.
train
[ "M_yxxkV393", "u5gTITyB211", "prEiXogx9o", "MH8vDjFUej0", "ZjQS59UTsV", "R6bnzFA_Y7A", "vyel-EFJ_hv" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks you for the response.\nI am happy with the response and appreciate the further clarification for some of the explanations I have missed. I believe that discussion and clarification in your response would help with the clarity of the paper. Especially the additional clarification referring to the requiremen...
[ -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "prEiXogx9o", "vyel-EFJ_hv", "R6bnzFA_Y7A", "ZjQS59UTsV", "nips_2022_7cL46kHUu4", "nips_2022_7cL46kHUu4", "nips_2022_7cL46kHUu4" ]
nips_2022_wjqr6aqkLUV
Towards Understanding the Condensation of Neural Networks at Initial Training
Empirical works show that for ReLU neural networks (NNs) with small initialization, input weights of hidden neurons (the input weight of a hidden neuron consists of the weight from its input layer to the hidden neuron and its bias term) condense onto isolated orientations. The condensation dynamics implies that the training implicitly regularizes a NN towards one with much smaller effective size. In this work, we illustrate the formation of the condensation in multi-layer fully connected NNs and show that the maximal number of condensed orientations in the initial training stage is twice the multiplicity of the activation function, where ``multiplicity'' indicates the multiple roots of activation function at origin. Our theoretical analysis confirms experiments for two cases, one is for the activation function of multiplicity one with arbitrary dimension input, which contains many common activation functions, and the other is for the layer with one-dimensional input and arbitrary multiplicity. This work makes a step towards understanding how small initialization leads NNs to condensation at the initial training stage.
Accept
Three reviewers recommended borderline accept, borderline accept, and weak accept. Reviewers found the work clearly written and the claims clearly stated. Of particular interest were the empirical results connecting the number of condensed directions and the multiplicity of the nonlinearity, where the article was found to fill gaps and provide new intuitive explanations for important phenomena. A main critique in the initial reviews concerned the interest of the considered setting and the insufficient discussion of the significance of the findings, which was partly clarified during the discussion, leading to updated, more favorable reviewer ratings. Overall I find the strengths outweigh the weaknesses and hence I am recommending accept. However, as evidenced in the discussion, the article can still improve in some ways, and I strongly encourage the authors to carefully consider the detailed feedback of the reviewers when preparing the final manuscript. Specific recommendations include that the revised manuscript better elucidate how the very condensed stage that the authors study relates to expressive, fully trained NNs, and that it include a clearer discussion of the motivation and significance, along with the promised discussion of activation functions and limitations of the theoretical analysis. Following up on the discussion in response to reviewer Mpot, I would suggest that the authors not use the title “informal proposition” (or “Theorem”, “Proposition”, etc.) unless the statement has a formal proof, and instead use titles such as “Conjecture” or “Empirical observation” when the statement does not have a formal proof.
train
[ "-WHug2nQ2fM", "EsuRBKDIkqh", "1ZbDIzRmUDs", "TW8hhadNPO", "o0W1DLDxZGX", "cdZkMttFv2l", "pU78tNVlwxF", "PEDRJ4IkNci", "J0cNnQ0wtg6", "LAI6Zs1iLjs", "k6mCMaVoQHZ", "FDhP1UmfTi", "sY8IQ7Cvl7n" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer vTes,\n\nThanks for your comments. Your comment is very helpful for our future study on the relation between condensation and generalization.\n\nBest,\n\nAuthors.", " Thanks to the authors for their detailed response. I now understand that the very limited number of condensed directions only descr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "EsuRBKDIkqh", "k6mCMaVoQHZ", "o0W1DLDxZGX", "k6mCMaVoQHZ", "pU78tNVlwxF", "sY8IQ7Cvl7n", "FDhP1UmfTi", "FDhP1UmfTi", "k6mCMaVoQHZ", "k6mCMaVoQHZ", "nips_2022_wjqr6aqkLUV", "nips_2022_wjqr6aqkLUV", "nips_2022_wjqr6aqkLUV" ]
nips_2022_c6ibx0yl-aG
An Analysis of Ensemble Sampling
Ensemble sampling serves as a practical approximation to Thompson sampling when maintaining an exact posterior distribution over model parameters is computationally intractable. In this paper, we establish a regret bound that ensures desirable behavior when ensemble sampling is applied to the linear bandit problem. This represents the first rigorous regret analysis of ensemble sampling and is made possible by leveraging information-theoretic concepts and novel analytic techniques that may prove useful beyond the scope of this paper.
Accept
There was broad agreement about this paper. All reviewers valued the main contribution: the first rigorous analysis of ensemble sampling as an approximation to Thompson sampling. The reviews and discussion all indicate that the paper is clear and honest in stating its contributions and supporting them with evidence, and that this main contribution is valuable. The remaining discussion revolved around finer points of scope of significance; on one hand, the setting of linear bandits was viewed as relatively simple, and some of the given bounds may not be tight; on the other hand, there is speculation that some of the analysis may apply more broadly than the simple setting considered in the paper. Overall, there is broad consensus about recommending the paper be accepted.
train
[ "2o1IlLoNxz", "hItXbDurqmy", "9LTzOAC_ByV", "BBcdqgIUzW", "FSTUoKt-3Gn", "iqtXrWKHpHo", "3WjV2Z3eLno", "U1auNlZ77lwo", "WMT35L8Uvpz", "Bvu12JhnPM", "_NETt9031kY", "gTim253z9wx", "rr58ADX6Vqf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for increasing the score and for the thorough, thoughtful and insightful feedback!", " Thanks for confirming, in light of that and the general response to all reviewers I will increase my score. Congrats on the strong paper.", " Thank you again for bringing up this relevant paper. We will ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "hItXbDurqmy", "9LTzOAC_ByV", "BBcdqgIUzW", "FSTUoKt-3Gn", "_NETt9031kY", "gTim253z9wx", "rr58ADX6Vqf", "Bvu12JhnPM", "nips_2022_c6ibx0yl-aG", "nips_2022_c6ibx0yl-aG", "nips_2022_c6ibx0yl-aG", "nips_2022_c6ibx0yl-aG", "nips_2022_c6ibx0yl-aG" ]
nips_2022_FxVH7iToXS
Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse
Transformers have achieved remarkable success in several domains, ranging from natural language processing to computer vision. Nevertheless, it has been recently shown that stacking self-attention layers — the distinctive architectural component of Transformers — can result in rank collapse of the tokens’ representations at initialization. The question of if and how rank collapse affects training is still largely unanswered, and its investigation is necessary for a more comprehensive understanding of this architecture. In this work, we shed new light on the causes and the effects of this phenomenon. First, we show that rank collapse of the tokens’ representations hinders training by causing the gradients of the queries and keys to vanish at initialization. Furthermore, we provide a thorough description of the origin of rank collapse and discuss how to prevent it via an appropriate depth-dependent scaling of the residual branches. Finally, our analysis unveils that specific architectural hyperparameters affect the gradients of queries, keys and values differently, leading to disproportionate gradient norms. This suggests an explanation for the widespread use of adaptive methods for Transformers' optimization.
Accept
The paper presents a theoretical analysis of "rank collapse" in transformers. The analysis is backed by several experiments on an (En-De) machine translation task. The authors do not study other application domains, but results can be expected to be highly correlated, as those experiments cover encoder-decoder architectures. The authors make and test (re-)scaling recommendations (residual scaling and a temperature parameter). While not new in themselves, those recommendations are in line with their analysis, and they allow the authors to train transformers to strong performance with SGD. Overall, the paper represents an incremental addition to the understanding of transformer optimization and is suitable for publication.
train
[ "aFmJQL38cfI", "xhrKpOMxA8", "qekiH6iQLOh", "jx1HsRKWB8bT", "8l3evAQDO4", "RliUPxsDZ6d", "Zko4K3Z_t6", "UNMBrFpZiwo", "lmRwrpKcy2q", "BZ4yozxoN_S", "tiOxC_hiuuD", "NHucAM9GsaU", "wYsFoJVamY", "KdBed8Nl7wq", "Ty2FKaqzbEI" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer FzSt,\n\nDue to the fact that the deadline for the author-reviewer discussion period is approaching, we are kindly reminding the reviewer to share their thoughts and suggestions about our rebuttal. Actively engaging in the discussion would definitely be of great value for us, and help us further imp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "lmRwrpKcy2q", "8l3evAQDO4", "Zko4K3Z_t6", "nips_2022_FxVH7iToXS", "RliUPxsDZ6d", "wYsFoJVamY", "UNMBrFpZiwo", "KdBed8Nl7wq", "BZ4yozxoN_S", "tiOxC_hiuuD", "NHucAM9GsaU", "Ty2FKaqzbEI", "nips_2022_FxVH7iToXS", "nips_2022_FxVH7iToXS", "nips_2022_FxVH7iToXS" ]
nips_2022_vmjckXzRXmh
Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations
Contrastive learning is a highly effective method for learning representations from unlabeled data. Recent works show that contrastive representations can transfer across domains, leading to simple state-of-the-art algorithms for unsupervised domain adaptation. In particular, a linear classifier trained to separate the representations on the source domain can also predict classes on the target domain accurately, even though the representations of the two domains are far from each other. We refer to this phenomenon as linear transferability. This paper analyzes when and why contrastive representations exhibit linear transferability in a general unsupervised domain adaptation setting. We prove that linear transferability can occur when data from the same class in different domains (e.g., photo dogs and cartoon dogs) are more related with each other than data from different classes in different domains (e.g., photo dogs and cartoon cats) are. Our analyses are in a realistic regime where the source and target domains can have unbounded density ratios and be weakly related, and they have distant representations across domains.
Accept
The manuscript provides theoretical analyses of why contrastive learning helps domain adaptation. It develops new proof techniques, advancing the current state of contrastive learning theory. The manuscript then studies a version of the existing prototypical network approach by Snell et al., called Preconditioned Feature Averaging (PFA; mean classifier + preconditioner), and demonstrates both theoretically and empirically that this simple algorithm works well for domain adaptation. Reviewers acknowledged several positive aspects of the manuscript, including the phenomenon of linear transferability and the fact that contrastive learning helps with it. There are concerns related to the experiments; however, the theoretical contributions in the manuscript outweigh some of the empirical deficiencies. The discussion phase addressed concerns by providing more experiments and showing that the theory is robust to outliers. To further increase the impact of the manuscript, given that the theory indeed predicts several phenomena that are verified by experiments, the authors have responded positively to the suggestion to test at much larger scale on datasets commonly used for SOTA unsupervised domain adaptation, such as DomainNet and VisDA. The results will be added to the camera-ready version.
train
[ "r_ONBeytvIp", "Z3DXSb1jH5d", "LKDT-dqERp", "ucMWpSS18b", "m3otTWCjQ1U", "NjG8Jc5PxJn", "BEH8P_3nB9X", "3T0zQK6UJiA", "Xj2je5SW580", "00siscBvqq8", "_n_fE26ffsB", "_OyfQ8Y2Ors", "HnVPm1tHA_J", "IB6ZqSUFu5Y", "IHqrPRQ1eGT", "RHgEeERMG8d", "DL-C5CrsVlJ", "buK0GNmn19w", "c83pLlk32y"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Thank you to the reviewer for raising this insightful question, and for suggesting adding this discussion in the paper! We will include the clarification about PFA vs. linear probing in our camera ready version when given additional space.", " Worth adding this discussion in the paper for completeness about PFA...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "Z3DXSb1jH5d", "LKDT-dqERp", "ucMWpSS18b", "m3otTWCjQ1U", "BEH8P_3nB9X", "Xj2je5SW580", "wgwdMyhK1Su", "nips_2022_vmjckXzRXmh", "HnVPm1tHA_J", "IHqrPRQ1eGT", "RHgEeERMG8d", "DL-C5CrsVlJ", "buK0GNmn19w", "c83pLlk32y", "nOTAUro5_V3", "aPIgr7B_F9N", "1NV6SRACU6lJ", "3nBKzbIzJDu", "e...
nips_2022_dUSI4vFyMK
Convergence for score-based generative modeling with polynomial complexity
Score-based generative modeling (SGM) is a highly successful approach for learning a probability distribution from data and generating further samples. We prove the first polynomial convergence guarantees for the core mechanic behind SGM: drawing samples from a probability density $p$ given a score estimate (an estimate of $\nabla \ln p$) that is accurate in $L^2(p)$. Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality. Our guarantee works for any smooth distribution and depends polynomially on its log-Sobolev constant. Using our guarantee, we give a theoretical analysis of score-based generative modeling, which transforms white-noise input into samples from a learned data distribution given score estimates at different noise scales. Our analysis gives theoretical grounding to the observation that an annealed procedure is required in practice to generate good samples, as our proof depends essentially on using annealing to obtain a warm start at each step. Moreover, we show that a predictor-corrector algorithm gives better convergence than using either portion alone.
Accept
The reviewers and I agree that the contributions of the paper are of interest and useful addition to the literature. Therefore, I recommend accepting the paper. Please consider the reviewers' comments when preparing the camera-ready version.
test
[ "XOZmTPHC6ge", "MwZv5roEkp6", "pyOsTGtc4J0", "vP100HU2ktMe", "Htlz0jxDuP", "V_jBAafn-3n", "HCPSRL2iLU2", "DG3JWyB5W7W", "PvVzDgoJgMB", "wE-xMe69Y3A", "BnVyTVLErHR", "ZSmjv3xrm-", "CE-iPrJ4VaJ", "aG-VjId8R0W", "8zesNnfghr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your answer. My question was simply out of curiosity and I agree that a full investigation is out of the scope of this paper.", " You are correct that dissipativity is only known to imply a log-Sobolev inequality under a Hessian bound. We will note this caveat in the revision. As [1] assumes a Hessia...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 9, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 5, 4 ]
[ "wE-xMe69Y3A", "vP100HU2ktMe", "PvVzDgoJgMB", "HCPSRL2iLU2", "BnVyTVLErHR", "nips_2022_dUSI4vFyMK", "8zesNnfghr", "aG-VjId8R0W", "CE-iPrJ4VaJ", "ZSmjv3xrm-", "nips_2022_dUSI4vFyMK", "nips_2022_dUSI4vFyMK", "nips_2022_dUSI4vFyMK", "nips_2022_dUSI4vFyMK", "nips_2022_dUSI4vFyMK" ]
nips_2022_-h6WAS6eE4
Locating and Editing Factual Associations in GPT
We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available in the supplemental materials.
Accept
The paper proposes a method (ROME) to analyze the storage and recall of factual knowledge in a large-scale autoregressive language model, and finds that such knowledge can be controlled by changing weights in the MLP layer. The reviewers all agree the paper is well-motivated and scientifically sound. The area chair is also impressed with the thorough experimentation and the quality of the writing. The main issue the reviewers (1iyP, gUgP) pointed out is that the method is not scalable for practical knowledge editing (as the method can only work on a per-fact basis), but the authors confirmed that the goal of this study is not to provide a practical tool but rather to understand the inner workings of LMs, which is valuable in its own right. The authors have comprehensively addressed the reviewers' points, e.g., clarifying misunderstandings, adding a human evaluation study, providing extra results on smaller models, and comparing the method's strengths and limitations relative to other methods. I would vote for acceptance.
train
[ "CpiZ8pxdYFnM", "G2oEDMohBTn", "P8l1wrCClC", "Qgm22RSWcHB", "KD-oqmgp5hj", "CmH_ukWVyrF", "SRHpMS3BXqh" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review! We are happy you found our work insightful, significant, and original.\n\n> How did you tune the variance of the noise added to form the corrupted version of hidden states?\n\nIn our casual traces we set $\\sigma = 3\\sigma_{t}$ where $\\sigma_{t}$ is the sampled standard deviation for ...
[ -1, -1, -1, -1, 4, 7, 7 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "SRHpMS3BXqh", "CmH_ukWVyrF", "KD-oqmgp5hj", "nips_2022_-h6WAS6eE4", "nips_2022_-h6WAS6eE4", "nips_2022_-h6WAS6eE4", "nips_2022_-h6WAS6eE4" ]
nips_2022_qYc8VnmUwbv
Efficient and Near-Optimal Smoothed Online Learning for Generalized Linear Functions
Due to the drastic gap in complexity between sequential and batch statistical learning, recent work has studied a smoothed sequential learning setting, where Nature is constrained to select contexts with density bounded by $1/\sigma$ with respect to a known measure $\mu$. Unfortunately, for some function classes, there is an exponential gap between the statistically optimal regret and that which can be achieved efficiently. In this paper, we give a computationally efficient algorithm that is the first to enjoy the statistically optimal $\log(T/\sigma)$ regret for realizable $K$-wise linear classification. We extend our results to settings where the true classifier is linear in an over-parameterized polynomial featurization of the contexts, as well as to a realizable piecewise-regression setting assuming access to an appropriate ERM oracle. Somewhat surprisingly, standard disagreement-based analyses are insufficient to achieve regret logarithmic in $1/\sigma$. Instead, we develop a novel characterization of the geometry of the disagreement region induced by generalized linear classifiers. Along the way, we develop numerous technical tools of independent interest, including a general anti-concentration bound for the determinant of certain matrix averages.
Accept
The reviewers agree that this is a solid contribution. Please do revise the paper according to the reviewers comments and the discussion.
train
[ "8hY6f6qZ5Z", "VVQSxAz9At2", "4gaPYrbhjNP", "2FBs8qnMIk", "SGVf9EcSp7P", "SCT3Q-kpRdH", "INOcQXZzTX4", "HqA4ziMjLEC", "UReHPRJzoJL", "SJQzk3uHptI", "zL7HmbJwiAs" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Just confirming my original review.", " Thank you for the detailed answer! I would encourage the authors to include some discussion on the necessity of the assumptions (in particular, the impossibility result of [Block et al., 2022]) in the camera-ready version. I maintain an overall evaluation of rating 7.", ...
[ -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 3 ]
[ "INOcQXZzTX4", "2FBs8qnMIk", "SCT3Q-kpRdH", "zL7HmbJwiAs", "SJQzk3uHptI", "UReHPRJzoJL", "HqA4ziMjLEC", "nips_2022_qYc8VnmUwbv", "nips_2022_qYc8VnmUwbv", "nips_2022_qYc8VnmUwbv", "nips_2022_qYc8VnmUwbv" ]
nips_2022_Jd70afzIvJ4
Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare
Many reinforcement learning (RL) applications have combinatorial action spaces, where each action is a composition of sub-actions. A standard RL approach ignores this inherent factorization structure, resulting in a potential failure to make meaningful inferences about rarely observed sub-action combinations; this is particularly problematic for offline settings, where data may be limited. In this work, we propose a form of linear Q-function decomposition induced by factored action spaces. We study the theoretical properties of our approach, identifying scenarios where it is guaranteed to lead to zero bias when used to approximate the Q-function. Outside the regimes with theoretical guarantees, we show that our approach can still be useful because it leads to better sample efficiency without necessarily sacrificing policy optimality, allowing us to achieve a better bias-variance trade-off. Across several offline RL problems using simulators and real-world datasets motivated by healthcare, we demonstrate that incorporating factored action spaces into value-based RL can result in better-performing policies. Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space when applying RL to observational datasets.
Accept
Reviewers agree that the problem of factored action spaces in RL is important and that this paper makes novel contributions to this setting. The reviewers were satisfied with the post-rebuttal discussion and have converged on an accept recommendation. On revision, the reviewers request that the authors revise the paper according to the clarifications that occurred during the post-rebuttal discussion. Also, for context, it's important to note that the concept of factored action spaces goes back a long way in the factored MDP literature, and I would request the authors to acknowledge this in their related work discussion as they prepare their final revision. To the best of my knowledge, the first mention of factored action spaces is in a 1996 multiagent MDP paper: Craig Boutilier. Planning, Learning and Coordination in Multiagent Decision Processes. (1996) https://www.cs.toronto.edu/~cebly/Papers/tark96.pdf Somewhat more recently, the following paper presented a sequential hindsight method for compositional MDPs that is an upper bound approximation for (weakly) coupled MDPs. I mention this specific paper since it discusses theoretical results relating to factored action MDP approximations and also presents a simple approximate decomposition methodology that I have found hard to beat empirically: Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tadepalli, Roni Khardon. Planning in Factored Action Spaces with Symbolic Dynamic Programming. (2012) https://ojs.aaai.org/index.php/AAAI/article/view/8364
val
[ "7UyONTzyemM", "NCE0pRkbyVS", "xZZLTQBdPQ5", "Qlc63iG-efy", "ks_Qu3i_a2-", "EP_awpHPi6", "LGzEXWcjcIR", "_SfpbhW5kFU", "I2jq73um9F4", "EUdNNEpPy-K", "UcwnP5M8dvd", "suXPC19mRu7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I believe the manuscript has been improved and now is more sound. Though I still have some reservations, this study seems to bring in new contributions to the field. Thus, I increased my recommendation score.", " Thanks to the authors for their responses. I do feel the paper has improved slightly with the edits...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "LGzEXWcjcIR", "EP_awpHPi6", "ks_Qu3i_a2-", "nips_2022_Jd70afzIvJ4", "EUdNNEpPy-K", "suXPC19mRu7", "UcwnP5M8dvd", "I2jq73um9F4", "nips_2022_Jd70afzIvJ4", "nips_2022_Jd70afzIvJ4", "nips_2022_Jd70afzIvJ4", "nips_2022_Jd70afzIvJ4" ]
nips_2022_AqexjBWRQFx
Convergent Representations of Computer Programs in Human and Artificial Neural Networks
What aspects of computer programs are represented by the human brain during comprehension? We leverage brain recordings derived from functional magnetic resonance imaging (fMRI) studies of programmers comprehending Python code to evaluate the properties and code-related information encoded in the neural signal. We first evaluate a selection of static and dynamic code properties, such as abstract syntax tree (AST)-related and runtime-related metrics. Then, to learn whether brain representations encode fine-grained information about computer programs, we train a probe to align brain recordings with representations learned by a suite of ML models. We find that both the Multiple Demand and Language systems--brain systems which are responsible for very different cognitive tasks--encode specific code properties and uniquely align with machine-learned representations of code. These findings suggest at least two distinct neural mechanisms mediating computer program comprehension and evaluation, prompting the design of code model objectives that go beyond static language modeling. We make all the corresponding code, data, and analysis publicly available at https://github.com/ALFA-group/code-representations-ml-brain
Accept
This work examines the relationship between fMRI recordings of people who read short programs and different properties and representations of the programming code. The aim of the work is to understand what properties of code are encoded by different brain systems, and to understand how similar the representations of code in the brain are to those encoded by self-supervised language models that are pretrained to encode programming code. More specifically, the authors separate the brain data into language (LS) and multiple-demand (MD) networks and show that (1) many code properties are represented in both networks, but (2) more information about several properties is represented in the MD network. The paper presents a fairly original idea, namely that of generalizing brain encoding of language to code. The work presents thorough experimentation and improved discussion and motivation: thorough discussion of the literature, good motivation of the analyses, sound interpretation of the results, with clear and concise writing. What is less clear overall is the results, which may not bring as many insights as one would have hoped. There are some small differences between the findings here and those from earlier work on this dataset (Ivanova et al., 2020), particularly in assigning a more general role during code comprehension to the language network. Yet these differences are hard to interpret in view of current knowledge. Furthermore, the MD system is overall better at decoding every representation than LS, which does not reveal specific functional characteristics. On the other hand, novel experiments show that simple token-level models are a better match to LS, while the TF-IDF model, which is also token-level, is better correlated with the MD decoding than LS. These results hardly provide a consistent picture. Besides, one may wonder whether the strategy of focusing on such networks (which are not as homogeneous as claimed by the authors) is really a good one.
For this reason, there is a large variance between reviewers, and a very long discussion, which remained open. In particular, the authors are providing new results in the Appendix, which complement the ones described in the original submission but are also hard to reconcile with the original results. Overall, my feeling is that the paper opens an interesting direction, and could be accepted at NeurIPS to further feed the discussion.
test
[ "ToX_qN8Wc3U", "yxsP5cAbFQP", "JUwVpgdC8JF", "I0y5AUPcL2u", "6qRufYWjon", "bNNBzMqkq3u", "G54nwUxGy9B", "zSz53opRX-y", "4MLW3D3aoI", "D6Dss8ckBtt", "1O8kUV7xJp9", "jhMMI4pQUcZ", "PpD7ma6grWq", "UDk_dlXPlhf", "7pXuGVbBrkH", "pCXKO65ZWCC", "OwfWG8tXTfM", "2frOooXiM7E", "3V49eMKqetp...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thanks again! Will do.", " Great, sounds like we are on the same page. Please also make sure to make this disclaimer in the paper itself as well.", " We're sorry if this has been confusing, and thank you for making these clarifications.\n\n> What part of your work actually evaluates the alignment between code...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2, 5 ]
[ "yxsP5cAbFQP", "JUwVpgdC8JF", "bNNBzMqkq3u", "G54nwUxGy9B", "4MLW3D3aoI", "zSz53opRX-y", "OwfWG8tXTfM", "1O8kUV7xJp9", "D6Dss8ckBtt", "PpD7ma6grWq", "jhMMI4pQUcZ", "zcEmirx6a-", "zcEmirx6a-", "P0_kUCJkhMq", "P0_kUCJkhMq", "dRi5Dl--2Pn", "dRi5Dl--2Pn", "ZOkH38U3Rr9", "ZOkH38U3Rr9"...
nips_2022_-uezmSLXVoE
Active Learning Polynomial Threshold Functions
We initiate the study of active learning polynomial threshold functions (PTFs). While traditional lower bounds imply that even univariate quadratics cannot be non-trivially actively learned, we show that allowing the learner basic access to the derivatives of the underlying classifier circumvents this issue and leads to a computationally efficient algorithm for active learning degree-$d$ univariate PTFs in $\tilde{O}(d^3\log(1/\varepsilon\delta))$ queries. We extend this result to the batch active setting, providing a smooth transition between query complexity and rounds of adaptivity, and also provide near-optimal algorithms for active learning PTFs in several average case settings. Finally, we prove that access to derivatives is insufficient for active learning multivariate PTFs, even those of just two variables.
Accept
This paper contains a fresh and mathematically interesting theoretical analysis of a fundamental problem.
val
[ "0e6JKjhKg8", "p9Plrnk2rq", "HwXWRn9xpV6", "r2n-SzLHwNw", "WTilCo2uGw", "RYtnrOkHWxj", "UWAVEQTlPXn", "XY8o9izKDq8", "65irjNUZWxm", "JmNJpe9vx72", "T4Ifd3KW-_V", "UJ4FIM5DvHT", "vno4DaM93B9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response. ", " Please acknowledge the authors' rebuttal.\n", " Thanks for the detailed responses to my questions.", " Thanks for the comments and clarifications.", " We thank the reviewer for their insightful comments and questions.\n\nIt is certainly true that in worst-case set...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "UWAVEQTlPXn", "XY8o9izKDq8", "WTilCo2uGw", "RYtnrOkHWxj", "vno4DaM93B9", "UJ4FIM5DvHT", "T4Ifd3KW-_V", "JmNJpe9vx72", "nips_2022_-uezmSLXVoE", "nips_2022_-uezmSLXVoE", "nips_2022_-uezmSLXVoE", "nips_2022_-uezmSLXVoE", "nips_2022_-uezmSLXVoE" ]
nips_2022_Ncyc0JS7Q16
Toward Robust Spiking Neural Network Against Adversarial Perturbation
As spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications, the security concerns around SNNs attract more attention. Currently, researchers have already demonstrated that an SNN can be attacked with adversarial examples. How to build a robust SNN thus becomes an urgent issue. Recently, many studies have applied certified training to artificial neural networks (ANNs), which shows promise in improving the robustness of an NN model. However, existing certifications cannot transfer to SNNs directly because of the distinct neuron behavior and input formats of SNNs. In this work, we first design S-IBP and S-CROWN, which tackle the non-linear functions in SNNs' neuron modeling. Then, we formalize the boundaries for both digital and spike inputs. Finally, we demonstrate the efficiency of our proposed robust training method on different datasets and model architectures. Based on our experiments, we can achieve a maximum $37.7\%$ attack error reduction with $3.7\%$ original accuracy loss. To the best of our knowledge, this is the first analysis of robust training for SNNs.
Accept
This paper applies existing certification-based adversarial robustness techniques to spiking neural networks. The authors achieve this through upper and lower relaxations of the spiking equations. Review scores had high variance, ranging from 4 to 8. Reviews were generally of high quality. The largest concern was that the use of rate coding for the network's output limited the applicability of the technique. I found the authors' response to this concern satisfying. I appreciate that this paper is the first to apply certification-based techniques to spiking neural networks. I believe it has the potential to produce significant impact for that reason. Based upon the reviews and my judgement of the potential impact, I recommend the paper be accepted.
test
[ "kBM101f2n6", "SwKyhOrWDLv", "WnQfH-pQKqk", "wAMlE9R0O0X", "YlwYMcHFZ3i", "HGraXLoTS5", "0KCEhq8bvZV", "ANpCF-b1nTN", "yH-9rvKgwyD", "r5jNMqCDEKd", "YuFyGN_3Xnz", "XqIcx2EGR0" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " For your concerns, we have the following comments: Firstly, rate coding is one of the most important coding mechanisms for SNN. As we all know, the DVS devices can generate the input with rate coding directly. Thus, in this work, we mainly discuss our method on rate coding. Secondly, for other coding mechanisms, ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "wAMlE9R0O0X", "YlwYMcHFZ3i", "HGraXLoTS5", "ANpCF-b1nTN", "0KCEhq8bvZV", "yH-9rvKgwyD", "XqIcx2EGR0", "YuFyGN_3Xnz", "r5jNMqCDEKd", "nips_2022_Ncyc0JS7Q16", "nips_2022_Ncyc0JS7Q16", "nips_2022_Ncyc0JS7Q16" ]
nips_2022_SNElc7QmMDe
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization
We study the problem of distributed stochastic non-convex optimization with intermittent communication. We consider the full participation setting where $M$ machines work in parallel over $R$ communication rounds and the partial participation setting where $M$ machines are sampled independently every round from some meta-distribution over machines. We propose and analyze a new algorithm that improves existing methods by requiring fewer and lighter variance reduction operations. We also present lower bounds, showing our algorithm is either $\textit{optimal}$ or $\textit{almost optimal}$ in most settings. Numerical experiments demonstrate the superior performance of our algorithm.
Accept
This paper studied the problem of distributed stochastic non-convex optimization with intermittent communication, and considered both the full participation setting and the partial participation setting. In particular, the paper proposed a new algorithm and showed that it can improve on existing methods. The weakness is that the lower and upper bounds in the stochastic case do not match well, but I think it is ok.
train
[ "_1FKBmBp-DL", "5Y54P8lB00", "0uokW3OnjwO", "Jl7DsEpY-c", "JA4k10XVe46", "9-LK8XET-Ux", "4xdOSQ2V6Bi", "geUmmbpMNq", "by2X4dK96v5", "27Dnk8XbyC4", "jH73QQLzzYX", "wx2SJQm4Blby", "3jhFwmtEnOb", "VzXsUxdHmxZ", "cI_-Q6vlbdSe", "jZ1p3pVl39t", "BmicMUMA1_il", "h5TA4YUNPj6", "OHAy98YUE...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ " We thank the reviewer again!", " Having read the rebuttal and checked the supplementary, I decided to raise the score.", " We thank the reviewer again! The discussion was very beneficial and helped us significantly improve our paper.\n\n>I see. Partial participation is usually meant by the sum of functions (a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "5Y54P8lB00", "JA4k10XVe46", "Jl7DsEpY-c", "9-LK8XET-Ux", "zZycgOD9Crm", "4xdOSQ2V6Bi", "geUmmbpMNq", "27Dnk8XbyC4", "27Dnk8XbyC4", "3jhFwmtEnOb", "VzXsUxdHmxZ", "cI_-Q6vlbdSe", "jZ1p3pVl39t", "eJ1ZnxgJTf", "h5TA4YUNPj6", "OHAy98YUEIJ", "zZycgOD9Crm", "8fqjTiAmFgL", "8fqjTiAmFgL"...
nips_2022_b-SNWfqkZc
A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization
We propose a projection-free conditional gradient-type algorithm for smooth stochastic multi-level composition optimization, where the objective function is a nested composition of $T$ functions and the constraint set is a closed convex set. Our algorithm assumes access to noisy evaluations of the functions and their gradients, through a stochastic first-order oracle satisfying certain standard unbiasedness and second-moment assumptions. We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm, to obtain an $\epsilon$-stationary solution, are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$ respectively, where $\mathcal{O}_T$ hides constants in $T$. Notably, the dependence of these complexity bounds on $\epsilon$ and $T$ is separate in the sense that changing one does not impact the dependence of the bounds on the other. For the case of $T=1$, we also provide a high-probability convergence result that depends poly-logarithmically on the inverse confidence level. Moreover, our algorithm is parameter-free and does not require any (increasing) order of mini-batches to converge, unlike the common practice in the analysis of stochastic conditional gradient-type algorithms.
Accept
There is general agreement that this paper is a borderline accept, and after my own reading I feel similarly.
train
[ "VbZvTEp8ugV", "Sh6puCKfGpT", "E6t0KGZ7kmc", "2uyRkch-NqT", "mxqEgh5hb-c7", "XDwKKU5B1P1", "t_bYdyILWHF", "5gxiLoumth", "EDTiUJ0pcDL", "R0vTg1HAkwl", "U0W0C28wqC3", "h5jrGmZURkc" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for responding to my quesitons! I have read all reviews and responses, and I would be happy to increase my score by 1.", " 1) While the main idea is based on moving-average, the few other papers that considered moving-average estimators in the context of conditional gradient al...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "t_bYdyILWHF", "mxqEgh5hb-c7", "2uyRkch-NqT", "EDTiUJ0pcDL", "5gxiLoumth", "t_bYdyILWHF", "h5jrGmZURkc", "U0W0C28wqC3", "R0vTg1HAkwl", "nips_2022_b-SNWfqkZc", "nips_2022_b-SNWfqkZc", "nips_2022_b-SNWfqkZc" ]
nips_2022_4PJbcrW_7wC
Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification. When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph and keep the full graph adjacency and node embeddings in memory (which is often infeasible) or mini-batch sample the graph (which results in exponentially growing computational complexities with respect to the number of GNN layers). Various sampling-based and historical-embedding-based methods are proposed to avoid this exponential growth of complexities. However, none of these solutions eliminates the linear dependence on graph size. This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size by training GNNs atop a few compact sketches of graph adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory, our framework provides a novel protocol for sketching non-linear activations and graph convolution matrices in GNNs, as opposed to existing methods that sketch linear weights or gradients in neural networks. In addition, we develop a locality-sensitive hashing (LSH) technique that can be trained to improve the quality of sketches. Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts.
Accept
The authors propose the use of sketch GNN (based on compressing relevant matrices and sketching typical GNN operations via hashing) to enable better scaling of graph neural networks to very large graphs. The reviewers are all in favor of accepting the paper (with three accepts and one weak accept), and therefore I recommend its acceptance. I encourage the authors to take into account the reviewer comments, as they already indicated they will do during the rebuttal period, when preparing the camera ready version.
train
[ "-Ze5CDpwf6m", "GmZRmiDioo5", "GoOvDgV6YL", "6uTjrJteYrN", "M71sx5sZvcE", "L0__UrdNxS-", "NShMk_IcoVx", "yjB1g_T10CJ", "hjU4Tf1MR0v", "HJa7L8TWiHK", "T1Hp2mkNYP", "ZADWaKZOThc", "8Llznd1jjhK", "9K5bQvtERNS", "w6cggOTWIgI", "bFyF9mpRJ1Y", "pydIITMp3Yl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply! Now I understand better the scalability difference between GraphSAINT and the proposed Sketch-GNN. I will raise my score accordingly. ", " I thank the authors for providing a detailed response to my questions. I was able to better understand the reason behind using identity matrix as the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "T1Hp2mkNYP", "ZADWaKZOThc", "bFyF9mpRJ1Y", "M71sx5sZvcE", "L0__UrdNxS-", "NShMk_IcoVx", "pydIITMp3Yl", "hjU4Tf1MR0v", "HJa7L8TWiHK", "bFyF9mpRJ1Y", "w6cggOTWIgI", "8Llznd1jjhK", "9K5bQvtERNS", "nips_2022_4PJbcrW_7wC", "nips_2022_4PJbcrW_7wC", "nips_2022_4PJbcrW_7wC", "nips_2022_4PJb...
nips_2022_oPzICxVFqVM
Score-based Generative Modeling Secretly Minimizes the Wasserstein Distance
Score-based generative models have been shown to achieve remarkable empirical performance in various applications such as image generation and audio synthesis. However, a theoretical understanding of score-based diffusion models is still incomplete. Recently, Song et al. showed that the training objective of score-based generative models is equivalent to minimizing the Kullback-Leibler divergence of the generated distribution from the data distribution. In this work, we show that score-based models also minimize the Wasserstein distance between them. Specifically, we prove that the Wasserstein distance is upper bounded by the square root of the objective function up to multiplicative constants and a fixed constant offset. Our proof is based on a novel application of the theory of optimal transport, which can be of independent interest to the community. Our numerical experiments support our findings. By analyzing our upper bounds, we provide a few techniques to obtain tighter upper bounds.
Accept
This paper considers a reverse process (4) of the diffusion process (1), and provides an upper bound (Theorem 1) of the Wasserstein distance $W_2(p_0,q_0)$ of a data distribution $p_0$ and the distribution $q_0$ defined by the reverse process at time $t=0$, in terms of the approximation error $J_I$ of the score function $\nabla\log p_t$ by $s_\theta(t)$ and the Wasserstein distance $W_2(p_T,q_T)$. It implies that, provided that $W_2(p_T,q_T)$ is zero, approximating the score function by $s_\theta$ via training will make $W_2(p_0,q_0)$ small, allowing generative modeling of $p_0$ by $q_0$. A more practical upper bound (Theorem 2), as well as an analysis on effects of perturbations of the data distribution $p_0$ (Theorem 3), is also provided. The review ratings/confidences were 8/2, 7/1, 6/4, and 5/3. The average is above the acceptance threshold. Upon reading the reviews, the author responses, as well as the paper itself, I found that the comments by the reviewers with lower ratings were mostly reasonable given the initial version of this paper which was difficult to follow in some places, as well as the fact that one would need simplifications like the continuous-time formulation and assumptions to guarantee integrability of relevant quantities, for theoretical development like the one conducted in this paper, but I found that most of them, especially the issue of the competition between $I(T)$ and $W_2(p_T,q_T)$ in the case of the Denoising Diffusion Probabilistic Models as summarized in Remark 4 in the revised version, were properly addressed in the revised version. I would therefore like to recommend acceptance, which would encourage further discussion among the attendees of the conference on more technical details. A few minor points: - Notational inconsistency: The authors used the notation $g^2(.)$ to mean $(g(.))^2$ and $I(.)^2$ to mean $(I(.))^2$, which seem inconsistent to me. 
- Equation (11): The integrand $L_f(r)+L_s(r)g^2(r)$ should be put in parentheses. - The term $\frac{1}{2}\log2$, which comes from the prefactor 2 in the square root in Equation (16), should be added to the right-hand side of Equation (19).
train
[ "cYx5VOcTxk", "72wWQJqGWlZ", "FRuA3mfsply", "vS5Kh3lLYS7", "7NHYArjAFwc", "rmWWEZonng7", "yd1TYH_NlB3U", "6SYRwLU3oV", "bIqPqIN232O", "ar4bK-JWFkkb", "cVMCVWo844h", "pR7NfrlRMRL", "E54goC9XYda", "iS9Zuu5tYmS", "LV_ddj7_wmP" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my questions.\n\nI understood that the present study deals with an ideal model of the continuous limit, and how to approximate this limit in a discrete manner can be another issue, which is an open question to be addressed in the next step using techniques of numerical analysis, dynamical ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 1, 4, 3 ]
[ "bIqPqIN232O", "bIqPqIN232O", "6SYRwLU3oV", "yd1TYH_NlB3U", "ar4bK-JWFkkb", "yd1TYH_NlB3U", "LV_ddj7_wmP", "iS9Zuu5tYmS", "E54goC9XYda", "pR7NfrlRMRL", "nips_2022_oPzICxVFqVM", "nips_2022_oPzICxVFqVM", "nips_2022_oPzICxVFqVM", "nips_2022_oPzICxVFqVM", "nips_2022_oPzICxVFqVM" ]
nips_2022_r4RRwBCPDv5
VC Theoretical Explanation of Double Descent
There has been growing interest in the generalization performance of large multilayer neural networks, which can be trained to achieve zero training error and yet generalize well on test data. This regime is known as ‘second descent’, and it appears to contradict the conventional view that optimal model complexity should reflect an optimal balance between underfitting and overfitting, aka the bias-variance trade-off. This paper presents a VC-theoretical analysis of double descent and shows that it can be fully explained by classical VC generalization bounds. We illustrate the application of analytic VC-bounds to modeling double descent for classification problems, using empirical results for several learning methods, such as SVM, Least Squares, and Multilayer Perceptron classifiers. In addition, we discuss several possible reasons for the misunderstanding of VC-theoretical results in the machine learning community.
Reject
Having read the paper on my own, I am, like one of the reviewers, not convinced by the authors' approach: The VC-bound (3) is actually of the following form (see Cherkassky and Mulier, page 420 for a statement that is more clearly formulated than those in Vapnik's books): Given a data set $D$ in the unit ball, the set $H_{\Delta,D}$ of all hyperplanes that correctly classify $D$ with margin $\Delta$, see (9.6) on page 419 for a definition of the latter, has VC-dimension $h \leq \min(\Delta^{-2}, N) + 1$. Now, to apply a generalization bound of Vapnik to this VC-estimate, one needs to fix this set $H_{\Delta,D}$ of hyperplanes, get a second data set $D_2$, and train a learning algorithm on this second data set $D_2$ that chooses a predictor from $H_{\Delta,D}$. This is the way one needs to interpret the rather informal corollary of Theorem 5.1 in Vapnik's [9] on page 133. Indeed, the same statements can be found in Vapnik's [8] on page 408, where he explicitly refers to results on page 148, in which the class of predictors is, of course, a priori fixed. (By the way, if the corollary was interpreted to hold on the original data set $D$, instead, then the first term $m/l$, which is the empirical training error, would always vanish.) Now, if we have a hyperplane $w$ with zero loss on $D$, then it is in $H_{||w||^{-1}, D}$ and one could apply the bounds as described above. But this is far from what is done in the paper. In summary, the paper has a major technical flaw, and for this reason it cannot be accepted.
train
[ "lY_YtZPepqj", "w6cb7fx8k_", "ZoLAb-2pr", "y2SiEfoQDKh", "MpM3VTTU2Rh", "41YoxQ23_HF" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for reviewing our paper. The reviewer raises two separate points, regarding applicability of VC bounds and the particular way of estimating VC dimension. Apparently, these points were raised for second descent regime, and they are addressed separately:\n\n- VC-bounds on test error, Eq (1) in...
[ -1, -1, -1, 2, 3, 7 ]
[ -1, -1, -1, 5, 3, 5 ]
[ "y2SiEfoQDKh", "MpM3VTTU2Rh", "41YoxQ23_HF", "nips_2022_r4RRwBCPDv5", "nips_2022_r4RRwBCPDv5", "nips_2022_r4RRwBCPDv5" ]
nips_2022__5rdhnrbl-z
Bayesian Spline Learning for Equation Discovery of Nonlinear Dynamics with Quantified Uncertainty
Nonlinear dynamics are ubiquitous in science and engineering applications, but the physics of most complex systems is far from being fully understood. Discovering interpretable governing equations from measurement data can help us understand and predict the behavior of complex dynamic systems. Although extensive work has recently been done in this field, robustly distilling explicit model forms from very sparse data with considerable noise remains intractable. Moreover, quantifying and propagating the uncertainty of the identified system from noisy data is challenging, and relevant literature is still limited. To bridge this gap, we develop a novel Bayesian spline learning framework to identify parsimonious governing equations of nonlinear (spatio)temporal dynamics from sparse, noisy data with quantified uncertainty. The proposed method utilizes spline basis to handle the data scarcity and measurement noise, upon which a group of derivatives can be accurately computed to form a library of candidate model terms. The equation residuals are used to inform the spline learning in a Bayesian manner, where approximate Bayesian uncertainty calibration techniques are employed to approximate posterior distributions of the trainable parameters. To promote the sparsity, an iterative sequential-threshold Bayesian learning approach is developed, using the alternative direction optimization strategy to systematically approximate L0 sparsity constraints. The proposed algorithm is evaluated on multiple nonlinear dynamical systems governed by canonical ordinary and partial differential equations, and the merit/superiority of the proposed method is demonstrated by comparison with state-of-the-art methods.
Accept
The reviewers are all leaning towards acceptance, with two having acknowledged the extensive author feedback. I encourage the authors to incorporate as clearly as possible the extensive work done during the rebuttal period into the text when updating their paper for publication.
train
[ "MTC6ydbKFD0", "8XBcMQTr_3M", "sLgCrC4ycA", "qHadV7L2oBp", "Ed3TcSWgIX3", "qOysr4GUUTs", "XbG9eAl935U", "qjJUODBM6Uw", "--SYL77QQZb", "cURcV0W7mor", "OWQ6CQiOSkN", "l4yC42UD9BjA", "8M8q0Pon2re", "SirWs1Bb-8L", "a5Nha8WwNc", "uMmSC9yYer", "m15J5gd6XMC", "syvn8LJcwKN", "z8WVOPlXS_W...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " Dear Reviewer R79g,\n\nThis is a friendly reminder that the discussion period is coming close to the end. Your comments and suggestions have been vital for helping improve the quality of our paper, which are greatly appreciated. We believe that your concerns have been fully addressed through our response below. W...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4, 3 ]
[ "v6k-dinHgvi", "sLgCrC4ycA", "qHadV7L2oBp", "Ed3TcSWgIX3", "qOysr4GUUTs", "--SYL77QQZb", "OWQ6CQiOSkN", "SO3XPh-rl8_", "b6VcFT_Rzh", "v6k-dinHgvi", "8M8q0Pon2re", "nips_2022__5rdhnrbl-z", "syvn8LJcwKN", "a5Nha8WwNc", "z8WVOPlXS_W", "m15J5gd6XMC", "boty59yO5MR", "-lrr4w4vudK", "7d...
nips_2022_eJM0aA5Qhhk
Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion
We propose a few-shot learning method for feature selection that can select relevant features given a small number of labeled instances. Existing methods require many labeled instances for accurate feature selection. However, sufficient instances are often unavailable. We use labeled instances in multiple related tasks to alleviate the lack of labeled instances in a target task. To measure the dependency between each feature and label, we use the Hilbert-Schmidt Independence Criterion, which is a kernel-based independence measure. By modeling the kernel functions with neural networks that take a few labeled instances in a task as input, we can encode the task-specific information to the kernels such that the kernels are appropriate for the task. Feature selection with such kernels is performed by using iterative optimization methods, in which each update step is obtained in closed form. This formulation enables us to directly and efficiently minimize the expected test error on features selected by a small number of labeled instances. We experimentally demonstrate that the proposed method outperforms existing feature selection methods.
Accept
The paper studies the problem of feature selection with few labelled samples. It develops an optimization framework that applies to both regression and classification: the regression setting is explored for exposition, and the classification details are in the appendix. The use of permutation-invariant neural networks and multi-task learning are interesting angles that help the paper demonstrate that features can be selected even when the number of labelled examples is small.
train
[ "uSX01UX4y-", "WZ0SEJ8YTA", "pH1kKTL1BtX", "N8Mfm6xTLr", "fEQiOuwKaPQ", "e9MTVCDg9_T", "_qSv4y1Skv2", "Mz-YzQN_k8", "2wYr1dheUZ", "gtrOO7EVvo7", "jsH-hUwrie", "zzj11C3JuoA", "aoQjwFdHLfs" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal. After reading the rebuttal, I would like to rise my score.", " I thank the authors for addressing my comments. The clarifications are helpful and should be incorporated into the paper. I agree that the HSIC is a valuable statistical tool and very useful for feature selection. Nonetheles...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "fEQiOuwKaPQ", "_qSv4y1Skv2", "N8Mfm6xTLr", "aoQjwFdHLfs", "e9MTVCDg9_T", "zzj11C3JuoA", "Mz-YzQN_k8", "jsH-hUwrie", "gtrOO7EVvo7", "nips_2022_eJM0aA5Qhhk", "nips_2022_eJM0aA5Qhhk", "nips_2022_eJM0aA5Qhhk", "nips_2022_eJM0aA5Qhhk" ]
nips_2022_8ON84BdnSn
Understanding Hyperdimensional Computing for Parallel Single-Pass Learning
Hyperdimensional computing (HDC) is an emerging learning paradigm that computes with high dimensional binary vectors. There is an active line of research on HDC in the community of emerging hardware because of its energy efficiency and ultra-low latency---but HDC suffers from low model accuracy, with little theoretical understanding of what limits its performance. We propose a new theoretical analysis of the limits of HDC via a consideration of what similarity matrices can be ``expressed'' by binary vectors, and we show how the limits of HDC can be approached using random Fourier features (RFF). We extend our analysis to the more general class of vector symbolic architectures (VSA), which compute with high-dimensional vectors (hypervectors) that are not necessarily binary. We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC. Using representation theory, we characterize which similarity matrices can be ``expressed'' by finite group VSA hypervectors, and we show how these VSAs can be constructed. Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6\% while maintaining hardware efficiency. This work aims to inspire future interest in HDC in the ML community and to connect it to the hardware community.
Accept
This paper analyzes some existing limitations of hyperdimensional computing (HDC), and proposes new techniques to alleviate them, showing improvements in model quality while maintaining hardware efficiency. There is a strong consensus among all reviewers that this is a solid submission whose contributions are novel, insightful, and likely to inspire future work in this important direction. Reviewers' questions and concerns were adequately addressed by the authors during the discussion period, making it a clear "Accept".
train
[ "dvb70nh2qgm", "T8-_ZI0gi9K", "B3Gq6llD4Pl", "wDNjL7H4pK7", "Y3vpkQ3_FsV", "FxTNNu4BR_J", "sMgF-Tm_bbk", "X5Nd4Ie8vdt", "LXj0HIISVIi", "PM0Xv8ExSb8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The main concerns of the reviewer have been addressed. The reviewer is raising the score. ", " Thank you for addressing my concerns. I have no further questions or comments.", " We thanks the reviewer for regarding this paper as novel and highly readable with a good survey of prior work, please see below for ...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 3, 2 ]
[ "wDNjL7H4pK7", "FxTNNu4BR_J", "PM0Xv8ExSb8", "LXj0HIISVIi", "X5Nd4Ie8vdt", "sMgF-Tm_bbk", "nips_2022_8ON84BdnSn", "nips_2022_8ON84BdnSn", "nips_2022_8ON84BdnSn", "nips_2022_8ON84BdnSn" ]
nips_2022_APQY2WZFZkd
Rapidly Mixing Multiple-try Metropolis Algorithms for Model Selection Problems
The multiple-try Metropolis (MTM) algorithm is an extension of the Metropolis-Hastings (MH) algorithm by selecting the proposed state among multiple trials according to some weight function. Although MTM has gained great popularity owing to its faster empirical convergence and mixing than the standard MH algorithm, its theoretical mixing property is rarely studied in the literature due to its complex proposal scheme. We prove that MTM can achieve a mixing time bound smaller than that of MH by a factor of the number of trials under a general setting applicable to high-dimensional model selection problems with discrete state spaces. Our theoretical results motivate a new class of weight functions called locally balanced weight functions and guide the choice of the number of trials, which leads to improved performance over standard MTM algorithms. We support our theoretical results by extensive simulation studies and real data applications with several Bayesian model selection problems.
Accept
Based on the reviews and discussions, we are happy to recommend acceptance. Please make sure that all comments in the discussion threads are taken into account in the final version of the manuscript.
train
[ "kgJSwCsx6Ic", "o4qR84V-FlM", "oTrjh7iuchR", "BbVBFE3NaXy", "g4nI1w0BL6P", "ZkT1o87KBzK", "DPkBvoC8Wx", "HO-oMopEjJw", "Ag4Gellsu-", "D67BE0OBvSd", "LrN1fVRyCe8", "qz25sA4hXaL", "N4XTVNKh3ec" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed answer!", " Thank you so much for the constructive review. Since there are 4 major overlapped comments raised by multiple reviewers, we created separate “**Response to all**” threads to address these common concerns. We also uploaded ‘**rebuttal.pdf**’ in the supplementary zip file, whic...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "BbVBFE3NaXy", "N4XTVNKh3ec", "qz25sA4hXaL", "g4nI1w0BL6P", "LrN1fVRyCe8", "D67BE0OBvSd", "D67BE0OBvSd", "D67BE0OBvSd", "D67BE0OBvSd", "nips_2022_APQY2WZFZkd", "nips_2022_APQY2WZFZkd", "nips_2022_APQY2WZFZkd", "nips_2022_APQY2WZFZkd" ]
nips_2022_D4fuQ1MveDM
Learning Options via Compression
Identifying statistical regularities in solutions to some tasks in multi-task reinforcement learning can accelerate the learning of new tasks. Skill learning offers one way of identifying these regularities by decomposing pre-collected experiences into a sequence of skills. A popular approach to skill learning is maximizing the likelihood of the pre-collected experience with latent variable models, where the latent variables represent the skills. However, there are often many solutions that maximize the likelihood equally well, including degenerate solutions. To address this underspecification, we propose a new objective that combines the maximum likelihood objective with a penalty on the description length of the skills. This penalty incentivizes the skills to maximally extract common structures from the experiences. Empirically, our objective learns skills that solve downstream tasks in fewer samples compared to skills learned from only maximizing likelihood. Further, while most prior works in the offline multi-task setting focus on tasks with low-dimensional observations, our objective can scale to challenging tasks with high-dimensional image observations.
Accept
This paper studies the problem of learning options in multi-task reinforcement learning. The authors note that previous works optimize an underspecified objective, and they propose adding an extra term to the objective function that relates to the description lengths of skills. The authors study their approach and show empirically that it scales to high-dimensional problems and performs well compared to previous approaches. The discovered skills can also be used to solve new tasks using fewer samples than previous approaches. The initial reviews were overall very positive for this paper. During the author-reviewer discussion, the authors provided additional results and satisfying answers to most reviewer comments. As a result, the reviewers are unanimous that this work should be accepted, and the discussion period did not reveal any other elements to report here. I am pleased to recommend acceptance. Congratulations! It seems like the current version of your manuscript already addresses most if not all of the points raised by reviewers. In addition, please do not forget to discuss the limitations of your current work, including, e.g., different domains that it might (not) work in (see the comment from reviewer CrHt).
train
[ "mL274PBNcuu", "B2CVTrt65O", "pzfHsfWWAlyB", "T6SlM9-Y5NB", "h3rOHdGR9Txw", "zy9N8imUUsQ", "ENiVJ9OvkJ", "oEbDMFL9gjP", "2K3andKXA1V", "SwtC47qyZX", "isSq-p2cGfM", "r0O_f9NHtAj", "lJ3dLTad0E", "Tg6IDEDw229", "ZdSenq9BJd1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer CrHt,\n\nThank you for the suggestions for improving the paper. Our above response includes two additional experiments to address questions raised in the review, experiments that we believe further strengthen the paper. Together with the discussion above, **have all the concerns been addressed?** If...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "ENiVJ9OvkJ", "Tg6IDEDw229", "Tg6IDEDw229", "Tg6IDEDw229", "nips_2022_D4fuQ1MveDM", "ZdSenq9BJd1", "r0O_f9NHtAj", "2K3andKXA1V", "ZdSenq9BJd1", "Tg6IDEDw229", "lJ3dLTad0E", "nips_2022_D4fuQ1MveDM", "nips_2022_D4fuQ1MveDM", "nips_2022_D4fuQ1MveDM", "nips_2022_D4fuQ1MveDM" ]
nips_2022_Xt9smkoTgQf
Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs due to their impressive capability of capturing nonlinear relations in graph-structured data. However, for node classification tasks, often, only marginal improvement of GNNs has been observed in practice over their linear counterparts. Previous works provide little understanding of this phenomenon. In this work, we resort to Bayesian learning to give an in-depth investigation of the functions of non-linearity in GNNs for node classification tasks. Given a graph generated from the statistical model CSBM, we observe that the maximum a posteriori estimation of a node label given its own and neighbors' attributes consists of two types of non-linearity, the transformation of node attributes and a ReLU-activated feature aggregation from neighbors. The latter surprisingly matches the type of non-linearity used in many GNN models. By further imposing a Gaussian assumption on node attributes, we prove that the superiority of those ReLU activations is only significant when the node attributes are far more informative than the graph structure, which nicely explains previous empirical observations. A similar argument is derived when there is a distribution shift of node attributes between the training and testing datasets. Finally, we verify our theory on both synthetic and real-world networks. Our code is available at <https://github.com/Graph-COM/Bayesian_inference_based_GNN.git>.
Accept
The paper studies the functions of non-linearity in GNNs for node classification tasks, through Bayesian learning. It considers graphs from the statistical model CSBM and shows that the maximum a posteriori estimation for the label has two types of non-linearity. With a Gaussian assumption on the node attributes, it further proves that the second type of nonlinearity is only superior when the attributes are far more informative than the graph structure; similarly when there is a distribution shift between training and testing. It also provided verification experiments on synthetic and real data. The paper considers an important topic and has provided a novel in-depth investigation. Some concerns of the reviewers are: 1. Interpretation of the theoretical results. The theorems are quite technical, and more elaboration on their implications would help readers better appreciate the results. The authors have provided some more clarification in the response. 2. Connections between the theoretical models and the practical ones. The analysis makes some assumptions, e.g., dense CSBM graphs, Gaussian attributes. On the other hand, some simplifying assumptions are necessary for theoretical study. The empirical verification also matches the theory well.
train
[ "_NUyZV-nX_", "SUNstvCqLxX", "X7fG6Mv7Qs5", "lGfLV1FN4qQ_", "fJ5X56mKolQ", "JaZyfllppg2", "gmASNMqkxi6", "6g4ClnRfUM", "yURR0rPQ74q", "VN9iQOzWSgA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response, and my concerns are addressed. Therefore, I would keep my score which I think is a fair assessment of the paper.", " I appreciate the authors' response, which answers most of my questions. After checking other reviews' comments, I did not have new concerns. Thus, I...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "fJ5X56mKolQ", "JaZyfllppg2", "lGfLV1FN4qQ_", "VN9iQOzWSgA", "yURR0rPQ74q", "6g4ClnRfUM", "nips_2022_Xt9smkoTgQf", "nips_2022_Xt9smkoTgQf", "nips_2022_Xt9smkoTgQf", "nips_2022_Xt9smkoTgQf" ]
nips_2022_agQGDz6gPOo
Improving Self-Supervised Learning by Characterizing Idealized Representations
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear what characteristics of their representations lead to high downstream accuracies. In this work, we characterize properties that SSL representations should ideally satisfy. Specifically, we prove necessary and sufficient conditions such that for any task invariant to given data augmentations, probes (e.g., linear or MLP) trained on that representation attain perfect accuracy. These requirements lead to a unifying conceptual framework for improving existing SSL methods and deriving new ones. For contrastive learning, our framework prescribes simple but significant improvements to previous methods such as using asymmetric projection heads. For non-contrastive learning, we use our framework to derive a simple and novel objective. Our resulting SSL algorithms outperform baselines on standard benchmarks, including SwAV+multicrops on linear probing of ImageNet.
Accept
Decision: Accept This paper provides a theoretical analysis of the invariance properties of representations learned by self-supervised learning, and derives an algorithmic framework from the theory which includes approaches similar to existing SSL methods. Empirical results demonstrate the competitiveness of the proposed approach. Reviewers found the proposed approach simple but effective, and the related work and limitations are discussed. Initially there were concerns regarding novelty & comparisons to previous work, which were largely addressed in the author feedback & revision. After reviewer-AC discussions, it was concluded that we should accept this paper. In the revision for camera ready, I'd suggest the authors follow what they promised in their summary feedback, and improve the clarity of the presentation, especially in section 4 as suggested by one of the reviewers. The additional results provided by the authors in the feedback period should also be added.
train
[ "8m_QiC7j2c", "cHmDIeQyuN6", "POxHW3sQMOp", "YkZyBuWqQEn", "u9M2EbKNeVe", "be66qDgMsmQ", "JdqXTIZWefg", "Sqx0cIQP2w0", "zmYVI18880D", "2psJE8rirk", "3Zs_QW6oSh2", "jnIi8pEV-s", "_ZQPp_Af2w_", "aNOU3YqA0X", "U87mUrz9Qd3", "YuCOCFZ1zpG", "_HDCajYyTgV", "WU_LVkjbhdg", "l6b3b7uMwaL",...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " We uploaded the updated manuscript and appendixes with every additional experiment that reviewers have suggested.\n\nIn particular, we added to the main paper the following results that compare DISSL and SwAV on the standard transfer learning benchmarks as suggested by [bMPi,j7cH,L1ta].\n\n| | Food ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "nips_2022_agQGDz6gPOo", "4il0TR1vPYV", "Mt4e7XsMkd", "3Zs_QW6oSh2", "be66qDgMsmQ", "aNOU3YqA0X", "RLGb3Ll27ty", "nips_2022_agQGDz6gPOo", "Mt4e7XsMkd", "4il0TR1vPYV", "uCkkWQ8xAiZ", "uCkkWQ8xAiZ", "uCkkWQ8xAiZ", "RLGb3Ll27ty", "RLGb3Ll27ty", "RLGb3Ll27ty", "RLGb3Ll27ty", "RLGb3Ll27...
nips_2022_rwyISFoSmXd
Conformalized Fairness via Quantile Regression
Algorithmic fairness has received increased attention in socially sensitive domains. While a rich literature on mean fairness has been established, research on quantile fairness remains sparse but vital. To fulfill these needs and advocate for the significance of quantile fairness, we propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity with respect to sensitive attributes, such as race or gender, and thereby derive a reliable fair prediction interval. Using optimal transport and functional synchronization techniques, we establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles. A hands-on pipeline is provided to incorporate flexible quantile regressions with an efficient fairness adjustment post-processing algorithm. We demonstrate the superior empirical performance of this approach on several benchmark datasets. Our results show the model’s ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
Accept
This work extends the group-level fairness definitions (that were primarily established for supervised learning tasks) to the problem of conformalized quantile regression. A conceptual contribution is to redefine the group-level fairness using the average prediction to quantiles. Based on this adaptation, the authors further developed a postprocessing technique to revise a trained quantile regressor to satisfy the modified fairness definition for quantiles. All reviewers acknowledged that the paper is reasonably written and the main idea is delivered smoothly. A mixture of theoretical and experimental results is provided. Some questions raised prior to the rebuttal were successfully addressed, including clarifying the running time of computing quantiles, comparing the method to other fair approaches, and addressing a partial misinterpretation of DP. The authors are strongly encouraged to incorporate these comments into the final version. Some reviewers had remaining questions about the novelty of the paper in light of prior results, but the meta reviewer feels the introduced concept of fairness on quantiles can be a good addition to the literature and might inspire follow-up works.
test
[ "7nYTEqjyyq", "YFiVuwWeOJ", "AobaZ-L8tez", "ZOhUehMgu-", "pSG-kw4TSV", "DGYn1sIph2M", "E_ZRpj48rDw", "b-TBJkl97v", "nKGAOUHjk7v", "ZIje5L6A8Hw", "s3m7AvuIqhV", "wjY5jAukp-e6", "KabWpxJ3UMz", "2uR4sKxDmX6", "8xqCNd8hoyt", "Ow388PRB0u", "e70gchQDnAF", "jvSZ9A8Y6jf", "3jBtJxMJtc0", ...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you so much for the feedback.\n\n- In the current experiment, the kernel we used is the local linear one in defining the quantile functions of subgroups rather than the global kernels as we mentioned in the previous reply. In short, when calculating the $\\tau$-th quantile $x_\\tau$ of $q_\\alpha$ using lo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "ZIje5L6A8Hw", "AobaZ-L8tez", "ZOhUehMgu-", "pSG-kw4TSV", "b-TBJkl97v", "e70gchQDnAF", "s3m7AvuIqhV", "nKGAOUHjk7v", "wjY5jAukp-e6", "2uR4sKxDmX6", "jvSZ9A8Y6jf", "KabWpxJ3UMz", "58k6k-3CCDc", "3jBtJxMJtc0", "Ow388PRB0u", "jvSZ9A8Y6jf", "fix4PEEGQk9", "nips_2022_rwyISFoSmXd", "ni...
nips_2022_VK9jfSPnnb
A Simple and Optimal Policy Design for Online Learning with Safety against Heavy-tailed Risk
We consider the classical multi-armed bandit problem and design simple-to-implement new policies that simultaneously enjoy two properties: worst-case optimality for the expected regret, and safety against heavy-tailed risk for the regret distribution. Recently, Fan and Glynn (2021) showed that information-theoretic optimized bandit policies as well as standard UCB policies suffer from some serious heavy-tailed risk; that is, the probability of incurring a linear regret slowly decays at a polynomial rate of $1/T$, as $T$ (the time horizon) increases. Inspired by their result, we further show that any policy that incurs an instance-dependent $O(\ln T)$ regret must incur a linear regret with probability $\Omega(\mathrm{poly}(1/T))$ and that the heavy-tailed risk actually exists for all "instance-dependent consistent" policies. Next, for the two-armed bandit setting, we provide a simple policy design that (i) has the worst-case optimality for the expected regret at order $\tilde O(\sqrt{T})$ and (ii) has the worst-case tail probability of incurring a linear regret decay at an exponential rate $\exp(-\Omega(\sqrt{T}))$. We further prove that this exponential decay rate of the tail probability is optimal across all policies that have worst-case optimality for the expected regret. Finally, we generalize the policy design and analysis to the general setting with an arbitrary number $K$ of arms. We provide a detailed characterization of the tail probability bound for any regret threshold under our policy design. Numerical experiments are conducted to illustrate the theoretical findings. Our results reveal insights on the incompatibility between consistency and light-tailed risk, while indicating that worst-case optimality on expected regret and light-tailed risk are compatible.
Accept
The reviewers came to a consensus that this paper makes a good contribution to the study of the tail behavior of regret in bandit problems. I agree with their assessment; please polish the paper so that the minor concerns raised by the reviewers are addressed in the final version.
train
[ "WG1tLs8SoOT", "onACBQWL0A-", "ITlHcFP5rt1", "n-bb2VPOS2", "bEBbjL8gXj0", "OnLjxq2o_53", "j-enjWwr7SJ", "YRmQcbUqz02", "5RBphff5gG", "jeb0FmqXNqz", "pJjzPD4K5SY", "TwwHGNt32Xy", "4gJLYQlXLe", "vrY8T8SlHQ5" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the note and the clarification on the general distribution comment. We have followed the suggestion to add the multiplicative error case to the appendix. For the case with non-sub-Gaussian errors, we have added a brief discussion. We look to add more discussions on non-sub-Gaussian errors and an emp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "n-bb2VPOS2", "ITlHcFP5rt1", "YRmQcbUqz02", "OnLjxq2o_53", "vrY8T8SlHQ5", "vrY8T8SlHQ5", "4gJLYQlXLe", "TwwHGNt32Xy", "pJjzPD4K5SY", "nips_2022_VK9jfSPnnb", "nips_2022_VK9jfSPnnb", "nips_2022_VK9jfSPnnb", "nips_2022_VK9jfSPnnb", "nips_2022_VK9jfSPnnb" ]
nips_2022_lHuPdoHBxbg
C2FAR: Coarse-to-Fine Autoregressive Networks for Precise Probabilistic Forecasting
We present coarse-to-fine autoregressive networks (C2FAR), a method for modeling the probability distribution of univariate, numeric random variables. C2FAR generates a hierarchical, coarse-to-fine discretization of a variable autoregressively; progressively finer intervals of support are generated from a sequence of binned distributions, where each distribution is conditioned on previously-generated coarser intervals. Unlike prior (flat) binned distributions, C2FAR can represent values with exponentially higher precision, for only a linear increase in complexity. We use C2FAR for probabilistic forecasting via a recurrent neural network, thus modeling time series autoregressively in both space and time. C2FAR is the first method to simultaneously handle discrete and continuous series of arbitrary scale and distribution shape. This flexibility enables a variety of time series use cases, including anomaly detection, interpolation, and compression. C2FAR achieves improvements over the state-of-the-art on several benchmark forecasting datasets.
Accept
The reviewers highlight the clarity of the writing, the effectiveness of the proposed approach resulting in SOTA performance, and in particular the comprehensive and clear empirical evaluation with ablation studies and stability experiments, as well as a laudable discussion of the limitations of the proposed approach. The authors were able to address reviewers' concerns around baselines and comparison partners during the discussion period.
train
[ "IP5_3Yiyp9N", "ooybypLh7jl", "Jik_OxY4n", "RJQ_Bz9SlnJ", "NnKuutpbJ92", "tVtCNMluLv-", "McqxgbhU-BGX", "_vSVZx6MYHZ", "xxs1uIuq7gZ", "4SERAX2tKhX", "8c44MGMMfvJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response, and for their response to the other authors.\nThey were able to provide a lot of clarifications to the method in their answer, and I hope that they will be able to improve their revision accordingly.\nWith this I raise my score from 5 to 6.", " Thank you for addi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "Jik_OxY4n", "NnKuutpbJ92", "RJQ_Bz9SlnJ", "8c44MGMMfvJ", "tVtCNMluLv-", "4SERAX2tKhX", "_vSVZx6MYHZ", "xxs1uIuq7gZ", "nips_2022_lHuPdoHBxbg", "nips_2022_lHuPdoHBxbg", "nips_2022_lHuPdoHBxbg" ]
nips_2022_ZJqqSa8FsH9
Chain of Thought Imitation with Procedure Cloning
Imitation learning aims to extract high-performance policies from logged demonstrations of expert behavior. It is common to frame imitation learning as a supervised learning problem in which one fits a function approximator to the input-output mapping exhibited by the logged demonstrations (input observations to output actions). While the framing of imitation learning as a supervised input-output learning problem allows for applicability in a wide variety of settings, it is also an overly simplistic view of the problem in situations where the expert demonstrations provide much richer insight into expert behavior. For example, applications such as path navigation, robot manipulation, and strategy games acquire expert demonstrations via planning, search, or some other multi-step algorithm, revealing not just the output action to be imitated but also the procedure for how to determine this action. While these intermediate computations may use tools not available to the agent during inference (e.g., environment simulators), they are nevertheless informative as a way to explain an expert’s mapping of state to actions. To properly leverage expert procedure information without relying on the privileged tools the expert may have used to perform the procedure, we propose procedure cloning, which applies supervised sequence prediction to imitate the complete series of expert computations. This way, procedure cloning learns not only what to do (i.e., the output action), but how and why to do it (i.e., the procedure). Through empirical analysis on navigation, simulated robotic manipulation, and game-playing environments, we show that imitating the intermediate computations of an expert’s behavior enables procedure cloning to learn policies exhibiting significant generalization to unseen environment configurations, including those configurations for which running the expert’s procedure directly is infeasible.
Accept
All reviewers acknowledged the rebuttal or replied to the author responses. The paper proposes to clone procedures in imitation learning rather than blindly mimicking actions. One reviewer is still not convinced that the experiments are sufficient, while I believe they are solid enough for a conference paper. The relation to other methods that take a similar conceptual approach at a high level is now clear, but should be incorporated into the main paper.
train
[ "mAobtDgI7-B", "AlHeBUb3I0p", "V06kb1Wl4y_Q", "KZgfBG_4r-We", "4rqS-oNduNJo", "y7NAt_hl_HwL", "kSh4OOM-1U_", "hTPLywdDBJC", "4-V0GIPs7D3", "y-rWZnZ_Ex", "ugCrl9c4bba", "eaPCJxbGc8U", "M_77O52Sl2h", "DcOmN5_zSBe", "4am6ZzLZTwr", "Q0cy1ZXImlK" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the follow-up. We will be sure to include a visualization of Aux BC's predictions in the final paper. We also believe that our evaluation across navigation, manipulation, and game play are comprehensive to demonstrate PC's advantage.", " I would like to thank the authors for their timely responses...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "AlHeBUb3I0p", "4rqS-oNduNJo", "hTPLywdDBJC", "hTPLywdDBJC", "y-rWZnZ_Ex", "ugCrl9c4bba", "eaPCJxbGc8U", "4-V0GIPs7D3", "Q0cy1ZXImlK", "4am6ZzLZTwr", "DcOmN5_zSBe", "M_77O52Sl2h", "nips_2022_ZJqqSa8FsH9", "nips_2022_ZJqqSa8FsH9", "nips_2022_ZJqqSa8FsH9", "nips_2022_ZJqqSa8FsH9" ]
nips_2022_MbBTrAvee-N
Hedging as Reward Augmentation in Probabilistic Graphical Models
Most people associate the term `hedging' exclusively with financial applications, particularly the use of financial derivatives. We argue that hedging is an activity that human and machine agents should engage in more broadly, even when the agent's value is not necessarily in monetary units. In this paper, we propose a decision-theoretic view of hedging based on augmenting a probabilistic graphical model -- specifically a Bayesian network or an influence diagram -- with a reward. Hedging is therefore posed as a particular kind of graph manipulation, and can be viewed as analogous to control/intervention and information gathering related analysis. Effective hedging occurs when a risk-averse agent finds opportunity to balance uncertain rewards in their current situation. We illustrate the concepts with examples and counter-examples, and conduct experiments to demonstrate the properties and applicability of the proposed computational tools that enable agents to proactively identify potential hedging opportunities in real-world situations.
Accept
This paper proposes a decision-theoretic view of hedging within the framework of probabilistic graphical models augmented with a reward. After reading each other's reviews and the authors' feedback, the reviewers have solved most of their concerns and agree that the paper deserves publication. However, the authors need to seriously consider the reviewers' suggestions for making their paper clearer in the camera-ready version.
val
[ "alfgfay74M8", "npyyqJU8Qmu", "kVfidbo8Uye", "bpmneNS_eZdp", "fy_xkpgT-dn", "UNPwPvFEOl", "aV1GTFizPex", "O7qIJR-JPjc", "L2qx-TIuf_o", "sjKoB_vWbo" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The CE is not the expected value of the reward in general, except for the case when the decision maker is risk-neutral. Note that in the example provided, CE = u^{-1}(p * u1 + (1-p) * u2). It only equals the expected reward of EV = p * v1 + (1-p) * v2, when u(v) = v (risk-neutral). For a risk-averse decision make...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "npyyqJU8Qmu", "aV1GTFizPex", "bpmneNS_eZdp", "fy_xkpgT-dn", "sjKoB_vWbo", "L2qx-TIuf_o", "O7qIJR-JPjc", "nips_2022_MbBTrAvee-N", "nips_2022_MbBTrAvee-N", "nips_2022_MbBTrAvee-N" ]
nips_2022_7vlIVOBKarp
Multiview Human Body Reconstruction from Uncalibrated Cameras
We present a new method to reconstruct 3D human body pose and shape by fusing visual features from multiview images captured by uncalibrated cameras. Existing multiview approaches often use spatial camera calibration (intrinsic and extrinsic parameters) to geometrically align and fuse visual features. Despite remarkable performance, the requirement of camera calibration restricts their applicability to real-world scenarios, e.g., reconstruction from social videos with wide-baseline cameras. We address this challenge by leveraging the commonly observed human body as a semantic calibration target, which eliminates the requirement of camera calibration. Specifically, we map per-pixel image features to a canonical body surface coordinate system agnostic to views and poses using dense keypoints (correspondences). This feature mapping allows us to semantically, instead of geometrically, align and fuse visual features from multiview images. We learn a self-attention mechanism to reason about the confidence of visual features across and within views. With the fused visual features, a regressor is learned to predict the parameters of a body model. We demonstrate that our calibration-free multiview fusion method reliably reconstructs 3D body pose and shape, outperforming state-of-the-art single-view methods with post-hoc multiview fusion, particularly in the presence of non-trivial occlusion, and showing comparable accuracy to multiview methods that require calibration.
Accept
This paper presents a calibration-free approach (as traditionally understood) for 3D modeling of humans using multi-view images. The idea of using the human body as a means of providing "semantic calibration" is interesting. The method is scalable with the number of cameras. The paper received three 7s and two 6s. The authors and reviewers had several engaging conversations. The reviewers have also provided several suggestions for improving the presentation of the paper.
train
[ "-R6njOFUu8R", "HxXmxdoy-fV", "V3UXuhQfFPM", "SqqrFI18Fw2", "_6enfmPPKs", "ke0AIzoSAEa", "CSW468pGcMO", "d3Du8pShSoB", "KI-LAWnyLDx", "F1lEVxCkdv1" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThank you for the constructive review!\n\n[W1]\n\nMultiple view videos are prevalent in social media, e.g., social videos. The majority of these videos form a wide baseline system of cameras where both intrinsic and extrinsic calibration is infeasible in practice due to lack of visual correspondences. In fact, ...
[ -1, -1, -1, -1, -1, 6, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 5, 3 ]
[ "ke0AIzoSAEa", "F1lEVxCkdv1", "KI-LAWnyLDx", "CSW468pGcMO", "d3Du8pShSoB", "nips_2022_7vlIVOBKarp", "nips_2022_7vlIVOBKarp", "nips_2022_7vlIVOBKarp", "nips_2022_7vlIVOBKarp", "nips_2022_7vlIVOBKarp" ]
nips_2022_493VFz-ZvDD
Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training
Recently, sparse training has emerged as a promising paradigm for efficient deep learning on edge devices. Current research mainly devotes its efforts to reducing training costs by further increasing model sparsity. However, increasing sparsity is not always ideal, since it inevitably introduces severe accuracy degradation at extremely high sparsity levels. This paper explores other possible directions to effectively and efficiently reduce sparse training costs while preserving accuracy. To this end, we investigate two techniques, namely, layer freezing and data sieving. First, the layer freezing approach has shown success in dense model training and fine-tuning, yet it has never been adopted in the sparse training domain. Nevertheless, the unique characteristics of sparse training may hinder the incorporation of layer freezing techniques. Therefore, we analyze the feasibility and potential of using the layer freezing technique in sparse training and find that it can save considerable training costs. Second, we propose a data sieving method for dataset-efficient training, which further reduces training costs by ensuring that only a partial dataset is used throughout the entire training process. We show that both techniques can be well incorporated into the sparse training algorithm to form a generic framework, which we dub SpFDE. Our extensive experiments demonstrate that SpFDE can significantly reduce training costs while preserving accuracy along three dimensions: weight sparsity, layer freezing, and dataset sieving. Our code and models will be released.
Accept
The paper provides methods to improve the task of sparse training. The reviewers agree that the idea is well motivated and novel, and that the paper brings insights to sparse training that would be of interest to the community. The experiments are quite extensive and show that these methods improve the Pareto curve of training-process FLOPs vs. obtained model quality. One of the reviewers raised several issues about the paper, questioning the soundness of the experiments and method. After reviewing the discussion, the major issues seem to be not fundamental flaws but unclear details in the paper. Since these are clarified in the discussion, I view them as minor issues that can be fixed towards a camera-ready version, and I urge the authors to carefully go over the reviews and revise the paper to be more clear. In conclusion, the consensus around novelty and the overall positive feedback, which remained positive through the discussion phase, lead me to believe the advantages of the paper outweigh its flaws.
val
[ "JajhWMMbd7p", "7N8lBt7kLfu", "vaGpDgsWsqJ", "IXLm98vyqj", "w4UbBNksCe_", "6hoYz_lpuk", "NJmjI6TTxkK", "RIdnIsAqJ9m", "j2y11HW7sn3", "RoSmbtoWHSf", "vNogKx2EnO-", "S0rk1I_gOv", "lFJqtizDw6H", "RILJmMhGrt1", "F4P7MIQ62jS", "253tKPz4PJy", "KLFfCId86F", "fW5aN020ybs", "XQDNaujjsk", ...
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", ...
[ " First, we want to clarify that all the accuracy results in the paper and rebuttal are averaged over **three training processes with different random seeds**. \n\nSecond, regarding the reviewer’s statement that “*In Table Q, on CIFAR100 a 0.27% percentage point increase in accuracy may not be significant*”, we re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "6hoYz_lpuk", "IXLm98vyqj", "S0rk1I_gOv", "j6SF8xzq9wr", "RoSmbtoWHSf", "SOQhGxNWMFs", "j2y11HW7sn3", "vNogKx2EnO-", "YcI5NRl-7VO", "UbKooPuKNfN", "uEw1IZxAn7H", "sxzqrCp_Ngb", "hNuD-9dkcnI", "F4P7MIQ62jS", "IP5tKYAZb7k", "IP5tKYAZb7k", "fW5aN020ybs", "5SfKI3a1pG", "g3PV4k9zajq",...
nips_2022_NSWNgQgoF71
Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation
Lipschitz constants are connected to many properties of neural networks, such as robustness, fairness, and generalization. Existing methods for computing Lipschitz constants either produce relatively loose upper bounds or are limited to small networks. In this paper, we develop an efficient framework for computing the $\ell_\infty$ local Lipschitz constant of a neural network by tightly upper bounding the norm of the Clarke Jacobian via linear bound propagation. We formulate the computation of local Lipschitz constants as a linear bound propagation process on a high-order backward graph induced by the chain rule of the Clarke Jacobian. To enable linear bound propagation, we derive tight linear relaxations for specific nonlinearities in the Clarke Jacobian. This formulation unifies existing ad-hoc approaches such as RecurJac, which can be seen as a special case of ours with weaker relaxations. The bound propagation framework also allows us to easily borrow the popular Branch-and-Bound (BaB) approach from neural network verification to further tighten Lipschitz constants. Experiments show that on tiny models, our method produces bounds comparable to those of exact methods that cannot scale to slightly larger models; on larger models, our method efficiently produces tighter results than existing relaxed or naive methods, and it scales to much larger practical models that previous works could not handle. We also demonstrate an application to provable monotonicity analysis. Code is available at https://github.com/shizhouxing/Local-Lipschitz-Constants.
Accept
The paper develops a methodology for computing the Lipschitz constant of ReLU neural networks (when the input perturbations are measured in the L_infinity norm). The method is based on computing tight upper bounds on the Clarke Jacobian. The basic idea is to apply Interval Bound Propagation techniques to the backward computational graph, which yields an upper bound on the norm of the Clarke Jacobian of the network. Experimental results show superiority over SOTA in terms of scalability, runtime, and the computed bound. The reviewers had a number of concerns, most of which were addressed during the discussion phase. I recommend that the authors revise the paper and the experiments according to the reviewers' comments as well as their own responses. The paper was also discussed among the reviewers. One main point of discussion was the novelty of the work (compared to prior art, e.g. Lyu et al) and the fact that bounding the norm of the Clarke Jacobian seems to be beneficial only for L_infinity perturbations. However, some reviewers argued (and I agree) that the paper improves over SOTA methods quite notably in terms of efficiency and tightness, and the method scales to much larger models compared to prior works (scale is actually an important challenge in this topic). As a result, I would vote for accepting the paper. As a matter of taste, I don't think I agree with this sentence in the abstract (and similar sentences in other parts of the paper): "Existing methods for computing Lipschitz constants either are computationally inefficient or produce loose upper bounds." We do have good methods that provide non-trivial upper bounds on the Lipschitz constant of NNs. While I agree that the scalability of those methods is still to be improved, we cannot really call them inefficient. Hence, I believe that this sentence (and similar sentences in the paper) could be better rephrased.
val
[ "6mbXmD05yQp", "buBZXpFBvv", "9wmrSPvLf3l", "D9yTW5rw7aX", "G1RcRxHL8_M", "V8SdgRbngDi", "jYGGcxbWtD", "q250tswJCDr", "QUH59liUE3J", "dPmKstXIZN1", "7ts2o1G0W75", "MQrmpkFizit", "YoT-QEX-10", "pL2Kf8e8nGK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their thorough response. It would be nice if the authors could include the discussion on the branching heuristic and to the l_2 norm extension to the paper, perhaps in the appendix: I believe this might be of interest to many readers.\nI increased my score by 1.", " Dear Reviewer LzFR,\n...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "pL2Kf8e8nGK", "pL2Kf8e8nGK", "G1RcRxHL8_M", "V8SdgRbngDi", "V8SdgRbngDi", "QUH59liUE3J", "q250tswJCDr", "pL2Kf8e8nGK", "YoT-QEX-10", "7ts2o1G0W75", "MQrmpkFizit", "nips_2022_NSWNgQgoF71", "nips_2022_NSWNgQgoF71", "nips_2022_NSWNgQgoF71" ]
nips_2022_qPb0m0NXt4j
Emergent Graphical Conventions in a Visual Communication Game
Humans communicate with graphical sketches apart from symbolic languages. Primarily focusing on the latter, recent studies of emergent communication overlook the sketches; they do not account for the evolution process through which symbolic sign systems emerge in the trade-off between iconicity and symbolicity. In this work, we take the very first step to model and simulate this process via two neural agents playing a visual communication game; the sender communicates with the receiver by sketching on a canvas. We devise a novel reinforcement learning method such that agents are evolved jointly towards successful communication and abstract graphical conventions. To inspect the emerged conventions, we define three key properties -- iconicity, symbolicity, and semanticity -- and design evaluation methods accordingly. Our experimental results under different controls are consistent with the observation in studies of human graphical conventions. Of note, we find that evolved sketches can preserve the continuum of semantics under proper environmental pressures. More interestingly, co-evolved agents can switch between conventionalized and iconic communication based on their familiarity with referents. We hope the present research can pave the path for studying emergent communication with the modality of sketches.
Accept
This is a solid piece of work, and all reviewers agree to accept the paper.
train
[ "4n14s7znq5J", "fR2-VSewcHD", "8lLpFiiewpM", "uVRwUNWD3c", "9P6VK0NtVC", "vIi3Ii52fzG", "89Ae6scBhlg", "ZK63xKCVAMV", "fL1AYGpyjoz", "7MNmZn5jL0k", "mMD2LPsCTKy" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and updates; I'd be happy to increase my score on the basis of these revisions and clarifications being made in the paper.", " Thanks for all your comments and for acknowledging our work studying an interesting problem and proposing important evaluation measures.\n\n>**[Q1]:** The eff...
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 5 ]
[ "8lLpFiiewpM", "mMD2LPsCTKy", "uVRwUNWD3c", "7MNmZn5jL0k", "vIi3Ii52fzG", "fL1AYGpyjoz", "ZK63xKCVAMV", "nips_2022_qPb0m0NXt4j", "nips_2022_qPb0m0NXt4j", "nips_2022_qPb0m0NXt4j", "nips_2022_qPb0m0NXt4j" ]
nips_2022_24fiAU_9vT
Few-Shot Non-Parametric Learning with Deep Latent Variable Model
Most real-world problems that machine learning algorithms are expected to solve involve (1) an unknown data distribution; (2) little domain-specific knowledge; and (3) datasets with limited annotation. We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV), a learning framework for any dataset with abundant unlabeled data but very few labeled examples. By training only a generative model in an unsupervised way, the framework utilizes the data distribution to build a compressor. Using a compressor-based distance metric derived from Kolmogorov complexity, together with a few labeled examples, NPC-LV classifies without further training. We show that NPC-LV outperforms supervised methods on image classification on all three datasets in the low-data regime and even outperforms semi-supervised learning methods on CIFAR-10. We demonstrate how and when the negative evidence lower bound (nELBO) can be used as an approximate compressed length for classification. By revealing the correlation between compression rate and classification accuracy, we illustrate how, under NPC-LV, improvements in generative models can enhance downstream classification accuracy.
Accept
This paper proposes a learning framework for deep generative modeling called Non-Parametric learning by Compression with Latent Variables (NPC-LV) with strong theoretical support. The results on image classification benchmarks look compelling and promising. It's impressive that this unsupervised approach achieves strong results even compared with supervised and semi-supervised approaches. The reviewers unanimously think that this work contains novel and interesting ideas that connect deep generative modeling, non-parametric learning, and compressed sensing in an elegant manner. The reviewers also pointed out that a more thorough discussion and investigation of the generalization ability of the proposed method to scenarios beyond VAEs and image classification would further strengthen the paper. A good discussion of limitations is presented in the paper. After the rebuttal, most concerns that the reviewers raised have been well addressed. The AC recommends acceptance.
test
[ "dULPLjS_I5l", "932U06t_Kof", "4rzXoEdarh9", "SM-rwhoXcy", "TvmgUc0eNRc", "bh0kL1mPU-H", "UvZRE5Qm_wD", "6248v5RrBvu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications and explanations.", " Thanks for the detailed responses to my concerns. The additional experiments and clarifications have addressed the issues. I have updated my rating correspondingly.", " We thank the reviewer for the work, but we are afraid that our paper is severely misun...
[ -1, -1, -1, -1, -1, 9, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "TvmgUc0eNRc", "4rzXoEdarh9", "6248v5RrBvu", "UvZRE5Qm_wD", "bh0kL1mPU-H", "nips_2022_24fiAU_9vT", "nips_2022_24fiAU_9vT", "nips_2022_24fiAU_9vT" ]
nips_2022_m8vzptcFKsT
Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
We study the notion of average robustness in deep neural networks in (selected) wide and narrow, deep and shallow, as well as lazy and non-lazy training settings. We prove that in the under-parameterized setting, width has a negative effect, while it improves robustness in the over-parameterized setting. The effect of depth closely depends on the initialization and the training mode. In particular, when initialized with LeCun initialization, depth helps robustness in the lazy training regime. In contrast, when initialized with Neural Tangent Kernel (NTK) and He initialization, depth hurts robustness. Moreover, in the non-lazy training regime, we demonstrate how the width of a two-layer ReLU network benefits robustness. Our theoretical developments improve upon the results of [Huang et al. NeurIPS21; Wu et al. NeurIPS21] and are consistent with [Bubeck and Sellke NeurIPS21; Bubeck et al. COLT21].
Accept
This paper studies an average notion of robustness and its relation to the depth, the width, and the initialization scheme. The reviewers found the theoretical results insightful and novel, the empirical results thorough, and the paper overall well-written and of high quality. Therefore, I recommend acceptance.
train
[ "FeDlKBKURKW", "MimkVCbsYf", "78y9uCHmqRY", "yoV3a1kq6XY", "O8aFqEGvuf1", "6cgx6lD47ZS", "8guc3J8nhPs", "RwazQ0hEyxi", "AnYtVgmpgZu", "FKkunmJH_ih", "AG4lnCFip5s" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for adding experimental results (including training epochs extensions). Such doing alleviated my remaining minor concerns raised in the review. I'm happy with a paper in its current (revised) form as it stands. Best of luck! ", " Dear reviewer rKts,\n\nWe are thankful for your constructive feedback. W...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "MimkVCbsYf", "RwazQ0hEyxi", "6cgx6lD47ZS", "O8aFqEGvuf1", "8guc3J8nhPs", "AnYtVgmpgZu", "FKkunmJH_ih", "AG4lnCFip5s", "nips_2022_m8vzptcFKsT", "nips_2022_m8vzptcFKsT", "nips_2022_m8vzptcFKsT" ]
nips_2022_-TJpOACwpl5
Uncalibrated Models Can Improve Human-AI Collaboration
In many practical applications of AI, an AI model is used as a decision aid for human users. The AI provides advice that a human (sometimes) incorporates into their decision-making process. The AI advice is often presented with some measure of "confidence" that the human can use to calibrate how much they depend on or trust the advice. In this paper, we present an initial exploration that suggests showing AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice). We first train a model to predict human incorporation of AI advice using data from thousands of human-AI interactions. This enables us to explicitly estimate how to transform the AI's prediction confidence, making the AI uncalibrated, in order to improve the final human prediction. We empirically validate our results across four different tasks---dealing with images, text and tabular data---involving hundreds of human participants. We further support our findings with simulation analysis. Our findings suggest the importance of jointly optimizing the human-AI system as opposed to the standard paradigm of optimizing the AI model alone.
Accept
The paper studies the problem of optimizing an AI agent's reported confidence to improve the overall performance of a human user in a collaborative human-AI decision system. The main idea is to train a human behavior model based on experimental data that can be used to simulate the joint human-AI system and then optimize the AI agent's reported confidence based on this simulated system. The paper provides rigorous experimental evaluation involving human participants, and the results demonstrate the importance of jointly optimizing the human-AI system. The reviewers acknowledged that the paper considers an important problem and provides new insights on how to calibrate AI advice for human use. However, the reviewers also raised several concerns and questions in their initial reviews. We want to thank the authors for their detailed responses and for actively engaging with the reviewers during the discussion phase. The reviewers appreciated the responses, which helped in answering their key questions. The reviewers have an overall positive assessment of the paper, and there is a consensus for acceptance. The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing the final version of the paper.
train
[ "fvZAJiGKteT", "MFTNBYaO0pZ", "F03EEMfuxs", "pONLIUYdQf_", "6RBHEWvabZn", "KrytH-WJ549n", "sJKnpihOXUz", "4Z58q4bHJ-", "qYMh5pqOGyK", "-fJa8Ht9yoP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking time to respond to all the questions and updating the paper. \n\nI still find a bit confusing conditioning on $\\mathbf{x}$ and taking the sum over the $\\mathbf{x}$ on a particular dataset between lines 246-247. If by conditioning on $\\mathbf{x}$ you mean \"minimize the expected loss relati...
[ -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ "F03EEMfuxs", "6RBHEWvabZn", "-fJa8Ht9yoP", "qYMh5pqOGyK", "4Z58q4bHJ-", "sJKnpihOXUz", "nips_2022_-TJpOACwpl5", "nips_2022_-TJpOACwpl5", "nips_2022_-TJpOACwpl5", "nips_2022_-TJpOACwpl5" ]
nips_2022_ii9X4vtZGTZ
$\alpha$-ReQ : Assessing {\bf Re}presentation {\bf Q}uality in Self-Supervised Learning by measuring eigenspectrum decay
Self-Supervised Learning (SSL) with large-scale unlabelled datasets enables learning useful representations for multiple downstream tasks. However, assessing the quality of such representations efficiently poses nontrivial challenges. Existing approaches train linear probes (with frozen features) to evaluate performance on a given task. This is expensive both \emph{computationally}, since it requires retraining a new prediction head for each downstream task, and \emph{statistically}, since it requires task-specific labels for multiple tasks. This poses a natural question: \emph{how do we efficiently determine the "goodness" of representations learned with SSL across a wide range of potential downstream tasks?} In particular, a task-agnostic statistical measure of representation quality, one that predicts generalization without explicit downstream task evaluation, would be highly desirable. In this work, we analyze characteristics of learned representations $\mathbf{f_\theta}$ in well-trained neural networks with canonical architectures \& across SSL objectives. We observe that the eigenspectrum of the empirical feature covariance $\mathrm{Cov}(\mathbf{f_\theta})$ can be well approximated with the family of power-law distributions. We analytically and empirically (using multiple datasets, e.g. CIFAR, STL10, MIT67, ImageNet) demonstrate that the decay coefficient $\alpha$ serves as a measure of representation quality for tasks that are solvable with a linear readout, i.e. there exist well-defined intervals for $\alpha$ where models exhibit excellent downstream generalization. Furthermore, our experiments suggest that key design parameters in SSL algorithms, such as BarlowTwins \citep{zbontar2021barlow}, implicitly modulate the decay coefficient of the eigenspectrum ($\alpha$). As $\alpha$ depends only on the features themselves, using this measure for model selection during hyperparameter tuning for BarlowTwins enables search with less compute.
Accept
Decision: Accept. This paper proposes a task-agnostic method to evaluate the representation quality of a neural network, by looking at the eigenspectrum decay power factor for the feature covariance. Theoretical results and empirical results show the method can be used as a cheap alternative for understanding neural networks and performing model selection. Reviewers commended the approach as simple yet novel, and they are happy with the overall clarity of the manuscript. The main criticisms were: 1. Whether the experimental results generalize to larger datasets and network architectures. 2. The interpretation of the method: a useful tool for model selection, or only for understanding neural networks? In the author feedback the above questions were partially addressed. Note here the lack of big-data experiments is not a major factor in my decision. The more important point is the authors' clarification that the eigenspectrum decay is a "necessary condition" for good performance, which makes the argument for model selection weaker in a theoretical sense, although the empirical results seem to be ok. I would suggest that the authors make a clear discussion of this, as promised in the author feedback. As a side note, my brief read of the paper seems to tell me that the theory part considers pre-training, and I would say pre-training is a broader concept than SSL. This means the theory is not specific to SSL, and I hope the authors can clarify this point. Also, I suggest the authors discuss He and Ozay ICML 2022 in light of this paper's results. https://proceedings.mlr.press/v162/he22c.html
val
[ "MWhPC5glNKW", "hjxr0a_KGh", "5O3BBpPBo99", "EvsrA4q6QGD", "rmJC3RCnnqy", "5i8-0gBakTJ", "639OvsVRWKP7", "XTLkW4joEFR", "npN7qjBI7h2", "br45prR2iteW", "NJHDyU5EPkct", "-JXWEOfOiEs2", "iBbcI6WBdc5", "ryCwhFgavvI", "NuPTi3O2NP", "f_0g-yRtzK", "JfQIH41OHRD", "moA1_6Uoyo8", "PHNvNklY...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " We would like to thank the reviewer for appreciating our efforts and identifying the merits of our paper. We are also grateful to them for increasing their score beyond the acceptance threshold. In line with the reviewer's comment, we too hope that our paper gets through and we get an opportunity to present our r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "hjxr0a_KGh", "5O3BBpPBo99", "br45prR2iteW", "rmJC3RCnnqy", "XTLkW4joEFR", "npN7qjBI7h2", "br45prR2iteW", "NJHDyU5EPkct", "JfQIH41OHRD", "f_0g-yRtzK", "ryCwhFgavvI", "nips_2022_ii9X4vtZGTZ", "OJbTpWSoBjv", "OJbTpWSoBjv", "PHNvNklYiEu", "PHNvNklYiEu", "moA1_6Uoyo8", "nips_2022_ii9X4...
nips_2022_2NcrByUfu9
Maximum a posteriori natural scene reconstruction from retinal ganglion cells with deep denoiser priors
Visual information arriving at the retina is transmitted to the brain by signals in the optic nerve, and the brain must rely solely on these signals to make inferences about the visual world. Previous work has probed the content of these signals by directly reconstructing images from retinal activity using linear regression or nonlinear regression with neural networks. Maximum a posteriori (MAP) reconstruction using retinal encoding models and separately-trained natural image priors offers a more general and principled approach. We develop a novel method for approximate MAP reconstruction that combines a generalized linear model for retinal responses to light, including their dependence on spike history and spikes of neighboring cells, with the image prior implicitly embedded in a deep convolutional neural network trained for image denoising. We use this method to reconstruct natural images from ex vivo simultaneously-recorded spikes of hundreds of retinal ganglion cells uniformly sampling a region of the retina. The method produces reconstructions that match or exceed the state-of-the-art in perceptual similarity and exhibit additional fine detail, while using substantially fewer model parameters than previous approaches. The use of more rudimentary encoding models (a linear-nonlinear-Poisson cascade) or image priors (a 1/f spectral model) significantly reduces reconstruction performance, indicating the essential role of both components in achieving high-quality reconstructed images from the retinal signal.
Accept
The paper proposes a MAP method for natural scene reconstruction from population recordings of RGCs which combines a deep image denoiser with a simple generalized linear neural encoding model. The results are shown to be state of the art, and the method is seen as novel and a useful contribution to the community. The reviewers unanimously recommend acceptance.
train
[ "jD-J_gyckmf", "c0FfaQKBLh1", "ev0OoLZsG0s", "zMpI7yjS9dM", "Fge1FOS1HwP", "maMUaWbXJXE", "coqzsf5Ib8G" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed responses to all reviewers.\n\nI have decided to retain my score, as my overall assessment of this paper has not dramatically changed.", " **Regarding “assumption of the application of the inverse square law with regards to the spike response, ie assuming a Gaussian distribution for the ...
[ -1, -1, -1, -1, 6, 8, 5 ]
[ -1, -1, -1, -1, 3, 5, 3 ]
[ "Fge1FOS1HwP", "coqzsf5Ib8G", "maMUaWbXJXE", "Fge1FOS1HwP", "nips_2022_2NcrByUfu9", "nips_2022_2NcrByUfu9", "nips_2022_2NcrByUfu9" ]
nips_2022_RHa77BXv6k
Continuously Tempered PDMP samplers
New sampling algorithms based on simulating continuous-time stochastic processes called piece-wise deterministic Markov processes (PDMPs) have shown considerable promise. However, these methods can struggle to sample from multi-modal or heavy-tailed distributions. We show how tempering ideas can improve the mixing of PDMPs in such cases. We introduce an extended distribution defined over the state of the posterior distribution and an inverse temperature, which interpolates between a tractable distribution when the inverse temperature is 0 and the posterior when the inverse temperature is 1. The marginal distribution of the inverse temperature is a mixture of a continuous distribution on $[0,1)$ and a point mass at 1: which means that we obtain samples when the inverse temperature is 1, and these are draws from the posterior, but sampling algorithms will also explore distributions at lower temperatures which will improve mixing. We show how PDMPs, and particularly the Zig-Zag sampler, can be implemented to sample from such an extended distribution. The resulting algorithm is easy to implement and we show empirically that it can outperform existing PDMP-based samplers on challenging multimodal posteriors.
Accept
The authors show how to combine a clever temperature schedule for simulated tempering with a Zig-Zag sampler (though the authors suggest this could be expanded to arbitrary PDMP samplers, it was mentioned that this may not be completely obvious), so that the marginal distribution of the inverse temperature is a mixture of a continuous distribution on [0,1) and a point mass at 1, allowing samples to be taken directly and obviating the need for importance/rejection sampling. The theory proves stationarity of the resultant chain, and the original simulations demonstrated improvement over the vanilla non-tempered chain. At the request of the reviewers, the authors also compared to other tempering chains and showed encouraging results (even though, as was noted, comparison is a bit hard for some methods, since e.g. some are more suited to parallel computation).
train
[ "1Ck8Who7fox", "gVhgzI_1t2", "Uv-06QOQRV", "GIiJFQ-HK5", "B6iNXKyQGFg", "B9h7IkRfDGP", "4T9jkzs9wUX", "NHPn8nGy3lM", "nd6H_CVJyqK", "dYpB88DPx4CZ", "yFPAxh_aZu", "SxavSen0fNf", "qKTa4e_reEd", "zAX15DAm-q3", "U-QAgUMpRRm", "tzuLrmDQzRk", "DKW4jqZG_cT" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In light of the clarifications and the results from the additional experiments, I'm happy to raise my score to 6.", " Thank you for the updated score. All simulations on the cluster are now complete and have been added to the previous tables. With significantly less tuning our proposed tempering method performs...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "zAX15DAm-q3", "GIiJFQ-HK5", "U-QAgUMpRRm", "B6iNXKyQGFg", "B9h7IkRfDGP", "4T9jkzs9wUX", "NHPn8nGy3lM", "dYpB88DPx4CZ", "SxavSen0fNf", "yFPAxh_aZu", "qKTa4e_reEd", "DKW4jqZG_cT", "tzuLrmDQzRk", "U-QAgUMpRRm", "nips_2022_RHa77BXv6k", "nips_2022_RHa77BXv6k", "nips_2022_RHa77BXv6k" ]
nips_2022_g0QM7IBuCh
Generalization Gap in Amortized Inference
The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression. In this work, we study the generalization of a popular class of probabilistic models: the Variational Auto-Encoder (VAE). We discuss the two generalization gaps that affect VAEs and show that overfitting is usually dominated by amortized inference. Based on this observation, we propose a new training objective that improves the generalization of amortized inference. We demonstrate how our method can improve performance in the context of image modeling and lossless compression.
Accept
The paper studies generalization in generative models and highlights two factors for the lack of generalization: the generalization gap of the model itself and the generalization gap in the ELBO objective caused by amortized inference. The authors experimentally show that the latter is the more prevalent reason. To alleviate that, the authors propose a reverse wake-sleep algorithm based on training from a mixture of samples from the true distribution and the model's distribution. This latter algorithm is the primary contribution of the paper, and it is a novel algorithm as recognized by the reviewers. The reviewers found the experiments compelling and the conceptual contribution significant yet easy to implement, suggesting broad adoption. Reviewers unanimously agreed that the contribution of the paper is above the bar for NeurIPS.
train
[ "7ZSmtc5i77C", "6vUy_Zr7EY", "CflX1x6K-6", "vKmkjNeyFm_", "XdePLWOvLfh", "WSWazSD0iM6", "hWg5kcA88p0", "vhQ_0bFOjd", "w-WNS49V3N2", "blZ1yq9xdRS", "1yf2cARWYNH", "T2969LPup9K", "pTuCgpmVrqQ", "S_Mum09A3o", "K6I9uclL3RF", "bvt6g95pwi", "I-JmU3Q-1Vz", "LqPiVjLBPkg", "IZb13cxYVfM", ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " We want to thank the for the feedback and the acknowledgement. In the reply to reviewer Ayo3, we have done an additional experiment that shows our method can also improve cifar100, we will add that to the revised paper. Thanks!", " Thanks to the authors for the in-depth response. Your response and modifications...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "6vUy_Zr7EY", "hWg5kcA88p0", "vKmkjNeyFm_", "S_Mum09A3o", "w-WNS49V3N2", "vhQ_0bFOjd", "bvt6g95pwi", "T2969LPup9K", "1yf2cARWYNH", "nips_2022_g0QM7IBuCh", "LqPiVjLBPkg", "IZb13cxYVfM", "ZLjlhO_pkzw", "ZLjlhO_pkzw", "I-JmU3Q-1Vz", "I-JmU3Q-1Vz", "nips_2022_g0QM7IBuCh", "nips_2022_g0...
nips_2022_Ir8b8lG_Vc
Policy Optimization for Markov Games: Unified Framework and Faster Convergence
This paper studies policy optimization algorithms for multi-agent reinforcement learning. We begin by proposing an algorithm framework for two-player zero-sum Markov Games in the full-information setting, where each iteration consists of a policy update step at each state using a certain matrix game algorithm, and a value update step with a certain learning rate. This framework unifies many existing and new policy optimization algorithms. We show that the \emph{state-wise average policy} of this algorithm converges to an approximate Nash equilibrium (NE) of the game, as long as the matrix game algorithms achieve low weighted regret at each state, with respect to weights determined by the speed of the value updates. Next, we show that this framework instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL) algorithm at each state (and smooth value updates) can find an $\mathcal{\widetilde{O}}(T^{-5/6})$ approximate NE in $T$ iterations, and a similar algorithm with slightly modified value update rule achieves a faster $\mathcal{\widetilde{O}}(T^{-1})$ convergence rate. These improve over the current best $\mathcal{\widetilde{O}}(T^{-1/2})$ rate of symmetric policy optimization type algorithms. We also extend this algorithm to multi-player general-sum Markov Games and show an $\mathcal{\widetilde{O}}(T^{-3/4})$ convergence rate to Coarse Correlated Equilibria (CCE). Finally, we provide a numerical example to verify our theory and investigate the importance of smooth value updates, and find that using ''eager'' value updates instead (equivalent to the independent natural policy gradient algorithm) may significantly slow down the convergence, even on a simple game with $H=2$ layers.
Accept
The paper makes interesting progress on issues related to multi-agent reinforcement learning, providing fast convergence guarantees as well as a unified framework. This is definitely a hot topic of research, and it would make for a nice NeurIPS contribution.
val
[ "7WpigjRMuil", "V0pjCCy2WLB", "vkW0Xw0d3r", "gyD8IQcMBCy", "tuGkfmXY18_", "r9tjZtQ43MJ8", "TLR_HKNvnJg", "h0XAPeIN51Z", "wYVUXJhTRg6" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors' response answers my concern. I decide to keep the score unchanged.", " Thanks for the detailed clarifications, especially on the average and last-iterate convergence comparisons. I think it wouldl be a much nicer work in the future if this framework can be generalized to the sample-based setting or...
[ -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "gyD8IQcMBCy", "tuGkfmXY18_", "wYVUXJhTRg6", "h0XAPeIN51Z", "TLR_HKNvnJg", "nips_2022_Ir8b8lG_Vc", "nips_2022_Ir8b8lG_Vc", "nips_2022_Ir8b8lG_Vc", "nips_2022_Ir8b8lG_Vc" ]
nips_2022_WBp4dli3No6
Robust Learning against Relational Adversaries
Test-time adversarial attacks have posed serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by small $\ell_p$-norms. Motivated by attacks in program analysis and security tasks, we investigate $\textit{relational adversaries}$, a broad class of attackers who create adversarial examples in a reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and investigate different levels of robustness-accuracy trade-off due to various patterns in a relation. Inspired by the insights, we propose $\textit{normalize-and-predict}$, a learning framework that leverages input normalization to achieve provable robustness. The framework solves the pain points of adversarial training against relational adversaries and can be combined with adversarial training for the benefits of both approaches. Guided by our theoretical findings, we apply our framework to source code authorship attribution and malware detection. Results of both tasks show our learning framework significantly improves the robustness of models against relational adversaries. In the process, it outperforms adversarial training, the most noteworthy defense mechanism, by a wide margin.
Accept
This paper studies the relational adversary, a general threat model in which the adversary can manipulate test data via transformations specified by a logical relation. Inspired by the conditions for robustness against relational adversaries and the sources of the robustness-accuracy trade-off, the authors propose a learning framework called normalize-and-predict, which leverages input normalization to achieve provable robustness. Experiments on two tasks verify the effectiveness of the proposed method. Overall, the paper is well organized and presented. It proposes a novel and technically sound defense mechanism and takes the first step towards robust learning against relational adversaries. The author responses have addressed the reviewers' concerns, and all reviewers finally agree on acceptance.
train
[ "iAhEDdeCca3", "TqNeq6LQ97V", "1CpPwBoj7EP", "o0aq92a6ib", "gcZFhKc8ZzY", "ZXuJyAfeLaM", "Rl9Ig1gg9J", "1lseW4vxPP", "n6yerp7aiU0", "kExGTsS1BsZ", "lcxnZ3UogE" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your feedback in both the review and the discussion. Thank you!", " We appreciate your prompt response! Thank you once again for your comments and we will add the related work after the reversion.", " Thanks a lot for the response. It is more clear to me now. ", " I thank the authors for the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 9, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "1CpPwBoj7EP", "o0aq92a6ib", "ZXuJyAfeLaM", "1lseW4vxPP", "kExGTsS1BsZ", "lcxnZ3UogE", "n6yerp7aiU0", "n6yerp7aiU0", "nips_2022_WBp4dli3No6", "nips_2022_WBp4dli3No6", "nips_2022_WBp4dli3No6" ]
nips_2022_Mg-PzsJkEmg
Falconn++: A Locality-sensitive Filtering Approach for Approximate Nearest Neighbor Search
We present Falconn++, a novel locality-sensitive filtering (LSF) approach for approximate nearest neighbor search on angular distance. Falconn++ can filter out potential far away points in any hash bucket before querying, which results in higher quality candidates compared to other hashing-based solutions. Theoretically, Falconn++ asymptotically achieves lower query time complexity than Falconn, an optimal locality-sensitive hashing scheme on angular distance. Empirically, Falconn++ achieves a higher recall-speed tradeoff than Falconn on many real-world data sets. Falconn++ is also competitive with HNSW, an efficient representative of graph-based solutions on high search recall regimes.
Accept
The paper provides a good and exciting improvement over the widely used LSH-based Falconn library. All the reviewers found the contribution worthy of publication.
train
[ "8hMu7IjrDs", "R83Ib5uKAk", "Cu8SbKEqHJ6", "ceahn8zBeus", "8Ua2QJkASoU", "tUn0FzSyjK0", "FVbSIpQKfB", "6N9uD3dFt6", "OjHJKAdyBJoc", "Sobo3AN9wX_a", "I8HYrmClVTH", "pgQHKXwAjv", "bavd7q0k_1o", "D-DXpci28LD", "d8VDGTXLud1", "z_mAxrU3Ky", "5DJAVbprO8I" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks. Since Falconn does not support multi-threading, we use 1 thread to measure the query time of Falconn++ and Falconn. Both use the same indexing space, so the benchmark should be fine.", " I would argue that it's not necessary to beat HNSW right away, this can be another paper. Improving FALCON is suffici...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "R83Ib5uKAk", "Cu8SbKEqHJ6", "ceahn8zBeus", "D-DXpci28LD", "6N9uD3dFt6", "OjHJKAdyBJoc", "D-DXpci28LD", "Sobo3AN9wX_a", "bavd7q0k_1o", "I8HYrmClVTH", "pgQHKXwAjv", "5DJAVbprO8I", "z_mAxrU3Ky", "d8VDGTXLud1", "nips_2022_Mg-PzsJkEmg", "nips_2022_Mg-PzsJkEmg", "nips_2022_Mg-PzsJkEmg" ]
nips_2022_KUOKpojFr_
ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model
We present ShapeCrafter, a neural network for recursive text-conditioned 3D shape generation. Existing methods to generate text-conditioned 3D shapes consume an entire text prompt to generate a 3D shape in a single step. However, humans tend to describe shapes recursively---we may start with an initial description and progressively add details based on intermediate results. To capture this recursive process, we introduce a method to generate a 3D shape distribution, conditioned on an initial phrase, that gradually evolves as more phrases are added. Since existing datasets are insufficient for training this approach, we present Text2Shape++, a large dataset of 369K shape--text pairs that supports recursive shape generation. To capture local details that are often used to refine shape descriptions, we build on top of vector-quantized deep implicit functions that generate a distribution of high-quality shapes. Results show that our method can generate shapes consistent with text descriptions, and shapes evolve gradually as more phrases are added. Our method supports shape editing, extrapolation, and can enable new applications in human--machine collaboration for creative design.
Accept
This paper presents a recursive text-to-shape generation system, introduces a new dataset for text-to-shape, and demonstrates good performance of the proposed method. This paper has the potential to inspire future work. I encourage the authors to add a discussion (e.g., in the limitations or future work section) of the need for a proper metric for evaluating shape generation. Consolidating this problem can propel the field forward.
train
[ "VkjDn3CbNn", "j-ZkqvpXXJL", "M_HMCyby3m", "BxcQLIC1wD", "2pER1FV2hs-", "GPgq5a976Z7", "Fovm1IMpEho", "v-g9iDfadD", "GNCmf22WH3y", "YquS2_yUaR", "SlwcsA3AmAZ", "64LHawu3KCy3", "7KmPUXMxGkS", "Avlplv2An7e", "ZWk0ePZsb8f", "wJkGE9hf-FU", "DbKaDEfvxSg" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, thanks for appreciating the novel task and the architecture in our paper, and we hope our answer addresses your questions and concerns. We are looking forward to hearing back from you. Please let us know if you have any other questions or concerns about the paper. ", " Thanks to the reviewer for ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "DbKaDEfvxSg", "M_HMCyby3m", "Fovm1IMpEho", "wJkGE9hf-FU", "DbKaDEfvxSg", "Avlplv2An7e", "7KmPUXMxGkS", "YquS2_yUaR", "YquS2_yUaR", "DbKaDEfvxSg", "7KmPUXMxGkS", "7KmPUXMxGkS", "wJkGE9hf-FU", "ZWk0ePZsb8f", "nips_2022_KUOKpojFr_", "nips_2022_KUOKpojFr_", "nips_2022_KUOKpojFr_" ]
nips_2022_QeaYt6w5Xa1
Beyond black box densities: Parameter learning for the deviated components
As we collect additional samples from a data population for which a known density function estimate may have been previously obtained by a black box method, the increased complexity of the data set may result in the true density being deviated from the known estimate by a mixture distribution. To model this phenomenon, we consider the \emph{deviating mixture model} $(1-\lambda^{*})h_0 + \lambda^{*} (\sum_{i = 1}^{k} p_{i}^{*} f(x|\theta_{i}^{*}))$, where $h_0$ is a known density function, while the deviated proportion $\lambda^{*}$ and latent mixing measure $G_{*} = \sum_{i = 1}^{k} p_{i}^{*} \delta_{\theta_i^{*}}$ associated with the mixture distribution are unknown. Via a novel notion of distinguishability between the known density $h_{0}$ and the deviated mixture distribution, we establish rates of convergence for the maximum likelihood estimates of $\lambda^{*}$ and $G_{*}$ under the Wasserstein metric. Simulation studies are carried out to illustrate the theory.
Accept
The authors study a mixture of a single known distribution $h_0$ and a mixture model whose parameters are unknown. They propose new notions of distinguishability and partial distinguishability that they use for characterizing convergence rates. The reviewers had mixed opinions about the paper and had several comments about the technical novelty of the work. The authors addressed these suitably in their rebuttal. In particular, their revised introduction is much clearer about the motivation and contributions. I am happy to recommend acceptance of the paper.
test
[ "dpX5gjsyzu", "jJxEFQHsB3u", "cLRjbLezsl", "D_2n4Zh8Kz", "PNTDbRiAmoc", "ITSnOT5uWaA", "pxp0OjIIMss", "j91yWGjlDtp", "LTgH_lyO0gT", "yCvLma9JRu", "nn2zzsOAEoN", "c4F8_J9889h", "UcR8yebaQv", "2wBVAS4wgZI", "6DSK0gmFEZl", "U-b5YfLKySS", "nD7mtVwyeSK", "G_rody5uKVI", "BjGMZh_ToHI", ...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you for your detailed response. The new modifications greatly help me (and hopefully the reader) to understand the main ideas of the proof. I also want to say that I appreciate the examples showing that the types of new results that the paper produces, in comparison to the existing literature. The new resu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 2 ]
[ "jJxEFQHsB3u", "CkzWCAKmoaW", "D_2n4Zh8Kz", "-oRtt2UxVoI", "-oRtt2UxVoI", "pxp0OjIIMss", "G_rody5uKVI", "U-b5YfLKySS", "6DSK0gmFEZl", "nn2zzsOAEoN", "nD7mtVwyeSK", "G_rody5uKVI", "LyhAyOEjve", "nips_2022_QeaYt6w5Xa1", "BjGMZh_ToHI", "-oRtt2UxVoI", "LyhAyOEjve", "yKkYuECGf10", "Ck...
nips_2022_FLzTj4ia8BN
Monte Carlo Augmented Actor-Critic for Sparse Reward Deep Reinforcement Learning from Suboptimal Demonstrations
Providing densely shaped reward functions for RL algorithms is often exceedingly challenging, motivating the development of RL algorithms that can learn from easier-to-specify sparse reward functions. This sparsity poses new exploration challenges. One common way to address this problem is using demonstrations to provide initial signal about regions of the state space with high rewards. However, prior RL-from-demonstrations algorithms introduce significant complexity and many hyperparameters, making them hard to implement and tune. We introduce Monte Carlo Actor-Critic (MCAC), a parameter-free modification to standard actor-critic algorithms which initializes the replay buffer with demonstrations and computes a modified $Q$-value by taking the maximum of the standard temporal difference (TD) target and a Monte Carlo estimate of the reward-to-go. This encourages exploration in the neighborhood of high-performing trajectories by encouraging high $Q$-values in corresponding regions of the state space. Experiments across $5$ continuous control domains suggest that MCAC can be used to significantly increase learning efficiency across $6$ commonly used RL and RL-from-demonstrations algorithms. See https://sites.google.com/view/mcac-rl for code and supplementary material.
Accept
This paper proposes two modifications of Actor-Critic algorithms for reinforcement learning with sparse rewards: use demonstrations to seed the replay buffer, and combine the TD Q-target with a Monte Carlo Q-target. All the reviewers agree that this paper has made good progress in an important research direction. The proposed method is simple to implement and seems to significantly boost the performance of actor-critic methods. Most of the concerns in the original reviews were addressed through the rebuttal and discussions. However, one common concern raised by multiple reviewers, the lack of empirical/theoretical evidence of bias and variance reduction, remained unresolved. We understand that rigorous theoretical analysis for deep RL algorithms, in general, is challenging. However, empirical analysis via numerical experiments on simpler toy problems would still significantly improve the quality of this paper.
train
[ "m4OslK8sxa", "0FHju3CGrq", "Xs1Oxy-RSZp", "pIaSWqZ4pI", "bSv6n-EXxAs", "LRV1A8brjRd4", "evHRCZ9ZVf1f", "ShZoOZKsMJP", "Ybt2r_zzgM1", "53O10nqizZ", "aOaiJxX9Oa5", "k6g-pscAMZ", "-XR14HrWEY", "-SDpsTk4mvU", "S7tq-VlBhbr", "n4RNKY_RlBz", "NRmRTUQaBgB", "8WrCQI9eo4O", "CNfpZk6iaWD" ...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their valuable comments and believe that the paper has been significantly improved as a result of their feedback. To summarize, the primary changes to the paper have been:\n\n * We added appendix B.1, B.5, B.6, B.7 where we evaluated MCAC without demonstration data, studied a variant on...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "nips_2022_FLzTj4ia8BN", "Xs1Oxy-RSZp", "Ybt2r_zzgM1", "bSv6n-EXxAs", "aOaiJxX9Oa5", "S7tq-VlBhbr", "ShZoOZKsMJP", "-XR14HrWEY", "53O10nqizZ", "CNfpZk6iaWD", "k6g-pscAMZ", "8WrCQI9eo4O", "-SDpsTk4mvU", "NRmRTUQaBgB", "n4RNKY_RlBz", "nips_2022_FLzTj4ia8BN", "nips_2022_FLzTj4ia8BN", ...
nips_2022_DoQElY73YR
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack
A powerful category of (invisible) data poisoning attacks modifies a subset of training examples by small adversarial perturbations to change the prediction of certain test-time data. Existing defense mechanisms are not practical to deploy, as they often either drastically harm the generalization performance, or are attack-specific and prohibitively slow to apply. Here, we propose a simple but highly effective approach that, unlike existing methods, breaks various types of invisible poisoning attacks with only a slight drop in generalization performance. We make the key observation that attacks introduce local sharp regions of high training loss, which, when minimized, result in learning the adversarial perturbations and make the attack successful. To break poisoning attacks, our key idea is to alleviate the sharp loss regions introduced by poisons. To do so, our approach comprises two components: an optimized friendly noise that is generated to maximally perturb examples without degrading the performance, and a randomly varying noise component. The combination of both components builds a very lightweight but extremely effective defense against the most powerful triggerless targeted and hidden-trigger backdoor poisoning attacks, including Gradient Matching, Bulls-eye Polytope, and Sleeper Agent. We show that our friendly noise is transferable to other architectures, and adaptive attacks cannot break our defense due to its random noise component.
Accept
This paper proposes a poisoning defense that, unlike existing methods, breaks various types of poisoning attacks with a small drop in generalization. The key claim is that attacks exploit sharp loss regions to craft adversarial perturbations which can substantially alter examples' gradients or representations under small perturbations. The authors then propose to generate noise patterns which maximally perturb examples while minimizing performance degradation. I think the framing of this paper is very important and the authors have done a good job at it. They are claiming to have a defense that is not attack-specific as long as it is restricted to the class of attacks involving visually imperceptible inputs. I believe this claim, if substantiated, to be of sufficient significance to the NeurIPS community. Unfortunately, I noticed that the reviewers largely did not respond to the author rebuttal, other than PZZ3. PZZ3's main concerns were the lack of novelty with respect to the Anti-Backdoor Learning paper, the different settings (untargeted, backdoor triggers), and the substantiation of the sharp-loss hypothesis. Having read the authors' rebuttal to these carefully, I believe the authors have done a good job of alleviating the concerns and/or misunderstandings. For example, I've read the ABL paper and agree it is dealing with a different setting. It was nice to see that the authors actually did experiments to show that ABL is not effective in this setting and vice versa. Reviewer Sp5u was concerned with the attack-specificity of the defense, to which the authors appropriately responded that the defense is data-specific but not attack-specific, as long as it is restricted to the class of attacks involving visually imperceptible inputs. As far as I can tell, there were no other strong concerns. Based on my own assessment, I believe that the central claim of the paper has sufficient evaluations to support it.
The attacks considered are highly varied in their techniques and are also recent and SOTA. I therefore recommend accept.
test
[ "peOu2AcIC_5", "l3nBbAr9R2h", "FMM3RYf48M8", "Fizaqb2-pBg", "i2XATGwH7Ct", "crgKHbKwLpO", "HKsuLsqj3S7", "XQ1l8pg_dtn", "Y6WNstYabeH", "JcUAO6C5i_H6", "G9VnLrgcd4W", "qVqB3AQffzK", "TAZCP2rhIDX", "OB2NJJkveMH", "_eNHx61RpIw", "8nJU-XlrWd", "8f67QFCH6Uq", "Ihu91SsS1oY" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [1] Yang, Yu, Tian Yu Liu, and Baharan Mirzasoleiman. \"Not All Poisons are Created Equal: Robust Training against Data Poisoning.\" In International Conference on Machine Learning, pp. 25154-25165. PMLR, 2022.\n\n[2] Peri, N., Gupta, N., Huang, W. R., Fowl, L., Zhu, C., Feizi, S., Goldstein, T., and Dickerson, J...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "l3nBbAr9R2h", "FMM3RYf48M8", "_eNHx61RpIw", "8f67QFCH6Uq", "Ihu91SsS1oY", "8nJU-XlrWd", "_eNHx61RpIw", "Ihu91SsS1oY", "8f67QFCH6Uq", "8nJU-XlrWd", "8nJU-XlrWd", "_eNHx61RpIw", "_eNHx61RpIw", "_eNHx61RpIw", "nips_2022_DoQElY73YR", "nips_2022_DoQElY73YR", "nips_2022_DoQElY73YR", "ni...
nips_2022__LceCyuVcH
Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners
The goal of this work is to build flexible video-language models that can generalize to various video-to-text tasks from few examples. Existing few-shot video-language learners focus exclusively on the encoder, resulting in the absence of a video-to-text decoder to handle generative tasks. Video captioners have been pretrained on large-scale video-language datasets, but they rely heavily on finetuning and lack the ability to generate text for unseen tasks in a few-shot setting. We propose VidIL, a few-shot Video-language Learner via Image and Language models, which demonstrates strong performance on few-shot video-to-text tasks without the necessity of pretraining or finetuning on any video datasets. We use image-language models to translate the video content into frame captions, object, attribute, and event phrases, and compose them into a temporal-aware template. We then instruct a language model, with a prompt containing a few in-context examples, to generate a target output from the composed content. The flexibility of prompting allows the model to capture any form of text input, such as automatic speech recognition (ASR) transcripts. Our experiments demonstrate the power of language models in understanding videos on a wide variety of video-language tasks, including video captioning, video question answering, video caption retrieval, and video future event prediction. Especially, on video future event prediction, our few-shot model significantly outperforms state-of-the-art supervised models trained on large-scale video datasets. Code and processed data are publicly available for research purposes at https://github.com/MikeWangWZHL/VidIL.
Accept
While the reviewers are divided, I agree with those recommending acceptance. The paper introduces an interesting alternative that requires no training on videos, and reports numbers useful to the community. The use of pretrained models in a clever way (without finetuning) is not a weakness but a contribution. A reviewer raised a concern about the lack of analysis of how the proposed pipeline contributes to the few/zero-shot capability, but it is already widely accepted that large-scale pretrained language models can do well in few/zero-shot settings with proper prompts. Also, the use of CLIP is just a way of extracting textual/categorical representations from the input video using a pretrained network, and I believe the authors chose to use CLIP mainly because it is trained on a large-scale dataset with an open vocabulary. I think engineering this component is not the main focus of the paper and the lack of ablations on this should not discount the paper's novelty.
test
[ "40G5RAYJoXU", "7FiPoqJnoCE", "393iuCoRf6l", "PX2qRBl2zx9", "acvZXlKGaoJ", "ii2eqneSHap", "z6DBPn-4NUN", "ZeB0W4kT8tL", "4oEDzdD0QsA", "A2V0VuqavDj", "LmC-Qi_Dk4X", "_EZo63P-vL9", "GqwOPo6NG-", "jBBE5Og-RLa", "5qn3uoOxeEW" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed feedback and confirming the questions have been addressed. We appreciate your highlights on the strength of the proposed framework, including does not rely on video data for training, strong results and new state-of-the-art for future event prediction, better explainability and being hi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 5 ]
[ "ii2eqneSHap", "393iuCoRf6l", "4oEDzdD0QsA", "acvZXlKGaoJ", "ZeB0W4kT8tL", "z6DBPn-4NUN", "jBBE5Og-RLa", "5qn3uoOxeEW", "GqwOPo6NG-", "LmC-Qi_Dk4X", "_EZo63P-vL9", "nips_2022__LceCyuVcH", "nips_2022__LceCyuVcH", "nips_2022__LceCyuVcH", "nips_2022__LceCyuVcH" ]
nips_2022_hUjMhflYvGc
Dynamic Tensor Product Regression
In this work, we initiate the study of \emph{Dynamic Tensor Product Regression}. One has matrices $A_1\in \mathbb{R}^{n_1\times d_1},\ldots,A_q\in \mathbb{R}^{n_q\times d_q}$ and a label vector $b\in \mathbb{R}^{n_1\ldots n_q}$, and the goal is to solve the regression problem with the design matrix $A$ being the tensor product of the matrices $A_1, A_2, \dots, A_q$, i.e. $\min_{x\in \mathbb{R}^{d_1\ldots d_q}}~\|(A_1\otimes \ldots\otimes A_q)x-b\|_2$. At each time step, one matrix $A_i$ receives a sparse change, and the goal is to maintain a sketch of the tensor product $A_1\otimes\ldots \otimes A_q$ so that the regression solution can be updated quickly. Recomputing the solution from scratch for each round is extremely expensive, so it is important to develop algorithms which can quickly update the solution with the new design matrix. Our main result is a dynamic tree data structure where any update to a single matrix can be propagated quickly throughout the tree. We show that our data structure can be used to solve dynamic versions of not only Tensor Product Regression, but also Tensor Product Spline regression (which is a generalization of ridge regression) and for maintaining Low Rank Approximations for the tensor product.
Accept
My main concern, which was raised by the reviewers and not answered by the authors, is that the improvement is a factor-of-$q$ speedup for an algorithm whose running time is exponential in $q$. In addition, the missing experimental results make this a very theoretical paper. Still, I recommend accepting the paper due to the significance of the problem, conditioned on the authors' promise to make the requested changes in the final version.
train
[ "TO6yJUk6u5", "idQNue5Q3zu", "Kv2BT36k0un", "DkZWyg4kC-gU", "Ue7QbQSK-VH", "-J0CCJRo5iG", "JkFIUiVbU9F", "NaMjphkbQZz" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello reviewers of “Dynamic Tensor Product Regression”,\n\nAgain, we would like to thank you all for taking the time to read our paper and for providing us your very valuable feedback. The reviewer-author discussion period ends in less than 48 hours. So, if you have any further questions/comments about our respon...
[ -1, -1, -1, -1, -1, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2022_hUjMhflYvGc", "NaMjphkbQZz", "JkFIUiVbU9F", "-J0CCJRo5iG", "nips_2022_hUjMhflYvGc", "nips_2022_hUjMhflYvGc", "nips_2022_hUjMhflYvGc", "nips_2022_hUjMhflYvGc" ]
nips_2022_-BxFk0t7wN
Data-Efficient Augmentation for Training Neural Networks
Data augmentation is essential to achieve state-of-the-art performance in many deep learning applications. However, the most effective augmentation techniques become computationally prohibitive for even medium-sized datasets. To address this, we propose a rigorous technique to select subsets of data points that when augmented, closely capture the training dynamics of full data augmentation. We first show that data augmentation, modeled as additive perturbations, improves learning and generalization by relatively enlarging and perturbing the smaller singular values of the network Jacobian, while preserving its prominent directions. This prevents overfitting and enhances learning the harder to learn information. Then, we propose a framework to iteratively extract small subsets of training data that when augmented, closely capture the alignment of the fully augmented Jacobian with labels/residuals. We prove that stochastic gradient descent applied to the augmented subsets found by our approach has similar training dynamics to that of fully augmented data. Our experiments demonstrate that our method achieves 6.3x speedup on CIFAR10 and 2.2x speedup on SVHN, and outperforms the baselines by up to 10\% across various subset sizes. Similarly, on TinyImageNet and ImageNet, our method beats the baselines by up to 8%, while achieving up to 3.3x speedup across various subset sizes. Finally, training on and augmenting 50% subsets using our method on a version of CIFAR10 corrupted with label noise even outperforms using the full dataset.
Accept
This work demonstrates that it can be sufficient to apply data-augmentation only on a core-set of the data to achieve accuracy comparable to augmenting the full dataset. These findings are supported by theoretical arguments in the NTK framework, and by empirical evaluation. The proposed method can provide a trade-off between the training time and the accuracy when data-augmentation is costly. The reviewers noted that the restriction to additive perturbations might be a limitation of the proposed approach and suggested to incorporate a discussion of these limitations and possible use cases in the final version of the paper (such as also promised in the rebuttal).
train
[ "q1OdVVU8NF0", "HUqScE5S-mn", "JEqQezIxbBo", "jY4WEegZOA7", "Q1nkCdFcV9D", "sX71qB46UL", "VG8BeaZpyML", "frJeHpFaDye", "OQFGqFzKR4w", "OX1DLUX7Nmn", "Ifl38qU9By", "8JbDzXEEwzd", "lZhVrEBEjrLr", "cdatISAVfN-", "zNFn6Qddyyo", "ZRnWx19w7Mb", "fbgeHWFSXa2" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comment. We were under the impression that we can have an extra page to incorporate the reviewers’ feedback. We moved some materials to the appendix to fit within the page limit (9 pages) based on your comment. Thank you!\n\n", " Dear authors,\nThanks for updating your paper. Please make sure ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "HUqScE5S-mn", "JEqQezIxbBo", "VG8BeaZpyML", "sX71qB46UL", "frJeHpFaDye", "OX1DLUX7Nmn", "cdatISAVfN-", "8JbDzXEEwzd", "zNFn6Qddyyo", "fbgeHWFSXa2", "ZRnWx19w7Mb", "zNFn6Qddyyo", "cdatISAVfN-", "nips_2022_-BxFk0t7wN", "nips_2022_-BxFk0t7wN", "nips_2022_-BxFk0t7wN", "nips_2022_-BxFk0t...
nips_2022_KBUgVv8z7OA
What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?
The adversarial vulnerability of neural nets, and subsequent techniques to create robust models, have attracted significant attention; yet we still lack a full understanding of this phenomenon. Here, we study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods, namely the Neural Tangent Kernel (NTK), following a growing body of work that leverages the NTK approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. We show how NTKs allow us to generate adversarial examples in a ``training-free'' fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the ``lazy'' regime. We leverage this connection to provide an alternative view on robust and non-robust features, which have been suggested to underlie the adversarial brittleness of neural nets. Specifically, we define and study features induced by the eigendecomposition of the kernel to better understand the role of robust and non-robust features, the reliance on both for standard classification and the robustness-accuracy trade-off. We find that such features are surprisingly consistent across architectures, and that robust features tend to correspond to the largest eigenvalues of the model, and thus are learned early during training. Our framework allows us to identify and visualize non-robust yet useful features. Finally, we shed light on the robustness mechanism underlying adversarial training of neural nets used in practice: quantifying the evolution of the associated empirical NTK, we demonstrate that its dynamics falls much earlier into the ``lazy'' regime and manifests a much stronger form of the well known bias to prioritize learning features within the top eigenspaces of the kernel, compared to standard training.
Accept
This paper is a borderline case. The review scores are quite widely spread with 4, 4, 6, 8. More specifically: - The score 8 review is unfortunately rather short and could be more informative. - The score 6 review argues that the paper is a nice initial investigation in this direction and would prefer accepting the paper. But they also agree that it's a borderline case and wouldn't argue strongly against rejecting the paper. - The score 4 reviewers both responded to the rebuttals and decided to maintain their scores. Both of them found the experiments too superficial and would have preferred a more in-depth investigation. As a result, the paper presents a difficult decision because there is nothing technically wrong about the paper, but the investigation also appears preliminary and lacking depth. My senior area chair suggested to accept the paper, so I'm going with this recommendation. I would not argue against rejecting the paper if the program chairs would prefer this outcome.
train
[ "8BPKqRdnvMY", "ebP1yEF2mSt", "o5UFtNcEDF3", "O_je45wwF8b", "9qSI9R46PB", "PLJ4nnv_ya", "0LogWqECIc8", "8kCQee7y0g", "mp5KI3BhiHr", "Y9HIydMWYqF", "uIK8UK0BmVn", "Z6nI4_H-H2x" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their response to our comments, and appreciation of our efforts. Perhaps the following clarifications can address your comments, and in particular **convey our complete surprise to read your comment/weakness 3 since we made special effort to include many multi-class experiments**.\n\n> 1...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "ebP1yEF2mSt", "PLJ4nnv_ya", "0LogWqECIc8", "nips_2022_KBUgVv8z7OA", "Z6nI4_H-H2x", "uIK8UK0BmVn", "Y9HIydMWYqF", "mp5KI3BhiHr", "nips_2022_KBUgVv8z7OA", "nips_2022_KBUgVv8z7OA", "nips_2022_KBUgVv8z7OA", "nips_2022_KBUgVv8z7OA" ]
nips_2022_9lQmaKMxIUD
Near-Optimal Private and Scalable $k$-Clustering
We study the differentially private (DP) $k$-means and $k$-median clustering problems of $n$ points in $d$-dimensional Euclidean space in the massively parallel computation (MPC) model. We provide two near-optimal algorithms where the near-optimality is in three aspects: they both achieve (1) $O(1)$ parallel computation rounds, (2) near-linear in $n$ and polynomial in $k$ total computational work (i.e., near-linear running time when $n$ is a sufficiently large polynomial in $k$), and (3) $O(1)$ relative approximation and $\text{poly}(k, d)$ additive error. Note that $\Omega(1)$ relative approximation is provably necessary even for any polynomial-time non-private algorithm, and $\Omega(k)$ additive error is a provable lower bound for any polynomial-time DP $k$-means/median algorithm. Our two algorithms provide a tradeoff between the relative approximation and the additive error: the first has $O(1)$ relative approximation and $\sim (k^{2.5} + k^{1.01} \sqrt{d})$ additive error, and the second one achieves $(1+\gamma)$ relative approximation to the optimal non-private algorithm for an arbitrarily small constant $\gamma>0$ and with $\text{poly}(k, d)$ additive error for a larger polynomial dependence on $k$ and $d$. To achieve our result, we develop a general framework which partitions the data and reduces the DP clustering problem for the entire dataset to the DP clustering problem for each part. To control the blow-up of the additive error introduced by each part, we develop a novel charging argument which might be of independent interest.
Accept
The paper gives new algorithms for k-means and k-median clustering with differential privacy in the massively parallel computation (MPC) model. These are fundamental clustering problems and the MPC model is a common abstraction for relevant distributed systems such as Hadoop, Spark. The new algorithms have approximation factors and additive errors close to those of best known algorithms for the single machine setting, with a constant number of rounds of communication and near linear total work. All of these parameters are close to optimal, except for possibly a few factors of k, the number of clusters. All reviewers appreciate the new solution to important problems in a relevant setting, especially the near optimality in several dimensions at once: the number of rounds, the total work, the approximation factor, and the additive error.
train
[ "S1Z2wryVR7", "nKgn4kNk76c", "IFakXt2k-5", "GjHHTN3Ele31", "HtjXODwkNBJ", "3VuYghrZE9Y", "GZfaGjT0Lmk", "OZk3ZRnfONh", "9DSdQIBkRU3", "bxCto5K7tyH", "e51y9bT8p6v", "iKUKGLNg_kc", "7ctmAbi-kkz" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Yes, we will definitely include this discussion (both about the faster analysis, and how our charging argument is novel) in the paper. Hopefully the addition to the charging argument intuition can go in the main body, we'll probably just give a pointer to the analysis for the O(nd + poly(k)) time analysis, and up...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "nKgn4kNk76c", "IFakXt2k-5", "GjHHTN3Ele31", "HtjXODwkNBJ", "GZfaGjT0Lmk", "7ctmAbi-kkz", "iKUKGLNg_kc", "e51y9bT8p6v", "bxCto5K7tyH", "nips_2022_9lQmaKMxIUD", "nips_2022_9lQmaKMxIUD", "nips_2022_9lQmaKMxIUD", "nips_2022_9lQmaKMxIUD" ]
nips_2022_k5idxiVdJ3p
On Scrambling Phenomena for Randomly Initialized Recurrent Networks
Recurrent Neural Networks (RNNs) frequently exhibit complicated dynamics, and their sensitivity to the initialization process often renders them notoriously hard to train. Recent works have shed light on such phenomena by analyzing when exploding or vanishing gradients may occur, either of which is detrimental for training dynamics. In this paper, we point to a formal connection between RNNs and chaotic dynamical systems and prove a qualitatively stronger phenomenon about RNNs than what exploding gradients seem to suggest. Our main result proves that under standard initialization (e.g., He, Xavier etc.), RNNs will exhibit \textit{Li-Yorke chaos} with \textit{constant} probability \textit{independent} of the network's width. This explains the experimentally observed phenomenon of \textit{scrambling}, under which trajectories of nearby points may appear to be arbitrarily close during some timesteps, yet will be far away in future timesteps. In stark contrast to their feedforward counterparts, we show that chaotic behavior in RNNs is preserved under small perturbations and that their expressive power remains exponential in the number of feedback iterations. Our technical arguments rely on viewing RNNs as random walks under non-linear activations, and studying the existence of certain types of higher-order fixed points called \textit{periodic points} in order to establish phase transitions from order to chaos.
Accept
This submission is borderline. Reviewer EXTU praises its theoretical contribution and highly recommends acceptance. Reviewers niom and YhQb are more tepid but still support acceptance in light of the sound theoretical analysis. Reviewer j5Sz acknowledges that the analysis is sound, but believes the models to which it pertains (RNNs, with certain unconventional features) are of little interest to the community, and therefore argues for rejection. While I agree with j5Sz that the paper may not be of immediate interest to many practitioners, I believe the theory it delivers is worthy on its own, and may lead to further theoretical developments that will apply to more contemporary neural networks. I thus recommend acceptance.
train
[ "SyYkEY_uGxH", "CUXS4Y931pK", "67-uFXY9SFZ", "hlrB9-2sH3I", "BJXwZ4Uy7b", "qRta-WaDy8v", "yRe3zvfrCR9", "FmnxLxFHc1g", "0zD6l8TUuET", "C3tFt74MoP", "Xq2e5BqDuLT", "ZiimKtUyLpK", "IfQvdI_pKy0", "znzJU4ZE9m" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rigorous answer. Today I learned something.\nBest of Luck!", " We thank the reviewers' for their comments and effort. \n\nWe look forward for your feedback after our responses. Let us know if there is anything more that is needed.\n", " We thank the reviewer for the feedback and interesting ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 9, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 2 ]
[ "67-uFXY9SFZ", "nips_2022_k5idxiVdJ3p", "znzJU4ZE9m", "IfQvdI_pKy0", "ZiimKtUyLpK", "ZiimKtUyLpK", "Xq2e5BqDuLT", "Xq2e5BqDuLT", "Xq2e5BqDuLT", "Xq2e5BqDuLT", "nips_2022_k5idxiVdJ3p", "nips_2022_k5idxiVdJ3p", "nips_2022_k5idxiVdJ3p", "nips_2022_k5idxiVdJ3p" ]
nips_2022_BsSP7pZGFQO
Meta-Learning Dynamics Forecasting Using Task Inference
Current deep learning models for dynamics forecasting struggle with generalization. They can only forecast in a specific domain and fail when applied to systems with different parameters, external forces, or boundary conditions. We propose a model-based meta-learning method called DyAd which can generalize across heterogeneous domains by partitioning them into different tasks. DyAd has two parts: an encoder that infers the time-invariant hidden features of the task with weak supervision, and a forecaster which learns the shared dynamics of the entire domain. The encoder adapts and controls the forecaster during inference using adaptive instance normalization and adaptive padding. Theoretically, we prove that the generalization error of such a procedure is related to the task relatedness in the source domain, as well as the domain differences between source and target. Experimentally, we demonstrate that our model outperforms state-of-the-art approaches on forecasting complex physical dynamics including turbulent flow, real-world sea surface temperature, and ocean currents.
Accept
This work proposes a model-based meta-learning method to forecast physical dynamics. The proposed approach is able to generalize across heterogeneous domains as demonstrated in convincing sets of experiments. The reviewers found the work to be well motivated, clear and self-contained. Authors justified the proposed model architecture and the ablation studies conducted showed the importance of the network components. The authors also provided an adequate description of the data and the evaluation strategy, as well as theoretical guarantees on the generalization error in several settings.
val
[ "9_bUTtpoW4u", "4FppSQfP9x", "AzQC9mB3g6Y", "IM__31klBrS", "9nxpc6GcH8d", "icUqBmkSc_J", "qOM2dZf0cgm", "8p3FCKU6_7a" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all reviewers for thoroughly reading our paper and providing very positive and high-quality feedback. We greatly appreciate all reviewers pointing out that our method is novel and well-motivated and the writing is clear. Reviewer Nf4r noted that our code is clean to run and has reproducible...
[ -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "nips_2022_BsSP7pZGFQO", "AzQC9mB3g6Y", "8p3FCKU6_7a", "qOM2dZf0cgm", "icUqBmkSc_J", "nips_2022_BsSP7pZGFQO", "nips_2022_BsSP7pZGFQO", "nips_2022_BsSP7pZGFQO" ]
nips_2022_D0aqV81d0_k
Optimal Dynamic Regret in LQR Control
We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control. We provide an efficient online algorithm that achieves an optimal dynamic (policy) regret of $\tilde{O}(n^{1/3} \mathcal{TV}(M_{1:n})^{2/3} \vee 1)$, where $\mathcal{TV}(M_{1:n})$ is the total variation of any oracle sequence of \emph{Disturbance Action} policies parameterized by $M_1,...,M_n$ --- chosen in hindsight to cater to unknown nonstationarity. The rate improves the best known rate of $\tilde{O}(\sqrt{n (\mathcal{TV}(M_{1:n})+1)} )$ for general convex losses and is information-theoretically optimal for LQR. Main technical components include the reduction of LQR to online linear regression with delayed feedback due to Foster & Simchowitz 2020, as well as a new \emph{proper} learning algorithm with an optimal $\tilde{O}(n^{1/3})$ dynamic regret on a family of ``minibatched'' quadratic losses, which could be of independent interest.
Accept
This paper considers the problem of online linear-quadratic control with adversarial noise. Previous work aims to derive regret bounds relative to the best linear controller in hindsight, whereas this work considers *dynamic regret*, where the algorithm ensures regret against all sequences of policies simultaneously, with the regret depending on the variation in the sequence. The main result is an algorithm with policy regret $n^{1/3}\cdot\mathrm{Variation}^{2/3}$, which improves upon the previous state of the art. The reviewers found this paper to be clear and well-written, and found it technically interesting (the main challenge is to extend the dynamic regret guarantees of Baby and Wang (2022) to the setting of \emph{proper} online learning, so that the reduction from online control to online regression of Foster and Simchowitz (2020) can be applied). For the final revision, the authors are encouraged to incorporate the reviewers' feedback to improve the presentation.
train
[ "XWowXTh4XzS", "jAu1wDAhE4V", "67fQj3Shu58", "KOswre6P8AL", "7g2gViFwzUr8", "cMf3iZSQNOG", "r2zwHBgi0ps", "ex0ZfoXWxMx" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my questions. After reading the author response, I maintain my judgement in favor of acceptance.", " Thank you for your reply! Since camera-ready provides an additional page, I do recommend trying to fit in the parameter settings in main text for camera-ready, or at least a hyperlink to...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "7g2gViFwzUr8", "67fQj3Shu58", "ex0ZfoXWxMx", "r2zwHBgi0ps", "cMf3iZSQNOG", "nips_2022_D0aqV81d0_k", "nips_2022_D0aqV81d0_k", "nips_2022_D0aqV81d0_k" ]
nips_2022_x26Mpsf45P3
Bellman Residual Orthogonalization for Offline Reinforcement Learning
We propose and analyze a reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a user-defined space of test functions. Focusing on applications to model-free offline RL with function approximation, we exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class. We prove an oracle inequality on our policy optimization procedure in terms of a trade-off between the value and uncertainty of an arbitrary comparator policy. Different choices of test function spaces allow us to tackle different problems within a common framework. We characterize the loss of efficiency in moving from on-policy to off-policy data using our procedures, and establish connections to concentrability coefficients studied in past work. We examine in depth the implementation of our methods with linear function approximation, and provide theoretical guarantees with polynomial-time implementations even when Bellman closure does not hold.
Accept
All reviewers recommend acceptance and the meta-reviewer agrees. The reviewers appreciated the generality and the new point of view introduced by the Bellman residual orthogonalization framework for offline RL. Several existing results and techniques are recovered by this paper, which can be viewed critically in the sense that many results are already known. However, the paper generalizes these results significantly, deriving several new results, e.g. the generalization of LSTD to output confidence intervals, and requiring weaker assumptions to achieve existing results, e.g. computationally tractable policy optimization in the linear setting without Bellman completeness. Unfortunately, it is unclear if and how the analyzed methods can be implemented computationally efficiently beyond the linear case. Overall, this is a solid technical paper with useful theoretical insights that were appreciated by all reviewers.
train
[ "KbJXhmE0GJ", "IKunosZjFX", "p5Su04TfdpZ", "iT5Xa5mUHrz", "H5Wx2sl8v_6", "3tgEu1trrZ5", "j1VtC947pF", "I5QZkC3sz_", "YcxluWlfhiR", "V0uitMyeNt0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks the authors for the detailed answer. I have no further questions regarding the paper and personally I think it is a very good paper. \nI keep my current score. ", " We thank the reviewer for the very detailed feedback and for spotting clarity issues and typos. We also welcome the comment on the writing s...
[ -1, -1, -1, -1, -1, -1, 6, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "iT5Xa5mUHrz", "V0uitMyeNt0", "YcxluWlfhiR", "I5QZkC3sz_", "j1VtC947pF", "nips_2022_x26Mpsf45P3", "nips_2022_x26Mpsf45P3", "nips_2022_x26Mpsf45P3", "nips_2022_x26Mpsf45P3", "nips_2022_x26Mpsf45P3" ]
nips_2022_jFVfKsmKa-
Bounded-Regret MPC via Perturbation Analysis: Prediction Error, Constraints, and Nonlinearity
We study Model Predictive Control (MPC) and propose a general analysis pipeline to bound its dynamic regret. The pipeline first requires deriving a perturbation bound for a finite-time optimal control problem. Then, the perturbation bound is used to bound the per-step error of MPC, which leads to a bound on the dynamic regret. Thus, our pipeline reduces the study of MPC to the well-studied problem of perturbation analysis, enabling the derivation of regret bounds of MPC under a variety of settings. To demonstrate the power of our pipeline, we use it to generalize existing regret bounds on MPC in linear time-varying (LTV) systems to incorporate prediction errors on costs, dynamics, and disturbances. Further, our pipeline leads to regret bounds on MPC in systems with nonlinear dynamics and constraints.
Accept
The paper studies MPC and proposes a general analytic framework to bound dynamic regret. The approach is to achieve a perturbation bound for the MPC problem per step and then obtain a regret bound. Overall the paper saw a lot of discussion, with the reviewers initially questioning the non-triviality of the results, but those questions were sufficiently addressed. Finally, the reviewers have liked the theoretical contributions of the paper; however, the practical insight or impact has been particularly questioned. The lack of results on Robust MPC has been a talking point from the perspective of what is used in practice under these assumptions. From the reviewers' assessments, the paper lies on the borderline, but the theoretical contributions are strong and that leads me to a recommendation of marginal accept. However, this recommendation comes with a strong plea to the authors regarding improving the manuscript, especially improving the discussion and presentation of recent results on regret analysis of MPC. This is important so that readers can fully comprehend and understand the contribution of the paper.
train
[ "XKWx2x7WyQm", "6T9_CVvNqZ", "ZyDJz5QCmwZ", "GG43ygYIV3v", "PojB-TjejNS", "rn4TEF52weQ", "42qPZzdcI2z", "RTtMRafvudJ", "xqR3Hde_Vl", "k02-GEvW9QL", "T--Ni8rt3-x", "C3wDhdDo1T", "ScRoe_dqH-L", "RuJpYFZh8iE" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments and follow-up questions. Please find the response to your questions below:\n\n> 1. *The authors' main insight is \"increasing the prediction horizon but with very inaccurate predictions can actually hurt the dynamic regret.\" Maybe I am missing something here. This seems to be common s...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "GG43ygYIV3v", "rn4TEF52weQ", "PojB-TjejNS", "k02-GEvW9QL", "RTtMRafvudJ", "RuJpYFZh8iE", "ScRoe_dqH-L", "ScRoe_dqH-L", "C3wDhdDo1T", "T--Ni8rt3-x", "nips_2022_jFVfKsmKa-", "nips_2022_jFVfKsmKa-", "nips_2022_jFVfKsmKa-", "nips_2022_jFVfKsmKa-" ]
nips_2022_rhdfTOiXBng
NaturalProver: Grounded Mathematical Proof Generation with Language Models
Theorem proving in natural mathematical language – the mixture of symbolic and natural language used by humans – plays a central role in mathematical advances and education, and tests aspects of reasoning that are core to intelligence. Yet it has remained underexplored with modern generative models. We study large-scale language models on two new generation tasks: suggesting the next step in a mathematical proof, and full proof generation. We develop NaturalProver, a language model that generates proofs by conditioning on background references (e.g. theorems and definitions that are either retrieved or human-provided), and optionally enforces their presence with constrained decoding. On theorems from the NaturalProofs benchmark, NaturalProver improves the quality of next-step suggestions and generated proofs over fine-tuned GPT-3, according to human evaluations from university-level mathematics students. NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time, which is to our knowledge the first demonstration of these capabilities using neural language models.
Accept
The paper addresses an exciting problem statement--generating theorems directly in natural language--and shows how to adapt large language models to this task, both for autocompletion, proof reference generation, and wholecloth proof generation. While previous works have considered various auxiliary mathematical tasks posed in natural language, this work takes an important step by making progress toward doing proofs directly in natural language. This is a hard problem, and the authors support their work with experiments showcasing and analyzing different kinds of successes and failures. The reviews are unanimous in recommending acceptance.
train
[ "895ieEVPKcR", "bdD6sD7joi", "-zk7nAP9KVW", "BDSMmLtKxKh", "Bjkf1cnN5kS", "iZUaiZlTIr", "OrBdMHrkcNZ", "P2TgEOoJAta", "8doA8mPi-jj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors -\n\nThank you for the explanations and addressing my concerns. My apologies for misunderstanding some things and thank you for the clarifications.\n\nYour response has addressed all of my questions and concerns. I have raised my score to accept.\n", " Thank you for responding to my comments. Most ...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "Bjkf1cnN5kS", "iZUaiZlTIr", "nips_2022_rhdfTOiXBng", "8doA8mPi-jj", "P2TgEOoJAta", "OrBdMHrkcNZ", "nips_2022_rhdfTOiXBng", "nips_2022_rhdfTOiXBng", "nips_2022_rhdfTOiXBng" ]
nips_2022_etY_XXnPkoC
The Implicit Delta Method
Epistemic uncertainty quantification is a crucial part of drawing credible conclusions from predictive models, whether concerned about the prediction at a given point or any downstream evaluation that uses the model as input. When the predictive model is simple and its evaluation differentiable, this task is solved by the delta method, where we propagate the asymptotically-normal uncertainty in the predictive model through the evaluation to compute standard errors and Wald confidence intervals. However, this becomes difficult when the model and/or evaluation becomes more complex. Remedies include the bootstrap, but it can be computationally infeasible when training the model even once is costly. In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of the predictive model to automatically assess downstream uncertainty. We show that the change in the evaluation due to regularization is consistent for the asymptotic variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference. This provides both a reliable quantification of uncertainty in terms of standard errors as well as permits the construction of calibrated confidence intervals. We discuss connections to other approaches to uncertainty quantification, both Bayesian and frequentist, and demonstrate our approach empirically.
Accept
The paper proposes an original new tool to access the uncertainty of a machine learning model. The authors agreed that it is a valuable contribution to our community and deserves acceptance. Importantly, all reviewers mention that there is room for improvement, both in the presentation containing ambiguities and in the empirical evaluation that needs strengthening. The authors properly acknowledge this in their rebuttal, and a consensus emerged from the discussion that those shortcomings are fixable. Thus, I kindly ask the authors to carefully revise their paper for the camera ready by implementing all the changes they committed to and considering all reviewer's comments. This includes (but is not limited to): - Make the paper more accessible for a broader ML audience by including more background, clarifying the scope of the paper in the abstract and introduction, and discussing what kind of uncertainty is studied throughout the paper; - Adding the supplemental results reported during the authors-reviewers discussion; - Report the accuracy of the method using synthetic data in which we can simulate the actual sampling uncertainty ([asked by Reviewer 41Kr](https://openreview.net/forum?id=etY_XXnPkoC&noteId=CLiYTiN03Ed))
train
[ "bsNejzs7j5", "ebAXJaY1IxU", "3PlKu-bfqmA", "CLiYTiN03Ed", "7PMh6VIN9k2", "aT45PB1ZUx4", "0QrktsvIc9B", "eqATGIguVZH", "QRus69kT7Xh", "oOFWxSS2umo", "G756QHhRoLg", "UOw6agXNJI", "1Gzqv2mwuna", "ozXsD_IbLs", "cT2rc_VYt9c", "HK5mFsSMxW", "wRMrK4ddzTD" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the encouraging words. We are glad you found the response helpful. \n\nThank you for the helpful pointers regarding clarifying the terminology on uncertainty in the paper and demonstrating the value of IDM experimentally which we will follow. As mentioned in our latest reply to Reviewer PMCG, we hav...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 3, 4 ]
[ "CLiYTiN03Ed", "3PlKu-bfqmA", "cT2rc_VYt9c", "eqATGIguVZH", "aT45PB1ZUx4", "oOFWxSS2umo", "wRMrK4ddzTD", "wRMrK4ddzTD", "HK5mFsSMxW", "HK5mFsSMxW", "cT2rc_VYt9c", "cT2rc_VYt9c", "ozXsD_IbLs", "nips_2022_etY_XXnPkoC", "nips_2022_etY_XXnPkoC", "nips_2022_etY_XXnPkoC", "nips_2022_etY_XX...
nips_2022_hqRwcqzegr7
Globally Gated Deep Linear Networks
Recently proposed Gated Linear Networks (GLNs) present a tractable nonlinear network architecture, and exhibit interesting capabilities such as learning with local error signals and reduced forgetting in sequential learning. In this work, we introduce a novel gating architecture, named Globally Gated Deep Linear Networks (GGDLNs) where gating units are shared among all processing units in each layer, thereby decoupling the architectures of the nonlinear but unlearned gating and the learned linear processing motifs. We derive exact equations for the generalization properties of Bayesian Learning in these networks in the finite-width thermodynamic limit, defined by $N, P\rightarrow\infty$ while $P/N=O(1)$, where $N$ and $P$ are the hidden layers' width and the size of the training data sets, respectively. We find that the statistics of the network predictor can be expressed in terms of kernels that undergo shape renormalization through a data-dependent order-parameter matrix compared to the infinite-width Gaussian Process (GP) kernels. Our theory accurately captures the behavior of finite width GGDLNs trained with gradient descent (GD) dynamics. We show that kernel shape renormalization gives rise to rich generalization properties w.r.t. network width, depth, and $L_2$ regularization amplitude. Interestingly, networks with a large number of gating units behave similarly to standard ReLU architectures. Although gating units in the model do not participate in supervised learning, we show the utility of unsupervised learning of the gating parameters. Additionally, our theory allows the evaluation of the network capacity for learning multiple tasks by incorporating task-relevant information into the gating units. In summary, our work is the first exact theoretical solution of learning in a family of nonlinear networks with finite width. 
The rich and diverse behavior of the GGDLNs suggests that they are helpful analytically tractable models of learning single and multiple tasks, in finite-width nonlinear deep networks.
Accept
Three reviewers recommend accept. The reviewers praised the significant extensions of previous works, the novelty of the ideas, and found the theoretical analysis sound and insightful. The author responses to initial feedback were found to be insightful and reassured the reviewers about their recommendations. Hence I am recommending accept. I encourage the authors to take the reviewers feedback carefully into consideration when preparing the final manuscript and to work on the items promised in the discussion period.
test
[ "CsavlzxdqKU", "Jauanu6yNwC", "NQ1D23xB8SZ", "RTU2-LQs7BS", "S1XsbSmTt7Hi", "IfnO1KS8iXE", "l0u7h4tdkh9", "T29lfJdsC8Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the insightful comments. I am further reassured by the soundness of the results. My recommendation remains unchanged.", " I thank the authors for their answers and clarifications.\n\nI advise to authors to carefully double check the equations and formalism because it looks like there are...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "S1XsbSmTt7Hi", "NQ1D23xB8SZ", "T29lfJdsC8Q", "l0u7h4tdkh9", "IfnO1KS8iXE", "nips_2022_hqRwcqzegr7", "nips_2022_hqRwcqzegr7", "nips_2022_hqRwcqzegr7" ]
nips_2022_bEMrmaw8gOB
Model-based RL with Optimistic Posterior Sampling: Structural Conditions and Sample Complexity
We propose a general framework to design posterior sampling methods for model-based RL. We show that the proposed algorithms can be analyzed by reducing regret to Hellinger distance in conditional probability estimation. We further show that optimistic posterior sampling can control this Hellinger distance, when we measure model error via data likelihood. This technique allows us to design and analyze unified posterior sampling algorithms with state-of-the-art sample complexity guarantees for many model-based RL settings. We illustrate our general result in many special cases, demonstrating the versatility of our framework.
Accept
The paper proposes a new theoretical framework to design posterior sampling methods for model-based RL. All the reviewers agree that the theoretical framework is novel and can avoid many complications in the previous works.
train
[ "-tJF5t-l8C", "dB3YucW_uK", "iLiGpuKBaGB", "dLqWsUoUed5", "zB0Ucc_0C07", "_bbyX54bfzg", "FfukvPOrQN7", "S8CkE8b0jUdO", "eOHMZp8Zajt", "UIVvFRbH_sM", "nYaiQLFPYMg", "v3u5egM2a6i", "So2g7rTJRc" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response, which is helpful in clarifying some of my confusions in the initial review. Given that, I will adjust my reviews correspondingly. ", " Thanks again for engaging in the discussion phase! We will definitely update the discussion per your suggestion.", " Yes, I absolutely agree with the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "_bbyX54bfzg", "iLiGpuKBaGB", "dLqWsUoUed5", "zB0Ucc_0C07", "S8CkE8b0jUdO", "So2g7rTJRc", "v3u5egM2a6i", "nYaiQLFPYMg", "UIVvFRbH_sM", "nips_2022_bEMrmaw8gOB", "nips_2022_bEMrmaw8gOB", "nips_2022_bEMrmaw8gOB", "nips_2022_bEMrmaw8gOB" ]
nips_2022_WPXRVQaP9Oq
On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
Interpretable and explainable machine learning has seen a recent surge of interest. We focus on safety as a key motivation behind the surge and make the relationship between interpretability and safety more quantitative. Toward assessing safety, we introduce the concept of *maximum deviation* via an optimization problem to find the largest deviation of a supervised learning model from a reference model regarded as safe. We then show how interpretability facilitates this safety assessment. For models including decision trees, generalized linear and additive models, the maximum deviation can be computed exactly and efficiently. For tree ensembles, which are not regarded as interpretable, discrete optimization techniques can still provide informative bounds. For a broader class of piecewise Lipschitz functions, we leverage the multi-armed bandit literature to show that interpretability produces tighter (regret) bounds on the maximum deviation. We present case studies, including one on mortgage approval, to illustrate our methods and the insights about models that may be obtained from deviation maximization.
Accept
The authors propose to inspect learned models based on their maximum deviation to a reference model. They evaluate the feasibility of computing this deviation for a number of widely used model classes, including generalised linear models and decision trees. The idea is illustrated in cases studies. Reviewers all appreciated the novelty and importance of the proposed problem and the contributions made to examine the feasibility of solving it. Their main concerns were regarding the limited discussion on choosing the reference model and the certification set. The authors expanded greatly on this in their rebuttal and in a revision to the paper. The technical novelty is rather lower but the paper should not have trouble finding an audience in the NeurIPS community due to the broad applicability of the problem under consideration the clarity of the manuscript. I think the added discussion asked for by reviewers is well within what could be expected to be added for a camera-ready version (and indeed, this is already in the Appendix of the revision).
train
[ "JvSst8DCnzV", "1kub43ZM9rB", "lbrXDCQpY9a", "o-DlM0is5E2", "TaURa2iM5BI", "naNrKQCTOYv", "_5jkHzLF7CF", "BX2ND5JPmSF", "cDJCv1sQix5", "1o7fqyJAmbw", "jWlf8KNIA7s", "bTSPZzDOBxI" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, this answers my questions. I would like to keep my score unchanged. ", " Thanks for the authors' response. Most of my concern are well addressed. I would like to raise my score to Borderline accept. ", " A gentle reminder to the reviewers that if you have any further questions for us or clarificat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 5 ]
[ "naNrKQCTOYv", "o-DlM0is5E2", "nips_2022_WPXRVQaP9Oq", "bTSPZzDOBxI", "jWlf8KNIA7s", "1o7fqyJAmbw", "BX2ND5JPmSF", "cDJCv1sQix5", "nips_2022_WPXRVQaP9Oq", "nips_2022_WPXRVQaP9Oq", "nips_2022_WPXRVQaP9Oq", "nips_2022_WPXRVQaP9Oq" ]
nips_2022_U1m_93ansV
Towards Safe Reinforcement Learning with a Safety Editor Policy
We consider the safe reinforcement learning (RL) problem of maximizing utility with extremely low constraint violation rates. Assuming no prior knowledge or pre-training of the environment safety model given a task, an agent has to learn, via exploration, which states and actions are safe. A popular approach in this line of research is to combine a model-free RL algorithm with the Lagrangian method to adjust the weight of the constraint reward relative to the utility reward dynamically. It relies on a single policy to handle the conflict between utility and constraint rewards, which is often challenging. We present SEditor, a two-policy approach that learns a safety editor policy transforming potentially unsafe actions proposed by a utility maximizer policy into safe ones. The safety editor is trained to maximize the constraint reward while minimizing a hinge loss of the utility state-action values before and after an action is edited. SEditor extends existing safety layer designs that assume simplified safety models, to general safe RL scenarios where the safety model can in theory be arbitrarily complex. As a first-order method, it is easy to implement and efficient for both inference and training. On 12 Safety Gym tasks and 2 safe racing tasks, SEditor obtains a much higher overall safety-weighted-utility (SWU) score than the baselines, and demonstrates outstanding utility performance with constraint violation rates as low as once per 2k time steps, even in obstacle-dense environments. On some tasks, this low violation rate is up to 200 times lower than that of an unconstrained RL method with similar utility performance. Code is available at https://github.com/hnyu/seditor.
Accept
Reviews and responses make sense. The authors made a lot of improvements during the review process. The updated version could be accepted.
train
[ "JYGzGtWF2Ur", "leElPln3hD", "OhXIo9W1dOP-", "EWzck_sgiQS", "DbHseGrQZUW", "GFconwr4X0K", "xr-3dEpKH0i", "4K_4orCGJHs", "GEYugghCZsN", "nhi8Z4xITjV", "nE6cLWtnaxV", "Lgnkg231vLw", "Xc51HPHckdS", "pu7WcMto9En", "E9p32buOPSz", "-64yPcse_iO", "upyb_UZGAsW", "MwyCSmgfbrK", "5Y0fp1w6e...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Authors,\nThank you for the rebuttal. After reading the rebuttal and the reviews, I have updated the score. Thank you.", " Thanks, had a look at the revised version. Just updated my score.", " I agree that there are always different aspects of the algorithms and presenting a comparison on the fly (within...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Lgnkg231vLw", "DbHseGrQZUW", "EWzck_sgiQS", "xr-3dEpKH0i", "4K_4orCGJHs", "nips_2022_U1m_93ansV", "AuHNqcEOtOA", "pu7WcMto9En", "AuHNqcEOtOA", "AuHNqcEOtOA", "AuHNqcEOtOA", "af54Y9gmC10", "af54Y9gmC10", "5Y0fp1w6eAS", "MwyCSmgfbrK", "MwyCSmgfbrK", "nips_2022_U1m_93ansV", "nips_202...
nips_2022_N6zHSyChCF2
Discrete Compositional Representations as an Abstraction for Goal Conditioned Reinforcement Learning
Goal-conditioned reinforcement learning (RL) is a promising direction for training agents that are capable of solving multiple tasks and reach a diverse set of objectives. How to \textit{specify} and \textit{ground} these goals in such a way that we can both reliably reach goals during training as well as generalize to new goals during evaluation remains an open area of research. Defining goals in the space of noisy, high-dimensional sensory inputs is one possibility, yet this poses a challenge for training goal-conditioned agents, or even for generalization to novel goals. We propose to address this by learning compositional representations of goals and processing the resulting representation via a discretization bottleneck, for coarser specification of goals, through an approach we call DGRL. We show that discretizing outputs from goal encoders through a bottleneck can work well in goal-conditioned RL setups, by experimentally evaluating this method on tasks ranging from maze environments to complex robotic navigation and manipulation tasks. Additionally, we show a theoretical result which bounds the expected return for goals not observed during training, while still allowing for specifying goals with expressive combinatorial structure.
Accept
This paper proposes a discrete and compositional representation of goal states for goal-conditioned RL. The idea is to learn a goal representation via self-supervised learning and discretize the learned representation via VQ-VAE, and finally use the learned goal representation for goal-conditioned RL. The proposed method improves performance on several goal-conditioned RL benchmarks. All of the reviewers found the idea simple and reasonable, and the results on a variety of benchmarks are quite comprehensive and strong. Although there were concerns around why the proposed discretized representation forms a semantically meaningful latent space and where the improvement comes from, the authors addressed them during the rebuttal period with updated results. All of the reviewers became in favor of the paper as a result. Thus, I recommend accepting this paper.
train
[ "fRX0rLVAciX", "rNiSKkzCtxP", "B9xo86F_OyZ", "kFriUhlHuMu", "7nQHvWrJhUw", "FTNQdSUYQPZ", "iDCvGcfQG07", "bIBe0Cx_fvt", "EQCDdzWci10", "nY0ciRsmif7r", "Sv7Rwa42zx4p", "s5gCn0JCK0EC", "v7H8LggIvRP", "zc63Azc2pI", "wtwnD0AUnMi", "kkQAxHRBclj", "hbyZ5bbOOdt", "UunjDIp0QdX", "cqnusQJ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer,\n\nThank you for your time in discussing this, and explaining it further with details. It has already done a lot to make the paper better. \n\nAs we understand it, your question is about whether most of the improvement comes from the goal representations themselves being discretized, or from selec...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "7nQHvWrJhUw", "nips_2022_N6zHSyChCF2", "kFriUhlHuMu", "EQCDdzWci10", "Sv7Rwa42zx4p", "iDCvGcfQG07", "BB291porV_2", "BB291porV_2", "g8xn0HXEKIv", "zc63Azc2pI", "s5gCn0JCK0EC", "zc63Azc2pI", "fqQe-v3kMM2", "dHwLj47fP_U", "nips_2022_N6zHSyChCF2", "FI0Vlor1HY5", "BB291porV_2", "BB291p...
nips_2022_FkPZGtTxXx6
Natural image synthesis for the retina with variational information bottleneck representation
In the early visual system, high dimensional natural stimuli are encoded into the trains of neuronal spikes that transmit the information to the brain to produce perception. However, is all the visual scene information required to explain the neuronal responses? In this work, we search for answers to this question by developing a joint model of the natural visual input and neuronal responses using the Information Bottleneck (IB) framework that can represent features of the input data in a few latent variables that play a role in the prediction of the outputs. The correlations between data samples acquired from published experiments on ex-vivo retinas are accounted for in the model by a Gaussian Process (GP) prior. The proposed IB-GP model performs competitively with state-of-the-art feedforward convolutional networks in predicting spike responses to natural stimuli. Finally, the IB-GP model is used in a closed-loop iterative process to obtain reduced-complexity inputs that elicit responses as elicited by the original stimuli. We found three properties of the retina's IB-GP model. First, the reconstructed stimuli from the latent variables show robustness in spike prediction across models. Second, surprisingly, the dynamics of the high-dimensional stimuli and RGCs' responses are very well represented in the embeddings of the IB-GP model. Third, the minimum stimuli consist of different patterns: Gabor-type locally high-frequency filters, on- and off-center Gaussians, or a mixture of both. Overall, this work demonstrates that the IB-GP model provides a principled approach for joint learning of the stimuli and retina codes, capturing dynamics of the stimuli-RGCs in the latent space, which could help better understand the computation of the early visual system.
Accept
In this paper, the authors describe a new method for estimating the stimulus-response characteristics of biological visual neurons from the retina. The authors employ the Information Bottleneck method to compress the visual representation and compare their results to other models including CNN-based architectures. The authors find that their model is most performant in terms of Pearson correlation and log-likelihood on real neural spike trains recorded in response to natural imagery and Brownian movement. Lastly, the authors use the model to reconstruct stimuli using the learned latent representation. The reviewers applauded the principled approach to the estimation and analysis of neural receptive fields, the clarity of presentation of the method and comparisons across models and the scale of the data. In the responses, the authors were able to showcase new results indicating that the method could scale to a larger number of neurons. One reviewer did take issue with the introduction relying too heavily on the discussion of neural prosthetics to motivate this work as opposed to a discussion of just the neural coding problem. I agree with the reviewer in this sentiment and would like to see the introduction of the paper updated accordingly to better emphasize that this work is focused on the issue of modeling the stimulus-response relationship of a neural population. Given the strong support of the reviewers, this paper will be conditionally accepted provided two updates are performed by the authors: (1) update to the introduction section and (2) update to Figure 4D to fit within the margin.
train
[ "589MZtQJsTJ", "-PLxVjAj9rE", "3-kkXAzuU9M", "F_g2o5bl_tR", "habi9R2RsVr", "29XaWkRSf5E", "mnvof9NzM_g", "PSuFdjCBUi", "lexQ-9PNHJn", "Hp-SGBseNYj", "7BnHnUYrSRC", "tzfm6IdcDiL", "L_KQD1yJ0Tg", "CmLT6tbxXIo", "1Csg3kvVwb4", "YXz1rjZzAN", "zABk2gYiag2", "tUZTKnUGgl", "VQ_BceOGYMW"...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " We thank the reviewer for the comments/feedbacks and are happy that we could answer some of the reviewer's concerns. We believe our work may not be an immediate solution to the prostheses problem but it is a step forward. \n\nWe thank the reviewer for suggesting to look more into the synthesis problem. We want to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "-PLxVjAj9rE", "F_g2o5bl_tR", "habi9R2RsVr", "tUZTKnUGgl", "L_KQD1yJ0Tg", "mnvof9NzM_g", "YXz1rjZzAN", "lexQ-9PNHJn", "Hp-SGBseNYj", "7BnHnUYrSRC", "CmLT6tbxXIo", "nips_2022_FkPZGtTxXx6", "kwSCSBfyTYo", "1Csg3kvVwb4", "A91-Sq_u08V", "zABk2gYiag2", "nBlZ2Oin6A", "VQ_BceOGYMW", "gC...
nips_2022_ecevn9kPm4
Riemannian Diffusion Models
Diffusion models are recent state-of-the-art methods for image generation and likelihood estimation. In this work, we generalize continuous-time diffusion models to arbitrary Riemannian manifolds and derive a variational framework for likelihood estimation. Computationally, we propose new methods for computing the Riemannian divergence which is needed for likelihood estimation. Moreover, in generalizing the Euclidean case, we prove that maximizing this variational lower-bound is equivalent to Riemannian score matching. Empirically, we demonstrate the expressive power of Riemannian diffusion models on a wide spectrum of smooth manifolds, such as spheres, tori, hyperboloids, and orthogonal groups. Our proposed method achieves new state-of-the-art likelihoods on all benchmarks.
Accept
This paper presents a generalization of continuous-time diffusion models to Riemannian manifolds and derives a variational framework for likelihood estimation. The theoretical analysis is accompanied by numerical experiments. Reviewers generally agree that the paper is novel, technically sound, well-written, and would make a solid contribution to research on diffusion models. Presentation: I would suggest that the presentation be improved to make it more accessible to the general ML community. Specific references should be given to concepts such as Riemannian divergence, the Divergence Theorem, etc., since these are not new.
train
[ "mGuEObj6OB", "wNrda09Yb6D", "oFbLdoGSCgz", "XY5DvZbM_n", "RVXxafupN-l", "vK7Un_EjuUs", "u4iy-gQGqes", "h11nqHoH3QA", "UkMdjXk_IPW", "ExhqQgmDCL4", "yolitQQRx3d" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors, thank you for the clarification. I would like to tune up my score. I believe it is very necessary to make the paper more accessible to the general machine learning community. In addition, it sounds like the main paper does not discuss its limitations as mentioned in my previous reviews. It would be ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "h11nqHoH3QA", "UkMdjXk_IPW", "u4iy-gQGqes", "nips_2022_ecevn9kPm4", "nips_2022_ecevn9kPm4", "yolitQQRx3d", "ExhqQgmDCL4", "UkMdjXk_IPW", "nips_2022_ecevn9kPm4", "nips_2022_ecevn9kPm4", "nips_2022_ecevn9kPm4" ]
nips_2022_cwWSpO6rl3Z
Near-Isometric Properties of Kronecker-Structured Random Tensor Embeddings
We give uniform concentration inequality for random tensors acting on rank-1 Kronecker structured signals, which parallels a Gordon-type inequality for this class of tensor structured data. Two variants of the random embedding are considered, where the embedding dimension depends on explicit quantities characterizing the complexity of the signal. As applications of the tools developed herein, we illustrate with examples from signal recovery and optimization.
Accept
The paper derives uniform concentration inequalities for random tensors acting on rank-1 Kronecker-structured signals, in the spirit of Gordon's inequality. This type of result is at the root of a large literature based on the Convex Min-Max theorem, which recently allowed for the analysis of a plethora of empirical risk minimization problems in convex settings. The paper goes in the direction of providing the necessary tools to go beyond matrices and extend to projections of low-rank signals on random tensors. Such results therefore have very high potential in terms of applicability and are highly relevant to the ML community. The reviewers are all favorable and I agree with them on the quality of the paper and its presentation. I also acknowledge the detailed answers provided during the rebuttal period. I'm confident that the authors will implement the relevant suggestions of the reviewers to fine-tune the final version of the paper.
train
[ "_2OhDY0ApjF", "bBZgmP9fDaj", "ZUemZuyQqfp", "D6ZanT0kUEV", "zHdZWwGamFr", "WVk1GfCvcJB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " - Related Work: Thank you for bringing up the work of [JKW ‘21]. There are a few differences between the questions addressed in this work and the above-mentioned paper which we elaborate below: (1) The main result of Theorem 2.1 in the reference is a JL result, which means it holds for embedding finite point set ...
[ -1, -1, -1, 6, 8, 5 ]
[ -1, -1, -1, 2, 4, 4 ]
[ "WVk1GfCvcJB", "zHdZWwGamFr", "D6ZanT0kUEV", "nips_2022_cwWSpO6rl3Z", "nips_2022_cwWSpO6rl3Z", "nips_2022_cwWSpO6rl3Z" ]
nips_2022_i-UdJ6fWUFc
Semi-supervised Active Linear Regression
Labeled data often comes at a high cost as it may require recruiting human labelers or running costly experiments. At the same time, in many practical scenarios, one already has access to a partially labeled, potentially biased dataset that can help with the learning task at hand. Motivated by such settings, we formally initiate a study of ``semi-supervised active learning'' through the frame of linear regression. Here, the learner has access to a dataset $X \in \mathbb{R}^{(n_{\text{un}}+n_{\text{lab}}) \times d}$ composed of $n_{\text{un}}$ unlabeled examples that a learner can actively query, and $n_{\text{lab}}$ examples labeled a priori. Denoting the true labels by $Y \in \mathbb{R}^{n_{\text{un}} + n_{\text{lab}}}$, the learner's objective is to find $\widehat{\beta} \in \mathbb{R}^d$ such that, $$ \| X \widehat{\beta} - Y \|_2^2 \le (1 + \epsilon) \min_{\beta \in \mathbb{R}^d} \| X \beta - Y \|_2^2 $$ while querying the labels of as few unlabeled points as possible. In this paper, we introduce an instance dependent parameter called the reduced rank, denoted $\text{R}_X$, and propose an efficient algorithm with query complexity $O(\text{R}_X/\epsilon)$. This result directly implies improved upper bounds for two important special cases: $(i)$ active ridge regression, and $(ii)$ active kernel ridge regression, where the reduced-rank equates to the ``statistical dimension'', $\textsf{sd}_\lambda$ and ``effective dimension'', $d_\lambda$ of the problem respectively, where $\lambda \ge 0$ denotes the regularization parameter. Finally, we introduce a distributional version of the problem as a special case of the agnostic formulation we consider earlier; here, for every $X$, we prove a matching instance-wise lower bound of $\Omega (\text{R}_X / \epsilon)$ on the query complexity of any algorithm.
Accept
This submission studies the problem of active (or query) learning for linear regression. It provides label query bounds in terms of novel parameters. All reviewers have appreciated novelty and quality of the results. The problem considered is also clearly of interest to the NeurIPS (theory) community.
train
[ "nKKMnjIoHWd", "3wqMAqjfLY", "yMlXeklYRIa", "6tJ2GXR3A75", "DLaapfYWwa", "kOfsoyCes8Y", "X-FnnPJ5AyN", "CXj5SrleI88", "JAZbIDz-OvF", "fhx3YApbo9W" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We hope that the rebuttal clarifies the questions and concerns raised by the reviewer, as well outline our approach to improving the readability of the paper upon getting the additional page availability. We would be very happy to discuss any further questions about the work and would appreciate if the reviewer m...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "3wqMAqjfLY", "yMlXeklYRIa", "X-FnnPJ5AyN", "DLaapfYWwa", "fhx3YApbo9W", "JAZbIDz-OvF", "CXj5SrleI88", "nips_2022_i-UdJ6fWUFc", "nips_2022_i-UdJ6fWUFc", "nips_2022_i-UdJ6fWUFc" ]
nips_2022_03Qml_SaPqV
Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization
We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test-time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by Montasser, Hanneke, and Srebro [2019] and a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically, through a key technical contribution: the global one-inclusion graph, which may be of independent interest, that generalizes the classical one-inclusion graph due to Haussler, Littlestone, and Warmuth [1994]. Finally, as a byproduct, we identify a dimension characterizing qualitatively and quantitatively what classes of predictors $\mathcal{H}$ are robustly learnable. This resolves an open problem due to Montasser et al. [2019], and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
Accept
A strong theoretical paper that presents novel results on adversarially robust learning algorithms. The paper designs minimax optimal robust learners and in the process identifies a key property namely locality, of existing algorithms that leads to sub-optimality. The paper also resolves an open question from Montasser et al.'19 by identifying a combinatorial quantity that is closely related to robust learning. All the reviewers agree that this is a strong theory paper and should be accepted to NeurIPS.
train
[ "FsJHAX4S_Q2_", "eh1rxZIWykM", "1Ol4SM00m6C", "bTSmY4nogJl", "eIe6v6CvXuG", "XihQS_EmFq" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your detailed review and valuable feedback!\n\n> I find the observation between the lines 307-311 very interesting. I would appreciate if the authors further comment on this point in greater detail.\n\nThe robust shattering dimension (Definition 2) gives rise to a \"cube\" in our global one-inclu...
[ -1, -1, -1, 9, 8, 7 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "XihQS_EmFq", "eIe6v6CvXuG", "bTSmY4nogJl", "nips_2022_03Qml_SaPqV", "nips_2022_03Qml_SaPqV", "nips_2022_03Qml_SaPqV" ]
nips_2022_YPpSngE-ZU
MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields
Creating fast and accurate force fields is a long-standing challenge in computational chemistry and materials science. Recently, Equivariant Message Passing Neural Networks (MPNNs) have emerged as a powerful tool for building machine learning interatomic potentials, outperforming other approaches in terms of accuracy. However, they suffer from high computational cost and poor scalability. Moreover, most MPNNs only pass two-body messages leading to an intricate relationship between the number of layers and the expressivity of the features. This work introduces MACE, a new equivariant MPNN model that uses higher order messages, and demonstrates that this leads to an improved learning law. We show that by using four-body messages, the required number of message passing iterations reduces to just one, resulting in a fast and highly parallelizable model, reaching or exceeding state of the art accuracy on the rMD17 and 3BPA benchmark tasks. Our implementation is available at https://github.com/ACEsuit/mace.
Accept
This paper proposes a novel equivariant message passing network for modeling atomistic systems based on the popular Atomic Cluster Expansion formalism. The method relies on a clever factorization of higher order terms into products of two-body terms. This allows MACE to be fast while also taking into account many-body effects. Experimental results seem strong with intriguing scaling properties. Neural network potentials for atomistic systems is a rapidly growing subfield and this seems like a great contribution. All reviewers supported acceptance noting the strong experimental performance, the fast training and inference speed, and demonstrated scaling with dataset size.
train
[ "Uy4ikw8NxpF", "4G8XAPFUSOf", "5gQB12T-nGhd", "qpDDkmDbYH", "07LpFSZHfdQ", "j8DYekkXV1W", "zgTvFlhS9c3", "4lDo4dFDh5", "8CqAc2Ng5y6" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The detailed response has addressed most of my concerns. I will keep the original score.", " I thank the authors for their detailed response. Most of my concerns have been addressed. I will keep the original score.", " Thank you very much for your positive review and excellent comments. We appreciate that you...
[ -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "qpDDkmDbYH", "07LpFSZHfdQ", "8CqAc2Ng5y6", "4lDo4dFDh5", "zgTvFlhS9c3", "nips_2022_YPpSngE-ZU", "nips_2022_YPpSngE-ZU", "nips_2022_YPpSngE-ZU", "nips_2022_YPpSngE-ZU" ]
nips_2022_yJEUDfzsTX7
Regret Bounds for Risk-Sensitive Reinforcement Learning
In safety-critical applications of reinforcement learning such as healthcare and robotics, it is often desirable to optimize risk-sensitive objectives that account for tail outcomes rather than expected reward. We prove the first regret bounds for reinforcement learning under a general class of risk-sensitive objectives including the popular CVaR objective. Our theory is based on a novel characterization of the CVaR objective as well as a novel optimistic MDP construction.
Accept
This paper proposes an optimistic algorithm for regret minimization of risk-sensitive measures in tabular episodic MDPs. It shows that this algorithm achieves a $\sqrt{S^3 A^2 K}\, L\, \mathrm{poly}(T)$ regret bound, where $K$ is the number of episodes, $T$ is the episode length and $L$ is the Lipschitz constant of the weighting function associated with the risk measure (e.g. CVaR). This is the first regret bound in this setting. The initial assessment by all reviewers was overall positive. They appreciated this first result in this new and important setting. However, there were also concerns, especially regarding the algorithmic contribution and the comparison to related work. The authors' response could largely address these issues. However, there are still some concerns regarding the tightness of the analysis and the relation to techniques in existing work. These are described in detail below and have been discussed among the reviewers, AC and SAC. All in all, the paper is recommended to be accepted because of its new guarantee in this new setting, motivating more research in this area that builds on this initial work. However, the authors are encouraged to further comment on the tightness of their analysis in the camera-ready version. **Tightness of the Analysis**: The presented regret bound exhibits an additional $S \sqrt{A} L$ factor compared to state-of-the-art regret bounds in the risk-neutral case. While a dependency on $L$ is expected in the risk-sensitive setting, it is not clear that these additional $S$ and $A$ dependencies are necessary. The paper does not discuss where exactly these factors come from in the analysis and does not provide any regret lower bounds for this novel setting. While the rebuttal revision added a discussion to the paper on the existence of these additional factors compared to existing results for the risk-neutral case, several reviewers and the AC found this to be not quite satisfactory.
The paper should discuss where these factors come from and why it would be difficult (if not impossible) to remove them. Based on the AC's reading of the paper, these dependencies appear because of two places: (1) $\epsilon_P^{(k)}$ has a $\sqrt{S}$ dependency which seems to be derived from an $\ell_1$ concentration bound. However, it is not clear that this is really necessary and one could instead derive them from concentration arguments on individual $P(s'|s,a)$ probabilities since they are all down-shifted individually. (2) The final display in the proof of Thm 4.1 (Lines 475 and following) seemingly applies a standard argument in regret analyses but gets a worse dependency of S and A (linear instead of under the square root, compare for example to Eq 4.8 in https://arxiv.org/pdf/1807.03765.pdf).
train
[ "cudiJ20hd17", "6zXj1k0-NQ", "O1oPTop6MYT", "jcmCq7oHLubp", "WZG0on2-8Nw", "9a8vspMv72L", "AmrEsju2Nrt", "9Grsc3Q_ho", "KvynHL5n6d", "x3s5EZImIMo", "RJ9j8OIlRsS", "jsHHP-WbE0h", "Hn5_n_IwR5Z", "65X0TOd6e0k" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Experiments.** Thanks for suggesting suitable baselines. We have run simulations on an instance of the standard frozen lake environment designed to have multiple paths with different risk-reward tradeoffs, using a CVaR objective. We have included some preliminary plots and discussion in Appendix E (highlighted ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "O1oPTop6MYT", "9Grsc3Q_ho", "KvynHL5n6d", "WZG0on2-8Nw", "AmrEsju2Nrt", "nips_2022_yJEUDfzsTX7", "65X0TOd6e0k", "Hn5_n_IwR5Z", "jsHHP-WbE0h", "RJ9j8OIlRsS", "nips_2022_yJEUDfzsTX7", "nips_2022_yJEUDfzsTX7", "nips_2022_yJEUDfzsTX7", "nips_2022_yJEUDfzsTX7" ]
nips_2022_s776AhRFm67
Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction $\beta \ll 1$ of the data distribution. Our proposed notion of barely robust learning requires robustness with respect to a ``larger'' perturbation set; which we show is necessary for strongly robust learning, and that weaker relaxations are not sufficient for strongly robust learning. Our results reveal a qualitative and quantitative equivalence between two seemingly unrelated problems: strongly robust learning and barely robust learning.
Accept
This paper makes important theoretical, conceptual, and algorithmic contributions to the adversarial robustness literature. I recommend the authors carefully go over reviewer suggestions on expositional clarity. Authors are also encouraged to explicitly discuss the limitations of the proposed approach. In particular, they can provide more discussion on the comparison between U(x) and U^{-1}U(x) and state the need to develop robust verification methods in the paper’s context.
train
[ "GYb_uZ0a6cu", "GKTDiQHePm5", "P4KRnt6tRhO", "DeB9jGBclaM", "YCeNkQPLL8l", "dsd_NIoJd_p", "v858_qKHW4", "tmr5QcbpjP3", "97Co7a7hVgi", "l1xT9rJ3W_Z", "6fc5Oo0XNCH", "PScJsiN12Er", "YP8dJgYulOa", "kzbKunQD27d" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I think the paper is qualified for acceptance and maintain my score. The experiments will be helpful for broad audience, so please at least add them into the appendix if the paper gets acceptance.", " Thank you for your valuable feedback, I think you have solved my questions, I thin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 4, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4, 5 ]
[ "v858_qKHW4", "dsd_NIoJd_p", "DeB9jGBclaM", "nips_2022_s776AhRFm67", "PScJsiN12Er", "kzbKunQD27d", "YP8dJgYulOa", "6fc5Oo0XNCH", "l1xT9rJ3W_Z", "nips_2022_s776AhRFm67", "nips_2022_s776AhRFm67", "nips_2022_s776AhRFm67", "nips_2022_s776AhRFm67", "nips_2022_s776AhRFm67" ]
nips_2022_zkk_7sV6gm8
Timing is Everything: Learning to Act Selectively with Costly Actions and Budgetary Constraints
Many real-world settings involve costs for performing actions; transaction costs in financial systems and fuel costs being common examples. In these settings, performing actions at each time step quickly accumulates costs leading to vastly suboptimal outcomes. Additionally, repeatedly acting produces wear and tear and ultimately, damage. Determining when to act is crucial for achieving successful outcomes and yet, the challenge of efficiently \textit{learning} to behave optimally when actions incur minimally bounded costs remains unresolved. In this paper, we introduce a reinforcement learning (RL) framework named Learnable Impulse Control Reinforcement Algorithm (LICRA), for learning to optimally select both when to act and which actions to take when actions incur costs. At the core of LICRA is a nested structure that combines RL and a form of policy known as \textit{impulse control} which learns to maximise objectives when actions incur costs. We prove that LICRA, which seamlessly adopts any RL method, converges to policies that optimally select when to perform actions and their optimal magnitudes. We then augment LICRA to handle problems in which the agent can perform at most $k<\infty$ actions and more generally, faces a budget constraint. We show LICRA learns the optimal value function and ensures budget constraints are satisfied almost surely. We demonstrate empirically LICRA's superior performance against benchmark RL methods in OpenAI gym's Lunar Lander and in Highway environments.
Reject
After careful consideration, I feel I must reject this paper. This paper essentially proposes a specific hierarchical structure onto the action space for a specific set of MDPs (which motivated the paper), where all actions apart from one (the 'do nothing' action) incur some cost. The proposed mechanism is a reasonable solution technique for such problems. But the fact that hierarchical methods which inject a form of domain knowledge for specific problems are somewhat better than methods that do not has been shown many times, and is unsurprising. The proposed method is, for instance, brittle to slight variations of the problem: what if some actions sometimes do or do not incur costs, for instance, rather than there being one clear such action? If you do nothing to a robot and it then falls off the table, does that count as an immediate cost or not? Also, the framing of the paper excludes the 'do nothing' action from the 'original' action space, which - as discussed with the reviewers - is a strange framing: why was a valid action, which apparently is often the optimal choice, excluded from the original action space? And, more importantly, what if we just add it back? I found the explanations of the authors for why to use the proposed mechanism rather than considering the flat action space to be ultimately unconvincing and, more importantly, rather domain-specific and brittle. One could argue that the proposed mechanism is sometimes a useful inductive bias, for some problems, which is not very surprising. Other action decompositions will be useful for other problems.
Ultimately, I think the paper should 1) make a better case for why and when this particular hierarchy is important and beneficial compared to just allowing the system to decide for itself when to take each action (including the 'do nothing' action), 2) should be more upfront about limitations (e.g., when is this particular inductive bias a bad idea, rather than beneficial?), and 3) position itself better in existing work (e.g., show more awareness about prior hierarchical approaches - I would argue this is not really a 'novel method', but instead a specific instance of a hierarchical RL system). I will therefore suggest to reject the paper so that the authors have an opportunity to use these comments to improve the paper.
train
[ "M3EBfY6kwLV", "_zrcaeCoMCN", "hqwA4ZkDOK", "fdEWUCStOeC", "ijxyq1BwwN0", "n4K2aPO3B_D", "HSbrbjwy9iS", "d9RaPV5Cwbx", "GiiVaxs5e08", "d05KuGRpn5B", "HD9K2AS1KO", "OHc4Kci_NZ", "2N4b8BpjUq_", "pOv3CagscUT", "4SxzZ9CJQ-", "FRvR-2di6_", "Ft7q7gOlJpa-", "HTxeDUQiaCU7", "PTpkCqUlYe",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " We thank the reviewer for their recent suggestion and oveall comments and lively discussions that have enabled us to greatly improve the paper (please see the very last upload). We are happy to add more finishing adjustments in the camera-ready should the paper be accepted.\n\nWe noted that all reviewers have agr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "_zrcaeCoMCN", "HSbrbjwy9iS", "fdEWUCStOeC", "n4K2aPO3B_D", "HSbrbjwy9iS", "d9RaPV5Cwbx", "GiiVaxs5e08", "d05KuGRpn5B", "OHc4Kci_NZ", "HD9K2AS1KO", "5XnvuxafN86", "2N4b8BpjUq_", "4SxzZ9CJQ-", "ea96bqkTqzV4", "FRvR-2di6_", "Ft7q7gOlJpa-", "S_ZErYJ6iru", "Kyw1UJ3lvzO", "nips_2022_z...
nips_2022_prKLyXwzIW
Robust Generalized Method of Moments: A Finite Sample Viewpoint
For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant $\epsilon$ fraction of adversarially corrupted samples, and that has an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$. To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as $10$ -- $15\%$ corruption, significantly improving upon baseline methods.
Accept
The authors propose new computationally efficient Generalized Method of Moments (GMM) estimators that are robust to a constant fraction of arbitrary outliers. The approach is based on modifications of existing algorithms such as SEVER and filtering. Although the analysis is not tight in all cases, this paper presents an interesting first step towards solving this general family of problems.
train
[ "nYoDc9NY13Z", "631mzSFT7pZ", "_QrH2MXYnd3", "yRks7V0kBi", "eTSaAmB2gPS", "fmVaT-eBCIu", "xqMO1Qxzy8j", "_Mi_cggFUAz", "esW_XRBp0CY" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification.", " We thank the reviewer for their time. To address the clarification questions:\n\n1. [Re. section 4] Yes, we intended that our references on line 178 for the Filter algorithm also applied to Lemmas 4.1 and 4.2. We'll be more explicit about that.\n\n2. [Re. Lemmas 5.1 and 5.2]...
[ -1, -1, -1, -1, -1, 5, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "631mzSFT7pZ", "esW_XRBp0CY", "_Mi_cggFUAz", "xqMO1Qxzy8j", "fmVaT-eBCIu", "nips_2022_prKLyXwzIW", "nips_2022_prKLyXwzIW", "nips_2022_prKLyXwzIW", "nips_2022_prKLyXwzIW" ]
nips_2022_ALIYCycCsTy
Improving Intrinsic Exploration with Language Abstractions
Reinforcement learning (RL) agents are particularly hard to train when rewards are sparse. One common solution is to use intrinsic rewards to encourage agents to explore their environment. However, recent intrinsic exploration methods often use state-based novelty measures which reward low-level exploration and may not scale to domains requiring more abstract skills. Instead, we explore natural language as a general medium for highlighting relevant abstractions in an environment. Unlike previous work, we evaluate whether language can improve over existing exploration methods by directly extending (and comparing to) competitive intrinsic exploration baselines: AMIGo (Campero et al., 2021) and NovelD (Zhang et al., 2021). These language-based variants outperform their non-linguistic forms by 47-85% across 13 challenging tasks from the MiniGrid and MiniHack environment suites.
Accept
This paper studies an interesting problem, and overall the reviewers agreed the exposition and validation are sufficient, although there are minor concerns about novelty. However, the work is clear and studies an interesting idea for the RL community. We encourage the authors to consider the issues raised by the reviewers and further improve the work in the final version.
train
[ "8FA-p-1-Ygm", "W88Fyfkav5", "_ebqb4IZ6Cc", "VcGEz7d_14", "KQKe_3Cqcsu", "ZVM5q7F5tBr", "ZQYbU-18PSv", "4M6Lz5L0PLM", "AcGgc5llViZ", "InibEPj94Hpy", "s6VptRYti5", "fE23Kj24rF", "4fpycji8Co9", "9tJDfmoi23L", "l9PillCwaLS", "hMGbtX1sgBL", "5-UFGz-E1by", "1a5G1vvucoD", "MAFHpbc1Ra",...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " Thank you for the response. \n\nPoint taken that a contribution of this paper is that language as a state-abstraction helps for exploration. I think this is expected behavior, but this is still evidence in that direction.\n\n\nRegarding whether L-Amigo can leverage the compositional semantics of language better, ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "9tJDfmoi23L", "MAFHpbc1Ra", "ljuTUBiKL3r", "ZVM5q7F5tBr", "nips_2022_ALIYCycCsTy", "AcGgc5llViZ", "4M6Lz5L0PLM", "s6VptRYti5", "InibEPj94Hpy", "s6VptRYti5", "fE23Kj24rF", "ljuTUBiKL3r", "nips_2022_ALIYCycCsTy", "l9PillCwaLS", "hMGbtX1sgBL", "eK4qJkDffII", "1a5G1vvucoD", "MAFHpbc1R...
nips_2022_X1oVDZIABwF
On the Global Convergence Rates of Decentralized Softmax Gradient Play in Markov Potential Games
Softmax policy gradient is a popular algorithm for policy optimization in single-agent reinforcement learning, particularly since projection is not needed for each gradient update. However, in multi-agent systems, the lack of central coordination introduces significant additional difficulties in the convergence analysis. Even for a stochastic game with identical interest, there can be multiple Nash Equilibria (NEs), which disables proof techniques that rely on the existence of a unique global optimum. Moreover, the softmax parameterization introduces non-NE policies with zero gradient, making it difficult for gradient-based algorithms in seeking NEs. In this paper, we study the finite time convergence of decentralized softmax gradient play in a special form of game, Markov Potential Games (MPGs), which includes the identical interest game as a special case. We investigate both gradient play and natural gradient play, with and without $\log$-barrier regularization. The established convergence rates for the unregularized cases contain a trajectory dependent constant that can be \emph{arbitrarily large}, whereas the $\log$-barrier regularization overcomes this drawback, with the cost of slightly worse dependence on other factors such as the action set size. An empirical study on an identical interest matrix game confirms the theoretical findings.
Accept
The reviewers appreciate the contribution of this paper to the theory of tabular MARL in MPGs, namely convergence rates of PG and NPG with and without a log-barrier regularizer. There were some concerns regarding clarity of the writing (in particular the textual descriptions of the theory and its implications) and of the contributions (and their distinction from existing results). There were also some concerns regarding the novelty of the proof techniques. Nevertheless, there's consensus that the paper makes important contributions. The authors have started to address some of the reviewer feedback in their revision, and are encouraged to continue to do so.
train
[ "ulYkn7Gru4W", "NcEJYMSyCoF", "O6NDHjajdd", "gQiPqizytUY", "WdANWxawbbA", "jRnqfmHDdW", "H1MCFNTg0Gp", "whsvOK_jkf-", "YN7XQKWAmi2", "qyFNGbFfQf", "asUyaf4CvC", "LUD1zXQvAT", "iQ0Rpi5buDz", "PDlPemtpofO", "3-xGZ9dbA_c" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThanks again for your effort and valuable feedback during the review process. We have made our response and revised the paper accordingly. Since the deadline for reviewer-author discussion is approaching, we would like to kindly remind you to please let us know if your concerns have been resolv...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "nips_2022_X1oVDZIABwF", "H1MCFNTg0Gp", "gQiPqizytUY", "3-xGZ9dbA_c", "jRnqfmHDdW", "PDlPemtpofO", "iQ0Rpi5buDz", "YN7XQKWAmi2", "qyFNGbFfQf", "LUD1zXQvAT", "nips_2022_X1oVDZIABwF", "nips_2022_X1oVDZIABwF", "nips_2022_X1oVDZIABwF", "nips_2022_X1oVDZIABwF", "nips_2022_X1oVDZIABwF" ]
nips_2022_szt95rn-ql
Single-phase deep learning in cortico-cortical networks
The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether the brain could adopt a similar strategy to correctly modify its synapses. Recent models have attempted to bridge this gap while being consistent with a range of experimental observations. However, these models are either unable to effectively backpropagate error signals across multiple layers or require a multi-phase learning process, neither of which is reminiscent of learning in the brain. Here, we introduce a new model, Bursting Cortico-Cortical Networks (BurstCCN), which solves these issues by integrating known properties of cortical networks, namely bursting activity, short-term plasticity (STP) and dendrite-targeting interneurons. BurstCCN relies on burst multiplexing via connection-type-specific STP to propagate backprop-like error signals within deep cortical networks. These error signals are encoded at distal dendrites and induce burst-dependent plasticity as a result of excitatory-inhibitory top-down inputs. First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). Overall, our results suggest that cortical features across sub-cellular, cellular, microcircuit and systems levels jointly underlie single-phase efficient deep learning in the brain.
Accept
This paper describes a biologically plausible model of credit assignment in neocortical microcircuits, building on previous work in this area that uses apical dendrites and burst multiplexing. This model expands on these ideas and incorporates additional biological information related to short-term plasticity and cell types to develop a model that can learn in a single phase, i.e. with no signal required to gate plasticity. The reviewers were very positive about this paper and agreed that it makes a novel, insightful, and important contribution to the biological credit assignment field. As such, an accept decision was unanimously reached.
train
[ "sFAXCG_fnUA", "IXWXmKoUWr", "oALkh6Q2JVU", "-o8D_w_OQf", "nwlTD8YhFjS", "SUoLn98SafO", "0e8UcZ4Wa4BI", "frD5R_EeLhe", "XeyAej8PtaV", "A04VHB1VyMt", "Nlat58cIlKX", "Tv97jDIEFF", "cGCvtzzmk4", "fJtm-0D9Ekr" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for all of your helpful comments.\n\n> The misalignment for smaller steps hypothesis seems reasonable: did the authors test it?\n\nWe have not tested it but this is something we are currently looking into and will add to the camera-ready version if we have conclusive results.\n\n> The point about the sc...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "-o8D_w_OQf", "nwlTD8YhFjS", "XeyAej8PtaV", "Tv97jDIEFF", "frD5R_EeLhe", "Tv97jDIEFF", "Tv97jDIEFF", "fJtm-0D9Ekr", "cGCvtzzmk4", "Nlat58cIlKX", "nips_2022_szt95rn-ql", "nips_2022_szt95rn-ql", "nips_2022_szt95rn-ql", "nips_2022_szt95rn-ql" ]
nips_2022_mhe2C2VWwCW
Predictive Querying for Autoregressive Neural Sequence Models
In reasoning about sequential events it is natural to pose probabilistic queries such as "when will event A occur next" or "what is the probability of A occurring before B", with applications in areas such as user modeling, medicine, and finance. However, with machine learning shifting towards neural autoregressive models such as RNNs and transformers, probabilistic querying has been largely restricted to simple cases such as next-event prediction. This is in part due to the fact that future querying involves marginalization over large path spaces, which is not straightforward to do efficiently in such models. In this paper we introduce a general typology for predictive queries in neural autoregressive sequence models and show that such queries can be systematically represented by sets of elementary building blocks. We leverage this typology to develop new query estimation methods based on beam search, importance sampling, and hybrids. Across four large-scale sequence datasets from different application domains, as well as for the GPT-2 language model, we demonstrate the ability to make query answering tractable for arbitrary queries in exponentially-large predictive path-spaces, and find clear differences in cost-accuracy tradeoffs between search and sampling methods.
Accept
The paper tackles an important question: how to answer probabilistic queries about future steps, such as “when will event A occur next”, in the context of autoregressive neural models that are widely used in many applications nowadays. While similar questions have been studied in stochastic processes, it has not received much attention in inference on autoregressive models. This paper formulates the problem, provides a framework, develops baselines, and proposes improved method based on importance sampling and beam search. All reviewers agree that the paper opens an interesting problem space and is technically solid. Therefore, I recommend acceptance.
train
[ "GQMNIiUXmRP", "IG_HP6kVq-k", "Qp4D2jsDtDC", "NvSoWuNTWT", "6wuKiYmRElU", "tX98GfZdx4d", "Ng5UivyN54p", "3bKOn6Cxlk-K", "lxrnbZzzMaN", "CI0_J1iZaL", "f1JYoo78VIm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response. The authors should consider adding their comments to the final version of the paper, especially other baselines. I raise my score to 6.", " I thank the authors for the clarification.\n\nW1. Thanks for the detailed explanations and for showing the examples queries. I believe thi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "NvSoWuNTWT", "3bKOn6Cxlk-K", "tX98GfZdx4d", "6wuKiYmRElU", "CI0_J1iZaL", "Ng5UivyN54p", "f1JYoo78VIm", "lxrnbZzzMaN", "nips_2022_mhe2C2VWwCW", "nips_2022_mhe2C2VWwCW", "nips_2022_mhe2C2VWwCW" ]
nips_2022_xvlaiSHgPrC
Composition Theorems for Interactive Differential Privacy
An interactive mechanism is an algorithm that stores a data set and answers adaptively chosen queries to it. The mechanism is called differentially private if no adversary can distinguish whether a specific individual is in the data set by interacting with the mechanism. We study composition properties of differential privacy under concurrent composition. In this setting, an adversary interacts with $k$ interactive mechanisms in parallel and can interleave its queries to the mechanisms arbitrarily. Previously, Vadhan and Wang [2021] proved an optimal concurrent composition theorem for pure differential privacy. We significantly generalize and extend their results. Namely, we prove optimal parallel composition properties for several major notions of differential privacy in the literature, including approximate DP, Renyi DP, and zero-concentrated DP. Our results demonstrate that the adversary gains no advantage by interleaving its queries to independently running mechanisms. Hence, interactivity is a feature that differential privacy grants us for free. Concurrently and independently of our work, Vadhan and Zhang [2022] proved an optimal concurrent composition theorem for f-DP [Dong et al., 2022], which implies our result for the approximate DP case.
Accept
The paper proves optimal parallel composition theorems for major notions of differential privacy (approximate DP, Renyi DP, and zero-concentrated DP). We believe that this is an interesting theoretical contribution to the DP literature. We encourage the authors to incorporate the comments from the reviewers to make it more interesting for the ML audience.
train
[ "HSdE6ae3Z-T", "Szn99cD647", "kMywsdnCdp6", "Qj8VFZizhL6", "GxZPfONQX3U", "kUKcsMZnSNi", "pUwR3hOtAvX", "5CMsGjbZco4", "WhOE6tAuhD", "EOjihsjfZjJ", "0_b2dJU8UXi", "kiRcplSi96N", "rnG_Cb4pB11", "SN32Kl3h--d", "Ib3MMqToGbn", "N1M7omlOsLH", "suLL0CQNQ1X", "Hyq3LUN5q5sX", "Neo28RWsJo...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", ...
[ " Yes, in our proof, one round of communication consists of both a query and an answer (we will make this point clear in the revision), and our proof can handle any finite rounds of communication. So, in each round, we can safely assume that the adversary speaks first (if the adversary doesn't have anything to say ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 3 ]
[ "Szn99cD647", "kMywsdnCdp6", "Qj8VFZizhL6", "GxZPfONQX3U", "kUKcsMZnSNi", "Hyq3LUN5q5sX", "0_b2dJU8UXi", "nips_2022_xvlaiSHgPrC", "0_b2dJU8UXi", "N1M7omlOsLH", "kiRcplSi96N", "rnG_Cb4pB11", "suLL0CQNQ1X", "Ib3MMqToGbn", "xE95r1f8YKs", "086DNOcxAp", "TfJZqIs0hP", "Neo28RWsJo5", "n...
nips_2022_6V4vRCbVA3J
Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems
In the stochastic contextual low-rank matrix bandit problem, the expected reward of an action is given by the inner product between the action's feature matrix and some fixed, but initially unknown $d_1$ by $d_2$ matrix $\Theta^*$ with rank $r \ll \min\{d_1, d_2\}$, and an agent sequentially takes actions based on past experience to maximize the cumulative reward. In this paper, we study the generalized low-rank matrix bandit problem, which has been recently proposed in \cite{lu2021low} under the Generalized Linear Model (GLM) framework. To overcome the computational infeasibility and theoretical restrictions of existing algorithms on this problem, we first propose the G-ESTT framework, which modifies the idea from \cite{jun2019bilinear} by using Stein's method for the subspace estimation and then leverages the estimated subspaces via a regularization idea. Furthermore, we remarkably improve the efficiency of G-ESTT by using a novel exclusion idea on the estimated subspace instead, and propose the G-ESTS framework. We also show that both of our methods are the first algorithms to achieve the optimal $\tilde{O}((d_1+d_2)r\sqrt{T})$ regret bound presented in \cite{lu2021low} up to logarithmic factors under some mild conditions, which improves upon the current regret of $\tilde{O}((d_1+d_2)^{3/2} \sqrt{rT})$~\citep{lu2021low}. For completeness, we conduct experiments to illustrate that our proposed algorithms, especially G-ESTS, are also computationally tractable and consistently outperform other state-of-the-art (generalized) linear matrix bandit methods on a suite of simulations.
Accept
This paper proposes computationally efficient algorithms for low-rank generalized linear bandits with regret and runtime guarantees that improve over state-of-the-art results. All reviewers think that the technical contributions of this paper are solid, especially the insight that one can directly apply Stein's method for accurate subspace estimation without matrix estimation. Accept. The authors are encouraged to include the additional experiments from the rebuttal in the final version.
train
[ "ppWiNpqogwO", "vp5zgckB5jI", "jcFb0sC5VyR", "LJ-lgaePuBc", "Hj6zmD8dgC6", "zkelKPNOyEG", "HoLDQkjdZK6", "vwweS0Er84", "_KrtofflWlE", "GW6wEXLArhE", "XOWuYeG8wia", "1aK2VjLuBCY", "5HW0-5lgSFK", "mPbunRm7TFu", "fKYAhNHla8", "-FCm4n0bzrN", "jDUdaZMMEV9", "rzeawkFlZkd", "rjetvKBsfh2...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Thanks for your patience and responses!\n\nNow I understand how your Stein's method helps to get a better regret bound, and I'm keep reading about the technical proof of new C.3, but thanks for your effort. I changed my point from 5 to 7.", " Thank you very much for the reply. Since we assume the parameter matr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "HoLDQkjdZK6", "jcFb0sC5VyR", "jDUdaZMMEV9", "Hj6zmD8dgC6", "mPbunRm7TFu", "vwweS0Er84", "vwweS0Er84", "_KrtofflWlE", "GW6wEXLArhE", "1aK2VjLuBCY", "4_i_lxJ86oH", "4_i_lxJ86oH", "jbJLqca3MHN", "jbJLqca3MHN", "rzeawkFlZkd", "rjetvKBsfh2", "rjetvKBsfh2", "nips_2022_6V4vRCbVA3J", "n...
nips_2022_TERVhuQVTe
GULP: a prediction-based metric between representations
Comparing the representations learned by different neural networks has recently emerged as a key tool to understand various architectures and ultimately optimize them. In this work, we introduce GULP, a family of distance measures between representations that is explicitly motivated by downstream predictive tasks. By construction, GULP provides uniform control over the difference in prediction performance between two representations, with respect to regularized linear prediction tasks. Moreover, it satisfies several desirable structural properties, such as the triangle inequality and invariance under orthogonal transformations, and thus lends itself to data embedding and visualization. We extensively evaluate GULP relative to other methods, and demonstrate that it correctly differentiates between architecture families, converges over the course of training, and captures generalization performance on downstream linear tasks.
Accept
All reviewers are in agreement that this is an interesting theoretical and empirical contribution and a useful tool in better understanding neural networks.
val
[ "XAfIEbwlSwX", "qeMNfPHEYO", "Ta84UIwpT", "K0_s8kSxIJK", "oO73l3wFFwe", "bzK87rwYjEv", "AVsxmjK3nnu", "1182GkrCSC1", "ksxBs3s-C9", "2P8yrc5I5K_", "DKDTD3jz-d", "__un6xUTVc4" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the detailed response. I have increased the score above.", " I express my thanks to the authors for replying to my comments. After reviewing the response, I think I will stay with my original assessment.\n\nI am mainly worried about its widespread impact and outreach to the...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "bzK87rwYjEv", "Ta84UIwpT", "K0_s8kSxIJK", "__un6xUTVc4", "DKDTD3jz-d", "2P8yrc5I5K_", "ksxBs3s-C9", "nips_2022_TERVhuQVTe", "nips_2022_TERVhuQVTe", "nips_2022_TERVhuQVTe", "nips_2022_TERVhuQVTe", "nips_2022_TERVhuQVTe" ]
nips_2022_FYGrMDwQyL
Online Allocation and Learning in the Presence of Strategic Agents
We study the problem of allocating $T$ sequentially arriving items among $n$ homogeneous agents under the constraint that each agent must receive a prespecified fraction of all items, with the objective of maximizing the agents' total valuation of items allocated to them. The agents' valuations for the item in each round are assumed to be i.i.d., but their distribution is a priori unknown to the central planner. Therefore, the central planner needs to implicitly learn these distributions from the observed values in order to pick a good allocation policy. However, an added challenge here is that the agents are strategic, with incentives to misreport their valuations in order to receive better allocations. This sets our work apart both from online auction mechanism design settings, which typically assume known valuation distributions and/or involve payments, and from online learning settings that do not consider strategic agents. To that end, our main contribution is an online learning based allocation mechanism that is approximately Bayesian incentive compatible and, when all agents are truthful, guarantees sublinear regret for individual agents' utility compared to that under the optimal offline allocation policy.
Accept
Executive summary: The authors study the repeated allocation of an identical good over T rounds to n strategic buyers in a "no monetary transfers" setting. The buyers have i.i.d. valuations drawn from an unknown distribution, and the algorithm must work with reported valuations. The goal is to maximize social welfare (= sum of valuations) under the constraint that each buyer receives a pre-specified fraction of the total number of goods. The main result is an algorithm for this problem that ensures two things: (a) approximate Bayesian incentive compatibility (approx-BIC) (Definition 2 and Theorem 1) and (b) low individual regret (Definition 3 and Theorem 2). The key idea of the algorithm is to exploit the i.i.d.-ness of the problem and detect misreports from the underlying CDF using Dvoretzky-Kiefer-Wolfowitz-type bounds. Discussion and recommendation: After some initial setback on the problem motivation, the reviewers bought into the motivation for studying this online allocation problem "without monetary transfers" (adding examples such as the food bank example might be good). There was some discussion around "assuming i.i.d. valuations" limiting the generality of the result, but there is in fact a history of papers that study learning with strategic agents under this assumption (e.g., Kanoria and Nazerzadeh 2021). The main difference of the current work is that it works in a setting without money. The idea behind the algorithm is maybe "the obvious thing to do", but of course it still requires some work to formally prove that it actually works. I think one thing that could strengthen the paper would be to add some discussion around the tightness/non-tightness of the approximate BIC and individual regret bounds. Weak accept.
train
[ "fzcoSkDJGlC", "IaJ6wHm2--s", "EMyqr21Wpni", "BaTzJagMpQB", "j3X8vPHgUs7", "yvttjeeqmfM", "260r6fL0G-9", "jGQyPrCR8Q4", "JjE72lkB0o" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response; I have read it, and it clarifies my questions. ", " I just want to let you know that I have read the response and appreciate the effort you made to answer my concerns.", " I thank the authors for their answer to the different reviews. I agree that this example gives the intuition that...
[ -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "yvttjeeqmfM", "j3X8vPHgUs7", "BaTzJagMpQB", "JjE72lkB0o", "jGQyPrCR8Q4", "260r6fL0G-9", "nips_2022_FYGrMDwQyL", "nips_2022_FYGrMDwQyL", "nips_2022_FYGrMDwQyL" ]
nips_2022__qsh1p43SIf
Reduced Representation of Deformation Fields for Effective Non-rigid Shape Matching
In this work we present a novel approach for computing correspondences between non-rigid objects, by exploiting a reduced representation of deformation fields. Different from existing works that represent deformation fields by training a general-purpose neural network, we advocate for an approximation based on mesh-free methods. By letting the network learn deformation parameters at a sparse set of positions in space (nodes), we reconstruct the continuous deformation field in a closed-form with guaranteed smoothness. With this reduction in degrees of freedom, we show significant improvement in terms of data-efficiency thus enabling limited supervision. Furthermore, our approximation provides direct access to first-order derivatives of deformation fields, which facilitates enforcing desirable regularization effectively. Our resulting model has high expressive power and is able to capture complex deformations. We illustrate its effectiveness through state-of-the-art results across multiple deformable shape matching benchmarks. Our code and data are publicly available at: https://github.com/Sentient07/DeformationBasis.
Accept
This paper presents a creative approach to shape registration that incorporates a new parameterization of the deformation field. All reviewers (and the AC) agree the results are convincing and that the method presents novel and interesting ideas. The only (borderline) negative review, by reviewer 86v4, seems to be well addressed by the rebuttal and new ablation experiments; although reviewer 86v4 did not engage during the rebuttal discussion, the AC checked these results and found them reasonable. Hence, a recommendation of "accept" is suitable here. The final version of the paper should be sure to incorporate any new experimental results that appear in the rebuttal discussion.
train
[ "ppJbvveiCjS", "yJIebKwqOat", "lrOgJ243KH4", "UBa5Xp4C4M3", "wOarUKC8xpX", "KW7pgoqJdTz", "QPC00AvOalE", "YCY8-U3MzWr", "pNZgQ0Uh_wr", "QSyi5OX5RWC", "0NJH5JT9ll", "6YxAVTezUtX", "nkHPZvzLwF0" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your suggestions. The definition of \"non-rigid objects\" could encompass a broader range of object categories. To address them, we are willing to make the necessary clarifications for the final version as follows,\n\n1) We will clarify through our text (Abstract, Intro, etc..) that generalizability...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "yJIebKwqOat", "wOarUKC8xpX", "KW7pgoqJdTz", "nips_2022__qsh1p43SIf", "6YxAVTezUtX", "0NJH5JT9ll", "QSyi5OX5RWC", "nkHPZvzLwF0", "nips_2022__qsh1p43SIf", "nips_2022__qsh1p43SIf", "nips_2022__qsh1p43SIf", "nips_2022__qsh1p43SIf", "nips_2022__qsh1p43SIf" ]
nips_2022_MPARWTuMiPh
Interpreting Operation Selection in Differentiable Architecture Search: A Perspective from Influence-Directed Explanations
The Differentiable ARchiTecture Search (DARTS) has dominated the neural architecture search community due to its search efficiency and simplicity. DARTS leverages continuous relaxation to convert the intractable operation selection problem into a continuous magnitude optimization problem that can be easily handled with gradient descent, but this poses an additional challenge in measuring operation importance or selecting an architecture from the optimized magnitudes. Vanilla DARTS assumes the optimized magnitudes reflect the importance of operations, while more recent works find this naive assumption leads to poor generalization and lacks theoretical guarantees. In this work, we leverage influence functions, the functional derivatives of the loss function, to theoretically analyze the operation selection part of DARTS and estimate candidate operation importance by approximating its influence on the supernet with Taylor expansions. We show the operation strength is related not only to the magnitude but also to second-order information, leading to a fundamentally new criterion for operation selection in DARTS, named Influential Magnitude. Empirical studies across different tasks on several search spaces show that vanilla DARTS and its variants can avoid most failures by leveraging the proposed theory-driven operation selection criterion.
Accept
The paper proposes a novel architecture selection method for differentiable NAS based on influence function calculation. During the discussions, reviewers still found several weaknesses of the paper: 1) the application of influence functions in NAS has limited novelty and requires a series of approximations; 2) the updated results from the authors show that the proposed method may not outperform existing approaches (DARTS-PT) in some cases. However, since architecture selection is an important but under-studied problem in NAS, and the proposed method is well-motivated, we still think this paper is a solid and novel attempt at improving NAS. We therefore decide to accept the paper.
train
[ "5PrUzN7wiKf", "vC_b-vYPM3n", "JuWLAPLM3Dr", "TEpGLyG_suK", "OiWkmQ_9OOl", "9dCWffYAxn", "uv5pC1KA23", "vsVQxQL7TBg", "7J4CVx9o72_", "vCdIwW39O0s", "13TM2wqexDR", "JFE3MEpPVH", "NRqv9BbovxT", "4-vaRMV1hRa", "OZ_1_k24PJm", "fNF00I3s7ba", "JFAJgdRFc6o" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for addressing my concerns. After reading the comments and response, I think this paper leverage an interesting approach, influence function, to explain the operation selection part in DARTS, with theoretic analysis and comprehensive experiments. Overall, I agree with Reviewer 3tjK and D82N that the paper ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 7, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "vCdIwW39O0s", "JuWLAPLM3Dr", "uv5pC1KA23", "4-vaRMV1hRa", "13TM2wqexDR", "nips_2022_MPARWTuMiPh", "vsVQxQL7TBg", "JFE3MEpPVH", "4-vaRMV1hRa", "JFAJgdRFc6o", "fNF00I3s7ba", "NRqv9BbovxT", "OZ_1_k24PJm", "nips_2022_MPARWTuMiPh", "nips_2022_MPARWTuMiPh", "nips_2022_MPARWTuMiPh", "nips_...
nips_2022_wt7cd9m2cz2
Learning single-index models with shallow neural networks
Single-index models are a class of functions given by an unknown univariate ``link'' function applied to an unknown one-dimensional projection of the input. These models are particularly relevant in high dimension, when the data might present low-dimensional structure that learning algorithms should adapt to. While several statistical aspects of this model, such as the sample complexity of recovering the relevant (one-dimensional) subspace, are well-understood, they rely on tailored algorithms that exploit the specific structure of the target function. In this work, we introduce a natural class of shallow neural networks and study its ability to learn single-index models via gradient flow. More precisely, we consider shallow networks in which biases of the neurons are frozen at random initialization. We show that the corresponding optimization landscape is benign, which in turn leads to generalization guarantees that match the near-optimal sample complexity of dedicated semi-parametric methods.
Accept
In this paper, the authors study the problem of learning a single-index model using a specific class of two-layer neural networks. Interestingly, at variance with previous works on single-index models, the target single-index model is not matched with the network architecture, and both layers are trained. One of the key results is that gradient flow can achieve a good approximation error in this setting. The consensus is that this theoretical paper constitutes a nice, well-written analysis of feature-learning dynamics, a topic of current interest in statistical learning theory. It definitely adds to the current literature on the separation between neural networks and the lazy regime (NTK, etc.), and on the benefits of overparametrization for optimization in non-convex landscapes (benign overfitting, etc.). All referees clearly supported acceptance.
train
[ "VqWpscPXpe", "oLr8khrdsup", "9dc8fNJWR5sK", "Ug7v4778FTO", "joHhUejt4uI-", "fT1x-MA9RqO", "zuC6OKE2YD1", "KIE_y5_68p", "D-1EGOwi8xF", "3I3VqHmGgTM", "fqMOFkZpwFn", "aTLHJBImPRS", "LBKbLKYC4rw", "YL-25Y5wARH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewers for their response and for engaging with my suggestions with their edits to the paper.\n\nRe: my first suggestion (about referring to $f_{\\star}$ as a teacher), I realize it is possible it came across as stronger than intended, so I'd like to stress that I believe you should stick to whatev...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 3, 4 ]
[ "zuC6OKE2YD1", "9dc8fNJWR5sK", "Ug7v4778FTO", "aTLHJBImPRS", "YL-25Y5wARH", "LBKbLKYC4rw", "aTLHJBImPRS", "fqMOFkZpwFn", "3I3VqHmGgTM", "nips_2022_wt7cd9m2cz2", "nips_2022_wt7cd9m2cz2", "nips_2022_wt7cd9m2cz2", "nips_2022_wt7cd9m2cz2", "nips_2022_wt7cd9m2cz2" ]
nips_2022_3nbKUphLBg5
Sequence Model Imitation Learning with Unobserved Contexts
We consider imitation learning problems where the learner's ability to mimic the expert increases throughout the course of an episode as more information is revealed. One example of this is when the expert has access to privileged information: while the learner might not be able to accurately reproduce expert behavior early on in an episode, by considering the entire history of states and actions, they might be able to eventually identify the hidden context and act as the expert would. We prove that on-policy imitation learning algorithms (with or without access to a queryable expert) are better equipped to handle these sorts of asymptotically realizable problems than off-policy methods. This is because on-policy algorithms provably learn to recover from their initially suboptimal actions, while off-policy methods treat their suboptimal past actions as though they came from the expert. This often manifests as a latching behavior: a naive repetition of past actions. We conduct experiments in a toy bandit domain that show that there exist sharp phase transitions of whether off-policy approaches are able to match expert performance asymptotically, in contrast to the uniformly good performance of on-policy approaches. We demonstrate that on several continuous control tasks, on-policy approaches are able to use history to identify the context while off-policy approaches actually perform worse when given access to history.
Accept
The paper proposes a new imitation learning setting where some context known by the expert is unobserved by the learner (both during learning and when exploiting the learnt policy). The value of this contribution has been acknowledged by the reviewers. Some reviewers raised questions about the novelty of this work compared to specific existing work; the authors provided fair responses that seemed convincing. Some reviewers were also concerned about the quality of the experimental evaluation, and the authors proposed a new experiment. It may not fully address the reviewers' concerns, but it seems that the value of the theoretical contribution, combined with these experiments, is enough for the paper to be considered for publication.
train
[ "Yud5gRbGEuS", "Dqte6Nr_jgv", "xWaGOkniVCI", "IqkwrJLrbYm", "YJo2uQDOlhc", "py7yHaZaL39", "X3nWu0hh4Z0", "tIuWieIE8_y", "mX-P5hF4td5", "AZd_7aoww2i" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello! As the author-reviewer discussion period is coming to a close, please let me know if any additional clarifications (in addition to those presented in our previous response) would be helpful in your evaluation.", " They differ in a few ways:\n\n1) The bounds we derive here are on the entire classes of offli...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "mX-P5hF4td5", "xWaGOkniVCI", "YJo2uQDOlhc", "tIuWieIE8_y", "tIuWieIE8_y", "AZd_7aoww2i", "mX-P5hF4td5", "nips_2022_3nbKUphLBg5", "nips_2022_3nbKUphLBg5", "nips_2022_3nbKUphLBg5" ]
nips_2022_XiwkvDTU10Y
Repairing Neural Networks by Leaving the Right Past Behind
Prediction failures of machine learning models often arise from deficiencies in training data, such as incorrect labels, outliers, and selection biases. However, such data points that are responsible for a given failure mode are generally not known a priori, let alone a mechanism for repairing the failure. This work draws on the Bayesian view of continual learning, and develops a generic framework for both identifying training examples which have given rise to the target failure, and fixing the model through erasing information about them. This framework naturally allows leveraging recent advances in continual learning to this new problem of model repairment, while subsuming the existing works on influence functions and data deletion as specific instances. Experimentally, the proposed approach outperforms the baselines for both identification of detrimental training data and fixing model failures in a generalisable manner.
Accept
This paper studies the setting where some data points are contaminated and, as a result, the learned model suffers from performance degradation. The authors extend an existing continual learning algorithm called Elastic Weight Consolidation and use it to identify and remove data points that are harmful to performance. The experiments confirm that the proposed methods can indeed identify the harmful data points and repair the learned model. All reviewers find this paper interesting and they are in agreement about accepting the paper. However, they also agree that the paper is hard to follow. I suggest the authors take the reviewers' suggestions into account and improve the presentation for the camera-ready version to make it easier for the community to benefit from and build on this paper.
train
[ "AqSPhQ-6ka2", "3ORMXxTMBts", "DMLlSWAesx7", "0K2dV5p3c1h", "455TXEY_svF", "3yYGAiLksy", "WNu5AoMeuZC", "aW72h1hkhcj", "afUkrCjf7BxN", "kgXPlgJ2k0", "39QN736Qv-W", "D8IWIm7TEbG", "Oq-RnV0mFjw", "RPN0yTgXyMi", "_9Hpv8G5IHW", "znSxSnQAuEL" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewers for taking a look at our rebuttal/revised submission and engaging in discussions where appropriate. Below we summarise the two main outcomes of the discussion phase: \n\n1. Technical concerns have been addressed in the rebuttal & the revised manuscript. All reviewers think tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "kgXPlgJ2k0", "455TXEY_svF", "WNu5AoMeuZC", "3yYGAiLksy", "aW72h1hkhcj", "D8IWIm7TEbG", "Oq-RnV0mFjw", "afUkrCjf7BxN", "39QN736Qv-W", "nips_2022_XiwkvDTU10Y", "znSxSnQAuEL", "RPN0yTgXyMi", "_9Hpv8G5IHW", "nips_2022_XiwkvDTU10Y", "nips_2022_XiwkvDTU10Y", "nips_2022_XiwkvDTU10Y" ]
nips_2022_6HFRBaPmp
Doubly-Asynchronous Value Iteration: Making Value Iteration Asynchronous in Actions
Value iteration (VI) is a foundational dynamic programming method, important for learning and planning in optimal control and reinforcement learning. VI proceeds in batches, where the update to the value of each state must be completed before the next batch of updates can begin. Completing a single batch is prohibitively expensive if the state space is large, rendering VI impractical for many applications. Asynchronous VI helps to address the large state space problem by updating one state at a time, in-place and in an arbitrary order. However, Asynchronous VI still requires a maximization over the entire action space, making it impractical for domains with large action space. To address this issue, we propose doubly-asynchronous value iteration (DAVI), a new algorithm that generalizes the idea of asynchrony from states to states and actions. More concretely, DAVI maximizes over a sampled subset of actions that can be of any user-defined size. This simple approach of using sampling to reduce computation maintains similarly appealing theoretical properties to VI without the need to wait for a full sweep through the entire action space in each update. In this paper, we show DAVI converges to the optimal value function with probability one, converges at a near-geometric rate with probability $1-\delta$, and returns a near-optimal policy in computation time that nearly matches a previously established bound for VI. We also empirically demonstrate DAVI's effectiveness in several experiments.
Accept
The paper proposes and analyzes a new asynchronous value iteration method, which extends asynchrony from states to states and actions. The reviewers have found the paper technically sound and well executed and they agree that it deserves publication. In preparing the final version of their paper, the authors are invited to consider the reviewers' comments.
train
[ "SDGX9vAKo44", "SRkkW4W4f8j", "GjYyiPbIJCZ", "Jj-kdlPd9gl" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their positive feedback and suggestions for future directions. We are gratified that all the reviewers found our work novel and significant, with clear writing, technically precise theory, and thorough analysis. Reviewer 592T pointed out that DAVI “generalizes the idea of asynchrony in ...
[ -1, 8, 6, 6 ]
[ -1, 1, 4, 3 ]
[ "nips_2022_6HFRBaPmp", "nips_2022_6HFRBaPmp", "nips_2022_6HFRBaPmp", "nips_2022_6HFRBaPmp" ]
nips_2022_3PAIKtWQsc
Learning Two-Player Markov Games: Neural Function Approximation and Correlated Equilibrium
We consider learning Nash equilibria in two-player zero-sum Markov Games with nonlinear function approximation, where the action-value function is approximated by a function in a Reproducing Kernel Hilbert Space (RKHS). The key challenge is how to do exploration in the high-dimensional function space. We propose a novel online learning algorithm to find a Nash equilibrium by minimizing the duality gap. At the core of our algorithms are upper and lower confidence bounds that are derived based on the principle of optimism in the face of uncertainty. We prove that our algorithm is able to attain an $O(\sqrt{T})$ regret with polynomial computational complexity, under very mild assumptions on the reward function and the underlying dynamic of the Markov Games. We also propose several extensions of our algorithm, including an algorithm with Bernstein-type bonus that can achieve a tighter regret bound, and another algorithm for model misspecification that can be applied to neural network function approximation.
Accept
This paper considers the problem of online reinforcement learning in two-player zero-sum Markov games. They consider a class of Markov games called kernel mixture Markov games, which extend the linear mixture MDP setting considered in prior work to the RKHS setting. The main result is to propose an algorithm called KernelCCE-VTR, which uses the principle of value-targeted regression to achieve regret bounds that scale with the effective dimension of the kernel. The authors also provide an improved variant of the algorithm based on Bernstein-type confidence bonuses. The reviewers found the setting to be important and found the paper to be well-written, and agreed that the main results are technically challenging and likely to be a useful starting point for future work on learning Markov games with function approximation. The main issue raised by the reviewers is that the algorithm design and analysis appears to be based on a combination of well-known existing techniques for simpler settings---for the final revision, the paper can be strengthened by more strongly advocating for the novelty required in combining these techniques.
train
[ "4eGQaeZSJGE", "_RCMAR2_yYS", "jx0yjHhT0AG", "JNz8fVFiolG", "GwenlbCJjut", "KFedOqOeFxIU", "l7sqZA4CS69", "t5mL5TfNr6x", "83JQuCALj9G", "CtxPbTbe5EkZ", "5WsuKtPLsQ", "Yw-QZ8EqwKm", "5XXhjTVIfnQ", "FDObgPIwfaD", "Pk0u7C34DEr", "DmYeI1oZCSz", "dg7H0dSX6G7" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for adding this detail to the appendix. I maintain my score.", " We're glad that we have resolved your questions and concerns.", " Thanks for your response! I read the given references and agree that setting the value of $\\bar{Q}_H(x,a,b)$ as $r_H(x,a,b)$ has also been applied in other literatures....
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "GwenlbCJjut", "jx0yjHhT0AG", "JNz8fVFiolG", "l7sqZA4CS69", "KFedOqOeFxIU", "5WsuKtPLsQ", "t5mL5TfNr6x", "83JQuCALj9G", "CtxPbTbe5EkZ", "dg7H0dSX6G7", "DmYeI1oZCSz", "Pk0u7C34DEr", "FDObgPIwfaD", "nips_2022_3PAIKtWQsc", "nips_2022_3PAIKtWQsc", "nips_2022_3PAIKtWQsc", "nips_2022_3PAIK...
nips_2022_6NTFiNpQJ6
Differentially Private Linear Sketches: Efficient Implementations and Applications
Linear sketches have been widely adopted to process fast data streams, and they can be used to accurately answer frequency estimation, approximate top K items, and summarize data distributions. When data are sensitive, it is desirable to provide privacy guarantees for linear sketches to preserve private information while delivering useful results with theoretical bounds. We show that linear sketches can ensure privacy and maintain their unique properties with a small amount of noise added at initialization. From the differentially private linear sketches, we showcase that the state-of-the-art quantile sketch in the turnstile model can also be private and maintain high performance. Experiments further demonstrate that our proposed differentially private sketches are quantitatively and qualitatively similar to noise-free sketches with high utility on synthetic and real datasets.
Accept
This work constructs differentially private linear sketches for frequency estimation and related problems. It shows that count min/median sketch can be made DP by adding noise. This is an interesting contribution. The reviewers had concerns about the novelty of the paper. The (updated version of the) paper does a reasonable analysis of the algorithm. The authors point out in the rebuttal how their bounds are incomparable to a concurrent work. I am not convinced that the incomparable parts are interesting in the sense that closeness to the non-private count sketch does not seem like a useful goal in itself, and the target of interest is always the original frequencies. The authors do an empirical analysis and show that the algorithm works well under reasonable privacy budgets. Overall, I think this paper may be a reasonable one to accept.
train
[ "wti3xz6wsm3", "g2umbRFezc", "4BRT9Cr3L8-", "cHqTTe3tzn6", "asp5yXZjjQc", "RkxO7yvOZdwu", "dpxHrDpohhv", "DkljMH6yZRH", "nW_3pK5J5JB", "K-WjGG3orSM" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our revision and rebuttal.\n\nWe agree that DP is obtained via standard techniques, e.g., the Gaussian mechanism. We wouldn’t necessarily think this is a limitation. Simplicity is often greatly valued, and it is a *zen* to create algorithms using exclusively Gaussian mechanisms so one can ge...
[ -1, -1, -1, -1, -1, -1, 4, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 4, 3 ]
[ "g2umbRFezc", "RkxO7yvOZdwu", "K-WjGG3orSM", "nW_3pK5J5JB", "DkljMH6yZRH", "dpxHrDpohhv", "nips_2022_6NTFiNpQJ6", "nips_2022_6NTFiNpQJ6", "nips_2022_6NTFiNpQJ6", "nips_2022_6NTFiNpQJ6" ]
nips_2022_g-H3oNARs2
On the role of overparameterization in off-policy Temporal Difference learning with linear function approximation
Much of the recent success of deep learning can be attributed to scaling up the size of the networks to the point where they are often vastly overparameterized. Thus, understanding the role of overparameterization is of increasing importance. While predictive theories have been developed for supervised learning, little is known about the Reinforcement Learning case. In this work, we take a theoretical approach and study the role of overparameterization for off-policy Temporal Difference (TD) learning in the linear setting. We leverage tools from Random Matrix Theory and random graph theory to obtain a characterization of the spectrum of the TD operator. We use this result to study the stability and optimization dynamics of TD learning as a function of the number of parameters.
Accept
The paper studies policy evaluation with linear TD(0) learning in the over-parameterized regime. Using random matrix and random graph theory, the paper characterizes the spectrum of the TD operator and uses this to show that TD learning exhibits a double-descent phenomenon. The reviewers found this technical paper to be clear and well-written. They also appreciated the novel analysis, which extends results on learning in the important over-parameterized regime from supervised learning to reinforcement learning. However, there were also concerns about the limiting assumptions, in particular the asymptotically rank 1 transition matrix. The authors' response provided further discussion and motivation for this choice which the reviewers found convincing. Overall, while this paper does make strong assumptions, it also provides good insights and novel results that are likely to spark further research in this area. As a result, it is recommended to be accepted.
train
[ "s_R7UxLjIhQ", "clh9hhChg4H", "cYvMWinNh0c", "J8N__kHQ9n", "2yMWozR-xm8", "8LMRH00MFWu", "jsOFDY6V7Dh", "3e7blll3uRw", "zAiKmpq_coK", "BEHdllGkkX3", "uKBFxuaERUy", "cnG-PBoqatg" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to come back on a point raised by reviewer EBEx concerning the strength of the assumption: **random feature vs our random graph model and explain why the latter is actually not in itself a stronger assumption, quite the contrary.**\n\nThe random feature assumption (very prevalent and well studied) w...
[ -1, -1, -1, -1, -1, -1, -1, -1, 9, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2, 2 ]
[ "3e7blll3uRw", "3e7blll3uRw", "2yMWozR-xm8", "zAiKmpq_coK", "cnG-PBoqatg", "uKBFxuaERUy", "BEHdllGkkX3", "nips_2022_g-H3oNARs2", "nips_2022_g-H3oNARs2", "nips_2022_g-H3oNARs2", "nips_2022_g-H3oNARs2", "nips_2022_g-H3oNARs2" ]
nips_2022_M3WW7TqoMvc
Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement
Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points. Any inference procedure that is too computationally expensive to be run on the full posterior can instead be run inexpensively on the coreset, with results that approximate those on the full data. However, current approaches are limited by either a significant run-time or the need for the user to specify a low-cost approximation to the full posterior. We propose a Bayesian coreset construction algorithm that first selects a uniformly random subset of data, and then optimizes the weights using a novel quasi-Newton method. Our algorithm is a simple to implement, black-box method, that does not require the user to specify a low-cost posterior approximation. It is the first to come with a general high-probability bound on the KL divergence of the output coreset posterior. Experiments demonstrate that our method provides significant improvements in coreset quality against alternatives with comparable construction times, with far less storage cost and user input required.
Accept
The paper has generated unanimous enthusiasm and we are happy to recommend acceptance. Please make sure that all comments in the reviews/discussion threads are taken into account in the final version of the manuscript.
train
[ "UdHoL6x5sej", "F7NFBCRghT8", "0yqvgV1zfuB", "fVmsjLcosX6", "OpcK-lMbJW", "q8AVHuhf6W", "dPEw0UnOTE", "na_idL_wYCj", "2JEgMq2wk3Y", "Sp26NQZ37aa", "ALzIbPqIF6", "ua5qmD6VniH", "Ndwg8ig4ekX", "xjsskwtOqSY", "mXjJ_nLrxjc", "cvcMZVvQxv", "wcBj6N_v3y", "m8pQBwOn6C", "HvsCI1xLnZC", ...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thank you very much for your reply, and considering our new results. We are pleased to hear that you found these convincing - thank you for helping us to improve our work by including them.", " I thank the authors for their detailed response and for the extra work they put into performing the ablation study. I ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "F7NFBCRghT8", "wcBj6N_v3y", "q8AVHuhf6W", "dPEw0UnOTE", "na_idL_wYCj", "mXjJ_nLrxjc", "ua5qmD6VniH", "ALzIbPqIF6", "Ajqq-WHI-Ad", "Ajqq-WHI-Ad", "Ajqq-WHI-Ad", "NHJfVuTkGoT", "NHJfVuTkGoT", "HvsCI1xLnZC", "HvsCI1xLnZC", "m8pQBwOn6C", "m8pQBwOn6C", "nips_2022_M3WW7TqoMvc", "nips_...
nips_2022_x0LCDsbJ5JF
Learning Spatially-Adaptive Squeeze-Excitation Networks for Image Synthesis and Image Recognition
Learning light-weight yet expressive deep networks in both image synthesis and image recognition remains a challenging problem. Inspired by a more recent observation that it is the data-specificity that makes the multi-head self-attention (MHSA) in the Transformer model so powerful, this paper proposes to extend the widely adopted light-weight Squeeze-Excitation (SE) module to be spatially-adaptive to reinforce its data specificity, as a convolutional alternative of the MHSA, while retaining the efficiency of SE and the inductive bias of convolution. It presents two designs of spatially-adaptive squeeze-excitation (SASE) modules for image synthesis and image recognition respectively. For image synthesis tasks, the proposed SASE is tested in both low-shot and one-shot learning tasks. It shows better performance than prior art. For image recognition tasks, the proposed SASE is used as a drop-in replacement for convolution layers in ResNets and achieves much better accuracy than the vanilla ResNets, and slightly better than the MHSA counterparts such as the Swin-Transformer and Pyramid-Transformer in the ImageNet-1000 dataset, with significantly smaller models.
Reject
This work proposes to extend the widely adopted light-weight Squeeze-Excitation (SE) module to be spatially-adaptive to reinforce its data specificity while retaining the efficiency of SE and the inductive bias of convolution. All reviewers have recognized the merit of the work, but raised critical concerns about the presentation, insufficient justification of the methods, and ethical issues. Specifically, the paper does not well explain why the proposed modules would necessarily perform well. This, however, is important for a NeurIPS submission. Moreover, the risks of the method are not properly discussed while the paper contains statements like "The proposed SASE module does not show any potential negative impacts with its current form", which is somewhat misleading. Last, the experiments may need to be carefully reorganized, and more visualization results would be helpful for understanding the proposed methods.
train
[ "AgbBSpZ3oGt", "bI40d-JhHU", "acRqt7mnkj", "uhHh6IYTgv5", "pm33yWJj4X", "s1A7On5dFnG", "OlLDJBPHFab", "zNPw-CsmJ5D", "258aIap4m4c", "4Y126jSgcLu", "MvbZFwHaCDH", "ViNksnnBOYy", "bafJNTmgWd9", "y8cRCEuDKvE" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the further comments. \n\n**Comments**: Authors do not explain well what the essential difference between the mentioned attention and modulation modules and the proposed one is. Why does the SASE module perform better than other modules? Authors are suggested to give corresponding explanat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 4, 4 ]
[ "bI40d-JhHU", "s1A7On5dFnG", "nips_2022_x0LCDsbJ5JF", "nips_2022_x0LCDsbJ5JF", "y8cRCEuDKvE", "bafJNTmgWd9", "ViNksnnBOYy", "MvbZFwHaCDH", "4Y126jSgcLu", "nips_2022_x0LCDsbJ5JF", "nips_2022_x0LCDsbJ5JF", "nips_2022_x0LCDsbJ5JF", "nips_2022_x0LCDsbJ5JF", "nips_2022_x0LCDsbJ5JF" ]
nips_2022_hPVXHzzK0z
Single-Stage Visual Relationship Learning using Conditional Queries
Research in scene graph generation (SGG) usually considers two-stage models, that is, detecting a set of entities, followed by combining them and labeling all possible relationships. While showing promising results, the pipeline structure induces large parameter and computation overhead, and typically hinders end-to-end optimizations. To address this, recent research attempts to train single-stage models that are more computationally efficient. With the advent of DETR, a set-based detection model, one-stage models attempt to predict a set of subject-predicate-object triplets directly in a single shot. However, SGG is inherently a multi-task learning problem that requires modeling entity and predicate distributions simultaneously. In this paper, we propose Transformers with conditional queries for SGG, namely, TraCQ with a new formulation for SGG that avoids the multi-task learning problem and the combinatorial entity pair distribution. We employ a DETR-based encoder-decoder design and leverage conditional queries to significantly reduce the entity label space as well, which leads to 20% fewer parameters compared to state-of-the-art one-stage models. Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods, it also beats state-of-the-art two-stage methods on the Visual Genome dataset, yet is capable of end-to-end training and faster inference.
Accept
Paper was reviewed by five reviewers and received: 3 x Borderline Accept, 1 x Borderline Reject and 1 x Weak Accept. Generally, the reviewers thought that the paper was interesting and had merits. Raised issues revolved around (1) lack of clarity in certain parts of the exposition; (2) evaluations and ablations that could have been made stronger; and (3) the role of PnP DETR, which isn't a contribution, towards improved performance. Additional reservations dealt with (4) claims that learning in the non-combinatorial predicate space is easier than in the entity pair space, and (5) fairness of comparisons with respect to model capacity and other factors. Authors have provided a compelling rebuttal and this has alleviated many of the reviewer concerns, at least to an extent. Post rebuttal, [ZNFu] remains concerned that generating more predictions may be what is causing improved performance and points out that certain ablations are still missing and could improve the paper. At the same time, [ZNFu], while remaining at Borderline Reject, acknowledges that the rebuttal has resolved some of the issues in the original review. AC has carefully considered the reviews, the rebuttal, and the paper itself. This appears to be a rather borderline case; however, considering that the overall sentiment of reviewers is positive and that the rebuttal has convincingly addressed an important fraction of the concerns raised by [ZNFu] and others, even if not all, it is the AC's decision that acceptance of the paper is warranted. Authors are very strongly encouraged to add the ablations, as well as make the corrections suggested by reviewers, for the camera ready.
val
[ "gj-12TmD6n1", "3msXHtPwiSx", "GcI-g6IBLtU", "nXV1m7Vf1hr", "KSFH2DC8UmQ", "Oh50208eoci", "hP75DwaIlVy", "lpMSzGzGmdc", "tMG7MgA9vSu", "-wV-nd5n_-v", "NgqglTE4tyv", "yNzLD5X8S8w", "LtUfZuOVZs", "0ZLljybolJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have addressed some of my initial concerns. Therefore, I'm changing my rating based on the rebuttal. I still, however, have some concerns. \n\n1. I disagree with the authors claim that generating more predictions doesn't lead to improved performance. The authors are correct when they say that mR@K and...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3, 4 ]
[ "KSFH2DC8UmQ", "0ZLljybolJ", "0ZLljybolJ", "LtUfZuOVZs", "yNzLD5X8S8w", "NgqglTE4tyv", "-wV-nd5n_-v", "tMG7MgA9vSu", "nips_2022_hPVXHzzK0z", "nips_2022_hPVXHzzK0z", "nips_2022_hPVXHzzK0z", "nips_2022_hPVXHzzK0z", "nips_2022_hPVXHzzK0z", "nips_2022_hPVXHzzK0z" ]
nips_2022_6FkSHynJr1
Probable Domain Generalization via Quantile Risk Minimization
Domain generalization (DG) seeks predictors which perform well on unseen test distributions by leveraging data drawn from multiple related training distributions or domains. To achieve this, DG is commonly formulated as an average- or worst-case problem over the set of possible domains. However, predictors that perform well on average lack robustness while predictors that perform well in the worst case tend to be overly-conservative. To address this, we propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability. Our key idea is that distribution shifts seen during training should inform us of probable shifts at test time, which we realize by explicitly relating training and test domains as draws from the same underlying meta-distribution. To achieve probable DG, we propose a new optimization problem called Quantile Risk Minimization (QRM). By minimizing the $\alpha$-quantile of the predictor's risk distribution over domains, QRM seeks predictors that perform well with probability $\alpha$. To solve QRM in practice, we propose the Empirical QRM (EQRM) algorithm, and prove: (i) a generalization bound for EQRM; and (ii) that EQRM recovers the causal predictor as $\alpha \to 1$. In our experiments, we introduce a more holistic quantile-focused evaluation protocol for DG, and demonstrate that EQRM outperforms state-of-the-art baselines on CMNIST and several datasets from WILDS and DomainBed.
Accept
The authors have convinced the reviewers of the merits of the paper after extensive and detailed rebuttal and discussion. The reviewers have clearly converged towards acceptance.
train
[ "IrhR3EDN4I_", "qtcvtVJVuy4", "mTHf9cyB6P9", "M6PaSGyrTfa", "5GtcfT7mlaw", "NzMsN6lB1I", "qMbewTOBbrB", "IRrcYuKej6Q", "KTmTzek-CXx", "g9pwRpEDikr", "mQAaVzBUD8Yx", "oh2hQldLR2O", "lRd9vo-TLRb", "mlLWECeSmWs", "2JPLgGCE9gf", "ho_Cmb9esqQe", "c4WVdIRW2E", "SKcXkv5tNQ", "LGm5tPl7rT...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_...
[ " I would like to thank the authors for the detailed rebuttal. Most of my concerns are addressed. I'm happy to increase my rating.", " _\"As for the rebuttal on sufficient domains and samples, it seems there are inconsistencies between the theory and practice.\"_\n\n* **Gaps between theory and practice.** Firstl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "qtcvtVJVuy4", "mTHf9cyB6P9", "M6PaSGyrTfa", "LGm5tPl7rT5", "LGm5tPl7rT5", "SKcXkv5tNQ", "IRrcYuKej6Q", "oh2hQldLR2O", "g9pwRpEDikr", "mQAaVzBUD8Yx", "LGm5tPl7rT5", "ea53C3JVSCG", "2JPLgGCE9gf", "aR5ptTja_11", "ho_Cmb9esqQe", "SKcXkv5tNQ", "nips_2022_6FkSHynJr1", "nips_2022_6FkSHyn...
nips_2022_1_gypPuWUC3
Teacher Forcing Recovers Reward Functions for Text Generation
Reinforcement learning (RL) has been widely used in text generation to alleviate the exposure bias issue or to utilize non-parallel datasets. The reward function plays an important role in making RL training successful. However, previous reward functions are typically task-specific and sparse, restricting the use of RL. In our work, we propose a task-agnostic approach that derives a step-wise reward function directly from a model trained with teacher forcing. We additionally propose a simple modification to stabilize the RL training on non-parallel datasets with our induced reward function. Empirical results show that our method outperforms self-training and reward regression methods on several text generation tasks, confirming the effectiveness of our reward function.
Accept
This paper proposes a method to design a reward function from a pre-trained language model, and uses it to train a text generation model using reinforcement learning. The approach is novel and the paper presents both theoretical derivations (drawing connections between maxent IRL and the supervised teacher forcing loss) and empirical results to demonstrate the effectiveness of the proposed algorithm, offering new insights for a challenging problem.
train
[ "tC1ASEaDTT_", "PXI5j0Q2wo1", "tbjXeDKIr61", "U6l6FqrO6g", "vCDuhfd68k3", "h7KIQv8ctZ", "WshRDKLM0TY", "iqvOUuWEf_5", "D0Dx4o0SeEP", "-ywrBvvvKqH", "eVyqDwduHJGU", "oDuEhDYO7A5", "tpiewnJ7pgt", "kOnzb4s3cK5", "rgxPArC9a6Q2", "s_9LaMXmJ3", "9DBiWhtQkn", "r7jrckuT75T", "osjVdAkr7BR...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " We thank all reviewers for the insightful discussion and suggestions. We really appreciate your time and efforts!", " Thank you for the detailed response! I will update my score to 7.", " The reviewer raised a few concerns mostly due to misunderstandings. They were clarified in the author response (our origin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "nips_2022_1_gypPuWUC3", "kOnzb4s3cK5", "Q6V_rvKBra", "npEK9Kw2vQK", "osjVdAkr7BR", "WshRDKLM0TY", "iqvOUuWEf_5", "oDuEhDYO7A5", "nips_2022_1_gypPuWUC3", "npEK9Kw2vQK", "Q6V_rvKBra", "tpiewnJ7pgt", "HqcwkTBTln", "rgxPArC9a6Q2", "s_9LaMXmJ3", "9DBiWhtQkn", "npEK9Kw2vQK", "osjVdAkr7B...