paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_YRDXX4IIA9 | Local Bayesian optimization via maximizing probability of descent | Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space. For objective functions whose gradient cannot be evaluated directly, Bayesian optimization offers one solution -- we construct a probabilistic model of the objective, design a policy to learn about the gradient at the current location, and use the resulting information to navigate the objective landscape. Previous work has realized this scheme by minimizing the variance in the estimate of the gradient, then moving in the direction of the expected gradient. In this paper, we re-examine and refine this approach. We demonstrate that, surprisingly, the expected value of the gradient is not always the direction maximizing the probability of descent, and in fact, these directions may be nearly orthogonal. This observation then inspires an elegant optimization scheme seeking to maximize the probability of descent while moving in the direction of most-probable descent. Experiments on both synthetic and real-world objectives show that our method outperforms previous realizations of this optimization scheme and is competitive against other, significantly more complicated baselines. | Accept | All reviewers are positive and agree that the paper should be accepted. The primary question raised about the poor performance on Hopper was adequately addressed by the author response. Please integrate the changes and reviewer suggestions around clarity into the final paper. | train | [
"BAEFmxt4gqN",
"yTeeIYbcXTRj",
"G341fuUFFwy",
"ktCNw_-vvza",
"qwArs5t5ZvO",
"xzVoKOXYRlS",
"oWYJCrIlhsi",
"T4d6ZgGMdXH3",
"wxQXaH-Sk1zt",
"nR_hVvyQGIB",
"UZ9en-_qSsF",
"P5aKOSWjqrm",
"IXBSnk3s9Ng",
"H6BvsC5W8Iz",
"rJLnslwJr-M"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their patience.\n\nWe further our discussion on the Hopper experiments here. We earlier argued that the online state normalization scheme employed by Müller et al. might have unintended interactions with both GIBO and MPD. Specifically, systematic differences in the behavior of the algo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"UZ9en-_qSsF",
"xzVoKOXYRlS",
"wxQXaH-Sk1zt",
"oWYJCrIlhsi",
"T4d6ZgGMdXH3",
"rJLnslwJr-M",
"H6BvsC5W8Iz",
"IXBSnk3s9Ng",
"P5aKOSWjqrm",
"UZ9en-_qSsF",
"nips_2022_YRDXX4IIA9",
"nips_2022_YRDXX4IIA9",
"nips_2022_YRDXX4IIA9",
"nips_2022_YRDXX4IIA9",
"nips_2022_YRDXX4IIA9"
] |
nips_2022_cxZEBQFDoFK | A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases | Learned optimizers---neural networks that are trained to act as optimizers---have the potential to dramatically accelerate training of machine learning models. However, even when meta-trained across thousands of tasks at huge computational expense, blackbox learned optimizers often struggle with stability and generalization when applied to tasks unlike those in their meta-training set. In this paper, we use tools from dynamical systems to investigate the inductive biases and stability properties of optimization algorithms, and apply the resulting insights to designing inductive biases for blackbox optimizers. Our investigation begins with a noisy quadratic model, where we characterize conditions in which optimization is stable, in terms of eigenvalues of the training dynamics. We then introduce simple modifications to a learned optimizer's architecture and meta-training procedure which lead to improved stability, and improve the optimizer's inductive bias. We apply the resulting learned optimizer to a variety of neural network training tasks, where it outperforms the current state of the art learned optimizer---at matched optimizer computational overhead---with regard to optimization performance and meta-training speed, and is capable of generalization to tasks far different from those it was meta-trained on. | Accept | All 4 knowledgeable reviewers recommended acceptance of the paper (2x accept, 1x weak accept, 1x borderline accept), appreciating the importance of the studied problem, the first principles approach and the obtained theoretical and empirical results. I mainly agree and recommend acceptance of the paper. Still, I ask the authors to carefully consider the reviewers' comments when preparing the final version of the paper and in particular improve the presentation in line with the suggestions. 
Also, some of the raised points on limitations should be included in a revised discussion. | val | [
"GFxgIO5MK2c",
"_EwQmDEl0An",
"pzpxp_8Baa",
"0oX9_4fXqG6",
"TrcKMrM4vDa",
"_xSq7ShzOCh",
"t2bBhRPo5G",
"__enIgB2nx_",
"SjBG62-z5BT",
"yUTD9DNxcW9",
"edqwpxq9cL5"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors response. Thanks for the clarifications! I will keep the initial score!",
" We have now posted a revised version of the paper. We were not able to increase the discussion of the STAR optimizer in Section 5 in this revised draft due to space limitations. However, if the page limit is 10 ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"TrcKMrM4vDa",
"t2bBhRPo5G",
"edqwpxq9cL5",
"yUTD9DNxcW9",
"SjBG62-z5BT",
"__enIgB2nx_",
"nips_2022_cxZEBQFDoFK",
"nips_2022_cxZEBQFDoFK",
"nips_2022_cxZEBQFDoFK",
"nips_2022_cxZEBQFDoFK",
"nips_2022_cxZEBQFDoFK"
] |
nips_2022_8gUjpEsLCU | Empirical Gateaux Derivatives for Causal Inference | We study a constructive procedure that approximates Gateaux derivatives for statistical functionals by finite-differencing, with attention to causal inference functionals. We focus on the case where probability distributions are not known a priori but need also to be estimated from data, leading to empirical Gateaux derivatives, and study relationships between empirical, numerical, and analytical Gateaux derivatives. Starting with a case study of counterfactual mean estimation, we verify the exact relationship between finite-differences and the analytical Gateaux derivative. We then derive requirements on the rates of numerical approximation in perturbation and smoothing that preserve statistical benefits. We study more complicated functionals such as dynamic treatment regimes and the linear-programming formulation for policy optimization in infinite-horizon Markov decision processes. In the case of the latter, this approach can be used to approximate bias adjustments in the presence of arbitrary constraints, illustrating the usefulness of constructive approaches for Gateaux derivatives. We find that, omitting unfavorable dimension dependence of smoothing, although rate-double robustness permits coarser rates of perturbation size than implied by generic approximation analysis of finite-differences for the case of the counterfactual mean, this is not the case for the infinite-horizon MDP policy value.
| Accept | The authors make a solid contribution in the literature on computerized estimation of gateaux derivatives and automatic debiasing, with applications to causal inference, providing theorems on the level of numerical approximation that preserves root-n statistical rates. Therefore this should be an interesting paper for the causal ml community. The authors have addressed most major concerns raised by reviewers in their original evaluation.
On a minor note, the authors should also relate to the recent prior work on automatic debiased machine learning for dynamic effects, https://arxiv.org/abs/2203.13887, which seems to capture their main application, contrary to what is claimed in their related work. | train | [
"w3fJPAYA0r8",
"cYBmb0VMXTu",
"PwBnE0Rye3i",
"5S0hsiP8MU5",
"xufk_mkNelHj",
"7GxJnQwZj0C8",
"RKQ8Lz9joi-",
"e1v3TPAivJh",
"Ys1JwYhq5kv",
"FFh7tRZfDnW",
"-RwCB-hoc_tv",
"efpAkxIiERG",
"GptsOBStVsP",
"kBQ5tyB68Nr",
"vTTE9GXNRgH",
"MahQtH4oX8",
"6JtnZMM8fZqj",
"SGMVMTpuuqp",
"6GjZhh... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Thanks for the clarifications. They are very helpful! ",
" Thanks for reading and for your questions. We clarify below: \n\n## on 1.1 \n\n```This is a point perhaps worth clarification: Gateaux differentiability does not guarantee valid semiparametric inference with the influence function.```\n\nWe agree, and i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4
] | [
"cYBmb0VMXTu",
"5S0hsiP8MU5",
"6GjZhhls6FG",
"GptsOBStVsP",
"e1v3TPAivJh",
"RKQ8Lz9joi-",
"FFh7tRZfDnW",
"Ys1JwYhq5kv",
"MahQtH4oX8",
"6JtnZMM8fZqj",
"nips_2022_8gUjpEsLCU",
"6GjZhhls6FG",
"raLi3WPAAVk",
"raLi3WPAAVk",
"raLi3WPAAVk",
"RWJXtkBWopS",
"RWJXtkBWopS",
"6GjZhhls6FG",
"... |
nips_2022_QTjJMy-UNO | Adaptive Interest for Emphatic Reinforcement Learning | Emphatic algorithms have shown great promise in stabilizing and improving reinforcement learning by selectively emphasizing the update rule. Although the emphasis fundamentally depends on an interest function which defines the intrinsic importance of each state, most approaches simply adopt a uniform interest over all states (except where a hand-designed interest is possible based on domain knowledge). In this paper, we investigate adaptive methods that allow the interest function to dynamically vary over states and iterations. In particular, we leverage meta-gradients to automatically discover online an interest function that would accelerate the agent’s learning process. Empirical evaluations on a wide range of environments show that adapting the interest is key to provide significant gains. Qualitative analysis indicates that the learned interest function emphasizes states of particular importance, such as bottlenecks, which can be especially useful in a transfer learning setting. | Accept | Well-written and interesting paper that meta-learns the interest function in emphatic RL, rather than using a fixed interest function. The idea is well-motivated and a comprehensive empirical study is performed. There was an an active discussion among the reviewers and authors, in which the authors addressed the various reviewer questions effectively and which resulted in an updated presentation and an increase in scores. Overall, a clear accept. | train | [
"vywPhliI4eI",
"N0o2HsnbP3f",
"JR29fafqIif",
"ZNa8eBXwnLX",
"ti928Vh93-4",
"Ybf0o30BItB",
"6oOk2r-nD0J",
"DTyAk6vgCaP",
"weH0QbfwKy",
"oPMp6mVsFaXM",
"dXLX6xUP5Zi",
"gXl7CbsyW0E",
"UczXr-bZPuE0",
"hHcsGnf9jfU",
"amOfptpHnqA",
"18hAl_4ryO-",
"pNN-QISIcb5",
"0YnxfdkMEb"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for suggesting those experiments, comments, and responses. Please let us know if you have more questions or comments. \n",
" Thanks again for suggesting those experiments, comments, and responses. Please let us know if you have more questions or comments. \n",
" Thank you for the answer and for a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
5
] | [
"ZNa8eBXwnLX",
"JR29fafqIif",
"gXl7CbsyW0E",
"UczXr-bZPuE0",
"Ybf0o30BItB",
"6oOk2r-nD0J",
"oPMp6mVsFaXM",
"weH0QbfwKy",
"oPMp6mVsFaXM",
"18hAl_4ryO-",
"0YnxfdkMEb",
"amOfptpHnqA",
"pNN-QISIcb5",
"nips_2022_QTjJMy-UNO",
"nips_2022_QTjJMy-UNO",
"nips_2022_QTjJMy-UNO",
"nips_2022_QTjJM... |
nips_2022_HBGvWy9Vxq | Human-Robotic Prosthesis as Collaborating Agents for Symmetrical Walking | This is the first attempt at considering human influence in the reinforcement learning control of a robotic lower limb prosthesis toward symmetrical walking in real world situations. We propose a collaborative multi-agent reinforcement learning (cMARL) solution framework for this highly complex and challenging human-prosthesis collaboration (HPC) problem. The design of an automatic controller of the robot within the HPC context is based on accessible physical features or measurements that are known to affect walking performance. Comparisons are made with the current state-of-the-art robot control designs, which are single-agent based, as well as existing MARL solution approaches tailored to the problem, including multi-agent deep deterministic policy gradient (MADDPG) and counterfactual multi-agent policy gradient (COMA). Results show that, when compared to these approaches, treating the human and robot as coupled agents and using estimated human adaption in robot control design can achieve lower stage cost, peak error, and symmetry value to ensure better human walking performance. Additionally, our approach accelerates learning of walking tasks and increases learning success rate. The proposed framework can potentially be further developed to examine how human and robotic lower limb prosthesis interact, an area about which little is known. Advancing cMARL toward real world applications such as HPC for normative walking sets a good example of how AI can positively impact people's lives. | Accept | This paper presents an approach for control of a prosthesis under a novel collaborative multi-agent formulation. The approach is demonstrated in simulation.
All the reviewers agree that the paper's contribution is novel and interesting.
The reviewers also provided several suggestions on how to improve the manuscript and pointed out the limitations intrinsic to evaluating the approach only in simulation.
Evaluating the approach in the real-world seems a natural and desirable next step.
I invite the authors to carefully integrate all the feedback received and, in particular, better highlight limitations and societal impact.
Personal comment: The references would benefit from some work: some of the references are poorly formatted; [30] and [31] cite the same paper; and finally, you are not really citing 110 papers in the main text, so I think you might have forgotten a \nocite in the code. | train | [
"YYTPU9oTMx7",
"IuxsrPb0bN",
"uyMmYvqnui0",
"GQmseyqYHj0",
"WJyp1sK_zTM",
"AqpNiwey0bx",
"YHKeIpgGWLI",
"iM7Ij31SHNw",
"Y4AcfDQxuvZ_",
"-xEqeHEFwGz",
"rbWG462PtEP",
"C30D6QsFbqg"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" [Interpretation of Results] -- great that we were able to be \"super helpful\". Your questions were super helpful to us as well. we will do our best to include materials esp. G4.3 and G4.4. \n[Implementation Details] -- we already agreed to that.\n[Related Work] -- thank you.\n[Other Objectives] and [Societal Imp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"IuxsrPb0bN",
"iM7Ij31SHNw",
"iM7Ij31SHNw",
"AqpNiwey0bx",
"Y4AcfDQxuvZ_",
"rbWG462PtEP",
"-xEqeHEFwGz",
"C30D6QsFbqg",
"nips_2022_HBGvWy9Vxq",
"nips_2022_HBGvWy9Vxq",
"nips_2022_HBGvWy9Vxq",
"nips_2022_HBGvWy9Vxq"
] |
nips_2022_GisHNaleWiA | Uni[MASK]: Unified Inference in Sequential Decision Problems | Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the UniMASK framework, which provides a unified way to specify models which can be trained on many different sequential decision making tasks. We show that a single UniMASK model is often capable of carrying out many tasks with performance similar to or better than single-task models. Additionally, after fine-tuning, our UniMASK models consistently outperform comparable single-task models. | Accept | This paper extends masked language modeling to other sequential decision making problems. The idea is simple (which is a plus), the experiments are thorough, and the results are convincing. All reviewers agreed this is a good paper. I recommend acceptance. | train | [
"UlnvHlBOLl",
"cYpwcJf7ry2",
"jIuhBb-tARk",
"HssnEDCjpHIM",
"89DEEAw4WXq",
"Gg6bchuL2h6",
"YpNb7fepoR9a",
"8w2CSxqqCFM",
"nrpZFTMxu_F",
"-8Z5PQ1f9s7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. It has addressed most of my concerns. I personally consider Transformer encoders and decoders (without cross-attention) to be the same architecture, but with a different 2-D mask (used in a difference sense than input [MASK] tokens) of shape sentence_length x sentence_length that dete... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"89DEEAw4WXq",
"nips_2022_GisHNaleWiA",
"-8Z5PQ1f9s7",
"nrpZFTMxu_F",
"nrpZFTMxu_F",
"8w2CSxqqCFM",
"8w2CSxqqCFM",
"nips_2022_GisHNaleWiA",
"nips_2022_GisHNaleWiA",
"nips_2022_GisHNaleWiA"
] |
nips_2022_hjqTeP05OMB | Leveraging the Hints: Adaptive Bidding in Repeated First-Price Auctions | With the advent and increasing consolidation of e-commerce, digital advertising has very recently replaced traditional advertising as the main marketing force in the economy. In the past four years, a particularly important development in the digital advertising industry is the shift from second-price auctions to first-price auctions for online display ads. This shift immediately motivated the intellectually challenging question of how to bid in first-price auctions, because unlike in second-price auctions, bidding one's private value truthfully is no longer optimal. Following a series of recent works in this area, we consider a differentiated setup: we do not make any assumption about other bidders' maximum bid (i.e. it can be adversarial over time), and instead assume that we have access to a hint that serves as a prediction of other bidders' maximum bid, where the prediction is learned through some blackbox machine learning model. We consider two types of hints: one where a single point-prediction is available, and the other where a hint interval (representing a type of confidence region into which others' maximum bid falls) is available. We establish minimax optimal regret bounds for both cases and highlight the quantitatively different behavior between the two settings. We also provide improved regret bounds when the others' maximum bid exhibits the further structure of sparsity. Finally, we complement the theoretical results with demonstrations using real bidding data. | Accept | The paper studies a regret minimization model of bidding in repeated first price auctions, when a noisy "hint" about the highest competing bid is available. The question is whether the availability of such a hint can significantly reduce the best achievable regret. 
The authors give almost matching lower and upper bounds on the regret in several cases (e.g., if the hint is provided as a point/interval estimate, or if the distribution of the highest competing bid has a small finite support). They also present an experimental evaluation of the algorithms.
The reviewers agree that this is a practically relevant and theoretically interesting model, and the results are non-trivial and interesting. The proofs are technically involved, although they do not introduce particularly new techniques. While the main contribution of the paper is theoretical, the experimental results nicely complement the theory. Overall, this is a nice paper and can be accepted to NeurIPS. | val | [
"TajDkvnc-GN",
"oIF6xrid0Q0",
"w6B7hWCLe-9",
"B3zfSB22aH2",
"FBcQ7Aolgjx",
"u7DFGXxojOT",
"dKqJpR_s6km",
"JXVMUPcizWQ",
"eESL7lHXgiJ"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are grateful to all reviewers for their careful reading and detailed reviews. In the rebuttal, we provide detailed point-to-point response to each and every comment of the reviewers, with a separate comment to each review. \n",
" We thank the reviewer for the thoughtful review and positive feedback. We use t... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"nips_2022_hjqTeP05OMB",
"JXVMUPcizWQ",
"dKqJpR_s6km",
"u7DFGXxojOT",
"eESL7lHXgiJ",
"nips_2022_hjqTeP05OMB",
"nips_2022_hjqTeP05OMB",
"nips_2022_hjqTeP05OMB",
"nips_2022_hjqTeP05OMB"
] |
nips_2022_8ViFz-5Mnnv | ReCo: Retrieve and Co-segment for Zero-shot Transfer | Semantic segmentation has a broad range of applications, but its real-world impact has been significantly limited by the prohibitive annotation costs necessary to enable deployment. Segmentation methods that forgo supervision can side-step these costs, but exhibit the inconvenient requirement to provide labelled examples from the target distribution to assign concept names to predictions. An alternative line of work in language-image pre-training has recently demonstrated the potential to produce models that can both assign names across large vocabularies of concepts and enable zero-shot transfer for classification, but do not demonstrate commensurate segmentation abilities.
We leverage the retrieval abilities of one such language-image pre-trained model, CLIP, to dynamically curate training sets from unlabelled images for arbitrary collections of concept names, and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then employed to construct a segmentation model (without requiring pixel labels) whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. We demonstrate that our approach, termed Retrieve and Co-segment (ReCo) performs favourably to conventional unsupervised segmentation approaches while inheriting the convenience of nameable predictions and zero-shot transfer. We also demonstrate ReCo’s ability to generate specialist segmenters for extremely rare objects. | Accept | After author response and the discussion the paper received 1x borderline reject, 1x borderline accept, 3x weak accept [note that one reviewer mentioned the score increase only in the discussion].
The main strengths are:
- Overall novel framework for zero-shot segmentation
- Strong performance
- The authors revised the paper and addressed many/most of the reviewer's concerns/suggestions in the author response.
I recommend acceptance, with the expectation that:
* the authors provide the additional revisions as promised
* If possible, address the comment of reviewer 1QtT: "what if remove Eq. (3)? It seems P^c_{new} is already good enough from Figure 2."
| val | [
"figSpb_oY-0",
"8KbwpOFw2zJ",
"wgSXxuMCvT",
"Ic6bL35vchI",
"8ph-7ZxuaTq",
"YeTeUaX-Qbk",
"NQpi8LV7k9",
"VBAWwGuhKw7",
"QCiBx9A4NIi",
"8d25TV0A9cQ",
"ZPueiKzCWH3",
"qGSTpFmTuT",
"xoIDLcX8x_-",
"vjLU178NJ3P",
"8RvUUUSJI_x"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for adding the visualization samples as well as the detailed responses to all of our questions. I agree with most of the strengths that are raised by other reviewers and the authors have also answered my question satisfactorily. I will retain my initial rating of weak accept.\n\nPlease incorpo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3,
3
] | [
"NQpi8LV7k9",
"QCiBx9A4NIi",
"Ic6bL35vchI",
"VBAWwGuhKw7",
"8RvUUUSJI_x",
"vjLU178NJ3P",
"xoIDLcX8x_-",
"qGSTpFmTuT",
"ZPueiKzCWH3",
"nips_2022_8ViFz-5Mnnv",
"nips_2022_8ViFz-5Mnnv",
"nips_2022_8ViFz-5Mnnv",
"nips_2022_8ViFz-5Mnnv",
"nips_2022_8ViFz-5Mnnv",
"nips_2022_8ViFz-5Mnnv"
] |
nips_2022_M_et7iOQC_s | Boosting the Performance of Generic Deep Neural Network Frameworks with Log-supermodular CRFs | Historically, conditional random fields (CRFs) were popular tools in a variety of application areas from computer vision to natural language processing, but due to their higher computational cost and weaker practical performance, they have, in many situations, fallen out of favor and been replaced by end-to-end deep neural network (DNN) solutions. More recently, combined DNN-CRF approaches have been considered, but their speed and practical performance still falls short of the best performing pure DNN solutions. In this work, we present a generic combined approach in which a log-supermodular CRF acts as a regularizer to encourage similarity between outputs in a structured prediction task. We show that this combined approach is widely applicable, practical (it incurs only a moderate overhead on top of the base DNN solution) and, in some cases, it can rival carefully engineered pure DNN solutions for the same structured prediction task. | Accept | The paper proposes to use log-supermodular CRFs to smooth DNN models. The paper is well motivated and conducts extensive experiments to verify the effectiveness of the attractive smoothing algorithm.
It could be better if the paper conducted more experiments based on different network architectures such as RNNs and Transformers. Besides, this paper only explores the proposed attractive smoothing; it should make a comparison with other smoothing methods.
| train | [
"gw0UgTnJLo",
"wDc1-SrstvE",
"ndHl6TYph8",
"3l1iA7XwFXj",
"xCvfiwE7Xab",
"pysOoXOaVhV"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their responses. I am satisfied with some of them but I still have concerns:\n1) It is not entirely clear if the baselines in this paper are strong enough. I'm not an expert in this field but it seems there should\nbe existing (potentially imperfect) alternatives to perform smoothing wit... | [
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
2,
1,
2
] | [
"ndHl6TYph8",
"pysOoXOaVhV",
"xCvfiwE7Xab",
"nips_2022_M_et7iOQC_s",
"nips_2022_M_et7iOQC_s",
"nips_2022_M_et7iOQC_s"
] |
nips_2022__sYOodxTMcF | End-to-end Stochastic Optimization with Energy-based Model | Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters. By integrating predictive modeling with an implicitly differentiable optimization layer, DFL has shown superior performance to the standard two-stage predict-then-optimize pipeline. However, most existing DFL methods are only applicable to convex problems or a subset of nonconvex problems that can be easily relaxed to convex ones. Further, they can be inefficient in training due to the requirement of solving and differentiating through the optimization problem in every training iteration. We propose SO-EBM, a general and efficient DFL method for stochastic optimization using energy-based models. Instead of relying on KKT conditions to induce an implicit optimization layer, SO-EBM explicitly parameterizes the original optimization problem using a differentiable optimization layer based on energy functions. To better approximate the optimization landscape, we propose a coupled training objective that uses a maximum likelihood loss to capture the optimum location and a distribution-based regularizer to capture the overall energy landscape. Finally, we propose an efficient training procedure for SO-EBM with a self-normalized importance sampler based on a Gaussian mixture proposal. We evaluate SO-EBM in three applications: power scheduling, COVID-19 resource allocation, and non-convex adversarial security game, demonstrating the effectiveness and efficiency of SO-EBM. | Accept | In agreement with all the reviewers, I recommend acceptance. In the final version, the authors should take into account the reviewers' recommendations and update the paper accordingly. Also, regarding the suggestion of Reviewer kQia, I recommend that the authors include a high-dimensional synthetic experiment to analyze how their method behaves in that scenario. | train | [
"-pZN_tysus",
"8dQ9R6at3zu",
"mpmrz7ct07U",
"rxiOVCVO0Mbi",
"1v8K4YZPWNE",
"2HAxTcquBS",
"0Pag35XywYL",
"UfLatq8Dzf",
"e5NJCgKIl94H",
"Qlgwwb3tU-B",
"0SjGa1awOay",
"PkSi1caTof",
"nAG3QymYlnp",
"CM01dno-gul",
"-CVtYWzeW5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your responses! We agree that the solver itself does not need to be differentiable. We used ‘differentiable solver’ to refer to existing differentiable optimization libraries, e.g., QPTH and Cvxpylayers, that are able to backpropagate through the optimal solution of the solver. Such a differentiable so... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"8dQ9R6at3zu",
"0SjGa1awOay",
"1v8K4YZPWNE",
"nips_2022__sYOodxTMcF",
"UfLatq8Dzf",
"0Pag35XywYL",
"e5NJCgKIl94H",
"-CVtYWzeW5",
"CM01dno-gul",
"nAG3QymYlnp",
"PkSi1caTof",
"nips_2022__sYOodxTMcF",
"nips_2022__sYOodxTMcF",
"nips_2022__sYOodxTMcF",
"nips_2022__sYOodxTMcF"
] |
nips_2022_kXXPLBEBVGH | Context-enriched molecule representations improve few-shot drug discovery | A central task in computational drug discovery is to construct models from known active molecules to find further promising molecules for subsequent screening. However, typically only very few active molecules are known. Therefore, few-shot learning methods have the potential to improve the effectiveness of this critical phase of the drug discovery process. We introduce a new method for few-shot drug discovery. Its main idea is to enrich a molecule representation by knowledge about known context or reference molecules. Our novel concept for molecule representation enrichment is to associate molecules from both the support set and the query set with a large set of reference (context) molecules through a modern Hopfield network. Intuitively, this enrichment step is analogous to a human expert who would associate a given molecule with familiar molecules whose properties are known. The enrichment step reinforces and amplifies the covariance structure of the data and simultaneously removes spurious correlations arising from the decoration of molecules. We analyze our novel method on FS-Mol, which is the only established few-shot learning benchmark dataset for drug discovery. An ablation study shows that the enrichment step of our method is key to improving the predictive quality. In a domain shift experiment, our new method is more robust than other methods. On FS-Mol, our new method achieves a new state-of-the-art and outperforms all other few-shot methods. | Reject | This paper proposes a context-enriched molecular representation approach for few-shot drug discovery. Specifically, they show that the proposed MHNfs outperform existing methods on standard benchmarks including FS-Mol and the Tox21.
The proposed approach is analogous to retrieval-based approaches or prototype-editing generation models in the deep learning community: it enriches molecular representations by querying a massive reference space, i.e., it first looks at a group of context drug-like molecules and then uses self-attention to distribute that information among the query and support set molecules.
This is a borderline paper, and the main concern is the limited insight offered in the result analysis (e.g., what the model learned from the context). I recommend that the authors address the weaknesses above and resubmit to another venue. | train | [
"kOJejxJ7cb5",
"CqjliIfF35k",
"1iOqKQEaQV",
"N2_xGwnl8Da",
"kkwErRPTGNu",
"5_PuCed9Yfc",
"ScK4hE8AsLO",
"iaXyh8Fc65iC",
"G_Tu73zKdet",
"5FX0U772ghPm",
"Uq4aj9nUfm9",
"EcWBjsjGdxi",
"xgIc5xO2fVa",
"VV0aCt2bZR",
"0jbEdFSrxiPZ",
"AFp6oRLEinM",
"EFplEJ9v71S",
"qd-dNrtV0or",
"EtAvz-y2... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" MHNfs is able to outperform all other methods included in Table 1 on the FS-Mol test set (*). The FS-Mol test set consists of 157 tasks and the performances on the test set can be found in Table 1, column “ALL”. \n\n[a3] really performs well for two sub-categories (oxidoreductases, hydrolases). But this is not a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"kkwErRPTGNu",
"5_PuCed9Yfc",
"ScK4hE8AsLO",
"ScK4hE8AsLO",
"5FX0U772ghPm",
"xgIc5xO2fVa",
"0jbEdFSrxiPZ",
"nips_2022_kXXPLBEBVGH",
"EtAvz-y2OA",
"EtAvz-y2OA",
"qd-dNrtV0or",
"qd-dNrtV0or",
"qd-dNrtV0or",
"EFplEJ9v71S",
"EFplEJ9v71S",
"EFplEJ9v71S",
"nips_2022_kXXPLBEBVGH",
"nips_2... |
nips_2022_scfOjwTtZ8S | EAGER: Asking and Answering Questions for Automatic Reward Shaping in Language-guided RL | Reinforcement learning (RL) in long horizon and sparse reward tasks is notoriously difficult and requires a lot of training steps. A standard solution to speed up the process is to leverage additional reward signals, shaping them to better guide the learning process.
In the context of language-conditioned RL, the abstraction and generalisation properties of the language input provide opportunities for more efficient ways of shaping the reward.
In this paper, we leverage this idea and propose an automated reward shaping method where the agent extracts auxiliary objectives from the general language goal. These auxiliary objectives use a question generation (QG) and a question answering (QA) system: they consist of questions leading the agent to try to reconstruct partial information about the global goal using its own trajectory.
When it succeeds, it receives an intrinsic reward proportional to its confidence in its answer.
This incentivizes the agent to generate trajectories which unambiguously explain various aspects of the general language goal.
Our experimental study using various BabyAI environments shows that this approach, which does not require engineer intervention to design the auxiliary objectives, improves sample efficiency by effectively directing the exploration. | Accept | This paper presents a language-guided auxiliary reward mechanism based on generating Q&A pairs from agent trajectories and rewarding the agent for producing trajectories that yield correct answers from an answering model. The reviewers broadly found the paper compelling and convincing, and thus I am happy to follow their general consensus in recommending acceptance. | train | [
"3NiJxv39P0q",
"sBRHXJ8zb6j",
"v6Hw82r4keA",
"PdKc_islOq",
"dfRnax8QF-U",
"gLd15adZa7h",
"ZQE3mmm2JJ",
"94i5JZq5gzi",
"-DtU7oRvAc",
"WY_054QLdVB",
"ZhOfPYCqBZQ",
"rBqvq8H6Ax6",
"EjGgrkd1i1p",
"t-_PkzKi6y7",
"l1-oQlYUhVtI",
"JaIyAlgTCar",
"yxPCZ0QWH5r",
"YG5NTaTm8bp",
"mb2JRm3yMKQ... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" For the discussion regarding guessing the answers: I am not convinced that by adding a `no answer' option would resolve this issue. The statement from the author response is not backed by numbers. The paper should include an experiment which trains a QA system without conditioning on the trajectories, and then ev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"gLd15adZa7h",
"dfRnax8QF-U",
"PdKc_islOq",
"ZQE3mmm2JJ",
"94i5JZq5gzi",
"rBqvq8H6Ax6",
"t-_PkzKi6y7",
"JaIyAlgTCar",
"WY_054QLdVB",
"EjGgrkd1i1p",
"l1-oQlYUhVtI",
"kmuMuFlrjv9",
"Exkbrib98-8",
"mb2JRm3yMKQ",
"nips_2022_scfOjwTtZ8S",
"yxPCZ0QWH5r",
"YG5NTaTm8bp",
"nips_2022_scfOjwT... |
nips_2022_q9XPBhFgL6z | A Causal Analysis of Harm | As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework to
address when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and ``replaced by more well-behaved notions''. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality (Halpern, 2016). The key novelty of our definition is that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems. | Accept | The paper aims at formally defining a qualitative notion of harm based on "actual causality". This is an important problem, also, or even especially, for ML research. Many ethical issues raised about AI circle around a notion of harm, such as discrimination induced by classifiers, decisions of autonomous agents, and even just harmful content of training sets. From this perspective, I really congratulate the authors. However, it is a downside that existing related approaches within the ML community are not discussed. It is good tradition and practice to provide related work. Indeed, there is no actual-causality model of harm yet (as far as I can say), but there are e.g. deontological approaches that could be mentioned or even briefly discussed, such as
Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, Yejin Choi: Delphi: Towards Machine Ethics and Norms. CoRR abs/2110.07574 (2021)
Patrick Schramowski, Christopher Tauchmann, Kristian Kersting: Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content? In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (2022)
Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf, Kristian Kersting. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence 4(3): 258-268 (2022)
Moreover, the authors should discuss "we define an event to cause harm whenever it causes the utility of the outcome to be lower than the default utility" in more detail. Why is this a good definition? Let us for a moment equate utility with money; then even too much money could be harmful due to network effects. In other words, it would be great if the authors could discuss a bit more the assumptions made and their implications. This is even more important given the closeness to Bountly's notion of harm, replacing “state of affairs” by “outcomes”, and associating with each outcome a utility. The main downside, however, is the missing illustration of how the presented formalization of harm will actually help to tackle some of the ethical issues of AI. Is this really the right way? But then, who knows, and this paper is indeed taking a very different step than the average NeurIPS paper. In my opinion, the reviewers raise some salient arguments about the suitability of this paper for NeurIPS. For instance, indeed only positive examples are presented. However, the negative examples in the review are meant to illustrate that actual causality is not the right tool either. Still, a discussion within the NeurIPS community would already help. Moreover, not knowing to break the arm is still causing harm. Anyhow, it is exactly this type of discussion that makes the paper, in my humble opinion, interesting to the NeurIPS community. The discussions with the authors showed that it is a topic that will provoke a lot of discussion. And since the overall sentiment of the reviewers is positive, I recommend accepting the paper. | train | [
"3sy7zjrDzNu",
"I2m0UtxfVi",
"Q4ZEs4XZvk7",
"cC6HFLMeLrd",
"IcJIrMjCtk",
"F0vT-pmV1zV",
"7ZAFvJ-1svq",
"pa4yInFWh__",
"kkKulIGMsFd",
"R1Oz-7Asurk",
"bt5diU6zJNr",
"mGHGyRSjvIa",
"qHR-0A-nNHb",
"SVEEsjWnld",
"85gpXk-5F_Y",
"q-bP-ra7jU5",
"cwlv5p5iTQ",
"UJkezi9PA6D",
"7iKzN2OQtSc",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_re... | [
" The reviewer raised a flag that a vaguely defined utility function could lead to ethical issues in practice. While this is true, I don't see it as an ethical issue with the work in this submission itself. authors touch on this in the paper No concerns on my part.",
" This seems to be a false alarm. No issues n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4,
2
] | [
"nips_2022_q9XPBhFgL6z",
"nips_2022_q9XPBhFgL6z",
"85gpXk-5F_Y",
"qHR-0A-nNHb",
"bt5diU6zJNr",
"R1Oz-7Asurk",
"R1Oz-7Asurk",
"mGHGyRSjvIa",
"SVEEsjWnld",
"q-bP-ra7jU5",
"cwlv5p5iTQ",
"UJkezi9PA6D",
"4bvF5KVbNgO",
"yGK1en9oAbv",
"mfOabWOLdZL",
"7iKzN2OQtSc",
"7iKzN2OQtSc",
"Yp8s1NQz... |
nips_2022_dJgYhYKvr1 | The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the \emph{Grokking Phenomenon} | The \emph{grokking phenomenon} as reported by Power et al.~\cite{power2021grokking} refers to a regime where a long period of overfitting is followed by a seemingly sudden transition to perfect generalization. In this paper, we attempt to reveal the underpinnings of Grokking via a series of empirical studies. Specifically, we uncover an optimization anomaly plaguing adaptive optimizers at extremely late stages of training, referred to as the \emph{Slingshot Mechanism}. A prominent artifact of the Slingshot Mechanism can be measured by the cyclic phase transitions between stable and unstable training regimes, and can be easily monitored by the cyclic behavior of the norm of the last layer's weights. We empirically observe that without explicit regularization, Grokking as reported in \cite{power2021grokking} almost exclusively happens at the onset of \emph{Slingshots}, and is absent without it.
While common and easily reproduced in more general settings, the Slingshot Mechanism does not follow from any known optimization theories that we are aware of, and can be easily overlooked without an in-depth examination. Our work points to a surprising and useful inductive bias of adaptive gradient optimizers at late stages of training, calling for a revised theoretical analysis of their origin. | Reject | The paper examines a widely known phenomenon when training neural networks with adaptive optimizers, where the training loss cyclically alternates in later stages. Some evidence is given that, in the absence of explicit regularization, this is associated with improved generalization. The more concrete contribution of the paper is to show a strong link between the aforementioned cyclicity and sudden cyclic growth in the weight of the last layer of the network. The main concern is that, beyond this empirical observation, a specific mechanism behind the phenomenon is not identified, nor is a clear connection made with the apparent benefit to generalization. While the observations have merit, it is hard to determine whether cause-effect relationships exist between them, making the paper feel like a work-in-progress and only weakly significant to the community (judging by the reviewers being underwhelmed). If the authors are convinced of their message that "slingshots" cause "grokking" (in contrast to, e.g., being a by-product while the real mechanism lies elsewhere), then they are advised to show exactly that in the further elaboration of their work. | train | [
"5e56z8UGWeV",
"sy19PGsLPrF",
"bnzu0nykmK-",
"imwtB9VEHEO",
"RrOv3uGiEm0",
"OTy_PWc-0AR",
"fNOgGDahoqS",
"tvlyTbF4r4Y",
"GmMV6-olEoU",
"s96PB6Fbfjs"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" *We thank **reviewer BLwe** for their active engagement and thoughtful questions.*\n\nWe first apologize for confusing points made by the paper and our previous responses. We want to clarify that we agree with the reviewer that large Lipschitz L would indeed lead to the loss spiking. However, we meant to point ou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"sy19PGsLPrF",
"OTy_PWc-0AR",
"imwtB9VEHEO",
"s96PB6Fbfjs",
"GmMV6-olEoU",
"tvlyTbF4r4Y",
"nips_2022_dJgYhYKvr1",
"nips_2022_dJgYhYKvr1",
"nips_2022_dJgYhYKvr1",
"nips_2022_dJgYhYKvr1"
] |
nips_2022_FR289LMkmxZ | On-Demand Sampling: Learning Optimally from Multiple Distributions | Social and real-world considerations such as robustness, fairness, social welfare, and multi-agent trade-offs have given rise to multi-distribution learning paradigms, such as collaborative [Blum et al. 2017], group distributionally robust [Sagawa et al. 2019], and fair federated [Mohri et al. 2019] learning. In each of these settings, a learner seeks to minimize its worst-case loss over a set of $n$ predefined distributions, while using as few samples as possible. In this paper, we establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity. Importantly, our sample complexity bounds exceed that of the sample complexity of learning a single distribution only by an additive factor of $\frac{n\log(n)}{\epsilon^2}$. These improve upon the best known sample complexity of agnostic federated learning by Mohri et al. 2019 by a multiplicative factor of $n$, the sample complexity of collaborative learning by Nguyen and Zakynthinou 2018 by a multiplicative factor of $\frac{\log(n)}{\epsilon^3}$, and give the first sample complexity bounds for the group DRO objective of Sagawa et al. 2019. To achieve optimal sample complexity, our algorithms learn to sample and learn from distributions on demand. Our algorithm design and analysis is enabled by our extensions of stochastic optimization techniques for solving stochastic zero-sum games. In particular, we contribute variants of Stochastic Mirror Descent that can trade off between players' access to cheap one-off samples and more expensive reusable ones. | Accept | This paper studies multi-distribution learning; it formulates the problem as a zero-sum game with the stochastic payoff and then uses tools from the framework of stochastic mirror descent to obtain optimal sample complexities for various collaborative learning settings. 
All reviewers are very positive about this paper: interesting problems, nice techniques, and optimal results. | train | [
"BknySnHTEo",
"ym7JcEaF5sY",
"x7PgVFHROl",
"l4jDTZTi7nr",
"VuEG_x6N_6",
"GwAvBxY7alE",
"KXFH_rpoN8r",
"YYKuEn_RrS2",
"NbE7hkXnyvc",
"evnp5tLoEI7",
"yIT9zuR8l-4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for answering the questions! I would encourage the authors to include part of the discussion in the final version of the paper. My overall evaluation of the paper remains positive.",
" I thank the authors for clarifying my comments. I will keep my score. Thanks!",
" Thanks authors for responding to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"l4jDTZTi7nr",
"GwAvBxY7alE",
"KXFH_rpoN8r",
"yIT9zuR8l-4",
"evnp5tLoEI7",
"NbE7hkXnyvc",
"YYKuEn_RrS2",
"nips_2022_FR289LMkmxZ",
"nips_2022_FR289LMkmxZ",
"nips_2022_FR289LMkmxZ",
"nips_2022_FR289LMkmxZ"
] |
nips_2022_m6HNNpQO8dc | Logical Activation Functions: Logit-space equivalents of Probabilistic Boolean Operators | The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of presence of features within the stimulus. We derive logit-space operators equivalent to probabilistic Boolean logic-gates AND, OR, and XNOR for independent probabilities. Such theories are important to formalize more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean-logic as the core operation of the neural network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to be directly used within neural networks. Consequently, we construct efficient approximations named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits), $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ and $\text{OR}_\text{AIL}$ are generalizations of ReLU to two-dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning. | Accept | The review ratings were above the acceptance threshold. 
The reviewers valued quite positively the originality of this paper, which studied the advantages of multivariate activation functions, as well as its extensive numerical experiments across a wide range of tasks to explore their effectiveness. Upon reading the reviews, the author responses, the subsequent discussion between the reviewers and the authors, as well as the paper itself, I thought that the restriction of consideration to activation functions derived from approximating Boolean operators on independent Bernoullis was neither well motivated nor well described. At the same time, the empirical evidence demonstrating the potential usefulness of the proposal is quite interesting, so I would like to recommend acceptance of this paper, and would expect further discussion on this subject among the attendees of the conference. | train | [
"CsXQ4lx8uYV",
"bdV0NVlTzI7",
"HghhGIyhiMV",
"3sgvwCahUFZ",
"VuhrUeHAb8D",
"rM0Fa2mbkOr",
"IuQbV8wTS_3M",
"Cd5-uJ8aom",
"1WYXjk_sUEt",
"F_F0rEFM89",
"8EPkYzhcTMr",
"KUYVpym3X82",
"1wOsVTAUQiz",
"qPgC6A4N-5q",
"cb9HfeVekCM",
"rvZUu7-T7t7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response, and for considering my feedback in the revised version of the manuscript. My original concerns have now largely been addressed, and to reflect that I will update my score from 3 to 6.",
" > *you did not address my criticism that \"For the CNN case, a different choice of logical-gate... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"bdV0NVlTzI7",
"HghhGIyhiMV",
"3sgvwCahUFZ",
"VuhrUeHAb8D",
"rM0Fa2mbkOr",
"IuQbV8wTS_3M",
"Cd5-uJ8aom",
"1WYXjk_sUEt",
"F_F0rEFM89",
"8EPkYzhcTMr",
"qPgC6A4N-5q",
"rvZUu7-T7t7",
"cb9HfeVekCM",
"nips_2022_m6HNNpQO8dc",
"nips_2022_m6HNNpQO8dc",
"nips_2022_m6HNNpQO8dc"
] |
nips_2022_OptX3Db1P4 | Dynamic pricing and assortment under a contextual MNL demand | We consider dynamic multi-product pricing and assortment problems under an unknown demand over T periods, where in each period, the seller decides on the price for each product or the assortment of products to offer to a customer who chooses according to an unknown Multinomial Logit Model (MNL). Such problems arise in many applications, including online retail and advertising. We propose a randomized dynamic pricing policy based on a variant of the Online Newton Step algorithm (ONS) that achieves a $O(d\sqrt{T}\log(T))$ regret guarantee under an adversarial arrival model. We also present a new optimistic algorithm for the adversarial MNL contextual bandits problem, which achieves a better dependency than the state-of-the-art algorithms in a problem-dependent constant $\kappa$ (potentially exponentially small). Our regret upper bound scales as $\tilde{O}(d\sqrt{\kappa T}+ \log(T)/\kappa)$, which gives a stronger bound than the existing $\tilde{O}(d\sqrt{T}/\kappa)$ guarantees. | Accept | In this paper the authors study the problem of dynamic multi-product pricing and assortment under the Multinomial Logit model (MNL). For the multi-product pricing problem, they propose an Online Newton method with a provable regret bound of $O(d\sqrt{T}\log(T))$. For the assortment problem, they propose OFU-MNL with a better dependency on the problem-dependent parameter.
Overall, the authors have done a good job addressing the reviewers' concerns. While there is still a lot to do to bring the paper in line with the reviewers' suggestions, I think this is a good paper and worth being published at NeurIPS, subject to the aforementioned edits. | train | [
"A_Vnu3Xhvr2",
"0_NNRj_fDXE",
"eZnYayf0_fh",
"oKT-xmBF549I",
"IZSSROURuhU",
"7hMFzDvJMB",
"5k3_eOASPUp",
"fKBnigdOsk_",
"g-Uc5jx9k8U",
"eoxgl9FURLA",
"3aDQ9-c6xxV"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Please find in the supplementary material a first revised version of the paper including a more detailed literature review and the runtime for both algorithms.",
" Please find a first revised version of the paper in the supplementary material. ",
" I would like to thank the authors for putting together the re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"eZnYayf0_fh",
"oKT-xmBF549I",
"7hMFzDvJMB",
"IZSSROURuhU",
"3aDQ9-c6xxV",
"eoxgl9FURLA",
"g-Uc5jx9k8U",
"nips_2022_OptX3Db1P4",
"nips_2022_OptX3Db1P4",
"nips_2022_OptX3Db1P4",
"nips_2022_OptX3Db1P4"
] |
nips_2022_uOdTKkg2FtP | Off-Team Learning | Zero-shot coordination (ZSC) evaluates an algorithm by the performance of a team of agents that were trained independently under that algorithm. Off-belief learning (OBL) is a recent method that achieves state-of-the-art results in ZSC in the game Hanabi. However, the implementation of OBL relies on a belief model that experiences covariate shift. Moreover, during ad-hoc coordination, OBL or any other neural policy may experience test-time covariate shift. We present two methods addressing these issues. The first method, off-team belief learning (OTBL), attempts to improve the accuracy of the belief model of a target policy πT on a broader range of inputs by weighting trajectories approximately according to the distribution induced by a different policy πb. The second, off-team off-belief learning (OT-OBL), attempts to compute an OBL equilibrium, where fixed point error is weighted according to the distribution induced by cross-play between the training policy π and a different fixed policy πb instead of self-play of π. We investigate these methods in variants of Hanabi. | Accept | The authors propose an improvement to off-belief learning, Off-Team learning, which closes the gap between belief models trained on fixed policies, and evaluation on learned policies for ZSC coordination problems. All reviewers have voted to weak/borderline accept - since I see no conceptual issues with the proposed framework and the evaluation seems sound, I will also vote to accept. The major area of constructive criticism is that the work seems to be somewhat incremental with respect to Hu et al. (ICML 21).
| val | [
"zear0cSHiFY",
"WGfTVSo7U9_",
"zgkpjl6dQj",
"JZeImEA5aO5",
"MaGO9QTOZIM",
"B7nKlHJfvmg",
"_xa-VU_HjY6g",
"zywx4Py_jG",
"JizQvW32OPe",
"0Iz31HJlapt",
"GJDpPYFlBxPm",
"BrrpQkHVyil",
"fv5iPxNQqLd",
"tBtxd44a96"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your follow-up questions and comments. Please find our additional comments below.\n\n1. \nThank you for sharing this work. To our understanding, the Policy Reuse Problem (Def 1 in Rosman et al., 2016) is about maintaining a belief over tasks (in Zheng et al. above, maintaining a belief over partner ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"WGfTVSo7U9_",
"JizQvW32OPe",
"B7nKlHJfvmg",
"JizQvW32OPe",
"zywx4Py_jG",
"zywx4Py_jG",
"0Iz31HJlapt",
"GJDpPYFlBxPm",
"tBtxd44a96",
"fv5iPxNQqLd",
"BrrpQkHVyil",
"nips_2022_uOdTKkg2FtP",
"nips_2022_uOdTKkg2FtP",
"nips_2022_uOdTKkg2FtP"
] |
nips_2022_zBlj0Cs6dw1 | A Deep Reinforcement Learning Framework for Column Generation | Column Generation (CG) is an iterative algorithm for solving linear programs (LPs) with an extremely large number of variables (columns). CG is the workhorse for tackling large-scale integer linear programs, which rely on CG to solve LP relaxations within a branch and bound algorithm. Two canonical applications are the Cutting Stock Problem (CSP) and Vehicle Routing Problem with Time Windows (VRPTW). In VRPTW, for example, each binary variable represents the decision to include or exclude a route, of which there are exponentially many; CG incrementally grows the subset of columns being used, ultimately converging to an optimal solution. We propose RLCG, the first Reinforcement Learning (RL) approach for CG. Unlike typical column selection rules which myopically select a column based on local information at each iteration, we treat CG as a sequential decision-making problem, as the column selected in an iteration affects subsequent iterations of the algorithm. This perspective lends itself to a Deep Reinforcement Learning approach that uses Graph Neural Networks (GNNs) to represent the variable-constraint structure in the LP of interest. We perform an extensive set of experiments using the publicly available BPPLIB benchmark for CSP and Solomon benchmark for VRPTW. RLCG converges faster and reduces the number of CG iterations by 22.4% for CSP and 40.9% for VRPTW on average compared to a commonly used greedy policy. | Accept | The reviewers all agree that the paper meets the acceptance bar.
At the same time, I would like to encourage the authors to seriously consider adding more experiments to the final paper, as recommended by the reviewers:
1. Given the motivation in the paper, the recommendation of Reviewer 8Csh about running some experiments for MIP is quite reasonable and would make the story much more convincing. It would be a significant improvement, worth the extra software engineering effort.
2. I do not quite agree with the reasons the authors gave for rejecting a comparison with the algorithms recommended by Reviewer Eksw. It was argued that these algorithms are superseded by the one-step lookahead algorithm. However, the latter is a greedy algorithm, while the ML-based methods may deviate from it, which could be beneficial (especially, Babaki et al. can try to learn a better sequential baseline if available). Some of these papers also consider adding multiple columns, as also suggested by Reviewer DMVD, and accepted as future work by the authors. Hence, a comparison to these papers could partially answer that question.
3. Including the validation curves for curriculum learning as requested by Reviewer Eksw would also be quite interesting.
I sincerely hope that the authors can run these experiments while preparing the final version. | train | [
"gI7GcyypAGM",
"VkIPjfM_e8W",
"p_zodWIDFC1",
"QRV_upGcsSb",
"dzbmE1RbRD0",
"pHsTvrps4dO",
"_arECoqWJnD",
"C5SHbx0tkrR",
"yXv8NXmUZIa",
"6QDD_6_6wT0",
"JRsKnXmFqF",
"nO_fR2uIYV"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" * __Curriculum learning:__ First, we would like to clarify the confusion (due to misleading terminology on our part) about Figure 7 being a learning curve, in particular the following comments in your original review: \n“In Fig 7 of the Appendix, the authors label their figures as ‘learning curves’. However, this... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"VkIPjfM_e8W",
"p_zodWIDFC1",
"pHsTvrps4dO",
"nO_fR2uIYV",
"JRsKnXmFqF",
"_arECoqWJnD",
"6QDD_6_6wT0",
"yXv8NXmUZIa",
"nips_2022_zBlj0Cs6dw1",
"nips_2022_zBlj0Cs6dw1",
"nips_2022_zBlj0Cs6dw1",
"nips_2022_zBlj0Cs6dw1"
] |
nips_2022_VPhhd5pv0Qs | Sublinear Algorithms for Hierarchical Clustering | Hierarchical clustering over graphs is a fundamental task in data mining and machine learning with applications in many domains including phylogenetics, social network analysis, and information retrieval. Specifically, we consider the recently popularized objective function for hierarchical clustering due to Dasgupta~\cite{Dasgupta16}, namely, minimum cost hierarchical partitioning. Previous algorithms for (approximately) minimizing this objective function require linear time/space complexity. In many applications the underlying graph can be massive in size making it computationally challenging to process the graph even using a linear time/space algorithm. As a result, there is a strong interest in designing algorithms that can perform global computation using only sublinear resources (space, time, and communication). The focus of this work is to study hierarchical clustering for massive graphs under three well-studied models of sublinear computation which focus on space, time, and communication, respectively, as the primary resources to optimize: (1) (dynamic) streaming model where edges are presented as a stream, (2) query model where the graph is queried using neighbor and degree queries, (3) massively parallel computation (MPC) model where the edges of the graph are partitioned over several machines connected via a communication channel.
We design sublinear algorithms for hierarchical clustering in all three models above. At the heart of our algorithmic results is a view of the objective in terms of cuts in the graph, which allows us to use a relaxed notion of cut sparsifiers to do hierarchical clustering while introducing only a small distortion in the objective function. Our main algorithmic contributions are then to show how cut sparsifiers of the desired form can be efficiently constructed in the query model and the MPC model. We complement our algorithmic results by establishing nearly matching lower bounds that rule out the possibility of designing algorithms with better performance guarantees in each of these models. | Accept | The paper presents new algorithms for hierarchical clustering in different regimes. In particular, they show a new algorithm for the (dynamic) edge streaming model, for the neighbor query model, and for the MPC model. The paper contains nice theoretical results, and in the rebuttal phase the author(s) supported them with interesting experimental results.
Overall, we suggest accepting the paper as a poster. | train | [
"_lqkkqeSl9",
"sR518zN44ia",
"0MJxOVchjB2",
"fOPoUS0mfev",
"R1sOJ3Aivx",
"xuCMPf3gCRc",
"4217sXw8gxJ",
"8WtXgt3-QG",
"JxwA260Sis",
"94ZVpYfaW7",
"7KPi3fRuobM"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed clarification w.r.t. related work as well as the preliminary experimental results. They have answered my questions.",
" (c) On a more technical front, you suggest that our lower bound construction is contrived. However, we would like to point out that lower bounds in general are high... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"fOPoUS0mfev",
"0MJxOVchjB2",
"94ZVpYfaW7",
"JxwA260Sis",
"7KPi3fRuobM",
"8WtXgt3-QG",
"nips_2022_VPhhd5pv0Qs",
"nips_2022_VPhhd5pv0Qs",
"nips_2022_VPhhd5pv0Qs",
"nips_2022_VPhhd5pv0Qs",
"nips_2022_VPhhd5pv0Qs"
] |
nips_2022_d19Dsqtw421 | A Few Expert Queries Suffices for Sample-Efficient RL with Resets and Linear Value Approximation | The current paper studies sample-efficient Reinforcement Learning (RL) in settings where only the optimal value function is assumed to be linearly-realizable. It has recently been understood that, even under this seemingly strong assumption and access to a generative model, worst-case sample complexities can be prohibitively (i.e., exponentially) large. We investigate the setting where the learner additionally has access to interactive demonstrations from an expert policy, and we present a statistically and computationally efficient algorithm (Delphi) for blending exploration with expert queries. In particular, Delphi requires $\tilde O(d)$ expert queries and a $\texttt{poly}(d,H,|A|,1/\varepsilon)$ amount of exploratory samples to provably recover an $\varepsilon$-suboptimal policy. Compared to pure RL approaches, this corresponds to an exponential improvement in sample complexity with surprisingly-little expert input. Compared to prior imitation learning (IL) approaches, our required number of expert demonstrations is independent of $H$ and logarithmic in $1/\varepsilon$, whereas all prior work required at least linear factors of both in addition to the same dependence on $d$. Towards establishing the minimal amount of expert queries needed, we show that, in the same setting, any learner whose exploration budget is \textit{polynomially-bounded} (in terms of $d,H,$ and $|A|$) will require \textit{at least} $\tilde\Omega(\sqrt{d})$ oracle calls to recover a policy competing with the expert's value function. Under the weaker assumption that the expert's policy is linear, we show that the lower bound increases to $\tilde\Omega(d)$. | Accept | We thank the authors for their submission.
The paper studies finite-horizon MDPs in which *only* the optimal value function is realized by a linear function. The learner has access to the MDP via a generative model and has to minimize its sample complexity for finding a policy with approximately optimal value function. This was shown by prior work to require a number of samples exponential in the problem parameters.
Contributions: First, an algorithm that guarantees polynomial sample complexity if the learner has additional access to $O(d)$ expert demonstrations. Second, a lower bound of $\Omega(\sqrt{d})$ on the required number of expert queries.
The work adds to our understanding of when MDPs with linear function approximation are solvable, showing that a hard RL problem becomes easy with a small amount of additional information. It is very well-written. | train | [
"yPhuuspJzCO",
"KoldYH6_34",
"hC-mrUyXv6",
"v7a7fOvyET7",
"P8wm-cuaEvT",
"K00GILKC8T",
"i702sVyDmbA",
"B4zMSHAO8Bs",
"-CF_4Aq7IDe",
"_R9gSLnosI",
"gyyaywi4x_",
"9MgNM1IUYU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their reply. \n\n> I would encourage the authors to make more explicit what the exact query model they are using is\n\nWill do!\n\n> One could imagine a case where the optimal policy in the MDP does have a linear value function, but the oracle policy we have access to does not. From wha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"hC-mrUyXv6",
"K00GILKC8T",
"P8wm-cuaEvT",
"B4zMSHAO8Bs",
"gyyaywi4x_",
"-CF_4Aq7IDe",
"_R9gSLnosI",
"9MgNM1IUYU",
"nips_2022_d19Dsqtw421",
"nips_2022_d19Dsqtw421",
"nips_2022_d19Dsqtw421",
"nips_2022_d19Dsqtw421"
] |
nips_2022_6mej19W1ppP | Certifying Some Distributional Fairness with Subpopulation Decomposition | Extensive efforts have been made to understand and improve the fairness of machine learning models based on observational metrics, especially in high-stakes domains such as medical insurance, education, and hiring decisions. However, there is a lack of certified fairness considering the end-to-end performance of an ML model. In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model performance loss bound on a fairness constrained distribution, which is within bounded distributional distance with the training distribution. We then propose a general fairness certification framework and instantiate it for both sensitive shifting and general shifting scenarios. In particular, we propose to solve the optimization problem by decomposing the original data distribution into analytical subpopulations and proving the convexity of the subproblems to solve them. We evaluate our certified fairness on six real-world datasets and show that our certification is tight in the sensitive shifting scenario and provides non-trivial certification under general shifting. Our framework is flexible to integrate additional non-skewness constraints and we show that it provides even tighter certification under different real-world scenarios. We also compare our certified fairness bound with adapted existing distributional robustness bounds on Gaussian data and demonstrate that our method is significantly tighter. | Accept | The paper considers an important problem of certifying fairness of trained classifiers based on its performance on some set of fairness constrained distributions. They show the framework for two types of shifts -sensitive shifts and general shifts.
Reviewers are supportive of acceptance. Many concerns were raised by the reviewers; the authors have promised changes and have made them in their revisions. I hope they can incorporate these in the camera-ready. | train | [
"I0RLhgM9NuN",
"1BiOpTwRt7T",
"1p6Fam8XWboU",
"_JUFow6pB_T",
"RZrwF686oM",
"Q1y7IsJdr63",
"rRTazUV69uQ",
"xyGPtDWzsR",
"NxaDiNIpsp",
"jTPl3spLoOZ",
"RtPgzCsWFEP",
"jTapBNAKFjz",
"oM6EijOfV9I",
"6Ukdxh2Radz",
"OWMjtuH0ND",
"rgez5fUgDMu"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the valuable suggestion. We will definitely incorporate these clarifications in our revision. Thank you for helping to improve our work again!",
" Thanks for answering my questions. I am happy with the responses but I suggest authors add these clarifications to the paper.\n\n\n\n",
" T... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"1BiOpTwRt7T",
"oM6EijOfV9I",
"_JUFow6pB_T",
"RZrwF686oM",
"Q1y7IsJdr63",
"NxaDiNIpsp",
"nips_2022_6mej19W1ppP",
"rgez5fUgDMu",
"rgez5fUgDMu",
"OWMjtuH0ND",
"OWMjtuH0ND",
"6Ukdxh2Radz",
"6Ukdxh2Radz",
"nips_2022_6mej19W1ppP",
"nips_2022_6mej19W1ppP",
"nips_2022_6mej19W1ppP"
] |
nips_2022_QFMw21ZKaa_ | Accelerating Certified Robustness Training via Knowledge Transfer | Training deep neural network classifiers that are certifiably robust against adversarial attacks is critical to ensuring the security and reliability of AI-controlled systems. Although numerous state-of-the-art certified training methods have been developed, they are computationally expensive and scale poorly with respect to both dataset and network complexity. Widespread usage of certified training is further hindered by the fact that periodic retraining is necessary to incorporate new data and network improvements. In this paper, we propose Certified Robustness Transfer (CRT), a general-purpose framework for reducing the computational overhead of any certifiably robust training method through knowledge transfer. Given a robust teacher, our framework uses a novel training loss to transfer the teacher’s robustness to the student. We provide theoretical and empirical validation of CRT. Our experiments on CIFAR-10 show that CRT speeds up certified robustness training by 8× on average across three different architecture generations while achieving comparable robustness to state-of-the-art methods. We also show that CRT can scale to large-scale datasets like ImageNet. | Accept | This work considers how to transfer a well-trained, certified robust model (i.e., randomized smoothing model) with data from a new domain. To achieve this, it uses a pre-trained smooth model as the teacher model and proposes a simple loss function to make the student model also learn a similar smoothness. With a good teacher model, the student model can be trained very quickly, much faster than training from scratch.
Although some reviewers and I found the novelty of the work is limited, all of us agree that this work has empirical values in how to efficiently train a robust neural network in practical scenarios. Therefore, I recommend acceptance. | train | [
"1ciz0u63t1F",
"SfU8KyQXNpD",
"_xyBgWaH1tl",
"JYqlAzzt530n",
"c24g9sQ40aVD",
"SEhz3UYKivd",
"GsaRuGNFVIV",
"IP3TzBevVJD",
"fKEElJf5QVc",
"CE-4rdSFkaH",
"KgmOpN7qyFE",
"eHEW7Iel4Z"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns and I think I better understand the paper's contributions. I will increase my score from 3 to 4 as I have mis-understood the the \"scalability\" of the work. I think the empirical contributions of the work may be useful with more tuning. The presentation in the current paper show... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"SfU8KyQXNpD",
"_xyBgWaH1tl",
"GsaRuGNFVIV",
"c24g9sQ40aVD",
"SEhz3UYKivd",
"eHEW7Iel4Z",
"KgmOpN7qyFE",
"CE-4rdSFkaH",
"nips_2022_QFMw21ZKaa_",
"nips_2022_QFMw21ZKaa_",
"nips_2022_QFMw21ZKaa_",
"nips_2022_QFMw21ZKaa_"
] |
nips_2022_lKULHf7oFDo | Fairness in Federated Learning via Core-Stability | Federated learning provides an effective paradigm to jointly optimize a model benefited from rich distributed data while protecting data privacy. Nonetheless, the heterogeneity nature of distributed data, especially in the non-IID setting, makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively ``unfair" for agents with data of high quality to sacrifice their performance due to other agents with low quality data. Currently popular egalitarian and weighted equity-based fairness measures suffer from the aforementioned pitfall. In this work, we aim to formally represent this problem and address these fairness issues using concepts from co-operative game theory and social choice theory. We model the task of learning a shared predictor in the federated setting as a fair public decision making problem, and then define the notion of core-stable fairness: Given $N$ agents, there is no subset of agents $S$ that can benefit significantly by forming a coalition among themselves based on their utilities $U_N$ and $U_S$ (i.e., $ (|S|/ N) U_S \geq U_N$). Core-stable predictors are robust to low quality local data from some agents, and additionally they satisfy Proportionality (each agent gets at least $1/n$ fraction of the best utility that she can get from any predictor) and Pareto-optimality (there exists no model that can increase the utility of an agent without decreasing the utility of another), two well sought-after fairness and efficiency notions within social choice. We then propose an efficient federated learning protocol CoreFed to optimize a core stable predictor. CoreFed determines a core-stable predictor when the loss functions of the agents are convex. CoreFed also determines approximate core-stable predictors when the loss functions are not convex, like smooth neural networks. 
We further show the existence of core-stable predictors in more general settings using Kakutani's fixed point theorem. Finally, we empirically validate our analysis on two real-world datasets, and we show that CoreFed achieves higher core-stability fairness than FedAvg while maintaining similar accuracy. | Accept | This paper introduces core stability as a fairness notion for federated learning, which is motivated by social choice theory. The reviewers all agreed that the paper provides a novel contribution to studying fairness in federated learning. The authors have also addressed reviewers' questions during the discussion period. | val | [
"e-zerruANCT",
"wOmrDc5dYxT",
"YRqGZC428fs",
"jXBAaPV4SW9",
"MmEaWLd7p4Q",
"jMP6-RGgqkk",
"hN3Ec_7SbET",
"h1n4_9R1tm",
"qLu22_PCHQZ",
"buQPKT2Nwq-",
"KHIjhSVaSxV",
"ZryFORN4MEC"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your explanation! Your example makes sense. My concern has been addressed.",
" Thank you very much for going through our response. We first give an explicit example showing that the multiplicative guarantee of $|S|/n$ is tight. Then, we address the concerns raised by the example provided by the revi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"wOmrDc5dYxT",
"YRqGZC428fs",
"h1n4_9R1tm",
"qLu22_PCHQZ",
"qLu22_PCHQZ",
"nips_2022_lKULHf7oFDo",
"ZryFORN4MEC",
"KHIjhSVaSxV",
"buQPKT2Nwq-",
"nips_2022_lKULHf7oFDo",
"nips_2022_lKULHf7oFDo",
"nips_2022_lKULHf7oFDo"
] |
nips_2022_VHzCiK727EL | Learning NP-Hard Multi-Agent Assignment Planning using GNN: Inference on a Random Graph and Provable Auction-Fitted Q-learning | This paper explores the possibility of near-optimally solving multi-agent, multi-task NP-hard planning problems with time-dependent rewards using a learning-based algorithm. In particular, we consider a class of robot/machine scheduling problems called the multi-robot reward collection problem (MRRC). Such MRRC problems well model ride-sharing, pickup-and-delivery, and a variety of related problems. In representing the MRRC problem as a sequential decision-making problem, we observe that each state can be represented as an extension of probabilistic graphical models (PGMs), which we refer to as random PGMs. We then develop a mean-field inference method for random PGMs. We then propose (1) an order-transferable Q-function estimator and (2) an order-transferability-enabled auction to select a joint assignment in polynomial-time. These result in a reinforcement learning framework with at least $1-1/e$ optimality. Experimental results on solving MRRC problems highlight the near-optimality and transferability of the proposed methods. We also consider identical parallel machine scheduling problems (IPMS) and minimax multiple traveling salesman problems (minimax-mTSP). | Accept | The approach is novel and significant but there are concerns about the presentation in the paper. The authors should thoroughly update the paper to clarify (formally) the problem being solved and provide more details about the method. The related work should also be updated to be more extensive. | train | [
"ZCSRDpepg6j",
"xGA0CL1tQ-A",
"GzWcVdeaG2S",
"b3_6jCiuhYX",
"DD_jeUsUHu",
"qwKDPdThwVy",
"1w4ZhDu13ja",
"ny6d7f31yBFu",
"76JozKRhKQv",
"bSb9gKn0JKz",
"b0YeHM7zdoa",
"ZNZhOq_Hw_R",
"KFl98TeSTVx",
"DTkklHRWc13",
"Pvq_fyKH2a"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As we are approaching the end of this rebuttal period, we would like to express our sincere thanks to all reviewers for participating discussion actively. We appreciate all their hard work and efforts in providing comments and suggestions. This constructive discussion indeed helps us improve the quality of our su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
1
] | [
"nips_2022_VHzCiK727EL",
"DD_jeUsUHu",
"1w4ZhDu13ja",
"DTkklHRWc13",
"qwKDPdThwVy",
"ny6d7f31yBFu",
"ZNZhOq_Hw_R",
"bSb9gKn0JKz",
"Pvq_fyKH2a",
"b0YeHM7zdoa",
"DTkklHRWc13",
"KFl98TeSTVx",
"nips_2022_VHzCiK727EL",
"nips_2022_VHzCiK727EL",
"nips_2022_VHzCiK727EL"
] |
nips_2022_2S_GtHBtTUP | Memory safe computations with XLA compiler | Software packages like TensorFlow and PyTorch are designed to support linear algebra operations, and their speed and usability determine their success. However, by prioritising speed, they often neglect memory requirements. As a consequence, the implementations of memory-intensive algorithms that are convenient in terms of software design can often not be run for large problems due to memory overflows. Memory-efficient solutions require complex programming approaches with significant logic outside the computational framework. This impairs the adoption and use of such algorithms. To address this, we developed an XLA compiler extension that adjusts the computational data-flow representation of an algorithm according to a user-specified memory limit. We show that k-nearest neighbour, sparse Gaussian process regression methods and Transformers can be run on a single device at a much larger scale, where standard implementations would have failed. Our approach leads to better use of hardware resources. We believe that further focus on removing memory constraints at a compiler level will widen the range of machine learning methods that can be developed in the future. | Accept | This paper implements a set of optimizations on top of the XLA compiler with the explicit purpose of reducing the memory footprint of an algorithm. This is important because while a lot of optimization work in this space has traditionally focused on speed, memory is more commonly the bottleneck when running large computations. The key strengths of the paper are that it presents a genuinely useful artifact and is generally well explained (except for section 4.3 which is not very well explained). The results also show that one can get some significant memory improvements with zero additional effort if your code already compiles with XLA. 
The main limitations of the paper are limited novelty (none of the transformations are particularly surprising) and limited relevance to NeurIPS---this is really a compilers paper, and it's not even evaluated on any deep learning workloads. These two limitations make this a borderline paper, but all reviewers considered it to be above the bar.
Minor comment: In Section 4.1, in the compiler literature, these match-and-replace optimizations are known as peephole optimizations. The term should at least be mentioned.
"FYbbYQWQRG2",
"KuRW2NTfx78",
"habMpVpqH50",
"hmGgki004Z0",
"5tzAMSyK2POV",
"vzLjW_shlch",
"VaZ_oSjEpUd",
"Yxjr0MfcUct",
"NhiLINDhd_",
"aVqFsMt59Ix"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" It is really cool to see that eXLA can automatically optimize attention. I raised my score to 7",
" Thank you for your detailed explanation! I am convinced by the information the authors provided and am happy to raise the rating from 4 to 5",
" Thank you authors for response, explanation, and improved descrip... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"VaZ_oSjEpUd",
"hmGgki004Z0",
"vzLjW_shlch",
"5tzAMSyK2POV",
"aVqFsMt59Ix",
"NhiLINDhd_",
"Yxjr0MfcUct",
"nips_2022_2S_GtHBtTUP",
"nips_2022_2S_GtHBtTUP",
"nips_2022_2S_GtHBtTUP"
] |
nips_2022_TATzsweWfof | A Communication-efficient Algorithm with Linear Convergence for Federated Minimax Learning | In this paper, we study a large-scale multi-agent minimax optimization problem, which models many interesting applications in statistical learning and game theory, including Generative Adversarial Networks (GANs). The overall objective is a sum of agents' private local objective functions. We focus on the federated setting, where agents can perform local computation and communicate with a central server. Most existing federated minimax algorithms either require communication per iteration or lack performance guarantees with the exception of Local Stochastic Gradient Descent Ascent (SGDA), a multiple-local-update descent ascent algorithm which guarantees convergence under a diminishing stepsize. By analyzing Local SGDA under the ideal condition of no gradient noise, we show that generally it cannot guarantee exact convergence with constant stepsizes and thus suffers from slow rates of convergence. To tackle this issue, we propose FedGDA-GT, an improved Federated (Fed) Gradient Descent Ascent (GDA) method based on Gradient Tracking (GT). When local objectives are Lipschitz smooth and strongly-convex-strongly-concave, we prove that FedGDA-GT converges linearly with a constant stepsize to global $\epsilon$-approximation solution with $\mathcal{O}(\log (1/\epsilon))$ rounds of communication, which matches the time complexity of centralized GDA method. Then, we analyze the general distributed minimax problem from a statistical aspect, where the overall objective approximates a true population minimax risk by empirical samples. We provide generalization bounds for learning with this objective through Rademacher complexity analysis. Finally, we numerically show that FedGDA-GT outperforms Local SGDA. 
| Accept | This paper has three main contributions: (1) a generalization bound for learning in adversarial learning frameworks such as GANs based on Rademacher complexity, (2) a proof that local SGDA with constant stepsize does not converge for these problems in federated settings and therefore does not achieve linear convergence, and (3) a new method which circumvents these issues.
Overall, the consensus was that result (1) was somewhat underwhelming and seemed somewhat disconnected from the rest of the paper. However, results (2) and (3) are compelling and will be of interest to the federated learning community. This is especially true after the authors removed some of the technical assumptions that they required in the first version of the paper. The updated version of the paper could still use some cleaning up and/or reorganizing; however, I think that overall the paper is above the bar for acceptance. | val | [
"JWLVjRjsfyl",
"6szZd2zw8iC",
"SmoQ6UKQIg8",
"wZumcvCr5lY2",
"67g0mvEBSFn",
"yglcNgl2c2R",
"a2YTdc76LIB",
"voWLhebuTBv2",
"a-KgFly53fz",
"qjhIJF3kEk0",
"WEenSYYgbtq",
"jk3X2ck1qCn",
"RBm470ZgliA",
"0Mo04CnR5WT",
"RWQNbDEB1G",
"BVWbMVa0j-r",
"2vv2OhEVR_b",
"Z1hp_Jnxtpb",
"WZaaS85r... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer LKhi,\n\nThank you for reviewing our paper. We received a lot of constructive feedback from your valuable comments! According to your comments, we made detailed clarification and discussions in our response, which we hope you would find helpful. Since the deadline for author-reviewer discussion is q... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"Z1hp_Jnxtpb",
"2vv2OhEVR_b",
"BVWbMVa0j-r",
"yglcNgl2c2R",
"a2YTdc76LIB",
"a-KgFly53fz",
"voWLhebuTBv2",
"WZaaS85rZ_p",
"WZaaS85rZ_p",
"Z1hp_Jnxtpb",
"2vv2OhEVR_b",
"2vv2OhEVR_b",
"2vv2OhEVR_b",
"BVWbMVa0j-r",
"BVWbMVa0j-r",
"nips_2022_TATzsweWfof",
"nips_2022_TATzsweWfof",
"nips_... |
nips_2022_h2imPVlCCyN | On Efficient Online Imitation Learning via Classification | Imitation learning (IL) is a general learning paradigm for sequential decision-making problems. Interactive imitation learning, where learners can interactively query for expert annotations, has been shown to achieve provably superior sample efficiency guarantees compared with its offline counterpart or reinforcement learning. In this work, we study classification-based online imitation learning (abbrev. COIL) and the fundamental feasibility to design oracle-efficient regret-minimization algorithms in this setting, with a focus on the general non-realizable case. We make the following contributions: (1) we show that in the COIL problem, any proper online learning algorithm cannot guarantee a sublinear regret in general; (2) we propose Logger, an improper online learning algorithmic framework, that reduces COIL to online linear optimization, by utilizing a new definition of mixed policy class; (3) we design two oracle-efficient algorithms within the Logger framework that enjoy different sample and interaction round complexity tradeoffs, and show their improvements over behavior cloning; (4) we show that under standard complexity-theoretic assumptions, efficient dynamic regret minimization is infeasible in the Logger framework.
| Accept | This paper studies imitation learning in the classification setting. The paper shows that using proper online learning algorithms is not sufficient to obtain sublinear regret, and devises an improper learning framework that relies on online linear optimization, resulting in provably efficient algorithms.
All the reviewers appreciated the theoretical novelty and are unanimous in their decision to accept the paper. Please incorporate the reviewers' feedback and the resulting discussion. Adding in some basic experimental results (outlined in the "Experimental plan" comment) would strengthen the paper. | train | [
"dJdj8_-fuRG",
"LiGJ-PADbcG",
"3Fpl3WOFRzA",
"MN8Lz_kGKQG",
"FHO8yzgEW3mN",
"odkNRja789",
"AMZbbgpz3c3",
"_SaDOIXVla",
"xrGFbMe-u3St",
"vsZbesNiWsN",
"Elw2ts8titl",
"gPvj4XzlZQs",
"cWkSL-FiYLt",
"iM-SYwCB_8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my comments. I think that this paper as mentioned above addresses problems that are important to the community, while producing non-trivial theoretical results. Without some really basic experimentation added to the paper, I wont be increasing my score to a full accept. I appreciate the t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"_SaDOIXVla",
"3Fpl3WOFRzA",
"MN8Lz_kGKQG",
"AMZbbgpz3c3",
"vsZbesNiWsN",
"iM-SYwCB_8",
"cWkSL-FiYLt",
"gPvj4XzlZQs",
"Elw2ts8titl",
"nips_2022_h2imPVlCCyN",
"nips_2022_h2imPVlCCyN",
"nips_2022_h2imPVlCCyN",
"nips_2022_h2imPVlCCyN",
"nips_2022_h2imPVlCCyN"
] |
nips_2022_VoLXWO1L-43 | AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness | Scaling up model sizes can lead to fundamentally new capabilities in many machine learning (ML) tasks. However, training big models requires strong distributed system expertise to carefully design model-parallel execution strategies that suit the model architectures and cluster setups. In this paper, we develop AMP, a framework that automatically derives such strategies. AMP identifies a valid space of model parallelism strategies and efficiently searches the space for high-performed strategies, by leveraging a cost model designed to capture the heterogeneity of the model and cluster specifications. Unlike existing methods, AMP is specifically tailored to support complex models composed of uneven layers and cluster setups with more heterogeneous accelerators and bandwidth. We evaluate AMP on popular models
and cluster setups from public clouds and show that AMP returns parallel strategies that match the expert-tuned strategies on typical cluster setups. On heterogeneous clusters or models with heterogeneous architectures, AMP finds strategies with 1.54$\times$ and 1.77$\times$ higher throughput than state-of-the-art model-parallel systems, respectively. | Accept | This paper presents AMP, a method to automatically find the optimal parallelization strategy for large model training on heterogeneous compute resources taking into account the model architectures and cluster setups. The method is based on a model to estimate the cost of candidate strategies. A combination of heuristics and optimization techniques is used to determine the degree of parallelism, the micro batch size, the device assignment and the pipeline arrangement.
The reviewers agree that the paper addresses a relevant and timely challenge for model training, it proposes a solution that is more general than previous approaches, and it shows promising empirical performance. Most of the questions about the method and concerns about the presentation could be addressed in the rebuttal. The overall assessment of all reviewers is positive; I thus recommend acceptance, counting on the authors to take the feedback into account and incorporate the author response into the revision.
However, I want to emphasize that I think two concerns that came up are important and investing more in them could strengthen the paper a lot.
- First, the cost model is at the core of the contribution and underlies your parallelization strategy. So a more comprehensive ablation study (as requested by two reviewers) would be appropriate. The spearman correlation numbers that you added in response provide valuable information, but they still leave many questions unanswered of how the cost model performs beyond the experimental setup you are currently using and how it behaves along different parameters in the cluster configuration. Such an ablation study could be evaluated independently of the benchmark results. Any effort along this direction to provide additional insights would be helpful and appreciated.
- Second, as you acknowledge experiments are at relatively small scale. However scalability is an important aspect of modern systems that is not discussed sufficiently in the paper. I understand that hardware resources might not be available and there is a justified use case also at smaller scale, but you should be forthcoming about the potential limitations of your model and the search algorithm that come with the size of the cluster you apply your algorithm to. Being clear about the scope of the work helps the reader judge the applicability of your solution in practical problems. | train | [
"b5xplPWko7t",
"Gw2Jg5R3BJC",
"XPAaaOYwXVdF",
"DNOvzpwoR9C",
"YwUESg34m_",
"-PsKAfCxLTH",
"uwlSw_gSvZy",
"_eAtSMSCh5",
"BN_-IBHIWCL",
"DM-1B-IQZAv",
"POoVYfgLvZ0",
"jRxk1W8FQjw"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Response: We thank the reviewer for further constructive comments! To address the cluster scale concern, we further tested AMP's running time on TransGAN (a 24-layer Transformer model) over clusters with sizes of 2x2, 4x4, 8x8, and 16x16, i.e., up to 256 GPUs. The results are shown in the Table below:\n\n| Cluste... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
2
] | [
"Gw2Jg5R3BJC",
"-PsKAfCxLTH",
"YwUESg34m_",
"jRxk1W8FQjw",
"POoVYfgLvZ0",
"DM-1B-IQZAv",
"BN_-IBHIWCL",
"nips_2022_VoLXWO1L-43",
"nips_2022_VoLXWO1L-43",
"nips_2022_VoLXWO1L-43",
"nips_2022_VoLXWO1L-43",
"nips_2022_VoLXWO1L-43"
] |
nips_2022_8bk68fodvD5 | Nonstationary Dual Averaging and Online Fair Allocation | We consider the problem of fairly allocating sequentially arriving items to a set of individuals. For this problem, the recently-introduced PACE algorithm leverages the dual averaging algorithm to approximate competitive equilibria and thus generate online fair allocations. PACE is simple, distributed, and parameter-free, making it appealing for practical use in large-scale systems. However, current performance guarantees for PACE require i.i.d. item arrivals. Since real-world data is rarely i.i.d., or even stationary, we study the performance of PACE on nonstationary data. We start by developing new convergence results for the general dual averaging algorithm under three nonstationary input models: adversarially-corrupted stochastic input, ergodic input, and block-independent (including periodic) input. Our results show convergence of dual averaging up to errors caused by nonstationarity of the data, and recover the classical bounds when the input data is i.i.d. Using these results, we show that the PACE algorithm for online fair allocation simultaneously achieves ``best of many worlds'' guarantees against any of these nonstationary input models as well as against i.i.d. input. Finally, numerical experiments show strong empirical performance of PACE against nonstationary inputs. | Accept | This paper studies the problem of online fair allocation. PACE algorithm has been proposed earlier to tackle this problem. Earlier analysis of this algorithm was under i.i.d. assumption. This paper presents a significant extension of the earlier work, providing guarantees for PACE under a significantly less restrictive data generating processes, e.g. adversarially-corrupted stochastic input, ergodic input, and block-independent (including periodic) input. This extension advances theoretical understanding of PACE algorithm and is of interest to Neurips community. 
The paper is recommended for acceptance. | train | [
"khebUiFtJTh",
"dOVIBle75B-",
"mD5ZycHVQ3E",
"be_sO-dtPB",
"TY3334oRU0g",
"ilYUKX9dvcB",
"tTDIMeQ9cK",
"kWgukRDGcl0",
"4vRLTp_dmMm",
"L4Gk92FbZvm",
"3jSaf48WXA",
"klgwfEjMH9b",
"FwldoFFyTmD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response! That addressed my questions.",
" Thanks for replying to my questions. After some clarifications on Q2, the contribution of this paper is more concrete to me. I am happy to increase the score accordingly. ",
" > Suppose that there are two submissions. One has important and interesting... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4,
1
] | [
"TY3334oRU0g",
"mD5ZycHVQ3E",
"be_sO-dtPB",
"tTDIMeQ9cK",
"FwldoFFyTmD",
"L4Gk92FbZvm",
"klgwfEjMH9b",
"3jSaf48WXA",
"L4Gk92FbZvm",
"nips_2022_8bk68fodvD5",
"nips_2022_8bk68fodvD5",
"nips_2022_8bk68fodvD5",
"nips_2022_8bk68fodvD5"
] |
nips_2022_opw858PBJl6 | New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound | Saliency methods compute heat maps that highlight portions of an input that were most important for the label assigned to it by a deep net. Evaluations of saliency methods convert this heat map into a new masked input by retaining the $k$ highest-ranked pixels of the original input and replacing the rest with "uninformative" pixels, and checking if the net's output is mostly unchanged. This is usually seen as an explanation of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by logic concepts of completeness & soundness, it observes that the above type of evaluation focuses on completeness of the explanation, but ignores soundness. New evaluation metrics are introduced to capture both notions, while staying in an intrinsic framework---i.e., using the dataset and the net, but no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in the evaluations. Experiments also suggest new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling. | Accept | The paper introduces and formalizes new evaluation metrics to ensure the goodness of saliency methods.
The reviewers' consensus about the paper was positive. They found the paper's contributions clear and significant and also appreciated its originality. I therefore recommend acceptance.
| train | [
"qiCZFABBae",
"-s-VFECJf6T",
"gjPTvVaLBor",
"CyvbLsb6LkL",
"uNOnGRrHi8c",
"IACLMOKrWPF",
"n-YRCuVzZDF",
"az5ZG2nDSn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I think authors addressed reviewers comments in a satisfying way. I confirm my rating.",
" I want to thank the authors for responding to and incorporating suggestions from my fellow reviewers and me. I continue to recommend accepting this paper.",
" **[kmbL, Ze6X, Ea2U]:** Have prior intrinsic methods been p... | [
-1,
-1,
-1,
-1,
7,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"gjPTvVaLBor",
"gjPTvVaLBor",
"CyvbLsb6LkL",
"nips_2022_opw858PBJl6",
"nips_2022_opw858PBJl6",
"nips_2022_opw858PBJl6",
"nips_2022_opw858PBJl6",
"nips_2022_opw858PBJl6"
] |
nips_2022_2FNnBhwJsHK | A Unified Framework for Deep Symbolic Regression | The last few years have witnessed a surge in methods for symbolic regression, from advances in traditional evolutionary approaches to novel deep learning-based systems. Individual works typically focus on advancing the state-of-the-art for one particular class of solution strategies, and there have been few attempts to investigate the benefits of hybridizing or integrating multiple strategies. In this work, we identify five classes of symbolic regression solution strategies---recursive problem simplification, neural-guided search, large-scale pre-training, genetic programming, and linear models---and propose a strategy to hybridize them into a single modular, unified symbolic regression framework. Based on empirical evaluation using SRBench, a new community tool for benchmarking symbolic regression methods, our unified framework achieves state-of-the-art performance in its ability to (1) symbolically recover analytical expressions, (2) fit datasets with high accuracy, and (3) balance accuracy-complexity trade-offs, across 252 ground-truth and black-box benchmark problems, in both noiseless settings and across various noise levels. Finally, we provide practical use case-based guidance for constructing hybrid symbolic regression algorithms, supported by extensive, combinatorial ablation studies. | Accept | The paper presents a novel deep symbolic regression approach that is a hybridization of existing methods, showing state-of-the-art performance on the SRBench. Let me stress that being a hybrid is no reason to reject a paper, as creating hybrids can be a very creative contribution. And for me this is the case here: the hybrid is not just a "mixture" but actually a very creative rewiring of components of the underlying approaches. This is also supported by the ablation study. Moreover, several of the
issues raised were clarified well in the rebuttal. BTW, the authors may also want to cite other approaches for equation discovery, see e.g.
Jure Brence, Ljupco Todorovski, Saso Dzeroski:
Probabilistic grammars for equation discovery.
Knowl. Based Syst. 224: 107077 (2021)
Will Bridewell, Pat Langley, Ljupco Todorovski, Saso Dzeroski:
Inductive process modeling. Mach. Learn. 71(1): 1-32 (2008)
But this only for making the paper more self-complete. | train | [
"_UWRXziSJJ6",
"ZQ-hVZs2GIS",
"AY4BrBcZWrd",
"I5sQBnjserR",
"3PPJNvGkq3B9",
"_9l8jdO_cqy",
"JyDg9tJMkGs",
"Xo7tblw4u44",
"JJFBgrrNt75",
"vwvaipFHOaq",
"KM_ku0caz6l",
"ZUPDH0X-fcX"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer thanks the authors again for the continuous effort on clarifications and additional analysis to defend the work.\n\nMost of the main concerns the reviewer raised are resolved, and including these discussions would even strengthen this work.\nThe reviewer is now more convinced and made the (probably) ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ZQ-hVZs2GIS",
"AY4BrBcZWrd",
"I5sQBnjserR",
"3PPJNvGkq3B9",
"_9l8jdO_cqy",
"ZUPDH0X-fcX",
"Xo7tblw4u44",
"KM_ku0caz6l",
"vwvaipFHOaq",
"nips_2022_2FNnBhwJsHK",
"nips_2022_2FNnBhwJsHK",
"nips_2022_2FNnBhwJsHK"
] |
nips_2022_epjxT_ARZW5 | Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation | Uncertainty quantification has received increasing attention in machine learning in the recent past. In particular, a distinction between aleatoric and epistemic uncertainty has been found useful in this regard. The latter refers to the learner's (lack of) knowledge and appears to be especially difficult to measure and quantify. In this paper, we analyse a recent proposal based on the idea of a second-order learner, which yields predictions in the form of distributions over probability distributions. While standard (first-order) learners can be trained to predict accurate probabilities, namely by minimising suitable loss functions on sample data, we show that loss minimisation does not work for second-order predictors: The loss functions proposed for inducing such predictors do not incentivise the learner to represent its epistemic uncertainty in a faithful way. | Accept | This meta review is based on the reviews, the authors rebuttal and the discussion with the reviewers, and ultimately my own judgement on the paper. There was a consensus that the paper contributes interesting insights on uncertainty quantification, and most reviewers praised several aspects of the submission. I feel this work deserves to be featured at NeurIPS and will attract interest from the community. I would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by reviewers. Congratulations! | train | [
"U3FAPcqK0A",
"Ii9F69UWnvk",
"WNY99S7sC8k",
"s6jMIGCYaKr",
"E95El76k88S",
"D8eVcBx048",
"GHPWXp74GFI",
"luwwvy-Xts",
"xpSFnD5XBeZ",
"va_cu_biS2j",
"IYVSPJyxfed",
"CVKhWv7LAst"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification to my questions. The response was very informative. \n\n> Due to page restrictions, we have given only one experiment for illustration. However, as the accepted papers are allowed an additional content page, it would be no problem to extend the experiments as the reviewer suggests.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"E95El76k88S",
"WNY99S7sC8k",
"GHPWXp74GFI",
"D8eVcBx048",
"CVKhWv7LAst",
"IYVSPJyxfed",
"va_cu_biS2j",
"xpSFnD5XBeZ",
"nips_2022_epjxT_ARZW5",
"nips_2022_epjxT_ARZW5",
"nips_2022_epjxT_ARZW5",
"nips_2022_epjxT_ARZW5"
] |
nips_2022_9-vs8BucEoo | Best of Both Worlds Model Selection | We study the problem of model selection in bandit scenarios in the presence of nested policy classes, with the goal of obtaining simultaneous adversarial and stochastic (``best of both worlds") high-probability regret guarantees. Our approach requires that each base learner comes with a candidate regret bound that may or may not hold, while our meta algorithm plays each base learner according to a schedule that keeps the base learner's candidate regret bounds balanced until they are detected to violate their guarantees. We develop careful mis-specification tests specifically designed to blend the above model selection criterion with the ability to leverage the (potentially benign) nature of the environment. We recover the model selection guarantees of the CORRAL algorithm for adversarial environments, but with the additional benefit of achieving high probability regret bounds. More importantly, our model selection results also hold simultaneously in stochastic environments under gap assumptions. These are the first theoretical results that achieve best-of-both world (stochastic and adversarial) guarantees while performing model selection in contextual bandit scenarios.
| Accept | This work advances the direction on model selection for bandit problems with nested model classes. Reviewers all agree that the results are significant, the contribution is solid, and the paper is well written. Clear accept. | train | [
"fPK3tebdwcM",
"T7yu0CAQT3Y",
"aDMfClU279c",
"pn1X0S_o90Q",
"rO6Ml-3Hf_k",
"IEokEeQ1bp",
"b_R_QPMyZV1",
"vpyfWce7IsZ",
"BlxDzlXPREY",
"8cCx6wAYwFs",
"7jxB9Q8FxXN",
"OgH7pH71lOo"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed explanation. I'll raise my rating to 6.",
" Thanks so much for your comments. We will certainly update the manuscript to include the reviewer's suggestions in the camera ready. ",
" Thank you for your comments. We will further clarify the regret rate we can hope for in the linear ba... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"aDMfClU279c",
"pn1X0S_o90Q",
"rO6Ml-3Hf_k",
"IEokEeQ1bp",
"vpyfWce7IsZ",
"OgH7pH71lOo",
"7jxB9Q8FxXN",
"8cCx6wAYwFs",
"nips_2022_9-vs8BucEoo",
"nips_2022_9-vs8BucEoo",
"nips_2022_9-vs8BucEoo",
"nips_2022_9-vs8BucEoo"
] |
nips_2022_vWUmBjin_-o | Structuring Representations Using Group Invariants | A finite set of invariants can identify many interesting transformation groups. For example, distances, inner products and angles are preserved by Euclidean, Orthogonal and Conformal transformations, respectively. In an equivariant representation, the group invariants should remain constant on the embedding as we transform the input. This gives a procedure for learning equivariant representations without knowing the possibly nonlinear action of the group in the input space. Rather than enforcing such hard invariance constraints on the latent space, we show how to use invariants for "symmetry regularization" of the latent, while guaranteeing equivariance through other means. We also show the feasibility of learning disentangled representations using this approach and provide favorable qualitative and quantitative results on downstream tasks, including world modeling and reinforcement learning. | Accept | This paper proposes a self-supervised framework for learning equivariant representations (for a given group) from data. Applications in reinforcement learning support the claims in the paper. All reviewers agreed that the approach is interesting to the neurips community. Accept | train | [
"uL9hQKf6pD_",
"Ylvx69jsBAG",
"hM1IkoFsKOD",
"ARPknkbcD4",
"3Y91k8xdyqa",
"x1oJbhx0Lc7",
"qfDxcLPwV_u",
"ksp89UN8Syp",
"tgrrv4Md_7D",
"T7s3uyrf2f7"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As today is the last day of the discussion period, we would like to know if the reviewer needs any further clarifications on the paper or our comments.\n\nWe will also like to reiterate that the goal of this work is to design a method to learn equivariance and regularize it for linear action in the latent using I... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"3Y91k8xdyqa",
"T7s3uyrf2f7",
"tgrrv4Md_7D",
"ksp89UN8Syp",
"qfDxcLPwV_u",
"nips_2022_vWUmBjin_-o",
"nips_2022_vWUmBjin_-o",
"nips_2022_vWUmBjin_-o",
"nips_2022_vWUmBjin_-o",
"nips_2022_vWUmBjin_-o"
] |
nips_2022_u_7qyNFwkP8 | The Query Complexity of Cake Cutting | We consider the query complexity of cake cutting in the standard query model and give lower and upper bounds for computing approximately envy-free, perfect, and equitable allocations with the minimum number of cuts. The lower bounds are tight for computing contiguous envy-free allocations among $n=3$ players and for computing perfect and equitable allocations with minimum number of cuts between $n=2$ players. For $\epsilon$-envy-free allocations with contiguous pieces, we also give an upper bound of $O(n/\epsilon)$ and lower bound of $\Omega(\log(1/\epsilon))$ queries for any number $n \geq 3$ of players.
We also formalize moving knife procedures and show that a large subclass of this family, which captures all the known moving knife procedures, can be simulated efficiently with arbitrarily small error in the Robertson-Webb query model. | Accept | This paper considers the well-studied cake-cutting problem that captures the fair division of a resource (e.g., time, mineral deposits, fossil fuels, and many others) among n parties with equal rights but different interests over the resource. This is also a natural problem in algorithmic game theory. Various such settings are considered where the i'th party gets part P(i) at the end of the protocol. Approximations (various notions, parametrized by some error epsilon > 0) are essential to the results in the paper.
In the first part of the paper, new upper and lower bounds are developed in the Robertson-Webb model of discrete protocols. For instance, "epsilon-envy-free" protocols are considered (for all (i,j), player i thinks---according to their valuation function---that P(i) is at least P(j) - epsilon AND P(i) is connected) and it is shown, e.g., that: (a) for n=3, Theta(log (1/epsilon)) queries are necessary and sufficient; and (b) for n >= 4, an upper bound of O(n/epsilon) queries and a lower bound of Omega(1/epsilon) queries. This is just a sample. In the second part of the paper, another cake-cutting approach---Moving Knife (MK)---is considered: the paper introduces a model for MK protocols that captures all current MK protocols and shows new results.
This is a comprehensive paper on a natural mathematical model for fair division. | train | [
"EWGmOQtaZsg",
"VVH1x_CPOC_",
"FwiLBJY4pbX",
"PiEXNebWGsZ",
"w21FlTUJvai",
"lULapp0SKwfS",
"Y0Gqti7gWcGT",
"wuhGTEdxIC",
"QzT6YxXeQLH",
"aF5b_vMPM5",
"bx2oR4_7Mn4",
"Lq6Ptb_8qx",
"UOL1VJl2UHF"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have satisfied the few negative comments I had. \nI originally gave the paper an 8-strong accept, and I still give it that rating. ",
" The paper by Branzei and Nisan '19 cites our work and builds on it (e.g. uses our definition of moving knife protocols). We agree we should cite them too and we wil... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"lULapp0SKwfS",
"w21FlTUJvai",
"PiEXNebWGsZ",
"w21FlTUJvai",
"Y0Gqti7gWcGT",
"UOL1VJl2UHF",
"Lq6Ptb_8qx",
"bx2oR4_7Mn4",
"aF5b_vMPM5",
"nips_2022_u_7qyNFwkP8",
"nips_2022_u_7qyNFwkP8",
"nips_2022_u_7qyNFwkP8",
"nips_2022_u_7qyNFwkP8"
] |
nips_2022_cUOR-_VsavA | Structural Pruning via Latency-Saliency Knapsack | Structural pruning can simplify network architecture and improve inference speed. We propose Hardware-Aware Latency Pruning (HALP) that formulates structural pruning as a global resource allocation optimization problem, aiming at maximizing the accuracy while constraining latency under a predefined budget on targeting device. For filter importance ranking, HALP leverages latency lookup table to track latency reduction potential and global saliency score to gauge accuracy drop. Both metrics can be evaluated very efficiently during pruning, allowing us to reformulate global structural pruning under a reward maximization problem given target constraint. This makes the problem solvable via our augmented knapsack solver, enabling HALP to surpass prior work in pruning efficacy and accuracy-efficiency trade-off. We examine HALP on both classification and detection tasks, over varying networks, on ImageNet and VOC datasets, on different platforms. In particular, for ResNet-50/-101 pruning on ImageNet, HALP improves network throughput by $1.60\times$/$1.90\times$ with $+0.3\%$/$-0.2\%$ top-1 accuracy changes, respectively. For SSD pruning on VOC, HALP improves throughput by $1.94\times$ with only a $0.56$ mAP drop. HALP consistently outperforms prior art, sometimes by large margins. Project page at \url{https://halp-neurips.github.io/}. | Accept | Structured neural network pruning is an important problem for efficient inference. Reducing the inference time of the network has many downstream impacts for practical deployments of learned neural networks. The paper tackles the problem of reducing GPU wall-clock latency via structured pruning, which is difficult due to the challenges of modeling GPU latency.
The paper proposes latency aware structural pruning (LASP). The idea is to prune less important channels subject to latency budget constraints. The paper seems to build on top of [1] and is closely related to [2]. The main contribution is a) modeling the hardware latency via the lookup table modeling the staircase pattern present in GPUs and b) corresponding resource allocation optimization problem. I think the LASP method provides a clear contribution to the research community in structured neural network pruning and the systematic experiments back up the claim with actual wall clock speedup results on GPUs.
Overall, I would like to suggest the authors to include the baseline comparison results w.r.t GBN and QCQP in the main paper. Also, as a possible future work, I suggest the authors to consider switching the linear objective to the quadratic objective suggested in [2]. The quadratic objective takes into account the coupling effect between neighboring layers: pruning output channels leads to inactive weights along the corresponding input channels in the subsequent layer.
[1] Aflalo et al. Knapsack pruning with inner distillation. 2020
[2] Jeong et al. Optimal channel selection with discrete QCQP. AISTATS 2022
"tnjazi00gV",
"b0v3JyfFQH",
"FEeefA_AXUD",
"dcAKiQ5IApu",
"EBTvhuIPpLa",
"HRyQFkJZSJy",
"T3TPiCZEvG",
"fgw5v3LUShF",
"0Az93zmOhqa",
"eGkhnXIjq8H",
"OJZs7SMLjfU",
"aikkYXWj0WX",
"A5md4YzJGAx",
"pe0SlV_t7xu",
"hcEQgBYh3XC",
"aUfVEL-oVry"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and consideration. We would like to highlight one more time the contributions of this work:\n\n1. We use the latency-aware grouping, which assigns different grouping sizes to each layer according to the latency traits rather than predefined fixed values. To the best of our knowledge, we ar... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
3
] | [
"b0v3JyfFQH",
"FEeefA_AXUD",
"eGkhnXIjq8H",
"aikkYXWj0WX",
"HRyQFkJZSJy",
"0Az93zmOhqa",
"nips_2022_cUOR-_VsavA",
"aUfVEL-oVry",
"hcEQgBYh3XC",
"OJZs7SMLjfU",
"pe0SlV_t7xu",
"A5md4YzJGAx",
"nips_2022_cUOR-_VsavA",
"nips_2022_cUOR-_VsavA",
"nips_2022_cUOR-_VsavA",
"nips_2022_cUOR-_VsavA... |
nips_2022_Roiw2Trm-qP | Subgame Solving in Adversarial Team Games | In adversarial team games, a team of players sequentially faces a team of adversaries. These games are the simplest setting with multiple players where cooperation and competition coexist, and it is known that the information asymmetry among the team members makes equilibrium approximation computationally hard. Although much effort has been spent designing scalable algorithms, the problem of solving large game instances is open. In this paper, we extend the successful approach of solving huge two-*player* zero-sum games, where a blueprint strategy is computed offline by using an abstract version of the game and then it is refined online, that is, during a playthrough. In particular, to the best of our knowledge, our paper provides the first method for online strategy refinement via subgame solving in adversarial team games. Our method, based on the team belief DAG, generates a gadget game and then refines the blueprint strategy by using column-generation approaches in an anytime fashion. If the blueprint is sparse, then our whole algorithm runs end-to-end in polynomial time given a best-response oracle; in particular, it avoids expanding the whole team belief DAG, which has exponential worst-case size. We apply our method to a standard test suite, and we empirically show the performance improvement of the strategies thanks to subgame solving. | Accept | The most serious and focused criticism for this paper comes from a single reviewer, who also happens to be the most confident reviewer. The crux of the criticism is that the authors should have compared against the 2021 AAMAS paper, "Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies" and the algorithm described therein. The authors point out that this algorithm was designed for symmetric observations and was never tested in the more general case. The reviewer thinks it might work well in the asymmetric case.
He could be right, but it seems a bit much to require that the authors compare against a reviewer's hunch about modifications to existing work. If we allow reviewers to drag authors down that rabbit hole, nothing might ever get published.
The sentiment of the other reviewers is weakly positive. There are general concerns that the games used in the experiments are not particularly large, and about scaling to larger games.
On the positive side, the approach does appear to be novel and it does appear to do well in the provided experiments. | train | [
"4H-en-fadLX",
"HqMs0bRewE",
"E9eKaB7Yv6n",
"P2h2rJqFyFL",
"hPDJ5aMPcqM",
"dSR4tY8oqwI",
"kWvH8lXbblH",
"Bo5IKHtNf4",
"auBb3loAhn3",
"djtMwtOqoz",
"Xo4qzY8VL5A",
"trF8R_YArjV",
"tV1yGqDAgFt",
"_fET9wI2oZs"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My point is that the above work I mentioned should be an important baseline for subgame solving because both cannot guarantee convergence to TMECor. The current version of subgame solving still suffers the limitation of memory due to the CG subroutine, but the above work I mentioned does not suffer that limitatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5,
3
] | [
"E9eKaB7Yv6n",
"P2h2rJqFyFL",
"hPDJ5aMPcqM",
"dSR4tY8oqwI",
"auBb3loAhn3",
"_fET9wI2oZs",
"nips_2022_Roiw2Trm-qP",
"tV1yGqDAgFt",
"trF8R_YArjV",
"Xo4qzY8VL5A",
"nips_2022_Roiw2Trm-qP",
"nips_2022_Roiw2Trm-qP",
"nips_2022_Roiw2Trm-qP",
"nips_2022_Roiw2Trm-qP"
] |
nips_2022_0gouO5saq6K | Multi-Game Decision Transformers | A longstanding goal of the field of AI is a method for learning a highly capable, generalist agent from diverse experience. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model – with a single set of weights – trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction. | Accept | This paper demonstrates the generalization abilities of decision transformers relative to other approaches in multi-task settings. The reviewers found the topic interesting and the results compelling, unanimously supporting its inclusion in the conference programme. Accept. | train | [
"pmHQ8qkJ75k",
"TQNVZkzQV1m",
"n2Aubn-axrG",
"cIH9-9kBy9M",
"nNiyOM180Cu",
"HSk1LwBznQb",
"c1xE_vdZsId",
"_Ih9FOJ-C6I",
"W091wt6GL6W"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful review! We are happy that the significance of this work is recognized. \n\n> Appendix\n\nWe're not sure if there was a technical issue for you, but there was a “Supplementary Material: ⬇ pdf” link on the OpenReview page (below the paper abstract), before we updated with the rebuttal... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"W091wt6GL6W",
"_Ih9FOJ-C6I",
"c1xE_vdZsId",
"HSk1LwBznQb",
"HSk1LwBznQb",
"nips_2022_0gouO5saq6K",
"nips_2022_0gouO5saq6K",
"nips_2022_0gouO5saq6K",
"nips_2022_0gouO5saq6K"
] |
nips_2022_fWHOcnHb1n | Parameter-free Regret in High Probability with Heavy Tails | We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret in high-probability given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded domain case, we cannot rely on straightforward martingale concentration due to exponentially large iterates produced by the algorithm. We develop new regularization techniques to overcome these problems. Overall, with probability at most δ, for all comparators u our algorithm achieves regret Õ(∥u∥ T^{1/p} log(1/δ)) for subgradients with bounded pth moments for some p ∈ (1, 2]. | Accept | The paper makes an interesting contribution to the literature on online convex optimization with heavy-tailed stochastic gradients, including those with infinite variance. While two reviewers (tcgd and WCJh) were positive about the paper, one reviewer (Wska) had concerns about the motivation and the technical contributions (commenting about it at a very high level).
In my own reading of the paper, I find it well-written, highlighting the main challenges and the proof strategy used to overcome them. The regret bounds obtained are also optimal in a certain sense, as they achieve recently obtained lower bounds.
Hence it is recommended for acceptance at NeurIPS. | train | [
"eBs6-5IYmeM",
"tacH96MvSiq",
"2NRr1uvwlmk",
"V7nvsUfwd8j",
"6B4Le3DhjP4",
"rRWCcyaB1u",
"btQG3oK4Clx",
"gnxMffoUkNu"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I read the other reviews and the author's rebuttal. I'll keep my initial evaluation.",
" Thanks for your helpful comments and questions, we hope our answers below will help clarify the contribution. We think that incorporating these clarifications will improve our manuscript. If you agr... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"V7nvsUfwd8j",
"gnxMffoUkNu",
"btQG3oK4Clx",
"rRWCcyaB1u",
"nips_2022_fWHOcnHb1n",
"nips_2022_fWHOcnHb1n",
"nips_2022_fWHOcnHb1n",
"nips_2022_fWHOcnHb1n"
] |
nips_2022_0VhrZPJXcTU | Learning to Compare Nodes in Branch and Bound with Graph Neural Networks | Branch-and-bound approaches in integer programming require ordering portions of the space to explore next, a problem known as node comparison. We propose a new siamese graph neural network model to tackle this problem, where the nodes are represented as bipartite graphs with attributes. Similar to prior work, we train our model to imitate a diving oracle that plunges towards the optimal solution. We evaluate our method by solving the instances in a plain framework where the nodes are explored according to their rank. On three NP-hard benchmarks chosen to be particularly primal-difficult, our approach leads to faster solving and smaller branch- and-bound trees than the default ranking function of the open-source solver SCIP, as well as competing machine learning methods. Moreover, these results generalize to instances larger than used for training. Code for reproducing the experiments can be found at https://github.com/ds4dm/learn2comparenodes. | Accept | This paper clearly documents a well-executed exploration of an imitation learning / neural diving approach to improving node selection in BB solvers using a GNN approach.
I believe this paper is a useful contribution that pushes forward the important project of integrating modern ML techniques to improve integer programming.
The reviewers were less sanguine than the meta-reviewer. I will explain why.
The authors note, and I concur, that node selection strategies in modern BB solvers have been *heavily* researched for decades, so beating their performance on the majority of these difficult tasks is quite an achievement! (This point confused the first reviewer.)
The second reviewer was disappointed that the authors used an "antiquated" GNN architecture, or compared to "antiquated" baselines, but did not substantiate which modern architectures or comparisons would have been better. In the absence of constructive criticism, I construe these as a misapplication of the standards of deep learning (very fast progress in beating benchmarks) to integer programming (a mature field where progress is slower).
While several GNN approaches are available for variable selection in BB solvers, node selection is an important and independent challenge.
I am not aware of any previous work on deep learning for node selection (nor are the authors),
which explains why the authors chose to compare their GNN-based approach to other strategies they created and the default strategy in SCIP rather than the (non-existent) deep-learning SOTA for the problem.
(This point confused the third reviewer.) | val | [
"azOpLNeMvQh",
"2cYqJ68ryUp",
"L5h8OYam6Si",
"kd3kTsY-rYp",
"A4ANC67MUi2",
"dZlkhk0ALO"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their time and insights in reviewing our paper. However, we believe there is a misunderstanding with regards to what the paper is achieving. There are several distinct decision making tasks that are repeatedly performed by solvers when solving MILPs. One is the task of selecting which no... | [
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"dZlkhk0ALO",
"A4ANC67MUi2",
"kd3kTsY-rYp",
"nips_2022_0VhrZPJXcTU",
"nips_2022_0VhrZPJXcTU",
"nips_2022_0VhrZPJXcTU"
] |
nips_2022_W72rB0wwLVu | Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox | Inspired by a recent breakthrough of Mishchenko et al. [2022], who for the first time showed that local gradient steps can lead to provable communication acceleration, we propose an alternative algorithm which obtains the same communication acceleration as their method (ProxSkip). Our approach is very different, however: it is based on the celebrated method of Chambolle and Pock [2011], with several nontrivial modifications: i) we allow for an inexact computation of the prox operator of a certain smooth strongly convex function via a suitable gradient-based method (e.g., GD or Fast GD), ii) we perform a careful modification of the dual update step in order to retain linear convergence. Our general results offer the new state-of-the-art rates for the class of strongly convex-concave saddle-point problems with bilinear coupling characterized by the absence of smoothness in the dual function. When applied to federated learning, we obtain a theoretically better alternative to ProxSkip: our method requires fewer local steps ($\mathcal{O}(\kappa^{1/3})$ or $\mathcal{O}(\kappa^{1/4})$, compared to $\mathcal{O}(\kappa^{1/2})$ of ProxSkip), and performs a deterministic number of local steps instead. Like ProxSkip, our method can be applied to optimization over a connected network, and we obtain theoretical improvements here as well. | Accept | The paper obtains new algorithms in the domain of federated learning that provide state of the art complexity guarantees in terms of communication and local gradient oracle queries. These results are obtained by combining ideas and techniques from the literature on proximal splitting algorithms with the specific setting of federated learning. 
While the reviewers generally appreciated the contributions of the paper and its clarity of presentation, the authors are advised to carefully consider the feedback provided by the reviewers when preparing a revision of the paper. Most notably, the paper should make a more careful comparison to existing work, as recommended by Reviewer zYvm; some of these points are summarized here, while the rest can be found in the original review and the discussion.
* There should be clear pointers to the literature for the results that are essentially re-proven in this work. A specific example is the set of results from Appendix B. Similarly, it needs to be stated clearly which parts of the analysis of Chambolle-Pock are reproduced.
* Appendix D should compare to the results of Kovalev et al.
* Appendix G needs to state clearly that all of the methods mentioned there were already known and provide correct references for all. Further, there needs to be a more careful comparison to the results of Kim and Fessler already cited in the same section.
| train | [
"aeQFORrOoj",
"gqpu6pXcs2P",
"w12METzwvJC",
"QhApq_dWbxL",
"lBkG9g9Gjb7",
"SxAC2muR98R",
"RvKl8TWRbBU",
"P4y2Q917yo",
"Vc0u1UsaJ8X",
"eS7sUbcxZ61",
"j5CqfIcerHU",
"tRKHXf7GiSu",
"dZMXueJc-BR",
"uBsyEDHZ5k",
"xNix6nL-tI9",
"wCLzeLHIIwY",
"ujVcKaKnlXq",
"lMGT2R7e0uY",
"eily52z-JhR"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" > You are right, my bad. These works do not use algorithms for gradient minimization. \n\nThanks for checking and acknowledging, appreciated. \n\n> It seems this paper https://arxiv.org/pdf/2205.09647.pdf does do this, but it was published on arxiv too close to neurips deadline. It would be nice to cite it for th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"gqpu6pXcs2P",
"w12METzwvJC",
"QgHrJ76OP0D",
"Msr898Goqpa",
"eS7sUbcxZ61",
"RvKl8TWRbBU",
"dZMXueJc-BR",
"Vc0u1UsaJ8X",
"2PPXx0PRj5",
"zXtBROxBC-",
"tRKHXf7GiSu",
"xNix6nL-tI9",
"nips_2022_W72rB0wwLVu",
"QgHrJ76OP0D",
"XNHQprVrpe",
"QgHrJ76OP0D",
"Msr898Goqpa",
"Msr898Goqpa",
"Vl... |
nips_2022_ft4xGJ8tIZH | On the detrimental effect of invariances in the likelihood for variational inference | Variational Bayesian posterior inference often requires simplifying approximations such as mean-field parametrisation to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting in the case of small datasets or large model sizes. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes which cannot be well approximated by Gaussian mean-field distributions. In particular, we show that the mean-field approximation has an additional gap in the evidence lower bound compared to a purpose-built posterior that takes into account the known invariances. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We proceed by first considering translation invariances in a linear model with a single data point in detail. We show that, while the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function takes into account the invariance gap. Then, we transfer our analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.
| Accept | This paper seeks to understand and explain why variational Bayes seems to underperform (if not fail completely) in overparameterized models such as Bayesian NNs. The proposed explanation is an additional gap in the marginal likelihood bound that results from these invariances. The paper demonstrates the gap in detail for a linear model with translation invariance. The reviewers had a favorable opinion of the work, with only two criticisms: (1) the clarity of the writing, and (2) whether the paper can say anything concrete about Bayesian NNs. The authors have substantially revised the work, and I and the reviewers agree that the clarity now meets the bar for publication. I think the second criticism is still valid, and the paper would be better served by being billed as a study of overparameterized models. Yet, the paper's merits outweigh this downside (which I encourage the authors to improve upon before the camera ready).
Also, this paper is related enough to warrant inclusion in the related work, as it seeks to directly address the symmetries algorithmically:
Moore, David A. "Symmetrized variational inference." In NIPS Workshop on Advances in Approximate Bayesian Inference, vol. 4, p. 31. 2016. | test | [
"wlH9Hsp-Jmc",
"dflBVAu5ri",
"MM20EBbgF9",
"NbjM7HWQGYT",
"v78ZrRqJwHl",
"wPTME0ir812",
"beSp3c6iGG",
"YunMGcQOMJC",
"Nk61sEa3ZhF",
"A4w58enT-50"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the in depth replies to my comments and questions. I believe with the changes promised / made that the paper is a much better piece of work than before. I have updated my score to reflect this (5 $\\rightarrow$ 6).",
" Thank you again for your feedback. We have used the feedback of all reviewers t... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"v78ZrRqJwHl",
"A4w58enT-50",
"A4w58enT-50",
"Nk61sEa3ZhF",
"YunMGcQOMJC",
"beSp3c6iGG",
"nips_2022_ft4xGJ8tIZH",
"nips_2022_ft4xGJ8tIZH",
"nips_2022_ft4xGJ8tIZH",
"nips_2022_ft4xGJ8tIZH"
] |
nips_2022_9cPDqh9fQMy | BayesPCN: A Continually Learnable Predictive Coding Associative Memory | Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall ($read$) over memory learning ($write$). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations ($forget$) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand ``timesteps'' ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models. | Accept | This paper proposes a bayesian extension to predictive coding neural networks, serving as a form of associative memory. The response from reviewers was lukewarm, but almost unanimously erred on the side of acceptance. The only review arguing for borderline rejection was not as substantial as I would have liked, and did not reply to the authors after their rebuttal. Ultimately, I would have preferred to have a clear champion amongst the other reviewers, arguing more strongly for the paper, but having read the discussion I will take the tepid approval as a sign that the reviewers recommend acceptance, and recommend that the paper be included in the proceedings. | train | [
"9rkDc49Bvj4",
"djfPAh5AIPM",
"h_48m0P05As",
"UQzy2FyFlRv",
"vf-q8dwwLjD",
"Y5mk5Ki5Tbm",
"k6uwIFcmJgP",
"YdPmcxPZLZCM",
"Os8bE0xJiA7",
"5DXzwsyUVLEu",
"zt2fC_U5RK",
"sXvW3kChie",
"7WjUtUt69t",
"1zL666HsiYC",
"H7SYnJX10tB",
"_Dv9YHwGJzx",
"T9RWMxebl34",
"_6JVQG-QAop"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe wanted to inquire whether you were satisfied with our responses below. If not, please let us know so we can provide more details or address your remaining concerns before 1 PM PDT today.\n\nWe thank you again for your time!",
" Thank you for the response and for raising your score.\n\nWe be... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
5
] | [
"H7SYnJX10tB",
"h_48m0P05As",
"5DXzwsyUVLEu",
"_Dv9YHwGJzx",
"k6uwIFcmJgP",
"H7SYnJX10tB",
"zt2fC_U5RK",
"Os8bE0xJiA7",
"sXvW3kChie",
"_6JVQG-QAop",
"T9RWMxebl34",
"_Dv9YHwGJzx",
"H7SYnJX10tB",
"nips_2022_9cPDqh9fQMy",
"nips_2022_9cPDqh9fQMy",
"nips_2022_9cPDqh9fQMy",
"nips_2022_9cPD... |
nips_2022_8oj_2Ypp0j | Robustness to Unbounded Smoothness of Generalized SignSGD | Traditional analyses in non-convex optimization typically rely on the smoothness assumption, namely requiring the gradients to be Lipschitz. However, recent evidence shows that this smoothness condition does not capture the properties of some deep learning objective functions, including the ones involving Recurrent Neural Networks and LSTMs. Instead, they satisfy a much more relaxed condition, with potentially unbounded smoothness. Under this relaxed assumption, it has been theoretically and empirically shown that the gradient-clipped SGD has an advantage over the vanilla one. In this paper, we show that clipping is not indispensable for Adam-type algorithms in tackling such scenarios: we theoretically prove that a generalized SignSGD algorithm can obtain similar convergence rates as SGD with clipping but does not need explicit clipping at all. This family of algorithms on one end recovers SignSGD and on the other end closely resembles the popular Adam algorithm. Our analysis underlines the critical role that momentum plays in analyzing SignSGD-type and Adam-type algorithms: it not only reduces the effects of noise, thus removing the need for large mini-batch in previous analyses of SignSGD-type algorithms, but it also substantially reduces the effects of unbounded smoothness and gradient norms. To the best of our knowledge, this work is the first one showing the benefit of Adam-type algorithms compared with non-adaptive gradient algorithms such as gradient descent in the unbounded smoothness setting. We also compare these algorithms with popular optimizers on a set of deep learning tasks, observing that we can match the performance of Adam while beating others. | Accept | During the discussion it became clear that this work naturally extends previous works
on signSGD to encompass generalised methods. Several clarifications were made by the authors throughout the rebuttal process, and this has mostly satisfied the reviewers. Moreover, I think that, all in all, this work may be interesting to the NeurIPS community, and I recommend accepting it. | train | [
"j1TtDP8tgB",
"z_wHi9rhh_V",
"mi4UKTugg1e",
"nSzLLnjaf83",
"0-WnhmASEUV",
"_gTcIwchrQh",
"0bcnjAR-B4F",
"6NydYa5Njf",
"LtnaH-VrQt",
"Vy7SbB4j84f",
"ZK0hUABkvW0H",
"QSTRAxIa6Q_",
"9JSmSiv7rQJ",
"4wzhbYx_ypy",
"6r1QBUvzMK_",
"nBz0dxRO1SY",
"RcCuBcro2A",
"ju_MYvdCtYj",
"PVjQIGfMSca"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nThank you so much for spending the time to review our paper. We have carefully answered your questions, including the comparison between our algorithm’s empirical performance and others, the relationship between an average-form bound and our min-form bound, and the numbering of lemmas.\n\nPlease... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"PVjQIGfMSca",
"ju_MYvdCtYj",
"nips_2022_8oj_2Ypp0j",
"0-WnhmASEUV",
"_gTcIwchrQh",
"0bcnjAR-B4F",
"6NydYa5Njf",
"LtnaH-VrQt",
"ZK0hUABkvW0H",
"QSTRAxIa6Q_",
"QSTRAxIa6Q_",
"6r1QBUvzMK_",
"q-PyBCWQa5V",
"q-PyBCWQa5V",
"q-PyBCWQa5V",
"PVjQIGfMSca",
"ju_MYvdCtYj",
"nips_2022_8oj_2Ypp... |
nips_2022_ikWvMRVQBWW | Generalization for multiclass classification with overparameterized linear models | Via an overparameterized linear model with Gaussian features, we provide conditions for good generalization for multiclass classification of minimum-norm interpolating solutions in an asymptotic setting where both the number of underlying features and the number of classes scale with the number of training points. The survival/contamination analysis framework for understanding the behavior of overparameterized learning problems is adapted to this setting, revealing that multiclass classification qualitatively behaves like binary classification in that, as long as there are not too many classes (made precise in the paper), it is possible to generalize well even in settings where regression tasks would not generalize. Besides various technical challenges, it turns out that the key difference from the binary classification setting is that there are relatively fewer training examples of each class in the multiclass setting as the number of classes increases, making the multiclass problem ``harder'' than the binary one. | Accept | The paper makes solid progress in our understanding of generalization in overparametrized models | train | [
"x5vQD5IX9rU",
"sp7zqDhjf-",
"M8BlUXud27E",
"dW4pRjBWc1h",
"IwEN5UZN3Xt",
"MFOp7kfPOfV",
"BlW-EkyMujf",
"PbzKGYbe1PF",
"FcSE975BDmb",
"l6gDt9GGmVe",
"CQUoKm67-G0",
"s6KR8U56kLv",
"ejLF8MToQ38",
"otrgGaTHj6"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed response. Overall, I believe this is a rather interesting and technically solid contribution that I am happy to see published. The work points out some interesting open questions. While answering some of these would strengthen the paper even further, it still adds value shar... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
1
] | [
"s6KR8U56kLv",
"otrgGaTHj6",
"otrgGaTHj6",
"ejLF8MToQ38",
"ejLF8MToQ38",
"s6KR8U56kLv",
"s6KR8U56kLv",
"CQUoKm67-G0",
"CQUoKm67-G0",
"nips_2022_ikWvMRVQBWW",
"nips_2022_ikWvMRVQBWW",
"nips_2022_ikWvMRVQBWW",
"nips_2022_ikWvMRVQBWW",
"nips_2022_ikWvMRVQBWW"
] |
nips_2022_1l5hEEK_j13 | Finite-Sample Maximum Likelihood Estimation of Location | We consider 1-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$. For fixed $f$ the maximum-likelihood estimate (MLE) is well-known to be optimal in the limit as $n \to \infty$: it is asymptotically normal with variance matching the Cramer-Rao lower bound of $\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of $f$. However, this bound does not hold for finite $n$, or when $f$ varies with $n$. We show for arbitrary $f$ and $n$ that one can recover a similar theory based on the Fisher information of a smoothed version of $f$, where the smoothing radius decays with $n$. | Accept | This paper considers the problem of parameter estimation from smoothed observations: indeed, although smoothing may increase the asymptotic variance, it also yields finite-sample guarantees for the MLE. Although it is not clear how useful this methodology would be for high-dimensional applications, I am impressed by this neat observation and the solid writing. | val | [
"DHnYSXK8xc2",
"2WlMDDpm-hV",
"X4VqC273iAx",
"mk5g0s3geHX",
"629jTeg3GnM",
"Q0DJ2J1Hx6L",
"J4kykQOGXUM",
"YebiS08cmtM",
"EUM_bGcoPFL",
"q1cXDCOHXS3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for answering my questions. The comparison with prior work seems helpful to include in the text. I maintain my original evaluation.",
" Thank you for your feedback and detailed comments. We agree that the paper has not completely resolved the algorithmic question (in the sense of (A) in the review), and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"2WlMDDpm-hV",
"q1cXDCOHXS3",
"EUM_bGcoPFL",
"YebiS08cmtM",
"J4kykQOGXUM",
"nips_2022_1l5hEEK_j13",
"nips_2022_1l5hEEK_j13",
"nips_2022_1l5hEEK_j13",
"nips_2022_1l5hEEK_j13",
"nips_2022_1l5hEEK_j13"
] |
nips_2022_9xRZlV6GfOX | Graphein - a Python Library for Geometric Deep Learning and Network Analysis on Biomolecular Structures and Interaction Networks | Geometric deep learning has broad applications in biology, a domain where relational structure in data is often intrinsic to modelling the underlying phenomena. Currently, efforts in both geometric deep learning and, more broadly, deep learning applied to biomolecular tasks have been hampered by a scarcity of appropriate datasets accessible to domain specialists and machine learning researchers alike. To address this, we introduce Graphein as a turn-key tool for transforming raw data from widely-used bioinformatics databases into machine learning-ready datasets in a high-throughput and flexible manner. Graphein is a Python library for constructing graph and surface-mesh representations of biomolecular structures, such as proteins, nucleic acids and small molecules, and biological interaction networks for computational analysis and machine learning. Graphein provides utilities for data retrieval from widely-used bioinformatics databases for structural data, including the Protein Data Bank, the AlphaFold Structure Database, chemical data from ZINC and ChEMBL, and for biomolecular interaction networks from STRINGdb, BioGrid, TRRUST and RegNetwork. The library interfaces with popular geometric deep learning libraries: DGL, Jraph, PyTorch Geometric and PyTorch3D though remains framework agnostic as it is built on top of the PyData ecosystem to enable inter-operability with scientific computing tools and libraries. Graphein is designed to be highly flexible, allowing the user to specify each step of the data preparation, scalable to facilitate working with large protein complexes and interaction graphs, and contains useful pre-processing tools for preparing experimental files. Graphein facilitates network-based, graph-theoretic and topological analyses of structural and interaction datasets in a high-throughput manner. 
We envision that Graphein will facilitate developments in computational biology, graph representation learning and drug discovery.
Availability and implementation: Graphein is written in Python. Source code, example usage and tutorials, datasets, and documentation are made freely available under the MIT License at the following URL: https://anonymous.4open.science/r/graphein-3472/README.md | Accept | The paper describes a Python library called Graphein for working with biomolecules. The manuscript provides a bridge between life scientists and machine learners, describing the high-level concepts that will facilitate a meaningful interaction between the two fields by suitable use of the Graphein library. The library provides programmatic interfaces to query bioinformatics databases, and also enables the use of popular geometric deep learning libraries.
Four reviewers carefully considered the paper and the associated library, and they unanimously agree that the work provides an excellent contribution to the machine learning literature. It gives me great pleasure that the new software directions in the NeurIPS community have attracted high-quality submissions such as this. Therefore I recommend this paper for acceptance at NeurIPS 2022. Congratulations! | train | [
"OiKibZOulg0",
"YfWQTgJf8l",
"A6jR4WC41xW",
"MJZEwuh2lWB",
"48Nppe1PzZy",
"LaQeUSJe6I",
"emwLbo9ngy",
"k2-sTXvcz-j",
"3KG6JLc9-f"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n**Isn't this paper better suited for the datasets and benchmarks track? This paper is difficult to evaluate on the typical dimensions of NeurIPS main track: originality, quality, clarity, and significance.**\n\nWe thank the reviewer for raising this concern. We carefully considered which venue was more suitable... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
2
] | [
"YfWQTgJf8l",
"3KG6JLc9-f",
"k2-sTXvcz-j",
"LaQeUSJe6I",
"emwLbo9ngy",
"nips_2022_9xRZlV6GfOX",
"nips_2022_9xRZlV6GfOX",
"nips_2022_9xRZlV6GfOX",
"nips_2022_9xRZlV6GfOX"
] |
nips_2022_weoLjoYFvXY | Root Cause Analysis of Failures in Microservices through Causal Discovery | Most cloud applications use a large number of smaller sub-components (called microservices) that interact with each other in the form of a complex graph to provide the overall functionality to the user. While the modularity of the microservice architecture is beneficial for rapid software development, maintaining and debugging such a system quickly in cases of failure is challenging. We propose a scalable algorithm for rapidly detecting the root cause of failures in complex microservice architectures. The key ideas behind our novel hierarchical and localized learning approach are: (1) to treat the failure as an intervention on the root cause to quickly detect it, (2) only learn the portion of the causal graph related to the root cause, thus avoiding a large number of costly conditional independence tests, and (3) hierarchically explore the graph. The proposed technique is highly scalable and produces useful insights about the root cause, while the use of traditional techniques becomes infeasible due to high computation time. Our solution is application agnostic and relies only on the data collected for diagnosis. For the evaluation, we compare the proposed solution with a modified version of the PC algorithm and the state-of-the-art for root cause analysis. The results show a significant improvement in top-$k$ recall while significantly reducing the execution time. | Accept | The paper provides a new approach for learning root causes of failures and is targeted to micro-services. One of the main shortcomings of the paper raised by reviewers was in the evaluation. On the one hand, reviewers pointed out a number of very relevant related works that are not compared with this work, and on the other hand, the bulk of the quantitative assessments are done using synthetic data, and there were some questions about the level of realism of the Sock-store experiment. 
That said, the rebuttal had a good analysis of the relationship with those related works and argued for why they are not entirely comparable. I do think the evaluation could be stronger; in fact, I think the paper could have more impact and more visibility in a systems conference, but that would require a more complete evaluation on more real systems (it is instructive to compare the evaluation of this paper with that of the Sage paper brought up by one of the reviewers). However, I think the combination of an interesting algorithmic contribution, strong results on simulated data and a compelling real-world case study puts this paper above the bar. | train | [
"znV-AGaegy",
"vL6NH89qQ-",
"YQ8eQPpX8sr",
"0Q-v6gzX5Hv",
"kU78NJDLxMf",
"7GpyuHgff6",
"WpR8lzHxsbY",
"uHHokqPtYF9",
"PxPP4yqt8wa",
"WPIHCXWcruk",
"iwOmX41qm7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed responses. \n\nAbout false negatives: I think more real world / synthetic cases could be considered to get the false negative aspect of the algorithm, which would also help in comparing with other relevant approaches (e.g., Sage) that have different setting but the same goal as the propose... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"uHHokqPtYF9",
"YQ8eQPpX8sr",
"7GpyuHgff6",
"PxPP4yqt8wa",
"PxPP4yqt8wa",
"PxPP4yqt8wa",
"iwOmX41qm7",
"WPIHCXWcruk",
"nips_2022_weoLjoYFvXY",
"nips_2022_weoLjoYFvXY",
"nips_2022_weoLjoYFvXY"
] |
nips_2022_JCbLxJ1E6SO | Robust Model Selection and Nearly-Proper Learning for GMMs | In learning theory, a standard assumption is that the data is generated from a finite mixture model. But what happens when the number of components is not known in advance? The problem of estimating the number of components, also called model selection, is important in its own right but there are essentially no known efficient algorithms with provable guarantees. In this work, we study the problem of model selection for univariate Gaussian mixture models (GMMs). Given $\textsf{poly}(k/\epsilon)$ samples from a distribution that is $\epsilon$-close in TV distance to a GMM with $k$ components, we can construct a GMM with $\widetilde{O}(k)$ components that approximates the distribution to within $\widetilde{O}(\epsilon)$ in $\textsf{poly}(k/\epsilon)$ time. Thus we are able to approximately determine the minimum number of components needed to fit the distribution within a logarithmic factor. Moreover, by adapting the techniques we obtain similar results for reconstructing Fourier-sparse signals. Prior to our work, the only known algorithms for learning arbitrary univariate GMMs either output significantly more than $k$ components (e.g. $k/\epsilon^2$ components for kernel density estimates) or run in time exponential in $k$. | Accept | This paper gives significantly improved guarantees for (im)proper learning of Gaussian mixtures in one dimension. Given samples from a distribution that is close to a mixture of Gaussians, it gives a polynomial-time (and sample) algorithm that outputs a mixture with (slightly) more components that is also close. The paper's contributions are solid both in terms of the result and the techniques involved. So this is a clear accept. | train | [
"HK-hdURmYz-",
"yqe5_Btk0uj",
"jEAxsrMpeR",
"Bqs_Z7D1M9h",
"vL4VRtkGiH",
"8rhM4CBe28J",
"8ZXCgyp95o_",
"JNd4Fb5spHd"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We will revise the presentation to include more high-level descriptions for each subsection. We will also add a discussion about the paper by [Wu and Xie 2018]. We note that this paper outputs a mixture with $poly(k/\\epsilon)$ components and thus falls into category (4) in the discussion in the introduction.",... | [
-1,
-1,
-1,
-1,
8,
7,
5,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"JNd4Fb5spHd",
"8ZXCgyp95o_",
"8rhM4CBe28J",
"vL4VRtkGiH",
"nips_2022_JCbLxJ1E6SO",
"nips_2022_JCbLxJ1E6SO",
"nips_2022_JCbLxJ1E6SO",
"nips_2022_JCbLxJ1E6SO"
] |
nips_2022_5wdvW_hI7bP | Explain My Surprise: Learning Efficient Long-Term Memory by predicting uncertain outcomes | In many sequential tasks, a model needs to remember relevant events from the distant past to make correct predictions. Unfortunately, a straightforward application of gradient based training requires intermediate computations to be stored for every element of a sequence. This requires storing prohibitively large intermediate data if a sequence consists of thousands or even millions of elements, and as a result, makes learning of very long-term dependencies infeasible. However, the majority of sequence elements can usually be predicted by taking into account only temporally local information. On the other hand, predictions affected by long-term dependencies are sparse and characterized by high uncertainty given only local information. We propose \texttt{MemUP}, a new training method that allows learning long-term dependencies without backpropagating gradients through the whole sequence at a time. This method can potentially be applied to any recurrent architecture. An LSTM network trained with \texttt{MemUP} performs better than or comparably to baselines while requiring less intermediate data to be stored. | Accept | The reviewers found the ideas presented in the paper interesting -- the use of mutual information to train memory for a model, and the clear presentation. Some questions were raised about demonstrating on a more elaborate setup such as NLP tasks -- the main experiments, aside from toy algorithmic tasks such as copying, seem to be RL experiments, but the method has been advertised more broadly in the motivation. Another reviewer raised the question of the complexity of training multiple networks. Nevertheless, the reviewers found the paper interesting enough to recommend a weak accept and I support that recommendation.
From a reviewer's lens, I was a little surprised that the paper made no mention of prior works on maximizing mutual information between features of neural networks to improve results. As an example, see the following paper [1], which uses a mutual information regularizer between states at different steps of a recurrent neural network. There is also a rich literature of doing so for convolutional neural networks. It would have made sense to compare how the idea in the paper performed in comparison to these methods (and in a sense the ablation study that looked at randomly choosing k time steps, regardless of the uncertainty estimator, is an experiment in this direction). I understand that part of the paper deals with the choice of time points to increase mutual information between, and so it's probably more efficient than the other alternatives, but a comparison (or discussion in related works) would have made the paper stronger.
[1] Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer. https://arxiv.org/pdf/1905.11978v1.pdf | train | [
"L41guym-xYw",
"ZxNbzuAlA8i",
"uo1ABXskdWA",
"DJ5Puz-QtJ",
"S69HxUHq76v",
"iadjyvSrD0o",
"A7R6io8hAM2O",
"2zxnXmvxXtX",
"xLPeyyD4k60",
"K0f78_L86a3",
"dKScUlt46X1"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read your latest response. I'm keeping my current score, based on the lack of language modeling task and missing comparison with more advanced RNNs that can handle this type of tasks. \n\nIn the abstract you mention: \"This method can be potentially applied to any gradient based sequence learning\". You ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"uo1ABXskdWA",
"DJ5Puz-QtJ",
"S69HxUHq76v",
"iadjyvSrD0o",
"A7R6io8hAM2O",
"dKScUlt46X1",
"xLPeyyD4k60",
"K0f78_L86a3",
"nips_2022_5wdvW_hI7bP",
"nips_2022_5wdvW_hI7bP",
"nips_2022_5wdvW_hI7bP"
] |
nips_2022_O3My0RK9s_R | Structural Knowledge Distillation for Object Detection | Knowledge Distillation (KD) is a well-known training paradigm in deep neural networks where knowledge acquired by a large teacher model is transferred to a small student.
KD has proven to be an effective technique to significantly improve the student's performance for various tasks including object detection.
As such, KD techniques mostly rely on guidance at the intermediate feature level, which is typically implemented by minimizing an $\ell_{p}$-norm distance between teacher and student activations during training.
In this paper, we propose a replacement for the pixel-wise independent $\ell_{p}$-norm based on the structural similarity (SSIM).
By taking into account additional contrast and structural cues, more information within intermediate feature maps can be preserved.
Extensive experiments on MSCOCO demonstrate the effectiveness of our method across different training schemes and architectures.
Our method adds only little computational overhead, is straightforward to implement and at the same time it significantly outperforms the standard $\ell_p$-norms.
Moreover, more complex state-of-the-art KD methods using attention-based sampling mechanisms are outperformed, including a +3.5 AP gain using a Faster R-CNN R-50 compared to a vanilla model. | Accept | The paper receives positive feedback after rebuttal. All reviewers agree that the idea of distilling structural knowledge for object detection is novel and worth sharing to the community. AC agrees with it and recommends accepting the paper. | test | [
"RkdzH5I_vrG",
"_8Xo7EcCrW",
"Y8E7x5v_jFa",
"Krp-fGEd27Z",
"JhBqE5dxyXw",
"lveApUbDab2",
"LavD9DcDYn",
"l1Ejglegn9_",
"voh6Ki47YK1",
"zWs1jfbOt4",
"tgmF-w1O83u"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The response of the authors explains my questions and concerns; in particular, they added many comparison results against more SOTA methods, so I decided to increase my score from 4 to 5.",
" We thank the reviewer for their time and thorough review, and are pleased to hear that our work has been received favourably. \nIn part... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
3
] | [
"Krp-fGEd27Z",
"voh6Ki47YK1",
"l1Ejglegn9_",
"l1Ejglegn9_",
"zWs1jfbOt4",
"tgmF-w1O83u",
"nips_2022_O3My0RK9s_R",
"nips_2022_O3My0RK9s_R",
"nips_2022_O3My0RK9s_R",
"nips_2022_O3My0RK9s_R",
"nips_2022_O3My0RK9s_R"
] |
nips_2022_hdZeYGNCTtN | Exploring the Latent Space of Autoencoders with Interventional Assays | Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods. However, without explicit supervision, which is often unavailable, the representation is usually uninterpretable, making analysis and principled progress challenging. We propose a framework, called latent responses, which exploits the locally contractive behavior exhibited by variational autoencoders to explore the learned manifold. More specifically, we develop tools to probe the representation using interventions in the latent space to quantify the relationships between latent variables. We extend the notion of disentanglement to take the learned generative process into account and consequently avoid the limitations of existing metrics that may rely on spurious correlations. Our analyses underscore the importance of studying the causal structure of the representation to improve performance on downstream tasks such as generation, interpolation, and inference of the factors of variation. | Accept | This paper proposes a method to analyze the latent space of autoencoders, by decoding a point in the latent space and then encoding it back, which can be used to build response maps that measure how much a latent dimension changes when some other dimension changes. All reviewers liked the idea, and I support accepting the paper. The paper still contains some presentation issues, the authors should make an effort to improve the readability of the paper and make it more accessible following the suggestions of the reviewers. | train | [
"ub5iskalrXH",
"IHdR5VYdiL5",
"lz2wq3h1P2",
"0venXPoZVxI",
"PG5TuQlryy",
"ztyBhsuJ8v",
"P0EUMBEE_QR",
"823PLP9Fh0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your careful consideration of the review. I do not have any further pending issue to discuss. ",
" Overall, we are encouraged by the largely positive responses from the reviewers, all of which expressed an interest in our project, and thank everyone for their feedback and support.\n\nBased on al... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"lz2wq3h1P2",
"nips_2022_hdZeYGNCTtN",
"823PLP9Fh0",
"P0EUMBEE_QR",
"ztyBhsuJ8v",
"nips_2022_hdZeYGNCTtN",
"nips_2022_hdZeYGNCTtN",
"nips_2022_hdZeYGNCTtN"
] |
nips_2022_NqDXfe2oC_1 | Performative Power | We introduce the notion of performative power, which measures the ability of a firm operating an algorithmic system, such as a digital content recommendation platform, to cause change in a population of participants. We relate performative power to the economic study of competition in digital economies. Traditional economic concepts struggle with identifying anti-competitive patterns in digital platforms not least due to the complexity of market definition. In contrast, performative power is a causal notion that is identifiable with minimal knowledge of the market, its internals, participants, products, or prices.
Low performative power implies that a firm can do no better than to optimize their objective on current data. In contrast, firms of high performative power stand to benefit from steering the population towards more profitable behavior. We confirm in a simple theoretical model that monopolies maximize performative power. A firm's ability to personalize increases performative power, while competition and outside options decrease performative power. On the empirical side, we propose an observational causal design to identify performative power from discontinuities in how digital platforms display content. This allows to repurpose causal effects from various studies about digital platforms as lower bounds on performative power. Finally, we speculate about the role that performative power might play in competition policy and antitrust enforcement in digital marketplaces. | Accept | The paper proposes the notion of "performative power" to measure a firm's ability to affect its users. This notion is insightful in performative prediction, and the authors have demonstrated it in multiple concrete settings. Overall, all reviewers are very positive about the paper. We recommend including the paper in the program and believe it could open up potential future research directions. | val | [
"KiiXi_xp2JX",
"Uc0YnyiFnvA",
"qllv1ZkKcDd",
"p7S5IR0rtdE",
"un_sYSD-veD",
"yRWuv7NAxuS",
"ApBk0xzwbEL",
"zxRX6UQnAcn",
"Og3bbNwRkpE",
"mop2Ey3xu74"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. The intuition for the supremum makes sense, especially when $\\mathcal{F}$ is intended to capture only \"reasonable\" actions the firm may take. With respect to this set though, I wish to echo what Reviewer hRaw has said, in that I too would find it helpful to see more discussion ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"un_sYSD-veD",
"p7S5IR0rtdE",
"yRWuv7NAxuS",
"mop2Ey3xu74",
"Og3bbNwRkpE",
"zxRX6UQnAcn",
"nips_2022_NqDXfe2oC_1",
"nips_2022_NqDXfe2oC_1",
"nips_2022_NqDXfe2oC_1",
"nips_2022_NqDXfe2oC_1"
] |
nips_2022_anqloMQdWtP | Debiased Machine Learning without Sample-Splitting for Stable Estimators | Estimation and inference on causal parameters is typically reduced to a generalized method of moments problem, which involves auxiliary functions that correspond to solutions to a regression or classification problem. Recent line of work on debiased machine learning shows how one can use generic machine learning estimators for these auxiliary problems, while maintaining asymptotic normality and root-$n$ consistency of the target parameter of interest, while only requiring mean-squared-error guarantees from the auxiliary estimation algorithms. The literature typically requires that these auxiliary problems are fitted on a separate sample or in a cross-fitting manner. We show that when these auxiliary estimation algorithms satisfy natural leave-one-out stability properties, then sample splitting is not required. This allows for sample re-use, which can be beneficial in moderately sized sample regimes. For instance, we show that the stability properties that we propose are satisfied for ensemble bagged estimators, built via sub-sampling without replacement, a popular technique in machine learning practice. | Accept | The paper provides an analysis that avoids sample-splitting while estimating parameters with objective functions involving other estimated parameters. The results are interesting and the analysis is insightful.
One of the reviewers (Reviewer Becs) raised some concerns about the style of the writing, high-level questions (on bagging and stability), and the linear moment assumptions. In my reading of the paper, I found no significant issues with the style (in fact, the paper is well-written from a statistical point of view). The authors addressed the other two issues in the rebuttal. Hence, the score of reviewer Becs was down-weighted and the paper is recommended for acceptance at NeurIPS.
"RsmBpT2SLe9H",
"cHAXbCDQ9Nj",
"2nz3pAd4NU",
"P0ZMACuM2eu",
"aNZHWu6T3Q",
"B8IjcTAym1"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your input and effort in reviewing the paper. We appreciate your suggestions and comments.\n\n1.About verifying the conditions of Lemma 2:\n\nIndeed, verifying the conditions might be difficult, but our stability conditions are implied by Algorithmic Stability and $L^{2r}$-continuity (plea... | [
-1,
-1,
-1,
3,
7,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"aNZHWu6T3Q",
"B8IjcTAym1",
"P0ZMACuM2eu",
"nips_2022_anqloMQdWtP",
"nips_2022_anqloMQdWtP",
"nips_2022_anqloMQdWtP"
] |
nips_2022_J4pX8Q8cxHH | HyperTree Proof Search for Neural Theorem Proving | We propose an online training procedure for a transformer-based automated theorem prover. Our approach leverages a new search algorithm, HyperTree Proof Search (HTPS), that learns from previous proof searches through online training, allowing it to generalize to domains far from the training distribution. We report detailed ablations of our pipeline’s main components by studying performance on three environments of increasing complexity. In particular, we show that with HTPS alone, a model trained on annotated proofs manages to prove 65.4% of a held-out set of Metamath theorems, significantly outperforming the previous state of the art of 56.5% by GPT-f. Online training on these unproved theorems increases accuracy to 82.6%. With a similar computational budget, we improve the state of the art on the Lean-based miniF2F-curriculum dataset from 31% to 42% proving accuracy. | Accept | This paper tackles automated theorem proving by combining MCTS with large language models. They show that with supervised pretraining, this combination can outperform a competitive alternative (GPT-f) using an order magnitude less compute, and that more dramatic improvements are possible if one allows online learning during testing. Reviewers said that the core ideas behind the work were not particularly new, but their combination was.
In summary, the paper is a new combination of old ideas that improves results on an important problem, and which has extensive experiments exploring the online test setting across 3 realistic domains of theorem proving. The online test setting applies when one has batches of difficult interrelated search problems, which is conceivably the case from many automated theorem proving settings. These factors suggest accepting the paper, but there were significant reservations from multiple reviewers. Fortunately these issues can be fixed: Although reviewers generally agreed that this was an improvement over the latest state of the art, at least quantitatively, and speculated that the architecture would likely be state of the art on other benchmarks as well, the following presentation issues should be addressed for the camera ready. There was significant confusion over whether _all_ of the paper's results were in the online setting, which could jeopardize the soundness of the quantitative results if baselines did not also use the online setting. Discussions clarified that this was not the case, but the manuscript should be revised to clearly and prominently state these facts to avoid misinterpretation. Referring to the work as AlphaZero-like promoted other confusions. Indeed, because the authors contrast online learning (what they do) with expert iteration (what AlphaZero does), and because this is not a two-player game, the authors are strongly encouraged to nix the AlphaZero analogy. | train | [
"54B6Wjgy_sz",
"XYvHIq0qSDT",
"K5f3678KtL3",
"FQBBfipJeS4",
"NAorN6ool93",
"Y1xnMDUdHbo",
"rT1eYFNRBbk",
"lor6lupDTX8",
"KGnHVJlJf0L",
"qG7blJwe0yb",
"LvKG1qQzxY9",
"sSm15RkT_qA",
"t04KQ0O4acN",
"7-jgFTN12RW",
"dlHovbEbhvv",
"uVpGAIdnKsU",
"OBUv0_hdwTD",
"_C1Xmuu9YzY",
"4AjPL_wmf... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" We agree that there are similarities in the search algorithm, however, the differences that exist between them are critical. Typically, the best-first search algorithm by GPT-f is in fact closer to what we do than TacticZero, and yet in the same supervised setting (i.e. without online learning or iterative traini... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"XYvHIq0qSDT",
"KGnHVJlJf0L",
"FQBBfipJeS4",
"sSm15RkT_qA",
"Y1xnMDUdHbo",
"dlHovbEbhvv",
"lor6lupDTX8",
"7-jgFTN12RW",
"qG7blJwe0yb",
"uVpGAIdnKsU",
"nips_2022_J4pX8Q8cxHH",
"t04KQ0O4acN",
"dLYShrr2Ahu",
"4AjPL_wmfO-",
"_C1Xmuu9YzY",
"OBUv0_hdwTD",
"nips_2022_J4pX8Q8cxHH",
"nips_2... |
nips_2022_5Z3GURcqwT | Spherical Channels for Modeling Atomic Interactions | Modeling the energy and forces of atomic systems is a fundamental problem in computational chemistry with the potential to help address many of the world’s most pressing problems, including those related to energy scarcity and climate change. These calculations are traditionally performed using Density Functional Theory, which is computationally very expensive. Machine learning has the potential to dramatically improve the efficiency of these calculations from days or hours to seconds.
We propose the Spherical Channel Network (SCN) to model atomic energies and forces. The SCN is a graph neural network where nodes represent atoms and edges their neighboring atoms. The atom embeddings are a set of spherical functions, called spherical channels, represented using spherical harmonics. We demonstrate, that by rotating the embeddings based on the 3D edge orientation, more information may be utilized while maintaining the rotational equivariance of the messages. While equivariance is a desirable property, we find that by relaxing this constraint in both message passing and aggregation, improved accuracy may be achieved. We demonstrate state-of-the-art results on the large-scale Open Catalyst 2020 dataset in both energy and force prediction for numerous tasks and metrics. | Accept | The paper presents a new model with state-of-the-art results on the IS2RE task on the OpenCatalyst dataset. 2 of 3 reviewers recommended acceptance, while the 3rd reviewer raised issues with lack of baseline comparisons on small standard molecular potential energy surface datasets. I am inclined to agree with the authors on this one - if the empirical results on a large, challenging benchmark are good enough, baseline comparisons on small benchmarks are not always necessary (did the AlexNet paper include benchmarks on MNIST?) It is slightly unnerving that the results are so good with a non-equivariant model, but it sounds like the authors discuss this as well. Overall this seems like a solid, if not revolutionary, improvement in machine-learned potentials for OpenCatalyst. I recommend acceptance. | train | [
"rP0fdO2T8L0",
"zE9pJphYMV",
"PfV0YKoGkbh",
"xWkNU2Xi-j7",
"ppkzJYf9rDA",
"TPkdFAGlkLsk",
"982GXp3l8ps",
"xE3MfwhqA0M",
"6akc3PTiB-i"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. After the rebuttal period, we allowed the energy conserving model with $l=4$ and 12 layers to fully converge. While this model is significantly slower to train (~8x), it does achieve very similar results to the model that directly estimates forces. The energy-conserving model gets 253... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"zE9pJphYMV",
"xWkNU2Xi-j7",
"6akc3PTiB-i",
"xE3MfwhqA0M",
"982GXp3l8ps",
"nips_2022_5Z3GURcqwT",
"nips_2022_5Z3GURcqwT",
"nips_2022_5Z3GURcqwT",
"nips_2022_5Z3GURcqwT"
] |
nips_2022_WWVcsfI0jGH | Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers | Deep neural networks are susceptible to shortcut learning, using simple features to achieve low training loss without discovering essential semantic structure. Contrary to prior belief, we show that generative models alone are not sufficient to prevent shortcut learning, despite an incentive to recover a more comprehensive representation of the data than discriminative approaches. However, we observe that shortcuts are preferentially encoded with minimal information, a fact that generative models can exploit to mitigate shortcut learning. In particular, we propose Chroma-VAE, a two-pronged approach where a VAE classifier is initially trained to isolate the shortcut in a small latent subspace, allowing a secondary classifier to be trained on the complementary, shortcut-free latent subspace. In addition to demonstrating the efficacy of Chroma-VAE on benchmark and real-world shortcut learning tasks, our work highlights the potential for manipulating the latent space of generative classifiers to isolate or interpret specific correlations. | Accept | This paper explores a method to learn generative classifier while mitigating 'shortcut learning'---relying on spurious correlations. They do so by modelling two latent variables, only one of which is used to classify, and both used to generate data. The underlying intuition is that 'shortcuts' will only manifest in a small fraction of the latent space embedding as they compress. Experiments show the efficacy of the method on multiple datasets.
The reviewers all agreed that the paper tackles an interesting and relevant problem, that the exploration of the performance and experiments considered are quite thorough.
The biggest issue raised in this instance is that the experiments largely only deal with restricted/constrained settings. While it is understandable that the problems tackled here are well-accepted benchmarks within this subfield, the paper should make clear its contribution and include a discussion of the potential for real-world application.
In particular, the X-ray example was something that was highlighted as something that, as currently presented, potentially can imply that the proposed method is viable for actual real-world use. It is strongly suggested that the authors moderate the interpretation / claims / discussion around this example.
The authors also provided additional experiments over specific questions on the information in the non-classifier latent to address reviewer concerns, which was good.
Overall the paper has merit and explores an interesting idea with a good set of experiments that cover a range of settings, and should be accepted.
| train | [
"obdpE1sqDie",
"1SooASUn_mM",
"FAecNi26Jnj",
"uxAVTSuEhnk",
"pfxUH5XLvP",
"dsW2nJ3o3Wv",
"04tlsXP2Fzv",
"5P7v0qbv2qWi",
"oc3FAr7j_ui",
"APMt6a5UfEU",
"vIxclQjnsIH",
"jinPMJAcokF",
"p3cgGnjpmuZ",
"nxiS7WQuM2cy",
"R_O70mVx-k0",
"dXkvIyUw_qB",
"D2E4Pg8dIlt",
"jPM9i6HfEep",
"BAOomDof... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. Thank you for presenting new results on gender vs. hair color cues. I find this interesting and hope that the discussion will be useful to the readers for future works. I have updated my score. ",
" First of all, we thank you for taking the time and effort to read and digest our detailed response. Indeed, we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"FAecNi26Jnj",
"dsW2nJ3o3Wv",
"04tlsXP2Fzv",
"5P7v0qbv2qWi",
"5P7v0qbv2qWi",
"p3cgGnjpmuZ",
"vIxclQjnsIH",
"APMt6a5UfEU",
"BAOomDofKqr",
"BAOomDofKqr",
"jPM9i6HfEep",
"D2E4Pg8dIlt",
"D2E4Pg8dIlt",
"dXkvIyUw_qB",
"nips_2022_WWVcsfI0jGH",
"nips_2022_WWVcsfI0jGH",
"nips_2022_WWVcsfI0jGH... |
nips_2022_tX_dIvk4j-s | VisCo Grids: Surface Reconstruction with Viscosity and Coarea Grids | Surface reconstruction has been seeing a lot of progress lately by utilizing Implicit Neural Representations (INRs). Despite their success, INRs often introduce hard to control inductive bias (i.e., the solution surface can exhibit unexplainable behaviours), have costly inference, and are slow to train. The goal of this work is to show that replacing neural networks with simple grid functions, along with two novel geometric priors achieve comparable results to INRs, with instant inference, and improved training times. To that end we introduce VisCo Grids: a grid-based surface reconstruction method incorporating Viscosity and Coarea priors. Intuitively, the Viscosity prior replaces the smoothness inductive bias of INRs, while the Coarea favors a minimal area solution. Experimenting with VisCo Grids on a standard reconstruction baseline provided comparable results to the best performing INRs on this dataset. | Accept | The paper originally received mixed scores, with two reviewers recommending acceptance and two rejection. While the reviewers acknowledged the soundness of the approach, the clarity of the explanations, and the good ablations of the method's components, they expressed concerns regarding the limited scope of the experiments, the lack of runtime analysis, and the fairness of the comparison to baselines. The authors' feedback convincingly addressed these concerns; in the discussion, tYGS agreed to raise their score to Weak Accept, and ubX9 to borderline accept. This led to a consensus for acceptance. We nonetheless strongly encourage the authors to incorporate elements of their feedback in the camera-ready version of their paper. | test | [
"b5pWIl2_Lso",
"Qml9CxrAIb",
"R9J-t4TnFh2",
"7tRuzxtAw0i",
"YoVbkIpFp9d",
"sh3_ii3v-",
"3ThgaCC00Pz",
"TL-LYo0pM2t",
"w_w2PypbqIb",
"B5MtKadefYr",
"t_RnaspGj__",
"LzgA3XJUuwl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. My questions have been sufficiently addressed and some of my feedback has been integrated into the revised paper.\n\nI have also read the other reviews and the authors' response to those papers, and have no further questions.\n\nI stand with my original rating and recommend accepting t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"YoVbkIpFp9d",
"R9J-t4TnFh2",
"sh3_ii3v-",
"LzgA3XJUuwl",
"t_RnaspGj__",
"B5MtKadefYr",
"w_w2PypbqIb",
"nips_2022_tX_dIvk4j-s",
"nips_2022_tX_dIvk4j-s",
"nips_2022_tX_dIvk4j-s",
"nips_2022_tX_dIvk4j-s",
"nips_2022_tX_dIvk4j-s"
] |
nips_2022_AluQNIIb_Zy | Learning Probabilistic Models from Generator Latent Spaces with Hat EBM | This work proposes a method for using any generator network as the foundation of an Energy-Based Model (EBM). Our formulation posits that observed images are the sum of unobserved latent variables passed through the generator network and a residual random variable that spans the gap between the generator output and the image manifold. One can then define an EBM that includes the generator as part of its forward pass, which we call the Hat EBM. The model can be trained without inferring the latent variables of the observed data or calculating the generator Jacobian determinant. This enables explicit probabilistic modeling of the output distribution of any type of generator network. Experiments show strong performance of the proposed method on (1) unconditional ImageNet synthesis at 128$\times$128 resolution, (2) refining the output of existing generators, and (3) learning EBMs that incorporate non-probabilistic generators. Code and pretrained models to reproduce our results are available at https://github.com/point0bar1/hat-ebm. | Accept | **Summary**: This paper proposes an approach for using existing generator networks in an energy based model. The authors introduce a "hat" network that accepts output of the generator and a residual image, and returns a scalar representing the energy value. The parameters of the hat network (and potentially of the generator) can be trained using persistent contrastive divergence, where negative samples are generated by using a Gibbs sampler with Langevin updates to generate both the latent variable for the generator and the residual. The authors demonstrate how this approach can be used to refine samples from an existing generator, generate images by jointly training the generator and the energy function, allow sampling over deterministic generators, and use the EBM for anomaly detection.
**Strengths**: Reviewers [9epW] and [45fe] appreciate the interesting and novel formulation of an energy-based model, which can inspire further work on EBMs for generative learning. Moreover, reviewers agree [9epW,45fe,oJLY] that the proposed formulation is very general and technically sound. [N2ii] and [45fe] note the fairly clear presentation, with good background on topics such as cooperative learning. All reviewers [9epW, N2ii, 45fe, oJLY] agree that experiments on refinement, generation and OOD detection show improvements relative to baselines.
**Weaknesses:** While reviewers were overall appreciative of this submission, they also note weaknesses with respect to clarity, discussion, and experimental results. One issue that reviewers feel is overlooked is the additional computation cost associated with the approach [9epW,N2ii]. Reviewer [9epW] states that while main idea is presented clearly, parts of the paper are difficult to parse (multiple examples given). Reviewer [45fe] similarly finds discussion lacking and that experimental details are missing (4 examples given).
Reviewer [9epW] notes that experimental results are also not strong across the board. Sample refinement does not appear to be a strong suit of hat EBM. FID scores are barely an improvement. This is particularly relevant because refinement baselines do not require training, whereas the paper does not compare to refinement baselines like DOT and DGflow. Moreover, there are some problems with baselines. Discussion and comparison against GEBM is missing, which is a model that also combines a base generator and an energy function to draw/refine samples. EBM baselines in Table 2 are not properly discussed in the paper. While synthesis results are impressive, discussion of why the proposed approach works well is missing.
**Author Reviewer Discussion**: Authors provided clarifications to all reviewers, and amended the paper to reference related work on GEBM and DGflow. The authors indicate that they will also attempt to compare against these baselines.
**Reviewer AC Discussion**: The AC indicated that this appears to be a highly borderline paper with a significant variance in reviewer scores. Unfortunately, only reviewer [9epW] engaged with the AC during the discussion phase. This reviewer indicated that while the proposed approach is interesting, they remain of the opinion that the related work section and baselines need work, and that the paper would benefit from a clearer motivation for the proposed approach. On balance they remain of the opinion that this paper is on the borderline.
**Overall Recommendation:** Based on the numerical average of the scores, this submission would be just about above the threshold for acceptance. At the same time, the AC is concerned with the comments about clarity and in particular with the comments about the lack of discussion of related work. In this context, the sparse discussion and the fact that no reviewers have championed the paper is somewhat worrying. On balance, the AC recommendation is to consider this paper as borderline. It is narrowly above the bar for acceptance, but may have to be cut in favor of other papers. | train | [
"Dt12YERNjR6v",
"NQqBdIxAg2_",
"d1ZZfMvylvf",
"NFMnqidPv8T",
"rtCKU4cGeTS",
"owACqgghvYD",
"uKSyexfegnW",
"1nyNi2E7krXc",
"3o7GPq4FOJS",
"PdZspjqmcO",
"zgTNZj_jWLj",
"-Q7vZXFv462",
"bkn_baPtsMX"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad to hear that you maintain your positive evaluation of our work. Contextual information about MCMC implementation will certainly improve the presentation of our method and make our work accessible to a wider audience. We will include a section that addresses your questions in future revisions of the ap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"d1ZZfMvylvf",
"NFMnqidPv8T",
"owACqgghvYD",
"zgTNZj_jWLj",
"bkn_baPtsMX",
"-Q7vZXFv462",
"1nyNi2E7krXc",
"zgTNZj_jWLj",
"PdZspjqmcO",
"nips_2022_AluQNIIb_Zy",
"nips_2022_AluQNIIb_Zy",
"nips_2022_AluQNIIb_Zy",
"nips_2022_AluQNIIb_Zy"
] |
nips_2022_mux7gn3g_3 | On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning | Intelligent agents should have the ability to leverage knowledge from previously learned tasks in order to learn new ones quickly and efficiently. Meta-learning approaches have emerged as a popular solution to achieve this. However, meta-reinforcement learning (meta-RL) algorithms have thus far been restricted to simple environments with narrow task distributions and have seen limited success. Moreover, the paradigm of pretraining followed by fine-tuning to adapt to new tasks has emerged as a simple yet effective solution in supervised learning. This calls into question the benefits of meta learning approaches also in reinforcement learning, which typically come at the cost of high complexity. We therefore investigate meta-RL approaches in a variety of vision-based benchmarks, including Procgen, RLBench, and Atari, where evaluations are made on completely novel tasks. Our findings show that when meta-learning approaches are evaluated on different tasks (rather than different variations of the same task), multi-task pretraining with fine-tuning on new tasks performs equally as well, or better, than meta-pretraining with meta test-time adaptation. This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL. From these findings, we advocate for evaluating future meta-RL methods on more challenging tasks and including multi-task pretraining with fine-tuning as a simple, yet strong baseline. | Accept | Reviewer qvwu summarizes the paper well: The paper presents a study comparing popular meta-learning approaches like Reptile, Pearl and RL^2 with standard multi-task pretraining + fine-tuning on 3 vision based benchmarks, namely Procgen, RLBench and Atari. 
On all three benchmarks, they test the generalization ability of the approaches on a completely novel task rather than variations of existing tasks from the distribution. They show that on all the tasks multi-task pretraining with fine-tuning performs equally or better than the meta-RL counterparts proposing multi-task pretraining + fine-tuning as a simple yet strong baseline for such tasks.
The other reviewers voted to reject the paper. Their main concerns were:
- Evaluation of vision-only benchmarks
- Issues in evaluation setup
I believe the authors have sufficiently addressed these concerns, but unfortunately, the reviewers did not respond to the authors. In particular prior work already shows fine-tuning is competitive to meta-learning algorithms on state-based tasks. The main contribution of this work is in showing that these findings hold true in vision-based settings too and particularly in the scenario where tasks are different. This is a useful addition to the growing body of work comparing vanilla fine-tuning against meta-learning. I therefore recommend the paper be accepted. | val | [
"xcUFxWkyK3N",
"sgkqrKDdNm",
"w5yhhDgKlR",
"hCN8gXZjFuI",
"FkmJ6EaL8ub",
"gJ8a74ODbNt",
"FcK26qDPKpx",
"fevuW4EXku",
"DgrMkEh52KM",
"C3bHlJtuxXL",
"A6xrHDdd0Bg",
"f6OjxOIqiyB",
"7187daLRtXN"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the additional experiments performed. Bumped the score to 6",
" Thank you and we appreciate your follow-up response. \n\n\nQ2 on prior work that uses TD-error for gradient-based meta-RL: thank you for the clarification and apologies for the confusion. We were not able to identify prior w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"w5yhhDgKlR",
"hCN8gXZjFuI",
"FkmJ6EaL8ub",
"7187daLRtXN",
"fevuW4EXku",
"7187daLRtXN",
"7187daLRtXN",
"f6OjxOIqiyB",
"A6xrHDdd0Bg",
"A6xrHDdd0Bg",
"nips_2022_mux7gn3g_3",
"nips_2022_mux7gn3g_3",
"nips_2022_mux7gn3g_3"
] |
nips_2022_WBhqzpF6KYH | SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | Unsupervised pre-training methods for large vision models have shown to enhance performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities as unlabelled data is plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE). To leverage temporal information, we include a temporal embedding along with independently masking image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in terms of supervised learning performance on benchmark datasets (up to $\uparrow$ 7%), and transfer learning performance on downstream remote sensing tasks, including land cover classification (up to $\uparrow$ 14%) and semantic segmentation. | Accept | Four experts in the field reviewed the paper and recommended Accept, Borderline Reject, Weak Accept, and Weak Accept. According to the reviews, using MAE in satellite imagery is straightforward, but the novelty lies in details, such as using extra tokens to handle temporal consistency, masking strategies, etc. Another major question from the reviewers was about the ablation studies, and the rebuttal addressed it well. Hence, the decision is to recommend the paper for acceptance. We encourage the authors to consider the reviewers' comments and make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper!
| train | [
"aNoiAMRfKXo",
"AaPD3WMiKK",
"_siauORfNUF",
"xo8F27RQn3l",
"pAxDX9v5Cst",
"zEcv01dgwa1",
"nsOq37blLa8",
"ts17V1nlCkL",
"PiSdVb-8rUG",
"oYSM5HeyiiA",
"YVOdftc_KjM",
"jmWboF4dZmQ",
"dmsPPtbfTR7",
"4-uc2VTDBBM",
"YBMZEJhvZG",
"QOWfkP4vJcP",
"Njd4Q-DGh8u",
"h23zJZSHh2R",
"NcrtupEPulP... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official... | [
" Thanks again for the constructive feedback and recognition of our work!",
" Thank you very much for considering our revisions and we are glad that most of your concerns are addressed. Your constructive comments were very helpful in improving our paper. If the paper looks better to you, would you consider updati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"zEcv01dgwa1",
"xo8F27RQn3l",
"pAxDX9v5Cst",
"QOWfkP4vJcP",
"ts17V1nlCkL",
"NcrtupEPulP",
"PiSdVb-8rUG",
"PiSdVb-8rUG",
"jmWboF4dZmQ",
"QcIJN5GUNE",
"aINivgqYMyr",
"aINivgqYMyr",
"pbnlDfyFokU",
"aINivgqYMyr",
"pbnlDfyFokU",
"QcIJN5GUNE",
"QcIJN5GUNE",
"MNdyYavr5b-",
"MNdyYavr5b-"... |
nips_2022_KQYodS0W0j | Maximizing and Satisficing in Multi-armed Bandits with Graph Information | Pure exploration in multi-armed bandits has emerged as an important framework for modeling decision making and search under uncertainty. In modern applications however, one is often faced with a tremendously large number of options and even obtaining one observation per option may be too costly rendering traditional pure exploration algorithms ineffective. Fortunately, one often has access to similarity relationships amongst the options that can be leveraged. In this paper, we consider the pure exploration problem in stochastic multi-armed bandits where the similarities between the arms are captured by a graph and the rewards may be represented as a smooth signal on this graph. In particular, we consider the problem of finding the arm with the maximum reward (i.e., the maximizing problem) or one that has sufficiently high reward (i.e., the satisficing problem) under this model. We propose novel algorithms GRUB (GRaph based UcB) and zeta-GRUB for these problems and provide theoretical characterization of their performance which specifically elicits the benefit of the graph side information. We also prove a lower bound on the data requirement that shows a large class of problems where these algorithms are near-optimal. We complement our theory with experimental results that show the benefit of capitalizing on such side information. | Accept | All reviewers agree on the merits of the work. | train | [
"2HuB3D4iZg",
"Q1yuOBlGbRNR",
"6cvquCEuFhw",
"YM6z3Rd5dhe"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their careful reading and insightful comments. We provide specific answers to the questions raised below.\n\n**Q : Could you give more details on the settings in which the smoothness assumption is satisfied? How it is possible to infer $\\epsilon$?**\n\n**A :** In our work, we use an *up... | [
-1,
-1,
7,
7
] | [
-1,
-1,
2,
4
] | [
"YM6z3Rd5dhe",
"6cvquCEuFhw",
"nips_2022_KQYodS0W0j",
"nips_2022_KQYodS0W0j"
] |
nips_2022_L-ceBdl2DPb | $k$-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension | Sliced mutual information (SMI) is defined as an average of mutual information (MI) terms between one-dimensional random projections of the random variables. It serves as a surrogate measure of dependence to classic MI that preserves many of its properties but is more scalable to high dimensions. However, a quantitative characterization of how SMI itself and estimation rates thereof depend on the ambient dimension, which is crucial to the understanding of scalability, remain obscure.
This work provides a multifaceted account of the dependence of SMI on dimension, under a broader framework termed $k$-SMI, which considers projections to $k$-dimensional subspaces. Using a new result on the continuity of differential entropy in the 2-Wasserstein metric, we derive sharp bounds on the error of Monte Carlo (MC)-based estimates of $k$-SMI, with explicit dependence on $k$ and the ambient dimension, revealing their interplay with the number of samples. We then combine the MC integrator with the neural estimation framework to provide an end-to-end $k$-SMI estimator, for which optimal convergence rates are established. We also explore asymptotics of the population $k$-SMI as dimension grows, providing Gaussian approximation results with a residual that decays under appropriate moment bounds. All our results trivially apply to SMI by setting $k=1$. Our theory is validated with numerical experiments and is applied to sliced InfoGAN, which altogether provide a comprehensive quantitative account of the scalability question of $k$-SMI, including SMI as a special case when $k=1$. | Accept | While the reviewers have raised concerns, mainly about the significance of k-SMI, they also acknowledged that the theoretical contributions of the paper are solid. On the other hand, I do agree with the authors on the fact that the main contribution of the paper is the theoretical analysis that can be trivially applied to SMI as well, rather than the introduction of k-SMI itself. Hence, I will go ahead and recommend a borderline acceptance for the paper. However, I strongly suggest the authors to take into account the reviewers' concerns, and reword their contributions to emphasize their theoretical contributions, which also clarify the behavior of SMI. | train | [
"IgPnSlZctau",
"-LUA8eQZPh1",
"gSENZU1DxJ",
"7bIZKTVCmp1",
"o4-h3C_VrwR",
"F6wHTFmfAmR",
"D3siRw6mddd",
"idpcTpOdb5K",
"79ys6swkl-5",
"NJGbAg2iD5q",
"0ucSWdgXv3Y",
"tvzrqhKi-I2",
"CJM-soCFr5C"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing more feedback. At the risk of oversimplifying, is it fair to say that your main concern can be summarized as follows: The SMI estimator does not seem to improve upon other MI estimation methods (e.g., by first applying dimensionality reduction techniques and then estimating MI from the transf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"-LUA8eQZPh1",
"79ys6swkl-5",
"79ys6swkl-5",
"NJGbAg2iD5q",
"F6wHTFmfAmR",
"D3siRw6mddd",
"idpcTpOdb5K",
"CJM-soCFr5C",
"tvzrqhKi-I2",
"0ucSWdgXv3Y",
"nips_2022_L-ceBdl2DPb",
"nips_2022_L-ceBdl2DPb",
"nips_2022_L-ceBdl2DPb"
] |
nips_2022_6Kj1wCgiUp_ | Distinguishing discrete and continuous behavioral variability using warped autoregressive HMMs | A core goal in systems neuroscience and neuroethology is to understand how neural circuits generate naturalistic behavior. One foundational idea is that complex naturalistic behavior may be composed of sequences of stereotyped behavioral syllables, which combine to generate rich sequences of actions. To investigate this, a common approach is to use autoregressive hidden Markov models (ARHMMs) to segment video into discrete behavioral syllables. While these approaches have been successful in extracting syllables that are interpretable, they fail to account for other forms of behavioral variability, such as differences in speed, which may be better described as continuous in nature. To overcome these limitations, we introduce a class of warped ARHMMs (WARHMM). As is the case in the ARHMM, behavior is modeled as a mixture of autoregressive dynamics. However, the dynamics under each discrete latent state (i.e. each behavioral syllable) are additionally modulated by a continuous latent ``warping variable.'' We present two versions of warped ARHMM in which the warping variable affects the dynamics of each syllable either linearly or nonlinearly. Using depth-camera recordings of freely moving mice, we demonstrate that the failure of ARHMMs to account for continuous behavioral variability results in duplicate cluster assignments. WARHMM achieves similar performance to the standard ARHMM while using fewer behavioral syllables. Further analysis of behavioral measurements in mice demonstrates that WARHMM identifies structure relating to response vigor. 
| Accept | This paper introduces warped auto-regressive HMMs, where both discrete state transitions z_{t+1} | z_t and continuous observations x_{t+1} | x_t, z_t, depend on the previous time step, and where the linear dynamics for observation transitions are either time-warped (i.e., the linear transition matrix and bias are multiplied by a time-dependent latent step size) or obtained by Gaussian Process regression with latent prototypes. The method is evaluated and applied to modeling “syllables” of behaviour in free-moving mice in neuroscience experiments directly from depth camera recordings.
Reviewers praised the clarity and structure of the paper (w5pw, N8jX, 4Fyi), the motivation for the paper (N8jX, 4Fyi), the honesty in reporting results on slightly worse GP warped ARHMM (w5pw)
The reviewers noted that the contribution was incremental (w5pw) and did not compare to any modern deep learning baseline (N8jX) - for the latter, the authors partially addressed this point by responding that existing work did not handle tracking and segmentation without supervised training. Reviewers noted that evaluation was limited to a proprietary dataset (it is actually publicly accessible) without human ground-truth annotations (4Fyi) and wished more evaluations had been done - the authors replied that results were reviewed by experimentalists. They also had some questions regarding figures that were properly addressed (w5pw).
Reviewers agree on high scores (6, 7, 7) and therefore I would recommend this paper for acceptance.
Sincerely,
Area Chair | train | [
"FpmTT3vwDGd",
"3L1oLKr4F6d",
"ia70lxSJZIq",
"t4qFUhoC7wh",
"xC6F8qrCSxV",
"DCWi1W0vx8l",
"WGg62f0I1bd",
"LV8-8RtOyxy",
"j6INjJSG8US",
"K_5WT6xMTP7",
"vs2AiRV5ysw",
"t7apALMNOig",
"nGN2Y0uYzlC"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification about the test/train likelihoods.",
" Thank you for your time, comments, and investment in the improvement of our paper. We will investigate B-SOiD and update the paper accordingly if accepted.\n\nWe wish to reiterate a few points in the meantime: first, that the main motivation ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"t4qFUhoC7wh",
"ia70lxSJZIq",
"DCWi1W0vx8l",
"xC6F8qrCSxV",
"K_5WT6xMTP7",
"WGg62f0I1bd",
"j6INjJSG8US",
"nGN2Y0uYzlC",
"t7apALMNOig",
"vs2AiRV5ysw",
"nips_2022_6Kj1wCgiUp_",
"nips_2022_6Kj1wCgiUp_",
"nips_2022_6Kj1wCgiUp_"
] |
nips_2022_7-LTDcvNc_ | Analyzing Data-Centric Properties for Graph Contrastive Learning | Recent analyses of self-supervised learning (SSL) find the following data-centric properties to be critical for learning good representations: invariance to task-irrelevant semantics, separability of classes in some latent space, and recoverability of labels from augmented samples. However, given their discrete, non-Euclidean nature, graph datasets and graph SSL methods are unlikely to satisfy these properties. This raises the question: how do graph SSL methods, such as contrastive learning (CL), work well? To systematically probe this question, we perform a generalization analysis for CL when using generic graph augmentations (GGAs), with a focus on data-centric properties. Our analysis yields formal insights into the limitations of GGAs and the necessity of task-relevant augmentations. As we empirically show, GGAs do not induce task-relevant invariances on common benchmark datasets, leading to only marginal gains over naive, untrained baselines. Our theory motivates a synthetic data generation process that enables control over task-relevant information and boasts pre-defined optimal augmentations. This flexible benchmark helps us identify yet unrecognized limitations in advanced augmentation techniques (e.g., automated methods). Overall, our work rigorously contextualizes, both empirically and theoretically, the effects of data-centric properties on augmentation strategies and learning paradigms for graph SSL. | Accept | Overall the reviews are positive, appreciating the proposed theory, derivation, and presentation. Also some concerns raised are properly addressed during the discussion period. Hence, I recommend the accetance of this paper. | train | [
"1SexGOhWbZX",
"Y_dEuFrsIev",
"mY1T6suK8B",
"sD7xqS9xmGu",
"FbZ-ramO95Z",
"AGt5RkWsrdEw",
"Kj-I2X_P2NB_",
"HiIr2ywOYFr",
"tODbZNpnzwL",
"tfuj7FsB9bB",
"rqZtWa5EaHT",
"qVjPV9Z_DBk",
"f_wxBNUDy17",
"rly7acQ-JBG",
"fgrlxsJOrFa",
"HlYY0jsdMjj",
"EfbbEMrYD4e",
"8W-e03zXAFx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks you for your positive assessment of our work! We are glad that our rebuttals and addition of Fig. 5 effectively clarified your concerns, and you found extensions of our framework to other loss functions interesting. We address the remaining question about practicality below.\n\n**Practicality**: We respect... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"Y_dEuFrsIev",
"sD7xqS9xmGu",
"AGt5RkWsrdEw",
"8W-e03zXAFx",
"EfbbEMrYD4e",
"HlYY0jsdMjj",
"nips_2022_7-LTDcvNc_",
"tODbZNpnzwL",
"tfuj7FsB9bB",
"8W-e03zXAFx",
"qVjPV9Z_DBk",
"EfbbEMrYD4e",
"rly7acQ-JBG",
"fgrlxsJOrFa",
"HlYY0jsdMjj",
"nips_2022_7-LTDcvNc_",
"nips_2022_7-LTDcvNc_",
... |
nips_2022_sADLRl2STMe | Combining Implicit and Explicit Regularization for Efficient Learning in Deep Networks | Works studying implicit regularization have focused on gradient trajectories during the optimization process in explaining why deep networks favor certain kinds of solutions over others. For deep linear networks, it has been shown that gradient descent implicitly regularizes toward low-rank solutions in matrix completion/factorization tasks. Akin to an accelerative pre-conditioning, these effects become more pronounced with increased depth. Inspired by this, we propose an explicit penalty that mirrors the rank minimization and generalization performance independently of depth but interestingly only takes effect with certain adaptive gradient optimizers (e.g. Adam and some of its close variants)---performing competitively with or outperforming several approaches in matrix completion over a range of parameter and data regimes. Together with the choice of the optimization algorithm, our findings suggest that explicit regularization can play an important role in designing different, desirable forms of regularization and that a more nuanced understanding of this interplay may be necessary. | Accept | All reviewers recommended that the paper be accepted, and I accordingly recommend the same. I encourage the authors to take into account suggestions made by reviewers so as to further improve the text towards the camera-ready version. | train | [
"RuBHHMrJhU-",
"-05C1AxCZuk",
"_GQagFJ22PL",
"9_7uIofSvYn",
"I4GHG97_5Z1",
"aaiYGse5OXH",
"PduHmaawXt",
"472VXBrDArj",
"tJJ2HleWUTw3",
"LbhaHpezeIE",
"hJrjiz6XQq7",
"ycYDiVCk4DG",
"XudPiCFayiz",
"E0jAZKL3qI2",
"ClzcxH2gAdk",
"3LmF3znz-tj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your explanations about this observation and for looking deeper into the possible reasons. I think the two hypothesis are reasonable and I agree that this would need a more in-depth analysis in future work.",
" Thank you for answering my questions and considering the suggestions. In particular, I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"_GQagFJ22PL",
"LbhaHpezeIE",
"LbhaHpezeIE",
"aaiYGse5OXH",
"472VXBrDArj",
"PduHmaawXt",
"ycYDiVCk4DG",
"tJJ2HleWUTw3",
"3LmF3znz-tj",
"ClzcxH2gAdk",
"E0jAZKL3qI2",
"XudPiCFayiz",
"nips_2022_sADLRl2STMe",
"nips_2022_sADLRl2STMe",
"nips_2022_sADLRl2STMe",
"nips_2022_sADLRl2STMe"
] |
nips_2022_R9KnuFlvnU | WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents | Most existing benchmarks for grounding language in interactive environments either lack realistic linguistic elements, or prove difficult to scale up due to substantial human involvement in the collection of data or feedback signals. We develop WebShop – a simulated e-commerce website environment with 1.18 million real-world products and 12,087 crowd-sourced text instructions. In this environment, an agent needs to navigate multiple types of webpages and issue diverse actions to find, customize, and purchase a product given an instruction. WebShop provides several challenges including understanding compositional instructions, query (re-)formulation, dealing with noisy text in webpages, and performing strategic exploration. We collect over 1,600 human trajectories to first validate the benchmark, then train and evaluate a diverse range of agents using reinforcement learning, imitation learning, and pre-trained image and language models. Our best model achieves a task success rate of 29%, which significantly outperforms rule heuristics but is far lower than expert human performance (59%). We also analyze agent and human trajectories and ablate various model components to provide insights for developing future agents with stronger language understanding and decision making abilities. Finally, we show our agent trained on WebShop exhibits non-trivial sim-to-real transfer when evaluated on amazon.com and ebay.com, indicating the potential value of our benchmark for developing practical web agents that can operate in the wild. | Accept | This paper proposes a new real-world natural language web interaction benchmark for the shopping domain and associated dataset, WebShop. The paper includes a rule-based baseline, as well as a model-based one that is trained using imitation and reinforcement learning. 
Performance is reported using these models in addition to human performance, showing a large gap and opportunity for better models. Reviewers made a good set of suggestions, majority of which have been covered by the authors in their rebuttal. These include additional experimentation requested by the reviewers as well as improvements to the paper presentation. | train | [
"v0FJVbyMMSN",
"OQJJTM9Hl6g",
"v78aH2nj9WN",
"55i-CPrEP5_",
"TnHoJIywT9H",
"du5lV-g9FLs",
"2WVG1ggYBTT",
"4S3J6MpDlLP",
"5kWlxAPD9dQ",
"MABXPVsZ4PG",
"jnOIdTLBMYq",
"M6T4t-sQjax",
"ZA5iVIL_tin"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We want to thank reviewers again for their time and valuable questions. As the discussion period is about to end and we don't get to hear back from all reviewers yet, we'd like to briefly summarize our rebuttal and updates:\n\n- We reiterated the motivation of our work, and why we believe large-scale real-world w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"du5lV-g9FLs",
"ZA5iVIL_tin",
"jnOIdTLBMYq",
"ZA5iVIL_tin",
"M6T4t-sQjax",
"nips_2022_R9KnuFlvnU",
"ZA5iVIL_tin",
"M6T4t-sQjax",
"jnOIdTLBMYq",
"nips_2022_R9KnuFlvnU",
"nips_2022_R9KnuFlvnU",
"nips_2022_R9KnuFlvnU",
"nips_2022_R9KnuFlvnU"
] |
nips_2022_Od4oKKwBx7Z | On Infinite Separations Between Simple and Optimal Mechanisms | We consider a revenue-maximizing seller with $k$ heterogeneous items for sale to a single additive buyer, whose values are drawn from a known, possibly correlated prior $\mathcal{D}$. It is known that there exist priors $\mathcal{D}$ such that simple mechanisms --- those with bounded menu complexity --- extract an arbitrarily small fraction of the optimal revenue~(Briest et al. 2015, Hart and Nisan 2019). This paper considers the opposite direction: given a correlated distribution $\mathcal{D}$ witnessing an infinite separation between simple and optimal mechanisms, what can be said about $\mathcal{D}$?
\citet{hart2019selling} provides a framework for constructing such $\mathcal{D}$: it takes as input a sequence of $k$-dimensional vectors satisfying some geometric property, and produces a $\mathcal{D}$ witnessing an infinite gap. Our first main result establishes that this framework is without loss: every $\mathcal{D}$ witnessing an infinite separation could have resulted from this framework. An earlier version of their work provided a more streamlined framework (Hart and Nisan 2013). Our second main result establishes that this restrictive framework is not tight. That is, we provide an instance $\mathcal{D}$ witnessing an infinite gap, but which provably could not have resulted from the restrictive framework.
As a corollary, we discover a new kind of mechanism which can witness these infinite separations on instances where the previous ``aligned'' mechanisms do not. | Accept | This very well-written studies Bayesian mechanism design where we aim to maximize the expected revenue of selling k items to a single buyer with additive valuations. The authors investigate when the optimal revenue is infinite while the best-possible revenue from selling all items in a grand bundle is finite; this question has significance for understanding the power of any mechanism---such as deterministic bundle pricing---that has a finite menu size. A reduction from previous work provides a recipe for constructing such "infinitely-separated" instances, by constructing instances with infinite "MenuGap". The first main result is that infinite MenuGap is also a necessary condition for an instance to be infinitely-separated. A subclass of such infinite-separated instances can be characterized by those also with an infinite "SupGap"---a gap that is upper-bounded by the MenuGap. Previous constructions of infinite MenuGap always involved infinite SupGap, suggesting that infinite SupGap may also be necessary for an instance to be infinitely-separated; the second main result falsifies this by an infinitely-separated instance that has finite SupGap.
Despite some concerns about fit with NeurIPS, I find the overall fit quite good.
| train | [
"5amjLAk1w74",
"Nyp-Q_XgPVg",
"OqbmMs3W4rl",
"rQcxWUb6e6r",
"h8LC3898ZAql",
"-UOaOq-ic81",
"ucwogtkk0pn",
"0oN6wzP_v1h",
"JU1NF4Gkg4W",
"RHvyzhc9jF",
"YpxO3zjKcqz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this nice answer. Indeed, it seems that it was non trivial to come up with this construction. After reading your response to my review and other reviews, I still think this is a good paper and maintain my score.",
" Thanks a ton! I think it'd be nice to incorporate this discussion somewhere in the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"OqbmMs3W4rl",
"h8LC3898ZAql",
"YpxO3zjKcqz",
"RHvyzhc9jF",
"JU1NF4Gkg4W",
"0oN6wzP_v1h",
"nips_2022_Od4oKKwBx7Z",
"nips_2022_Od4oKKwBx7Z",
"nips_2022_Od4oKKwBx7Z",
"nips_2022_Od4oKKwBx7Z",
"nips_2022_Od4oKKwBx7Z"
] |
nips_2022_kjR8GiwqCK | IMED-RL: Regret optimal learning of ergodic Markov decision processes | We consider reinforcement learning in a discrete, undiscounted, infinite-horizon Markov decision problem (MDP) under the average reward criterion, and focus on the minimization of the regret with respect to an optimal policy, when the learner does not know the rewards nor transitions of the MDP. In light of their success at regret minimization in multi-armed bandits, popular bandit strategies, such as the optimistic \texttt{UCB}, \texttt{KL-UCB} or the Bayesian Thompson sampling strategy, have been extended to the MDP setup. Despite some key successes, existing strategies for solving this problem either fail to be provably asymptotically optimal, or suffer from prohibitive burn-in phase and computational complexity when implemented in practice. In this work, we shed a novel light on regret minimization strategies, by extending to reinforcement learning the computationally appealing Indexed Minimum Empirical Divergence (\texttt{IMED}) bandit algorithm. Traditional asymptotic problem-dependent lower bounds on the regret are known under the assumption that the MDP is \emph{ergodic}. Under this assumption, we introduce \texttt{IMED-RL} and prove that its regret upper bound asymptotically matches the regret lower bound. We discuss both the case when the supports of transitions are unknown, and the more informative but a priori harder-to-exploit-optimally case when they are known. Rewards are assumed light-tailed, semi-bounded from above. Last, we provide numerical illustrations on classical tabular MDPs, \textit{ergodic} and \textit{communicative} only, showing the competitiveness of \texttt{IMED-RL} in finite-time against state-of-the-art algorithms. \texttt{IMED-RL} also benefits from a lighter complexity. | Accept | Regret theory for ergodic undiscounted infinite-horizon MDP was largely an open problem. The authors made an effort to fill in this gap. 
All reviewers see merits of the analysis, and the rebuttal has addressed most of the reviewers' concerns. Please make sure to incorporate necessary changes and add missing citations while preparing the final paper. | train | [
"gUfTjXPCb0",
"PBi0h4jDPG",
"tKfwtO17Rci",
"PEIYrkUBH0i",
"us_o64RrC25",
"ebl4mG2fN2D",
"JQNO9CuLBF4",
"WN47aZUa3CG",
"mCwa_0WHbpk",
"7W7N-MoSeUH",
"1gz6Dq7f_L-"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for acknowledging the ideas behind IMED-RL and the theoretical strength of the paper.\n\nWe took your comment into consideration and tried to derive some performance bounds for the communicating only assumption.\nHowever, we designed IMED-RL for the ergodic case, and it really shows up in the anal... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"tKfwtO17Rci",
"PEIYrkUBH0i",
"JQNO9CuLBF4",
"ebl4mG2fN2D",
"nips_2022_kjR8GiwqCK",
"1gz6Dq7f_L-",
"7W7N-MoSeUH",
"mCwa_0WHbpk",
"nips_2022_kjR8GiwqCK",
"nips_2022_kjR8GiwqCK",
"nips_2022_kjR8GiwqCK"
] |
nips_2022_o-mxIWAY1T8 | Semantic Probabilistic Layers for Neuro-Symbolic Learning | We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints. Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space all while being amenable to end-to-end learning via maximum likelihood.
SPLs combine exact probabilistic inference with logical reasoning in a clean and modular way, learning complex distributions and restricting their support to solutions of the constraint. As such, they can faithfully, and efficiently, model complex SOP tasks beyond the reach of alternative neuro-symbolic approaches. We empirically demonstrate that SPLs outperform these competitors in terms of accuracy on challenging SOP tasks such as hierarchical multi-label classification, pathfinding and preference learning, while retaining perfect constraint satisfaction. | Accept | On the basis of the reviews I am recommending acceptance. A weakness for me is that neural models tend to learn logical constraints on structured labels directly from the data, so in many cases enforcing the constraints does not improve performance. However, the idea of putting sum-product networks at the classification layer (probabilistic circuits) so that the partition function can be computed efficiently seems novel and interesting enough to warrant publication. | train | [
"eyKLFRCmWGo",
"cky11gT4z2",
"qH4gGN25tVb",
"5aJ_YYDLcT_",
"_1qtRhM5bwC",
"XePKnVrYjKa",
"Qw7KyQ5zIjK",
"DZWbudioO0T",
"kOJGI03oHZr",
"RbLGtlxSbFE",
"iT88Oz5QmH",
"0G_xavoYrvY",
"0Oh9skogmbN",
"0-RpBn-D99U",
"wlfhbrj-jgO"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing the complexity problem in more detail and incorporating the discussion into the revision. This discussion would be particularly helpful for the readers outside the subfield to understand the method. I really appreciate the efforts! For the experiments, again, I acknowledge that some of the t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
3
] | [
"qH4gGN25tVb",
"kOJGI03oHZr",
"_1qtRhM5bwC",
"0-RpBn-D99U",
"DZWbudioO0T",
"Qw7KyQ5zIjK",
"wlfhbrj-jgO",
"wlfhbrj-jgO",
"0-RpBn-D99U",
"0Oh9skogmbN",
"0G_xavoYrvY",
"nips_2022_o-mxIWAY1T8",
"nips_2022_o-mxIWAY1T8",
"nips_2022_o-mxIWAY1T8",
"nips_2022_o-mxIWAY1T8"
] |
nips_2022_NtJyGXo0nF | Adversarial training for high-stakes reliability | In the future, powerful AI systems may be deployed in high-stakes settings, where a single failure could be catastrophic. One technique for improving AI safety in high-stakes settings is adversarial training, which uses an adversary to generate examples to train on in order to achieve better worst-case performance.
In this work, we used a safe language generation task (``avoid injuries'') as a testbed for achieving high reliability through adversarial training. We created a series of adversarial training techniques---including a tool that assists human adversaries---to find and eliminate failures in a classifier that filters text completions suggested by a generator. In our task, we determined that we can set very conservative classifier thresholds without significantly impacting the quality of the filtered outputs. We found that adversarial training significantly increased robustness to the adversarial attacks that we trained on--- tripling the time to find adversarial examples without tools and doubling the time with our tool (from 13 to 26 minutes)---without affecting in-distribution performance.
We hope to see further work in the high-stakes reliability setting, including more powerful tools for enhancing human adversaries and better ways to measure high levels of reliability, until we can confidently rule out the possibility of catastrophic deployment-time failures of powerful models. | Accept | The paper used a safe language generation task (``avoid injuries'') as a testbed for achieving high reliability through adversarial training. Reviewers had a high variance in their evaluation of this work. While reviewers found the toolkit interesting and the paper well-written, there were some concerns regarding the lack of theoretical evidence on worst-case reliability and the lack of strong/adaptive attacks and baselines in experiments. Given all, I think the paper is above the accept threshold. | train | [
"r50PLXXkHK",
"upThmcLO8x_",
"hNxyVD4fWMF",
"zS_m2XM1jDe",
"Uz4cFzJ8c8b",
"zwAa86o50-j",
"mHd_0Q06vJyc",
"RiThBRRNGRv",
"KB4V725laIH",
"O-leG1MAKXN"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the reviews and looked at updates to the paper. I think that the authors have considered and addressed some of the relatively minor suggestions that I pointed out. As far as my major concerns go, they continue to be things that this paper did not do as I discussed in my main review. But I also continu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"hNxyVD4fWMF",
"Uz4cFzJ8c8b",
"O-leG1MAKXN",
"KB4V725laIH",
"zwAa86o50-j",
"RiThBRRNGRv",
"nips_2022_NtJyGXo0nF",
"nips_2022_NtJyGXo0nF",
"nips_2022_NtJyGXo0nF",
"nips_2022_NtJyGXo0nF"
] |
nips_2022_11WmFbrIt26 | Provable Defense against Backdoor Policies in Reinforcement Learning | We propose a provable defense mechanism against backdoor policies in reinforcement learning under subspace trigger assumption. A backdoor policy is a security threat where an adversary publishes a seemingly well-behaved policy which in fact allows hidden triggers. During deployment, the adversary can modify observed states in a particular way to trigger unexpected actions and harm the agent. We assume the agent does not have the resources to re-train a good policy. Instead, our defense mechanism sanitizes the backdoor policy by projecting observed states to a `safe subspace', estimated from a small number of interactions with a clean (non-triggered) environment. Our sanitized policy achieves $\epsilon$ approximate optimality in the presence of triggers, provided the number of clean interactions is $O\left(\frac{D}{(1-\gamma)^4 \epsilon^2}\right)$ where $\gamma$ is the discounting factor and $D$ is the dimension of state space. Empirically, we show that our sanitization defense performs well on two Atari game environments. | Accept | The authors present a novel algorithm for defending against backdoor policies in Reinforcement Learning (RL).
The main idea is to project observations onto a "safe" subspace which cleans out the backdoor.
The authors present both empirical findings and theoretical results for their method.
There was an active discussion between reviewers and authors in which the main concerns were addressed.
I recommend that the authors follow the suggestion of reviewer iqtz and emphasise the restriction on the attacker ability more clearly to avoid any over-claims about the capability of the defense method. | train | [
"yCfjbAWOKin",
"CNHKXWKHr8_",
"g8espMZrUJi",
"2niEr-yilWE",
"Ee-M9YgueUD",
"dd2B02NTVXo",
"Ug9PF4HQqOs",
"OFu27n7BHMd",
"4iQ8NTMnUY9",
"RQO-zxpkXAp",
"uIBRy0LwE-q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' reply and will raise the score to 5. I suggest the authors emphasize the restriction on the attacker ability more clearly to avoid any over-claims about the capability of the defense method.",
" We are not saying that putting triggers in high variance directions would be unrealistic. F... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"CNHKXWKHr8_",
"g8espMZrUJi",
"Ee-M9YgueUD",
"uIBRy0LwE-q",
"RQO-zxpkXAp",
"4iQ8NTMnUY9",
"OFu27n7BHMd",
"nips_2022_11WmFbrIt26",
"nips_2022_11WmFbrIt26",
"nips_2022_11WmFbrIt26",
"nips_2022_11WmFbrIt26"
] |
nips_2022_yb3HOXO3lX2 | Defining and Characterizing Reward Gaming | We provide the first formal definition of \textbf{reward hacking}, a phenomenon where optimizing an imperfect proxy reward function, $\mathcal{\tilde{R}}$, leads to poor performance according to the true reward function, $\mathcal{R}$.
We say that a proxy is \textbf{unhackable} if increasing the expected proxy return can never decrease the expected true return.
Intuitively, it might be possible to create an unhackable proxy by leaving some terms out of the reward function (making it ``narrower'') or overlooking fine-grained distinctions between roughly equivalent outcomes, but we show this is usually not the case.
A key insight is that the linearity of reward (in state-action visit counts) makes unhackability a very strong condition.
In particular, for the set of all stochastic policies, two reward functions can only be unhackable if one of them is constant.
We thus turn our attention to deterministic policies and finite sets of stochastic policies, where non-trivial unhackable pairs always exist, and establish necessary and sufficient conditions for the existence of simplifications, an important special case of unhackability.
Our results reveal a tension between using reward functions to specify narrow tasks and aligning AI systems with human values. | Accept | The paper introduces the question of whether a pair of reward function and a proxy reward function is "gameable", i.e., whether maximizing the proxy may actually decrease the return in the original reward function. The authors develop a fairly complete analysis of this problem revealing a set of non-trivial results on, e.g., the possibility of creating ungameable pairs.
There is a general consensus among the reviewers that the contribution is novel and interesting and thus I'm proposing acceptance. For the final version of the paper, I strongly suggest the authors to
- Integrate some of the discussion from the rebuttal
- Clarify that the scope of the paper is more on the introduction of the concept and its analysis rather than on proposing new algorithms or practical approaches
- Expand as much as possible its connections with existing literature on inverse reinforcement learning | train | [
"iYAkSJ_284A",
"kAxhWXskOrx",
"RkcAREm4tp",
"2Ju14InF0RZ",
"3fmGmfWGr7T",
"dD9GVDXvMAb",
"dCiFUisOT-",
"QvOmGDQbjqW",
"urKOzflnZ0C",
"amm42xUUwqT",
"7fa3vOhxwU4",
"uuZcwjIciAn",
"veS-eFZXGmj",
"VkR3oCCnAxa",
"cuMneC_JO1H"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for elaborating. It's possible we've still not understood your point, but will respond as best we can anyways...\n\nBecause our definitions of (simplification/)gameability considers not only the optimal policy, but the entire policy order, we don't believe this property is likely to lead to any additio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
3
] | [
"3fmGmfWGr7T",
"dD9GVDXvMAb",
"2Ju14InF0RZ",
"amm42xUUwqT",
"QvOmGDQbjqW",
"uuZcwjIciAn",
"amm42xUUwqT",
"veS-eFZXGmj",
"amm42xUUwqT",
"cuMneC_JO1H",
"VkR3oCCnAxa",
"nips_2022_yb3HOXO3lX2",
"nips_2022_yb3HOXO3lX2",
"nips_2022_yb3HOXO3lX2",
"nips_2022_yb3HOXO3lX2"
] |
nips_2022_5yjM1sQ1uKZ | A Unified Framework for Alternating Offline Model Training and Policy Learning | In offline model-based reinforcement learning (offline MBRL), we learn a dynamic model from historically collected data, and subsequently utilize the learned model and fixed datasets for policy learning, without further interacting with the environment. Offline MBRL algorithms can improve the efficiency and stability of policy learning over the model-free algorithms. However, in most of the existing offline MBRL algorithms, the learning objectives for the dynamic models and the policies are isolated from each other. Such an objective mismatch may lead to inferior performance of the learned agents. In this paper, we address this issue by developing an iterative offline MBRL framework, where we maximize a lower bound of the true expected return, by alternating between dynamic-model training and policy learning. With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets. Source code is publicly released. | Accept |
In this paper, motivated by the observation that the learning objectives of the model and the policies in offline model-based RL are mismatched, the authors established a lower bound of the true expected return, which includes both the model and policy learning. The authors designed a tractable approximation to the lower bound, building upon which an algorithm is derived with a practical implementation. The algorithm is then justified empirically to demonstrate its superiority.
Most of the reviewers appreciate the proposed method. The paper can be further improved:
>All reviewers believe there are some gaps in the derivation of the practical algorithm from the original optimization. The rationale behind the approximation and parametrization should be clearly discussed to avoid possible confusion. I understand there is a page limit, but I believe the paper can be reorganized to leave the important discussion, especially the comparison w.r.t. the DICE family and VPM, in the main text, not in the Appendix. \
In sum, the paper indeed provides an interesting method which iteratively learns the policy and the model in the offline setting by optimizing the lower bound of the true expected return, together with a practical approximation scheme with superior performance. I recommend acceptance.
"gkcWPYN_nh8",
"OKOW_oy0Rve",
"alj6nbgFbeS",
"_Bq0-n53UqO",
"DujI4lntkd8",
"1ZkUr3lKCAh",
"RAWyzw-Q8RL",
"lrh1Kj72tna",
"9G-Sip4AVe",
"u2Z9pzHLlCV",
"Udvns5FBBY",
"vMi_oMIoISh",
"OsOXFKpWB_",
"rHECcCJZ4t-",
"PFmBIgFa_hK",
"GD2mcOfZSJB",
"U8Q8LV4c-eNJ",
"kZkTxzayh5o",
"xex2I-4-CZN... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Thank you very much for the response! Since the discussion period is close to the end, below is a quick answer to your question:\n\nThe second term is quite similar to the “Bellman residual error” for value function learning. If we use the target network to stabilize the training, then the loss calculated with t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"OKOW_oy0Rve",
"RAWyzw-Q8RL",
"RAWyzw-Q8RL",
"u2Z9pzHLlCV",
"lrh1Kj72tna",
"PFmBIgFa_hK",
"9G-Sip4AVe",
"kZkTxzayh5o",
"AEqkq_isuqtR",
"Udvns5FBBY",
"U8Q8LV4c-eNJ",
"kZkTxzayh5o",
"AEqkq_isuqtR",
"nips_2022_5yjM1sQ1uKZ",
"GD2mcOfZSJB",
"soNI0PesdiF",
"MtmHllJDdW3",
"xex2I-4-CZN8",
... |
nips_2022_5WuQNQwy56M | S4ND: Modeling Images and Videos as Multidimensional Signals with State Spaces | Visual data such as images and videos are typically modeled as discretizations of inherently continuous, multidimensional signals. Existing continuous-signal models attempt to exploit this fact by modeling the underlying signals of visual (e.g., image) data directly. However, these models have not yet been able to achieve competitive performance on practical vision tasks such as large-scale image and video classification. Building on a recent line of work on deep state space models (SSMs), we propose S4ND, a new multidimensional SSM layer that extends the continuous-signal modeling ability of SSMs to multidimensional data including images and videos. We show that S4ND can model large-scale visual data in $1$D, $2$D, and $3$D as continuous multidimensional signals and demonstrates strong performance by simply swapping Conv2D and self-attention layers with S4ND layers in existing state-of-the-art models. On ImageNet-1k, S4ND exceeds the performance of a Vision Transformer baseline by $1.5\%$ when training with a $1$D sequence of patches, and matches ConvNeXt when modeling images in $2$D. For videos, S4ND improves on an inflated $3$D ConvNeXt in activity classification on HMDB-51 by $4\%$. S4ND implicitly learns global, continuous convolutional kernels that are resolution invariant by construction, providing an inductive bias that enables generalization across multiple resolutions. By developing a simple bandlimiting modification to S4 to overcome aliasing, S4ND achieves strong zero-shot (unseen at training time) resolution performance, outperforming a baseline Conv2D by $40\%$ on CIFAR-10 when trained on $8 \times 8$ and tested on $32 \times 32$ images. When trained with progressive resizing, S4ND comes within $\sim 1\%$ of a high-resolution model while training $22\%$ faster.
| Accept | The paper develops a multidimensional model (S4ND) with continuous modeling properties, which outperforms state-of-the-art models on large-scale benchmarks. All four reviewers reach an agreement and vote for accepting the paper. Also, the rebuttal and the discussion have successfully addressed all the major concerns raised by the reviewers. AC agrees with all reviewers’ judgments and recommends accepting the paper because of the novelty and high performance of the proposed model. | train | [
"AdNBSLTlPHE",
"gtWRYyw77",
"AhVG1VR-GY",
"5gDBRUacpkY",
"baNhIf9f2Xt",
"yfy2j3fHsR",
"Jpes_TBbXBo",
"d6ngpFqD7v",
"kfxBwV5n4yS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their time, and their feedback regarding the thoroughness of the experiments as well as thoughtful questions regarding the bandlimiting contribution.\n\n## S4ND generalization\n\nIn hindsight, the generalization of S4 from 1D to higher dimension has a simplicity and elegance to its formu... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"kfxBwV5n4yS",
"d6ngpFqD7v",
"Jpes_TBbXBo",
"yfy2j3fHsR",
"nips_2022_5WuQNQwy56M",
"nips_2022_5WuQNQwy56M",
"nips_2022_5WuQNQwy56M",
"nips_2022_5WuQNQwy56M",
"nips_2022_5WuQNQwy56M"
] |
nips_2022_TYMGhqlSFkC | JAWS: Auditing Predictive Uncertainty Under Covariate Shift | We propose \textbf{JAWS}, a series of wrapper methods for distribution-free uncertainty quantification tasks under covariate shift, centered on the core method \textbf{JAW}, the \textbf{JA}ckknife+ \textbf{W}eighted with data-dependent likelihood-ratio weights. JAWS also includes computationally efficient \textbf{A}pproximations of JAW using higher-order influence functions: \textbf{JAWA}. Theoretically, we show that JAW relaxes the jackknife+'s assumption of data exchangeability to achieve the same finite-sample coverage guarantee even under covariate shift. JAWA further approaches the JAW guarantee in the limit of the sample size or the influence function order under common regularity assumptions. Moreover, we propose a general approach to repurposing predictive interval-generating methods and their guarantees to the reverse task: estimating the probability that a prediction is erroneous, based on user-specified error criteria such as a safe or acceptable tolerance threshold around the true label. We then propose \textbf{JAW-E} and \textbf{JAWA-E} as the repurposed proposed methods for this \textbf{E}rror assessment task. Practically, JAWS outperform state-of-the-art predictive inference baselines in a variety of biased real world data sets for interval-generation and error-assessment predictive uncertainty auditing tasks. | Accept | This paper studies conformal prediction under covariate shift and proposes JAW, a weighted version of jackknife+ to solve this problem. There is a consensus among the expert reviewers that the paper considers an important problem and has substantial contributions that are deemed adequate for publication at NeurIPS2022. The authors provided a rebuttal that has sufficiently addressed the reviewers' concerns as acknowledged by Reviewers `qDpq `, `3P4m`, and `AAyc`.
| train | [
"ZZ6CneR5Ejr",
"z4ouuwHBVI",
"acABqtC2nc",
"KHa1X9H0Q_q",
"lN2KexJ9WD9",
"gyz3DFMAHk9",
"jqppqDgyWDO",
"b0ebBgHNNk1",
"xkp83ds4zvF",
"80lX2PvSrfq",
"aQ0SRgL5iHF",
"o7dZ88Pyn71",
"UDTG6Az5gI",
"_aRRLaD7ofV",
"ZwX9yZifWRp2",
"VtNXU8Z5lYC",
"YhIfBd8cWzlo",
"ZqUMuh_tpM5",
"H5YsLQ8iZg... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for acknowledging our response! If there are no other outstanding concerns, we would really appreciate it if you’d consider updating your score. Thanks again for your helpful review, which helped us improve the paper. ",
" Thanks for your response and most of my concerns have been addressed.",
" Dear R... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"z4ouuwHBVI",
"gyz3DFMAHk9",
"7X2rYIgoSE",
"H5YsLQ8iZgq",
"aQ0SRgL5iHF",
"VtNXU8Z5lYC",
"xkp83ds4zvF",
"80lX2PvSrfq",
"UDTG6Az5gI",
"_aRRLaD7ofV",
"H5YsLQ8iZgq",
"wXwVHGXuS5w",
"wXwVHGXuS5w",
"ZwX9yZifWRp2",
"8mPVzk6z-a4",
"7X2rYIgoSE",
"nips_2022_TYMGhqlSFkC",
"nips_2022_TYMGhqlSF... |
nips_2022_pgF-N1YORd | Disentangling Transfer in Continual Reinforcement Learning | The ability of continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks is a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios. Consequently, this study aims to broaden our understanding of transfer and its driving forces in the specific case of continual reinforcement learning. We adopt SAC as the underlying RL algorithm and Continual World as a suite of continuous control tasks. We systematically study how different components of SAC (the actor and the critic, exploration, and data) affect transfer efficacy, and we provide recommendations regarding various modeling options. The best set of choices, dubbed ClonEx-SAC, is evaluated on the recent Continual World benchmark. ClonEx-SAC achieves 87% final success rate compared to 80% of PackNet, the best method in the benchmark. Moreover, the transfer grows from 0.18 to 0.54 according to the metric provided by Continual World. | Accept | All reviewers appreciated the importance of investigating continual reinforcement learning, and the throughout experiments conducted in the paper. While the paper does not introduce new methods, it evaluates existing methods and offers surprising insights. While the experiments focus on a single algorithm and environment suite, these insights are still valuable to the community. For these reasons, I recommend acceptance. | train | [
"EP6PEmC8g3F",
"HSq_4XtYP9M",
"980iAoO5IZm",
"fbPJEoKIoWi",
"vyJZHquREz4",
"nTlsevTheu",
"APOCu99mP2d",
"Ve-B9O_X3QT",
"5QmvzwolJ3U"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 3Rch,\n\nThank you again for your review. We wanted to follow up and see if we have addressed your concerns. Please let us know if you have further questions.",
" Dear Reviewer Qp95,\n\nThank you again for your review. We wanted to follow up and see if we have addressed your concerns. Please let u... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"Ve-B9O_X3QT",
"APOCu99mP2d",
"APOCu99mP2d",
"5QmvzwolJ3U",
"nTlsevTheu",
"Ve-B9O_X3QT",
"nips_2022_pgF-N1YORd",
"nips_2022_pgF-N1YORd",
"nips_2022_pgF-N1YORd"
] |
nips_2022_LJdUUOmWjX | List-Decodable Sparse Mean Estimation via Difference-of-Pairs Filtering | We study the problem of list-decodable sparse mean estimation. Specifically, for a parameter $\alpha \in (0, 1/2)$, we are given $m$ points in $\mathbb{R}^n$, $\lfloor \alpha m \rfloor$ of which are i.i.d. samples from a distribution $D$ with unknown $k$-sparse mean $\mu$. No assumptions are made on the remaining points, which form the majority of the dataset. The goal is to return a small list of candidates containing a vector $\hat \mu$ such that $\|\hat \mu - \mu\|_2$ is small. Prior work had studied the problem of list-decodable mean estimation in the dense setting. In this work, we develop a novel, conceptually simpler technique for list-decodable mean estimation. As the main application of our approach, we provide the first sample and computationally efficient algorithm for list-decodable sparse mean estimation. In particular, for distributions with ``certifiably bounded'' $t$-th moments in $k$-sparse directions and sufficiently light tails, our algorithm achieves error of $(1/\alpha)^{O(1/t)}$ with sample complexity $m = (k\log(n))^{O(t)}/\alpha$ and running time $\mathrm{poly}(mn^t)$. For the special case of Gaussian inliers, our algorithm achieves the optimal error guarantee $\Theta (\sqrt{\log(1/\alpha)})$ with quasi-polynomial complexity. We complement our upper bounds with nearly-matching statistical query and low-degree polynomial testing lower bounds. | Accept | This work considers list-decodable mean estimation: a small fraction (say 10%) of the data are iid draws from D, while the rest are adversarial. The goal is to output a finite number of estimates such that one of them approximates the mean of D. This had been broadly studied since [CSV17]. The main contribution of the current work is two-fold: 1) it considers the sample complexity when the mean of D is sparse; and 2) it designs a new algorithm that runs in manageable time (in some regimes, polynomial time).
The referees are unanimous in recommending acceptance. The review of Ymm8 had to be disregarded due to a COI. | train | [
"SFvbG0LkKYc",
"DMA9JsUKQOD",
"_4gPdncg8pG",
"Dqd_ORQtY7W",
"5RF5tB2Txhq",
"MlllZoUxmVd",
"gvEXOQjg16U",
"69_vSQKY8D",
"hcRJ9587Zz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for the response. It would be helpful if more discussions can be added to Sec 3. On the other side, I agree with the authors that presenting most of the proofs in the main text also shows the simplicity of DoP framework. Thus, I am raising my score.",
" Thanks for the clarification. I suggest a... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Dqd_ORQtY7W",
"5RF5tB2Txhq",
"hcRJ9587Zz",
"69_vSQKY8D",
"gvEXOQjg16U",
"nips_2022_LJdUUOmWjX",
"nips_2022_LJdUUOmWjX",
"nips_2022_LJdUUOmWjX",
"nips_2022_LJdUUOmWjX"
] |
nips_2022_snUOkDdJypm | Finite-Time Last-Iterate Convergence for Learning in Multi-Player Games | We study the question of last-iterate convergence rate of the extragradient algorithm by Korpelevich [1976] and the optimistic gradient algorithm by Popov [1980] in multi-player games. We show that both algorithms with constant step-size have last-iterate convergence rate of $O(\frac{1}{\sqrt{T}})$ to a Nash equilibrium in terms of the gap function in smooth monotone games, where each player's action set is an arbitrary convex set. Previous results only study the unconstrained setting, where each player's action set is the entire Euclidean space. Our results address an open question raised in several recent works by Hsieh et al. [2019], Golowich et al. [2020a,b], who ask for last-iterate convergence rate of either the extragradient or the optimistic gradient algorithm in the constrained setting. Our convergence rates for both algorithms are tight and match the lower bounds by Golowich et al. [2020a,b]. At the core of our results lies a new notion -- the tangent residual, which we use to measure the proximity to equilibrium. We use the tangent residual (or a slight variation of the tangent residual) as the potential function in our analysis of the extragradient algorithm (or the optimistic gradient algorithm) and prove that it is non-increasing between two consecutive iterates. | Accept | This paper studies the last iterate rate of convergence of the well-known extragradient and optimistic gradient algorithms in smooth monotone games with continuous convex action sets. The main result of the paper is to show that both algorithms (with constant step size) enjoy tight last-iterate convergence rates for this setting (previous papers either 1) only applied to unconstrained domains, 2) were asymptotic, or 3) required dependence on arbitrarily large problem-dependent constants).
This paper resolves a well-known open problem within the min-max optimization community, and is likely to have significant impact. The reviewers agree that the paper is well-written, and the techniques (using the "tangent residual" as a potential function) are novel. For the final version, the authors are encouraged to incorporate the reviewers' suggestions to improve the presentation.
| train | [
"2GjLxz9zrTb",
"O2RdscW39zv",
"M5B6RC2ORdX",
"jFfs3UgVZAJ",
"oAp5_raXZY",
"1_w1s4v0Vhe",
"R8D9gs-LGr",
"UVYa6_vgmO",
"91xqFGmhzM",
"SunK7kYN8eh",
"VMdMgsapTqj",
"960EDvNeCfL",
"5SuVboCz5Tc",
"yOjfY6N7vIa",
"k2KQ3uKqGD",
"2tm947XSYf9",
"eRSEyZsdeqM",
"ubh8HuialtC",
"cZyweK1xWkl",
... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" OK. Thanks.",
" Thank you for the encouraging comment.\n\nRegarding Q1, for monotone and Lipschitz VI's our focus is algorithmic, and the goal is to understand the convergence rates of these algorithms rather than viewing them as game dynamics. Thus in contribution 2, we include both EG and OG regardless of whe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"O2RdscW39zv",
"R8D9gs-LGr",
"91xqFGmhzM",
"1_w1s4v0Vhe",
"UVYa6_vgmO",
"960EDvNeCfL",
"VMdMgsapTqj",
"2tm947XSYf9",
"yOjfY6N7vIa",
"QxkxhonnRZv",
"QxkxhonnRZv",
"5SuVboCz5Tc",
"ubh8HuialtC",
"cZyweK1xWkl",
"nips_2022_snUOkDdJypm",
"eRSEyZsdeqM",
"nips_2022_snUOkDdJypm",
"nips_2022... |
nips_2022_g2dXxjD9Ucv | Normalizing Flows for Knockoff-free Controlled Feature Selection | Controlled feature selection aims to discover the features a response depends on while limiting the false discovery rate (FDR) to a predefined level. Recently, multiple deep-learning-based methods have been proposed to perform controlled feature selection through the Model-X knockoff framework. We demonstrate, however, that these methods often fail to control the FDR for two reasons. First, these methods often learn inaccurate models of features. Second, the "swap" property, which is required for knockoffs to be valid, is often not well enforced. We propose a new procedure called FlowSelect to perform controlled feature selection that does not suffer from either of these two problems. To more accurately model the features, FlowSelect uses normalizing flows, the state-of-the-art method for density estimation. Instead of enforcing the "swap" property, FlowSelect uses a novel MCMC-based procedure to calculate p-values for each feature directly. Asymptotically, FlowSelect computes valid p-values. Empirically, FlowSelect consistently controls the FDR on both synthetic and semi-synthetic benchmarks, whereas competing knockoff-based approaches do not. FlowSelect also demonstrates greater power on these benchmarks. Additionally, FlowSelect correctly infers the genetic variants associated with specific soybean traits from GWAS data.
| Accept | This paper describes how to use normalizing flows for selecting features in a way that controls the type-1 error by using a normalizing flow along with MCMC to sample from the null distribution. The majority of the reviewers were positive, however the most confident reviewer was negative. From taking a look at that reviewers concerns, I tend to agree with most of them.
The paper is titled knockoff-free, which means in the context of this paper that both 1) 1-bit p-values are not used and 2) The full knockoff property is not required, only sampling from complete conditionals are required. Most of the experiments compare knockoff methods to the proposed approach, so it's not clear if 1) 1-bit p-values are not great or 2) the model-X process/complete conditional sampling process is better with normalizing flows. The former point is known and the latter point on the best way to sample from the complete conditionals is really the value.
If we take the paper as,
1) complete conditionals are 1-D
2) MCMC can be used to sample from a 1-D unnormalized density
3) Simple MCMC won't be bad because the problem is 1-D
-> Any likelihood based deep generative model can be used to sample complete conditionals
then it's a solid paper.
On the other hand, the belief that flows are the correct choice versus other likelihood-based deep generative models is harder to take as there's only a comparison with a mixture density network used in the original HRT paper. Also from other uses of these models, different models are better in different situations. I'd suggest a heavy discussion in the paper on this point at the minimum. Maybe even a reframing of the paper is needed.
Finally, for the test statistic, the HRT may not be the best choice for work like this paper that studies the problems with estimating X-distribution. The paper "CONTRA: Contrarian statistics for controlled variable selection" at AISTATS 2021 shows that the HRT test statistic is more sensitive to model-X estimation errors than a simple mixture statistic that doesn't give up much power. The choice of test statistic also merits some discussion in step 3. | train | [
"my373n2-r50",
"Va9AyhDi_FM",
"WigS-d_Fax",
"lXDhows4FT",
"sp5fE0vmIX",
"n-p20G9Vy6n",
"Zo0LjLtvXY4",
"Kyb_bmY4KUOV",
"a4594gxJ58R",
"xU2qzLAtyR",
"PPhtR0kcow2",
"OPU-lprWYGn",
"4Rq6Y9W64e7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for continuing this discussion, and for the positive comments about the strength of our experimental results. It appears that we were able to address your original concern about the suitability/fairness of our comparisons.\n\nWe see Theorem 1 as sufficient to advance our narrative. The hypot... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
3
] | [
"Va9AyhDi_FM",
"n-p20G9Vy6n",
"a4594gxJ58R",
"Zo0LjLtvXY4",
"Kyb_bmY4KUOV",
"OPU-lprWYGn",
"xU2qzLAtyR",
"4Rq6Y9W64e7",
"PPhtR0kcow2",
"nips_2022_g2dXxjD9Ucv",
"nips_2022_g2dXxjD9Ucv",
"nips_2022_g2dXxjD9Ucv",
"nips_2022_g2dXxjD9Ucv"
] |
nips_2022_TG8KACxEON | Training language models to follow instructions with human feedback | Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through a language model API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent. | Accept | This paper demonstrates that a smaller model (1.3B InstructGPT) trained with human feedback (using a learned reward model and PPO) is competitive with a much larger pretrained model (175B GPT-3). The results are clearly convincing. The main downside is that there isn’t much analysis on the proposed method (e.g. the influence of the RL algorithm and the amount of human feedback needed).
That said, the approach is novel and the results are impressive, which points out new directions for collecting better pretraining data. Thus, I lean towards acceptance. | train | [
"5N8Hhm-2Yea",
"Vv1A4s3Stgv",
"v6zu-OlFiui",
"C6Pyfvqyk8",
"ZnUi_lDD8O",
"WSEtLVknTTn",
"YGyPv0ORrh_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I confirm that I've read the author's response and some of my concerns remain. I'll keep my current score.",
" Thanks for the response. The authors did not answer all of my questions. But given that all reviewers are positive about this paper, I'll maintain my score.",
" We thank their reviewers for their kin... | [
-1,
-1,
-1,
7,
6,
8,
5
] | [
-1,
-1,
-1,
4,
4,
5,
3
] | [
"ZnUi_lDD8O",
"YGyPv0ORrh_",
"nips_2022_TG8KACxEON",
"nips_2022_TG8KACxEON",
"nips_2022_TG8KACxEON",
"nips_2022_TG8KACxEON",
"nips_2022_TG8KACxEON"
] |
nips_2022_0OGMrvHnQbb | Efficiently Factorizing Boolean Matrices using Proximal Gradient Descent | Addressing the interpretability problem of NMF on Boolean data, Boolean Matrix Factorization (BMF) uses Boolean algebra to decompose the input into low-rank Boolean factor matrices. These matrices are highly interpretable and very useful in practice, but they come at the high computational cost of solving an NP-hard combinatorial optimization problem. To reduce the computational burden, we propose to relax BMF continuously using a novel elastic-binary regularizer, from which we derive a proximal gradient algorithm. Through an extensive set of experiments, we demonstrate that our method works well in practice: On synthetic data, we show that it converges quickly, recovers the ground truth precisely, and estimates the simulated rank exactly. On real-world data, we improve upon the state of the art in recall, loss, and runtime, and a case study from the medical domain confirms that our results are easily interpretable and semantically meaningful. | Accept | This was a borderline paper, but I recommended accepting it, mainly due to the impressive rebuttal that answered at least most of my concerns.
There were issues with the experimental results, but I believe most of them can be fixed since the authors provided full source code (which is not so common).
| train | [
"iqsV7Jma9Gf",
"C6X02EV2dF",
"4ep00HAZhna",
"vNAWhVD_g2q",
"0P-rLOd3mmp",
"5B3pZ8WGlV",
"98oRDN3mErj",
"pVhWYI9g-L0",
"d9TEBn7RTPg",
"dMmZPS8R9ea",
"v1HkumZV7z2",
"W2bfstCu40Z",
"j30Asg8Lv1b",
"glut70b1UjE",
"Jtje5hIV5zG",
"zwEepJLakEA",
"8FORzt_ER8",
"aIuHyvCjj2",
"-gnykDZFr9V",... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" We uploaded the code as supplementary material to OpenReview when submitting our first revision.\nTo improve discoverability, we now also uploaded it to Dropbox.\n\nWe included the DropBox link as a placeholder to anonymize our submission.\nIn the camera ready version, we want to replace the DropBox link with a l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"C6X02EV2dF",
"4ep00HAZhna",
"0P-rLOd3mmp",
"0P-rLOd3mmp",
"5B3pZ8WGlV",
"98oRDN3mErj",
"pVhWYI9g-L0",
"v1HkumZV7z2",
"dMmZPS8R9ea",
"aIuHyvCjj2",
"glut70b1UjE",
"nips_2022_0OGMrvHnQbb",
"eRzmmAjzkJi",
"l5aIeS1tCVc",
"l5aIeS1tCVc",
"-gnykDZFr9V",
"-gnykDZFr9V",
"-gnykDZFr9V",
"ni... |
nips_2022_B5qRau1IxjM | Robust Anytime Learning of Markov Decision Processes | Markov decision processes (MDPs) are formal models commonly used in sequential decision-making.
MDPs capture the stochasticity that may arise, for instance, from imprecise actuators via probabilities in the transition function.
However, in data-driven applications, deriving precise probabilities from (limited) data introduces statistical errors that may lead to unexpected or undesirable outcomes.
Uncertain MDPs (uMDPs) do not require precise probabilities but instead use so-called uncertainty sets in the transitions, accounting for such limited data.
Tools from the formal verification community efficiently compute robust policies that provably adhere to formal specifications, like safety constraints, under the worst-case instance in the uncertainty set.
We continuously learn the transition probabilities of an MDP in a robust anytime-learning approach that combines a dedicated Bayesian inference scheme with the computation of robust policies. In particular, our method (1) approximates probabilities as intervals, (2) adapts to new data that may be inconsistent with an intermediate model, and (3) may be stopped at any time to compute a robust policy on the uMDP that faithfully captures the data so far.
Furthermore, our method is capable of adapting to changes in the environment.
We show the effectiveness of our approach and compare it to robust policies computed on uMDPs learned by the UCRL2 reinforcement learning algorithm in an experimental evaluation on several benchmarks. | Accept | The paper makes a good algorithmic contribution and provides its theoretical analysis. It would benefit from a more extensive empirical evaluation, for instance, evaluating on IPPC-style MDPs expressed in PDDL and RDDL and on MDPs with high-dimensional observations (block MDPs), but even as is this work is a solid step forward. | train | [
"zdmJ4yYCGa",
"DbvZSSYAe2k",
"yZbrALrfmU",
"D42dqq9Sk80",
"gzvHNolxUV4",
"6NoLXqLtw1n",
"QEeK-yRSwT",
"LekFmgkp-nS",
"xAPrLM8cej3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification, those were really insightful points and I appreciate the thorough response. ",
" I indeed missed the fact that the distribution indexed by state and action is independently drawn due to the Markov assumption.\n\nThank you for the clarifications.",
" We would like to thank the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"D42dqq9Sk80",
"gzvHNolxUV4",
"nips_2022_B5qRau1IxjM",
"xAPrLM8cej3",
"LekFmgkp-nS",
"QEeK-yRSwT",
"nips_2022_B5qRau1IxjM",
"nips_2022_B5qRau1IxjM",
"nips_2022_B5qRau1IxjM"
] |
nips_2022_EvtEGQmXe3 | Neural Topological Ordering for Computation Graphs | Recent works on machine learning for combinatorial optimization have shown that learning based approaches can outperform heuristic methods in terms of speed and performance. In this paper, we consider the problem of finding an optimal topological order on a directed acyclic graph (DAG) with focus on the memory minimization problem which arises in compilers. We propose an end-to-end machine learning based approach for topological ordering using an encoder-decoder framework. Our encoder is a novel attention based graph neural network architecture called \emph{Topoformer} which uses different topological transforms of a DAG for message passing. The node embeddings produced by the encoder are converted into node priorities which are used by the decoder to generate a probability distribution over topological orders. We train our model on a dataset of synthetically generated graphs called layered graphs. We show that our model outperforms, or is on-par, with several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes. We also train and test our model on a set of real-world computation graphs, showing performance improvements. | Accept | The reviewers unanimously considered that this paper should be accepted for publication. Among the strengths noted are the novelty and relevance of the method, the experimental results, and the quality of the writing.
| train | [
"BK063cq8TO",
"UqK6USHwT54",
"Ak-oh3egTU_",
"IqJ9ErYbfLi",
"o_Ld2tzdCw",
"F0sSbnA7aMR7",
"gpgm-hZn4AA",
"857id__AVmP",
"zUUQZ4-VBbi",
"RrDUEk-xTcr"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your valuable feedback, and we are willing to clarify our statement regarding point number 3. As we wish to stress, given how the policy is used at inference time, there is no one-to-one correspondence between the optimal sequence and a distribution; there is only a correspondence between the optima... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5
] | [
"UqK6USHwT54",
"Ak-oh3egTU_",
"RrDUEk-xTcr",
"zUUQZ4-VBbi",
"857id__AVmP",
"857id__AVmP",
"nips_2022_EvtEGQmXe3",
"nips_2022_EvtEGQmXe3",
"nips_2022_EvtEGQmXe3",
"nips_2022_EvtEGQmXe3"
] |
nips_2022_5oS20NUCJEX | Benign, Tempered, or Catastrophic: Toward a Refined Taxonomy of Overfitting | The practical success of overparameterized neural networks has motivated the recent scientific study of \emph{interpolating methods}-- learning methods which are able to fit their training data perfectly. Empirically, certain interpolating methods can fit noisy training data without catastrophically bad test performance, which defies standard intuitions from statistical learning theory. Aiming to explain this, a large body of recent work has studied \emph{benign overfitting}, a behavior seen in certain asymptotic settings under which interpolating methods approach Bayes-optimality, even in the presence of noise. In this work, we argue that, while benign overfitting has been instructive to study, real interpolating methods like deep networks do not fit benignly. That is, noise in the train set leads to suboptimal generalization, suggesting that these methods fall in an intermediate regime between benign and catastrophic overfitting, in which asymptotic risk is neither Bayes-optimal nor unbounded, with the confounding effect of the noise being ``tempered" but non-negligible. We call this behavior \textit{tempered overfitting}. We first provide broad empirical evidence for our three-part taxonomy, demonstrating that deep neural networks and kernel machines fit to noisy data can be reasonably well classified as benign, tempered, or catastrophic. We then specialize to kernel (ridge) regression (KR), obtaining conditions on the ridge parameter and kernel eigenspectrum under which KR exhibits each of the three behaviors, demonstrating the consequences for KR with common kernels and trained neural networks of infinite width using experiments on natural and synthetic datasets.
| Accept | This paper proposes a new taxonomy of overfitting that has the prospect of facilitating future discussions about overfitting, overparametrization and various mysteries of deep learning. The meta-reviewer recommends acceptance. | train | [
"WcAtK-pYrS5",
"Lz1Tp0l3LKI",
"ZbkXu0uFwcU",
"ZU00lNNf4as",
"4EyC-Er-lH9A",
"qDpNFBq6C12",
"Q9RbUafBVQf",
"oKd9UPz8r98"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. And I would like to keep my score unchanged. ",
" I appreciate the response and the fact that the authors have taken my suggestions into account in their various manuscript revisions.\n\nAs stated in my initial review, I believe this work to be very interesting and helpful to the co... | [
-1,
-1,
-1,
-1,
-1,
8,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"ZU00lNNf4as",
"4EyC-Er-lH9A",
"oKd9UPz8r98",
"Q9RbUafBVQf",
"qDpNFBq6C12",
"nips_2022_5oS20NUCJEX",
"nips_2022_5oS20NUCJEX",
"nips_2022_5oS20NUCJEX"
] |
nips_2022__1bgdFHhA70 | Evident: a Development Methodology and a Knowledge Base Topology for Data Mining, Machine Learning and General Knowledge Management | Software has been developed for knowledge discovery, prediction and management for over 30 years. However, there are still unresolved pain points when using existing project development and artifact management methodologies. Historically, there has been a lack of applicable methodologies. Further, methodologies that have been applied, such as Agile, have several limitations including scientific unfalsifiability that reduce their applicability. Evident, a development methodology rooted in the philosophy of logical reasoning and EKB, a knowledge base topology, are proposed. Many pain points in data mining, machine learning and general knowledge management are alleviated conceptually. Evident can be extended potentially to accelerate philosophical exploration, science discovery, education as well as knowledge sharing & retention across the globe. EKB offers one solution of storing information as knowledge, a granular level above data. Related topics in computer history, software engineering, database, sensing hardware, philosophy, and project & organization & military managements are also discussed. | Reject | The paper identifies relevant issues in the current software development process for machine learning, data mining, and knowledge management, however it does not provide any practical evidence that the proposed directions can solve the identified issues. The content of the paper is more suited for a position paper than for a technical scientific paper, which is the target of NeurIPS. So, while recognising some value in the contribution of the paper, I believe its nature does not completely match the NeurIPS expectations, and it would be more suitable for conferences where position papers are one of the components of the technical program. | train | [
"Q04VmgIxMY6",
"en1bhDL3qoa",
"xAZaJNdWtLT",
"pNw2axc4F_Y",
"pav13lbSWDx",
"QpxG2oi7lFw",
"S5LS0UPwCYF",
"SmRfpsgwRA",
"kyfdwIRgVhk",
"3fkOD0gmurn",
"c1s8TX0Xzbn",
"Fo127SzUtYX",
"KVkjLn5bEsL"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely hope the reviewers can reconsider the acceptance of our draft based on the criteria Yann Lecunn, Turing Award Winner, uses to review papers:\n\n\"I don't review often, but I'm a pretty gentle reviewer. I'm asking myself \"would the community be better off with or without this paper?\" (https://twitte... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"pNw2axc4F_Y",
"QpxG2oi7lFw",
"kyfdwIRgVhk",
"pav13lbSWDx",
"KVkjLn5bEsL",
"S5LS0UPwCYF",
"SmRfpsgwRA",
"Fo127SzUtYX",
"3fkOD0gmurn",
"c1s8TX0Xzbn",
"nips_2022__1bgdFHhA70",
"nips_2022__1bgdFHhA70",
"nips_2022__1bgdFHhA70"
] |
nips_2022_yKDKNzjHg8N | Characterizing Datapoints via Second-Split Forgetting | Researchers investigating example hardness have increasingly focused on the dynamics by which neural networks learn and forget examples throughout training. Popular metrics derived from these dynamics include (i) the epoch at which examples are first correctly classified; (ii) the number of times their predictions flip during training; and (iii) whether their prediction flips if they are held out. However, these metrics do not distinguish among examples that are hard for distinct reasons, such as membership in a rare subpopulation, being mislabeled, or belonging to a complex subpopulation. In this paper, we propose *second-split forgetting time* (SSFT), a complementary metric that tracks the epoch (if any) after which an original training example is forgotten as the network is fine-tuned on a randomly held out partition of the data. Across multiple benchmark datasets and modalities, we demonstrate that *mislabeled* examples are forgotten quickly, and seemingly *rare* examples are forgotten comparatively slowly. By contrast, metrics only considering the first split learning dynamics struggle to differentiate the two. At large learning rates, SSFT tends to be robust across architectures, optimizers, and random seeds. From a practical standpoint, the SSFT can (i) help to identify mislabeled samples, the removal of which improves generalization; and (ii) provide insights about failure modes. Through theoretical analysis addressing overparameterized linear models, we provide insights into how the observed phenomena may arise. | Accept | The paper studies metrics to characterize the difficulty of examples in terms of training dynamics: in addition to first-split learning time (FSLT), similar in principle to existing methods, they propose the novel second-split forgetting time (SSFT). 
There is strong empirical evidence that SSFT allows one to discriminate mislabelled and rare examples, and is a robust metric. The paper also proposes a theoretical analysis in a simplified setting.
The reviewers were unanimous: the method is very well written and provides good intuition. FSLT looks simple, and is yet a very efficient method. A very nice job was done by the authors to characterize various informal notions of difficulty ("rare", "mislabeled" and "complex"). A few minor problems were discussed with the reviewers, and carefully addressed by the authors. We therefore all enthusiastically recommend to accept the paper. Some reviewers spontaneously proposed to highlight the paper (oral or spotlight presentation or award) and I support this. | train | [
"GLbq736aK-z",
"4gY4R2TU4b",
"i3y8_BUntH",
"sGKNptmquAF",
"1p6ErBpXVY",
"AkZPxiAZxBz",
"J_Xt5qWTv9g",
"-Is6fBhelqec",
"JkZws0LxG1n",
"pCGkVcPfTrg1",
"c54PKsYTRtv",
"BRxHbJs8UL",
"d8Lw3niaRCv",
"ULKah_gL6DQ",
"5HFCMHdKIOZ",
"I501BTXLvFU"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your encouraging feedback! We will mention this in the final version of the main paper.",
" As per your suggestion, we have also run the AUC calculation on two more vision datasets, CIFAR100 and EMNIST. The combined table of results is presented below (we will merge the whole table in the main pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
3
] | [
"i3y8_BUntH",
"sGKNptmquAF",
"pCGkVcPfTrg1",
"1p6ErBpXVY",
"J_Xt5qWTv9g",
"c54PKsYTRtv",
"BRxHbJs8UL",
"nips_2022_yKDKNzjHg8N",
"d8Lw3niaRCv",
"ULKah_gL6DQ",
"I501BTXLvFU",
"5HFCMHdKIOZ",
"nips_2022_yKDKNzjHg8N",
"nips_2022_yKDKNzjHg8N",
"nips_2022_yKDKNzjHg8N",
"nips_2022_yKDKNzjHg8N"... |
nips_2022_mamv07NQWk | Regret Bounds for Multilabel Classification in Sparse Label Regimes | Multi-label classification (MLC) has wide practical importance, but the theoretical understanding of its statistical properties is still limited. As an attempt to fill this gap, we thoroughly study upper and lower regret bounds for two canonical MLC performance measures, Hamming loss and Precision@$\kappa$. We consider two different statistical and algorithmic settings, a non-parametric setting tackled by plug-in classifiers \`a la $k$-nearest neighbors, and a parametric one tackled by empirical risk minimization operating on surrogate loss functions. For both, we analyze the interplay between a natural MLC variant of the low noise assumption, widely studied in binary classification, and the label sparsity, the latter being a natural property of large-scale MLC problems. We show that those conditions are crucial in improving the bounds, but the way they are tangled is not obvious, and also different across the two settings. | Accept | This paper studies regret bounds for multi-label classification and derives new upper bounds for surrogate risk minimization under a low-noise assumption. Although there is some concern regarding the conditions, all reviewers support accepting this paper. | test | [
"9wk_EWdTRiV",
"uBVMza-qh1c",
"6Fxfjqvp3Br",
"OsEe0z1upl8",
"0FH1Tme5d0F",
"0OrFKHIef5H",
"amHd_DqvoMi"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" - On extension to rank loss, 0/1 loss and micro/macro F-measures.\n\nThank you for pointing this out. Indeed, in the process of compressing the paper to the space limit, we shortened this part to a sentence that turned out not to be precise enough. The message we wanted to convey is the following. Based on existi... | [
-1,
-1,
-1,
-1,
5,
7,
8
] | [
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"amHd_DqvoMi",
"0OrFKHIef5H",
"0FH1Tme5d0F",
"nips_2022_mamv07NQWk",
"nips_2022_mamv07NQWk",
"nips_2022_mamv07NQWk",
"nips_2022_mamv07NQWk"
] |
nips_2022_4wrB7Mo9_OQ | Resolving the data ambiguity for periodic crystals | The fundamental model of all solid crystalline materials is a periodic set of atomic centers considered up to rigid motion in Euclidean space. The major obstacle to materials discovery was highly ambiguous representations of periodic crystals that didn't allow fast and reliable comparisons and led to numerous (near-) duplicates in many databases of experimental and simulated crystals. This paper exemplarily resolves the ambiguity by invariants, which are descriptors without false negatives.
The new Pointwise Distance Distributions (PDD) is a numerical matrix with a near-linear time complexity and an exactly computable metric. The strongest theoretical result is generic completeness (absence of false positives) for all finite and periodic sets of points in any dimension. The strength of PDD is shown by 200B+ pairwise comparisons of all periodic structures in the world's largest collection (Cambridge Structural Database) of existing materials over two days on a modest desktop.
| Accept | While the paper is not rated too high by the reviewers, and overall the endorsement for it is a bit less strong than what we would have liked to see, it seems that the paper might have results that are of interest to those working on materials science applications of ML.
One common complaint the reviewers had was the limited experimentation, but given the authors' response regarding the validation (which in particular goes through an important dataset), the concern was overcome. However, to improve the impact of the paper, and to offer stronger motivation for follow-up work, it would be a valuable addition to the final version of the paper to have more experiments, especially suitable ablation studies to illustrate the theory in action, and, importantly, some comparisons against some of the generic methods (which, understandably, will not respect all desired invariances; still, it is good to see how big a deal that is, notwithstanding the comments on Platon et al).
One minor but potentially useful technical point to note is that what the authors call "continuity" (Theorem 4.3) is a type of Lipschitz continuity, which can be further exploited for algorithm design and analysis.
| test | [
"sUFlXtHoJfn",
"0dFI5Qh6Qb1",
"ZSTR02yRBGe",
"IUfR67yWJJc",
"jhYEJBKU4O4L",
"w3nUnL1RWnz",
"RmEUxq29la",
"iG7F25pagX",
"CIk7kWuYVFS",
"y3LKjgQSCuY",
"N5kFpGT_C4w",
"cdpFv-NnTw6",
"z1CZq4VTdU",
"fLEZ46kg4KS",
"AXhLtYTaIwM",
"vYt_GJrx1o",
"rZme34btJUp",
"zw8BMb4LQ-"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >I appreciate the authors' efforts to address my concerns. Now I am more convinced by the additional merits of the proposed PDD over graph-based methods for filtering duplicated materials. Hence, I would like to raise my score to 5.\n\nThank you for increasing the score.\n\n>Given the weak experiments \n\nMay we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"0dFI5Qh6Qb1",
"rZme34btJUp",
"IUfR67yWJJc",
"jhYEJBKU4O4L",
"w3nUnL1RWnz",
"z1CZq4VTdU",
"iG7F25pagX",
"zw8BMb4LQ-",
"y3LKjgQSCuY",
"N5kFpGT_C4w",
"cdpFv-NnTw6",
"AXhLtYTaIwM",
"vYt_GJrx1o",
"rZme34btJUp",
"zw8BMb4LQ-",
"nips_2022_4wrB7Mo9_OQ",
"nips_2022_4wrB7Mo9_OQ",
"nips_2022_... |
nips_2022_3y80RPgHL7s | The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift | We study linear regression under covariate shift, where the marginal distribution over the input covariates differs in the source and the target domains, while the conditional distribution of the output given the input covariates is similar across the two domains. We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data (both conducted by online SGD) for this problem. We establish sharp instance-dependent excess risk upper and lower bounds for this approach. Our bounds suggest that for a large class of linear regression instances, transfer learning with $O(N^2)$ source data (and scarce or no target data) is as effective as supervised learning with $N$ target data. In addition, we show that finetuning, even with only a small amount of target data, could drastically reduce the amount of source data required by pretraining. Our theory sheds light on the effectiveness and limitation of pretraining as well as the benefits of finetuning for tackling covariate shift problems. | Accept | After the rebuttal, the majority of the reviewers were convinced that the paper is novel and interesting and it should be accepted. In preparing the camera-ready, I suggest to the authors to take into account the reviewers' comments to better explain the used assumptions to the readers and avoid potential misunderstandings. | val | [
"-yLNQL74BWo",
"tUA46A8Fu0",
"VhO0ozq2a7p",
"az0XTlRBahD",
"BqX8Q5Yc6Jv",
"wVXCzsEVG1P",
"fBl7yWUjUcG",
"ipdBx3j7a6_",
"lnO3NukXirl",
"j4LIvw-akFq",
"xMn0J7MwGgT",
"NPLFPMsjkA",
"NO83ySchfYZ",
"ZNorHSJU15G",
"BnOZbALsc6K",
"tgMXbmuhhp2",
"IZbcYIg4ry7",
"7XKQbqnb1z8"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" In our humble opinion, even with the well-specified linear model assumption, our results are still significant and shed great light on understanding pretraining-finetuning under the covariate shift. While pretraining-finetuning has been widely used in deep learning, without a solid understanding in the simplest p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"tUA46A8Fu0",
"VhO0ozq2a7p",
"az0XTlRBahD",
"BqX8Q5Yc6Jv",
"ipdBx3j7a6_",
"fBl7yWUjUcG",
"lnO3NukXirl",
"lnO3NukXirl",
"j4LIvw-akFq",
"xMn0J7MwGgT",
"7XKQbqnb1z8",
"IZbcYIg4ry7",
"tgMXbmuhhp2",
"BnOZbALsc6K",
"nips_2022_3y80RPgHL7s",
"nips_2022_3y80RPgHL7s",
"nips_2022_3y80RPgHL7s",
... |
nips_2022_OJ4mMfGKLN | Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency | Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target domains with different temporal dynamics and be capable of doing so without seeing any target examples during pre-training. Relative to other modalities, in time series, we expect that time-based and frequency-based representations of the same example are located close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C) --- embedding a time-based neighborhood of an example close to its frequency-based neighborhood --- is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model, where the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, including electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by 8.4% (precision) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications. 
The source code and datasets are available at https://github.com/mims-harvard/TFC-pretraining. | Accept | The reviewers highlight the novelty and significance of the proposed approach, the extensive empirical evaluation, and the clarity of writing. Additional experimental results provided during the rebuttal cleared up some concerns around the general applicability of the approach. | train | [
"HbCJWVBCtk",
"YTRlm-ZFkd7",
"9y8NZK9lMiI",
"xsvkUY5MetN",
"luDHqVPvEVD",
"CgqPLmO4uD4",
"Z--jSpncoUd",
"EbDdRZmPu7J",
"p_3MquRH2Ec",
"OHYdCkLjHPP",
"MRTcpnIKosC",
"crC49pYj1n",
"6mgTeBvma1m",
"1YGOXcawA_o",
"ZJvLdLVW8aY",
"Xr0Mt5MmQ-K",
"llwLnbA5cHE",
"DTdimuxM-v",
"auzIGOTy6L"
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their detailed response to my questions. I think most of my questions have been answered. As the authors mentioned, adding some of the details in your comments to the main paper would help strengthen an already strong paper.\n\nParticularly, the added details on intuition beh... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"Z--jSpncoUd",
"EbDdRZmPu7J",
"xsvkUY5MetN",
"nips_2022_OJ4mMfGKLN",
"auzIGOTy6L",
"auzIGOTy6L",
"auzIGOTy6L",
"DTdimuxM-v",
"llwLnbA5cHE",
"llwLnbA5cHE",
"llwLnbA5cHE",
"llwLnbA5cHE",
"Xr0Mt5MmQ-K",
"Xr0Mt5MmQ-K",
"nips_2022_OJ4mMfGKLN",
"nips_2022_OJ4mMfGKLN",
"nips_2022_OJ4mMfGKLN... |
nips_2022_lJx2vng-KiC | Exact learning dynamics of deep linear networks with prior knowledge | Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution \citep{fukumizu1998effect}. We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. The expressions reveal a class of task-independent initialisations that radically alter learning dynamics from slow non-linear dynamics to fast exponential trajectories while converging to a global optimum with identical representational similarity, dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning. | Accept | This paper derives the exact gradient flow dynamics in linear networks studying the impact of initialization under mild conditions so as to incorporate prior knowledge as initialization. Fukumizu's matrix Riccati formulation for learning dynamics is utilized for understanding deep linear networks. Under very mild conditions, the solution can be obtained numerically. 
The paper uses this solution to understand deep learning phenomena such as alignment, catastrophic forgetting and knowledge revision. In effect the authors provide the first step to provide mathematical tools for characterizing prior knowledge in neural networks.
Reviewers agree that while the formalism in the paper is limited to "linear" networks, the insights drawn from the analysis and research direction the paper opens up will be valuable for the deep learning community and bear interesting connections to different analyses. While initial evaluations were mixed, after the author's responses, all reviewers recommended acceptance for the paper.
| train | [
"PeotgP9LnWi",
"Ww6BpNNMbQ4",
"VeExvh_Bfz9",
"8esHFTBFNNS",
"1F7Mjoq6TU",
"zmURXZC6u_6",
"DKi-i_ZHsPu",
"YzfkrviYLIs",
"3407ayu3Byr",
"7jc_S_0O2C",
"KfsAkTV9jYYV",
"meixLSn-vdN",
"a6VpVd9bLXW",
"OoXiCYnLqBC",
"G4Ng05XM_fF",
"49sRobzV72",
"0rI1MJ2ZfV_",
"AjrmHg2EZoR",
"uSpzzZK4nq_... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" I would like to thank the authors for addressing all of my questions. I reviewed the revised paper and appendix and the results and proof are all clear to me now. I will revise the score accordingly.",
" We thank the reviewer for the careful read of our paper and the many relevant reviews, we very much apprecia... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"49sRobzV72",
"8esHFTBFNNS",
"YzfkrviYLIs",
"1F7Mjoq6TU",
"KfsAkTV9jYYV",
"7jc_S_0O2C",
"3407ayu3Byr",
"G4Ng05XM_fF",
"meixLSn-vdN",
"a6VpVd9bLXW",
"OoXiCYnLqBC",
"y7K9o35ioo",
"y7K9o35ioo",
"y7K9o35ioo",
"ME5UoULX8BJ",
"uSpzzZK4nq_",
"AjrmHg2EZoR",
"nips_2022_lJx2vng-KiC",
"nips... |
nips_2022_V22VeIZ9QU | Automatic Differentiation of Programs with Discrete Randomness | Automatic differentiation (AD), a technique for constructing new programs which compute the derivative of an original program, has become ubiquitous throughout scientific computing and deep learning due to the improved performance afforded by gradient-based optimization. However, AD systems have been restricted to the subset of programs that have a continuous dependence on parameters. Programs that have discrete stochastic behaviors governed by distribution parameters, such as flipping a coin with probability $p$ of being heads, pose a challenge to these systems because the connection between the result (heads vs tails) and the parameters ($p$) is fundamentally discrete. In this paper we develop a new reparameterization-based methodology that allows for generating programs whose expectation is the derivative of the expectation of the original program. We showcase how this method gives an unbiased and low-variance estimator which is as automated as traditional AD mechanisms. We demonstrate unbiased forward-mode AD of discrete-time Markov chains, agent-based models such as Conway's Game of Life, and unbiased reverse-mode AD of a particle filter. Our code is available at https://github.com/gaurav-arya/StochasticAD.jl. | Accept | The topic of this paper is interesting, and I wanted to get the basic idea, so I looked at the paper myself. Had I been a reviewer I would have recommended rejection. However, as the existing reviews are positive, and the authors will not have a chance to respond to my objections, I will recommend acceptance (in spite of the low confidence scores of the reviewers where the high scores have confidence 2.) For the sake of the authors I will list my complaints.
My fundamental complaint is about clarity. The central problem is the notation X(p). In the main text this is only defined as a "stochastic program". But what is it allowed to be a distribution on? What is the allowed dimension of the parameter (vector?) p? If X(p) is an arbitrary stochastic program what is E[X(p)]? If X(p) defines a distribution on a finite abstract set then clearly E[X(p)] CANNOT mean the expectation of a random value. The first sentence of appendix B.2 needs to appear before any appearance of the notation E[X(p)] in the main text. Even if that is done, the authors are assuming a fixed embedding of each random value of x in a Euclidean space. But in the vast majority of modern applications that embedding must be learned. For example, in a VAE model of grammar induction one must learn an embedding of the grammar nonterminal symbols. The formal set up would be greatly clarified by explicitly introducing a "loss function" f as part of the given data and writing, for example E_{x \sim X(p)} [f(x)]. The main body of the text needs more intuition about the meaning of w and Y in theorem 2.2 and some kind of sketch of a proof. The paper would be significantly simpler if it were limited to the case where X(p) is entirely discrete. This is the interesting case and there is then no need for X'(p). The technically hard issue is w and Y in theorem 2.2. If a mixed stochastic-continuous case is needed for the general case then it could come later where it is well motivated. An explicit form for w and Y should appear in the body of the paper rather than a naked claim that they exist. Any implementation must construct w and Y so the proof needs to be constructive. Examples are nice, but they are no substitute for the proof of the general case. Some sketch of that proof needs to appear in the body of the text --- most importantly a computable solution for w and Y. 
As it stands it appears that the authors are trying to hide the artificial nature of their technical set-up --- the use of a-priori fixed embeddings of abstract tokens.
| val | [
"Gu1ld_zhcaz",
"Pi95EerQJo2",
"GAqFFaFZsZX",
"fCRuxdHo6w5",
"pDG3hybOyy",
"4wXTzCydpwp",
"Qn-o7FGBDbi",
"RxQLXYkzvPf",
"8XH-I0WYr9N",
"kjWuDdEsZtp",
"avpAlEw2_7T",
"MLUmCBz1fgp"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > In definition 2.2, it seems to me that w is the gradient of the probability of the output of the SCG flipping from X to Y (where the gradient is w.r.t. p). Is this an accurate characterization?\n\nThat’s right. The only qualification is that the sign of the limit affects the form of $w$, as stochastic derivativ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"Pi95EerQJo2",
"MLUmCBz1fgp",
"fCRuxdHo6w5",
"avpAlEw2_7T",
"4wXTzCydpwp",
"kjWuDdEsZtp",
"8XH-I0WYr9N",
"nips_2022_V22VeIZ9QU",
"nips_2022_V22VeIZ9QU",
"nips_2022_V22VeIZ9QU",
"nips_2022_V22VeIZ9QU",
"nips_2022_V22VeIZ9QU"
] |
nips_2022_9HBbWAsZxFt | Unsupervised Reinforcement Learning with Contrastive Intrinsic Control | We introduce Contrastive Intrinsic Control (CIC), an unsupervised reinforcement learning (RL) algorithm that maximizes the mutual information between state-transitions and latent skill vectors. CIC utilizes contrastive learning between state-transitions and skills vectors to learn behaviour embeddings and maximizes the entropy of these embeddings as an intrinsic reward to encourage behavioural diversity. We evaluate our algorithm on the Unsupervised RL Benchmark (URLB) in the asymptotic state-based setting, which consists of a long reward-free pre-training phase followed by a short adaptation phase to downstream tasks with extrinsic rewards. We find that CIC improves over prior exploration algorithms in terms of adaptation efficiency to downstream tasks on state-based URLB. | Accept | This paper presents an algorithm for representation and skill learning in RL, based on a contrastive approach. Experiments combining this method with different intrinsic rewards are presented, along with a number of sensitivity analyses.
Overall, the reviewers recognize that there are novel components here despite the work being an extension of existing work. There is some disagreement with regards to the value of these novel components, in particular whether the specific discriminator used is sufficiently evaluated, and whether the proposed method is well-motivated (i.e. by theory). While I am sensitive to these concerns, I find that the paper does a reasonable job at detailing the proposed agent architecture and provides some fresh insights into unsupervised RL. | train | [
"MTo4HsV7Qu-",
"dXL_5jDh_lHs",
"vDKhAByVDS",
"tU7OE7GiRX",
"wVriw1TvMgk",
"-A2kYM-qo0K",
"ikCqnTOx0jP",
"l7YnBLD0e6x",
"GX4kv6QJPdB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. The explanations for the notation and experiments are helpful, but some outstanding issues exist which are described below.\n\nThe primary technical contribution as claimed by the paper (lines 37-41, 119-120) is the novel estimator for the discriminator. However, there are concerns re... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"-A2kYM-qo0K",
"tU7OE7GiRX",
"wVriw1TvMgk",
"GX4kv6QJPdB",
"l7YnBLD0e6x",
"ikCqnTOx0jP",
"nips_2022_9HBbWAsZxFt",
"nips_2022_9HBbWAsZxFt",
"nips_2022_9HBbWAsZxFt"
] |
nips_2022_ue4gP8ZKiWb | Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization | The right to be forgotten calls for efficient machine unlearning techniques that make trained machine learning models forget a cohort of data. The combination of training and unlearning operations in traditional machine unlearning methods often leads to the expensive computational cost on large-scale data. This paper presents a prompt certified machine unlearning algorithm, PCMU, which executes one-time operation of simultaneous training and unlearning in advance for a series of machine unlearning requests, without the knowledge of the removed/forgotten data. First, we establish a connection between randomized smoothing for certified robustness on classification and randomized smoothing for certified machine unlearning on gradient quantization. Second, we propose a prompt certified machine unlearning model based on randomized data smoothing and gradient quantization. We theoretically derive the certified radius R regarding the data change before and after data removals and the certified budget of data removals about R. Last but not least, we present another practical framework of randomized gradient smoothing and quantization, due to the dilemma of producing high confidence certificates in the first framework. We theoretically demonstrate the certified radius R' regarding the gradient change, the correlation between two types of certified radii, and the certified budget of data removals about R'. | Accept | This paper proposes an algorithm for simultaneous learning and unlearning without the knowledge of the datapoints that will be forgotten. This reduces the computational cost associated with unlearning in a unified fashion. Both experimental and theoretical results are interesting and the paper would be a great addition to NeurIPS. | train | [
"TcblDvQZftU",
"-hQVMw8WcW",
"hiHwa5BvAv_",
"3-Y09EnG5z",
"PsWNGI3UNHq",
"ipnd53729Dg",
"7DelSLaHFF",
"kb_J_V6aykq",
"5FdJndL_42",
"E2R9F_HdyZ5",
"vruw-gGcWK9",
"cx4nOLQ-3AP",
"rkoT44_QMZH",
"Hzvg9fTR9G6"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' response, which cleared all of my questions. It is a cool work to enable removed data-agnostic simultaneous training and unlearning for better unlearning efficiency and generality.",
" Thanks again for taking the time to review our submission!\n\nWe believe that our work makes a fundamen... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"5FdJndL_42",
"hiHwa5BvAv_",
"E2R9F_HdyZ5",
"PsWNGI3UNHq",
"kb_J_V6aykq",
"7DelSLaHFF",
"nips_2022_ue4gP8ZKiWb",
"Hzvg9fTR9G6",
"rkoT44_QMZH",
"vruw-gGcWK9",
"cx4nOLQ-3AP",
"nips_2022_ue4gP8ZKiWb",
"nips_2022_ue4gP8ZKiWb",
"nips_2022_ue4gP8ZKiWb"
] |
nips_2022_A2Ya5aLtyuG | Do Current Multi-Task Optimization Methods in Deep Learning Even Help? | Recent research has proposed a series of specialized optimization algorithms for deep multi-task models. It is often claimed that these multi-task optimization (MTO) methods yield solutions that are superior to the ones found by simply optimizing a weighted average of the task losses. In this paper, we perform large-scale experiments on a variety of language and vision tasks to examine the empirical validity of these claims. We show that, despite the added design and computational complexity of these algorithms, MTO methods do not yield any performance improvements beyond what is achievable via traditional optimization approaches. We highlight alternative strategies that consistently yield improvements to the performance profile and point out common training pitfalls that might cause suboptimal results. Finally, we outline challenges in reliably evaluating the performance of MTO algorithms and discuss potential solutions. | Accept | This paper reveals several important facts about multi-task optimization methods by providing a comprehensive benchmarking of MTO methods. They find that with careful hyper-parameter tuning, MTO methods give results similar to simply optimizing a weighted average of the task losses. Then they go deeper into why this has not been noticed in previous work by studying the evaluation step and provide solutions to standardize the evaluation of these methods.
In particular, the reviewers have appreciated the comprehensive evaluation and tenable findings based on their experiments. Hence, I recommend that the paper be accepted. | train | [
"JVR-OX4Me5X",
"yD701QSdl",
"7nSsy6ZqGCC",
"mgQWY94_s4Y",
"JhuoJkWePk",
"qskFxMfCQvH",
"m8TY_I6F86G"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your incorporation of my recommendations and clarifying my questions on the experimental setup. I believe that this paper will be of value to both the MTO community as well as the ML community as a whole. ",
" We thank the reviewer for their supportive comments. Following your suggestions, we will ... | [
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"yD701QSdl",
"JhuoJkWePk",
"qskFxMfCQvH",
"m8TY_I6F86G",
"nips_2022_A2Ya5aLtyuG",
"nips_2022_A2Ya5aLtyuG",
"nips_2022_A2Ya5aLtyuG"
] |
nips_2022_R1fj6401HJF | Instance-optimal PAC Algorithms for Contextual Bandits | In the stochastic contextual bandit setting, regret-minimizing algorithms have been extensively researched, but their instance-minimizing best-arm identification counterparts remain seldom studied. In this work, we focus on the stochastic bandit problem in the $(\epsilon,\delta)$-PAC setting: given a policy class $\Pi$ the goal of the learner is to return a policy $\pi\in \Pi$ whose expected reward is within $\epsilon$ of the optimal policy with probability greater than $1-\delta$. We characterize the first instance-dependent PAC sample complexity of contextual bandits through a quantity $\rho_{\Pi}$, and provide matching upper and lower bounds in terms of $\rho_{\Pi}$ for the agnostic and linear contextual best-arm identification settings. We show that no algorithm can be simultaneously minimax-optimal for regret minimization and instance-dependent PAC for best-arm identification. Our main result is a new instance-optimal and computationally efficient algorithm that relies on a polynomial number of calls to a cost-sensitive classification oracle. | Accept | This paper considers the contextual bandit problem with general policies/function approximation. The authors focus on the PAC setting, where the goal is to identify an eps-optimal policy, and provide several new results regarding instance-dependent sample complexity:
- A new complexity measure, which is shown to capture the instance-optimal PAC sample complexity.
- A lower bound which shows that no algorithm can simultaneously achieve minimax optimal and instance-optimal PAC sample complexity.
- An oracle-efficient algorithm with sample complexity nearly matching the lower bound.
The reviewers agreed that the problem this paper studies is interesting and relevant to the bandit/decision making community, and has potential for large impact. All of the results in the paper are novel, and there are many interesting technical ideas. They also found the paper to be well-written, though there are some aspects that can be improved.
One reviewer took issue with the assumption that the algorithm has access to unlabeled data (assumption 1), but did not defend this position in the discussion period. Nevertheless, for the final revision, the authors are encouraged to expand the discussion and justification around this assumption. In addition, the authors are encouraged to incorporate the reviewers' suggestions to improve upon presentation.
| train | [
"OsAPYWUn5G0",
"3scC7ip4Sh3",
"hXNDZrmSu1",
"8H-uYwW8M7c",
"Van-BpGDiTA",
"0i_G3NFonsN",
"WQPFciDrUuQ",
"5yGGBYbzaOc",
"i05Uim6zM1v",
"9bQMWKM2ZYV",
"FnbwpwM13cW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarifications. I have updated my \"Soundness Score\" accordingly. ",
" I thank the authors for the response! The response addressed my comments and I will keep my rating as Accept.",
" Thanks for the comment!",
" We thank the reviewer for providing valuable feedback. Regarding the paper ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
1
] | [
"0i_G3NFonsN",
"Van-BpGDiTA",
"8H-uYwW8M7c",
"FnbwpwM13cW",
"9bQMWKM2ZYV",
"i05Uim6zM1v",
"5yGGBYbzaOc",
"nips_2022_R1fj6401HJF",
"nips_2022_R1fj6401HJF",
"nips_2022_R1fj6401HJF",
"nips_2022_R1fj6401HJF"
] |
nips_2022_LivA_JyyJM | Thinned random measures for sparse graphs with overlapping communities | Network models for exchangeable arrays, including most stochastic block models, generate dense graphs with a limited ability to capture many characteristics of real-world social and biological networks. A class of models based on completely random measures like the generalized gamma process (GGP) have recently addressed some of these limitations. We propose a framework for thinning edges from realizations of GGP random graphs that models observed links via nodes' overall propensity to interact, as well as the similarity of node memberships within a large set of latent communities. Our formulation allows us to learn the number of communities from data, and enables efficient Monte Carlo methods that scale linearly with the number of observed edges, and thus (unlike dense block models) sub-quadratically with the number of entities or nodes. We compare to alternative models for both dense and sparse networks, and demonstrate effective recovery of latent community structure for real-world networks with thousands of nodes. | Accept | The authors present a framework for thinning edges from random graph realizations from the generalized gamma process (GGP) to generate sparse graphs with mixed community memberships. The authors provide efficient Monte Carlo methods that scale sub-quadratically with the number of nodes. There are concerns about scalability of the proposed method and its novelty over the GGP based construction of Caron and Fox. The reviewers also note that the paper needs proof reading and a clearer exposition. | train | [
"XmVXKsgW76",
"xO_awpM2YzH",
"TUPCSHHUfd",
"x1CI8RhKyVC",
"-F0Au1zPogG",
"9z4SaYDJEbA",
"PD8WAaebJMv",
"GiFJFXAUTDs",
"1y9Ja-7EyyN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for providing the detailed responses to reviewers. I still support this paper for acceptance. The main weaknesses I see are that novelty and evaluation are both somewhat limited.",
" Dear author(s), thanks for your responses. I am quite happy with your changes, several of the issues I have p... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"9z4SaYDJEbA",
"-F0Au1zPogG",
"x1CI8RhKyVC",
"1y9Ja-7EyyN",
"GiFJFXAUTDs",
"PD8WAaebJMv",
"nips_2022_LivA_JyyJM",
"nips_2022_LivA_JyyJM",
"nips_2022_LivA_JyyJM"
] |
nips_2022_p-56bnzZhQ7 | Cryptographic Hardness of Learning Halfspaces with Massart Noise | We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples $(\mathbf{x}, y) \in \mathbb{R}^N \times \{ \pm 1\}$, where the distribution of $\mathbf{x}$ is arbitrary and the label $y$ is a Massart corruption of $f(\mathbf{x})$, for an unknown halfspace $f: \mathbb{R}^N \to \{ \pm 1\}$, with flipping probability $\eta(\mathbf{x}) \leq \eta < 1/2$. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small, namely $\mathrm{OPT} = 2^{-\log^{c} (N)}$ for any universal constant $c \in (0, 1)$. Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible. | Accept | The paper studies the hardness of PAC learning halfspaces in the presence of Massart noise. Recent work showed that the problem is hard for all algorithms using only statistical queries (the SQ model). While this class contains most learning algorithms, it does not include powerful algorithmic techniques such as Gaussian elimination or certain lattice algorithms. The present work shows similar hardness result for all algorithms under the assumption that learning with errors (LWE) is hard for subexponential-time algorithms. All reviewers appreciate the hardness result on a central problem in learning theory. 
Some reviewers are slightly concerned that the subexponential-time hardness assumption is quite strong, but there is evidence linking the problem with other problems at the foundation of lattice-based cryptography, and the assumption has also been used in several other previous works. | train | [
"-hgQJ5m8nLe",
"yueHu8WRaNJ",
"qXcyv_UlXIv",
"URSsI42NsY-",
"Tj9z5kIQNgWH",
"u0cR654d5D",
"fSosImxugK0",
"nPwW-eq_z8N",
"9VjgcwLiIU",
"z9ups4tsPAe",
"VjHRib5UyW",
"mfZVHy6y5mD",
"OMITsLwc7QP",
"WEkB6H9pCN3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Interesting, thank you for the detailed clarification. So very roughly, it is a spacing constraint arising from the spacing in the original CLWE problem. That makes sense. I think what you have just written might be valuable to put in this paper as an appendix.",
" We start by explaining the meaning of the para... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"yueHu8WRaNJ",
"URSsI42NsY-",
"u0cR654d5D",
"fSosImxugK0",
"WEkB6H9pCN3",
"OMITsLwc7QP",
"mfZVHy6y5mD",
"VjHRib5UyW",
"nips_2022_p-56bnzZhQ7",
"nips_2022_p-56bnzZhQ7",
"nips_2022_p-56bnzZhQ7",
"nips_2022_p-56bnzZhQ7",
"nips_2022_p-56bnzZhQ7",
"nips_2022_p-56bnzZhQ7"
] |