paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_Jpxd93u2vK- | Rare Gems: Finding Lottery Tickets at Initialization | Large neural networks can be pruned to a small fraction of their original size, with little loss in accuracy, by following a time-consuming "train, prune, re-train" approach. Frankle & Carbin conjecture that we can avoid this by training lottery tickets, i.e., special sparse subnetworks found at initialization, that can be trained to high accuracy. However, a subsequent line of work presents concrete evidence that current algorithms for finding trainable networks at initialization fail simple baseline comparisons, e.g., against training random sparse subnetworks. Finding lottery tickets that train to better accuracy than simple baselines remains an open problem. In this work, we resolve this open problem by proposing Gem-Miner, which finds lottery tickets at initialization that beat current baselines. Gem-Miner finds lottery tickets trainable to accuracy competitive with or better than Iterative Magnitude Pruning (IMP), and does so up to $19\times$ faster. | Accept | This paper proposes GEM-MINER to find lottery tickets at initialization. It tries to maximize the accuracy of subnetworks before weight training, and can discover subnetworks comparable to IMP with warmup while being much faster. The approach can pass the sanity checks proposed by prior work.
It received scores of 4, 5, 6, and 8. All the reviewers agree that the problem studied in this paper is important for the lottery ticket hypothesis (LTH). The reviewers appreciate the authors' extensive experiments and thorough analysis. Most of the concerns have been well addressed, though Reviewer BBr3 still had concerns about why 'Rare Gem' can solve problems that others could not, and commented that more insightful discussion of the findings is needed.
Overall, the AC thinks that this paper presents valuable findings and that the responses to reviewers' comments are adequate; therefore, the AC recommends acceptance of the paper. | train | [
"-y-lVTU8LAU",
"Z19KDfeZUfH",
"5kMuafYxH6l",
"39tY9EjSbDe",
"DsThpafJmaN",
"nAMQrzsFShj",
"n4CKTOlgE4T",
"FrbTbZ-feGz",
"aSaFCWSi9v0",
"ZVASW05wofP",
"Dn7T9JUS_Bd",
"wVJo0xkfU5p",
"_LqYcZy8Ta"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to respond to my comments. My concerns regarding weaknesses 1,3,4,5 are generally addressed. As for the second weakness, I think the authors' response still cannot explain why global pruning and gradual pruning are useful. For example, the discussion about Fig.7 is still a phenomenon... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"DsThpafJmaN",
"5kMuafYxH6l",
"n4CKTOlgE4T",
"DsThpafJmaN",
"_LqYcZy8Ta",
"wVJo0xkfU5p",
"Dn7T9JUS_Bd",
"ZVASW05wofP",
"nips_2022_Jpxd93u2vK-",
"nips_2022_Jpxd93u2vK-",
"nips_2022_Jpxd93u2vK-",
"nips_2022_Jpxd93u2vK-",
"nips_2022_Jpxd93u2vK-"
] |
nips_2022_qJpEiCrM3XK | A Curriculum Perspective of Robust Loss Functions | Learning with noisy labels is a fundamental problem in machine learning. A large body of work aims to design loss functions robust against label noise. However, it remains an open question why robust loss functions can underfit and why loss functions deviating from theoretical robustness conditions can appear robust. To tackle these questions, we show that a broad array of loss functions differs only in the implicit sample-weighting curricula they induce. We then adopt the resulting curriculum perspective to analyze how robust losses interact with various training dynamics, which helps elucidate the above questions. Based on our findings, we propose simple fixes to make robust losses that severely underfit competitive with state-of-the-art losses. Notably, our novel curriculum perspective complements the common theoretical approaches focusing on bounding the risk minimizers. | Reject | The paper focuses on investigating the underfitting issue of certain robust loss functions and why functions deviating from theoretical robustness conditions can still be robust. A key vehicle supporting the analysis is a standard form with equivalent gradients that enables the investigation of different robust loss functions from a curriculum learning perspective. In this standard form, each loss function is factorized into a sample weight and an implicit loss. Empirical studies have been conducted to show the interaction between sample-weighting curricula and different training dynamics, yielding deeper insights into the underfitting issue and into why loss combination can mitigate it. A shifting/rescaling strategy is also developed to adapt the sample-weighting curricula so that the underfitting issue can be addressed.
The paper contributes to an important area in machine learning, given that labeling errors commonly exist in most real-world datasets. The proposed work complements most existing research, which focuses on designing robust loss functions and bounding the risk minimizers of these functions. The derivation of the standard form with equivalent gradients is new and nicely connects a set of robust loss functions with a sample-weighting curriculum. The subsequent analysis also offers some interesting insights into the open questions being considered.
Major concerns from the reviewers include whether some of the important findings hold in general. These concerns relate to the evaluation on limited datasets (e.g., only CIFAR100 suffers from underfitting) and to the fact that only MAE is tested for the effectiveness of the shifting/rescaling strategy. While the authors' rebuttal addresses some of these critical comments, a more comprehensive evaluation may still be needed to make a convincing case. Furthermore, while the analysis is interesting, to justify that it is truly useful it is still necessary to show how it helps inform the design of better robust loss functions or leads to a better learning process that can further improve current models. The proposed shifting/rescaling strategy shows some promise, but it is designed in a rather heuristic way. It heavily relies on a critical hyperparameter that is dataset-dependent and may be quite sensitive, and thus difficult to set properly in practice.
The Senior AC and AC discussed the paper and the authors' concerns and feel that the paper is not yet above the NeurIPS publication bar. The authors are encouraged to take the reviewer feedback into account in preparing an improved draft, which they can submit to an upcoming conference.
| train | [
"2Tk_jsCvPG",
"fc6HoCEfsh2",
"G-MqpfZ3G0N",
"6PP2RY7lAE1",
"UnSDB7tkq11",
"hCny_U6ZgFr",
"cZ3iU6wIU1",
"fFQzS6sFe9Y",
"a4H7gY2Hb_C",
"JRaJqFxCw9a",
"0Vc03aZvPyR",
"0Au6FUxizY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments!\n\nWe tune for the best hyperparameter and learning rates for these losses. The default learning rate (0.01) achieves the best overall performance. The expected performance of AUL(68.43) and AGCE(68.94) can be improved to 72.69 and 74.53, respectively, with the best hyperparameters. MAE ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
4
] | [
"G-MqpfZ3G0N",
"hCny_U6ZgFr",
"6PP2RY7lAE1",
"0Au6FUxizY",
"0Vc03aZvPyR",
"JRaJqFxCw9a",
"nips_2022_qJpEiCrM3XK",
"a4H7gY2Hb_C",
"nips_2022_qJpEiCrM3XK",
"nips_2022_qJpEiCrM3XK",
"nips_2022_qJpEiCrM3XK",
"nips_2022_qJpEiCrM3XK"
] |
nips_2022_-welFirjMss | Optimal Transport of Classifiers to Fairness | In past work on fairness in machine learning, the focus has been on forcing the predictions of classifiers to have similar statistical properties for people of different demographics. To reduce the violation of these properties, fairness methods usually simply rescale the classifier scores, ignoring similarities and dissimilarities between members of different groups. Yet, we hypothesize that such information is relevant in quantifying the unfairness of a given classifier. To validate this hypothesis, we introduce Optimal Transport to Fairness (OTF), a method that quantifies the violation of fairness constraints as the smallest Optimal Transport cost between a probabilistic classifier and any score function that satisfies these constraints. For a flexible class of linear fairness constraints, we construct a practical way to compute OTF as a differentiable fairness regularizer that can be added to any standard classification setting. Experiments show that OTF can be used to achieve an improved trade-off between predictive power and fairness. | Accept | Reviewers are fairly positive about this paper. This paper takes an optimal transport approach to projecting an unfair score function onto a fairness-constrained set by minimizing the transport cost. This addresses the issue of making similar changes to individuals with similar features, instead of just post-processing score thresholds to match the necessary fairness criteria. This work also provides a nice regularization term reflecting the OT cost to be optimized during training. The authors handle fairness measures that are linear in a certain sense.
The main concerns raised were:
a) Computational complexity: because pairwise transport and cost matrices are involved, reviewers were worried about time complexity. The authors' answer is to use batch computation, which is reasonable.
b) More experimental validation was suggested. The authors responded by adding one additional evaluation. However, it would be great if the authors could add further evaluations before the camera-ready, as there is plenty of time.
c) There was a question about using the unbiased Sinkhorn divergence. The authors clarified that this would lead to a multi-level optimization problem, while their approach is simpler.
I believe the concerns were adequately addressed and not major.
In any case, I reiterate that the authors should consider adding another evaluation on additional datasets, as suggested by one of the reviewers, before the camera-ready. | train | [
"ic5mDpTSC03",
"k111JDY8trV",
"fiWpnNtO2CI",
"P-taH_jSc5y",
"VtY4wOsjyBx",
"NUzOkIVjX4O",
"z8XIgSlBXws",
"XE48sMzo669",
"IQt0L-IchJT"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the Reviewer for continuing this interesting discussion. We apologize that our response is again lengthy, but we hope the Reviewer will appreciate the thoroughness. \n\n1. We believe the quoted expressions are confusing, in part, because our rebuttal discusses both the situation *before the revision* and... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"k111JDY8trV",
"P-taH_jSc5y",
"nips_2022_-welFirjMss",
"IQt0L-IchJT",
"XE48sMzo669",
"z8XIgSlBXws",
"nips_2022_-welFirjMss",
"nips_2022_-welFirjMss",
"nips_2022_-welFirjMss"
] |
nips_2022_tVbJdvMxK2- | Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning | Continual learning faces the crucial challenge of catastrophic forgetting. To address this challenge, experience replay (ER), which maintains a tiny subset of samples from previous tasks, has been commonly used. Existing ER works usually focus on refining the learning objective for each task with a static memory construction policy. In this paper, we formulate the dynamic memory construction in ER as a combinatorial optimization problem, which aims at directly minimizing the global loss across all experienced tasks. We first apply three tactics to solve the problem in the offline setting as a starting point. To provide an approximate solution to this problem under the online continual learning setting, we further propose the Global Pseudo-task Simulation (GPS), which mimics future catastrophic forgetting of the current task by permutation. Our empirical results and analyses suggest that GPS consistently improves accuracy across four commonly used vision benchmarks. We have also shown that our GPS can serve as a unified framework for integrating various memory construction policies in existing ER works. | Accept | This paper introduces an approach for improving the efficacy of experience replay approaches in continual learning, by cleverly deciding what should be in the replay buffer. Specifically, for each task the approach blends random and class-balanced memories according to a parameter that is fit explicitly (in the offline case) or fit approximately via simulated future tasks (in the online case). The authors show this approach outperforms baselines in avoiding forgetting. The reviewers are somewhat split on this paper, with the main criticisms raised concerning to what extent the simulated tasks accurately portray the challenges posed by real tasks. After reading the paper and reviews, I think this concern is valid and the paper is somewhat close to the border, but I believe (alongside 2 of the 3 reviewers) that overall the paper's innovations outweigh its weaknesses, and I recommend it for acceptance. | train | [
"DsVJbdDgAsl",
"DUrMuOJ0Or",
"Z7nSNm2Lq-z",
"YxUK6JdYSMW",
"gc_3nWu5hJU",
"47GOjTsMTf6",
"cTMhzQ0RGTn",
"w0rCsJRfrK",
"UqAL1PXOVy5",
"gTKJDvTrd3H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the feedback! We would like to clarify our response as follows:\n\n**Q1 – Pseudo-tasks**\n\nThanks for the comment, and we will make the wording clarification of forward transfer in the draft. We experiment with the zero-shot forward transfer in Appendix C.2 to show the superiority of permutation ov... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"DUrMuOJ0Or",
"47GOjTsMTf6",
"nips_2022_tVbJdvMxK2-",
"gTKJDvTrd3H",
"gTKJDvTrd3H",
"UqAL1PXOVy5",
"w0rCsJRfrK",
"nips_2022_tVbJdvMxK2-",
"nips_2022_tVbJdvMxK2-",
"nips_2022_tVbJdvMxK2-"
] |
nips_2022_8XWP2ewX-im | Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit | There is mounting empirical evidence of emergent phenomena in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work conducts such an exploration through the lens of learning $k$-sparse parities of $n$ bits, a canonical family of problems which pose theoretical computational barriers. In this setting, we find that neural networks exhibit surprising phase transitions when scaling up dataset size and running time. In particular, we demonstrate empirically that with standard training, a variety of architectures learn sparse parities with $n^{O(k)}$ examples, with loss (and error) curves abruptly dropping after $n^{O(k)}$ iterations. These positive results nearly match known SQ lower bounds, even without an explicit sparsity-promoting prior. We elucidate the mechanisms of these phenomena with a theoretical analysis: we find that the phase transition in performance is not due to SGD "stumbling in the dark" until it finds the hidden set of features; instead, we show that SGD gradually amplifies a Fourier gap in the population gradient. | Accept | This paper studies learning $k$-sparse parities of $n$ bits by a neural network. The authors show empirically that SGD efficiently learns sparse parities and also provide theoretical analysis for 2-layer MLPs. In particular, the authors show that training neural networks with SGD but without any sparsity penalty can solve the problem with $n^{O(k)}$ samples and steps, matching the standard statistical query (SQ) lower bounds for learning $k$-sparse parities of $n$ bits.
The major comments pointed out by reviewers include (1) the motivation for studying the parity problem and the progress measure, (2) the mismatch between the experimental results and the claims, and (3) the gap between the theoretical results and the empirical observations. To address these comments, the authors have submitted a revised version that includes a further introduction to the parity learning problem and clarifies the reasoning for choosing the progress measure. The authors also clarify the empirical claims concerning the experimental results. Though the gap between the theoretical results and the empirical observations cannot be closed in the current version and needs further investigation in future work, this paper advances our theoretical understanding of the computational aspect of scaling in deep learning through the parity learning problem. I therefore recommend acceptance, but the authors are advised to incorporate the discussions about the empirical claims into the final version. | train | [
"zsYNRfz_6EU",
"A4ms-OF8M2M",
"76LGgdLtZhZ",
"RehsuKN3zY",
"Y4tCJmqpe4e",
"INP8n2TRG4Aq",
"dDOI_SCrcZ4",
"ram9ETIXrFd",
"SSxRuhKRQG7",
"knfkwo84Fzo",
"1aFv0hl8jU",
"pjWCFKx1f3q",
"reta-Mx0ncm"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' reply and clarification. \n\nBy reading the references in the recent response, I better understand the meaning of parity in practice and it solves my most concerns. Additionally, I recommend the authors to incorporate the influence of parity or some real-world examples into the appendix of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"A4ms-OF8M2M",
"RehsuKN3zY",
"Y4tCJmqpe4e",
"SSxRuhKRQG7",
"ram9ETIXrFd",
"nips_2022_8XWP2ewX-im",
"nips_2022_8XWP2ewX-im",
"pjWCFKx1f3q",
"1aFv0hl8jU",
"reta-Mx0ncm",
"nips_2022_8XWP2ewX-im",
"nips_2022_8XWP2ewX-im",
"nips_2022_8XWP2ewX-im"
] |
nips_2022_9i7Sf1aRYq | How and Why to Manipulate Your Own Agent: On the Incentives of Users of Learning Agents | The usage of automated learning agents is becoming increasingly prevalent in many online economic applications such as online auctions and automated trading. Motivated by such applications, this paper is dedicated to fundamental modeling and analysis of the strategic situations that the users of automated learning agents are facing. We consider strategic settings where several users engage in a repeated online interaction, assisted by regret-minimizing learning agents that repeatedly play a "game" on their behalf. We propose to view the outcomes of the agents' dynamics as inducing a "meta-game" between the users. Our main focus is on whether users can benefit in this meta-game from "manipulating" their own agents by misreporting their parameters to them. We define a general framework to model and analyze these strategic interactions between users of learning agents for general games and analyze the equilibria induced between the users in three classes of games. We show that, generally, users have incentives to misreport their parameters to their own agents, and that such strategic user behavior can lead to very different outcomes than those anticipated by standard analysis. | Accept | Most reviews are positive and find that the paper solves an interesting and non-trivial problem. One reviewer points out some concerns about the motivating example, which seem to be addressed in the author response. I have a different concern: in the real world, the set of eligible bidders in each auction differs a lot; the authors may want to add some discussion of how this affects the results. | val | [
"AbVXAOn_fCt",
"h7JhmP7TXy",
"dESRpdiq6eQ",
"L8B9V8CqQN5",
"Lx9R3_fhgkK",
"NGBsGLvbHWW",
"IGmlClDEBS1",
"ZvbEwuLdbkL",
"PbheoaOq_9I",
"GnJLgbMWLAS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply.",
" Thank you for the response. I thought this was a good paper before, and I still do. I do not change my score.",
" We thank the reviewer for the feedback. We address below the main points that were raised in the review. \n\n1.Out-of-equilibrium patterns and dynamic equilibria: We wou... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"L8B9V8CqQN5",
"Lx9R3_fhgkK",
"IGmlClDEBS1",
"ZvbEwuLdbkL",
"PbheoaOq_9I",
"GnJLgbMWLAS",
"nips_2022_9i7Sf1aRYq",
"nips_2022_9i7Sf1aRYq",
"nips_2022_9i7Sf1aRYq",
"nips_2022_9i7Sf1aRYq"
] |
nips_2022_qC2BwvfaNdd | Data-IQ: Characterizing subgroups with heterogeneous outcomes in tabular data | High model performance, on average, can hide that models may systematically underperform on subgroups of the data. We consider the tabular setting, which surfaces the unique issue of outcome heterogeneity - this is prevalent in areas such as healthcare, where patients with similar features can have different outcomes, thus making reliable predictions challenging. To tackle this, we propose Data-IQ, a framework to systematically stratify examples into subgroups with respect to their outcomes. We do this by analyzing the behavior of individual examples during training, based on their predictive confidence and, importantly, the aleatoric (data) uncertainty. Capturing the aleatoric uncertainty permits a principled characterization and subsequent stratification of data examples into three distinct subgroups (Easy, Ambiguous, Hard). We experimentally demonstrate the benefits of Data-IQ on four real-world medical datasets. We show that Data-IQ's characterization of examples is most robust to variation across similarly performant (yet different) models, compared to baselines. Since Data-IQ can be used with any ML model (including neural networks, gradient boosting, etc.), this property ensures consistency of data characterization, while allowing flexible model selection. Taking this a step further, we demonstrate that the subgroups enable us to construct new approaches to both feature acquisition and dataset selection. Furthermore, we highlight how the subgroups can inform reliable model usage, noting the significant impact of the Ambiguous subgroup on model generalization. | Accept | This easy-to-follow paper has been thoroughly evaluated by five competent reviewers. Four of them rated the work as acceptable (two full and two weak accepts), while one recommended a rejection. In my opinion, the reviewer with the negative assessment has not raised fundamental issues that would disqualify this paper from being considered for NeurIPS. The authors provided extensive clarifications to all the reviewers, including the one with a negative opinion. That reviewer did not engage in discussion with the authors. I recommend accepting this paper without reservations. | train | [
"QwNp0rjsn_D",
"Jg10863gGUk",
"IizyJXn2g_z",
"QePHWzL4Un",
"ytt7fkJ2s_f",
"5yc1S6KPLd2",
"hM4tKVCoJ53",
"CspahM9KdMU",
"jkCitzbG2iI",
"haATnAmxLoO",
"IbV8FOBLmzV",
"FsVW-QfSozw",
"NtUFK-Ek6Gh",
"dpWjH9MKhd",
"_6DDjpagwem",
"Kkz2PNJMbE8",
"Xkrw2V7dbVd",
"fR_91bNYAJ",
"arXVsFkuAG6"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"autho... | [
" Dear authors of paper 6611:\n\nThank you very much for the detailed reply! The results of Data-IQ across different model types and data modalities make it more convincing, and the analysis of the training dynamics clarifies my concern. The point that I was trying to make by bringing up active learning in my revie... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3,
3
] | [
"Jg10863gGUk",
"NtUFK-Ek6Gh",
"FsVW-QfSozw",
"haATnAmxLoO",
"jkCitzbG2iI",
"CspahM9KdMU",
"VHGPMXP8H18T",
"9OQvSU_McKV",
"A7QuW0heDWt",
"rwvdYkP7cF",
"9OQvSU_McKV",
"3cUX0WGCHa",
"omLYyd9Oloz",
"_6DDjpagwem",
"Kkz2PNJMbE8",
"Xkrw2V7dbVd",
"fR_91bNYAJ",
"omLYyd9Oloz",
"06UcuTo6ggz... |
nips_2022_pluyPFTiTeJ | Domain Generalization without Excess Empirical Risk | Given data from diverse sets of distinct distributions, domain generalization aims to learn models that generalize to unseen distributions. A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty. We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization. We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk. This change guarantees that the domain generalization penalty cannot impair optimization of the empirical risk, i.e., in-distribution performance. To solve the proposed optimization problem, we demonstrate an exciting connection to rate-distortion theory and utilize its tools to design an efficient method. Our approach can be applied to any penalty-based domain generalization method, and we demonstrate its effectiveness by applying it to three exemplar methods from the literature, showing significant improvements. | Accept | The paper considers the class of penalty-based methods for domain generalization, which aim at minimizing a combination of the empirical risk and a penalty term used as a proxy of the generalization error in the unseen domains (examples include IRM, CORAL, etc). The paper argues that such methods often perform poorly in practice, either due to an erroneous penalty term, or due to the hardness of joint optimization, which results in the failure to minimize the empirical risk. To address this issue, the authors propose a method which, instead of jointly minimizing the empirical risk with the penalty, minimizes the penalty under the constraint of optimality of the empirical risk. The success of this method is based on the assumption that excess risk is not required for generalization. In other words, in-domain performance need not be sacrificed for out-of-domain performance (no trade-off between the two). As a result, the proposed method essentially eliminates excess empirical risk by reformulating penalty-based DG such that the penalty is minimized under the constraint of optimal empirical risk.
All three reviewers found the results novel and interesting. The reviewers had some concerns (about the framework and the experiments), which were addressed during the discussion period. I recommend that the authors also incorporate all the reviewers' comments into the revised manuscript. Also, for the experiments, I think it would be useful to compare the numbers with the SOTA numbers for the datasets considered (they are all available on the WILDS leaderboard).
| train | [
"s1x7ude-5zw",
"KQmDA_d_9bW",
"IjKREnaY0nk",
"mlCqkbezqwQ",
"XKS_UcnWhE2",
"fAhL8rd0YNe",
"jo9aYBEfLwE",
"kG9xZ5ZPBW",
"sTOy5k15Mbn"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. We believe there is a slight misunderstanding primarily caused by our ambiguous statement. When we said \"... consistently improves the performance of all tested methods...\", we meant the improvement is consistent over algorithms, not for every single experiment. Our method improves all ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"KQmDA_d_9bW",
"mlCqkbezqwQ",
"XKS_UcnWhE2",
"sTOy5k15Mbn",
"kG9xZ5ZPBW",
"jo9aYBEfLwE",
"nips_2022_pluyPFTiTeJ",
"nips_2022_pluyPFTiTeJ",
"nips_2022_pluyPFTiTeJ"
] |
nips_2022_FJ42JCNNUYT | LECO: Learnable Episodic Count for Task-Specific Intrinsic Reward | Episodic count has been widely used to design a simple yet effective intrinsic motivation for reinforcement learning with a sparse reward. However, the use of episodic count in a high-dimensional state space, as well as over a long episode time, requires thorough state compression and fast hashing, which hinders its rigorous exploitation in such hard and complex exploration environments. Moreover, the interference from task-irrelevant observations in the episodic count may cause its intrinsic motivation to overlook task-related important changes of states, and novelty measured in an episodic manner can lead to repeatedly revisiting familiar states across episodes. To resolve these issues, in this paper we propose a learnable hash-based episodic count, which we name LECO, that efficiently serves as a task-specific intrinsic reward in hard exploration problems. In particular, the proposed intrinsic reward consists of an episodic novelty and a task-specific modulation, where the former employs a vector quantized variational autoencoder to automatically obtain discrete state codes for fast counting, while the latter regulates the episodic novelty by learning a modulator to optimize the task-specific extrinsic reward. The proposed LECO specifically enables the automatic transition from exploration to exploitation during reinforcement learning. We experimentally show that, in contrast to previous exploration methods, LECO successfully solves hard exploration problems and also scales to large state spaces on the most difficult tasks in the MiniGrid and DMLab environments. | Accept | Reviews were mixed here and all quite borderline. Legitimate points were raised for why this paper is consistently given borderline ratings, with two in particular resonating with my own reading (novelty and comparisons with other methods). However, despite these issues the paper itself is a solid contribution, and I think it could easily lead to others building on the core ideas and approach. The paper has also improved notably during revisions and at present does not have any fundamental flaws that should preclude publication. Therefore, I recommend acceptance. | train | [
"GURT77TpQi",
"ftZEeYaJ9N-",
"1a_46MPyl7Z",
"dcoiiU4jx_",
"p2b2i8-4jWa",
"DV-Viriirfw",
"Fu1N2oR6qcu",
"WwbfxalP8W_",
"V5LxBZmD3s",
"62M9WPbBKFK",
"ZbJd4iUUaHb"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" - Experiments of Naive Combination\n - We experimentally compare LECO to a naive combination of VQ-VAE and LIRPG where we directly optimize the episodic counter using the meta-gradient of extrinsic rewards. More concretely, we define the intrinsic reward $r^i_t(a_{t-1}, s_t, a_t, s_{t+1})$ = $r_t^{ta}(a_{t-1}... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"ftZEeYaJ9N-",
"WwbfxalP8W_",
"ZbJd4iUUaHb",
"62M9WPbBKFK",
"62M9WPbBKFK",
"V5LxBZmD3s",
"V5LxBZmD3s",
"V5LxBZmD3s",
"nips_2022_FJ42JCNNUYT",
"nips_2022_FJ42JCNNUYT",
"nips_2022_FJ42JCNNUYT"
] |
nips_2022_wiHzQWwg3l | Extrapolative Continuous-time Bayesian Neural Network for Fast Training-free Test-time Adaptation | Human intelligence has shown remarkably lower latency and higher precision than most AI systems when processing non-stationary streaming data in real time. Numerous neuroscience studies suggest that such abilities may be driven by internal predictive modeling. In this paper, we explore the possibility of introducing such a mechanism into unsupervised domain adaptation (UDA) for handling non-stationary streaming data in real-time streaming applications. We propose to formulate internal predictive modeling as a continuous-time Bayesian filtering problem within a stochastic dynamical system context. Such a dynamical system describes the dynamics of the model parameters of a UDA model evolving with non-stationary streaming data. Building on such a dynamical system, we then develop extrapolative continuous-time Bayesian neural networks (ECBNN), which generalize existing Bayesian neural networks to represent temporal dynamics and allow us to extrapolate the distribution of model parameters before observing the incoming data, thereby effectively reducing latency. Remarkably, our empirical results show that ECBNN is capable of continuously generating better distributions of model parameters along the time axis given historical data only, thereby achieving (1) training-free test-time adaptation with low latency, (2) gradually improved alignment between the source and target features, and (3) gradually improved model performance over time during the real-time testing stage. | Accept | The paper combines a particle filtering differential equation (newly proposed in this paper) for sampling posterior parameters of a Bayesian neural network with unsupervised domain adaptation, and achieves strong results on the tasks demonstrated. Reviewers found the method novel and the problem important. Several questions were raised about the motivation for using Bayesian neural networks, about the utility of combining several of the pieces of the proposed loss function, and about the computational cost, but the authors gave satisfactory answers to the questions and provided ablations showing the utility of the different components of the loss function. One downside pointed out was that the method is quite dense and involves several parts that are intertwined. I hope the authors will revise their submission, taking into account the points raised by the reviewers. | train | [
"aUj078GK0-Q",
"hP9Tzo3rAnY",
"VeIic9FeMD5",
"2UEvAW56o7r",
"5VWtxV8vHhA",
"93elVRQLzBP",
"Hx-LAMlYaP4",
"CTrwu1vkBN8",
"A_qZ5EBSP7A",
"i8KuY2xykLs",
"L4QaOh1SqWf",
"btlfBJGeWB",
"d_-hjjFh5kl",
"l5Rh5RVso0O"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I want to thank the authors for taking my concerns seriously and responding to each of my questions. I have upgraded my ratings.",
" Thanks for your follow-up question and for keeping the communication channel open. We noticed our mistake and agreed that a non-Bayesian meta network could also be used to dynamic... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"A_qZ5EBSP7A",
"VeIic9FeMD5",
"l5Rh5RVso0O",
"l5Rh5RVso0O",
"l5Rh5RVso0O",
"l5Rh5RVso0O",
"btlfBJGeWB",
"btlfBJGeWB",
"btlfBJGeWB",
"d_-hjjFh5kl",
"nips_2022_wiHzQWwg3l",
"nips_2022_wiHzQWwg3l",
"nips_2022_wiHzQWwg3l",
"nips_2022_wiHzQWwg3l"
] |
nips_2022_thirVlDJ2IL | A Fourier Approach to Mixture Learning | We revisit the problem of learning mixtures of spherical Gaussians. Given samples from a mixture $\frac{1}{k}\sum_{j=1}^{k}\mathcal{N}(\mu_j, I_d)$, the goal is to estimate the means $\mu_1, \mu_2, \ldots, \mu_k \in \mathbb{R}^d$ up to a small error. The hardness of this learning problem can be measured by the \emph{separation} $\Delta$ defined as the minimum distance between all pairs of means. Regev and Vijayaraghavan (2017) showed that with $\Delta = \Omega(\sqrt{\log k})$ separation, the means can be learned using $\mathrm{poly}(k, d)$ samples, whereas super-polynomially many samples are required if $\Delta = o(\sqrt{\log k})$ and $d = \Omega(\log k)$. This leaves open the low-dimensional regime where $d = o(\log k)$.
In this work, we give an algorithm that efficiently learns the means in $d = O(\log k/\log\log k)$ dimensions under separation $d/\sqrt{\log k}$ (modulo doubly logarithmic factors). This separation is strictly smaller than $\sqrt{\log k}$, and is also shown to be necessary. Along with the results of Regev and Vijayaraghavan (2017), our work almost pins down the critical separation threshold at which efficient parameter learning becomes possible for spherical Gaussian mixtures. More generally, our algorithm runs in time $\mathrm{poly}(k)\cdot f(d, \Delta, \epsilon)$, and is thus fixed-parameter tractable in parameters $d$, $\Delta$ and $\epsilon$.
Our approach is based on estimating the Fourier transform of the mixture at carefully chosen frequencies, and both the algorithm and its analysis are simple and elementary. Our positive results can be easily extended to learning mixtures of non-Gaussian distributions, under a mild condition on the Fourier spectrum of the distribution. | Accept | Overall, the reviewers felt that this paper should be accepted because it provides a nice theoretical result that pins down the sample vs. gap tradeoff for learning mixtures of Gaussians in low dimensions -- a well-studied and interesting problem. | train | [
"YcE2rVhGqtd",
"fC4xIGzwGj_",
"C58LfobH16p",
"OzcLaK-8RP4",
"R69hurPRNp",
"XC5rqcRqAVL",
"M7XwNfVj5GJ",
"DUUWu_YXx_6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I am satisfied with it and have no further clarifications. ",
" Hi, thank you for answering my questions. I have understood the literature on the mixture learning complexity better by reading the answers.\n\nFor the score, I still keep the score 6 because the authors do not provide ... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"R69hurPRNp",
"OzcLaK-8RP4",
"DUUWu_YXx_6",
"M7XwNfVj5GJ",
"XC5rqcRqAVL",
"nips_2022_thirVlDJ2IL",
"nips_2022_thirVlDJ2IL",
"nips_2022_thirVlDJ2IL"
] |
nips_2022_2hp6sIBsCDH | Global Linear and Local Superlinear Convergence of IRLS for Non-Smooth Robust Regression | We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in (0,1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying non-smooth problem. In the convex case, $p=1$, we prove that this IRLS variant converges globally at a linear rate under a mild, deterministic condition on the feature matrix called the stable range space property. In the non-convex case, $p\in(0,1)$, we prove that under a similar condition, IRLS converges locally to the global minimizer at a superlinear rate of order $2-p$; the rate becomes quadratic as $p\to 0$. We showcase the proposed methods in three applications: real phase retrieval, regression without correspondences, and robust face restoration. The results show that (1) IRLS can handle a larger number of outliers than other methods, (2) it is faster than competing methods at the same level of accuracy, and (3) it restores a sparsely corrupted face image with satisfactory visual quality. | Accept | Thank you for your submission to NeurIPS. The reviewers unanimously found the contributions to be solid and the paper to be clear and well-written. All three reviewers recommend accepting the paper. One reviewer remarks that "it is actually interesting to see that the smoothing parameter plays a critical role in the convergence of IRLS, and I believe that it has been ignored by many practitioners in this area." Please incorporate reviewer feedback in preparing the camera-ready version. | train | [
"POHhEWmqV4O",
"9W8XQGpwu10",
"jFF0b9t3I6",
"r3HpD2LD0dU",
"th3WTLHcA3F",
"CM2PbTgW6yj",
"qaVZwzOXTTX",
"8Hv0WNgv9lh",
"rlH-l0Ipsay",
"8Pd7jCzr3dy",
"2NgMJWq8fl9",
"L7FWCATRJZX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed response, and it addressed all of my concerns. In my opinion it is a solid paper, and I prefer not to change my score.",
" I thank the authors for their detailed response and clarification on each comments. I remained my point of view. Thanks.",
" > Although the stable RSP... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"jFF0b9t3I6",
"r3HpD2LD0dU",
"L7FWCATRJZX",
"2NgMJWq8fl9",
"8Pd7jCzr3dy",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH",
"nips_2022_2hp6sIBsCDH"
] |
nips_2022_22hMrSbQXzt | Constrained Update Projection Approach to Safe Policy Optimization | Safe reinforcement learning (RL) studies problems where an intelligent agent has to not only maximize reward but also avoid exploring unsafe areas. In this study, we propose CUP, a novel policy optimization method based on a Constrained Update Projection framework that enjoys a rigorous safety guarantee. Central to our CUP development are the newly proposed surrogate functions along with the performance bound. Compared to previous safe reinforcement learning methods, CUP enjoys the following benefits: 1) it generalizes the surrogate functions to the generalized advantage estimator (GAE), leading to strong empirical performance; 2) it unifies performance bounds, providing better understanding and interpretability for some existing algorithms; 3) it provides a non-convex implementation via only first-order optimizers, which does not require any strong assumptions on the convexity of the objectives. To validate our CUP method, we compared CUP against a comprehensive list of safe RL baselines on a wide range of tasks. Experiments show the effectiveness of CUP both in terms of reward and in terms of safety constraint satisfaction. We have open-sourced CUP at https://github.com/zmsn-2077/CUP-safe-rl. | Accept | This paper studies safe reinforcement learning and proposes a new policy optimization method with a rigorous safety guarantee. During the author-reviewer discussion period, the reviewers' concerns were mostly resolved. The reviewers have reached the consensus that the contribution of the proposed method is sufficient for the paper to be borderline accepted. I recommend it for acceptance and suggest the authors incorporate the reviewers' comments into the final version. | test | [
"KFNHB52qeiS",
"_99zsLQHT1g",
"DEw781yE6Y",
"sLlhXnzcyqg",
"vQMpIJBhH1d",
"_bAGyXPdeG_",
"5UXFfyCp2G",
"ZKAG17tKH2",
"6SB8iRedZ6U",
"Z1NX4NOz8Ds",
"74RoF7NHXfM",
"FVwBkZDLCC-",
"OeWK5K5Zv-",
"ecWV27ebiyi",
"bFVZAJoeV4",
"d_l4ib5-0gV",
"pasZh-pX6Ce",
"Cq_4llzffxw",
"IOUIOlcKVlS",
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks for your further feedback.\n\nWe have updated a new version of the submission, where we provided comparison in the main draft, see page 7, 235-241, and remove the Section B.4 according to your suggestions.",
" Thanks for your further feedback. \n\n\nWe have provided a comparison in the main draft, see p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"DEw781yE6Y",
"DEw781yE6Y",
"ecWV27ebiyi",
"bFVZAJoeV4",
"_bAGyXPdeG_",
"ZKAG17tKH2",
"pasZh-pX6Ce",
"6SB8iRedZ6U",
"FVwBkZDLCC-",
"LLrJkTjQoEQ",
"nips_2022_22hMrSbQXzt",
"IOUIOlcKVlS",
"IOUIOlcKVlS",
"LLrJkTjQoEQ",
"mZQtvMZG5dp",
"Cq_4llzffxw",
"Cq_4llzffxw",
"nips_2022_22hMrSbQXz... |
nips_2022_IfgOWI5v2f | Conformal Off-Policy Prediction in Contextual Bandits | Most off-policy evaluation methods for contextual bandits have focused on the expected outcome of a policy, which is estimated via methods that at best provide only asymptotic guarantees. However, in many applications, the expectation may not be the best measure of performance as it does not capture the variability of the outcome. In addition, particularly in safety-critical settings, stronger guarantees than asymptotic correctness may be required. To address these limitations, we consider a novel application of conformal prediction to contextual bandits. Given data collected under a behavioral policy, we propose \emph{conformal off-policy prediction} (COPP), which can output reliable predictive intervals for the outcome under a new target policy. We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup, and empirically demonstrate the utility of COPP compared with existing methods on synthetic and real-world data. | Accept | The paper develops conformal prediction tools for OPE under the contextual bandit model. The reviewers find the problem of clear importance and the quality of the paper sufficiently high. All reviewers are in favor of acceptance. The authors are encouraged to incorporate the reviewers' feedback and have a clear discussion of the technical novelty in the final version. | train | [
"9GIbNWjJVx",
"vjgXbW1ZKSn",
"gUQraj96mTp",
"VUY-esTCV5I",
"ChGuGthJBWV",
"U9bcrMJ22an",
"jzQhcqQ-Jg",
"-cK9WAYMMyJ",
"Cqwfv1tLMe2",
"zOYIps_b4X",
"vKt9kO8tmY"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer very much for their response - we agree, and will make the changes in the final version of our paper as suggested.",
" We thank the reviewer very much for their reply. \n\nOur experiments have the following structure: for each method reported in tables, we use 10 runs, in each of which we... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"gUQraj96mTp",
"VUY-esTCV5I",
"ChGuGthJBWV",
"-cK9WAYMMyJ",
"vKt9kO8tmY",
"zOYIps_b4X",
"Cqwfv1tLMe2",
"Cqwfv1tLMe2",
"nips_2022_IfgOWI5v2f",
"nips_2022_IfgOWI5v2f",
"nips_2022_IfgOWI5v2f"
] |
nips_2022_JLWOTZpWZzY | AutoML Two-Sample Test | Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery and as a way to detect distribution shifts.
This led to the development of many sophisticated test procedures going beyond the standard supervised learning frameworks, whose usage can require specialized knowledge about two-sample testing. We use a simple test that takes the mean discrepancy of a witness function as the test statistic and prove that minimizing a squared loss leads to a witness with optimal testing power. This allows us to leverage recent advancements in AutoML. Without any user input about the problems at hand, and using the same method for all our experiments, our AutoML two-sample test achieves competitive performance on a diverse distribution shift benchmark as well as on challenging two-sample testing problems. | Accept | This paper proposes a two-sample test based on AutoML. The paper is interesting, and it proposes what promises to be a practical method. However, the reviewers have noted several points of discussion that need to be made explicit in the paper. I recommend this for acceptance, with a strong encouragement for the authors to incorporate the reviewer comments when crafting the final manuscript. | train | [
"zXpArHjJzi",
"afvVaBnPCvL",
"ur0B-_OQlg",
"p4rE6lgIun",
"zfOw2TwAIPp",
"-F2Ak0wN-JR",
"IgWX88dmnXC",
"ExDs6AdQ4B2",
"wXCy--1Qtow",
"rtwyT5bP3kC",
"Ri3CcqVaFI",
"QI-Hd36DmIm",
"qHL7yP-g3r",
"3vFmgJ5IQpW",
"akYAxBpM7F3",
"IaAymn1JTrq",
"l9YLjs8TDsg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their time, their participation in the discussion, and their helpful suggestions. We will use the additional page for the camera-ready version to discuss the topics raised in the reviews as we outlined in our rebuttal.\n\nThe main additions will be:\n\n- Consistency of our method, relat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"qHL7yP-g3r",
"-F2Ak0wN-JR",
"rtwyT5bP3kC",
"IgWX88dmnXC",
"Ri3CcqVaFI",
"wXCy--1Qtow",
"qHL7yP-g3r",
"QI-Hd36DmIm",
"l9YLjs8TDsg",
"IaAymn1JTrq",
"akYAxBpM7F3",
"3vFmgJ5IQpW",
"nips_2022_JLWOTZpWZzY",
"nips_2022_JLWOTZpWZzY",
"nips_2022_JLWOTZpWZzY",
"nips_2022_JLWOTZpWZzY",
"nips_2... |
nips_2022_Ul1legCUGIV | Constraining Gaussian Processes to Systems of Linear Ordinary Differential Equations | Data in many applications follows systems of Ordinary Differential Equations (ODEs). This paper presents a novel algorithmic and symbolic construction for covariance functions of Gaussian Processes (GPs) with realizations strictly following a system of linear homogeneous ODEs with constant coefficients, which we call LODE-GPs. Introducing this strong inductive bias into a GP improves modelling of such data. Using Smith normal form algorithms, a symbolic technique, we overcome two current restrictions in the state of the art: (1) the need for certain uniqueness conditions in the set of solutions, typically assumed in classical ODE solvers and their probabilistic counterparts, and (2) the restriction to controllable systems, typically assumed when encoding differential equations in covariance functions. We show the effectiveness of LODE-GPs in a number of experiments, for example learning physically interpretable parameters by maximizing the likelihood. | Accept | The reviewers found this paper novel, significant for the community, and well written. All four reviewers recommended accepting the paper. I also appreciated the numerous illustrative examples in the paper. You addressed many of the remaining questions/concerns in your rebuttal and in the author-reviewer discussion. Please go through the reviews once more for the camera-ready and take these into account. | train | [
"pFqmHs7mh5Cy",
"S5-LBkBszEF",
"S0vGrRCHSru-",
"ApPTCqcWqzs",
"pfF1y2MM1n1",
"qzU4cct6sQ",
"7o_KxPUlw0",
"tg8KgvRpovg",
"dwbwaMoo6R7",
"PC3l4yRTkO3",
"ZZ9mIr2gAKU",
"14xNJrzYk-",
"CMwNP3HrfM2",
"AqtyiIZeZbD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you very much for your feedback and the new score.\n\nWe will take care to implement your comments into the camera ready version of the paper!\n\n",
" Thanks for this response, it has help clarify a number of points for me.\n\nI have raised my score by 1 point to 6 (and increased contribution score to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"S5-LBkBszEF",
"tg8KgvRpovg",
"7o_KxPUlw0",
"qzU4cct6sQ",
"AqtyiIZeZbD",
"CMwNP3HrfM2",
"CMwNP3HrfM2",
"CMwNP3HrfM2",
"14xNJrzYk-",
"ZZ9mIr2gAKU",
"nips_2022_Ul1legCUGIV",
"nips_2022_Ul1legCUGIV",
"nips_2022_Ul1legCUGIV",
"nips_2022_Ul1legCUGIV"
] |
nips_2022_pNEisJqGuei | Value Function Decomposition for Iterative Design of Reinforcement Learning Agents | Designing reinforcement learning (RL) agents is typically a difficult process that requires numerous design iterations. Learning can fail for a multitude of reasons, and standard RL methods offer too few tools to provide insight into the exact cause. In this paper, we show how to integrate \textit{value decomposition} into a broad class of actor-critic algorithms and use it to assist in the iterative agent-design process. Value decomposition separates a reward function into distinct components and learns value estimates for each. These value estimates provide insight into an agent's learning and decision-making process and enable new training methods to mitigate common problems. As a demonstration, we introduce SAC-D, a variant of soft actor-critic (SAC) adapted for value decomposition. SAC-D maintains similar performance to SAC, while learning a larger set of value predictions. We also introduce decomposition-based tools that exploit this information, including a new reward \textit{influence} metric, which measures each reward component's effect on agent decision-making. Using these tools, we provide several demonstrations of decomposition's use in identifying and addressing problems in the design of both environments and agents. Value decomposition is broadly applicable and easy to incorporate into existing algorithms and workflows, making it a powerful tool in an RL practitioner's toolbox. | Accept | The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. However, reviewers expressed different opinions on the merits and contributions of this work.
One reviewer pointed out that similar ideas have been explored before and that the paper does not necessarily give new insights into value decomposition. The authors counter-argued by saying that the proposed method is the first concrete value-decomposition framework (with these particular properties) to be proposed in the literature.
Another reviewer had a more pessimistic view of the merits of this work. They argued that the paper lacks novelty and simply applies the idea of value decomposition to a different setting—in the context of actor-critic (AC) methods. They also argued that they expected a more formal discussion about the impact of value decomposition on AC methods and on whether re-weighting reward components could cause instability. The authors responded that off-policy DRL algorithms typically do not have convergence guarantees and that their goal was orthogonal: to provide more insight into the design of agents. The authors also argued that standard convergence guarantees apply if the reward function stops changing. Ultimately, this reviewer's two main points of contention were:
1. From their point of view, methods that help implement agents via iterative design could, in principle, be interesting for discussion in other venues, such as a workshop, but this level of contribution does not meet the bar for a full conference paper; and
2. Having dynamic weights could cause the method to become unstable, and the paper did not study whether this could be a problem in practice.
Two other reviewers, by contrast, expressed very positive views on this work.
One reviewer argued that even though the general idea has been explored before, using it to guide the reward function design is an original contribution. They also argued that the experiments were well-designed. After reading the authors' rebuttal, this particular reviewer said that the authors addressed all their technical questions.
Another reviewer expressed similarly strong positive views on this paper, arguing that:
1. It explores an important direction that ultimately leads to the design and improvement of new algorithms.
2. Even if the underlying idea is not new, this is the first extension of that idea to actor-critic algorithms.
3. The paper proposes a novel perspective for diagnosing agent failures and iteratively improving agents, and this new viewpoint directly engages with the practical difficulties of deploying RL agents.
4. This reviewer further supported their positive view of this paper by stating that the experiments are broad, interesting, and reveal the considerable potential behind the proposed method. For these reasons, they believe this paper will likely be of general interest to the community.
Ultimately, the main point of contention between the opinions of these two reviewers seems to result from their opposite views on what, precisely, the contribution of this work is:
- One reviewer sees this paper as one that merely applies the idea of value decomposition to actor-critic methods, without providing thorough theoretical and formal analyses. This reviewer argued that this contribution does not meet the bar for publication as a full conference paper.
- The other reviewer views this paper as one that introduces novel diagnostic tools for understanding learning agents. They argued that they found the cleanliness around the proposed diagnostic tools to be appealing and that the resulting proposal for iterative design, based on these diagnostics, is broadly useful for RL. This reviewer highlighted three kinds of diagnostics unlocked by the perspective introduced in the paper: **(1)** identifying insufficient state features; **(2)** value prediction errors; and **(3)** reward and exploration. They believe these avenues are new and useful when viewed from the perspective of iteratively identifying and correcting failure modes of agents. Ultimately, they disagreed with the reviewer above by arguing that they did not find the attachment to actor-critic algorithms all that critical.
Overall, thus, it seems like there is disagreement regarding the merits of this paper primarily due to two reviewers not agreeing on whether the proposed iterative design method is (in principle) a sufficient contribution to a conference paper. Regarding this matter, a fourth reviewer expressed a positive view of this work by stating that the questions studied here address an important aspect missing in the literature: how to diagnose agents. They argued that—given this paper's contributions towards that goal—the lack of formal analyses should not be considered a critical reason for rejection.
All reviewers encourage the authors to update their work based on their constructive criticisms and, in particular, in a way that tackles the points of contention mentioned in the original reviews and their post-rebuttal comments. | test | [
"WKTWONLyU4",
"D4z-ZTM4sQ4",
"XJee4KS8MAL",
"BY6XfuOIvJr",
"DyVoqNtGEu7",
"pIip6D-KIy",
"uC0vL9lTXx",
"e48SIYUesir",
"moVcdOC6PO9",
"O-BTBKWWus",
"hkA5J8gVays",
"dtygV19NmvM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their subsequent response. While value decomposition can be studied from an algorithmic perspective, which we do think is a valuable approach, that is not our focus here. Instead we focus on demonstrating how value decomposition provides a tool for engineers to debug and design agents in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"D4z-ZTM4sQ4",
"DyVoqNtGEu7",
"e48SIYUesir",
"uC0vL9lTXx",
"dtygV19NmvM",
"hkA5J8gVays",
"O-BTBKWWus",
"moVcdOC6PO9",
"nips_2022_pNEisJqGuei",
"nips_2022_pNEisJqGuei",
"nips_2022_pNEisJqGuei",
"nips_2022_pNEisJqGuei"
] |
nips_2022_7fdVZR_cl7 | Perfect Sampling from Pairwise Comparisons | In this work, we study how to efficiently obtain perfect samples from a discrete distribution $\mathcal{D}$ given access only to pairwise comparisons of elements of its support. Specifically, we assume access to samples $(x, S)$, where $S$ is drawn from a distribution over sets $\mathcal{Q}$ (indicating the elements being compared), and $x$ is drawn from the conditional distribution $\mathcal{D}_S$ (indicating the winner of the comparison) and aim to output a clean sample $y$ distributed according to $\mathcal{D}$. We mainly focus on the case of pairwise comparisons where all sets $S$ have size 2. We design a Markov chain whose stationary distribution coincides with $\mathcal{D}$ and give an algorithm to obtain exact samples using the technique of Coupling from the Past. However, the sample complexity of this algorithm depends on the structure of the distribution $\mathcal{D}$ and can be even exponential in the support of $\mathcal{D}$ in many natural scenarios. Our main contribution is to provide an efficient exact sampling algorithm whose complexity does not depend on the structure of $\mathcal{D}$. To this end, we give a parametric Markov chain that mixes significantly faster given a good approximation to the stationary distribution. We can obtain such an approximation using an efficient learning from pairwise comparisons algorithm (Shah et al., JMLR 17, 2016). Our technique for speeding up sampling from a Markov chain whose stationary distribution is approximately known is simple, general and possibly of independent interest. | Accept | The authors show how to perfectly sample a discrete distribution, given sample access to the Bradley-Terry-Luce model on subsets of size 2. While the learning problem has been previously studied extensively, this work initiates the study of sampling. Technically, the authors introduce re-weighting and rejection sampling ideas that speed up coupling from the past by utilizing an approximate learning algorithm; these techniques could be useful in other applications, as the authors hint in the rebuttal.
The reviewers agreed that the paper is technically quite strong and that it is well written. The authors responded to all remaining questions from the reviewers, clearing the path to the paper being accepted.
"O5jPfmGSNS",
"s34NydJTtov",
"c1kRliXMuIK",
"YkkdwtO4nMR",
"xdxwuLdtKi",
"psLx7GIqzri",
"RxKjudhJpqP",
"bLD2v25AXb4",
"t39R7easxKx",
"38A7ygjqj5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments! The ideas for further applications of the CFTP technique are interesting.",
" It is a nice response, and it would be good to include this discussion in the paper. Thanks for the interesting paper!",
" We thank the reviewer for the positive feedback and the provided comments and co... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
2
] | [
"xdxwuLdtKi",
"c1kRliXMuIK",
"38A7ygjqj5",
"t39R7easxKx",
"bLD2v25AXb4",
"RxKjudhJpqP",
"nips_2022_7fdVZR_cl7",
"nips_2022_7fdVZR_cl7",
"nips_2022_7fdVZR_cl7",
"nips_2022_7fdVZR_cl7"
] |
nips_2022_2zQx2Pxbd7J | An $\alpha$-No-Regret Algorithm For Graphical Bilinear Bandits | We propose the first regret-based approach to the \emph{Graphical Bilinear Bandits} problem, where $n$ agents in a graph play a stochastic bilinear bandit game with each of their neighbors. This setting reveals a combinatorial NP-hard problem that prevents the use of any existing regret-based algorithm in the (bi-)linear bandit literature. In this paper, we fill this gap and present the first regret-based algorithm for graphical bilinear bandits using the principle of optimism in the face of uncertainty. Theoretical analysis of this new method yields an upper bound of $\tilde{O}(\sqrt{T})$ on the $\alpha$-regret and evidences the impact of the graph structure on the rate of convergence. Finally, we show through various experiments the validity of our approach. | Accept | We thank the authors for their submission.
In this paper, agents are vertices of a graph, and an edge between agents i,j indicates that they play a multi-armed bandit game against each other in which the expected reward is bilinear in both agents' actions. The goal is to maximize the total expected reward over all edges of the graph, and prior work shows that this is an NP-hard problem.
Contribution: efficient optimism-based algorithms for minimizing $\alpha$-approximate regret, thus circumventing the computational hardness.
Considering approximate regret in this setting is a novel approach.
The paper is clearly written and presents a convincing experimental evaluation. | train | [
"HpT5ieX0xGJ",
"n0wPtxJOtV0G",
"TCg2eAEXHOB",
"K0aXrfK5PQ5",
"mc-HWFA75h",
"yhLonUHBrx7",
"V6Ix5CSvj1_"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for answering all my questions. ",
" Many thanks for your comments. We address them below.\n\n\"Can the author please either include a proof of Proposition 3.1 (regarding the efficiency of Max-Cut approximation), or provide a reference proving it?\"\n\nThe proof which gives an approximation ... | [
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"n0wPtxJOtV0G",
"V6Ix5CSvj1_",
"yhLonUHBrx7",
"mc-HWFA75h",
"nips_2022_2zQx2Pxbd7J",
"nips_2022_2zQx2Pxbd7J",
"nips_2022_2zQx2Pxbd7J"
] |
nips_2022_FhuM-kk8Pbk | Listen to Interpret: Post-hoc Interpretability for Audio Networks with NMF | This paper tackles post-hoc interpretability for audio processing networks. Our goal is to interpret decisions of a trained network in terms of high-level audio objects that are also listenable for the end-user. To this end, we propose a novel interpreter design that incorporates non-negative matrix factorization (NMF). In particular, a regularized interpreter module is trained to take hidden layer representations of the targeted network as input and produce time activations of pre-learnt NMF components as intermediate outputs. Our methodology allows us to generate intuitive audio-based interpretations that explicitly enhance parts of the input signal most relevant for a network's decision. We demonstrate our method's applicability on popular benchmarks, including a real-world multi-label classification task. | Accept | The paper presents a novel approach for interpreting a neural network's decision on audio input. The motivation of the research is clear, and all the reviewers have agreed on the importance of the addressed task and the originality of the proposed solution. Also, the authors have adequately responded to the reviewers' concerns during the rebuttal, including some additional experiments such as a comparison with attribution interpretation or a modified baseline. Therefore, I gladly recommend this paper be accepted in NeurIPS 2022.
## Strengths
- The paper addresses one of the underexplored yet important tasks of interpretability of neural networks for the audio domain. As visualization is helpful for interpreting neural networks in the vision domain, sonification is a reasonable and intuitive way to analyze how the neural network works on audio. This paper presents a promising solution for this purpose.
- The architecture of the proposed model is original and highly novel. The evaluation experiments were well designed, showing meaningful results.
## Weaknesses
- Many reviewers suggested that it would be interesting to see how the proposed model works on more complex data, such as music audio. The authors have explained why they focused on sound event detection as an initial goal. The reviewers' suggestion remains a challenging but valuable goal for future research.
| val | [
"vnWolmod8kD",
"rOLsv3OkDaw",
"X5AfK9nsnD8o",
"iefnBv1IC7KX",
"q_SESLXv_iy",
"PcEKjevxF4EU",
"z02O3agzvjq5",
"jWkMg-zSG-v",
"cys8dzika5",
"02w5FpZ1i_F",
"viVPfr2GBxA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your positive update. Please find our response to your queries below:\n* $L_{of}$ is a typo from us, it should be $L_{FID}$. Thanks for pointing this out. '$L_{of}$' was our old notation for fidelity loss. \n* Yes, it is the test loss of interpreter in Fig. 1. The training loss also follows a simila... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"rOLsv3OkDaw",
"02w5FpZ1i_F",
"02w5FpZ1i_F",
"02w5FpZ1i_F",
"viVPfr2GBxA",
"cys8dzika5",
"jWkMg-zSG-v",
"nips_2022_FhuM-kk8Pbk",
"nips_2022_FhuM-kk8Pbk",
"nips_2022_FhuM-kk8Pbk",
"nips_2022_FhuM-kk8Pbk"
] |
nips_2022_r-CsquKaHvk | Temporally-Consistent Survival Analysis | We study survival analysis in the dynamic setting: We seek to model the time to an event of interest given sequences of states. Taking inspiration from temporal-difference learning, a central idea in reinforcement learning, we develop algorithms that estimate a discrete-time survival model by exploiting a temporal-consistency condition. Intuitively, this condition captures the fact that the survival distribution at consecutive states should be similar, accounting for the delay between states. Our method can be combined with any parametric survival model and naturally accommodates right-censored observations. We demonstrate empirically that it achieves better sample-efficiency and predictive performance compared to approaches that directly regress the observed survival outcome. | Accept | This paper draws connections between survival analysis, where covariates appear over time, and TD learning in reinforcement learning. The connection is based on a recursion in the survival distribution when the process satisfies a Markov assumption. The paper then uses this connection to develop a new estimator for survival analysis using dynamic programming. The paper has the potential to connect two disparate communities in ML. From a technical standpoint, I am interested in what happens when the Markov assumption is weakened. | train | [
"m6YlyPg-t3Y",
"PQgsXt_Z6-W",
"NpJXDE0bvja",
"1hH4EoSaaLe",
"7Y0fZYhX_Q9",
"WBdWyZayqYz",
"8q7Z98YiWvy",
"pLrJcjCSI2C",
"aq8qbt5s5LU",
"59EPw-ax7k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Regarding the last point, you are right and I was wrong, though I prefer the alternative one.",
" The authors have addressed my questions and there are no further concerns. ",
" Thank you very much for your thorough review! We are grateful for such detailed feedback, and we will make a number of improvements ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"NpJXDE0bvja",
"1hH4EoSaaLe",
"59EPw-ax7k",
"aq8qbt5s5LU",
"pLrJcjCSI2C",
"8q7Z98YiWvy",
"nips_2022_r-CsquKaHvk",
"nips_2022_r-CsquKaHvk",
"nips_2022_r-CsquKaHvk",
"nips_2022_r-CsquKaHvk"
] |
nips_2022_jjlQkcHxkp0 | Exponential Separations in Symmetric Neural Networks | In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network (Santoro et al., 2017) architecture as a natural generalization of the DeepSets (Zaheer et al., 2017) architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size $N$ with elements in dimension $D$, which can be efficiently approximated by the former architecture, but provably requires width exponential in $N$ and $D$ for the latter. | Accept | Four domain experts recommended acceptance for this paper and I agree with their assessment. The writeup presents its setup, result, and argument all very clearly. The "width separation" result is solid and characterizes an exponential gap in expressive capacity between two material neural net architectures. Reviewers agree that the analysis carries independent interest/novelty as well, and are optimistic that it might be useful in other separation arguments (e.g. Reviewer Ddfp's comment about graph neural nets).
Especially in initial reviews, there were some concerns raised around motivation and grounding in practice. For instance, Reviewer YvXW naturally questioned the importance of separating the two types of architectures studied here, specifically asking whether this was grounded in a known empirical discrepancy between them. I think this was addressed well in the authors' subsequent response -- with references -- and it seems the reviewer agrees. Still, I found this thread helpful and I suspect that readers will naturally ask a similar question. I would recommend that the authors consider incorporating some of this reply into the paper itself. (Whether/how to do this is up to them, and doesn't bear on my acceptance recommendation.) In the same discussion, a good point was raised about remarking on learnability (e.g. by gradient descent). The authors mentioned they would comment on this in the next draft and I'd encourage that further as well.
Thanks to reviewers and authors both for their work. Overall this is a nice research contribution and a well-written paper. | train | [
"f0GvvElW1hs",
"rGCANvAwu2k",
"LHWe1JpWb4",
"fJTr30XVEJs",
"y6KENdf0wMQ",
"1R-5YyJ5y88",
"AMJhTMmmTyQ",
"dWJKFl5zxM"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your review! We address your concerns and questions below:\n\nWe appreciate the feedback that motivation was insufficient and we agree, see the response to reviewer YvXW for planned changes to explain why this separation is natural and practically motivated.\n\nThe tightness of our lower bound is a go... | [
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"dWJKFl5zxM",
"AMJhTMmmTyQ",
"1R-5YyJ5y88",
"y6KENdf0wMQ",
"nips_2022_jjlQkcHxkp0",
"nips_2022_jjlQkcHxkp0",
"nips_2022_jjlQkcHxkp0",
"nips_2022_jjlQkcHxkp0"
] |
nips_2022__3ELRdg2sgI | STaR: Bootstrapping Reasoning With Reasoning | Generating step-by-step "chain-of-thought" rationales improves language model performance on complex reasoning tasks like mathematics or commonsense question-answering. However, inducing language model rationale generation currently requires either constructing massive rationale datasets or sacrificing accuracy by using only few-shot inference. We propose a technique to iteratively leverage a small number of rationale examples and a large dataset without rationales, to bootstrap the ability to perform successively more complex reasoning. This technique, the "Self-Taught Reasoner" (STaR), relies on a simple loop: generate rationales to answer many questions, prompted with a few rationale examples; if the generated answers are wrong, try again to generate a rationale given the correct answer; fine-tune on all the rationales that ultimately yielded correct answers; repeat. We show that STaR significantly improves performance on multiple datasets compared to a model fine-tuned to directly predict final answers, and performs comparably to fine-tuning a 30$\times$ larger state-of-the-art language model on CommonsenseQA. Thus, STaR lets a model improve itself by learning from its own generated reasoning. | Accept | All reviewers found this paper to be interesting and timely, addressing a topic of much current interest -- prompting LLMs with rationales to improve their accuracy. While methodologically the paper follows existing work in ways that are not especially novel, the proposed procedure leads to strong enough empirical gains to be of wide interest to the community. The authors are encouraged to revise the paper incorporating the reviewers' suggestions. | train | [
"3RRSoj2CpD",
"ZfqU4VZwK3o",
"hUybfhiyb0U",
"wCbiOKXFGMq",
"btzclyHy7C0",
"I3JA08__geh",
"Rl5BprCICVu"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your work to improve the paper!",
" Thanks again to the reviewers for their constructive and positive feedback. Following up on the additional experiments, we found that the model's performance on the CQA test set was 64.9%. This is about what one might expect, as the CQA test set is known to be mo... | [
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
3,
4,
4,
4
] | [
"Rl5BprCICVu",
"hUybfhiyb0U",
"nips_2022__3ELRdg2sgI",
"nips_2022__3ELRdg2sgI",
"nips_2022__3ELRdg2sgI",
"nips_2022__3ELRdg2sgI",
"nips_2022__3ELRdg2sgI"
] |
nips_2022_I1mkUkaguP | Gradient Estimation with Discrete Stein Operators | Gradient estimation---approximating the gradient of an expectation with respect to the parameters of a distribution---is central to the solution of many machine learning problems. However, when the distribution is discrete, most common gradient estimators suffer from excessive variance. To improve the quality of gradient estimation, we introduce a variance reduction technique based on Stein operators for discrete distributions. We then use this technique to build flexible control variates for the REINFORCE leave-one-out estimator. Our control variates can be adapted online to minimize variance and do not require extra evaluations of the target function. In benchmark generative modeling tasks such as training binary variational autoencoders, our gradient estimator achieves substantially lower variance than state-of-the-art estimators with the same number of function evaluations. | Accept | Thanks to the authors for this submission. The reviewers all agreed that this is a novel and effective method for reducing the variance of gradient estimates in a discrete setting. The reviewer-author discussion was fruitful, and changes to the manuscript have improved the submission.
| train | [
"QPzuZNCekhs",
"DHFbuaxv1Ld",
"kyNKP0mjI7o",
"k7v7iDZ_H5E",
"3kJxkZNSmLV",
"HwKA9np6lvv",
"Ubrmx-ksHo6",
"hnrasw8I1G5",
"QziTqiLl7lT",
"tQR9iGuVcV2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your detailed response to my questions and the changes made to the manuscript based on those. I think the paper is overall a good paper and I would recommend its acceptance.",
" Thank you very much for your helpful answers - they addressed my questions very clearly. I increased my confidence to 4 a... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"HwKA9np6lvv",
"k7v7iDZ_H5E",
"tQR9iGuVcV2",
"QziTqiLl7lT",
"hnrasw8I1G5",
"Ubrmx-ksHo6",
"nips_2022_I1mkUkaguP",
"nips_2022_I1mkUkaguP",
"nips_2022_I1mkUkaguP",
"nips_2022_I1mkUkaguP"
] |
nips_2022_-9PV7GKwYpM | Composite Feature Selection Using Deep Ensembles | In many real-world problems, features do not act alone but in combination with each other. For example, in genomics, diseases might not be caused by any single mutation but require the presence of multiple mutations. Prior work on feature selection either seeks to identify individual features or can only determine relevant groups from a predefined set. We investigate the problem of discovering groups of predictive features without predefined grouping. To do so, we define predictive groups in terms of linear and non-linear interactions between features. We introduce a novel deep learning architecture that uses an ensemble of feature selection models to find predictive groups, without requiring candidate groups to be provided. The selected groups are sparse and exhibit minimum overlap. Furthermore, we propose a new metric to measure similarity between discovered groups and the ground truth. We demonstrate the utility of our model on multiple synthetic tasks and semi-synthetic chemistry datasets, where the ground truth structure is known, as well as an image dataset and a real-world cancer dataset. | Accept | ### Summary of paper
This work addresses the problem of grouped feature selection in the supervised learning setting. A new method based on an ensemble of feature selection models is proposed, along with a new metric to evaluate the results on synthetic, semi-synthetic, and real data.
### Rebuttal
The authors were engaged and addressed all the reviewers' concerns and questions. While reviewers did not engage in the rebuttal or post-rebuttal discussion, they significantly raised their scores, bringing the average score from 4.75 to 5.75, indicating that their main concerns were addressed.
### Acceptance
There are no major concerns that remain to be addressed and the paper is ready to be published. I hence recommend acceptance of the paper.
| train | [
"YJ5q21Y5BMj",
"OGMkTla4xXG",
"9MmsxUnpRwO",
"OFoIIyT9j4i",
"oljKsGp8L4S",
"ZNGt2GF45QN",
"st7H3VxQL_M",
"On9lsByCChs",
"aTy7AepbWi",
"1Sen3HLOwIr",
"ilWRT1H8cb2V",
"9w-pSAjSiGG",
"mM-fW1t-yhR",
"6nUvqAXHqc",
"PYsDUFiG1n",
"1Cdq0FzhDUF",
"P3oULmf4oKx",
"cCJvVbmm2Ch",
"yrLOsmCSCPq... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
" Thank you for your support! We are very happy that we could address all of your concerns. In particular, we are glad that you liked the METABRIC example!\n\nFinally, we have added the following sentence to our conclusion to discuss the additional regularization hyperparameter:\n\n_\"Additionally, to discover grou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
3
] | [
"OGMkTla4xXG",
"9MmsxUnpRwO",
"Bq6XN5pzKY",
"S8oeHlBpFST",
"st7H3VxQL_M",
"On9lsByCChs",
"cCJvVbmm2Ch",
"vrCJdIE17sf",
"Bq6XN5pzKY",
"eG7ONpyow4k",
"S8oeHlBpFST",
"pseP8B6kpzc",
"nips_2022_-9PV7GKwYpM",
"pseP8B6kpzc",
"S8oeHlBpFST",
"pseP8B6kpzc",
"S8oeHlBpFST",
"pseP8B6kpzc",
"S... |
nips_2022_sof8l4cki9 | Fast Mixing of Stochastic Gradient Descent with Normalization and Weight Decay | We prove the Fast Equilibrium Conjecture proposed by Li et al. (2020), i.e., stochastic gradient descent (SGD) on a scale-invariant loss (e.g., using networks with various normalization schemes) with learning rate $\eta$ and weight decay factor $\lambda$ mixes in function space in $\mathcal{\tilde{O}}(\frac{1}{\lambda\eta})$ steps, under two standard assumptions: (1) the noise covariance matrix is non-degenerate and (2) the minimizers of the loss form a connected, compact and analytic manifold. The analysis uses the framework of Li et al. (2021) and shows that for every $T>0$, the iterates of SGD with learning rate $\eta$ and weight decay factor $\lambda$ on the scale-invariant loss converge in distribution in $\Theta\left(\eta^{-1}\lambda^{-1}(T+\ln(\lambda/\eta))\right)$ iterations as $\eta\lambda\to 0$ while satisfying $\eta \le O(\lambda)\le O(1)$. Moreover, the evolution of the limiting distribution can be described by a stochastic differential equation that mixes to the same equilibrium distribution for every initialization around the manifold of minimizers as $T\to\infty$. | Accept | This paper continues a line of work studying SGD via a similar SDE, this time employing a variety of tools not used before, for instance time-rescaling and normalization layers. The reviewers were on one hand concerned at times that this was incremental, but also uncovered a variety of interesting contributions. As such, I feel this paper is a clear accept, and am excited to see it in the conference. That said, the discussions between reviewers and authors were quite detailed and I feel the authors could greatly strengthen their presentation by carefully incorporating them, making their intended message clearer for future readers. | val | [
"xgP4pfOSD0K",
"tlSLstnF-GD",
"p3fVuBpNQxu",
"6yfK61IPntc",
"bge2o_364Qd",
"3FsrpcloFfl",
"aIvNcCjMfR",
"U-hp3i-hnIC",
"90y5vyNS4B2",
"COPLOiCna3xQ",
"PvcTcn_YM-B",
"ihILa-_NMdf",
"ZMCTTmy14yi",
"8waQCgQpTU",
"L2umB6ZkzZ",
"BoTocS7XIF8",
"7HyXVTyitKi",
"x3sXc9ougGc"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We deleted the original statement in line 290-292 and added one paragraph at the end of Sec. 7.2 to further clarify about using equation (10) to decide the LR decay time.\n\nFor your information, here is a list of changes might take to run [this GitHub repo](https://github.com/bearpaw/pytorch-classification) with... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
2,
4
] | [
"tlSLstnF-GD",
"p3fVuBpNQxu",
"bge2o_364Qd",
"3FsrpcloFfl",
"U-hp3i-hnIC",
"90y5vyNS4B2",
"PvcTcn_YM-B",
"ihILa-_NMdf",
"ZMCTTmy14yi",
"nips_2022_sof8l4cki9",
"x3sXc9ougGc",
"7HyXVTyitKi",
"BoTocS7XIF8",
"L2umB6ZkzZ",
"nips_2022_sof8l4cki9",
"nips_2022_sof8l4cki9",
"nips_2022_sof8l4c... |
nips_2022_rrYWOpf_Vnf | Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent | In this paper, we study the statistical limits in terms of Sobolev norms of gradient descent for solving inverse problems from randomly sampled noisy observations using a general class of objective functions. Our class of objective functions includes Sobolev training for kernel regression, Deep Ritz Methods (DRM), and Physics Informed Neural Networks (PINN) for solving elliptic partial differential equations (PDEs) as special cases. We consider a potentially infinite-dimensional parameterization of our model using a suitable Reproducing Kernel Hilbert Space and a continuous parameterization of problem hardness through the definition of kernel integral operators. We prove that gradient descent over this objective function can also achieve statistical optimality and that the optimal number of passes over the data increases with sample size. Based on our theory, we explain an implicit acceleration from using a Sobolev norm as the objective function for training, inferring that the optimal number of epochs of DRM becomes larger than that of PINN when both the data size and the hardness of tasks increase, although both DRM and PINN can achieve statistical optimality. | Accept | This paper studies the statistical performance of Deep Ritz Methods (DRM) and Physics Informed Neural Networks (PINN) for solving an inverse problem with respect to elliptic equations. As a key example, consider the problem of estimating the solution $u$ to Poisson's equation $\Delta u = f$ given noisy observations of $f$. When the solution is a function in an RKHS, and using a Sobolev norm as the objective function, they show that gradient descent achieves implicit acceleration. They use this theory to show that both DRM and PINN achieve statistical optimality, but the number of epochs needed for DRM is larger. This is a nice contribution to the growing area of deep PDE solvers, and provides some illuminating theoretical insights. There was some debate between a reviewer and the authors about whether the results follow from existing bounds for kernel regression, but the authors convincingly argued that the norm of $A^{-1}$ in general need not be bounded and so those results cannot be applied in a black-box manner. Additionally, there are other differences and complications. | val | [
"qtT252wR1Kf",
"UQjtDGtlji",
"TC8uuZAPmkNm",
"XWeR0Cv2Ymp",
"a0u52pBegA",
"caAoDObPzSz",
"AUvnYWaVqox",
"ab99Spu2iCB",
"PmXo3VgiERV",
"Y6jSdI7u9YS",
"IUGVRT9XaX7w",
"LD_TIhXJXgN",
"5HKpG_BvseP",
"vTyOKJkThsU",
"33RA91JCJe",
"1fKjfxBUc8M",
"mKuCtBQ2EBu",
"lda4H4t7E_z",
"BcKYt81WgK... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks for addressing my questions.",
" I would like to thank the authors for the reply and my concerns are dually addressed. I have increased my score to 8.",
" Have we resolved the reviewer's concern?",
" Firstly, our paper follows traditional kernel regression literature and is cited in the paper as [17,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
1,
2,
4
] | [
"mKuCtBQ2EBu",
"LD_TIhXJXgN",
"XWeR0Cv2Ymp",
"a0u52pBegA",
"caAoDObPzSz",
"AUvnYWaVqox",
"ab99Spu2iCB",
"Y6jSdI7u9YS",
"Y6jSdI7u9YS",
"1fKjfxBUc8M",
"BcKYt81WgKo",
"iLfJqJQIZh0",
"vTyOKJkThsU",
"jRFbCiKkrbR",
"hbQIlByEMW",
"BcKYt81WgKo",
"lda4H4t7E_z",
"nips_2022_rrYWOpf_Vnf",
"n... |
nips_2022_B4OTsjq63T5 | Bayesian inference via sparse Hamiltonian flows | A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during Bayesian inference, with the goal of reducing computational cost. Although past work has shown empirically that there often exists a coreset with low inferential error, efficiently constructing such a coreset remains a challenge. Current methods tend to be slow, require a secondary inference step after coreset construction, and do not provide bounds on the data marginal evidence. In this work, we introduce a new method---sparse Hamiltonian flows---that addresses all three of these challenges. The method involves first subsampling the data uniformly, and then optimizing a Hamiltonian flow parametrized by coreset weights and including periodic momentum quasi-refreshment steps. Theoretical results show that the method enables an exponential compression of the dataset in a representative model, and that the quasi-refreshment steps reduce the KL divergence to the target. Real and synthetic experiments demonstrate that sparse Hamiltonian flows provide accurate posterior approximations with significantly reduced runtime compared with competing dynamical-system-based inference methods. | Accept | All reviewers agree that the paper proposes an interesting approach to Bayesian inference incorporating coresets with Hamiltonian flows. Although some reviewers had technical concerns in their first reviews, those have basically been resolved by the authors' responses. Thus, although there are some points that should be modified from the current form, I think we can expect the authors to modify the paper in the camera-ready version by reflecting the discussion. Based on this, I recommend acceptance of this paper. | val | [
"32kXBIzj_s",
"7LD0fAbOz8M",
"THs8nNQi6I2",
"6YB6xfMQR2C",
"V0M5bGlREVMW",
"UUMrDeKALZau",
"XVx60WSMtuy",
"rHi0FbR8udI",
"Ev9ZYoRkA3m",
"_ZYCQCa8iYX",
"Kj7xJtGm1s"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the follow-up! \n\nYour understanding of one of the main strengths of our work is correct: the use of a coreset enables our method to be computationally efficient, and Prop 3.1 shows how big the coreset must be to enable an accurate reproduction of the full posterior. \n\nGeneric normalizing flows (Syl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"7LD0fAbOz8M",
"UUMrDeKALZau",
"6YB6xfMQR2C",
"rHi0FbR8udI",
"Kj7xJtGm1s",
"XVx60WSMtuy",
"_ZYCQCa8iYX",
"Ev9ZYoRkA3m",
"nips_2022_B4OTsjq63T5",
"nips_2022_B4OTsjq63T5",
"nips_2022_B4OTsjq63T5"
] |
nips_2022_BjGawodFnOy | SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning | Value factorisation is a useful technique for multi-agent reinforcement learning (MARL) in the global reward game; however, its underlying mechanism is not yet fully understood. This paper studies a theoretical framework for value factorisation with interpretability via Shapley value theory. We generalise the Shapley value to the Markov convex game, obtaining the Markov Shapley value (MSV), and apply it as a value factorisation method in the global reward game, which is obtained by the equivalence between the two games. Based on the properties of MSV, we derive the Shapley-Bellman optimality equation (SBOE) to evaluate the optimal MSV, which corresponds to an optimal joint deterministic policy. Furthermore, we propose the Shapley-Bellman operator (SBO), which is proved to solve the SBOE. With a stochastic approximation and some transformations, a new MARL algorithm called Shapley Q-learning (SHAQ) is established, the implementation of which is guided by the theoretical results of SBO and MSV. We also discuss the relationship between SHAQ and relevant value factorisation methods. In the experiments, SHAQ exhibits not only superior performance on all tasks but also interpretability that agrees with the theoretical analysis. The implementation of this paper is available at https://github.com/hsvgbkhgbv/shapley-q-learning. | Accept | Reviewers appreciate that the paper is making an insightful contribution to the important field of cooperative MARL and its connection with cooperative game theory. The paper is clear and mostly well motivated, and the theoretical analysis and empirical evaluation are sufficient. | train | [
"EOESm04y5e",
"9Fo0vr8TQkB",
"3s_T-P4uKh4",
"AAVsrKyuux",
"QF-2rnOSN2",
"Ok2EsCF8RIj",
"gYDhZr-GmSO",
"2r43uM1dI-w",
"ZIn7O8u8gi",
"66pmL6oOwM5"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer XnpC,\n\n\nYour knowledge on the cooperative game, such as social dilemma that studies the condition of cooperation and the cooperative game theory (seen from your comments), actually has astonished the authors.\n\nThe authors would appreciate the insightful and helpful discussion with you. It is re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"66pmL6oOwM5",
"3s_T-P4uKh4",
"QF-2rnOSN2",
"nips_2022_BjGawodFnOy",
"66pmL6oOwM5",
"ZIn7O8u8gi",
"2r43uM1dI-w",
"nips_2022_BjGawodFnOy",
"nips_2022_BjGawodFnOy",
"nips_2022_BjGawodFnOy"
] |
nips_2022_OGM9dXemmq | Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics | Stochastic partial differential equations (SPDEs) are the mathematical tool of choice for modelling spatiotemporal PDE-dynamics under the influence of randomness. Based on the notion of mild solution of an SPDE, we introduce a novel neural architecture to learn solution operators of PDEs with (possibly stochastic) forcing from partially observed data. The proposed Neural SPDE model provides an extension to two popular classes of physics-inspired architectures. On the one hand, it extends Neural CDEs and variants -- continuous-time analogues of RNNs -- in that it is capable of processing incoming sequential information arriving at arbitrary spatial resolutions. On the other hand, it extends Neural Operators -- generalizations of neural networks to model mappings between spaces of functions -- in that it can parameterize solution operators of SPDEs depending simultaneously on the initial condition and a realization of the driving noise. By performing operations in the spectral domain, we show how a Neural SPDE can be evaluated in two ways, either by calling an ODE solver (emulating a spectral Galerkin scheme), or by solving a fixed point problem. Experiments on various semilinear SPDEs, including the stochastic Navier-Stokes equations, demonstrate how the Neural SPDE model is capable of learning complex spatiotemporal dynamics in a resolution-invariant way, with better accuracy and lighter training data requirements compared to alternative models, and up to 3 orders of magnitude faster than traditional solvers. | Accept | This paper provides an extension of Neural PDEs to Stochastic PDEs, which is a very important area lying at the intersection of dynamical systems and stochastic analysis. We think the work is well-motivated, novel, and relevant. The authors did a good job in the rebuttal of alleviating most of the reviewers' concerns about the contribution statement, the evaluation, the presentation, and the adaptivity to irregular sampling. We expect the authors to revise the paper accordingly in the next version. Please also explain why there are only two baselines in the paper. It would be great to see some discussion about whether the work can shed some light on learning unknown SPDEs, beyond solving them. | train | [
"PJRtKjPwSyf",
"p5vc9_fgtjp",
"xWIvLLirZvk",
"8iHe2W6IV1B",
"WneeagPvw4Y",
"i6KAARt7WL9",
"EBpBpSzRKC",
"f8W4sMrPp6S",
"29MB7UWlWZl",
"Pal-UFWRPgn",
"fTXoxHcpL4c",
"Bh4yx1Hp5zC",
"aP8x7AYsVJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, \n\nI did read the response you already provided to Reviewer QRvu so there was no need to copy-paste it here. My question was a bit more specific and related to the fact that almost all Neural DE models are applicable to learning dynamics from real-world observations from some unknown dynamics. Howe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
2,
3
] | [
"p5vc9_fgtjp",
"xWIvLLirZvk",
"f8W4sMrPp6S",
"Bh4yx1Hp5zC",
"fTXoxHcpL4c",
"EBpBpSzRKC",
"f8W4sMrPp6S",
"aP8x7AYsVJ",
"Pal-UFWRPgn",
"Bh4yx1Hp5zC",
"nips_2022_OGM9dXemmq",
"nips_2022_OGM9dXemmq",
"nips_2022_OGM9dXemmq"
] |
nips_2022_6aIYRZvbmk- | Amortized Inference for Heterogeneous Reconstruction in Cryo-EM | Cryo-electron microscopy (cryo-EM) is an imaging modality that provides unique insights into the dynamics of proteins and other building blocks of life. The algorithmic challenge of jointly estimating the poses, 3D structure, and conformational heterogeneity of a biomolecule from millions of noisy and randomly oriented 2D projections in a computationally efficient manner, however, remains unsolved. Our method, cryoFIRE, performs ab initio heterogeneous reconstruction with unknown poses in an amortized framework, thereby avoiding the computationally expensive step of pose search while enabling the analysis of conformational heterogeneity. Poses and conformation are jointly estimated by an encoder while a physics-based decoder aggregates the images into an implicit neural representation of the conformational space. We show that our method can provide one order of magnitude speedup on datasets containing millions of images, without any loss of accuracy. We validate that the joint estimation of poses and conformations can be amortized over the size of the dataset. For the first time, we prove that an amortized method can extract interpretable dynamic information from experimental datasets. | Accept | The paper studies the problem of simultaneously estimating poses and conformations of a biomolecule from cryo-EM images. It presents a pipeline which integrates reconstruction and conformation estimation, leading to significant time savings compared to methods that alternate between accurately estimating poses and conformations. Conventional pose estimation is expensive because it involves searching over the space of rigid body motions, by repeatedly rendering images from various viewpoints. The paper proposes an autoencoder-like structure, which generates conformation and pose estimates for each image; these are used by a decoder to produce an estimate of the conformation.
Reviewers generally evaluated the paper positively, noting that it achieves an order of magnitude speedup compared to a conventional baseline. Reviewers noted that although elements of the paper have previously appeared (the use of autoencoder-like architectures for cryo-EM reconstruction, the use of amortized inference over conformations and poses), the combination employed here is novel. Reviewers generally appreciated the paper's experimental results, while raising some concerns about the SNR and baselines for comparisons. Finally, reviewers noted that the paper provided a clear exposition of both cryo-EM and the proposed techniques. Overall, the paper exhibits a well-chosen combination of learning techniques, which leads to performance improvements for a problem of significant scientific interest. | train | [
"xnY0mUMFMU",
"HrwkxW-Ovz",
"DyBew-Gp3F2",
"eJC59I2U26S",
"DrugJi8GpqD",
"O_WcGV4-fD",
"YIIHYdNDEKH",
"IJsKGv2yc03",
"8Cl4k30AB9r",
"vHQf0C3KCAl",
"W519kzeyg0A"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for the detailed replies! It resolves most of my concerns. I have no further questions and believe that weak acceptance is a proper recommendation for the work.",
" The authors sufficiently addressed my concerns. I think the method makes a good contribution to the cryo-EM community, however, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"O_WcGV4-fD",
"DrugJi8GpqD",
"nips_2022_6aIYRZvbmk-",
"W519kzeyg0A",
"vHQf0C3KCAl",
"8Cl4k30AB9r",
"IJsKGv2yc03",
"nips_2022_6aIYRZvbmk-",
"nips_2022_6aIYRZvbmk-",
"nips_2022_6aIYRZvbmk-",
"nips_2022_6aIYRZvbmk-"
] |
nips_2022_Fn17vlng9pD | NIERT: Accurate Numerical Interpolation through Unifying Scattered Data Representations using Transformer Encoder | Numerical interpolation for scattered data aims to estimate values for target points based on those of some observed points. Traditional approaches produce estimations by constructing an interpolation function that combines multiple basis functions. These approaches require the basis functions to be pre-defined explicitly, thus greatly limiting their applications in practical scenarios. Recent advances exhibit an alternative strategy that learns interpolation functions directly from observed points using machine learning techniques, e.g., deep neural networks. This strategy, although promising, cannot effectively exploit the correlations between observed points and target points as it treats these types of points separately. Here, we present a learning-based approach to numerical interpolation using encoder representations of Transformers (thus called NIERT). NIERT treats the value of each target point as a masked token, which enables processing target points and observed points in a unified fashion. By calculating the partial self-attention between target points and observed points at each layer, NIERT gains the advantage of exploiting the correlations among these points and, more importantly, avoiding the unexpected interference of target points on observed points. NIERT also uses the pre-training technique to further improve its accuracy. On three representative datasets, including two synthetic datasets and a real-world dataset, NIERT outperforms the existing approaches, e.g., on the TFRD-ADlet dataset for temperature field reconstruction, NIERT achieves an MAE of $1.897\times 10^{-3}$, substantially better than the transformer-based approach (MAE: $27.074\times 10^{-3}$). These results clearly demonstrate the accuracy of NIERT and its potential for application in multiple practical fields. | Reject | This work proposed an accurate approach, called NIERT, for numerical interpolation problems. However, the motivation of the proposed NIERT is not explained very well, and the major limitation is the lack of real-world applications on high-dimensional datasets. Given that this is the main concern with the work, such experiments should be necessary. | train | [
"O-Oc8VmBdOS",
"yDM_MDsdxqy",
"P4Bkm-YIES",
"AUHVblncDdC",
"Ypw5Q7l2Sw0",
"iFZEd9wa00e",
"M6flYXI7eq8",
"olqNdbJl3KO",
"_CRZGbkWnBX",
"E4ElsIyDU3V"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer 93ZQ for the instructive feedback and for approving the experiments we have done is in the rebuttal stage. We address the reviewer's concerns below.\n\n> **[A2]** Thanks for trying cubic splines. By adaptive basis splines, I meant something like this: https://stat.ethz.ch/R-manual/R-devel/li... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"yDM_MDsdxqy",
"AUHVblncDdC",
"E4ElsIyDU3V",
"E4ElsIyDU3V",
"_CRZGbkWnBX",
"olqNdbJl3KO",
"olqNdbJl3KO",
"nips_2022_Fn17vlng9pD",
"nips_2022_Fn17vlng9pD",
"nips_2022_Fn17vlng9pD"
] |
nips_2022_0xbP4W7rdJW | VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming | We present VAEL, a neuro-symbolic generative model integrating variational autoencoders (VAE) with the reasoning capabilities of probabilistic logic (L) programming. Besides standard latent subsymbolic variables, our model exploits a probabilistic logic program to define a further structured representation, which is used for logical reasoning. The entire process is end-to-end differentiable. Once trained, VAEL can solve new unseen generation tasks by (i) leveraging the previously acquired knowledge encoded in the neural component and (ii) exploiting new logical programs on the structured latent space. Our experiments provide support for the benefits of this neuro-symbolic integration both in terms of task generalization and data efficiency. To the best of our knowledge, this work is the first to propose a general-purpose end-to-end framework integrating probabilistic logic programming into a deep generative model. | Accept | The paper proposes to include a component enforcing logical constraints on top of a variational autoencoder (VAE). The resulting method, VAEL, does so by leveraging ProbLog in addition to a neural encoder and decoder. VAEL is employed for simple generative tasks with constraints over the outputs, such as conditional image generation with MNIST and small Mario levels.
The reviewers have appreciated the direction in which VAEL is heading and the importance of integrating constraints in deep generative models. Some concerns remain open, specifically the motivation behind some architectural choices (and the specific choice of VAEs as deep generative models) and the small-scale nature of the experiments. The complexity and scaling capabilities of performing symbolic reasoning with ProbLog are not discussed in depth.
| train | [
"64gF3QnmBuv",
"MzSQCm7G3_K",
"vPRA4fEcRv7",
"CetP65gThrR",
"qsHoJicgAwF",
"lxqb99M9RuGe",
"29ON8uJ4Vww",
"6rs1_gpbMEW",
"YzJxsE37X1",
"FSHlc6bDSTh",
"rDqcEGc59Ng",
"44rcrvI36dv",
"q4naTl_AHJ2",
"nLGtW1IgIt",
"q0PKGrj2jxr",
"dxx90Zw4Ln_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their efforts in clarifying the extra information used by their method. In this case, I would be glad to keep my original score as \"6: Weak Accept\".",
" We thank the reviewer for acknowledging and clarifying the nature of their concerns.\n\nWe understand and respect the p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"lxqb99M9RuGe",
"vPRA4fEcRv7",
"FSHlc6bDSTh",
"44rcrvI36dv",
"29ON8uJ4Vww",
"6rs1_gpbMEW",
"rDqcEGc59Ng",
"YzJxsE37X1",
"dxx90Zw4Ln_",
"q0PKGrj2jxr",
"nLGtW1IgIt",
"q4naTl_AHJ2",
"nips_2022_0xbP4W7rdJW",
"nips_2022_0xbP4W7rdJW",
"nips_2022_0xbP4W7rdJW",
"nips_2022_0xbP4W7rdJW"
] |
nips_2022_Qx6UPW0r9Lf | Reinforced Genetic Algorithm for Structure-based Drug Design | Structure-based drug design (SBDD) aims to discover drug candidates by finding molecules (ligands) that bind tightly to a disease-related protein (targets), which is the primary approach to computer-aided drug discovery. Recently, applying deep generative models for three-dimensional (3D) molecular design conditioned on protein pockets to solve SBDD has attracted much attention, but their formulation as probabilistic modeling often leads to unsatisfactory optimization performance. On the other hand, traditional combinatorial optimization methods such as genetic algorithms (GA) have demonstrated state-of-the-art performance in various molecular optimization tasks. However, they do not utilize protein target structure to inform design steps but rely on a random-walk-like exploration, which leads to unstable performance and no knowledge transfer between different tasks despite the similar binding physics. To achieve a more stable and efficient SBDD, we propose Reinforced Genetic Algorithm (RGA) that uses neural models to prioritize the profitable design steps and suppress random-walk behavior. The neural models take the 3D structure of the targets and ligands as inputs and are pre-trained using native complex structures to utilize the knowledge of the shared binding physics from different targets and then fine-tuned during optimization. We conduct thorough empirical studies on optimizing binding affinity to various disease targets and show that RGA outperforms the baselines in terms of docking scores and is more robust to random initializations. The ablation study also indicates that the training on different targets helps improve the performance by leveraging the shared underlying physics of the binding processes.
The code is available at https://github.com/futianfan/reinforced-genetic-algorithm. | Accept | This paper proposes a genetic algorithm guided by reinforcement learning to design molecules. The RL agent in this method can directly perceive the 3D structure of protein targets. The paper presents a very comprehensive evaluation of the proposed method and existing SBDD methods. The improvement of the method over previous ones is evidenced by the comprehensive experiments, though it is not very significant compared to the traditional genetic algorithm.
The authors may want to analyze the biases and show a few examples of generated molecules to better illustrate the effectiveness. | train | [
"jhwLkCKNHms",
"J9EWsEsdvT",
"Kf4QfOEtqj",
"fqGNBcJlwY",
"TaAiToxtYGL",
"ieniULSPQvv",
"cTNHluizJ05",
"j8WfJMyCtEA",
"aTDKPxxhmC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response!\n\nAgree that the paper focuses on developing a new SBDD algorithm. \nWhat I mean is that, it would make the paper stronger if the author optimizes the molecule with Vina, and evaluates them using another docking software such as Glide (or others). \nUsing another independent evaluati... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"fqGNBcJlwY",
"aTDKPxxhmC",
"j8WfJMyCtEA",
"cTNHluizJ05",
"ieniULSPQvv",
"nips_2022_Qx6UPW0r9Lf",
"nips_2022_Qx6UPW0r9Lf",
"nips_2022_Qx6UPW0r9Lf",
"nips_2022_Qx6UPW0r9Lf"
] |
nips_2022_nV230sPnEBN | One for All: Simultaneous Metric and Preference Learning over Multiple Users | This paper investigates simultaneous preference and metric learning from a crowd of respondents. A set of items represented by $d$-dimensional feature vectors and paired comparisons of the form ``item $i$ is preferable to item $j$'' made by each user are given. Our model jointly learns a distance metric that characterizes the crowd's general measure of item similarities along with a latent ideal point for each user reflecting their individual preferences. This model has the flexibility to capture individual preferences, while enjoying a metric learning sample cost that is amortized over the crowd. We first study this problem in a noiseless, continuous response setting (i.e., responses equal to differences of item distances) to understand the fundamental limits of learning. Next, we establish prediction error guarantees for noisy, binary measurements such as may be collected from human respondents, and show how the sample complexity improves when the underlying metric is low-rank. Finally, we establish recovery guarantees under assumptions on the response distribution. We demonstrate the performance of our model on both simulated data and on a dataset of color preference judgements across a large number of users. | Accept | All reviewers felt this paper deserved to be accepted given its strong theoretical contributions and clear presentation. The authors may wish to consider additional experiments given that a number of reviewers found the empirical evaluation to be relatively weak, and are encouraged to follow through on incorporating conclusions and limitations into the main paper body. | train | [
"d_U5rb5oqCH",
"m3zrpDQi8ND",
"XEiPP-UtMK",
"FyiIWjDrWEf",
"SWkbFH2D6iM",
"v4tX-yzICkGb",
"IPN9XS8YyHF",
"uyQEZDJRj2J",
"G6h5NQWXm1g",
"xUDQ2Ul6MB",
"wvAujb51lUd"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to read and review our work. Please refer to the general response to all reviewers for a summary of the core points of our submission as well as for responses to questions shared by multiple reviewers. Below, we respond to your individual points directly.\n\nPlease see the general re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"wvAujb51lUd",
"XEiPP-UtMK",
"xUDQ2Ul6MB",
"G6h5NQWXm1g",
"uyQEZDJRj2J",
"IPN9XS8YyHF",
"nips_2022_nV230sPnEBN",
"nips_2022_nV230sPnEBN",
"nips_2022_nV230sPnEBN",
"nips_2022_nV230sPnEBN",
"nips_2022_nV230sPnEBN"
] |
nips_2022_TIPyxNbzeB8 | Uplifting Bandits | We introduce a new multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of these variables. Upon taking an action, the agent observes the realizations of all variables. This model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the uplifts of the actions over a baseline. We study multiple variants of the problem, including when the baseline and affected variables are unknown, and prove sublinear regret bounds for all of these. In addition, we provide regret lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets demonstrate the benefit of methods that estimate the uplifts over policies that do not use this structure.
| Accept | The paper's main contribution is, in my opinion, conceptual. It introduces a new multi-armed bandit problem with arms ('interventions') that affect a sparse set of downstream variables, a 'baseline effect' and an additive reward ('uplift') structure. The motivation for this is drawn from uplift modeling and its associated causal inference from marketing and e-commerce domains.
While this could, on one hand, be cast aside as 'yet another' combinatorial-type bandit problem, I believe that it brings to the table an interesting and thought-provoking information and decision structure to reason about, setting it apart from typical variations of bandit problems. Though one of the reviewers questions its scope as being rather narrow, I believe it is worth taking a risk ('pulling an exploratory arm'!) and seeing this paper published, in the hope that it spurs other fruitful work at this interface of marketing and ML.
I note also that the paper is written in a clear and lucid fashion, and urge the author(s) to suitably incorporate their suggested additions and clarifications arising from the engagement with the reviewers into the final version. | train | [
"8tC9DSzG-j",
"FzUZwASVtDl",
"K7FkuhvvGtt",
"KGlt2dReOcm",
"AgIbOLLg7_h",
"4F9qHrOk3AJ",
"TeU8udDbxvC",
"urm404PptY",
"_M7wVLOkppv",
"IYAc5o9FXFC",
"HE2HW6YLLr",
"cKo2CIDrUM",
"4xc7k2fslt",
"3Gdo5cJ3D9",
"aW6uQCsW1U"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed rebuttal. It clarified and addressed all my questions. I keep my score as it is.",
" Thank you, this helps answer the questions above. I will maintain the score.",
" Dear reviewers,\n\nWe wanted to thank you for insightful and detailed reviews. This allowed us to put together what ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"HE2HW6YLLr",
"IYAc5o9FXFC",
"nips_2022_TIPyxNbzeB8",
"4F9qHrOk3AJ",
"urm404PptY",
"TeU8udDbxvC",
"aW6uQCsW1U",
"aW6uQCsW1U",
"3Gdo5cJ3D9",
"4xc7k2fslt",
"cKo2CIDrUM",
"nips_2022_TIPyxNbzeB8",
"nips_2022_TIPyxNbzeB8",
"nips_2022_TIPyxNbzeB8",
"nips_2022_TIPyxNbzeB8"
] |
nips_2022_I-6yh2-dkyD | CyCLIP: Cyclic Contrastive Language-Image Pretraining | Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness. Such models typically require joint reasoning in the image and text representation spaces for downstream inference tasks. Contrary to prior beliefs, we demonstrate that the image and text representations learned via a standard contrastive objective are not interchangeable and can lead to inconsistent downstream predictions. To mitigate this issue, we formalize consistency and propose CyCLIP, a framework for contrastive representation learning that explicitly optimizes for the learned representations to be geometrically consistent in the image and text space. In particular, we show that consistent representations can be learned by explicitly symmetrizing (a) the similarity between the two mismatched image-text pairs (cross-modal consistency); and (b) the similarity between the image-image pair and the text-text pair (in-modal consistency). Empirically, we show that the improved consistency in CyCLIP translates to significant gains over CLIP, with gains ranging from 10%-24% for zero-shot classification on standard benchmarks (CIFAR-10, CIFAR-100, ImageNet1K) and 10%-27% for robustness to various natural distribution shifts. | Accept | The authors identify the inconsistency problem in CLIP for the first time and propose a simple but effective solution to the problem. The experiments are comprehensive and the results are convincing. Reviewers' questions are well addressed in the rebuttal. The reviewers all agreed on accepting this paper. After reading the paper, I also agree with the reviewers. | train | [
"9fRRLqTsbo0",
"umPjPWrNM6D",
"fZ1VfT6myPA",
"4QJPCvzPR__",
"4e--LQXDsVq",
"m57sysgUp5O",
"O8ccWqhUeq-",
"lVGoYrO7shG",
"Qh_kfFBHRgw",
"fOvrGnz7oEH",
"7F8U2XmaSzK",
"Y4HMQyWd70EF",
"NIeCObGtg4R",
"g6seQPGMmLN",
"QL_14WI_GCt",
"kDVy9vQI-vw"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for the detailed response. I appreciate the authors for adding the additional experiments and discussion. The paper now looks more complete to me, and the response has cleared some of my doubts. I have raised my score and recommend accept.\n\n",
" > **The first is the size of the dataset. Wou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4
] | [
"g6seQPGMmLN",
"4e--LQXDsVq",
"O8ccWqhUeq-",
"lVGoYrO7shG",
"m57sysgUp5O",
"fOvrGnz7oEH",
"Y4HMQyWd70EF",
"NIeCObGtg4R",
"nips_2022_I-6yh2-dkyD",
"7F8U2XmaSzK",
"kDVy9vQI-vw",
"QL_14WI_GCt",
"g6seQPGMmLN",
"nips_2022_I-6yh2-dkyD",
"nips_2022_I-6yh2-dkyD",
"nips_2022_I-6yh2-dkyD"
] |
nips_2022_m16lH6XJsbb | Spectrum Random Masking for Generalization in Image-based Reinforcement Learning | Generalization in image-based reinforcement learning (RL) aims to learn a robust policy that can be applied directly to unseen visual environments, which is a challenging task since agents usually tend to overfit to their training environment. To handle this problem, a natural approach is to increase the data diversity by image-based augmentations. However, unlike most vision tasks such as classification and detection, RL tasks are not always invariant to spatial-based augmentations due to the entanglement of environment dynamics and visual appearance. In this paper, we argue for two principles for augmentations in RL: First, the augmented observations should facilitate learning a universal policy, which is robust to various distribution shifts. Second, the augmented data should be invariant to the learning signals such as action and reward. Following these rules, we revisit image-based RL tasks from the view of the frequency domain and propose a novel augmentation method, namely Spectrum Random Masking (SRM), which helps agents learn the whole frequency spectrum of observations so as to cope with various distributions, while remaining compatible with the pre-collected actions and rewards corresponding to the original observations. Extensive experiments conducted on the DMControl Generalization Benchmark demonstrate that the proposed SRM achieves state-of-the-art performance with strong generalization potential. | Accept | The authors have introduced a method of data augmentation for image-based reinforcement learning that performs masking in the frequency domain, combined with techniques for stabilizing Q-learning, to achieve improved performance on a number of DMControl Generalization Benchmark tasks.
There was agreement among the reviewers that this work is novel and technically sound, and their concerns were mainly related to the breadth of tasks explored in the initial submission. During the review process the authors have gone to considerable effort to introduce new tasks (e.g. DrawerWorld, Robosuite and CARLA) and the improved performance of their method appears to generalize well. I believe that this work will be of broad interest to the RL community and recommend it for acceptance. | test | [
"ZtyOcCgJL_w",
"Rl2iNVYGjIh",
"EiW0PBt-tX",
"h7oB66lKoNx",
"E4JbRseIu5",
"UWv_q3y07W",
"76vL-6cQWVW",
"JvUZdlL7Kbs",
"yLz4fkjCUwM",
"ax2U29_Y-sR",
"iNCB_qXK4Ck",
"oUYnnlyTem3",
"5kZaJKefPRE",
"b3l7ZENwTyu",
"avA7VbHLLO",
"oW2fCqEDIz3"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely appreciate your advice and recommendations of different environments. Thank you for your effort and supportive feedback. We learned a lot from the review. We have added the discussion in the revised manuscript and will further elaborate on it in the final draft.",
" Thanks for the additional experi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"Rl2iNVYGjIh",
"yLz4fkjCUwM",
"h7oB66lKoNx",
"ax2U29_Y-sR",
"nips_2022_m16lH6XJsbb",
"76vL-6cQWVW",
"JvUZdlL7Kbs",
"oW2fCqEDIz3",
"avA7VbHLLO",
"b3l7ZENwTyu",
"5kZaJKefPRE",
"nips_2022_m16lH6XJsbb",
"nips_2022_m16lH6XJsbb",
"nips_2022_m16lH6XJsbb",
"nips_2022_m16lH6XJsbb",
"nips_2022_m... |
nips_2022_Z72wo6oOZQp | First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization | How can we train an assistive human-machine interface (e.g., an electromyography-based limb prosthesis) to translate a user's raw command signals into the actions of a robot or computer when there is no prior mapping, we cannot ask the user for supervision in the form of action labels or reward feedback, and we do not have prior knowledge of the tasks the user is trying to accomplish? The key idea in this paper is that, regardless of the task, when an interface is more intuitive, the user's commands are less noisy. We formalize this idea as a completely unsupervised objective for optimizing interfaces: the mutual information between the user's command signals and the induced state transitions in the environment. To evaluate whether this mutual information score can distinguish between effective and ineffective interfaces, we conduct a large-scale observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games. The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains, with an average Spearman's rank correlation of 0.43. In addition to offline evaluation of existing interfaces, we use our unsupervised objective to learn an interface from scratch: we randomly initialize the interface, have the user attempt to perform their desired tasks using the interface, measure the mutual information score, and update the interface to maximize mutual information through reinforcement learning. We evaluate our method through a small-scale user study with 12 participants who perform a 2D cursor control task using a perturbed mouse, and an experiment with one expert user playing the Lunar Lander game using hand gestures captured by a webcam. The results show that we can learn an interface from scratch, without any user supervision or prior knowledge of tasks, with less than 30 minutes of human-in-the-loop training. | Accept | The paper describes an approach to learning an adaptive user interface (i.e., mapping raw inputs to the agent's actions) in an unsupervised way via reinforcement learning. The goal is to learn interfaces that are intuitive for the user, with the supposition that the user's inputs become less noisy as the interface becomes more intuitive. To that end, the proposal is to use the mutual information between the raw input provided by the user and the resulting state transitions as a reward proxy. The approach is evaluated on a series of control and typing domains as well as a small-scale user study involving a cursor control task.
The paper was reviewed by three researchers who read the author response and discussed the paper with the AC. The reviewers agree that the problem of adapting a user interface in an unsupervised way is interesting and the proposed use of mutual information for adaptation is sensible and interesting. The reviewers initially raised concerns about the absence of a compelling use case and the experimental evaluations, notably the lack of appropriate baselines (Reviewer eRaa) and inadequate experiments (Reviewers eRaa and phLr). The authors made a concerted effort to address most of the reviewers' concerns, which included experiments conducted on the cursor domain using the alternative method suggested by Reviewer eRaa. However, the authors did not address the experimentation issues raised by Reviewer phLr, who finds that the paper lacks experimental evidence for some of the claims being made. As it stands, the paper doesn't show that the interface that can be achieved with this approach is truly intuitive. Making such a claim requires comparative experiments with appropriate baseline interfaces and more detailed user analyses. As such a detailed set of user studies may be out of scope for a conference-length algorithms paper focused on the use of mutual information as a reward proxy for interface learning, the claims in the paper should be revisited. | train | [
"gCeQ1FY_K3Z",
"25edvp55BQO",
"q_X71mEuKhe",
"yq149BZtmAc",
"1zHmanfOOhI",
"za4gcJgFQM4",
"D3JAIfHvbp8",
"jYfMl7hqqd-",
"xXJqWdOPnXw",
"-AoZEgXtyuW",
"A-K-7CV8TEl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This paper presents a novel interface that can translate users’ raw command signals into robot actions without the requirement of prior mapping. The authors evaluated the method through both evaluation of prior datasets and observational study with 12 participants. The authors have added details of the user study... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2022_Z72wo6oOZQp",
"q_X71mEuKhe",
"jYfMl7hqqd-",
"nips_2022_Z72wo6oOZQp",
"xXJqWdOPnXw",
"A-K-7CV8TEl",
"-AoZEgXtyuW",
"xXJqWdOPnXw",
"nips_2022_Z72wo6oOZQp",
"nips_2022_Z72wo6oOZQp",
"nips_2022_Z72wo6oOZQp"
] |
nips_2022_ByMYEibhiXO | Learning Superpoint Graph Cut for 3D Instance Segmentation | 3D instance segmentation is a challenging task due to the complex local geometric structures of objects in point clouds. In this paper, we propose a learning-based superpoint graph cut method that explicitly learns the local geometric structures of the point cloud for 3D instance segmentation. Specifically, we first oversegment the raw point clouds into superpoints and construct the superpoint graph. Then, we propose an edge score prediction network to predict the edge scores of the superpoint graph, where the similarity vectors of two adjacent nodes learned through cross-graph attention in the coordinate and feature spaces are used for regressing edge scores. By forcing two adjacent nodes of the same instance to be close to the instance center in the coordinate and feature spaces, we formulate a geometry-aware edge loss to train the edge score prediction network. Finally, we develop a superpoint graph cut network that employs the learned edge scores and the predicted semantic classes of nodes to generate instances, where bilateral graph attention is proposed to extract discriminative features on both the coordinate and feature spaces for predicting semantic labels and scores of instances. Extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, show that our method achieves new state-of-the-art performance on 3D instance segmentation. | Accept | The paper follows the recent trend of segmenting point clouds into instances using superpoints. Technically, the paper proposes to construct a superpoint graph and learn to perform graph cut for the segmentation. Three reviewers generally appreciate the idea and empirical results, although some architectural components and hyperparameters are less thoroughly evaluated.
After the rebuttal phase, the three reviewers maintained their original decision to accept the paper. The AC agrees. Congratulations!
| train | [
"oGOgXtvRzsL",
"ZB1-dFvViD",
"Q9ESxT_hQNg",
"kumcOzQp3Me",
"SGwHAGuw0MM",
"FFCIIZNBGfy",
"S8KUFJrfeb"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Q7**: A large number of hyper-parameters, including $\\delta$ and $\\beta$ for triplet loss (on line 184) , soft threshold $\\theta$ for semantic prediction (on line 192), threshold for edge cut (on line 201), and $k$ for cross $k$-NN graph (on line 292).\n\n**A7**: In 3D instance segmentation, the triplet loss... | [
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ZB1-dFvViD",
"FFCIIZNBGfy",
"S8KUFJrfeb",
"SGwHAGuw0MM",
"nips_2022_ByMYEibhiXO",
"nips_2022_ByMYEibhiXO",
"nips_2022_ByMYEibhiXO"
] |
nips_2022_WNSyF9qZaMd | Learning Bipartite Graphs: Heavy Tails and Multiple Components | We investigate the problem of learning an undirected, weighted bipartite graph under the Gaussian Markov random field model, for which we present an optimization formulation along with an efficient algorithm based on projected gradient descent. Motivated by practical applications, where outliers or heavy-tailed events are present, we extend the proposed learning scheme to the case in which the data follow a multivariate Student-$t$ distribution. As a result, the optimization program is no longer convex, but a verifiably convergent iterative algorithm is proposed based on the majorization-minimization framework. Finally, we propose an efficient and provably convergent algorithm for learning $k$-component bipartite graphs that leverages rank constraints of the underlying graph Laplacian matrix. The proposed estimators outperform state-of-the-art methods for bipartite graph learning, as evidenced by real-world experiments using financial time series data. | Accept | The reviewers agree that this work deals with an important problem and provides a sound solution. | train | [
"uJLtK1oquP2",
"6dE9hc0Q3I",
"1vigfC4YsYk",
"UBmyawNJ_H",
"SfOZ-4FSHiU-",
"gJC_S9f0cU",
"halUaKrju0",
"itEZX-ZXcgy",
"2H9HBDISaD",
"dpoY_fZgzEd",
"FnIT2W7wsUGd",
"iRX93xg3IQE",
"1rK9gyzSm5W",
"9X9Ioz5rSjc",
"4zUUUw9bSKm",
"vd9Bnoy-yhO"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed responses and additional experiments. I will retain my original recommendation for this paper. ",
" Dear reviewers,\n\nFirstly, we would like to thank Reviewer LmQW for kindly getting back to us after our responses for their questions.\n\nSecondly, we would like to ask Reviewers Jahb... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"6dE9hc0Q3I",
"nips_2022_WNSyF9qZaMd",
"UBmyawNJ_H",
"iRX93xg3IQE",
"nips_2022_WNSyF9qZaMd",
"nips_2022_WNSyF9qZaMd",
"4zUUUw9bSKm",
"4zUUUw9bSKm",
"4zUUUw9bSKm",
"vd9Bnoy-yhO",
"vd9Bnoy-yhO",
"9X9Ioz5rSjc",
"9X9Ioz5rSjc",
"nips_2022_WNSyF9qZaMd",
"nips_2022_WNSyF9qZaMd",
"nips_2022_WN... |
nips_2022_6TJryN46h7j | MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models | State-of-the-art causal discovery methods usually assume that the observational data is complete. However, the missing data problem is pervasive in many practical scenarios such as clinical trials, economics, and biology. One straightforward way to address the missing data problem is first to impute the data using off-the-shelf imputation methods and then apply existing causal discovery methods. However, such a two-step method may suffer from suboptimality, as the imputation algorithm may introduce bias for modeling the underlying data distribution. In this paper, we develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations. Focusing mainly on the assumptions of ignorable missingness and the identifiable additive noise models (ANMs), MissDAG maximizes the expected likelihood of the visible part of observations under the expectation-maximization (EM) framework. In the E-step, in cases where computing the posterior distributions of parameters in closed-form is not feasible, Monte Carlo EM is leveraged to approximate the likelihood. In the M-step, MissDAG leverages the density transformation to model the noise distributions with simpler and specific formulations by virtue of the ANMs and uses a likelihood-based causal discovery algorithm with directed acyclic graph constraint. We demonstrate the flexibility of MissDAG for incorporating various causal discovery algorithms and its efficacy through extensive simulations and real data experiments. | Accept |
All reviewers were convinced of the scientific value and results of this paper and voted for acceptance.
The authors did a good job of thoroughly addressing the various concerns raised.
As a result, two of the reviewers happily increased their scores.
A common concern remaining is the dense writing. The reviewers underlined the need for the authors to improve the clarity and presentation. Acceptance is recommended. | train | [
"htS03k0W9xH",
"Ni9A_j5AoB",
"PS6Nk5-e29W",
"Ss4sxvchA6fB",
"pi5SoswnuR6",
"PQwvHbF79mK",
"vZVyg9wcX71",
"O0Jajcdk-cY",
"vORdAKMnnIf",
"-YeXw3MWEr0",
"wD4sYeCeMu4",
"3yFxjl35Wq6",
"DEWPJzmVrYO",
"N8udch3wbd",
"m6zsO0RiN5Q",
"2_rLwW3R9Zg",
"_wo92Vd4ot-"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am happy with the authors' answers to my concerns, and with what I read in the rest of the discussion. Also because I do not consider myself an expert on the topic of this paper and the other reviewers are more positive, I have increased my score from 4 to 6.",
" Dear Reviewer GKqy,\n\nThanks for your great e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"DEWPJzmVrYO",
"N8udch3wbd",
"N8udch3wbd",
"PQwvHbF79mK",
"N8udch3wbd",
"vORdAKMnnIf",
"_wo92Vd4ot-",
"_wo92Vd4ot-",
"_wo92Vd4ot-",
"2_rLwW3R9Zg",
"2_rLwW3R9Zg",
"m6zsO0RiN5Q",
"N8udch3wbd",
"nips_2022_6TJryN46h7j",
"nips_2022_6TJryN46h7j",
"nips_2022_6TJryN46h7j",
"nips_2022_6TJryN4... |
nips_2022_7KKL3Z5sod | Efficient Scheduling of Data Augmentation for Deep Reinforcement Learning | In deep reinforcement learning (RL), data augmentation is widely considered a tool to induce a set of useful priors about semantic consistency and improve sample efficiency and generalization performance. However, even when the prior is useful for generalization, distilling it into the RL agent often interferes with RL training and degrades sample efficiency. Meanwhile, the agent is forgetful of the prior due to the non-stationary nature of RL. These observations suggest two extreme schedules of distillation: (i) over the entire training; or (ii) only at the end. Hence, we devise a stand-alone network distillation method to inject the consistency prior at any time (even after RL), and a simple yet efficient framework to automatically schedule the distillation. Specifically, the proposed framework first focuses on mastering the training environments regardless of generalization by adaptively deciding which {\it or no} augmentation to use for training. After this, we add the distillation to extract the remaining benefits for generalization from all the augmentations, which requires no additional new samples. In our experiments, we demonstrate the utility of the proposed framework, in particular the variant that postpones the augmentation to the end of RL training. | Accept | Although data augmentation is now common practice in Deep RL, this submission provides a new RL-friendly approach to how and when to use augmentation in Deep RL, supported by an extensive experimental study, which is a non-trivial contribution to the field. | train | [
"0G2ZUjTAN8A",
"XEimPsnJNb",
"Tq3IB0xpnB8",
"_-RBLHVKJUb",
"YmY7q1FodV2",
"hz5BjWV3ho",
"NqDrYTTXW9",
"T9j0N9JFVk",
"SFsyaSfFje8",
"povD5b8yKQ",
"_fAY19xs8KYz",
"kaeLX6IuBaV",
"mxXAvNfFQs1X",
"2lUXPE3k8J7",
"-lMoQoE9mhl",
"o_pV3A9RrEh",
"qLeBF3nFT8t"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response, and also we really appreciate for updating your rating. \n\nWe add explanation of easy-bg mode in section 5.1 about Weakness 1. \nAnd you can check any other change with blue text in the paper. \n ",
" We appreciate for your response and increasing rating.\n\nWe revise our paper wit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"hz5BjWV3ho",
"NqDrYTTXW9",
"T9j0N9JFVk",
"SFsyaSfFje8",
"nips_2022_7KKL3Z5sod",
"povD5b8yKQ",
"_fAY19xs8KYz",
"kaeLX6IuBaV",
"mxXAvNfFQs1X",
"qLeBF3nFT8t",
"o_pV3A9RrEh",
"-lMoQoE9mhl",
"2lUXPE3k8J7",
"nips_2022_7KKL3Z5sod",
"nips_2022_7KKL3Z5sod",
"nips_2022_7KKL3Z5sod",
"nips_2022... |
nips_2022__VjQlMeSB_J | Chain of Thought Prompting Elicits Reasoning in Large Language Models | We explore how generating a chain of thought---a series of intermediate reasoning steps---significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier. | Accept | All reviewers have voted to accept the paper, and this is solid work. | train | [
"s3gR3QY1AzU",
"btdmTr7gCmG",
"UigApuo4wz0",
"F1NW7CQl-Ic",
"wa180-OY605",
"MnisCno20Ha",
"0G7K0T-FrqL",
"-kJ1LgLVyGp",
"w8sX2wPYe0",
"EOkXV8TeeZ",
"61aKx0W4wkt"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concersn and answering my quesitons! The experimental results and discussions are intriguing. I'm happy to increase my score. ",
" Thanks for the further discussion!\n\n> Do you have any insights into why the performance improvement by CoT with smaller models is somewheat limited or ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"btdmTr7gCmG",
"UigApuo4wz0",
"wa180-OY605",
"nips_2022__VjQlMeSB_J",
"MnisCno20Ha",
"w8sX2wPYe0",
"EOkXV8TeeZ",
"61aKx0W4wkt",
"nips_2022__VjQlMeSB_J",
"nips_2022__VjQlMeSB_J",
"nips_2022__VjQlMeSB_J"
] |
nips_2022_6ZI4iF_T7t | Weakly Supervised Representation Learning with Sparse Perturbations | The theory of representation learning aims to build methods that provably invert the data generating process with minimal domain knowledge or any source of supervision. Most prior approaches require strong distributional assumptions on the latent variables and weak supervision (auxiliary information such as timestamps) to provide provable identification guarantees. In this work, we show that if one has weak supervision from observations generated by sparse perturbations of the latent variables--e.g. images in a reinforcement learning environment where actions move individual sprites--identification is achievable under unknown continuous latent distributions. We show that if the perturbations are applied only on mutually exclusive blocks of latents, we identify the latents up to those blocks. We also show that if these perturbation blocks overlap, we identify latents up to the smallest blocks shared across perturbations. Consequently, if there are blocks that intersect in one latent variable only, then such latents are identified up to permutation and scaling. We propose a natural estimation procedure based on this theory and illustrate it on low-dimensional synthetic and image-based experiments. | Accept | This submission theoretically analyzes when it is possible to identify a latent representation, up to various classes of symmetries, under the assumption that one has access to observations corresponding to sparse changes to the latent variables. This is primarily a theoretical paper, with a small amount of empirical validation. Overall, reviewers found the paper interesting and appreciated the granularity of the conclusions. They had concerns about the relationship to some of the more traditional prior work in the area as well as the realism of some of the assumptions, but subsequent discussion resolved the reviewers’ concerns. All reviewers now favor acceptance, so I believe this paper should be accepted. | train | [
"XvXX_eyu6z3q",
"DiH2RqzUUav",
"u75xqE-7ZLK",
"XrSCRxciR2D",
"BnyZ489zXLE",
"d_0TujjOBUV",
"IzzHPwikGa",
"aHn4mhvdjYK",
"BSX5tztc1y",
"aplQHv0G7Eg",
"GWk7zogR2T8",
"3fV2eRkE57S",
"F6__lzw91AF",
"457u4v0LyCB"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. My concerns have been addressed; I have raised the score and recommend acceptance.",
" 3. **Regarding comparision with globally optimal model:** The loss of the globally optimal model is zero (the perfect encoder and the perfect estimate of perturbation have to lead to a zero loss as... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"BnyZ489zXLE",
"u75xqE-7ZLK",
"XrSCRxciR2D",
"d_0TujjOBUV",
"457u4v0LyCB",
"F6__lzw91AF",
"3fV2eRkE57S",
"BSX5tztc1y",
"GWk7zogR2T8",
"nips_2022_6ZI4iF_T7t",
"nips_2022_6ZI4iF_T7t",
"nips_2022_6ZI4iF_T7t",
"nips_2022_6ZI4iF_T7t",
"nips_2022_6ZI4iF_T7t"
] |
nips_2022_Q82UCjXNSWL | Association Graph Learning for Multi-Task Classification with Category Shifts | In this paper, we focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously. In particular, we tackle a new setting, which is more realistic than currently addressed in the literature, where categories shift from training to test data. Hence, individual tasks do not contain complete training data for the categories in the test set. To generalize to such test data, it is crucial for individual tasks to leverage knowledge from related tasks. To this end, we propose learning an association graph to transfer knowledge among tasks for missing classes. We construct the association graph with nodes representing tasks, classes and instances, and encode the relationships among the nodes in the edges to guide their mutual knowledge transfer. By message passing on the association graph, our model enhances the categorical information of each instance, making it more discriminative. To avoid spurious correlations between task and class nodes in the graph, we introduce an assignment entropy maximization that encourages each class node to balance its edge weights. This enables all tasks to fully utilize the categorical information from related tasks. An extensive evaluation on three general benchmarks and a medical dataset for skin lesion classification reveals that our method consistently performs better than representative baselines. | Accept | This paper focuses on a variant of multi-task classification that it argues is new and of practical interest. In this setting, multiple tasks share a label space, such as in the case of a domain shift setting in which classes exist in multiple domains. The distinguishing feature of this setting is that not all classes have observed instances in all tasks. The paper proposes a graph-based approach, in which nodes representing tasks, classes, and instances are connected in an "association graph." A graph neural network is then learned over the association graph. Learnable metric functions weight the edges. Then representations of instances are classified using task-specific classifiers. Experiments on three benchmark tasks and a skin lesion task show that the proposed approach can consistently outperform both multi-task methods and sensible baselines like empirical risk minimization on all data.
The reviewers generally found the paper well-written, the new setting interesting and important to study, and the experimental evaluation thorough. During the review process, the authors added several other experiments and discussion justifying the use of a graph structure to effectively share knowledge across tasks. | train | [
"-jNuPrGiO4k",
"5Lbt7yrpXQZ",
"5FSxPTMyaDK",
"eY1DsY7kaBe",
"5T0X3g7KgLe",
"fLRpzMEsTkm",
"YGmLLbCYv_",
"nXw2VvMMr_4",
"7uU2ZckYj2w",
"8cnqJaWAVXh",
"Govap0QxDO",
"ywoba0of10P",
"jb7nv8mpY0S",
"V9R_vAqP0d4",
"cqb-zJpQOd1",
"0cujrSQCPww",
"2rmzvbT-7To"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the author's efforts to clarify my questions, I would like to raise my score to 5.",
" Thanks for your detailed response. I think it is a technically solid, moderate-to-high impact paper and I will keep my score.",
" Thank you for detailed explanations, the authors have answered all my questions ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"nXw2VvMMr_4",
"jb7nv8mpY0S",
"eY1DsY7kaBe",
"5T0X3g7KgLe",
"Govap0QxDO",
"8cnqJaWAVXh",
"nips_2022_Q82UCjXNSWL",
"2rmzvbT-7To",
"2rmzvbT-7To",
"0cujrSQCPww",
"cqb-zJpQOd1",
"cqb-zJpQOd1",
"V9R_vAqP0d4",
"nips_2022_Q82UCjXNSWL",
"nips_2022_Q82UCjXNSWL",
"nips_2022_Q82UCjXNSWL",
"nips... |
nips_2022_cqyBfRwOTm1 | Learning from Label Proportions by Learning with Label Noise | Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels. The task is to learn a classifier to predict the labels of future individual instances. Prior work on LLP for multi-class data has yet to develop a theoretically grounded algorithm. In this work, we propose an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of \textcite{Patrini2017MakingDN}. We establish an excess risk bound and generalization error analysis for our approach, while also extending the theory of the FC loss, which may be of independent interest. Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures, compared to the leading methods. | Accept | This paper proposes an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss. The method is verified both theoretically and experimentally. The theory of the FC loss may also be of independent interest. The authors have addressed most of the comments. After the discussion, the reviewers unanimously recommended acceptance. | train | [
"_VsNvBEJOA_",
"uvoXdfx5C4",
"sxW8JexT-v",
"YaR4P7POM2",
"zBVO6eoHaRx",
"S0XyDFiS09",
"ShsK-0IFtIF",
"XInUqLqvHE3",
"u-cn_-TvCWj",
"IVHRuOmBU-X",
"JwK1obfIOst",
"2HVIVaFLBs7",
"Vs_cdfvt7y"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the further discussion. My original question actually means that, in practice, we may have a very large number of bags & classes, and annotating each bag with label proportions could be difficult ($\\gamma_k$ may not be observed); how to estimate the multi-class noise matrix in such cases? In the binar... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2,
5
] | [
"sxW8JexT-v",
"XInUqLqvHE3",
"zBVO6eoHaRx",
"ShsK-0IFtIF",
"JwK1obfIOst",
"Vs_cdfvt7y",
"2HVIVaFLBs7",
"JwK1obfIOst",
"IVHRuOmBU-X",
"nips_2022_cqyBfRwOTm1",
"nips_2022_cqyBfRwOTm1",
"nips_2022_cqyBfRwOTm1",
"nips_2022_cqyBfRwOTm1"
] |
nips_2022_ZfaEZyQDrok | SignRFF: Sign Random Fourier Features | Industry practice has been moving to embedding-based retrieval (EBR). For example, in many applications, the embedding vectors are trained by some form of two-tower models. During the serving phase, candidates (embedding vectors) are retrieved according to the rankings of cosine similarities either exhaustively or by approximate near neighbor (ANN) search algorithms. For those applications, it is natural to apply ``sign random projections'' (SignRP), or variants, to the trained embedding vectors to facilitate efficient data storage and cosine distance computations. SignRP is also one of the standard indexing schemes for conducting approximate near neighbor search. In the literature, SignRP has been popular and, to an extent, has become the default method for ``locality sensitive hashing'' (LSH).
In this paper, we propose ``sign random Fourier features'' (SignRFF) as an alternative to SignRP. The original method of random Fourier features (RFF) is a standard technique for approximating the Gaussian kernel (as opposed to the linear cosine kernel), in the literature of large-scale machine learning. Basically, RFF applies a simple nonlinear transformation on the samples generated by random projections (RP). Thus, in the pipeline of EBR, it is straightforward to replace SignRP by SignRFF. This paper explains, in a principled manner, why it makes sense to do so.
In this paper, a new analytical measure called \textbf{Ranking Efficiency (RE)} is developed, which in retrospect is closely related to the ``two-sample mean'' $t$-test statistic for binomial variables. RE provides a systematic and unified framework for comparing different LSH methods. We compare our proposed SignRFF with SignRP, KLSH (kernel LSH), as well as SQ-RFF (which is another 1-bit coding scheme for RFF). According to the RE expression, SignRFF consistently outperforms KLSH (for the Gaussian kernel) and SQ-RFF. SignRFF also outperforms SignRP in the relatively high similarity region. The theoretical comparison results are consistent with our empirical findings. In addition, experiments are conducted to compare SignRFF with a wide range of data-dependent and deep learning based hashing methods and show the advantage of SignRFF with a sufficient number of hash bits.
| Accept | This was a borderline paper, with three (mostly) positive reviews and one somewhat negative review.
Some of the concerns raised by the reviewers include marginal contributions as well as some issues with the empirical results (though some reviewers felt the results were strong). It does seem that at least the first reviewer was satisfied by the rebuttal response, and the concerns of the more negative reviewer seemed to be addressed at least somewhat in the rebuttal.
I also took a close look at the paper. I am somewhat on the fence with this paper but I think the merits outweigh any flaws and that the paper could be accepted at the conference. The paper has two somewhat disjoint contributions (one on the SignRFF method and the other introducing the ranking efficiency measure), each of which alone is not enough to carry a NeurIPS paper. I think together it's OK but a bit non-traditional in its presentation. It might be good in the final version to consider (if possible) adding additional comparisons, particularly to deep hashing methods. | train | [
"6-tc7k42LZ9",
"HMQPuthekhq",
"jS-qbJ-ikSe",
"MoaBq9EPw-",
"DU_P04giiaw",
"cZd_HQsERu3",
"F3VNUHXHOa",
"A3KtgXH1Rfx",
"PWYNqSoo10a",
"mJRxs8J3zjN",
"8XE9BFPDwm4"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer\n\nAgain, thank you for the constructive review comments. Please let us know if our rebuttal has adequately addressed your concerns. \n\nThank you\n\nAuthors ",
" Dear Referee #U4Yu,\n\nThank you. \n\nWe are very pleased to know that our response has addressed your concerns. We appreciate your tho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"DU_P04giiaw",
"jS-qbJ-ikSe",
"F3VNUHXHOa",
"8XE9BFPDwm4",
"mJRxs8J3zjN",
"PWYNqSoo10a",
"A3KtgXH1Rfx",
"nips_2022_ZfaEZyQDrok",
"nips_2022_ZfaEZyQDrok",
"nips_2022_ZfaEZyQDrok",
"nips_2022_ZfaEZyQDrok"
] |
nips_2022_n6QYLjlYhkG | HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences | In this paper, we tackle the important yet under-investigated problem of making long-horizon prediction of event sequences. Existing state-of-the-art models do not perform well at this task due to their autoregressive structure. We propose HYPRO, a hybridly normalized probabilistic model that naturally fits this task: its first part is an autoregressive base model that learns to propose predictions; its second part is an energy function that learns to reweight the proposals such that more realistic predictions end up with higher probabilities. We also propose efficient training and inference algorithms for this model. Experiments on multiple real-world datasets demonstrate that our proposed HYPRO model can significantly outperform previous models at making long-horizon predictions of future events. We also conduct a range of ablation studies to investigate the effectiveness of each component of our proposed methods. | Accept | The authors propose in this submission a hybrid approach for the event prediction problem, combining an auto-regressive model and an energy model aiming to correct a compounding error issue that auto-regressive models can have. Reading the submission and according to the reviewers, the paper is well written and the approach is reasonable and well-motivated. Moreover, the discussion with the reviewers has allowed the authors to provide additional arguments supporting their different claims convincingly.
Therefore I recommend this paper for acceptance. | train | [
"TogWonarBKr",
"VoW2uyoUdO",
"0Z3YGQXvhI3",
"_Z_DP4nQPOE",
"DwvEZYPBFB7",
"msExGKoiKXO",
"JjnfqyDiRhj",
"SPb8ueWIOt",
"HXqI-xOOW_N",
"Q-7xJpKbdJP6",
"y6V9zdw_XYg",
"WncBraMJcBn",
"Gl3NVyW8mjJ",
"GhZ_72MRtW",
"Mlq5uvYwAB",
"IKct-0CV3nF",
"Ec8z7wGBmGY",
"gZWSeALygJ86",
"KdrVclP4FdA... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"offici... | [
" We thank all the reviewers for their supportive feedback. \n\nWe will surely structure the final version as we have discussed and agreed. Most importantly, we will rewrite the Introduction to emphasize our precise technical contributions (see Technical Contribution in our rebuttal). \n\nWe will also include all t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_n6QYLjlYhkG",
"0Z3YGQXvhI3",
"WncBraMJcBn",
"DwvEZYPBFB7",
"Mlq5uvYwAB",
"JjnfqyDiRhj",
"DslRFpJL2Yt",
"DslRFpJL2Yt",
"Qiixk6Vyw3d",
"Qiixk6Vyw3d",
"Qiixk6Vyw3d",
"Qiixk6Vyw3d",
"DslRFpJL2Yt",
"DslRFpJL2Yt",
"tlpJw6_eyfp",
"nips_2022_n6QYLjlYhkG",
"nips_2022_n6QYLjlYhkG",
... |
nips_2022_PuagBLcAf8n | Off-Policy Evaluation for Action-Dependent Non-stationary Environments | Methods for sequential decision-making are often built upon a foundational assumption that the underlying decision process is stationary. This limits the application of such methods because real-world problems are often subject to changes due to external factors (\textit{passive} non-stationarity), changes induced by interactions with the system itself (\textit{active} non-stationarity), or both (\textit{hybrid} non-stationarity). In this work, we take the first steps towards the fundamental challenge of on-policy and off-policy evaluation amidst structured changes due to active, passive, or hybrid non-stationarity. Towards this goal, we make a \textit{higher-order stationarity} assumption such that non-stationarity results in changes over time, but the way changes happen is fixed. We propose OPEN, an algorithm that uses a double application of counterfactual reasoning and a novel importance-weighted instrument-variable regression to obtain both a lower bias and a lower variance estimate of the structure in the changes of a policy's past performances. Finally, we show promising results on how OPEN can be used to predict future performances for several domains inspired by real-world applications that exhibit non-stationarity. | Accept | The paper addresses off-policy evaluation for non-stationary environments. A key distinction from the prior work, highlighted by multiple reviewers, is that non-stationarity can come from the agent's own actions (as opposed to just the exogenous factors). The paper uses a combination of importance sampling and bias and variance reduction to show strong consistency of the derived estimator.
There are certain assumptions that were considered too restrictive by a reviewer (re: Meta-transition matrix of the agent and the MDP transition matrix). The authors in their rebuttal provided some insight into why the assumption is necessary and why it makes sense from a pragmatic point of view. I request the authors to further expand upon the discussion of this assumption in the final copy.
Another concern raised by a critical reviewer focused on novelty, especially in comparison to work in MDPs (mostly due to a lack of clarity in the definition of active non-stationarity). The critical reviewer was, however, satisfied with the responses to the other questions and concerns they had raised.
Overall, this is an interesting work with good theoretical and empirical insights and the reviewer pool leans towards acceptance.
| train | [
"Zuni6VIC0D9",
"IpyohwBtrq",
"GUfZI-JXZ8y",
"j3AE-UfpMB",
"gu3B9ISJMp-",
"vR_uElqTiXP",
"IKAOx4Wovc",
"pDE5Iuf7uXg",
"jM5oFV3BE2G",
"c0mup4wlCXE",
"_Osj7Q4FcWU",
"6Sw2kC41lZf",
"j6tevwUk2-8",
"j8B9Foyan5y"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the response from the authors. While most of my questions/concerns are addressed, I'd like to see more explicit discussion on Q1 regarding the definition of \"active non-stationarity” which seems lacking from the paper/rebuttal. I'm worried about potential overlaps with existing work in MDP.",
" Th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"IKAOx4Wovc",
"c0mup4wlCXE",
"6Sw2kC41lZf",
"j6tevwUk2-8",
"j8B9Foyan5y",
"j8B9Foyan5y",
"j6tevwUk2-8",
"6Sw2kC41lZf",
"6Sw2kC41lZf",
"_Osj7Q4FcWU",
"nips_2022_PuagBLcAf8n",
"nips_2022_PuagBLcAf8n",
"nips_2022_PuagBLcAf8n",
"nips_2022_PuagBLcAf8n"
] |
nips_2022_z2cG3k8xa3C | Asymptotics of smoothed Wasserstein distances in the small noise regime | We study the behavior of the Wasserstein-$2$ distance between discrete measures $\mu$ and $\nu$ in $\mathbb{R}^d$ when both measures are smoothed by small amounts of Gaussian noise. This procedure, known as Gaussian-smoothed optimal transport, has recently attracted attention as a statistically attractive alternative to the unregularized Wasserstein distance. We give precise bounds on the approximation properties of this proposal in the small noise regime, and establish the existence of a phase transition: we show that, if the optimal transport plan from $\mu$ to $\nu$ is unique and a perfect matching, there exists a critical threshold such that the difference between $W_2(\mu, \nu)$ and the Gaussian-smoothed OT distance $W_2(\mu \ast \mathcal{N}_\sigma, \nu\ast \mathcal{N}_\sigma)$ scales like $\exp(-c /\sigma^2)$ for $\sigma$ below the threshold, and scales like $\sigma$ above it. These results establish that for $\sigma$ sufficiently small, the smoothed Wasserstein distance approximates the unregularized distance exponentially well. | Accept | All reviewers are in agreement that the main factors (in particular, the results and their presentation) are above the bar for NeurIPS. No significant concerns remain following the author response and the discussion period. I encourage the authors to carefully take into account all of the minor comments when preparing the camera-ready version. | test | [
"WKlPxogNSq9",
"FiOuelf8F3QA",
"lp-xunxTe6l",
"ymxcBXHs2xl",
"r71dcL4WDt",
"Zo54HmowuMq",
"vPv5jfJy2Sd",
"Zdja_mFUI-H",
"OaeYvX3_kM3",
"oaEiuisVmd",
"pauFCTpQRZn",
"4FBPA290PFI",
"VSTjqCez0qZ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the interesting clarification on compactly supported measures, and for taking into account the suggestions in the revised manuscript.",
" We thank the reviewer for the comments. We will remake Figure 1 and enlarge the axes of Figure 2 in our future revision. \n\nOn the $\\log-\\log$ plot, we meant to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"vPv5jfJy2Sd",
"ymxcBXHs2xl",
"Zdja_mFUI-H",
"Zo54HmowuMq",
"nips_2022_z2cG3k8xa3C",
"VSTjqCez0qZ",
"4FBPA290PFI",
"pauFCTpQRZn",
"oaEiuisVmd",
"nips_2022_z2cG3k8xa3C",
"nips_2022_z2cG3k8xa3C",
"nips_2022_z2cG3k8xa3C",
"nips_2022_z2cG3k8xa3C"
] |
nips_2022_9BL0-oS7W7_ | Defending Against Adversarial Attacks via Neural Dynamic System | Although deep neural networks (DNN) have achieved great success, their applications in safety-critical areas are hindered due to their vulnerability to adversarial attacks. Some recent works have accordingly proposed to enhance the robustness of DNN from a dynamic system perspective. Following this line of inquiry, and inspired by the asymptotic stability of the general nonautonomous dynamical system, we propose to make each clean instance be an asymptotically stable equilibrium point of a slowly time-varying system in order to defend against adversarial attacks. We present a theoretical guarantee that if a clean instance is an asymptotically stable equilibrium point and the adversarial instance is in the neighborhood of this point, the asymptotic stability will reduce the adversarial noise to bring the adversarial instance close to the clean instance. Motivated by our theoretical results, we go on to propose a nonautonomous neural ordinary differential equation (ASODE) and place constraints on its corresponding linear time-variant system to make all clean instances act as its asymptotically stable equilibrium points. Our analysis suggests that the constraints can be converted to regularizers in implementation. The experimental results show that ASODE improves robustness against adversarial attacks and outperforms state-of-the-art methods. | Accept | The paper studies the problem of enhancing neural network robustness from a dynamic system perspective. To this end, the authors proposed a nonautonomous neural ordinary differential equation (ASODE) that makes each clean instance an asymptotically stable equilibrium point. In this way, the asymptotic stability will reduce the adversarial noise to bring the nearby adversarial examples close to the clean instance. The empirical studies show that the proposed method can defend against existing attacks and outperform SOTA defense methods. Most reviewers rated this work positively and agreed that the proposed method is interesting, technically sound, and theoretically grounded. In the original reviews, they also raised several concerns, including a missing ablation study and sensitivity analysis, missing references, and some presentation issues. The authors properly addressed these concerns in the rebuttal and after the discussion, some reviewers increased their scores and the majority leaned towards acceptance. Overall, given the general support from the reviewers and the revised version of the paper, I recommend accepting the paper. | test | [
"u391wyw9XLA",
"Yewvrz3VF0Y",
"oi-wn7jxsVK",
"HU7VTXnIMIQ",
"-hWP714k3ke",
"eNvrc5pcgGE",
"fQfGxm_JWlt",
"tNwIjtghFHfE",
"8PRrHTK1vjD",
"nqm1cwa-uRe",
"htwSdzl23Ki",
"lHNJLSTjTOB",
"Um3Vx7Ho4F",
"Bog_8jNC27m",
"iHLnQvBNxF",
"aujXMGeDIdB",
"2btIG11ZcbS",
"yRoy2uNt1u9",
"W2uX0i3N5O... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Ljok, we will really appreciate it if the reviewer can go over our detailed response and revisions. Please feel free to ask us any questions you may still have and we will be more than happy to answer them. Thank you again for reviewing our paper and we look forward to discussing with you.",
" Tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
5,
4
] | [
"W2uX0i3N5O",
"yRoy2uNt1u9",
"2btIG11ZcbS",
"W2uX0i3N5O",
"yRoy2uNt1u9",
"2btIG11ZcbS",
"tNwIjtghFHfE",
"lHNJLSTjTOB",
"yRoy2uNt1u9",
"yRoy2uNt1u9",
"yRoy2uNt1u9",
"yRoy2uNt1u9",
"W2uX0i3N5O",
"2btIG11ZcbS",
"aujXMGeDIdB",
"nips_2022_9BL0-oS7W7_",
"nips_2022_9BL0-oS7W7_",
"nips_202... |
nips_2022_tmer8WAEzV | Sampling with Riemannian Hamiltonian Monte Carlo in a Constrained Space | We demonstrate for the first time that ill-conditioned, non-smooth, constrained distributions in very high dimension, upwards of 100,000, can be sampled efficiently \emph{in practice}. Our algorithm incorporates constraints into the Riemannian version of Hamiltonian Monte Carlo and maintains sparsity. This allows us to achieve a mixing rate independent of smoothness and condition numbers. On benchmark data sets in systems biology and linear programming, our algorithm outperforms existing packages by orders of magnitude. In particular, we achieve a 1,000-fold speed-up for sampling from the largest published human metabolic network (RECON3D). Our package has been incorporated into a popular Bioinformatics library. | Accept | The focus of the submission is practically efficient sampling of non-smooth constrained log-concave distributions in high dimension as formulated in (1). In order to tackle this task, the authors design a constrained variant of the RHMC (Riemannian Hamiltonian Monte Carlo) technique, relying on self-concordant barrier functions. They demonstrate the practical efficiency of the proposed method (achieving significantly improved sampling time) compared to existing constrained samplers, with additional theoretical results in the supplement.
Sampling in high dimension is a fundamental problem of data science with various successful applications; designing new methods in the area is of clear interest for the NeurIPS community. As assessed by the reviewers, the authors deliver important new tools in this context. As was also noted, the manuscript could be made slightly more accessible to a wider audience. | train | [
"3N2mZkouCw",
"cOXqZatPfUq",
"Mv_xJquPwi",
"FFpHLN9HiWV",
"8mLQa3hWgtc",
"HiSzsZH8MtT",
"aPI2l7lnLLd",
"kMVpQAlvj8",
"FfHZomVC9dP",
"I4JEwgjVKGI",
"f3eiXWXAwqs",
"G2DAmYtK2go",
"sES-JH-ElisT",
"geyjbu0gLzP",
"Kh4C7YQg-AI",
"kujBXZvWJ_",
"8oklbBzPW1q",
"K2NKERBJKX"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I already had a high rating, and my plan is to keep it high.",
" Thanks again for your review. We are glad our responses clarified your questions and thank you for considering improving the score. We will add figures and more detailed explanations to make the relevant information cleare... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4,
3
] | [
"sES-JH-ElisT",
"FFpHLN9HiWV",
"aPI2l7lnLLd",
"f3eiXWXAwqs",
"HiSzsZH8MtT",
"FfHZomVC9dP",
"G2DAmYtK2go",
"nips_2022_tmer8WAEzV",
"K2NKERBJKX",
"8oklbBzPW1q",
"kujBXZvWJ_",
"Kh4C7YQg-AI",
"geyjbu0gLzP",
"nips_2022_tmer8WAEzV",
"nips_2022_tmer8WAEzV",
"nips_2022_tmer8WAEzV",
"nips_202... |
nips_2022_Xa1T165JEhB | Optimal-er Auctions through Attention | RegretNet is a recent breakthrough in the automated design of revenue-maximizing auctions. It combines the flexibility of deep learning with the regret-based approach to relax the Incentive Compatibility (IC) constraint (that participants prefer to bid truthfully) in order to approximate optimal auctions. We propose two independent improvements of RegretNet. The first is a neural architecture denoted as RegretFormer that is based on attention layers. The second is a loss function that requires explicit specification of an acceptable IC violation denoted as regret budget. We investigate both modifications in an extensive experimental study that includes settings with constant and inconstant number of items and participants, as well as novel validation procedures tailored to regret-based approaches. We find that RegretFormer consistently outperforms RegretNet in revenue (i.e. is optimal-er) and that our loss function both simplifies hyperparameter tuning and allows one to unambiguously control the revenue-regret trade-off by selecting the regret budget. | Accept | This paper builds on the nascent optimal auction design via deep learning literature. It modifies the at-this-point standard approach, RegretNet, to this setting from Duetting et al (ICML-19, but on arXiv earlier) by incorporating a transformer-based architecture into, roughly, the same training scenario, albeit with a different loss function that is more appropriate for this setting. That loss function is a nice add-on in that RegretNet and kin are notoriously hard to train, and this seems to help. From an empirical point of view alone, I believe this paper is worth accepting because it pushes the boundary on a tricky empirical problem, both in terms of ease of training but also in terms of (in many cases) the quality of the final network itself, measured in revenue / IC violation. I agree with some of the negative sentiment from Reviewer u7uQ, though, w.r.t. over-claims of "optimal-er" and poor comparison against analytic baselines; this should be changed in a final/future version of the work.
"kmxDcKtkQ8V",
"by1MWHSzGI",
"qkfWgprfyKh",
"36rg5j-WmzS",
"l4VBzXgAJE_",
"PV9nO9kAH3e",
"NCUCIZLXLqu",
"I7JDs0qSbl",
"5_-TXBz0u81",
"Tz_em7rb3zkG",
"Nb9wfoumUwV",
"gNcrMFpmyTWB",
"Qh3DNvnnysL",
"7rnforlf3Jz",
"PMIR8mEE3oC",
"vpf3QxmClAz",
"um3VouMv09w",
"wxnZnSzFksN"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification, we appreciate it!",
" I'm retaining my scores for the following reason:\n- While the application is different, the core contribution is the same - applying a transformer-based architecture for auction design.\n - I do not find the loss function to be novel. Given that $R_{max}$,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"by1MWHSzGI",
"l4VBzXgAJE_",
"NCUCIZLXLqu",
"I7JDs0qSbl",
"PV9nO9kAH3e",
"Nb9wfoumUwV",
"gNcrMFpmyTWB",
"Tz_em7rb3zkG",
"wxnZnSzFksN",
"um3VouMv09w",
"vpf3QxmClAz",
"PMIR8mEE3oC",
"nips_2022_Xa1T165JEhB",
"nips_2022_Xa1T165JEhB",
"nips_2022_Xa1T165JEhB",
"nips_2022_Xa1T165JEhB",
"nip... |
nips_2022_qTCiw1frE_l | The Phenomenon of Policy Churn | We identify and study the phenomenon of policy churn, that is, the rapid change of the greedy policy in value-based reinforcement learning. Policy churn operates at a surprisingly rapid pace, changing the greedy action in a large fraction of states within a handful of learning updates (in a typical deep RL set-up such as DQN on Atari). We characterise the phenomenon empirically, verifying that it is not limited to specific algorithm or environment properties. A number of ablations help whittle down the plausible explanations on why churn occurs to just a handful, all related to deep learning. Finally, we hypothesise that policy churn is a beneficial but overlooked form of implicit exploration that casts $\epsilon$-greedy exploration in a fresh light, namely that $\epsilon$-noise plays a much smaller role than expected. | Accept | The paper identifies a hitherto unknown phenomenon in value-based reinforcement learning called policy churn. All the reviewers agree that this is a very interesting phenomenon, and that the paper studies the phenomenon comprehensively. All reviewers also agreed that this paper is likely to inspire many follow-up works. Understanding the causes and harnessing policy churn can potentially significantly impact the deep RL state-of-the-art. | val | [
"A8z7ds1lDGx",
"L2pwQzzPSkI",
"mMuzrEna4H",
"nopnC-ISd_R",
"uTotUMEqhD",
"wU792HrNQDA",
"qIMUXYiuQ4n",
"T2p5RbKByPc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response, and the addition of new experiments demonstrating the broad appearance of the policy churn phenomena. I raise my score to an 8 in light of this.",
" > It is still largely unclear where the concrete root cause of the policy churn is.\n\nIndeed, there remain open questions ... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nopnC-ISd_R",
"wU792HrNQDA",
"qIMUXYiuQ4n",
"T2p5RbKByPc",
"nips_2022_qTCiw1frE_l",
"nips_2022_qTCiw1frE_l",
"nips_2022_qTCiw1frE_l",
"nips_2022_qTCiw1frE_l"
] |
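The churn statistic described in the abstract is cheap to compute. A minimal sketch, with random Q-values standing in for a DQN before and after a handful of updates:

```python
import numpy as np

# Policy churn = fraction of states whose greedy action changes between
# two checkpoints of the Q-function.
rng = np.random.default_rng(0)
n_states, n_actions = 1000, 18     # 18 = full Atari action set (assumed)
q_before = rng.normal(size=(n_states, n_actions))
q_after = q_before + 0.1 * rng.normal(size=(n_states, n_actions))

churn = np.mean(q_before.argmax(axis=1) != q_after.argmax(axis=1))
print(f"greedy action changed in {churn:.1%} of states")
```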
nips_2022_kMiL9hWbD1z | RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer | Recently, transformer-based networks have shown impressive results in semantic segmentation. Yet for real-time semantic segmentation, pure CNN-based approaches still dominate in this field, due to the time-consuming computation mechanism of transformer. We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmentation, which achieves a better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, our RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. Besides, we find that cross-resolution attention is more efficient at gathering global context information for the high-resolution branch by spreading the high-level knowledge learned from the low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of our proposed RTFormer: it achieves state-of-the-art results on Cityscapes, CamVid and COCOStuff, and shows promising results on ADE20K.
The main concern, as mentioned by several reviewers, is the overall novelty, as some ideas are related to previous work (GPU-friendly attention and hybrid convolutional-transformer architectures). Another issue is missing baselines that are based on lightweight attention designs, or that do not use attention at all, but this has been well resolved in the author feedback. In summary, the pros outweigh the cons and therefore the AC recommends acceptance.
"OV6JTQF5ov",
"glkcJs-NDKf",
"VAKiPjFhdm0",
"quCY4q_rDfe",
"yvX1K4dsfn3",
"8JxaJpbk7T3",
"HKx4ghJULp",
"9cGSHF21WIB",
"JRxvb82X_wG",
"gOOgk8oiFbZ",
"VdFztZ5ZbnDW",
"ue-_qrtrdc1",
"9WXH2SptzVr",
"CHlR3e7eS9v",
"hT375AvjL0",
"8Bx8-EvhYTJ",
"AU6nXOR3c1"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers, \nSince the rebuttal discussion is about to end soon, it would be better to let us know whether our replies have addressed you questions. \nAnd don't hesitate to contact us if you have any further clarifications required.\n",
" Thanks for your reply and detailed review. \nGlad to know we have ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"nips_2022_kMiL9hWbD1z",
"VAKiPjFhdm0",
"gOOgk8oiFbZ",
"nips_2022_kMiL9hWbD1z",
"8JxaJpbk7T3",
"HKx4ghJULp",
"9cGSHF21WIB",
"VdFztZ5ZbnDW",
"AU6nXOR3c1",
"8Bx8-EvhYTJ",
"nips_2022_kMiL9hWbD1z",
"hT375AvjL0",
"CHlR3e7eS9v",
"nips_2022_kMiL9hWbD1z",
"nips_2022_kMiL9hWbD1z",
"nips_2022_kM... |
nips_2022_VE8QRTrWAMb | Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback | The standard assumption in reinforcement learning (RL) is that agents observe feedback for their actions immediately. However, in practice feedback is often observed in delay. This paper studies online learning in episodic Markov decision process (MDP) with unknown transitions, adversarially changing costs, and unrestricted delayed bandit feedback. More precisely, the feedback for the agent in episode $k$ is revealed only in the end of episode $k + d^k$, where the delay $d^k$ can be changing over episodes and chosen by an oblivious adversary. We present the first algorithms that achieve near-optimal $\sqrt{K + D}$ regret, where $K$ is the number of episodes and $D = \sum_{k=1}^K d^k$ is the total delay, significantly improving upon the best known regret bound of $(K + D)^{2/3}$. | Accept | This paper has initially received mixed reviews, but the author response has successfully addressed the concerns of the less enthusiastic reviewers, thus we have eventually reached consensus that the paper is suitable for publication at NeurIPS 2022. That said, I encourage the authors to take all the reviewers' comments into account when preparing the final version, especially when it comes to improving the readability and self-containedness of the proofs in the appendix. As for the usage of the term "near-optimal" in the title, I concur with the reviewers who pointed out that this may not be the perfect choice of words. This is not to say that it is necessary to change the title of the paper, but perhaps a better choice of wording may give a better overall impression to future readers of the paper. | train | [
"XSJUhT_cKCu",
"aqsPl1rZUg",
"aAT-GoKNEpw",
"-iBx754HNCf",
"TkTTo6MHbAI",
"fkWGSBie64Y",
"qaLtmLchtFz",
"W2Pl6aTlCr",
"A0VOPAX3TqN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I have read other reviews and responses and decide to maintain the score.",
" Thank you for your answer, I do not have further questions and I have increased my score.",
" Thanks for the clarifications. I read the other reviews and maintain my evaluation. ",
" > *\"\"near-optimal\" ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"-iBx754HNCf",
"fkWGSBie64Y",
"TkTTo6MHbAI",
"A0VOPAX3TqN",
"W2Pl6aTlCr",
"qaLtmLchtFz",
"nips_2022_VE8QRTrWAMb",
"nips_2022_VE8QRTrWAMb",
"nips_2022_VE8QRTrWAMb"
] |
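To give a numeric sense of the improvement from $(K + D)^{2/3}$ to $\sqrt{K + D}$ regret claimed in this record (constants dropped from both bounds, delays taken as uniform for simplicity):

```python
K = 10**6                                  # episodes
for mean_delay in (0, 10, 100):
    D = mean_delay * K                     # total delay D = sum of d^k
    print(mean_delay, round((K + D) ** (2 / 3)), round((K + D) ** 0.5))
```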
nips_2022_rTvH1_SRyXs | Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations | A critical problem in the literature on post hoc explanations is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some use game theoretic notions such as Shapley-Aumann values, and some are ad hoc, driven by the goal of obtaining clean visualizations. Such fragmentation of goals not only prevents a coherent conceptual understanding of post hoc explainability but also causes the practical challenge of not knowing which method to use when.
In this work, we take the first steps towards bridging these gaps and unify eight popular post hoc explanation methods (LIME, C-LIME, SHAP, Vanilla Gradients, Gradients x Input, Integrated Gradients, SmoothGrad, Occlusion) by showing that they can all be viewed as performing local function approximation of the black-box model, with methods differing only by the local neighbourhoods and loss functions used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods which demonstrates that no single method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on the faithfulness of an explanation to the black-box model. We empirically validate our theoretical results using various real-world datasets, model classes, and prediction tasks.
By formalizing a mathematical framework which unifies diverse explanation methods, our work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective and characterizing their relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones. | Accept | The decision is to accept the paper.
The authors propose a unifying Local Function Approximation framework for characterizing a number of local explanation methods into a single formalism. Using this formalism, the authors provide a no-free-lunch theorem indicating that no explanation method can dominate all others across all perturbation specifications. The authors then provide a guiding principle regarding global recovery of the underlying function when the explanation function class is complex enough, and give several examples where different explanation methods do/do not satisfy this principle. There are also empirical results to confirm the theory, and suggestions for designing new explanation methods based on the theory.
Overall, there was agreement that the LFA framework provides a useful unified perspective on many different explanation methods. Some of the remaining concerns from reviewers could be addressed by discussing the limitations of the framework more explicitly in the main paper. For example, referencing appendix A.1 and the types of methods that are not characterized by this framework; or discussing how the choice of explanation method may also depend on how the particular use case is mapped to neighborhoods etc, which is out of scope here (in the spirit of the current comment about the choice of G). This is mainly a matter of expanding the discussion a bit, which should be feasible with an extra page. | train | [
"U_eUqMjpo8",
"fEdKi0PAq7b",
"9Kqc5MjSv5",
"KTT0yDXfs0E",
"Vl6yaGD8BQI",
"SgZJKC8Ma2-",
"RRjlkrroI3",
"0SLDvdgi_kq",
"p393pxAXMU4",
"Wl1__aPrCG6",
"w2USJ4_ILS",
"37_zhYc0Zkau",
"zpikyRm_A6D",
"kRY0q3ENX-",
"PPlzZ8l3jZ2",
"tMYobElSYfy",
"9p-5j99fd1w",
"-aHr08Fo5M8",
"Fohqo3Kuv0n",... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thank you once again for your responses and active engagement. It helps us immensely. Regarding additional theory or experiments, we are interested in learning what specific theory or experiments you would like to see?\n\nPlease note that this request for additional theory or experiments was only raised in commen... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
2
] | [
"fEdKi0PAq7b",
"9Kqc5MjSv5",
"KTT0yDXfs0E",
"SgZJKC8Ma2-",
"p393pxAXMU4",
"w2USJ4_ILS",
"U8f3sbCTVBD",
"Fohqo3Kuv0n",
"-aHr08Fo5M8",
"nips_2022_rTvH1_SRyXs",
"37_zhYc0Zkau",
"U8f3sbCTVBD",
"WwLNn_TpfYU",
"PPlzZ8l3jZ2",
"Fohqo3Kuv0n",
"9p-5j99fd1w",
"-aHr08Fo5M8",
"nips_2022_rTvH1_S... |
nips_2022_iivHwZoWzR4 | On the Computational Efficiency of Adapting Transformer Models via Adversarial Noise | Pretraining Transformer-based language models followed by adapting the pre-trained models to a downstream task is an effective transfer mechanism in NLP. While it is well-known that the pretraining stage is computationally expensive, even the adaptation starts to become time-consuming for many downstream tasks as Transformers grow in size rapidly.
Prior work focuses on reducing the pretraining wall-clock time via increasing the batch size to obtain higher training throughput on multiple processors. However, few studies have explored how such a scheme may benefit the adaptation phase. On the other hand, adversarial training has shown improved generalization for adapting Transformer models on many NLP tasks, but it is often treated as a separate line of research, where its effectiveness under the large-batch regime is not well understood.
In this paper, we show that adversarial training obtains promising model accuracy even with a considerably larger batch size. However, the computational complexity associated with this approach, due to the high cost of generating adversaries, prevents it from reducing adaptation costs with an increasing number of processors. As such, we systematically study adversarial large-batch optimization for adapting transformers from a computational complexity perspective. Our investigation yields efficient and practical algorithms for adapting transformer models. We show in experiments that our proposed method attains up to 9.8$\times$ adaptation speedups over the baseline on BERT$_{base}$ and RoBERTa$_{large}$, while achieving comparable and sometimes higher accuracy than the state-of-the-art large-batch optimization methods. | Reject | This paper presents ScaLA, a method to improve the efficiency of finetuning of a pretrained language model using adversarial training. Experiments confirm that ScaLA allows faster finetuning compared to standard approaches without reducing accuracy.
I think this is a nice approach that can have a lot of impact for those who would like to finetune large language models. However, the reviewers have concerns regarding the novelty of the paper and would like to see the proposed method tested on other larger-scale pretrained models, as well as more analysis. I encourage the authors to incorporate these changes and resubmit to another conference.
"N3EXr3fyljG",
"BmjeCESl15l",
"Tb6-vW3_DQJ",
"u6-cVwrc73k",
"B4fXimy1b1T",
"zRyAIKZnGqw",
"s8Uri6jgLfs",
"JEVirhzilj0",
"RRC0fRiKipW",
"liztPYu4kCQ",
"xwGRPlX5h_-",
"dvd-i1G3_Uk",
"9m7BIMibkrN",
"fYJ12KM8tzR",
"Uiy5r73Lzoo",
"vXOB3bIptDX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that your concerns are mostly addressed. This is a very positive NeurIPS experience for us, and we sincerely thank you for your constructive comments and active discussions that helped improve the paper. ",
" Thanks for following up with the additional results and roadmap. I have updated my review a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BmjeCESl15l",
"Tb6-vW3_DQJ",
"u6-cVwrc73k",
"zRyAIKZnGqw",
"zRyAIKZnGqw",
"s8Uri6jgLfs",
"JEVirhzilj0",
"RRC0fRiKipW",
"liztPYu4kCQ",
"xwGRPlX5h_-",
"vXOB3bIptDX",
"Uiy5r73Lzoo",
"fYJ12KM8tzR",
"nips_2022_iivHwZoWzR4",
"nips_2022_iivHwZoWzR4",
"nips_2022_iivHwZoWzR4"
] |
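For readers unfamiliar with adversarial finetuning, the expensive inner step being amortized in this record is adversary generation. A schematic single perturbation step on embeddings (toy linear head, arbitrary step size; ScaLA's actual contribution, reducing how often and at what cost these steps run at large batch size, is not captured here):

```python
import torch

# One gradient-ascent step on the input embeddings, then a loss on the
# perturbed batch. The step size 1e-3 is an arbitrary placeholder.
torch.manual_seed(0)
emb = torch.randn(32, 128, requires_grad=True)   # batch of embeddings
w = torch.randn(128, 2)                          # toy classification head
labels = torch.randint(0, 2, (32,))

loss = torch.nn.functional.cross_entropy(emb @ w, labels)
grad, = torch.autograd.grad(loss, emb)
delta = 1e-3 * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
adv_loss = torch.nn.functional.cross_entropy((emb + delta) @ w, labels)
print(float(loss), float(adv_loss))
```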
nips_2022_zSdz5scsnzU | Latent Planning via Expansive Tree Search | Planning enables autonomous agents to solve complex decision-making problems by evaluating predictions of the future. However, classical planning algorithms often become infeasible in real-world settings where state spaces are high-dimensional and transition dynamics unknown. The idea behind latent planning is to simplify the decision-making task by mapping it to a lower-dimensional embedding space. Common latent planning strategies are based on trajectory optimization techniques such as shooting or collocation, which are prone to failure in long-horizon and highly non-convex settings. In this work, we study long-horizon goal-reaching scenarios from visual inputs and formulate latent planning as an explorative tree search. Inspired by classical sampling-based motion planning algorithms, we design a method which iteratively grows and optimizes a tree representation of visited areas of the latent space. To encourage fast exploration, the sampling of new states is biased towards sparsely represented regions within the estimated data support. Our method, called Expansive Latent Space Trees (ELAST), relies on self-supervised training via contrastive learning to obtain (a) a latent state representation and (b) a latent transition density model. We embed ELAST into a model-predictive control scheme and demonstrate significant performance improvements compared to existing baselines given challenging visual control tasks in simulation, including the navigation for a deformable object. | Accept | This work proposes a method to first learn a lower dimensional embedding via contrastive learning, then learns a transition model and then utilizes a planner inspired from sampling-based motion planner literature to plan in this latent space from start to goal states. A model-predictive controller is harnessed to follow the desired trajectories in latent space. Overall, the work is well presented and has been well received by reviewers. It should inspire techniques in a lot of related areas like vision-language navigation. During author-reviewer discussion period there was rich interaction and various additional clarifications, and experiments were added by the authors. The authors are encouraged to incorporate them into camera-ready version and release reproducible source-code to accompany the paper. | train | [
"G-sw4F9-mKr",
"TLzp3bqvBJf",
"JURFDo6ss-E",
"MHuxcYRRpf",
"SeBua5bkG2_",
"ocQLB3fsoDg",
"6ChfGdLxh9",
"OBaKyLCQjiG",
"6FSQDr_m5FQ",
"kCgOp0owdBq",
"ilX91x8b9_",
"_pC8iyGduGt",
"0sPhLKYuMoo",
"znfspd3sEM6",
"8XkvdIqpPG9",
"7D-8cyUz9Q_"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their detailed responses, in particular regarding computation times. The responses corroborate my positive assessment of this paper.",
" Dear reviewer,\n\nThank you for your response and consideration of the revised version of our paper.\n\n> Would be quite interesting to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"ilX91x8b9_",
"ocQLB3fsoDg",
"SeBua5bkG2_",
"6ChfGdLxh9",
"kCgOp0owdBq",
"_pC8iyGduGt",
"6FSQDr_m5FQ",
"nips_2022_zSdz5scsnzU",
"8XkvdIqpPG9",
"7D-8cyUz9Q_",
"znfspd3sEM6",
"0sPhLKYuMoo",
"nips_2022_zSdz5scsnzU",
"nips_2022_zSdz5scsnzU",
"nips_2022_zSdz5scsnzU",
"nips_2022_zSdz5scsnzU"... |
nips_2022_qqHMvHbfu6 | Emergent Communication: Generalization and Overfitting in Lewis Games | Lewis signaling games are a class of simple communication games for simulating the emergence of language. In these games, two agents must agree on a communication protocol in order to solve a cooperative task. Previous work has shown that agents trained to play this game with reinforcement learning tend to develop languages that display undesirable properties from a linguistic point of view (lack of generalization, lack of compositionality, etc). In this paper, we aim to provide better understanding of this phenomenon by analytically studying the learning problem in Lewis games. As a core contribution, we demonstrate that the standard objective in Lewis games can be decomposed in two components: a co-adaptation loss and an information loss. This decomposition enables us to surface two potential sources of overfitting, which we show may undermine the emergence of a structured communication protocol. In particular, when we control for overfitting on the co-adaptation loss, we recover desired properties in the emergent languages: they are more compositional and generalize better. | Accept | This paper shows that the objective for Lewis games, as treated in the recent emergent communication (EC) literature, can be decomposed into two losses: an information loss (whether each message refers to a unique referent) and a co-adaptation loss (which quantifies the alignment of the speaker and listener). It shows that lack of generalization in unregularized EC is mostly due to the latter, and empirically shows that intervening on co-adaptation via regularization (early stopping/reinitialization) improves generalization.
The reviewers are generally positive about this paper. The one somewhat negative score is actually quite positive as well: the reviewer concedes that “this is a solid paper and deserves to be accepted even in its current state, at some venue”, but questions the overall impact of the contribution and whether it merits publication in NeurIPS. Unfortunately a fourth review, although promised by the reviewer, did not materialize in time, even after repeated prodding, so I had to provide it myself (see below).
As an area chair, I am somewhat torn about this paper. It is well-written and I think the field will benefit from having these things made more explicit. Sadly, I also think it shows very clearly how frustratingly little progress EC has made as a field. We are still talking about the same things, years later after its revival, in uninteresting unrealistic toy settings looking for “linguistics” and “compositionality” when basic information/communication theory combined with basic optimization would probably be more adequate. Overall, I think this paper can help the field move in the right direction and I am recommending conditional acceptance: the notation needs to be shored up, the assumptions and limitations need to be made much more explicit, and it needs to be made suitable not only for readers intricately familiar with EC, but also for readers who are only just reading their first paper on the subject.
—
More detailed AC review:
Strengths:
* Presentation: This paper is well-written, the decomposition makes a lot of sense and will not come as a surprise to anyone working in the field.
* Soundness: The experiments and evaluation are thorough and appear to be easily reproducible given the details provided. The paper provides additional experiments on different more “complex games” to overcome potential criticisms of its toy task nature.
* Impact: The EC literature, or even the field of EC broadly speaking, is extremely troubled by a poor understanding of how basic assumptions (eg questions of optimization, setup) impact observations (eg compositionality) and there is an extraordinary amount of wheel-reinvention, exacerbated by the lack of a standardized evaluation protocol. This paper has the potential to help practitioners be more explicit about their assumptions.
Weaknesses:
* Applicability and novelty: The main contribution of this paper, in my mind, is showing the decomposition and using it to elucidate the impact of training dynamics on the emerging communication protocol. However, this decomposition only applies in a limited setting and that assumption is not nearly made explicit enough. The final loss is negative log likelihood and the speaker is trained via policy gradients, without any constraints or regularization (either on the communication channel or the listener supervision). Almost all papers in EC are exactly about what constraints/regularization/dynamics we can impose in order to obtain better generalization. I think the distinction (i.e. “decomposition”) between decodability (“information”) and learnability (“adaptation”) is well-known amongst serious EC practitioners, and almost any paper published on the subject deals with these in some form.
* Clarity: Given the above observation, and the fact that the decomposition follows from trivial math, the exposition itself is valuable if it makes something very explicit that was heretofore not explicit enough, such that future work will benefit from it being explicit. The paper has the potential to do this, but in my opinion, disappointingly, falls short: too much of the writing assumes too much prior knowledge on the part of the reader. For this paper to be maximally valuable, I would want it to be understandable by someone who doesn’t know anything about emergent communication and reads this as their very first paper. This issue is particularly prevalent in the most important part, Section 2 and the corresponding appendix: the notation in the proofs is almost unforgivably convoluted; the sub- and super-script mixing for different parameterizations is unnecessarily confusing, especially with the listener parameterization \phi never actually being introduced as such; the actual proofs in the appendix never making explicit that the two losses concern the speaker and listener respectively (try having this read by someone unfamiliar with the field, they’d instantly be lost); and all of it is basically building on the work from a very specific group of people who do things in a very particular way using a framework (EGG) that nobody else uses, without ever making explicit that other EC papers do things very differently (there are definitely EC papers that do early stopping, freezing/probing in different phases of training, etc. but they tend to study what constraints can be imposed on top of the standard task formulation to make things generalize, usually in much more sophisticated games). I also did not like that the communication channel itself was not bottlenecked, with |V|=T=10 — in this setting everything collapses to basic information theory, and the assumption is not realistic for studying any emergent linguistic phenomena.
Overall, if it was up to me, I would have written this paper very differently, with an argument along the lines of: the EC literature is a mess, let’s make things more explicit by decomposing the loss in the most basic EC setting, and then we can understand all of the interventions in the prior EC literature (eg ensembles/freezing/populations/grounding/constraints/regularizations/etc) in this new light, and we can even come up with some new approaches to tackling this problem, such as down-weighting co-adaptation. I think the paper has merit, and I think it can be accepted but it really has to address its shortcomings: Section 2 and the proofs have to be notationally extremely clear with the proofs written out in much more detail and the paper has to make its assumptions much more explicit, i.e. that it applies to the limited basic setting that was mostly studied by a very limited group of people.
Typos I came across:
- “and experimental choice” - choices
- “to ease the reader intuition” - reader’s
- “deep model are large enough” - models
- “the train co-adapation keeps dismishing” - co-adaptation, diminishing | train | [
"Mu1yHPStAld",
"WqId7oV0iXi",
"mUaV2t8ysJ",
"nqsammEDORr",
"bHsVsrxXhBa",
"PSchRumVnWe",
"kq8gnzEL7E",
"yMOmUKeSuff",
"vkVJaC9ylrV",
"_pwOkH3cLuW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors response, which adequately addressed my questions, and I keep my \"strong accept\" assessment. ",
" Thanks to the authors and reviewers for the additional discussion points. I think in some sense this is a non-answer, as much as my original review was a non-criticism: this is a solid p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"bHsVsrxXhBa",
"mUaV2t8ysJ",
"nqsammEDORr",
"yMOmUKeSuff",
"_pwOkH3cLuW",
"vkVJaC9ylrV",
"nips_2022_qqHMvHbfu6",
"nips_2022_qqHMvHbfu6",
"nips_2022_qqHMvHbfu6",
"nips_2022_qqHMvHbfu6"
] |
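A toy dictionary version of the two terms discussed in this record (the paper's decomposition is over expected losses during learning; these hand-picked protocols only illustrate what each term is about):

```python
# "Information": does the speaker map distinct objects to distinct messages?
# "Co-adaptation": does the listener's decoding match this speaker?
speaker = {"red": "m1", "blue": "m1", "green": "m2"}    # ambiguous protocol
listener = {"m1": "red", "m2": "blue"}                  # misaligned listener

ambiguous = len(set(speaker.values())) < len(speaker)
misreads = [o for o, m in speaker.items() if listener.get(m) != o]
print("information failure (ambiguous messages):", ambiguous)
print("co-adaptation failures on objects:", misreads)
```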
nips_2022_E28hy5isRzC | Entropy-Driven Mixed-Precision Quantization for Deep Network Design | Deploying deep convolutional neural networks on Internet-of-Things (IoT) devices is challenging due to the limited computational resources, such as limited SRAM memory and Flash storage. Previous works re-design a small network for IoT devices, and then compress the network size by mixed-precision quantization. This two-stage procedure cannot optimize the architecture and the corresponding quantization jointly, leading to sub-optimal tiny deep models. In this work, we propose a one-stage solution that optimizes both jointly and automatically. The key idea of our approach is to cast the joint architecture design and quantization as an Entropy Maximization process. Particularly, our algorithm automatically designs a tiny deep model such that: 1) Its representation capacity measured by entropy is maximized under the given computational budget; 2) Each layer is assigned a proper quantization precision; 3) The overall design loop can be done on CPU, and no GPU is required. More impressively, our method can directly search high-expressiveness architectures for IoT devices within less than half a CPU hour. Extensive experiments on three widely adopted benchmarks, ImageNet, VWW and WIDER FACE, demonstrate that our method can achieve the state-of-the-art performance in the tiny deep model regime. Code and pre-trained models are available at https://github.com/alibaba/lightweight-neural-architecture-search. | Accept | This paper proposes a joint architecture design and quantization method by casting it as an Entropy Maximization process. Reviewers give consensus acceptance.
"NLVqbsxCVvA",
"t3lLBZpqSu9",
"LGC3_ovuIi",
"RQJbHMvr_j5",
"IEGdJFpSl5Y",
"q_wGNskA3L",
"3vYQL8PgFGf",
"fwXfGHe6qZ",
"iJyoUjCq2e1",
"HQrgZfCDhEL",
"LsEb7vRccOI",
"DNsmRaM7D5v",
"J0PfAuR_Z6g",
"Q1Q-Oeg-VKM"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer xZvK,\n\nThanks again for all your constructive comments and response.\n\nBest regards,\n\nAuthors",
" I sincerely thank the authors for providing detailed response. Most of my concerns have been addressed and thus I decide to increase my score. ",
" Dear reviewers and meta reviewers:\n\nHope th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"t3lLBZpqSu9",
"DNsmRaM7D5v",
"nips_2022_E28hy5isRzC",
"nips_2022_E28hy5isRzC",
"q_wGNskA3L",
"DNsmRaM7D5v",
"fwXfGHe6qZ",
"J0PfAuR_Z6g",
"Q1Q-Oeg-VKM",
"LsEb7vRccOI",
"nips_2022_E28hy5isRzC",
"nips_2022_E28hy5isRzC",
"nips_2022_E28hy5isRzC",
"nips_2022_E28hy5isRzC"
] |
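The training-free, CPU-only design loop described in this record has a simple skeleton. Both scoring functions below are placeholders of my own invention; the paper derives a principled entropy measure of representation capacity:

```python
import random

# Sample (layer widths, per-layer bit-widths), keep the best hypothetical
# entropy score among candidates that fit an assumed storage budget.
random.seed(0)

def entropy_score(widths, bits):           # hypothetical proxy, not the paper's
    return sum(widths) + 0.25 * sum(bits)

def storage_cost(widths, bits):            # crude bits-of-storage model
    return sum(w * b for w, b in zip(widths, bits))

best = None
for _ in range(2000):
    widths = [random.choice([16, 32, 64]) for _ in range(4)]
    bits = [random.choice([2, 4, 8]) for _ in range(4)]
    if storage_cost(widths, bits) <= 1024:  # assumed budget
        cand = (entropy_score(widths, bits), widths, bits)
        best = cand if best is None or cand > best else best
print(best)
```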
nips_2022_o3HXEEBKnD | Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels | In single positive multi-label learning (SPML), only one of multiple positive labels is observed for each instance. The previous work trains the model by simply treating unobserved labels as negative ones, and designs the regularization to constrain the number of expected positive labels. However, in many real-world scenarios, the true number of positive labels is unavailable, making such methods less applicable. In this paper, we propose to solve SPML problems by designing a Label-Aware global Consistency (LAC) regularization, which leverages the manifold structure information to enhance the recovery of potential positive labels. On one hand, we first perform pseudo-labeling for each unobserved label based on its prediction probability. The consistency regularization is then imposed on model outputs to balance the fitting of identified labels and exploring of potential positive labels. On the other hand, by enforcing label-wise embeddings to maintain global consistency, LAC loss encourages the model to learn more distinctive representations, which is beneficial for recovering the information of potential positive labels. Experiments on multiple benchmark datasets validate that the proposed method can achieve state-of-the-art performance for solving SPML tasks. | Accept | This paper studies the single-positive multi-label learning problem. To address this problem, the authors adopt the pseudo-labels to recover the potential positive labels and adopt the global consistency regularization for label-wise embeddings to learn more distinctive feature representations. Experimental results demonstrate the superiority of the proposal. All reviewers agree to accept this paper, so I recommend acceptance. Moreover, I still have some more suggestions: 1) The font size in Figure 3 could be larger to make the plot clear. 2) The reference format is not unified. I suggest the authors revise the reference format carefully. | test | [
"HvYTTEQyxZ",
"mTeRWSmHX_",
"Fx0YBmLLlK",
"MS865KXiX_Y",
"qcp1TygMSuvN",
"tA_4NKKfCev",
"bPE_WBakInA",
"Vk3RE16L-hh",
"FFUwPVEKZrX",
"zRUt6SIlKV9"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their rebuttal addressing my concerns. ",
" Thanks for the authors’ feedback. ",
" Thanks for great efforts on the review of this paper. We will try our best to answer all your concerns.\n\nQ1: The basis for saying the sentences in Page 5 Line 150-153 needs to be introduced.\n\nA1: $\\... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"tA_4NKKfCev",
"Fx0YBmLLlK",
"zRUt6SIlKV9",
"FFUwPVEKZrX",
"Vk3RE16L-hh",
"bPE_WBakInA",
"nips_2022_o3HXEEBKnD",
"nips_2022_o3HXEEBKnD",
"nips_2022_o3HXEEBKnD",
"nips_2022_o3HXEEBKnD"
] |
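The pseudo-labelling step from the abstract above, sketched one-shot (the 0.9/0.1 thresholds are assumptions; the paper applies this progressively during training and adds the label-wise embedding consistency term on top):

```python
import numpy as np

# Each row has one observed positive; remaining labels get pseudo-labels
# from the model's predicted probabilities, or are ignored (-1).
rng = np.random.default_rng(0)
probs = rng.uniform(size=(4, 6))            # sigmoid outputs: 4 samples, 6 labels
observed = np.zeros_like(probs)
observed[np.arange(4), [0, 2, 1, 5]] = 1    # the single observed positives

pseudo = np.full(probs.shape, -1)
pseudo[probs > 0.9] = 1
pseudo[probs < 0.1] = 0
pseudo[observed == 1] = 1
print(pseudo)
```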
nips_2022_vdh62914QR | Black-Box Generalization: Stability of Zeroth-Order Learning | We provide the first generalization error analysis for black-box learning through derivative-free optimization. Under the assumption of a Lipschitz and smooth unknown loss, we consider the Zeroth-order Stochastic Search (ZoSS) algorithm, that updates a $d$-dimensional model by replacing stochastic gradient directions with stochastic differences of $K+1$ perturbed loss evaluations per dataset (example) query. For both unbounded and bounded possibly nonconvex losses, we present the first generalization bounds for the ZoSS algorithm. These bounds coincide with those for SGD, and they are independent of $d$, $K$ and the batch size $m$, under appropriate choices of a slightly decreased learning rate. For bounded nonconvex losses and a batch size $m=1$, we additionally show that both generalization error and learning rate are independent of $d$ and $K$, and remain essentially the same as for the SGD, even for two function evaluations. Our results extensively extend and consistently recover established results for SGD in prior work, on both generalization bounds and corresponding learning rates. If additionally $m=n$, where $n$ is the dataset size, we recover generalization guarantees for full-batch GD as well. | Accept | The paper provides generalization bounds for zeroth order stochastic search (ZoSS) based on algorithmic stability. The paper appears to follow from a fairly modest modification of Hardt, Recht and Singer `16, but the consensus among reviewers is that this modification is not trivial, and the bounds are novel and interesting. Consequently, I recommend acceptance. | train | [
"WF-YRv_J-i",
"_TPxpMI4WHc",
"dybuauAL8B",
"BlR_0V_w8Tb",
"0QKfRDMzCM",
"mvBdbkh2xIR",
"Abe1xRQB_T",
"OC939APveyd",
"Hz96ltvueg-",
"Tx2uc6aqw0I",
"hbx9pC3bYDm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. After reading the rebuttal and the other reviews, I wish to modify my review and increase my score to a 5. ",
" Thank you for the detailed response. I am satisfied with your answers and will update the score accordingly.",
" We would like to thank the revi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"BlR_0V_w8Tb",
"Abe1xRQB_T",
"nips_2022_vdh62914QR",
"OC939APveyd",
"Hz96ltvueg-",
"hbx9pC3bYDm",
"Tx2uc6aqw0I",
"nips_2022_vdh62914QR",
"nips_2022_vdh62914QR",
"nips_2022_vdh62914QR",
"nips_2022_vdh62914QR"
] |
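The ZoSS search direction is a standard multi-point zeroth-order estimator and can be checked numerically. The quadratic loss below is a stand-in with known gradient; the paper's analysis of course covers far more general losses:

```python
import numpy as np

# Average finite differences of perturbed loss evaluations in place of the
# gradient; the estimation error shrinks as K grows.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.sum(x ** 2)          # true gradient is x itself
x, mu, K = rng.standard_normal(10), 1e-5, 2000

dirs = rng.standard_normal((K, x.size))
g_hat = np.mean([(f(x + mu * u) - f(x)) / mu * u for u in dirs], axis=0)
print("estimation error:", np.linalg.norm(g_hat - x))
```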
nips_2022_hPkGV4BPsmv | DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing | Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data. Different from centralized training, the local datasets across clients in FL are non-independent and identically distributed (non-IID). In addition, the data-owning clients may drop out of the training process arbitrarily. These characteristics will significantly degrade the training performance. This paper proposes a Dropout-Resilient Secure Federated Learning (DReS-FL) framework based on Lagrange coded computing (LCC) to tackle both the non-IID and dropout problems. The key idea is to utilize Lagrange coding to secretly share the private datasets among clients so that each client receives an encoded version of the global dataset, and the local gradient computation over this dataset is unbiased. To correctly decode the gradient at the server, the gradient function has to be a polynomial in a finite field, and thus we construct polynomial integer neural networks (PINNs) to enable our framework. Theoretical analysis shows that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets. Furthermore, we experimentally demonstrate that DReS-FL consistently leads to significant performance gains over baseline methods. | Accept | This paper discusses a novel secure aggregation method for dropout resilient federated learning. The proposed solution is interesting, with some reasonable theoretical and empirical analysis. The authors did a good job of engaging with the reviewers in the discussion phase. From my own reading, I found the paper quite lacking in motivation though. The primary motivating scenarios for secure aggregation are for FL from on-premise datasets, such as learning in healthcare settings across hospitals or insurance providers. In such cases, is there a significant dropout concern? On the other hand, I don't see the proposed schemes as practical in the learning from mobile devices scenario, where there is not even a fixed client pool to begin with. It seems that the authors need to think about the motivating scenarios for their work and add discussion about this to the paper. | train | [
"9YhJ2cpohQF",
"BroDumPjNO",
"3iwSSycsqv_",
"C4Pvto9_iYe",
"fvQBH7rx8N",
"-N0fklpvUo",
"NVrSBKL_sK",
"h-hDtMfM8u",
"sOLGvhj2n3",
"DY9SLd41jdt",
"RiwemmFbUY8",
"1-BBv96alB",
"PYPFDAzxUhio",
"v_iu_TKG65",
"p_O_TMSEHtY",
"6wep4-6-FD",
"wPF7Hwx9dAf",
"GW8KvsEx5Go",
"vfhmY828cLO",
"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" __We are glad to see that the reviewer has raised the evaluation score. Based on the response above, the authors would like to take the last chance to clarify two points.__\n\n> $\\textbf{Comment 1:}$ The practicality of the proposed scheme seems quite limited due to the huge communication cost. (As the authors m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"BroDumPjNO",
"3iwSSycsqv_",
"-N0fklpvUo",
"NVrSBKL_sK",
"NVrSBKL_sK",
"NVrSBKL_sK",
"1-BBv96alB",
"QX9DSpDWpNm",
"WlbsaTq1e04",
"EQKiT7Q41r6",
"_uSod9T8pxw",
"c9N7JlvXjfp",
"nips_2022_hPkGV4BPsmv",
"c9N7JlvXjfp",
"c9N7JlvXjfp",
"c9N7JlvXjfp",
"c9N7JlvXjfp",
"c9N7JlvXjfp",
"c9N7J... |
nips_2022_nosngu5XwY9 | Dynamic Inverse Reinforcement Learning for Characterizing Animal Behavior | Understanding decision-making is a core goal in both neuroscience and psychology, and computational models have often been helpful in the pursuit of this goal. While many models have been developed for characterizing behavior in binary decision-making and bandit tasks, comparatively little work has focused on animal decision-making in more complex tasks, such as navigation through a maze. Inverse reinforcement learning (IRL) is a promising approach for understanding such behavior, as it aims to infer the unknown reward function of an agent from its observed trajectories through state space. However, IRL has yet to be widely applied in neuroscience. One potential reason for this is that existing IRL frameworks assume that an agent's reward function is fixed over time. To address this shortcoming, we introduce dynamic inverse reinforcement learning (DIRL), a novel IRL framework that allows for time-varying intrinsic rewards. Our method parametrizes the unknown reward function as a time-varying linear combination of spatial reward maps (which we refer to as "goal maps"). We develop an efficient inference method for recovering this dynamic reward function from behavioral data. We demonstrate DIRL in simulated experiments and then apply it to a dataset of mice exploring a labyrinth. Our method returns interpretable reward functions for two separate cohorts of mice, and provides a novel characterization of exploratory behavior. We expect DIRL to have broad applicability in neuroscience, and to facilitate the design of biologically-inspired reward functions for training artificial agents. | Accept | Unanimous acceptance recommendations.
Reviewer yX2q had a long list of weaknesses, and was the most thorough reviewer, but most have since been addressed by the authors. While prior time-varying IRL methods exist, the application of their dynamic inverse reinforcement learning (DIRL) method to animal decision-making in more complex tasks, on real animal (mouse) data, seems uncommon yet relevant to NeurIPS. There are some concerns about scalability, but for applications like inferring mouse goals and rewards in maze-like environments, scaling isn't as necessary.
From my own reading of this work, I agree with the reviewers that it's interesting and worth accepting. The authors' modelling of the low-rank time-varying reward structure was nice and non-trivial, and it does give interesting interpretability to the discrete set of mouse behaviors it discovers, like "explore", "go home", "find water". So the application and method seem interesting and novel. It's possible this could be better suited to a neuroscience conference instead, but the use of IRL here might complicate that, so, being between fields, the best venue for a work like this is possibly NeurIPS.
While I recommend acceptance here, the higher-scoring reviewers were not overly specific about merits of the method that would warrant an award, so I interpret their scores as slightly overzealous; without significant algorithmic novelty beyond the time-varying modelling in this context (which was nice but not groundbreaking), I'll not argue for an award.
"OSqxtwFhj0G",
"iRXmh49tqe0",
"n2OtjY9SAU-",
"53OSGGCxrt",
"YAhtUnEn5aY",
"3KjjeTTuf9J",
"CQP2Qm_ZQm",
"GTT01vmW8HV",
"3T_FmqYeW_U",
"3E4cx6Rtwcx",
"Q_bxawdzWO",
"99LzOB0Z9-z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for thoroughly responding to my comments. Your responses address a number of my concerns above, and I will increase my rating of the paper as a result.",
" We thank the reviewer for their detailed comments and analysis of our paper. We are pleased that they found our paper to be clear and motivated by... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
10
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
3,
5
] | [
"3E4cx6Rtwcx",
"3E4cx6Rtwcx",
"3E4cx6Rtwcx",
"3E4cx6Rtwcx",
"99LzOB0Z9-z",
"Q_bxawdzWO",
"3T_FmqYeW_U",
"nips_2022_nosngu5XwY9",
"nips_2022_nosngu5XwY9",
"nips_2022_nosngu5XwY9",
"nips_2022_nosngu5XwY9",
"nips_2022_nosngu5XwY9"
] |
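The reward parametrization from the abstract above, $r_t(s) = \sum_k w_k(t)\,\mathrm{map}_k(s)$, is easy to instantiate; the maps and the weight trajectory below are invented purely to show the low-rank structure:

```python
import numpy as np

# Two fixed spatial goal maps combined with slowly drifting weights.
n_states, T = 25, 100
goal_maps = np.zeros((2, n_states))
goal_maps[0, 0] = 1.0                       # "home" location
goal_maps[1, -1] = 1.0                      # "water" location

t = np.linspace(0, 1, T)
weights = np.stack([1 - t, t])              # drift from home-seeking to water-seeking
reward = weights.T @ goal_maps              # (T, n_states): reward map per time step
print(reward[0].argmax(), reward[-1].argmax())   # 0 -> 24
```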
nips_2022_7fGIR2oIHTl | Confidence-based Reliable Learning under Dual Noises | Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks, where massive labeled images are routinely required for model optimization. Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models. Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images. A naive combination of the two lines of works would suffer from the limitations of both sides, and miss the opportunity to handle the two kinds of noise in parallel. This work provides a first, unified framework for reliable learning under the joint (image, label)-noise. Technically, we develop a confidence-based sample filter to progressively filter out noisy data without the need to pre-specify the noise ratio. Then, we penalize the model uncertainty of the detected noisy data instead of letting the model continue over-fitting the misleading information in them. Experimental results on various challenging synthetic and real-world noisy datasets verify that the proposed method can outperform competing baselines in terms of classification performance. | Accept | Novel work that addresses the somewhat realistic setting of noise in input (images) and annotated labels. The method uses confidence scores to decide whether the labels or the images are noisy. Reviewers agree that the work is clear and easy to follow + easy to implement. I disagree with the reviewer that the proposed method is trivial -- fundamentally, showing that these simple thresholds actually help in a variety of settings is in fact a useful contribution to the community. I believe the simplicity of the proposed approach will make it more likely that this work is adopted or studied in the community, so I recommend acceptance.
"fqVmQnPJLS2",
"TCd22gWjAd0",
"EE9iFzez9D",
"ok1Y_uavSNq",
"Pi8ghHyR5u",
"drxadRMQEW_",
"4DOBLJWNs-h",
"7isYBmZVMOu",
"b5sHsOY-LbN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed response. It does address most of my concerns. \n\nI just want to point out that despite the experiment on Tiny ImageNet, it is still unknown about the impact of dual noises in the real-world scenario, where images may have higher quality and exhibit higher diversity in the visual context.... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"ok1Y_uavSNq",
"7isYBmZVMOu",
"nips_2022_7fGIR2oIHTl",
"b5sHsOY-LbN",
"7isYBmZVMOu",
"4DOBLJWNs-h",
"nips_2022_7fGIR2oIHTl",
"nips_2022_7fGIR2oIHTl",
"nips_2022_7fGIR2oIHTl"
] |
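The confidence filter plus uncertainty term from this record, sketched one-shot (the 0.3 threshold is a placeholder, the paper's filter is progressive rather than one-shot, and how the entropy term enters the loss is the paper's own design; this only computes the pieces):

```python
import numpy as np

# Flag low-confidence samples as suspected-noisy and compute an
# entropy-based uncertainty term over them.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

conf = probs.max(axis=1)
noisy = conf < 0.3                          # assumed confidence threshold
entropy = -(probs * np.log(probs)).sum(axis=1)
term = entropy[noisy].mean() if noisy.any() else 0.0
print("flagged as noisy:", int(noisy.sum()), "uncertainty term:", term)
```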
nips_2022_CFAsKosKwwk | Interpolation and Regularization for Causal Learning | Recent work shows that in complex model classes, interpolators can achieve statistical generalization and even be optimal for statistical learning. However, despite increasing interest in learning models with good causal properties, there is no understanding of whether such interpolators can also achieve *causal generalization*. To address this gap, we study causal learning from observational data through the lens of interpolation and its counterpart---regularization. Under a simple linear causal model, we derive precise asymptotics for the causal risk of the min-norm interpolator and ridge regressors in the high-dimensional regime.
We find a large range of behavior that can be precisely characterized by a new measure of *confounding strength*. When confounding strength is positive, which holds under independent causal mechanisms---a standard assumption in causal learning---we find that interpolators cannot be optimal. Indeed, causal learning requires stronger regularization than statistical learning. Beyond this assumption, when confounding is negative, we observe a phenomenon of self-induced regularization due to positive alignment between statistical and causal signals. Here, causal learning requires weaker regularization than statistical learning, interpolators can be optimal, and optimal regularization can even be negative. | Accept |
The consensus view was that the theoretical analysis was important, fundamentally novel and very interesting. The paper was well written. | train | [
"Hi_-jK7h39M",
"JBlZJobI33",
"B2U9Mr4y72C",
"n7oE4d2caeu",
"qBreUcJ7WB3",
"jvMP4wyujvw",
"9tMiKh49gYf",
"t99NE17xyfM",
"Ff45w6AMj07",
"dPocqYwEytV",
"l-EyaMeiIXz",
"RoyOiM6m0vL",
"5Vr1mZxa-7Z",
"-_JDGNz7gz",
"8MzdXeAYXHZ",
"p3Lle5H3gVg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Since there are no further comments or concerns, we hope that the reviewer would consider improving their overall score for the paper. Especially since they acknowledge the soundness, contribution, and presentation with the highest scores. Alternatively, could the reviewer kindly clarify their overall score? If t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
1
] | [
"B2U9Mr4y72C",
"t99NE17xyfM",
"Ff45w6AMj07",
"p3Lle5H3gVg",
"8MzdXeAYXHZ",
"5Vr1mZxa-7Z",
"dPocqYwEytV",
"p3Lle5H3gVg",
"8MzdXeAYXHZ",
"-_JDGNz7gz",
"RoyOiM6m0vL",
"5Vr1mZxa-7Z",
"nips_2022_CFAsKosKwwk",
"nips_2022_CFAsKosKwwk",
"nips_2022_CFAsKosKwwk",
"nips_2022_CFAsKosKwwk"
] |
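The two estimators under study in this record, side by side in the overparametrized regime (the data here is pure noise just to exercise the linear algebra; the paper's results concern a linear causal model with confounding, which this sketch does not simulate):

```python
import numpy as np

# Ridge with lam -> 0 recovers the min-norm interpolator; the paper's claim
# is that causal risk under positive confounding favours a larger lam than
# statistical risk does.
rng = np.random.default_rng(0)
n, d, lam = 50, 200, 1.0
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

beta_min_norm = np.linalg.pinv(X) @ y                        # interpolates: Xb = y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(np.allclose(X @ beta_min_norm, y), np.linalg.norm(beta_ridge))
```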
nips_2022_FO0Gb8IL1p5 | Off-Policy Evaluation with Policy-Dependent Optimization Response | The intersection of causal inference and machine learning for decision-making is rapidly expanding, but the default decision criterion remains an average of individual causal outcomes across a population. In practice, various operational restrictions ensure that a decision-maker's utility is not realized as an average but rather as an output of a downstream decision-making problem (such as matching, assignment, network flow, minimizing predictive risk). In this work, we develop a new framework for off-policy evaluation with policy-dependent linear optimization responses: causal outcomes introduce stochasticity in objective function coefficients. Under this framework, a decision-maker's utility depends on the policy-dependent optimization, which introduces a fundamental challenge of optimization bias even for the case of policy evaluation. We construct unbiased estimators for the policy-dependent estimand by a perturbation method, and discuss asymptotic variance properties for a set of adjusted plug-in estimators. Lastly, attaining unbiased policy evaluation allows for policy optimization: we provide a general algorithm for optimizing causal interventions. We corroborate our theoretical results with numerical simulations. | Accept |
In the paper, the authors considered a new formulation for off-policy evaluation and optimization, in which the utility is measured by the output of a downstream decision-making problem; therefore, the policy value depends on individual responses. This difference induces additional optimization bias in the estimand and requires new techniques to handle it. The authors constructed estimators by combining a perturbation method with IPW estimators, and provided theoretical guarantees and an empirical study.
Most of the reviewers provided positive feedback on this paper. It would be great if the authors could take the reviewers' suggestions into account to improve the paper:
- Although, as the authors explained, the finite-sample analysis is not new, for completeness it would be great if it could be added to the paper.
- As a reviewer suggested, the method is well motivated by practical problems but has not been justified with real-world applications. The paper would be significantly improved if this part could be added.
Finally, there is a natural technique, the interchangeability principle [1, 2], that can be used to bypass the optimization bias besides the perturbation method; it should be discussed and compared.
> [1] Dai, Bo, Niao He, Yunpeng Pan, Byron Boots, and Le Song. "Learning from conditional distributions via dual embeddings." In Artificial Intelligence and Statistics, pp. 1458-1467. PMLR, 2017.
> [2] Shapiro, A., and Dentcheva, D. (2014). Lectures on stochastic programming: modeling and theory. SIAM (Vol. 16). | train | [
"uui9Df6679v",
"s0_joUdyTGE",
"DdnR86mkXaf",
"0dNt4Lc-8Gp",
"JvuFQDC4RDl",
"p7l-uJ54Hd0",
"UwH6-GNH8y",
"mXZRgs11hsc",
"wxkO1Z2gscP",
"1hDtpLxhFa3",
"puZ2nX5s1O7",
"z3OLNQ26M0",
"ey4R6Hqnhy",
"Hf1fUvst8r4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for the response. I have no further questions or concerns. ",
" Hi,\n\nThanks for your clarification on the optimization bias. When I re-read the section, it was clear that this was, in fact, discussed. I believe that the confusion was with the section that follows in lines 181-190 (in particu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"wxkO1Z2gscP",
"1hDtpLxhFa3",
"nips_2022_FO0Gb8IL1p5",
"JvuFQDC4RDl",
"UwH6-GNH8y",
"puZ2nX5s1O7",
"Hf1fUvst8r4",
"ey4R6Hqnhy",
"z3OLNQ26M0",
"puZ2nX5s1O7",
"nips_2022_FO0Gb8IL1p5",
"nips_2022_FO0Gb8IL1p5",
"nips_2022_FO0Gb8IL1p5",
"nips_2022_FO0Gb8IL1p5"
] |
nips_2022_BuQIv5Qe35 | PDSketch: Integrated Domain Programming, Learning, and Planning | This paper studies a model learning and online planning approach towards building flexible and general robots. Specifically, we investigate how to exploit the locality and sparsity structures in the underlying environmental transition model to improve model generalization, data-efficiency, and runtime-efficiency. We present a new domain definition language, named PDSketch. It allows users to flexibly define high-level structures in the transition models, such as object and feature dependencies, in a way similar to how programmers use TensorFlow or PyTorch to specify kernel sizes and hidden dimensions of a convolutional neural network. The details of the transition model will be filled in by trainable neural networks. Based on the defined structures and learned parameters, PDSketch automatically generates domain-independent planning heuristics without additional training. The derived heuristics accelerate the performance-time planning for novel goals. | Accept | This paper presents a new domain definition language aimed at defining high-level structures of transition models. The benefit of using this language is the possibility of injecting human priors into the models, thus achieving better learning.
The idea of the paper is novel and bold.
The reviewers highlighted different aspects of how the paper could be improved in terms of clarity and structure (potentially moving some of the experiments from the appendix into the main text), but generally agreed on the value of its contribution.
I encourage the authors to incorporate all the feedback received. | train | [
"qStN0Ex3KX",
"tQm0c9wSZRrG",
"fVc4gbuO40X",
"c_bMDOknn5s",
"wh6gPaTElv2",
"CTn8e9IMwGV",
"kGR-WMmK7fpR",
"s94gHKPzr-e",
"f9Y-cjNOfzN",
"3P2RRIHJRIk",
"SRlw_7zx8Q6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I do not have more questions. ",
" Thank you for your time and consideration. Your understanding is correct.\n\n**Q1**: Model learns to predict properties.\n\n**A1**: Yes. The model learns to predict the properties purely from images. The model can learn because in action definitions yo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
2
] | [
"CTn8e9IMwGV",
"fVc4gbuO40X",
"wh6gPaTElv2",
"SRlw_7zx8Q6",
"3P2RRIHJRIk",
"f9Y-cjNOfzN",
"s94gHKPzr-e",
"nips_2022_BuQIv5Qe35",
"nips_2022_BuQIv5Qe35",
"nips_2022_BuQIv5Qe35",
"nips_2022_BuQIv5Qe35"
] |
nips_2022_AlkMMzUX95 | Spatial Mixture-of-Experts | Many data have an underlying dependence on spatial location; it may be weather on the Earth, a simulation on a mesh, or a registered image. Yet this feature is rarely taken advantage of, and violates common assumptions made by many neural network layers, such as translation equivariance. Further, many works that do incorporate locality fail to capture fine-grained structure. To address this, we introduce the Spatial Mixture-of-Experts (SMoE) layer, a sparsely-gated layer that learns spatial structure in the input domain and routes experts at a fine-grained level to utilize it. We also develop new techniques to train SMoEs, including a self-supervised routing loss and damping expert errors. Finally, we show strong results for SMoEs on numerous tasks, and set new state-of-the-art results for medium-range weather prediction and post-processing ensemble weather forecasts. | Accept | This work introduces a novel layer in order to extract information better at a fine-grained level. The basic idea is to introduce a spatial mixture of experts which localizes computations to a specific part of the image. Given the novelty of the method, I recommend accepting this paper. However, please add the modifications proposed in the response to reviewer vuZ5. | val | [
"hfY9DZsl1nH",
"JlefpDF7lLn",
"Y8o3QHLrOv_",
"O12AeF9j5f-",
"gmCDjY8GV0i",
"lQkv6l49dU",
"2EvQ8CO1X9",
"4I33n_iwml",
"RD7qFn3UNBZ",
"2vnVeC7K6p7",
"kNw-kfP4bc",
"E0GnP6wATpL",
"4hgYkyNguMy"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you!\n\n> If there are measurements with improvement on an additional relevant application, even if not SotA but compared to strong enough appropriate baseline, IMO that would empirically demonstrate a greater reach and bring this from borderline/weak accept to clear accept.\n\nWe are working on additional ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4,
3
] | [
"JlefpDF7lLn",
"O12AeF9j5f-",
"4hgYkyNguMy",
"E0GnP6wATpL",
"kNw-kfP4bc",
"2vnVeC7K6p7",
"RD7qFn3UNBZ",
"nips_2022_AlkMMzUX95",
"nips_2022_AlkMMzUX95",
"nips_2022_AlkMMzUX95",
"nips_2022_AlkMMzUX95",
"nips_2022_AlkMMzUX95",
"nips_2022_AlkMMzUX95"
] |
nips_2022_qOgSCLE5E8 | Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport | Few-shot classification aims to learn a classifier to recognize unseen classes during training, where the learned model can easily become over-fitted based on the biased distribution formed by only a few training examples. A recent solution to this problem is calibrating the distribution of these few sample classes by transferring statistics from the base classes with sufficient examples, where how to decide the transfer weights from base classes to novel classes is the key. However, principled approaches for learning the transfer weights have not been carefully studied. To this end, we propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes, which is built upon a hierarchical Optimal Transport (H-OT) framework. By minimizing the high-level OT distance between novel samples and base classes, we can view the learned transport plan as the adaptive weight information for transferring the statistics of base classes. The learning of the cost function between a base class and novel class in the high-level OT leads to the introduction of the low-level OT, which considers the weights of all the data samples in the base class. Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches and owns desired cross-domain generalization ability, indicating the effectiveness of the learned adaptive weights. | Accept | This paper builds off of the distribution calibration approach for few-shot learning known as “Free Lunch,” replacing the Euclidean metric with Hierarchical-Optimal Transport. The result is a more principled and empirically effective approach. The main issues raised by the reviewers concerned the limitations of Free Lunch and whether this new approach could overcome them, and what sort of additional computational cost would be incurred by using optimal transport. While more expensive, the approach was felt to be sufficiently novel and effective overall.
| train | [
"NgCyXMciF6H",
"tdf7ftGHEm",
"7CK-kTiXFW7",
"3HLsBYbSEK1",
"r-hGDgysjdj",
"5wqeznyk1e",
"EZdxCVtswa",
"pBXmDCPRwm-",
"Y7l4yVTf5cB",
"Cb7PAaf3jp",
"erE-q0dMVPH",
"M7b2ld7FNnL",
"_pJrzc1fLJZ",
"GJTNEcC3PIC",
"SddQNI93F1",
"DYNmUlSqaHv",
"XXomK2oQSZ",
"QGWqg6ii_0P",
"oONf1FinLu4"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot!",
" Thank you for the responses to my comments. My major concern was about comparison with Free-Lunch, which is somewhat addressed and hence I will change my rating to 5 from 4.",
" Thanks for your comments.",
" Thanks for your comments.",
" Thanks for your careful checks and constructive co... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"tdf7ftGHEm",
"erE-q0dMVPH",
"pBXmDCPRwm-",
"5wqeznyk1e",
"EZdxCVtswa",
"Cb7PAaf3jp",
"Y7l4yVTf5cB",
"M7b2ld7FNnL",
"oONf1FinLu4",
"QGWqg6ii_0P",
"XXomK2oQSZ",
"DYNmUlSqaHv",
"nips_2022_qOgSCLE5E8",
"nips_2022_qOgSCLE5E8",
"nips_2022_qOgSCLE5E8",
"nips_2022_qOgSCLE5E8",
"nips_2022_qO... |
nips_2022_N_D-JLau3Z | Iterative Structural Inference of Directed Graphs | In this paper, we propose a variational model, iterative Structural Inference of Directed Graphs (iSIDG), to infer the existence of directed interactions from observational agents’ features over a time period in a dynamical system. First, the iterative process in our model feeds the learned interactions back to encourage our model to eliminate indirect interactions and to emphasize directional representation during learning. Second, we show that extra regularization terms in the objective function for smoothness, connectiveness, and sparsity prompt our model to infer a more realistic structure and to further eliminate indirect interactions. We evaluate iSIDG on various datasets including biological networks, simulated fMRI data, and physical simulations to demonstrate that our model is able to precisely infer the existence of interactions, and is significantly superior to baseline models. | Accept | This is a borderline paper.
All reviewers liked the paper but were a bit concerned with the number of regularization terms (the loss combines VAE and information bottleneck approaches) and thus the number of hyperparameters. Secondly, the reviewers were also initially concerned with the size of the benchmarks but the discussion phase convinced them that these are standard in the field. Thirdly, some reviewers pointed to more related work. This, however, was not a crucial point.
Since all reviewers find the paper interesting and well-written, acceptance is recommended. | train | [
"hlM5QU-wtYI",
"FZnJYuyJWG",
"SXWIVucSoih",
"qjhKW5kSpH1",
"1T20xqrg22",
"1-jGFHhnkhG",
"LAURaMjLHIq",
"g9Y5W4EDOXm",
"8gQp5vT4cQb",
"tv4u0bfpVli",
"596YUkFbLWG",
"LQczii-XAAa",
"PVZax9LjOMG",
"Kq25C8MBzUH"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all three reviewers for their time and valuable feedback!\n\n\n\nWe appreciate their assessment of our work as a **\"novel and simple, yet effective\"**, **\"well-written and the presentation is clear\"** (cY4q), **\"the idea of iterative learning of the adjacency matrix is new\"** (3CVG), ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"nips_2022_N_D-JLau3Z",
"SXWIVucSoih",
"g9Y5W4EDOXm",
"1T20xqrg22",
"1-jGFHhnkhG",
"Kq25C8MBzUH",
"g9Y5W4EDOXm",
"8gQp5vT4cQb",
"PVZax9LjOMG",
"596YUkFbLWG",
"LQczii-XAAa",
"nips_2022_N_D-JLau3Z",
"nips_2022_N_D-JLau3Z",
"nips_2022_N_D-JLau3Z"
] |
nips_2022_lhl_rYNdiH6 | Contrastive Graph Structure Learning via Information Bottleneck for Recommendation | Graph convolution networks (GCNs) for recommendations have emerged as an important research topic due to their ability to exploit higher-order neighbors. Despite their success, most of them suffer from the popularity bias brought by a small number of active users and popular items. Also, a real-world user-item bipartite graph contains many noisy interactions, which may hamper the sensitive GCNs. Graph contrastive learning shows promising performance for solving the above challenges in recommender systems. Most existing works typically perform graph augmentation to create multiple views of the original graph by randomly dropping edges/nodes or relying on predefined rules, and these augmented views always serve as an auxiliary task by maximizing their correspondence. However, we argue that the graph structures generated from these vanilla approaches may be suboptimal, and maximizing their correspondence will force the representation to capture information irrelevant for the recommendation task. Here, we propose a Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which adaptively learns whether to drop an edge or node to obtain optimized graph structures in an end-to-end manner. Moreover, we innovatively introduce the Information Bottleneck into the contrastive learning process to avoid capturing irrelevant information among different views and help enrich the final representation for recommendation. Extensive experiments on public datasets are provided to show that our model significantly outperforms strong baselines. | Accept | All reviews were on the side of acceptance, but opinion varied from borderline to strong acceptance.
Several aspects of the paper were appreciated:
- The problem was considered important, and the model and its technical ideas were considered interesting.
- The model and its components were considered clearly motivated.
- The performance and robustness advantage over baselines in empirical results was appreciated, as were the ablation studies, and the analysis of the information bottleneck part.
- The paper was considered well written and technically sound, and code was provided.
The reviewers did have several concerns:
- A claim about alleviating popularity bias was considered not well supported. One reviewer considered it was not well supported either by theory or experiments, whereas another reviewer felt that empirical results did show reduction of popularity bias and noise robustness but no clear theoretical link was provided.
- Opinion on experiments varied: three reviewers considered the experiments extensive / thorough / having good results, but one considered the experiments weak and not giving results on all three data sets in each experiment; additional information was provided by the authors in a response.
- Some related work on information bottleneck was missing.
- The novelty was considered limited as a combination of existing approaches.
The discussion with the authors resolved at least in part the latter two concerns.
Overall, it seems the paper is publishable in NeurIPS provided the improvements made during the discussion phase are sufficiently taken into account in the final manuscript. | train | [
"xRBEPbjuLHx",
"rXAFoBzwzYU",
"WtywreJjpE2",
"PUziSJ_fRhV",
"41wq1K-lmYt",
"R07s4qbJCBD",
"bgfrq3zWfd",
"fMxdMVb8Za8",
"47leb8ACeXG",
"Cvglxrda31i",
"nD9WtUDaNgg",
"LjHTD59sA1S",
"Of7CAITmzI5",
"AFDBEBX8bXz",
"dLWfvebzgOH",
"oeV4gKJI3hv"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer 5uX4: \n \nThanks a lot for your efforts in reviewing this paper. We have tried our best to address all your concerns and provided more experiments. Please let us know whether there are any unclear explanations. In addition, if you have any further questions, we will also be very glad to further c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
3
] | [
"AFDBEBX8bXz",
"Of7CAITmzI5",
"PUziSJ_fRhV",
"fMxdMVb8Za8",
"oeV4gKJI3hv",
"dLWfvebzgOH",
"dLWfvebzgOH",
"dLWfvebzgOH",
"AFDBEBX8bXz",
"AFDBEBX8bXz",
"Of7CAITmzI5",
"Of7CAITmzI5",
"nips_2022_lhl_rYNdiH6",
"nips_2022_lhl_rYNdiH6",
"nips_2022_lhl_rYNdiH6",
"nips_2022_lhl_rYNdiH6"
] |
nips_2022_PBmJC6rDnR6 | A Geometric Perspective on Variational Autoencoders | This paper introduces a new interpretation of the Variational Autoencoder framework by taking a fully geometric point of view. We argue that vanilla VAE models unveil naturally a Riemannian structure in their latent space and that taking into consideration those geometrical aspects can lead to better interpolations and an improved generation procedure. This new proposed sampling method consists in sampling from the uniform distribution deriving intrinsically from the learned Riemannian latent space and we show that using this scheme can make a vanilla VAE competitive and even better than more advanced versions on several benchmark datasets. Since generative models are known to be sensitive to the number of training samples we also stress the method's robustness in the low data regime. | Accept | The submission proposes a geometrically motivated method to sample from a trained variational autoencoder by defining a Riemannian metric on the latent space and a corresponding distribution to correct the sampling process of VAEs. According to most reviewers and reading the submission, the method is clearly explained and provides a consistent improvement in sample quality according to FID. The authors have mostly addressed the reviewers' concerns as well.
I recommend this paper for acceptance. | train | [
"vHes9NtxXeY",
"ZJ6j1Y9FAz",
"3RL8zVfxYND",
"qSY91YWC4q",
"BYffrmrtmUk",
"L0CPOO9o1DI",
"74AFhRuQC_h"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer AaFY for his/her review and suggestions to improve the experimental results.\n\n- *baselines implementations*: As stated in Appendix. D, in the paper, we decided to use the code and hyper-parameters provided by the authors (if available) for each method we compared to. If the code was not availa... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"74AFhRuQC_h",
"L0CPOO9o1DI",
"BYffrmrtmUk",
"nips_2022_PBmJC6rDnR6",
"nips_2022_PBmJC6rDnR6",
"nips_2022_PBmJC6rDnR6",
"nips_2022_PBmJC6rDnR6"
] |
nips_2022_DbEVhhuNjr | Foundation Posteriors for Approximate Probabilistic Inference | Probabilistic programs provide an expressive representation language for generative models. Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables. Existing techniques for inference in probabilistic programs often require choosing many hyper-parameters, are computationally expensive, and/or only work for restricted classes of programs. Here we formulate inference as masked language modeling: given a program, we generate a supervised dataset of variables and assignments, and randomly mask a subset of the assignments. We then train a neural network to unmask the random values, defining an approximate posterior distribution. By optimizing a single neural network across a range of programs we amortize the cost of training, yielding a "foundation" posterior able to do zero-shot inference for new programs. The foundation posterior can also be fine-tuned for a particular program and dataset by optimizing a variational inference objective. We show the efficacy of the approach, zero-shot and fine-tuned, on a benchmark of STAN programs. | Accept | This paper proposes a novel and interesting perspective on leveraging large masked language models as ways to initialize posterior distributions across probabilistic programming language (PPL) tasks. The idea is that this distribution can later be fine tuned over different probabilistic programs.
All reviewers acknowledged that the idea is a novel application of masked language models and, despite being a natural analogy to the way these models are already used nowadays in NLP, can be potentially impactful for amortized inference in PPLs.
The paper is accepted upon the incorporation of the discussions that emerged during the rebuttal concerning related work and presentation. I also advise the authors to think about substituting the term "Foundation" with a more precise technical term ("Masked Language Model Posteriors"?, "Transformer Posteriors"?,...). Imprecise umbrella terms, nowadays, bring more noise than help and inflate the hype around simple concepts. | test | [
"bjWURcwRgQ",
"rAWa62rW5lM",
"7XuP-KQsfRF",
"mho778nMHh4",
"xRS1Pbmu4p",
"4osx76ljdCjJ",
"M9rvb67v02",
"fdHKRiD-Z4E",
"EKBy1W4pu_7",
"HQhh3EuY3ET",
"Tjf--pMp5HS",
"SCoBzOEpDOn",
"b91h9dTgcy",
"Z0INWLLLDXl",
"Mf1ElYmN9TQ",
"YVjLLcvfl9t",
"rjxc-fr5Cwm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the reply and I believe most of my concerns have been addressed. \n\nIn terms of the STAN question, I would suggest adding explanations of what kinds of programs STAN supports and what common features they have. It does not necessarily need to be thorough or formal in the sense that it mainly serves to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"EKBy1W4pu_7",
"7XuP-KQsfRF",
"HQhh3EuY3ET",
"xRS1Pbmu4p",
"4osx76ljdCjJ",
"M9rvb67v02",
"fdHKRiD-Z4E",
"rjxc-fr5Cwm",
"YVjLLcvfl9t",
"Tjf--pMp5HS",
"Mf1ElYmN9TQ",
"Z0INWLLLDXl",
"nips_2022_DbEVhhuNjr",
"nips_2022_DbEVhhuNjr",
"nips_2022_DbEVhhuNjr",
"nips_2022_DbEVhhuNjr",
"nips_202... |
nips_2022_MZoyeKrpVYP | On Non-Linear operators for Geometric Deep Learning | This work studies operators mapping vector and scalar fields defined over a manifold $\mathcal{M}$, and which commute with its group of diffeomorphisms $\text{Diff}(\mathcal{M})$. We prove that in the case of scalar fields $L^p_\omega(\mathcal{M},\mathbb{R})$, those operators correspond to point-wise non-linearities, recovering and extending known results on $\mathbb{R}^d$. In the context of Neural Networks defined over $\mathcal{M}$, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields $L^p_\omega(\mathcal{M},T\mathcal{M})$, we show that those operators are solely the scalar multiplication. It indicates that $\text{Diff}(\mathcal{M})$ is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of $\mathcal{M}$. | Accept | Overall: studies operators mapping vector and scalar fields defined over a manifold M, and which commute with its group of diffeomorphisms Diff(M).
Reviews: The paper received three reviews, all of them on the positive side: Accept (less confident), Weak Accept (less confident), borderline accept (less confident). It seems that all reviewers are happy with the paper and the changes proposed by the authors during the rebuttal. The reviewers found the paper clear, with a clean presentation. The findings are interesting, and connect with the ML community. The authors have provided satisfactory answers to reviewers' comments, answering most of them successfully.
Confidence of reviews: Overall, the reviewers are fairly confident. We will put more weight on the reviews whose authors engaged in the rebuttal discussion period. | train | [
"c1N4z9Q-U-s",
"SFabmuFKuI",
"HtdCEEmrvvA",
"Uh9-fw-_u_D",
"51LnYE7Phe",
"rWsbTmKpxG",
"TjPeasSmT2"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nWe would like to thank the reviewer for his remarks and positive review! We thought to have added all the definitions to notations once they appeared in the text, yet it seems we might have missed some; we added:\n\n– lines 73 to 76: definition of $L^p(\\mathcal{M}, E)$ \n\n– line 111: definition of $T_u\\mathc... | [
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
2,
2,
2
] | [
"TjPeasSmT2",
"rWsbTmKpxG",
"51LnYE7Phe",
"nips_2022_MZoyeKrpVYP",
"nips_2022_MZoyeKrpVYP",
"nips_2022_MZoyeKrpVYP",
"nips_2022_MZoyeKrpVYP"
] |
nips_2022_F2Gk6Vr3wu | Scale-invariant Learning by Physics Inversion | Solving inverse problems, such as parameter estimation and optimal control, is a vital part of science. Many experiments repeatedly collect data and rely on machine learning algorithms to quickly infer solutions to the associated inverse problems. We find that state-of-the-art training techniques are not well-suited to many problems that involve physical processes. The highly nonlinear behavior, common in physical processes, results in strongly varying gradients that lead first-order optimizers like SGD or Adam to compute suboptimal optimization directions.
We propose a novel hybrid training approach that combines higher-order optimization methods with machine learning techniques. We take updates from a scale-invariant inverse problem solver and embed them into the gradient-descent-based learning pipeline, replacing the regular gradient of the physical process.
We demonstrate the capabilities of our method on a variety of canonical physical systems, showing that it yields significant improvements on a wide range of optimization and learning problems. | Accept | Reviewers agreed that solving inverse problems is a long-standing problem in science and engineering, and the authors propose an interesting technique to integrate the physics solver with a neural network to address this challenge. | train | [
"5q6cGF7gvEd",
"lFdMDeq3Y0l",
"3i911frD6LG",
"IX1P7V2sqIu",
"6UfVRnkmtFQ",
"iR87-UnC_-",
"zrmsPKmf5x3",
"tOw0eHgUcD"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > Highly related methods are missing for comparisons. There are many optimization methods to avoid computing the inverse of the Hessian matrix. For example, the Hessian-free method. James Martens. Deep learning via Hessian-free optimization. ICML, 2010.\n\nOur evaluation already contains comparisons to more moder... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"tOw0eHgUcD",
"zrmsPKmf5x3",
"IX1P7V2sqIu",
"iR87-UnC_-",
"nips_2022_F2Gk6Vr3wu",
"nips_2022_F2Gk6Vr3wu",
"nips_2022_F2Gk6Vr3wu",
"nips_2022_F2Gk6Vr3wu"
] |
nips_2022_9Hjh0tMT1pm | Towards Improving Faithfulness in Abstractive Summarization | Despite the success achieved in neural abstractive summarization based on pre-trained language models, one unresolved issue is that the generated summaries are not always faithful to the input document.
There are two possible causes of the unfaithfulness problem:
(1) the summarization model fails to understand or capture the gist of the input text, and (2) the model over-relies on the language model to generate fluent but inadequate words.
In this work, we propose a Faithfulness Enhanced Summarization model (FES), which is designed for addressing these two problems and improving faithfulness in abstractive summarization.
For the first problem, we propose to use question-answering (QA) to examine whether the encoder fully grasps the input document and can answer the questions on the key information in the input.
The QA attention on the proper input words can also be used to stipulate how the decoder should attend to the source.
For the second problem, we introduce a max-margin loss defined on the difference between the language and the summarization model, aiming to prevent the overconfidence of the language model.
Extensive experiments on two benchmark summarization datasets, CNN/DM and XSum, demonstrate that our model significantly outperforms strong baselines.
The evaluation of factual consistency also shows that our model generates more faithful summaries than baselines. | Accept | All reviewers agree that the method is interesting and that the experimental results are thorough and significant. Multiple reviewers mentioned the paper would have been stronger by demonstrating performance on a dataset in another domain (i.e., not newswire), but still gave high scores. Given the consensus among reviewers, this paper should be accepted. | train | [
"7hIuABuiHQM",
"90_s6iAdO59",
"MUpAI2ekpns",
"zDCryXlbCHx",
"HGur2lDvya",
"52bwT0AnNGc",
"NLMPnig0qLK",
"N90l3tJiLc",
"VgDx9VBAiD",
"YSEYdgs4c8E",
"RowUKvyHuOn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my comments/questions. ",
" All the issues I have raised have been resolved. Will raise my score.",
" Dear Reviewer,\n\nWe appreciate a lot for your insightful review comments. Do you have any further comments on our paper?\n\nRegards.",
" Thank you for the valuable comments that help ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"HGur2lDvya",
"MUpAI2ekpns",
"NLMPnig0qLK",
"RowUKvyHuOn",
"YSEYdgs4c8E",
"VgDx9VBAiD",
"N90l3tJiLc",
"nips_2022_9Hjh0tMT1pm",
"nips_2022_9Hjh0tMT1pm",
"nips_2022_9Hjh0tMT1pm",
"nips_2022_9Hjh0tMT1pm"
] |
nips_2022_3vpvnMVOUKE | Nonlinear MCMC for Bayesian Machine Learning | We explore the application of a nonlinear MCMC technique first introduced in [1] to problems in Bayesian machine learning. We provide a convergence guarantee in total variation that uses novel results for long-time convergence and large-particle (``propagation of chaos'') convergence. We apply this nonlinear MCMC technique to sampling problems including a Bayesian neural network on CIFAR10. | Accept | The contribution of this submission is strong - an analysis of convergence of non-linear MCMC methods. Most reviewers agree that the submission is theoretically interesting and of interest to the NeurIPS community. Thus I recommend acceptance.
However, I note (and share with some reviewers) the following concern: the empirical results on CIFAR10 do not show the benefit of the proposed method. The result of the linear baseline (ULA) is very far from SOTA for ResNet and CIFAR10, yet ULA outperforms the non-linear version in terms of accuracy and time efficiency. It would be great if a realistic model/dataset (between the simple 2D toy experiment and the perhaps too ambitious ResNet/CIFAR10) could be included to show the benefits (if any) of nonlinear MCMC. Some reviewers also raised a concern about the organisation/flow of the paper, which I hope the authors will fix in the camera-ready version. | val | [
"Xy87zNvk_Ob",
"6kphkGXxW6i",
"m5F6mV4D0s",
"b4krtVz7E9S",
"K1f6CHdkkl0",
"KOg_Ii3lDTP",
"oy5RlSC--aw",
"iglzDq-_eJv",
"xTUXh6Wri8",
"9uKiZB6Z9O",
"lespcOPgmRH",
"Qu9zH7YLEtl",
"9JXtKiX45J9",
"haMQu7_mzTK",
"_WDdnPUZEZ8",
"8GRrSrVNVc"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 3nmg,\nThank you for your response and your time spent reviewing our paper.",
" I thank the authors for the reply! The authors addressed some minor concerns I had with the paper, and I believe it's a strong paper that stands to be accepted! I stand by my initial review and rating, and thank the au... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
3,
4
] | [
"6kphkGXxW6i",
"lespcOPgmRH",
"iglzDq-_eJv",
"KOg_Ii3lDTP",
"oy5RlSC--aw",
"9uKiZB6Z9O",
"Qu9zH7YLEtl",
"xTUXh6Wri8",
"8GRrSrVNVc",
"_WDdnPUZEZ8",
"haMQu7_mzTK",
"9JXtKiX45J9",
"nips_2022_3vpvnMVOUKE",
"nips_2022_3vpvnMVOUKE",
"nips_2022_3vpvnMVOUKE",
"nips_2022_3vpvnMVOUKE"
] |
nips_2022_qwjrO7Rewqy | Neural Approximation of Graph Topological Features | Topological features based on persistent homology capture high-order structural information so as to augment graph neural network methods. However, computing extended persistent homology summaries remains slow for large and dense graphs and can be a serious bottleneck for the learning pipeline. Inspired by recent success in neural algorithmic reasoning, we propose a novel graph neural network to estimate extended persistence diagrams (EPDs) on graphs efficiently. Our model is built on algorithmic insights, and benefits from better supervision and closer alignment with the EPD computation algorithm. We validate our method with convincing empirical results on approximating EPDs and downstream graph representation learning tasks. Our method is also efficient; on large and dense graphs, we accelerate the computation by nearly 100 times. | Accept | A reasonably interesting paper on deep models to approximate a popular class of objects, extended persistence diagrams, in topological data analysis. This is both an important problem, since these can be used throughout various parts of graph machine learning, and is a challenging one that requires technical innovations. The authors deliver on this, introducing quite good results.
The reviewers (and I) are in agreement about the paper, and the additional results the authors provided in response to the reviews are convincing. For example, on large sparse graphs, the proposed algorithms maintain a big speedup advantage.
Perhaps the only question that came up is whether this work is more of a TDA-specific topic, but in general the paper clearly fits well within the machine learning community. | test | [
"-GMBohn-377",
"EB8RsPOukq6",
"k3HlBEMbxMZ",
"4kj3G964RPa",
"Dzw7zCoo9N-",
"x7b4qRPn-v",
"mtdIiAgptzVI",
"0KitxKb10ZG",
"ig7wPaz2vaF",
"rGvko1sDc3E",
"2EHKJCgv_SS",
"aTTk3pptci9",
"UFsTj5TZg4Z",
"TUTXbkn1AwA",
"xnH12FGqyn",
"2CY8u2vKqw",
"H6J1IL71S0A"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your support!\n\nWe use the degree centrality function implemented with the package \"networkx==2.5\". The algorithm is available in https://networkx.org/documentation/stable/_modules/networkx/algorithms/centrality/degree_alg.html#degree_centrality. And we will clarify this in the paper.",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"EB8RsPOukq6",
"k3HlBEMbxMZ",
"xnH12FGqyn",
"Dzw7zCoo9N-",
"mtdIiAgptzVI",
"0KitxKb10ZG",
"ig7wPaz2vaF",
"rGvko1sDc3E",
"2CY8u2vKqw",
"H6J1IL71S0A",
"2CY8u2vKqw",
"UFsTj5TZg4Z",
"TUTXbkn1AwA",
"xnH12FGqyn",
"nips_2022_qwjrO7Rewqy",
"nips_2022_qwjrO7Rewqy",
"nips_2022_qwjrO7Rewqy"
] |
nips_2022_7TGpLKADODE | Self-Supervised Fair Representation Learning without Demographics | Fairness has become an important topic in machine learning. Generally, most literature on fairness assumes that the sensitive information, such as gender or race, is present in the training set, and uses this information to mitigate bias. However, due to practical concerns like privacy and regulation, applications of these methods are restricted. Also, although much of the literature studies supervised learning, in many real-world scenarios, we want to utilize the large unlabelled dataset to improve the model's accuracy. Can we improve fair classification without sensitive information and without labels? To tackle the problem, in this paper, we propose a novel reweighing-based contrastive learning method. The goal of our method is to learn a generally fair representation without observing sensitive attributes. Our method assigns weights to training samples per iteration based on their gradient directions relative to the validation samples such that the average top-k validation loss is minimized. Compared with past fairness methods without demographics, our method is built on fully unsupervised training data and requires only a small labelled validation set. We provide rigorous theoretical proof of the convergence of our model. Experimental results show that our proposed method achieves better or comparable performance than state-of-the-art methods on three datasets in terms of accuracy and several fairness metrics. | Accept | The paper proposes a contrastive learning approach to learn fair representations, without having explicit access to sensitive attributes. The approach leverages a small validation set with sensitive attributes to help guide training, by assigning per-sample weights based on the effect on the validation loss. All reviewers were supportive of acceptance, with a common appreciation for the novelty of the problem setting, and the modularity of the proposed approach.
The authors can consider adding citations to works operating in the related setting where there are _noisy_ protected attributes, e.g., Gupta et al., "Proxy Fairness"; Lamy et al., "Noise-tolerant fair classification"; Wang et al., "Robust Optimization for Fairness with Noisy Protected Groups". | train | [
"crf94XpmgfQ",
"B8a9DIbuXm2",
"zywNuCMdlPs",
"FSMftujXdRg",
"bZ9PK9NidhQ",
"jl6DJFetKyE",
"sD11AxXCHjn",
"I56LKvE-ZWm",
"rv0eccVRS_k",
"E5EZAJrA7Gn",
"AyEw91rJDwj",
"cZe3BxfCW6-",
"irqAi15h8tN"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your valuable feedback! We sincerely appreciate your time and effort in reviewing our paper.",
" I have read the response from the authors and appreciate their further clarifications. The authors have addressed most of my concerns. Therefore, I increased my score to 5.",
" Thanks very much for your... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"B8a9DIbuXm2",
"rv0eccVRS_k",
"FSMftujXdRg",
"sD11AxXCHjn",
"nips_2022_7TGpLKADODE",
"irqAi15h8tN",
"cZe3BxfCW6-",
"AyEw91rJDwj",
"E5EZAJrA7Gn",
"nips_2022_7TGpLKADODE",
"nips_2022_7TGpLKADODE",
"nips_2022_7TGpLKADODE",
"nips_2022_7TGpLKADODE"
] |
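A schematic of the per-sample reweighing described in the record above (nips_2022_7TGpLKADODE): weight training samples by how well their gradients align with a validation-batch gradient. The linear model, plain cross-entropy losses, and the clamp-then-normalise rule are placeholder choices; the paper uses a contrastive objective and an average top-k validation loss.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
x_tr, y_tr = torch.randn(32, 16), torch.randint(0, 2, (32,))
x_va, y_va = torch.randn(8, 16), torch.randint(0, 2, (8,))

def flat_grad(scalar_loss):
    # Gradient w.r.t. the weight matrix only, flattened (a simplification).
    g = torch.autograd.grad(scalar_loss, model.weight, retain_graph=True)[0]
    return g.reshape(-1)

val_grad = flat_grad(loss_fn(model(x_va), y_va).mean())
per_sample = loss_fn(model(x_tr), y_tr)
align = torch.stack([torch.dot(flat_grad(l), val_grad) for l in per_sample])

w = torch.clamp(align, min=0.0)              # keep samples whose descent direction
w = w / (w.sum() + 1e-12)                    # also lowers the validation loss
(w * per_sample).sum().backward()            # gradient used for the actual update
```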
nips_2022_2B2xIJ299rx | Efficient Training of Low-Curvature Neural Networks | Standard deep neural networks often have excess non-linearity, making them susceptible to issues such as low adversarial robustness and gradient instability. Common methods to address these downstream issues, such as adversarial training, are expensive and often sacrifice predictive accuracy. In this work, we address the core issue of excess non-linearity via curvature, and demonstrate low-curvature neural networks (LCNNs) that obtain drastically lower curvature than standard models while exhibiting similar predictive performance. This leads to improved robustness and stable gradients, at a fraction of the cost of standard adversarial training. To achieve this, we decompose overall model curvature in terms of curvatures and slopes of its constituent layers. To enable efficient curvature minimization of constituent layers, we introduce two novel architectural components: first, a non-linearity called centered-softplus that is a stable variant of the softplus non-linearity, and second, a Lipschitz-constrained batch normalization layer. Our experiments show that LCNNs have lower curvature, more stable gradients and increased off-the-shelf adversarial robustness when compared to standard neural networks, all without affecting predictive performance. Our approach is easy to use and can be readily incorporated into existing neural network architectures. | Accept | This paper studies low-curvature training of neural networks. The authors first propose a normalized curvature metric (that is invariant to the scaling of the gradient) and provide an upper bound for it in terms of the curvatures and slopes of different layers. Then they move on to the main contribution, which is introducing alternative activation and batch-normalization components that effectively limit the curvature. Finally, they show that when trained with these alternative components, one can achieve the same accuracy while having more stable gradients and more robust networks without a significant training-time increase. In my opinion, the fact that the robustness to input perturbations and the stable gradients come from simply substituting the previous components with the new ones is the biggest advantage. On top of that, in terms of training time, the proposed method is superior to other robust baseline approaches.
While I'm recommending acceptance, I strongly suggest that the authors make the following improvements for the camera-ready to increase the impact of their work significantly:
1) More data & more models: Current experiments are very limited in terms of architectures and datasets. I suggest adding ImageNet results and two other small datasets (say SVHN and Fashion MNIST). Also, I suggest repeating the experiments on some non-ResNet architectures as well.
2) The authors have argued that the stable gradients are useful for interpretability. While this might have been established separately, it is still interesting to have at least one experiment to demonstrate this.
3) Even though this method is faster than other robust baselines, it is still 1.6x slower than standard training, which is a significant limitation for the adoption of these components. I don't see an inherent reason for this slowness. It would be great if the implementation could be further optimized in terms of running time.
4) One thing that I think is missing at the moment is a clear bottom line written somewhere regarding what we can conclude from the experiments (all this information is in the paper, but I think it would be better if it was highlighted somehow as “main conclusions” or something of that form).
| val | [
"JZpqzt7DwG",
"0iaUuALz2Y",
"ochVdMC1-qs",
"sQWC5C3HF9lW",
"5SLhjEOJEKu",
"o-wD4WUKBR8",
"zD7GhHhZY08l",
"us46KUhO7Fv",
"nIDB1WMcsTC",
"pvjCZxhFzZZ",
"pnQmxQxCc_p",
"B1eulxPs6b1",
"-3s1Q1tdHxqP",
"Csq4uO2Pbir",
"xmf6QGAsbpN",
"DIAe_58tGii",
"3PJUxD-sawK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the author for the kind response. I would like to discuss more about the reponses. \n\n(1) \"if the loss at a point is scaled by a large value\", I am wondering why and in which case would we need to be invariant to such scaling? In my opinion, the definiation of curvature should focus more on the r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"ochVdMC1-qs",
"B1eulxPs6b1",
"5SLhjEOJEKu",
"pnQmxQxCc_p",
"pvjCZxhFzZZ",
"-3s1Q1tdHxqP",
"nIDB1WMcsTC",
"nIDB1WMcsTC",
"3PJUxD-sawK",
"DIAe_58tGii",
"xmf6QGAsbpN",
"Csq4uO2Pbir",
"nips_2022_2B2xIJ299rx",
"nips_2022_2B2xIJ299rx",
"nips_2022_2B2xIJ299rx",
"nips_2022_2B2xIJ299rx",
"ni... |
nips_2022_wzJcEb5Mm4 | Log-Polar Space Convolution Layers | Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) layer, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC. | Accept | Thank you for the submission. After reading the paper (twice) and all reviews, my summary can be found below.
Overall, the reviewers agree that the paper has some positive aspects, namely:
+ The idea of log-polar convolution makes intuitive sense since the rectangular pixel quantization is more an artifact of the capture and storage systems rather than a feature of nature.
+ A practical pooling method for easily incorporating the method into existing SW, HW, and networks.
+ The proposed approach is generic and applicable to most existing CNNs, and it is easy to implement.
+ The choice of experiments is motivated well.
However, there is mainly one deficiency reported:
- The method seems to bring some gain in performance, but it also results in slower training and inference, and it requires additional hyperparameters.
I would like to thank the authors for their rebuttal; it helped a lot to improve the paper. I also want to thank the reviewers for their discussions. After the rebuttal, it seems that most concerns have been resolved, and even though the average score is borderline accept/weak accept, I believe that the paper could be accepted.
| train | [
"NQpP0hsYVTF",
"LxWp3IlOOMc",
"My_TYplhe7m",
"pIWffy5tjZr",
"-MoZQ09t8wO",
"v1omVemuHoW",
"cHjeW5v1ij",
"kFPAGLPG8L",
"pM5zl3jH76p",
"CdVmVInpzWo",
"HdS03J0H6Bd",
"2keeX9uJ_Cn",
"deVFAp0d74Q",
"Kv1QZlvrf8Z",
"i7jraXfSgY",
"T6KIk6o0RC",
"teGaBdosNKc",
"9sMHoByZV7_",
"ehbdACsCJw",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
" Thank you very much again for your valuable and constructive comments, which we believe have improved our manuscript significantly.",
" I have updated my rating, thanks for your rebuttal",
" Thank you very much again for your valuable and constructive comments, which we believe have improved our manuscript si... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"LxWp3IlOOMc",
"pM5zl3jH76p",
"pIWffy5tjZr",
"HdS03J0H6Bd",
"nips_2022_wzJcEb5Mm4",
"cHjeW5v1ij",
"kFPAGLPG8L",
"2keeX9uJ_Cn",
"i7jraXfSgY",
"9sMHoByZV7_",
"0SbBHV_UvN9",
"deVFAp0d74Q",
"Kv1QZlvrf8Z",
"R9VtCgRC11Y",
"T6KIk6o0RC",
"teGaBdosNKc",
"eJNBzl9TPzg",
"ehbdACsCJw",
"Zswns... |
nips_2022_g2cM5983pw | Reinforcement Learning with Logarithmic Regret and Policy Switches | In this paper, we study the problem of regret minimization for episodic Reinforcement Learning (RL) both in the model-free and the model-based setting. We focus on learning with general function classes and general model classes, and we derive results that scale with the eluder dimension of these classes. In contrast to the existing body of work that mainly establishes instance-independent regret guarantees, we focus on the instance-dependent setting and show that the regret scales logarithmically with the horizon $T$, provided that there is a gap between the best and the second best action in every state. In addition, we show that such a logarithmic regret bound is realizable by algorithms with $O(\log T)$ switching cost (also known as adaptivity complexity). In other words, these algorithms rarely switch their policy during the course of their execution. Finally, we complement our results with lower bounds which show that even in the tabular setting, we cannot hope for regret guarantees lower than $O(\log T)$. | Accept | Reviewers are generally positive about the paper and I see that this paper's techniques are differentiated from KSWY 21. Please make sure you address all the reviewers' comments and incorporate them (and any new experimental results, if applicable) in your camera-ready. | train | [
"OdzKCxRXbMA",
"3dOkuFzf6Ju",
"uCeSa7igjKP",
"8NSNbIjJs1b",
"39wnMjlji52",
"Rbz6an1SaSR",
"tBqUvSoGmJZ",
"aUcoEQNvETq",
"7zMcta8fafo"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for addressing my questions. I remain positive about this paper. ",
" We would like to thank the reviewer for finding our paper very well-written and for mentioning that our results are important to the RL community. \n\n(**Technical improvements**) We want to now comment on the improvements... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"8NSNbIjJs1b",
"aUcoEQNvETq",
"7zMcta8fafo",
"tBqUvSoGmJZ",
"Rbz6an1SaSR",
"nips_2022_g2cM5983pw",
"nips_2022_g2cM5983pw",
"nips_2022_g2cM5983pw",
"nips_2022_g2cM5983pw"
] |
nips_2022_peZSbfNnBp4 | Ensemble of Averages: Improving Model Selection and Boosting Performance in Domain Generalization | In Domain Generalization (DG) settings, models trained independently on a given set of training domains have notoriously chaotic performance on distribution shifted test domains, and stochasticity in optimization (e.g. seed) plays a big role. This makes deep learning models unreliable in real world settings. We first show that this chaotic behavior exists even along the training optimization trajectory of a single model, and propose a simple model averaging protocol that both significantly boosts domain generalization and diminishes the impact of stochasticity by improving the rank correlation between the in-domain validation accuracy and out-domain test accuracy, which is crucial for reliable early stopping. Taking advantage of our observation, we show that instead of ensembling unaveraged models (that is typical in practice), ensembling moving average models (EoA) from independent runs further boosts performance. We theoretically explain the boost in performance of ensembling and model averaging by adapting the well known Bias-Variance trade-off to the domain generalization setting. On the DomainBed benchmark, when using a pre-trained ResNet-50, this ensemble of averages achieves an average of $68.0\%$, beating vanilla ERM (w/o averaging/ensembling) by $\sim 4\%$, and when using a pre-trained RegNetY-16GF, achieves an average of $76.6\%$, beating vanilla ERM by $\sim 6\%$. | Accept | This work introduces a hyperparameter-free model averaging and ensembling scheme for the domain generalization setting that achieves state-of-the-art results. Though the proposed method is simple and is related to existing techniques, the strong and comprehensive evaluation demonstrates that it is highly effective and therefore is of interest to the community. In addition, the paper is clear and well written, with careful analyses and useful theoretical intuitions. In summary, this paper will be a useful addition to the NeurIPS program. | train | [
"0NcicIb_zSj",
"SVbeSVJ9hL",
"Khl6wISP2_",
"v3QQKLv_-fb",
"gcRUFtC4LHN",
"UaTZgEPyLTEY",
"jK03_HJcg-6",
"xfbseAS7FlR",
"jvfe-17fxif",
"yiIlrV5CaHO",
"KzCIuiQvFBJ",
"s0nrVNAEeF6",
"kf8U-1wx0zk",
"ByLBr90TY42",
"N6BlG6P6e5q"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your questions.\n\n1. Intuitively, the value of $t\\_0$ acts as a trade-off between the bias and the variance term in the bias-variance decomposition discussed in our work. Specifically, we find that the expected loss on the out-domain data can be factorized into a bias and a variance term. By using... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"SVbeSVJ9hL",
"jK03_HJcg-6",
"gcRUFtC4LHN",
"UaTZgEPyLTEY",
"yiIlrV5CaHO",
"jvfe-17fxif",
"N6BlG6P6e5q",
"ByLBr90TY42",
"ByLBr90TY42",
"kf8U-1wx0zk",
"s0nrVNAEeF6",
"nips_2022_peZSbfNnBp4",
"nips_2022_peZSbfNnBp4",
"nips_2022_peZSbfNnBp4",
"nips_2022_peZSbfNnBp4"
] |
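A minimal sketch of the moving-average plus ensembling protocol summarized in the record above (nips_2022_peZSbfNnBp4): keep a running average of each run's weights from iteration t0 onward, then ensemble the averaged models by averaging their softmax outputs. The tiny linear model, synthetic batches, and the t0 value are placeholders, and batch-norm buffers are ignored for brevity.

```python
import copy
import torch

def train_with_average(model, batches, t0):
    avg, n_avg = copy.deepcopy(model), 0
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for t, (x, y) in enumerate(batches):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        if t >= t0:                          # cumulative moving average of the weights
            n_avg += 1
            for p_avg, p in zip(avg.parameters(), model.parameters()):
                p_avg.data += (p.data - p_avg.data) / n_avg
    return avg

def ensemble_of_averages(avg_models, x):
    probs = [torch.softmax(m(x), dim=-1) for m in avg_models]
    return torch.stack(probs).mean(0)        # EoA prediction

torch.manual_seed(0)
batches = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(50)]
runs = [train_with_average(torch.nn.Linear(16, 2), batches, t0=25) for _ in range(3)]
pred = ensemble_of_averages(runs, torch.randn(4, 16))
```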
nips_2022_L74c-iUxQ1I | Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs | The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, some interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle to saddle dynamics. | Accept | This paper studies the gradient flow dynamics for a one-hidden layer ReLU network with square loss and orthogonal inputs. In particular, the authors show that gradient flow converges towards a global minimum for sufficiently small initialization and that the solution consists in the minimum $\ell_2$ norm interpolator. Interesting insights are provided on the various phases of the dynamics, which is characterized by an initial alignment, followed by fitting the positive labels, then the negative ones and a final convergence. A saddle to saddle dynamics (conjectured in earlier work) is also unveiled.
The assumption of orthogonal inputs is rather strict, and it is the main weakness of the paper. That being said, the reviewers and this AC agree that the results of this paper are novel and interesting for the NeurIPS community. Hence, I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the discussions related to the feedback received from the reviewers (in particular, the experiments in higher dimensional settings).
| train | [
"SN5nAxUQhX",
"eU04r7FgtqH",
"-n-vGP_wAHB",
"Pvs7ma9pf3G",
"9us3zBHzYFv",
"DnzM6Yr-pC",
"MBPWcrl1-oJ",
"zI6NhVPUTXb",
"dOiW_UjaKQw",
"kMi-AjmZH1",
"nxvAvhS3y92"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for the detailed answers to my questions! I have also read all other reviews and the authors' responses, and would like to keep my score unchanged. I would recommend this paper be accepted.",
" I thank the authors for the clarifications. After taking the time to read through your comments, I h... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"Pvs7ma9pf3G",
"DnzM6Yr-pC",
"nxvAvhS3y92",
"kMi-AjmZH1",
"dOiW_UjaKQw",
"MBPWcrl1-oJ",
"zI6NhVPUTXb",
"nips_2022_L74c-iUxQ1I",
"nips_2022_L74c-iUxQ1I",
"nips_2022_L74c-iUxQ1I",
"nips_2022_L74c-iUxQ1I"
] |
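The saddle-to-saddle picture described in this record is easy to observe numerically. The toy script below uses plain GD as a crude stand-in for gradient flow; dimensions, learning rate, and initialisation scale are assumptions. With a small initialisation, the printed loss typically shows long plateaus separated by sharp drops.

```python
import torch

torch.manual_seed(0)
d, m = 8, 50                                   # input dimension, hidden width
X, y = torch.eye(d), torch.randn(d)            # orthonormal inputs, mixed-sign labels
W = (1e-4 * torch.randn(m, d)).requires_grad_()
a = (1e-4 * torch.randn(m)).requires_grad_()

opt = torch.optim.SGD([W, a], lr=0.2)
for step in range(40_001):
    loss = ((torch.relu(X @ W.T) @ a - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 4_000 == 0:
        print(step, f"{loss.item():.6f}")      # plateaus, then sharp drops
```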
nips_2022_TwyEk7HzJb6 | Bandit Learning in Many-to-one Matching Markets with Uniqueness Conditions | An emerging line of research is dedicated to the problem of one-to-one matching markets with bandits, where the preference of one side is unknown and thus we need to match while learning the preference through multiple rounds of interaction. However, in many real-world applications such as online recruitment platforms for short-term workers, one side of the market can select more than one participant from the other side, which motivates the study of the many-to-one matching problem. Moreover, the existence of a unique stable matching is crucial to the competitive equilibrium of the market. In this paper, we first introduce a new, more general \textit{$\tilde{\alpha}$}-condition to guarantee the uniqueness of stable matching in many-to-one matching problems, which generalizes some established uniqueness conditions such as \textit{SPC} and \textit{Serial Dictatorship}, and recovers the known $\alpha$-condition if the problem is reduced to one-to-one matching. Under this new condition, we design an MO-UCB-D4 algorithm with an $O\left(\frac{NK\log(T)}{\Delta^2}\right)$ regret bound, where $T$ is the time horizon, $N$ is the number of agents, $K$ is the number of arms, and $\Delta$ is the minimum reward gap. Extensive experiments show that our algorithm achieves uniformly good performance under different uniqueness conditions. | Reject | The paper generalizes one of the results of Basu et al.'s ICML'21 paper to the case of many-to-one matchings, showing a distributed multi-armed bandits mechanism with logarithmic regret for finding a stable matching assuming a strong uniqueness condition on the input. I believe the paper does not meet the bar for acceptance into NeurIPS due to the following major weaknesses:
* The paper is not very well written and it is hard to understand in places.
* It is not clear how difficult the generalization is, given the existing result in the one-to-one matching case.
* The assumptions of the model, in particular the uniqueness assumption, are very strong, and restrict the result to a very limited (and fragile) set of input instances. | train | [
"b7-_Eum6n9R",
"s56M792hF4e",
"sfy4JzALXC",
"9pWxICTT5h",
"LxyBh2am4C",
"tQte4F_lGbq",
"UDWjBQWhJ2d",
"ioskVP36J8r",
"lGUirLdKmuQ",
"7cOTSSWOVIb",
"-qfKeQ9PRWC",
"IHEsFmF1PGS",
"h6MwgelLdB",
"np6O9RAAU1Z",
"Q5SeB5i6E7S",
"HSA9yhXfhAv",
"dcJ684SFdaU",
"rO36HAjdy5R"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We apologize we have misunderstood your question on the ``minimum reward gap''. We agree that the case of indifferent agents would be more general. However, as far as we know, a lot of works studying the traditional (offline) matching markets would assume preferences to be strict [3, 8, 9, 10, 11], perhaps due to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"s56M792hF4e",
"LxyBh2am4C",
"9pWxICTT5h",
"IHEsFmF1PGS",
"7cOTSSWOVIb",
"UDWjBQWhJ2d",
"ioskVP36J8r",
"lGUirLdKmuQ",
"rO36HAjdy5R",
"-qfKeQ9PRWC",
"dcJ684SFdaU",
"h6MwgelLdB",
"HSA9yhXfhAv",
"Q5SeB5i6E7S",
"nips_2022_TwyEk7HzJb6",
"nips_2022_TwyEk7HzJb6",
"nips_2022_TwyEk7HzJb6",
... |
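For readers unfamiliar with the offline object underlying this record, below is a minimal sketch of many-to-one stable matching via capacitated, agent-proposing deferred acceptance. The bandit algorithm in the paper must additionally *learn* the agent-side preferences through interaction; this sketch takes them as given, and the preference lists are illustrative inputs.

```python
def deferred_acceptance(agent_prefs, arm_rank, capacity):
    """agent_prefs[i]: list of arms in agent i's descending preference order.
    arm_rank[j][i]: rank of agent i for arm j (lower = more preferred).
    capacity[j]: how many agents arm j can accept."""
    next_choice = {i: 0 for i in agent_prefs}      # next arm index to propose to
    held = {j: [] for j in arm_rank}               # tentatively accepted agents
    free = list(agent_prefs)
    while free:
        i = free.pop()
        if next_choice[i] >= len(agent_prefs[i]):
            continue                               # agent i exhausted its list
        j = agent_prefs[i][next_choice[i]]
        next_choice[i] += 1
        held[j].append(i)
        held[j].sort(key=lambda a: arm_rank[j][a]) # arm keeps its best agents
        if len(held[j]) > capacity[j]:
            free.append(held[j].pop())             # worst proposer is rejected
    return held

# Example: 3 agents, 2 arms, arm 0 can take two agents.
prefs = {0: [0, 1], 1: [0, 1], 2: [0, 1]}
ranks = {0: {0: 0, 1: 1, 2: 2}, 1: {0: 0, 1: 1, 2: 2}}
print(deferred_acceptance(prefs, ranks, {0: 2, 1: 1}))  # {0: [0, 1], 1: [2]}
```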
nips_2022_lMrpZ-ycIaT | Self-Supervised Learning of Brain Dynamics from Broad Neuroimaging Data | Self-supervised learning techniques are celebrating immense success in natural language processing (NLP) by enabling models to learn from broad language data at unprecedented scales. Here, we aim to leverage the success of these techniques for mental state decoding, where researchers aim to identify specific mental states (e.g., the experience of anger or joy) from brain activity. To this end, we devise a set of novel self-supervised learning frameworks for neuroimaging data inspired by prominent learning frameworks in NLP. At their core, these frameworks learn the dynamics of brain activity by modeling sequences of activity akin to how NLP models sequences of text. We evaluate the frameworks by pre-training models on a broad neuroimaging dataset spanning functional Magnetic Resonance Imaging data from 11,980 experimental runs of 1,726 individuals across 34 datasets and subsequently adapting the pre-trained models to benchmark mental state decoding datasets. The pre-trained models transfer well, generally outperforming baseline models trained from scratch, while models trained in a learning framework based on causal language modeling clearly outperform the others. | Accept | This submission solicited interesting discussions between the reviewers and the authors, and was seen as bringing interesting ideas to the table. The idea of masked self-supervised pre-training for brain imaging is an exciting one.
One strong concern is that the evaluation is not very conclusive, and it is not clear that the present work gives solid evidence that the pre-training actually is beneficial. | train | [
"8nkpSKnJfrH",
"r-hisdI9v1",
"UCi19Q0a4O",
"xiqxjFMfxr9",
"eurAcL_YnBz",
"kiB3jxY8OtE",
"lEp-iS484c",
"5z6L0nDuHH9",
"KVrx8NlaM4s",
"iq_XfO0heF",
"wpGTFY6PRid",
"Rkp3gVLpzu",
"Q2ne640gAbe",
"TN-pSLtZtj8",
"zKYxQpoqcV",
"Je_nxUPJdcP",
"vbP0VsUIehO",
"77_0MlWcZSV",
"XsQLIrkNu_F",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" We thank the reviewer for these final remarks and for reconsidering their initial rating of our manuscript.\n\nWhile supervised pre-training is extremely challenging (if not impossible) to implement across many neuroimaging datasets (as outlined in our response to Q2), we would like to point out that other empiri... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"r-hisdI9v1",
"edkqbYfZ6nO",
"eurAcL_YnBz",
"TN-pSLtZtj8",
"zKYxQpoqcV",
"lEp-iS484c",
"wpGTFY6PRid",
"KVrx8NlaM4s",
"iq_XfO0heF",
"Yo3yEDoXCv",
"Rkp3gVLpzu",
"Q2ne640gAbe",
"TN-pSLtZtj8",
"zKYxQpoqcV",
"edkqbYfZ6nO",
"vbP0VsUIehO",
"77_0MlWcZSV",
"XmLXIvNgooH",
"nips_2022_lMrpZ-... |
nips_2022_aPgQdvSAuw | On Translation and Reconstruction Guarantees of the Cycle-Consistent Generative Adversarial Networks | The task of unpaired image-to-image translation has witnessed a revolution with the introduction of the cycle-consistency loss to Generative Adversarial Networks (GANs). Numerous variants, with Cycle-Consistent Adversarial Network (CycleGAN) at their forefront, have shown remarkable empirical performance. The involvement of two unalike data spaces and the existence of multiple solution maps between them are some of the facets that make such architectures unique. In this study, we investigate the statistical properties of such unpaired data translator networks between distinct spaces, bearing the additional responsibility of cycle-consistency. In a density estimation setup, we derive sharp non-asymptotic bounds on the translation errors under suitably characterized models. This, in turn, points out sufficient regularity conditions that maps must obey to carry out successful translations. We further show that cycle-consistency is achieved as a consequence of the data being successfully generated in each space based on observations from the other. In a first-of-its-kind attempt, we also provide deterministic bounds on the cumulative reconstruction error. In the process, we establish tolerable upper bounds on the discrepancy responsible for ill-posedness in such networks. | Accept | This paper analyzes the consistency of cycle-consistent GANs. The authors provide theoretical insights into how the information is preserved by ReLU DNN translators via functional analysis. They additionally show the equivalence of L1 and 1-W distance under the cyclic loss context. The highlight of this paper is the rigorous statistical analysis and fundamental theoretical insights in generative models. A minor point is that the reviewers suggest that a more illustrative and intuitive presentation of the results would smooth the reading. In general it is an interesting paper and the AC recommends acceptance. | train | [
"SkNPYXOyeDS",
"2cSHzqZftX",
"QlpL4GKDSDkN",
"6LwBcgbAedC",
"OvX92Nu9raz",
"D0FgV4ey61B",
"MyIT1T7XFiO",
"aMixiRl_zWp",
"Wv-rEvIo-i5"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" In the supplementary material, we have added diagrams highlighting the simultaneous translations and regeneration in the space $\\\\mathcal{Y}$. This is to maintain space economy in the main paper, and we will integrate the illustrations with our discussions seamlessly in the final version.",
" We thank the rev... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
1
] | [
"2cSHzqZftX",
"Wv-rEvIo-i5",
"aMixiRl_zWp",
"MyIT1T7XFiO",
"D0FgV4ey61B",
"nips_2022_aPgQdvSAuw",
"nips_2022_aPgQdvSAuw",
"nips_2022_aPgQdvSAuw",
"nips_2022_aPgQdvSAuw"
] |
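For reference, the cycle-consistency objective analysed in this record reduces to a round-trip reconstruction penalty; the translators `G` and `F` below are placeholders for any pair of networks between the two spaces.

```python
import torch

def cycle_consistency_loss(G, F, x, y):
    """L1 reconstruction error after a round trip through both translators."""
    forward_cycle = (F(G(x)) - x).abs().mean()    # x -> Y -> back to X
    backward_cycle = (G(F(y)) - y).abs().mean()   # y -> X -> back to Y
    return forward_cycle + backward_cycle
```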
nips_2022_mjVmifxpKqS | Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning | In this paper, we study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches. In the multi-batch reinforcement learning framework, the agent is required to provide a time schedule for policy updates in advance, which is particularly suitable for scenarios where the agent suffers extensively from changing the policy adaptively. Given a finite-horizon MDP with $S$ states, $A$ actions and planning horizon $H$, we design a computationally efficient algorithm to achieve near-optimal regret of $\tilde{O}(\sqrt{SAH^3K\ln(1/\delta)})$\footnote{$\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$} in $K$ episodes using $O\left(H+\log_2\log_2(K) \right)$ batches with confidence parameter $\delta$.
To the best of our knowledge, this is the first $\tilde{O}(\sqrt{SAH^3K})$ regret bound with $O(H+\log_2\log_2(K))$ batch complexity. Meanwhile, we show that to achieve $\tilde{O}(\mathrm{poly}(S,A,H)\sqrt{K})$ regret, the number of batches is at least $\Omega\left(H/\log_A(K)+ \log_2\log_2(K) \right)$, which matches our upper bound up to logarithmic terms.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore over the unlearned states; 2) a computationally efficient algorithm to explore certain directions with an approximated transition model. | Accept | The paper studies the batch RL problem, in which the algorithm first decides a switching schedule and then switches policies based on this schedule. The proposed approach achieves a good regret upper bound matching existing non-batch algorithms (although the lower-order terms are still large). The batch complexity, on the other hand, matches the lower bound (up to log factors).
The reviewers believe that the theoretical contributions are solid and merit publication in NeurIPS. The authors did a good job in addressing the computational complexity in the rebuttal phase. The meta-reviewer suggests that the authors further clarify presentation issues. Also, it would be good to cite the recent RL theory papers in the tabular setting (including those with a generative model). | train | [
"lMmFXiltNu",
"YVl6D34wdXu",
"rnVHwGJSSsh",
"aMwWrGshz4A",
"93R7NjFPnKs",
"X9Z6WgBXFzm",
"b-N-OHSHY5S",
"4jeYi5D7txC",
"61IkiBW7CE",
"ZSFcyNrH2LR",
"7Q2WbaglQz0",
"bgHx11XD4x"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the suggestion. [Qiao et. al.] works on a similar task, but they mainly consider deterministic policies, while the policies in our algorithm might be non-deterministic. We will carefully discuss this paper in the related works.",
" Thank you for the response. I am happy to keep my original score.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"YVl6D34wdXu",
"X9Z6WgBXFzm",
"b-N-OHSHY5S",
"bgHx11XD4x",
"7Q2WbaglQz0",
"ZSFcyNrH2LR",
"61IkiBW7CE",
"nips_2022_mjVmifxpKqS",
"nips_2022_mjVmifxpKqS",
"nips_2022_mjVmifxpKqS",
"nips_2022_mjVmifxpKqS",
"nips_2022_mjVmifxpKqS"
] |
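The $O(\log_2\log_2 K)$ term in the batch complexity above comes from a classic geometric schedule that can be fixed before learning starts. The sketch below shows only that part; the paper's algorithm spends an additional $O(H)$ batches on exploration, which is not reproduced here.

```python
import math

def batch_endpoints(K):
    """Endpoints T_i = floor(K^(1 - 2^-i)); the last batch runs to episode K.
    Uses M = ceil(log2 log2 K) batches in total."""
    M = math.ceil(math.log2(math.log2(K)))
    ends = [math.floor(K ** (1 - 2.0 ** -i)) for i in range(1, M)]
    return ends + [K]

print(batch_endpoints(10**6))  # -> [1000, 31622, 177827, 421696, 1000000]
```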
nips_2022_k_XHLBD4qPO | On Overcompression in Continual Semantic Segmentation | Class-Incremental Semantic Segmentation (CISS) is an emerging challenge of Continual Learning (CL) in Computer Vision. In addition to the well-known issue of catastrophic forgetting, CISS suffers from the semantic drift of the background class, further increasing forgetting. Existing attempts aim to solve this using pseudo-labelling, knowledge distillation or model freezing. We argue and demonstrate that frozen or rigid models suffer from poor expressibility due to overcompression. We improve on these methods by focusing on the offline training process and the expressiveness of the learnt representations. Beyond the characterisation and demonstration of this issue in terms of the Information Bottleneck principle, we show the benefit of two practical measures: (1) using shared but wider convolution modules before final classifiers to improve scaling for new, continual tasks; (2) introducing dropout into the encoder-decoder architecture to improve regularisation and decrease the overcompression of information in the representation space. We improve the IoU on the 15-1 and 10-1 scenarios by over 2% and 3% respectively while maintaining a smaller memory and MAdds footprint. Last, we propose a new benchmark setting that lies closer to the nature of lifelong learning to drive the development of more realistic and valuable architectures in the future. | Reject | This paper deals with continual learning in semantic segmentation. The authors introduce a wider convolution at the final feature-extraction layer and apply dropout to limit the overcompression issue.
No reviewer was convinced by the approach, and the reviewers raised many issues, including the model design choices, the training protocol, and missing experiments.
No rebuttal has been provided by the authors.
As it is, this submission is not ready for publication, and we encourage the authors to consider the reviewers' feedback when preparing a future submission. | train | [
"XSoq65WvzP",
"1j_m1ns1uk_",
"LW0O1nuXiCy",
"78y1VyNOZHe",
"76Mp7njk-le",
"rfBZHDc0SE9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author did not submit a response to my concerns, so I keep the original score.",
" The authors have not responded to my concerns, therefore I stick to my initial score.",
" - The paper considers the problem of continual learning on semantic segmentation.\n\n- The authors propose two simple ideas to improv... | [
-1,
-1,
4,
3,
4,
3
] | [
-1,
-1,
4,
4,
3,
3
] | [
"78y1VyNOZHe",
"rfBZHDc0SE9",
"nips_2022_k_XHLBD4qPO",
"nips_2022_k_XHLBD4qPO",
"nips_2022_k_XHLBD4qPO",
"nips_2022_k_XHLBD4qPO"
] |
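The two practical measures in this record translate into a small architectural sketch: a shared-but-wider convolution block with dropout, feeding one lightweight classifier per incremental task. Channel widths and the dropout rate below are assumptions, not the paper's exact values.

```python
import torch.nn as nn

class WideSharedHead(nn.Module):
    def __init__(self, in_ch=256, width=1024, drop=0.3):
        super().__init__()
        self.width = width
        self.shared = nn.Sequential(                 # wider shared module
            nn.Conv2d(in_ch, width, 3, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Dropout2d(drop),                      # fights overcompression
        )
        self.classifiers = nn.ModuleList()           # one 1x1 head per learned task

    def add_task(self, num_classes):
        self.classifiers.append(nn.Conv2d(self.width, num_classes, 1))

    def forward(self, feats):
        h = self.shared(feats)
        return [head(h) for head in self.classifiers]
```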
nips_2022_vsNQkquutZk | WaveBound: Dynamic Error Bounds for Stable Time Series Forecasting | Time series forecasting has become a critical task due to its high practicality in real-world applications such as traffic, energy consumption, economics and finance, and disease analysis. Recent deep-learning-based approaches have shown remarkable success in time series forecasting. Nonetheless, due to the dynamics of time series data, deep networks still suffer from unstable training and overfitting. Inconsistent patterns appearing in real-world data lead the model to be biased to a particular pattern, thus limiting the generalization. In this work, we introduce dynamic error bounds on the training loss to address the overfitting issue in time series forecasting. Consequently, we propose a regularization method called WaveBound which estimates the adequate error bounds of training loss for each time step and feature at each iteration. By allowing the model to focus less on unpredictable data, WaveBound stabilizes the training process, thus significantly improving generalization. Through extensive experiments, we show that WaveBound consistently improves upon the existing models by large margins, including the state-of-the-art model. | Accept | The reviewers highlight the novelty of the method, the clarity of writing, and the consistent performance improvements over baselines. Initial concerns by the reviewers related to missing related work, missing empirical evaluation within the more mature short-term forecasting experimental paradigm, and missing empirical comparisons (comparing to reversible instance norm, isolation of the effect of EMA) were addressed by the authors during the discussion period. Some concerns around the overall motivation behind the proposed approach remain, but are outweighed by the empirical effectiveness. | train | [
"VUdpOEYBD9M",
"LtZcOW6yjS",
"GOQbi3cKXv2",
"MannFcGC0paU",
"O5xsKoA3Q-5",
"gqwO24icD1",
"txzSLNeHQ90",
"g-NxUWoUhkk",
"SzE8_Fj40YO",
"8NN7cKcfqgC",
"MQqige1Hhw",
"pfT1ENMX1Qs",
"5dLvwiMnzqN",
"WxQRSt7jPo",
"TKBrJeew3-",
"zpiLhnsgnDy",
"mGSlZjXkUJb",
"H3O8fFnDnsA",
"QNLn8OkqgTR",... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thank you very much for the rebuttal response, my concerns are addressed. I will stand my position as accept.",
" Dear reviewers,\n\nWe greatly appreciate your efforts in reviewing our manuscript.\n\nWe hope that our responses and discussions have addressed the reviewers’ concerns.\n\nIn case of any further iss... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"WxQRSt7jPo",
"nips_2022_vsNQkquutZk",
"O5xsKoA3Q-5",
"5dLvwiMnzqN",
"NeNzOeQQ8oM",
"QNLn8OkqgTR",
"QNLn8OkqgTR",
"DQfi2mcoMaE",
"NeNzOeQQ8oM",
"NeNzOeQQ8oM",
"LCln1oYjLK5",
"LCln1oYjLK5",
"LCln1oYjLK5",
"DQfi2mcoMaE",
"NeNzOeQQ8oM",
"NeNzOeQQ8oM",
"QNLn8OkqgTR",
"nips_2022_vsNQkqu... |
nips_2022_nP6e73uxd1 | Sampling from Log-Concave Distributions with Infinity-Distance Guarantees | For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ to $\pi$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling in $\varepsilon$ infinity distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity distance that requires at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works that construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error, and present a method to directly convert continuous samples from $K$ with total-variation bounds to samples with infinity bounds. This approach also allows us to obtain an improvement on the dimension $d$ in the running time for the problem of sampling from a log-concave distribution on polytopes $K$ with infinity distance $\varepsilon$, by plugging in TV-distance running time bounds for the Dikin Walk Markov chain. | Accept | Sampling from log-concave distributions is a well studied problem and there are many existing algorithms that can sample from a distribution close to the true distribution up to a small total variation distance. The paper gives a new reduction that can use these algorithms as a subroutine to get samples from a distribution close to the true distribution in infinity distance i.e. the densities are close everywhere. This problem commonly arises in differentially private optimization. The reduction is simple and can be implemented easily. All reviewers agree that the paper is a significant contribution to the literature, it is well written, and the algorithm has potential to be useful in practice. | val | [
"RpoYyO9AxD7",
"6hyXSJQZzSA",
"nHuLgyXm0c1",
"xWwpZkGBo5f",
"j0O_2olPsxY",
"Euuf-hNK8RJ",
"QEB7iZoNM1K",
"FIR219hL1X-",
"4OZUT_F-flX",
"G5Lu8HDxqDx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their elaborate response. I maintain that their work is a strong contribution to the literature and I keep my score.",
" Thank you for the clarifications, which addressed all of my concerns. I apologize for missing Remark C.2.\n\nAs I said, this is a solid contribution, and I am happy to... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"Euuf-hNK8RJ",
"nHuLgyXm0c1",
"G5Lu8HDxqDx",
"FIR219hL1X-",
"QEB7iZoNM1K",
"4OZUT_F-flX",
"nips_2022_nP6e73uxd1",
"nips_2022_nP6e73uxd1",
"nips_2022_nP6e73uxd1",
"nips_2022_nP6e73uxd1"
] |
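To make the infinity-distance goal concrete: naive rejection sampling, sketched below, already yields *exact* samples (infinity distance 0) from $\pi \propto e^{-f}$ on a box, but its acceptance rate can decay exponentially with the dimension; the paper's contribution is avoiding that cost with $\mathrm{poly}(d, \log\frac{1}{\varepsilon})$ oracle calls. The concrete `f` below is an illustrative choice.

```python
import math
import random

def rejection_sample(f, f_min, d, lo=0.0, hi=1.0):
    """Exact sample from pi ∝ exp(-f) on the box [lo, hi]^d; f_min <= min f."""
    while True:
        theta = [random.uniform(lo, hi) for _ in range(d)]
        if random.random() < math.exp(-(f(theta) - f_min)):
            return theta

# Example: a Lipschitz f with f_min = 0 on [0, 1]^d.
sample = rejection_sample(lambda t: sum(t), f_min=0.0, d=5)
```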
nips_2022_2Tv54LpM9cK | Distributed Inverse Constrained Reinforcement Learning for Multi-agent Systems | This paper considers the problem of recovering the policies of multiple interacting experts by estimating their reward functions and constraints, where the demonstration data of the experts is distributed to a group of learners. We formulate this problem as a distributed bi-level optimization problem and propose a novel bi-level ``distributed inverse constrained reinforcement learning" (D-ICRL) algorithm that allows the learners to collaboratively estimate the constraints in the outer loop and learn the corresponding policies and reward functions in the inner loop from the distributed demonstrations through intermittent communications. We formally guarantee that the distributed learners asymptotically achieve consensus, which belongs to the set of stationary points of the bi-level optimization problem. | Accept | The paper produces one of the first analyses of distributed inverse reinforcement learning, which is formulated as a bilevel optimization problem. The paper initiates a new line of research, contains a good mix of theoretical and empirical results, and has received relatively high scores from reviewers. | train | [
"-z9ZQpoktqf",
"VwsNlELXS-u",
"E2QXAr_UWO",
"gPGW3vZpLkk",
"ltMS_lGYQcD",
"f08ldIrj0mY",
"0gzLH1ezWd",
"vZgFGbCe0f",
"96cQEv1MvPo",
"rM3yKzMAZxs",
"gEoEftL8TxP",
"w-3rchmTGeZ",
"5SACn4jbMW",
"_X-IRHYntP",
"HPFBxB3k5uY",
"CJveDlqOvyb",
"ywsPlMaJXn",
"EuG2drdgMaB",
"SfJw9fZQUTP",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" I have gone through all the responses. The authors have done a nice job! Some of the concerns are resolved while the authors also admit some limitations, but at least everything is clear. The paper is generally in a good shape. I do encourage the authors to simplify some notations and add more comments to the mai... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"ltMS_lGYQcD",
"CJveDlqOvyb",
"EuG2drdgMaB",
"_X-IRHYntP",
"f08ldIrj0mY",
"0gzLH1ezWd",
"vZgFGbCe0f",
"96cQEv1MvPo",
"iK-dpC5kVyJ",
"gEoEftL8TxP",
"w-3rchmTGeZ",
"5SACn4jbMW",
"YJ6hyFk2alc",
"HPFBxB3k5uY",
"Ss3oFM5YOYJ",
"ywsPlMaJXn",
"EuG2drdgMaB",
"SfJw9fZQUTP",
"nips_2022_2Tv5... |
nips_2022_T-aVFGCSQNV | Early Stage Convergence and Global Convergence of Training Mildly Parameterized Neural Networks | The convergence of GD and SGD when training mildly parameterized neural networks starting from random initialization is studied. For a broad range of models and loss functions, including the widely used square loss and cross entropy loss, we prove an ''early stage convergence'' result. We show that the loss is decreased by a significant amount in the early stage of the training, and this decrease is fast. Furthermore, for exponential-type loss functions, and under some assumptions on the training data, we show global convergence of GD. Instead of relying on extreme over-parameterization, our study is based on a microscopic analysis of the activation patterns for the neurons, which helps us derive gradient lower bounds. The results on activation patterns, which we call ``neuron partition'', help build intuitions for understanding the behavior of neural networks' training dynamics, and may be of independent interest. | Accept | There was an extensive discussion of the article concluding in reviewer recommendations of weak accept, borderline accept, and accept. Although there were some reservations, particularly about the assumptions on the data, model and loss, the reviewers found that the article is well-written, technically sound, that it studies important problems of the neural network training process, and is valuable for providing a novel analysis. Based on these merits, I am recommending accept. However, I will ask that the authors carefully consider the extensive feedback in the preparation of the final manuscript, particularly the comments concerning the presentation and discussion of the assumptions and limitations, and also carefully work on the improvements discussed during the discussion period, particularly the technical detail of the non-Lipschitz gradient, as well as the promised additions, such as the proofs for the quadratic loss in the multi-class setting.
| train | [
"VWpsEkqK3kY",
"I7Zf8ZlG6jd",
"EnHCkO_xRp",
"phNUevbun24",
"ODZxjFR0bP-",
"lshCIEftjQ",
"iaFAa5dSIo",
"k3gnpLARjL3",
"4xlmcXKZq4i",
"VzvToWzM6Dr",
"lKi7FiA9b-k",
"zfn7LGc3kpQ",
"82_vQCTXvyZ",
"7QqoHeLCPLh",
"yYo3kaaFQB",
"JD8aKYuJclL",
"YwfxUQsoe-2",
"fwfZ3ZeKcHD",
"BgZWX22Cidn",... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer for providing an excellent way to deal with the use of $\\lesssim$. Indeed, it is helpful to give the scale of the hidden constant if the constant is complex. In our revised version, we fixed this issue by either providing the exact hidden constant or giving its scale, such as Line 146, Line... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"VzvToWzM6Dr",
"lKi7FiA9b-k",
"lKi7FiA9b-k",
"lshCIEftjQ",
"iaFAa5dSIo",
"82_vQCTXvyZ",
"zfn7LGc3kpQ",
"hQpOxLjsk3B",
"wB6HbiMEDTi",
"BgZWX22Cidn",
"fwfZ3ZeKcHD",
"yYo3kaaFQB",
"yYo3kaaFQB",
"YwfxUQsoe-2",
"YwfxUQsoe-2",
"qgtt6tsBAbE",
"tQqCVCCYT-M",
"L5e6lKiigg",
"L5e6lKiigg",
... |
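The "neuron partition" idea in this record — tracking which ReLU activation sign patterns the data induces — can be probed with a few lines of code. Sizes and the training setup below are toy assumptions.

```python
import torch

torch.manual_seed(0)
X, y = torch.randn(200, 10), torch.randn(200)
W = (0.1 * torch.randn(64, 10)).requires_grad_()
a = (0.1 * torch.randn(64)).requires_grad_()
opt = torch.optim.SGD([W, a], lr=0.05)

for step in range(2001):
    pre = X @ W.T                                  # pre-activations, (200, 64)
    loss = ((torch.relu(pre) @ a - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        patterns = {tuple(row) for row in (pre > 0).int().tolist()}
        print(step, f"{loss.item():.4f}", len(patterns))  # loss, partition size
```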
nips_2022_FhyrZ92DcI9 | Task-level Differentially Private Meta Learning | We study the problem of meta-learning with task-level differential privacy. Meta-learning has received increasing attention recently because of its ability to enable fast generalization to new tasks with a small number of data points. However, the training process of meta learning likely involves the exchange of task-specific information, which may pose a privacy risk especially in some privacy-sensitive applications. Therefore, it is important to provide strong privacy guarantees such that the learning process will not reveal any task-sensitive information. To this end, existing works have proposed meta learning algorithms with record-level differential privacy, which is not sufficient in many scenarios since it does not protect the aggregated statistics based on the task dataset as a whole. Moreover, the utility guarantees in the prior work are based on assuming that the loss function satisfies both smoothness and quadratic growth conditions, which do not necessarily hold in practice. To address these issues, we propose meta learning algorithms with task-level differential privacy; that is, our algorithms protect the privacy of the entire dataset for each task. In the case when a single meta model is trained, we give both privacy and utility guarantees assuming only that the loss is convex and Lipschitz. Moreover, we propose a new private clustering-based meta-learning algorithm that enables private meta learning of multiple meta models. This can provide significant accuracy gains over the single meta model paradigm, especially when the task distribution cannot be well represented by a single meta model. Finally, we conduct several experiments demonstrating the effectiveness of our proposed algorithms. | Accept | The reviews offered lukewarm but positive support for the paper. There were some concerns regarding the practicality of the algorithms mentioned in the paper. Also, I took a quick look at the paper. The setup seems similar to model personalization (https://papers.nips.cc/paper/2018/hash/aa97d584861474f4097cf13ccb5325da-Abstract.html), where folks have analyzed non-convex models with DP. The paper misses citing this line of work. I will request the authors to include the discussions from the rebuttal phase, and also add a discussion to compare with the work along the lines of DP personalization (mentioned above). | train | [
"1AtHxR1nZ0",
"B3wz0XOMAaX",
"86OaXNcwAyL",
"_Pj-KjVorB",
"Q_TIUgYGhwQ",
"glal3alFTDa"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the valuable feedback. Please, see our response to your comments and questions below.\n\n- Regarding the comment about the limited contribution of our work, we respectfully disagree with the reviewer. First, our proposed method is not a straightforward application of DP optimization techniques to exis... | [
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
3,
3,
4
] | [
"glal3alFTDa",
"Q_TIUgYGhwQ",
"_Pj-KjVorB",
"nips_2022_FhyrZ92DcI9",
"nips_2022_FhyrZ92DcI9",
"nips_2022_FhyrZ92DcI9"
] |
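The core task-level DP ingredient described in this record can be sketched as clip-then-noise aggregation where the unit of privacy is an entire task's update, not a single record. The clip norm and noise multiplier below are assumptions; calibrating `sigma` to a target $(\epsilon, \delta)$ is omitted.

```python
import torch

def private_meta_update(task_updates, clip=1.0, sigma=0.8):
    """task_updates: list of per-task update vectors (one per task)."""
    clipped = []
    for u in task_updates:
        scale = torch.clamp(clip / (u.norm() + 1e-12), max=1.0)
        clipped.append(u * scale)                     # sensitivity of sum <= clip
    noise = sigma * clip * torch.randn_like(clipped[0])
    return (torch.stack(clipped).sum(dim=0) + noise) / len(task_updates)
```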
nips_2022_vkhYWVtfcSQ | Surprise Minimizing Multi-Agent Learning with Energy-based Models | Multi-Agent Reinforcement Learning (MARL) has demonstrated significant success by virtue of collaboration across agents. Recent work, on the other hand, introduces surprise which quantifies the degree of change in an agent’s environment. Surprise-based learning has received significant attention in the case of single-agent entropic settings but remains an open problem for fast-paced dynamics in multi-agent scenarios. A potential alternative to address surprise may be realized through the lens of free-energy minimization. We explore surprise minimization in multi-agent learning by utilizing the free energy across all agents in a multi-agent system. A temporal Energy-Based Model (EBM) represents an estimate of surprise which is minimized over the joint agent distribution. Our formulation of the EBM is theoretically akin to the minimum conjugate entropy objective and highlights suitable convergence towards minimum surprising states. We further validate our theoretical claims in an empirical study of multi-agent tasks demanding collaboration in the presence of fast-paced dynamics. Our implementation and agent videos are available at the anonymous Project Webpage. | Accept | All reviewers appreciated the quality of the paper, its contributions, clarity and theoretical justification, and novelty. I agree with the reviewers in recommending acceptance. | train | [
"wHOp9VopJ6RR",
"UDj1uTVHlu8",
"SPCITI3zX1T",
"m8j_lZIcY58",
"G1Anbyj_E3I",
"IL-NcPMyn6C",
"fPqF3vci6Go",
"NYX23mdnp1u",
"CMQk42HQM9",
"mGOHo9D431u",
"SbDJU3xhXn1",
"EGHchTNrR1b",
"jm579kx3LQn",
"0Mvbaqw_LJ",
"zR0dQQk5XLu",
"5QlLIrC1-B",
"_hT28Z_aWtS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you! We would be happy to address your remaining concerns. Kindly let us know any changes/suggestions which would be required for you to support a stronger acceptance of our work.",
" The responses have addressed most of my concerns. I'd like to raise the score. ",
" Thanks for addressing my comments! N... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
5
] | [
"UDj1uTVHlu8",
"jm579kx3LQn",
"SbDJU3xhXn1",
"zR0dQQk5XLu",
"IL-NcPMyn6C",
"fPqF3vci6Go",
"NYX23mdnp1u",
"CMQk42HQM9",
"_hT28Z_aWtS",
"5QlLIrC1-B",
"5QlLIrC1-B",
"zR0dQQk5XLu",
"zR0dQQk5XLu",
"nips_2022_vkhYWVtfcSQ",
"nips_2022_vkhYWVtfcSQ",
"nips_2022_vkhYWVtfcSQ",
"nips_2022_vkhYWV... |
nips_2022_5OLcPQaYTVg | Learning Predictions for Algorithms with Predictions | A burgeoning paradigm in algorithm design is the field of algorithms with predictions, in which algorithms can take advantage of a possibly-imperfect prediction of some aspect of the problem. While much work has focused on using predictions to improve competitive ratios, running times, or other performance measures, less effort has been devoted to the question of how to obtain the predictions themselves, especially in the critical online setting. We introduce a general design approach for algorithms that learn predictors: (1) identify a functional dependence of the performance measure on the prediction quality and (2) apply techniques from online learning to learn predictors, tune robustness-consistency trade-offs, and bound the sample complexity. We demonstrate the effectiveness of our approach by applying it to bipartite matching, ski-rental, page migration, and job scheduling. In several settings we improve upon multiple existing results while utilizing a much simpler analysis, while in the others we provide the first learning-theoretic guarantees. | Accept | The paper introduces a general design approach for algorithms that learn predictors. This is achieved by identifying a functional dependence of the performance measure on the prediction quality, and applying techniques from online learning to learn predictors against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees. The problem is well-motivated and the proposed solution is general and interesting. The majority of the reviewers' concerns were addressed by the rebuttal. The paper will be significantly stronger if the authors use the additional page to incorporate the feedback into the main paper. | test | [
"qjpKg-2bBZ5",
"kYPVpgJUN7",
"RDCyZxkMTpS",
"aXVo3-XwEwG",
"7EdylqYrBIz",
"QPp9q3Wcjn3",
"9M_MLs0SdcP",
"adB7Q_c9Faf",
"PQWWxjkEpue",
"TUIehjG7hVY"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response! I believe my questions and concerns have been addressed. I would recommend accept. ",
" Thank you for your positive review and insightful suggestions. We hope to address your comments and questions below:\n1. [*I am not convinced that the paper exhausts all its potential. Particularl... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
5
] | [
"kYPVpgJUN7",
"TUIehjG7hVY",
"PQWWxjkEpue",
"adB7Q_c9Faf",
"9M_MLs0SdcP",
"nips_2022_5OLcPQaYTVg",
"nips_2022_5OLcPQaYTVg",
"nips_2022_5OLcPQaYTVg",
"nips_2022_5OLcPQaYTVg",
"nips_2022_5OLcPQaYTVg"
] |
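Ski-rental, one of the four applications listed in this record, is the simplest place to see the robustness-consistency trade-off that the learned predictors tune. Below is the standard deterministic algorithm with a prediction (in the style of Purohit et al.); `lam` is the trade-off knob.

```python
import math

def ski_rental_cost(b, true_days, predicted_days, lam=0.5):
    """Rent at 1/day; buying costs b. Buy earlier when the prediction says
    the season is long, later when it says the season is short."""
    buy_day = math.ceil(lam * b) if predicted_days >= b else math.ceil(b / lam)
    if true_days < buy_day:
        return true_days                  # rented every day, never bought
    return (buy_day - 1) + b              # rented buy_day - 1 days, then bought

opt = lambda b, n: min(n, b)              # offline optimum
# Correct prediction: competitive ratio <= 1 + lam (here 1.4 <= 1.5).
print(ski_rental_cost(10, true_days=100, predicted_days=100) / opt(10, 100))
```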
nips_2022_UmFSx2c4ubT | GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks | Spiking Neural Networks (SNNs) have been studied over decades to incorporate their biological plausibility and leverage their promising energy efficiency. Throughout existing SNNs, the leaky integrate-and-fire (LIF) model is commonly adopted to formulate the spiking neuron and evolves into numerous variants with different biological features. However, most LIF-based neurons support only a single biological feature in different neuronal behaviors, limiting their expressiveness and neuronal dynamic diversity. In this paper, we propose GLIF, a unified spiking neuron, to fuse different bio-features in different neuronal behaviors, enlarging the representation space of spiking neurons. In GLIF, gating factors, which are exploited to determine the proportion of the fused bio-features, are learnable during training. Combining all learnable membrane-related parameters, our method can make spiking neurons different and constantly changing, thus increasing the heterogeneity and adaptivity of spiking neurons. Extensive experiments on a variety of datasets demonstrate that our method obtains superior performance compared with other SNNs by simply changing their neuronal formulations to GLIF. In particular, we train a spiking ResNet-19 with GLIF and achieve $77.35\%$ top-1 accuracy with six time steps on CIFAR-100, which has advanced the state-of-the-art. Codes are available at https://github.com/Ikarosy/Gated-LIF. | Accept | Although the scores are borderline, the reviewers appreciate the proposed model as a novel and interesting generalization of multiple existing models. The authors ironed out a bug in the evaluation that was caught by a reviewer, and report that the results still hold (on a larger dataset). | train | [
"AmiSdQvCIZ",
"5PrOuTArK_O",
"GatWMzp7WmN",
"HMp-Tbhjdg4",
"_55dtET3mS",
"MUbPygbi_Q",
"vaMk9qxbAuX",
"QrdNLBv1h0N",
"kAO7cfdwpE1",
"l3HpzROjZrC",
"Gwf0q0h-nPf",
"GVb6UPnlR5g",
"9l9e_N2utF4",
"ePDByLj59hv"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your time in commenting and further clarifying your concerns. We want to add more responses to your concern. And, we realy really hope that you would re-consider your rating.\n\n1. From the perspective of a single spiking neuron, it is true that a GLIF-based neuron only behaves with fixed bio-features.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"GatWMzp7WmN",
"HMp-Tbhjdg4",
"GVb6UPnlR5g",
"vaMk9qxbAuX",
"GVb6UPnlR5g",
"nips_2022_UmFSx2c4ubT",
"ePDByLj59hv",
"9l9e_N2utF4",
"GVb6UPnlR5g",
"Gwf0q0h-nPf",
"nips_2022_UmFSx2c4ubT",
"nips_2022_UmFSx2c4ubT",
"nips_2022_UmFSx2c4ubT",
"nips_2022_UmFSx2c4ubT"
] |
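A hedged sketch of the gated-LIF idea in this record: a learnable sigmoidal gate interpolates between two reset behaviours of a LIF neuron. This mirrors the paper only schematically (the actual GLIF fuses several bio-features across neuronal behaviors; names and defaults below are assumptions), and a real training loop would need a surrogate gradient for the spike nonlinearity.

```python
import torch
import torch.nn as nn

class GatedLIF(nn.Module):
    def __init__(self, n, tau=0.5, v_th=1.0):
        super().__init__()
        self.tau = nn.Parameter(torch.full((n,), tau))   # learnable leak
        self.gate = nn.Parameter(torch.zeros(n))         # learnable fusion factor
        self.v_th = v_th

    def forward(self, u, spike_prev, current):
        g = torch.sigmoid(self.gate)                     # gating factor in (0, 1)
        hard = self.tau * u * (1 - spike_prev)           # hard-reset leak term
        soft = self.tau * u - self.v_th * spike_prev     # soft-reset leak term
        u = g * hard + (1 - g) * soft + current          # gated fusion of behaviours
        return u, (u >= self.v_th).float()               # forward pass only

neuron = GatedLIF(128)
u = spk = torch.zeros(128)
for t in range(6):                                       # e.g. six time steps
    u, spk = neuron(u, spk, torch.randn(128))
```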