paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_Tocn9vYMU-o | Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization | Stochastic approximation is a foundation for many algorithms found in machine learning and optimization. It is in general slow to converge: the mean square error vanishes as $O(n^{-1})$. A deterministic counterpart known as quasi-stochastic approximation is a viable alternative in many applications, including gradient-free optimization and reinforcement learning. It was assumed in prior research that the optimal achievable convergence rate is $O(n^{-2})$. It is shown in this paper that through design it is possible to obtain far faster convergence, of order $O(n^{-4+\delta})$, with $\delta>0$ arbitrary. Two techniques are introduced for the first time to achieve this rate of convergence. The theory is also specialized within the context of gradient-free optimization, and tested on standard benchmarks. The main results are based on a combination of novel application of results from number theory and techniques adapted from stochastic approximation theory.
| Accept | All reviewers are happy with the new ideas and strength of results in this paper. | train | [
"AVmvurJWfpI",
"FfuKFiWBYEP",
"Iy8DMA08YbH4",
"-SCDLbEIQJK",
"lw3TYGTQ7A",
"b3xB4J-tZDo",
"C0nROcIqnU"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" You may be reassured to learn that we discovered an error in our statement on dependencies: the exponential bound in Baker’s Theorem concerns $K$ (the number of frequencies) and NOT $d$. Origin of error: we took $K=d$ to simplify notation. We apologize for the confusion this caused!\n\nTo avoid assuming $K=d$ we ... | [
-1,
-1,
-1,
7,
7,
7,
5
] | [
-1,
-1,
-1,
2,
4,
3,
4
] | [
"FfuKFiWBYEP",
"Iy8DMA08YbH4",
"nips_2022_Tocn9vYMU-o",
"nips_2022_Tocn9vYMU-o",
"nips_2022_Tocn9vYMU-o",
"nips_2022_Tocn9vYMU-o",
"nips_2022_Tocn9vYMU-o"
] |
nips_2022_KnCS9390Va | Delving into Out-of-Distribution Detection with Vision-Language Representations | Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 56.60% (FPR95). Code is available at https://github.com/deeplearning-wisc/MCM. | Accept | This paper presents an interesting and novel try at using vision-language multi-modal models for OOD tasks. The experiments are sufficiently validated on diverse OOD datasets. Besides, the theoretical explanations of softmax scaling are quite insightful. All reviewers give positive scores. During the discussion phase, the authors have accordingly added more ablation studies and comparisons. Despite some reviewers raising concerns about the possible limited novelty of using softmax scaling for the CLIP-like model. The authors give sufficient discussions and provide theoretical justifications, which will inspire the community. The meta-reviewers thus recommend accepting it. | val | [
"ctwF8MuUcJ0",
"stRrQs0_uKR",
"NyVmZhF_e3W",
"Mhc4BckSlxA",
"iBJNBtirTrH",
"8p6jBZZNpQ4",
"o1V_lBZpmzW",
"0bt8oU7Sw5i",
"6QyAptC8YIa",
"MNAJ_LD2K7D"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >**Q1: Comparisons with recent works**.\n\nThanks for the suggestion! For the Energy score, please refer to Appendix F.1 for a detailed discussion where we investigate the effectiveness of Energy score based on CLIP. For GradNorm, as suggested, we provide the results as follows. For reference, we also paste the r... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"MNAJ_LD2K7D",
"6QyAptC8YIa",
"Mhc4BckSlxA",
"0bt8oU7Sw5i",
"o1V_lBZpmzW",
"nips_2022_KnCS9390Va",
"nips_2022_KnCS9390Va",
"nips_2022_KnCS9390Va",
"nips_2022_KnCS9390Va",
"nips_2022_KnCS9390Va"
] |
nips_2022_wsnMW0c_Au | Non-convex online learning via algorithmic equivalence | We study an algorithmic equivalence technique between non-convex gradient descent and convex mirror descent. We start by looking at a harder problem of regret minimization in online non-convex optimization. We show that under certain geometric and smoothness conditions, online gradient descent applied to non-convex functions is an approximation of online mirror descent applied to convex functions under reparameterization. In continuous time, the gradient flow with this reparameterization was shown to be \emph{exactly} equivalent to continuous-time mirror descent by Amid and Warmuth, but theory for the analogous discrete time algorithms is left as an open problem. We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, answering this open problem. Our analysis is based on a new and simple algorithmic equivalence method. | Accept | The main result of the paper is on establishing an approximate equivalence between online gradient descent (OGD) on non-convex losses with online mirror descent (OMD) on convex reparametrizations of the losses. In a previous result by Amid and Warmuth, which applies to the the continuous-time setting, we have exact conditions where the gradient flow with this reparameterization is exactly equivalent to continuous-time mirror descent. However, the current paper focuses on providing discrete time algorithms. The main theoretical contribution of the paper is to provide sufficient general conditions for the approximate equivalence to go through, leading to an O(T^{2/3}) regret bound for OGD on non-convex losses.
The paper was discussed to a good extent between the authors and the reviewers, and among the reviewers. The reviewers (and the AC) agreed that the paper provides a novel analysis which is based on a simple, elegant, and novel algorithmic equivalence method. Most of the concerns of the reviewers were addressed during the discussion phase. However, the reviewers still felt that the paper lacks relevant and important applications where the approximate equivalence (and reparametrization) would apply to and would lead to non-trivial results. All in all, the reviewers agreed on the decision that the paper lies on the acceptance border (with inclination towards acceptance).
| train | [
"5i15t3JQNhs",
"ucoGOk5DtfJ",
"rOMGs9EYVox",
"Ruic9DVpJcu",
"XVEPq5fmkbG",
"zXFYYqwu2s",
"8HafODwSscP",
"UQCsHcl-9jr",
"mCI7V19I9bBJ",
"zqw03J3uZb",
"NQJPBsy6Ylp",
"a_thCunR3HB",
"iHnW-aoBPbm",
"LKJiHm-8mdK",
"A-6bJ3iWq_h"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have checked the comments of Reviewer x2pL and Reviewer NwoT. As they bring up some issues that I didn't raise, e.g., the issue of G raised by Reviewer NwoT, I tend to keep the score as is. \n\nIn my opinion, the title of the paper sounds grandiose, but the result (a bit) falls short of it. ",
" I would like ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"zqw03J3uZb",
"NQJPBsy6Ylp",
"Ruic9DVpJcu",
"XVEPq5fmkbG",
"zXFYYqwu2s",
"8HafODwSscP",
"UQCsHcl-9jr",
"mCI7V19I9bBJ",
"a_thCunR3HB",
"A-6bJ3iWq_h",
"LKJiHm-8mdK",
"iHnW-aoBPbm",
"nips_2022_wsnMW0c_Au",
"nips_2022_wsnMW0c_Au",
"nips_2022_wsnMW0c_Au"
] |
nips_2022_ePZsWeGJXyp | VICRegL: Self-Supervised Learning of Local Visual Features | Most recent self-supervised methods for learning image representations focus on either producing a global feature with invariance properties, or producing a set of local features. The former works best for classification tasks while the latter is best for detection and segmentation tasks. This paper explores the fundamental trade-off between learning local and global features. A new method called VICRegL is proposed that learns good global and local features simultaneously, yielding excellent performance on detection and segmentation tasks while maintaining good performance on classification tasks. Concretely, two identical branches of a standard convolutional net architecture are fed two differently distorted versions of the same image. The VICReg criterion is applied to pairs of global feature vectors. Simultaneously, the VICReg criterion is applied to pairs of local feature vectors occurring before the last pooling layer. Two local feature vectors are attracted to each other if their l2-distance is below a threshold or if their relative locations are consistent with a known geometric transformation between the two input images. We demonstrate strong performance on linear classification and segmentation transfer tasks. Code and pretrained models are publicly available at: https://github.com/facebookresearch/VICRegL | Accept | This paper proposes to extend the existing VICReg objective to the local features for obtaining good performances on both image-level and dense prediction tasks. In specific, while the global features are obtained by an average pooling on the output feature maps, the local pairs are determined by both of the feature distance and spatial location distance. The technical novelty seems to be somewhat incremental due to a little bit simple modification of the existing global objective to the local objective for dense representation learning. 
However, extensive experiments on several benchmarks including ablations and visualization clearly demonstrate the effectiveness of the proposed self-supervised representation learning for both classification and segmentation (+detection) tasks. Especially, the authors faithfully addressed most concerns and questions raised by the reviewers, and the overall quality of the paper seems to be significantly improved. Therefore, I would recommend to accept this paper. | train | [
"ErcZ8BkBZID",
"bWhBiQfpSQW",
"gECQ5SqHGBk",
"7x3eOY9YjrE",
"oiNqX_FOF8V",
"hfsIEFMuKz3",
"0Cj_bsag4ON",
"QYU__mHEN8H",
"t0aTYzLHKA",
"Tfdd3NDhUgU",
"4kP0SFotBe6"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 7) *L127, L135, 181: Is there a motivation / intuition on why it is better to keep only the top- pairs? Together with the l2-distance loss, this could reinforce the intuition that the l2-distance loss act as a regularizer/booster as it would be applied only to feature vectors that are already close to each other,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"bWhBiQfpSQW",
"4kP0SFotBe6",
"7x3eOY9YjrE",
"Tfdd3NDhUgU",
"hfsIEFMuKz3",
"0Cj_bsag4ON",
"t0aTYzLHKA",
"nips_2022_ePZsWeGJXyp",
"nips_2022_ePZsWeGJXyp",
"nips_2022_ePZsWeGJXyp",
"nips_2022_ePZsWeGJXyp"
] |
nips_2022_C0VKVmhlKgb | Bayesian Clustering of Neural Spiking Activity Using a Mixture of Dynamic Poisson Factor Analyzers | Modern neural recording techniques allow neuroscientists to observe the spiking activity of many neurons simultaneously. Although previous work has illustrated how activity within and between known populations of neurons can be summarized by low-dimensional latent vectors, in many cases what determines a unique population may be unclear. Neurons differ in their anatomical location, but also, in their cell types and response properties. Moreover, multiple distinct populations may not be well described by a single low-dimensional, linear representation. To tackle these challenges, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (DPFA) model, with the number of clusters treated as an unknown parameter. To do the analysis of DPFA model, we propose a novel Markov chain Monte Carlo (MCMC) algorithm to efficiently sample its posterior distribution. Validating our proposed MCMC algorithm with simulations, we find that it can accurately recover the true clustering and latent states and is insensitive to the initial cluster assignments. We then apply the proposed mixture of DPFA model to multi-region experimental recordings, where we find that the proposed method can identify novel, reliable clusters of neurons based on their activity, and may, thus, be a useful tool for neural data analysis. | Accept | The authors present a mixture of dynamic Poisson factor analyzers (sometimes called Poisson linear dynamical systems) model. The model itself is not especially novel (it seems closely related to Poisson switching linear dynamical systems) but the authors make up for it with a well-developed approach to Bayesian inference. 
They allow for unknown numbers of states with a mixture of finite mixtures model, and they handle the nonconjugacy of the Poisson-Gaussian model with a Metropolis-corrected Polya-gamma augmentation scheme. The empirical results do not show a clear improvement on Neuropixels recordings from the Allen Brain Observatory, but again the assessment is thorough. Overall, I think the paper conveys valuable findings and ideas, and lays a nice foundation for future work. I encourage the authors to incorporate and address the reviewers' feedback when preparing the final manuscript, and to consider expanding the discussion of how the Mix-DPFA model relates to Poisson SLDS. | train | [
"8Bkd2ncM--",
"_gGDPGRfbtZ",
"chPO2_UMwbG",
"UGjE7cFR8pm",
"kMY-PLTAdMDu",
"TvaeALXzDw7",
"rkFm9kksFip",
"oyBrKEbhwIE",
"bLslKcgePFE",
"LubRU5yvC5"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for these additional comments.\n\nA comparison with switching models, looking at decoding by cluster, and examining the latent states in more detail would all be interesting directions for future work.\n\nFor low firing rates - we *include* 72% of neurons in the analysis. We've now tried to clarify in the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"UGjE7cFR8pm",
"chPO2_UMwbG",
"TvaeALXzDw7",
"kMY-PLTAdMDu",
"LubRU5yvC5",
"bLslKcgePFE",
"oyBrKEbhwIE",
"nips_2022_C0VKVmhlKgb",
"nips_2022_C0VKVmhlKgb",
"nips_2022_C0VKVmhlKgb"
] |
nips_2022_hX5Ia-ION8Y | MCVD - Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation | Video prediction is a challenging task. The quality of video frames from current state-of-the-art (SOTA) generative models tends to be poor and generalization beyond the training data is difficult.
Furthermore, existing prediction frameworks are typically not capable of simultaneously handling other video-related tasks such as unconditional generation or interpolation. In this work, we devise a general-purpose framework called Masked Conditional Video Diffusion (MCVD) for all of these video synthesis tasks using a probabilistic conditional score-based denoising diffusion model, conditioned on past and/or future frames. We train the model in a manner where we randomly and independently mask all the past frames or all the future frames. This novel but straightforward setup allows us to train a single model that is capable of executing a broad range of video tasks, specifically: future/past prediction -- when only future/past frames are masked; unconditional generation -- when both past and future frames are masked; and interpolation -- when neither past nor future frames are masked. Our experiments show that this approach can generate high-quality frames for diverse types of videos. Our MCVD models are built from simple non-recurrent 2D-convolutional architectures, conditioning on blocks of frames and generating blocks of frames. We generate videos of arbitrary lengths autoregressively in a block-wise manner. Our approach yields SOTA results across standard video prediction and interpolation benchmarks, with computation times for training models measured in 1-12 days using $\le$ 4 GPUs.
Project page: \url{https://mask-cond-video-diffusion.github.io}
Code: \url{https://mask-cond-video-diffusion.github.io/} | Accept | The paper proposes the use of diffusion model for masked video modeling and shows promising results in video generation and completion. All of the reviewers agree that the paper is a good fit for publication at NeurIPS. I appreciate that the authors engaged with the reviewers and improved the paper! | val | [
"9PlTjdV9Mgo",
"BBzbHKYzo_d",
"OQSuyrQ2kZW",
"4y67BdnXS0A",
"9Dg0poRrsnr",
"iAI4Si7xRQH",
"4tqXOs_c6L0",
"OVsqmx3puN1",
"S_GmF8NtovW",
"h48DDvR1pKF",
"N_74gltQjN4",
"cNMs3nbxeO2",
"xz2yaLp8Np",
"HBAa01cW2Ow"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As asked by the reviewers, we updated the paper with the revisions and highlighted the changes.\n\nNote that we do not yet have the IS and FID results because we were delayed due to problems with the chainer and cupy packages dependencies and we had to contact the authors for further help. We will post the result... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"nips_2022_hX5Ia-ION8Y",
"iAI4Si7xRQH",
"9Dg0poRrsnr",
"h48DDvR1pKF",
"nips_2022_hX5Ia-ION8Y",
"4tqXOs_c6L0",
"xz2yaLp8Np",
"S_GmF8NtovW",
"HBAa01cW2Ow",
"N_74gltQjN4",
"cNMs3nbxeO2",
"nips_2022_hX5Ia-ION8Y",
"nips_2022_hX5Ia-ION8Y",
"nips_2022_hX5Ia-ION8Y"
] |
nips_2022_IFXTZERXdM7 | Solving Quantitative Reasoning Problems with Language Models | Language models have achieved remarkable performance on a wide range of tasks that require natural language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering questions at the college level. To help close this gap, we introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content. The model achieves strong performance in a variety of evaluations, including state-of-the-art performance on the MATH dataset. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a quarter of them. | Accept | This is a very strong solid work that extracts equations from scientific papers and show strong performance in mathematical reasoning. | train | [
"-HyjmSgHTY",
"2DXWcAn_QKM",
"KMX4HyO8xtp",
"umVMDRMKIN-",
"GDyStNZjsS0",
"OUN7kBHG2mD",
"IhNzpc20v2t",
"Q6oP9-nQ8hE",
"JMO2HywlVNq",
"uxZSxiVFd4r",
"YS4dVxZ35XK",
"fWyUVKIcadp",
"4pHn3WfOaXM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors answered most of my queries and there is no change in my rating.",
" Thank you for the detailed response and answering all of my questions.",
" We thank the reviewer for their review and feedback.\nRegarding the overall contribution, we refer the reviewer to the general comment to all reviewers, a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
7,
6,
2,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3,
5
] | [
"IhNzpc20v2t",
"OUN7kBHG2mD",
"4pHn3WfOaXM",
"fWyUVKIcadp",
"YS4dVxZ35XK",
"uxZSxiVFd4r",
"JMO2HywlVNq",
"nips_2022_IFXTZERXdM7",
"nips_2022_IFXTZERXdM7",
"nips_2022_IFXTZERXdM7",
"nips_2022_IFXTZERXdM7",
"nips_2022_IFXTZERXdM7",
"nips_2022_IFXTZERXdM7"
] |
nips_2022_9t24EBSlZOa | Attention-based Neural Cellular Automata | Recent extensions of Cellular Automata (CA) have incorporated key ideas from modern deep learning, dramatically extending their capabilities and catalyzing a new family of Neural Cellular Automata (NCA) techniques. Inspired by Transformer-based architectures, our work presents a new class of _attention-based_ NCAs formed using a spatially localized—yet globally organized—self-attention scheme. We introduce an instance of this class named _Vision Transformer Cellular Automata (ViTCA)_. We present quantitative and qualitative results on denoising autoencoding across six benchmark datasets, comparing ViTCA to a U-Net, a U-Net-based CA baseline (UNetCA), and a Vision Transformer (ViT). When comparing across architectures configured to similar parameter complexity, ViTCA architectures yield superior performance across all benchmarks and for nearly every evaluation metric. We present an ablation study on various architectural configurations of ViTCA, an analysis of its effect on cell states, and an investigation on its inductive biases. Finally, we examine its learned representations via linear probes on its converged cell state hidden representations, yielding, on average, superior results when compared to our U-Net, ViT, and UNetCA baselines. | Accept | As summarized by reviewer GAh5, this paper proposes a novel combination of vision transformers and Neural Cellular Automata (NCAs), and uses them to create denoising autoencoders. An NCA is essentially a cellular automaton with the node updates being performed by a neural net. The ViTCA proposed here adds to this attention heads that are only focused on neighboring cells, and includes positional encodings. The paper demonstrates superior performance to a ViT on denoising autoencoder tasks, and demonstrates the robustness of the model to various perturbations.
All reviewers agree that this is a novel contribution to NCAs. By replacing convolutions with vision transformers, it opens up many possibilities (such as scale). The experiments were conducted with detail and clarity. I do agree with reviewer mu6v, that while the quality of the paper is good, the experiments go fairly in-depth although on just one task of image denoising (which is less interesting than texture generation or other forms of morphogenesis), but even in its current form, it definitely meets the bar for a solid acceptance recommendation from me.
| train | [
"5LpFefJXwUn",
"8vE6vQOrU8h-",
"JnKwUTCyJlrK",
"wXbHChf0BSH",
"6LJ5x9p5mMS",
"ptQKLkLgznS",
"gYJ8vwqzVE",
"dt4B5562MKRR",
"jhNON-Kk9Pb",
"gZfOXtdsND-",
"2ZMclWh6z9g",
"U3uh9evRRB_",
"4torJNPNNtT",
"D14W8q6et0h",
"7RBqn2C5JDa",
"grZhuZoPgge",
"ZWQIU0q-FF2",
"Id7IxNZerzk",
"981whYW... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer... | [
" I thank the authors for their very thorough reply which has indeed helped me understand the paper better and change my mind about it.\n\nA few comments about the reply: \n\nParts 1 and 2: Please disregard my comment on the three datasets, it was an oversight on my part and I apologize for it. \nIndeed this paper ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"wXbHChf0BSH",
"JnKwUTCyJlrK",
"nips_2022_9t24EBSlZOa",
"6LJ5x9p5mMS",
"ptQKLkLgznS",
"gYJ8vwqzVE",
"dt4B5562MKRR",
"jhNON-Kk9Pb",
"gZfOXtdsND-",
"2ZMclWh6z9g",
"U3uh9evRRB_",
"xn1NWE-ONQ",
"D14W8q6et0h",
"7RBqn2C5JDa",
"grZhuZoPgge",
"itN2a3nTANu",
"Id7IxNZerzk",
"981whYWVz_q",
... |
nips_2022_v9Wjc2OWjz | The price of ignorance: how much does it cost to forget noise structure in low-rank matrix estimation? | We consider the problem of estimating a rank-$1$ signal corrupted by structured rotationally invariant noise, and address the following question: \emph{how well do inference algorithms perform when the noise statistics is unknown and hence Gaussian noise is assumed?} While the matched Bayes-optimal setting with unstructured noise is well understood, the analysis of this mismatched problem is only at its premises. In this paper, we make a step towards understanding the effect of the strong source of mismatch which is the noise statistics. Our main technical contribution is the rigorous analysis of a Bayes estimator and of an approximate message passing (AMP) algorithm, both of which incorrectly assume a Gaussian setup. The first result exploits the theory of spherical integrals and of low-rank matrix perturbations; the idea behind the second one is to design and analyze an artificial AMP which, by taking advantage of the flexibility in the denoisers, is able to "correct" the mismatch. Armed with these sharp asymptotic characterizations, we unveil a rich and often unexpected phenomenology. For example, despite AMP is in principle designed to efficiently compute the Bayes estimator, the former is \emph{outperformed} by the latter in terms of mean-square error. We show that this performance gap is due to an incorrect estimation of the signal norm. In fact, when the SNR is large enough, the overlaps of the AMP and the Bayes estimator coincide, and they even match those of optimal estimators taking into account the structure of the noise.
| Accept | This paper studies precise high-dimensional asymptotics in a simple low-rank matrix estimation problem. When there exists a distributional mismatch between the true noise distribution and the Gaussian noise assumption imposed to run the AMP algorithm, the authors observe, and formally quantify, the performance gap between the AMP algorithm and the Bayes estimator. While the model considered in this paper might still be too idealistic (which limits its broader impacts), the paper is well-written and solid. All reviewers recommend acceptance of this paper, and I echo their recommendation. | train | [
"DPtJRWy-pa",
"XEwzERKE196",
"7JbIvSjd7xJ",
"hFgMe16jjIw1",
"kZ8QtRkFOi-",
"PETbs6VDG2y",
"WHZ8e1slaz4",
"Rw8gBrdIjcT",
"7-mndDrPc2D",
"zbikadWyf7v",
"Is06A1nhSVG",
"HpDDQeOazel",
"FaWi64Mr-og",
"RicTV0sC2p",
"5LuitNkWlVR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for detailed response to my comments. I'll raise the rating to 6. ",
" Thank you to the authors for the detailed response, to which I am satisfied. I will raise the rating from 5 to 6.",
" Thank you for the positive feedback and evaluation of our paper.\n\nWe agree with this last comment, an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"zbikadWyf7v",
"WHZ8e1slaz4",
"kZ8QtRkFOi-",
"7-mndDrPc2D",
"PETbs6VDG2y",
"FaWi64Mr-og",
"5LuitNkWlVR",
"5LuitNkWlVR",
"RicTV0sC2p",
"HpDDQeOazel",
"nips_2022_v9Wjc2OWjz",
"nips_2022_v9Wjc2OWjz",
"nips_2022_v9Wjc2OWjz",
"nips_2022_v9Wjc2OWjz",
"nips_2022_v9Wjc2OWjz"
] |
nips_2022_48TmED6BvGZ | Biological Learning of Irreducible Representations of Commuting Transformations | A longstanding challenge in neuroscience is to understand neural mechanisms underlying the brain’s remarkable ability to learn and detect transformations of objects due to motion. Translations and rotations of images can be viewed as orthogonal transformations in the space of pixel intensity vectors. Every orthogonal transformation can be decomposed into rotations within irreducible two-dimensional subspaces (or representations). For sets of commuting transformations, known as toroidal groups, Cohen and Welling proposed a mathematical framework for learning the irreducible representations. We explore the possibility that the brain also learns irreducible representations using a biologically plausible learning mechanism. The first is based on SVD of the anti-symmetrized outer product of the vectors representing consecutive images and is implemented by a single-layer neural network. The second is based on PCA of the difference between consecutive frames and is implemented in a two-layer network but with greater biological plausibility. Both networks learn image rotations (replicating Cohen and Welling’s results) as well as translations. It would be interesting to search for the proposed networks in nascent connectomics and physiology datasets. | Accept | This manuscript presents novel biologically plausible algorithms for learning representations for Lie groups. The derivation of the algorithms and the networks are based on previously studied biologically plausible networks. Although there are some limitations, the reviewers agree that this work is sound, clearly presented, and represents a valuable contribution to both computational/theoretical neuroscience and machine learning. | train | [
"WqoQAyQnQvi",
"P69Cpfz9Hi",
"oQoCOw_gWIm",
"NrJho8N2NBH",
"AWo0WetEIYF",
"mRErNVPW02G",
"34XDhdE1Ohv",
"TMFSzVuwiOf",
"zmYScOqmfHj",
"j8aMGivvtog",
"jNwRq6fspyu",
"joEuus4fSr",
"qrGsgQnB3cj",
"ygpIDpFsm8k"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n### Lack of evidence that the structure of image transformations due to movement is learnt\n\n> The type of results that you provide on real images in Figure 6b-c also witnesses the mismatch between abstraction of the provided results and the concrete problem that was intended to be addressed. Couldn't you **si... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"P69Cpfz9Hi",
"NrJho8N2NBH",
"qrGsgQnB3cj",
"zmYScOqmfHj",
"jNwRq6fspyu",
"ygpIDpFsm8k",
"qrGsgQnB3cj",
"qrGsgQnB3cj",
"joEuus4fSr",
"nips_2022_48TmED6BvGZ",
"nips_2022_48TmED6BvGZ",
"nips_2022_48TmED6BvGZ",
"nips_2022_48TmED6BvGZ",
"nips_2022_48TmED6BvGZ"
] |
nips_2022_Bct2f8fRd8S | The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning | Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially.
We further show that explanations generated by the LLMs may not entail the models’ predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs’ predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good—logically consistent with the input and the prediction—are more likely to co-occur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets. | Accept | The authors perform an analysis that suggests that explanations may not provide reliable signal in few-shot in-context learning, showing that adding explanations yields only minimal gains over raw in-context learning. They then develop an approach to approximate the reliability of predictions automatically using these explanations.
In the initial reviews, the reviewers pointed out issues with the empirical rigor of the study and its framing. However, they seem to have addressed these concerns by narrowing the scope of their contributions and providing additional experiments supporting their claims. | train | [
"7wOapfV8m7",
"gu3l-nETHbi",
"RNd3XKs7237",
"CycfcwLfXx",
"7cOyFuSdTpKV",
"BYl0AIcaXiN",
"eGavVUhIVIK",
"3x8rNW3WYuL",
"SWgkJoKqIln",
"K8GZ4Ryirl8",
"rC5W4rhbJLz",
"9vzRj9H6mTo"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the response! We'd like to respectfully let you know the review score above is not actually updated.",
" I think the new framing of the paper \"The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning\" makes more sense. Though the mathematical reasoning chain of tho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"gu3l-nETHbi",
"BYl0AIcaXiN",
"CycfcwLfXx",
"eGavVUhIVIK",
"9vzRj9H6mTo",
"rC5W4rhbJLz",
"K8GZ4Ryirl8",
"SWgkJoKqIln",
"nips_2022_Bct2f8fRd8S",
"nips_2022_Bct2f8fRd8S",
"nips_2022_Bct2f8fRd8S",
"nips_2022_Bct2f8fRd8S"
] |
nips_2022_-vXEN5rIABY | Inductive Logical Query Answering in Knowledge Graphs | Formulating and answering logical queries is a standard communication interface for knowledge graphs (KGs).
Alleviating the notorious incompleteness of real-world KGs, neural methods achieved impressive results in link prediction and complex query answering tasks by learning representations of entities, relations, and queries. Still, most existing query answering methods rely on transductive entity embeddings and cannot generalize to KGs containing new entities without retraining entity embeddings.
In this work, we study the inductive query answering task where inference is performed on a graph containing new entities with queries over both seen and unseen entities. To this end, we devise two mechanisms leveraging inductive node and relational structure representations powered by graph neural networks (GNNs).
Experimentally, we show that inductive models are able to perform logical reasoning at inference time over unseen nodes, generalizing to graphs up to 500% larger than training ones. Exploring the efficiency–effectiveness trade-off, we find the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scale to graphs of millions of nodes. Code is available at
https://github.com/DeepGraphLearning/InductiveQE | Accept | It is very hard to make a final decision on this paper; the scores are: 4,6,8, and 5. The research problem raised in this paper is interesting and worth further study.
However, reviewers have raised some concerns about the experimental results. In the original paper, they did not compare with any external baselines. In the discussion period, the authors presented additional results with BetaE, which was originally designed for transductive reasoning. Not surprisingly, BetaE delivers poor results. How strong is this comparison? One would expect some experiments with better initialization for unseen entities, e.g. with KB embeddings (e.g. TransE and ConvE used in KBC tasks). | val | [
"78ydZOlM_G",
"KT8qv_ElOMl",
"ubVwLdolJH6",
"WcPGQPDP4nw",
"cp9Q_DGJ-zM",
"qGfbE2rMfQ4",
"gsLDVbtnFHC",
"a3SbpB61uc",
"soEikxEsk7h",
"supK8F7dA1G",
"GRI7twJrfts",
"_Nf1cD8YXfu",
"oWx_x0LvcSV",
"cVY0WL46uV-",
"aAxW4MsAkSt",
"XqTaP3kTYn",
"c8ma3t7hoN",
"H9GfKHAHvI",
"MTd_7R_WP1x",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Thanks for the suggestion - the edge-type baseline should indeed be stronger. \nWe finished implementing this baseline within those hours after your suggestion but did not manage to complete the experimental evaluation until the end of the discussion period. Nevertheless, we will add this baseline's results to bo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"a3SbpB61uc",
"qGfbE2rMfQ4",
"gsLDVbtnFHC",
"soEikxEsk7h",
"oWx_x0LvcSV",
"XqTaP3kTYn",
"aAxW4MsAkSt",
"cVY0WL46uV-",
"MTd_7R_WP1x",
"nips_2022_-vXEN5rIABY",
"_Nf1cD8YXfu",
"V7wszp_QQHJ",
"bOmJy3B1Zrf",
"aAxW4MsAkSt",
"XqTaP3kTYn",
"skRJ9zJA4J",
"H9GfKHAHvI",
"MTd_7R_WP1x",
"QlQy... |
nips_2022_4v7PSPp-TAe | Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games | We study the problem of finding the Nash equilibrium in a two-player zero-sum Markov game. Due to its formulation as a minimax optimization program, a natural approach to solve the problem is to perform gradient descent/ascent with respect to each player in an alternating fashion. However, due to the non-convexity/non-concavity of the underlying objective function, theoretical understandings of this method are limited. In our paper, we consider solving an entropy-regularized variant of the Markov game. The regularization introduces structures into the optimization landscape that make the solutions more identifiable and allow the problem to be solved more efficiently. Our main contribution is to show that under proper choices of the regularization parameter, the gradient descent ascent algorithm converges to the Nash equilibrium of the original unregularized problem. We explicitly characterize the finite-time performance of the last iterate of our algorithm, which vastly improves over the existing convergence bound of the gradient descent ascent algorithm without regularization. Finally, we complement the analysis with numerical simulations that illustrate the accelerated convergence of the algorithm. | Accept | The paper studies the problem of finding the Nash equilibrium of a two-player zero-sum Markov game. Despite nonconvexity, this min-max optimization satisfies the PL condition and hence it is "easy" to solve. The authors rigorously studied the iteration complexity of this problem. The reviewers found the paper well organized and self-contained. The reviewers also found the contributions of the work solid. I recommend acceptance of the work. It would be great if the authors can address the reviewer's concerns and particularly the following concerns in the final version of the paper (most of which were answered in the discussion forum, but not in the paper):
- Please include discussions on various related works (such as [Cen 2021], [Perolat 2015], [Yu 2020]) in the paper. Please avoid having a "laundry list" type of citations, and do your best to discuss each of these works to the extent possible in the paper given one additional page (as done in the discussion forum). In addition, some of the references are closely related. For example, [Yang et al 2020]'s two-sided PL setting is closely related to the paper. Also, [Ostrovskii et al 2021] considers non-Euclidean geometries, which is closely related to entropy regularization.
- Please include more details on the experiments (see the individual reviews)
- Please include the discussion on Assumption 2 (see the individual reviews)
| train | [
"g9PR4n4YizQ",
"qIj7QWvrka1",
"CFJYvRd2-rj",
"PcbgxIoetN",
"zLknPT-bNyc",
"4_cW-71k0YT",
"gfeLn_YR7gm",
"5AsLGrfWnW",
"epqk9yIEyKm",
"vuwS3_bENB",
"r1AsSAqJmnG",
"_VEA53u8QOrU",
"xFHZFLF__dm",
"KPoRmJ2IQHO",
"lEGeYfAZVX"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We do not understand why the reviewer mentioned: \"there is no theoretical advantage of the author's algorithm compared with [Yu 2020]\"? The work in [Yu 2020] is the value-based method and provides regret analysis. On the other hand, our paper is about policy gradient descent ascent and study finite-time analysi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"PcbgxIoetN",
"PcbgxIoetN",
"PcbgxIoetN",
"epqk9yIEyKm",
"4_cW-71k0YT",
"gfeLn_YR7gm",
"lEGeYfAZVX",
"KPoRmJ2IQHO",
"xFHZFLF__dm",
"_VEA53u8QOrU",
"nips_2022_4v7PSPp-TAe",
"nips_2022_4v7PSPp-TAe",
"nips_2022_4v7PSPp-TAe",
"nips_2022_4v7PSPp-TAe",
"nips_2022_4v7PSPp-TAe"
] |
nips_2022_UqA1mcOxiq | Posted Pricing and Dynamic Prior-independent Mechanisms with Value Maximizers | We study posted price auctions and dynamic prior-independent mechanisms for (ROI-constrained) value maximizers. In contrast to classic (quasi-linear) utility maximizers, these agents aim to maximize their total value subject to a minimum ratio of value per unit of payment made. When personalized posted prices are allowed, posted price auctions for value maximizers can be reduced to posted price auctions for utility maximizers. However, for anonymous posted prices, the well-known $\frac{1}{2}$ approximation for utility maximizers is impossible for value maximizers and we provide a posted price mechanism with $\frac{1}{2}(1 - 1/e)$ approximation. Moreover, we demonstrate how to apply our results to design prior-independent mechanisms in a dynamic environment; and to the best of our knowledge, this gives the first constant revenue approximation with multiple value maximizers. Finally, we provide an extension to combinatorial auctions with submodular / XOS agents. | Accept | This paper got uniformly positive reviews. That said, reading into the actual text of the reviews, it is evident that the results are not as strong as one might like. The biggest limitation is that the results rely heavily on personalized pricing for reducing the problem to one of utility maximization for utility maximizers; posted prices are often not a very attractive approach in practice. | train | [
"3ES3Ks_6D_f",
"iVnc14hLMPC",
"CY10lVO6XiO",
"CyQQ5EiuOS",
"WMZqgWw7IWV",
"ixMZdpU4Zx",
"DcmopXNHkc",
"-1sY6yTYjK",
"JWYLke0KTJb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes, that is correct (sorry for the confusion). Please let us know if you have any other questions or comments.",
" Thanks for the response. Perhaps I was unclear, but it seems to me that the proof of Proposition 7 makes use of the fact that ROI = 1. My question was (1) is this necessary and (2) does the 1/4 b... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iVnc14hLMPC",
"ixMZdpU4Zx",
"CyQQ5EiuOS",
"JWYLke0KTJb",
"-1sY6yTYjK",
"DcmopXNHkc",
"nips_2022_UqA1mcOxiq",
"nips_2022_UqA1mcOxiq",
"nips_2022_UqA1mcOxiq"
] |
nips_2022_pAq8iDy00Oa | Incorporating Prior Knowledge into Neural Networks through an Implicit Composite Kernel | It is challenging to guide neural network (NN) learning with prior knowledge. In contrast, many known properties, such as spatial smoothness or seasonality, are straightforward to model by choosing an appropriate kernel in a Gaussian process (GP). Many deep learning applications could be enhanced by modeling such known properties. For example, convolutional neural networks (CNNs) are frequently used in remote sensing, which is subject to strong seasonal effects. We propose to blend the strengths of deep learning and the clear modeling capabilities of GPs by using a composite kernel that combines a kernel implicitly defined by a neural network with a second kernel function chosen to model known properties (e.g., seasonality). Then, we approximate the resultant GP by combining a deep network and an efficient mapping based on the Nystrom approximation, which we call Implicit Composite Kernel (ICK). ICK is flexible and can be used to include prior information in neural networks in many applications. We demonstrate the strength of our framework by showing its superior performance and flexibility on both synthetic and real-world data sets. The code is available at: https://anonymous.4open.science/r/ICK_NNGP-17C5/. | Reject | The submission considers fusing prior knowledge into neural networks by *modulating* the learnt features. The modulation is either additive or multiplicative using another set of features outputted by a kernel-based mapping with linear/periodic kernels. The method is akin to using composite kernels (or combining kernels) in the GP literature and is tested on several regression benchmarks.
The reviewers acknowledged the method is simple and seems to give a desirable performance when the prior knowledge is known. I (AC) read the paper and have several concerns (some of which have already been raised by one or more reviewers):
- the clarity around the connection to composite GP kernels could be improved (there was also a question about the goal of this connection and the authors stated that it is to motivate the proposed approach),
- many desirable properties of GPs (hyperparameter learning using the marginal likelihood, predictive uncertainty) seem to be lost since the objective function is only MSE/MAE and the parameters are not treated probabilistically [the authors promised to consider this in the next iteration].
- limited experiments to show the capability of the ICK framework beyond two data sources, and for various forms of prior knowledge beyond seasonality and the PM 2.5 forecasting task. Some further analyses of the forecasting results + errorbars would be appreciated.
Despite the simplicity of the approach, for the above reasons, I feel that the submission is not ready for publication. I hope, however, to see an updated version published at a conference soon. The reviews are already fairly positive and the discussions here could be useful for polishing up the experiments and writing.
| train | [
"u6V9iRNHe9c",
"Cx5kURhXdkf",
"bRh_QuKc4_",
"J6mSkG7haCD",
"3tkIcg4iOKo",
"uXC-qS3oNFG",
"umpqbFQ_Oyw",
"NdWPtR0YZK6",
"UxNRXpPSfJ0",
"JPyAgg45_Pk",
"OfRRwd4IquA",
"DN8v6MafF-0",
"5vAkcB2IJ4L",
"Iep8FBMFwJE",
"sNcjo6L7ECd",
"lXiq4OAkSCI",
"klSjXjlfBHw",
"XZ3ARXKB-UX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" i thank the authors for their response. i think it's valuable that the authors added some additional empirical evaluations. in my opinion, this is still less empirical evaluation than we would see in an ideal neurips submission of this kind, but it is a step in the right direction. for this reason i have raised m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"JPyAgg45_Pk",
"bRh_QuKc4_",
"J6mSkG7haCD",
"3tkIcg4iOKo",
"DN8v6MafF-0",
"NdWPtR0YZK6",
"UxNRXpPSfJ0",
"XZ3ARXKB-UX",
"klSjXjlfBHw",
"lXiq4OAkSCI",
"lXiq4OAkSCI",
"sNcjo6L7ECd",
"sNcjo6L7ECd",
"nips_2022_pAq8iDy00Oa",
"nips_2022_pAq8iDy00Oa",
"nips_2022_pAq8iDy00Oa",
"nips_2022_pAq8... |
nips_2022_tadPkBL2gHa | Recruitment Strategies That Take a Chance | In academic recruitment settings, including faculty hiring and PhD admissions, committees aim to maximize the overall quality of recruited candidates, but there is uncertainty about whether a candidate would accept an offer if given one. Previous work has considered algorithms that make offers sequentially and are subject to a hard budget constraint. We argue that these modeling choices may be inconsistent with the practice of academic recruitment. Instead, we restrict ourselves to a single batch of offers, and we treat the target number of positions as a soft constraint, so we risk overshooting or undershooting the target. Specifically, our objective is to select a subset of candidates that maximizes the overall expected value associated with candidates who accept, minus an expected penalty for deviating from the target. We first analyze the guarantees provided by natural greedy heuristics, showing their desirable properties despite the simplicity. Depending on the structure of the penalty function, we further develop algorithms that provide fully polynomial-time approximation schemes and constant-factor approximations to this objective. Empirical evaluation of our algorithms corroborates these theoretical results. | Accept | Thank the authors for their submission.
The paper studies a hiring problem, arguably a more realistic formulation compared to prior work. An agent has access to a pool of candidates, each with its own value and probability of accepting a hiring offer. The goal is to select a batch of candidates so as to maximize the cumulative value of the candidates accepting the offer minus a penalty term for deviating from a target number of candidates.
The problem is fully explored considering multiple types of penalty terms. Optimal algorithms are provided whenever possible and otherwise approximation algorithms are shown, all attained using simple greedy strategies. Synthetic experiments are also provided. The paper is well-written and easy to follow. | train | [
"eUfdZLFE8V",
"GNfC88gD2Vh",
"4iuGiJJzgZIG",
"3IKsiNEHCCU",
"qtWb9XwPs3o",
"v6LindN6rao",
"drxf5aJf8Sg",
"ENMtGjIo-BH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are grateful for the kind review. ",
" ### Questions\n\n> In all experiments presented, xGreedy seems to be equivalent to, if not better than, LowValueL1+. What is the reason behind this result? Is it because LowValueL1+ was run with $\\tau=0$? If so, how sensitive is LowValueL1+ to the choice of $\\tau$ and... | [
-1,
-1,
-1,
-1,
5,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"v6LindN6rao",
"ENMtGjIo-BH",
"drxf5aJf8Sg",
"qtWb9XwPs3o",
"nips_2022_tadPkBL2gHa",
"nips_2022_tadPkBL2gHa",
"nips_2022_tadPkBL2gHa",
"nips_2022_tadPkBL2gHa"
] |
nips_2022_XByg4kotW5 | When does return-conditioned supervised learning work for offline reinforcement learning? | Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL). RCSL algorithms learn the distribution of actions conditioned on both the state and the return of the trajectory. Then they define a policy by conditioning on achieving high return. In this paper, we provide a rigorous study of the capabilities and limitations of RCSL, something which is crucially missing in previous work. We find that RCSL returns the optimal policy under a set of assumptions that are stronger than those needed for the more traditional dynamic programming-based algorithms. We provide specific examples of MDPs and datasets that illustrate the necessity of these assumptions and the limits of RCSL. Finally, we present empirical evidence that these limitations will also cause issues in practice by providing illustrative experiments in simple point-mass environments and on datasets from the D4RL benchmark. | Accept | This paper theoretically analyzes a new popular class of RL algorithms, referred to as Return-Conditioned Supervised Learning (RCSL). The paper theoretically shows that this class of methods requires stronger assumptions than standard DP-based approaches for learning the optimal policy. The paper shows that the RCSL method requires near-deterministic dynamics as well as a priori knowledge of the optimal conditioning function to perform well, and constructs examples where the stated assumptions are necessary for empirical performance.
This paper is solid along all four axes: originality, quality, clarity, and significance. All reviewers advocated for acceptance.
| train | [
"_fzlLLAIOy",
"FXUIIrtZ52q",
"Jmz8e5BwH2I",
"hSiukmIQef",
"x_C-ZfMB-U",
"ldH7l9wucwo",
"9RCVDvDLYJP",
"wiQ9caBS4n",
"gL0em4Eoviv",
"cVeFKbJqU2s",
"MfU4GpRjgEN",
"zu_3rwoZkDL",
"IFI56J_2HUo"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for engaging in the discussion!\n\n1. It is still not clear to us why the reviewer thinks that RCSL is just BC. This does not seem to be substantiated and there are key differences, namely the ability to condition on returns and outperform the behavior policy.\n\n2. We want to reiterate that there is no ne... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"x_C-ZfMB-U",
"hSiukmIQef",
"gL0em4Eoviv",
"9RCVDvDLYJP",
"wiQ9caBS4n",
"IFI56J_2HUo",
"zu_3rwoZkDL",
"MfU4GpRjgEN",
"cVeFKbJqU2s",
"nips_2022_XByg4kotW5",
"nips_2022_XByg4kotW5",
"nips_2022_XByg4kotW5",
"nips_2022_XByg4kotW5"
] |
nips_2022_G2kkDEujOw | Detection and Localization of Changes in Conditional Distributions | We study the change point problem that considers alterations in the conditional distribution of an inferential target on a set of covariates. This paired data scenario is in contrast to the standard setting where a sequentially observed variable is analyzed for potential changes in the marginal distribution. We propose new methodology for solving this problem, by starting from a simpler task that analyzes changes in conditional expectation, and generalizing the tools developed for that task to conditional distributions. Large sample properties of the proposed statistics are derived. In empirical studies, we illustrate the performance of the proposed method against baselines adapted from existing tools. Two real data applications are presented to demonstrate its potential. | Accept | This manuscript enjoyed universal recommendation of acceptance from the reviewers after the initial review phase. The reviewers did note several minor issues in these initial reviews, many of which were resolved by insightful responses from the authors. I encourage the authors to edit the manuscript to reflect the insights gained from this interaction when preparing an updated version. | val | [
"vCLuGpdBkT7",
"Wbf6_GTNtyi",
"RUhnJoeQENW",
"raLyEEEBk9n",
"FMgFC3oro57",
"VUtw1EKJqLc",
"a5DdXDn4V3v",
"6OMlzfxnXtf",
"-Bq5pRuM8SG",
"_jBUbG196xD",
"CsEUkqcV07V"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the suggestion. We will look carefully at multiple points in the real data for the final paper (it is easy to implement a binary segmentation algorithm - one for which we have no theoretical guarantees). ",
" Dear authors,\n\nThank you for your response and for adding those discussions to the manu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
3
] | [
"Wbf6_GTNtyi",
"RUhnJoeQENW",
"CsEUkqcV07V",
"_jBUbG196xD",
"-Bq5pRuM8SG",
"6OMlzfxnXtf",
"nips_2022_G2kkDEujOw",
"nips_2022_G2kkDEujOw",
"nips_2022_G2kkDEujOw",
"nips_2022_G2kkDEujOw",
"nips_2022_G2kkDEujOw"
] |
nips_2022_8cC2JeUyz9 | Inference and Sampling for Archimax Copulas | Understanding multivariate dependencies in both the bulk and the tails of a distribution is an important problem for many applications, such as ensuring algorithms are robust to observations that are infrequent but have devastating effects. Archimax copulas are a family of distributions endowed with a precise representation that allows simultaneous modeling of the bulk and the tails of a distribution. Rather than separating the two as is typically done in practice, incorporating additional information from the bulk may improve inference of the tails, where observations are limited. Building on the stochastic representation of Archimax copulas, we develop a non-parametric inference method and sampling algorithm. Our proposed methods, to the best of our knowledge, are the first that allow for highly flexible and scalable inference and sampling algorithms, enabling the increased use of Archimax copulas in practical settings. We experimentally compare to state-of-the-art density modeling techniques, and the results suggest that the proposed method effectively extrapolates to tails while scaling to higher dimensional data. Our findings suggest that the proposed algorithms can be used in a variety of applications where understanding the interplay between the bulk and the tails of a distribution is necessary, such as health and safety. | Accept | The paper proposes a new method for inference and for sampling in Archimax copulas. All the reviewers praised the soundness and clarity of the paper, the novelty of the ideas and the experimental results. Copulas might not be one of the core topics of the NeurIPS community, but the reviewers pointed out that:
1) the authors did a great job of explaining copulas, a valuable tool for modeling extreme events, to the ML community.
2) the method builds a connection between copula and deep generative modeling, and hence opens new research directions.
Hence, they all enthusiastically recommend to accept the paper, and I agree with them.
Some of the reviewers [HCqW, 5Yjr] also supported the idea of highlighting the paper (oral or spotlight presentation). | train | [
"u9bqGMdnKLI",
"Ta3ZHzSIeRa",
"yMLIUaTZw9h",
"QCzw5H8JT0",
"np0mhs2pQZR",
"dcdtRX1qqy",
"0f6s-kDhoXx",
"2Fc5CVGD1Xb",
"dzx55EZlCY3",
"nryJ4J2iNCY",
"ciUthYW50V5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful consideration and response to my comments. I have accordingly raised my score and I recommend acceptance of this paper. ",
" Thank you very much for your careful responses. I am happy with your responses to my comments. In particular, my concerns given in comments (e) and (f) have bee... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"nryJ4J2iNCY",
"0f6s-kDhoXx",
"QCzw5H8JT0",
"ciUthYW50V5",
"nryJ4J2iNCY",
"dzx55EZlCY3",
"2Fc5CVGD1Xb",
"nips_2022_8cC2JeUyz9",
"nips_2022_8cC2JeUyz9",
"nips_2022_8cC2JeUyz9",
"nips_2022_8cC2JeUyz9"
] |
nips_2022_4RC_vI0OgIS | Online Deep Equilibrium Learning for Regularization by Denoising | Plug-and-Play Priors (PnP) and Regularization by Denoising (RED) are widely-used frameworks for solving imaging inverse problems by computing fixed-points of operators combining physical measurement models and learned image priors. While traditional PnP/RED formulations have focused on priors specified using image denoisers, there is a growing interest in learning PnP/RED priors that are end-to-end optimal. The recent Deep Equilibrium Models (DEQ) framework has enabled memory-efficient end-to-end learning of PnP/RED priors by implicitly differentiating through the fixed-point equations without storing intermediate activation values. However, the dependence of the computational/memory complexity of the measurement models in PnP/RED on the total number of measurements leaves DEQ impractical for many imaging applications. We propose ODER as a new strategy for improving the efficiency of DEQ through stochastic approximations of the measurement models. We theoretically analyze ODER giving insights into its convergence and ability to approximate the traditional DEQ approach. Our numerical results suggest the potential improvements in training/testing complexity due to ODER on three distinct imaging applications. | Accept | The paper proposes a learning method (specifically a deep equilibrium learning approach) for 'regularization by denoising', a plug-and-play method for solving inverse problems.
After the rebuttal, all reviewers support acceptance of the paper. The reviewers find the paper to be well written, the problem to be interesting, and the claims to be well supported (reviewer Hjnn), both empirically (reviewer uDGc) and through theory. Reviewer A7f5 finds the work particularly exciting since both memory and training time are reduced, without sacrificing image quality.
Based on my own reading and the unanimous support of the reviewers, I recommend acceptance of the paper. A nice contribution!
| train | [
"TQ4Kro-7fQ",
"Ent0syKxGsu",
"lyjjwlUHJqD",
"KqGCZENygay",
"yII4cz2FVfT",
"68r80VihjmR",
"1MzjP4DS--",
"khte5GsOCdd",
"8H3yiKW1IX3",
"G5ZnOW3zQS",
"5lxy6m_JNsz",
"LmmRuIUs3m3",
"MDe1tMfDUvT",
"Rlrt1ucHTy",
"XZRwyDXwHy8",
"OZRPcwccrx",
"DKaNoObghAl",
"_LcVvjUVdqr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you all again for reviewing our work. An additional thanks to those reviewers that have already read our responses and the area chair for managing the review of our paper. Let us know if there is anything else we can do to improve your evaluation of our work.",
" Dear reviewer, thank you again for reading... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"nips_2022_4RC_vI0OgIS",
"MDe1tMfDUvT",
"68r80VihjmR",
"1MzjP4DS--",
"khte5GsOCdd",
"LmmRuIUs3m3",
"G5ZnOW3zQS",
"5lxy6m_JNsz",
"nips_2022_4RC_vI0OgIS",
"DKaNoObghAl",
"_LcVvjUVdqr",
"OZRPcwccrx",
"XZRwyDXwHy8",
"nips_2022_4RC_vI0OgIS",
"nips_2022_4RC_vI0OgIS",
"nips_2022_4RC_vI0OgIS",... |
nips_2022_Ms6QZafNv01 | Optimal algorithms for group distributionally robust optimization and beyond | Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO problems including group DRO, subpopulation fairness, and empirical conditional value at risk (CVaR) optimization. Our new algorithms achieve faster convergence rates than existing algorithms for multiple DRO settings. We also provide a new information-theoretic lower bound that implies our bounds are tight for group DRO. Empirically, too, our algorithms outperform known methods. | Reject | The main criticism raised by the reviewers was the unconvincing experiments. The reviewers generally liked the simplicity of the method presented the paper, but were unconvinced by the impact/utility of the results. Even though the paper focuses on the convex regime, the paper may benefit from some DL experiments. This is not an uncommon paradigm in research (e.g. Adam and other convex optimization methods with DL experiments, etc.)
There were some other comments that are worth addressing in a future version of the paper (e.g., adding the discussion of mini-batching and improving the exposition).
| val | [
"TMYo0oW7WK",
"FgduSpLX2p9K",
"b3ztScQOEf",
"qEt2Lj4hmUa",
"xRW8ZUnxUpx",
"EWm_lsZerKL",
"0nluoTQP-yx",
"4N3DRIWnVkf",
"lSxj-kCwY_L"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I apologize for the late response. \n\n> We believe that our OCO approach is valuable because it yields an algorithm for a wide range of DRO problems in a very simple way. In fact, the previous work on group DRO [Sagawa et al. 2020] analyzed the convergence of their algorithm with more complicated results of conv... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
3
] | [
"xRW8ZUnxUpx",
"lSxj-kCwY_L",
"4N3DRIWnVkf",
"0nluoTQP-yx",
"EWm_lsZerKL",
"nips_2022_Ms6QZafNv01",
"nips_2022_Ms6QZafNv01",
"nips_2022_Ms6QZafNv01",
"nips_2022_Ms6QZafNv01"
] |
nips_2022_68YyraaeYmc | Exploring through Random Curiosity with General Value Functions | Efficient exploration in reinforcement learning is a challenging problem commonly addressed through intrinsic rewards. Recent prominent approaches are based on state novelty or variants of artificial curiosity. However, directly applying them to partially observable environments can be ineffective and lead to premature dissipation of intrinsic rewards. Here we propose random curiosity with general value functions (RC-GVF), a novel intrinsic reward function that draws upon connections between these distinct approaches. Instead of using only the current observation’s novelty or a curiosity bonus for failing to predict precise environment dynamics, RC-GVF derives intrinsic rewards through predicting temporally extended general value functions. We demonstrate that this improves exploration in a hard-exploration diabolical lock problem. Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments. Panoramic observations on MiniGrid further boost RC-GVF's performance such that it is competitive to baselines exploiting privileged information in form of episodic counts. | Accept | This paper proposes an intrinsic motivation method which by and large extends RND, aiming to service the needs of agents in longer horizon exploration problems. There was a bit of a spread of scores amongst reviewers, but based upon reading the extensive discussion, I am recommending acceptance it seems that on the balance of opinions, the consensus leans towards the reviewers agreeing that the result is important, the method useful, and the paper is of a publishable nature. | train | [
"f7_wdPo8z4",
"-b5w3aWYiI",
"_QgR7ScUIOZ",
"xV1bc4zhTTF",
"PAsWVgHqay4y",
"DpQO-UEvHwW",
"rBDlIamoTe",
"8Rk-JXkvgCQ",
"E16cmcIZ83w",
"rwjpb03mFrt",
"wQkBa9W4th_",
"zV61lGZXnpaZ",
"2diEm9AlHVyA",
"fRa3sPvp3fr",
"eN13k0dAACj",
"5oZgn1xIM7i",
"EyoBvz8pCby",
"K3kr1GNNR2N"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **\"And now that I re-read that section, I believe the authors don't discuss this relationship between scaling the prediction error with essentially an epistemic uncertainty estimate well enough. The authors discuss how the ensemble disagreement vanishes for stochastic states so the prediction error will be multi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"-b5w3aWYiI",
"PAsWVgHqay4y",
"wQkBa9W4th_",
"rwjpb03mFrt",
"8Rk-JXkvgCQ",
"zV61lGZXnpaZ",
"nips_2022_68YyraaeYmc",
"E16cmcIZ83w",
"5oZgn1xIM7i",
"eN13k0dAACj",
"K3kr1GNNR2N",
"2diEm9AlHVyA",
"fRa3sPvp3fr",
"EyoBvz8pCby",
"nips_2022_68YyraaeYmc",
"nips_2022_68YyraaeYmc",
"nips_2022_6... |
nips_2022_U14PKEu18bK | Unsupervised Multi-View Object Segmentation Using Radiance Field Propagation | We present radiance field propagation (RFP), a novel approach to segmenting objects in 3D during reconstruction given only unlabeled multi-view images of a scene. RFP is derived from emerging neural radiance field-based techniques, which jointly encodes semantics with appearance and geometry. The core of our method is a novel propagation strategy for individual objects' radiance fields with a bidirectional photometric loss, enabling an unsupervised partitioning of a scene into salient or meaningful regions corresponding to different object instances. To better handle complex scenes with multiple objects and occlusions, we further propose an iterative expectation-maximization algorithm to refine object masks. To the best of our knowledge, RFP is the first unsupervised approach for tackling 3D scene object segmentation for neural radiance field (NeRF) without any supervision, annotations, or other cues such as 3D bounding boxes and prior knowledge of object class. Experiments demonstrate that RFP achieves feasible segmentation results that are more accurate than previous unsupervised image/scene segmentation approaches, and are comparable to existing supervised NeRF-based methods. The segmented object representations enable individual 3D object editing operations. Codes and datasets will be made publicly available. | Accept | Reviewers are generally positive about the submission, and all recommend acceptance post rebuttal. They appreciate the new formulation and the strong results. The AC agrees and recommends acceptance. | val | [
"IVYBK8XQdAz",
"_3K2UecPweM",
"vx03V4tbzq",
"T_ZS_2SvcwR",
"JhCHXScKUpw",
"gQtcrvtEECk",
"5VkqryGj8WKL",
"GIj7Ama3OjH",
"OM_sZvG6Ivm0",
"t1svVU4CtZXV",
"V5eKKqmd898",
"hfctnz74qn0",
"3rAT5SY2qKT",
"ZSoLdqairaj",
"8djwdb56ab",
"HQMvjfJhO7",
"_7uOjLuMsB8",
"0107dvh5kr",
"uNtf1hz9gs... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer QA4f,\n\nThanks for your positive and valuable feedback, which would help us improve this work.\n\nAnonymous authors",
" Dear Reviewer GXm6,\n\nThank you for your response and appreciation of our approach. Please rest assured that we will stress our limitations in our final version.\n\nAnonymous a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"JhCHXScKUpw",
"T_ZS_2SvcwR",
"nips_2022_U14PKEu18bK",
"V5eKKqmd898",
"ZSoLdqairaj",
"uNtf1hz9gs",
"0107dvh5kr",
"_7uOjLuMsB8",
"HQMvjfJhO7",
"nips_2022_U14PKEu18bK",
"HQMvjfJhO7",
"_7uOjLuMsB8",
"0107dvh5kr",
"uNtf1hz9gs",
"nips_2022_U14PKEu18bK",
"nips_2022_U14PKEu18bK",
"nips_2022... |
nips_2022_sNcn-E3uPHA | Text Classification with Born's Rule | This paper presents a text classification algorithm inspired by the notion of superposition of states in quantum physics. By regarding text as a superposition of words, we derive the wave function of a document and we compute the transition probability of the document to a target class according to Born's rule. Two complementary implementations are presented. In the first one, wave functions are calculated explicitly. The second implementation embeds the classifier in a neural network architecture. Through analysis of three benchmark datasets, we illustrate several aspects of the proposed method, such as classification performance, explainability, and computational efficiency. These ideas are also applicable to non-textual data. | Accept | This paper has 2 accepts (7) and 2 borderline accepts (5). The average is 6.
The modification of Reviewer Sqxh does not show in the system, but he stated as follows in our discussion “I tend to modify the score of this paper to five.”
This paper shows an algorithm that delivers outstanding text classification performance despite its extreme simplicity and speed. The quantum “explanation” for the algorithm is weak. The experiments were limited to somewhat simple text classification problems, but the authors added some convincing results on sentiment analysis in their rebuttal. The paper is clearly written and the provided code is well documented, so it should be reproducible (though none of the reviewers reran the experiments). The authors provided a revised version that clarified some aspects (though it did not include the additional experiments provided in the rebuttal).
The consensus after discussion is accept, given that this paper may open a new class of simple, high-performance classification algorithms. The explanation for their effectiveness remains an open problem, and this paper could be improved by opening the discussion.
We recommend the following modifications in order to mitigate overreaching claims that could make the authors look naïve:
- Quantum interpretation of a sentence as superposition of words. This seems to imply that detecting a single word is sufficient to classify a sentence, and that the best solution is a combination of these weak classifiers based on a single word or N-gram. The success of Adaboost on text classification testifies to the power of these methods. However, such methods do not work so well when sentences are longer, or when there is a stronger compositionality, for instance sentiment analysis (IMDB, SST). The results on EmoInt provided in the rebuttal are a step in the right direction. They show the method can handle some level of context, though sentences are still short. How could this method handle long sentences? The paper would benefit from a discussion about what the superposition-of-words hypothesis implies, what its limitations are, and maybe how to overcome them.
- Table 1 provides preliminary results but some of the claims can be a turn-off for readers with experience in linear classifiers (SVM and LR) applied to text classification, as the authors apparently failed to select the correct algorithm (SGD classifier). Comparisons with SVMs and LR in Table 1 suggest BC is 1 million times faster than SVMs. However, the same difference in speed can be observed within different Sklearn SVM implementations, depending on the algorithm. The SGD classifier, which supports both SVM and LR, has a computational complexity which is even better than the O(NJK) reported in the paper, as it depends on the number of non-zero features rather than the number of features J.
Furthermore, the accuracy provided for the SVM (79.4) also seems far below the SOTA. For instance, the reference [17] reports an accuracy of 82.27 on 20NG using TF-IDF SVMs (the most vanilla setting).
- As shown in Eq.(6), the model is fundamentally linear. Explainability by taking the feature with the highest weight is as ancient as ML itself. How this method performs better than traditional linear approaches should be better illustrated. The authors just say “The words appear semantically correlated to their respective class and do not contain neither noisy words, such as stopwords or punctuation, or words whose meaning is too general to be representative of a specific class. "
- Novelty of this paper in the field of quantum-inspired classification. Even in the revised version, while they cite work on language modeling, the authors do not seem to have cited any work on classification, even though it was pointed out by the first reviewer (https://ojs.aaai.org/index.php/AAAI/article/view/17567). As pointed out in their rebuttal, this is very different from their work, but they should mention that work.
| val | [
"zwyoUH2zlTA",
"R0tzSnli73YQ",
"CayN85oQcn",
"7VeLZ9K40cFY",
"MSQTXI6NUXQ",
"xJHCf0QnTy",
"28ZMpM5YIPB",
"dhYkN-NYlrH",
"UyZCxMVOqALT",
"SDYm3ToGMVRm",
"F7KtSNQIZp",
"MqgZgPaJIvJ",
"3uCarveSVBL",
"do1U_Defjd"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This is to acknowledge that I have read the authors' response. So far, my questions about the paper have been sufficiently answered and I thank the authors for taking the time to provide very detailed responses and experiments. My score will remain a 7 as I would like to see this paper at the conference.",
" We... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
2,
4
] | [
"R0tzSnli73YQ",
"7VeLZ9K40cFY",
"MSQTXI6NUXQ",
"xJHCf0QnTy",
"UyZCxMVOqALT",
"do1U_Defjd",
"3uCarveSVBL",
"MqgZgPaJIvJ",
"F7KtSNQIZp",
"nips_2022_sNcn-E3uPHA",
"nips_2022_sNcn-E3uPHA",
"nips_2022_sNcn-E3uPHA",
"nips_2022_sNcn-E3uPHA",
"nips_2022_sNcn-E3uPHA"
] |
nips_2022_8E8tgnYlmN | SIREN: Shaping Representations for Detecting Out-of-Distribution Objects | Detecting out-of-distribution (OOD) objects is indispensable for safely deploying object detectors in the wild. Although distance-based OOD detection methods have demonstrated promise in image classification, they remain largely unexplored in object-level OOD detection. This paper bridges the gap by proposing a distance-based framework for detecting OOD objects, which relies on the model-agnostic representation space and provides strong generality across different neural architectures. Our proposed framework SIREN contributes two novel components: (1) a representation learning component that uses a trainable loss function to shape the representations into a mixture of von Mises-Fisher (vMF) distributions on the unit hypersphere, and (2) a test-time OOD detection score leveraging the learned vMF distributions in a parametric or non-parametric way. SIREN achieves competitive performance on both the recent detection transformers and CNN-based models, improving the AUROC by a large margin compared to the previous best method. Code is publicly available at https://github.com/deeplearning-wisc/siren. | Accept | This work proposes a new unified distributional model to address out of distribution detection and improves over the state of the art.
While the approach shares notable similarities with other works in the domain, the idea of creating a unified distributional representation also at the intermediate features, especially in the context of transformers, is new.
While this work can be criticized for some missing comparisons with related works, the overall approach seems both sound and novel and of general interest. Therefore, I suggest its acceptance for NeurIPS 2022.
"J1S0RHKSrHy",
"T8kmGUfnDtV",
"14imOqQUp8w",
"1CmtAp14p2",
"Agpf_QafbOJ",
"IuttBJlyZA8",
"XiUPC8GQW79",
"nPbmCQVnuQO",
"YYvWJ_t8j3"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I will keep my score because the differences between the proposed model and the cited training-based approaches are not significant. No direct comparison against the methods we presented was shown. Finally, the majority of the concerns we presented were not covered in the rebuttal. Neither citing nor comparing ag... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"YYvWJ_t8j3",
"14imOqQUp8w",
"YYvWJ_t8j3",
"nPbmCQVnuQO",
"XiUPC8GQW79",
"nips_2022_8E8tgnYlmN",
"nips_2022_8E8tgnYlmN",
"nips_2022_8E8tgnYlmN",
"nips_2022_8E8tgnYlmN"
] |
nips_2022_t3X5yMI_4G2 | Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress | Learning tabula rasa, that is without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. However, RL systems, when applied to large-scale settings, rarely operate tabula rasa. Such large-scale systems undergo multiple design or algorithmic changes during their development cycle and use ad hoc approaches for incorporating these changes without re-training from scratch, which would have been prohibitively expensive. Additionally, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally-demanding problems. To address these issues, we present reincarnating RL as an alternative workflow or class of problem settings, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. As a step towards enabling reincarnating RL from any agent to any other agent, we focus on the specific setting of efficiently transferring an existing sub-optimal policy to a standalone value-based RL agent. We find that existing approaches fail in this setting and propose a simple algorithm to address their limitations. Equipped with this algorithm, we demonstrate reincarnating RL's gains over tabula rasa RL on Atari 2600 games, a challenging locomotion task, and the real-world problem of navigating stratospheric balloons. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize it further. Open-sourced code and trained agents at https://agarwl.github.io/reincarnating_rl. 
| Accept | This paper proposes a novel method for transferring prior policies across design and system changes to improve the sample efficiency of RL algorithms, which could ultimately help unlock RL for real-world use cases.
There was an active discussion across the reviewing process in which the authors managed to address the concerns of the reviewers, leading to updated scores.
Based on the contributions of the paper and highly positive final reviews I recommend the paper for acceptance.
| train | [
"VyC9lD2-pR",
"JaJv00rMnTi",
"acIVDId807W",
"cr_h-5cn9cK",
"EJPRgjEgOmQ",
"H2fnLR5h7G",
"gnQbKtdUcfo",
"i1F_JoxHnVO",
"yGpfFA_vFFa",
"VHNMgnLz0AM",
"2u0WqO2r1e6",
"aWrZNDQensX",
"dvsgH_o49J",
"6BNRJF8QZ-",
"j3YlPJawdD",
"5V3OFWHXELD",
"xPw3ENBwL7f"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Apologies for the delay in reply. I appreciate the thoughtful response. \n\nMy concerns have been addressed and updated my score.",
" We thank the reviewer for reading our response and engaging in follow-up discussion.\n\n> **If the pre-trained policy is good enough to collect samples without safety issue or i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"6BNRJF8QZ-",
"cr_h-5cn9cK",
"gnQbKtdUcfo",
"2u0WqO2r1e6",
"aWrZNDQensX",
"nips_2022_t3X5yMI_4G2",
"i1F_JoxHnVO",
"j3YlPJawdD",
"VHNMgnLz0AM",
"6BNRJF8QZ-",
"5V3OFWHXELD",
"dvsgH_o49J",
"xPw3ENBwL7f",
"nips_2022_t3X5yMI_4G2",
"nips_2022_t3X5yMI_4G2",
"nips_2022_t3X5yMI_4G2",
"nips_20... |
nips_2022_fcMd-tuWwiO | A sharp NMF result with applications in network modeling | Given an $n \times n$ non-negative rank-$K$ matrix $\Omega$ where $m$ eigenvalues are negative, when can we write $\Omega = Z P Z'$ for non-negative matrices $Z \in \mathbb{R}^{n, K}$ and $P \in \mathbb{R}^{K, K}$? While most existing works focused on the case of $m = 0$, our primary interest is on the case of general $m$. With new proof ideas we develop, we present sharp results on when the NMF problem is solvable, which significantly extend existing results on this topic. The NMF problem is partially motivated by applications in network modeling. For a network with $K$ communities, rank-$K$ models are popular, with many proposals. The DCMM model is
a recent rank-$K$ model which is especially useful and interpretable in practice. To enjoy such properties, it is of interest to study
when a rank-$K$ model can be rewritten as a DCMM model. Using our NMF results, we show that for a rank-$K$ model with parameters in the most interesting range, we can always rewrite it as a DCMM model. | Accept | This was a borderline paper, which fell just above the bar for acceptance. The reviewers felt the work was interesting and original, although perhaps the problem studied is a bit niche for NeurIPS. | train | [
"GuaJ9moWlM6",
"IuJT_KY5eJa",
"ufGgm1cS1PJ",
"D7DtPMawYdP",
"dTOCN0eFmHP",
"kQPkeApL8ek",
"80roWzmJ6S5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank you for your time and valuable comments. \nWe are especially glad that you recognize the main contributions \nof our paper, and think our paper as well-written and well-motivated. \nWe have tried our best to address your comments \nand prepared a revised version and a point-to-point respons... | [
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"80roWzmJ6S5",
"kQPkeApL8ek",
"dTOCN0eFmHP",
"dTOCN0eFmHP",
"nips_2022_fcMd-tuWwiO",
"nips_2022_fcMd-tuWwiO",
"nips_2022_fcMd-tuWwiO"
] |
nips_2022_RP1CtZhEmR | Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN) | Generating multivariate time series is a promising approach for sharing sensitive data in many medical, financial, and IoT applications. A common type of multivariate time series originates from a single source such as the biometric measurements from a medical patient. This leads to complex dynamical patterns between individual time series that are hard to learn by typical generation models such as GANs. There is valuable information in those patterns that machine learning models can use to better classify, predict or perform other downstream tasks. We propose a novel framework that takes time series’ common origin into account and favors channel/feature relationships preservation. The two key points of our method are: 1) the individual time series are generated from a common point in latent space and 2) a central discriminator favors the preservation of inter-channel/feature dynamics. We demonstrate empirically that our method helps preserve channel/feature correlations and that our synthetic data performs very well in downstream tasks with medical and financial data. | Accept | The authors propose GroupGAN which uses a separate generator and discriminator for generating each data channel and a central discriminator for accurately capturing the correlation structure between different channels. This is a borderline paper and there were extensive discussions among the reviewers about this paper. The reviewers all agreed that having two sets of discriminators is a good idea. However, the evaluation for this paper was only done with low-dimensional time series and the reviewers voiced concerns about the generalization of this method to higher dimensions. Some parts still need further clarification: for example, as Reviewer zeED pointed out, dimensionality reduction in this paper remains an extremely odd preprocessing step.
Overall, accepting this paper may start new discussions in the community. I hope the authors address the concerns raised by the reviewers in the camera-ready version of the paper. | test | [
"hEsKWm29pFc",
"_zvEKtIjIUK",
"UctbUkccq5l",
"_dpp3vKiIJi",
"MobEC8zTrBe",
"1thAHg_rcG2",
"Sb2CFI2n-rh",
"POP3TWApNKe",
"G8WqI_b5fMt",
"FHVMN-fEybu",
"GxWL9jSn2Et",
"KLo1s7po_Ad"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. My concerns have been addressed. I think this is a good paper and a score of 7 is appropriate. My score remains the same.",
" Thank you for your thorough response.\n\nIn the final version of the paper, we will make sure to clarify any confusion caused by the use of the terms “noise”... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"POP3TWApNKe",
"_dpp3vKiIJi",
"Sb2CFI2n-rh",
"1thAHg_rcG2",
"KLo1s7po_Ad",
"GxWL9jSn2Et",
"FHVMN-fEybu",
"G8WqI_b5fMt",
"nips_2022_RP1CtZhEmR",
"nips_2022_RP1CtZhEmR",
"nips_2022_RP1CtZhEmR",
"nips_2022_RP1CtZhEmR"
] |
nips_2022_dT0eNsO2YLu | Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks | We propose learnable polyphase sampling (LPS), a pair of learnable down/upsampling layers that enable truly shift-invariant and equivariant convolutional networks. LPS can be trained end-to-end from data and generalizes existing handcrafted downsampling layers. It is widely applicable as it can be integrated into any convolutional network by replacing down/upsampling layers. We evaluate LPS on image classification and semantic segmentation. Experiments show that LPS is on-par with or outperforms existing methods in both performance and shift consistency. For the first time, we achieve true shift-equivariance on semantic segmentation (PASCAL VOC), i.e., 100% shift consistency, outperforming baselines by an absolute 3.3%. | Accept | The paper proposes an end-to-end learnable polyphase sampling and shows competitive performance and sufficient novelty. Major concerns of the reviewers seem to be addressed during the rebuttal and therefore it can be accepted. | train | [
"FiuhrKzltT0",
"B-Lqx0sr-75",
"puOeRu3kox",
"tlkKG7miBaN",
"-XWd0hw0UAf",
"Rrq_7i5HaF",
"EIbIlxaR6_7",
"FDHIjRfNlL",
"IhA4m8k0Hc",
"LUyt23r5xNB"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Interesting thought. In our classification experiments, we replaced **all** downsampling layers with LPD. The number of LPD layers is hence determined by the architecture. In our segmentation experiments, we replaced **all** down/upsampling layers with LPD/LPU respectively. We have not experimented with replacing... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"B-Lqx0sr-75",
"Rrq_7i5HaF",
"tlkKG7miBaN",
"-XWd0hw0UAf",
"LUyt23r5xNB",
"IhA4m8k0Hc",
"FDHIjRfNlL",
"nips_2022_dT0eNsO2YLu",
"nips_2022_dT0eNsO2YLu",
"nips_2022_dT0eNsO2YLu"
] |
nips_2022_1wVBLK1Xuc | Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems | Long-term fairness is an important factor of consideration in designing and deploying learning-based decision systems in high-stake decision-making contexts. Recent work has proposed the use of Markov Decision Processes (MDPs) to formulate decision-making with long-term fairness requirements in dynamically changing environments, and demonstrated major challenges in directly deploying heuristic and rule-based policies that worked well in static environments. We show that policy optimization methods from deep reinforcement learning can be used to find strictly better decision policies that can often achieve both higher overall utility and less violation of the fairness requirements, compared to previously-known strategies. In particular, we propose new methods for imposing fairness requirements in policy optimization by regularizing the advantage evaluation of different actions. Our proposed methods make it easy to impose fairness constraints without reward engineering or sacrificing training efficiency. We perform detailed analyses in three established case studies, including attention allocation in incident monitoring, bank loan approval, and vaccine distribution in population networks. | Accept | I recommend acceptance due to the positive opinions of the reviewers. This paper proposes a method relevant to fairness in RL, for enforcing constraints during policy optimization. The reviews judge the method (inspired by Lyapunov stability ideas) and its empirical evaluation as technically sound. Many initial points of confusion or uncertainty were addressed and resolved during the author discussion. | train | [
"yi5l7bLQcg",
"hYstilot5z",
"6yy5Am32Zg",
"MR-Hj7MdI2a",
"b4qoQOJ7IWC",
"ooetGd8k8MK",
"QpsBZn0mnP8",
"addoRHgWidb",
"8MJyfGXu6-4",
"YMKfO9KVDpf",
"BTH3L4QgcKd",
"qzpbiFrEgAq",
"vaDPMA-yp1U"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for adding some theoretical justification. I have updated my score based on the author rebuttal.",
" I would like to thank authors for the detailed response. I would like to encourage authors to incorporate the proposed clarifications in the manuscript, so that the contribution and takeaway msgs can be c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"ooetGd8k8MK",
"8MJyfGXu6-4",
"addoRHgWidb",
"b4qoQOJ7IWC",
"QpsBZn0mnP8",
"vaDPMA-yp1U",
"qzpbiFrEgAq",
"BTH3L4QgcKd",
"YMKfO9KVDpf",
"nips_2022_1wVBLK1Xuc",
"nips_2022_1wVBLK1Xuc",
"nips_2022_1wVBLK1Xuc",
"nips_2022_1wVBLK1Xuc"
] |
nips_2022_5yAmUvdXAve | Cluster and Aggregate: Face Recognition with Large Probe Set | Feature fusion plays a crucial role in unconstrained face recognition where inputs (probes) comprise of a set of $N$ low quality images whose individual qualities vary. Advances in attention and recurrent modules have led to feature fusion that can model the relationship among the images in the input set. However, attention mechanisms cannot scale to large $N$ due to their quadratic complexity and recurrent modules suffer from input order sensitivity. We propose a two-stage feature fusion paradigm, Cluster and Aggregate, that can both scale to large $N$ and maintain the ability to perform sequential inference with order invariance. Specifically, Cluster stage is a linear assignment of $N$ inputs to $M$ global cluster centers, and Aggregation stage is a fusion over $M$ clustered features. The clustered features play an integral role when the inputs are sequential as they can serve as a summarization of past features. By leveraging the order-invariance of incremental averaging operation, we design an update rule that achieves batch-order invariance, which guarantees that the contributions of early image in the sequence do not diminish as time steps increase. Experiments on IJB-B and IJB-S benchmark datasets show the superiority of the proposed two-stage paradigm in unconstrained face recognition. | Accept | The paper received 4 positive reviews, and the reviewers increased or maintained their scores after the rebuttal.
The paper pursues a useful direction of unconstrained face recognition. All reviewers agree that the results are impressive and the method has novelty. | val | [
"K6mSt8VEc0f",
"XByGrDlao3Q",
"QWxWyZ3iRb8",
"Cg8eQdj25rD",
"4JIOUl3PeB1",
"udpVhsSWAXC",
"CyZUkt1phIP",
"Q3_IBU3mFrL",
"meE4u9yVic_",
"8OsRqLX0ze",
"OQJSo7hs2V",
"LRafArfhzpN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' effort to address my concerns and comments. \n",
" The rebuttal has solved most of my former concerns and this paper is ok for acceptance based on the positive comments from other reviewers.",
" Thank you for the answers. I believe that most questions have been properly answered and,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"udpVhsSWAXC",
"CyZUkt1phIP",
"4JIOUl3PeB1",
"Q3_IBU3mFrL",
"LRafArfhzpN",
"OQJSo7hs2V",
"8OsRqLX0ze",
"meE4u9yVic_",
"nips_2022_5yAmUvdXAve",
"nips_2022_5yAmUvdXAve",
"nips_2022_5yAmUvdXAve",
"nips_2022_5yAmUvdXAve"
] |
nips_2022_toleacrf7Hv | Parameter tuning and model selection in Optimal Transport with semi-dual Brenier formulation | Over the past few years, numerous computational models have been developed to solve Optimal Transport (OT) in a stochastic setting, where distributions are represented by samples and where the goal is to find the closest map to the ground truth OT map, unknown in practical settings. So far, no quantitative criterion has yet been put forward to tune the parameter of these models and select maps that best approximate the ground truth. To perform this task, we propose to leverage the Brenier formulation of OT. Theoretically, we show that this formulation guarantees that, up to a sharp distortion parameter depending on the smoothness/strong convexity and a statistical deviation term, the selected map achieves the lowest quadratic error to the ground truth. This criterion, estimated via convex optimization, enables parameter tuning and model selection among entropic regularization of OT, input convex neural networks and smooth and strongly convex nearest-Brenier (SSNB) models.
We also use this criterion to question the use of OT in Domain-Adaptation (DA). In a standard DA experiment, it enables us to identify the potential that is closest to the true OT map between the source and the target. Yet, we observe that this selected potential is far from being the one that performs best for the downstream transfer classification task. | Accept | While there were few misunderstandings, the rebuttal successfully convinced all the reviewers that the paper should be accepted. Please take into account the reviewers' comments in preparing the camera-ready, especially the ones concerning the clarity of the paper. | train | [
"Xg90JkFd42r",
"NM8p_w_FVeX",
"-9ZsM_i2vVl",
"9LPIy7REe85",
"NPBhit9-aX5g",
"0xch6tKy5q",
"vcpNLkPCRZ8",
"fh1flYMSjH",
"cSEDcYRrfFL",
"U_qfA950ZVZ",
"Ng0Ir1spmuu",
"1kUAyszI3ML",
"CbVeAcVV4wO",
"4ZVlCT87ShU",
"XIx58E8IeYB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have answered my questions and I increase the score to borderline accept.",
" Hi, thank you for your answer. I checked the proposition carefully and I understand my mistake here. I increased the score to 6. ",
" Dear reviewer.\n\nIndeed, the semi-dual $J(f)$ diverges because the Legendre transform... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"9LPIy7REe85",
"-9ZsM_i2vVl",
"NPBhit9-aX5g",
"vcpNLkPCRZ8",
"U_qfA950ZVZ",
"cSEDcYRrfFL",
"fh1flYMSjH",
"XIx58E8IeYB",
"4ZVlCT87ShU",
"CbVeAcVV4wO",
"1kUAyszI3ML",
"nips_2022_toleacrf7Hv",
"nips_2022_toleacrf7Hv",
"nips_2022_toleacrf7Hv",
"nips_2022_toleacrf7Hv"
] |
nips_2022_6yuil2_tn9a | Handcrafted Backdoors in Deep Neural Networks | When machine learning training is outsourced to third parties, $backdoor$ $attacks$ become practical as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model. Until now, the mechanism to inject backdoors has been limited to $poisoning$. We argue that a supply-chain attacker has more attack techniques available by introducing a $handcrafted$ attack that directly manipulates a model's weights. This direct modification gives our attacker more degrees of freedom compared to poisoning, and we show it can be used to evade many backdoor detection or removal defenses effectively. Across four datasets and four network architectures our backdoor attacks maintain an attack success rate above 96%. Our results suggest that further research is needed for understanding the complete space of supply-chain backdoor attacks. | Accept | This paper proposes a backdoor injection method that directly manipulates model weights after training. The backdoored method can achieve comparable clean accuracy and a high attack success rate through injecting and compromising handcrafted filters, increasing the separation in activations, and increasing the logit of a target class. The reviewers agree that the proposed backdoor injection method is novel and interesting. The authors are suggested to conduct more experiments on evaluating whether their handcrafted models cannot be detected/removed by existing defenses. | val | [
"WMDU4G46xx0",
"glMaYn3dDsv",
"G-j-ThMJ2C",
"O60SizkTNeXF",
"jM3cmI9Nja",
"hkuIncjYr2c",
"syAZBtbxwIq",
"XWLZNC1pLwk",
"HPF5HH4eWQw"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We’re happy to answer the reviewer’s additional questions.\n\n—\n\n(1) We first clarify that, while we used the term “validation data” from the paper, our attack actually does not need any data from the actual validation set to run the attack. The first step of our attack is to find dead neurons, and we do this b... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"glMaYn3dDsv",
"hkuIncjYr2c",
"nips_2022_6yuil2_tn9a",
"HPF5HH4eWQw",
"XWLZNC1pLwk",
"syAZBtbxwIq",
"nips_2022_6yuil2_tn9a",
"nips_2022_6yuil2_tn9a",
"nips_2022_6yuil2_tn9a"
] |
nips_2022_1xqE9fRZch5 | Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis | Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency. | Accept | The paper proposes a self-supervised deep-learning framework for image-to-image translation tasks, such as segmentation, that accommodates and fully exploits longitudinal data. 
Specifically, the method provides a mechanism to impose consistency in output across multiple points from the same individual and simple regularisation terms to avoid some problems common with other methods, such as mode collapse. The authors compare the method against baselines in two distinct neuroimaging segmentation tasks, which nicely demonstrate the additional power afforded by imposing longitudinal consistency.
The reviews overall reported that the submission tackles an important problem, presents well formulated experiments, and shows a significant advance over the SOTA.
The concerns raised included non-specialist accessibility, adding more related work, and questions w.r.t. the claims, presentation, and relevance of the results, and particularly about the mode collapse problem. The authors have addressed these concerns, in particular by adding a new section (Section E/Need for regularization) to the Supplementary Material which contains several new visualizations and quantifications of the mode collapse problem (from ablation experiments).
The paper has been updated to reflect some clarifications required by reviewer de4n. Some citations suggested by Reviewer r9Zh are now discussed in the submission.
As it is, the paper meets all conditions for acceptance at NeurIPS 2022.
| val | [
"KfujYbvlQAG",
"8-LjQy8PDZV",
"-nkY_37R9g",
"3N3OnmQlaT8",
"wAHBlvOQ2jI",
"OZQbYjWCg8r",
"h3WXvetow6S",
"1B5Dj2M7BsV"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" (cont’d)\n\n**What are our solutions doing?** To rectify the problems of this ablation, we introduce several forms of regularization designed to increase the spatial and inter-channel variability of representations and ultimately yield diverse representations for neuroanatomical structures. To explain **Figure 1*... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"8-LjQy8PDZV",
"1B5Dj2M7BsV",
"nips_2022_1xqE9fRZch5",
"h3WXvetow6S",
"OZQbYjWCg8r",
"nips_2022_1xqE9fRZch5",
"nips_2022_1xqE9fRZch5",
"nips_2022_1xqE9fRZch5"
] |
nips_2022_o8vYKDWMnq1 | Understanding Deep Neural Function Approximation in Reinforcement Learning via $\epsilon$-Greedy Exploration | This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL) with the $\epsilon$-greedy exploration under the online setting. This problem setting is motivated by the successful deep Q-networks (DQN) framework that falls in this regime. In this work, we provide an initial attempt at theoretically understanding deep RL from the perspective of function class and neural network architectures (e.g., width and depth) beyond the ``linear'' regime. To be specific, we focus on the value based algorithm with the $\epsilon$-greedy exploration via deep (and two-layer) neural networks endowed by Besov (and Barron) function spaces, respectively, which aims at approximating an $\alpha$-smooth Q-function in a $d$-dimensional feature space. We prove that, with $T$ episodes, scaling the width $m = \widetilde{\mathcal{O}}(T^{\frac{d}{2\alpha + d}})$ and the depth $L=\mathcal{O}(\log T)$ of the neural network for deep RL is sufficient for learning with sublinear regret in Besov spaces. Moreover, for a two layer neural network endowed by the Barron space, scaling the width $\Omega(\sqrt{T})$ is sufficient. To achieve this, the key issue in our analysis is how to estimate the temporal difference error under deep neural function approximation as the $\epsilon$-greedy exploration is not enough to ensure "optimism". Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}\mu)$-integrable space over a certain averaged measure $\mu$, and transforms it to a generalization problem under the non-iid setting. This might have its own interest in RL theory for better understanding $\epsilon$-greedy exploration in deep RL. | Accept | This paper provides a theoretical study of the successful deep Q-networks framework under an online, episodic Markov decision process (MDP) model with T episodes. 
All reviewers and the AC believe this is a solid RL theory paper. | train | [
"x-UOvfk_mGJ",
"xGABUouCtQO",
"FokvdXhcex",
"v9Wzf_8yT2Q",
"OR9I6XHSYx5",
"qlRlvydlhYM9",
"xa60BstGRaJ",
"yeqOu6aPGeb",
"JzbBDOi14ub",
"F7ebi7rot5F",
"I4qKHzkJFhp",
"now2Fxty9C5",
"QkZ4fsDvby",
"L3INGY0ani1",
"MbvDyZWMBLu",
"Y_pS7CaLTxx",
"2BEG0EneVOU"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 7U3N,\n\nWe are grateful for your constructive feedback on improving this work. \nWe will polish this paper based on your suggestions in the final version.\n\nBest regards,\n\nAuthors",
" I thank the authors for their efforts. \n\nWith these modifications, the results now make sense to me, and I w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"xGABUouCtQO",
"OR9I6XHSYx5",
"v9Wzf_8yT2Q",
"yeqOu6aPGeb",
"xa60BstGRaJ",
"nips_2022_o8vYKDWMnq1",
"now2Fxty9C5",
"JzbBDOi14ub",
"I4qKHzkJFhp",
"I4qKHzkJFhp",
"2BEG0EneVOU",
"QkZ4fsDvby",
"Y_pS7CaLTxx",
"MbvDyZWMBLu",
"nips_2022_o8vYKDWMnq1",
"nips_2022_o8vYKDWMnq1",
"nips_2022_o8vY... |
nips_2022_177GzUAds8U | Compositional generalization through abstract representations in human and artificial neural networks | Humans have a remarkable ability to rapidly generalize to new tasks that is difficult to reproduce in artificial learning systems.
Compositionality has been proposed as a key mechanism supporting generalization in humans, but evidence of its neural implementation and impact on behavior is still scarce. Here we study the computational properties associated with compositional generalization in both humans and artificial neural networks (ANNs) on a highly compositional task. First, we identified behavioral signatures of compositional generalization in humans, along with their neural correlates using whole-cortex functional magnetic resonance imaging (fMRI) data. Next, we designed pretraining paradigms aided by a procedure we term primitives pretraining to endow compositional task elements into ANNs. We found that ANNs with this prior knowledge had greater correspondence with human behavior and neural compositional signatures. Importantly, primitives pretraining induced abstract internal representations, excellent zero-shot generalization, and sample-efficient learning. Moreover, it gave rise to a hierarchy of abstract representations that matched human fMRI data, where sensory rule abstractions emerged in early sensory areas, and motor rule abstractions emerged in later motor areas. Our findings give empirical support to the role of compositional generalization in human behavior, implicate abstract representations as its neural implementation, and illustrate that these representations can be embedded into ANNs by designing simple and efficient pretraining procedures. | Accept | The manuscript presents a story about compositionality that ties together neuroscience and models, with a focus on how compositionality enables generalization in a task performed by humans undergoing fMRI. Reviewers were largely happy with the manuscript and authors thoroughly addressed the questions that reviewers had.
The manuscript is suggestive. The experiment is in many ways limited, and it's not clear what conclusions one can draw at present about the design of models or of the brain. But, it is likely a significant audience at NeurIPS will be interested in this topic and it may spark followup work. This looks like the beginnings of an interesting line of work expanding the paradigm of mapping between brains and artificial neural networks. | test | [
"8e6zkYMwTKb",
"hvOA945x0qU",
"AA43TAS2oFA",
"eMCokHt2cs7",
"q9-EEt-AJWv",
"X3S2-0zzF9",
"nt04u50K1uF",
"Tt6JZEm3CGj",
"qwoRxFPdYq",
"6yg5si55R-V",
"08sVwqYiR32",
"v6MhcORqhdL",
"4gUj8DuNtZ",
"H1Wye_KY9AC",
"u-p530dYdwk",
"iQlhqbB1ikT",
"u4EJPxur7sT",
"deVUaxnGOAx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thorough response. \n\nMy three main concerns were (1) the correspondence that the authors were drawing between high/low level processing areas and layers of a network (2) the linear nature of the decoding and (3) the clustering of the learned rules. \n\nThe responses give plausible ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"qwoRxFPdYq",
"u-p530dYdwk",
"eMCokHt2cs7",
"u-p530dYdwk",
"nips_2022_177GzUAds8U",
"deVUaxnGOAx",
"Tt6JZEm3CGj",
"u4EJPxur7sT",
"6yg5si55R-V",
"08sVwqYiR32",
"v6MhcORqhdL",
"iQlhqbB1ikT",
"H1Wye_KY9AC",
"u-p530dYdwk",
"nips_2022_177GzUAds8U",
"nips_2022_177GzUAds8U",
"nips_2022_177G... |
nips_2022_ZwnPdpCw6d | Robust $\phi$-Divergence MDPs | In recent years, robust Markov decision processes (MDPs) have emerged as a prominent modeling framework for dynamic decision problems affected by uncertainty. In contrast to classical MDPs, which only account for stochasticity by modeling the dynamics through a stochastic process with a known transition kernel, robust MDPs additionally account for ambiguity by optimizing in view of the most adverse transition kernel from a prescribed ambiguity set. In this paper, we develop a novel solution framework for robust MDPs with $s$-rectangular ambiguity sets that decomposes the problem into a sequence of robust Bellman updates and simplex projections. Exploiting the rich structure present in the simplex projections corresponding to $\phi$-divergence ambiguity sets, we show that the associated $s$-rectangular robust MDPs can be solved substantially faster than with state-of-the-art commercial solvers as well as a recent first-order solution scheme, thus rendering them attractive alternatives to classical MDPs in practical applications. | Accept | This is a somewhat borderline paper. The reviewers were unanimously positive, but they all had concerns. In reading through the concerns and responses, it seems that many (though perhaps not all) of the concerns could be addressed with additional references and some expository modifications.
If the paper makes the final cut, I encourage the authors to follow through with their proposed modifications and do their best to address the concerns expressed by the reviewers. | train | [
"N6gEZVF2Q8",
"doZ8g0v40-",
"14VhUuSj-4U",
"-E4S5G4UrW_",
"bDftwMdZqom",
"J0qOfHU-Mmv",
"lkv0P-tVqPA",
"4FKrA_Uhbry",
"blCUbebQ_e3",
"1lItAfqBl4o",
"zm8_jzWo65j",
"1yJhGudJXj",
"LWA8OaRvK4j",
"EwaxyILs8LX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all reviewers for their time reading this paper and providing reviews for our submission.\n\nWe take this last opportunity to clarify the importance of model-based methods. While we totally agree that the model-free approach is an important part of reinforcement learning, we would like to emph... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4
] | [
"nips_2022_ZwnPdpCw6d",
"1yJhGudJXj",
"-E4S5G4UrW_",
"J0qOfHU-Mmv",
"EwaxyILs8LX",
"EwaxyILs8LX",
"LWA8OaRvK4j",
"LWA8OaRvK4j",
"1yJhGudJXj",
"1yJhGudJXj",
"1yJhGudJXj",
"nips_2022_ZwnPdpCw6d",
"nips_2022_ZwnPdpCw6d",
"nips_2022_ZwnPdpCw6d"
] |
nips_2022_PCZfDUH8fIn | The price of unfairness in linear bandits with biased feedback | In this paper, we study the problem of fair sequential decision making with biased linear bandit feedback. At each round, a player selects an action described by a covariate and by a sensitive attribute. The perceived reward is a linear combination of the covariates of the chosen action, but the player only observes a biased evaluation of this reward, depending on the sensitive attribute. To characterize the difficulty of this problem, we design a phased elimination algorithm that corrects the unfair evaluations, and establish upper bounds on its regret. We show that the worst-case regret is smaller than $\mathcal{O}(\kappa_* ^{1/3}\log(T)^{1/3}T^{2/3})$, where $\kappa_*$ is an explicit geometrical constant characterizing the difficulty of bias estimation. We prove lower bounds on the worst-case regret for some sets of actions showing that this rate is tight up to a possible sub-logarithmic factor. We also derive gap-dependent upper bounds on the regret, and matching lower bounds for some problem instance. Interestingly, these results reveal a transition between a regime where the problem is as difficult as its unbiased counterpart, and a regime where it can be much harder. | Accept | The authors study a linear bandit problem with biased feedback, develop an algorithm and bound the corresponding regret. The bandit problem they study is meaningful and highly relevant. I therefore recommend to accept the paper. | train | [
"YMeUH3wzQaO",
"HC6FTwnHiMK",
"9TApBRqt_ba",
"wN0zb5xyn2cd",
"5BfiU4olIj7",
"E8nn48mlCm",
"k8KXOpXriH",
"s92aD8ERmT",
"GdpJZt-CQkf",
"GvSeIePO61P",
"FGRTSHbGmL0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for including the results on the case of more than two groups.",
" Thanks for the response. Please add these details to the paper, so that the paper is clearly placed in the fairness literature.",
" We thank the reviewer for giving us an opportunity to clarify a fairness-related aspect of our model,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4
] | [
"s92aD8ERmT",
"9TApBRqt_ba",
"wN0zb5xyn2cd",
"k8KXOpXriH",
"E8nn48mlCm",
"FGRTSHbGmL0",
"GvSeIePO61P",
"GdpJZt-CQkf",
"nips_2022_PCZfDUH8fIn",
"nips_2022_PCZfDUH8fIn",
"nips_2022_PCZfDUH8fIn"
] |
nips_2022_Yo0s4qp_UMR | Intrinsic Sliced Wasserstein Distances for Comparing Collections of Probability Distributions on Manifolds and Graphs | Collections of probability distributions arise in a variety of statistical applications ranging from user activity pattern analysis to brain connectomics. In practice these distributions are represented by histograms over diverse domain types including finite intervals, circles, cylinders, spheres, other manifolds, and graphs. This paper introduces an approach for detecting differences between two collections of histograms over such general domains. We propose the intrinsic slicing construction that yields a novel class of Wasserstein distances on manifolds and graphs. These distances are Hilbert embeddable, allowing us to reduce the histogram collection comparison problem to a more familiar mean testing problem in a Hilbert space. We provide two testing procedures, one based on resampling and another on combining $p$-values from coordinate-wise tests. Our experiments in a variety of data settings show that the resulting tests are powerful and the $p$-values are well-calibrated. Example applications to user activity patterns and spatial data are provided. | Reject | In this paper, the authors propose the intrinsic sliced Wasserstein distances and a hypothesis testing framework for the proposed measure. The idea of using eigenfunctions and eigenvalues for sliced Wasserstein is interesting. The authors addressed a part of the concerns raised by the reviewers. However, the advantage over the existing methods (sliced Wasserstein distances and MMDs) is unclear. Thus, it needs a major revision and cannot be accepted in its current version. I encourage the authors to revise the paper based on the reviewers' comments and resubmit it to a future venue. | train | [
"sYX-4NGT6Vd",
"Fga-pHMgGfW",
"lplTigI0YI",
"K18RareNnVw",
"KpFNvnSTdS9",
"bFZvPWpHQC0",
"ocdxyWymf9T",
"BWI1FpUmjG",
"zLkPrSbYwlB",
"WgLlzOUjFf",
"k0LU8-7EpHb"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. As you suggest, one can define this as Kernel mean embedding (KME). However it is not useful because the main property of KME used in the MMD theory is that $KME(P) = KME(Q) \\Rightarrow P = Q$. This is central to the MMD theory, and the respective proofs rely on this property (e.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"lplTigI0YI",
"K18RareNnVw",
"BWI1FpUmjG",
"ocdxyWymf9T",
"bFZvPWpHQC0",
"zLkPrSbYwlB",
"WgLlzOUjFf",
"k0LU8-7EpHb",
"nips_2022_Yo0s4qp_UMR",
"nips_2022_Yo0s4qp_UMR",
"nips_2022_Yo0s4qp_UMR"
] |
nips_2022_oQIJsMlyaW_ | SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance | The leap in performance in state-of-the-art computer vision methods is attributed to the development of deep neural networks. However, it often comes at a computational price which may hinder their deployment. To alleviate this limitation, structured pruning is a well-known technique which consists in removing channels, neurons or filters, and is commonly applied in order to produce more compact models. In most cases, the computations to remove are selected based on a relative importance criterion. At the same time, the need for explainable predictive models has risen tremendously and motivated the development of robust attribution methods that highlight the relative importance of pixels of an input image or feature map. In this work, we discuss the limitations of existing pruning heuristics, among which magnitude and gradient-based methods. We draw inspiration from attribution methods to design a novel integrated gradient pruning criterion, in which the relevance of each neuron is defined as the integral of the gradient variation on a path towards this neuron removal. Furthermore, we propose an entwined DNN pruning and fine-tuning flowchart to better preserve DNN accuracy while removing parameters. We show through extensive validation on several datasets, architectures as well as pruning scenarios that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods. | Accept | Novel pruning method based on integrated gradients. Reviewers agreed that the method is well-motivated and that the comparisons showcase the potential of this method. There are some concerns regarding fairness of the comparisons in terms of flops and parameter count. I believe some of the rebuttal answers from the authors address those concerns. I think this work is novel and interesting enough to be accepted at NeurIPS. | train | [
"ih807BRirEf",
"I30Tz_wMY_d",
"5GKvqy43GB",
"B30fVkMu537",
"oiy0czUOwAB",
"2AU2x-IwnRi",
"4z14txofMK",
"PEg6TsDOMCr",
"X6DVmgoEUFv",
"0YaSRGOZNIm",
"EyPzNX6zn0",
"4xlLMSqj1Lm",
"vYLO5Hgf7KG",
"CYiZLyZ1bk"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Given the short amount of time before the end of the discussion period, we will update the manuscript with the suggested changes, which will enhance the quality of the paper. As for using non-zero baselines, since the goal of the proposed work is the removal of neurons, in its current form we have to spec... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"I30Tz_wMY_d",
"PEg6TsDOMCr",
"B30fVkMu537",
"2AU2x-IwnRi",
"X6DVmgoEUFv",
"4z14txofMK",
"CYiZLyZ1bk",
"vYLO5Hgf7KG",
"0YaSRGOZNIm",
"EyPzNX6zn0",
"4xlLMSqj1Lm",
"nips_2022_oQIJsMlyaW_",
"nips_2022_oQIJsMlyaW_",
"nips_2022_oQIJsMlyaW_"
] |
nips_2022_SYdg8tcFgdG | Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games | Imperfect-Information Extensive-Form Games (IIEFGs) is a prevalent model for real-world games involving imperfect information and sequential plays. The Extensive-Form Correlated Equilibrium (EFCE) has been proposed as a natural solution concept for multi-player general-sum IIEFGs. However, existing algorithms for finding an EFCE require full feedback from the game, and it remains open how to efficiently learn the EFCE in the more challenging bandit feedback setting where the game can only be learned by observations from repeated playing.
This paper presents the first sample-efficient algorithm for learning the EFCE from bandit feedback. We begin by proposing $K$-EFCE---a generalized definition that allows players to observe and deviate from the recommended actions for $K$ times. The $K$-EFCE includes the EFCE as a special case at $K=1$, and is an increasingly stricter notion of equilibrium as $K$ increases. We then design an uncoupled no-regret algorithm that finds an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K}/\varepsilon^2)$ iterations in the full feedback setting, where $X_i$ and $A_i$ are the number of information sets and actions for the $i$-th player. Our algorithm works by minimizing a wide-range regret at each information set that takes into account all possible recommendation histories. Finally, we design a sample-based variant of our algorithm that learns an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K+1}/\varepsilon^2)$ episodes of play in the bandit feedback setting. When specialized to $K=1$, this gives the first sample-efficient algorithm for learning EFCE from bandit feedback. | Accept | Reviews on this paper are uniformly positive, and all reviewers feel that it is
an interesting set of results. One minor criticism is that some reviewers felt the presentation
could be improved, and the authors should try to address this for the camera ready.
| train | [
"Eo7UU4uVAV5",
"3J4boPOP2qq",
"vw09YaVXgbF",
"9Sw_aIzLVf4",
"2h6TQD0aXaV",
"EsoRCKAlrTx",
"3LMAo02lsB",
"ErZk6LJdEb",
"05G9T4HtyEb",
"V2MqL9_YCUX",
"vkqYtKrba8J",
"G38nDnnZD5o"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. I don't have any additional questions. It might make sense to explicitly mention the two weaknesses with the current approach (difficulties in achieving model-freeness and adversarial bandit feedback).",
" Apologies for the late reply.\n\nIn the original EFCE paper by Von Stengel, devi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2,
4
] | [
"ErZk6LJdEb",
"3LMAo02lsB",
"EsoRCKAlrTx",
"nips_2022_SYdg8tcFgdG",
"05G9T4HtyEb",
"V2MqL9_YCUX",
"vkqYtKrba8J",
"G38nDnnZD5o",
"nips_2022_SYdg8tcFgdG",
"nips_2022_SYdg8tcFgdG",
"nips_2022_SYdg8tcFgdG",
"nips_2022_SYdg8tcFgdG"
] |
nips_2022_a3ymtHbL5p5 | In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning | When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible $\textit{at all}$ without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack $\textbf{becomes easier as we add more noise}$ to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea. | Accept | Although it is widely known that DP does not protect against population statistics, the paper weaves this together with PATE (which relies on DP statistics) to demonstrate the danger of misinterpreting the protection guarantees provided by DP. This is a point worth discussing among the privacy and security community. | val | [
"kMG3HEjosJ0",
"ea15mPrUl7M",
"mpctw0JHge",
"1rF9NzXzzgc",
"GjPIzxHk502",
"BdBuVIQg48d",
"ZjzBEZg0MyHx",
"zZEB3uKQYIL",
"2QLqrHX_Q4V",
"7Yi8p6td3UI",
"SPCdFwrs3j",
"BUflsDNrtK",
"JdZeisM-_WO",
"sW5TcGhUv5N",
"apfcJLQKrs"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I read all the reviews and the authors' replies. I think the authors' replies were very good, and I continue to have my super positive opinion about this paper. I hope it gets accepted with a spotlight talk. ",
" Thank you for your response. To clarify our response to point 7: the reviewer is right that if the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
3,
10
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"zZEB3uKQYIL",
"mpctw0JHge",
"2QLqrHX_Q4V",
"sW5TcGhUv5N",
"BUflsDNrtK",
"ZjzBEZg0MyHx",
"7Yi8p6td3UI",
"apfcJLQKrs",
"sW5TcGhUv5N",
"JdZeisM-_WO",
"BUflsDNrtK",
"nips_2022_a3ymtHbL5p5",
"nips_2022_a3ymtHbL5p5",
"nips_2022_a3ymtHbL5p5",
"nips_2022_a3ymtHbL5p5"
] |
nips_2022_3vmKQUctNy | Washing The Unwashable : On The (Im)possibility of Fairwashing Detection | The use of black-box models (e.g., deep neural networks) in high-stakes decision-making systems, whose internal logic is complex, raises the need for providing explanations about their decisions. Model explanation techniques mitigate this problem by generating an interpretable and high-fidelity surrogate model (e.g., a logistic regressor or decision tree) to explain the logic of black-box models.
In this work, we investigate the issue of fairwashing, in which model explanation techniques are manipulated to rationalize decisions taken by an unfair black-box model using deceptive surrogate models. More precisely, we theoretically characterize and analyze fairwashing, proving that this phenomenon is difficult to avoid due to an irreducible factor---the unfairness of the black-box model.
Based on the theory developed, we propose a novel technique, called FRAUD-Detect (FaiRness AUDit Detection), to detect fairwashed models by measuring a divergence over subpopulation-wise fidelity measures of the interpretable model.
We empirically demonstrate that this divergence is significantly larger in purposefully fairwashed interpretable models than in honest ones.
Furthermore, we show that our detector is robust to an informed adversary trying to bypass our detector. The code implementing FRAUD-Detect is available at https://github.com/cleverhans-lab/FRAUD-Detect. | Accept | The reviewers were split about this paper: on one hand they appreciated the motivation and the comprehensive experiments in the paper, on the other they were concerned about the clarity of the paper, even worried about a potential flaw. I have decided to vote to accept given the clear and convincing author response. I urge the authors to take all of the reviewers' changes into account (if not already done so). Once done, this paper will be a nice addition to the conference! | val | [
"XBoo6J6ju2v",
"VGcKQ_53iM",
"SAN-XrEa3ZD",
"WXZA0P2N-xH",
"IXwM19pJ77D",
"nBSewmI3tEP",
"Yootz_DbIsW",
"IyDrOyPPKNzm",
"RvaaiY86OmZB",
"LEl_WYTIZg7",
"P3_6qS6JpI4",
"8Q9zgDstwLV",
"CnNvY-PmgLS",
"USsylUR6O0pZ",
"XrA1G8PKrup",
"A2txta97HE",
"4Nud5BnYKNWn",
"9tqLesugn61",
"Zh34qCf... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Dear reviewer vBkK,\n\nThank you very much for the very positive feedback and insightful suggestions! We would be happy to answer any further questions you may have before the response period ends today.\n\nWarm regards,\nPaper3526 Authors",
" Thank you very much for your time reading our responses, upgrading y... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"4Nud5BnYKNWn",
"RvaaiY86OmZB",
"IyDrOyPPKNzm",
"nBSewmI3tEP",
"nBSewmI3tEP",
"XrA1G8PKrup",
"nips_2022_3vmKQUctNy",
"P3_6qS6JpI4",
"8Q9zgDstwLV",
"nips_2022_3vmKQUctNy",
"EvJ4GagR51c",
"CnNvY-PmgLS",
"USsylUR6O0pZ",
"eeK51LFm3H",
"A2txta97HE",
"Zh34qCft8o2",
"9tqLesugn61",
"nips_2... |
nips_2022_yts7fLpWY9G | Graph Neural Networks with Adaptive Readouts | An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems such as binding affinity prediction where molecules are typically presented in a canonical form it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) relative to the number of neighborhood aggregation iterations and different convolutional operators. | Accept | The paper proposes the use of an adaptive readout function in GNNs together with extensive empirical work to support it. The reviewers all found the paper interesting and are generally in favor of accepting it (with one marking strong accept with high confidence). Therefore, I recommend the paper be accepted, and encourage the authors to take into account the reviewer comments (as they have also indicated in their responses) when preparing the camera ready version. 
In particular, I would like to encourage them to use the extra page given there to reconsider the split of materials between main paper and appendix. | test | [
"MVrdCFdFrdG",
"umsyl8Ru2R",
"J6YSlIU9Md",
"bNoevCP37WR",
"SnqNyy0D1As",
"Lgq7x6mnuF9",
"bpp5gr2Xtlw",
"4Eb_gEmjl7d",
"UTGVdySO2pq",
"N-tXl6UDFV3",
"hzBRRGC9RDi",
"tGx7NKW-v0G",
"ZPqibJeirF2"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the clarification on \"takeaway messages\". We will be expanding the existing discussion on readout functions in the camera-ready version. While our experiments are detailed, they do not cover all the possible factors that might influence a decision on the readout type. We will, however,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"umsyl8Ru2R",
"J6YSlIU9Md",
"4Eb_gEmjl7d",
"SnqNyy0D1As",
"ZPqibJeirF2",
"bpp5gr2Xtlw",
"tGx7NKW-v0G",
"UTGVdySO2pq",
"hzBRRGC9RDi",
"nips_2022_yts7fLpWY9G",
"nips_2022_yts7fLpWY9G",
"nips_2022_yts7fLpWY9G",
"nips_2022_yts7fLpWY9G"
] |
nips_2022__N4k45mtnuq | Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss | A classical result of Johnson and Lindenstrauss states that a set of $n$ high dimensional data points can be projected down to $O(\log n/\epsilon^2)$ dimensions such that the square of their pairwise distances is preserved up to a small distortion $\epsilon\in(0,1)$. It has been proved that the JL lemma is optimal for the general case, therefore, improvements can only be explored for special cases. This work aims to improve the $\epsilon^{-2}$ dependency based on techniques inspired by the Hutch++ Algorithm, which reduces $\epsilon^{-2}$ to $\epsilon^{-1}$ for the related problem of implicit matrix trace estimation. We first present an algorithm to estimate the Euclidean lengths of the rows of a matrix. We prove for it element-wise probabilistic bounds that are at least as good as standard JL approximations in the worst-case, but are asymptotically better for matrices with decaying spectrum. Moreover, for any matrix, regardless of its spectrum, the algorithm achieves $\epsilon$-accuracy for the total, Frobenius norm-wise relative error using only $O(\epsilon^{-1})$ queries. This is a quadratic improvement over the norm-wise error of standard JL approximations. We also show how these results can be extended to estimate (i) the Euclidean distances between data points and (ii) the statistical leverage scores of tall-and-skinny data matrices, which are ubiquitous for many applications, with analogous theoretical improvements. Proof-of-concept numerical experiments are presented to validate the theoretical analysis. | Accept | All reviews for this paper were positive, albeit with a varying level of enthusiasm. Reviewers found the problem, the results (both theoretical and experimental) and the techniques (very) interesting. 
The main concerns were whether the paper is a good fit for the conference (given that dimensionality reduction is more of a machine learning tool rather than a machine learning problem per se) and the lack of experiments on real data. But ultimately, the positives significantly outweighed the negatives. | val | [
"_eS4BUSEPOy",
"2idMuQEOgjv",
"jotI6i18VwR",
"-IF6VcJ85bR",
"Lc7p3RlJ0m9",
"rKFArHCDnh",
"8PsjtdRbr-6",
"Gnk0ZZZLnY",
"OZ_JhyPVfyF",
"-gdRFMfzAQ",
"JzyVl__k3Wh",
"xB5tefVForE",
"zMqMTpJi68N",
"6zFCPWgEtiA",
"GiPM4mjhY5I",
"6tHpiqYnZHo",
"7LjZRQULin3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking our responses into consideration and for raising the score! Just a note, there appears to be this \"Rebuttal Acknowledgement\" button for the reviewers, we are not sure if it's mandatory or if it has any meaning in general, but we thought to mention just in case. Thanks again",
" I thank th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"2idMuQEOgjv",
"Lc7p3RlJ0m9",
"-IF6VcJ85bR",
"zMqMTpJi68N",
"xB5tefVForE",
"-gdRFMfzAQ",
"Gnk0ZZZLnY",
"OZ_JhyPVfyF",
"7LjZRQULin3",
"JzyVl__k3Wh",
"6zFCPWgEtiA",
"GiPM4mjhY5I",
"6tHpiqYnZHo",
"nips_2022__N4k45mtnuq",
"nips_2022__N4k45mtnuq",
"nips_2022__N4k45mtnuq",
"nips_2022__N4k4... |
nips_2022_eF_Mx-3Sm92 | Change-point Detection for Sparse and Dense Functional Data in General Dimensions | We study the problem of change-point detection and localisation for functional data sequentially observed on a general $d$-dimensional space, where we allow the functional curves to be either sparsely or densely sampled. Data of this form naturally arise in a wide range of applications such as biology, neuroscience, climatology and finance. To achieve such a task, we propose a kernel-based algorithm named functional seeded binary segmentation (FSBS). FSBS is computationally efficient, can handle discretely observed functional data, and is theoretically sound for heavy-tailed and temporally-dependent observations. Moreover, FSBS works for a general $d$-dimensional domain, which is the first in the literature of change-point estimation for functional data. We show the consistency of FSBS for multiple change-point estimation and further provide a sharp localisation error rate, which reveals an interesting phase transition phenomenon depending on the number of functional curves observed and the sampling frequency for each curve. Extensive numerical experiments illustrate the effectiveness of FSBS and its advantage over existing methods in the literature under various settings. A real data application is further conducted, where FSBS localises change-points of sea surface temperature patterns in the south Pacific attributed to El Ni\~{n}o. | Accept | The paper studies change-point detection and localization for functional data, which is an interesting and timely topic. I agree with some reviewers that the paper might be a better fit for a traditional statistics venue. The authors have done a great job in the rebuttal phase in addressing reviewers' comments. I believe it is a worthwhile paper to be published in NeurIPS. | val | [
"6qXA16Ge32E",
"8Wh8buLR2KP",
"bgUypABo-ah",
"BsV-T-bvLTK",
"aPgo3KAGUzL",
"HrFNYEEwd9",
"sdLmW4C2gkc",
"_kyTms0qfT",
"XbVgOZk1hzv",
"oG5TKZNbNk2"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your comments and suggestions. In the following, we reply to your comments point-by-point. We have submitted revised main text and supplementary files.\n\n**Whether competing method is state-of-the-art**\n\nTo the best of our knowledge, the three competing methods are still the state-of-th... | [
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3,
1
] | [
"oG5TKZNbNk2",
"XbVgOZk1hzv",
"_kyTms0qfT",
"sdLmW4C2gkc",
"HrFNYEEwd9",
"nips_2022_eF_Mx-3Sm92",
"nips_2022_eF_Mx-3Sm92",
"nips_2022_eF_Mx-3Sm92",
"nips_2022_eF_Mx-3Sm92",
"nips_2022_eF_Mx-3Sm92"
] |
nips_2022_ADfBF9PoTvw | Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients | Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior. Recently, multiple methods have been proposed to run SGD with biased gradient estimates obtained from a Markov chain. This paper provides the first non-asymptotic convergence analysis of these methods by establishing their mixing rate and gradient variance. To do this, we demonstrate that these methods—which we collectively refer to as Markov chain score ascent (MCSA) methods—can be cast as special cases of the Markov chain gradient descent framework. Furthermore, by leveraging this new understanding, we develop a novel MCSA scheme, parallel MCSA (pMCSA), that achieves a tighter bound on the gradient variance. We demonstrate that this improved theoretical result translates to superior empirical performance.
| Accept | All reviewers recommend accepting the paper. If the authors want to increase the impact of their work, a demonstration on a large-scale problem would help a lot. | test | [
"ywkwCYTEK9Y",
"zSUNT4nRnqf",
"rYHNWpJW19S",
"3izwxVJjOmU",
"As9SqjVaomZ",
"Z_UTJYxRxM_",
"20sKzh4nc2J",
"6vBW3I2NFaK",
"56U-l452I6-",
"ZWMOeOVdJnh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns. I increased my score to an *Accept*.",
" Thank you for addressing my questions and suggestions. These answers are helpful and addressed my concerns about mixing rate. The updated Figures 1, 2 are also more informative and highlight the benefits of the proposed approach. The... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"rYHNWpJW19S",
"3izwxVJjOmU",
"20sKzh4nc2J",
"6vBW3I2NFaK",
"ZWMOeOVdJnh",
"56U-l452I6-",
"nips_2022_ADfBF9PoTvw",
"nips_2022_ADfBF9PoTvw",
"nips_2022_ADfBF9PoTvw",
"nips_2022_ADfBF9PoTvw"
] |
nips_2022_rG7HZZtIc- | D^2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video | Given a monocular video, segmenting and decoupling dynamic objects while recovering the static environment is a widely studied problem in machine intelligence. Existing solutions usually approach this problem in the image domain, limiting their performance and understanding of the environment. We introduce Decoupled Dynamic Neural Radiance Field (D^2NeRF), a self-supervised approach that takes a monocular video and learns a 3D scene representation which decouples moving objects, including their shadows, from the static background. Our method represents the moving objects and the static background by two separate neural radiance fields with only one allowing for temporal changes. A naive implementation of this approach leads to the dynamic component taking over the static one as the representation of the former is inherently more general and prone to overfitting. To this end, we propose a novel loss to promote correct separation of phenomena. We further propose a shadow field network to detect and decouple dynamically moving shadows. We introduce a new dataset containing various dynamic objects and shadows and demonstrate that our method can achieve better performance than state-of-the-art approaches in decoupling dynamic and static 3D objects, occlusion and shadow removal, and image segmentation for moving objects. Project page: https://d2nerf.github.io/ | Accept | This paper attacks an interesting problem with NERF, decoupling moving objects, including their shadows, from the static background. All four reviewers recommend accepting the paper, and the weaknesses identified did not detract from substantive contributions. Therefore I am accepting this paper. | train | [
"Laa0JY-LcQq",
"EcyakYJ7Mv6",
"U6Z_9HarEg",
"UlkiJ4MtgSV",
"-ECKPEL1JyJ",
"k4YwP2MzwUE",
"49VPU8viCK",
"4E2lwN9l0Yp",
"iMrpU4ZP436",
"ZHmwrHDIg9p",
"-Hp1TOVqbsb"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors well addressed my concerns. Actually, I really appreciate this paper although the technique is relatively simple.",
" We thank the reviewer PH91 for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n### R4-Q1 Hyperparameter Tuning\nPlease refer to All-Q... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"EcyakYJ7Mv6",
"-Hp1TOVqbsb",
"ZHmwrHDIg9p",
"iMrpU4ZP436",
"4E2lwN9l0Yp",
"49VPU8viCK",
"nips_2022_rG7HZZtIc-",
"nips_2022_rG7HZZtIc-",
"nips_2022_rG7HZZtIc-",
"nips_2022_rG7HZZtIc-",
"nips_2022_rG7HZZtIc-"
] |
nips_2022_pbILUUf_hBN | A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs | We consider online learning with feedback graphs, a sequential decision-making framework where the learner's feedback is determined by a directed graph over the action set. We present a computationally-efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments. The bound against oblivious adversaries is $\tilde{O} (\sqrt{\alpha T})$, where $T$ is the time horizon and $\alpha$ is the independence number of the feedback graph. The bound against stochastic environments is $O\big((\ln T)^2 \max_{S\in \mathcal I(G)} \sum_{i \in S} \Delta_i^{-1}\big)$ where $\mathcal I(G)$ is the family of all independent sets in a suitably defined undirected version of the graph and $\Delta_i$ are the suboptimality gaps.
The algorithm combines ideas from the EXP3++ algorithm for stochastic and adversarial bandits and the EXP3.G algorithm for feedback graphs with a novel exploration scheme. The scheme, which exploits the structure of the graph to reduce exploration, is key to obtain best-of-both-worlds guarantees with feedback graphs.
We also extend our algorithm and results to a setting where the feedback graphs are allowed to change over time. | Accept | The reviewers came to a consensus that this paper makes good progress on online learning with feedback graphs. I agree with these opinions; please polish the manuscript so that the minor concerns raised by the reviewers are addressed in the final version. | train | [
"Tpb-L3JzRtA",
"izVGQf99RxA",
"KdtlqvDc_-",
"s18y0s6fSY",
"-kWNVtJZoZd",
"TGIkxFAC_h1",
"qSztex4PIKJ",
"Y1GhdPD_paM",
"CTzhPx9_Ap1",
"JbUfn_PB0Vk",
"NkK-YOP4Ds"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors response and my issues are addressed by the authors. Although the current regret bound in the stochastic world may not match the instance-optimal regret bound, the algorithm and the bounds are interesting based on my understanding.",
" > Could the authors explain more about the relationsh... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"KdtlqvDc_-",
"JbUfn_PB0Vk",
"JbUfn_PB0Vk",
"CTzhPx9_Ap1",
"Y1GhdPD_paM",
"NkK-YOP4Ds",
"NkK-YOP4Ds",
"nips_2022_pbILUUf_hBN",
"nips_2022_pbILUUf_hBN",
"nips_2022_pbILUUf_hBN",
"nips_2022_pbILUUf_hBN"
] |
nips_2022_VpHFHz57fT | Improved Imaging by Invex Regularizers with Global Optima Guarantees | Image reconstruction enhanced by regularizers, e.g., to enforce sparsity, low rank or smoothness priors on images, has many successful applications in vision tasks such as computer photography, biomedical and spectral imaging. It has been well accepted that non-convex regularizers normally perform better than convex ones in terms of the reconstruction quality. But their convergence analysis is only established to a critical point, rather than the global optima. To mitigate the loss of guarantees for global optima, we propose to apply the concept of invexity and provide the first list of proved invex regularizers for improving image reconstruction. Moreover, we establish convergence guarantees to global optima for various advanced image reconstruction techniques after being improved by such invex regularization. To the best of our knowledge, this is the first practical work applying invex regularization to improve imaging with global optima guarantees. To demonstrate the effectiveness of invex regularization, numerical experiments are conducted for various imaging tasks using benchmark datasets. | Accept | This paper addresses image reconstruction problems exploiting invex regularizers (which are not necessarily convex). For many modern signal processing applications, invexity of the cost is proved. Many examples are considered, and an extensive comparison with state-of-the-art methods is provided in an application section.
As all reviewers unanimously point out, the paper is very well presented, original, and overall of very high quality. It is therefore a clear accept. But I remind the authors of the strict rule on the number of pages for the final version (10 pages). | train | [
"JDtEA373sDK",
"bnWXxB5uKR0",
"1xAuJE0fvEz",
"CqEYLrW9M-e",
"-0eNFI4cJoP",
"Vm4TmtHQsb_i",
"x3ZMZKRGBHx",
"2ITW96jtnN",
"srxk-bpmZ2",
"LcilE_ky0vB",
"p4PdNCvwGiw",
"E1I3QWsO1dv"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad to know that the reviewer found our rebuttal helpful. Thank you for the comments, and the support reconsidering the decision. Here we address your new questions.\n# Questions\n- **Q1.** We thank you for the suggested plot. We will include this new comparative plot between invex and convex regularizers... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"bnWXxB5uKR0",
"x3ZMZKRGBHx",
"-0eNFI4cJoP",
"Vm4TmtHQsb_i",
"2ITW96jtnN",
"srxk-bpmZ2",
"LcilE_ky0vB",
"p4PdNCvwGiw",
"E1I3QWsO1dv",
"nips_2022_VpHFHz57fT",
"nips_2022_VpHFHz57fT",
"nips_2022_VpHFHz57fT"
] |
nips_2022_K4W92FUXSF9 | Random Normalization Aggregation for Adversarial Defense | The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of the intriguing properties of them is Adversarial Transfersability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of proposed algorithm. The PyTorch code is available at https://github.com/UniSerj/Random-Norm-Aggregation and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/RNA. 
| Accept | This paper introduces the relation between normalizations and adversarial transferability, and proposes a method using random normalization aggregation for enhancing adversarial robustness.
Three reviewers agreed on the interesting idea, thorough experiments, theoretical analysis, and the effectiveness of the method, so they gave acceptance scores.
However, one reviewer (ioBB) raised a concern about the results of Auto-Attack (AA) and the lack of in-depth discussion of those results.
Unfortunately, the AC and the reviewers failed to reach a consensus on the decision during the discussion period.
The AC carefully read the paper, the rebuttal, and the reviewers' discussion. The main remaining issue raised by Reviewer-ioBB is that it is unclear why the results of AA are higher than those of PGD. The authors provided more extensive experimental results, focusing on comparing the results of AA (and the breakdown of the four AA attacks) and PGD. They also conjecture that these results stem from handling adversarial transferability under a randomness-based method via normalization aggregation.
The AC also agrees with the authors and Reviewer-q4wp that an in-depth analysis of the reason for the AA-PGD results is out of scope for this paper, and these results are consistent with recent works on adversarial transferability. So, this issue might be left as future work.
Because the contribution of this paper seems sufficient for the machine learning community, apart from the discussion of the AA-PGD reason, the AC recommends accepting this paper.
| val | [
"9Mi0ZKs_KJu",
"m_Pj40FCq3M",
"rm_Xy5T3vIc",
"uZoJD27svsF",
"BMdhg5cn6u",
"NEWlyXiaA2",
"J5igU5TTFst",
"LrBIAQhEo2",
"4yBJht7-9LO",
"7x3Q7KDMBKI",
"iv8Gf2P3NAf",
"JV85WO8uf5r",
"MzFTdBRfwjP",
"EFQhhXZ5No3",
"nSKAALteAYA",
"U5IfnGUqYIb",
"Eaq81KDPOQ",
"Fv9NOoBGkg9",
"79myZnvaau",
... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" We sincerely appreciate all the reviewers for their valuable comments and suggestions. We have revised the manuscript according to the comments. The updated contents are highlighted by blue text in the revised manuscript.",
" ### Re Auto-Attack Results (Cont.)\nThanks for your reply again. We carefully checked ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
5
] | [
"nips_2022_K4W92FUXSF9",
"rm_Xy5T3vIc",
"uZoJD27svsF",
"BMdhg5cn6u",
"NEWlyXiaA2",
"LrBIAQhEo2",
"U5IfnGUqYIb",
"JV85WO8uf5r",
"7x3Q7KDMBKI",
"MzFTdBRfwjP",
"5p_AwfsA59F",
"EFQhhXZ5No3",
"Fv9NOoBGkg9",
"nSKAALteAYA",
"5p_AwfsA59F",
"dEpSZTfAD-C",
"Fv9NOoBGkg9",
"gtI-3418p5W",
"37... |
nips_2022_8hs7qlWcnGs | ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery | In computer-aided drug discovery (CADD), virtual screening (VS) is used for comparing a library of compounds against known active ligands to identify the drug candidates that are most likely to bind to a molecular target. Most VS methods to date have focused on using canonical compound representations (e.g., SMILES strings, Morgan fingerprints) or generating alternative fingerprints of the compounds by training progressively more complex variational autoencoders (VAEs) and graph neural networks (GNNs). Although VAEs and GNNs led to significant improvements in VS performance, these methods suffer from reduced performance when scaling to large virtual compound datasets. The performance of these methods has shown only incremental improvements in the past few years. To address this problem, we developed a novel method using multiparameter persistence (MP) homology that produces topological fingerprints of the compounds as multidimensional vectors. Our primary contribution is framing the VS process as a new topology-based graph ranking problem by partitioning a compound into chemical substructures informed by the periodic properties of its atoms and extracting their persistent homology features at multiple resolution levels. We show that the margin loss fine-tuning of pretrained Triplet networks attains highly competitive results in differentiating between compounds in the embedding space and ranking their likelihood of becoming effective drug candidates. We further establish theoretical guarantees for the stability properties of our proposed MP signatures, and demonstrate that our models, enhanced by the MP signatures, outperform state-of-the-art methods on benchmark datasets by a wide and highly statistically significant margin (e.g., 93\% gain for Cleves-Jain and 54\% gain for DUD-E Diverse dataset). | Accept | The reviewers mostly liked the paper. 
They mentioned the sound theoretical foundations, stability, and strong empirical performance. The rebuttal was able to convince most reviewers that the paper should be accepted for NeurIPS. | train | [
"yCbTnA-K5_-",
"NN5G56U0acZ",
"QTz_pimZ0j",
"4PbHQT4PcGL",
"NUfeeks0Gn",
"lYFjiFPVrga",
"522kpJ4WwrG",
"Y_GG3nYqaJz",
"Y0hQkpKpab0",
"97rflTT4UdD",
"aKLHdU0OBEX",
"9hIlI6nGfaX",
"p24u3trYKAx",
"2yr2kpFgjE1",
"c2VEJ-UMF-r",
"QVkFG1s4uzD",
"EYmN43JnRF",
"ZekkkzxDsay",
"MFweTsG5lWK"... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" We provided detailed responses, pointing to the specific parts of the paper that articulates the points you expressed concerns. We would very much appreciate if you could consider updating the scores before the deadline today.",
" Thanks so much! We are very grateful for your constructive and detailed response,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"NUfeeks0Gn",
"c2VEJ-UMF-r",
"4PbHQT4PcGL",
"lYFjiFPVrga",
"522kpJ4WwrG",
"c2VEJ-UMF-r",
"Y0hQkpKpab0",
"97rflTT4UdD",
"aKLHdU0OBEX",
"MFweTsG5lWK",
"9hIlI6nGfaX",
"p24u3trYKAx",
"2yr2kpFgjE1",
"Qrb9bOYNphv",
"QVkFG1s4uzD",
"EYmN43JnRF",
"ZekkkzxDsay",
"cJUuBYwgALE",
"1wtup6iKbMz... |
nips_2022_9cU2iW3bz0 | Score-Based Diffusion meets Annealed Importance Sampling | More than twenty years after its introduction, Annealed Importance Sampling (AIS) remains one of the most effective methods for marginal likelihood estimation. It relies on a sequence of distributions interpolating between a tractable initial distribution and the target distribution of interest which we simulate from approximately using a non-homogeneous Markov chain. To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal. While much effort has been devoted to improving the proposal distribution used by AIS, by changing the intermediate distributions and corresponding Markov kernels, an underappreciated issue is that AIS uses a convenient but suboptimal extended target distribution. This can hinder its performance. We here leverage recent progress in score-based generative modeling (SGM) to approximate the optimal extended target distribution for AIS proposals corresponding to the discretization of Langevin and Hamiltonian dynamics. We demonstrate these novel, differentiable, AIS procedures on a number of synthetic benchmark distributions and variational auto-encoders. | Accept | The submission presents a novel and interesting method using recent advances in score-based diffusion to improve the recently proposed differentiable AIS log marginal likelihood estimates. The experiments clearly show the benefit of using Monte Carlo diffusion. The writing is clear and of high quality. For these reasons, all reviewers were unanimous in recommending acceptance.
AC notes: (1) out of curiosity, how do the adjusted Langevin and Hamiltonian versions perform on the static targets?, and (2) further to the question below on the timings, would it be useful to add a comparison where the time is kept similar between U(L/H)A and that with MCD, for example, 2K intermediate densities for U(L/H)A and K for the MCD variants. | val | [
"Pep9osFbTxX",
"EQhPXhjGBCa",
"fZWzYY-Vcu2",
"JF9MeCfyEcRo",
"g3TNWW6WzU1",
"cTS8--uXS5k",
"q2kc1YSi7m",
"dHWGvP2Udjn",
"GeLoADmowWw",
"igrOTHHpRGR",
"sB6CykdwO-b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear author, I appreciate the clarification on amortized inference and the SOTA result reassurance. Your paper looks promising, and I will raise my score. ",
" Dear authors,\nthanks for your message, and apologies for being a bit late with my comments.\nAll looks good to me, I will raise my score and cross fing... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"dHWGvP2Udjn",
"cTS8--uXS5k",
"JF9MeCfyEcRo",
"g3TNWW6WzU1",
"q2kc1YSi7m",
"sB6CykdwO-b",
"igrOTHHpRGR",
"GeLoADmowWw",
"nips_2022_9cU2iW3bz0",
"nips_2022_9cU2iW3bz0",
"nips_2022_9cU2iW3bz0"
] |
nips_2022_0oQv1Ftt_gK | Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies | Among the challenges not yet resolved for Counterfactual Explanations (CE), there are stability, synthesis of the various CE and the lack of plausibility/sparsity guarantees. From a more practical point of view, recent studies show that the prescribed counterfactual recourses are often not implemented exactly by the individuals and demonstrate that most state-of-the-art CE algorithms are very likely to fail in this noisy environment. To address these issues, we propose a probabilistic framework that gives a sparse local counterfactual rule for each observation: we provide rules that give a range of values that can change the decision with a given high probability instead of giving diverse CE. In addition, the recourses derived from these rules are robust by construction. These local rules are aggregated into a regional counterfactual rule to ensure the stability of the counterfactual explanations across observations. Our local and regional rules guarantee that the recourses are faithful to the data distribution because our rules use a consistent estimator of the probabilities of changing the decision based on a Random Forest. In addition, these probabilities give interpretable and sparse rules as we select the smallest set of variables having a given probability of changing the decision. Codes for computing our counterfactual rules are available, and we compare their relevancy with standard CE and recent similar attempts. | Reject | This paper highlights a series of limitations of existing methods in the literature on algorithmic recourse (e.g., recourses are not implemented exactly and are often noisy), and posits new definitions for Local and Regional Counterfactual Rules and proposes a novel algorithmic framework to learn them. 
All the reviewers opine that there is very little motivation for why the new definitions should be exactly the way they are, and what their strengths and weaknesses are. In addition, it is unclear when we can expect the proposed approach to provide a good estimate of the criterion. Furthermore, the problems highlighted in this work have been explored in several recent works (e.g., Dominguez-Olmedo et al., ICML 2022, On the adversarial robustness of causal algorithmic recourse; Pawelczyk et al., 2022, Let Users Decide: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse). We encourage the authors to address the aforementioned aspects, discuss related works, and also compare the proposed approach with these works.
| train | [
"4jQAfTmwhC_",
"rf0FToJYAJR",
"SGVf-bkNXwM",
"q0piYh-w-LT",
"rzr1m5Ccc0t",
"ougHsWQjknj",
"iDN-SU2Pfkf",
"-IXS9ZaDX4C",
"t67oJZ161NH",
"wOgG82jiOU",
"aA5RCc-Djap",
"0dOErpyxBr8"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ** \"notice that asymptotic consistency of an estimator does not mean that it is always good in practice\": we do agree with the reviewer that the consistency is not the definitive answer, and that the rate of convergence is a much better answer. Nevertheless, we think that we already provide several new concepts... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"SGVf-bkNXwM",
"q0piYh-w-LT",
"iDN-SU2Pfkf",
"-IXS9ZaDX4C",
"0dOErpyxBr8",
"iDN-SU2Pfkf",
"-IXS9ZaDX4C",
"aA5RCc-Djap",
"wOgG82jiOU",
"nips_2022_0oQv1Ftt_gK",
"nips_2022_0oQv1Ftt_gK",
"nips_2022_0oQv1Ftt_gK"
] |
nips_2022_HuiLIB6EaOk | VTC-LFC: Vision Transformer Compression with Low-Frequency Components | Although Vision transformers (ViTs) have recently dominated many vision tasks, deploying ViT models on resource-limited devices remains a challenging problem. To address such a challenge, several methods have been proposed to compress ViTs. Most of them borrow experience in convolutional neural networks (CNNs) and mainly focus on the spatial domain. However, the compression only in the spatial domain suffers from a dramatic performance drop without fine-tuning and is not robust to noise, as the noise in the spatial domain can easily confuse the pruning criteria, leading to some parameters/channels being pruned incorrectly. Inspired by recent findings that self-attention is a low-pass filter and low-frequency signals/components are more informative to ViTs, this paper proposes compressing ViTs with low-frequency components. Two metrics named low-frequency sensitivity (LFS) and low-frequency energy (LFE) are proposed for better channel pruning and token pruning. Additionally, a bottom-up cascade pruning scheme is applied to compress different dimensions jointly. Extensive experiments demonstrate that the proposed method could save 40% ~ 60% of the FLOPs in ViTs, thus significantly increasing the throughput on practical devices with less than 1% performance drop on ImageNet-1K. | Accept | The authors present a method to improve ViT efficiency by pruning channels and tokens using a selection mechanism that emphasizes low spatial frequency information. In particular they propose two measures: Low Frequency Sensitivity (LFS) and Low Frequency Energy (LFE).
1) LFS comprises two parts (Eq 3), with the contribution of each controlled by a weighting hyper-parameter (though ultimately a Taylor approximation is used):
a. The difference in the loss function of the model on the low-pass image with and without a removed weight.
b. The difference in KL divergence of the class token before and after low-pass filtering with and without the weight in question.
2) LFE measures the proportionate energy of a token among the energy of all tokens after low-pass filtering (Eq 6), combined with a measure of the attention weights (Eq 8).
Pruning is carried out via a bottom-up cascade mechanism (Sec 3.4).
Performance is evaluated on a variety of model sizes on ImageNet 1K.
Pros:
- [R] First work to compress a model emphasizing low-frequency information
- [AC/R] Throughput improvement is significant with minimal performance drop.
- [R] Clear and correct formulas.
- [AC/R] Well written.
- [AC/R] Outperforms other compression methods.
Cons:
- [R] More thorough evaluation needed (ImageNet Real/V2 and CIFAR 10/100). Authors followed up and provided this requested evaluation data.
- [R] No comparison on the impact of the number of epochs. Authors supplied information.
- [R] No ablation on hyperparameter for LFS. Authors supplied information.
- [R] Unclear how the method might work with window-based ViTs such as Swin. Authors provided these additional experiments.
- [R] Unclear what the cutoff is for low/high frequency components. Authors provided ablation of varying cutoffs.
Paper has unanimous accept ratings. Authors addressed reviewer concerns, and reviewers complimented authors for a good job. AC recommends accept.
AC Rating: Strong Accept | train | [
"JZsi4eJRd1I",
"ZNqt4Wm24U",
"nPvl7hDtZLa",
"Quxley1wJNc",
"iw_SXWyBgt6",
"lt7WGJSwwlt",
"6SNAvcr1DlV",
"oW0I_g5KFW",
"8w19rqu2dXx",
"H-AGPlkcRr"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " Thank you for the comments and suggestions, we will discuss the MiniViT, TinyViT, and other related ones in the revised version.",
" I am glad to see the additional experiments and details, which addressed my concerns. Thanks to the authors for the efforts in this discussion. After reading the comments of other ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"ZNqt4Wm24U",
"iw_SXWyBgt6",
"Quxley1wJNc",
"6SNAvcr1DlV",
"oW0I_g5KFW",
"8w19rqu2dXx",
"H-AGPlkcRr",
"nips_2022_HuiLIB6EaOk",
"nips_2022_HuiLIB6EaOk",
"nips_2022_HuiLIB6EaOk"
] |
nips_2022_YiFQqYAk1xH | Dynamic Fair Division with Partial Information | We consider the fundamental problem of fairly and efficiently allocating $T$ indivisible items among $n$ agents with additive preferences. The items become available over a sequence of rounds, and every item must be allocated immediately and irrevocably before the next one arrives. Previous work shows that when the agents' valuations for the items are drawn from known distributions, it is possible (under mild technical assumptions) to find allocations that are envy-free with high probability and Pareto efficient ex-post.
We study a \emph{partial-information} setting, where it is possible to elicit ordinal but not cardinal information. When a new item arrives, the algorithm can query each agent for the relative rank of this item with respect to a subset of the past items.
When values are drawn from i.i.d.\ distributions, we give an algorithm that is envy-free and $(1-\epsilon)$-welfare-maximizing with high probability. We provide similar guarantees (envy-freeness and a constant approximation to welfare with high probability) even with minimally expressive queries that ask for a comparison to a single previous item. For independent but non-identical agents, we obtain envy-freeness and a constant approximation to Pareto efficiency with high probability. We prove that all our results are asymptotically tight. | Accept | All reviewers are positive about the paper and found that the problem of sequential/online fair allocation of indivisible items interesting (and relatively new), and the theoretical results significant and sometimes surprising. Technically, the paper also has a "bandit" flavor, which makes it a good fit for NeurIPS. | train | [
"6o8PIrq3mRQ",
"fXj7VJ0vaA76",
"KpuqPphKG8hc",
"6R_-t2PRJUR",
"16Lb8YY5Cxj",
"B6hqF0mQmPx",
"alWIUJKzqgL",
"K7a4YFuUjSr",
"FY3kxSr4YD",
"R51oSWk8W_q",
"RH1mgpZTJa_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read your reply. I appreciate the additional remark on the adversarial and random order models. Although I have not understood exactly the impossibilities in the random order model, mentioning those points in the main body would strengthen the motivation to consider the iid model.",
" Thank you for addre... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"16Lb8YY5Cxj",
"6R_-t2PRJUR",
"RH1mgpZTJa_",
"R51oSWk8W_q",
"FY3kxSr4YD",
"K7a4YFuUjSr",
"nips_2022_YiFQqYAk1xH",
"nips_2022_YiFQqYAk1xH",
"nips_2022_YiFQqYAk1xH",
"nips_2022_YiFQqYAk1xH",
"nips_2022_YiFQqYAk1xH"
] |
nips_2022_78T4K99jvbE | Set-based Meta-Interpolation for Few-Task Meta-Learning | Meta-learning approaches enable machine learning systems to adapt to new tasks given few examples by leveraging knowledge from related tasks. However, a large number of meta-training tasks are still required for generalization to unseen tasks during meta-testing, which introduces a critical bottleneck for real-world problems that come with only few tasks, due to various reasons including the difficulty and cost of constructing tasks. Recently, several task augmentation methods have been proposed to tackle this issue using domain-specific knowledge to design augmentation techniques to densify the meta-training task distribution. However, such reliance on domain-specific knowledge renders these methods inapplicable to other domains. While Manifold Mixup based task augmentation methods are domain-agnostic, we empirically find them ineffective on non-image domains. To tackle these limitations, we propose a novel domain-agnostic task augmentation method, Meta-Interpolation, which utilizes expressive neural set functions to densify the meta-training task distribution using bilevel optimization. We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning across various domains such as image classification, molecule property prediction, text classification and speech recognition. Experimentally, we show that Meta-Interpolation consistently outperforms all the relevant baselines. Theoretically, we prove that task interpolation with the set function regularizes the meta-learner to improve generalization. We provide our source code in the supplementary material. | Accept | This paper uses a set transformer to create new tasks at meta training time when the amount of data for meta training is scarce. This approach seems to be highly effective and will make a worthwhile contribution to the few-shot learning toolbox. 
Many of the reviewer concerns were addressed through additional ablations and experiments, e.g., using MAML instead of protonets. Please include these experiments in the final draft as they add quite a bit to the paper. | train | [
"fubRdmMYex0",
"vSnDqbqYcaX",
"zZnQ16hWWv",
"Y43k845vRK",
"8OifB207VoI",
"S121PAuwnyw",
"f5TNbFLNGzo",
"dwPK3cK2SvG",
"Nw5Svgwz2WS",
"z3DYT9UBfZ6",
"tH1o9E7tx0r",
"7JFxsh-_Ykz",
"B7DdQcPUiCW",
"ENmghKRsLbx",
"_pF6TwwBIT",
"VYe0wbTdz0w",
"Q2o2uZ9rve0o",
"L3B4TqnXHRyY",
"i0XqiG-bvM... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Thank you for taking the time and effort to review and re-evaluate our work. As suggested, we report how the test accuracy changes as we vary the number of **meta-validation tasks** in the table below and include it in the revision. Although the performance of Meta-Interpolation slightly decreases if we reduce th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"zZnQ16hWWv",
"Y43k845vRK",
"Q2o2uZ9rve0o",
"Nw5Svgwz2WS",
"ENmghKRsLbx",
"dwPK3cK2SvG",
"tH1o9E7tx0r",
"_pF6TwwBIT",
"z3DYT9UBfZ6",
"L3B4TqnXHRyY",
"nips_2022_78T4K99jvbE",
"nips_2022_78T4K99jvbE",
"i0XqiG-bvM",
"Xu-XiIShmQ",
"4sXDdLvo3FK",
"Xu-XiIShmQ",
"byYCaa__H16",
"i0XqiG-bvM... |
nips_2022_WyiM4lDJOcK | How To Design Stable Machine Learned Solvers For Scalar Hyperbolic PDEs | Machine learned partial differential equation (PDE) solvers trade the robustness of classical numerical methods for potential gains in accuracy and/or speed. A key challenge for machine learned PDE solvers is to maintain physical constraints that will improve robustness while still retaining the flexibility that allows these methods to be accurate. In this paper, we show how to design solvers for scalar hyperbolic PDEs that are stable by construction. We call our technique 'global stabilization.' Unlike classical numerical methods, which guarantee stability by putting local constraints on the solver, global stabilization adjusts the time-derivative of the discrete solution to ensure that global invariants and stability conditions are satisfied. Although global stabilization can be used to ensure the stability of any scalar hyperbolic PDE solver that uses method of lines, it is designed for machine learned solvers. Global stabilization's unique design choices allow it to guarantee stability without degrading the accuracy of an already-accurate machine learned solver. | Reject | Thank you for your submission to NeurIPS. All four reviewers authors are enthusiastic of this work, though three of the four reviewers had major concerns with the actual submission. In response, the authors carried out a major revision within a very short turn-around, completely rewriting most of the paper. All four reviewers are highly appreciative of the authors' responsiveness in replying to the reviews, in submitting new revisions taking into account reviewer comments.
Unfortunately, some concerns remain even after the author-reviewer discussion period closed. Broadly, the three reviewers all felt that there had been too many edits of the paper to be able to verify/falsify the newest content in all detail. The new revision is sufficiently different as to necessitate a fresh set of reviews, in a way that the conference format of NeurIPS is not able to facilitate.
Overall, the core idea is interesting and potentially very useful. The paper studies a critical problem typically overlooked by the current trend of combining ML with PDE solvers, and is transparent and upfront about its limitations. The paper asks a good question that can inspire important follow-up works. I highly encourage the authors to take reviewer comments into account, to expand into a complete, detailed experimental section and compare against other ML-based approaches, and to resubmit to an upcoming ML conference.
| train | [
"gsusE7k06Dy",
"CcsPz7HVZsV",
"Zclbg7AOamC",
"M-N2QvWURzA",
"Y_zKK_pCb7P",
"DY2CZPtxcW_k",
"d08D2iszg9c",
"cSMUvOWijNL",
"dxhSKRHoS8",
"YaLGT7y0U7d",
"Bs5jrWE64V",
"j7tad74ze1P",
"PuKMB9IkWr",
"6x3bzz_Dwur",
"9oh7uFZX1_7"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Reviewers,\n\nWe have submitted a second revised paper. \n\nOur paper now contains an experimental verification of the claim that global stabilization \"can be used to stabilize machine learned PDE solvers without degrading the accuracy of an already-accurate solver.\" See lines 247-251 in section 6.1, figure 3, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"nips_2022_WyiM4lDJOcK",
"cSMUvOWijNL",
"DY2CZPtxcW_k",
"Y_zKK_pCb7P",
"j7tad74ze1P",
"dxhSKRHoS8",
"9oh7uFZX1_7",
"6x3bzz_Dwur",
"PuKMB9IkWr",
"j7tad74ze1P",
"nips_2022_WyiM4lDJOcK",
"nips_2022_WyiM4lDJOcK",
"nips_2022_WyiM4lDJOcK",
"nips_2022_WyiM4lDJOcK",
"nips_2022_WyiM4lDJOcK"
] |
nips_2022_pl279jU4GOu | Convergence beyond the over-parameterized regime using Rayleigh quotients | In this paper, we present a new strategy to prove the convergence of Deep Learning architectures to a zero training (or even testing) loss by gradient flow. Our analysis is centered on the notion of Rayleigh quotients in order to prove Kurdyka-Lojasiewicz inequalities for a broader set of neural network architectures and loss functions. We show that Rayleigh quotients provide a unified view for several convergence analysis techniques in the literature. Our strategy produces a proof of convergence for various examples of parametric learning. In particular, our analysis does not require the number of parameters to tend to infinity, nor the number of samples to be finite, thus extending to test loss minimization and beyond the over-parameterized regime. | Accept | This paper proposes a new method for proving the convergence of gradient flow to zero loss by leveraging Rayleigh quotients to establish KL inequalities, a strategy that can apply even without overparameterization. The reviewers found the paper to be well written and generally easy to follow, despite a few concerns about burdensome notation. The discussion highlighted a few minor technical issues which were addressed in the revision. Overall the consensus is that this paper provides valuable new tools and the results will be of interest to the theory community, so I recommend acceptance.
| train | [
"Q7EkaphhNL",
"Q_zvBRkvN-",
"OVbMSUNA9_m",
"aHJBKvysxAP",
"I1lszOaskNd",
"2pYUFS41w8i",
"2b4FxF5P-JY",
"WJi4wM5oE32"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " I agree that there is a gap from the presented results to deep networks, but it is still a very insightful work. I am looking forward to your follow-up version. ",
" The comments/concerns/suggestions have been reflected in the rebuttal version. Also thanks for highlighting the updates for easier follow-up.\... | [
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"I1lszOaskNd",
"OVbMSUNA9_m",
"WJi4wM5oE32",
"2b4FxF5P-JY",
"2pYUFS41w8i",
"nips_2022_pl279jU4GOu",
"nips_2022_pl279jU4GOu",
"nips_2022_pl279jU4GOu"
] |
nips_2022_M_WuaKoaEfQ | A Quadrature Rule combining Control Variates and Adaptive Importance Sampling | Driven by several successful applications such as in stochastic gradient descent or in Bayesian computation, control variates have become a major tool for Monte Carlo integration. However, standard methods do not allow the distribution of the particles to evolve during the algorithm, as is the case in sequential simulation methods. Within the standard adaptive importance sampling framework, a simple weighted least squares approach is proposed to improve the procedure with control variates. The procedure takes the form of a quadrature rule with adapted quadrature weights to reflect the information brought in by the control variates. The quadrature points and weights do not depend on the integrand, a computational advantage in case of multiple integrands. Moreover, the target density needs to be known only up to a multiplicative constant. Our main result is a non-asymptotic bound on the probabilistic error of the procedure. The bound proves that for improving the estimate's accuracy, the benefits from adaptive importance sampling and control variates can be combined. The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression. | Accept | This paper proposes a novel method to perform Monte Carlo integration combining control variates and annealed importance sampling. All reviewers agreed the algorithm was of interest, the theoretical evidence was strong, and the experimental results were sufficiently convincing, so there was a consensus on acceptance.
(As a minor aside, I would encourage the authors to consider the font sizes in their figures.) | val | [
"sVLPlvPUyQc",
"t_bP5zcnOWr",
"vqdZTE7vIjA",
"pRBs2uVDYL",
"tYQk2fZk0iUa",
"DxiCgFnivyr",
"5NvdPn9BS5D",
"6IehXvQ3aGG",
"QHrUBibw5qN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response! I appreciate the updated figures and the discussion about the choice of control variates and have increased my score to a 7. ",
 " Thanks so much for your response! I must admit I made a typo in my initial review: I meant Bayesian **logistic** regression. I think an example lik...
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"tYQk2fZk0iUa",
"DxiCgFnivyr",
"pRBs2uVDYL",
"QHrUBibw5qN",
"6IehXvQ3aGG",
"5NvdPn9BS5D",
"nips_2022_M_WuaKoaEfQ",
"nips_2022_M_WuaKoaEfQ",
"nips_2022_M_WuaKoaEfQ"
] |
nips_2022_s0AgNH86p8 | TransBoost: Improving the Best ImageNet Performance using Deep Transduction | This paper deals with deep transductive learning, and proposes TransBoost as a procedure for fine-tuning any deep neural model to improve its performance on any (unlabeled) test set provided at training time. TransBoost is inspired by a large margin principle and is efficient and simple to use. Our method significantly improves the ImageNet classification performance on a wide range of architectures, such as ResNets, MobileNetV3-L, EfficientNetB0, ViT-S, and ConvNext-T, leading to state-of-the-art transductive performance.
Additionally we show that TransBoost is effective on a wide variety of image classification datasets. The implementation of TransBoost is provided at: https://github.com/omerb01/TransBoost . | Accept | This paper initially received mixed opinions. After intensive author-reviewer and reviewer-reviewer discussions, all reviewers converged and recommended acceptance. AC recommends accepting the paper. | train | [
"INoXbQ8Zi-S",
"T-WaMK4cdd",
"_Lp9ndV_ESR",
"N1GxPSnRO8a",
"Lz4B6TRwMFX",
"gmHCbc5x6Xa",
"ZrTJYDgzis",
"l37040edpk_",
"lbrhEu-PIwY",
"Fq1ZdERlH-s",
"KyIvPRrZs1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
 " After reading relevant materials, I realize that the unlabelled test data are usually used in transduction learning. The experimental results in Table 1 also show that TransBoost outperforms self-supervised learning methods such as SimCLRv2 in the transductive setting, which demonstrates its superiority. The practica...
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"N1GxPSnRO8a",
"gmHCbc5x6Xa",
"N1GxPSnRO8a",
"Lz4B6TRwMFX",
"ZrTJYDgzis",
"KyIvPRrZs1",
"Fq1ZdERlH-s",
"lbrhEu-PIwY",
"nips_2022_s0AgNH86p8",
"nips_2022_s0AgNH86p8",
"nips_2022_s0AgNH86p8"
] |
nips_2022_lHj-q9BSRjF | Data Distributional Properties Drive Emergent In-Context Learning in Transformers | Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this behavior is driven by the distributions of the training data itself. In-context learning emerges when the training data exhibits particular distributional properties such as burstiness (items appear in clusters rather than being uniformly distributed over time) and having a large number of rarely occurring classes. In-context learning also emerges more strongly when item meanings or interpretations are dynamic rather than fixed. These properties are exemplified by natural language, but are also inherent to naturalistic data in a wide range of other domains. They also depart significantly from the uniform, i.i.d. training distributions typically used for standard supervised learning. In our initial experiments, we found that in-context learning traded off against more conventional weight-based learning, and models were unable to achieve both simultaneously. However, our later experiments uncovered that the two modes of learning could co-exist in a single model when it was trained on data following a skewed Zipfian distribution -- another common property of naturalistic data, including language. In further experiments, we found that naturalistic data distributions were only able to elicit in-context learning in transformers, and not in recurrent models. Our findings indicate how the transformer architecture works together with particular properties of the training data to drive the intriguing emergent in-context learning behaviour of large language models, and indicate how future work might encourage both in-context and in-weights learning in domains beyond language. 
| Accept | This paper poses and analyses an interesting question -- do statistical properties of the training data affect emergent behavior (e.g. in-context learning) in Transformers? The study is novel since it probes the properties of the data itself, as opposed to most existing work that has studied the effect of model architectures and training algorithms on model capabilities. The paper shows that properties like Zipfian distribution and burstiness are useful for eliciting in-context learning (two properties that are widely found in natural languages, where in-context learning has had a rapid emergence). All the reviewers appreciated the originality and timeliness of this paper, and experiments provide interesting conclusions. Overall, I believe the paper will spark future inquiry into data distributional properties.
I recommend the authors pay attention to Reviewer D2K5's comment about clarifying the definition of "burstiness" in the writing and making it more consistent with the experiments performed. | train | [
"RAq5Jj7XAUW",
"cGsRNvCXNKw",
"aHEj2yR3sDh",
"O49jV3rf69-",
"aruS1CmsNi",
"dbEsYMqzcJV",
"YerysSgAp3C",
"mVHiGW_khIP2",
"0dQDB7H5iX8",
"7IdtocvytDY",
"fCuIobnorb2",
"oQ6EtaGfSQh",
"FAqR025Bgfo",
"TTXMQ8I87q6",
"kC_ZtRDUX2k",
"2Zdk3a3EpS_",
"KhcjQCOOSm-",
"lVQ4ErKPTrR"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your engagement and helpful comments through this process -- our paper is now significantly clearer, thanks to your help. And yes, we do hope that this work can ignite a new chain of research going forward! We think that answering these questions is important both for understanding current m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
9,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"YerysSgAp3C",
"aHEj2yR3sDh",
"O49jV3rf69-",
"KhcjQCOOSm-",
"kC_ZtRDUX2k",
"TTXMQ8I87q6",
"mVHiGW_khIP2",
"0dQDB7H5iX8",
"lVQ4ErKPTrR",
"fCuIobnorb2",
"kC_ZtRDUX2k",
"FAqR025Bgfo",
"KhcjQCOOSm-",
"2Zdk3a3EpS_",
"nips_2022_lHj-q9BSRjF",
"nips_2022_lHj-q9BSRjF",
"nips_2022_lHj-q9BSRjF"... |
nips_2022_WUMH5xloWn | Automatic differentiation of nonsmooth iterative algorithms | Differentiation along algorithms, i.e., piggyback propagation of derivatives, is now routinely used to differentiate iterative solvers in differentiable programming. Asymptotics is well understood for many smooth problems but the nondifferentiable case is hardly considered. Is there a limiting object for nonsmooth piggyback automatic differentiation (AD)? Does it have any variational meaning and can it be used effectively in machine learning? Is there a connection with classical derivative? All these questions are addressed under appropriate contractivity conditions in the framework of conservative derivatives which has proved useful in understanding nonsmooth AD. For nonsmooth piggyback iterations, we characterize the attractor set of nonsmooth piggyback iterations as a set-valued fixed point which remains in the conservative framework. This has various consequences and in particular almost everywhere convergence of classical derivatives. Our results are illustrated on parametric convex optimization problems with forward-backward, Douglas-Rachford and Alternating Direction of Multiplier algorithms as well as the Heavy-Ball method. | Accept | The reviewers agreed that the paper has solid and novel technical contributions. Nevertheless, please consider elaborating more on the background of the techniques used in the revision, so that the paper is more self-contained. | train | [
"Y2jcv4oTqqC",
"Z2zaM65baIY",
"7tDXRRj7-zit",
"PVMji9XTYwK",
"7FNjAW-82Go",
"9-RY--2VZKU",
"q0kmzAL6YHu",
"ow-Phwz5Gk",
"0bQlpiavAJK"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for his positive feedback and his relevant questions.\n\n**A comment on the difference between the convergence speed of Piggyback iterations for the mentioned optimization algorithms is missing.**\n\nYes definitely, we will add a discussion in Section 4.1 along the following lines:\n\n- Cons... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
2
] | [
"ow-Phwz5Gk",
"0bQlpiavAJK",
"ow-Phwz5Gk",
"q0kmzAL6YHu",
"9-RY--2VZKU",
"nips_2022_WUMH5xloWn",
"nips_2022_WUMH5xloWn",
"nips_2022_WUMH5xloWn",
"nips_2022_WUMH5xloWn"
] |
nips_2022_QW98XBAqNRa | Truncated proposals for scalable and hassle-free simulation-based inference | Simulation-based inference (SBI) solves statistical inverse problems by repeatedly running a stochastic simulator and inferring posterior distributions from model-simulations. To improve simulation efficiency, several inference methods take a sequential approach and iteratively adapt the proposal distributions from which model simulations are generated. However, many of these sequential methods are difficult to use in practice, both because the resulting optimisation problems can be challenging and efficient diagnostic tools are lacking. To overcome these issues, we present Truncated Sequential Neural Posterior Estimation (TSNPE). TSNPE performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches. In addition, TSNPE allows to efficiently perform coverage tests that can scale to complex models with many parameters. We demonstrate that TSNPE performs on par with previous methods on established benchmark tasks. We then apply TSNPE to two challenging problems from neuroscience and show that TSNPE can successfully obtain the posterior distributions, whereas previous methods fail. Overall, our results demonstrate that TSNPE is an efficient, accurate, and robust inference method that can scale to challenging scientific models. | Accept | This paper received generally positive reviews, with one reviewer originally weakly backing rejection but ultimately backing acceptance after substantial discussions, and the other three confidently backing acceptance from the off. Based on the reviewer's comments and my own assessments, I see the positives and negatives of the work as follows:
Positives
- Very strong writing and presentation
- Good range of experiments
- Key ideas seem relatively simple to use and deploy
- Important problem area and seems to make decent progress on two known issues in the area (leakage and coverage tests)
Negatives
- The novelty of the work is quite limited: the core approach is arguably a special case of an approach that already exists (APT) and the paper is mostly combining known ideas rather than proposing anything especially original. Thus, though the paper does certainly have some new ideas and is potentially useful to the community, I do think it is quite incremental.
- The use of SIR is worrisome and likely to cause theoretical and occasional practical failure cases, even if these have not especially manifested in the current experiments
- The method introduces biases (from SIR and the truncation itself) that might be difficult to quantify and there is a lack of any notable theoretical guarantees (though the objective is quite clearly sound with epsilon=0 and no iteration of the truncation, this provides no guarantees for real-world settings that will have epsilon>0 and multiple rounds of truncation).
- Some of the claims about efficiency and rejection rates could have been more clearly demonstrated and explained.
On balance, I agree with the reviewers that the positives outweigh the negatives; this is well-polished and potentially useful work that will be of interest to the community. As such, I recommend acceptance.
Minor comment
- I believe that the suggestion that TSNPE “can scale to problems that were previously inaccessible to neural posterior estimation” is overclaiming given the actual experimental results: only one approach (with a single “out of the box” proposal) is actually compared to, and the results are generally qualitative rather than quantitative. Moreover, given that TSNPE can arguably be seen as a special case of APT in the first place, the claim seems distinctly unreasonable. I would thus like to see this claim dropped or at least significantly toned down.
"W2kYHyZ-hKx",
"4oGlaFmyboU",
"WgUXsiz-c7",
"nYv-XqoFXo-",
"eIQ2nzFXvr5",
"YGqa6yF7TV",
"AW97Xe9MriA",
"Z6gAB2DOITs",
"-awbGEyQ2V",
"1QKIoyI8ube",
"Dgn8uwhFFjv",
"O2kFOnEwoicV",
"Ze_sDjh27Ov",
"BrZbbyaYBfm",
"YKzs3dvoBg0",
"uz9I8Khx4An",
"wYyx75RDsiE",
"Dcq9ffwJ2Eh",
"v0WHdo6uBvo... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer for their immense efforts in reviewing our paper and are happy that the reviewer can now recommend acceptance!",
" I will reply primarily to my concern about the assumption. I think your numerical and empirical arguments are strong and go along with how truncation has been justified in exi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"4oGlaFmyboU",
"-awbGEyQ2V",
"-awbGEyQ2V",
"eIQ2nzFXvr5",
"Z6gAB2DOITs",
"Ze_sDjh27Ov",
"BrZbbyaYBfm",
"YKzs3dvoBg0",
"1QKIoyI8ube",
"v0WHdo6uBvo",
"1wqZ6A1pNB",
"uncTSFyp0rQ",
"wYyx75RDsiE",
"Dcq9ffwJ2Eh",
"nips_2022_QW98XBAqNRa",
"WWwvxu7P2I",
"SevXSBJbY9L",
"5xmwBTeSYv",
"1wqZ... |
nips_2022_M12autRxeeS | Extracting computational mechanisms from neural data using low-rank RNNs | An influential framework within systems neuroscience posits that neural computations can be understood in terms of low-dimensional dynamics in recurrent circuits. A number of methods have thus been developed to extract latent dynamical systems from neural recordings, but inferring models that are both predictive and interpretable remains a difficult challenge. Here we propose a new method called Low-rank Inference from Neural Trajectories (LINT), based on a class of low-rank recurrent neural networks (lrRNNs) for which a link between connectivity and dynamics has been previously demonstrated. By fitting such networks to trajectories of neural activity, LINT yields a mechanistic model of latent dynamics, as well as a set of axes for dimensionality reduction and verifiable predictions for inactivations of specific populations of neurons. Here, we first demonstrate the consistency of our method and apply it to two use cases: (i) we reverse-engineer "black-box" vanilla RNNs trained to perform cognitive tasks, and (ii) we infer latent dynamics and neural contributions from electrophysiological recordings of nonhuman primates performing a similar task. | Accept | Building on recent theoretical work on the dynamics of low-rank recurrent neural networks, the authors present a method called LINT for learning low-rank network models directly from data. As the reviewers point out, from a purely technical perspective, the idea is straightforward: simply optimize a low-rank parameterization of an RNN. Similar ideas have been considered under the heading of _tensorized_ neural networks [e.g. 1]. See also related works cited therein, which considers low-rank parameterizations of weight matrices [e.g. 2].
Though the technical innovation may be limited, the work makes up for it with connections to recent research in theoretical neuroscience and interesting experiments. The reviewers raise many important caveats and limitations (e.g. are these tasks really reflective of the complexity of "real world" tasks in ML and experimental neuroscience?). Overall, though, the reviewers and I think this paper offers valuable contributions. I encourage the authors to revise their manuscript in light of these thorough and constructive reviews.
[1] Novikov, A., Podoprikhin, D., Osokin, A. and Vetrov, D.P., 2015. Tensorizing neural networks. Advances in neural information processing systems, 28.
[2] Denil, M., Shakibi, B., Dinh, L., Ranzato, M.A. and De Freitas, N., 2013. Predicting parameters in deep learning. Advances in neural information processing systems, 26. | train | [
"sqLAhlsRUIB",
"AyApxb3-bRlV",
"0MjhEJj2zNsO",
"s5C9-M2SqC_",
"3-FV5SelSgT",
"No-Ifq3lOp",
"Gn-k4kcme2Y",
"-tRUYzPPDtZ",
"sgt0XRChYO",
"rrlH_jJfgV_",
"DgKRFPYA06u",
"nSXAXj4HwXT",
"wiJgKauMfkj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thoughtful responses and revision. I’m still in favor of this work, although I think my main point (3) has not really been addressed. That is, the K-back task is perhaps helpful in further confirming that the authors’ method works well. But it doesn’t give me more reason to believe t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"3-FV5SelSgT",
"DgKRFPYA06u",
"No-Ifq3lOp",
"-tRUYzPPDtZ",
"wiJgKauMfkj",
"nSXAXj4HwXT",
"DgKRFPYA06u",
"rrlH_jJfgV_",
"nips_2022_M12autRxeeS",
"nips_2022_M12autRxeeS",
"nips_2022_M12autRxeeS",
"nips_2022_M12autRxeeS",
"nips_2022_M12autRxeeS"
] |
nips_2022_nC8VC8gVGPo | Training Spiking Neural Networks with Local Tandem Learning | Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient over their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provide robustness against the notorious device non-ideality issues. It, therefore, opens up a myriad of opportunities for training and deployment of SNN on ultra-low-power mixed-signal neuromorphic computing chips. | Accept | This paper proposes a novel method of training spiking neural networks (SNNs) by matching the intermediate feature representations of SNNs with pre-trained ANNs. The method is on-chip and local, allowing SNNs to be learned directly on neuromorphic hardware.
All reviewers agreed that the problem that the paper aims to solve is important, and the proposed method is novel. During the discussion period, the authors successfully addressed the concerns of the reviewers. Therefore, I recommend acceptance.
"5J7vjXGkLvN",
"wsgFc3XF0v-",
"bLZIt6wS2Ru",
"JfhZW-mt1_zf",
"tzsuhNDeCW",
"qWFIQo3ZEDu",
"xxk2pZmHEYH",
"HKZfrmmUaDl",
"XULyGrnpl7l",
"mH-LgMtzEo",
"EkpSATo17Hg",
"p7E4NPz_vYz",
"Vl8acFyGHWN",
"c7GmH_kyLu",
"4EZnLd7biE",
"KFssSzpRGP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >As for the supplemented results, I have a question about the setting of the experiments, i.e. the dataset and the noise level.\n\nThank you very much for raising the question on the experiment setting. Due to time constraints, we only manage to run experiments on the MNIST dataset, which is much simpler than the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"JfhZW-mt1_zf",
"HKZfrmmUaDl",
"4EZnLd7biE",
"c7GmH_kyLu",
"mH-LgMtzEo",
"xxk2pZmHEYH",
"c7GmH_kyLu",
"XULyGrnpl7l",
"Vl8acFyGHWN",
"KFssSzpRGP",
"4EZnLd7biE",
"nips_2022_nC8VC8gVGPo",
"nips_2022_nC8VC8gVGPo",
"nips_2022_nC8VC8gVGPo",
"nips_2022_nC8VC8gVGPo",
"nips_2022_nC8VC8gVGPo"
] |
nips_2022_OxHn1Yz_Kl3 | Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness | One common task in many data sciences applications is to answer questions about the effect of new interventions, like: `what would happen to $Y$ if we make $X$ equal to $x$ while observing covariates $Z=z$?'. Formally, this is known as conditional effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. A plethora of methods was developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available. In this paper, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. We make the following contributions under this relaxed setting. First, we introduce a new causal calculus, which subsumes the current state-of-the-art, PAG-calculus. Second, we develop an algorithm for conditional effect identification given a PAG and prove it to be both sound and complete. In words, failure of the algorithm to identify a certain effect implies that this effect is not identifiable by any method. Third, we prove the proposed calculus to be complete for the same task. | Accept | The reviewers are all in agreement that the paper constitutes a fundamental advance in the theory of causal inference. The authors responded to the reviewers' remaining questions in a detailed way, and there is no further issue with the paper being accepted. | val | [
"VGrrc-U_qYh",
"glUFdtaLhU",
"sNb62uRXji",
"gzod7hDtZF",
"zGtfYR-_UrMW",
"2SvzT2NkHhr6",
"lyCJMrwC2NT",
"EM-k9AndGyoP",
"XGgbmyd9eKVg",
"rCfjAIGKucN",
"X_gxr0T_n9U",
"PBQrtl8m59h",
"7Ijg-5tAFU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors of the paper for their detailed reply to my questions!",
" The authors answer most of my questions/concerns and I would like to increase my score.",
" Thank you for your assessment and time. Below, we address the raised issues.\n\n1. “the natural question is the contribution ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
9,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
4
] | [
"2SvzT2NkHhr6",
"X_gxr0T_n9U",
"X_gxr0T_n9U",
"X_gxr0T_n9U",
"rCfjAIGKucN",
"rCfjAIGKucN",
"7Ijg-5tAFU",
"7Ijg-5tAFU",
"PBQrtl8m59h",
"nips_2022_OxHn1Yz_Kl3",
"nips_2022_OxHn1Yz_Kl3",
"nips_2022_OxHn1Yz_Kl3",
"nips_2022_OxHn1Yz_Kl3"
] |
nips_2022_JO9o3DgV9l2 | Shield Decentralization for Safe Multi-Agent Reinforcement Learning | Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposition of a centralized shield, allowing shields to be used in such decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields in several tasks, allowing shielding to be used in environments with decentralized training and execution for the first time. | Accept | It is agreed among reviewers that the paper should be accepted. Hope the authors can address the comments from the reviewers in the final version as promised. | train | [
"471G9AUSV9K",
"ijJIMGfaxXj",
"Xcx8pALqUQt",
"6x6XlpGL7LF",
"3ZYcsN-V6tR",
"2E4a1OHsq66",
"LeB00eP_BzL",
"a4OINaTdHuj",
"Zpkg0WblbZj",
"3wlADu2UURC"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments. We will add a note about the complexity of multi-agent centralized shield synthesis to the “Limitations” section.",
" I appreciate the comment regarding the decentralized shield permissiveness. While I do think it is an important aspect, providing an informal intuition is very helpf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ijJIMGfaxXj",
"LeB00eP_BzL",
"6x6XlpGL7LF",
"3ZYcsN-V6tR",
"3wlADu2UURC",
"Zpkg0WblbZj",
"a4OINaTdHuj",
"nips_2022_JO9o3DgV9l2",
"nips_2022_JO9o3DgV9l2",
"nips_2022_JO9o3DgV9l2"
] |
nips_2022_ONB4RdP2GX | Hardness in Markov Decision Processes: Theory and Practice | Meticulously analysing the empirical strengths and weaknesses of reinforcement learning methods in hard (challenging) environments is essential to inspire innovations and assess progress in the field. In tabular reinforcement learning, there is no well-established standard selection of environments to conduct such analysis, which is partially due to the lack of a widespread understanding of the rich theory of hardness of environments. The goal of this paper is to unlock the practical usefulness of this theory through four main contributions. First, we present a systematic survey of the theory of hardness, which also identifies promising research directions. Second, we introduce $\texttt{Colosseum}$, a pioneering package that enables empirical hardness analysis and implements a principled benchmark composed of environments that are diverse with respect to different measures of hardness. Third, we present an empirical analysis that provides new insights into computable measures. Finally, we benchmark five tabular agents in our newly proposed benchmark. While advancing the theoretical understanding of hardness in non-tabular reinforcement learning remains essential, our contributions in the tabular setting are intended as solid steps towards a principled non-tabular benchmark. Accordingly, we benchmark four agents in non-tabular versions of $\texttt{Colosseum}$ environments, obtaining results that demonstrate the generality of tabular hardness measures. | Accept | The reviewers' opinions are quite consistent towards a weak accept.
I'm not confident about the big title "Hardness in Markov Decision Processes: Theory and Practice". This paper is more like a survey + benchmark review than a research article. Neither the theory part nor the practice part is novel enough for a research article. It's a bit thin as a survey paper.
I personally tend to weak reject but I respect the reviewers' weak accept. | val | [
"cmS02ENDIxn",
"h6ezFO7qNSt",
"3f7lrVwuzls",
"pQrtK_3z1ji",
"bRkWdE1DUd4y",
"vCE0Eip_C8",
"HKSTkoETOJh",
"NMHyXl-iWRe",
"UTKPqmu8FaF",
"T40pUTIWc8f",
"nqhfSznSD-D",
"8JCuksFBSX",
"JK4gvN3kcei",
"x08qws44Q48",
"xV6IVVwFYfq",
"d-jWtRydvxc",
"F5aqWd-lNnh",
"xhVf2q_wc8",
"OvjOS8S_1A9... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for the follow-up response that has addressed all of my remaining concerns. I think including these examples and discussions in the main text or the appendix would benefit future readers for a more clear understanding of the proposed complete hardness measure.",
" Thanks for considering our response! ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"h6ezFO7qNSt",
"3f7lrVwuzls",
"UTKPqmu8FaF",
"bRkWdE1DUd4y",
"vCE0Eip_C8",
"HKSTkoETOJh",
"JK4gvN3kcei",
"nips_2022_ONB4RdP2GX",
"T40pUTIWc8f",
"fVyzK749QP-",
"8JCuksFBSX",
"JK4gvN3kcei",
"UtpuGDcSVHT",
"xV6IVVwFYfq",
"OvjOS8S_1A9",
"F5aqWd-lNnh",
"xhVf2q_wc8",
"nips_2022_ONB4RdP2G... |
nips_2022_eXggxYNbQi | On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity | Most complex machine learning and modelling techniques are prone to over-fitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in this regard and, despite having a level of implicit regularisation when trained with gradient descent, often require the aid of explicit regularisers. We introduce a new framework, Model Gradient Similarity (MGS), that (1) serves as a metric of regularisation, which can be used to monitor neural network training, (2) adds insight into how explicit regularisers, while derived from widely different principles, operate via the same mechanism underneath by increasing MGS, and (3) provides the basis for a new regularisation scheme which exhibits excellent performance, especially in challenging settings such as high levels of label noise or limited sample sizes. | Accept | This paper is controversial among the reviewers. On the positive side, reviewers like the novelty of the concept, the derivations and the clear presentation. The negative review wonders why the proposed method performs much better than dropout, similarity to Lipschitz constraints, and whether proper early stopping was used. The authors addressed some of the concerns, though the reviewer was not convinced. In the AC's opinion, the objections are sufficiently addressed and are not clear enough for rejection. One reviewer wanted to know more about the computational costs of using this technique and more discussions on the limitations.
The paper proposes a novel and interesting approach to regularization and seems to be a good contribution to the community.
"hVVf-yBooFq",
"pVlU5ol-3Zu",
"FBjkUhBsMr",
"QEcEeM8Q9aB",
"xbVvYpgeskn",
"aArb2Mx8Ck",
"fMsBE2S-b6x"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response and fixing equation (2).",
" Thank you for detailed comments and thoughtful review! Below we answer all questions/concerns that you had:\n\n### Originality\nWe have added small discussion on the use of influence functions by Koh and Percy.\n\n### Quality\n> ...the fact that one thing ... | [
-1,
-1,
-1,
-1,
7,
3,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"QEcEeM8Q9aB",
"fMsBE2S-b6x",
"aArb2Mx8Ck",
"xbVvYpgeskn",
"nips_2022_eXggxYNbQi",
"nips_2022_eXggxYNbQi",
"nips_2022_eXggxYNbQi"
] |
nips_2022_U_hOegGGglw | A Closer Look at Prototype Classifier for Few-shot Image Classification | The prototypical network is a prototype classifier based on meta-learning and is widely used for few-shot learning because it classifies unseen examples by constructing class-specific prototypes without adjusting hyper-parameters during meta-testing.
Interestingly, recent research has attracted a lot of attention, showing that training a new linear classifier, which does not use a meta-learning algorithm, performs comparably with the prototypical network.
However, the training of a new linear classifier requires the retraining of the classifier every time a new class appears.
In this paper, we analyze how a prototype classifier works equally well without training a new linear classifier or meta-learning.
We experimentally find that directly using the feature vectors, which are extracted by standard pre-trained models, to construct a prototype classifier in meta-testing does not perform as well as the prototypical network and training new linear classifiers on the feature vectors of pre-trained models.
Thus, we derive a novel generalization bound for a prototypical classifier and show that the transformation of a feature vector can improve the performance of prototype classifiers.
We experimentally investigate several normalization methods for minimizing the derived bound and find that the same performance can be obtained by using the L2 normalization and minimizing the ratio of the within-class variance to the between-class variance without training a new classifier or meta-learning. | Accept | It has been shown that linear classifier heads on top of pre-trained models can outperform meta-learning approaches. However, this is less adaptable than prototypical classifier heads and requires retraining with each new set of classes. This paper theoretically investigates the generalization of prototypical classifiers and uses this to explore several feature transformations to improve their performance. While there were concerns about the novelty over and above Cao et al., and some minor clarity issues, the reviewers were all generally in agreement that this is a useful contribution to the community and a good starting point for improving prototype classifiers from a theoretical perspective. | val | [
"1ZhCaa_aWgA",
"nnURz19tLJo",
"C0s_TDp4kcF",
"sQVropbHm_3",
"ODBQdyBpel",
"394lrsLJNO2",
"KM7CD0chfA0",
"eCnzsDeAIXo",
"oPCaJvKd31P"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the reviewers for providing such a thorough feedback. I have increased my score to weak accept.",
" ### Q5. Can t-SNE feature visualization be shown to see how feature transformation affects discriminability?\n\nWe have updated the paper and put the visualization in Appendix A.8.\n\nHoweve... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"C0s_TDp4kcF",
"oPCaJvKd31P",
"oPCaJvKd31P",
"eCnzsDeAIXo",
"eCnzsDeAIXo",
"KM7CD0chfA0",
"nips_2022_U_hOegGGglw",
"nips_2022_U_hOegGGglw",
"nips_2022_U_hOegGGglw"
] |
nips_2022__2-r5UurHp | ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation | We address the problem of incremental semantic segmentation (ISS) recognizing novel object/stuff categories continually without forgetting previous ones that have been learned. The catastrophic forgetting problem is particularly severe in ISS, since pixel-level ground-truth labels are available only for the novel categories at training time. To address the problem, regularization-based methods exploit probability calibration techniques to learn semantic information from unlabeled pixels. While such techniques are effective, there is still a lack of theoretical understanding of them. Replay-based methods propose to memorize a small set of images for previous categories. They achieve state-of-the-art performance at the cost of large memory footprint. We propose in this paper a novel ISS method, dubbed ALIFE, that provides a better compromise between accuracy and efficiency. To this end, we first show an in-depth analysis on the calibration techniques to better understand the effects on ISS. Based on this, we then introduce an adaptive logit regularizer (ALI) that enables our model to better learn new categories, while retaining knowledge for previous ones. We also present a feature replay scheme that memorizes features, instead of images directly, in order to reduce memory requirements significantly. Since a feature extractor is changed continually, memorized features should also be updated at every incremental stage. To handle this, we introduce category-specific rotation matrices updating the features for each category separately. We demonstrate the effectiveness of our approach with extensive experiments on standard ISS benchmarks, and show that our method achieves a better trade-off in terms of accuracy and efficiency. | Accept | This work deals with incremental semantic segmentation. The authors propose a three-step incremental learning approach. 
They provide an in-depth analysis of the probability calibration methods widely used for the ISS, and introduce an interesting proposal for incrementally adapting the memorized features using global alignment by rotations. They show strong results on standard benchmarks for incremental segmentation, including the ablation study.
The rebuttal provides valuable insight, and the questions raised by the reviewers have been convincingly answered by the authors.
On the whole, the reviewers converged positively; the novelty and the interest of the proposal stand out clearly.
Authors are encouraged to consider all comments for their final version.
| train | [
"SgraBMEBFv5",
"NHzjR12yKsU",
"afrd-VkfSV6",
"UmnO06xx1RM",
"nvgNZC837qM",
"BRDL97E8T_Y",
"OUsvAO3OHI6",
"9kOv11eMGnL",
"Ewp_vXCATs8",
"eSvaRlYFdAu",
"RrYHO04HNPu",
"qKcX1-FH6pO",
"jqF4J6VXohk",
"Dcf21AkZoK8",
"ipm2baEcZnO",
"7fqdGm1-8fQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing detailed answers to my questions and supplementary experiments. Here are few comments on your replies.\n\n[Incremental work] Please don't be offended by this statement. By incremental I only meant that your work, which is done seriously, builds on existing concepts (improvement of distilla... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"9kOv11eMGnL",
"qKcX1-FH6pO",
"BRDL97E8T_Y",
"nvgNZC837qM",
"jqF4J6VXohk",
"jqF4J6VXohk",
"Dcf21AkZoK8",
"Dcf21AkZoK8",
"ipm2baEcZnO",
"7fqdGm1-8fQ",
"7fqdGm1-8fQ",
"7fqdGm1-8fQ",
"nips_2022__2-r5UurHp",
"nips_2022__2-r5UurHp",
"nips_2022__2-r5UurHp",
"nips_2022__2-r5UurHp"
] |
nips_2022_RQ385yD9dqR | When are Local Queries Useful for Robust Learning? | Distributional assumptions have been shown to be necessary for the robust learnability of concept classes when considering the exact-in-the-ball robust risk and access to random examples by Gourdeau et al. (2019). In this paper, we study learning models where the learner is given more power through the use of local queries, and give the first distribution-free algorithms that perform robust empirical risk minimization (ERM) for this notion of robustness. The first learning model we consider uses local membership queries (LMQ), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, LMQs do not increase the robustness threshold of conjunctions and any superclass, e.g., decision lists and halfspaces. Faced with this negative result, we introduce the local equivalence query (LEQ) oracle, which returns whether the hypothesis and target concept agree in the perturbation region around a point in the training sample, as well as a counterexample if it exists. We show a separation result: on one hand, if the query radius $\lambda$ is strictly smaller than the adversary's perturbation budget $\rho$, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting $\lambda=\rho$ allows us to develop robust ERM algorithms. We then bound the query complexity of these algorithms based on online learning guarantees and further improve these bounds for the special case of conjunctions. We finish by giving robust learning algorithms for halfspaces with margins on both $\{0,1\}^n$ and $\mathbb{R}^n$. | Accept | Solid contribution to NTK theory | train | [
"L4Sjj_fmxvK",
"L3LqbiRFPa",
"iJSgUjpjZKl",
"nVDv8EwlY6W",
"CCyaAjMVS-Y",
"DOe0k-pu4wq",
"9E-LIXoXfUT",
"gmuRqD-RbE2",
"ADwOdiJwsb",
"CIXvv6V2wlJ",
"OMON6HmkTei"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We reply to the remaining concerns below. Please let us know if you have any questions. \n\n**For the LEQ**, if one believes that the exact-in-the-ball notion of robust risk is worth investigating, then our lower bound for the LMQ model and our impossibility result for $\\lambda<\\rho$-LEQ give ample justificatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"iJSgUjpjZKl",
"iJSgUjpjZKl",
"DOe0k-pu4wq",
"CCyaAjMVS-Y",
"gmuRqD-RbE2",
"OMON6HmkTei",
"CIXvv6V2wlJ",
"ADwOdiJwsb",
"nips_2022_RQ385yD9dqR",
"nips_2022_RQ385yD9dqR",
"nips_2022_RQ385yD9dqR"
] |
nips_2022_vGQiU5sqUe3 | Contrastive Learning as Goal-Conditioned Reinforcement Learning | In reinforcement learning (RL), it is easier to solve a task if given a good representation. While deep RL should automatically acquire such good representations, prior work often finds that learning representations in an end-to-end fashion is unstable and instead equip RL algorithms with additional representation learning parts (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations? In this paper, instead of adding representation learning parts to an existing RL algorithm, we show (contrastive) representation learning methods are already RL algorithms in their own right. To do this, we build upon prior work and apply contrastive representation learning to action-labeled trajectories, in such a way that the (inner product of) learned representations exactly corresponds to a goal-conditioned value function. We use this idea to reinterpret a prior RL method as performing contrastive learning, and then use the idea to propose a much simpler method that achieves similar performance. Across a range of goal-conditioned RL tasks, we demonstrate that contrastive RL methods achieve higher success rates than prior non-contrastive methods. We also show that contrastive RL outperforms prior methods on image-based tasks, without using data augmentation or auxiliary objectives | Accept | How to design RL algorithms that directly acquire good representations? This paper gives an answer that contrastive representation learning can be cast as a goal-conditioned RL using the inner product of learned representations.
The technical novelty of this paper is sound, with thorough theoretical motivation of the proposed method and solid experiments. The presentation of this paper is also satisfactory.
All the reviewers provided positive feedback on this paper. I also enjoyed reading this paper.
| train | [
"qrd0-LPD2QS",
"R7omj7GTqGg",
"bium8946y4w",
"C0eDvQWAep",
"wMgFP9Uv3U",
"jm-7hNeVKma",
"hfYW1BHSJ4Q",
"8DXt9JkU3l",
"CpibpnJH8Dl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their detailed response and for running the additional experiments. I find the additional experiment and video with the hand-tracking camera interesting, thank you for running that.",
" I appreciate the authors taking the time to respond to my comments and making appropriat... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"C0eDvQWAep",
"jm-7hNeVKma",
"wMgFP9Uv3U",
"CpibpnJH8Dl",
"8DXt9JkU3l",
"hfYW1BHSJ4Q",
"nips_2022_vGQiU5sqUe3",
"nips_2022_vGQiU5sqUe3",
"nips_2022_vGQiU5sqUe3"
] |
nips_2022_fSfcEYQP_qc | A Neural Corpus Indexer for Document Retrieval | Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to be directly optimized for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrated the superiority of NCI on two commonly used academic benchmarks, achieving +17.6% and +16.8% relative enhancement for Recall@1 on NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to the best baseline method. | Accept | This paper proposes a new framework for neural IR: given query, directly predict a document ID. The document IDs are obtained by hierarchical clustering of documents beforehand. This is a novel formulation of the problem, and is very distinct from current two-stage methods that have a high-recall sparse retrieval stage, followed by a high-precision neural reranker, or approximate nearest neighbor methods that encode both documents and queries as vectors. The paper is fairly well-written, the authors have addressed reviewers concerns with honest detailed feedback, and have made their code available to facilitate experimentation. The results are particularly strong compared to more traditional BM25-based models and competing neural approaches like DSI. I anticipate there will be much follow-on work and eventually a paradigm shift in neural IR. | train | [
"46EG1rMC16",
"3v8uKUbaVDG",
"_niAOyJll-A",
"Ie9pl5Lvt6H",
"9iHltuKlxAL",
"q8ObgHbsAds",
"ZFdyQa4Tae2",
"Yz8jrsogSL",
"5T633xe-7i3",
"xSA8ciX29e1"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the answers. Most of my questions are answered and I support for the acceptance of the paper.",
" Dear reviewers, \n\nThank you for taking time in reading our paper and providing valuable comments. We briefly address common questions here. \n \n1. Regarding the concern that query generation dominate... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
3
] | [
"q8ObgHbsAds",
"nips_2022_fSfcEYQP_qc",
"xSA8ciX29e1",
"5T633xe-7i3",
"Yz8jrsogSL",
"ZFdyQa4Tae2",
"nips_2022_fSfcEYQP_qc",
"nips_2022_fSfcEYQP_qc",
"nips_2022_fSfcEYQP_qc",
"nips_2022_fSfcEYQP_qc"
] |
nips_2022_USoYIT4IQz | Invertible Monotone Operators for Normalizing Flows | Normalizing flows model probability distributions by learning invertible transformations that transfer a simple distribution into complex distributions. Since the architecture of ResNet-based normalizing flows is more flexible than that of coupling-based models, ResNet-based normalizing flows have been widely studied in recent years. Despite their architectural flexibility, it is well-known that the current ResNet-based models suffer from constrained Lipschitz constants. In this paper, we propose the monotone formulation to overcome the issue of the Lipschitz constants using monotone operators and provide an in-depth theoretical analysis. Furthermore, we construct an activation function called Concatenated Pila (CPila) to improve gradient flow. The resulting model, Monotone Flows, exhibits an excellent performance on multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32, ImageNet64). Code is available at https://github.com/mlvlab/MonotoneFlows. | Accept | The paper proposes a new type of ResNet-based Normalizing Flows that, unlike previous versions of these flows, do not require the Lipschitz constant of each layer to be less than 1. The authors use monotone operators, which they show to be strictly more expressive and propose a new activation function called Concatenated Pila (CPila). The method is evaluated on toy datasets as well as standard image datasets, outperforming the baseline i-DenseNet model.
Strengths:
1 - Well written and clear paper.
2 - Originality of the monotone formulation.
3 - Improvements are small, but consistent across multiple settings.
4 - Theoretical analysis of expressive power.
5 - Ablation experiments to justify activation function.
Weaknesses:
- No significant weaknesses are mentioned by the reviewers.
Decision:
All the reviewers agree on acceptance, indicating that this is a strong paper. I encourage the authors to use the feedback provided by the reviewers to improve the paper for its camera-ready version. | train | [
"CtGgEqxWF2J",
"ksppUzbRjHL",
"-U4KPktD-v",
"2A65hmkWyAK",
"UaY0d0GpBnc",
"h0P5w9vY-jx",
"lK6AkAK4dK_",
"F0aYdHk5z3P",
"E_35onA-aTY",
"AOAlSsurbx-",
"yJDA-iu5cyi"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply. I'm happy with all the answers.\n\nI would suggest to incorporate the reply to Q4 into the paper (i.e. that the notation is overloaded and that $F(x) = \\\\{y \\; | \\; (x, y) \\in F\\\\}$). This would help to avoid confusing authors without a background in monotone operator theory.\n\nA... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"lK6AkAK4dK_",
"UaY0d0GpBnc",
"nips_2022_USoYIT4IQz",
"yJDA-iu5cyi",
"AOAlSsurbx-",
"E_35onA-aTY",
"F0aYdHk5z3P",
"nips_2022_USoYIT4IQz",
"nips_2022_USoYIT4IQz",
"nips_2022_USoYIT4IQz",
"nips_2022_USoYIT4IQz"
] |
nips_2022_Ya9lATuQ3gg | Large-Scale Retrieval for Reinforcement Learning | Effective decision making involves flexibly relating past experiences and relevant contextual information to a novel situation. In deep reinforcement learning (RL), the dominant paradigm is for an agent to amortise information that helps decision-making into its network weights via gradient descent on training losses. Here, we pursue an alternative approach in which agents can utilise large-scale context-sensitive database lookups to support their parametric computations. This allows agents to directly learn in an end-to-end manner to utilise relevant information to inform their outputs. In addition, new information can be attended to by the agent, without retraining, by simply augmenting the retrieval dataset. We study this approach for offline RL in 9x9 Go, a challenging game for which the vast combinatorial state space privileges generalisation over direct matching to past experiences. We leverage fast, approximate nearest neighbor techniques in order to retrieve relevant data from a set of tens of millions of expert demonstration states. Attending to this information provides a significant boost to prediction accuracy and game-play performance over simply using these demonstrations as training trajectories, providing a compelling demonstration of the value of large-scale retrieval in offline RL agents. | Accept | This paper uses nearest neighbor methods to retrieve and exploit information from similar games during planning, whilst playing the game of go (although the method is extensible to other environments which support muzero-style agents). The reviewers found this approach interesting and ultimately worth publishing, although there was a range of scores. However, the emerging consensus during the review phase seemed to lean towards acceptance, a recommendation I am happy to support from having read the discussion. | val | [
"fhyj8wqrppS",
"_L5TOR5QWxd",
"QbCWNEUSwIm",
"6F0V7ID6Hcf",
"6Bs87cX8FVt",
"p2mG3uYsXpy",
"kTLJS81kISt",
"JZstx1fY-G",
"oxmQWYza7t",
"FNgcnFVhKOT",
"iMPS0Xkay-Y",
"GR6SvKdgZQW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments. We continue to strongly believe that Go is if anything a particularly challenging domain for Offline RL. We note that the existence of superhuman Atari agents does not preclude studying offline RL in Atari (as in the RL Unplugged suite).",
" Thank you for your helpful responses! Aft... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"6F0V7ID6Hcf",
"kTLJS81kISt",
"6Bs87cX8FVt",
"6Bs87cX8FVt",
"GR6SvKdgZQW",
"FNgcnFVhKOT",
"iMPS0Xkay-Y",
"oxmQWYza7t",
"nips_2022_Ya9lATuQ3gg",
"nips_2022_Ya9lATuQ3gg",
"nips_2022_Ya9lATuQ3gg",
"nips_2022_Ya9lATuQ3gg"
] |
nips_2022_88_wNI6ZBDZ | Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach | Deep neural networks often suffer from poor generalization caused by complex and non-convex loss landscapes. One of the popular solutions is Sharpness-Aware Minimization (SAM), which smooths the loss landscape via minimizing the maximized change of training loss when adding a perturbation to the weight. However, we find the indiscriminate perturbation of SAM on all parameters is suboptimal, which also results in excessive computation,~\emph{i.e.}, double the overhead of common optimizers like Stochastic Gradient Descent~(SGD). In this paper, we propose an efficient and effective training scheme coined as Sparse SAM (SSAM), which achieves sparse perturbation by a binary mask. To obtain the sparse mask, we provide two solutions which are based on Fisher information and dynamic sparse training, respectively. In addition, we theoretically prove that SSAM can converge at the same rate as SAM,~\emph{i.e.}, $O(\log T/\sqrt{T})$. Sparse SAM not only has the potential for training acceleration but also smooths the loss landscape effectively. Extensive experimental results on CIFAR10, CIFAR100, and ImageNet-1K confirm the superior efficiency of our method to SAM, and the performance is preserved or even better with a perturbation of merely 50\% sparsity. Code is available at \url{https://github.com/Mi-Peng/Sparse-Sharpness-Aware-Minimization}. | Accept | This paper proposes a new scheme to improve computational efficiency of the SAM optimizer. The original SAM requires assessing the loss value at a perturbed point. The perturbation lives in the full parameter space. This paper shows that computing the perturbation in every direction is not necessary. By only selecting a sparse subset of the parameters to undergo a perturbation, the authors are able to maintain the original generalization performance of SAM while significantly reducing the number of FLOPS. 
The paper also presents a convergence analysis for the proposed scheme.
There was an active discussion between authors and reviewers, and the authors provided a very thorough response to the questions and issues raised by the reviewers. As a result of this, multiple reviewers increased the original score. In the end, 3 out of 4 reviews are on the positive side, and the remaining one is a borderline reject.
In concordance with the majority of the reviewers, I believe improving the computational cost of SAM is of huge practical interest, and this paper is a step in this direction. I recommend accept. As the thorough reply from the authors contains interesting details, I strongly suggest that the authors include these details in their submission (maybe in the supplementary material?)
| train | [
"f_qElCoEags",
"k6vzSXUG9jS",
"WY11JyKGsQc",
"IcJzL5QfML9",
"lVBc-t7FBvk",
"HtUS3R_usz",
"OPFozuHC_O",
"I8ASUvzgB9",
"BAFHKJ2ETfY",
"KgyuYnUVWTC",
"EDVGJaecPsbP",
"DSj5YCtertH",
"2gTU92j7M3q",
"kterOJSghnSD",
"m5MYUSlLR1I",
"oHs-ctrB0m",
"pH0s9GyVkPx",
"HSyTEc5YM76",
"ed-lcCrSo_4... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your reply. Your suggestions make our paper better. Using Pytorch API \"requires_grad\", we extend our SSAM-F into Block-wise SSAM-F, i.e., the unperturbed weights do not compute the gradient and save the training time. The Block-wise SSAM-F achieves the comparable performance on CIFAR and the tr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"k6vzSXUG9jS",
"HtUS3R_usz",
"oHs-ctrB0m",
"HSyTEc5YM76",
"ed-lcCrSo_4",
"nips_2022_88_wNI6ZBDZ",
"kterOJSghnSD",
"oHs-ctrB0m",
"oHs-ctrB0m",
"oHs-ctrB0m",
"ed-lcCrSo_4",
"ed-lcCrSo_4",
"pH0s9GyVkPx",
"pH0s9GyVkPx",
"HSyTEc5YM76",
"nips_2022_88_wNI6ZBDZ",
"nips_2022_88_wNI6ZBDZ",
"... |
nips_2022_bk8vkdQfBS | Explainability Via Causal Self-Talk | Explaining the behavior of AI systems is an important problem that, in practice, is generally avoided. While the XAI community has been developing an abundance of techniques, most incur a set of costs that the wider deep learning community has been unwilling to pay in most situations. We take a pragmatic view of the issue, and define a set of desiderata that capture both the ambitions of XAI and the practical constraints of deep learning. We describe an effective way to satisfy all the desiderata: train the AI system to build a causal model of itself. We develop an instance of this solution for Deep RL agents: Causal Self-Talk. CST operates by training the agent to communicate with itself across time. We implement this method in a simulated 3D environment, and show how it enables agents to generate faithful and semantically-meaningful explanations of their own behavior. Beyond explanations, we also demonstrate that these learned models provide new ways of building semantic control interfaces to AI systems. | Accept | This paper proposes causal self-talk (CST) as a means to obtain more explainable AI systems. The work lists a set of desiderata for explainable AI and argues that CST satisfies this set. The paper is well written and the experimental results, although in a toy setting called "DaxDucks" are reasonably convincing. The reviewers lean towards acceptance, with Reviewer g2gH in particular strongly advocating for the work.
I like that this approach is generally applicable in principle, and I strongly dislike that it is only showcased on one (non-open source, toy) task. The other qualm I have is that the introduction reads way too much as if the authors came up with this all on their own, and I strongly encourage them to pay proper respect to prior work in this field, rather than stashing all that work away as a long enumeration in the related work section, which most people will gloss over. | train | [
"zbh7f02RCC3",
"NTU-RQd_TyD",
"v7fbbTP-Et",
"dO929dyN49B",
"kwHSnNenLeJI",
"6m5OVO4wUUT",
"EdHzQPPaRxJ",
"ITqJnNnzwBr",
"EAZr21q_-f1",
"CpAeWbveU4C",
"ddiv2XW_cvJ",
"OyJBosXDyHs",
"eO8YOBTFHY",
"GhUF4ugzR1g",
"N8dgj6jjHNL"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As promised, we have amended our previous submission to return the page count to 9 pages. This has involved merely minor changes to the text.",
" Thanks the authors for their clarifications. I will keep my original ratings.",
" Thank you authors for engaging with my comments and taking steps to improve the cl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"ddiv2XW_cvJ",
"EAZr21q_-f1",
"CpAeWbveU4C",
"ITqJnNnzwBr",
"6m5OVO4wUUT",
"N8dgj6jjHNL",
"GhUF4ugzR1g",
"GhUF4ugzR1g",
"eO8YOBTFHY",
"OyJBosXDyHs",
"nips_2022_bk8vkdQfBS",
"nips_2022_bk8vkdQfBS",
"nips_2022_bk8vkdQfBS",
"nips_2022_bk8vkdQfBS",
"nips_2022_bk8vkdQfBS"
] |
nips_2022_t0VbBTw-o8 | Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks | Randomized smoothing is one of the most promising frameworks for certifying the adversarial robustness of machine learning models, including Graph Neural Networks (GNNs). Yet, existing randomized smoothing certificates for GNNs are overly pessimistic since they treat the model as a black box, ignoring the underlying architecture. To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes. Compared to existing certificates, we certify robustness to much stronger adversaries that control entire nodes in the graph and can arbitrarily manipulate node features. Our certificates provide stronger guarantees for attacks at larger distances, as messages from farther-away nodes are more likely to get intercepted. We demonstrate the effectiveness of our method on various models and datasets. Since our gray-box certificates consider the underlying graph structure, we can significantly improve certifiable robustness by applying graph sparsification. | Accept | The paper proposes a novel approach to certify the robustness of graph neural networks via randomized smoothing. It does so by treating the networks as ``gray-box'' models and leveraging message passing routines. This yields an improved lower bound on probabilistic certification.
All reviewers recognized the technical quality of the work and its novel perspective on adversarial robustness for graph neural networks. Some concerns regarding the probabilistic certification and the experimental details have been successfully addressed during the rebuttal.
The paper is recommended for acceptance, conditioned on the inclusion of the additional experiments and discussions arisen in the rebuttal. | train | [
"kdNaPYdZq6c",
"z5gUZozcBRK",
"cKwU-NdVEh",
"_2wNvAIfSDk",
"ST1r1n7csHK",
"QudunzAfTM",
"Ztum2Og--2j",
"eCVtujtxj-M",
"CZpoAj7MyPd",
"XvyO-xPQO0W",
"xz_WM8EiKk5",
"JiRxqFGj19Y",
"ik8kYZ_P3dP",
"lFropGBgXpC",
"IVszeRwJNT",
"IhFLWBkPgr0"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed explanations from the author. My questions are solved and I would raise my score.",
" Thank you for the clarification and updating the paper. I am satisfied with the response. Therefore, I decided to keep my support of acceptance of this paper.",
" Thank you for your response!\n\nAl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"eCVtujtxj-M",
"_2wNvAIfSDk",
"ST1r1n7csHK",
"QudunzAfTM",
"XvyO-xPQO0W",
"xz_WM8EiKk5",
"CZpoAj7MyPd",
"IhFLWBkPgr0",
"IVszeRwJNT",
"lFropGBgXpC",
"ik8kYZ_P3dP",
"nips_2022_t0VbBTw-o8",
"nips_2022_t0VbBTw-o8",
"nips_2022_t0VbBTw-o8",
"nips_2022_t0VbBTw-o8",
"nips_2022_t0VbBTw-o8"
] |
nips_2022_djOANbV2zSu | UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs | We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most works in the literature that use GANs to generate out-of-distribution (OoD) examples only focus on the evaluation of OoD detection, we present a GAN based approach to learn a classifier that produces proper uncertainties for OoD examples as well as for false positives (FPs). Instead of shielding the entire in-distribution data with GAN generated OoD examples which is state-of-the-art, we shield each class separately with out-of-class examples generated by a conditional GAN and complement this with a one-vs-all image classifier. In our experiments, in particular on CIFAR10, CIFAR100 and Tiny ImageNet, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers. Furthermore, we also find that the generated GAN examples do not significantly affect the calibration error of our classifier and result in a significant gain in model accuracy. | Accept | The authors propose a new approach for training image classifiers with complete uncertainty quantification based on generative adversarial networks. The main idea is to use GANs to "shield" each class separately from the out-of-class (OoC) regime. This is done in combination with a one-vs-all classifier in the final DNN layer trained jointly with a class-conditional generator for out-of-class data in an adversarial framework. Finally, these classifiers are then used to model class conditional likelihoods. The empirical validation shows improved OoD detection and FP detection performance when compared to SOTA in this setting.
The reviewers appreciated the clarity of exposition and the positioning with respect to the related works. The unified approach applicable both to FP detection and OoD detection was deemed novel. On the negative side, the method seems to be extremely involved in terms of the required architectural pieces, distinction between low-dim and high-dim settings, primarily low-resolution data used for evaluation, and the number of hyperparameters. During the discussion the authors addressed the main questions raised by the reviewers. Nevertheless, given that all of the reviewers are leaning positive, I'll recommend the acceptance of this work. Please do a full pass in terms of formatting of the whole manuscript, including removing inline tables and figures, removing things like double parenthesis, bolding specific letters (e.g. L247), clarify the flow of information in figure 1 so that one can grasp the high-level overview of the algorithm, and incorporate the remaining points raised during the discussion. | train | [
"LysUvSOUmwy",
"0TeK_QGlfPi",
"LBgploJ5QkZ",
"GVLDtijmSqp",
"hfKN5IJ6oE",
"r4bRgx9aXkw",
"Li7tyqogcP",
"rZqYATMYyLj",
"CwoCIOVEE-v",
"XXWfjMpFsLQ",
"vWJUxLW3hE",
"NCivu_8FmQm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback, which resolves most of my concerns. My rating remains unchanged.",
" Thanks for the response. My questions have been answered, and the additional experiments do reinforce that the setting is likely to be robust. My ratings remain unchanged.",
" Thanks for the response. \nMost of m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
4
] | [
"hfKN5IJ6oE",
"r4bRgx9aXkw",
"rZqYATMYyLj",
"Li7tyqogcP",
"NCivu_8FmQm",
"vWJUxLW3hE",
"XXWfjMpFsLQ",
"CwoCIOVEE-v",
"nips_2022_djOANbV2zSu",
"nips_2022_djOANbV2zSu",
"nips_2022_djOANbV2zSu",
"nips_2022_djOANbV2zSu"
] |
nips_2022_WOuGTb9QswS | Oscillatory Tracking of Continuous Attractor Neural Networks Account for Phase Precession and Procession of Hippocampal Place Cells | Hippocampal place cells of freely moving rodents display an intriguing temporal organization in their responses known as `theta phase precession', in which individual neurons fire at progressively earlier phases in successive theta cycles as the animal traverses the place fields. Recent experimental studies found that in addition to phase precession, many place cells also exhibit accompanied phase procession, but the underlying neural mechanism remains unclear. Here, we propose a neural circuit model to elucidate the generation of both kinds of phase shift in place cells' firing. Specifically, we consider a continuous attractor neural network (CANN) with feedback inhibition, which is inspired by the reciprocal interaction between the hippocampus and the medial septum. The feedback inhibition induces intrinsic mobility of the CANN which competes with the extrinsic mobility arising from the external drive. Their interplay generates an oscillatory tracking state, that is, the network bump state (resembling the decoded virtual position of the animal) sweeps back and forth around the external moving input (resembling the physical position of the animal). We show that this oscillatory tracking naturally explains the forward and backward sweeps of the decoded position during the animal's locomotion. At the single neuron level, the forward and backward sweeps account for, respectively, theta phase precession and procession. Furthermore, by tuning the feedback inhibition strength, we also explain the emergence of bimodal cells and unimodal cells, with the former having co-existed phase precession and procession, and the latter having only significant phase precession. We hope that this study facilitates our understanding of hippocampal temporal coding and lays foundation for unveiling their computational functions. 
| Accept | This paper presents non-trivial and novel theoretical and computational modeling that accounts for experimentally observed phenomena: the theta phase procession and precession. These phenomena are implicated in the neural representation and learning of neuronal networks involving hippocampus. Although the current manuscript does not address the learning and the presentations are limited to a linear track environment, it represents a clear advance in linking spatial and temporal information representations by extending the standard continuous attractor models that do not exhibit phase procession/precession. Furthermore, the model elegantly explains the biologically observed unimodal and bimodal cells. The authors are encouraged to improve the clarity of some parts of the writing.
| train | [
"OHqZqhVrR1a",
"tiZnl1YiphN",
"f0UjaT07PRD",
"ZAi8_pGYYG0x",
"P6__NJpkfg",
"bOXVvJFNvr-",
"fPKMYyWUKhg",
"HJGQG2DMad"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear All:\n\nWe appreciate the reviewers for their questions/comments and their time and effort in reviewing the paper. Since we haven't got any feedback from the reviewers (the deadline is Aug 09 '22 08:00 PM UTC), we are wondering if the updates to the manuscript and replies to the corresponding questions resol... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_WOuGTb9QswS",
"f0UjaT07PRD",
"HJGQG2DMad",
"fPKMYyWUKhg",
"bOXVvJFNvr-",
"nips_2022_WOuGTb9QswS",
"nips_2022_WOuGTb9QswS",
"nips_2022_WOuGTb9QswS"
] |
nips_2022_gd7ZI0X7Q-h | ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler | Numerical reasoning over text is a challenging task of Artificial Intelligence (AI), requiring reading comprehension and numerical reasoning abilities. Previous approaches use numerical reasoning programs to represent the reasoning process. However, most works do not separate the generation of operators and operands, which are key components of a numerical reasoning program, thus limiting their ability to generate such programs for complicated tasks. In this paper, we introduce the numEricaL reASoning with adapTive symbolIc Compiler (ELASTIC) model, which is constituted of the RoBERTa as the Encoder and a Compiler with four modules: Reasoning Manager, Operator Generator, Operands Generator, and Memory Register. ELASTIC is robust when conducting complicated reasoning. Also, it is domain agnostic by supporting the expansion of diverse operators without caring about the number of operands it contains. Experiments show that ELASTIC achieves 68.96 and 65.21 of execution accuracy and program accuracy on the FinQA dataset and 83.00 program accuracy on the MathQA dataset, outperforming previous state-of-the-art models significantly. | Accept | The paper presents a new model called Numerical Reasoning with Adaptive Symbolic Compiler that is able to perform numerical reasoning tasks. One of the key ideas in this model is to separate the generation of operators and operands and to include a memory register to remember intermediate values. The method is compared against FinQANet and shows good results on both the MathQA and FinQA datasets. In the reviews and rebuttal, it emerged that the performance of this model is marginally lower than the performance of state-of-the-art large language models on this task; however, the difference in size and training resources required by this model compared with the LLM is so vast that this still represents a significant advance over the state of the art. 
It would nevertheless be good for this to be mentioned in the paper itself. Overall, this is a novel approach that advances an important problem. | train | [
"-s7DWja2kNO",
"-LorF9aSAlb",
"YWLv9tgEibp",
"CmrLWSAEoWe",
"52p1szSO7gz",
"Xe9UFGW1s2q",
"1zIHA2raYr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The writing has been improved in the revised version and most of the questions have been resolved. Although the authors' response still does not convince me of an excellent technical novelty, it's reasonable to raise the overall rating (borderline accept).",
" We thank the R_2nDD for providing two references a... | [
-1,
-1,
-1,
-1,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"YWLv9tgEibp",
"1zIHA2raYr",
"Xe9UFGW1s2q",
"52p1szSO7gz",
"nips_2022_gd7ZI0X7Q-h",
"nips_2022_gd7ZI0X7Q-h",
"nips_2022_gd7ZI0X7Q-h"
] |
nips_2022_Y0Bm5tL92lg | Adaptation Accelerating Sampling-based Bayesian Inference in Attractor Neural Networks | The brain performs probabilistic Bayesian inference to interpret the external world. The sampling-based view assumes that the brain represents the stimulus posterior distribution via samples of stochastic neuronal responses. Although the idea of sampling-based inference is appealing, it faces a critical challenge of whether stochastic sampling is fast enough to match the rapid computation of the brain. In this study, we explore how latent stimulus sampling can be accelerated in neural circuits. Specifically, we consider a canonical neural circuit model called continuous attractor neural networks (CANNs) and investigate how sampling-based inference of latent continuous variables is accelerated in CANNs. Intriguingly, we find that by including noisy adaptation in the neuronal dynamics, the CANN is able to speed up the sampling process significantly. We theoretically derive that the CANN with noisy adaptation implements the efficient sampling method called Hamiltonian dynamics with friction, where noisy adaptation effectively plays the role of momentum. We theoretically analyze the sampling performances of the network and derive the condition when the acceleration has the maximum effect. Simulation results confirm our theoretical analyses. We further extend the model to coupled CANNs and demonstrate that noisy adaptation accelerates the sampling of the posterior distribution of multivariate stimuli. We hope that this study enhances our understanding of how Bayesian inference is realized in the brain. | Accept | The reviewers agree that the paper makes an interesting contribution, connecting inference in probabilistic models with network models from computational neuroscience. | test | [
"RflrZneAuD",
"jTe45k97nY",
"2BFz2RINkmX",
"35nV-RCFgiy",
"bbfrUEslFNM",
"nbh53ZrFpaG",
"wrz-BXo5bXC",
"soV-imBAEMC",
"fNYOruOzvau",
"pO13zsDeDVU",
"lB_dTpnDShW",
"rwZez-Qg4Yr",
"6J2TvEYPutq",
"0C2KfUkk8sN",
"lXVInxUzEPW",
"FFEY-x_ia9Q",
"wcpRF9YAnJV"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the comments. Yes, we will add the points on the extensions to multimodal distributions and experimental predictions in the revised manuscript.",
" Thank you for the detailed responses to my questions. After taking these into consideration, I have increased my rating by 1 point.\nIt wo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
4
] | [
"jTe45k97nY",
"lXVInxUzEPW",
"FFEY-x_ia9Q",
"lXVInxUzEPW",
"nbh53ZrFpaG",
"6J2TvEYPutq",
"wcpRF9YAnJV",
"FFEY-x_ia9Q",
"FFEY-x_ia9Q",
"lXVInxUzEPW",
"lXVInxUzEPW",
"0C2KfUkk8sN",
"0C2KfUkk8sN",
"nips_2022_Y0Bm5tL92lg",
"nips_2022_Y0Bm5tL92lg",
"nips_2022_Y0Bm5tL92lg",
"nips_2022_Y0Bm... |
nips_2022_hYx-xr1wdo | Subspace clustering in high-dimensions: Phase transitions \& Statistical-to-Computational gap | A simple model to study subspace clustering is the high-dimensional $k$-Gaussian mixture model where the cluster means are sparse vectors. Here we provide an exact asymptotic characterization of the statistically optimal reconstruction error in this model in the high-dimensional regime with extensive sparsity, i.e. when the fraction of non-zero components of the cluster means $\rho$, as well as the ratio $\alpha$ between the number of samples and the dimension are fixed, while the dimension diverges. We identify the information-theoretic threshold below which obtaining a positive correlation with the true cluster means is statistically impossible. Additionally, we investigate the performance of the approximate message passing (AMP) algorithm analyzed via its state evolution, which is conjectured to be optimal among polynomial algorithms for this task. We identify in particular the existence of a statistical-to-computational gap between the algorithm that requires a signal-to-noise ratio $\lambda_{\text{alg}} \ge k / \sqrt{\alpha}$ to perform better than random, and the information theoretic threshold at $\lambda_{\text{it}} \approx \sqrt{-k \rho \log{\rho}} / \sqrt{\alpha}$. Finally, we discuss the case of sub-extensive sparsity $\rho$ by comparing the performance of the AMP with other sparsity-enhancing algorithms, such as sparse-PCA and diagonal thresholding. | Accept | The reviewers appreciate the solid theoretical results concerning the statistical-computational tradeoff in subspace clustering. The exact asymptotics and clear presentation make the paper stand out. Therefore, I recommend acceptance. Meanwhile, please carefully revise the paper according to the reviews to highlight its strengths and originality. | train | [
"T5uDZmw7KOh",
"EqGlM7m3YV-",
"Xqevl_M01xC",
"37j7DYFdl9f",
"ILCp1dcPaCg",
"phnG4XgHFD",
"8Mo8n9Mp4S",
"pq77t80jMx",
"ZoUlENfju0s",
"tYfFZIryCrz",
"vABRFKQbDl",
"9SPuGlBiKid",
"j1rU9DOj7y_",
"U1O-q2-pMvV",
"qC8sIpmDbIw",
"ybPQRBgsj1gP",
"ZKhLrqvsS10",
"3_2I4mesXQ",
"sQfH4tGo9Ju",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you for the response. ",
" All of my concerns have been addressed by the authors. Based on the comments of the other reviewers and the authors' responses. I'd like to keep the same rating as in my previous review: 8: Strong Accept.",
" Thanks for the rapid reply! Your note clears up my misunderstanding.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4,
3
] | [
"vABRFKQbDl",
"tYfFZIryCrz",
"37j7DYFdl9f",
"phnG4XgHFD",
"8Mo8n9Mp4S",
"pq77t80jMx",
"j1rU9DOj7y_",
"ZoUlENfju0s",
"qC8sIpmDbIw",
"LSEjJ5hnRpc",
"CZwr5R3TerX",
"j1rU9DOj7y_",
"sQfH4tGo9Ju",
"3_2I4mesXQ",
"ybPQRBgsj1gP",
"ZKhLrqvsS10",
"nips_2022_hYx-xr1wdo",
"nips_2022_hYx-xr1wdo"... |
nips_2022_K-A4tDJ6HHf | Diagnosing failures of fairness transfer across distribution shift in real-world medical settings | Diagnosing and mitigating changes in model fairness under distribution shift is an important component of the safe deployment of machine learning in healthcare settings. Importantly, the success of any mitigation strategy strongly depends on the \textit{structure} of the shift. Despite this, there has been little discussion of how to empirically assess the structure of a distribution shift that one is encountering in practice. In this work, we adopt a causal framing to motivate conditional independence tests as a key tool for characterizing distribution shifts. Using our approach in two medical applications, we show that this knowledge can help diagnose failures of fairness transfer, including cases where real-world shifts are more complex than is often assumed in the literature. Based on these results, we discuss potential remedies at each step of the machine learning pipeline. | Accept | This is a compelling work characterizing some forms of model (non-)robustness to drift through a causal lens, with a focus on performance metrics including group-level fairness. The methodological novelty to the work is a method for discovering structure for that drift, then using that structure to (i) estimate impact on metrics and (ii) mitigate those impacts. That method requires a domain expert to provide a rough causal graph for the application at hand, which is a light negative; yet, the work then presents two in-depth studies in the healthcare space to argue that this is a surmountable requirement, at least in some settings. This is a tough space to operate in, and I appreciated this very deep dive into two "real" cases -- as did the reviewers. | train | [
"KYmbCxLOtV",
"o0Eec0ncvWo",
"VzRL_ZXp9j0",
"upHtjwYKWnM",
"cENEpiGuzp",
"Jo9xzwSWTI",
"YxB0Xbe6_Xs",
"hY5uJ20kosG",
"2UizF4ntcYQ",
"20SqT_A3l1",
"K0hpbJGM6y3",
"_3600EdWMD",
"q_qLAw8-LEN",
"IpXQJg104Ul",
"d7OZTaulT13",
"ybyyAGn7bqO",
"z9IsR-su8oy",
"UOXdSJWjN7A5",
"vVoAW6iUk8",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Hi folks --\n\nThanks to the reviewers for their initial reviews, to the authors for a thoughtful rebuttal, and to both sides for their discussion in the meantime. Reviewers -- is there anything else that would be helpful to ask the authors before we move to our final deliberative phase? Please do get those fin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2022_K-A4tDJ6HHf",
"VzRL_ZXp9j0",
"vVoAW6iUk8",
"20SqT_A3l1",
"YxB0Xbe6_Xs",
"2UizF4ntcYQ",
"ybyyAGn7bqO",
"z9IsR-su8oy",
"UOXdSJWjN7A5",
"d7OZTaulT13",
"nips_2022_K-A4tDJ6HHf",
"_Yn2Vaz-DmE",
"IpXQJg104Ul",
"Kze6JYe6icM",
"ybyyAGn7bqO",
"z9IsR-su8oy",
"UOXdSJWjN7A5",
"vVoAW6... |
nips_2022_AYQI3rlp9tW | Efficient identification of informative features in simulation-based inference | Simulation-based Bayesian inference (SBI) can be used to estimate the parameters of complex mechanistic models given observed model outputs without requiring access to explicit likelihood evaluations. A prime example for the application of SBI in neuroscience involves estimating the parameters governing the response dynamics of Hodgkin-Huxley (HH) models from electrophysiological measurements, by inferring a posterior over the parameters that is consistent with a set of observations. To this end, many SBI methods employ a set of summary statistics or scientifically interpretable features to estimate a surrogate likelihood or posterior. However, currently, there is no way to identify how much each summary statistic or feature contributes to reducing posterior uncertainty. To address this challenge, one could simply compare the posteriors with and without a given feature included in the inference process. However, for large or nested feature sets, this would necessitate repeatedly estimating the posterior, which is computationally expensive or even prohibitive. Here, we provide a more efficient approach based on the SBI method neural likelihood estimation (NLE): We show that one can marginalize the trained surrogate likelihood post-hoc before inferring the posterior to assess the contribution of a feature. We demonstrate the usefulness of our method by identifying the most important features for inferring parameters of an example HH neuron model. Beyond neuroscience, our method is generally applicable to SBI workflows that rely on data features for inference used in other scientific fields. | Accept | The paper presents a method for feature selection in simulation-based inference, that is, for quantifying to what extent each feature (or summary statistic) contributes to reducing posterior uncertainty. 
As this can be accomplished naively by re-estimating the posterior after systematically omitting features, the focus is on developing a fast method instead. The proposed method is evaluated on parameter estimation of a Hodgkin-Huxley neuron model from neuroscience.
The reviewers found the paper to be clearly written and did not voice major concerns regarding its technical quality. The proposed method is simple, efficient and clearly presented, and the evaluation on the Hodgkin-Huxley model is convincing. A potential drawback of the method is that it requires a model that can be analytically marginalized over, and thus may not benefit from current and future advances in generative modelling.
Seeing as the paper is of good quality without major problems, I'm happy to recommend acceptance. | train | [
"QL76PFG5Io",
"bxAbQptsUUh",
"Gn9X5HqKNXY",
"Jl23y_4yzIJ",
"2viccE_zl3I",
"wHbB9R_dsk",
"3zRixAq_SmF",
"SD6PXx-wR8r",
"8bDJO2FgaQ0",
"IvAHl64OJE",
"ZPTAVPrDxAA",
"QLjQkDGsl16",
"-MmtLhBBrLw"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their reply and for appreciating our efforts.",
" We thank the reviewer for their reply and for appreciating our efforts.",
" Thanks for your reply. I think it all makes sense and I'm glad you added the discussion of MI and how a single observation can be important in neuroscience. I... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"Gn9X5HqKNXY",
"Jl23y_4yzIJ",
"3zRixAq_SmF",
"SD6PXx-wR8r",
"-MmtLhBBrLw",
"QLjQkDGsl16",
"ZPTAVPrDxAA",
"IvAHl64OJE",
"nips_2022_AYQI3rlp9tW",
"nips_2022_AYQI3rlp9tW",
"nips_2022_AYQI3rlp9tW",
"nips_2022_AYQI3rlp9tW",
"nips_2022_AYQI3rlp9tW"
] |
nips_2022_S8-duMv77W3 | Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge | Great efforts have been devoted to causal discovery from observational data, and it is well known that introducing some background knowledge attained from experiments or human expertise can be very helpful. However, it remains unknown that \emph{what causal relations are identifiable given background knowledge in the presence of latent confounders}. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a \emph{local} form. Furthermore, based on the solution to the problem, this paper proposes a general active learning framework for causal discovery in the presence of latent confounders, with its effectiveness and efficiency validated by experiments. | Accept | This paper presents sound and complete orientation rules to incorporate local causal background knowledge along with algorithms implementing these rules. Reviewers were universally appreciative of the contributions and in favor of acceptance. | train | [
"z8PPK1ZoEYE",
"JxQ6YfyrTNh",
"7Rfn5OZbEnF",
"vxI0KVJx6lV",
"fdtEfdiTJ7Z",
"orx6mbsHCn",
"H1dvx5xfTYi",
"XKI6qh_BGcG",
"Zlx1MFXuf7H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for answering my questions in detail. I like this paper. I will maintain my recommendation of acceptance.",
" Thank you for your clarification. All my concerns have been addressed.",
" Thanks for the response regarding the comparison to Jaber et al. I will keep my current score.",
" Thank you for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"fdtEfdiTJ7Z",
"orx6mbsHCn",
"vxI0KVJx6lV",
"Zlx1MFXuf7H",
"XKI6qh_BGcG",
"H1dvx5xfTYi",
"nips_2022_S8-duMv77W3",
"nips_2022_S8-duMv77W3",
"nips_2022_S8-duMv77W3"
] |
nips_2022_aJ5xc1QB7EX | Deep Active Learning by Leveraging Training Dynamics | Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, thus suffering from heavy doubts when some of those fail to provide benefits in applications. In this paper, by exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance is positively correlated under the ultra-wide condition and show that maximizing the training dynamics leads to a better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well on large deep learning models. We hope our work inspires more attempts in bridging the theoretical findings of deep networks and practical impacts in deep active learning applications. | Accept | It is the consensus of the reviewers that this paper makes a worthwhile contribution to active deep learning. The author(s)' idea of optimizing for convergence speed is interesting and of potential significance. The meta-reviewer would recommend acceptance of the paper as a poster. | train | [
"8nb4XHsnjto",
"gPwAjqQG8a",
"hYl3yke91Vp",
"3RI1AaWdKVn",
"G6Wnh0LDsgSi",
"1Sv0-h2jNvX",
"oBPyMxXNte",
"fO0rGtE_3i",
"rBYyWEpHoqf",
"mmTLrVy3a3ai",
"Y0o1aRjqNRS",
"5HTCrUQliqsl",
"GcSOCZ0xlT",
"I-HemXrWAES",
"gPmbnRlrCV4",
"axcumpLmIuE",
"G_fn61eZFj-",
"SzpOhpMbhPa"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the positive comments. We really appreciate all the discussions and suggestions!\n \n \n \nBest,\n \nAuthors.",
" Thanks for the responses.",
" Thanks for an invigorating discussion, this paper was a pleasure to review. I hope that others read it with as much enthusiasm as I did :)... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"hYl3yke91Vp",
"I-HemXrWAES",
"fO0rGtE_3i",
"Y0o1aRjqNRS",
"1Sv0-h2jNvX",
"oBPyMxXNte",
"rBYyWEpHoqf",
"Y0o1aRjqNRS",
"mmTLrVy3a3ai",
"GcSOCZ0xlT",
"G_fn61eZFj-",
"SzpOhpMbhPa",
"axcumpLmIuE",
"gPmbnRlrCV4",
"nips_2022_aJ5xc1QB7EX",
"nips_2022_aJ5xc1QB7EX",
"nips_2022_aJ5xc1QB7EX",
... |
nips_2022_vdxOesWgbyN | Model Extraction Attacks on Split Federated Learning | Federated learning (FL) is a popular collaborative learning scheme involving multiple clients and a server. FL focuses on client's data privacy but exposes interfaces for Model Extraction (ME) attacks. As FL periodically collects and shares model parameters, a malicious client can download the latest model and thus steal model Intellectual Property (IP). Split Federated Learning (SFL), a recent variant of FL, splits the model into two, giving one part of the model (client-side model) to clients, and the remaining part (server-side model) to the server. While SFL was primarily designed to facilitate training on resource-constrained devices, it prevents some ME attacks by blocking prediction queries. In this work, we expose the vulnerability of SFL and show how ME attacks can be launched by malicious clients querying the gradient information from server-side. We propose five ME attacks that differ in the gradient usage in data crafting, generating, gradient matching and soft-label crafting as well as in the attacker data availability assumptions. We show that the proposed ME attacks work exceptionally well for SFL. For instance, when the server-side model has five layers, our proposed ME attack can achieve over 90% accuracy with less than 2% accuracy degradation with VGG-11 on CIFAR-10. | Reject | The paper studies the vulnerability of split federated learning with model extraction attacks. The paper provides five attacks and evaluates them experimentally. The authors also provided additional experimental results during the author rebuttal. While the topic and techniques are interesting, reviewers raise concerns about the novelty, and lack of experiments on standard FL datasets (e.g., LEAF) or large number of clients. 
While the authors addressed some of these concerns during rebuttals, the paper can benefit from (a) explaining the novelty of the contributions, (b) clarifying the assumptions made in the paper, (c) explaining whether the paper considers cross-device or cross-silo federated learning, and (d) adding more experiments on standard FL datasets and tasks. | train | [
"kKhqPi4bx66",
"6rcDCgmbem",
"RO6cLx8n77x",
"u4yz63jlC0W",
"2B7WGMIIwK-",
"BdMlQCndw04",
"eaWcCrnjC5e",
"FHDXEJS0zMW",
"6z0kI3ytJ1H",
"hH_ooIzqUaM",
"5McwCk4_5-i",
"XkN21yG7xdY",
"lczOKdJhtsa",
"3KFwNH-Vni",
"euJ40q_hPyf",
"Qg-gVEdwo0",
"Ov_cwfW0YsS",
"XmMKYe6dfr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' detailed response. I will keep my rating for this work.",
" **Response to Q2**: We kindly disagree that showing the vulnerability of split federated learning is not a major finding. The model extraction attack (MEA) we are focusing on is a unique attack vector for SFL that does not exi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"5McwCk4_5-i",
"u4yz63jlC0W",
"lczOKdJhtsa",
"FHDXEJS0zMW",
"XmMKYe6dfr",
"nips_2022_vdxOesWgbyN",
"nips_2022_vdxOesWgbyN",
"XmMKYe6dfr",
"XmMKYe6dfr",
"Ov_cwfW0YsS",
"Ov_cwfW0YsS",
"Qg-gVEdwo0",
"euJ40q_hPyf",
"nips_2022_vdxOesWgbyN",
"nips_2022_vdxOesWgbyN",
"nips_2022_vdxOesWgbyN",
... |
nips_2022_kUnHCGiILeU | CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation | While implicit representations have achieved high-fidelity results in 3D rendering, deforming and animating the implicit field remains challenging. Existing works typically leverage data-dependent models as deformation priors, such as SMPL for human body animation. However, this dependency on category-specific priors limits them to generalize to other objects. To solve this problem, we propose a novel framework for deforming and animating the neural radiance field learned on \textit{arbitrary} objects. The key insight is that we introduce a cage-based representation as deformation prior, which is category-agnostic. Specifically, the deformation is performed based on an enclosing polygon mesh with sparsely defined vertices called \textit{cage} inside the rendering space, where each point is projected into a novel position based on the barycentric interpolation of the deformed cage vertices. In this way, we transform the cage into a generalized constraint, which is able to deform and animate arbitrary target objects while preserving geometry details. Based on extensive experiments, we demonstrate the effectiveness of our framework in the task of geometry editing, object animation and deformation transfer. | Accept | All reviewers consider the central idea of the paper to be novel and interesting, and agree that deformable NeRFs are a valuable research area.
All reviewers agree that the initial paper was poorly presented, and that the revised version is considerably improved.
While accepting the authors' rebuttal regarding the lack of suitable datasets to evaluate deformable NeRFs, the qualitative presentation could be improved further; in particular, it is recommended to include video sequences of continuous deformation, which will more clearly show the artifacts of both the proposed and other methods.
| train | [
"AvefCYkEgGJ",
"jbY6bX2xetp",
"9brrPgIBGB",
"UqVBG66Xd",
"lULOkqkxKBf",
"xDQ51os4Dsn",
"vWC8xXwWBj4",
"odKun87dErh",
"fdP5umkrtW",
"k37IHb3akv",
"XGM5ISTKcfV",
"vLCwspF5Gi",
"qVic8M5nxix",
"8Q9Oavka_v2",
"xQdx6IzOvfY",
"fs7tUxaUoZ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the effort and time you contributed to reviewing our manuscript. We are glad that our response addressed your concerns.",
" Thank you for these clarifications. I think that incorporating them somehow into another revised version would be helpful, as I was not confident in my understanding of test-... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"jbY6bX2xetp",
"9brrPgIBGB",
"odKun87dErh",
"fdP5umkrtW",
"vWC8xXwWBj4",
"vWC8xXwWBj4",
"XGM5ISTKcfV",
"qVic8M5nxix",
"vLCwspF5Gi",
"nips_2022_kUnHCGiILeU",
"8Q9Oavka_v2",
"xQdx6IzOvfY",
"fs7tUxaUoZ",
"nips_2022_kUnHCGiILeU",
"nips_2022_kUnHCGiILeU",
"nips_2022_kUnHCGiILeU"
] |
nips_2022_k5uFiFLWv3X | Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation | Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which can produce erroneous predictions by injecting imperceptible perturbations. In this work, we study the transferability of adversarial examples, which is significant due to its threat to real-world applications where model architecture or parameters are usually unknown. Many existing works reveal that the adversarial examples are likely to overfit the surrogate model that they are generated from, limiting its transfer attack performance against different target models. To mitigate the overfitting of the surrogate model, we propose a novel attack method, dubbed reverse adversarial perturbation (RAP). Specifically, instead of minimizing the loss of a single adversarial point, we advocate seeking adversarial example located at a region with unified low loss value, by injecting the worst-case perturbation (the reverse adversarial perturbation) for each step of the optimization procedure. The adversarial attack with RAP is formulated as a min-max bi-level optimization problem. By integrating RAP into the iterative process for attacks, our method can find more stable adversarial examples which are less sensitive to the changes of decision boundary, mitigating the overfitting of the surrogate model. Comprehensive experimental comparisons demonstrate that RAP can significantly boost adversarial transferability. Furthermore, RAP can be naturally combined with many existing black-box attack techniques, to further boost the transferability. When attacking a real-world image recognition system, Google Cloud Vision API, we obtain 22% performance improvement of targeted attacks over the compared method. Our codes are available at https://github.com/SCLBD/Transfer_attack_RAP. | Accept | This paper studies the transferability of adversarial examples. 
In general, the reviewers found the paper to be well motivated and the proposed method simple and effective. Most initial concerns were about missing comparisons and ablations.
All these concerns are well addressed in the rebuttal. As a result, all reviewers unanimously agree to accept this submission. | train | [
"s91Zj1Y0Xr",
"5A7qwy6kHx",
"9krxfYDt7Nf",
"ruI-bVhcqt-",
"2eElVfK7J3",
"lPwHD3bqOe",
"5EaHB0Tzovz",
"muRmI2ZfGhu",
"J-xmiYLuOvV",
"RTQxoPRssgr",
"ozB2ph99g3u",
"X0OBkSOkfhR",
"FuF-aXauftE",
"I_xfzw-dSkV",
"_fEXdVqJU_S0",
"Ajc-IqaP1NI",
"k4-7HWoaBM7",
"mjPZHC9GJqS",
"9AP9ijHZLT",... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer pUE8, \n\nConsidering the discussion stage is close to the end, we are looking forward to your further feedback about our latest response (posted one day ago, see below), whether your remaining concern has been addressed. We would like to discuss with you in more details. \nGreatly appreciate your h... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"muRmI2ZfGhu",
"nips_2022_k5uFiFLWv3X",
"r1O5naRNSGz",
"Iceg-We4ALP",
"Ajc-IqaP1NI",
"RTQxoPRssgr",
"muRmI2ZfGhu",
"ozB2ph99g3u",
"Iceg-We4ALP",
"r1O5naRNSGz",
"zxjtPtUSwq9",
"Gygif0cheP",
"nips_2022_k5uFiFLWv3X",
"_fEXdVqJU_S0",
"Iceg-We4ALP",
"k4-7HWoaBM7",
"r1O5naRNSGz",
"zxjtPt... |
nips_2022_OoN6TVb4Vkq | Contextual Bandits with Knapsacks for a Conversion Model | We consider contextual bandits with knapsacks, with an underlying structure between rewards generated and cost vectors suffered. We do so motivated by sales with commercial discounts. At each round, given the stochastic i.i.d.\ context $\mathbf{x}_t$ and the arm picked $a_t$ (corresponding, e.g., to a discount level), a customer conversion may be obtained, in which case a reward $r(a,\mathbf{x}_t)$ is gained and vector costs $\mathbf{c}(a_t,\mathbf{x}_t)$ are suffered (corresponding, e.g., to losses of earnings). Otherwise, in the absence of a conversion, the reward and costs are null. The reward and costs achieved are thus coupled through the binary variable measuring conversion or the absence thereof. This underlying structure between rewards and costs is different from the linear structures considered by Agrawal and Devanur [2016] (but we show that the techniques introduced in the present article may also be applied to the case of these linear structures). The adaptive policies exhibited in this article solve at each round a linear program based on upper-confidence estimates of the probabilities of conversion given $a$ and $\mathbf{x}$. This kind of policy is most natural and achieves a regret bound of the typical order $(\mathrm{OPT}/B) \smash{\sqrt{T}}$, where $B$ is the total budget allowed, $\mathrm{OPT}$ is the optimal expected reward achievable by a static policy, and $T$ is the number of rounds. | Accept | As acknowledged in the reviews (and I concur), this is a well written paper that introduces a relevant and interesting contextual-bandits model and gives solid technical results. The paper certainly has its limitations, but overall the technical contributions are novel and well executed. The authors have effectively addressed the questions and concerns brought up in the reviews. I recommend acceptance. | train | [
"xnuTCS-3X1x",
"iQMK1OSTzu",
"5OCBuiBulfu0",
"5_-3IwR-2a4",
"CD_52oxB-_",
"wxDpAZmonR",
"PWDuWGz1rDi",
"jctrIYyz5aF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed response and they have addressed most of my questions on the technical results. In terms of future directions, I think the main limitations of the current work (e.g., finiteness of contextual space, lack of specific lower bound results, etc.) remain to be further looked at. ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"5_-3IwR-2a4",
"5OCBuiBulfu0",
"jctrIYyz5aF",
"PWDuWGz1rDi",
"wxDpAZmonR",
"nips_2022_OoN6TVb4Vkq",
"nips_2022_OoN6TVb4Vkq",
"nips_2022_OoN6TVb4Vkq"
] |
nips_2022_buXZ7nIqiwE | Using natural language and program abstractions to instill human inductive biases in machines | Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations result in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key. | Accept | The submission explores differences in human and machine inductive biases. Using the task of generating a pattern in a grid, the authors first show that models trained on human inputs generalize better to machine generated inputs than other human inputs, suggesting that models lack the correct inductive bias. However, by exploiting representations of natural language descriptions or programs during training, models can be given a human-like inductive bias. The reviewers agree that this is creative, thought provoking work backed up by thorough experiments, and the paper it is particularly well written. The main area for improvement in future work would be showing that these results would generalize to other, more complex domains. | train | [
"bk-L9PoPZkc",
"revRBERMQc-",
"wejsH1UHEWA",
"sKkadxqyNvZ",
"kV3Yg4HxA-y",
"tdgMn-r5YWB",
"5RnUbSW8zA5",
"SBCSSU8gshG",
"A5IcF6hPDSk",
"8k3BbpTDx28",
"UAarKPlDOJS",
"9D2RU7P-fQa"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for the detailed responses, all my points are adequately addressed. I agree with the authors that the weaknesses I mentioned in my initial review are better seen as avenues for future work. Very interesting to learn about this meta-learning approach for RL, I will read more about that. I will keep my... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"5RnUbSW8zA5",
"nips_2022_buXZ7nIqiwE",
"9D2RU7P-fQa",
"9D2RU7P-fQa",
"9D2RU7P-fQa",
"UAarKPlDOJS",
"UAarKPlDOJS",
"8k3BbpTDx28",
"8k3BbpTDx28",
"nips_2022_buXZ7nIqiwE",
"nips_2022_buXZ7nIqiwE",
"nips_2022_buXZ7nIqiwE"
] |
nips_2022_5vVSA_cdRqe | FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning | Vertical federated learning (VFL) is a privacy-preserving machine learning paradigm that can learn models from features distributed on different platforms in a privacy-preserving way. Since in real-world applications the data may contain bias on fairness-sensitive features (e.g., gender), VFL models may inherit bias from training data and become unfair for some user groups. However, existing fair machine learning methods usually rely on the centralized storage of fairness-sensitive features to achieve model fairness, which are usually inapplicable in federated scenarios. In this paper, we propose a fair vertical federated learning framework (FairVFL), which can improve the fairness of VFL models. The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way. Specifically, each platform with fairness-insensitive features first learns local data representations from local features. Then, these local representations are uploaded to a server and aggregated into a unified representation for the target task. In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data. Moreover, for protecting user privacy, we further propose a contrastive adversarial learning method to remove private information from the unified representation in server before sending it to the platforms keeping fairness-sensitive features. Experiments on three real-world datasets validate that our method can effectively improve model fairness with user privacy well-protected. 
| Accept | This paper presents a fair vertical federated learning framework (FairVFL), by learning a set of unified fair representations of data/features distributed across decentralized platforms, i.e., these representations do not reflect sensitive attributes such as age/gender. This is accomplished by having platforms learn local data representations from fairness-insensitive features, which are then aggregated at a central server; this is then sent (after a contrastive adversarial learning method removes private information) to platforms with fairness-sensitive features to remove bias by using adversarial learning.
All reviewers found the method sound and the experimental results convincing -- two real-world datasets were used to show that the proposed method can effectively improve model fairness while preserving user privacy.
Some concerns were raised over the communication and computational overhead of the proposed method, but none of the reviewers considered it serious enough as to merit rejection.
One note of caution on the Accept recommendation: this is a paper that did not receive a review with confidence level above 3. Usually there is at least one review at the level of 4. | train | [
"9ZjHiPrXNGF",
"GxPGJhyOd7",
"EeqUSAG8noQ",
"MOEnQvexrAG",
"HQ-DeicBLuu",
"wFp48j0qsY0",
"lLXATqNtUL"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' response on my questions. They more or less answer my questions. However, as I am not an expert in fairness-aware ML, I maintain my score at 6 at the present stage. ",
" Thank the reviewer for the insightful comments and constructive suggestions. Our detailed responses to your comments... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"GxPGJhyOd7",
"lLXATqNtUL",
"wFp48j0qsY0",
"HQ-DeicBLuu",
"nips_2022_5vVSA_cdRqe",
"nips_2022_5vVSA_cdRqe",
"nips_2022_5vVSA_cdRqe"
] |
nips_2022_U-RsnLYHcKa | Wasserstein Logistic Regression with Mixed Features | Recent work has leveraged the popular distributionally robust optimization paradigm to combat overfitting in classical logistic regression. While the resulting classification scheme displays a promising performance in numerical experiments, it is inherently limited to numerical features. In this paper, we show that distributionally robust logistic regression with mixed (\emph{i.e.}, numerical and categorical) features, despite amounting to an optimization problem of exponential size, admits a polynomial-time solution scheme. We subsequently develop a practically efficient cutting plane approach that solves the problem as a sequence of polynomial-time solvable exponential conic programs. Our method retains many of the desirable theoretical features of previous works, but---in contrast to the literature---it does not admit an equivalent representation as a regularized logistic regression, that is, it represents a genuinely novel variant of the logistic regression problem. We show that our method outperforms both the unregularized and the regularized logistic regression on categorical as well as mixed-feature benchmark instances. | Accept | The focus of the submission is distributionally robust logistic regression when the discrepancy used in the ambiguity set is the Wasserstein distance and the features are mixed (i.e., they can contain both numerical and categorical variables). After showing that the resulting optimization problem (1) with the log-loss function can be reformulated as a finite-dimensional exponential conic program (Theorem 1), they (i) prove that (1) can be solved in polynomial time (Theorem 2), (ii) show that it does not admit a regularized logistic regression form (Theorem 3) as it is the case for purely numerical features, (iii) propose a column-and-constraint solver (Theorem 4-5). The practical efficiency of the proposed method is illustrated on 14 UCI benchmarks.
Logistic regression (LR) is among the most popular tools in machine learning and statistics. Handling mixed features for LR in the distributionally robust case is a relevant problem. The submission represents a solid work combining both important theoretical and empirical insights as it was evaluated by the reviewers. | test | [
"gaZawnAkM-t",
"0I006FwGddZ",
"G1bU6X2sm7q",
"Am69inPjr3s",
"splbxmpvlJX",
"DMPYhQsxSTX",
"92GuBodTaD4",
"ED2llEMsWzr",
"bZhsJ1qhoQn",
"JcJbj_bNpSO_",
"KxjyQTKIcnj",
"oPb6MRFSWrP",
"yty4aBSyHb",
"oXUNFLadop1",
"wBh4ptSzkjx",
"BzljoklYltS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed explanations. I have no more questions and would tend to raise my rating from 5 to 6 (weak accept).",
" We thank you again for your comments! We agree with you on the statement of Theorem 2 and will clarify that in our next revision of the paper.",
" I thank the authors for their r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3,
3
] | [
"ED2llEMsWzr",
"G1bU6X2sm7q",
"BzljoklYltS",
"nips_2022_U-RsnLYHcKa",
"BzljoklYltS",
"BzljoklYltS",
"wBh4ptSzkjx",
"wBh4ptSzkjx",
"oXUNFLadop1",
"yty4aBSyHb",
"oPb6MRFSWrP",
"nips_2022_U-RsnLYHcKa",
"nips_2022_U-RsnLYHcKa",
"nips_2022_U-RsnLYHcKa",
"nips_2022_U-RsnLYHcKa",
"nips_2022_U... |
nips_2022_RW-OOBU11xl | Forecasting Human Trajectory from Scene History | Predicting the future trajectory of a person remains a challenging problem, due to randomness and subjectivity. However, the moving patterns of human in constrained scenario typically conform to a limited number of regularities to a certain extent, because of the scenario restrictions (\eg, floor plan, roads and obstacles) and person-person or person-object interactivity. Thus, an individual person in this scenario should follow one of the regularities as well. In other words, a person's subsequent trajectory has likely been traveled by others. Based on this hypothesis, we propose to forecast a person's future trajectory by learning from the implicit scene regularities. We call the regularities, inherently derived from the past dynamics of the people and the environment in the scene, \emph{scene history}. We categorize scene history information into two types: historical group trajectories and individual-surroundings interaction. To exploit these information for trajectory prediction, we propose a novel framework Scene History Excavating Network (SHENet), where the scene history is leveraged in a simple yet effective approach. In particular, we design two components, the group trajectory bank module to extract representative group trajectories as the candidate for future path, and the cross-modal interaction module to model the interaction between individual past trajectory and its surroundings for trajectory refinement, respectively. In addition, to mitigate the uncertainty in the evaluation, caused by the aforementioned randomness and subjectivity, we propose to include smoothness into evaluation metrics. We conduct extensive evaluations to validate the efficacy of proposed framework on ETH, UCY, as well as a new, challenging benchmark dataset PAV, demonstrating superior performance compared to state-of-the-art methods. | Accept | Initially, the paper received mixed reviews (3456). 
The major concerns raised by the reviews were:
1. What is the contribution of the PAV dataset? (XkVC)
2. There should be experiments on existing datasets, e.g. SDD or inDD. (XkVC, tAHR)
3. Is it fair to use curve smoothing on the GT during evaluation? To be fair, other works should be trained with CS loss too. What is the reason for CS, does it affect generalization of the model? No ablation for CS loss (XkVC, ZK4Z)
4. reported results of some SOTA works on ETH/UCY are worse than the published results. Why not use the original reported results? (XkVC, bcbq)
5. what are the settings of the hyperparameters? Is it dataset specific? (XkVC)
6. ablation study on the clustering methods used for constructing the group trajectory bank. (XkVC)
7. The bank is fixed after training -- how to guarantee the learned bank can be used in a new environment, new scene (tAHR, ZK4Z)
8. show results on ETH/UCY w/o using video data (tAHR)
9. can the proposed method work well from bird-eye view or when assumed information is missing? (tAHR)
10. evaluation metric used on ETH/UCY is not clear. stochastic vs deterministic? (bcbq, ZK4Z)
11. novelty is not significant (ZK4Z)
12. Missing details: scene-trajectory alignment, how to set K, how to set no. of semantic classes, range of 13. coordinates for cosine similarity, (ZK4Z)
14. No ablation on memory vs complexity (ZK4Z)
15. No ablation on the memory bank size and initialization (ZK4Z)
The authors wrote a response to address these concerns. All reviewers were satisfied with the response. Overall, the reviewers found the paper interesting, although the approach is somewhat incremental and more experiments could be added (e.g., on SDD and inDD, as well as ablation studies). Nonetheless, the final ratings increased to 5666. The AC agrees with the reviewers and thus recommends accept. The authors should further revise the paper according to the reviews and discussion. | train | [
"XABytwwoccs",
"-hG3ueuQ68s",
"TNakIlFF1p5",
"U8h6nkOI3TW",
"uVzYlOdv6kz",
"gBtB2gGMQ5R",
"ObeRUwc_b3F",
"ss2Lv1jmOlv",
"UPoQsN8ve8E",
"wFnrE9EUdJ-",
"vkHT3qcLrc0",
"Nd0oyqhrMbI",
"N7VNW5XXS1K"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for uploading the revised the paper. I've checked the manuscript and increased my rating. I highly encourage authors to include several more lateset works in this year CVPR, since there are some highly relevant papers in the proceedings. i.e. [1][2][3] etc.\n\nAlso, there are some typos in the manuscrip... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"-hG3ueuQ68s",
"TNakIlFF1p5",
"gBtB2gGMQ5R",
"uVzYlOdv6kz",
"ss2Lv1jmOlv",
"N7VNW5XXS1K",
"Nd0oyqhrMbI",
"vkHT3qcLrc0",
"wFnrE9EUdJ-",
"nips_2022_RW-OOBU11xl",
"nips_2022_RW-OOBU11xl",
"nips_2022_RW-OOBU11xl",
"nips_2022_RW-OOBU11xl"
] |
nips_2022_-QHUWgkh1OY | DOGE-Train: Discrete Optimization on GPU with End-to-end Training | We present a fast, scalable, data-driven approach for solving linear relaxations of 0-1 integer linear programs using a graph neural network.
Our solver is based on the Lagrange decomposition based algorithm of Abbas et al. (2022).
We make the algorithm differentiable and perform backpropagation through the dual update scheme for end-to-end training of its algorithmic parameters.
This allows to preserve the algorithm's theoretical properties including feasibility and guaranteed non-decrease in the lower bound.
Since the method of Abbas et al. (2022) can get stuck in suboptimal fixed points, we provide additional freedom to our graph neural network to predict non-parametric update steps for escaping such points while maintaining dual feasibility.
For training of the graph neural network we use an unsupervised loss and perform experiments on large-scale real world datasets.
We train on smaller problems and test on larger ones showing strong generalization performance with a graph neural network comprising only around $10k$ parameters.
Our solver achieves significantly faster performance and better dual objectives than its non-learned version of Abbas et al. (2022).
In comparison to commercial solvers our learned solver achieves close to optimal objective values of LP relaxations and is faster by up to an order of magnitude on very large problems from structured prediction and on selected combinatorial optimization problems.
Our code will be made available upon acceptance. | Reject | The presented paper introduces DOGE-Train method that targets discrete optimization problems. It allows finding solutions for discrete problems utilizing GPUs. This is achieved by pre-training on smaller size instances and then hoping it would also generalize for larger instances that are coming from a similar family of problems. Overall, the idea is related to FastDOG but has some improvements.
Despite showing promising results, there are still a few concerns raised by reviewer "ehgi". Also, I am not sure if the NeurIPS would be the best fit for this paper.
| train | [
"gty32GFx0n",
"rXhOq7PlT_1",
"YJhD4RuNZh6",
"NPu1OPxn15J",
"-B52eYXbLe3",
"lkjA46s77zv",
"8zdVg9hMc9u",
"VP_0kjHvbbm",
"h0guQuH13o",
"Zqqt02Ca_Dt",
"lJgbzZrZwi",
"9tL_NmDe8t",
"geuQAPKcV4B",
"Fj9QQKnjdGm",
"j_Omy9SN7--S",
"bbZtrYMvzVW",
"IFpEq1n5Bj",
"UGe4d01mfCp"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for detailed feedback. We will add more context and discussion about specialized solvers in the final version.\n\n_Regards,_\n\n_Paper 3276 authors_",
" Thanks for answering questions! Now, the proposed approach becomes technically more clear and the presentation is more consistent. The pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"rXhOq7PlT_1",
"9tL_NmDe8t",
"-B52eYXbLe3",
"lkjA46s77zv",
"lJgbzZrZwi",
"h0guQuH13o",
"VP_0kjHvbbm",
"geuQAPKcV4B",
"Zqqt02Ca_Dt",
"Fj9QQKnjdGm",
"UGe4d01mfCp",
"UGe4d01mfCp",
"IFpEq1n5Bj",
"bbZtrYMvzVW",
"nips_2022_-QHUWgkh1OY",
"nips_2022_-QHUWgkh1OY",
"nips_2022_-QHUWgkh1OY",
"... |
nips_2022_Mn4IkuWamy | The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning | We study the multi-step off-policy learning approach to distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two cases in the multi-step setting. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. The distinction from the value-based case bears important implications on concepts such as backward-view algorithms. Our work provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms, including results that apply to the small number of existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent QR-DQN-Retrace that shows empirical improvements over QR-DQN on the Atari-57 benchmark. Collectively, we shed light on how unique challenges in multi-step distributional RL can be addressed both in theory and practice. | Accept | The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. Although the reviewers generally expressed positive views on the proposed method, they also pointed out many possible limitations of this paper. On the one hand, one reviewer acknowledged that the authors provided the first theoretical guarantees on multi-step off-policy distributional RL algorithms via a novel algorithm. They argued, however, that the methodological novelty may not be too significant compared to the baseline QR-DQN, and that the experiments could be improved. 
After reading the authors' rebuttal, this reviewer—although still with an overall positive impression of the quality of this work—said that they do not believe the paper fully shows the particularity of applying the multi-step idea to DRL when compared to some value-based methods. They also believe that the empirical section of the paper was not strong enough. Another reviewer agreed that this is novel and significant work, as it introduces a non-trivial extension of one-step distributional RL to the multi-step off-policy setting. They argued that this paper provides a theoretical basis for the continued study of multi-step paradigms in off-policy reinforcement learning, which is important/significant. This reviewer, however, pointed out that the experimental results were not sufficiently convincing to support the claims that employing the multi-step distributional RL algorithm provides a significant empirical advantage. This reviewer carefully analyzed the authors' rebuttal and appreciated that they clarified the reviewer's original points of confusion. Overall, however, this reviewer agreed with others that the empirical results are not overly convincing. Furthermore, one of the reviewers argued that the notion of path-dependent distributional TD error is novel. They pointed out, as the main limitation of this paper, that investigating the multi-step distributional RL setting "may not be [particularly interesting since] some theoretical results on one-step distributional RL can be straightforward to extend into the multi-step version". They also argued that the unbiased QR-loss was not clearly motivated since the bias for RL may also be useful for exploration. After reading the authors' rebuttal, this particular reviewer updated their score since the authors provided more evidence to strengthen their theoretical and empirical claims. 
Overall, all reviewers were positively impressed with the quality of this work but brought up many points of contention regarding ways in which the paper could/should still be improved. They encourage the authors to update their work based on their constructive criticisms and, in particular, in a way that tackles the points of contention mentioned in the original reviews and their post-rebuttal comments. | train | [
"rSUPz7CiQ6v",
"NwxvIV9wXrK",
"Ru3MpVeFVF",
"KDTEFhHoGZF",
"1l0MdPmRM7b",
"x4nwqyXnz2Q",
"WVLJ6jtFaKW",
"L1Lm3Lv9P3c",
"WA9rPtbXc_A",
"p2FKaE61Rns",
"NXA8RZbdfnZ",
"8iMurE2pyT_",
"VOQQNC02muM"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your reply. We are glad that the explanation above has clarified things. We will make sure to include such discussions and incorporate your suggestions in our revision.",
" Thank you very much for the informative response, your argument is now clear to me! Could you please add a small no... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"NwxvIV9wXrK",
"Ru3MpVeFVF",
"KDTEFhHoGZF",
"WA9rPtbXc_A",
"WVLJ6jtFaKW",
"nips_2022_Mn4IkuWamy",
"L1Lm3Lv9P3c",
"VOQQNC02muM",
"8iMurE2pyT_",
"NXA8RZbdfnZ",
"nips_2022_Mn4IkuWamy",
"nips_2022_Mn4IkuWamy",
"nips_2022_Mn4IkuWamy"
] |
nips_2022_VwRFJi9crEH | Personalized Subgraph Federated Learning | In real-world scenarios, subgraphs of a larger global graph may be distributed across multiple devices or institutions, and only locally accessible due to privacy restrictions, although there may be links between them. Recently proposed subgraph Federated Learning (FL) methods deal with those missing links across private local subgraphs while distributively training Graph Neural Networks (GNNs) on them. However, they have overlooked the inevitable heterogeneity among subgraphs, caused by subgraphs comprising different parts of a global graph. For example, a subgraph may belong to one of the communities within the larger global graph. A naive subgraph FL in such a case will collapse incompatible knowledge from local GNN models trained on heterogeneous graph distributions. To overcome such a limitation, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNN models rather than learning a single global GNN model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it. A crucial challenge in personalized subgraph FL is that the server does not know which subgraph each client has. FED-PUB thus utilizes functional embeddings of the local GNNs using random graphs as inputs to compute similarities between them, and use them to perform weighted averaging for server-side aggregation. Further, it learns a personalized sparse mask at each client to select and update only the subgraph-relevant subset of the aggregated parameters. We validate FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs, on which ours largely outperforms relevant baselines. | Reject | The author(s) present(s) a new subgraph federated learning approach to learn a single GNN model that computes embeddings based on the relationship between local graphs. 
This approach goes beyond the previous approaches that consider the local subgraphs separately.
The paper is interesting and present some novel ideas but it should address two fundamental critiques by the reviewers before acceptance. In particular the authors should:
- clarify the novelty of their approach(especially considering the missing discussion with previous work)
- improve the experimental section. The current results are interesting but not fully convincing and the experimental section could be improved significantly by including additional settings suggested by the reviewers
Overall, the paper is interesting but it is not ready for publication at this point | train | [
"sNZimWDO6Ow",
"1NmnBT7ogae",
"V1XMplRDyb",
"jY9lPdlQAXi",
"r7eAGfLwrbX",
"yjPxe342La",
"GxIxs3n97uw",
"T3POAOwTao",
"ozIULvzpAm4",
"UI9L93T2Iu",
"8VNyOq7wiI",
"rsFvzra8iQ",
"qsDyC7OnlQt",
"KvJ-WBKbQIs",
"EWy8Dg55lyv",
"VsRS6Oi-6jr",
"NN85sabUXrQ",
"L-07kJX7cjq",
"F3OXYEl1lv",
... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" We sincerely thank you for your helpful comments, as well as your time and effort in reviewing our paper. We are happy to hear that your concerns are mostly resolved. \n\n**NOTE:** I know you must be very busy, but could you please reflect your updated score in your original comment? You promised to increase your... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
4
] | [
"1NmnBT7ogae",
"Zjti7Vui6j_",
"auZDhY46SWU",
"GxIxs3n97uw",
"ozIULvzpAm4",
"T3POAOwTao",
"CedWT0qAIju",
"8VNyOq7wiI",
"UI9L93T2Iu",
"i3TQduj1KP",
"SCVMP7IrQm6",
"auZDhY46SWU",
"5jKdm5NEJG",
"i3TQduj1KP",
"SCVMP7IrQm6",
"auZDhY46SWU",
"5jKdm5NEJG",
"5jKdm5NEJG",
"5jKdm5NEJG",
"5... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.