Dataset schema (column name, type, and value statistics):

paper_id            string (length 19–21)
paper_title         string (length 8–170)
paper_abstract      string (length 8–5.01k)
paper_acceptance    string (18 classes)
meta_review         string (length 29–10k)
label               string (3 classes: train / val / test)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
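Assuming the records above are exported as a JSON-lines file (the filename `reviews.jsonl` below is hypothetical), a minimal Python sketch for loading and inspecting them:

```python
import json

# Hypothetical filename: substitute the actual export of this dataset.
# One JSON record per line is assumed here.
with open("reviews.jsonl") as f:
    records = [json.loads(line) for line in f]

for rec in records[:3]:
    # The list fields are parallel arrays describing one review thread;
    # a rating of -1 marks non-review entries such as author replies.
    ratings = [r for r in rec["review_ratings"] if r >= 0]
    print(rec["paper_id"], rec["paper_acceptance"], rec["label"],
          f"{len(rec['review_ids'])} thread entries, ratings={ratings}")
```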
nips_2021_ZBfUo_dr4H
DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
We introduce DROID-SLAM, a new deep learning based SLAM system. DROID-SLAM consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. DROID-SLAM is accurate, achieving large improvements over prior work, and robust, suffering from substantially fewer catastrophic failures. Despite training on monocular video, it can leverage stereo or RGB-D video to achieve improved performance at test time. The URL to our open source code is https://github.com/princeton-vl/DROID-SLAM.
accept
The paper received strong reviews and I am happy to recommend it for acceptance. I encourage the authors to make the improvements that they indicated in their rebuttal in the final version.
train
[ "7HL8eD_MzMA", "yZ1Cw6NUvR", "1QgizvDOJNZ", "cankpsRX0a2", "GqpiXL8Eqw4", "D9b7gTzdlzg", "8XhS2VuUGh", "h2yvjEUTNos" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **R-SLAM is trained on TatanAir while most of the compared methods (DeepFactors, DeepV2D, D3VO, DeepTAM) are not.** This is a good point and we will make it more clear in the paper what training data is used for each method. Of these methods (DeepFactors, DeepV2D, D3VO, DeepTAM) only DeepV2D has publicly availabl...
[ -1, -1, -1, -1, 9, 7, 9, 8 ]
[ -1, -1, -1, -1, 5, 5, 3, 5 ]
[ "h2yvjEUTNos", "8XhS2VuUGh", "D9b7gTzdlzg", "GqpiXL8Eqw4", "nips_2021_ZBfUo_dr4H", "nips_2021_ZBfUo_dr4H", "nips_2021_ZBfUo_dr4H", "nips_2021_ZBfUo_dr4H" ]
nips_2021_XjIm8hOCxTm
Few-Shot Object Detection via Association and DIscrimination
Object detection has achieved substantial progress in the last decade. However, detecting novel classes with only a few samples remains challenging, since deep learning under a low-data regime usually leads to a degraded feature space. Existing works employ a holistic fine-tuning paradigm to tackle this problem, where the model is first pre-trained on all base classes with abundant samples, and then it is used to carve the novel class feature space. Nonetheless, this paradigm is still imperfect. During fine-tuning, a novel class may implicitly leverage the knowledge of multiple base classes to construct its feature space, which induces a scattered feature space, hence violating the inter-class separability. To overcome these obstacles, we propose a two-step fine-tuning framework, Few-shot object detection via Association and DIscrimination (FADI), which builds up a discriminative feature space for each novel class with two integral steps. 1) In the association step, in contrast to implicitly leveraging multiple base classes, we construct a compact novel class feature space via explicitly imitating a specific base class feature space. Specifically, we associate each novel class with a base class according to their semantic similarity. After that, the feature space of a novel class can readily imitate the well-trained feature space of the associated base class. 2) In the discrimination step, to ensure the separability between the novel classes and associated base classes, we disentangle the classification branches for base and novel classes. To further enlarge the inter-class separability between all classes, a set-specialized margin loss is imposed. Extensive experiments on the standard Pascal VOC and MS-COCO datasets demonstrate that FADI achieves new state-of-the-art performance, significantly improving the baseline in any shot/split by up to +18.7. Notably, the advantage of FADI is most pronounced in extremely few-shot scenarios (e.g., 1- and 3-shot).
accept
The paper proposes an alternative to fine-tuning for few-shot object detection: instead of fine-tuning holistically, it associates each new class with an existing base class and then trains with a max-margin loss separating the new class from the base class. The idea is simple, and its implementation probably still leaves some performance on the table (e.g., visual similarity vs. WordNet similarity). The experimental validation is strong, in particular in terms of AP on new classes on standard benchmarks. The reviewers agree that the paper is mostly clear and convincing. The authors addressed some of the limitations of the paper in the author response, in particular in terms of comparisons with the literature. I recommend acceptance as a poster.
train
[ "zF9j-CTbNJQ", "UfnRn5dTbHq", "vJX9uwMeza", "1MLcpR5LkDz", "rZcCptEuJJS", "8HibdHLsnT_", "2jjlxj-GLUl", "30NTQEqL80", "Qb0ebxyZMVI" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a two-step fine-tuning framework for few-shot object detection, where the first stage is an association step, associating each novel class with a base class according to their semantic similarity, and the second stage is a discrimination step, which ensure the separability between the novel cla...
[ 6, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ 5, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "nips_2021_XjIm8hOCxTm", "vJX9uwMeza", "zF9j-CTbNJQ", "2jjlxj-GLUl", "Qb0ebxyZMVI", "30NTQEqL80", "nips_2021_XjIm8hOCxTm", "nips_2021_XjIm8hOCxTm", "nips_2021_XjIm8hOCxTm" ]
nips_2021__H7TNRQQeH8
Neural Dubber: Dubbing for Videos According to Scripts
Dubbing is a post-production process of re-recording actors’ dialogues, which is extensively used in filmmaking and video production. It is usually performed manually by professional voice actors who read lines with proper prosody, and in synchronization with the pre-recorded videos. In this work, we propose Neural Dubber, the first neural network model to solve a novel automatic video dubbing (AVD) task: synthesizing, from the text, human speech synchronized with the given video. Neural Dubber is a multi-modal text-to-speech (TTS) model that utilizes the lip movement in the video to control the prosody of the generated speech. Furthermore, an image-based speaker embedding (ISE) module is developed for the multi-speaker setting, which enables Neural Dubber to generate speech with a reasonable timbre according to the speaker’s face. Experiments on the chemistry lecture single-speaker dataset and the LRS2 multi-speaker dataset show that Neural Dubber can generate speech audio on par with state-of-the-art TTS models in terms of speech quality. Most importantly, both qualitative and quantitative evaluations show that Neural Dubber can control the prosody of synthesized speech by the video, and generate high-fidelity speech temporally synchronized with the video.
accept
The original submission was missing an essential comparison with Lip2Wav. This has been added in the rebuttal period, and all reviewers now recommend acceptance. The authors should include the Lip2Wav comparison and the missing references highlighted by the reviewers in the final version.
test
[ "hLqNsnXZ8F", "kwMfWk-RzpX", "mlgi7-c3Ujk", "mFMgVecuUPC", "mIb8i-2X6L", "6g7hFWbZcb5", "9kshTfM771", "Y4G3ISm3bGi", "jQJC2E3Uhp", "pZiKdp8Nfiw", "aEiWTFCHKqc", "rvdiBYRrYAf", "1w-uuEyXGNd", "iBee2GOdOEA", "hP0Z2oDYvyR", "zzs2vApHNKV", "RgorrOI46zY" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose to solve the problem of text to speech generation based on video. Thus, this is video-text guided speech generation task that they term as silent video dubbing. The method is based on a transformer architecture that combines text and visual lip motion representations as encoders ...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "nips_2021__H7TNRQQeH8", "nips_2021__H7TNRQQeH8", "hP0Z2oDYvyR", "9kshTfM771", "Y4G3ISm3bGi", "jQJC2E3Uhp", "iBee2GOdOEA", "1w-uuEyXGNd", "pZiKdp8Nfiw", "hLqNsnXZ8F", "nips_2021__H7TNRQQeH8", "hP0Z2oDYvyR", "zzs2vApHNKV", "RgorrOI46zY", "nips_2021__H7TNRQQeH8", "nips_2021__H7TNRQQeH8",...
nips_2021_Hk2oOy4GJlH
Neural Bootstrapper
Bootstrapping has been a primary tool for ensemble learning and uncertainty quantification in machine learning and statistics. However, because it requires repeated training and resampling, bootstrapping deep neural networks is computationally burdensome, which makes it difficult to apply in practice to uncertainty estimation and related tasks. To overcome this computational bottleneck, we propose a novel approach called Neural Bootstrapper (NeuBoots), which learns to generate bootstrapped neural networks through a single model training. NeuBoots injects the bootstrap weights into the high-level feature layers of the backbone network and outputs the bootstrapped predictions of the target, without additional parameters and without repetitive computations from scratch. We apply NeuBoots to various machine learning tasks related to uncertainty quantification, including prediction calibration in image classification and semantic segmentation, active learning, and detection of out-of-distribution samples. Our empirical results show that NeuBoots outperforms other bagging-based methods at a much lower computational cost without losing the validity of bootstrapping.
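The abstract's core trick, one network emulating many bootstrap replicates by conditioning on bootstrap weights, can be illustrated with a toy sketch. This is not the authors' implementation; the feature-modulation choice below is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy batch size

# Classical bootstrap reweighting: multinomial counts over the batch,
# i.e., how often each example would appear in a resampled dataset.
w = rng.multinomial(n, np.ones(n) / n).astype(float)

# Toy high-level features from a frozen backbone, plus a linear head.
feats = rng.normal(size=(n, 16))
head = rng.normal(size=16)
targets = rng.normal(size=n)

# One plausible injection (an assumption for illustration): scale the
# features by the bootstrap weights before the head and weight the loss
# the same way, so a single forward pass emulates one bootstrap replicate.
preds = (feats * w[:, None]) @ head
loss = np.mean(w * (preds - targets) ** 2)
print("bootstrap weights:", w)
print("weighted loss for this replicate:", loss)
```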
accept
Reviewers recognized the importance of providing an efficient way to bootstrap neural nets for uncertainty quantification and praised the proposed approximate scheme. During the review phase, reviewers highlighted that many experimental details and comparisons with modern neural uncertainty quantification methods were missing from the current version of the paper. The authors provided an extensive rebuttal and managed to clarify the missing details and convince reviewers on certain points. However, some questions were left open: a predictive performance comparison is still missing, as well as many references highlighted by the reviewers. This work is accepted subject to the authors incorporating the promised changes and additional experimental results.
train
[ "IdT_RvuoQFw", "5qdiGPpBq7g", "B-jJCxXW4nw", "Swc54XL8Hc8", "n9K3_os3-u", "1_PtWmwPwYb", "RXiBhlPJnmO", "O5_RYmwSgD", "GVRzt_ifEvb", "0YDdoROUEO_", "Fkbj90acLSy", "YMtL8XqtpXe", "SnMd7M8Ir0", "HFEHgtKwYPE", "a5zOzdnXap", "q88e8bOfMC9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work advances an approach for generating bootstrapped neural networks through a single training run, instead of training individual networks for each bootstrap subsets of the dataset. The method, dubbed Neural Bootstrapper (NeuBoots), can be seen as a last-layer approach for uncertainty quantification with an...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_Hk2oOy4GJlH", "Swc54XL8Hc8", "n9K3_os3-u", "O5_RYmwSgD", "1_PtWmwPwYb", "GVRzt_ifEvb", "GVRzt_ifEvb", "SnMd7M8Ir0", "0YDdoROUEO_", "YMtL8XqtpXe", "a5zOzdnXap", "IdT_RvuoQFw", "q88e8bOfMC9", "nips_2021_Hk2oOy4GJlH", "nips_2021_Hk2oOy4GJlH", "nips_2021_Hk2oOy4GJlH" ]
nips_2021_ZdyLIxqgz29
An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
We address an inherent difficulty in welfare-theoretic fair ML, by proposing an equivalently-axiomatically justified alternative setting, and studying the resulting computational and statistical learning questions. Welfare metrics quantify overall wellbeing across a population of groups, and welfare-based objectives and constraints have recently been proposed to incentivize fair ML methods to satisfy their diverse needs. However, many ML problems are cast as loss minimization tasks, rather than utility maximization, and thus require nontrivial modeling to construct utility functions. We define a complementary metric, termed malfare, measuring overall societal harm, with axiomatic justification via the standard axioms of cardinal welfare, and cast fair ML as malfare minimization over the risk values (expected losses) of each group. Surprisingly, the axioms of cardinal welfare (malfare) dictate that this is not equivalent to simply defining utility as negative loss and maximizing welfare. Building upon these concepts, we define fair-PAC learning, where a fair-PAC learner is an algorithm that learns an ε-δ malfare-optimal model, with bounded sample complexity for any data distribution and (axiomatically justified) malfare concept. We show conditions under which many standard PAC-learners may be converted to fair-PAC learners. This places fair-PAC learning on firm theoretical ground, as it yields statistical, and in some cases computational, efficiency guarantees for many well-studied machine-learning models. Fair-PAC learning is also practically relevant, as it democratizes fair ML by providing concrete training algorithms with rigorous generalization guarantees.
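For intuition, malfare aggregates per-group risks the way a weighted power mean does (the paper's axiomatic derivation makes this precise): p = 1 is the utilitarian average of risks, and larger p weighs the worst-off group more heavily, approaching the egalitarian case as p goes to infinity. A minimal sketch, assuming equal group weights for the demo:

```python
import numpy as np

def power_mean_malfare(risks, weights, p):
    """Weighted p-power mean M_p(r; w) = (sum_i w_i r_i^p)^(1/p), p >= 1.
    p = 1 is the utilitarian average of group risks; p -> inf approaches
    the risk of the worst-off group (egalitarian)."""
    risks = np.asarray(risks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if np.isinf(p):
        return risks.max()
    return float((weights @ risks**p) ** (1.0 / p))

group_risks = [0.10, 0.25, 0.40]   # expected loss of each group
w = np.ones(3) / 3                 # equal group weights (demo assumption)
for p in (1, 2, 5, np.inf):
    print(f"p={p}: malfare = {power_mean_malfare(group_risks, w, p):.3f}")
```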
accept
The reviewers’ overall assessment of the paper was that (a) it proposes an interesting, relatively well-justified formulation of fairness through the concept of malfare minimization, and (b) the ensuing learnability analysis is non-trivial and the proposed algorithms can be practically informative. Reviewers made several suggestions to the authors to improve the flow and exposition (in particular, it is important that the authors expand their discussion of the proposed axioms, especially axiom 6, in the main body of the paper). Assuming that the authors will reflect those changes in their next revision of the paper, I recommend acceptance.
train
[ "yakczZpZ4OF", "jUvQlwB8Ddt", "h3HXi7EW5Y", "N7DL_SEF4Vo", "JPru9oZyUMX", "MWzRxfMOAHq", "Pui34HrDrX", "7F5_tGzSKlc", "c38FV2rG4Ky", "yinhutfk-WM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " (1) I believe a short discussion about this should appear in the main text of the paper.\n(2) In the second question, when asking about ERM oracle, I am referring to the commonly used abstraction of “oracle-efficiency”, which in practice is translated to assuming access to some successful heuristic for efficientl...
[ -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 2 ]
[ "MWzRxfMOAHq", "h3HXi7EW5Y", "c38FV2rG4Ky", "yinhutfk-WM", "7F5_tGzSKlc", "Pui34HrDrX", "nips_2021_ZdyLIxqgz29", "nips_2021_ZdyLIxqgz29", "nips_2021_ZdyLIxqgz29", "nips_2021_ZdyLIxqgz29" ]
nips_2021_JCodE7xOcc5
HSVA: Hierarchical Semantic-Visual Adaptation for Zero-Shot Learning
Zero-shot learning (ZSL) tackles the unseen class recognition problem, transferring semantic knowledge from seen classes to unseen ones. Typically, to guarantee desirable knowledge transfer, a common (latent) space is adopted for associating the visual and semantic domains in ZSL. However, existing common space learning methods align the semantic and visual domains by merely mitigating distribution disagreement through one-step adaptation. This strategy is usually ineffective due to the heterogeneous nature of the feature representations in the two domains, which intrinsically contain both distribution and structure variations. To address this and advance ZSL, we propose a novel hierarchical semantic-visual adaptation (HSVA) framework. Specifically, HSVA aligns the semantic and visual domains by adopting a hierarchical two-step adaptation, i.e., structure adaptation and distribution adaptation. In the structure adaptation step, we take two task-specific encoders to encode the source data (visual domain) and the target data (semantic domain) into a structure-aligned common space. To this end, a supervised adversarial discrepancy (SAD) module is proposed to adversarially minimize the discrepancy between the predictions of two task-specific classifiers, thus making the visual and semantic feature manifolds more closely aligned. In the distribution adaptation step, we directly minimize the Wasserstein distance between the latent multivariate Gaussian distributions to align the visual and semantic distributions using a common encoder. Finally, the structure and distribution adaptation are derived in a unified framework under two partially-aligned variational autoencoders. Extensive experiments on four benchmark datasets demonstrate that HSVA achieves superior performance on both conventional and generalized ZSL. The code is available at https://github.com/shiming-chen/HSVA.
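The distribution-adaptation step relies on the fact that the 2-Wasserstein distance between two Gaussians has a closed form. A minimal sketch of that formula, with toy means and covariances standing in for the two latent Gaussians:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gaussians(mu1, cov1, mu2, cov2):
    """Closed-form squared 2-Wasserstein distance between Gaussians:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})."""
    c2_half = sqrtm(cov2)
    cross = sqrtm(c2_half @ cov1 @ c2_half)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2) - 2 * np.trace(cross).real)

mu_v, cov_v = np.zeros(3), np.eye(3)             # "visual" latent Gaussian
mu_s, cov_s = 0.5 * np.ones(3), 2.0 * np.eye(3)  # "semantic" latent Gaussian
print("W2^2 =", w2_squared_gaussians(mu_v, cov_v, mu_s, cov_s))
```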
accept
This paper presents a two-stage adaptation approach, i.e., structure adaptation (SA) and distribution adaptation (DA), for closing the gap between the visual (image) feature space and the semantic (e.g., attributes) feature space, resulting in a common space that not only captures the manifold structure but also induces the discriminative property. With the learned common space, (G)ZSL can be easily fulfilled, and very promising experimental results on four ZSL benchmarks are achieved. This work is motivated by and built on the foundation of CADA-VAE [11]. Through the authors’ rebuttal as well as the insightful discussion among the reviewers, the AC thinks that the improvement over [11] brought by the proposed HSVA is sufficient in terms of both methodology and performance. The authors proffered a good rebuttal, where more ablation studies are shown to demonstrate the effectiveness of HSVA. Taking together all the detailed comments, the AC votes for accepting the paper and believes that it can advance the research of ZSL. In the camera-ready version, the authors are encouraged to: 1) strengthen the writing, 2) supplement the key ablation results in the main paper, and 3) highlight the differences from existing works and clarify the necessity of the overall objective in Eq. (13).
train
[ "kp8ZHk0I_fV", "8LtSelpv8X0", "JP19tFw7GgL", "SeuoslXgguc", "Xtn2eb2ihhe", "3OdVsK1bvl", "ntlFQaSxMGR", "8vS7m1Clj86", "kpi09axj5TC", "ciCP1Pdsl52" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes an hierarchical semantic-visual adaptation framework to migrate the distribution and structure disagreement in generative zero-shot learning (ZSL) models. This network adopts hierarchical two-step adaptation. The task-specific adaptation is complemented by training two task-specific encoders wi...
[ 7, -1, -1, 6, 7, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, 4, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_JCodE7xOcc5", "kp8ZHk0I_fV", "ciCP1Pdsl52", "nips_2021_JCodE7xOcc5", "nips_2021_JCodE7xOcc5", "kp8ZHk0I_fV", "SeuoslXgguc", "Xtn2eb2ihhe", "ciCP1Pdsl52", "nips_2021_JCodE7xOcc5" ]
nips_2021_CtugaUzfYw
Higher Order Kernel Mean Embeddings to Capture Filtrations of Stochastic Processes
Stochastic processes are random variables with values in some space of paths. However, reducing a stochastic process to a path-valued random variable ignores its filtration, i.e., the flow of information carried by the process through time. By conditioning the process on its filtration, we introduce a family of higher order kernel mean embeddings (KMEs) that generalizes the notion of KME to capture additional information related to the filtration. We derive empirical estimators for the associated higher order maximum mean discrepancies (MMDs) and prove consistency. We then construct a filtration-sensitive kernel two-sample test able to capture information that gets missed by the standard MMD test. In addition, leveraging our higher order MMDs, we construct a family of universal kernels on stochastic processes that allows us to solve real-world calibration and optimal stopping problems in quantitative finance (such as the pricing of American options) via classical kernel-based regression methods. Finally, adapting existing tests for conditional independence to the case of stochastic processes, we design a causal-discovery algorithm to recover the causal graph of structural dependencies among interacting bodies solely from observations of their multidimensional trajectories.
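As background for the higher-order construction, here is the classical unbiased estimator of squared MMD with an RBF kernel, i.e., the standard first-order quantity the paper generalizes, not the filtration-sensitive variant itself:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimator of MMD^2 between samples x ~ P and y ~ Q."""
    m, n = len(x), len(y)
    kxx, kyy = rbf(x, x, gamma), rbf(y, y, gamma)
    # Diagonal (self-similarity) terms are dropped for unbiasedness.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * rbf(x, y, gamma).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))
print("MMD^2 estimate:", mmd2_unbiased(x, y))
```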
accept
Our opinion is that this paper should be accepted, though there are some weaknesses: the experiment section could easily have used real-world data from quantitative finance instead of draws from a fractional BM; comparisons to CI tests from the causal inference area are missing; and readability could be improved. On a more detailed level, it would be good if the authors cited a disintegration theorem for their family of measures P_{X|F_{X_t}}.
train
[ "OSz1f904LT0", "QQ9nHBXbfBy", "RPcKAcF2ly", "lN2XElArek", "rpci79Igotw", "0sIW_SHBSq5", "xiutiSlHvC", "g9fT3Eldycd", "WPLuY17GhLB", "CwfUcMRKAwF", "VpAdb8bz4z", "lcSk6LpWcW", "hca9XsWjB1Q", "zjcJIEtjybt", "TaAGeYfEY1", "oquim5Bxe8-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I suggest adding the pointer to the analytical formula of the toy example in [2]. \n\nI will keep my original score.", "The paper proposes a generalized approach to kernel mean embeddings – higher-order mean embeddings -- and new two-sample test based on the newly proposed kernel wh...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "hca9XsWjB1Q", "nips_2021_CtugaUzfYw", "xiutiSlHvC", "lcSk6LpWcW", "0sIW_SHBSq5", "g9fT3Eldycd", "WPLuY17GhLB", "CwfUcMRKAwF", "VpAdb8bz4z", "zjcJIEtjybt", "QQ9nHBXbfBy", "oquim5Bxe8-", "TaAGeYfEY1", "nips_2021_CtugaUzfYw", "nips_2021_CtugaUzfYw", "nips_2021_CtugaUzfYw" ]
nips_2021_Xp5BhDKdil5
Low-Rank Subspaces in GANs
The latent space of a Generative Adversarial Network (GAN) has been shown to encode rich semantics within some subspaces. To identify these subspaces, researchers typically analyze the statistical information from a collection of synthesized data, and the identified subspaces tend to control image attributes globally (i.e., manipulating an attribute causes the change of an entire image). By contrast, this work introduces low-rank subspaces that enable more precise control of GAN generation. Concretely, given an arbitrary image and a region of interest (e.g., eyes of face images), we manage to relate the latent space to the image region with the Jacobian matrix and then use low-rank factorization to discover steerable latent subspaces. Our approach, aptly called LowRankGAN, has three distinguishable strengths. First, compared to analytic algorithms in prior work, our low-rank factorization of Jacobians is able to find the low-dimensional representation of the attribute manifold, making image editing more precise and controllable. Second, low-rank factorization naturally yields a null space of attributes such that moving the latent code within it only affects the outer region of interest. Therefore, local image editing can be simply achieved by projecting an attribute vector into the null space without relying on a spatial mask as existing methods do. Third, our method can robustly work with a local region from one image for analysis yet generalize well to other images, making it much easier to use in practice. Extensive experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
accept
The paper uses the Jacobian of the generator and robust PCA (on the covariance of the Jacobian) to find directions for editing a region A that keep a region B in an image intact, essentially by projecting the robust-PCA eigenvectors of region A onto the null space of robust PCA in region B. The idea is intuitive, makes sense, and works in practice. After reading the reviews and the paper, I have a few suggestions to clarify the paper. Although the reviewers said the paper is easy to follow, and I am familiar with work on this topic, I found the description of the method to be quite unclear and imprecise. Please include a sketch of an algorithm in the paper that makes your method clear to the reader, and please also make your code available, since one cannot reproduce this paper in its current version. Is your method (I): 1. Start by finding $z_A$ and $z_B$ to produce $x_A, x_B$, and compute robust PCA of $J^{\top}J$ at $z_A$ and $z_B$ to get $V_A, V_B$; 2. Project $V_A$ onto the null space of $V_B$ to get $P_A$; 3. Compute $G(z_A + \alpha P_{A,j})$ and merge with $x_B$ to produce new images. Or (II): 1. Find $z$ to reproduce the image, then compute robust PCA on $J G_A(z)^{\top} J G_A(z)$, restricting the covariance to pixels in region A, and the same for region B; 2. Continue with steps 2 and 3 from above using the latent $z$. I am left with a question for the authors that you should clarify in the final manuscript (I may have overlooked it in the appendix): does the method work because of the projection onto the null space of region B, or because of the use of robust PCA? What if you replaced robust PCA in your method with plain PCA? I think this ablation should be included in the paper. I recommend a weak accept, but I ask the authors to include a clear sketch of the procedure of how their method is applied, open-source their code, and run the ablation of robust PCA versus PCA.
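The projection in step 2 of the sketch above is plain linear algebra. The toy illustration below uses random placeholder Jacobians (the real method differentiates the generator over the pixels of each region) and a plain eigendecomposition in place of the paper's robust low-rank factorization, which is exactly the PCA-vs-robust-PCA ablation requested above:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 64

# Placeholder Jacobians of region-A / region-B pixels w.r.t. the latent code;
# in the method itself these come from differentiating the generator G.
J_A = rng.normal(size=(500, latent_dim))
J_B = rng.normal(size=(500, latent_dim))

# Principal directions of region A from J^T J (plain eigendecomposition here,
# not the paper's robust factorization). eigh returns ascending eigenvalues.
_, V_A = np.linalg.eigh(J_A.T @ J_A)
v = V_A[:, -1]                       # top attribute direction for region A

# "Null space" of region B: directions B's pixels are least sensitive to.
# (A random dense placeholder has no exact null space, so the bottom
# eigenvectors serve as an approximation.)
_, V_B = np.linalg.eigh(J_B.T @ J_B)
null_B = V_B[:, :16]                 # 16 least-sensitive directions

# Project the attribute direction into B's null space, so moving the latent
# code along it should (to first order) leave region B intact.
p = null_B @ (null_B.T @ v)
print("fraction of the direction kept:",
      np.linalg.norm(p) / np.linalg.norm(v))
```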
train
[ "9oTwMSc2b7", "_Zp0SnQg-OX", "aMeWcFxlmjk", "77qqa7p4NTL", "hABgWlTIPG6", "P1YJwn9i0Y6", "TBZ5aeTU4Fv", "F4e6c27r0xH", "TMXy4uI-dh", "IMfDOId0DO2", "FSG26Yt7WZt", "Kv75zfQBXe_", "fPKmfyjruw" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose an approach to identify meaningful latent directions in a GAN latent space. First, they factorize the Jacobian of a generator map $G: z \\to \\mathrm{RGB}$, treating its principal axis as the desired directions. Then they fine-tune them in a supervised regime to guarantee only the regions of th...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_Xp5BhDKdil5", "hABgWlTIPG6", "77qqa7p4NTL", "IMfDOId0DO2", "F4e6c27r0xH", "TBZ5aeTU4Fv", "TMXy4uI-dh", "Kv75zfQBXe_", "9oTwMSc2b7", "fPKmfyjruw", "nips_2021_Xp5BhDKdil5", "nips_2021_Xp5BhDKdil5", "nips_2021_Xp5BhDKdil5" ]
nips_2021_4h4oqp-ATxb
Neural Symplectic Form: Learning Hamiltonian Equations on General Coordinate Systems
In recent years, substantial research has been conducted on methods for learning Hamiltonian equations. Although these approaches are very promising, the commonly used representation of the Hamiltonian equations uses the generalized momenta, which are generally unknown. Therefore, the training data must be represented in this unknown coordinate system, and this causes difficulty in applying the model to real data. Meanwhile, Hamiltonian equations also have a coordinate-free expression that is written using the symplectic 2-form. In this study, we propose a model that learns the symplectic form from data using neural networks, thereby providing a method for learning Hamiltonian equations from data represented in general coordinate systems, which are not limited to the generalized coordinates and the generalized momenta. Consequently, the proposed method is capable not only of modeling target equations of both Hamiltonian and Lagrangian formalisms but also of extracting unknown Hamiltonian structures hidden in the data. For example, many polynomial ordinary differential equations, such as the Lotka-Volterra equation, are known to admit non-trivial Hamiltonian structures, and our numerical experiments show that such structures can indeed be learned from data. Technically, each symplectic 2-form is associated with a skew-symmetric matrix, but not all skew-symmetric matrices define a symplectic 2-form. In the proposed method, using the fact that symplectic 2-forms are derived as the exterior derivative of certain differential 1-forms, we model the differential 1-form by neural networks, thereby improving the efficiency of learning.
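The parameterization described at the end of the abstract (model a 1-form with a network so the 2-form obtained by exterior differentiation is automatically skew-symmetric) can be sketched with autograd. The networks and sign conventions below are illustrative stand-ins, not the paper's architecture:

```python
import torch

torch.manual_seed(0)
n = 4  # state dimension; must be even for a nondegenerate 2-form

# Stand-ins for the paper's networks: a learned 1-form theta: R^n -> R^n
# and a Hamiltonian H: R^n -> R.
theta = torch.nn.Linear(n, n)
H = torch.nn.Sequential(torch.nn.Linear(n, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))

def symplectic_matrix(x):
    # Coordinates of d(theta): W_ij = d theta_j / d x_i - d theta_i / d x_j.
    # Skew-symmetry and closedness hold by construction, which is the point
    # of parameterizing the 2-form through a 1-form.
    J = torch.autograd.functional.jacobian(theta, x)  # J[i,j] = d theta_i / d x_j
    return J.T - J

def vector_field(x):
    # Coordinate-free Hamilton equations: W(x) dx/dt = grad H(x)
    # (sign conventions vary; this is one illustrative choice).
    W = symplectic_matrix(x)          # generically invertible for even n
    xg = x.clone().requires_grad_(True)
    (grad_H,) = torch.autograd.grad(H(xg).sum(), xg)
    return torch.linalg.solve(W, grad_H)

x0 = torch.randn(n)
print("dx/dt at x0:", vector_field(x0))
```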
accept
All ratings were "accept". The reviewers raised various suggestions which the authors responded to thoroughly, with additional experiments that reinforced their results. This should be a solid contribution.
test
[ "yPvPPOmTj88", "OGD-f1oVaVN", "soI1bXJGoun", "AnPwM-mppM", "NC6VKv75td", "oVN0LGFntfe", "ko7-Ex8e7tK", "k2z-sPmuJtv", "9tXApfiAtlV", "OICji5uoSA", "LTPbDMFfACH", "-ISPqmdSUEa", "eOWGFftM3i", "Ym71zeraQzS", "tHvqypPhTc8", "stuIMAiHWE", "sJg_DKvMT7", "l0m7iKuybJz", "upu1aIP20BX", ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Many thanks for your kind review and suggestions, which are really helpful for improving the paper. We will add the suggested investigation in the final version. Thank you very much indeed again.", "Authors present a ML-based approach to learning dynamics in the space of Hamiltonian systems. The approach is bas...
[ -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "soI1bXJGoun", "nips_2021_4h4oqp-ATxb", "AnPwM-mppM", "NC6VKv75td", "oVN0LGFntfe", "-ISPqmdSUEa", "9tXApfiAtlV", "nips_2021_4h4oqp-ATxb", "OICji5uoSA", "LTPbDMFfACH", "sJg_DKvMT7", "stuIMAiHWE", "tHvqypPhTc8", "nips_2021_4h4oqp-ATxb", "l0m7iKuybJz", "OGD-f1oVaVN", "k2z-sPmuJtv", "Y...
nips_2021_2e_VWzcU4j7
Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting
Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei
accept
The paper studies the RL problem under Q* linear realizability assumptions. This setting has been extensively studied in a stream of recent papers detailing impossibility results as well as positive results depending on whether access to a generative model and large suboptimality gaps are available or not. The authors make a non-trivial contribution in showing a positive result (i.e., sublinear regret) in a novel online setting where the agent is allowed to revisit previously traversed states. This setting is somewhat in between the full online and generative model scenarios, and it is motivated by problems where the agent is not allowed to pick an arbitrary state in advance but can record states and reset the system to states that have been visited in the past (e.g., games). While the scope of this setting could be debatable, it is overall reasonable and it sheds more light on the intricate limits of learnability in the Q* linear realizable case. Also, the theoretical and algorithmic contributions are non-trivial. Overall, I believe the paper is likely to encourage further discussions in the community and inspire further research on the topic. From the rebuttal and the extensive discussion with reviewers, there are a number of elements that the authors should integrate in the camera-ready version: - The current main theorem is somewhat difficult to parse as it "mixes" sample complexity and regret. I suggest either explaining these two contributions better or first summarizing the overall result as a sublinear regret bound and then explaining how it is obtained. - As pointed out in the discussion, the final regret depends on 1/Delta^2. This suggests that the worst-case bound may be of order O(K^{2/3}). While the focus is actually to prove "learnability" (i.e., sublinear regret) and this may be overall unavoidable, I think it's worth pointing this out explicitly as it is a clear avenue where people may focus to improve the current result. - Acknowledge explicitly other limitations in the current version (e.g., knowledge of the gap). Again, this is not the first time this assumption is made, but it is worth making it very clear so as to encourage further research in this direction. - Be sure to properly contrast your setting and results with Weisz et al. 2021a. - The reviewers pointed out a few things that may need further clarification.
train
[ "BH1mGW46vDb", "3Lat5Drgcwz", "Wj0JEkUEa8P", "7STT9wjTvr", "F0RlsvqfzNV", "jpE5QUe0CjB", "a5PcUKV4Kwt", "bKgSKGH7O-m", "E4X84oP01l" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Disclaimer: I am not an expert in theoretical RL. My review is thus restricted to the overall soundness of the manuscript.\n\nThe authors describe theoretical work proposing a new sampling strategy for the online linear Q* problem making it sample efficient.\nThey modify and extend the LSVI-UCB algorithms for line...
[ 7, 7, -1, -1, -1, -1, -1, 6, 6 ]
[ 2, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_2e_VWzcU4j7", "nips_2021_2e_VWzcU4j7", "jpE5QUe0CjB", "E4X84oP01l", "bKgSKGH7O-m", "3Lat5Drgcwz", "BH1mGW46vDb", "nips_2021_2e_VWzcU4j7", "nips_2021_2e_VWzcU4j7" ]
nips_2021_8Uui49rOfc
Self-Paced Contrastive Learning for Semi-supervised Medical Image Segmentation with Meta-labels
The contrastive pre-training of a recognition model on a large dataset of unlabeled data often boosts the model’s performance on downstream tasks like image classification. However, in domains such as medical imaging, collecting unlabeled data can be challenging and expensive. In this work, we consider the task of medical image segmentation and adapt contrastive learning with meta-label annotations to scenarios where no additional unlabeled data is available. Meta-labels, such as the location of a 2D slice in a 3D MRI scan, often come for free during the acquisition process. We use these meta-labels to pre-train the image encoder, as well as in a semi-supervised learning step that leverages a reduced set of annotated data. A self-paced learning strategy exploiting the weak annotations is proposed to further help the learning process and discriminate useful labels from noise. Results on five medical image segmentation datasets show that our approach: i) highly boosts the performance of a model trained on a few scans, ii) outperforms previous contrastive and semi-supervised approaches, and iii) reaches close to the performance of a model trained on the full data.
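One plausible reading of "contrastive learning with meta-labels" is a supervised-contrastive-style loss in which two samples are positives when they share a meta-label (e.g., the same slice-position bin). A hedged numpy sketch of that idea, not the paper's exact loss:

```python
import numpy as np

def metalabel_contrastive_loss(z, meta, tau=0.1):
    """Supervised-contrastive-style loss where samples are positives when
    they share a meta-label. z: (n, d) L2-normalized embeddings;
    meta: (n,) integer meta-labels (e.g., slice-position bins)."""
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (meta[:, None] == meta[None, :]) & ~np.eye(len(z), dtype=bool)
    # Average log-probability over each anchor's positives.
    per_anchor = (np.where(pos, log_prob, 0.0).sum(axis=1)
                  / np.maximum(pos.sum(axis=1), 1))
    return -per_anchor.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
z /= np.linalg.norm(z, axis=1, keepdims=True)
meta = rng.integers(0, 4, size=16)                 # 4 hypothetical bins
print("loss:", metalabel_contrastive_loss(z, meta))
```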
accept
The reviewers broadly agreed that the overall motivation for the work (reducing the dependency on labels in medical image segmentation) and the novelty and thorough evaluation of the proposed method are of general interest to the community. There were a few minor clarity points brought up that should be addressed in the revised version, and the authors should also include the additional experimental results from the discussion.
train
[ "ylhQOGkdfUO", "i3B3FoWWDUa", "qDiEL13x24", "MNkRYYUx3q", "1oyucy20C1P", "fV3Na_8czDR", "uOT2mw3-bGa", "yncmLf0j8rQ", "gzEWT53OUuU", "ijZsd7Z4BYU", "aEYWItERbXo", "wuGy_Xwu6A", "OT0FE1L7ru" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We kindly remind the reviewer that today is the last day to post feedback and modify the score. We hope our answers have addressed all remaining concerns", " Thank you for your time and insightful feedback!\n\nWe will incorporate the changes into the revised paper, and address the issues in detail here:\n\n_Wea...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "aEYWItERbXo", "aEYWItERbXo", "aEYWItERbXo", "gzEWT53OUuU", "nips_2021_8Uui49rOfc", "ijZsd7Z4BYU", "yncmLf0j8rQ", "OT0FE1L7ru", "wuGy_Xwu6A", "1oyucy20C1P", "nips_2021_8Uui49rOfc", "nips_2021_8Uui49rOfc", "nips_2021_8Uui49rOfc" ]
nips_2021_od-00q5T2vB
Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
Recurrent neural networks (RNNs) are powerful models for processing time-series data, but it remains challenging to understand how they function. Improving this understanding is of substantial interest to both the machine learning and neuroscience communities. The framework of reverse engineering a trained RNN by linearizing around its fixed points has provided insight, but the approach has significant challenges. These include difficulty choosing which fixed point to expand around when studying RNN dynamics and error accumulation when reconstructing the nonlinear dynamics with the linearized dynamics. We present a new model that overcomes these limitations by co-training an RNN with a novel switching linear dynamical system (SLDS) formulation. A first-order Taylor series expansion of the co-trained RNN and an auxiliary function trained to pick out the RNN's fixed points govern the SLDS dynamics. The results are a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state-space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, if possible. This model removes the post-training fixed point optimization and allows us to unambiguously study the learned dynamics of the SLDS at any point in state-space. It also generalizes SLDS models to continuous manifolds of switching points while sharing parameters across switches. We validate the utility of the model on two synthetic tasks relevant to previous work reverse engineering RNNs. We then show that our model can be used as a drop-in in more complex architectures, such as LFADS, and apply this LFADS hybrid to analyze single-trial spiking activity from the motor system of a non-human primate.
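The classical recipe the paper builds on (find points where the dynamics are nearly at a fixed point, then study the first-order Taylor expansion there) can be sketched directly. The toy dynamics below stand in for a trained RNN; the paper's contribution is co-training and a continuous manifold of such expansions:

```python
import torch

torch.manual_seed(0)
n = 8
F = torch.nn.Sequential(torch.nn.Linear(n, n), torch.nn.Tanh())  # toy dynamics

# Classical step 1: find an approximate fixed point x* (F(x*) ~ x*)
# by minimizing the "speed" ||F(x) - x||^2.
x = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    speed = ((F(x) - x) ** 2).sum()
    speed.backward()
    opt.step()
x_star = x.detach()

# Classical step 2: the first-order Taylor expansion around x* gives a local
# linear system -- the building block the paper turns into a continuously
# switching LDS.
J = torch.autograd.functional.jacobian(F, x_star)
def linearized(xt):
    return x_star + J @ (xt - x_star)

xt = x_star + 0.1 * torch.randn(n)
print("nonlinear step :", F(xt).detach()[:3])
print("linearized step:", linearized(xt)[:3])
```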
accept
All reviewers agree that the paper proposes an interesting approach to the problem of improving our understanding of RNNs. Although some reviewers had technical concerns in their first reviews, these have basically been resolved by the authors' responses. Thus, although there are some points that should be revised from the current form, I expect the authors to modify the paper for the camera-ready version by reflecting the discussion. Based on this, I recommend acceptance (poster) for this paper.
train
[ "ZERhDGbxq5", "gYuAjUhUbOo", "p4DQ3OOh2tX", "7eqiXBNiL5", "CNlAKjF6QYv", "BTiWVlyjb", "UGTUu_Wel2h", "SQGYBXS3ow5", "sr_9TlfaYUc", "qRJhh_SHqLC", "SwL5tRusCRE", "3esUu9ZgHqg", "LL0Wy1Heqrh", "Yxh0Oi9_uK0", "mVQMUz7apib", "zFNCNpFYV3V", "oFkHTeF6FO", "DsYH2D46oJT", "5dlNtAw-6kF", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thank you for your thoughtful reply.", " I thank the authors again for all their clarifications. I'm leaning more toward acceptance now (score updated). I hope I will really find all the promised changes in the final version!", "The central idea of this paper is to co-train a switching linear dynamical system...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "mVQMUz7apib", "7eqiXBNiL5", "nips_2021_od-00q5T2vB", "CNlAKjF6QYv", "BTiWVlyjb", "UGTUu_Wel2h", "0r5P5F2MhWj", "sr_9TlfaYUc", "qRJhh_SHqLC", "oFkHTeF6FO", "3esUu9ZgHqg", "5dlNtAw-6kF", "zFNCNpFYV3V", "y0C09m2jL07", "Pcd-qLqTHr", "p4DQ3OOh2tX", "64LcI3zOj4e", "nips_2021_od-00q5T2vB...
nips_2021_xkQ4MhLv52X
Learning-Augmented Dynamic Power Management with Multiple States via New Ski Rental Bounds
We study the online problem of minimizing power consumption in systems with multiple power-saving states. During idle periods of unknown lengths, an algorithm has to choose between power-saving states of different energy consumption and wake-up costs. We develop a learning-augmented online algorithm that makes decisions based on (potentially inaccurate) predicted lengths of the idle periods. The algorithm's performance is near-optimal when predictions are accurate and degrades gracefully with increasing prediction error, with a worst-case guarantee almost identical to the optimal classical online algorithm for the problem. A key ingredient in our approach is a new algorithm for the online ski-rental problem in the learning augmented setting with tight dependence on the prediction error. We support our theoretical findings with experiments.
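For context, the earlier learning-augmented ski-rental algorithm of Kumar, Purohit, and Svitkina makes the consistency/robustness trade-off concrete; the paper's new ski-rental bound tightens the dependence on prediction error. A minimal sketch of the classic version, shown only as background:

```python
import math

def ski_rental_with_prediction(b, pred, lam, true_days):
    """Kumar-Purohit-Svitkina-style learning-augmented ski rental:
    renting costs 1/day, buying costs b; pred is the predicted season
    length. lam in (0, 1] trades consistency (1 + lam) against
    robustness (1 + 1/lam)."""
    buy_day = math.ceil(lam * b) if pred >= b else math.ceil(b / lam)
    if true_days >= buy_day:
        return (buy_day - 1) + b        # rent until buy_day, then buy
    return true_days                    # rented every day

b = 10
for pred, true_days in [(20, 20), (20, 3), (2, 20)]:
    cost = ski_rental_with_prediction(b, pred, lam=0.5, true_days=true_days)
    opt = min(true_days, b)
    print(f"pred={pred}, truth={true_days}: ALG={cost}, OPT={opt}")
```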
accept
This paper was borderline. The reviewers appreciated the contributions of this paper to the online algorithms literature, but some reviewers were skeptical about the learning aspect (fit for neurips vs theory conferences) and the application to power management. After discussions, I believe that papers in the area of algorithms with ML predictions are a good fit for neurips (in addition, there have been multiple such papers recently accepted to neurips and icml) and am in favor of acceptance.
train
[ "b-86kN1QfhR", "X-oMHdg2TU0", "qB4cdDAS3Yf", "F2ak2NldGgV", "NNqsqWKYmP7", "SJrmtQJEPPF", "QvehUkTcTtq", "2fMNOk1GMAN" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper is motivated by the challenge of dynamic power management in environments such as data centers. Computational devices like CPUs can transition to a sleep mode, for the purpose of preserving power. However, \"waking up\" entails consuming power, and so there is a tradeoff between entering a power-saving ...
[ 5, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_xkQ4MhLv52X", "b-86kN1QfhR", "2fMNOk1GMAN", "QvehUkTcTtq", "SJrmtQJEPPF", "nips_2021_xkQ4MhLv52X", "nips_2021_xkQ4MhLv52X", "nips_2021_xkQ4MhLv52X" ]
nips_2021_syu7m80S_CA
Learning Equivariant Energy Based Models with Equivariant Stein Variational Gradient Descent
We focus on the problem of efficient sampling and learning of probability densities by incorporating symmetries in probabilistic models. We first introduce Equivariant Stein Variational Gradient Descent algorithm -- an equivariant sampling method based on Stein's identity for sampling from densities with symmetries. Equivariant SVGD explicitly incorporates symmetry information in a density through equivariant kernels which makes the resultant sampler efficient both in terms of sample complexity and the quality of generated samples. Subsequently, we define equivariant energy based models to model invariant densities that are learned using contrastive divergence. By utilizing our equivariant SVGD for training equivariant EBMs, we propose new ways of improving and scaling up training of energy based models. We apply these equivariant energy models for modelling joint densities in regression and classification tasks for image datasets, many-body particle systems and molecular structure generation.
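The paper's sampler modifies standard SVGD by making the kernel equivariant; for reference, one plain (non-equivariant) RBF-kernel SVGD update looks like this:

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, step=0.5):
    """One plain SVGD update with an RBF kernel (the paper replaces this
    kernel with an equivariant one). x: (n, d) particles;
    grad_logp maps (n, d) -> (n, d)."""
    diff = x[:, None, :] - x[None, :, :]            # diff[i, j] = x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h))      # k[i, j] = k(x_j, x_i)
    # Attractive term k @ grad_logp plus the repulsive "spread" term
    # sum_j grad_{x_j} k(x_j, x_i) = sum_j (x_i - x_j) k[i, j] / h.
    phi = (k @ grad_logp(x) + (diff * k[..., None]).sum(1) / h) / len(x)
    return x + step * phi

# Target: standard 2D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=(50, 2))
for _ in range(1000):
    x = svgd_step(x, lambda z: -z)
print("particle mean (should be near 0):", x.mean(0))
print("particle variance (should be near 1):", x.var(0))
```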
accept
The paper proposes an equivariant version of the Stein variational gradient descent algorithm for sampling from densities with symmetries. Using this sampling algorithm, the authors developed equivariant versions of energy-based models for learning invariant densities. The reviewers all agree that this is an interesting paper with nice contributions. I recommend acceptance as poster.
val
[ "GgwcKrYKl34", "9sOLr8zhXae", "apffxtSrfP", "6axBAbv_SsG", "HynkUiMdLe-", "8Popax_Beb", "16sjpeBhCic", "wYvenD_vlnn", "5SONLrcST62", "Mkq3IVG9Cz", "0Q4vfzFKcBZ", "GNRIbAVuotS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the clarification that cleared my concern over experiments. \nI have raised my score. ", "The paper proposes an equivariant version of the Stein Variational Gradient Descent (SVGD) Algorithm for sampling from densities with symmetries. Using this sampling algorithm, the authors develop...
[ -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 2, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "wYvenD_vlnn", "nips_2021_syu7m80S_CA", "nips_2021_syu7m80S_CA", "8Popax_Beb", "5SONLrcST62", "16sjpeBhCic", "apffxtSrfP", "9sOLr8zhXae", "GNRIbAVuotS", "0Q4vfzFKcBZ", "nips_2021_syu7m80S_CA", "nips_2021_syu7m80S_CA" ]
nips_2021_syIj5ggwCYJ
Information Directed Sampling for Sparse Linear Bandits
Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure. In this work we explore the use of information-directed sampling (IDS), which naturally balances the information-regret trade-off. We develop a class of information-theoretic Bayesian regret bounds that nearly match existing lower bounds on a variety of problem instances, demonstrating the adaptivity of IDS. To efficiently implement sparse IDS, we propose an empirical Bayesian approach for sparse posterior sampling using a spike-and-slab Gaussian-Laplace prior. Numerical results demonstrate significant regret reductions by sparse IDS relative to several baselines.
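IDS picks the action distribution minimizing the information ratio, (expected instantaneous regret)^2 / (expected information gain). A classical fact due to Russo and Van Roy is that the minimizer mixes at most two actions, which makes the optimization easy to sketch with toy per-action quantities (in the algorithm proper these come from the posterior):

```python
import numpy as np

def ids_distribution(deltas, infos):
    """Minimize (expected regret)^2 / (expected info gain) over action
    distributions; the minimizer mixes at most two actions, so a scan
    over pairs and mixing weights suffices."""
    best_ratio, best = np.inf, None
    q = np.linspace(0.0, 1.0, 201)
    for i in range(len(deltas)):
        for j in range(len(deltas)):
            d = q * deltas[i] + (1 - q) * deltas[j]
            g = q * infos[i] + (1 - q) * infos[j]
            ratio = d ** 2 / np.maximum(g, 1e-12)
            k = int(ratio.argmin())
            if ratio[k] < best_ratio:
                best_ratio, best = ratio[k], (i, j, q[k])
    return best_ratio, best

# Toy numbers: action 0 is nearly greedy, action 2 is very informative.
print(ids_distribution(np.array([0.05, 0.3, 0.5]),
                       np.array([0.01, 0.2, 0.9])))
```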
accept
This paper studies sparse linear bandits in the Bayesian setting. Theoretically, it shows that information-directed sampling (IDS) has regret guarantees that adapt to (1) small/large action sets; (2) explorative action sets. Empirically, it gives a practical approximate implementation of IDS based on an empirical Bayesian approach. All reviewers are in consensus that this paper makes a solid contribution to a problem of good interest to the NeurIPS community. In the final version, please incorporate the reviewers' suggestion and re-run the experiments if necessary.
train
[ "cJ3p337uzz_", "wNxS-9xrI2", "Hv8mKmiTrJU", "_5WY1ODZQgc", "Oyzm_jbqzu3", "hzzPy6J1ebm", "bOPAVlY0s7", "Px_YsyjcYng", "J2Aa2mcQUgq", "ZeEI3vzxcjQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " After reviewing the author's responses to my and other reviewers' comments, I raise my score. I think this is a good paper.", "The paper studies a high-dimensional linear bandit problem with sparse structure. The authors propose information-directed sampling (IDS), which naturally 4 balances the information-reg...
[ -1, 7, -1, 7, -1, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, 4, 4 ]
[ "Px_YsyjcYng", "nips_2021_syIj5ggwCYJ", "Oyzm_jbqzu3", "nips_2021_syIj5ggwCYJ", "ZeEI3vzxcjQ", "_5WY1ODZQgc", "J2Aa2mcQUgq", "wNxS-9xrI2", "nips_2021_syIj5ggwCYJ", "nips_2021_syIj5ggwCYJ" ]
nips_2021_dsevgAwUH4m
Linear Convergence of Gradient Methods for Estimating Structured Transition Matrices in High-dimensional Vector Autoregressive Models
In this paper, we present non-asymptotic optimization guarantees of gradient descent methods for estimating structured transition matrices in high-dimensional vector autoregressive (VAR) models. We adopt projected gradient descent (PGD) for single-structured transition matrices and alternating projected gradient descent (AltPGD) for superposition-structured ones. Our analysis demonstrates that both gradient algorithms converge linearly to the statistical error even though the strong convexity of the objective function is absent in the high-dimensional setting. Moreover, our result is sharp (up to a constant factor) in the sense of matching the phase transition theory of the corresponding model with independent samples. To the best of our knowledge, this analysis constitutes the first non-asymptotic optimization guarantee of a linear rate for regularized estimation in high-dimensional VAR models. Numerical results are provided to support our theoretical analysis.
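The single-structure case can be illustrated with iterative hard thresholding, i.e., PGD where the projection keeps the s largest entries — one concrete instance of the projections analyzed here, not the paper's full algorithm. A self-contained toy sketch for a sparse VAR(1):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, s = 20, 400, 30        # dimension, series length, sparsity level

# Ground-truth sparse transition matrix, rescaled to spectral radius <= 0.9.
A = np.zeros((d, d))
A.flat[rng.choice(d * d, size=s, replace=False)] = rng.normal(0.0, 0.3, s)
A *= 0.9 / max(np.abs(np.linalg.eigvals(A)).max(), 0.9)

# Simulate a stable VAR(1): x_{t+1} = A x_t + noise.
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.1 * rng.normal(size=d)
Y, Z = X[1:], X[:-1]         # regression targets and covariates

def hard_threshold(M, s):
    # Projection onto matrices with at most s nonzero entries.
    out = np.zeros_like(M)
    keep = np.argsort(np.abs(M), axis=None)[-s:]
    out.flat[keep] = M.flat[keep]
    return out

# Projected gradient descent on the least-squares loss.
A_hat = np.zeros((d, d))
step = 1.0 / np.linalg.norm(Z.T @ Z, 2)
for _ in range(300):
    grad = (A_hat @ Z.T - Y.T) @ Z
    A_hat = hard_threshold(A_hat - step * grad, s)

print("relative estimation error:",
      np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```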
accept
Given that the authors have satisfactorily addressed the concerns of the reviewers, the consensus of the review committee is that the paper should be accepted for presentation at Neurips. I would like to ask the authors to read the paper carefully one more time and revise all the typos/missed notation definitions including the ones that are pointed out by the reviewers.
train
[ "y2YZA7tB1Cj", "PzwsiJ13Dr4", "sULm2FbPZv", "bDTRCTtSXRI", "9fZC5h7hvmS", "lUTTKwcinW0", "8vgcsGB7zp4", "84lY1XKTnaa", "1QGO8mDYIWk", "KAy7p_rEUyf", "gPOe0Ds9wUg", "qOjKrcNiYk1", "b919J21Nb-Z", "pZV4lEPNTNm", "TEptYQmsq3" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a recovery guarantee for gradient-based optimization method applied to high-dimensional vector autoregressive (VAR) models. It considers both single-structured and superposition-structured transition matrix, and shows that the distance between the true parameter and the iterates generated by pr...
[ 7, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "nips_2021_dsevgAwUH4m", "gPOe0Ds9wUg", "bDTRCTtSXRI", "nips_2021_dsevgAwUH4m", "8vgcsGB7zp4", "nips_2021_dsevgAwUH4m", "84lY1XKTnaa", "1QGO8mDYIWk", "lUTTKwcinW0", "TEptYQmsq3", "pZV4lEPNTNm", "bDTRCTtSXRI", "y2YZA7tB1Cj", "nips_2021_dsevgAwUH4m", "nips_2021_dsevgAwUH4m" ]
nips_2021_ys6L_NWchCp
Large-Scale Unsupervised Object Discovery
Existing approaches to unsupervised object discovery (UOD) do not scale up to large datasets without approximations that compromise their performance. We propose a novel formulation of UOD as a ranking problem, amenable to the arsenal of distributed methods available for eigenvalue problems and link analysis. Through the use of self-supervised features, we also demonstrate the first effective fully unsupervised pipeline for UOD. Extensive experiments on COCO [Lin2014cocodataset] and OpenImages [openimages] show that, in the single-object discovery setting where a single prominent object is sought in each image, the proposed LOD (Large-scale Object Discovery) approach is on par with, or better than, the state of the art for medium-scale datasets (up to 120K images), and over 37% better than the only other algorithms capable of scaling up to 1.7M images. In the multi-object discovery setting where multiple objects are sought in each image, the proposed LOD is over 14% better in average precision (AP) than all other methods for datasets ranging from 20K to 1.7M images. Using self-supervised features, we also show that the proposed method obtains state-of-the-art UOD performance on OpenImages.
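The ranking formulation is amenable to link-analysis machinery; as a reference point, PageRank-style power iteration on a similarity graph looks like this, with a toy similarity matrix standing in for the actual inter-region similarities:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration for PageRank on a dense adjacency/similarity matrix."""
    n = adj.shape[0]
    col_sums = adj.sum(0, keepdims=True)
    P = adj / np.where(col_sums > 0, col_sums, 1)   # column-stochastic
    r = np.ones(n) / n
    for _ in range(iters):
        r = damping * (P @ r) + (1 - damping) / n
    return r

rng = np.random.default_rng(0)
sim = rng.random((6, 6))
sim = (sim + sim.T) / 2          # toy symmetric similarities
scores = pagerank(sim)
print("ranking (most 'object-like' first):", np.argsort(-scores))
```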
accept
The formulation of unsupervised object discovery as a PageRank problem is neat, and allows scaling up to significantly larger datasets. The paper is accepted, but the authors are encouraged to better motivate the use of supervised features (and/or highlight the experiments with self-supervised features more).
train
[ "RoY1Oa6EEKf", "n3QpicMVjl0", "nVfsLR2_B8h", "iLNDz6GcCSy", "Y0YsI8LKC7e", "aR8Zr6bJvEp", "CskCvWVHUy5", "sqar_7zSijE", "QtAa1PSgFSS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces an efficient approach for Unsupaervised Object Discovery (UOD). This efficiency was achieved by considering the exising UOD formulation as a ranking problem by modifying the binary importance vector as having a floating value. This modification allows to apply this approach to a very large-sc...
[ 6, 7, -1, 6, -1, -1, -1, -1, 5 ]
[ 3, 4, -1, 2, -1, -1, -1, -1, 3 ]
[ "nips_2021_ys6L_NWchCp", "nips_2021_ys6L_NWchCp", "Y0YsI8LKC7e", "nips_2021_ys6L_NWchCp", "n3QpicMVjl0", "iLNDz6GcCSy", "RoY1Oa6EEKf", "QtAa1PSgFSS", "nips_2021_ys6L_NWchCp" ]
nips_2021_Fa-w-10s7YQ
Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space
As a basic component of SE(3)-equivariant deep feature learning, steerable convolution has recently demonstrated its advantages for 3D semantic analysis. The advantages are, however, brought by expensive computations on dense, volumetric data, which prevent its practical use for efficient processing of 3D data that are inherently sparse. In this paper, we propose a novel design of Sparse Steerable Convolution (SS-Conv) to address the shortcoming; SS-Conv greatly accelerates steerable convolution with sparse tensors, while strictly preserving the property of SE(3)-equivariance. Based on SS-Conv, we propose a general pipeline for precise estimation of object poses, wherein a key design is a Feature-Steering module that takes the full advantage of SE(3)-equivariance and is able to conduct an efficient pose refinement. To verify our designs, we conduct thorough experiments on three tasks of 3D object semantic analysis, including instance-level 6D pose estimation, category-level 6D pose and size estimation, and category-level 6D pose tracking. Our proposed pipeline based on SS-Conv outperforms existing methods on almost all the metrics evaluated by the three tasks. Ablation studies also show the superiority of our SS-Conv over alternative convolutions in terms of both accuracy and efficiency. Our code is released publicly at https://github.com/Gorilla-Lab-SCUT/SS-Conv.
accept
The paper presents a Sparse Steerable Convolution to accelerate steerable convolution (ST-Conv) via sparse tensors, while still preserving the SE(3)-equivariance. The authors also show application of the proposed method to 3D semantic analysis including the tasks of instance-level 6D pose estimation, category-level 6D pose and size estimation, and category-level 6D pose tracking and they achieve superior performance on these datasets. All reviewers agree on the soundness of the proposed method and its efficiency over steerable convolutions or dense convolutions.
val
[ "5dJdIeUQ40y", "mIRtahN4r_n", "QQ89YtK9j-p", "x2cnuWcBkEG", "Pp1lwO6VFNl", "l_HEO-be5n", "m5W4cby6Px", "Z8DQHVCgGZf", "o079ZyrMBpo", "heAaXf3XkE", "uKEwDVxlU01", "ytnkAD6DEE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Hi,\n\nThank you for the rebuttal. Mostly the authors address my comments well.\n\nOnly one point concerning c) Extension of SS-Conv to more general cases.\n\nThe PN_VLAD [2] is a weak baseline, and it would be better to use a stronger backbone as per [https://github.com/jac99/MinkLoc3D]\n\nI would like to mainta...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 4, -1, 2, -1, -1, -1, -1, -1, 3, 4 ]
[ "o079ZyrMBpo", "m5W4cby6Px", "nips_2021_Fa-w-10s7YQ", "heAaXf3XkE", "nips_2021_Fa-w-10s7YQ", "Z8DQHVCgGZf", "uKEwDVxlU01", "Pp1lwO6VFNl", "ytnkAD6DEE", "QQ89YtK9j-p", "nips_2021_Fa-w-10s7YQ", "nips_2021_Fa-w-10s7YQ" ]
nips_2021_Pvzqji3at5
Noisy Adaptation Generates Lévy Flights in Attractor Neural Networks
Lévy flights describe a special class of random walks whose step sizes satisfy a power-law tailed distribution. As an efficient searching strategy in unknown environments, Lévy flights are widely observed in animal foraging behaviors. Recent studies further showed that human cognitive functions also exhibit the characteristics of Lévy flights. Although Lévy flights are a general phenomenon, the neural mechanism at the circuit level for generating them remains unresolved. Here, we investigate how Lévy flights can be achieved in attractor neural networks. To elucidate the underlying mechanism clearly, we first study continuous attractor neural networks (CANNs), and find that noisy neural adaptation, exemplified by spike frequency adaptation (SFA) in this work, can generate Lévy flights representing transitions of the network state in the attractor space. Specifically, the strength of SFA defines a travelling wave boundary, below which the network state displays local Brownian motion, and above which the network state displays long-jump motion. Noise in neural adaptation causes the network state to intermittently switch between these two motion modes, manifesting the characteristics of Lévy flights. We further extend the study to a general attractor neural network, and demonstrate that our model can explain the Lévy-flight phenomenon observed during free memory retrieval of humans. We hope that this study will give us insight into understanding the neural mechanism for optimal information processing in the brain.
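The mechanism in the abstract — intermittent switching between a small-step Brownian mode and a power-law-tailed jump mode — can be caricatured in a few lines, together with a Hill-estimator check of the tail exponent. This is a toy illustration, not the CANN simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000

# Caricature: noise makes the state switch between a local Brownian mode
# (small steps) and a travelling-wave mode (long jumps).
jump_mode = rng.random(T) < 0.05
steps = np.where(jump_mode,
                 1.0 + rng.pareto(1.5, T),         # power-law tailed jumps
                 np.abs(rng.normal(0.0, 0.1, T)))  # small Brownian steps

# Hill estimator of the tail exponent over the k largest step sizes.
k = 1_000
tail = np.sort(steps)[-k:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print("estimated tail exponent (Levy flights: alpha < 2):", alpha_hat)
```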
accept
All the reviewers have agreed that the submission is original and interesting, with clear contributions to the domain. The authors' responses also addressed most major concerns. Hence, I recommend acceptance. On another note, as reviewer JwRM partially pointed out, using power-laws and Levy flights for modeling stochastic optimization algorithms has attracted considerable attention in other parts of ML as well. In this respect, I suggest the authors check the papers that reviewer JwRM suggested, as well as the following one, which is directly based on a Levy-driven Langevin equation, and discuss them in their paper if they think them relevant: Simsekli, U., Sagun, L., & Gurbuzbalaban, M. (2019, May). A tail-index analysis of stochastic gradient noise in deep neural networks. In International Conference on Machine Learning (pp. 5827-5837). PMLR. Zhou, P., Feng, J., Ma, C., Xiong, C., & Hoi, S. C. H. (2020). Towards Theoretically Understanding Why Sgd Generalizes Better Than Adam in Deep Learning. Advances in Neural Information Processing Systems, 33.
train
[ "kwuUAVQ0UMc", "T1DHqRNCZ9L", "NLXXKWammnS", "Q7bPlZBzlt-", "YR5MCZDig4P", "zaA4y0UqGP2", "ZLi4Shlkl8e", "OY5uD2hXQXJ", "3lqXhV9hnnh", "1jx6iwtUPUd", "tVOYQEkUmzf", "_s15K01HYh8", "1kocximV2mR" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " We thank the reviewer again for the feedback. We will definitely make this point clearer in the revised manuscript. And we thank all the reviewers for their effort made during the discussion panel. All your valuable comments indeed help us a lot to improve the work. ", "The submission suggests a neural mechani...
[ -1, 7, -1, -1, 7, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, 3, -1, -1, 1, -1, -1, -1, -1, 2 ]
[ "NLXXKWammnS", "nips_2021_Pvzqji3at5", "_s15K01HYh8", "1jx6iwtUPUd", "nips_2021_Pvzqji3at5", "3lqXhV9hnnh", "1kocximV2mR", "nips_2021_Pvzqji3at5", "tVOYQEkUmzf", "YR5MCZDig4P", "OY5uD2hXQXJ", "T1DHqRNCZ9L", "nips_2021_Pvzqji3at5" ]
nips_2021_yAvCV6NwWQ
On Linear Stability of SGD and Input-Smoothness of Neural Networks
The multiplicative structure of parameters and input data in the first layer of neural networks is explored to build a connection between the landscape of the loss function with respect to parameters and the landscape of the model function with respect to input data. By this connection, it is shown that flat minima regularize the gradient of the model function, which explains the good generalization performance of flat minima. Then, we go beyond the flatness and consider high-order moments of the gradient noise, and show that Stochastic Gradient Descent (SGD) tends to impose constraints on these moments by a linear stability analysis of SGD around global minima. Together with the multiplicative structure, we identify the Sobolev regularization effect of SGD, i.e., SGD regularizes the Sobolev seminorms of the model function with respect to the input data. Finally, bounds for generalization error and adversarial robustness are provided for solutions found by SGD under assumptions on the data distribution.
accept
All reviewers have praised the clarity of the paper as well as the soundness and novelty of the results. Some important questions regarding completeness of some definitions and applicability of the current assumptions have been raised in the reviews, which the authors have clarified with a strong rebuttal. I encourage the authors to incorporate these remarks in the camera ready version.
train
[ "15J4M1m340H", "pvf7m9rXrN", "srVB3nP8UVW", "p6SRmS3xNcb", "9J6Bu_JQMtW", "Dwz2w5fJ15y", "NzV2nhV-SG", "YkGpNsXOCiR", "fpw5V00q7mJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper extends previous stability analysis ([30]) to high-order moments of the iterator. They characterize the implicit regularization of SGD by showing that a stable global minimum must satisfy conditions regarding its flatness and non-uniformity. In addition, the authors build relations between the model func...
[ 8, 6, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_yAvCV6NwWQ", "nips_2021_yAvCV6NwWQ", "p6SRmS3xNcb", "pvf7m9rXrN", "15J4M1m340H", "fpw5V00q7mJ", "YkGpNsXOCiR", "nips_2021_yAvCV6NwWQ", "nips_2021_yAvCV6NwWQ" ]
nips_2021_RgH0gGH9B64
Joint inference and input optimization in equilibrium networks
Many tasks in deep learning involve optimizing over the inputs to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it involves a complete forward and backward pass through the network for each gradient step. In a separate line of work, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can itself be cast as a fixed point iteration to substantially improve the overall speed. That is, we simultaneously both solve for the DEQ fixed point and optimize over network inputs, all within a single "augmented" DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently train DEQ models for tasks traditionally relying on an "inner" optimization loop. We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
accept
The reviewers found the paper to be interesting, novel and well-written. This paper will likely be of interest to the community and inspire follow-on work. In review, the authors agreed to add a discussion on the assumptions on l() and f_theta required to ensure (local) convergence; we expect the authors to follow through on this.
train
[ "1xash-NRUla", "-RntV7nYpY", "tZVpCkTgYPv", "Q5hkEPljqgV", "sl5APsDTFF1", "7zP8raRUMoY", "taH4cZ3pkPw", "3kaJLU7pmNn" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the detailed comments and feedback. We try to address the main concerns of the reviewer as follows:\n\n1) **Stability** - The stability issues were indeed a cause of concern with the JIIO optimization and training. We believe that there were multiple factors contributing to it and explor...
[ -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "3kaJLU7pmNn", "7zP8raRUMoY", "taH4cZ3pkPw", "sl5APsDTFF1", "nips_2021_RgH0gGH9B64", "nips_2021_RgH0gGH9B64", "nips_2021_RgH0gGH9B64", "nips_2021_RgH0gGH9B64" ]
nips_2021_0fPgXqP1Mq
A unified framework for bandit multiple testing
In bandit multiple hypothesis testing, each arm corresponds to a different null hypothesis that we wish to test, and the goal is to design adaptive algorithms that correctly identify a large set of interesting arms (true discoveries), while only mistakenly identifying a few uninteresting ones (false discoveries). One common metric in non-bandit multiple testing is the false discovery rate (FDR). We propose a unified, modular framework for bandit FDR control that emphasizes the decoupling of exploration and summarization of evidence. We utilize the powerful martingale-based concept of "e-processes" to ensure FDR control for arbitrary composite nulls, exploration rules and stopping times in generic problem settings. In particular, valid FDR control holds even if the reward distributions of the arms could be dependent, multiple arms may be queried simultaneously, and multiple (cooperating or competing) agents may be querying arms, covering combinatorial semi-bandit type settings as well. Prior work has considered in great detail the setting where each arm's reward distribution is independent and sub-Gaussian, and a single arm is queried at each step. Our framework recovers matching sample complexity guarantees in this special case, and performs comparably or better in practice. For other settings, sample complexities will depend on the finer details of the problem (composite nulls being tested, exploration algorithm, data dependence structure, stopping rule) and we do not explore these; our contribution is to show that the FDR guarantee is clean and entirely agnostic to these details.
accept
This is a borderline paper. As the reviewers point out there are some valuable contributions but also various shortcomings. For instance, it is not fully obvious what the benefit of the unified framework is, the objective of the algorithm design is not stated clearly, and the paper could be improved significantly in a revision with regard to the presentation. Despite these shortcomings I believe that the contributions are valuable and I recommend to accept the paper.
train
[ "MDeoMwb3-b", "xbAbv4_SXid", "df_HozPm85", "adJq8b8nxCQ", "F8EmykEOsn", "WloCio76dJm", "rujQHyY3ACc", "_HL_T3Pat6W", "88ojD_q5qcv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors give a unified framework for bandit multiple testing in terms of e-values. This framework is more flexible than prior work, allowing for dependencies between random variables and adaptive algorithm without corrections required in prior work, most notably JJ. The authors apply this result in a setting s...
[ 5, 5, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_0fPgXqP1Mq", "nips_2021_0fPgXqP1Mq", "xbAbv4_SXid", "MDeoMwb3-b", "88ojD_q5qcv", "_HL_T3Pat6W", "nips_2021_0fPgXqP1Mq", "nips_2021_0fPgXqP1Mq", "nips_2021_0fPgXqP1Mq" ]
nips_2021_go3GvM7aFD
Recovering Latent Causal Factor for Generalization to Distributional Shifts
Distributional shifts between training and target domains may degrade the prediction accuracy of learned models, mainly because these models often learn features that possess only correlation rather than causal relation with the output. Such a correlation, which is known as ``spurious correlation'' statistically, is domain-dependent and hence may fail to generalize to unseen domains. To avoid such a spurious correlation, we propose \textbf{La}tent \textbf{C}ausal \textbf{I}nvariance \textbf{M}odels (LaCIM) that specify the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only the causal factor for prediction. Specifically, the LaCIM introduces a pair of correlated latent factors: (a) the causal factor and (b) others, while the extent of this correlation is governed by a domain variable that characterizes the distributional shifts. On the basis of this, we prove that the distribution of observed variables conditioning on latent variables is shift-invariant. Equipped with such an invariance, we prove that the causal factor can be recovered without mixing information from others, which induces the ground-truth predicting mechanism. We propose a Variational-Bayesian-based method to learn this invariance for prediction. The utility of our approach is verified by improved generalization to distributional shifts on various real-world data. Our code is freely available at \url{https://github.com/wubotong/LaCIM}.
accept
This paper investigates the problem that distribution shift may degrade the prediction accuracy of the learned model. It proposes a way to distinguish between the underlying causal factors and other factors, together with a strategy to deal with distribution shift by using only causal factors for prediction. The studied problem is interesting, and reviewers agree that both theoretical studies and empirical results are convincing.
train
[ "TYd17xicsdJ", "jHZ7SDe2tMN", "RuN_fDcmoiR", "qXCgojIrOXq", "jyvoHeEyKCW", "DeBiq63FAjU", "hlKCp_IuE8U", "moUwZgyTzi", "BA0pEDjsait", "0zP2qKAWCH0", "CQwwOM-fcmn", "R02wDijPBxR" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions and efforts in reviewing our paper. We will accordingly include some discussions in our rebuttal as more explanations for better clarification in section 4.\n\nPlease allow me to say something about the latent causal structure in real-world applications. In many cases, the causal struct...
[ -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, 4, 2 ]
[ "RuN_fDcmoiR", "RuN_fDcmoiR", "nips_2021_go3GvM7aFD", "0zP2qKAWCH0", "hlKCp_IuE8U", "nips_2021_go3GvM7aFD", "BA0pEDjsait", "CQwwOM-fcmn", "DeBiq63FAjU", "R02wDijPBxR", "nips_2021_go3GvM7aFD", "nips_2021_go3GvM7aFD" ]
nips_2021_kSv_AMdehh3
Graph Differentiable Architecture Search with Structure Learning
Discovering ideal Graph Neural Network (GNN) architectures for different tasks is labor intensive and time consuming. To save human effort, Neural Architecture Search (NAS) has recently been used to automatically discover adequate GNN architectures for certain tasks in order to achieve competitive or even better performance compared with manually designed architectures. However, existing works utilizing NAS to search GNN structures fail to answer the question of how NAS is able to select the desired GNN architectures. In this paper, we investigate this question for the first time. We conduct a measurement study with experiments to discover that gradient based NAS methods tend to select proper architectures based on the usefulness of different types of information with respect to the target task. Our explorations further show that gradient based NAS also suffers from noise hidden in the graph, resulting in the search for suboptimal GNN architectures. Based on our findings, we propose a Graph differentiable Architecture Search model with Structure Optimization (GASSO), which allows differentiable search of the architecture with gradient descent and is able to discover graph neural architectures with better performance through employing graph structure learning as a denoising process in the search procedure. The proposed GASSO model is capable of simultaneously searching the optimal architecture and adaptively adjusting graph structure by jointly optimizing graph architecture search and graph structure denoising. Extensive experiments on real-world graph datasets demonstrate that our proposed GASSO model is able to achieve state-of-the-art performance compared with existing baselines.
accept
This paper studied the NAS problem for GNNs, provided analysis on the limitation of existing gradient based NAS for this task, and proposed to perform graph structure learning at the same time. The reviewers generally find the analysis is interesting and the paper to be well written. The authors have provided additional experiments and explanations during the rebuttal, and all the reviewers found the rebuttal to be helpful during the committee discussion. Based on this, we recommend the acceptance of the paper.
train
[ "0js-N_IJFS", "9STFB5qfZVt", "Q0Y0yYtiC8e", "PRGKpdpC5Bx", "k5OckQQPFGg", "czQXe30S_zS", "GuxK5JvpR5b", "NpAUm_TnquO", "5UEIS79FAIL", "LDhAb7BnZuA", "XCe3ufZWoCr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ " We very much appreciate reviewer's feedback to our response.\n\nWe will surely incorporate reviewer's suggestions into our revised versions.\n\n\nFor the scalability issue, the reviewer's concern is reasonable, since considering densely connected edges needs $O(n^2)$ time/memory complexity. We think a possible fu...
[ -1, 6, -1, -1, 6, -1, -1, -1, 7, -1, 9 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, 5, -1, 5 ]
[ "Q0Y0yYtiC8e", "nips_2021_kSv_AMdehh3", "GuxK5JvpR5b", "k5OckQQPFGg", "nips_2021_kSv_AMdehh3", "XCe3ufZWoCr", "9STFB5qfZVt", "k5OckQQPFGg", "nips_2021_kSv_AMdehh3", "5UEIS79FAIL", "nips_2021_kSv_AMdehh3" ]
nips_2021_iHisgL7PFj2
Designing Counterfactual Generators using Deep Model Inversion
Explanation techniques that synthesize small, interpretable changes to a given image while producing desired changes in the model prediction have become popular for introspecting black-box models. Commonly referred to as counterfactuals, the synthesized explanations are required to contain discernible changes (for easy interpretability) while also being realistic (consistency to the data manifold). In this paper, we focus on the case where we have access only to the trained deep classifier and not the actual training data. While the problem of inverting deep models to synthesize images from the training distribution has been explored, our goal is to develop a deep inversion approach to generate counterfactual explanations for a given query image. Despite their effectiveness in conditional image synthesis, we show that existing deep inversion methods are insufficient for producing meaningful counterfactuals. We propose DISC (Deep Inversion for Synthesizing Counterfactuals) that improves upon deep inversion by utilizing (a) stronger image priors, (b) incorporating a novel manifold consistency objective and (c) adopting a progressive optimization strategy. We find that, in addition to producing visually meaningful explanations, the counterfactuals from DISC are effective at learning classifier decision boundaries and are robust to unknown test-time corruptions.
accept
The reviews highlight that this is a technically sound, well-written contribution to the problem of counterfactual explanations in the novel and relevant setting where only the classifier is accessible. The manuscript is therefore accepted. The impact of the paper could be further strengthened if it were put in the context of other ongoing work on counterfactual explanations, e.g., [1]. [1] Karimi et al. A survey of algorithmic recourse: contrastive explanations and consequential recommendations
train
[ "I60n8ax53G", "i-9p5_33iAK", "fuBYDLhh_V", "sFmpYqHaO2N", "zwCuWx4D_TB", "lwviwniYjUM", "Z0v-aPLUS7O", "Ryzs1Acc1y7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**(motivation)** Many approaches exist to generate counterfactual images, but what should be done when no training data is available? Current approaches lead to adversarial-looking approaches, which are not satisfying enough as an explanation.\n\n**(approach)** The authors use Deep Inversion and a novel objective ...
[ 7, 7, -1, -1, -1, -1, 5, 7 ]
[ 3, 5, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_iHisgL7PFj2", "nips_2021_iHisgL7PFj2", "i-9p5_33iAK", "I60n8ax53G", "Ryzs1Acc1y7", "Z0v-aPLUS7O", "nips_2021_iHisgL7PFj2", "nips_2021_iHisgL7PFj2" ]
nips_2021_FM8auLVlRMo
A Faster Maximum Cardinality Matching Algorithm with Applications in Machine Learning
Nathaniel Lahn, Sharath Raghvendra, Jiacheng Ye
accept
The reviewers consider the submission to have valuable contributions, and the ideas may help OT-based algorithms. There were some concerns about the presentation, including: 1) the impact on the ML community is not clear; 2) the experiments are deficient: it is not clear that random point sets are in any way representative (e.g., they have a smaller value of k, which may not be the case for a typical point set). The authors are encouraged to update the paper in accordance with the reviews/post-rebuttal comments.
train
[ "tRlpI2C2p2m", "CVzfgHiwBBd", "caME9VvIHOT", "Lro8QSGnqO", "-8fq373juHh", "VXOUitRVPCx", "h3NbtJDo3D_", "d7n0BNre6V8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks to the authors for these clarifications. I have raised my score accordingly.", "This paper has the following contributions about algos for max matching on graphs with recursive separators:\n\n1 simplifying the recent LR algo to not require dual weights\n\n2 showing how this algo can find maximum matching...
[ -1, 7, -1, 7, -1, -1, -1, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, 3 ]
[ "h3NbtJDo3D_", "nips_2021_FM8auLVlRMo", "-8fq373juHh", "nips_2021_FM8auLVlRMo", "d7n0BNre6V8", "Lro8QSGnqO", "CVzfgHiwBBd", "nips_2021_FM8auLVlRMo" ]
nips_2021_NFurmj-rIWe
Dynamic population-based meta-learning for multi-agent communication with natural language
In this work, our goal is to train agents that can coordinate with seen, unseen, as well as human partners in a multi-agent communication environment involving natural language. Previous work using a single set of agents has shown great progress in generalizing to known partners; however, it struggles when coordinating with unfamiliar agents. To mitigate that, recent work explored the use of population-based approaches, where multiple agents interact with each other with the goal of learning more generic protocols. These methods, while able to result in good coordination between unseen partners, still only do so in the case of simple languages, thus failing to adapt to human partners using natural language. We attribute this to the use of static populations and instead propose a dynamic population-based meta-learning approach that builds such a population in an iterative manner. We perform a holistic evaluation of our method on two different referential games, and show that our agents outperform all prior work when communicating with seen partners and humans. Furthermore, we analyze the natural language generation skills of our agents, where we find that our agents also outperform strong baselines. Finally, we test the robustness of our agents when communicating with out-of-population agents and carefully test the importance of each component of our method through ablation studies.
accept
This paper introduces an interesting approach to combating semantic drift in communication-based training for language agents. The biggest concern among reviewers was the need to more clearly describe the details of methods and results for the human experiments. It will be very important to add methods, ethics, and results details in the revision. Some reviewers felt that the domains considered were too artificial, but I believe (along with other reviewers) that these are an appropriate first step given previous work in this area.
train
[ "0quBPRIvyMc", "mweusFzCc76", "mrbaLdFPrL", "PqFUjazm5tw", "3iecZjRsIMp", "oKbTIaAR80b", "LZjY2YErkBs", "Bn3kzKSflex", "wAC-2PmqLq", "ZoFZvBPz7N", "-UL616p27xC", "BnoEcpIL4G1", "HiEaNDEsT25", "UHVQbwy6HBf", "_vEtVGuElTQ", "U1p42AdGfhd", "CkdAdLAQvmM", "OyGWghjxk7o", "yxU5JnlMJ5" ...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work deals with the problem of multi-agent communication where the agents can be artificial or human. The latter means that the conversation has to be in natural language. New agents can be added or removed from conversation so the environment is not static. The authors propose a dynamic population-based meta...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 9 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 1 ]
[ "nips_2021_NFurmj-rIWe", "mrbaLdFPrL", "PqFUjazm5tw", "3iecZjRsIMp", "_vEtVGuElTQ", "OyGWghjxk7o", "BnoEcpIL4G1", "U1p42AdGfhd", "nips_2021_NFurmj-rIWe", "0quBPRIvyMc", "CkdAdLAQvmM", "nips_2021_NFurmj-rIWe", "CkdAdLAQvmM", "yxU5JnlMJ5", "OyGWghjxk7o", "nips_2021_NFurmj-rIWe", "nips_...
nips_2021_4cEapqXfP30
Adversarial Neuron Pruning Purifies Backdoored Deep Models
As deep neural networks (DNNs) are growing larger, their requirements for computational resources become huge, which makes outsourcing training more popular. Training on a third-party platform, however, may introduce the potential risk that a malicious trainer will return backdoored DNNs, which behave normally on clean samples but output targeted misclassifications whenever a trigger appears at test time. Without any knowledge of the trigger, it is difficult to distinguish or recover benign DNNs from backdoored ones. In this paper, we first identify an unexpected sensitivity of backdoored DNNs, that is, they are much easier to collapse and tend to predict the target label on clean samples when their neurons are adversarially perturbed. Based on these observations, we propose a novel model repairing method, termed Adversarial Neuron Pruning (ANP), which prunes some sensitive neurons to purify the injected backdoor. Experiments show that, even with only an extremely small amount of clean data (e.g., 1%), ANP effectively removes the injected backdoor without causing obvious performance degradation.
accept
This paper focuses on the problem of backdoor defense without any knowledge of triggers. The proposal is an adversarial neuron perturbation to find the neurons most sensitive to the backdoor trigger, and then Adversarial Neuron Pruning (ANP) to purify the model. The philosophy behind it sounds quite interesting to me, namely, exploring the correlation between the neurons sensitive to the injected backdoor and the multiplicative perturbation on neurons. This philosophy leads to a novel algorithm design I have never seen. The clarity and novelty are above the bar of NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication! Please include the additional experimental results in the next version.
train
[ "yMepL52Pli", "v-lC6iJuxmj", "VV1smvW-8Lo", "8NeD_t5lU4U", "6DZMMIL1e6c", "QGhVxxHlIf", "IofeAGrXbIW", "ZMOn3ZWYT4q", "-Aj1xM4PFJ", "UHXi1DwmC9G" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response. I keep my score after rebuttal.", " Thank you for the detailed answers. The answers to Q1 and Q2 are satisfactory. I agree with your points and appreciate the technique of ANP drastically degrades ASR. Therefore, I am willing to keep my score after rebuttal.", " **Q1**: There...
[ -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 3, 5 ]
[ "VV1smvW-8Lo", "8NeD_t5lU4U", "IofeAGrXbIW", "-Aj1xM4PFJ", "UHXi1DwmC9G", "ZMOn3ZWYT4q", "nips_2021_4cEapqXfP30", "nips_2021_4cEapqXfP30", "nips_2021_4cEapqXfP30", "nips_2021_4cEapqXfP30" ]
nips_2021_AuVKs6JmBtY
Towards Robust and Reliable Algorithmic Recourse
As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first ever solution to this critical problem. We also carry out theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: 1) We quantify the probability of invalidation for recourses generated without accounting for model shifts. 2) We prove that the additional cost incurred due to the robust recourses output by our framework is bounded. Experimental evaluation on multiple synthetic and real-world datasets demonstrates the efficacy of the proposed framework.
accept
The expert reviewers for the most part appreciated the paper and were largely positive. There were many important issues raised, but the reviewers felt the authors responded appropriately and suggested a reasonable path to addressing these within the scope of this review process. The authors are expected to carefully address the points raised by reviewers in a final version, including as they outlined in their response. This includes the points raised about the theory and assumptions, as well as justifying choices in the model -- the answers from the authors were adequate and need to be incorporated into the paper.
train
[ "wOR-WhhB9Pf", "E4ftmSYw9a3", "Muu0ws2j2l", "81qSW2nmFVU", "LAjBgKaoqqb", "0EtkUqeMgmN", "lI1aIQ8VEMc", "zjjoCdzZ6Z", "dmobVPkM4db", "kgR6cT5TT6h", "8JhENqWggnr", "EhJURK6iynU", "3Uqg_nIS-0e" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes a method to construct recourses that are valid under worst case additive model shifts for linear models and applied to non-linear ones with LIME. The paper analyzes these robust recourses theoretically but not for the given algorithm. I think the paper overall is of great quality but it lacks in...
[ 6, -1, -1, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "nips_2021_AuVKs6JmBtY", "81qSW2nmFVU", "0EtkUqeMgmN", "8JhENqWggnr", "nips_2021_AuVKs6JmBtY", "EhJURK6iynU", "8JhENqWggnr", "kgR6cT5TT6h", "nips_2021_AuVKs6JmBtY", "3Uqg_nIS-0e", "wOR-WhhB9Pf", "LAjBgKaoqqb", "dmobVPkM4db" ]
nips_2021_vecLnc6g6iQ
Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation
Sequence-to-Sequence (Seq2Seq) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., Copy Mechanism corresponding to the rule "the generated output should include certain words in the source input'') or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules through the text generation. These methods require careful case-by-case design and have difficulty supporting multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be equipped into various transformer-based generators to leverage multiple rules simultaneously to guide the neural generation model for superior generation performance in a unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks.
accept
This paper proposes a technique for including global constraints during decoding, and letting these be softly violated during decoding. Although the technique seems to be reasonable (there were a few concerns about the terseness of the approach), and reasonably evaluated, the problem of constrained optimisation is one of the most widely studied problems in optimisation (if not the most), and indeed, constrained optimisation has come up numerous times even in "decoding problems" (e.g., frameworks for decoding like dual decomposition, Rush and Collins, 2012, and Lagrangian relaxation, Martins et al., 2012 et passim), and using first order predicate logic to express constraints has a long history (the constrained conditional models of Rizzolo and Roth, 2007, and Markov Logic Networks of Poon and Domingos). I would encourage the authors to explicate how this work relates to other work in constrained optimisation in the context of decoding algorithms, and in particular, to be more precise in the motivation and nature of the constraints used in the experiments (for instance, are they designed on a validation set and then tested on a held-out test set?).
train
[ "OSHn-Co8I69", "7AT_JYR_jWU", "wGHyEBj08TX", "YTSd0QCCXv7", "TldfDMEreDw", "bOdzLz9QncJ", "0mkGWd9wQab", "54QHy_H9R0", "rr5Umm7zp57", "NQcIF6ikNYQ", "NeBGs1utWAk", "rza672UekiQ", "tBCGH9tFa_", "A2HAt15IkLA", "GBcOLGtH-eP", "GTLtwbMtSpg", "fBb_4EMexwa", "xe77x5FefgK", "Lyf8_iGcLiN...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "o...
[ " Thank you for your generous response. We appreciate your effort in reviewing our submission. We are very grateful to see that our response resolve your concerns in the initial review processing. We do spend a significant amount of effort in preparing to respond to your questions. We later found that these questio...
[ -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "7AT_JYR_jWU", "YTSd0QCCXv7", "nips_2021_vecLnc6g6iQ", "wGHyEBj08TX", "0mkGWd9wQab", "nips_2021_vecLnc6g6iQ", "CkhNYDIhVsR", "rza672UekiQ", "NQcIF6ikNYQ", "B5v-G2J42EZ", "nips_2021_vecLnc6g6iQ", "4KGiRxx5Iuw", "nips_2021_vecLnc6g6iQ", "GBcOLGtH-eP", "xe77x5FefgK", "fBb_4EMexwa", "nip...
nips_2021_D0xGh031I9m
Scalable Online Planning via Reinforcement Learning Fine-Tuning
Lookahead search has been a critical component of recent AI successes, such as in the games of chess, go, and poker. However, the search methods used in these games, and in many other settings, are tabular. Tabular search methods do not scale well with the size of the search space, and this problem is exacerbated by stochasticity and partial observability. In this work we replace tabular search with online model-based fine-tuning of a policy neural network via reinforcement learning, and show that this approach outperforms state-of-the-art search algorithms in benchmark settings. In particular, we use our search algorithm to achieve a new state-of-the-art result in self-play Hanabi, and show the generality of our algorithm by also showing that it outperforms tabular search in the Atari game Ms. Pacman.
accept
The paper proposes to replace tabular MCTS-like search methods with finetuning a pretrained policy for the states encountered online. The high-level method is a good contribution despite being straightforward. The experiments are fairly limited though, for what is largely an empirical paper. Even with the addition of a few more Atari games as a result of the post-rebuttal discussion, the evaluation is weak by the standards of empirical RL papers. I'd expect at least 20-30 more Atari games, or the Atari games on which the authors already have the results and the Procgen suite on top of that. I'm not discounting Hanabi -- it's a hard benchmark, but it's still just one benchmark. Running experiments on more games/environments is especially important since, according to the additional results mentioned in the discussion, in low-data and weak-blueprint regimes RL-Fine-Tune's advantage over MCTS is far from apparent. On balance, I cautiously recommend acceptance since RL-Fine-Tune seems novel and a strong baseline, and due to its simplicity it's likely to be experimented with and built on by others.
train
[ "V-so1enAnLl", "mlfWQz2TbtU", "j0qmL58HhH3", "_j09v401zGw", "bImEGZZsPqW", "NPUD-qCEhF", "YzVvEII6Tf9", "JjQUtjl2ZMm", "CgweLMHamNZ", "DyMhXw_70Ji", "cRD2ZYBtboZ", "6NaVGhKgjI" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper proposes to replace a Monte-Carlo tree search style search method by instead fine-tuning a neural network by running reinforcement learning on trajectories generated by a perfect model of the environment starting from the current state at inference time. They apply this idea in a cooperative multi-agent...
[ 7, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_D0xGh031I9m", "j0qmL58HhH3", "JjQUtjl2ZMm", "nips_2021_D0xGh031I9m", "V-so1enAnLl", "nips_2021_D0xGh031I9m", "CgweLMHamNZ", "_j09v401zGw", "NPUD-qCEhF", "nips_2021_D0xGh031I9m", "NPUD-qCEhF", "_j09v401zGw" ]
nips_2021_npvEdo4Ftb1
Adversarial Regression with Doubly Non-negative Weighting Matrices
Many machine learning tasks that involve predicting an output response can be solved by training a weighted regression model. Unfortunately, the predictive power of this type of model may severely deteriorate under low sample sizes or under covariate perturbations. Reweighting the training samples has emerged as an effective mitigation strategy for these problems. In this paper, we propose a novel and coherent scheme for kernel-reweighted regression by reparametrizing the sample weights using a doubly non-negative matrix. When the weighting matrix is confined in an uncertainty set using either the log-determinant divergence or the Bures-Wasserstein distance, we show that the adversarially reweighted estimate can be solved efficiently using first-order methods. Numerical experiments show that our reweighting strategy delivers promising results on numerous datasets.
accept
The paper is technically sound; however, its motivation and practical impact are unclear: the reasons why the contribution is a method of choice for small-sample situations are not spelled out clearly enough. This makes the paper a bit subpar with respect to the expectations for a paper at NeurIPS.
train
[ "cSV-qCKFG8s", "FWmfdLzU0mo", "OuEWqxGsZwz", "EtSL4wdH_X", "ChOXVQklYjG", "As-23g6rdIy", "-6y4aFf_Lo", "KWF0IZzrzZp", "ygQsCQvySop", "D9YFhXDWM42", "aidYgU0_oyv", "YZm4S0f53Hj" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method that minimizes the worst-case risk among a set of some generalized weighted empirical risks. The generalized weighted empirical risks are defined using doubly non-negative (i.e., non-negative, positive (semi-)definite) matrices that is close to the matrix corresponding to the original ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2021_npvEdo4Ftb1", "OuEWqxGsZwz", "EtSL4wdH_X", "ChOXVQklYjG", "-6y4aFf_Lo", "YZm4S0f53Hj", "cSV-qCKFG8s", "aidYgU0_oyv", "D9YFhXDWM42", "nips_2021_npvEdo4Ftb1", "nips_2021_npvEdo4Ftb1", "nips_2021_npvEdo4Ftb1" ]
nips_2021_G7W2mriQLxf
Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection
Robust principal component analysis (RPCA) is a critical tool in modern machine learning, which detects outliers in the task of low-rank matrix reconstruction. In this paper, we propose a scalable and learnable non-convex approach for high-dimensional RPCA problems, which we call Learned Robust PCA (LRPCA). LRPCA is highly efficient, and its free parameters can be effectively learned via deep unfolding. Moreover, we extend deep unfolding from finite iterations to infinite iterations via a novel feedforward-recurrent-mixed neural network model. We establish the recovery guarantee of LRPCA under mild assumptions for RPCA. Numerical experiments show that LRPCA outperforms the state-of-the-art RPCA algorithms, such as ScaledGD and AltProj, on both synthetic datasets and real-world applications.
accept
The paper studies learned approaches to the problem of recovering a low-rank matrix from sparsely corrupted observations (here, RPCA). This problem has been intensely studied, and a number of methods have been proposed based on both convex and nonconvex optimization. The paper takes the perspective of *learned* optimization, in which one learns parameters of an optimization method from data, by interpreting iterations of the method as layers of a neural network. This has the advantage of adapting the method to (1) perform as well as possible for a given (small) number of iterations and (2) to perform as well as possible on data whose distribution mimics that of the training data. There are applications of learned optimization to RPCA. However, these approaches are based on the singular value decomposition, and so are not scalable to very large input matrices. The paper proposes a learned optimization method which combines soft-thresholding / proximal methods and a factorized low-rank model which is optimized using a scaled gradient descent (ScaledGD) which better copes with the ambiguity between factors. The paper uses learning to determine a sequence of step sizes and a sequence of thresholding parameters. The paper allows the learned method to be applied for arbitrary numbers of steps, by treating the layers as a recurrent neural net which can be iterated indefinitely. The main potential advantages compared to previous learned RPCA approaches are (1) avoiding the need to perform singular value decomposition, (2) the use of RNN’s enables the learned method to be applied for arbitrary numbers of iterations. The paper provides theoretical analysis showing that there exist parameters which enable linear convergence from an appropriate initialization (given by performing a single truncated SVD, as in many prior works). Experiments show significantly improved convergence speed compared to ScaledGD and alternating minimization. Initial reviews praised the clear motivation and presentation of the paper, as well as the idea of combining feedforward and recurrent networks to learn more flexible optimizers. At the same time, reviewers raised questions about the novelty of the paper’s proposals, the relationship to other notions of RPCA, and various aspects of the experiments. After responses from the authors, the reviewers converged to an evaluation of the paper as above the bar for acceptance, appreciating the paper’s empirical improvements, and the idea of combining feedforward and recurrent networks to handle arbitrary numbers of iterations. The paper would benefit from clarifications as to the novelty of the theoretical recovery results, and where technical innovations were required in the proofs. 
train
[ "6rJWW7dpZAn", "bLBiMfZxVX_", "H7-CI9ESpS", "MdoS3d3nMNN", "WnYtsnzYWZ", "g854Dm3eAdG", "1Aoz2qbPJOW", "gyQIgo0g5Qy", "qm8USklruxW", "YPkf4tRYBCF", "kXymPXjLcmC" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Authors,\n\nThanks a lot for the detailed response. The work provided some interesting results. However in my opinion, the contribution to PCA or optimization is not significant enough. I will not change my score.\n\nReviewer", " To all reviewers:\n\nWe really appreciate all the reviewers for their valuable ...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "bLBiMfZxVX_", "nips_2021_G7W2mriQLxf", "qm8USklruxW", "qm8USklruxW", "kXymPXjLcmC", "YPkf4tRYBCF", "gyQIgo0g5Qy", "nips_2021_G7W2mriQLxf", "nips_2021_G7W2mriQLxf", "nips_2021_G7W2mriQLxf", "nips_2021_G7W2mriQLxf" ]
nips_2021_pzmwfDLoANS
Proxy-Normalizing Activations to Match Batch Normalization while Removing Batch Dependence
We investigate the reasons for the performance degradation incurred with batch-independent normalization. We find that the prototypical techniques of layer normalization and instance normalization both induce the appearance of failure modes in the neural network's pre-activations: (i) layer normalization induces a collapse towards channel-wise constant functions; (ii) instance normalization induces a lack of variability in instance statistics, symptomatic of an alteration of the expressivity. To alleviate failure mode (i) without aggravating failure mode (ii), we introduce the technique "Proxy Normalization" that normalizes post-activations using a proxy distribution. When combined with layer normalization or group normalization, this batch-independent normalization emulates batch normalization's behavior and consistently matches or exceeds its performance.
accept
The paper proposes a theoretical analysis of different normalization schemes. It demonstrates that normalization schemes which don't rely on batch statistics suffer from channel collapse or a lack of expressivity. It then proposes a novel normalization scheme to address those issues. Although the practicality of the algorithm could be improved, proxy-norm is able to match BN while not using batch statistics. Additionally, the analysis presented in the paper is novel and reviewers agreed that it would be of value to the community.
val
[ "khtenjlgFVN", "Gsal7n2m-0p", "Scdrj2YU8gL", "yVZmwC-Ftm", "SUm3XgcG1jH", "yhUH-9lKaXB", "WNx09mBOPwc", "hvgP3Q1ur40", "_xsQ-hOB-I0", "ZQAiBpdIFD" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We wish to thank all the reviewers for their post-rebuttal feedback. \n", " I have read the author responses and other reviews\n\nThe authors have addressed my concerns about the implementation issues, and I also think the theoretical motivations and exposition for the proposed proxy-normalized method are fasci...
[ -1, -1, 6, 7, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, -1, 5 ]
[ "_xsQ-hOB-I0", "WNx09mBOPwc", "nips_2021_pzmwfDLoANS", "nips_2021_pzmwfDLoANS", "yVZmwC-Ftm", "SUm3XgcG1jH", "ZQAiBpdIFD", "Scdrj2YU8gL", "nips_2021_pzmwfDLoANS", "nips_2021_pzmwfDLoANS" ]
nips_2021_-t6TeG3A6Do
Dynamic Bottleneck for Robust Self-Supervised Exploration
Exploration methods based on pseudo-count of transitions or curiosity of dynamics have achieved promising results in solving reinforcement learning with sparse rewards. However, such methods are usually sensitive to environmental dynamics-irrelevant information, e.g., white-noise. To handle such dynamics-irrelevant information, we propose a Dynamic Bottleneck (DB) model, which attains a dynamics-relevant representation based on the information-bottleneck principle. Based on the DB model, we further propose DB-bonus, which encourages the agent to explore state-action pairs with high information gain. We establish theoretical connections between the proposed DB-bonus, the upper confidence bound (UCB) for the linear case, and the visiting count for the tabular case. We evaluate the proposed method on the Atari suite with dynamics-irrelevant noise. Our experiments show that exploration with the DB-bonus outperforms several state-of-the-art exploration methods in noisy environments.
accept
The paper presents a method for intrinsic exploration based on learning a representation of observations that can ignore distractors using the Dynamic Bottleneck principle. Optimality of the exploration bonus in the case of bandits is established. Results on standard benchmarks used in the exploration literature are convincing. Reviewers unanimously vote to accept the paper, which I agree with.
test
[ "r9Bf9VlBeV-", "XxRcvLAnN1", "kO_hO2g-2mf", "zi5ScX8tZQ", "78b5aiNQfF9", "B6vYAhSxT6A", "XPF6nhJCSXU", "T7xOpfstyhn", "j4Km1ltV9Ty", "ekkx-6omh2X", "6T7Jb_B8bPe", "uTCz8nsBCO9", "pPlB44S47s", "_mKD2Y8QsP5", "v3gi7OnMPEc", "YYI28_bgXx" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We appreciate the suggestions about the limitation of our work and agree that they are necessary for our work. Indeed, since pure exploration without extrinsic rewards is very difficult in most tasks, a random baseline is required to show whether the exploration methods learn meaningful behaviors. We adopt the ra...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "kO_hO2g-2mf", "78b5aiNQfF9", "78b5aiNQfF9", "XPF6nhJCSXU", "ekkx-6omh2X", "pPlB44S47s", "6T7Jb_B8bPe", "nips_2021_-t6TeG3A6Do", "nips_2021_-t6TeG3A6Do", "uTCz8nsBCO9", "T7xOpfstyhn", "j4Km1ltV9Ty", "YYI28_bgXx", "v3gi7OnMPEc", "nips_2021_-t6TeG3A6Do", "nips_2021_-t6TeG3A6Do" ]
nips_2021_3BI2dazLpN
ProTo: Program-Guided Transformer for Program-Guided Tasks
Programs, consisting of semantic and structural information, play an important role in the communication between humans and agents. Towards learning general program executors to unify perception, reasoning, and decision making, we formulate program-guided tasks which require learning to execute a given program on the observed task specification. Furthermore, we propose the Program-Guided Transformer (ProTo), which integrates both semantic and structural guidance of a program by leveraging cross-attention and masked self-attention to pass messages between the specification and routines in the program. ProTo executes a program in a learned latent space and enjoys stronger representation ability than previous neural-symbolic approaches. We demonstrate that ProTo significantly outperforms the previous state-of-the-art methods on GQA visual reasoning and 2D Minecraft policy learning datasets. Additionally, ProTo demonstrates better generalization to unseen, complex, and human-written programs.
accept
All reviewers appreciated the novel approach for learning to execute symbolic programs using both program syntax and semantics. Adding additional experimental evaluation and comparisons to IPA-GNN and symbolic planner for Minecraft was also greatly appreciated. It would be great to incorporate the feedback, and the experimental and architectural clarifications from the reviews and author responses in the final version.
val
[ "CFOD0caAoG1", "vu-p7rrsxa5", "HTP45b6bE7X", "o3_LXTBqN3d", "2Kbab1nXT6y", "-UwHtICblAL", "soclCmqj4KV", "x5rq1PmSIJ7", "xbgcqOqFQ6e", "pbV4brCnKJj", "gF_uRMO4cz3", "bk2AzjQpgXv", "2oVVQrnKZhe", "xXlL7Ltm7w6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the very detailed response -- combined with the responses to the other reviewers, I have no further questions and will continue in the general discussion thread among reviewers above!", " Thanks for the detailed response! The experimental setting is clearer now, and I do not have more questions at...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "xbgcqOqFQ6e", "x5rq1PmSIJ7", "o3_LXTBqN3d", "2Kbab1nXT6y", "soclCmqj4KV", "nips_2021_3BI2dazLpN", "xXlL7Ltm7w6", "2oVVQrnKZhe", "gF_uRMO4cz3", "bk2AzjQpgXv", "nips_2021_3BI2dazLpN", "nips_2021_3BI2dazLpN", "nips_2021_3BI2dazLpN", "nips_2021_3BI2dazLpN" ]
nips_2021_GAiM0RXrMfF
An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning
Transfer Learning has shown great potential to enhance single-agent Reinforcement Learning (RL) efficiency. Similarly, Multiagent RL (MARL) can also be accelerated if agents can share knowledge with each other. However, how an agent should learn from other agents remains an open problem. In this paper, we propose a novel Multiagent Policy Transfer Framework (MAPTF) to improve MARL efficiency. MAPTF learns which agent's policy is the best to reuse for each agent and when to terminate it by modeling multiagent policy transfer as the option learning problem. Furthermore, in practice, the option module can only collect all agents' local experiences for its update due to the partial observability of the environment. In this setting, the agents' experiences may be inconsistent with each other, which may cause inaccuracy and oscillation in the option-value estimation. Therefore, we propose a novel option learning algorithm, successor representation option learning, to solve this by decoupling the environment dynamics from rewards and learning the option-value under each agent's preference. MAPTF can be easily combined with existing deep RL and MARL approaches, and experimental results show it significantly boosts the performance of existing methods in both discrete and continuous state spaces.
accept
The reviewer team agrees that this paper has some clear strengths and weaknesses: (+) It proposes a new way of combining two known techniques, the option framework and successor representation, for MARL. (+) The proposed algorithm achieves good performance empirically. (+) The additional experimental results provided in the author response phase addressed some of the reviewers' concerns, especially the comparison with QMIX and the additional ablation study. These results improve the quality of the paper and the significance of the work. (-) The applicability of the proposed technique is limited by some level of homogeneity or similarity among the agents. The authors discussed how the technique can be used for heterogeneous agents in the response phase, but the benefit of the framework clearly relies on a (sub)set of agents who have similarities in performing the tasks and do not need to carefully coordinate with each other. (-) While the authors claim that they have made efforts to improve the clarity of the paper since the last submission to an earlier conference, the reviewer team finds that the paper still lacks clarity in a number of places, which makes it hard for the reader to understand the paper. For example, it was not very clear which parameters are shared and which ones are agent-specific. Alg 1 lacks clear explanations of some of the important steps. Some of the related work is mentioned without explaining the differences from this work clearly. The authors' response clarified some of these points, but the paper's clarity needs to be improved. In addition, there are typos and grammar mistakes throughout the paper. Overall, the reviewer team views this paper as a borderline paper mainly due to the lack of clarity. The novel combination of the option framework and successor representation may inspire further research, and the significantly improved empirical performance over baselines is promising. In the next version of the paper, I would recommend the authors include the additional experimental results presented in the response phase and improve the clarity of the paper based on reviewers' comments and suggestions.
train
[ "2VC1MXz2gC", "oOJfe3rWvbO", "NUlHjvuFxXi", "eRt0L5WkltP", "l1K8Zjt-MYF", "5l-DELojro2", "Oe__VZ56ANb", "1JuVV9lp6sU", "T3KkgV4xN6w", "OCY1JHKcapU", "VfsmZMlJfQG", "7EITnA6PHd", "vDkVA6fLWG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a Multiagent Policy Transfer Framework (MAPTF) that learns which agent policy is appropriate for reuse and when to terminate it. Consequently, MAPTF is seen as an option learning approach to policy transfer; it uses the Successor Representation Option (SRO) algorithm proposed by the authors to ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_GAiM0RXrMfF", "eRt0L5WkltP", "l1K8Zjt-MYF", "2VC1MXz2gC", "vDkVA6fLWG", "1JuVV9lp6sU", "1JuVV9lp6sU", "T3KkgV4xN6w", "7EITnA6PHd", "vDkVA6fLWG", "2VC1MXz2gC", "nips_2021_GAiM0RXrMfF", "nips_2021_GAiM0RXrMfF" ]
nips_2021_Fw0IQgaGlhh
Learning to Time-Decode in Spiking Neural Networks Through the Information Bottleneck
One of the key challenges in training Spiking Neural Networks (SNNs) is that target outputs typically come in the form of natural signals, such as labels for classification or images for generative models, and need to be encoded into spikes. This is done by handcrafting target spiking signals, which in turn implicitly fixes the mechanisms used to decode spikes into natural signals, e.g., rate decoding. The arbitrary choice of target signals and decoding rule generally impairs the capacity of the SNN to encode and process information in the timing of spikes. To address this problem, this work introduces a hybrid variational autoencoder architecture, consisting of an encoding SNN and a decoding Artificial Neural Network (ANN). The role of the decoding ANN is to learn how to best convert the spiking signals output by the SNN into the target natural signal. A novel end-to-end learning rule is introduced that optimizes a directed information bottleneck training criterion via surrogate gradients. We demonstrate the applicability of the technique in experimental settings on various tasks, including real-life datasets.
accept
I want to thank the authors and reviewers for engaging with this paper, as well as the reviewers for participating in an internal discussion. There is some disagreement between the reviewers, with several positive reviews and one leaning toward rejection. After reading the paper, and all of the related discussion, I'm going to recommend this paper for acceptance. I feel as though the work has sufficient Quality, Originality, Clarity and Significance for acceptance. I believe that many of the remaining issues the reviewers have are largely stylistic and are not enough to block acceptance.
train
[ "sNomBy7CilT", "iElSOCfGYsu", "ThsBVn3okz", "vR8BSiZRYu9", "52gbhXFG4uZ", "Qy0h_1GLtsY", "A3nShbx4uf", "DdHNy-iSVnJ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed comments. \n \nReadability. \\\n[1] The use of some symbols is confusing, e.g., \";\" in Eq. (2). \\\n(A) We have used standard notation, e.g., the use of the semi-colon is as in the reference textbook by Cover and Thomas, Elements of Information Theory (Wiley, 2001).\n \n[2] As seen i...
[ -1, -1, -1, -1, 7, 8, 6, 5 ]
[ -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "DdHNy-iSVnJ", "A3nShbx4uf", "Qy0h_1GLtsY", "52gbhXFG4uZ", "nips_2021_Fw0IQgaGlhh", "nips_2021_Fw0IQgaGlhh", "nips_2021_Fw0IQgaGlhh", "nips_2021_Fw0IQgaGlhh" ]
nips_2021_76tTYokjtG
NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform
Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert
accept
The paper develops a new normalizing-constant estimation method which is unbiased, and leverages this to achieve improved sampling methods for the same family of distributions. From a technical perspective, the insights are definitely above the bar. The main concern that the reviewers and I shared is the presentation of the paper and the lack of intuition provided for some of its more technical aspects. While I understand that space is limited, I encourage the authors to expand upon the exposition in Section 2 in particular. Additionally, the experimental evaluation was lacking; however, in the discussions, the authors provided additional experiments and context which I think will improve the paper substantially.
val
[ "jlSL09-275V", "9YQuztM94Hg", "aHJBzvesLkk", "tTzdGZyHCD", "qwdhwyXZ7A_", "ggohY7o4zOz", "qMdnKp-CnIQ", "gE-LNgv97fo", "O2ECUcfMnzc", "2z0P527mqMG", "ZFllpMey20v", "YqbniSGFz15", "TkwcxaEjTWe" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes NEO which extends Non-Equilibrium Importance Sampling (NEIS) to use discrete orbit and extra weights.\nThe probability distribution and sampling process is derived based on the Jacobian determinant.\nThis method could be applied to estimate the partition function and a biased estimation called ...
[ 6, -1, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, -1, -1, 2, -1, 4, -1, -1, -1, -1, -1, 2 ]
[ "nips_2021_76tTYokjtG", "nips_2021_76tTYokjtG", "tTzdGZyHCD", "O2ECUcfMnzc", "nips_2021_76tTYokjtG", "ZFllpMey20v", "nips_2021_76tTYokjtG", "YqbniSGFz15", "jlSL09-275V", "TkwcxaEjTWe", "qwdhwyXZ7A_", "qMdnKp-CnIQ", "nips_2021_76tTYokjtG" ]
nips_2021_ahrSWZgjkg
Relaxing Local Robustness
Klas Leino, Matt Fredrikson
accept
The authors propose relaxations of robustness that deal with the fact that tasks can have varying label ambiguity. All reviewers agreed on the quality of the work and its relevance to security/safety-critical settings. In particular, this definition will likely help scale certification approaches to larger datasets where label ambiguity is more common. In addition to editing the paper to include the new discussion presented in their responses, I encourage the authors to make their code public to facilitate reproducibility. In particular, it will be important to clarify in the writing that the focus in this paper is on deterministic certification of L2 robustness so that the choice of GloRo Nets is better motivated. The authors are also encouraged to outline clearly how follow-up work may study other certification techniques, as suggested by the discussion with reviewer TzLt.
train
[ "aipVAKS1IO", "FxPWU97e-hJ", "tQqn14hSBKh", "36HIdEch37e", "83FC50baDRN", "5Jn1WFM8Qci", "w1esewYiYCP", "I5BqSC9UNwv", "GsfCncR5FF7", "y6JrhOegTXk", "dhKEG10N5pv", "k3B1dxA0yu", "gov9nvZIXu", "oAXfMq0jXJL", "Kj6EN3XLfdb", "QFhQNYf6gv7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the reply. This addressed my concerns. ", " Thank you for the detailed response. I look forward to seeing the CIFAR10 vs CIFAR100 10 super class comparison in the final version", "The paper introduced two relaxed safety properties for classifiers: (1) relaxed top-k robustness, which is different...
[ -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "w1esewYiYCP", "gov9nvZIXu", "nips_2021_ahrSWZgjkg", "I5BqSC9UNwv", "nips_2021_ahrSWZgjkg", "k3B1dxA0yu", "dhKEG10N5pv", "y6JrhOegTXk", "oAXfMq0jXJL", "tQqn14hSBKh", "QFhQNYf6gv7", "83FC50baDRN", "Kj6EN3XLfdb", "nips_2021_ahrSWZgjkg", "nips_2021_ahrSWZgjkg", "nips_2021_ahrSWZgjkg" ]
nips_2021_Bx6qKuBM2AD
Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
Ge Yang, Edward Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, Jianfeng Gao
accept
This paper proposes an approach for "zero-shot transfer" of optimal hyperparameters from small models to large models, which enables efficient tuning of large models such as GPT-3. Such a study on efficient tuning of large models is highly anticipated by our community, and may inspire future research in this area. Reviewers generally recommended acceptance (4 out of 5), acknowledging the significance and experimentation of this work. They pointed out several important concerns, with one referee remaining below the bar. The authors shall revise their paper to: 1. Elaborate the novelty by connecting to the Maximal Update Parameterization [40], highlighting clearly which parts are due to [40] and which parts are novel in this paper; 2. Tone down the claim of "zero-shot" hyperparameter transfer, because hyperparameters of the regularization and the optimizers cannot be transferred, and the hyperparameter transfer can only be applied within the same model family; 3. Document clearly in which cases their approach is applicable, e.g. which hyperparameters can be transferred and which ones cannot, and how practitioners can deal with the difficult ones -- simply bypassing difficult cases is not good, because it would make the scope of the approach quite narrow. I would stress that the author rebuttal did not respond to all the concerns, so the acceptance is conditioned on the authors' revision, in which at least the points in this meta-review must be well addressed.
train
[ "DdF6XeA2UC", "W-UIcy0rWk7", "1iD6vYGvmCY", "cVPHAzEoq7u", "rEa4Tly6cUY", "TXLNzNgjmrx", "flEHR9S3oz", "annM4tL-Od9", "BblrzsfTmD8", "SFTV8fEKj5J", "YSYBDoPHun0", "n5jXD0CS2IX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an alternative method for tuning hyperparameters which is to tune hyperparameters on a small width model and transfer these sets of hyperparameters to the full network. The core idea is to utilize the maximal update parameterization so that optimization hyperparameters remain stable under model ...
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "nips_2021_Bx6qKuBM2AD", "cVPHAzEoq7u", "SFTV8fEKj5J", "n5jXD0CS2IX", "DdF6XeA2UC", "YSYBDoPHun0", "BblrzsfTmD8", "nips_2021_Bx6qKuBM2AD", "nips_2021_Bx6qKuBM2AD", "nips_2021_Bx6qKuBM2AD", "nips_2021_Bx6qKuBM2AD", "nips_2021_Bx6qKuBM2AD" ]
nips_2021_iFadi3f5V5I
Statistical Regeneration Guarantees of the Wasserstein Autoencoder with Latent Space Consistency
The introduction of Variational Autoencoders (VAE) has been marked as a breakthrough in the history of representation learning models. Besides having several accolades of its own, VAE has successfully flagged off a series of inventions in the form of its immediate successors. The Wasserstein Autoencoder (WAE), being an heir to that realm, carries with it all of the goodness and heightened generative promises, matching even generative adversarial networks (GANs). Needless to say, recent years have witnessed a remarkable resurgence in statistical analyses of GANs. Similar examinations for autoencoders, however, despite their diverse applicability and notable empirical performance, remain largely absent. To close this gap, in this paper, we investigate the statistical properties of WAE. Firstly, we provide statistical guarantees that WAE achieves the target distribution in the latent space, utilizing the Vapnik–Chervonenkis (VC) theory. The main result, consequently, ensures the regeneration of the input distribution, harnessing the potential offered by Optimal Transport of measures under the Wasserstein metric. This study, in turn, hints at the class of distributions WAE can reconstruct after suffering a compression in the form of a latent law.
accept
All reviewers liked the paper and support acceptance. So it is a clear accept. The reviewers provided good feedback on shortcomings in the presentation. The authors are strongly encouraged to take these comments into account for the final version of the paper.
train
[ "uO5t3bxbDIZ", "mavUg27XYn", "kOfOE_Va6hx", "4AbHahN4Odu", "fBaG-jBRQkI", "rQ8AfdEqQur", "GudpIvzQdn_", "INbT1QP3rS", "LfvC1FkKKfM", "_0GP2YPyav" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ " Thank you for the response, which answers my questions to greater detail than I expected. I continue to feel that this paper should be accepted at NeurIPS.", "This paper analyzes the statistical properties of WAE, providing theoretical guarantees for both sample reconstruction and latent distribution tasks. To ...
[ -1, 7, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, 3, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "LfvC1FkKKfM", "nips_2021_iFadi3f5V5I", "4AbHahN4Odu", "mavUg27XYn", "nips_2021_iFadi3f5V5I", "INbT1QP3rS", "INbT1QP3rS", "fBaG-jBRQkI", "_0GP2YPyav", "nips_2021_iFadi3f5V5I" ]
nips_2021_urueR03mkng
Leveraging the Inductive Bias of Large Language Models for Abstract Textual Reasoning
Christopher Rytting, David Wingate
accept
This paper has certainly been put through the wringer of the "moving goalposts" of successive conference reviews, and for that I have a lot of sympathy (although I would not make a decision on that basis). This paper presents an in-depth study of the ability of large pre-trained language models to generalise well on symbolic reasoning tasks, i.e. verifying whether some of the knowledge distilled through the pre-training objective(s) presents a useful inductive bias that aids generalisation in downstream tasks that have a more structured nature than, say, RTE or QA. The reviewers concur that the paper is well written and presents interesting results. Given the interest the community has in this class of models, I concur that this is an important and timely contribution. Reviewer VwQ8 (rating 7) writes in strongest support, and in no small amount of detail, clearly seeking to champion the paper for publication. From reading Reviewer hX8A (rating 6) I could not tell why the rating was not higher. Reviewer KcKG (rating 6) mildly supports the paper, while presenting some concerns about whether the experiments are conclusive. I would have liked to read their thoughts following the authors' detailed response, but sadly these seem not to have been forthcoming. Finally, Reviewer gnNv (rating 3) wrote at significant length regarding their concerns about this paper, and engaged in what I think must be the most detailed discussion I've witnessed during this round of area chairing. Academically speaking, I find myself agreeing with a lot of the objections and concerns Reviewer gnNv raises. The discussion with the authors here makes for a fascinating read, and I am not clear where the reviewer landed in terms of revising their assessment. I think the fairest way to view the outcome of this conversation is that there are fundamental difficulties with evaluating the sort of question the paper seeks to address, and in a sense the concerns levelled here are a statement about these difficulties more than they are a statement about the appropriateness of the approach the authors took, and the rigour thereof. Perhaps the paper is worth publishing on the grounds of its ability to generate such discussion alone. Taking these reviews holistically, I have formed the strong opinion that the paper should be accepted. It generates debate and controversy, but for the right reasons, and attempts to bring further light to the capabilities (and limits) of large pretrained language models. I would hope that the authors could, in as much detail as possible, incorporate their responses to reviewers KcKG and gnNv in the final draft of the paper, so that readers may benefit from these conversations (albeit indirectly).
train
[ "YA-HtgxMmAa", "lrttu4zTIKD", "RSPYOm8V9SW", "RDS_yE4RAyi", "_T6OtHUbCd", "qrH4geOSNmd", "hGsZk9AYIUQ", "Fqli0Wb1kzN", "fFbE9__G81a", "2cw6WK-gV-k", "88JFRyvQiC" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ```The axes of generalization we explore are useful not just to see whether the model grasps the task by substituting similar words (which we show with part-of-speech generalization, and could be an ability also possessed by embeddings), but also to understand the limits of cardinality generalization in terms of ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ "lrttu4zTIKD", "RSPYOm8V9SW", "qrH4geOSNmd", "88JFRyvQiC", "2cw6WK-gV-k", "fFbE9__G81a", "Fqli0Wb1kzN", "nips_2021_urueR03mkng", "nips_2021_urueR03mkng", "nips_2021_urueR03mkng", "nips_2021_urueR03mkng" ]
nips_2021_j3fpZLKcXF
Differentiable Simulation of Soft Multi-body Systems
We present a method for differentiable simulation of soft articulated bodies. Our work enables the integration of differentiable physical dynamics into gradient-based pipelines. We develop a top-down matrix assembly algorithm within Projective Dynamics and derive a generalized dry friction model for soft continuum using a new matrix splitting strategy. We derive a differentiable control framework for soft articulated bodies driven by muscles, joint torques, or pneumatic tubes. The experiments demonstrate that our designs make soft body simulation more stable and realistic compared to other frameworks. Our method accelerates the solution of system identification problems by more than an order of magnitude, and enables efficient gradient-based learning of motion control with soft robots.
accept
Reviewers unanimously enjoyed this paper and thought it was a timely and impactful contribution to NeurIPS, especially in reinforcement learning and robotics. Reviewers especially highlighted the soft-body model and the motion planning experiments. There were some questions about the clarity of the exposition, and so the authors are encouraged to address the issues raised by the referees before the camera ready version of the paper is due.
test
[ "mmL6yFOCHzN", "7rEiy7dHXsv", "m2ahrrvRSg", "Ez10yXKtW02", "6azxTHTtdgw", "RbaySalZp1v", "otlYt_lOuhC", "PzziGaWBfSl", "zOEz2MZHaGX", "DPx9WdE9Ge", "sVlvgHPG3n", "tOtRD4MbxkX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for all your suggestions and questions. We will address the comments and update the changes in our rebuttal to the revised version.", " Thank you for your responses. After reading the other reviews and the corresponding rebuttal statements I still think this is an interesting paper and I suppo...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "7rEiy7dHXsv", "otlYt_lOuhC", "Ez10yXKtW02", "PzziGaWBfSl", "zOEz2MZHaGX", "sVlvgHPG3n", "tOtRD4MbxkX", "DPx9WdE9Ge", "nips_2021_j3fpZLKcXF", "nips_2021_j3fpZLKcXF", "nips_2021_j3fpZLKcXF", "nips_2021_j3fpZLKcXF" ]
nips_2021_TLXpi2j6F7
Good Classification Measures and How to Find Them
Several performance measures can be used for evaluating classification results: accuracy, F-measure, and many others. Can we say that some of them are better than others, or, ideally, choose one measure that is best in all situations? To answer this question, we conduct a systematic analysis of classification performance measures: we formally define a list of desirable properties and theoretically analyze which measures satisfy which properties. We also prove an impossibility theorem: some desirable properties cannot be simultaneously satisfied. Finally, we propose a new family of measures satisfying all desirable properties except one. This family includes the Matthews correlation coefficient and a so-called symmetric balanced accuracy that was not previously used in classification literature. We believe that our systematic approach gives an important tool to practitioners for adequately evaluating classification results.
accept
Throughout the discussion the reviewers agreed that the paper provides an interesting and comprehensive study of performance measures, and that it is of interest to the NeurIPS community. We urge the authors to revise the paper according to the comments provided by the reviewers in the rebuttal. Thanks for submitting your work to NeurIPS!
train
[ "kFV2HslNC4o", "nL6bUGSh35", "kdPq_asKbpg", "tyJycNUVQCC", "_Huxh-2APTe", "uDaFWLeHrjs", "kvHkN55lnkw", "011SmRSZZp", "Q_SfTMr-FBO" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you again for your suggestions; we improved the paper accordingly. Since there is about a week left to continue the discussion, we would like to know whether you have any additional comments after reading our response. We believe that our analysis of the constant baseline addresses the main...
[ -1, -1, 7, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, 3, 4 ]
[ "Q_SfTMr-FBO", "tyJycNUVQCC", "nips_2021_TLXpi2j6F7", "uDaFWLeHrjs", "Q_SfTMr-FBO", "kdPq_asKbpg", "011SmRSZZp", "nips_2021_TLXpi2j6F7", "nips_2021_TLXpi2j6F7" ]
nips_2021_90M-91IZ0JC
Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck
Adversarial examples, generated by carefully crafted perturbations, have attracted considerable attention in research fields. Recent works have argued that the existence of robust and non-robust features is a primary cause of adversarial examples, and have investigated their internal interactions in the feature space. In this paper, we propose a way of explicitly distilling feature representations into robust and non-robust features using the Information Bottleneck. Specifically, we inject noise variation into each feature unit and evaluate the information flow in the feature representation to dichotomize feature units as either robust or non-robust, based on the noise variation magnitude. Through comprehensive experiments, we demonstrate that the distilled features are highly correlated with adversarial prediction, and that they carry human-perceptible semantic information by themselves. Furthermore, we present an attack mechanism intensifying the gradient of non-robust features that is directly related to the model prediction, and validate its effectiveness in breaking model robustness.
accept
This paper utilizes the information bottleneck mechanism to disentangle robust and non-robust features in the representation space of a deep neural network. The paper shows a high correlation between the (wrong) prediction on an adversarial example and the identified non-robust features for the example. Finally, the paper proposes a new adversarial attack that aims to enlarge the gradient for the non-robust features to degrade the model performance. Even though robust and non-robust features have been discussed in the context of adversarial examples before, this paper proposes an interesting and novel approach to disentangle these features in the representation space. The experiments in the paper do demonstrate a clear connection between the non-robust features and the performance of the model on adversarial examples. Furthermore, the identified adversarial attack is shown to be very effective in rendering the model useless. Thus, this paper will be a valuable addition to NeurIPS 2021. There is some room for improvement in the writing of the paper. Please address the following issues in the revised version. Many notations, key definitions, and algorithmic details are not formally introduced; instead, they appear as part of various paragraphs. There is no discussion of computational complexity issues, especially in the context of the proposed attack. Furthermore, how feasible is it to perform the proposed attack (NRF), as it requires first disentangling the robust vs. non-robust features and then accordingly perturbing those? Does NRF provide any guidance to train more robust models? Interestingly, it appears from Table 4 that model degradation is quite significant even when robust features are perturbed. In fact, perturbing the robust features is more effective than all baselines considered in Table 3 (obviously, except NRF). This point is glossed over in lines 284-288. Addressing these aforementioned issues will certainly improve the overall quality and impact of this paper.
train
[ "QjerYtJ0YY_", "95VC3yOgmdg", "NDkKs-i1Sl7", "UDREBiMJmUf", "aB4bjrOzCF4", "6M_tLrks6t9", "CEWzBqGXUkp", "Rnt48tnC26E", "356hzldHyz", "XodPYkPvvd", "MjUnjebF9nd", "uMsCw-PSZZW" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for suggestions to improve the presentation in this paper. Below, we have inline answers for the reviewer’s suggestions.\n\n---\n\n**Q1. I suggest moving that information to the main body.**\n\n**A1.** We agree with that it is better to move information of L2 bounded attack for...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "NDkKs-i1Sl7", "nips_2021_90M-91IZ0JC", "aB4bjrOzCF4", "nips_2021_90M-91IZ0JC", "95VC3yOgmdg", "uMsCw-PSZZW", "UDREBiMJmUf", "MjUnjebF9nd", "XodPYkPvvd", "nips_2021_90M-91IZ0JC", "nips_2021_90M-91IZ0JC", "nips_2021_90M-91IZ0JC" ]
nips_2021_FwVmM8Zol_8
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels
Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems. Motivated by a desire to deploy Gaussian processes in novel areas of science, a rapidly-growing line of research has focused on constructively extending these models to handle non-Euclidean domains, including Riemannian manifolds, such as spheres and tori. We propose techniques that generalize this class to model vector fields on Riemannian manifolds, which are important in a number of application areas in the physical sciences. To do so, we present a general recipe for constructing gauge independent kernels, which induce Gaussian vector fields, i.e. vector-valued Gaussian processes coherent with geometry, from scalar-valued Riemannian kernels. We extend standard Gaussian process training methods, such as variational inference, to this setting. This enables vector-valued Gaussian processes on Riemannian manifolds to be trained using standard methods and makes them accessible to machine learning practitioners.
accept
The majority of the reviewers argued rather strongly for accepting this paper. One reviewer saw the paper as more borderline, but the concerns were addressed in the rebuttal. This paper is for the most part well written and, as one of the reviewers says, a joy to read. However, the reviewers have all made a number of remarks for improving the paper, and I recommend that you take care in incorporating them in the camera-ready.
train
[ "2sUWj6CL3CF", "7AzmhDI41iE", "8viUFp2s88", "h3-NaLBbyXN", "Kqtd4irGK7r", "3cyHSj-HA0D", "oTwRI7Uvxx", "gIiXqa2HP6R" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a natural extension of vector-valued GPs to data residing on Riemannian manifolds. The construction is straight-forward (replace matrix products with bilinear forms), which I consider a good thing. This also imply that it is relatively easy to apply the developed theory in practice. Toy demonstr...
[ 8, 7, -1, -1, -1, -1, 9, 5 ]
[ 5, 4, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_FwVmM8Zol_8", "nips_2021_FwVmM8Zol_8", "2sUWj6CL3CF", "7AzmhDI41iE", "gIiXqa2HP6R", "oTwRI7Uvxx", "nips_2021_FwVmM8Zol_8", "nips_2021_FwVmM8Zol_8" ]
nips_2021_sjRlHsawmRf
On the Representation Power of Set Pooling Networks
Point clouds and sets are ubiquitous, unusual and unstructured data types that present unique problems to machine learning when used as inputs. Since sets are inherently unaffected by permutations, the input space for these problems naturally forms a non-Euclidean space whose topology depends on the task and which is oftentimes not even a manifold. Moreover, similar inputs can have wildly varying cardinalities and so the input space in its most general form is infinite-dimensional. Despite these mathematical difficulties, PointNet (Qi et al. 2017) and Deep Sets (Zaheer et al. 2017) form two foundational contributions for deep learning in this area. In this paper we study the expressive power of such networks and prove new cardinality-agnostic universality results for point clouds as well as extensions of these models beyond point clouds. These results completely characterize the approximable functions and so can be used to compare the representational strength of the underlying model classes. In particular, a normalized version of the DeepSets architecture cannot uniformly approximate the diameter function but can uniformly approximate the center-of-mass function, whereas PointNet can uniformly approximate the former but not the latter. Additionally, even when limited to a fixed input cardinality, PointNet cannot uniformly approximate the average value of a continuous function over sets of more than two points. We additionally obtain explicit lower bounds on this approximation error and present a simple geometric method to produce arbitrarily many examples of this failure mode.
accept
This paper gives a novel theoretical understanding of what functions "normalized" DeepSets and PointNets can represent, from a topological point of view. The work is novel and provides meaningful guidance to the community about the difference between these approaches; many of the (numerous) reviewers, however, had complaints about the clarity of the presentation as well as confusion about some specific points. It will greatly benefit the paper to take these into account for the camera-ready version.
train
[ "jcee7Zmg8Jw", "35Uv0_pZ98d", "Mt88XUyhpt", "SUH_r0DDei", "SG4bE0KO5u7", "fjW_XJy6Nez", "nAKrsrO7E0y", "RWWA6AEXlID", "Tp6tmwD924d", "8Ed_SybjGfL", "77LBCqgLYRX", "qIcY0tSmfnW", "wsIxQT3V6wB", "xbzPcQKdTo", "cLoQuyDxTv", "mXrq6vvAQuO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors prove universality results for permutation invariant networks: in particular for PointNet and 'normalized DeepSets'. The two classes of networks differ in the pooling layer: max pooling for PointNet and averaging for Normalized DeepSets.\n\nDespite their apparent similarity, the authors show that these...
[ 6, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ 4, 2, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "nips_2021_sjRlHsawmRf", "nips_2021_sjRlHsawmRf", "8Ed_SybjGfL", "RWWA6AEXlID", "nips_2021_sjRlHsawmRf", "qIcY0tSmfnW", "77LBCqgLYRX", "jcee7Zmg8Jw", "mXrq6vvAQuO", "35Uv0_pZ98d", "cLoQuyDxTv", "SG4bE0KO5u7", "xbzPcQKdTo", "nips_2021_sjRlHsawmRf", "nips_2021_sjRlHsawmRf", "nips_2021_sj...
nips_2021_Nl7VO_Y7K4Q
Learning Policies with Zero or Bounded Constraint Violation for Constrained MDPs
Tao Liu, Ruida Zhou, Dileep Kalathil, Panganamala Kumar, Chao Tian
accept
This paper studies the problem of learning episodic CMDPs, which provides improved theoretical results comparing to previous works. This work is technically sound and novel with strong results. All the reviewers have reached a consensus on the acceptance of this work. We suggest that the authors further polish their paper to prepare a camera-ready version based on the reviewers’ comments.
train
[ "xfzMFtBHCQ9", "JKnbWYP7PBd", "DxjK5l6eEdu", "hdPGZ946Ul", "PjFB9G1d1BR", "2fh6wo3E4Mv" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents two novel reinforcement learning algorithms for CMDPS, one which has no constraint violations with high probability, but requires a known safe policy, and another which bounds the probability of constraint regret, but only requires the existence of a safe policy. The first algorithm picks pessi...
[ 6, -1, -1, -1, 7, 6 ]
[ 2, -1, -1, -1, 3, 5 ]
[ "nips_2021_Nl7VO_Y7K4Q", "PjFB9G1d1BR", "2fh6wo3E4Mv", "xfzMFtBHCQ9", "nips_2021_Nl7VO_Y7K4Q", "nips_2021_Nl7VO_Y7K4Q" ]
nips_2021_yH2VrkpiCK6
A Prototype-Oriented Framework for Unsupervised Domain Adaptation
Existing methods for unsupervised domain adaptation often rely on minimizing some statistical distance between the source and target samples in the latent space. To avoid the sampling variability, class imbalance, and data-privacy concerns that often plague these methods, we instead provide a memory and computation-efficient probabilistic framework to extract class prototypes and align the target features with them. We demonstrate the general applicability of our method on a wide range of scenarios, including single-source, multi-source, class-imbalance, and source-private domain adaptation. Requiring no additional model parameters and having a moderate increase in computation over the source model alone, the proposed method achieves competitive performance with state-of-the-art methods.
accept
Thanks for your submission to NeurIPS. This paper had somewhat mixed reviews, with 3 mostly positive and 1 more negative reviewer. During the rebuttal phase, the negative reviewer maintained his/her position that the paper was not ready for publication, so we did not fully reach consensus on the paper. However, I appreciated the responses that the authors provided to all the reviews, particularly to the more negative reviewer. There are some valid concerns here---particularly with regard to the experimental comparisons---but overall I am positive about this paper and am happy to recommend it for acceptance at the conference. Please try to incorporate as much of the rebuttal material as you can in the final paper, as it will strengthen the paper, as well as making sure to address as many of the reviewer criticisms as possible.
train
[ "_fmz0jErew", "9lgfyuA_1rq", "sLoArOGo80", "MqOjgcPdAF4", "BVHzJu6KJtz", "KcB0YNA2W2", "_e_YL3hZfmb", "fBNeWoxIp2", "viaq2BUEzfl", "pni-b7M8dR2", "zkDjXP3E4ck", "dNQLg2QOGM_", "-5OIfYgqeAy" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer ZA7J,\n\nWe sincerely appreciate you responding to our rebuttal and explaining your rating. We will carefully revise our paper to incorporate these valuable comments and suggestions of all four reviewers. We believe our revisions will mainly help enhance the clarity and provide further reassurance o...
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "9lgfyuA_1rq", "nips_2021_yH2VrkpiCK6", "MqOjgcPdAF4", "BVHzJu6KJtz", "KcB0YNA2W2", "9lgfyuA_1rq", "dNQLg2QOGM_", "-5OIfYgqeAy", "zkDjXP3E4ck", "zkDjXP3E4ck", "nips_2021_yH2VrkpiCK6", "nips_2021_yH2VrkpiCK6", "nips_2021_yH2VrkpiCK6" ]
nips_2021_qDrpme0FAi
Mining the Benefits of Two-stage and One-stage HOI Detection
Two-stage methods have dominated Human-Object Interaction (HOI) detection for several years. Recently, one-stage HOI detection methods have become popular. In this paper, we aim to explore the essential pros and cons of two-stage and one-stage methods. With this as the goal, we find that conventional two-stage methods mainly suffer from positioning positive interactive human-object pairs, while one-stage methods struggle to make an appropriate trade-off in multi-task learning, i.e., object detection and interaction classification. Therefore, a core problem is how to take the essence and discard the dregs from the two conventional types of methods. To this end, we propose a novel one-stage framework that disentangles human-object detection and interaction classification in a cascade manner. In detail, we first design a human-object pair generator based on a state-of-the-art one-stage HOI detector by removing the interaction classification module or head, and then design a relatively isolated interaction classifier to classify each human-object pair. The two cascade decoders in our proposed framework can each focus on one specific task, detection or interaction classification. In terms of the specific implementation, we adopt a transformer-based HOI detector as our base model. The newly introduced disentangling paradigm outperforms existing methods by a large margin, with a significant relative mAP gain of 9.32% on HICO-Det. The source code is available at https://github.com/YueLiao/CDN.
accept
This paper presents work on human-object interaction (HOI) detection. The main contributions include an analysis of two-stage versus one-stage HOI detection methods and a multi-layer transformer architecture that achieves state of the art results. The initial reviews pointed to concerns over clarity of some terminology and overall originality and significance of the contributions. However, the reviewers unanimously recommended acceptance of the paper after the discussion period, based on the good empirical results and sensible architecture.
train
[ "205SSNH078E", "VzS3fIU8O_7", "WrP5ohzgYH8", "b6U-sF5Py0K", "-Q4jgcF4hTh", "2h7xE2_SvkU", "BdqxERLz-4X", "hhZkt1sbx9T", "S43kxJ16Wnb", "SyuNQArAiji", "mYZeufCPIs", "svptOumhwOO" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and appreciation. We will update the contents of our rebuttals and provide a more straightforward description of the re-weighting module in the revised version. Thank you again for your effort and valuable suggestions.", " Thanks for your response and appreciation. We will update all th...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "-Q4jgcF4hTh", "b6U-sF5Py0K", "nips_2021_qDrpme0FAi", "2h7xE2_SvkU", "hhZkt1sbx9T", "WrP5ohzgYH8", "svptOumhwOO", "mYZeufCPIs", "SyuNQArAiji", "nips_2021_qDrpme0FAi", "nips_2021_qDrpme0FAi", "nips_2021_qDrpme0FAi" ]
nips_2021_Qsd05NmZHIZ
Discerning Decision-Making Process of Deep Neural Networks with Hierarchical Voting Transformation
Neural network based deep learning techniques have shown great success for numerous applications. While it is desirable to understand their intrinsic decision-making processes, these deep neural networks often work in a black-box way. To this end, in this paper, we aim to discern the decision-making processes of neural networks through a hierarchical voting strategy by developing an explainable deep learning model, namely the Voting Transformation-based Explainable Neural Network (VOTEN). Specifically, instead of relying on massive feature combinations, VOTEN creatively models expressive single-valued voting functions between explicitly modeled latent concepts to achieve high fitting ability. Along this line, we first theoretically analyze the major components of VOTEN and prove the relationship and advantages of VOTEN compared with the Multi-Layer Perceptron (MLP), the basic structure of deep neural networks. Moreover, we design efficient algorithms to improve the model usability by explicitly showing the decision processes of VOTEN. Finally, extensive experiments on multiple real-world datasets clearly validate the performance and explainability of VOTEN.
accept
This work proposes a novel network structure for better explainability. While many aspects of explainability are not assessed in this paper, analyses of both local and global prediction processes are provided alongside accuracy. The term decision-making is a bit confusing, as it often refers to decision-making tasks, while this paper handles prediction tasks. Overall, this paper is notably novel, though its applicability is not entirely clear.
train
[ "hS_zz7rt94e", "GhSsfB-0dCH", "-U18xON5xPT", "ZySzfFqPJDj", "7kWl5sEQH2", "AenARmaqQFs" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ***Comment 1**: Generally speaking, this is a well-written paper, and I enjoyed reading it. The proposed model is introduced well in a convincing and insightful way.*\n\n**Response**: Thank you for your acknowledgment and all your encouraging words. \n\n***Comment 2**: The authors give straightforward analysis to...
[ -1, -1, -1, 3, 5, 8 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "AenARmaqQFs", "7kWl5sEQH2", "ZySzfFqPJDj", "nips_2021_Qsd05NmZHIZ", "nips_2021_Qsd05NmZHIZ", "nips_2021_Qsd05NmZHIZ" ]
nips_2021_QO93ev_yPqn
Risk-averse Heteroscedastic Bayesian Optimization
Many black-box optimization tasks arising in high-stakes applications require risk-averse decisions. The standard Bayesian optimization (BO) paradigm, however, optimizes the expected value only. We generalize BO to trade mean and input-dependent variance of the objective, both of which we assume to be unknown a priori. In particular, we propose a novel risk-averse heteroscedastic Bayesian optimization algorithm (RAHBO) that aims to identify a solution with high return and low noise variance, while learning the noise distribution on the fly. To this end, we model both expectation and variance as (unknown) RKHS functions, and propose a novel risk-aware acquisition function. We bound the regret for our approach and provide a robust rule to report the final decision point for applications where only a single solution must be identified. We demonstrate the effectiveness of RAHBO on synthetic benchmark functions and hyperparameter tuning tasks.
accept
This paper makes an important contribution to the theory of Bayesian optimization. Namely, it provides a sublinear cumulative regret bound in the heteroscedastic setting, the first of its kind, and shows that this regret bound additionally applies in the risk-averse setting. This result is particularly significant given that the overwhelming majority of Bayesian optimization papers assume homoscedastic noise, which is arguably less common than the heteroscedastic setting. Be sure to incorporate the minor corrections suggested by the reviewers in the final camera-ready version.
train
[ "pboBGJ4F5g", "xo8GC4dkHl", "htsHZ5yRXYR", "b1W0HAkZ-A", "wpzhA13Zjiq", "RMhfijy_UL0", "5vIyjU2UPoM", "ANOGXkmMj2k", "s_e1mpDyTs", "A0K8kpX4hmE", "ep3kdjrWYwh", "InVRvy23hWj", "gve-yK2jyb7" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the comments and suggestions. \nWe will improve the exposition accordingly.", "This paper proposes a risk-averse Bayesian optimization algorithm, which aims to find an input with both large expected return and small noise-dependent noise variance. Both the mean function ...
[ -1, 5, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "htsHZ5yRXYR", "nips_2021_QO93ev_yPqn", "wpzhA13Zjiq", "nips_2021_QO93ev_yPqn", "RMhfijy_UL0", "s_e1mpDyTs", "A0K8kpX4hmE", "gve-yK2jyb7", "xo8GC4dkHl", "InVRvy23hWj", "b1W0HAkZ-A", "nips_2021_QO93ev_yPqn", "nips_2021_QO93ev_yPqn" ]
nips_2021_btfPZRDdP2F
Invertible DenseNets with Concatenated LipSwish
We introduce Invertible Dense Networks (i-DenseNets), a more parameter-efficient extension of Residual Flows. The method relies on an analysis of the Lipschitz continuity of the concatenation in DenseNets, where we enforce invertibility of the network by satisfying the Lipschitz constant. Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation. Additionally, we introduce the Concatenated LipSwish as an activation function, for which we show how to enforce the Lipschitz condition and which boosts performance. The new architecture, i-DenseNet, outperforms Residual Flow and other flow-based models on density estimation evaluated in bits per dimension, where we utilize an equal parameter budget. Moreover, we show that the proposed model outperforms Residual Flows when trained as a hybrid model that is both generative and discriminative.
accept
This paper proposes an extension of Residual Flows to construct invertible deep neural nets. The paper features two technical contributions: the invertible residual blocks are replaced by invertible dense blocks, and the LipSwish activation function is replaced by a concatenated version. While these contributions can be considered marginal/incremental (reviewer Lrfi) and of medium novelty (reviewer psZ5), all reviewers agree the contributions offer enough value for acceptance. The reviewers acknowledged the author response and they engaged in committee discussions. No serious concerns surfaced during the review process, and the reviewers reached a consensus recommendation to accept the paper.
train
[ "7XW0ghUxx0", "BO19snoSz", "IItuEwIL-0T", "0Cs2pv9eGh", "pjlxTarPJFw", "2zt5awaQ39H", "XtsvqP85oh", "h7e6PgRMpw", "sOui16Fpt3", "rLObsLq4tYJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate that the author adds a runtime comparison against Residual Flows. Thanks for your clarification.", "The paper proposes a parameter-efficient DenseNet block and a CLipSwish (concatenated LipSwish) activation function for use with the DenseNet blocks in the residual flows setting. The paper outperfo...
[ -1, 7, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "XtsvqP85oh", "nips_2021_btfPZRDdP2F", "0Cs2pv9eGh", "rLObsLq4tYJ", "BO19snoSz", "sOui16Fpt3", "h7e6PgRMpw", "nips_2021_btfPZRDdP2F", "nips_2021_btfPZRDdP2F", "nips_2021_btfPZRDdP2F" ]
nips_2021_1r2EannVuIA
Topological Detection of Trojaned Neural Networks
Deep neural networks are known to have security issues. One particular threat is the Trojan attack. It occurs when the attackers stealthily manipulate the model's behavior through Trojaned training samples, which can later be exploited. Guided by basic neuroscientific principles, we discover subtle -- yet critical -- structural deviations characterizing Trojaned models. In our analysis we use topological tools. They allow us to model high-order dependencies in the networks, robustly compare different networks, and localize structural abnormalities. One interesting observation is that Trojaned models develop short-cuts from shallow to deep layers. Inspired by these observations, we devise a strategy for robust detection of Trojaned models. Compared to standard baselines, it displays better performance on multiple benchmarks.
accept
This is an interesting paper with a creative, deep and promising approach. Moreover, the thorough rebuttal and subsequent discussion seem to have successfully addressed the majority of the main comments the reviewers raised.
train
[ "0CRqrn1ijM9", "dk6ODv66hyo", "Mz038CIl4SJ", "x2BXPMr9DgC", "OAP_0B3_Mda", "UtpaBM6LA5A", "jIO3R4jISX", "G3n8b05pgge", "2DCPPj9Xx_", "qg2kef2kfPb", "0tMs0qP0hbP", "uIijiqTV9ES", "0NlDO2Ur5bZ", "15vZBzAfR90" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We would like to thank the reviewer for the extra response -- and for all the constructive comments. We will of course follow the suggestions and include the existing transferability experiments as well as add new experiments on the clean-label attacks -- thank you for pointing that out.\n\nAgain: thank you for t...
[ -1, -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "x2BXPMr9DgC", "UtpaBM6LA5A", "nips_2021_1r2EannVuIA", "0tMs0qP0hbP", "nips_2021_1r2EannVuIA", "jIO3R4jISX", "2DCPPj9Xx_", "Mz038CIl4SJ", "uIijiqTV9ES", "OAP_0B3_Mda", "Mz038CIl4SJ", "OAP_0B3_Mda", "15vZBzAfR90", "nips_2021_1r2EannVuIA" ]
nips_2021_yKdYdQbo22W
Provably Strict Generalisation Benefit for Invariance in Kernel Methods
It is a commonly held belief that enforcing invariance improves generalisation. Although this approach enjoys widespread popularity, it is only very recently that a rigorous theoretical demonstration of this benefit has been established. In this work we build on the function space perspective of Elesedy and Zaidi [8] to derive a strictly non-zero generalisation benefit of incorporating invariance in kernel ridge regression when the target is invariant to the action of a compact group. We study invariance enforced by feature averaging and find that generalisation is governed by a notion of effective dimension that arises from the interplay between the kernel and the group. In building towards this result, we find that the action of the group induces an orthogonal decomposition of both the reproducing kernel Hilbert space and its kernel, which may be of interest in its own right.
accept
The paper provides theoretical work on the invariance of positive definite kernels and RKHS. The obtained results are novel and of high theoretical significance. On the other hand, its practical meaning is not very clear in this paper. Overall, the theoretical development on the invariance of kernel/RKHS is significant enough and applicable to a variety of problems in the future. We judge the work to be worth presenting at NeurIPS.
test
[ "XCMRoepdI8M", "SsI5HD-c_U", "cqYB2uVJiD4", "VVYDoq1yPq", "b-aTnMLONUn", "k5slgaT61Xq", "_hFPZ2DGU9T", "4mYm-lnmXtf", "HDQdFh-SkZ1", "x5qbPH25vIU", "6PUACeTl8c6" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers kernel ridge regression (KRR) where the target is assumed to be invariant with the actions of a compact group. \nThe authors show a reduced generalisation error for the feature-averaged version of the KRR estimator, proving the benefit of incorporating invariant features. \n To my knowledge, t...
[ 7, -1, -1, -1, -1, -1, -1, -1, 9, 5, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_yKdYdQbo22W", "cqYB2uVJiD4", "VVYDoq1yPq", "b-aTnMLONUn", "XCMRoepdI8M", "HDQdFh-SkZ1", "x5qbPH25vIU", "6PUACeTl8c6", "nips_2021_yKdYdQbo22W", "nips_2021_yKdYdQbo22W", "nips_2021_yKdYdQbo22W" ]
nips_2021_u1XV9BPAB9
Formalizing the Generalization-Forgetting Trade-off in Continual Learning
We formulate the continual learning (CL) problem via dynamic programming and model the trade-off between catastrophic forgetting and generalization as a two-player sequential game. In this approach, player 1 maximizes the cost due to lack of generalization whereas player 2 minimizes the cost due to catastrophic forgetting. We show theoretically that a balance point between the two players exists for each task and that this point is stable (once the balance is achieved, the two players stay at the balance point). Next, we introduce balanced continual learning (BCL), which is designed to attain balance between generalization and forgetting and empirically demonstrate that BCL is comparable to or better than the state of the art.
accept
The paper addresses the stability-plasticity tradeoff as a two-player sequential game and shows theoretically that a balance point between the two players exists for each task and that this point is stable. This leads to a theoretically justified approach, balanced continual learning (BCL), whose empirical performance is comparable to or better than state-of-the-art CL approaches. Overall, the paper addresses an important question, provides a new perspective on formulating the cost, and provides theoretical justification as well as empirical evaluation for the proposed method. The reviewers raised some points that were satisfactorily addressed in the rebuttal, which helped improve their understanding of where the paper stands.
train
[ "hOnefg5r2o", "nU3XzOjQrPi", "FlFHemFgSG1", "Tr2RL57EMqm", "PdBHEpoYFyL", "UBAKXe5jhTS", "lrhw-ltWMb", "gyL6g73LvJ1", "t_Gnq0nz7dT", "aF9sRDphPM0", "l6FIfzj1xL", "uZPG0anE8Ju" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper theoretically investigates the trade-off between generalization and forgetting in continual learning (CL). The authors prove the existance of a saddle point in the generalization/forgetting space; they show that it is stable under certain conditions; and propose a practical algorithm called balanced cont...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_u1XV9BPAB9", "lrhw-ltWMb", "nips_2021_u1XV9BPAB9", "aF9sRDphPM0", "FlFHemFgSG1", "l6FIfzj1xL", "hOnefg5r2o", "nips_2021_u1XV9BPAB9", "uZPG0anE8Ju", "FlFHemFgSG1", "nips_2021_u1XV9BPAB9", "nips_2021_u1XV9BPAB9" ]
nips_2021_a_f_NR8mMr9
Risk-Aware Transfer in Reinforcement Learning using Successor Features
Sample efficiency and risk-awareness are central to the development of practical reinforcement learning (RL) for complex decision-making. The former can be addressed by transfer learning, while the latter by optimizing some utility function of the return. However, the problem of transferring skills in a risk-aware manner is not well understood. In this paper, we address the problem of transferring policies between tasks in a common domain that differ only in their reward functions, in which risk is measured by the variance of reward streams. Our approach begins by extending the idea of generalized policy improvement to maximize entropic utilities, thus extending dynamic programming's policy improvement operation to sets of policies and levels of risk aversion. Next, we extend the idea of successor features (SF), a value function representation that decouples the environment dynamics from the rewards, to capture the variance of returns. Our resulting risk-aware successor features (RaSF) integrate seamlessly within the RL framework, inherit the superior task generalization ability of SFs, and incorporate risk into decision-making. Experiments on a discrete navigation domain and control of a simulated robotic arm demonstrate the ability of RaSFs to outperform alternative methods, including SFs, when the risk of the learned policies is taken into account.
accept
After reading each other's reviews and the authors' feedback, the reviewers discussed the merits and flaws of the paper. The authors' feedback was instrumental in the reviewers' change of mind. The paper is still borderline, but I trust the authors' promises to revise their paper according to the reviewers' suggestions, and I propose to accept this paper. I will check the final version of this paper, hoping to find all the additions that have been requested.
test
[ "H4xl2h70qK1", "DNIE_hvixT", "WPdRzoEjNMS", "q4tZY6FrDkQ", "Wxfeh9gu8U", "Zy4Kc9Y4xo", "BdFEyZ5wkFF", "_yZC3tsX2N", "cYC0N3Vn4M", "vwLJ4LVyWHo", "O-VOENKThhS", "06IN1xFN10F", "NTfdM4vElZJ", "cdJzJ997rv8", "X_1t7teZyc0" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper extends the successor feature framework including the GPE/GPI methods for generalisation across tasks with risk awareness. The paper includes experiments for the associated methods on toy domains. Overall the paper has good clarity, however the theoretical results do not seem super useful and the the p...
[ 6, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 2 ]
[ "nips_2021_a_f_NR8mMr9", "WPdRzoEjNMS", "O-VOENKThhS", "nips_2021_a_f_NR8mMr9", "BdFEyZ5wkFF", "cYC0N3Vn4M", "vwLJ4LVyWHo", "nips_2021_a_f_NR8mMr9", "06IN1xFN10F", "X_1t7teZyc0", "H4xl2h70qK1", "_yZC3tsX2N", "cdJzJ997rv8", "nips_2021_a_f_NR8mMr9", "nips_2021_a_f_NR8mMr9" ]
nips_2021_x8qirBbT9xp
Causal Inference for Event Pairs in Multivariate Point Processes
Causal inference and discovery from observational data have been extensively studied across multiple fields. However, most prior work has focused on independent and identically distributed (i.i.d.) data. In this paper, we propose a formalization for causal inference between pairs of event variables in multivariate recurrent event streams by extending Rubin's framework for the average treatment effect (ATE) and propensity scores to multivariate point processes. Analogous to a joint probability distribution representing i.i.d. data, a multivariate point process represents data involving asynchronous and irregularly spaced occurrences of various types of events over a common timeline. We theoretically justify our point process causal framework and show how to obtain unbiased estimates of the proposed measure. We conduct an experimental investigation using synthetic and real-world event datasets, where our proposed causal inference framework is shown to exhibit superior performance against a set of baseline pairwise causal association scores.
accept
Causal inference in point processes is a novel and open area, and the reviewers agreed that this is a useful contribution both methodologically and from an applied perspective. There are a number of changes that you should definitely make to improve the manuscript.
train
[ "bUrrjQc5tPx", "INrVfd1z6Cy", "vkhT8I4YFTo", "xC9omzIaGcs", "APXWav1z8V8", "fKrInhSwTgT", "k8lRBMdMIU-", "2YeJLGk4E1B", "UrFsOS3bPG1" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answers.\nA correction from my side:\n\"Ignorability means that the counterfactual outcomes are independent from the actual ~~outcome~~ **causative factor X** given the covariates.\" Your definition is perfectly correct. The connection with back-door criterion is valid as stated.", " We than...
[ -1, -1, -1, -1, -1, 8, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "INrVfd1z6Cy", "UrFsOS3bPG1", "fKrInhSwTgT", "k8lRBMdMIU-", "2YeJLGk4E1B", "nips_2021_x8qirBbT9xp", "nips_2021_x8qirBbT9xp", "nips_2021_x8qirBbT9xp", "nips_2021_x8qirBbT9xp" ]
nips_2021_nehzxAdyJxF
Evaluating model performance under worst-case subpopulations
Mike Li, Hongseok Namkoong, Shangzhou Xia
accept
The paper provides some interesting theoretical results on estimating the worst-case performance of a model over all subpopulations of a given size, and has supporting experimental evaluations. The reviews generated a detailed back-and-forth discussion with the authors. The main concerns raised by the reviewers are about the lack of sufficient experimental results demonstrating the efficacy of the proposed method and the lack of practical guidance for the choice of the group size parameter $\alpha$. My impression of the paper is that the results presented would be of interest to the ML fairness community, and so I would recommend accepting it. However, it's very important that the authors do a thorough job of incorporating the following changes in the final version of the paper (in addition to the other changes that they have promised to make in their response): - Simulation study analyzing the asymptotic convergence of the proposed estimator - Experiments illustrating how the parameter $\alpha$ should be set I would also like to emphasize that the reviewers were chosen from diverse backgrounds within ML to assess both the theoretical and practical aspects of the work, and I think they have done a thorough job of reviewing the paper. I understand that there were some initial questions about the correctness of Theorem 2 (which, to be honest, I too had during my initial reading of the paper), and I am glad that the authors were able to satisfactorily resolve them. I trust that the authors will use all the feedback provided and make all the changes they have promised in their response.
test
[ "5diB3vekBKW", "f5EwogehyXt", "PDemNUfffHD", "Yv1LwWhjHce", "Qecf1K3F3Lo", "WPFK5QLbO7s", "9wsOWPxkDH0", "Py3J0h9d5y0", "003UIf_fMv8", "av9BfOusEMG", "cLhCNLf9R3i", "sbNbOUf5Q7", "GuzXGr_QEvu", "BonERdipNgL", "ez1CGkOM36Z", "52dxoPdpYaY", "JQgOT_cpOJ6", "vk47KHw6Rbr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and the clarifications.", " Thank you for the response and the clarifications.", " Dear authors,\n\nThank you for the response and the clarifications. The provided example and clarifications are indeed very helpful and I'm looking forward to seeing them integrated in a next version ...
[ -1, -1, -1, -1, -1, -1, 5, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, 2, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "BonERdipNgL", "cLhCNLf9R3i", "GuzXGr_QEvu", "9wsOWPxkDH0", "9wsOWPxkDH0", "Py3J0h9d5y0", "nips_2021_nehzxAdyJxF", "sbNbOUf5Q7", "nips_2021_nehzxAdyJxF", "9wsOWPxkDH0", "52dxoPdpYaY", "003UIf_fMv8", "JQgOT_cpOJ6", "vk47KHw6Rbr", "nips_2021_nehzxAdyJxF", "nips_2021_nehzxAdyJxF", "nips...
nips_2021_pPbrtkTHe9
Privately Publishable Per-instance Privacy
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP). Standard differential privacy (DP) gives us a worst-case bound that might be orders of magnitude larger than the privacy loss to a particular individual relative to a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee to a target individual, but the per-instance privacy loss itself might be a function of sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
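For context on the mechanism whose per-instance losses the paper analyzes, the standard objective-perturbation release from the DP-ERM literature is sketched below; the calibration of $\lambda$ and the noise scale to a target $(\epsilon,\delta)$ is deliberately omitted here.

```latex
% Objective perturbation for private ERM (standard form from the literature;
% lambda and the distribution of b must be calibrated to the privacy target,
% and that calibration is not reproduced here).
\hat\theta = \arg\min_{\theta}\;
  \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i)
  + \frac{\lambda}{2}\,\lVert\theta\rVert_2^2
  + \frac{b^{\top}\theta}{n},
  \qquad b \sim \mathcal{N}(0, \sigma^2 I).
```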
accept
The reviewers thought that data-dependent privacy properties are an interesting direction of study. The main concern in discussion was the motivation for the results. The reviewers questioned when this notion of privacy would be useful, commenting (among other things) that it could not be directly used by the algorithm designer, only reported to the users. There were also comments about the readability, which could be improved. The paper otherwise seems solid. As this is a relatively new and underexplored notion, I am OK with accepting the paper even if it doesn't fulfil the full promise discussed at the start of the intro. However, by the same token that this is a new notion, it is very important that the authors address this motivation better than done in the current submission: for instance, identifying and commenting on the strengths and weaknesses of the result (or any results of this form). There should also be better discussion of related work, how this notion differs from very similar ones, relative advantages, etc.: it may be tough for an outsider to distinguish between the several similar-sounding notions. I expect that the authors will do all this in their final version of this paper.
train
[ "msy5UbH69sq", "6DLSYH1ovi9", "BcwV0kZdQmM", "FRccOg83Lu", "Eldncq1kL6H", "Wbd11C_bmYT", "pG-7_6Tvmd6", "IAOuTO311FS", "kaEOnScG4AB", "1rW91E9aWBC", "DbmgigWFyGW" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper looks at how to release per-instance differentially private (pDP) privacy bounds, focusing on empirical risk minimization (ERM) with a convex loss function. The paper proposes methods for releasing either a looser data independent estimate or a tighter data-dependent one. Without properly checking the pr...
[ 7, -1, -1, -1, 6, -1, -1, -1, -1, 7, 5 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_pPbrtkTHe9", "pG-7_6Tvmd6", "FRccOg83Lu", "IAOuTO311FS", "nips_2021_pPbrtkTHe9", "msy5UbH69sq", "Eldncq1kL6H", "DbmgigWFyGW", "1rW91E9aWBC", "nips_2021_pPbrtkTHe9", "nips_2021_pPbrtkTHe9" ]
nips_2021_1TuwAYxRAC
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target. However, UDA is not always successful and several accounts of `negative transfer' have been reported in the literature. In this work, we prove a simple lower bound on the target domain error that complements the existing upper bound. Our bound shows the insufficiency of minimizing source domain error and marginal distribution mismatch for a guaranteed reduction in the target domain error, due to the possible increase of induced labeling function mismatch. This insufficiency is further illustrated through simple distributions for which the same UDA approach succeeds, fails, and may succeed or fail with an equal chance. Motivated from this, we propose novel data poisoning attacks to fool UDA methods into learning representations that produce large target domain errors. We evaluate the effect of these attacks on popular UDA methods using benchmark datasets where they have been previously shown to be successful. Our results show that poisoning can significantly decrease the target domain accuracy, dropping it to almost 0% in some cases, with the addition of only 10% poisoned data in the source domain. The failure of these UDA methods demonstrates their limitations at guaranteeing cross-domain generalization consistent with our lower bound. Thus, evaluating UDA methods in adversarial settings such as data poisoning provides a better sense of their robustness to data distributions unfavorable for UDA.
accept
This paper studies the vulnerability of domain adaptation to adversarial attacks. The key idea is to show the failure of UDA by proving a simple lower bound, then to present example distributions where UDA succeeds or fails. The paper proposes several novel data poisoning attacks that use clean-label and mislabeled points to defeat existing UDA methods, and verifies these attacks in experiments on two benchmark datasets. However, there exist some limitations, as follows. 1) In-depth analysis: despite the empirical discovery being interesting and promising, the current version seems preliminary and in-depth analysis is lacking. 2) Insufficient theory: the proposed theoretical result is insufficient to understand the inherent limitation. 3) Writing issues: the writing in the manuscript is still problematic. This paper is a borderline case according to the average rating. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to marginally accept this paper for publication! Please include the additional experimental results in the next version.
train
[ "-SNKCHYRYPY", "lI_SWTPH076", "7H2YUCV9VfJ", "73q2TllLYNp", "ZTr2sPh0ghX", "g9Pl1NIxP_W", "2WFsPcdUWe3", "ZdoUE2KL9U", "2baHzdqfwPX", "-urmI95494", "teH6qVULIul", "HmIv-xEHb-F", "QL_BMfaWYXK", "UuN4sKzfRG7", "fSubKJ9sWKZ", "t2DP26k4g9c", "BdiBn9zH7RW", "Av-16VuTOqS", "6PcanCvfwa8...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ "In this work, the authors aim to study the limitations of Unsupervised Domain Adaptation through data poisoning. Specifically, the authors first show the failure of UDA by proving a simple lower bound, then present example distributions where UDA succeeds or fails. Finally, they propose several novel data poisonin...
[ 6, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_1TuwAYxRAC", "nips_2021_1TuwAYxRAC", "73q2TllLYNp", "g9Pl1NIxP_W", "nips_2021_1TuwAYxRAC", "teH6qVULIul", "-urmI95494", "2baHzdqfwPX", "teH6qVULIul", "UuN4sKzfRG7", "HmIv-xEHb-F", "QL_BMfaWYXK", "UuN4sKzfRG7", "nips_2021_1TuwAYxRAC", "nips_2021_1TuwAYxRAC", "lI_SWTPH076", ...
nips_2021_1H6zA8wIhKk
Coresets for Clustering with Missing Values
Vladimir braverman, Shaofeng Jiang, Robert Krauthgamer, Xuan Wu
accept
The paper provides coreset constructions for clustering objectives such as k-means with multiple missing values. All the reviews are highly positive and support acceptance. The authors should carefully address all the comments that the reviewers have brought up, including adding new baselines to their experimental evaluation.
train
[ "DFYJXJvWNhL", "jfjlhEB0kDK", "8Kshy3c5wj", "J0sGWDbDdQs", "HBJgrzIF7yW", "BmR49BopZR8", "Ac7d4gq-Mly", "oDNyXLA-VRo", "6_xTNJpf7Lz", "dhT03KjzAsi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Seems we are in agreement this paper should be accepted. I see no major concerns, and agreement the paper is interesting. ", "The paper suggests a coreset for clustering points in $\\mathbb{R}^d$ that have multiple missing values (coordinates) including K-means and K-median. The paper gives the first coreset...
[ -1, 8, -1, -1, -1, -1, -1, 9, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "J0sGWDbDdQs", "nips_2021_1H6zA8wIhKk", "BmR49BopZR8", "dhT03KjzAsi", "6_xTNJpf7Lz", "jfjlhEB0kDK", "oDNyXLA-VRo", "nips_2021_1H6zA8wIhKk", "nips_2021_1H6zA8wIhKk", "nips_2021_1H6zA8wIhKk" ]
nips_2021_1oP1duoZxx
Boosting with Multiple Sources
We study the problem of learning accurate ensemble predictors, in particular boosting, in the presence of multiple source domains. We show that the standard convex combination ensembles in general cannot succeed in this scenario and adopt instead a domain-weighted combination. We introduce and analyze a new boosting algorithm, MULTIBOOST, for this scenario and show that it benefits from favorable theoretical guarantees. We also report the results of several experiments with our algorithm demonstrating that it outperforms natural baselines on multi-source text-based, image-based and tabular data. We further present an extension of our algorithm to the federated learning scenario and report favorable experimental results for that setting as well. Additionally, we describe in detail an extension of our algorithm to the multi-class setting, MCMULTIBOOST, for which we also report experimental results.
accept
This paper introduces and analyzes a new boosting algorithm for learning in the presence of multiple source domains. We were persuaded by the author response to the concerns expressed in the reviews, especially about novelty wrt MMR.
train
[ "KSICFvQ46Q5", "lSCvrALH2ID", "aKfd87SpPt", "JQGC5QdALKs", "FXE8IzCkYSG", "JPlwqBryu1k", "loL7XKmGSb", "dSaNXrNgAIa", "A2kqCabWb7-", "DqCyEFEbX97", "FXm2qm-fIKK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of learning boosted classifiers with data from multiple source domains. The authors propose Q-ensembles to deal with it and seek a solution for any mixture of the source distribution. They propose the algorithm MultiBoost to solve it inspired by coordinate descent. They present theore...
[ 6, 7, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_1oP1duoZxx", "nips_2021_1oP1duoZxx", "dSaNXrNgAIa", "nips_2021_1oP1duoZxx", "KSICFvQ46Q5", "FXm2qm-fIKK", "JQGC5QdALKs", "lSCvrALH2ID", "DqCyEFEbX97", "loL7XKmGSb", "nips_2021_1oP1duoZxx" ]
nips_2021_eL-mdUUwQVZ
Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation
Bowen Zhang, Yifan liu, Zhi Tian, Chunhua Shen
accept
This paper introduces a new type of feature decoder for semantic segmentation, which is more efficient than previous work. The experimental work is adequate, but it could have been more convincing if the architectures achieved state-of-the-art performance. There were a number of improvements suggested by the reviewers, in particular i) improving the clarity of the paper, and ii) a speed comparison with previous work, which should be included in the final version of the paper.
train
[ "Jq37erYebBK", "X81MMNC3yMV", "fZ9la-vrmGT", "YWELSYbuw0", "PwS6hrGm2ED", "3yYoTQMDdOJ", "vE5Sn2pPTkh", "xTrZgcGXCfp", "ei4NQerT-y_", "dI7nl2lqFE5" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors design a simple and effective decoder named NRD for the Semantic Segmentation task. They represent the local patch of the label map with a neural network, which makes it possible to restore details in the prediction with a low computational cost. Experiments on three benchmarks show the ...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_eL-mdUUwQVZ", "YWELSYbuw0", "dI7nl2lqFE5", "Jq37erYebBK", "xTrZgcGXCfp", "dI7nl2lqFE5", "Jq37erYebBK", "ei4NQerT-y_", "nips_2021_eL-mdUUwQVZ", "nips_2021_eL-mdUUwQVZ" ]
nips_2021_hJOLFJIJ_zy
Dense Keypoints via Multiview Supervision
This paper presents a new end-to-end semi-supervised framework to learn a dense keypoint detector using unlabeled multiview images. A key challenge lies in finding the exact correspondences between the dense keypoints in multiple views since the inverse of the keypoint mapping can be neither analytically derived nor differentiated. This limits applying existing multiview supervision approaches used to learn sparse keypoints that rely on the exact correspondences. To address this challenge, we derive a new probabilistic epipolar constraint that encodes the two desired properties. (1) Soft correspondence: we define a matchability, which measures a likelihood of a point matching to the other image’s corresponding point, thus relaxing the requirement of the exact correspondences. (2) Geometric consistency: every point in the continuous correspondence fields must satisfy the multiview consistency collectively. We formulate a probabilistic epipolar constraint using a weighted average of epipolar errors through the matchability thereby generalizing the point-to-point geometric error to the field-to-field geometric error. This generalization facilitates learning a geometrically coherent dense keypoint detection model by utilizing a large number of unlabeled multiview images. Additionally, to prevent degenerative cases, we employ a distillation-based regularization by using a pretrained model. Finally, we design a new neural network architecture, made of twin networks, that effectively minimizes the probabilistic epipolar errors of all possible correspondences between two view images by building affinity matrices. Our method shows superior performance compared to existing methods, including non-differentiable bootstrapping in terms of keypoint accuracy, multiview consistency, and 3D reconstruction accuracy.
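To make the abstract's matchability-weighted epipolar error concrete, here is a minimal numpy sketch under our own simplifying assumptions: the affinity matrix, the row-wise softmax, and all function names are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def epipolar_residuals(pts1, pts2, F):
    """Algebraic epipolar error |x2^T F x1| for all point pairs.

    pts1: (N, 3) and pts2: (M, 3) homogeneous image points,
    F: (3, 3) fundamental matrix.  Returns an (N, M) matrix.
    """
    return np.abs(pts2 @ F @ pts1.T).T   # entry (i, j) = |x2_j^T F x1_i|

def probabilistic_epipolar_loss(pts1, pts2, F, affinity):
    """Matchability-weighted average of epipolar errors.

    affinity: (N, M) raw matching scores; a row-wise softmax turns them
    into soft correspondences ("matchability").
    """
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # soft correspondence field
    err = epipolar_residuals(pts1, pts2, F)      # field-to-field errors
    return float((w * err).sum(axis=1).mean())   # expected error per keypoint
```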
accept
Reviewers agreed that this is a solid paper that deserves acceptance. Authors are highly encouraged to address the key comments reported by reviewers as well as to implement all the improvements (as indicated by authors in the rebuttal) in the final camera-ready version.
val
[ "_a5Rdfhp6KY", "G-uDXPl4z4r", "XYM2JNnBazS", "jP7Rwft44nX", "_BEH_pgoAml", "8iSu-N7BwB", "U6iIM74av8y", "sYIEvhA_dW_", "xRIxsR0L0qh", "jfLwB3L1bL2", "egQsz_Ue4V", "BwBjZdJAnBw", "l5-KZG1pFTy", "Eu9zqUiOQoi", "2g_72c2aP1E", "oAKmrUlhnEZ", "N4muY6LV7n", "mfZiAbwG0BM", "tiCtF6ILfaB"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "au...
[ " My concerns are addressed and I believe this paper would be a good contribution to the conference.", " Thank you, score revised to 7 (accept) owing to clarity of claims, as discussed.", "The paper presents a method for learning continuous dense keypoint fields with part-supervised/labelled data and part-unlab...
[ -1, -1, 7, -1, -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "N4muY6LV7n", "xRIxsR0L0qh", "nips_2021_hJOLFJIJ_zy", "oAKmrUlhnEZ", "U6iIM74av8y", "nips_2021_hJOLFJIJ_zy", "sYIEvhA_dW_", "l5-KZG1pFTy", "jfLwB3L1bL2", "tiCtF6ILfaB", "nips_2021_hJOLFJIJ_zy", "HtIiPRihdA", "Eu9zqUiOQoi", "2g_72c2aP1E", "8iSu-N7BwB", "Q2v9zTuDytl", "SdUhkGdGTd", "...
nips_2021_SehIKudiIo1
Scatterbrain: Unifying Sparse and Low-rank Attention
Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré
accept
In this paper, the authors discuss the effect of sparse and low-rank approximations for the transformer. The proposed combination leads to better performance than existing fast transformers that use either approximation scheme alone. The paper is well-motivated and the theoretical and empirical analyses are reasonable. Therefore, I recommend accepting this submission. The paper can be further improved by taking the reviewers' suggestions about the experiments, e.g., full training from scratch (Reviewer Gfk7), comparison on LRA (6aFS), the tradeoff between memory and accuracy (QK3S), and comparing with more baselines (Hndw).
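To make the combination referred to above concrete, here is a toy numpy sketch of a sparse + low-rank attention decomposition. The Performer-style positive random features, the band-shaped sparse support, and the (deliberately inefficient) dense residual computation are our illustrative choices, not the paper's exact method; the sketch shows the math, not the efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 128, 32, 64            # sequence length, head dim, # random features
Q = rng.normal(size=(n, d)) / d**0.25    # scaling mimics softmax's 1/sqrt(d)
K = rng.normal(size=(n, d)) / d**0.25

# Low-rank part: positive random features so that phi(q) @ phi(k) ~ exp(q.k)
# in expectation.
W = rng.normal(size=(m, d))
def phi(X):
    return np.exp(X @ W.T - (X**2).sum(-1, keepdims=True) / 2) / np.sqrt(m)

A_lr = phi(Q) @ phi(K).T                      # low-rank estimate of exp(QK^T)

# Sparse part: on a support S (here a local band), store the residual between
# the exact value and the low-rank estimate, so A_hat is exact on S.
S = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 4
residual = np.where(S, np.exp(Q @ K.T) - A_lr, 0.0)

A_hat = A_lr + residual                       # sparse + low-rank approximation
attn = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalize as in softmax
```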
train
[ "HFhRj_l6tLr", "cR3iGeUijo", "ZT3qsfOSGTx", "V8AJT42OQ6", "ZXtJRx0KMS", "A9BxaIHjdn8", "HNBF-T0Wcw7", "2y4iarlPUN0", "mynmOc2TnnU", "PhLURm0mcj", "Fju-1T5pZL", "pmGnL2_ziLk", "-6aiEsgFkf9" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This submission presents an efficient approximation method of attention matrices. This work demonstrates the effectivenss of the proposed method really well in the domain of dense attention, e.g., Transformer or self-attention.\n\nThe authors provided thorough motivational examples, both theoretical and empirical ...
[ 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "nips_2021_SehIKudiIo1", "ZT3qsfOSGTx", "V8AJT42OQ6", "ZXtJRx0KMS", "HFhRj_l6tLr", "2y4iarlPUN0", "-6aiEsgFkf9", "Fju-1T5pZL", "pmGnL2_ziLk", "A9BxaIHjdn8", "nips_2021_SehIKudiIo1", "nips_2021_SehIKudiIo1", "nips_2021_SehIKudiIo1" ]
nips_2021_dTp-VUFDIB
PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning
A critical aspect of human visual perception is the ability to parse visual scenes into individual objects and further into object parts, forming part-whole hierarchies. Such composite structures could induce a rich set of semantic concepts and relations, thus playing an important role in the interpretation and organization of visual signals as well as for the generalization of visual perception and reasoning. However, existing visual reasoning benchmarks mostly focus on objects rather than parts. Visual reasoning based on the full part-whole hierarchy is much more challenging than object-centric reasoning due to finer-grained concepts, richer geometry relations, and more complex physics. Therefore, to better support part-based conceptual, relational and physical reasoning, we introduce a new large-scale diagnostic visual reasoning dataset named PTR. PTR contains around 80k RGBD synthetic images with ground truth object and part level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 800k machine-generated questions covering various reasoning types, making them a good testbed for visual reasoning models. We examine several state-of-the-art visual reasoning models on this dataset and observe that they still make many surprising mistakes in situations where humans can easily infer the correct answer. We believe this dataset will open up new opportunities for part-based reasoning. The PTR dataset and baseline models are publicly available.
accept
Thanks for submitting your work to NeurIPS. Proposing a new benchmark for visual reasoning on part-whole hierarchies is really an impressive and important contribution. The dataset contains 80k RGBD synthetic images with ground truth object and part level annotations as well as information about concepts, relations, analogies, arithmetic, and physics. Several vanilla baselines, CNN-based baselines, and a neuro-symbolic baseline are included and compared. All reviewers agree that this is a very solid and strong contribution. I fully agree with this sentiment. Thanks for the nice work.
train
[ "aiqdFAEYEA9", "W3CS6GonlC2", "Phjs7XykHkc", "T1jZzqmHIC", "lc6plwPZ-i1", "mccdi0wNL8i", "pQ2KuXKijK-", "gWyOFxMs0hH", "OiZPZG3E2Li", "m-H_F8A2sRj", "EFayNrOvah", "xMsdp5z6PX", "k8I8ahaXWa", "MKdwUsajXlP", "8jVGx9n2lNN", "A_mZ3UFs_X", "xTc1p4_OdBx" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We genuinely thank all reviewers and ACs for their efforts and time in reviewing our paper, as well as their constructive suggestions that contribute to the improvement of our paper. We sincerely appreciate the positive 7-7-6-6 evaluation from reviewers XMc6, jSyT, tbaK, Qvcs.\n\nHere is a summary of our respons...
[ -1, 6, -1, -1, -1, -1, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, -1, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_dTp-VUFDIB", "nips_2021_dTp-VUFDIB", "T1jZzqmHIC", "W3CS6GonlC2", "m-H_F8A2sRj", "lc6plwPZ-i1", "nips_2021_dTp-VUFDIB", "nips_2021_dTp-VUFDIB", "8jVGx9n2lNN", "xMsdp5z6PX", "nips_2021_dTp-VUFDIB", "pQ2KuXKijK-", "nips_2021_dTp-VUFDIB", "xTc1p4_OdBx", "gWyOFxMs0hH", "W3CS6Gon...
nips_2021_vGjTOxss-Dl
Property-Aware Relation Networks for Few-Shot Molecular Property Prediction
Molecular property prediction plays a fundamental role in drug discovery to identify candidate molecules with target properties. However, molecular property prediction is essentially a few-shot problem, which makes it hard to use regular machine learning models. In this paper, we propose Property-Aware Relation Networks (PAR) to handle this problem. In comparison to existing works, we leverage the fact that both relevant substructures and relationships among molecules change across different molecular properties. We first introduce a property-aware embedding function to transform the generic molecular embeddings to a substructure-aware space relevant to the target property. Further, we design an adaptive relation graph learning module to jointly estimate the molecular relation graph and refine molecular embeddings w.r.t. the target property, such that the limited labels can be effectively propagated among similar molecules. We adopt a meta-learning strategy where the parameters are selectively updated within tasks in order to model generic and property-aware knowledge separately. Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can obtain property-aware molecular embeddings and model the molecular relation graph properly.
accept
The paper proposes a framework for few-shot learning for molecular property prediction. Reviewers agreed that the problem of few-shot training is important in the molecular space and that the approach proposed in the paper is novel and elegant. The authors showed evidence of their method working in the few-shot regime. Accept.
train
[ "A62zt96L5GY", "o1sUPQu8gw-", "_liA52XaYgP", "R5lpVfQYVAR", "cv61aREVDR8", "KqyWGmPs9wb", "nvi1lBcwuZD", "LQeQavZh2d", "XclSsJMFPH5", "fAJJ7YKC6b", "zJ1Iyd8GpG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Dear reviewer, we are grateful that you now vote for us. We will rewrite and proofread the paper based on your suggestions. Thanks for your time and efforts!", "This paper proposes a novel framework to solve a few-shot molecular property prediction problem w.r.t different properties. It takes GNN as the encoder...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "_liA52XaYgP", "nips_2021_vGjTOxss-Dl", "R5lpVfQYVAR", "XclSsJMFPH5", "nvi1lBcwuZD", "nips_2021_vGjTOxss-Dl", "LQeQavZh2d", "KqyWGmPs9wb", "o1sUPQu8gw-", "zJ1Iyd8GpG", "nips_2021_vGjTOxss-Dl" ]
nips_2021_RUQ1zwZR8_
Differentially Private Learning with Adaptive Clipping
Existing approaches for training neural networks with user-level differential privacy (e.g., DP Federated Averaging) in federated learning (FL) settings involve bounding the contribution of each user's model update by {\em clipping} it to some constant value. However there is no good {\em a priori} setting of the clipping norm across tasks and learning settings: the update norm distribution depends on the model architecture and loss, the amount of data on each device, the client learning rate, and possibly various other parameters. We propose a method wherein instead of a fixed clipping norm, one clips to a value at a specified quantile of the update norm distribution, where the value at the quantile is itself estimated online, with differential privacy. The method tracks the quantile closely, uses a negligible amount of privacy budget, is compatible with other federated learning technologies such as compression and secure aggregation, and has a straightforward joint DP analysis with DP-FedAvg. Experiments demonstrate that adaptive clipping to the median update norm works well across a range of federated learning tasks, eliminating the need to tune any clipping hyperparameter.
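A minimal sketch of the core loop described in the abstract: track the clipping norm toward a target quantile of the update-norm distribution with a geometric update. The learning rate, the lognormal norms in the usage loop, and the omitted DP noise calibration are our simplifications, not the paper's exact recipe.

```python
import numpy as np

def adaptive_clip_round(update_norms, C, gamma=0.5, eta=0.2, noise_std=0.0):
    """One round of quantile-based clipping-norm adaptation.

    update_norms: client update norms this round; C: current clipping norm;
    gamma: target quantile (0.5 = median); eta: geometric learning rate;
    noise_std: std of noise added to the un-clipped-fraction estimate
    (DP calibration of this noise is left out of the sketch).
    """
    b = np.mean(update_norms <= C)                 # fraction of un-clipped clients
    b_noisy = b + np.random.normal(0.0, noise_std)
    return C * np.exp(-eta * (b_noisy - gamma))    # geometric step toward quantile

# Usage: track C across rounds while the norm distribution stays fixed.
C = 1.0
for _ in range(100):
    norms = np.random.lognormal(mean=1.0, sigma=0.5, size=256)
    C = adaptive_clip_round(norms, C)
```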
accept
The paper presents a method for adaptively setting the gradient clipping threshold for DP federated learning. The proposed method is mostly a combination of known techniques, but it provides a neat and well-presented solution to an important practical problem. The reviews are mixed: some reviewers are more positive because of more emphasis on the practical relevance while others are more negative because of emphasis on limited technical novelty. The AC is in favour of accepting the paper because of its practical usefulness and potential impact. In my opinion, the authors have been able to answer the criticisms of the negative reviews in their rebuttal. Unfortunately these reviewers have been unresponsive in the discussion and thus the reviews have not been updated in light of the author responses, potentially leaving the reviews unnecessarily negative. While not reflected in the reviews, this paper was discussed extensively by the AC (who read it in full and championed the work) and the SAC, leading to the decision to accept.
test
[ "mgJI4Opz3t", "5MMFMnnKCYI", "pGaxh9EUIj2", "4x3oxKVvmfY", "8OpaZ3xJ-ui", "v6pOLUNI2Z7", "VJOeo4-bJCJ", "aO4rCm1Kvlp" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review.\n* The main advantage of our approach relative to the other quantile estimation algorithms you mentioned is that it lets us track the moving quantile very efficiently (with respect to privacy budget), and transmitting the bare minimum of information from client to server. The methods yo...
[ -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "aO4rCm1Kvlp", "VJOeo4-bJCJ", "v6pOLUNI2Z7", "8OpaZ3xJ-ui", "nips_2021_RUQ1zwZR8_", "nips_2021_RUQ1zwZR8_", "nips_2021_RUQ1zwZR8_", "nips_2021_RUQ1zwZR8_" ]
nips_2021_VjKhSULF7Gb
Can Less be More? When Increasing-to-Balancing Label Noise Rates Considered Beneficial
In this paper, we answer the question of when inserting label noise (less informative labels) can instead yield more accurate and fair models. We are primarily inspired by three observations: 1) In contrast to reducing label noise rates, increasing the noise rates is easy to implement; 2) Increasing a certain class of instances' label noise to balance the noise rates (increasing-to-balancing) results in an easier learning problem; 3) Increasing-to-balancing improves fairness guarantees against label bias. In this paper, we first quantify the trade-offs introduced by increasing a certain group of instances' label noise rate w.r.t. the loss of label informativeness and the lowered learning difficulties. We analytically demonstrate when such an increase is beneficial, in terms of either improved generalization power or the fairness guarantees. Then we present a method to insert label noise properly for the task of learning with noisy labels, either without or with a fairness constraint. The primary technical challenge we face is due to the fact that we would not know which data instances are suffering from higher noise, and we would not have the ground truth labels to verify any possible hypothesis. We propose a detection method that informs us which group of labels might suffer from higher noise without using ground truth labels. We formally establish the effectiveness of the proposed solution and demonstrate it with extensive experiments.
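A minimal numpy sketch of the increasing-to-balancing step itself. Deciding which group to flip and at what rate is precisely the detection problem the paper addresses, so both are left as inputs here; all names are illustrative.

```python
import numpy as np

def increase_to_balance(labels, group, flip_group, extra_rate, rng=None):
    """Inject symmetric label noise into one group to balance noise rates.

    labels: binary labels in {0, 1}; group: group id per instance;
    flip_group: the group believed to have the LOWER noise rate;
    extra_rate: probability of flipping each of that group's labels.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = labels.copy()
    mask = (group == flip_group) & (rng.random(labels.shape) < extra_rate)
    labels[mask] = 1 - labels[mask]    # flip to equalize noise across groups
    return labels
```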
accept
Reviewers were unanimously positive about the paper's contribution to a topical problem, namely, learning with label noise and its implications on model fairness. The initial reviews raised some technical questions that were satisfactorily addressed by the authors. The authors are further encouraged to incorporate the other suggestions provided by reviewers for the updated version of the work.
train
[ "71xi1HiqQv", "24iwDJJgQfZ", "p9F7PbkGAlq", "2iUdhvxUKq", "O0TvpYXzVZJ", "oGGA7tqCBjW", "alvjCmIsTYg", "4PI0hJtr6AG", "7feU8NQIl29", "2DLaenqRETw" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Reviewer ayxB,\n\nSince we haven't received any updates or feedback, we want to kindly follow up with you that whether we have addressed all your concerns or you still have any further questions that we could clarify? We are looking forward to further discussions. Thank you very much!\n\nSincerely,\n\nAuthor...
[ -1, 7, 7, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, 2, 3, 4, -1, -1, -1, -1, -1, 4 ]
[ "alvjCmIsTYg", "nips_2021_VjKhSULF7Gb", "nips_2021_VjKhSULF7Gb", "nips_2021_VjKhSULF7Gb", "4PI0hJtr6AG", "24iwDJJgQfZ", "2DLaenqRETw", "2iUdhvxUKq", "p9F7PbkGAlq", "nips_2021_VjKhSULF7Gb" ]
nips_2021_fUxqIofPPi
Projected GANs Converge Faster
Generative Adversarial Networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyper-parameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one Megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
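A minimal PyTorch sketch of the projection idea: the discriminator operates on the features of a frozen pretrained network. The paper uses EfficientNet features at several scales with channel/resolution mixing; a single ResNet stage is used here for brevity, and the snippet assumes torchvision >= 0.13 for the string `weights` argument.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Frozen, pretrained feature network (truncated after layer2 -> 128 channels).
feat = nn.Sequential(*list(resnet18(weights="IMAGENET1K_V1").children())[:-4]).eval()
for p in feat.parameters():
    p.requires_grad_(False)

disc = nn.Sequential(                  # small discriminator on projected features
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4),
)

def d_loss(real_imgs, fake_imgs):
    """Hinge discriminator loss computed in the fixed feature space.
    (For the generator step, fake features must be recomputed with
    gradients enabled so the generator receives a signal.)"""
    with torch.no_grad():
        f_real, f_fake = feat(real_imgs), feat(fake_imgs)
    return (torch.relu(1 - disc(f_real)).mean()
            + torch.relu(1 + disc(f_fake)).mean())
```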
accept
The submission discusses projected GANs, i.e., GANs which project generated and real samples into a fixed, pretrained feature space. The initial reviewer assessment is generally positive. The authors provided further clarifications in the rebuttal, which resolved open questions. Reviewers remained assured about their initial assessment. The AC thinks that this is a valuable study that will be of interest.
train
[ "zDeChzhlauA", "tAvUkM3rApd", "xYIh4MnyazW", "WRrpPi6DTGa", "NBbKJ5TPsQs", "5R4W9WHX1-F", "EQUb5dVfAFS", "Y0Fg05JI1X", "HmPiDakPrnX", "BJt931lZVNo", "BXu2F_15wK", "8PomkLUAbpb", "6ciYVjiTsz", "UnrttPn9Ywa", "V1kx2f9IL18" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer zAjH,\n\nThank you for your comprehensive review and your time. We address your concerns in the following, starting with the primary one.\n\n__Q1: Correlation with FID scores__\n\nOther reviewers also share your concern about a possible correlation between our training objective and FID, and we agre...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "xYIh4MnyazW", "NBbKJ5TPsQs", "nips_2021_fUxqIofPPi", "zDeChzhlauA", "zDeChzhlauA", "UnrttPn9Ywa", "V1kx2f9IL18", "xYIh4MnyazW", "6ciYVjiTsz", "V1kx2f9IL18", "UnrttPn9Ywa", "6ciYVjiTsz", "nips_2021_fUxqIofPPi", "nips_2021_fUxqIofPPi", "nips_2021_fUxqIofPPi" ]
nips_2021_jQuTUXeQsy8
Generating High-Quality Explanations for Navigation in Partially-Revealed Environments
We present an approach for generating natural language explanations of high-level behavior of autonomous agents navigating in partially-revealed environments. Our counterfactual explanations communicate changes to interpretable statistics of the belief (e.g., the likelihood an exploratory action will reach the unseen goal) that are estimated from visual input via a deep neural network and used (via a Bellman equation variant) to inform planning far into the future. Additionally, our novel training procedure mimics explanation generation, allowing us to use planning performance as an objective measure of explanation quality. Simulated experiments validate that our explanations are both high quality and can be used in interventions to directly correct bad behavior; agents trained via our training-by-explaining procedure achieve 9.1% lower average cost than a non-learned baseline (12.7% after interventions) in environments derived from real-world floor plans.
accept
The paper proposes a method for generating counterfactual explanations for the decisions of a high-level planner that navigates an agent through a partially known environment. The planner reasons over subgoals and the proposed method performs gradient descent over the parameters of these subgoals to identify changes that cause the planner to choose one subgoal over another. A subsequent round of gradient descent is performed with all but one parameter fixed in an effort to filter out false positives. The results are converted to a textual explanation using a rule-based grammar. The quality of the explanations is evaluated by using them to train the navigational agent following much the same procedure that is used to identify the explanations. The problem of generating a valid and concise explanation for why a policy/planner chose one action over another is challenging and touches on broader problems in interpretability/explainability that are of interest to many in the ML community. The reviewers agree that the proposed approach is carefully designed and is technically sound. The method is presented clearly and the paper, as a whole, is well written. The experimental evaluation provides a compelling demonstration of the utility of the resulting explanations. As the reviewers note, while this evaluation is deliberately not subjective, a human evaluation of the explanations would provide more insight into the method's broader utility. However, this can be left for future work.
train
[ "xeHIPp5tz4G", "8M6ol_6gGHr", "ptNi_8OcO0H", "gO5JACDSpnw", "aMdJ-P9R7RW", "MnnUUPkPHf2", "4SdKvl1JLmE", "AM-cEElqyel", "Ngaz9fHKVUw", "Fz35i-lq1L", "zgq3OR6EwcC", "vihHJe-V-8-", "okeC7FMIY5" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "- This paper discusses the challenges of providing high-quality explanations of agent navigation decision making in partially-observable environments. PointGoal in particular is studied.\n- The authors proposed a transparency-by-design method based on subgoal-based actions. The agent selects a subgoal in the for...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_jQuTUXeQsy8", "ptNi_8OcO0H", "aMdJ-P9R7RW", "MnnUUPkPHf2", "gO5JACDSpnw", "4SdKvl1JLmE", "AM-cEElqyel", "okeC7FMIY5", "xeHIPp5tz4G", "vihHJe-V-8-", "nips_2021_jQuTUXeQsy8", "nips_2021_jQuTUXeQsy8", "nips_2021_jQuTUXeQsy8" ]
nips_2021_ZAh31ihNaoF
De-randomizing MCMC dynamics with the diffusion Stein operator
Approximate Bayesian inference estimates descriptors of an intractable target distribution - in essence, an optimization problem within a family of distributions. For example, Langevin dynamics (LD) extracts asymptotically exact samples from a diffusion process because the time evolution of its marginal distributions constitutes a curve that minimizes the KL-divergence via steepest descent in the Wasserstein space. Parallel to LD, Stein variational gradient descent (SVGD) similarly minimizes the KL, albeit endowed with a novel Stein-Wasserstein distance, by deterministically transporting a set of particle samples, thus de-randomizing the stochastic diffusion process. We propose de-randomized kernel-based particle samplers for all diffusion-based samplers known as MCMC dynamics. Following previous work in interpreting MCMC dynamics, we equip the Stein-Wasserstein space with a fiber-Riemannian Poisson structure, with the capacity of characterizing a fiber-gradient Hamiltonian flow that simulates MCMC dynamics. Such dynamics discretize into generalized SVGD (GSVGD), a Stein-type deterministic particle sampler, with particle updates coinciding with applying the diffusion Stein operator to a kernel function. We demonstrate empirically that GSVGD can de-randomize complex MCMC dynamics, which combine the advantages of auxiliary momentum variables and Riemannian structure, while maintaining the high sample quality from an interacting particle system.
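For orientation, the vanilla SVGD particle update, which GSVGD generalizes with a diffusion matrix and fiber-Riemannian structure, can be sketched as follows. A fixed RBF bandwidth is assumed for brevity (the median heuristic is the usual choice).

```python
import numpy as np

def svgd_step(X, grad_logp, h=0.5, eps=0.1):
    """One vanilla SVGD update on n particles X of shape (n, d);
    grad_logp returns the (n, d) score of the target at X."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))           # RBF kernel matrix
    repulsion = (diff * K[:, :, None]).sum(axis=1) / h   # sum_j grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_logp(X) + repulsion) / n             # kernelized Stein flow
    return X + eps * phi

# Usage: transport particles toward a 2-D standard Gaussian (score = -x).
rng = np.random.default_rng(0)
X = 3 * rng.normal(size=(100, 2))
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
```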
accept
This paper proposes an elegant generalisation of the Stein variational gradient descent method that introduces additional degrees of freedom into how the dynamics are defined, together with empirical evidence in support of this additional flexibility being practically useful. All reviewers agreed on the correctness of the method and that the paper is mostly well-written, but there was one reviewer who viewed the contribution as too incremental. The paper addresses an important open question - deterministic alternatives to MCMC - which will be of general interest at NeurIPS.
train
[ "32TBxqhSVH", "sYX5hwbNWjr", "KZb2kNfgxpe", "iVTUQDaw_oG", "SyUkrBWlwSk", "RSagSFmPJva", "jOOZ5P_6DNk", "EpAHqRMDs_A", "tKDR3tvdmrG", "wyCbJLXWevT", "f13QA5wMTtQ", "vIFnwiRIXGg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for responding to my questions. \n\n", "The paper combines the results of two previous works: (Liu, 2017) and (Liu et al., 2019). I'll first describe the contribution of the previous works and then will move to the contribution of the current paper. \n\n(Liu, 2017) provides the theoretical analysis of...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "jOOZ5P_6DNk", "nips_2021_ZAh31ihNaoF", "SyUkrBWlwSk", "nips_2021_ZAh31ihNaoF", "sYX5hwbNWjr", "vIFnwiRIXGg", "f13QA5wMTtQ", "wyCbJLXWevT", "nips_2021_ZAh31ihNaoF", "nips_2021_ZAh31ihNaoF", "nips_2021_ZAh31ihNaoF", "nips_2021_ZAh31ihNaoF" ]
nips_2021_-VjKyYX-PI9
Sparsely Changing Latent States for Prediction and Planning in Partially Observable Domains
Christian Gumbsch, Martin V. Butz, Georg Martius
accept
The paper presents a recurrent architecture with an inductive bias (via gating and L0 regularization) that encourages sparse updates to its state. This is hypothesized to lead to improved generalization, and is empirically evaluated in a number of partially observable environments. The reviewers were initially sceptical, but the discussion with the authors has allayed most concerns, so now the consensus is towards acceptance. In general, the reviewers agree that the paper is well written and the contribution is interesting (albeit somewhat incremental). Overall, I'm happy to recommend acceptance.
train
[ "45E0aG6jgL", "JeknaC2uzlH", "t7qktGFCD2I", "qyguVhnhhcW", "SQhnZEGlrbI", "EdiFFGCrwO4", "uf75CbT3z5b", "5LVQI-n1SDt", "6JZoPbYv01A", "xOlDx2XaYgg", "SSRzZEjfjvQ", "Y4poIfp9y-O", "ehwSDh3_W_B", "5IQ3lLEIxS", "e2ZsQPK6Wsr", "QoyhJllFh31", "TzFbAyYNo-c" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for all the comments. I carefully read the authors' responses and revised my rating accordingly.", "This paper proposes a new recurrent neural network architecture (RNN) called GateL0RD, designed to learn sparsely changing latent variables of sequential data. The authors hypothesize that latent variables...
[ -1, 6, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "xOlDx2XaYgg", "nips_2021_-VjKyYX-PI9", "nips_2021_-VjKyYX-PI9", "SQhnZEGlrbI", "5LVQI-n1SDt", "nips_2021_-VjKyYX-PI9", "e2ZsQPK6Wsr", "ehwSDh3_W_B", "QoyhJllFh31", "SSRzZEjfjvQ", "5IQ3lLEIxS", "nips_2021_-VjKyYX-PI9", "t7qktGFCD2I", "JeknaC2uzlH", "EdiFFGCrwO4", "TzFbAyYNo-c", "nips...
nips_2021_amH9JxZN7C
PreferenceNet: Encoding Human Preferences in Auction Design with Deep Learning
The design of optimal auctions is a problem of interest in economics, game theory and computer science. Despite decades of effort, strategyproof, revenue-maximizing auction designs are still not known outside of restricted settings. However, recent methods using deep learning have shown some success in approximating optimal auctions, recovering several known solutions and outperforming strong baselines when optimal auctions are not known. In addition to maximizing revenue, auction mechanisms may also seek to encourage socially desirable constraints such as allocation fairness or diversity. However, these philosophical notions neither have standardization nor do they have widely accepted formal definitions. In this paper, we propose PreferenceNet, an extension of existing neural-network-based auction mechanisms to encode constraints using (potentially human-provided) exemplars of desirable allocations. In addition, we introduce a new metric to evaluate an auction allocation's adherence to such socially desirable constraints and demonstrate that our proposed method is competitive with current state-of-the-art neural-network based auction designs. We validate our approach through human subject research and show that we are able to effectively capture real human preferences.
accept
The review team's sentiment about the paper is positive: it adds a new dimension to the problem of automated auction design via neural networks that makes it more practical and allows us to capture subtle notions like the perception of fairness. There were initial concerns around scalability and experiments on harder (non-iid) datasets, but those were adequately addressed by the authors in the discussion. Please incorporate those in the final version. There are additional concerns about whether biases could be introduced in the model and whether, through this methodology, biases would be harder to identify/correct since they are not explicitly written in a formula but hidden in the data. While I share the concern of the reviewers, I think this is inherent to this type of approach, so we need to live with that. Finally, our recommendation is to accept the paper.
train
[ "UVCWaST-yt", "VrmC8DE_Pmk", "hBIKKc5Johx", "bjE0EtevQLr", "oLm8L54MDNr", "T5oENu6w98n", "IeQ0zFjOh1D", "oAPMrPAy9r0" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies of designing revenue optimizing auctions subject to the an additional constraint on fairness. Instead of algebraically capturing the fairness of an allocation, the authors train a neural network termed PreferenceNet that assign a fairness score to any allocation. The neural network for strategy-...
[ 6, -1, -1, -1, -1, 7, 5, 6 ]
[ 3, -1, -1, -1, -1, 2, 3, 3 ]
[ "nips_2021_amH9JxZN7C", "UVCWaST-yt", "oAPMrPAy9r0", "IeQ0zFjOh1D", "T5oENu6w98n", "nips_2021_amH9JxZN7C", "nips_2021_amH9JxZN7C", "nips_2021_amH9JxZN7C" ]
nips_2021_srzTZmjko0N
Large-Scale Learning with Fourier Features and Tensor Decompositions
Random Fourier features provide a way to tackle large-scale machine learning problems with kernel methods. Their slow Monte Carlo convergence rate has motivated research into deterministic Fourier features whose approximation error can decrease exponentially in the number of basis functions. However, due to their tensor product extension to multiple dimensions, these methods suffer heavily from the curse of dimensionality, limiting their applicability to one-, two- or three-dimensional scenarios. In our approach we overcome said curse of dimensionality by exploiting the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition. We derive a monotonically converging block coordinate descent algorithm with linear complexity in both the sample size and the dimensionality of the inputs for a regularized squared loss function, allowing us to learn a parsimonious model in decomposed form using deterministic Fourier features. We demonstrate by means of numerical experiments how our low-rank tensor approach obtains the same performance as the corresponding nonparametric model, consistently outperforming random Fourier features.
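A minimal numpy sketch of the prediction side of such a model: per-dimension deterministic features combined through CP-format weight factors, so the cost stays linear in the input dimension (O(n·D·M·R)). A Hilbert-GP-style sine basis is assumed here for concreteness; the paper's exact basis may differ, and all names are illustrative.

```python
import numpy as np

def hilbert_features(x, M=10, L=1.0):
    """Deterministic 1-D sine features on [-L, L] (assumed basis)."""
    j = np.arange(1, M + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def cp_predict(X, factors):
    """f(x) = sum_r prod_d phi(x_d)^T w_d^(r), with CP factors w_d of shape (M, R)."""
    n, D = X.shape
    out = np.ones((n, factors[0].shape[1]))
    for d in range(D):
        out *= hilbert_features(X[:, d]) @ factors[d]   # (n, R) per dimension
    return out.sum(axis=1)                              # contract the rank index

# Usage: a rank-3 model in 5 dimensions with 10 features per dimension.
rng = np.random.default_rng(0)
factors = [rng.normal(size=(10, 3)) for _ in range(5)]
y = cp_predict(rng.uniform(-1, 1, size=(100, 5)), factors)
```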
accept
This paper gives a deterministic version of the random feature model by constructing the features explicitly using a certain tensor structure. The additional structure allows more efficient computation and performs well in practice. The reviewers found that the experiments are convincing and the ideas are sound. There was some confusion about the exact complexity of the proposed algorithm, which should be clarified in the revised paper.
train
[ "J8lJssmfe60", "cY5Ia0rtOe", "q8P1lNF3ijK", "STCz9xlaMwq", "iSN1oW8v2Zn", "OZSgmQJKux1", "H28y_LAPo_n", "SytYvaI3Q7a", "jz3C1qQN2r2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I will keep my score. I have a few suggestions when you revise the paper:\n\n1. Compare the dimensionality and complexity of the proposed method with others (e.g. full-rank, RFF, Nystrom etc) more clearly, perhaps in a table.\n2. Discuss the relationship of the proposed method with Hilber...
[ -1, 6, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "q8P1lNF3ijK", "nips_2021_srzTZmjko0N", "SytYvaI3Q7a", "cY5Ia0rtOe", "jz3C1qQN2r2", "H28y_LAPo_n", "nips_2021_srzTZmjko0N", "nips_2021_srzTZmjko0N", "nips_2021_srzTZmjko0N" ]
nips_2021_lMgDDWb1ULW
Hash Layers For Large Sparse Models
We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods such as Switch Transformers and BASE Layers, while requiring no routing parameters or extra terms in the objective function such as a load balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes and input features, and show that balanced and random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. We show our approach works well both on large language modeling and dialogue tasks, and on downstream fine-tuning tasks.
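A minimal PyTorch sketch of the routing idea: a fixed hash of the token id selects which expert FFN processes each token, with no learned routing parameters and no balancing loss. Modulo stands in for the balanced hashes the paper studies; the class name and sizes are illustrative.

```python
import torch
import torch.nn as nn

class HashFFN(nn.Module):
    """Feedforward layer routed by a fixed hash of the token id."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=16):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x, token_ids):           # x: (B, T, d), token_ids: (B, T)
        bucket = token_ids % len(self.experts)  # fixed, parameter-free routing
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = bucket == e
            if mask.any():
                out[mask] = expert(x[mask])     # each token visits exactly one expert
        return out
```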
accept
This work presents a surprising result: a deterministic routing strategy for sparse MoE models can yield competitive results on language tasks compared to learnable approaches. The experiments conducted are extensive and well thought out. Regarding using perplexity as the only comparison metric, I suggest the authors either consider additional metrics (e.g., qualitative/human evaluation) or use multiple perplexity values, e.g., in Figure 2 or 3, to draw conclusions. The reason is that a single small difference of < 1 ppl point (at the range of ppl > 20) might not be conclusive. Nevertheless, I agree with the reviewers that this paper is well written and the proposed approach serves as a simple and strong baseline. Hence, I recommend Accept.
train
[ "r08OscDevtX", "1j3Z2-LO0ly", "K-5-xYB52j", "8YVPySV3vgj", "nVF1GKv_pgl", "i1c67VD5aef", "7BOle3OswO5", "u6qfiy4Xu2X", "1SjGVOdRHEj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Just to further clarify my point regarding applicability to non-causal layers; I didn't mean to convey that hash layers are not applicable to non-causal attention layers, but the fact that in this study the comparison is limited to causal layers. Bi-directional attention might require the need for more complex ro...
[ -1, 7, -1, -1, -1, -1, 7, 7, 9 ]
[ -1, 4, -1, -1, -1, -1, 5, 5, 4 ]
[ "nVF1GKv_pgl", "nips_2021_lMgDDWb1ULW", "1SjGVOdRHEj", "1j3Z2-LO0ly", "u6qfiy4Xu2X", "7BOle3OswO5", "nips_2021_lMgDDWb1ULW", "nips_2021_lMgDDWb1ULW", "nips_2021_lMgDDWb1ULW" ]
nips_2021_SvrYl-FDq2
Sliced Mutual Information: A Scalable Measure of Statistical Dependence
Mutual information (MI) is a fundamental measure of statistical dependence, with a myriad of applications to information theory, statistics, and machine learning. While it possesses many desirable structural properties, the estimation of high-dimensional MI from samples suffers from the curse of dimensionality. Motivated by statistical scalability to high dimensions, this paper proposes sliced MI (SMI) as a surrogate measure of dependence. SMI is defined as an average of MI terms between one-dimensional random projections. We show that it preserves many of the structural properties of classic MI, while gaining scalable computation and efficient estimation from samples. Furthermore, and in contrast to classic MI, SMI can grow as a result of deterministic transformations. This enables leveraging SMI for feature extraction by optimizing it over processing functions of raw data to identify useful representations thereof. Our theory is supported by numerical studies of independence testing and feature extraction, which demonstrate the potential gains SMI offers over classic MI for high-dimensional inference.
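A minimal Monte Carlo sketch of SMI under our own choices: directions drawn uniformly from the unit spheres, and sklearn's k-NN estimator for the 1-D MI terms. The paper analyzes generic 1-D estimators and their rates; none of that is reproduced here, and the function name is illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def sliced_mi(X, Y, n_slices=200, rng=None):
    """Monte Carlo estimate of SMI: average MI between 1-D projections.

    X: (n, dx), Y: (n, dy).  Each slice projects both variables to scalars
    and estimates the scalar-scalar MI.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    vals = []
    for _ in range(n_slices):
        theta = rng.normal(size=X.shape[1]); theta /= np.linalg.norm(theta)
        phi = rng.normal(size=Y.shape[1]); phi /= np.linalg.norm(phi)
        u, v = X @ theta, Y @ phi
        vals.append(mutual_info_regression(u.reshape(-1, 1), v)[0])
    return float(np.mean(vals))
```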
accept
The paper has received overwhelmingly positive reviews, where reviewers largely agreed on the originality/novelty and the significance of the proposed quantity SMI. I also join the reviewers in finding it interesting and exciting to see that the curse of dimensionality can be circumvented by random projections in fields other than probability metrics. Provided the authors resolve the confusion about the DPI and rephrase their corresponding sentences, I recommend acceptance.
train
[ "qY7cqsxnOd5", "_a0dA5ELzKq", "IR5NUuV4erw", "uRhjTyXWF_Z", "tstlH1o0oTJ", "bYimWwLpE5r", "usyXVxmNIL4", "bkR7sE5q65F", "bcZxMXNM32", "4m6LiUuWV-U" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am fully satisfied with the answers provided by the authors. I think that the authors did a tremendous job in taking into account all the reviews and answering the question and telling what can be added for the final version.\n\nAbout my question (2) relative to dispersions in 4.1 and 4.2: I was thinking exact...
[ -1, -1, 7, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, 3, -1, -1, -1, -1, 3, 3, 4 ]
[ "tstlH1o0oTJ", "bYimWwLpE5r", "nips_2021_SvrYl-FDq2", "4m6LiUuWV-U", "bcZxMXNM32", "bkR7sE5q65F", "IR5NUuV4erw", "nips_2021_SvrYl-FDq2", "nips_2021_SvrYl-FDq2", "nips_2021_SvrYl-FDq2" ]
nips_2021_iqpFcg2TAa0
Emergent Communication under Varying Sizes and Connectivities
Recent advances in deep neural networks have allowed artificial agents to derive their own emergent languages that promote interaction, coordination, and collaboration within a group. Just as we humans have succeeded in creating a shared language that allows us to interact within a large group, can the emergent communication within an artificial group converge to a shared, agreed language? This research provides an analytical study of the shared emergent language within group communication settings of different sizes and connectivities. As the group size increases up to hundreds, agents start to speak dissimilar languages, but the rate at which they successfully communicate is maintained. We observe the emergence of different dialects when we restrict the group communication to have local connectivities only. Finally, we provide optimization results for group communication graphs when the number of agents one can communicate with is restricted or when we penalize communication between distant agent pairs. The optimized communication graphs show superior communication success rates compared to graphs with the same number of links, as well as the emergence of hub nodes and scale-free networks.
accept
This paper studies the effect of population size and of the connectivity of the communication topology of those populations in emergent communication and referential games. While population size has received some (very limited) attention in the literature, how the topology of the communicating agents is organized has been quite understudied. The authors report communicative accuracy as a function of population size/connectivity and also present analyses of the resulting languages. Overall, this has been among the most productive author-reviewer interactions I've experienced. Authors have worked with reviewers to clarify questions and address concerns. Some new content has been generated during the discussion period, and so I would ask the authors to update their manuscript accordingly to reflect these discussions.
train
[ "CBuMhdbHLKF", "STj9VDKn25V", "HVGQLVwrZU2", "e0eJn7tDAKM", "GGWnvi-O1vn", "beopE4G1czB", "5ACqTsrX_It", "h7P8ERs-8ly", "BhiTagwtsl8", "_DRLJDpDlu", "aAjsmmLaWF0", "-Xtn4HaKI9y", "3owQBC4t52Z", "mjJZN8Zws00", "BrGvxFa5d-1", "rZExtfm5GWg", "5JadDZtOzRN", "2rR_C_COHmC", "47eDVTsCrO...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_...
[ " Dear reviewer `z6vN`,\n\nThank you for the feedback. We will add discussions on different training regimes as well as the additional results using RL for training. \nWe will also include our analyses on the predictive performance during training.", "The paper studies the effect of population size and graph conn...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "HVGQLVwrZU2", "nips_2021_iqpFcg2TAa0", "5JadDZtOzRN", "mjJZN8Zws00", "Z_JPsB3dB-P", "5ACqTsrX_It", "nips_2021_iqpFcg2TAa0", "_DRLJDpDlu", "nips_2021_iqpFcg2TAa0", "3owQBC4t52Z", "BhiTagwtsl8", "BhiTagwtsl8", "BhiTagwtsl8", "rZExtfm5GWg", "nips_2021_iqpFcg2TAa0", "1XzltZoIt1-", "STj9...
nips_2021_mIKui9t0jDq
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm empirically our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost, and we make the code to reproduce our results available at https://github.com/ibm/sau-explore.
accept
Two reviewers recommended acceptance of the paper (1x weak accept, 1x accept) and two reviewers recommended (weak) rejection of the paper. The most positive review had low confidence and I downweighed it when coming to a final decision for the paper. The reviewers acknowledged the simplicity and efficiency of the proposed exploration approach but also raised concerns regarding the completeness of the theoretical analysis and the presented empirical evaluation. While a more complete theoretical analysis was not deemed crucial for acceptance of the paper, the missing baselines make it impossible to fully (empirically) understand the value of the proposed approach. The authors commented on this issue but, rather than providing an empirical comparison, only highlighted the theoretical differences. I am therefore recommending rejection of the paper but at the same time would like to encourage the authors to improve their paper according to the reviewers' suggestions and in particular work on the empirical comparison of the proposed approach to recently proposed algorithms in the field.
train
[ "a5kLWwXWVcE", "6Y0sqiriKub", "m4XD_j9Bn-x", "w4SIUN6XoTB", "MZImI25eGZ-", "P4hzSNiHxtX", "GFQJhOuQgP", "wjVODaAwTMm", "H10qbwcYSOS", "iNGXcr5brAH", "QcTsHjMY_qS", "monBxEPmwMd", "yifsduOXCVE", "KGTD8xbMXQ3", "YmY8unOSoOG", "mRPv2QUgw2x" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\n(+) Simplicity!\n\n(+) Formal justification and guarantees of the approach in two popular bandit settings\n\n(+) The experiments on the bandit benchmark from [18] are convincing. I think it is meaningful to first consider and understand the bandit setting before the RL setting.\n\n(+/-) The paper is...
[ 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "nips_2021_mIKui9t0jDq", "m4XD_j9Bn-x", "MZImI25eGZ-", "nips_2021_mIKui9t0jDq", "GFQJhOuQgP", "yifsduOXCVE", "H10qbwcYSOS", "iNGXcr5brAH", "monBxEPmwMd", "KGTD8xbMXQ3", "a5kLWwXWVcE", "w4SIUN6XoTB", "mRPv2QUgw2x", "YmY8unOSoOG", "nips_2021_mIKui9t0jDq", "nips_2021_mIKui9t0jDq" ]
nips_2021_Ba3odanehCw
Regret Minimization Experience Replay in Off-Policy Reinforcement Learning
In reinforcement learning, experience replay stores past samples for further reuse. Prioritized sampling is a promising technique for better utilizing these samples. Previous criteria for prioritization include TD error, recentness, and corrective feedback, which are mostly heuristically designed. In this work, we start from the regret minimization objective and obtain an optimal prioritization strategy for the Bellman update that can directly maximize the return of the policy. The theory suggests that data with higher hindsight TD error, better on-policiness, and more accurate Q values should be assigned higher weights during sampling. Thus, most previous criteria only consider this strategy partially. We not only provide theoretical justifications for previous criteria, but also propose two new methods to compute the prioritization weight, namely ReMERN and ReMERT. ReMERN learns an error network, while ReMERT exploits the temporal ordering of states. Both methods outperform previous prioritized sampling algorithms in challenging RL benchmarks, including MuJoCo, Atari and Meta-World.
accept
We have a wide range of opinions for this paper. On the positive side, most reviewers agree that the problem addressed by the paper is important. But then, there are several concerns too: - One of the main issues is the novelty, both theoretical and algorithmic, compared to DisCor (Kumar et al., "DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction," 2020). Some of the reviewers believe the theoretical novelty is not significant compared to DisCor, while others believe that the theoretical result, especially Theorem 1, is significant and insightful. Since this is a point of contention, I suggest the authors be more explicit in comparing their results with the prior work. - The other issue is regarding the empirical studies, and whether the improvements are significant or not. The paper uses 3 or 4 random seeds in its experiments. The standard deviation of results is often large enough that we have a lot of overlap between curves in the figures or between the values in the table (for Atari). - There is also a concern regarding Theorem 2. The upper bound in Theorem 2 is "$\gamma R_\text{max} / (1 - \gamma) + ...$". This additive form makes the upper bound almost vacuous, as $R_\text{max}/(1 - \gamma)$ is equal to $Q_\text{max}$ and the quantity that is bounded is $|Q_k - Q^*|$, which is trivially smaller than $2 Q_\text{max}$. I should note that after bringing up this issue in the discussions, the authors provided another upper bound. That requires careful verification by the reviewers. - Some of the theoretical results are not presented precisely. For example, Theorem 1 provides a guarantee about a relaxation of an optimization problem in Eq. (1). But the relaxation is not specified in the main body. It appears to be Eqs. (12-13) in the appendix, which are presented as a part of the proof of an auxiliary lemma. This is simply too remote from where the result is stated. It might be OK if the result is about a relaxed optimization problem instead of the original one, but the paper should be very clear and precise about it. Overall, I think this paper might be interesting for the community, but it can benefit a lot from some major revisions. Given that two out of four reviewers are on the negative side, that one of the positive reviewers is only recommending a weak accept, and all the concerns mentioned above, unfortunately I cannot recommend the acceptance of this paper in its current form.
train
[ "z93CzEQa0S4", "B74hzsGr23d", "Ua-d6I9yEUe", "h9_rU1MYIOw", "Ba8t57Y4N_C", "PMqIAxs5Y3", "LKmQVNe0Hw", "PwUKvKUK71", "eHL6d-Gbjwr", "eEvsNFTgz8N", "Oqs9Dldl9a" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a new experience sampling method by formulating the problem as regret minimization. They try to minimize the gap between the policy given all the data so far and the optimal policy. The paper provides a solution to such a regret minimization problem. Based on the solution, the paper claims that ...
[ 5, -1, -1, -1, -1, -1, -1, -1, 6, 4, 9 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_Ba3odanehCw", "LKmQVNe0Hw", "Ba8t57Y4N_C", "eEvsNFTgz8N", "Oqs9Dldl9a", "z93CzEQa0S4", "z93CzEQa0S4", "eHL6d-Gbjwr", "nips_2021_Ba3odanehCw", "nips_2021_Ba3odanehCw", "nips_2021_Ba3odanehCw" ]
nips_2021_h1-ilmYbdea
Relative Uncertainty Learning for Facial Expression Recognition
In facial expression recognition (FER), the uncertainties introduced by inherent noise, such as ambiguous facial expressions and inconsistent labels, raise concerns about the credibility of recognition results. To quantify these uncertainties and achieve good performance under noisy data, we regard uncertainty as a relative concept and propose an innovative uncertainty learning method called Relative Uncertainty Learning (RUL). Rather than assuming Gaussian uncertainty distributions for all datasets, RUL builds an extra branch to learn uncertainty from the relative difficulty of samples via feature mixup. Specifically, we use uncertainties as weights to mix facial features and design an add-up loss to encourage uncertainty learning. It is easy to implement and adds little or no extra computational overhead. Extensive experiments show that RUL outperforms state-of-the-art FER uncertainty learning methods on both real-world and synthetic noisy FER datasets. Besides, RUL also works well on other datasets such as CIFAR and Tiny ImageNet. The code is available at https://github.com/zyh-uaiaaaa/Relative-Uncertainty-Learning.
accept
After discussion, the reviewers are all for accepting the work. It's well-motivated, the idea is simple, and empirically it appears to work well in practice across a solid set of experiments. The specific novelty is fairly low, but I find this a highly overrated criterion for acceptance. I highly recommend the authors address the rebuttal concerns as they further polish the paper.
train
[ "GXmiNbEIt8G", "dxMeRWVPAIE", "MHpVr-4ndKq", "7j6eV2buaA", "xqnoUkOMY4j", "Ed0x_k4dwNk", "F-MoVqWvAzb", "CdtM3zrc0xJ", "z0xQc-Pw9uu", "EAW9gyqyfB", "8Bdk8-00zhq", "MDWojgtt5SV", "qUDQHTejwt", "MLkAMhdzt3", "NUgcJqxCt1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ " Dear reviewer, Thanks for your time and efforts spent on reviewing our paper. As the paper Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition is closely related to our work, we are very glad to include it in our final paper.", "This paper focuse...
[ -1, 6, -1, -1, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1 ]
[ "MHpVr-4ndKq", "nips_2021_h1-ilmYbdea", "7j6eV2buaA", "CdtM3zrc0xJ", "F-MoVqWvAzb", "nips_2021_h1-ilmYbdea", "MLkAMhdzt3", "qUDQHTejwt", "8Bdk8-00zhq", "nips_2021_h1-ilmYbdea", "MDWojgtt5SV", "NUgcJqxCt1", "dxMeRWVPAIE", "Ed0x_k4dwNk", "EAW9gyqyfB" ]
nips_2021_GrZmKDYCp6H
An Information-theoretic Approach to Distribution Shifts
Safely deploying machine learning models to the real world is often a challenging process. For example, models trained with data obtained from a specific geographic location tend to fail when queried with data obtained elsewhere, agents trained in a simulation can struggle to adapt when deployed in the real world or novel environments, and neural networks that are fit to a subset of the population might carry some selection bias into their decision process. In this work, we describe the problem of data shift from an information-theoretic perspective by (i) identifying and describing the different sources of error, (ii) comparing some of the most promising objectives explored in the recent domain generalization and fair classification literature. From our theoretical analysis and empirical evaluation, we conclude that the model selection procedure needs to be guided by careful considerations regarding the observed data, the factors used for correction, and the structure of the data-generating process.
accept
The authors study out-of-distribution (OOD) generalisation using an information-theoretic framework, decomposing the contributions of concept and covariate shift to the overall error. They go on to show how the framework yields an upper bound on OOD error in the context of a (latent) representation of the input variables. Consistent with related results, this leads to the conclusion that minimising OOD error necessitates losing some information which is predictive of the training data. The optimal tradeoff relies on data unavailable at training time and so the authors study 3 proxies for minimising concept shift in the representation: independence, sufficiency and separation. In an empirical evaluation, the authors study minimising these criteria in comparison with existing approaches with related objectives. Several reviewers appreciated the information-theoretic formulation of OOD error and the theoretical treatment of it. Reviewer S3bF raised concerns about the significance of the technical contribution, but this is to some extent compensated for by the insights gained by the change in framework compared to related works. Additionally, several comments were made about the clarity of the organisation of the paper. In particular, transitions between sections are often hard to follow (see e.g., Section 2.2, which starts without any lead-in). The authors would do well to guide the reader better through the sections. Nevertheless, given the reviewers' appreciation for the work, and the arguably minor concerns that remain after the discussion phase, I believe that a re-organisation of the paper is sufficient.
train
[ "-XuwqkJWBl", "OsVjW7YwPH0", "-1WwvYCJUZ", "Z0wPvzVAXUA", "AZVUCv3K1OR", "PD8-Oxqiwnz", "87HEIfozX5", "KnUcR_n4rcB", "E7VZMqA1Gsc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses, I have updated my score. The main reason behind my initial score was a similar concern as reviewer S3bF which was that the work doesn't lead to any quantifiable and 'actionable' conclusions about the problem of distributional shifts. However, I think this work does a good job of expl...
[ -1, 7, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, 3, -1, -1, -1, -1, 4, 3, 1 ]
[ "Z0wPvzVAXUA", "nips_2021_GrZmKDYCp6H", "E7VZMqA1Gsc", "OsVjW7YwPH0", "KnUcR_n4rcB", "87HEIfozX5", "nips_2021_GrZmKDYCp6H", "nips_2021_GrZmKDYCp6H", "nips_2021_GrZmKDYCp6H" ]
nips_2021_UJw7jgbLgS
TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness
Adversarial transferability is an intriguing property: an adversarial perturbation crafted against one model is also effective against another model, even when these models come from different model families or training processes. To better protect ML systems against adversarial attacks, several questions arise: what are the sufficient conditions for adversarial transferability, and how can it be bounded? Is there a way to reduce adversarial transferability in order to improve the robustness of an ensemble ML model? To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; we then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness. Our theoretical analysis shows that promoting orthogonality between the gradients of base models alone is not enough to ensure low transferability; at the same time, model smoothness is an important factor in controlling the transferability. We also provide lower and upper bounds on adversarial transferability under certain conditions. Inspired by our theoretical analysis, we propose an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness between base models. We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly.
accept
This paper analyzes the reasons for adversarial transferability between models and proposes an algorithm to reduce the transferability between base models. All reviewers agree that this is an interesting topic and that the paper provides theoretical insight into the transferability problem. On the other hand, the experimental analyses need to be strengthened. What are the additional costs of the reduction in adversarial transferability? Overall, this is an interesting paper. I recommend accept.
test
[ "WY7xQPIoBuL", "Oyn3m2LRs5S", "roaWgwZSaS7", "UL2XMt-hH51", "G7ep4xb1tZ1", "5r4e-4DJByF", "DlYBbOimDil", "8ac09C05GxG", "PP5Xq2AHjNp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The manuscript describes the problem of adversarial transferability, the process of adversarial attacks that can be transferred across models, which can help scale adversarial attack strategies. As a guardrail against such attacks, it is desired that models are robust and protected against such attacks. Towards so...
[ 6, -1, 6, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, 3, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_UJw7jgbLgS", "UL2XMt-hH51", "nips_2021_UJw7jgbLgS", "PP5Xq2AHjNp", "roaWgwZSaS7", "8ac09C05GxG", "WY7xQPIoBuL", "nips_2021_UJw7jgbLgS", "nips_2021_UJw7jgbLgS" ]
nips_2021_k7Q71M4BPNK
Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors
Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett
accept
This paper studies the problem of phase retrieval with generative priors. First, they show that under Gaussian measurements and a generative model that is $L$-Lipschitz and has $k$ inputs, $O(k \log L)$ measurements suffice for recovery. Furthermore, they give an algorithm, predicated on solving a particular spectral initialization problem, that actually attains this number of measurements. There is a catch here: the optimization problem itself is likely hard. This is not surprising. After all, even in sparse phase retrieval there is conjectured to be a computational vs. statistical gap (and evidence is known based on the planted clique problem). Compared to the closely related work of Hand-Leong-Voroninski, the main differences are: (1) Hand et al. work with $d$-layer networks and get suboptimal dependence on $d$, whereas the bounds here are optimal up to constant factors; (2) Hand et al. require an expansion condition on the network, which is not needed here. However, (3) Hand et al. show that expansion gives a favorable optimization landscape, and thus have no hidden assumptions about being able to solve computationally hard problems.
train
[ "TcC5oF0NqQe", "qVo0MJrot1y", "IuY49hIeQgL", "Ibt34ToOWQW", "iThdM_iz6Yl", "6GFNTuf4Mh" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for the comments. Regarding the prior work and experiments, our responses are as follows.\n\n(**Prior work**) This additional reference [BA] is very helpful, and we will definitely discuss it in the revised version. Indeed, both [PH] and [BA] obtain certain order-optimal sample complexity guarantees ...
[ -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "qVo0MJrot1y", "iThdM_iz6Yl", "nips_2021_k7Q71M4BPNK", "nips_2021_k7Q71M4BPNK", "nips_2021_k7Q71M4BPNK", "nips_2021_k7Q71M4BPNK" ]
nips_2021_qGvMv3undNJ
Moser Flow: Divergence-based Generative Modeling on Manifolds
We are interested in learning generative models for complex geometries described via manifolds, such as spheres, tori, and other implicit surfaces. Current extensions of existing (Euclidean) generative models are restricted to specific geometries and typically suffer from high computational costs. We introduce Moser Flow (MF), a new class of generative models within the family of continuous normalizing flows (CNF). MF also produces a CNF via a solution to the change-of-variable formula; however, differently from other CNF methods, its model (learned) density is parameterized as the source (prior) density minus the divergence of a neural network (NN). The divergence is a local, linear differential operator, easy to approximate and calculate on manifolds. Therefore, unlike other CNFs, MF does not require invoking or backpropagating through an ODE solver during training. Furthermore, representing the model density explicitly as the divergence of a NN rather than as a solution of an ODE facilitates learning high fidelity densities. Theoretically, we prove that MF constitutes a universal density approximator under suitable assumptions. Empirically, we demonstrate for the first time the use of flow models for sampling from general curved surfaces and achieve significant improvements in density estimation, sample quality, and training complexity over existing CNFs on challenging synthetic geometries and real-world benchmarks from the earth and climate sciences.
accept
The following is a summary of the pros and cons that resulted from the reviews and the discussion period. pros: * novel and nontrivial contributions to the flow literature. (R6hn, oTmD, STbS, VH2c, 96Cx) - reduces the computational cost of training CNFs by no longer needing to solve an ODE. - likelihood computation does not require evaluating an ODE, in contrast to regular CNFs. - provably a universal density estimator under suitable assumptions. * interesting variety of experiments that show the effectiveness of the proposed method. (oTmD) * for the most part well written (oTmD, 96Cx) cons: * limitations: - scalability limitations to higher dimensional problems. --> authors respond that higher dimensions are indeed challenging and promise to expand on the precise challenges in an updated version of the paper. (R6hn, 96Cx) - regularization of the negative part of the density in Moser flows introduces an additional hyperparameter for tuning; worries that the regularization parameter might lead to unstable training if chosen wrongly. --> authors added experiments to show the behavior of the model under different ranges of this hyperparameter. (R6hn) - can the negative density also lead to Moser flow not producing a valid density? --> authors rebut this in their response. (R6hn, 96Cx) * confusion around Theorem 3, exact or approximate computation of traces etc. --> resolved by author response. (R6hn) * comparison with other normalizing flow methods on manifolds would strengthen the paper. --> addressed in discussion. (oTmD, 96Cx) * introducing an explicit proof of Theorems 2 and 3 would improve the paper (arguments for the proof can now be found in the main paper). --> addressed in discussion. (oTmD, 96Cx) * clarifications needed about the applicability of the method. Can Moser flows still be used for manifolds that are not submanifolds of a Euclidean space? What if the manifold is not known? --> discussed in author response. (96Cx) * improvements needed in terms of evaluation: - figure 2: prior methods can also model this distribution, so the success of the proposed method over previous methods is unclear. (VH2c) - quantitative evaluation would strengthen the claims of being more computationally efficient and capturing densities better (e.g. Fig 3). --> addressed in author response. (VH2c, 96Cx) - comparison is needed with FFJORD in terms of both likelihood and runtime/#iterations. --> the authors include a link in their response with these results. (R6hn) - figure 5: why can't previous approaches be used to model this problem? --> addressed in author response. (VH2c) - table 1: how does the number of parameters of this method compare to previous work? --> addressed in author response. (VH2c) Authors and reviewers engaged in active discussions during the discussion period. Most of the concerns raised by the reviewers were addressed, leading to 3/5 reviewers raising their scores and a consensus to accept this submission. Considering the nontrivial contribution of this work and the future research it could lead to, the recommended decision is accept.
train
[ "uDh2xjItd-g", "hv0KVisheE", "tXrQt1nly-x", "hhcmJR5Ka2Q", "5pughs1tM1V", "SgeIteisz5a", "wihsGT_lhhl", "jMoNwQK3O7n", "ErF7yh5W0fB", "dnK-g7hRynJ", "nEKE2c4Q1Oy", "HlokU99wFYR", "oxVer3jJRB", "AvKtDWF3vD5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper tackles the problem of learning flow-based generative models. The proposed approach, a new instance of continuous normalizing-flow methods, models the density as a difference between the prior and a divergence term, parameterized by a neural network.\n\nThe claims are that:\n* MF constitutes a universal ...
[ 7, -1, 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 2, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_qGvMv3undNJ", "ErF7yh5W0fB", "nips_2021_qGvMv3undNJ", "nEKE2c4Q1Oy", "jMoNwQK3O7n", "nips_2021_qGvMv3undNJ", "HlokU99wFYR", "SgeIteisz5a", "uDh2xjItd-g", "AvKtDWF3vD5", "tXrQt1nly-x", "oxVer3jJRB", "nips_2021_qGvMv3undNJ", "nips_2021_qGvMv3undNJ" ]
nips_2021_xyoFSmocONi
Structure-Aware Random Fourier Kernel for Graphs
Gaussian Processes (GPs) define distributions over functions, and their generalization capabilities depend heavily on the choice of kernel. In this paper, we propose a novel structure-aware random Fourier (SRF) kernel for GPs that brings several benefits when modeling graph-structured data. First, the SRF kernel is defined with a spectral distribution based on the Fourier duality given by Bochner's theorem, transforming the kernel learning problem into a distribution inference problem. Second, the SRF kernel admits a random Fourier feature formulation that makes the kernel scalable for optimization. Third, the SRF kernel enables leveraging geometric structures by taking subgraphs as inputs. To effectively optimize GPs with the SRF kernel, we develop a variational EM algorithm, which alternates between an inference procedure (E-step) and a learning procedure (M-step). Experimental results on five real-world datasets show that our model can achieve state-of-the-art performance in two typical graph learning tasks, i.e., object classification and link prediction.
accept
The paper received positive reviews and the rebuttal phase clarified some potential issues. Overall, this is a solid contribution, which should be accepted. The final submission should, however, incorporate suggestions made during the rebuttal phase.
train
[ "wGIXxvA66vF", "SoreBb3sy4r", "o5hOo5-c6oV", "30WRgd2bkIa", "xKFCBwEPNqm", "c_4QZ6OJ8sH", "1K8mpfLqyfs", "lEUdAdkKinQ", "nUEJ7d2Q17d", "tWOa6Zqep7", "0ly9_7tnm9l", "HaC1YFPa3gL", "HKP7XfqQnTD", "jhVuGEeH5zs" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the author for their reply. Author's response is satisfactory to me, I would like to keep my score as it is.", " We really appreciate the reviewer’s detailed and insightful comments as well as the constructive suggestions. \n\nWe agree that node-level similarity is also an important problem in graph ker...
[ -1, -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, 9, 7 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "tWOa6Zqep7", "o5hOo5-c6oV", "xKFCBwEPNqm", "nips_2021_xyoFSmocONi", "c_4QZ6OJ8sH", "0ly9_7tnm9l", "nips_2021_xyoFSmocONi", "nUEJ7d2Q17d", "1K8mpfLqyfs", "jhVuGEeH5zs", "30WRgd2bkIa", "HKP7XfqQnTD", "nips_2021_xyoFSmocONi", "nips_2021_xyoFSmocONi" ]
nips_2021_9BnCwiXB0ty
Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling
Progressively applying Gaussian noise transforms complex data distributions to approximately Gaussian ones. Reversing this dynamic defines a generative model. When the forward noising process is given by a Stochastic Differential Equation (SDE), Song et al. (2021) demonstrate how the time inhomogeneous drift of the associated reverse-time SDE may be estimated using score-matching. A limitation of this approach is that the forward-time SDE must be run for a sufficiently long time for the final distribution to be approximately Gaussian. In contrast, solving the Schrödinger Bridge (SB) problem, i.e. an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time. We present Diffusion SB (DSB), an original approximation of the Iterative Proportional Fitting (IPF) procedure to solve the SB problem, and provide theoretical analysis along with generative modeling experiments. The first DSB iteration recovers the methodology proposed by Song et al. (2021), with the flexibility of using shorter time intervals, as subsequent DSB iterations reduce the discrepancy between the final-time marginal of the forward (resp. backward) SDE with respect to the prior (resp. data) distribution. Beyond generative modeling, DSB offers a widely applicable computational optimal transport tool as the continuous state-space analogue of the popular Sinkhorn algorithm (Cuturi, 2013).
accept
All reviewers recommended accepting this paper, and even though there were a number of concerns, the consensus among the reviewers was clear. This was an enjoyable paper to read, and I recommend that you address the issues raised by the reviewers in the camera-ready version, paying special attention to the comments regarding improving clarity and facilitating understanding.
train
[ "bBfAq4IcpN", "L4SIjR0pAr", "bTNWXmvU6c", "GltSLJJBdVr", "P6ut9hfz5uA", "0GPfwimqKjN", "bQbx-XgYCFF", "hCfu7m0f6m", "oAaH4SY1BLc", "-lgrC17KpJc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors reply. My concerns are eliminated. ", " Thank you for the clarification. I think this is a great paper!", " Thank you for your review and constructive feedback. We are glad that you appreciate the paper.\n\n---------------------------\n\n- **“Figure 1 is not really informative and shou...
[ -1, -1, -1, -1, -1, -1, 8, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "GltSLJJBdVr", "P6ut9hfz5uA", "-lgrC17KpJc", "oAaH4SY1BLc", "hCfu7m0f6m", "bQbx-XgYCFF", "nips_2021_9BnCwiXB0ty", "nips_2021_9BnCwiXB0ty", "nips_2021_9BnCwiXB0ty", "nips_2021_9BnCwiXB0ty" ]
nips_2021_U34rQjnImpM
Improving Transferability of Representations via Augmentation-Aware Self-Supervision
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering. However, such invariance could be harmful to downstream tasks if they rely on the characteristics of the data augmentations, e.g., if they are location- or color-sensitive. This is not an issue just for unsupervised learning; we found that this occurs even in supervised learning because it also learns to predict the same label for all augmented samples of an instance. To avoid such failures and obtain more generalizable representations, we suggest optimizing an auxiliary self-supervised loss, coined AugSelf, that learns the difference in augmentation parameters (e.g., cropping positions, color adjustment intensities) between two randomly augmented samples. Our intuition is that AugSelf encourages the preservation of augmentation-aware information in learned representations, which could be beneficial for their transferability. Furthermore, AugSelf can easily be incorporated into recent state-of-the-art representation learning methods with negligible additional training cost. Extensive experiments demonstrate that our simple idea consistently improves the transferability of representations learned by supervised and unsupervised methods in various transfer learning scenarios. The code is available at https://github.com/hankook/AugSelf.
accept
This paper aims to improve the transferability of representations via augmentation-aware self-supervision, preserving both augmentation-invariant and augmentation-aware information. The main finding is that augmentation-variant information may be relevant to downstream tasks, which is interesting and inspiring for the self-supervised learning community. The rebuttal is informative and relevant, and after the discussion all reviewers unanimously recommended acceptance. Reviewers also pointed out some important directions for improvement, such as a larger set of augmentations for pre-training, more sophisticated fine-tuning methods, and a clearer elaboration on the relationship between augmentation invariance and awareness. The authors should make sure to incorporate the rebuttal material into the final draft.
train
[ "yBAbcfclI5N", "xfRQIRlfewW", "b-KHCl__oB7", "Pg52Qd42CRG", "Au0LRWySR49", "SCx7DEtzsz", "xGvE5_FWDGV", "NQCvs2iYQfr", "eouGZGeuifG", "vk7stl0AmZm", "ksjoqTwja3c", "VDPjQl5TiB", "jyMoPt8LrPl" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for providing your additional feedback.\n\nWe first emphasize that we want $f(x)$ to learn (or contain) both augmentation-invariant and augmentation-aware information (or features) in the input $x$. To this end, we train $g$ and $\\phi$ to extract each information from $f(x)$, respectively, i.e., we w...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "b-KHCl__oB7", "SCx7DEtzsz", "vk7stl0AmZm", "nips_2021_U34rQjnImpM", "nips_2021_U34rQjnImpM", "NQCvs2iYQfr", "VDPjQl5TiB", "Au0LRWySR49", "jyMoPt8LrPl", "ksjoqTwja3c", "nips_2021_U34rQjnImpM", "nips_2021_U34rQjnImpM", "nips_2021_U34rQjnImpM" ]
nips_2021_M_lkFOwVdYc
Long-Short Transformer: Efficient Transformers for Language and Vision
Transformers have achieved success in both language and vision domains. However, it is prohibitively expensive to scale them to long sequences such as long documents or high-resolution images, because the self-attention mechanism has quadratic time and memory complexity with respect to the input sequence length. In this paper, we propose Long-Short Transformer (Transformer-LS), an efficient self-attention mechanism for modeling long sequences with linear complexity for both language and vision tasks. It aggregates a novel long-range attention with dynamic projection to model distant correlations and a short-term attention to capture fine-grained local correlations. We propose a dual normalization strategy to account for the scale mismatch between the two attention mechanisms. Transformer-LS can be applied to both autoregressive and bidirectional models without additional complexity. Our method outperforms the state-of-the-art models on multiple tasks in language and vision domains, including the Long Range Arena benchmark, autoregressive language modeling, and ImageNet classification. For instance, Transformer-LS achieves 0.97 test BPC on enwik8 using half the number of parameters of previous methods, while being faster and able to handle sequences 3x as long as its full-attention version on the same hardware. On ImageNet, it obtains state-of-the-art results (e.g., a moderately sized model with 55.8M parameters trained solely on 224x224 ImageNet-1K obtains 84.1% Top-1 accuracy), while being more scalable on high-resolution images. The source code and models are released at https://github.com/NVIDIA/transformer-ls.
accept
This paper aims to advance efficient transformers by integrating long-range attention and short-term attention. The reviewers identified several risks in this paper. Most of the reviewers had concerns about the novelty, as the presented method is a combination of existing techniques. The reviewers also raised questions regarding the architecture choices, the evaluation setup for LRA, and the lack of experiments on downstream NLP tasks. During the rebuttal, as acknowledged by the reviewers, the authors successfully addressed most of the experimental concerns by adding more results. Despite the fact that the methodology is rather incremental, the empirical results seem quite strong and may prove impactful for many CV and NLP downstream tasks.
train
[ "P32raiivQ6_", "cfJ-mk7mOpE", "KMHkzpz-_V1", "sCZs4rkIhL", "twlGUNGVq7M", "QIp9kEoit3v", "hwecGCSyf4", "--zuyd4YFGY", "1255b0koo70", "zilnYt150AI", "e6uMVThc7t", "-_uT3b44Uh", "0c5HHwvOnCs", "235phN2JGYd", "YPY-w4-PKiu", "soF12wP2wSP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors made a lot of efforts to address the comments and concerns. The additional experiments on ViL address most of my concerns. After reading the authors' responses and other reviewers' comments, I lean to keep the original rating that is slightly positive. \n", "The paper introduces Long-Sh...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "-_uT3b44Uh", "nips_2021_M_lkFOwVdYc", "zilnYt150AI", "cfJ-mk7mOpE", "nips_2021_M_lkFOwVdYc", "--zuyd4YFGY", "nips_2021_M_lkFOwVdYc", "1255b0koo70", "twlGUNGVq7M", "cfJ-mk7mOpE", "soF12wP2wSP", "YPY-w4-PKiu", "235phN2JGYd", "nips_2021_M_lkFOwVdYc", "nips_2021_M_lkFOwVdYc", "nips_2021_M...
nips_2021_qe9z54E_cqE
Post-Training Sparsity-Aware Quantization
Quantization is a technique used in deep neural networks (DNNs) to increase execution performance and hardware efficiency. Uniform post-training quantization (PTQ) methods are common, since they can be implemented efficiently in hardware and do not require extensive hardware resources or a training set. Mapping FP32 models to INT8 using uniform PTQ yields models with negligible accuracy degradation; however, reducing precision below 8 bits with PTQ is challenging, as accuracy degradation becomes noticeable, due to the increase in quantization noise. In this paper, we propose a sparsity-aware quantization (SPARQ) method, in which the unstructured and dynamic activation sparsity is leveraged at different representation granularities. 4-bit quantization, for example, is employed by dynamically examining the bits of 8-bit values and choosing a window of 4 bits, while first skipping zero-value bits. Moreover, instead of quantizing activation-by-activation to 4 bits, we focus on pairs of 8-bit activations and examine whether one of the two is equal to zero. If one is equal to zero, the second can opportunistically use the other's 4-bit budget; if neither equals zero, then each is dynamically quantized to 4 bits, as described. SPARQ achieves minor accuracy degradation and a practical hardware implementation.
accept
This paper explores activation sparsity at both the bit level and the numerical level for 8-bit post-training quantization. At the bit level, a dynamic quantization technique is proposed to dynamically examine and trim leading zero bits, enabling multi-scale quantization and a 2x4bit-8bit MAC unit. At the numerical level, the paper proposes to skip a pair of activations when one of them is zero. The techniques are evaluated on popular CNN vision models and datasets, achieving baseline accuracy and potentially 2x speedup with some reasonable overhead. Overall, the paper is strong. The techniques introduced are interesting and novel. The experiments are sound, and the results are solid. However, it would have been helpful if the authors could describe the hardware benefits more clearly, as this point seemed confusing to some reviewers. In addition, the hardware evaluation methodology in the appendix should be moved to the main paper as suggested by reviewers, along with a more detailed explanation of the overhead of the control logic. In short, this paper is recommended for acceptance.
train
[ "SMPBrYyUkc", "ryGWxWoBTeE", "QLlotjKT-aG", "4Al1CD1A5gA", "j6om0CEwpC8", "tKlDwIcU7Eh", "EYSMNT0xOvw", "fYmFzwHrIgK", "sUitlsjaBd3", "cozGlIlzAHo", "z-xE8rJgF6" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a sparsity-aware quantization (SPARQ) method, in which n-bit quantization takes place by picking the most significant n bits from the 8-bit value representation. Also, SPARQ is implemented on a systolic array(SA) or a tensor core(TC) DP unit. Finally, compared with the previous PTQ work, this p...
[ 6, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "nips_2021_qe9z54E_cqE", "EYSMNT0xOvw", "4Al1CD1A5gA", "j6om0CEwpC8", "z-xE8rJgF6", "SMPBrYyUkc", "cozGlIlzAHo", "sUitlsjaBd3", "nips_2021_qe9z54E_cqE", "nips_2021_qe9z54E_cqE", "nips_2021_qe9z54E_cqE" ]
nips_2021_2STmSnZAEt2
The Implicit Bias of Minima Stability: A View from Function Space
Rotem Mulayoff, Tomer Michaeli, Daniel Soudry
accept
This work analyzes the prediction surface of weights reached by descent methods, arguing it is smooth on average (integral of second derivative is small). Reviewers are supportive, but had many detailed comments, to which the authors in turn gave detailed responses. I believe incorporating these discussions and comments in revisions will greatly strengthen the paper, and urge the authors to do so.
train
[ "lBtn56Z5Kdk", "tRy7bGot6u2", "DfHxf-PM4X1", "HNd0Qq6r7JO", "ZXXj9AGnS_L", "wdkip5g5SN2", "ZyKRJq1zZAA", "3DJklHj-cLe", "jesohDCWoVF", "cjsvEVEkGs" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper attempts to answer the implicit bias of SGD via a stability analysis on a single hidden layer univariate ReLU network. The main result is that a stable solution that can be found by a larger learning rate must be smoother. In detail, they show the weighted L1 norm of the second derivative is upper bou...
[ 7, -1, -1, 7, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, 1, 5 ]
[ "nips_2021_2STmSnZAEt2", "3DJklHj-cLe", "wdkip5g5SN2", "nips_2021_2STmSnZAEt2", "cjsvEVEkGs", "HNd0Qq6r7JO", "jesohDCWoVF", "lBtn56Z5Kdk", "nips_2021_2STmSnZAEt2", "nips_2021_2STmSnZAEt2" ]
nips_2021_YA0wIYi-yM3
Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
Gen Li, Laixi Shi, Yuxin Chen, Yuantao Gu, Yuejie Chi
accept
Dear authors, Following the rebuttal, the reviewers reached a consensus to positively evaluate the contribution.
train
[ "5u5XXFETLiL", "WKiU3cT2rfY", "JRGT7AlYgC", "sWX9sK2-AEM", "HG5yh7p8tht", "dh3BYkHRw65", "w_m3vcI8D-b", "zvKOnfuLVQ", "XqeCB9gX4-l" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a modified model-free reinforcement learning algorithm for time-inhomogeneous tabular MDP, in which it presents the novel technique of *Double Advantage*. It achieves nearly optimal regret and more importantly, the required sample size of being nearly optimal is significantly smaller than previ...
[ 7, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 2, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "nips_2021_YA0wIYi-yM3", "HG5yh7p8tht", "sWX9sK2-AEM", "dh3BYkHRw65", "XqeCB9gX4-l", "5u5XXFETLiL", "zvKOnfuLVQ", "nips_2021_YA0wIYi-yM3", "nips_2021_YA0wIYi-yM3" ]
nips_2021_01884FCwbNf
Robust Auction Design in the Auto-bidding World
In classic auction theory, reserve prices are known to be effective for improving revenue for the auctioneer against quasi-linear utility maximizing bidders. The introduction of reserve prices, however, usually does not help improve the total welfare of the auctioneer and the bidders. In this paper, we focus on value maximizing bidders with return on spend constraints, a paradigm that has drawn considerable attention recently as more advertisers adopt auto-bidding algorithms in advertising platforms, and show that the introduction of reserve prices has a novel impact on the market. Namely, by choosing reserve prices appropriately, the auctioneer can improve not only the total revenue but also the total welfare. Our results also demonstrate that reserve prices are robust to bidder types, i.e., reserve prices work well for different bidder types, such as value maximizers and utility maximizers, without using bidder type information. We generalize these results for a variety of auction mechanisms such as VCG, GSP, and first-price auctions. Moreover, we show how to combine these results with additive boosts to further improve the welfare of the auction outcomes. Finally, we complement our theoretical observations with an empirical study confirming the effectiveness of these ideas using data from online advertising auctions.
accept
All the reviewers agreed that this paper studies a worthwhile topic and were generally positively disposed towards the paper. At the same time, several reviewers felt that the technical contributions of the paper are somewhat marginal. On a relatively minor note, I will also echo what one reviewer said about related work: there is significant work by several industry labs on autobidding, and especially on budget constraints. The present paper seems somewhat heavily skewed towards citing work primarily from Google. While this is probably due to an accident of only hearing about those particular works, it does seem like something the authors should take care to fix, as it looks a bit odd. A specific early work that I want to point out is the paper by Borgs et al., "Dynamics of bid optimization in online advertisement auctions," Proceedings of the 16th International Conference on World Wide Web, 2007.
train
[ "VCYwdemaQEQ", "HS6D9P-Hvx7", "DGDOUS2sgmg", "kIyv7Mpd_tR", "w9xryqAZgd8", "DeTDObpoU62", "teeMBIweG3R", "E2sW9tqrH6G", "3vyTQ7C-4oI", "N-jgiudllo" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for pointing us to these related works. We will cite and discuss them in the revision.", " Thank you for your responses.\n\nRelated to the question of comparison to previous works; the auction design/autobidding/budgeted bidding problem is studied by a number of industrial research labs in addition to th...
[ -1, -1, 6, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "HS6D9P-Hvx7", "DeTDObpoU62", "nips_2021_01884FCwbNf", "DGDOUS2sgmg", "N-jgiudllo", "3vyTQ7C-4oI", "E2sW9tqrH6G", "nips_2021_01884FCwbNf", "nips_2021_01884FCwbNf", "nips_2021_01884FCwbNf" ]
nips_2021_zdC5eXljMPy
Weighted model estimation for offline model-based reinforcement learning
This paper discusses model estimation in offline model-based reinforcement learning (MBRL), which is important for subsequent policy improvement using an estimated model. From the viewpoint of covariate shift, a natural idea is model estimation weighted by the ratio of the state-action distributions of offline data and real future data. However, estimating such a natural weight is one of the main challenges of off-policy evaluation, which makes the weight difficult to use in practice. As an artificial alternative, this paper considers weighting with the state-action distribution ratio of offline data and simulated future data, which can be estimated relatively easily by standard density ratio estimation techniques for supervised learning. Based on the artificial weight, this paper defines a loss function for offline MBRL and presents an algorithm to optimize it. Weighting with the artificial weight is justified as evaluating an upper bound on the policy evaluation error. Numerical experiments demonstrate the effectiveness of weighting with the artificial weight.
accept
The reviewers find the problem addressed by the paper important and the proposed idea novel. The most important negative aspect of the paper, however, is its empirical studies. There are several issues there: 1) Although the empirical results for the simpler environment (Pendulum) are promising, the results for the more difficult set of problems (MuJoCo tasks) are underwhelming. In most cases, there is no significant difference between the proposed method (which corresponds to the choice of $\alpha > 0$, and specifically $\alpha = 0.2$ in the results) and $\alpha = 0$, which would be the case when the method reduces to the usual ERM with no sample weighting. The lack of significant difference is further exacerbated because the results are based on only 5 random seeds, the standard deviation is large in most cases, and we have a lot of overlap between confidence intervals. It is difficult to say with certainty if the new method actually helps or not. It also appears that there is no really good explanation of why there is not much difference. I should note that the authors provided new results during the discussion period with 10 runs for a subset of the problems. Although it is OK if the method is not performing very well, it would make a much stronger paper if we at least learn why that is the case. I suggest that the authors perform similar experiments, but with a larger number of runs and perhaps with proper statistical tests. In addition to this major issue, there are some other comments: 2) The proposed method follows a sequence of approximations. Ablation studies would be helpful in providing insight into the effect of each approximation. 3) Empirical comparison with methods that use "natural" weights would strengthen the paper. 4) There is another comment, which was not raised by reviewers, but is based on my reading of the paper. I do not give much weight to this because the authors have not had a chance to respond to it. In the derivations leading to Eq. (3), the first inequality uses the telescoping lemma with the distribution of the expectation chosen to be the discounted future state distribution induced by the model $P_\theta$, and the value function inside the expectation taken w.r.t. the true model $P^*$. This derivation leads to the introduction of the density ratio $w_\theta$ in order to change the outer expectation from $d_{P_\theta}$ to $d_{P^*}$ in Eq. (3). But I believe it was also possible to use the telescoping lemma differently. If the expectation were taken w.r.t. $d_{P^*}$, the value function inside the expectation would be $V_{P_\theta}$. In the next line, we would still upper bound that value function by $\max_{s,a} |r(s,a)| / (1 - \gamma)$, and we would get the same $B$. The result would be that we wouldn't need any density ratio $w_\theta(s, a)$. If what I wrote is correct, this means that we could have an upper bound that does not require the introduction of any weights. If so, much of the proposed method would not be needed. It would be helpful if the authors could clarify this. Perhaps one upper bound is tighter than the other? **Evaluation:** Overall, I would consider this a *borderline* paper. The suggested method is interesting and might be useful too, but I believe that there is room for improvement and there are straightforward ways in which this paper can be significantly improved.
train
[ "pWFKj_mQkBD", "w3S_H827T0I", "ZPsAGe3sDbo", "Ltjg9NEuMLI", "m3crsqgiLLH", "r5kry_mSnP", "UbX-evDuO0", "BVQgZ5wSG10", "W_reFjh4kj", "qe-w95SmNG", "wGN-GyapjzD", "YrzBs-iiwiz", "AkMF7PWljDF", "jif0DHOpZFr", "9kr1n1xAJih", "reC7BxsJhJ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for clarifying my confusions!", "The paper presents a model-based policy optimization algorithm for offline reinforcement learning. To account for the mismatch between the available dataset and the experience used during training, the proposed method uses estimation of the density ratio of the state d...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, -1, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "9kr1n1xAJih", "nips_2021_zdC5eXljMPy", "Ltjg9NEuMLI", "BVQgZ5wSG10", "nips_2021_zdC5eXljMPy", "UbX-evDuO0", "W_reFjh4kj", "nips_2021_zdC5eXljMPy", "jif0DHOpZFr", "nips_2021_zdC5eXljMPy", "YrzBs-iiwiz", "qe-w95SmNG", "w3S_H827T0I", "m3crsqgiLLH", "reC7BxsJhJ", "nips_2021_zdC5eXljMPy" ]
nips_2021_wJXWzCsGlZw
Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification. We begin our investigation with the observation that agnostic algorithms *cannot* be minimax-optimal in the realizable setting. Hence, we design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors and are general-purpose, accommodating a wide variety of function classes including kernel methods, Hölder smooth functions, and convex functions. The sample complexities of our algorithms can be quantified in terms of well-known quantities like the extended teaching dimension and haystack dimension. However, unlike algorithms based directly on those combinatorial quantities, our algorithms are computationally efficient. To achieve computational efficiency, our algorithms sample from the version space using Monte Carlo "hit-and-run" algorithms instead of maintaining the version space explicitly. Our approach has two key strengths. First, it is simple, consisting of two unifying, greedy algorithms. Second, our algorithms have the capability to seamlessly leverage prior knowledge that is often available and useful in practice. In addition to our new theoretical results, we demonstrate empirically that our algorithms are competitive with Gaussian process UCB methods.
accept
The paper presents a new sampling-based algorithm for solving a set of problems in the realizable interactive learning framework. All the reviewers liked the theoretical results in the paper. There was considerable and productive discussion between the authors and the reviewers, and in the end all the reviewers agreed that the results in the paper warrant publication, but the authors must do a better job in the camera-ready version of discussing the pros and cons of their bounds and how they relate to prior work.
val
[ "9qBwwVEgL2q", "B0_tO4qDUW0", "dCGAJuxgfah", "YpTSID102lD", "bF6RmlNItHz", "68blHenKKQt", "ibWYy8dggi9", "21zyqGngIQ-", "EwzDte0Gw2G", "oQKIZxA_j37", "xYeB9CGL9Yb", "ebuCh1CufZY", "RQHmY-JtyW2", "O3uU5alLLBK", "ouh9-TmBgUx", "g_rzykdgm4", "RKj0aR2dkMk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a general algorithmic framework for interactive learning, which applies to bandit and active classification. The paper positions its main contribution in a general framework for computationally efficient interactive learning, while making a strong assumption that the data are realizable. \n\nI...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "nips_2021_wJXWzCsGlZw", "dCGAJuxgfah", "YpTSID102lD", "bF6RmlNItHz", "68blHenKKQt", "ibWYy8dggi9", "21zyqGngIQ-", "EwzDte0Gw2G", "xYeB9CGL9Yb", "nips_2021_wJXWzCsGlZw", "9qBwwVEgL2q", "g_rzykdgm4", "RKj0aR2dkMk", "ouh9-TmBgUx", "nips_2021_wJXWzCsGlZw", "nips_2021_wJXWzCsGlZw", "nips...
nips_2021_tX4OCWu3P7R
Deconditional Downscaling with Gaussian Processes
Refining low-resolution (LR) spatial fields with high-resolution (HR) information, often known as statistical downscaling, is challenging as the diversity of spatial datasets often prevents direct matching of observations. Yet, when LR samples are modeled as aggregate conditional means of HR samples with respect to a mediating variable that is globally observed, the recovery of the underlying fine-grained field can be framed as taking an "inverse" of the conditional expectation, namely a deconditioning problem. In this work, we propose a Bayesian formulation of deconditioning which naturally recovers the initial reproducing kernel Hilbert space formulation from Hsu and Ramos (2019). We extend deconditioning to a downscaling setup and devise an efficient conditional mean embedding estimator for multiresolution data. By treating conditional expectations as inter-domain features of the underlying field, a posterior for the latent field can be established as a solution to the deconditioning problem. Furthermore, we show that this solution can be viewed as a two-staged vector-valued kernel ridge regressor and that it has a minimax optimal convergence rate under mild assumptions. Lastly, we demonstrate its proficiency in a synthetic and a real-world atmospheric field downscaling problem, showing substantial improvements over existing methods.
accept
After the rebuttal and discussion, the consensus among the high-confidence reviewers is that the paper should be accepted. The meta-reviewer agrees. On the theoretical side, the paper makes valuable contributions that clearly go beyond previous work. On the practical side, the application in climate science is valuable. For the community to fully benefit from the paper, serious efforts need to go into making the paper more accessible. First, the explanation of the problem setup and the background needs to be improved to make the paper accessible to non-experts. In addition to the advice given by the reviewers, which needs to be incorporated, the authors could make use of the appendix to further clarify the background material and previous work. Second, as promised by the authors in the rebuttal, the presentation of the two theoretical contributions to (1) the theory of deconditional mean embeddings and (2) statistical downscaling needs to be improved.
train
[ "2Z_NKVmwyBw", "xM1Ew_0GK_4", "l3ISazjNpiN", "HEL9OvP52ej", "a3OGIWyrb9n", "117buVs04k", "C9xCmpfBTbd", "fwiby_hsh9T", "8RPFwdLN5_O", "N5t75IMT_rq", "IzIfnuFu-rI", "1xD362GxzpZ", "t9Wacou_ioX", "hXKQa_clPj1", "8MyVimnqhOp", "qua2c_GqyYM", "N6qQoGjlSiu", "88NNWY_BFtc", "FDrpvek88g...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "...
[ " I thank the authors for the detailed answers to all reviews. As written in my initial review, I was confused when reviewing the paper how the different parts fit together and if a better structure exists in order to improve readability. While I appreciate the \nfeedback of the authors and in particular the new in...
[ -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "hXKQa_clPj1", "nips_2021_tX4OCWu3P7R", "117buVs04k", "8RPFwdLN5_O", "fwiby_hsh9T", "IzIfnuFu-rI", "nips_2021_tX4OCWu3P7R", "t9Wacou_ioX", "IzIfnuFu-rI", "qua2c_GqyYM", "88NNWY_BFtc", "88NNWY_BFtc", "C9xCmpfBTbd", "dDcgl4foe2w", "aJyU0b_gQBN", "FDrpvek88g", "nips_2021_tX4OCWu3P7R", ...
nips_2021_877bJocr-w
Image Generation using Continuous Filter Atoms
In this paper, we model the subspace of convolutional filters with a neural ordinary differential equation (ODE) to enable gradual changes in generated images. Decomposing convolutional filters over a set of filter atoms allows efficiently modeling and sampling from a subspace of high-dimensional filters. By further modeling filter atoms with a neural ODE, we show both empirically and theoretically that the continuity introduced in this way can be propagated to the generated images, and thus achieves gradually evolved image generation. We support the proposed framework of image generation with continuous filter atoms using various experiments, including image-to-image translation and image generation conditioned on continuous labels. Without auxiliary network components and heavy supervision, the proposed continuous filter atoms allow us to easily manipulate the gradual change of generated images by controlling integration intervals of the neural ordinary differential equation. This research sheds light on using the subspace of network parameters to navigate the diverse appearance of generated images.
accept
This paper proposes using continuous filter atoms (based on Neural ODE) for image generation. These continuous filter atoms can enable smoother image synthesis compared to convolutional filters. The paper has received positive reviews. Many reviewers find the idea interesting and the results encouraging. The core idea is novel. There are concerns regarding the writing and baselines. The rebuttal addressed most of the concerns. The AC agreed with the reviewers’ consensus and recommended accepting the paper. The authors are encouraged to improve their writing according to the reviewers' comments.
train
[ "s6t9kPKcqF2", "jQkCs-1r3pu", "IRNqZJBvzNk", "okFL2Q8AZQe", "68QFIv6w2Xz", "AZV9JB3CQPD", "ZuPNFCHr04n", "_JL-WgTZR9", "v1lrO7gtBBJ", "CE0YIIncVI", "Q75VZfRA3RH", "HUYjfUkgFog" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors build on previous conditional image generation models, which are based on conditional Generative Adversarial Networks (cGAN). The authors use the observation made by previous work [1, 2], that the kernel $F$ of a convolution layer can be replaced with a low-rank, two-component tensor factorization $F =...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "nips_2021_877bJocr-w", "_JL-WgTZR9", "AZV9JB3CQPD", "nips_2021_877bJocr-w", "nips_2021_877bJocr-w", "okFL2Q8AZQe", "Q75VZfRA3RH", "s6t9kPKcqF2", "HUYjfUkgFog", "nips_2021_877bJocr-w", "nips_2021_877bJocr-w", "nips_2021_877bJocr-w" ]
nips_2021_an8FSGbuCw
Latent Equilibrium: Arbitrarily fast computation with arbitrarily slow neurons
The response time of physical computational elements is finite, and neurons are no exception. In hierarchical models of cortical networks each layer thus introduces a response lag. This inherent property of physical dynamical systems results in delayed processing of stimuli and causes a timing mismatch between network output and instructive signals, thus afflicting not only inference, but also learning. We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components which avoids these issues by harnessing the ability of biological neurons to phase-advance their output with respect to their membrane potential. This principle enables quasi-instantaneous inference independent of network depth and avoids the need for phased plasticity or computationally expensive network relaxation phases. We jointly derive disentangled neuron and synapse dynamics from a prospective energy function that depends on a network's generalized position and momentum. The resulting model can be interpreted as a biologically plausible approximation of error backpropagation in deep cortical networks with continuous-time, leaky neuronal dynamics and continuously active, local plasticity. We demonstrate successful learning of standard benchmark datasets, achieving competitive performance using both fully-connected and convolutional architectures, and show how our principle can be applied to detailed models of cortical microcircuitry. Furthermore, we study the robustness of our model to spatio-temporal substrate imperfections to demonstrate its feasibility for physical realization, be it in vivo or in silico.
accept
Reviewers were in agreement that the proposed framework provides a novel and elegant model for learning in neurons with physical delays, with clear writing and presentation, and thorough experimental results.
val
[ "r1OKraeFQz", "D0r3Nq7Zt9H", "dj7q5cTfiZH", "kJv5EVFvld0", "Uk1gg-lQZC", "KhzlfRsh2GA", "UIbBbIH-hZ7", "okY2vzY4kLI", "mtqkl9vQid1", "oOY7Ef1igt", "3Bdn6-kL8H", "vWrtqzsmlt", "EK_yul6MT7P" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the encouraging reply!\nWe will definitely address the issues raised here in the revised version of the manuscript.", "The authors propose a framework from which a neuron model with (slow) neural dynamics close to physical substrates can be derived along with a plasticity rule for learn...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, 9, -1, 8, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, 3, 3 ]
[ "dj7q5cTfiZH", "nips_2021_an8FSGbuCw", "kJv5EVFvld0", "D0r3Nq7Zt9H", "UIbBbIH-hZ7", "nips_2021_an8FSGbuCw", "EK_yul6MT7P", "vWrtqzsmlt", "oOY7Ef1igt", "nips_2021_an8FSGbuCw", "nips_2021_an8FSGbuCw", "nips_2021_an8FSGbuCw", "nips_2021_an8FSGbuCw" ]