paper_id: string, length 19 to 21
paper_title: string, length 8 to 170
paper_abstract: string, length 8 to 5.01k
paper_acceptance: string, 18 classes
meta_review: string, length 29 to 10k
label: string, 3 classes
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
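To make the schema concrete, the sketch below iterates over records and aggregates the review scores. It is a minimal illustration, not part of the dataset card: the `records.jsonl` path is a placeholder, and the only assumption taken from the records shown below is that `review_ratings` and `review_confidences` use `-1` as a sentinel for thread entries that carry no score (author responses and follow-up comments).

```python
import json
from statistics import mean

# Placeholder path: assumes one JSON record per line, with the fields
# listed in the schema above.
with open("records.jsonl") as f:
    records = [json.loads(line) for line in f]

for rec in records:
    # -1 marks thread entries without a score (e.g., author replies),
    # so they are dropped before aggregating.
    ratings = [r for r in rec["review_ratings"] if r != -1]
    confs = [c for c in rec["review_confidences"] if c != -1]
    if ratings:
        print(f'{rec["paper_id"]}: {rec["paper_acceptance"]}, '
              f'mean rating {mean(ratings):.2f} over {len(ratings)} reviews, '
              f'mean confidence {mean(confs):.2f}')
```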
nips_2021_GNO4e26auiF
Ranking Policy Decisions
Hadrien Pouget, Hana Chockler, Youcheng Sun, Daniel Kroening
accept
The paper is based on the premise that learned RL policies are unnecessarily complex and that only a small subset of the overall set of decisions yields considerable improvements over some simple baseline. The paper therefore considers ranking RL policy decisions, and in particular identifying the states where the RL policy's decisions most influence whether the agent achieves its objective. The paper presents a novel method based on spectrum-based fault localization (SBFL), a technique from the software-testing domain. The paper uses this to simplify the policy and shows that the approach can help agents considerably reduce the length of their trajectories while still attaining high reward in various settings. The paper tackles an important problem with a novel approach. The results show a vast improvement across a variety of settings; however, reviewers raise some concerns about generality, especially with respect to complex environments / continuous settings, and about the interpretability part and RQ3 and RQ4. While the paper is far from perfect, in my opinion the novel approach to an important problem and the empirical results for the settings studied push it over the bar. The authors, however, should carefully revise the paper to accommodate all of the points that arose in the reviews, including but not limited to mentioning the use of the NoFrameskip-v4 versions of the environments, in which actions aren't repeated, providing a clarification (or, even better, a formal definition) of the ranking, and addressing the saliency-map interpretation.
train
[ "LbvrvRsar2b", "BtZSCdRq8k_", "uAeVPW_VrmZ", "19NVPWqP6vJ", "RhamJLza009", "XUm8dOg2BdA", "p4q7pm1up57", "6xHM_Hp5nsM", "5Heo0xO2Hfe", "G6kQI9B3Ib", "eNV3he1l3UM", "0KSa_ewrhU", "ziXZsFIAO1", "W5EKzns3cp", "hqZzQg9ciib", "LmScm9vmtUd", "JZl0TSAwj2j" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I write to thank the authors for responding to my review. I was happy with their responses, and continue to recommend acceptance of this paper. ", " Thank you for the valuable discussion, which we will be sure to include in the final version.", "The goal of this work is to leverage fault localization techniq...
[ -1, -1, 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, 3, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "ziXZsFIAO1", "19NVPWqP6vJ", "nips_2021_GNO4e26auiF", "5Heo0xO2Hfe", "p4q7pm1up57", "nips_2021_GNO4e26auiF", "W5EKzns3cp", "eNV3he1l3UM", "G6kQI9B3Ib", "0KSa_ewrhU", "hqZzQg9ciib", "uAeVPW_VrmZ", "JZl0TSAwj2j", "XUm8dOg2BdA", "LmScm9vmtUd", "nips_2021_GNO4e26auiF", "nips_2021_GNO4e26...
nips_2021_AVS8CamBecS
Searching the Search Space of Vision Transformer
Vision Transformer has shown great visual representation power in substantial vision tasks such as recognition and detection, and has thus been attracting fast-growing efforts on manually designing more effective architectures. In this paper, we propose to use neural architecture search to automate this process, by searching not only the architecture but also the search space. The central idea is to gradually evolve different search dimensions guided by their E-T Error computed using a weight-sharing supernet. Moreover, we provide design guidelines for general vision transformers with extensive analysis according to the space searching process, which could promote the understanding of vision transformers. Remarkably, the searched models, named S3 (short for Searching the Search Space), from the searched space achieve superior performance to recently proposed models, such as Swin, DeiT and ViT, when evaluated on ImageNet. The effectiveness of S3 is also illustrated on object detection, semantic segmentation and visual question answering, demonstrating its generality to downstream vision and vision-language tasks. Code and models will be available at https://github.com/microsoft/Cream.
accept
Four experts reviewed this paper and gave ratings of 6, 6, 6, and 5, respectively (Reviewer 3mCn decided to change 7 to 6 in a private discussion). The reviewers expressed concerns about novelty and writing. In particular, some reviewers commented that the E-T error heuristic and the linear approximation were not well motivated, and that the novelty was small. Reviewer W7fM felt strongly about the “poor state of the writing.” However, the reviewers generally appreciated the study of NAS for Transformers, and a reviewer considered the design of the Transformer-tailored search space original. The AC agreed that such a study could benefit future research on Transformers and felt that the concerns could be addressed in a reasonable revision of the paper. Hence, the decision is to recommend the paper for acceptance. The authors are encouraged to make the necessary changes in the camera-ready to address the reviewers' questions to the best of their ability. The authors may especially consider finding a native English speaker to help polish the writing. We congratulate the authors on the acceptance of their paper!
train
[ "-B8MSiAYWKm", "NqyxHrY9e2o", "vjdTToPjjfK", "mzdroWDIhKa", "q0753jT9IO7", "ytiroAq_kzP", "95G9NHEIh9k", "K-yREySyZXB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a neural architecture search method called S3 that first searches the search space before identifying good architectures from the discovered search space. The authors focus on applying S3 to vision transformers and design a search space modeled after ViT. They evaluate S3 discovered archite...
[ 5, -1, -1, -1, -1, 6, 6, 7 ]
[ 5, -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2021_AVS8CamBecS", "-B8MSiAYWKm", "ytiroAq_kzP", "K-yREySyZXB", "95G9NHEIh9k", "nips_2021_AVS8CamBecS", "nips_2021_AVS8CamBecS", "nips_2021_AVS8CamBecS" ]
nips_2021_RSc-kfiLMNn
Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
accept
All four knowledgeable reviewers recommend accepting this submission. I agree. This submission makes a valuable contribution by demonstrating the invariance of image classifiers to diffeomorphisms.
test
[ "BPeHfBsF7VO", "LRCGSUQIVIK", "JXve_5TBxXG", "zUgzmf-NHHr", "Rv8P3sWhBZ", "wqyZj85fG-2", "1363nS35omN", "9VlwrmfB06u", "xWRytjHxcCP", "QL3w2n6eRQ", "i1KiUVc2JSC", "UAhwzYKCex4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors consider the relationship between the performance of modern deep\nneural network architectures trained on standard benchmark natural image\ndatasets and their relationship to stability with respect to smooth\ndeformations of the input image. These issues have been studied previously in\nthe context of ...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 5 ]
[ "nips_2021_RSc-kfiLMNn", "QL3w2n6eRQ", "1363nS35omN", "Rv8P3sWhBZ", "xWRytjHxcCP", "nips_2021_RSc-kfiLMNn", "UAhwzYKCex4", "i1KiUVc2JSC", "wqyZj85fG-2", "BPeHfBsF7VO", "nips_2021_RSc-kfiLMNn", "nips_2021_RSc-kfiLMNn" ]
nips_2021_HLalhDvDwrQ
Raw Nav-merge Seismic Data to Subsurface Properties with MLP based Multi-Modal Information Unscrambler
Traditional seismic inversion (SI) maps the hundreds of terabytes of raw-field data to subsurface properties in gigabytes. This inversion process is expensive, requiring over a year of human and computational effort. Recently, data-driven approaches equipped with deep learning (DL) have been envisioned to improve SI efficiency. However, these improvements are restricted to data with highly reduced scale and complexity. To extend these approaches to real-scale seismic data, researchers need to process raw nav-merge seismic data into an image and perform convolution. We argue that this convolution-based way of SI is not only computationally expensive but also conceptually problematic. Seismic data is not naturally an image and need not be processed as one. In this work, we go beyond convolution and propose a novel SI method. We address the scalability of SI by proposing a new auxiliary learning paradigm for SI (Aux-SI). This paradigm breaks the SI into local inversion tasks, which predict each small chunk of subsurface properties using surrounding seismic data. Aux-SI combines these local predictions to obtain the entire subsurface model. However, even this local inversion is still challenging due to: (1) high-dimensional, spatially irregular multi-modal seismic data, and (2) the absence of a concrete spatial mapping (or alignment) between subsurface properties and raw data. To handle these challenges, we propose an all-MLP architecture, Multi-Modal Information Unscrambler (MMI-Unscrambler), that unscrambles seismic information by ingesting all available multi-modal data. Experiments show that MMI-Unscrambler outperforms both SOTA U-Net and Transformer models on simulation data. We also scale MMI-Unscrambler to raw-field nav-merge data from the Gulf of Mexico to obtain a geologically sound velocity model with an SSIM score of 0.8. To the best of our knowledge, this is the first successful demonstration of the DL approach on SI for real, large-scale, and complicated raw field data.
accept
This work presents a SOTA multi-layer perceptron solution for seismic inversion. The reviewers all recognized the improved performance relative to past approaches and the design and work involved in creating a practical block-based multi-modal approach, in particular the care taken to acknowledge and address the issues stemming from cross-block correlations. The authors have also proposed very doable revisions, which would further strengthen the paper. Therefore I am happy to recommend that this work be accepted at NeurIPS.
train
[ "NREP1kY8l_", "xl1B-LAzcXv", "BFGrUL-SYXr", "ewt6Ape2uXB", "0kXK7QAoQ6B", "acAIrV5raIz", "RxD42M6F7A", "icxp4sq3lKz", "r4MHPEmZ2g9", "VeQVV9wjten", "5Oix9XYUh6c", "hYRfVhiSp97", "Yg-2t8O-xoR", "hDglNn8egPI", "8jEQyYfQPHa", "VN8ZIiHpjDC", "2pPiBuuMxpE", "ZZ76r7Sj5PM", "8_HCg8Laay7...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " Thank you for the review and the support. Reducing the computational load of DL especially when applied to large scale machine learning problems such as the one discussed in this paper is an active area of research. Leveraging the research in this area, we hope to achieve energy efficient solutions to the proble...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "0kXK7QAoQ6B", "acAIrV5raIz", "ewt6Ape2uXB", "hDglNn8egPI", "nips_2021_HLalhDvDwrQ", "nips_2021_HLalhDvDwrQ", "icxp4sq3lKz", "8jEQyYfQPHa", "hYRfVhiSp97", "5Oix9XYUh6c", "Yg-2t8O-xoR", "Yg-2t8O-xoR", "nhhiXVHlpZA", "2pPiBuuMxpE", "8_HCg8Laay7", "ZZ76r7Sj5PM", "nips_2021_HLalhDvDwrQ",...
nips_2021_HCOdL3dWab
Inverse Problems Leveraging Pre-trained Contrastive Representations
We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation of an image R(x), if we are only given a corrupted version A(x), for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a linear probe on our robust representations, we achieve a higher accuracy than end-to-end supervised baselines when classifying images with various types of distortions, including blurring, additive noise, and random pixel masking. We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines even with a fraction of the labeled data in a wide range of forward operators.
accept
While the reviewers criticize the quality of the writing, the unusual terminology, and the artificial nature of the proposed problem, they seem to be in agreement about the novelty and significance of the results.
train
[ "L-_7OEqy-XV", "7yrEZdhd-F5", "neyFBqmT93", "lPv-E161-QN", "OJmlwik3BWg", "JLZjuD6VgxX", "ANB2pd_t3zV", "H11N6VDIm9", "DaE5U1MIgT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This works suggests a novel method on how to harness the representational power of models such as CLIP. In this task the input is distorted with a known function, such as missing pixels or gaussian noise, blurring. This works proposes training a student contrastively by matching the clip representation of the clea...
[ 7, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_HCOdL3dWab", "nips_2021_HCOdL3dWab", "nips_2021_HCOdL3dWab", "H11N6VDIm9", "7yrEZdhd-F5", "DaE5U1MIgT", "L-_7OEqy-XV", "nips_2021_HCOdL3dWab", "nips_2021_HCOdL3dWab" ]
nips_2021_3-GCM92yaB3
The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation
Comparing metric measure spaces (i.e. a metric space endowed with a probability distribution) is at the heart of many machine learning problems. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. The GW distance is however limited to the comparison of metric measure spaces endowed with a \emph{probability} distribution. To alleviate this issue, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more tractable upper-bounding relaxation. They both allow the comparison of metric spaces equipped with arbitrary positive measures up to isometries. The first formulation is a positive and definite divergence based on a relaxation of the mass conservation constraint using a novel type of quadratically-homogeneous divergence. This divergence works hand in hand with the entropic regularization approach, which is popular for solving large-scale optimal transport problems. We show that the underlying non-convex optimization problem can be efficiently tackled using a highly parallelizable and GPU-friendly iterative scheme. The second formulation is a distance between mm-spaces up to isometries based on a conic lifting. Lastly, we provide numerical experiments on synthetic and domain adaptation data with a Positive-Unlabeled learning task to highlight the salient features of the unbalanced divergence and its potential applications in ML.
accept
A standard approach for defining distances between metric measure spaces is to use the Gromov-Wasserstein distance. However, this formalism is limited to settings where the metric measure space has an underlying probability distribution. The main contributions of this paper are in formulating unbalanced analogues of the Gromov-Wasserstein distance that work with general positive measures. The specific formulations adapt ideas from unbalanced and conic optimal transport. Furthermore, they use entropic regularization to design a Sinkhorn-style iterative algorithm and give experimental results on synthetic data and for problems in domain adaptation. The paper is well-written and has a nice mix of contributions. The only weakness is that they specialize to KL divergences, which seems to limit the scope and applicability. Furthermore, in their setup, working with more general divergences appears to be much more computationally challenging.
train
[ "TtKJILz0IC0", "yswdhrqv8qV", "l3xMihZUt_v", "ZfImun0ILL", "SmZUcMMoEB4", "_x-TmI0R2nQ", "Db6IZQpZBYX", "yPg9MpTevZ3", "2cotUu08yoi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper extends the Gromov-Wasserstein (GW) distance framework to two novel relaxations, the unbalanced GW (UGW) and the conic GW (CGW), which both address the question of comparing arbitrary positive measures on different metric spaces. The paper also proves that CGW is upper bounded by UGW, and introduces an a...
[ 7, -1, 7, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_3-GCM92yaB3", "Db6IZQpZBYX", "nips_2021_3-GCM92yaB3", "l3xMihZUt_v", "nips_2021_3-GCM92yaB3", "2cotUu08yoi", "TtKJILz0IC0", "l3xMihZUt_v", "nips_2021_3-GCM92yaB3" ]
nips_2021_AAWuCvzaVt
Diffusion Models Beat GANs on Image Synthesis
Prafulla Dhariwal, Alexander Nichol
accept
All reviewers agree that this is a well-written paper with strong experiments that provide non-trivial insights and results in state-of-the-art diffusion models for image synthesis. The evaluation goes beyond prior work in terms of evaluating sFID and Precision-Recall, and performing several ablations on neural architecture and the impact of classifier guidance on sample quality. While there were some concerns around the novelty of the proposed methods, the careful experiments and strong empirical results in this paper help to advance the capabilities of generative image models in an impressive fashion.
train
[ "GGpDiMSAtmE", "Rc4UUydUmr1", "frCBsoX0BhI", "r4y-1BuSyAv", "605n3muOhX1", "I5NWMJu9oT7", "PxuXdyX29NE", "vVnEJNzEv0i", "TC3-2ORTZyE", "mX6GOhlWLA2", "NsvkBeYEEEK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewers for answering my questions. As I said in my main review, the paper can be of significant importance to the generative modeling community, and hence I keep my rating unchanged, and vote for acceptance. ", "This work shows that with some architectural changes, classifier guided sampling and ...
[ -1, 7, -1, -1, -1, -1, -1, -1, 9, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "PxuXdyX29NE", "nips_2021_AAWuCvzaVt", "605n3muOhX1", "I5NWMJu9oT7", "Rc4UUydUmr1", "NsvkBeYEEEK", "mX6GOhlWLA2", "TC3-2ORTZyE", "nips_2021_AAWuCvzaVt", "nips_2021_AAWuCvzaVt", "nips_2021_AAWuCvzaVt" ]
nips_2021_-mGv2KxQ43D
Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Making by Reinforcement Learning
In the predict-then-optimize framework, the objective is to train a predictive model, mapping from environment features to parameters of an optimization problem, which maximizes decision quality when the optimization is subsequently solved. Recent work on decision-focused learning shows that embedding the optimization problem in the training pipeline can improve decision quality and help generalize better to unseen tasks compared to relying on an intermediate loss function for evaluating prediction quality. We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) that are solved via reinforcement learning. In particular, we are given environment features and a set of trajectories from training MDPs, which we use to train a predictive model that generalizes to unseen test MDPs without trajectories. Two significant computational challenges arise in applying decision-focused learning to MDPs: (i) large state and action spaces make it infeasible for existing techniques to differentiate through MDP problems, and (ii) the high-dimensional policy space, as parameterized by a neural network, makes differentiating through a policy expensive. We resolve the first challenge by sampling provably unbiased derivatives to approximate and differentiate through optimality conditions, and the second challenge by using a low-rank approximation to the high-dimensional sample-based derivatives. We implement both Bellman-based and policy gradient-based decision-focused learning on three different MDP problems with missing parameters, and show that decision-focused learning performs better in generalization to unseen tasks.
accept
The authors propose a "predict then optimize" approach for sequential decision problems. In other words, they consider a family of problems where problem features indicative of the MDP definition (reward and/or transition function) are given. The proposed solution strategy is then a pipeline, where first the missing MDP parameters are predicted and subsequently the MDP is solved using policy-based or value-based methods. Importantly, unlike e.g. many model-based RL approaches, this is not done in a 2-stage approach (where the dynamics model or reward function are trained on a separate supervised prediction loss) but end-to-end (the prediction of MDP parameters is learned to optimize the final RL loss). The main challenges in applying this framework are the large state-action and policy spaces concerned. So, the authors focus their attention on approximating key steps of the algorithm to obtain practicable algorithms in this setting. The reviewers are unanimously positive about this paper. - All reviewers consider the paper novel (though one reviewer considers it not entirely original) - The reviewers consider the paper mathematically sound and the design of algorithmic components to be well justified - The reviewers considered the paper to be well-written - While the experiments showcase clear benefits, they are mostly limited to toy tasks and the results could benefit from additional comparison - Three reviewers did mention that the problem statement is not well motivated: in what 'real' settings can we apply this method? Considering the unanimous positivity and agreement on novelty, soundness, and clarity, I recommend this paper to be accepted.
train
[ "UgGqU2Hfw1J", "yUVDjhXYNur", "10StjmYgcUE", "lPx0AmQCVJn", "CNvWmwVFGtL", "pEBXCPp7_a", "QVNcNELstc7", "QiurvZ4gWe3", "4jjkwSdpZj3", "8XYidt3Ocy1", "FpkYRy1nIV", "E4s8ku0ao7a" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We are sincerely grateful to the reviewer for providing valuable feedback.\n\nIn the literature of optimization problems using the predict-then-optimize framework, to our knowledge, unrolling (Stoyanov et al. 2011; Domke 2012; Amos et al. 2017) is considered the most common alternative of achieving end-to-end dif...
[ -1, -1, -1, -1, 7, 7, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, 4, 3 ]
[ "yUVDjhXYNur", "8XYidt3Ocy1", "QiurvZ4gWe3", "4jjkwSdpZj3", "nips_2021_-mGv2KxQ43D", "nips_2021_-mGv2KxQ43D", "CNvWmwVFGtL", "pEBXCPp7_a", "E4s8ku0ao7a", "FpkYRy1nIV", "nips_2021_-mGv2KxQ43D", "nips_2021_-mGv2KxQ43D" ]
nips_2021_rC3zu-OqnII
A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
One of the key drivers of complexity in the classical (stochastic) multi-armed bandit (MAB) problem is the difference between mean rewards in the top two arms, also known as the instance gap. The celebrated Upper Confidence Bound (UCB) policy is among the simplest optimism-based MAB algorithms that naturally adapts to this gap: for a horizon of play n, it achieves optimal O(log n) regret in instances with "large" gaps, and a near-optimal O(\sqrt{n log n}) minimax regret when the gap can be arbitrarily "small." This paper provides new results on the arm-sampling behavior of UCB, leading to several important insights. Among these, it is shown that arm-sampling rates under UCB are asymptotically deterministic, regardless of the problem complexity. This discovery facilitates new sharp asymptotics and a novel alternative proof for the O(\sqrt{n log n}) minimax regret of UCB. Furthermore, the paper also provides the first complete process-level characterization of the MAB problem in the conventional diffusion scaling. Among other things, the "small" gap worst-case lens adopted in this paper also reveals profound distinctions between the behavior of UCB and Thompson Sampling, such as an "incomplete learning" phenomenon characteristic of the latter.
accept
This is a strong paper that I would like to see accepted as a spotlight. There is a minor criticism concerning the restriction to two arms, but the reviewers feel that this is not a major concern. The reviewers also make some suggestions for improving the paper further, and it would be good if the authors could incorporate these.
train
[ "pm_RS3rq2CH", "Jok7EEtDcpV", "NgxfVezX3oG", "zcXwIRgIHEW", "H8q2sZetPN", "mXxv-VTSiHz", "K8UhrYHsqZ5", "-qDRi_Jh5oW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the arm sampling behavior of UCB and Thompson sampling algorithms. For the two-arm case, the asymptotic behavior of arm sampling is characterized for different regimes (small, large, and medium) of suboptimality gap. Using this characterization, the minimax regret of UCB is shown to O(n\\logn), w...
[ 7, 7, -1, -1, -1, -1, 7, 9 ]
[ 4, 3, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_rC3zu-OqnII", "nips_2021_rC3zu-OqnII", "Jok7EEtDcpV", "-qDRi_Jh5oW", "K8UhrYHsqZ5", "pm_RS3rq2CH", "nips_2021_rC3zu-OqnII", "nips_2021_rC3zu-OqnII" ]
nips_2021__wPmKqEMxss
SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization
Multilayer perceptrons (MLPs) are known to struggle to learn high-frequency functions, and in particular instances with wide frequency bands. We present a progressive mapping scheme for input signals of MLP networks, enabling them to better fit a wide range of frequencies without sacrificing training stability or requiring any domain-specific preprocessing. We introduce Spatially Adaptive Progressive Encoding (SAPE) layers, which gradually unmask signal components with increasing frequencies as a function of time and space. The progressive exposure of frequencies is monitored by a feedback loop throughout the neural optimization process, allowing changes to propagate at different rates among local spatial portions of the signal space. We demonstrate the advantage of our method on a variety of domains and applications: regression of low-dimensional signals and images, representation learning of occupancy networks, and a geometric task of mesh transfer between 3D shapes.
accept
This paper introduces a novel spatially adaptive progressive encoding layer and demonstrates that it performs well on a variety of tasks. The reviewers found it to be interesting and relevant for the NeurIPS community, and I recommend acceptance.
val
[ "MTWzRSfQRG", "w-tC-wzEdDa", "SxYa_NqGYby", "7AFCqTXmuEc", "CUtgJEgnT1", "VNKAIkk8S_", "XGaYRP6seWg", "6CtnXdp4zwD", "0KkV4qVfNe", "C5jMQifaiiN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Implicit neural representations are a promising approach to learning compressed representations of data --- for example they can learn to represent an image where the network inputs are the (x,y) coordinates and the outputs are the RGB pixel values. This idea has all sorts of applications in data science and physi...
[ 9, 8, -1, -1, 7, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, 2, -1, -1, -1, -1, 4 ]
[ "nips_2021__wPmKqEMxss", "nips_2021__wPmKqEMxss", "0KkV4qVfNe", "6CtnXdp4zwD", "nips_2021__wPmKqEMxss", "w-tC-wzEdDa", "MTWzRSfQRG", "CUtgJEgnT1", "C5jMQifaiiN", "nips_2021__wPmKqEMxss" ]
nips_2021_qpdc7sCpbi
A Biased Graph Neural Network Sampler with Near-Optimal Regret
Graph neural networks (GNNs) have recently emerged as a vehicle for applying deep network architectures to graph and relational data. However, given the increasing size of industrial datasets, in many practical situations, the message passing computations required for sharing information across GNN layers are no longer scalable. Although various sampling methods have been introduced to approximate full-graph training within a tractable budget, there remain unresolved complications such as high variances and limited theoretical guarantees. To address these issues, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem but with a newly-designed reward function that introduces some degree of bias designed to reduce variance and avoid unstable, possibly unbounded payouts. And unlike prior bandit-GNN use cases, the resulting policy leads to near-optimal regret while accounting for the GNN training dynamics introduced by SGD. From a practical standpoint, this translates into lower variance estimates and competitive or superior test accuracy across several benchmarks.
accept
This paper proposes a novel bandit algorithm for sampling neighbourhoods for GNNs. The reviewers agreed that the work was of good quality, significance and originality. Some reviewers noted, and the authors are encouraged to take this on board, that the current write-up is quite dense and that the exposition could be improved.
test
[ "RNuTE6xjzLo", "R5_UAHDkwz0", "R-7sMoPx4d", "XKY3xg9mns8", "mMqmvGsG4mO", "ITOE53jcu3", "H-QGEb1sT--", "coKmQpFhWpZ", "hez7MKKFxob", "N0NlOu7aV6S", "Q55FIV80QUP" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the detailed rebuttal. I think all of my questions were addressed in a satisfactory way, and I am happy to maintain my score of 7.", " Thanks for the additional feedback. In this regard, the reviewer mentioned the possible inclusion of additional tests with synthetic graphs where our ...
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 3 ]
[ "XKY3xg9mns8", "R-7sMoPx4d", "mMqmvGsG4mO", "coKmQpFhWpZ", "Q55FIV80QUP", "hez7MKKFxob", "N0NlOu7aV6S", "nips_2021_qpdc7sCpbi", "nips_2021_qpdc7sCpbi", "nips_2021_qpdc7sCpbi", "nips_2021_qpdc7sCpbi" ]
nips_2021_iVL-2vJsy4e
Equilibrium Refinement for the Age of Machines: The One-Sided Quasi-Perfect Equilibrium
In two-player zero-sum extensive-form games, Nash equilibrium prescribes optimal strategies against perfectly rational opponents. However, it does not guarantee rational play in parts of the game tree that can only be reached by the players making mistakes. This can be problematic when operationalizing equilibria in the real world among imperfect players. Trembling-hand refinements are a sound remedy to this issue, and are subsets of Nash equilibria that are designed to handle the possibility that any of the players may make mistakes. In this paper, we initiate the study of equilibrium refinements for settings where one of the players is perfectly rational (the ``machine'') and the other may make mistakes. As we show, this endeavor has many pitfalls: many intuitively appealing approaches to refinement fail in various ways. On the positive side, we introduce a modification of the classical quasi-perfect equilibrium (QPE) refinement, which we call the one-sided quasi-perfect equilibrium. Unlike QPE, one-sided QPE only accounts for mistakes from one player and assumes that no mistakes will be made by the machine. We present experiments on standard benchmark games and an endgame from the famous man-machine match where the AI Libratus was the first to beat top human specialist professionals in heads-up no-limit Texas hold'em poker. We show that one-sided QPE can be computed more efficiently than all known prior refinements, paving the way to wider adoption of Nash equilibrium refinements in settings with perfectly rational machines (or humans perfectly actuating machine-generated strategies) that interact with players prone to mistakes. We also show that one-sided QPE tends to play better than a Nash equilibrium strategy against imperfect opponents.
accept
The paper presents a refinement of Nash equilibrium for cases where one player is rational and the other is not. Some nice properties of the one-sided quasi-perfect equilibrium (QPE) have been presented. The paper is well written, with very valuable insights. Most concerns raised in the reviews have been addressed by the authors. The contents are solid enough to justify publication, although the significance of the work depends heavily on whether practical scenarios for one-sided QPE can be found.
test
[ "kfFdl8qhmV", "g_kmCQe_AhB", "BW79EhHM2YH", "T4Lhc2uMwGq", "gC0TSf6R0Vf", "8sih0SzxtuX", "BFkiqCen61G", "ZzX-l0-795h", "ympW6Y_puQI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper investigates a trembling-hand refinement of Nash equilibrium in sequential games in which one of the players is considered entirely rational (i.e., a machine), while the second player makes mistakes the first player should account for (i.e., a human). The authors discuss two most commonly studied two-si...
[ 8, -1, 5, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_iVL-2vJsy4e", "T4Lhc2uMwGq", "nips_2021_iVL-2vJsy4e", "gC0TSf6R0Vf", "BFkiqCen61G", "ympW6Y_puQI", "BW79EhHM2YH", "kfFdl8qhmV", "nips_2021_iVL-2vJsy4e" ]
nips_2021_QMG2bzvk5HV
Interpreting Representation Quality of DNNs for 3D Point Cloud Processing
In this paper, we evaluate the quality of knowledge representations encoded in deep neural networks (DNNs) for 3D point cloud processing. We propose a method to disentangle the overall model vulnerability into the sensitivity to rotation, translation, scale, and local 3D structures. In addition, we propose metrics to evaluate the spatial smoothness of encoding 3D structures, and the representation complexity of the DNN. Based on such analysis, experiments expose representation problems with classic DNNs and explain the utility of adversarial training. The code will be released when this paper is accepted.
accept
This paper introduces metrics that reflect the representation quality and properties acquired by a deep net applied to 3D point-cloud (PC) processing tasks. The paper presents several ways to measure the sensitivity of point cloud networks to rotation, scale, translation, and local structure. All reviewers laud the originality and clarity of the manuscript and its ideas. The initial paper lacked comparative analysis on different datasets, which the rebuttal seems to have addressed. All reviewers agree that this paper provides an interesting result for the community, justifying acceptance.
train
[ "x3qTkEx19dU", "A4My_nl5jAN", "EHGcKo8H-hi", "WHpaDzENRU8", "-mNriH3FKC-", "gTOmeOresim", "UQmcivZ5cyt", "sayNP2yYWpN", "XBWwG6tyrqY", "Jwy3XY8Kkjm", "q81bzrlnTVY", "HktVr-gWSU", "mvxTtCpE2FT", "MCyQ3zjXfi7", "l3p7pk5Uiq3", "ZjuaJdTbK9X" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your great efforts on the review of this paper. We will follow your suggestions to update the paper.", "This paper measures the sensitivity of various point cloud classification networks (like PointNet, PointNet++, DGCNN) to rotation, translation, scale, and local 3D structure. It also p...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "EHGcKo8H-hi", "nips_2021_QMG2bzvk5HV", "gTOmeOresim", "UQmcivZ5cyt", "mvxTtCpE2FT", "UQmcivZ5cyt", "HktVr-gWSU", "MCyQ3zjXfi7", "l3p7pk5Uiq3", "ZjuaJdTbK9X", "A4My_nl5jAN", "A4My_nl5jAN", "ZjuaJdTbK9X", "nips_2021_QMG2bzvk5HV", "nips_2021_QMG2bzvk5HV", "nips_2021_QMG2bzvk5HV" ]
nips_2021_-KGLlWv6kIc
How Fine-Tuning Allows for Effective Meta-Learning
Representation learning has served as a key tool for meta-learning, enabling rapid learning of new tasks. Recent works like MAML learn task-specific representations by finding an initial representation requiring minimal per-task adaptation (i.e. a fine-tuning-based objective). We present a theoretical framework for analyzing a MAML-like algorithm, assuming all available tasks require approximately the same representation. We then provide risk bounds on predictors found by fine-tuning via gradient descent, demonstrating that the method provably leverages the shared structure. We illustrate these bounds in the logistic regression and neural network settings. In contrast, we establish settings where learning one representation for all tasks (i.e. using a "frozen representation" objective) fails. Notably, any such algorithm cannot outperform directly learning the target task with no other information, in the worst case. This separation underscores the benefit of fine-tuning-based over “frozen representation” objectives in few-shot learning.
accept
This paper provides some theoretical insight into when and why fine-tuning is beneficial for meta-learning. Specifically, it shows how learning a single fixed representation for all tasks can fail, and under what conditions allowing the representation to be tuned for downstream tasks can be successful. Reviewers all appreciated the addition of theoretical insight and a framework for understanding meta-learning. The main criticism from reviewers was that it was hard to get intuition for the assumptions made and the settings considered. Authors should include better examples and diagrams in the camera-ready version.
train
[ "Ym7ixkhpdF7", "n9B6ZaMU_C6", "uXryls1VGSr", "vWWqCZyyz22", "aXMU3PJs3Wt", "jKix7ZNUTs1", "janeJIARVHo", "6naHPXKMu4m", "Dy0M8CCakE3", "OLr2UVWtZqE" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your response.\n\n**A simple example.** The statement is easiest to visualize with a linear model $y = x^\\top\\theta_t + z$, with Gaussian $z$ and Gaussian $x$ with covariance $\\Sigma$. In our fine-tuning setting, $\\theta_t = (B + \\Delta_t)w_t = Bw_t + \\Delta_tw_t$, where $B$ is the shared repr...
[ -1, -1, -1, -1, -1, -1, 6, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 3, 2 ]
[ "n9B6ZaMU_C6", "aXMU3PJs3Wt", "OLr2UVWtZqE", "Dy0M8CCakE3", "6naHPXKMu4m", "janeJIARVHo", "nips_2021_-KGLlWv6kIc", "nips_2021_-KGLlWv6kIc", "nips_2021_-KGLlWv6kIc", "nips_2021_-KGLlWv6kIc" ]
nips_2021_IEniJ8TiV1
Cooperative Stochastic Bandits with Asynchronous Agents and Constrained Feedback
lin yang, Yu-Zhen Janice Chen, Stephen Pasteris, Mohammad Hajiesmaili, John C. S. Lui, Don Towsley
accept
Reviewers are mostly positive about this submission. They agree that the paper studies a new and relevant problem, and the techniques are non-trivial and interesting. The reviews also provide many directions for the paper to improve, e.g. (1) correcting and clarifying some technical details (2) adjusting the presentation order for better reader experience (3) strengthening the lower bound.
train
[ "jZeiLF5F9ef", "3kRW-AZtUcw", "IFZXRKY0RFc", "96U67HathkK", "4NLIdIeSUAQ", "7sBKOoYndae", "LytJkOb2b8h", "k9r7iXLVBXF", "c3RbLLxlah0", "yCR7-8yePK-", "CNlMjhuW8ke", "hKe262QS_wC", "M6iydShbdW", "ka6pPN0OkQ1", "7X-_Ho38YA", "i6e3PwPq8zy" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks! Reviewer BHsX has helped clarify this point. I misunderstood it earlier. ", " One last correction in the statement of the reviewer. The rigorous statement of the first option mentioned by the reviewer should be as follows. The changes are highlighted in **bold**. \n\nFirst, whenever any other agent pu...
[ -1, -1, -1, 5, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "3kRW-AZtUcw", "IFZXRKY0RFc", "4NLIdIeSUAQ", "nips_2021_IEniJ8TiV1", "7sBKOoYndae", "LytJkOb2b8h", "M6iydShbdW", "nips_2021_IEniJ8TiV1", "yCR7-8yePK-", "hKe262QS_wC", "7X-_Ho38YA", "k9r7iXLVBXF", "96U67HathkK", "i6e3PwPq8zy", "nips_2021_IEniJ8TiV1", "nips_2021_IEniJ8TiV1" ]
nips_2021_rh0vIXw6i33
Multiple Descent: Design Your Own Generalization Curve
This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized. We show that the generalization curve can have an arbitrary number of peaks, and moreover, the locations of those peaks can be explicitly controlled. Our results highlight the fact that both the classical U-shaped generalization curve and the recently observed double descent curve are not intrinsic properties of the model family. Instead, their emergence is due to the interaction between the properties of the data and the inductive biases of learning algorithms.
accept
We thank the authors for this submission. Overall, the paper presents an interesting perspective on the fact that the generalization curve can have multiple descents and that the locations of the peaks can be explicitly controlled. The paper motivates the approach well. The authors have provided extensive responses to the concerns raised, and the AC and reviewers thank them for their effort. Overall, the new results obtained during the rebuttal definitely improve the quality of the paper, and we all believe that their inclusion during the rebuttal period does not substantially change the message of the paper. There was discussion and consensus that this work is interesting. There are certainly points that need to be handled, based on the discussion so far, but even with the issues and concerns raised in mind, the reviewers' position in further discussion was that this paper deserves publication, given the fixes the authors promised during the discussion period.
train
[ "cO33wuM5Jk", "EboIDVsStMu", "Stk_sUL6eTi", "fEz5VB1wMvZ", "F-B4hLLJfc8", "HPE7mXIkDvP", "BCrMU3s4E0", "ZZ7tyAFnuo", "y3vM8B_jHaS", "n1o0xBZmNp9", "xTNIJtSK0x", "uDWvp3z5AL", "X0kb4tQyQC7", "4d-vLWw5hUj", "QctJ_GXiPSR", "Qa0om0s4vP", "ix9RVjobQ7_" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Our setup is indeed the same as that of recent well-cited works, for example [34] and [28]. During the rolling discussions with Reviewer w6vd, Reviewer w6vd has agreed that our setup is the same as [34]. We hope this clarifies the issue. Our work followed up the unexplored challenges of [34], shared the same setu...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 7, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_rh0vIXw6i33", "nips_2021_rh0vIXw6i33", "F-B4hLLJfc8", "ix9RVjobQ7_", "HPE7mXIkDvP", "4d-vLWw5hUj", "n1o0xBZmNp9", "y3vM8B_jHaS", "QctJ_GXiPSR", "X0kb4tQyQC7", "nips_2021_rh0vIXw6i33", "ix9RVjobQ7_", "xTNIJtSK0x", "EboIDVsStMu", "Qa0om0s4vP", "nips_2021_rh0vIXw6i33", "nips_...
nips_2021_Tzkev89HeLZ
On Empirical Risk Minimization with Dependent and Heavy-Tailed Data
Abhishek Roy, Krishnakumar Balasubramanian, Murat A. Erdogdu
accept
The paper focuses on obtaining bounds for the Empirical Risk Minimization (ERM) procedure for regression when the data is both heavy-tailed and non-iid/dependent; this work combines the heavy-tailed and the non-iid cases. The reviewers were positive. However, the work is rather technical, and they correctly point out that the authors would make the paper substantially more accessible to the typical NeurIPS reader by incorporating the reviewer suggestions for improving the presentation.
train
[ "TUKzbO9xirW", "hI3nWOJNbpg", "ZfTg3z_ZNqG", "d_kka8TnRNt", "LDUgmitBknC", "z6FgU89WUNh", "daafErykJyt", "h-629elopve", "UhoVQIjGai9", "BhfL86v-kN6", "UhGOGtwjYzk", "jjvKFFGpSJ3" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This submission is a technical paper, which is concerned with obtaining bounds for the Empirical Risk Minimization (ERM) procedure for regression, when the data is both heavy-tailed and dependent (non-iid).\nThe performance of ERM in the heavy-tailed context is by now fairly well-understood, in part thanks to the ...
[ 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_Tzkev89HeLZ", "UhoVQIjGai9", "d_kka8TnRNt", "h-629elopve", "nips_2021_Tzkev89HeLZ", "daafErykJyt", "UhGOGtwjYzk", "jjvKFFGpSJ3", "LDUgmitBknC", "TUKzbO9xirW", "nips_2021_Tzkev89HeLZ", "nips_2021_Tzkev89HeLZ" ]
nips_2021_DHnThtAyoPj
Gone Fishing: Neural Active Learning with Fisher Embeddings
There is an increasing need for effective active learning algorithms that are compatible with deep neural networks. This paper motivates and revisits a classic, Fisher-based active selection objective, and proposes BAIT, a practical, tractable, and high-performing algorithm that makes it viable for use with neural models. BAIT draws inspiration from the theoretical analysis of maximum likelihood estimators (MLE) for parametric models. It selects batches of samples by optimizing a bound on the MLE error in terms of the Fisher information, which we show can be implemented efficiently at scale by exploiting linear-algebraic structure especially amenable to execution on modern hardware. Our experiments demonstrate that BAIT outperforms the previous state of the art on both classification and regression problems, and is flexible enough to be used with a variety of model architectures.
accept
Reviewers agreed that this is a good contribution to NeurIPS, *provided* that the additional experiments and clarifications provided by the authors during the discussion are included to the paper.
train
[ "A-WsS6YCOV", "gS8HO19ultI", "cbyZWHNyYp-", "DPJnrNSHH4K", "_Q0AM9ii2P", "oiPmdgyN1Zd", "UmYfC0Bw-pL", "AL31RhDuYen", "xDZbtbZN5Wl", "mnbeT_NahXz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper introduces an algorithm (BAIT) for batch active learning, designed for and tested on neural networks. It centres around an objective in eq 4, argmin_select trace(I_select^-1, I_all), with I as the Fisher information matrix of the final layer parameters of a neural network, an objective that has been prev...
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_DHnThtAyoPj", "xDZbtbZN5Wl", "nips_2021_DHnThtAyoPj", "UmYfC0Bw-pL", "AL31RhDuYen", "nips_2021_DHnThtAyoPj", "cbyZWHNyYp-", "mnbeT_NahXz", "A-WsS6YCOV", "nips_2021_DHnThtAyoPj" ]
nips_2021_ZCHxGFmc62a
On Riemannian Optimization over Positive Definite Matrices with the Bures-Wasserstein Geometry
In this paper, we comparatively analyze the Bures-Wasserstein (BW) geometry with the popular Affine-Invariant (AI) geometry for Riemannian optimization on the symmetric positive definite (SPD) matrix manifold. Our study begins with an observation that the BW metric has a linear dependence on SPD matrices in contrast to the quadratic dependence of the AI metric. We build on this to show that the BW metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices. We show that the BW geometry has a non-negative curvature, which further improves convergence rates of algorithms over the non-positively curved AI geometry. Finally, we verify that several popular cost functions, which are known to be geodesic convex under the AI geometry, are also geodesic convex under the BW geometry. Extensive experiments on various applications support our findings.
accept
The authors consider Riemannian optimization with the Bures-Wasserstein geometry for the symmetric positive definite matrix manifold. Since the reviewers agree that the content of the paper will interest many researchers working on this subject, we conclude that the paper is worthy of acceptance. I think that the comment on the Frobenius norm by a reviewer is important, and I expect the authors to add some comments on that issue in the revision.
train
[ "MTrsy6fpAo5", "MiVLzij7_QM", "NxSImYeTP_", "UppzPl5vX03", "w_lAtebMUhk", "jokr7x_nLo", "Zu_jUzzCj3j", "y4Eo6K075vX", "3aNdoyXsE4H", "szqhkqjcvsI", "yFj39RNV8E_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors consider the Riemannian optimization with Bures-Wasserstein (BW) geometry for symmetric positive definite (SPD) matrix manifold. The authors compare the proposed approach with its counterpart with the popular Affine-Invariant (AI) geometry.\n\nThe authors illustrate that the proposed approach (with BW ...
[ 6, 6, -1, -1, -1, 6, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "nips_2021_ZCHxGFmc62a", "nips_2021_ZCHxGFmc62a", "UppzPl5vX03", "w_lAtebMUhk", "y4Eo6K075vX", "nips_2021_ZCHxGFmc62a", "yFj39RNV8E_", "jokr7x_nLo", "MiVLzij7_QM", "MTrsy6fpAo5", "nips_2021_ZCHxGFmc62a" ]
nips_2021_dkw9OQMn1t
Refining Language Models with Compositional Explanations
Pre-trained language models have been successful on text classification tasks, but are prone to learning spurious correlations from biased datasets, and are thus vulnerable when making inferences in a new domain. Prior work reveals such spurious patterns via post-hoc explanation algorithms which compute the importance of input features. Further, the model is regularized to align the importance scores with human knowledge, so that the unintended model behaviors are eliminated. However, such a regularization technique lacks flexibility and coverage, since only importance scores towards a pre-defined list of features are adjusted, while more complex human knowledge such as feature interaction and pattern generalization can hardly be incorporated. In this work, we propose to refine a learned language model for a target domain by collecting human-provided compositional explanations regarding observed biases. By parsing these explanations into executable logic rules, the human-specified refinement advice from a small set of explanations can be generalized to more training examples. We additionally introduce a regularization term allowing adjustments for both importance and interaction of features to better rectify model behavior. We demonstrate the effectiveness of the proposed approach on two text classification tasks by showing improved performance in target domain as well as improved model fairness after refinement.
accept
The paper introduces an approach to incorporating natural language explanations into model fine-tuning. Annotators are presented with a heat-map of feature attribution scores on a target unlabeled corpus, and they are asked to refine the importance scores and describe the interaction of features in language. Descriptions are then parsed into logical rules, which provide abstract templates that can be instantiated on other examples, thus improving coverage. The model is then regularized to align with the pseudo-labeled attribution data. -- The paper tackles the important problem of efficiently exploiting costly human explanation on few examples by generalizing such explanations "compositionally" to other examples: to do so, the method effectively uses known techniques in semantic parsing, pseudo-labeling and feature attribution methods. All reviewers and myself agree that the method is interesting and novel, the experimentation is "careful & thorough" and the results confirm the effectiveness of the pseudo-labeling approach. I suggest the authors incorporate all of the reviewers' precious feedback. Following reviewers' feedback, extra care should be put into proofreading the paper for grammar and spelling errors (some I can think of right now are in Fig. 2 caption "annotaters", "rulesand", ...). I recommend this paper for acceptance.
train
[ "2GjfEvcwrQ", "2fF0lgsuK_V", "os6p1_f-D_Z", "16TvjV5OXzP", "m-fmb1oGAhw", "BlnfxeG_qWq", "UQPE2h3xYu-", "MM7NT3yyWwy", "1zxDV5Br60Z", "WQ82SXHKk5r", "RGRKa-CsnHG", "FNFb9XRvATL", "z7Rm1cWdjhe", "_P3aJJzK2OB", "MvLhIJWFs2K" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your to-the-point response. I'd like to see the systemic discussion you mentioned in the next version of the paper. I will keep my (already positive) score. ", " ### 1. Could you elaborate a bit more on how C_strict can have a higher quality than C_sample?\n\nThank you for the follow-up question. We ...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "WQ82SXHKk5r", "BlnfxeG_qWq", "RGRKa-CsnHG", "nips_2021_dkw9OQMn1t", "MM7NT3yyWwy", "1zxDV5Br60Z", "16TvjV5OXzP", "16TvjV5OXzP", "z7Rm1cWdjhe", "_P3aJJzK2OB", "MvLhIJWFs2K", "nips_2021_dkw9OQMn1t", "nips_2021_dkw9OQMn1t", "nips_2021_dkw9OQMn1t", "nips_2021_dkw9OQMn1t" ]
nips_2021_9S7jZvhS7SP
Going Beyond Linear RL: Sample Efficient Neural Function Approximation
Deep Reinforcement Learning (RL) powered by neural net approximation of the Q function has had enormous empirical success. While the theory of RL has traditionally focused on linear function approximation (or eluder dimension) approaches, little is known about nonlinear RL with neural net approximations of the Q functions. This is the focus of this work, where we study function approximation with two-layer neural networks (considering both ReLU and polynomial activation functions). Our first result is a computationally and statistically efficient algorithm in the generative model setting under completeness for two-layer neural networks. Our second result considers this setting but under only realizability of the neural net function class. Here, assuming deterministic dynamics, the sample complexity scales linearly in the algebraic dimension. In all cases, our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
accept
This paper has received a lot of discussion. In the end, the reviewers agree that the paper contributes to the literature and should be accepted. There are a number of useful comments that the authors should take into account when preparing the camera ready version of the paper.
train
[ "rh5sVIHzJtx", "zsVXdaGM1W", "tW9N4qGxPw", "9_ArfjkXkWi", "QFNSID2fwPa", "XYBPY3NjK6L", "Cqqtki_bgkL", "qkqsiGv9N_6", "-9rI2W9Muy6", "_1ZL5Z8LzBl", "Hhl9F8pO3Fq", "f3TQFCOnCZ", "HeLJ1jT8a5i", "RxmIOzCma7A", "1nC7kmYQ0lk", "d3F5tNGjLa", "7ujbrVnth5W", "JZbET0m03Xt", "1IYllpgxwc", ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Dear Area chair, PC and other Reviewers,\nRealizability is not an assumption that can be proved! This is like asking to prove that the optimal regressor f* is a neural net, this is an assumption. The entire field of statistical learning works with assumptions like this or chooses to compare to the best neural ne...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, 3 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "zsVXdaGM1W", "QFNSID2fwPa", "nips_2021_9S7jZvhS7SP", "RxmIOzCma7A", "-9rI2W9Muy6", "_1ZL5Z8LzBl", "qkqsiGv9N_6", "7ujbrVnth5W", "d3F5tNGjLa", "JZbET0m03Xt", "nips_2021_9S7jZvhS7SP", "1nC7kmYQ0lk", "nips_2021_9S7jZvhS7SP", "f3TQFCOnCZ", "HeLJ1jT8a5i", "zzy8VqUUBIq", "tW9N4qGxPw", "...
nips_2021_NEQYGJr1qL3
Scalable Neural Data Server: A Data Recommender for Transfer Learning
Absence of large-scale labeled data in the practitioner's target domain can be a bottleneck to applying machine learning algorithms in practice. Transfer learning is a popular strategy for leveraging additional data to improve the downstream performance, but finding the most relevant data to transfer from can be challenging. Neural Data Server (NDS), a search engine that recommends relevant data for a given downstream task, has been previously proposed to address this problem (Yan et al., 2020). NDS uses a mixture of experts trained on data sources to estimate similarity between each source and the downstream task. Thus, the computational cost to each user grows with the number of sources, and an expensive training step is required for each data provider. To address these issues, we propose Scalable Neural Data Server (SNDS), a large-scale search engine that can theoretically index thousands of datasets to serve relevant ML data to end users. SNDS trains the mixture of experts on intermediary datasets during initialization, and represents both data sources and downstream tasks by their proximity to the intermediary datasets. As such, the computational cost incurred by users of SNDS remains fixed as new datasets are added to the server, without pre-training for the data providers. We validate SNDS on a plethora of real-world tasks and find that data recommended by SNDS improves downstream task performance over baselines. We also demonstrate the scalability of our system by showing its ability to select relevant data for transfer outside of the natural image setting.
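The abstract above turns on representing both data sources and downstream tasks by their proximity to a fixed set of intermediary experts, so that the per-user cost stays constant as sources are added. The sketch below illustrates that idea only; the expert stand-ins and the cosine-similarity ranking are our own illustrative assumptions, not the actual SNDS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for experts pretrained on intermediary datasets: each maps
# dataset features to a scalar affinity score in (0, 1).
experts = [lambda f, w=rng.standard_normal(16): 1 / (1 + np.exp(-f @ w))
           for _ in range(8)]

def proximity_vector(feats, experts):
    """Fixed-length representation: similarity of a dataset to each expert."""
    return np.array([e(feats) for e in experts])

def recommend(sources, target_feats, experts, k=2):
    """Rank sources by how well their proximity vectors match the target's."""
    t = proximity_vector(target_feats, experts)
    def score(feats):
        s = proximity_vector(feats, experts)
        return s @ t / (np.linalg.norm(s) * np.linalg.norm(t) + 1e-12)
    return sorted(sources, key=lambda name: -score(sources[name]))[:k]

sources = {f"source_{i}": rng.standard_normal(16) for i in range(5)}
print(recommend(sources, rng.standard_normal(16), experts))
```

Because every dataset is reduced to a vector of length `len(experts)`, adding a new source never changes the cost a single user pays, which is the scalability property the abstract emphasizes.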
accept
Quite a borderline paper with a high variance of scores. I'm recommending acceptance though the average score is probably below what would correspond to an accepted paper at NeurIPS. Mostly I'm siding with the positive reviewers because: -- The negative score/review is somewhat of an outlier compared to the other two. -- The two positive reviewers voiced their support of acceptance in the discussion, whereas the negative reviewer did not weigh in. This caused me to downweight their assessment a little. -- The rebuttal seems reasonable, and was at least persuasive to the most positive reviewer. Again, the negative reviewer was rebutted but did not respond (which I take as somewhat implicit support). Ultimately a tough call. The positive reviewers found the problem important, the solution effective, and praised the introduction of new datasets for the task. The negative reviewer criticized the novelty of the work, though again this was not a consensus view. Most reviewers raised issues of clarity, though these can likely be addressed in a revision (or have already been addressed in the rebuttal). Still may be borderline given that the paper doesn't quite have a "champion" and likely will have lower scores than other accepted papers. But there is reasonable endorsement by the reviewers (especially weighted by participation in the discussion), and the criticisms do not seem like major red flags.
train
[ "CpmbitnCYyA", "TzeTAYaDwT-", "e-31yxGRsg8", "aswRDBGBc1E", "4JzEe2symVm", "PyZ8l_Wg0L_", "6WJ4yTkDH8", "8U9aLvnJb2n", "rZykisK-4Ha" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper extends Neural Data Server (NDS), which recommends relevant data points for the target task, to be more scalable, by employing a fleet of a mixture of experts. Each expert is trained on data slices, which is based on predefined assumptions, and generates representations that will be used to select relev...
[ 5, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_NEQYGJr1qL3", "4JzEe2symVm", "PyZ8l_Wg0L_", "nips_2021_NEQYGJr1qL3", "CpmbitnCYyA", "rZykisK-4Ha", "8U9aLvnJb2n", "nips_2021_NEQYGJr1qL3", "nips_2021_NEQYGJr1qL3" ]
nips_2021_KLS346_Asf
What can linearized neural networks actually say about generalization?
For certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation. Still, a growing body of work keeps leveraging this approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. In our work, we provide strong empirical evidence to determine the practical validity of such an approximation by conducting a systematic comparison of the behavior of different neural networks and their linear approximations on different tasks. We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks, even when they achieve very different performances. However, in contrast to what was previously reported, we discover that neural networks do not always perform better than their kernel approximations, and reveal that the performance gap heavily depends on architecture, dataset size and training task. We discover that networks overfit to these tasks mostly due to the evolution of their kernel during training, thus revealing a new type of implicit bias.
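Since this abstract hinges on comparing networks with their linearizations, a minimal sketch of the empirical NTK for a toy scalar-output network may help; this is just the textbook definition K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, not the paper's experimental pipeline.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
X = torch.randn(5, 2)

def param_grad(x):
    """Gradient of the network output at one input w.r.t. all parameters."""
    net.zero_grad()
    net(x.unsqueeze(0)).squeeze().backward()
    return torch.cat([p.grad.flatten() for p in net.parameters()])

grads = torch.stack([param_grad(x) for x in X])   # shape: (n, num_params)
K = grads @ grads.T                               # empirical NTK Gram matrix
print(K.shape)  # torch.Size([5, 5])
```

Training the linearized model then amounts to kernel regression with K held at initialization; the abstract's point is precisely that the evolution of this kernel during training is where networks and their linearizations part ways.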
accept
This paper explores what we can learn about neural network generalization through linearization, such as the empirical NTK. There were a number of supportive sentiments, but also concerns, discussed at length in the rebuttal period. One of the key concerns was novelty with respect to Baratin et al. After discussion, it was understood that there are still novel contributions, such as the relative complexity of tasks being predicted with the empirical NTK, some experiments demonstrating the superiority of the linearized model, and how alignment can hurt generalization. The final version of this paper should _carefully_ discuss Baratin et al. and other related work, and address all of the reviewer questions, incorporating into the paper the experiments that were part of the rebuttal.
train
[ "KORQNkZ3-3d", "Vrq2aUES5eU", "HnnBi-MPq1Q", "SVS-fcIPgxW", "FHulDm7Gkz", "gNxOtlrH4N5", "3aQDVU6pIPp", "J0t3FhyG_H2", "VlbdqKjOpk", "vLV_0rvm27O" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your answer, and for being open to increasing your score. Specifically, considering that [Reviewer rfX8](https://openreview.net/forum?id=KLS346_Asf&noteId=SVS-fcIPgxW) has recognized they were harsh in their previous review, significantly increased their score, and that [Reviewer 4dYZ](https://openr...
[ -1, -1, 5, -1, 7, -1, -1, -1, -1, 5 ]
[ -1, -1, 5, -1, 3, -1, -1, -1, -1, 2 ]
[ "gNxOtlrH4N5", "SVS-fcIPgxW", "nips_2021_KLS346_Asf", "J0t3FhyG_H2", "nips_2021_KLS346_Asf", "VlbdqKjOpk", "FHulDm7Gkz", "HnnBi-MPq1Q", "vLV_0rvm27O", "nips_2021_KLS346_Asf" ]
nips_2021_eVuMspr9cu5
CATs: Cost Aggregation Transformers for Visual Correspondence
We propose a novel cost aggregation network, called Cost Aggregation Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations. Cost aggregation is a highly important process in matching tasks, as the matching accuracy depends on the quality of its output. Hand-crafted methods for cost aggregation lack robustness to severe deformations, while CNN-based methods inherit the limitation of CNNs in discriminating incorrect matches due to limited receptive fields. In contrast, CATs explore global consensus among the initial correlation map with the help of architectural designs that allow us to fully leverage the self-attention mechanism. Specifically, we include appearance affinity modeling to aid the cost aggregation process in order to disambiguate the noisy initial correlation maps, and propose multi-level aggregation to efficiently capture different semantics from hierarchical feature representations. We then add a swapping self-attention technique and residual connections, not only to enforce consistent matching but also to ease the learning process, and we find that these choices result in an apparent performance boost. We conduct experiments to demonstrate the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models are available at https://github.com/SunghwanHong/CATs.
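To make the pipeline above concrete, here is a stripped-down sketch: build the raw correlation (cost) map between two feature maps and run self-attention over it. It omits the appearance affinity term, multi-level aggregation, and swapping attention, so it illustrates the general idea rather than the actual CATs block.

```python
import torch
import torch.nn as nn

# feats_*: (B, C, H, W) feature maps extracted from the two images
B, C, H, W = 2, 32, 8, 8
feats_src = torch.randn(B, C, H, W)
feats_trg = torch.randn(B, C, H, W)

# Initial correlation (cost) map between all source/target positions.
corr = torch.einsum('bchw,bcxy->bhwxy', feats_src, feats_trg)
corr = corr.reshape(B, H * W, H * W)

# Treat each source position's row of matching costs as a token and let
# self-attention aggregate global consensus across the whole cost map,
# instead of the local receptive field a convolution would see.
attn = nn.MultiheadAttention(embed_dim=H * W, num_heads=4, batch_first=True)
aggregated, _ = attn(corr, corr, corr)
print(aggregated.shape)  # torch.Size([2, 64, 64])
```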
accept
This paper initially had mixed reviews (7,7,6,3). The reviewers seem to agree that the proposed method is interesting, new enough to warrant publication, and that the experiments are solid. The negative reviewer raised concerns about the paper's presentation, including a lack of justification of certain technical choices and statements. Thanks to the authors' rebuttal, many of these concerns have been addressed (also thanks to the additional experiments). The negative reviewer upgraded their rating to 5, saying they are convinced by the new experiments that the authors have added. In a private discussion between the reviewers (not visible to the authors), an approximate consensus emerged: the amount of revision required to improve the presentation is considered small enough to warrant acceptance now. The reviewers explicitly said that they trust the authors to make the promised changes.
train
[ "EHIESfpm2Pb", "wOtDLTuyfrq", "a-lFohyKy-0", "r0hs3NeHaJm", "yAGtbDAGfiJ", "EuyaCTCpdlh", "VAGHKYO68t", "zoo4CF3oGUu", "pUFDegUw-zH", "QTZbsNO5J8", "jEKjeNuf8Tj", "uL8J7hMWFY5", "Y1Wzb1Zhjhz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We are glad that the reviewers found that our rebuttal provided reasonings with sufficient backups, which helped to resolve the main concerns raised by reviewer FB1A and CGvU. We also agree that the justification to design of our proposed method (why transformer and cost map) is indeed extremely important that wi...
[ -1, 6, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ -1, 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "nips_2021_eVuMspr9cu5", "nips_2021_eVuMspr9cu5", "nips_2021_eVuMspr9cu5", "nips_2021_eVuMspr9cu5", "EuyaCTCpdlh", "VAGHKYO68t", "a-lFohyKy-0", "wOtDLTuyfrq", "a-lFohyKy-0", "r0hs3NeHaJm", "nips_2021_eVuMspr9cu5", "Y1Wzb1Zhjhz", "nips_2021_eVuMspr9cu5" ]
nips_2021_5tSmnxXb0cx
Asynchronous Stochastic Optimization Robust to Arbitrary Delays
Alon Cohen, Amit Daniely, Yoel Drori, Tomer Koren, Mariano Schain
accept
This paper studies the convergence of gradient descent with delayed updates. While most prior results depended on a parameter corresponding to the largest encountered delay, this paper derives a convergence analysis depending on just the average delay. The presented algorithm does not need to know the average delay parameter beforehand. The reviewers found this an overall solid technical contribution and recommend acceptance. The authors are strongly encouraged to take the reviewers' feedback into account when preparing the revision, as a few concerns remain: - reviewer v3Nq spotted some inaccuracies in the proofs, which the authors promised to fix - while indeed prior methods need 'to know' the maximum delay to set the theoretical step size, in practice the tuning is limited to a single parameter only (the step size). The proposed scheme has two hyperparameters (step size and threshold), and the reviewers' concerns on practical benefits seem justified. It would be great if these comments could be addressed diligently in the revision. Additionally, I would encourage the authors to comment on the concurrent work [[Aviv et al, Learning Under Delayed Feedback: Implicitly Adapting to Gradient Delays, ICML 2021](http://proceedings.mlr.press/v139/aviv21a/aviv21a.pdf)] published at ICML earlier this year. It would be great to explain the key differences and similarities of the proof techniques and results to the readers.
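The meta-review's mention of a step size plus a threshold suggests the following shape of algorithm. This sketch reflects our reading of the reviews for this record (discard a delayed gradient when its evaluation point has drifted too far from the current iterate); it is not taken from the paper and may differ from the actual Picky SGD rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    """Noisy gradient of f(x) = ||x||^2, our toy objective."""
    return 2 * x + 0.1 * rng.standard_normal(x.shape)

x = np.ones(10)
eta, threshold = 0.05, 0.5           # the two hyperparameters noted above
history = [x.copy()]
for t in range(200):
    delay = rng.integers(0, min(t + 1, 20))   # arbitrary bounded delay
    x_old = history[t - delay]                # point where the grad was computed
    g = stochastic_grad(x_old)
    # "Picky" rule (as we understand it): only apply gradients whose
    # evaluation point is still close to the current iterate.
    if np.linalg.norm(x - x_old) <= threshold:
        x = x - eta * g
    history.append(x.copy())
print(np.linalg.norm(x))
```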
train
[ "XBd1VP34AQG", "a-TJobFEkhR", "9hN_8DCFtgT", "MO0qHuRhG65", "yZ8qgeedwKe", "aKnzfdBrjFt", "CBlF7HZh8p-", "UwaPLDL2IYN", "eCgQERcjR9z", "qsl401klQ4b", "owMPX3Ld-xq", "-lAATqXuyDX", "E7qXGsjZ7S1", "yZ-SFRbpjyh" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a distributed SGD algorithm, called Picky SGD, for asynchronous implementation of SGD with multiple workers and arbitrary delays. Compared to prior works which analyzed a plain SGD algorithm, the main advantage of picky SGD is that it has a tighter complexity bound of $O(\\sigma^2/\\epsilon^4 +...
[ 5, -1, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "nips_2021_5tSmnxXb0cx", "9hN_8DCFtgT", "qsl401klQ4b", "nips_2021_5tSmnxXb0cx", "E7qXGsjZ7S1", "CBlF7HZh8p-", "eCgQERcjR9z", "nips_2021_5tSmnxXb0cx", "-lAATqXuyDX", "XBd1VP34AQG", "yZ-SFRbpjyh", "UwaPLDL2IYN", "MO0qHuRhG65", "nips_2021_5tSmnxXb0cx" ]
nips_2021_vAMh-dcNMcR
Consistent Non-Parametric Methods for Maximizing Robustness
Robi Bhattacharjee, Kamalika Chaudhuri
accept
The paper presents a novel and interesting idea to combine robustness and consistency. It is well written and provides explicit examples of how the general framework can be applied. On the negative side, only consistency is considered, and more refined convergence results such as learning rates or finite-sample bounds are missing.
train
[ "ZZ0dRd6XqKf", "FaN-_dG5Sm", "H7qWLDKAby", "tktKtbPQXF", "ny7oT2iwKFN", "pIkCtgZUPiy", "g127pz-mMcO", "MZhj5THcLKS", "6dABzUVl5jw", "Bc3Yh5HPeKJ", "uIov7f1jQJ5" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper aims to give a new robust learning framework where the radius of adversarial attacks at different points can adaptively change. The main purpose is \"to increase robustness, without sacrificing accuracy\", as claimed by the author(s). In order to achieve this goal, author(s) have utilized a number of co...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2021_vAMh-dcNMcR", "pIkCtgZUPiy", "MZhj5THcLKS", "nips_2021_vAMh-dcNMcR", "uIov7f1jQJ5", "ZZ0dRd6XqKf", "Bc3Yh5HPeKJ", "6dABzUVl5jw", "nips_2021_vAMh-dcNMcR", "nips_2021_vAMh-dcNMcR", "nips_2021_vAMh-dcNMcR" ]
nips_2021_dZ33IBX-uRm
Generalizable Multi-linear Attention Network
The majority of existing multimodal sequential learning methods focus on how to obtain effective representations and ignore the importance of multimodal fusion. The bilinear attention network (BAN) is a commonly used fusion method, which leverages tensor operations to associate the features of different modalities. However, BAN has poor compatibility with more modalities, since the computational complexity of the attention map increases exponentially with the number of modalities. Based on this concern, we propose a new method called generalizable multi-linear attention network (MAN), which can associate as many modalities as possible in linear complexity with hierarchical approximation decomposition (HAD). Moreover, considering the fact that softmax attention kernels cannot be decomposed as a linear operation directly, we adopt the addition random features (ARF) mechanism to approximate the non-linear softmax functions, with sufficient theoretical analysis. We conduct extensive experiments on four datasets of three tasks (multimodal sentiment analysis, multimodal speaker traits recognition, and video retrieval); the experimental results show that MAN achieves competitive results compared with the state-of-the-art methods, showcasing the effectiveness of the approximation decomposition and addition random features mechanism.
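The claim that softmax attention kernels cannot be linearized directly, and must instead be approximated with random features, can be illustrated with the generic positive random-feature estimator of exp(x . y) below. This is the standard construction, not the paper's specific ARF mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4096                       # input dim, number of random features
W = rng.standard_normal((m, d))

def phi(x):
    """Positive random features: E[phi(x) . phi(y)] = exp(x . y)."""
    return np.exp(W @ x - x @ x / 2) / np.sqrt(m)

x = rng.standard_normal(d) * 0.3
y = rng.standard_normal(d) * 0.3
print(np.exp(x @ y), phi(x) @ phi(y))   # the two values should be close
```

The payoff is that phi(x) @ phi(y) is an inner product of fixed-length vectors, so the (otherwise non-decomposable) softmax kernel can be slotted into linear-complexity fusion schemes of the kind the abstract describes.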
accept
The reviewers are very positive about the paper and it is clearly an accept (6,7,8). Perhaps the nature of the multimodal tasks that are chosen is slightly contrived, making it not quite worthy of a Spotlight.
train
[ "rD1_y_KM5iR", "idH2yoT7qa_", "EX6HEv7RaFR", "6S8C-UjEul1", "x_EdDcWCCB7", "13sP3cq-T1Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper extends the bilinear attention network (BAN) to process more modalities. Through production operation, the proposed model can accept multiple modalities as input. Then the authors utilize CAO, LSC, HAD and ARF to optimize the multi-linear model. This model achieves some great results on several tasks. H...
[ 6, 7, 8, -1, -1, -1 ]
[ 5, 3, 5, -1, -1, -1 ]
[ "nips_2021_dZ33IBX-uRm", "nips_2021_dZ33IBX-uRm", "nips_2021_dZ33IBX-uRm", "rD1_y_KM5iR", "EX6HEv7RaFR", "idH2yoT7qa_" ]
nips_2021_Hcr9mgBG6ds
Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning
In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning (where we are interested in learning a representation for a set of more than one node). We know that GNNs are designed to learn single-node representations. When we want to learn a node set representation involving multiple nodes, a common practice in previous works is to directly aggregate the multiple node representations learned by a GNN into a joint representation of the node set. In this paper, we show a fundamental constraint of such an approach, namely the inability to capture the dependence between nodes in the node set, and argue that directly aggregating individual node representations does not lead to an effective joint representation for multiple nodes. Then, we notice that a few previous successful works for multi-node representation learning, including SEAL, Distance Encoding, and ID-GNN, all used node labeling. These methods first label nodes in the graph according to their relationships with the target node set before applying a GNN. Then, the node representations obtained in the labeled graph are aggregated into a node set representation. By investigating their inner mechanisms, we unify these node labeling techniques into a single and most basic form, namely the labeling trick. We prove that with the labeling trick a sufficiently expressive GNN learns the most expressive node set representations, and thus in principle can solve any joint learning task over node sets. Experiments on one important two-node representation learning task, link prediction, verified our theory. Our work establishes a theoretical foundation for using GNNs for joint prediction tasks over node sets.
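The zero-one labeling trick described above is simple enough to sketch end to end: mark the target node set with an extra indicator feature before message passing. The toy GNN below (identity weights, mean aggregation) is only meant to show where the label enters, not to reproduce SEAL, DE, or ID-GNN.

```python
import numpy as np

def gnn_layer(A, X):
    """One mean-aggregation message-passing layer (no learned weights here)."""
    deg = A.sum(1, keepdims=True) + 1e-12
    return np.tanh(X + (A @ X) / deg)

def node_set_repr(A, X, target_set, layers=2):
    # Zero-one labeling trick: append a feature that is 1 exactly on the
    # target node set, so message passing can "see" which set is queried.
    label = np.zeros((X.shape[0], 1))
    label[list(target_set)] = 1.0
    H = np.concatenate([X, label], axis=1)
    for _ in range(layers):
        H = gnn_layer(A, H)
    return H[list(target_set)].sum(0)   # aggregate only after labeled passes

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
print(node_set_repr(A, X, {0, 3}))      # representation for the link (0, 3)
```

Without the label column, nodes 0 and 3 would be embedded independently of the query, which is exactly the dependence-capturing failure the abstract identifies.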
accept
The paper focuses on GNNs for link prediction and sheds light on the performance gap between GAE and SEAL by identifying a key limitation in GAE. To overcome this limitation, a labeling trick is proposed that enables the unification of existing frameworks and that generalizes to high-order node set prediction. The AC and reviewers carefully examined the author feedback and all agree that this paper makes some very pertinent contributions leading to a better understanding of GNNs for link prediction. It would be important to incorporate into the main text the remark on computational complexity from the author response. As noted by the authors, the labeling trick can result in significant computational overhead, and such a limitation should not be completely relegated to the supplements.
train
[ "16A1kOc6Fl", "SdH1a9qdIrY", "nUBnj6ryeT2", "cCynwxwjTx_", "7r-Jpp1yNRl", "EZLdbRarU2O", "wjSN7H1evqU", "8ArUMo_wznL" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new concept \"labeling trick\" along with multiple ways of using or decorating it such as \"valid labeling trick\" and \"zero-one labeling trick\". It proves that popular link prediction algorithms such as SEAL, Distance Encoding, and ID-GNN can all be considered as specific implementations...
[ 6, -1, -1, -1, -1, 6, 7, 9 ]
[ 3, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2021_Hcr9mgBG6ds", "16A1kOc6Fl", "8ArUMo_wznL", "wjSN7H1evqU", "EZLdbRarU2O", "nips_2021_Hcr9mgBG6ds", "nips_2021_Hcr9mgBG6ds", "nips_2021_Hcr9mgBG6ds" ]
nips_2021_nFdJSm9dy83
SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
Feihu Huang, Junyi Li, Heng Huang
accept
During the discussion phase, the paper received intensive discussion between the authors and the reviewers about its merits and the concerns. Two reviewers are strongly in favor of accepting this paper, given that it presents a new analysis of the Adam method for constrained non-convex optimization and also an improved variant of Adam that uses recursive variance-reduction methods and attains an optimal complexity. The major concerns stem from the convergence measure used, which differs from the standard convergence measure, and from the authors' argument about its implication for convergence in the standard measure in terms of the (proximal) gradient norm. The AC agrees that the presented results are interesting and that the analysis of Adam-style methods for constrained non-convex optimization is novel. The concern about the inconsistency between the presented convergence measure and the standard convergence measure is understandable given that the paper considers constrained non-convex optimization. However, the authors should weaken their argument about its implication for the convergence of the gradient norm or add more evidence, such as the empirical results presented in the rebuttal, to further support their claim. The AC believes the concern should be addressable, and hence recommends acceptance.
val
[ "0U1PrWfKvqC", "PlHtwKlRNJF", "jztbKscUKGY", "P2Mz7GfAA3B", "WPjyrvxTQPu", "uBuRLVimAe6", "8DPanIwE05t", "TPo2KQL_ffA" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks for your comments. We address your concerns one by one as follows:\n\n1) We provide the case 2 in our algorithm, which only is a global adaptive matrix example as the global adaptive learning rate used in AdaGrad-Norm [24]. In the numerical experiments, we only use the adaptive matrix $H_t$ given in the ca...
[ -1, 8, 8, 4, -1, -1, -1, 4 ]
[ -1, 4, 5, 4, -1, -1, -1, 5 ]
[ "TPo2KQL_ffA", "nips_2021_nFdJSm9dy83", "nips_2021_nFdJSm9dy83", "nips_2021_nFdJSm9dy83", "PlHtwKlRNJF", "P2Mz7GfAA3B", "jztbKscUKGY", "nips_2021_nFdJSm9dy83" ]
nips_2021_PFBHMlpaWY
General Nonlinearities in SO(2)-Equivariant CNNs
Invariance under symmetry is an important problem in machine learning. Our paper looks specifically at equivariant neural networks where transformations of inputs yield homomorphic transformations of outputs. Here, steerable CNNs have emerged as the standard solution. An inherent problem of steerable representations is that general nonlinear layers break equivariance, thus restricting architectural choices. Our paper applies harmonic distortion analysis to illuminate the effect of nonlinearities on Fourier representations of SO(2). We develop a novel FFT-based algorithm for computing representations of non-linearly transformed activations while maintaining band-limitation. It yields exact equivariance for polynomial (approximations of) nonlinearities, as well as approximate solutions with tunable accuracy for general functions. We apply the approach to build a fully E(3)-equivariant network for sampled 3D surface data. In experiments with 2D and 3D data, we obtain results that compare favorably to the state-of-the-art in terms of accuracy while permitting continuous symmetry and exact equivariance.
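The FFT-based procedure sketched below mirrors the abstract's description at a high level: go from Fourier coefficients to (oversampled) spatial samples, apply the pointwise nonlinearity, transform back, and truncate to the band limit. The exactness guarantee for polynomial nonlinearities and the paper's precise band-limitation handling are not reproduced; treat this as an illustrative approximation.

```python
import numpy as np

def nonlinear_fourier(coeffs, nonlin=lambda s: np.maximum(s, 0.0),
                      band=None, oversample=4):
    """Apply a pointwise nonlinearity to a band-limited signal on the circle,
    staying in the Fourier domain: inverse FFT to oversampled samples,
    apply the nonlinearity, FFT back, truncate to the band limit."""
    n = len(coeffs)
    band = n if band is None else band
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = coeffs[:n // 2]            # keep np.fft frequency layout
    padded[-(n - n // 2):] = coeffs[n // 2:]
    samples = np.fft.ifft(padded) * oversample   # oversampled spatial samples
    transformed = nonlin(samples.real)           # e.g. ReLU on sample values
    out = np.fft.fft(transformed) / oversample
    return np.concatenate([out[:band // 2], out[-(band - band // 2):]])

coeffs = np.fft.fft(np.cos(np.linspace(0, 2 * np.pi, 16, endpoint=False)))
print(nonlinear_fourier(coeffs).shape)  # (16,)
```

Oversampling reduces the aliasing that the nonlinearity's harmonic distortion introduces, which is why the accuracy of the approximation is tunable for general (non-polynomial) functions.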
accept
Congratulations, the paper is accepted to NeurIPS 2021! Please add a discussion on the tradeoffs of approximate versus exact equivariance. Furthermore, please add the additions to the supplementary material as discussed with reviewer 5pXb. Please consider including/elaborating the discussion on the challenges in generalizing your method to other transformation groups. Lastly, please incorporate the other suggestions and edits as discussed in the rebuttal and reviews.
train
[ "RJ3XDCEE8Z", "jezVom2gWuf", "cJcDcr9YTv", "T5KiCp0Rv9Z", "vGuAHtuac_D", "cCM2QLDD06", "dWEXnv-F-z", "RA9mL4pNuDU", "GnXWr3FYj6F", "SAP-cVZeseV", "vTUQuwAWNRt", "KSV9UJeKW8", "b0ZH-bJAO0D", "oS227Jaiys", "_tX37nCh10e", "PeEGX1FEbfH" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We agree that section 3 should be updated to make the simultaneous dependency of the convolution on $R^2$ and SO(2) clear at this point, in addition to the derivation in our previous response which we plan to add to the supplementary material of the paper. Regarding the similarity of our approach to SE(3) based s...
[ -1, -1, -1, -1, -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, -1, -1, 2, -1, -1, 4, -1, -1, -1, -1, -1, 5, 5 ]
[ "jezVom2gWuf", "cJcDcr9YTv", "dWEXnv-F-z", "vGuAHtuac_D", "KSV9UJeKW8", "nips_2021_PFBHMlpaWY", "oS227Jaiys", "vTUQuwAWNRt", "nips_2021_PFBHMlpaWY", "b0ZH-bJAO0D", "PeEGX1FEbfH", "cCM2QLDD06", "GnXWr3FYj6F", "_tX37nCh10e", "nips_2021_PFBHMlpaWY", "nips_2021_PFBHMlpaWY" ]
nips_2021_F-H4oe3MXXI
Denoising Normalizing Flow
Normalizing flows (NF) are expressive as well as tractable density estimation methods whenever the support of the density is diffeomorphic to the entire data-space. However, real-world data sets typically live on (or very close to) low-dimensional manifolds, thereby challenging the applicability of standard NF on real-world problems. Here we propose a novel method - called Denoising Normalizing Flow (DNF) - that estimates the density on the low-dimensional manifold while learning the manifold as well. The DNF works in 3 steps. First, it inflates the manifold - making it diffeomorphic to the entire data-space. Second, it learns an NF on the inflated manifold. Finally, it learns a denoising mapping - similarly to denoising autoencoders. The DNF relies on a single cost function and does not require alternating between a density estimation phase and a manifold learning phase - as is the case with other recent methods. Furthermore, we show that the DNF can learn meaningful low-dimensional representations from naturalistic images as well as generate high-quality samples.
accept
The paper proposes an NF with simultaneous manifold and density-on-manifold learning. Inflation noise is added to make p(x) smooth, and the reconstruction regularizer makes the first d variables independent of the noise and hence extracts information about the clean data. The proposed method is well motivated and shows good performance. We expect the authors to revise the paper according to the discussion with the reviewers.
val
[ "Fi48rKvYSaf", "Cu4gE1Bl6i8", "J49HzCoXQ6R", "POzNvPhvhT8", "r1wsbFkVdER", "hswk4cVnesp", "G7dUf17xnRR", "NKFLS3wsO6z", "L7758YrZo80", "21DiNWYD-wY", "e1245kUsUB", "rsYfxJZ2Uzw", "9-wnkKeEPu3", "CSFIFDFqUUK" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ " We wanted to confirm the intuition of R1 that the DNF can be used for probabilistic inference. We have trained the DNF on the polynomial surface dataset (see [9] for more details) which depends on a parameter $\\theta$. To evaluate the inference performance, the maximum mean discrepancy (MMD) between posterior sa...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "rsYfxJZ2Uzw", "J49HzCoXQ6R", "POzNvPhvhT8", "r1wsbFkVdER", "hswk4cVnesp", "rsYfxJZ2Uzw", "nips_2021_F-H4oe3MXXI", "rsYfxJZ2Uzw", "rsYfxJZ2Uzw", "e1245kUsUB", "rsYfxJZ2Uzw", "nips_2021_F-H4oe3MXXI", "nips_2021_F-H4oe3MXXI", "nips_2021_F-H4oe3MXXI" ]
nips_2021_lHmhW2zmVN
Attention over Learned Object Embeddings Enables Complex Visual Reasoning
Neural networks have achieved success in a wide array of perceptual tasks but often fail at tasks involving both perception and higher-level reasoning. On these more challenging tasks, bespoke approaches (such as modular symbolic components, independent dynamics models or semantic parsers) targeted towards that specific type of task have typically performed better. The downside to these targeted approaches, however, is that they can be more brittle than general-purpose neural networks, requiring significant modification or even redesign according to the particular task at hand. Here, we propose a more general neural-network-based approach to dynamic visual reasoning problems that obtains state-of-the-art performance on three different domains, in each case outperforming bespoke modular approaches tailored specifically to the task. Our method relies on learned object-centric representations, self-attention and self-supervised dynamics learning, and all three elements together are required for strong performance to emerge. The success of this combination suggests that there may be no need to trade off flexibility for performance on problems involving spatio-temporal or causal-style reasoning. With the right soft biases and learning objectives in a neural network we may be able to attain the best of both worlds.
accept
This paper received 4 strong accepts. The reviewers have lauded the work because of its extensive experiments and clear-cut results that push the SOTA. The approach is likely to be of broad interest because this "connectionist" approach is pitted against the "symbolic" one in an area where so-called neuro-symbolic models have been dominating. The AC thus recommends an oral.
train
[ "4StLT5bZTjZ", "EeIaI11WuIT", "tiRf96Vo6ub", "wBfNjWDo9Jv", "pFNEvvm1GbE", "8OaUI9FkUD", "R4Wa505Z8do", "3sxc9pZQjM", "rmz3uOC0t7Y", "Pk3Q7CASsPr", "Hc9MIFLMngl", "Uthy-nsiNy4" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks again for your detailed comments – your review is incredibly helpful in helping us improve the quality of our paper. We will go through your comments carefully when revising our paper. Also, thank you for clarifying your point about additional pretraining. We weren't sure if you were looking for anything s...
[ -1, 9, -1, -1, -1, 7, 8, -1, -1, -1, -1, 8 ]
[ -1, 5, -1, -1, -1, 5, 4, -1, -1, -1, -1, 5 ]
[ "tiRf96Vo6ub", "nips_2021_lHmhW2zmVN", "Pk3Q7CASsPr", "Hc9MIFLMngl", "3sxc9pZQjM", "nips_2021_lHmhW2zmVN", "nips_2021_lHmhW2zmVN", "8OaUI9FkUD", "Uthy-nsiNy4", "EeIaI11WuIT", "R4Wa505Z8do", "nips_2021_lHmhW2zmVN" ]
nips_2021_aj8x18_Te9
Differentially Private Federated Bayesian Optimization with Distributed Exploration
Bayesian optimization (BO) has recently been extended to the federated learning (FL) setting by the federated Thompson sampling (FTS) algorithm, which has promising applications such as federated hyperparameter tuning. However, FTS is not equipped with a rigorous privacy guarantee which is an important consideration in FL. Recent works have incorporated differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms. Following this general DP framework, our work here integrates DP into FTS to preserve user-level privacy. We also leverage the ability of this general DP framework to handle different parameter vectors, as well as the technique of local modeling for BO, to further improve the utility of our algorithm through distributed exploration (DE). The resulting differentially private FTS with DE (DP-FTS-DE) algorithm is endowed with theoretical guarantees for both the privacy and utility and is amenable to interesting theoretical insights about the privacy-utility trade-off. We also use real-world experiments to show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee (small privacy loss) and induces a trade-off between privacy and utility.
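The "general framework for adding DP to iterative algorithms" referenced above boils down, per round, to clipping each user's vector and adding calibrated Gaussian noise before aggregation. A generic sketch follows; the parameter names are ours, and the actual DP-FTS-DE calibration of the noise multiplier to an (epsilon, delta) budget is in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(vectors, clip_norm=1.0, noise_mult=1.0):
    """Gaussian-mechanism aggregation: clip each agent's contribution to
    bound its sensitivity, add noise scaled to that bound, then average."""
    clipped = [v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))
               for v in vectors]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(vectors)

params = [rng.standard_normal(5) for _ in range(10)]  # one vector per agent
print(dp_aggregate(params))
```

The privacy-utility trade-off the abstract analyzes shows up directly here: a larger `noise_mult` (or smaller `clip_norm`) strengthens privacy but degrades the aggregated signal.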
accept
This paper combines differential privacy with federated Bayesian optimization using the federated Thompson sampling algorithm. We appreciate this research direction, but find that the paper falls below the (high) bar for NeurIPS for a combination of the following reasons. - The technical novelty of the contribution is limited. The proposed algorithm uses the now standard approach to differentially private federated learning of reducing the problem to iteratively averaging vectors, clipping them, and adding Gaussian noise. The privacy and utility analyses seem to closely mirror prior work. - Several reviewers were uncertain how to interpret the main utility guarantee (Theorem 1) and its various parameters. This was clarified by the author response, but concerns remain. In particular, it shows limited improvement over the baseline of local learning. Thus the significance of the contribution is not sufficiently clear. - The clarity of the exposition needs to be improved. The problem setup and motivation for this work are not adequately explained. A high level issue is that the introduction focuses on *algorithms* instead of discussing the *problems* that they are meant to solve.
train
[ "f27gU1vgGe1", "bEpF099YopR", "h9oOwGWNxJ5", "ei8-zBRJkd_", "tEELhkDvkF", "GHyK7AGJQpe", "_i459QPRHk", "It0KVSQ76Mj", "pFo8M_gFJq", "9j0MJuwLiU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a differentially private version of FTS-DE algorithm. Theoretically, the authors provide not only the privacy guarantee of the proposed algorithm, but also provide the privacy-utility trade-off. Experimentally, the impact of different parameters is verified. The results match the privacy-utilit...
[ 5, -1, 5, -1, -1, -1, -1, -1, 6, 6 ]
[ 2, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "nips_2021_aj8x18_Te9", "_i459QPRHk", "nips_2021_aj8x18_Te9", "GHyK7AGJQpe", "9j0MJuwLiU", "h9oOwGWNxJ5", "pFo8M_gFJq", "f27gU1vgGe1", "nips_2021_aj8x18_Te9", "nips_2021_aj8x18_Te9" ]
nips_2021_bdA60x7yG0T
Differentiable Learning Under Triage
Multiple lines of evidence suggest that predictive models may benefit from algorithmic triage. Under algorithmic triage, a predictive model does not predict all instances but instead defers some of them to human experts. However, the interplay between the prediction accuracy of the model and the human experts under algorithmic triage is not well understood. In this work, we start by formally characterizing under which circumstances a predictive model may benefit from algorithmic triage. In doing so, we also demonstrate that models trained for full automation may be suboptimal under triage. Then, given any model and desired level of triage, we show that the optimal triage policy is a deterministic threshold rule in which triage decisions are derived deterministically by thresholding the difference between the model and human errors on a per-instance level. Building upon these results, we introduce a practical gradient-based algorithm that is guaranteed to find a sequence of predictive models and triage policies of increasing performance. Experiments on a wide variety of supervised learning tasks using synthetic and real data from two important applications---content moderation and scientific discovery---illustrate our theoretical results and show that the models and triage policies provided by our gradient-based algorithm outperform those provided by several competitive baselines.
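The characterization above of the optimal triage policy, a deterministic threshold rule on per-instance error differences, translates directly into code. The sketch below assumes the per-instance error estimates are already available; producing them jointly with the model is the part the paper's gradient-based algorithm addresses.

```python
import numpy as np

def triage(model_err, human_err, budget):
    """Defer the `budget` fraction of instances where the model's
    per-instance error exceeds the human's by the largest margin,
    i.e. a threshold rule on the error difference."""
    diff = model_err - human_err                 # positive => human is better
    n_defer = int(budget * len(diff))
    defer_idx = np.argsort(-diff)[:n_defer]
    policy = np.zeros(len(diff), dtype=bool)
    policy[defer_idx] = True                     # True = send to human expert
    return policy

rng = np.random.default_rng(0)
model_err, human_err = rng.random(8), rng.random(8)
print(triage(model_err, human_err, budget=0.25))
```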
accept
Strengths: - Sensible algorithmic approach - Sound theoretical analysis for setup and algorithm - Thorough experiments on both synthetic and real data Weaknesses: - Clarity could be improved; over-usage of math and notation - In experiments: better describing baselines and adding ablation studies Summary: Reviewers are in agreement that this is a good paper and should be accepted. Most concerns regarded points that were unclear, but the discussions were helpful in this respect. The authors also extended their local convergence result to a global result under some assumptions, which addressed a concern raised by one of the reviewers.
train
[ "O0l7MIZ0x_B", "h8hVGjRbcdn", "ZrUwkrdzEZH", "YIJ8r9WbxHP", "wPtDitelmdL", "woEvf60Ns-", "XoJvN4YXiCf", "jDoqqUfxsA" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the helpful comments and suggestions, which will help improve our paper.\n\n[Building a sequence of triage policies] We sequentially train the predictive models as follows. At step $t = 1$, we train the model $m_{\\theta_1}$ using all training samples, as implied by the choice of initial...
[ -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "jDoqqUfxsA", "XoJvN4YXiCf", "wPtDitelmdL", "woEvf60Ns-", "nips_2021_bdA60x7yG0T", "nips_2021_bdA60x7yG0T", "nips_2021_bdA60x7yG0T", "nips_2021_bdA60x7yG0T" ]
nips_2021_euUgX7XM9j
A New Theoretical Framework for Fast and Accurate Online Decision-Making
Nicolò Cesa-Bianchi, Tom Cesari, Yishay Mansour, Vianney Perchet
accept
The paper considers a variant of a sequential selection problem under "partial feedback." This is formulated and presented as a variant of an ROI problem. The main result is an algorithm for implementing the selection rule and its convergence rate analysis. The main difficulty is discerning when to stop the experimentation and move to the next stage. Two of the reviewers are positive about the paper and one is more negative in his/her remarks. Generally there is an appreciation for the problem being studied and its broad relevance and interest. (A good chance to thank all referees for their invaluable input.) I think this is an interesting paper and it has the potential to be relevant for the NeurIPS audience. I will go along with the two more positive reviews, although my personal view on the paper is that it still has some major issues, and I believe it to be a borderline accept at best. I hope some of these issues can be addressed in the revision. 1. In my view the general "motivating story" as well as the selected title overpromises, and this sets the tone for the paper. The sequential technology testing story is indeed something many companies engage in, but I seriously doubt whether any reasonable practical process resembles the objectives in the paper, where the company has a pre-defined set of policies and tries to achieve a result close to the "best" in this class. This type of logic, similar to the best action in hindsight and the "expert" paradigm in ML, is more of theoretical analysis value. The latter is a meaningful analysis framework, but connecting it through this motivating example is tenuous at best. It leaves a bit of a sense of an answer searching for a problem. While the paper studies online decision making, certainly the key results only pertain to a rather restricted canvas in that space. For example, multistage A/B testing may better reflect what the paper is modeling and analyzing. I would hope the authors can address this in their revision. 2. If the idea is to connect to such practical settings, why not take a simple class of parametrized policies and analyze that? The problem can be fairly easily formulated as a sequential selection problem for which there is a rich literature and plenty of heuristics to choose from. For example, in such settings it is common, as in classical PAC learning, to select a tolerance parameter, say delta, and study the run length of sequential experiments needed to conclude with high probability that the true mean is positive (or exceeds some epsilon, say). It is then possible to study this as a heuristic with epsilon set to shrink with the sampling horizon. This would be somewhat akin to a modified best arm identification problem, but with the added element of having multiple stages in the game (see further comments below). The current formulation seems to me a bit contrived, especially as it effectively requires a finite-cardinality policy class. (The extensions discussed in the paper notwithstanding.) 3. I also feel the placement of the contribution relative to the literature is sketchy in parts. The problem is in essence a sequential selection problem. Perhaps I'm missing something, but I don't see a direct connection to the prophet inequality literature. That literature either seeks to approximate state-dependent stopping rules with single (or multiple) threshold rules, or attempts to bound losses of expected optimal performance relative to the best offline solution in a horizon-independent manner. If this is supposed to represent the sequential selection literature, it is not the best connective fabric. There is vast literature on optimal stopping problems that have multiple stages that is far more relevant, for example. The connection to the bandit problem is also superficial. Each stage is more connected to best arm identification in an environment where there is a known alternative that has mean zero. As such, the complexity analysis in Emilie Kaufmann's work, in particular the recent paper with Garivier on sequential hypothesis testing with overlapping hypotheses, seems more relevant. Rather than screening less relevant strands, I would suggest focusing more on the core areas that intersect with this problem and then clearly explaining how the paper adds to existing results. (As an additional example, the recent papers on A/B testing by Schmit et al and Azevedo et al are mentioned in passing only...) 4. The absence of a lower bound argument leaves the analysis incomplete (the authors' conjecture aside). But I recognize this is beyond the scope of the revision. Finally, in conjunction with 1), I believe assigning a more modest (and descriptive) title and packaging aligned with the contribution would also be a welcome modification in the revision.
train
[ "5J7d8I0rfq", "ubKnucjn1CL", "FcXuEJUKv8", "FL3A0dipHyf", "DHeba_vH8Yj", "3AxSbUBmpm3", "h7q5wSV3v2K", "ghs883lK_em", "qX0Vrj6EDkK", "95NXvWtSzO_" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications. I will leave my review as is. ", " We thank the reviewer for their questions.\n\n(i) Regarding the response to the other reviewer, what we meant is that the *proof* of the impossibility result is relatively simple and perhaps not worthy of the main body. Yet, its *consequences* ar...
[ -1, -1, -1, 7, -1, -1, -1, -1, 7, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "3AxSbUBmpm3", "FcXuEJUKv8", "h7q5wSV3v2K", "nips_2021_euUgX7XM9j", "ghs883lK_em", "qX0Vrj6EDkK", "95NXvWtSzO_", "FL3A0dipHyf", "nips_2021_euUgX7XM9j", "nips_2021_euUgX7XM9j" ]
nips_2021_K_MD-PMTLtA
When Expressivity Meets Trainability: Fewer than $n$ Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
accept
This paper makes a significant contribution, and the results were found interesting and technically innovative. However, the authors failed to discuss relevant literature in their original paper, in particular Daniely's and Bubeck et al.'s papers. Regarding Bubeck et al., this is an easily fixable mistake, and there seems to be no real issue here that cannot be resolved. However, as for Daniely's work, the relevance is too large to be brushed off. Now, the authors did make convincing arguments in their rebuttal. But, bottom line, they make arguments about the validity of the result in Daniely. Now, of course this is completely acceptable, and from my brief view of their comments I am even inclined to believe that they have a point. But such claims and discussions should have been part of the original paper and must go through the scrutiny of peer review; they shouldn't be assessed in the limited form of the discussion period. Given that NeurIPS does not accept revised versions, it is not possible to assess how the paper might look once these issues are resolved by the authors, and therefore I cannot recommend acceptance.
train
[ "Eh6Nwfv-5td", "RX1GAVl0aXP", "YimzNjyIYhM", "lmZCCmg2lVz", "GDSw5vf4SSI", "sH207nDmZ9c", "fYgBY3Um9tR", "rZPWsSslvkl", "qqA27eBJIc", "4xgzHy_VCt", "swpddq-6L0w", "WUNWIbzlwY", "YiRIUJ15PaX", "59aOhnoLECe" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We’d like to thank the reviewer for careful reading of our response and re-checking our proof. We are grateful that the reviewer spends much time reading our response & proof. Following your advice, we will add an additional paragraph to compare our work to Daniely 2019 & Bubeck et al. 2020. We will also re-poli...
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "YimzNjyIYhM", "GDSw5vf4SSI", "nips_2021_K_MD-PMTLtA", "nips_2021_K_MD-PMTLtA", "fYgBY3Um9tR", "WUNWIbzlwY", "lmZCCmg2lVz", "59aOhnoLECe", "YiRIUJ15PaX", "YimzNjyIYhM", "YimzNjyIYhM", "nips_2021_K_MD-PMTLtA", "nips_2021_K_MD-PMTLtA", "nips_2021_K_MD-PMTLtA" ]
nips_2021_Laz0L5tjml
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation
Knowledge distillation (KD) has recently been identified as a method that can unintentionally leak private information regarding the details of a teacher model to an unauthorized student. Recent research in developing undistillable nasty teachers that can protect model confidentiality has gained significant attention. However, the level of protection these nasty models offer has been largely untested. In this paper, we show that transferring knowledge to a shallow sub-section of a student can largely reduce a teacher's influence. By exploring the depth of the shallow subsection, we then present a distillation technique that enables a skeptical student model to learn even from a nasty teacher. To evaluate the efficacy of our skeptical students, we conducted KD experiments with several models, in both training-data-available and data-free scenarios, on various datasets. While distilling from nasty teachers, compared to normal student models, skeptical students consistently provide superior classification performance of up to ∼59.5%. Moreover, similar to normal students, skeptical students maintain high classification accuracy when distilled from a normal teacher, showing their efficacy irrespective of the teacher being nasty or not. We believe the ability of skeptical students to largely diminish the KD-immunity of potentially nasty teachers will motivate the research community to create more robust mechanisms for model confidentiality. We have open-sourced the code at https://github.com/ksouvik52/Skeptical2021
accept
Previous work defined a *nasty teacher* to be a specially trained network that yields nearly the same performance as a normal one; but if used as a teacher model, it will significantly degrade the performance of student models that try to imitate it (with the goal of protecting against model stealers). In this paper, the authors proposed an attack that neutralizes the protection of nasty teachers, allowing the training of accurate student models. The reviewers agree that this is an interesting paper, all of them supporting acceptance.
train
[ "-cbxqckcMe8", "RHOlhgbF2ma", "luTEQpZo9B0", "3wo0e7_r0h", "SM4JbqX55I", "DvpJWNAEnU1", "keDxn_eDTLW", "TDHMJ1Etx6l", "Z45zAV9WC2S", "4zgAYH2gr4V", "PqlBx6RWBAR", "Y-cfC34Qr97", "bRj-VC5aMRW", "3REbgiVyAcF", "UEmLTrlNFN", "7oxo_wLKYXa", "vaXR-XUtJmc", "P9Vfn2KY4Rz" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear reviewer j2bi,\n\nWe are glad to receive these comments and an \"accept\" from you. As promised we will incorporate the additional results, rearrange the paper (bringing some results from supp. to original manuscript), and add additional motivations for the work. \n\nRegards,\n\nAuthors", "1) The main cont...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "luTEQpZo9B0", "nips_2021_Laz0L5tjml", "3REbgiVyAcF", "DvpJWNAEnU1", "nips_2021_Laz0L5tjml", "keDxn_eDTLW", "Z45zAV9WC2S", "PqlBx6RWBAR", "vaXR-XUtJmc", "bRj-VC5aMRW", "P9Vfn2KY4Rz", "nips_2021_Laz0L5tjml", "UEmLTrlNFN", "RHOlhgbF2ma", "Y-cfC34Qr97", "P9Vfn2KY4Rz", "SM4JbqX55I", "n...
nips_2021_RSdCxeaOF1
High Probability Complexity Bounds for Line Search Based on Stochastic Oracles
We consider a line-search method for continuous optimization under a stochastic setting where the function values and gradients are available only through inexact probabilistic zeroth and first-order oracles. These oracles capture multiple standard settings including expected loss minimization and zeroth-order optimization. Moreover, our framework is very general and allows the function and gradient estimates to be biased. The proposed algorithm is simple to describe, easy to implement, and uses these oracles in a similar way as the standard deterministic line search uses exact function and gradient values. Under fairly general conditions on the oracles, we derive a high probability tail bound on the iteration complexity of the algorithm when applied to non-convex smooth functions. These results are stronger than those for other existing stochastic line search methods and apply in more general settings.
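To fix ideas, here is a generic backtracking line search driven by inexact zeroth- and first-order oracles, in the spirit of the description above. The constants and the grow-on-success/shrink-on-failure update are ours; the paper's algorithm and its oracle accuracy conditions are specified far more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):            # true objective (a quadratic, for the demo)
    return 0.5 * x @ x

def f_noisy(x):      # probabilistic zeroth-order oracle
    return f(x) + 1e-3 * rng.standard_normal()

def grad_noisy(x):   # probabilistic first-order oracle (possibly biased)
    return x + 1e-3 * rng.standard_normal(x.shape)

x, alpha = np.ones(4), 1.0
theta, gamma, c = 0.5, 2.0, 1e-4     # backtrack factor, growth factor, Armijo const
for _ in range(100):
    g = grad_noisy(x)
    # Armijo-style sufficient-decrease test, but with inexact f and g;
    # the step size shrinks on failure and is allowed to grow on success.
    if f_noisy(x - alpha * g) <= f_noisy(x) - c * alpha * (g @ g):
        x = x - alpha * g
        alpha = min(gamma * alpha, 1.0)
    else:
        alpha = theta * alpha
print(f(x))
```

Note that both sides of the acceptance test are noisy, which is exactly why the high-probability analysis (rather than an in-expectation one) is the delicate part.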
accept
This was perhaps the most difficult decision in my pile, and I felt the need to carefully review the paper myself. The main point against the paper is that the results are similar to many existing results that hold in expectation, and it is unlikely that members of the community would be surprised that this algorithm works with high probability. On the other hand, the authors have done a thorough job both theoretically and empirically. I believe this work will make a good reference on the topic, and the authors have addressed nearly all the points brought up in the reviews (e.g., experiments on non-convex problems and issues with the assumptions). So overall I leaned towards recommending acceptance. Some additional "reviewer comments" from my readthrough: - The paper needs to spend more time, especially early in the paper, making connections to machine learning. For example, the paper needs to be more clear that in order to converge, the mini-batch size must grow as a stationary point is approached. It should discuss the relationship between the mini-batch size and the error in the gradient (this was brought up in several reviews, and this discussion will be useful for many readers). - I feel like this algorithm is more closely related to "growing batch" methods than to stochastic gradient methods. The paper should discuss these related works and emphasize the connections to them. The paper could also refer to the growing-batch papers as ways people have tried to set the batch size in methods like this (because setting the batch size for hybrid methods like this is not easy, so referring to other papers is probably all you will have space for). - The paper needs to *explicitly say that there is a minimum batch size required*, given an accuracy level, which is different from the usual SGD setting. - Given the connection, I would also like to see a growing-batch method in the experiments (SLS does not quite seem like the right method to compare to).
train
[ "VaCMx_iNE5D", "w8ecTSgJb9O", "4_1TGMOOa1t", "GRNetkJKdC4", "ofbKLnHEFvo", "ujryVZ_F18x", "qSHHLimn4yI", "ozf89DIQcbd", "bVj-FhQdqqD", "7RSxDs5muBg", "-fXLlGUsVih", "CLAfOq7hyHY", "Tfi_hz96c-M", "uXWP4JvazE1", "fUdxkynSTK0", "4blSryzZaZE", "rCR6sa9AgGa" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " We appreciate you reading our response.", "Consider unconstrained smooth optimization with probabilistic zeroth- and first-order oracles. This paper proposes a new gradient descent with line search method and provides a high-probability upper bound of the iteration complexity. The authors highlight that their a...
[ -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "4_1TGMOOa1t", "nips_2021_RSdCxeaOF1", "4blSryzZaZE", "nips_2021_RSdCxeaOF1", "ozf89DIQcbd", "CLAfOq7hyHY", "bVj-FhQdqqD", "7RSxDs5muBg", "Tfi_hz96c-M", "fUdxkynSTK0", "nips_2021_RSdCxeaOF1", "uXWP4JvazE1", "rCR6sa9AgGa", "-fXLlGUsVih", "GRNetkJKdC4", "w8ecTSgJb9O", "nips_2021_RSdCxe...
nips_2021_KBnXrODoBW
Pay Attention to MLPs
Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
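A minimal PyTorch sketch of the gating idea in the abstract: a gMLP block replaces self-attention with a spatial (token-mixing) linear projection that gates half of the channels. Initialization details, tiny-attention variants, and the exact sizes from the paper are omitted here.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        self.spatial = nn.Linear(seq_len, seq_len)   # mixes tokens, not channels

    def forward(self, x):                            # x: (batch, seq, d_ffn)
        u, v = x.chunk(2, dim=-1)
        v = self.spatial(self.norm(v).transpose(1, 2)).transpose(1, 2)
        return u * v                                 # multiplicative gating

class GMLPBlock(nn.Module):
    def __init__(self, d_model, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Sequential(nn.Linear(d_model, d_ffn), nn.GELU())
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):                            # residual around the block
        return x + self.proj_out(self.sgu(self.proj_in(self.norm(x))))

print(GMLPBlock(d_model=64, d_ffn=256, seq_len=10)(torch.randn(2, 10, 64)).shape)
```

The only cross-token interaction is the `seq_len x seq_len` linear map inside the gate, which is the sense in which the model is "based solely on MLPs with gating".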
accept
The paper introduces an MLP-based model with gating (gMLP) that achieves comparable performance to Transformers, showing that self-attention is not critical for the success of these models. While multiple concerns were raised regarding the limitations of the proposed method, all four reviewers appreciate the strong empirical results reported in the paper and recommend acceptance. The AC agrees with this decision, and requests that the authors add to the final version the discussion and additional information provided in the rebuttal, as well as more clearly describe the limitations of the proposed approach.
train
[ "c4h6SCgtCjl", "F9HFiMuSDM", "lyctd9-7YWq", "DxDgM5-Y6x9", "tO3HAmwFR6", "Zr4Zq1GnSU3", "ag2-kr2TILT", "L9zCruQy2V3", "dTl46HxdNhP", "IPJdr3G91vN", "Vr9lQ80uBLa", "M-9ZofKgvO_", "GIcnaXpw1_i", "DTJzFxlIUS", "w9jB8orHGqr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the question. Below we report inference latencies on V100 GPUs wrt various input sizes.\n\n*Text classification (gMLP-base)*:\n\n| Seq length | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |\n|:-----------------:|:--:|----|-----|-----|-----|------|------|------|\n| V100 Latency (ms) | 13 | 13 | ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "F9HFiMuSDM", "dTl46HxdNhP", "nips_2021_KBnXrODoBW", "tO3HAmwFR6", "Zr4Zq1GnSU3", "ag2-kr2TILT", "L9zCruQy2V3", "IPJdr3G91vN", "w9jB8orHGqr", "lyctd9-7YWq", "DTJzFxlIUS", "GIcnaXpw1_i", "nips_2021_KBnXrODoBW", "nips_2021_KBnXrODoBW", "nips_2021_KBnXrODoBW" ]
nips_2021_cY8bNhXEB1
An Image is Worth More Than a Thousand Words: Towards Disentanglement in The Wild
Unsupervised disentanglement has been shown to be theoretically impossible without inductive biases on the models and the data. As an alternative approach, recent methods rely on limited supervision to disentangle the factors of variation and allow their identifiability. While annotating the true generative factors is only required for a limited number of observations, we argue that it is infeasible to enumerate all the factors of variation that describe a real-world image distribution. To this end, we propose a method for disentangling a set of factors which are only partially labeled, as well as separating the complementary set of residual factors that are never explicitly specified. Our success in this challenging setting, demonstrated on synthetic benchmarks, gives rise to leveraging off-the-shelf image descriptors to partially annotate a subset of attributes in real image domains (e.g. of human faces) with minimal manual effort. Specifically, we use a recent language-image embedding model (CLIP) to annotate a set of attributes of interest in a zero-shot manner and demonstrate state-of-the-art disentangled image manipulation results.
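The "minimal manual effort" annotation step described above can be approximated in a few lines against a public language-image model. The snippet assumes the open-source OpenAI `clip` package, a local image file, and an ad-hoc decision threshold; the prompts, attribute list, and cutoff are placeholders, not the paper's actual prompt engineering or calibration.

```python
import torch
import clip                      # https://github.com/openai/CLIP
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
prompts = {"smiling": "a photo of a smiling person",
           "glasses": "a photo of a person wearing glasses",
           "blond": "a photo of a person with blond hair"}
tokens = clip.tokenize(list(prompts.values())).to(device)

image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)  # placeholder path
with torch.no_grad():
    img = model.encode_image(image)
    txt = model.encode_text(tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    sims = (img @ txt.T).squeeze(0)              # cosine similarity per attribute

print({a: bool(s > 0.25) for a, s in zip(prompts, sims)})  # 0.25 is an arbitrary cut
```

Labels produced this way are noisy and only partial, which is precisely the regime the paper's method for disentangling partially labeled plus residual factors is designed for.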
accept
The problem setup with partially labelled images is practically meaningful. The whole paper is well organized and presented. The proposed disentangling method is well motivated and interesting. The empirical results on a number of tasks are good. I really appreciate the detailed and informative rebuttals from the authors. They helped resolve the main concerns raised in the initial reviews. All reviewers finally agree on acceptance.
train
[ "UIlTjfJTx6", "00xQ_hg7krm", "4Ur0CJlrzvx", "JXdStNr-rN1", "c-DXG7gotx", "XsEYbDLYGiJ", "v-hTCslJFFM", "f0dYxfeFfKC", "OI9p8PUw_iP", "DSqJNo4Doyf", "yytf51CbGy3", "pt_UFUK2L0", "VnLy4u24kyb", "ldX0lAc_TBI", "UjtznlfzAkk", "vUiEfGFYX9", "FLnHVVPXTdc" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " Thank you for acknowledging our clarifications and for the positive reconsideration of our paper. We believe that measuring the \"editing strength\" is a good idea, we will incorporate this additional metric in the final version.", "This paper proposes an encoder-decoder based disentanglement method that applie...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1 ]
[ "00xQ_hg7krm", "nips_2021_cY8bNhXEB1", "00xQ_hg7krm", "XsEYbDLYGiJ", "nips_2021_cY8bNhXEB1", "FLnHVVPXTdc", "OI9p8PUw_iP", "yytf51CbGy3", "ldX0lAc_TBI", "nips_2021_cY8bNhXEB1", "UjtznlfzAkk", "nips_2021_cY8bNhXEB1", "00xQ_hg7krm", "DSqJNo4Doyf", "pt_UFUK2L0", "FLnHVVPXTdc", "c-DXG7go...
nips_2021_WSykyaty6Q
Dynamics of Stochastic Momentum Methods on Large-scale, Quadratic Models
We analyze a class of stochastic gradient algorithms with momentum on a high-dimensional random least squares problem. Our framework, inspired by random matrix theory, provides an exact (deterministic) characterization for the sequence of function values produced by these algorithms which is expressed only in terms of the eigenvalues of the Hessian. This leads to simple expressions for nearly-optimal hyperparameters, a description of the limiting neighborhood, and average-case complexity. As a consequence, we show that (small-batch) stochastic heavy-ball momentum with a fixed momentum parameter provides no actual performance improvement over SGD when step sizes are adjusted correctly. For contrast, in the non-strongly convex setting, it is possible to get a large improvement over SGD using momentum. By introducing hyperparameters that depend on the number of samples, we propose a new algorithm sDANA (stochastic dimension adjusted Nesterov acceleration) which obtains an asymptotically optimal average-case complexity while remaining linearly convergent in the strongly convex setting without adjusting parameters.
accept
This paper presents several interesting results concerning stochastic gradient methods on high-dimensional least-squares problems. The characterization of the role played by the momentum coefficient in terms of convergence is sharp. The new task-dependent algorithm matches the optimal complexity.
test
[ "Vfho80tTaFX", "p5wNdY4BmZP", "wTEA_SuLv_E", "IdS94g0pJ99", "diC4d0_7n9M", "SwvwIOx7O7U", "H0TZGir-wdd", "KBxGoA3UBFw", "oUNheiohc4B", "2pIyjTlYCh4", "W9KZjxqCBv1", "XbhAdNdwEis" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for this detailed response. They address adequately my comments, and (if accepted) I do encourage the authors to make the revisions to page 9 and Appendix D as discussed above.", " Thank you for the comment.\n\nWe wanted to clarify the neighborhood convergence.\n\n[1]. In the over parameterized...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "H0TZGir-wdd", "wTEA_SuLv_E", "oUNheiohc4B", "nips_2021_WSykyaty6Q", "SwvwIOx7O7U", "IdS94g0pJ99", "XbhAdNdwEis", "W9KZjxqCBv1", "2pIyjTlYCh4", "nips_2021_WSykyaty6Q", "nips_2021_WSykyaty6Q", "nips_2021_WSykyaty6Q" ]
nips_2021_BKpNqR19JgD
Adversarial Examples in Multi-Layer Random ReLU Networks
We consider the phenomenon of adversarial examples in ReLU networks with independent Gaussian parameters. For networks of constant depth and with a large range of widths (for instance, it suffices if the width of each layer is polynomial in that of any other layer), small perturbations of input vectors lead to large changes of outputs. This generalizes results of Daniely and Schacham (2020) for networks of rapidly decreasing width and of Bubeck et al (2021) for two-layer networks. Our proof shows that adversarial examples arise in these networks because the functions they compute are \emph{locally} very similar to random linear functions. Bottleneck layers play a key role: the minimal width up to some point in the network determines scales and sensitivities of mappings computed up to that point. The main result is for networks with constant depth, but we also show that some constraint on depth is necessary for a result of this kind, because there are suitably deep networks that, with constant probability, compute a function that is close to constant.
accept
The paper studies adversarial examples in ReLU networks with independent gaussian parameters. Reviewers were generally happy with the results of the paper. In particular, authors' responses in the rebuttal period helped clarify some concerns. Overall, I think the paper is above the accept threshold.
train
[ "puIwRwXh-hg", "BOEfVkACmq", "t2hlXalauov", "sSAbGTRhrgZ", "Pyj0ubxiAp0", "Evh1AXZAxp", "tHDSEEMFtw", "i-tqTnAD5e", "KHLJgYSTk4c" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "EDIT: I will be keeping my (positive) score.\n\nThe paper studies robustness properties of ReLU networks that have random weights. The main finding is that random ReLU nets of constant depth are susceptible to adversarial examples for every input vector. The main theorem relies on some conditions about the widths ...
[ 7, 7, 6, -1, -1, -1, -1, -1, 5 ]
[ 2, 2, 2, -1, -1, -1, -1, -1, 2 ]
[ "nips_2021_BKpNqR19JgD", "nips_2021_BKpNqR19JgD", "nips_2021_BKpNqR19JgD", "tHDSEEMFtw", "puIwRwXh-hg", "BOEfVkACmq", "t2hlXalauov", "KHLJgYSTk4c", "nips_2021_BKpNqR19JgD" ]
nips_2021_IBHP61avv0R
Efficient Statistical Assessment of Neural Network Corruption Robustness
We quantify the robustness of a trained network to input uncertainties with a stochastic simulation inspired by the field of Statistical Reliability Engineering. The robustness assessment is cast as a statistical hypothesis test: the network is deemed locally robust if the estimated probability of failure is lower than a critical level. The procedure is based on an Importance Splitting simulation generating samples of rare events. We derive theoretical guarantees that are non-asymptotic w.r.t. sample size. Experiments tackling large scale networks outline the efficiency of our method making a low number of calls to the network function.
accept
This paper uses a hypothesis-testing-based approach to provide a statistical assessment of corruption robustness, with a scalable MC-based computational approach. The authors have made an impressive effort to address the concerns of the reviewers. Though not all of the concerns are fully addressed, the emerging consensus is that the additional comments, clarifications, and experiments have satisfactorily addressed many of the concerns raised.
val
[ "-vSHFNKawXK", "ynz2NV8OTE4", "teYjS1ce9BW", "SR7-MvZttAM", "feOWLMoDOzE", "27GwQjwuR0d", "Gdl2aybZYb", "7IB0gELiG_s", "06AW_9JXrzU", "0K6As5QZsa3", "lymaJj7IN8P", "Z43dFiJ_7oi", "3bH4qKQC8_" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your comments.\n\nFrom what we understood it is in deed possible as long as anonymity is respected. \n\nBelow you can find the results for 100 images from the ImageNet validation dataset, ran on a Nvidia Tesla V100 GPU, \n\nNetwork, Epsilon, Verified (%), Calls, Avg. Compute time (s)\n\nM...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "ynz2NV8OTE4", "27GwQjwuR0d", "nips_2021_IBHP61avv0R", "feOWLMoDOzE", "Gdl2aybZYb", "7IB0gELiG_s", "06AW_9JXrzU", "lymaJj7IN8P", "teYjS1ce9BW", "Z43dFiJ_7oi", "3bH4qKQC8_", "nips_2021_IBHP61avv0R", "nips_2021_IBHP61avv0R" ]
nips_2021_KrAVI2AhNJh
A Highly-Efficient Group Elastic Net Algorithm with an Application to Function-On-Scalar Regression
Feature Selection and Functional Data Analysis are two dynamic areas of research, with important applications in the analysis of large and complex data sets. Straddling these two areas, we propose a new highly efficient algorithm to perform Group Elastic Net with application to function-on-scalar feature selection, where a functional response is modeled against a very large number of potential scalar predictors. First, we introduce a new algorithm to solve Group Elastic Net in ultra-high dimensional settings, which exploits the sparsity structure of the Augmented Lagrangian to greatly reduce the computational burden. Next, taking advantage of the properties of Functional Principal Components, we extend our algorithm to the function-on-scalar regression framework. We use simulations to demonstrate the CPU time gains afforded by our approach compared to its best existing competitors, and present an application to data from a Genome-Wide Association Study on childhood obesity.
accept
Three reviewers indicated acceptance; only one reviewer had concerns about the novelty with respect to reference [1]. It turned out, however, that [1] is basically an unpublished pre-print of this paper, and in their rebuttal, the authors could convincingly show that even with respect to [1], the paper contains some important extensions. So I recommend acceptance of this paper.
test
[ "3e0HmI4OtX3", "UObW3xh_ZAy", "xoLZAyA0rM", "XvnGW2de4R7", "bH7eLhh4nkP", "7Uv4E2oaxb2", "K1jET8f1Vdv", "jOVP6fLT7Sz", "7ZGUFloZ_6l", "y6XLTcWOu1", "nFW5f0qqz8a" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new group elastic net algorithm solver for linear regression problem with multivariate response variable. The paper then claims its application on function-on-scalar. I am not an expert in optimization, but I feel that this paper is strikingly similar to [1], which aims to solve a similar ela...
[ 4, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "nips_2021_KrAVI2AhNJh", "xoLZAyA0rM", "XvnGW2de4R7", "bH7eLhh4nkP", "3e0HmI4OtX3", "nFW5f0qqz8a", "y6XLTcWOu1", "7ZGUFloZ_6l", "nips_2021_KrAVI2AhNJh", "nips_2021_KrAVI2AhNJh", "nips_2021_KrAVI2AhNJh" ]
nips_2021_zjJyjQj1W7U
Hierarchical Clustering: $O(1)$-Approximation for Well-Clustered Graphs
Bogdan-Adrian Manghiuc, He Sun
accept
The authors present a new hierarchical algorithm to cluster graphs. The main result shown in the paper is that the new algorithm returns a constant approximation for well-clustered graphs. The paper is a clear extension of previous work at NeurIPS and presents interesting theoretical results. One shortcoming of the current write-up is that the proposed algorithm does not have great experimental performance. Overall, the paper is interesting, proposes some novel ideas, and will be a nice contribution to NeurIPS.
train
[ "M2OMmUba0aY", "BxBa-3rPQM", "A_9Z5wRdn5P", "lj_Q4Ww_1al", "IFKZ1g8dTkP", "m0wP_eSQyo", "ectZOjg6w_h", "NeI_xwNrY-b", "ZwXClI-Ckc", "OYgvF9gTfBC", "XiqFLjkRH0", "VFK3L2xbs-w", "vmpqcRRdGXQ", "EEPGAqNaveH", "gn7LbaGxIbx", "k0TWnJdblq" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for reading our response and appreciation of our work. We hope that we have answered your detailed questions, and we are glad to see that you will keep your positive opinions of our work.", " Thanks to the authors for responding. The rebuttal shows that the authors have put a lot of consideration in...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "BxBa-3rPQM", "NeI_xwNrY-b", "ZwXClI-Ckc", "ectZOjg6w_h", "m0wP_eSQyo", "VFK3L2xbs-w", "XiqFLjkRH0", "gn7LbaGxIbx", "OYgvF9gTfBC", "k0TWnJdblq", "EEPGAqNaveH", "vmpqcRRdGXQ", "nips_2021_zjJyjQj1W7U", "nips_2021_zjJyjQj1W7U", "nips_2021_zjJyjQj1W7U", "nips_2021_zjJyjQj1W7U" ]
nips_2021_6ns5QTPQ_d
Realistic evaluation of transductive few-shot learning
Transductive inference is widely used in few-shot learning, as it leverages the statistics of the unlabeled query set of a few-shot task, typically yielding substantially better performances than its inductive counterpart. The current few-shot benchmarks use perfectly class-balanced tasks at inference. We argue that such an artificial regularity is unrealistic, as it assumes that the marginal label probability of the testing samples is known and fixed to the uniform distribution. In fact, in realistic scenarios, the unlabeled query sets come with arbitrary and unknown label marginals. We introduce and study the effect of arbitrary class distributions within the query sets of few-shot tasks at inference, removing the class-balance artefact. Specifically, we model the marginal probabilities of the classes as Dirichlet-distributed random variables, which yields a principled and realistic sampling within the simplex. This leverages the current few-shot benchmarks, building testing tasks with arbitrary class distributions. We evaluate experimentally state-of-the-art transductive methods over 3 widely used data sets, and observe, surprisingly, substantial performance drops, even below inductive methods in some cases. Furthermore, we propose a generalization of the mutual-information loss, based on α-divergences, which can handle effectively class-distribution variations. Empirically, we show that our transductive α-divergence optimization outperforms state-of-the-art methods across several data sets, models and few-shot settings.
accept
The submission investigates the effect of a non-uniform marginal label distribution on transductive approaches to few-shot classification. When the marginal label distribution of test episodes is sampled from a Dirichlet distribution, the paper finds that state-of-the-art transductive approaches suffer a substantial performance drop. The submission also introduces a generalization of the mutual information loss which better handles class imbalance and is shown to outperform competing approaches in that setting. Reviewers found that the paper is well-written and sheds light on an underexplored and important aspect of evaluating transductive approaches to few-shot classification. Overall they found that the proposed generalization of the mutual information loss is technically sound and convincingly backed up by experiments. Some reviewers expressed concerns about the experimental design (the value of alpha and the number of values tried, its optimal value as a function of the shot setting, the choice of baselines to compare against) which were addressed to their satisfaction by the authors. Ultimately, all reviewers agree that the paper should be accepted. I therefore recommend acceptance.
train
[ "5yyshzF2Ch", "1isEv78KlJv", "EUH3weaFXbc", "BZAlAXgzVd", "XOtkxpnKhOA", "IZw4-L1XQBk", "nrtckzP_rJg", "uI4CPNKw6W", "ue9JEWXnIeu", "SvBlcN7NArT", "hdF8V1DjUUL", "hAwBHPlNFcr", "heqVu9cRDrV", "nJ4bWA-qdoL", "m3sfWM8G8i8", "b3sXylWuwsO", "kP01t9uwi-J", "RC5bqsNPmxa", "YkupJEeGF0i"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "This paper evaluates several transductive few-shot learning methods [15,28,20,21,23,17] on class-imbalanced FSL tasks. The class distribution in the query set follows a Dirichlet distribution. The support set is balanced. This work proposes a novel method based on $\\alpha$-divergences addressing the imbalance pro...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 5, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "nips_2021_6ns5QTPQ_d", "BZAlAXgzVd", "nips_2021_6ns5QTPQ_d", "5yyshzF2Ch", "uI4CPNKw6W", "uI4CPNKw6W", "uI4CPNKw6W", "ue9JEWXnIeu", "EUH3weaFXbc", "5yyshzF2Ch", "EUH3weaFXbc", "5yyshzF2Ch", "5yyshzF2Ch", "nips_2021_6ns5QTPQ_d", "EUH3weaFXbc", "EUH3weaFXbc", "5yyshzF2Ch", "5yyshzF2...
nips_2021_0kCxbBQknN
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
Quantization is a popular technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones (e.g., 8-bit integers). It reduces the memory footprint and the computational cost at inference, facilitating the deployment of resource-hungry models. However, the parameter perturbations caused by this transformation result in behavioral disparities between the model before and after quantization. For example, a quantized model can misclassify some test-time samples that are otherwise classified correctly. It is not known whether such differences lead to a new security vulnerability. We hypothesize that an adversary may control this disparity to introduce specific behaviors that activate upon quantization. To study this hypothesis, we weaponize quantization-aware training and propose a new training framework to implement adversarial quantization outcomes. Following this framework, we present three attacks we carry out with quantization: (i) an indiscriminate attack for significant accuracy loss; (ii) a targeted attack against specific samples; and (iii) a backdoor attack for controlling the model with an input trigger. We further show that a single compromised model defeats multiple quantization schemes, including robust quantization techniques. Moreover, in a federated learning scenario, we demonstrate that a set of malicious participants who conspire can inject our quantization-activated backdoor. Lastly, we discuss potential counter-measures and show that only re-training consistently removes the attack artifacts. Our code is available at https://github.com/Secure-AI-Systems-Group/Qu-ANTI-zation
accept
The reviewers were satisfied by the responses by the authors, and encourage them to include additional results to the final version of the paper.
train
[ "DTeYBnmyC8G", "03LSKlInEsC", "BJCZ1CN9PJm", "IvFCsGnCb_E", "yVmZOEV84Q", "C4x8fve-2c_", "0KSvN8ZsWnH", "i_8WO1UPGVA", "dwfG-2BEPbb", "pETrsFlMeX", "qPTVVX9LnDX", "kK3CPCMdmJ", "fl9sOeaHW4d" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your thoughtful feedback. We will make sure to include our responses in the main body of our paper and clarify the experimental section to avoid confusion.", " I thank authors for their response. I keep my rating as accept. I encourage authors to incorporate their response into the paper, es...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "03LSKlInEsC", "i_8WO1UPGVA", "IvFCsGnCb_E", "0KSvN8ZsWnH", "nips_2021_0kCxbBQknN", "fl9sOeaHW4d", "kK3CPCMdmJ", "qPTVVX9LnDX", "pETrsFlMeX", "nips_2021_0kCxbBQknN", "nips_2021_0kCxbBQknN", "nips_2021_0kCxbBQknN", "nips_2021_0kCxbBQknN" ]
nips_2021_Ra-2OvXr7UU
Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
Raef Bassily, Cristóbal Guzmán, Michael Menart
accept
This paper studies private algorithms for stochastic convex and non-convex optimization when the losses of interest come from Generalized Linear Models. For this case, they show that faster algorithms are possible for the non-smooth convex case under $\ell_2$, and better rates are possible in the $\ell_1$ case, compared to the corresponding results for convex non-GLM losses. The authors also study non-convex losses, and there they show that under various constraints, one can privately find approximately stationary points. The paper pushes forward the research in this important area of private stochastic optimization. The authors are encouraged to go over the reviews and incorporate corrections, as well as the relevant clarifications that came in the rebuttal. I recommend this paper be accepted.
train
[ "Ws864chOccf", "OVsZwtNTFVU", "u3mnmy3N2tP", "pDoxxz8huF_", "o-lTb3Kxbox", "9Bf1LSzE_TJ", "ciTW4CyhCwt" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies differentially private stochastic optimization under different settings and develop several novel results. The authors first consider convex and nonsmooth generalized linear models, for which they develop an algorithm with optimal generalization bounds and nearly linear time. As a comparison, th...
[ 7, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, 3, 2 ]
[ "nips_2021_Ra-2OvXr7UU", "u3mnmy3N2tP", "Ws864chOccf", "ciTW4CyhCwt", "9Bf1LSzE_TJ", "nips_2021_Ra-2OvXr7UU", "nips_2021_Ra-2OvXr7UU" ]
nips_2021_edmYVRkYZv
TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning
We propose a novel approach to interactive theorem-proving (ITP) using deep reinforcement learning. The proposed framework is able to learn proof search strategies as well as tactic and arguments prediction in an end-to-end manner. We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths. This structure allows us to introduce a novel backtracking mechanism which enables the agent to efficiently discard (predicted) dead-end derivations and restart the derivation from promising alternatives. We implement the framework in the HOL theorem prover. Experimental results show that the framework using learned search strategies outperforms existing automated theorem provers (i.e., hammers) available in HOL when evaluated on unseen problems. We further elaborate the role of key components of the framework using ablation studies.
accept
This paper presents an RL approach to learning how to construct proofs in the context of an interactive theorem prover. Unlike prior work, the approach presented in this paper does not rely on existing human generated proofs. The main contribution of the paper is in the way it sets up the learning problem, although the learning techniques are fairly conventional. The main shortcoming of the paper is that it does not compare with other learned theorem provers. However, the non-learning-based baselines that it does compare against are state-of-the-art. I was actually surprised that the BFS search strategies were able to prove so many more theorems than the off-the-shelf hammers (Z3, E, Vampire). I suspect there is still a lot of room for improvement in this space, but this paper makes a clear and solid contribution to the state of the art in learned theorem provers and should be accepted.
train
[ "20K-9Z0wd9Q", "80FSluyKF82", "LaapO3BXZy5", "c2Ncm9ely89", "4Acth-ZFe19", "FRbp7PnhP3U", "_0XCkr8Q8zH", "_4EmvaNWsLO" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper discusses how to train a tactic-based theorem prover from\nscratch with a limited set of tactics and no supervised learning data.\nOne particularly interesting feature is that TacticZero can teach itself\nstrategies to guide the search.\n The connection between the MDP and tree search is not explicit e...
[ 6, -1, -1, -1, -1, 9, 8, 6 ]
[ 4, -1, -1, -1, -1, 4, 5, 4 ]
[ "nips_2021_edmYVRkYZv", "_0XCkr8Q8zH", "20K-9Z0wd9Q", "_4EmvaNWsLO", "FRbp7PnhP3U", "nips_2021_edmYVRkYZv", "nips_2021_edmYVRkYZv", "nips_2021_edmYVRkYZv" ]
nips_2021_70Q_NeHImB3
Integrating Tree Path in Transformer for Code Representation
Learning distributed representation of source code requires modelling its syntax and semantics. Recent state-of-the-art models leverage highly structured source code representations, such as the syntax trees and paths therein. In this paper, we investigate two representative path encoding methods shown in previous research work and integrate them into the attention module of Transformer. We draw inspiration from the ideas of positional encoding and modify them to incorporate these path encoding. Specifically, we encode both the pairwise path between tokens of source code and the path from the leaf node to the tree root for each token in the syntax tree. We explore the interaction between these two kinds of paths by integrating them into the unified Transformer framework. The detailed empirical study for path encoding methods also leads to our novel state-of-the-art representation model TPTrans, which finally outperforms strong baselines. Extensive experiments and ablation studies on code summarization across four different languages demonstrate the effectiveness of our approaches. We release our code at \url{https://github.com/AwdHanPeng/TPTrans}.
accept
This was a controversial paper. On the one hand, the paper's central idea is straightforward and also closely related to multiple recent efforts. On the other hand, a simple idea can be a valuable contribution if established to be effective. While the original version of the paper left some doubts about the method's effectiveness, the authors have engaged with the reviewers thoroughly during the discussion period and provided compelling additional results. The majority of the reviewers appreciated this, and based on their advice, I am recommending that the paper be accepted as a poster. Please make sure to address the suggestions made in the discussion thread.
train
[ "4XTSeiqHrtA", "j5M8N0V5KoA", "-mjtkO2bzWz", "kys-f1PVCi", "T_Fh9lfkAm5", "rSP7uKZUms", "3N0YVhxpM_5", "FIb3pzGF3m", "fN4xC8-0gGH", "RSJK8_38kQN", "8HaqltYKazw", "FItNYB4zqi8", "5TBVeBCysQo", "2JFitvBnbeJ", "TpPJNJXUY9G", "ozVfjliSddY", "9GJTpO8TLtq", "XW0HN5aW3B9", "wVosmT7MPub"...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " Thanks for your reading. \n\nAs you point out, incorporating syntax _structure_ of code snippet is much essential for code representation learning, which is worthy of in-depth research. Much appreciate your valuable suggestions and positive comments again.\n\nBesides, we have added many experiments and post new p...
[ -1, 7, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "FIb3pzGF3m", "nips_2021_70Q_NeHImB3", "nips_2021_70Q_NeHImB3", "fN4xC8-0gGH", "nips_2021_70Q_NeHImB3", "fN4xC8-0gGH", "nips_2021_70Q_NeHImB3", "TMon8t51O81", "PG_IXivG-W4", "nips_2021_70Q_NeHImB3", "nips_2021_70Q_NeHImB3", "nips_2021_70Q_NeHImB3", "3N0YVhxpM_5", "WtwdbyrFx6x", "ozVfjliS...
nips_2021_5kTlVBkzSRx
Twins: Revisiting the Design of Spatial Attention in Vision Transformers
Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully devised yet simple spatial attention mechanism performs favorably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks including image-level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks.
accept
This submission received 3 positive final ratings: 7, 6, 6. On the positive side, reviewers appreciated simplicity and effectiveness of the idea, strong empirical performance and clear presentation. At the same time, some of them initially expressed concerns with overall novelty and motivation of certain design choices. After an extensive discussion between the authors and the reviewers during the rebuttal period, one of the reviewers upgraded their score (from negative to positive), while others remained unchanged. Overall, the strengths of this paper outweigh its weaknesses, so the final recommendation is to accept for poster presentation.
train
[ "iQxG7RX1yZH", "_8sVdbZaJ8Y", "dfqQLKDAmoc", "bcZ_6yO-YQv", "-r7h5RjBKvP", "Q0LluhO2Im3", "sKewB1xWwES", "yywuRH_aIA0", "uZb46sxNacr", "6boO_Hz8h8K", "sq1kGStlvtS", "1MG4v7IYL1Y" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the timely response. For now, Twins by itself is a further step towards real-world applications for transformer-based architectures. Just as you mentioned, the deployed performance indeed involves many factors, some of which are hard to quantitatively evaluate and can vary constantly. Therefore, we try...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, 5 ]
[ "bcZ_6yO-YQv", "sq1kGStlvtS", "nips_2021_5kTlVBkzSRx", "Q0LluhO2Im3", "uZb46sxNacr", "yywuRH_aIA0", "nips_2021_5kTlVBkzSRx", "6boO_Hz8h8K", "dfqQLKDAmoc", "sKewB1xWwES", "1MG4v7IYL1Y", "nips_2021_5kTlVBkzSRx" ]
nips_2021_K9WlOVPEpnM
Evaluating State-of-the-Art Classification Models Against Bayes Optimality
Evaluating the inherent difficulty of a given data-driven classification problem is important for establishing absolute benchmarks and evaluating progress in the field. To this end, a natural quantity to consider is the \emph{Bayes error}, which measures the optimal classification error theoretically achievable for a given data distribution. While generally an intractable quantity, we show that we can compute the exact Bayes error of generative models learned using normalizing flows. Our technique relies on a fundamental result, which states that the Bayes error is invariant under invertible transformation. Therefore, we can compute the exact Bayes error of the learned flow models by computing it for Gaussian base distributions, which can be done efficiently using Holmes-Diaconis-Ross integration. Moreover, we show that by varying the temperature of the learned flow models, we can generate synthetic datasets that closely resemble standard benchmark datasets, but with almost any desired Bayes error. We use our approach to conduct a thorough investigation of state-of-the-art classification models, and find that in some --- but not all --- cases, these models are capable of obtaining accuracy very near optimal. Finally, we use our method to evaluate the intrinsic "hardness" of standard benchmark datasets.
accept
Based on the review and discussion: * the paper is clear, * the contributions are significant and original, * the claims are well expressed, * the experiments are well executed. Hence, I recommend acceptance of this work.
train
[ "Cd0DNVgXVYm", "34gqkQH-XH-", "0GPtMSHwwTQ", "fwXbzynpy1D", "nYoubqesKyC", "1VmcDm_PKQ", "5sqpEDt1Om9", "P9T7qRvCREP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: I've read the other reviews and the authors' responses, and I feel this paper is a valuable contribution. I am in favor of accepting and will increase my score to 7. \n\nThis paper proposed a new way for benchmarking state-of-the-art models. By leveraging normalizing flows and properties of Bayes error, th...
[ 7, 7, -1, -1, -1, -1, 5, 7 ]
[ 3, 4, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_K9WlOVPEpnM", "nips_2021_K9WlOVPEpnM", "P9T7qRvCREP", "5sqpEDt1Om9", "34gqkQH-XH-", "Cd0DNVgXVYm", "nips_2021_K9WlOVPEpnM", "nips_2021_K9WlOVPEpnM" ]
nips_2021_9BpjtPMyDQ
Data-Efficient Instance Generation from Instance Discrimination
Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou
accept
All reviewers agree that the paper should be accepted, given its effectiveness. I agree with the reviewers. The contents of the rebuttal are important to distinguish the paper from previous work (especially ContraD) and should therefore be incorporated into the final version.
train
[ "y-ycDJzS5sl", "VwhwjHXjqIK", "VumxpVVFm-M", "2gcGgsWszN6", "ZLlCPu3XCnE", "nliHqG_4YzL", "afakAWPpLO_", "LXUYxjYcesL", "_i-mP89xmQr", "fAeivojbOiD", "4dfyvE_ali", "6mNgv9PUEM", "Hdb4FfSNhuY", "F9LHScS8lxv", "mb0hQP2wW_3", "828uQhZTTlV", "7w6Efu_X_5a" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a kind of contrastive loss that aims to discriminate both real and generated instances for GAN training. Experiments on FFHQ and AFHQ demonstrate that the proposed method implemented upon StyleGAN2-ADA can improve FID especially with limited data. Strengths:\n- The proposed method sounds novel ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "nips_2021_9BpjtPMyDQ", "nips_2021_9BpjtPMyDQ", "ZLlCPu3XCnE", "_i-mP89xmQr", "afakAWPpLO_", "nips_2021_9BpjtPMyDQ", "4dfyvE_ali", "_i-mP89xmQr", "F9LHScS8lxv", "Hdb4FfSNhuY", "VwhwjHXjqIK", "828uQhZTTlV", "y-ycDJzS5sl", "7w6Efu_X_5a", "nips_2021_9BpjtPMyDQ", "nips_2021_9BpjtPMyDQ", ...
nips_2021_rqfq0CYIekd
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. In addition, these methods are also computationally inefficient, and require significant hyper-parameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real world datasets and user studies demonstrates the efficacy of the proposed framework.
accept
There are three extremely favorable reviews and one very strong dissenting review. The dissenting opinion (after good discussion) is: LIME, SHAP, Anchors, etc. - why are these appropriate notions of post hoc explanations? Something akin to probability of sufficiency/necessity (causal notions) would constitute truly reliable explanations. The authors claim reliability but solve stability problems in prior post-hoc procedures. The authors' contention: this dissenting opinion dismisses a whole line of prior work on (popular) post-hoc explanations, and they have sought to solve the issues of these prior explanation methods. The conflict at the heart seems to be: are explanations provided by LIME, SHAP, etc. valid/reliable explanations in the first place? Solving the stability issues of these methods does not seem to address the core problem. My position is this: probabilities of sufficiency/necessity are counterfactual notions that in general require knowledge about the causal generative models behind the data. Sometimes, stronger assumptions like monotonicity (see https://ftp.cs.ucla.edu/pub/stat_ser/r271-A.pdf) are required if we are to draw conclusions from data alone. Several recent works are exploring, generalizing, and finding novel sufficient conditions for estimating these, but they still require some side information. LIME and SHAP are perturbation-based techniques operating on the data manifold. While I agree with the dissenting opinion that they are not causally grounded (and hence not reliable in some absolute sense), in practice exploring the data manifold (in the factual sense) can sometimes provide useful explanations. This is attested by the popularity of these methods and their use in many settings at this point. Another way to look at this: given the popularity of these perturbation-based notions on the data manifold, one would also want to see this line of enquiry mature, given its easy-to-use, data-driven nature. I would not dismiss this line of enquiry completely. Given the very favorable reviews from others, I am inclined to accept this paper. The authors may want to explicitly note that their method solves stability issues in a certain class of post hoc explanation methods and does not deal with explanation methods based on counterfactual notions.
train
[ "r3i7UNBYjkc", "rKSzlswLOd8", "Ynu0Spm4Xl", "0kBJgCiM_lV", "veOnoY8I_Nh", "I-LeWtmzJz6", "xb8_HTV4oE", "DulxXuVSA0", "4NMJtZ2ojof", "Vy1-QQfW8Jl", "8mAwN3XssMb", "XCC5CcIwUgg", "14reoGO66h" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their follow up comments. Below, we address specific questions/concerns raised by the reviewer: \n\n**”Why is this even deemed to be an explanation (and not statistic correlation)?\"**: \n\nOur usage of the term “explanation” is consistent with prior literature on post hoc explainability...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, 3, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "rKSzlswLOd8", "I-LeWtmzJz6", "veOnoY8I_Nh", "nips_2021_rqfq0CYIekd", "Vy1-QQfW8Jl", "XCC5CcIwUgg", "8mAwN3XssMb", "14reoGO66h", "I-LeWtmzJz6", "0kBJgCiM_lV", "nips_2021_rqfq0CYIekd", "nips_2021_rqfq0CYIekd", "nips_2021_rqfq0CYIekd" ]
nips_2021_SnONpXZ_uQ_
Learning Graph Models for Retrosynthesis Prediction
Retrosynthesis prediction is a fundamental problem in organic synthesis, where the task is to identify precursor molecules that can be used to synthesize a target molecule. A key consideration in building neural models for this task is aligning model design with strategies adopted by chemists. Building on this viewpoint, this paper introduces a graph-based approach that capitalizes on the idea that the graph topology of precursor molecules is largely unaltered during a chemical reaction. The model first predicts the set of graph edits transforming the target into incomplete molecules called synthons. Next, the model learns to expand synthons into complete molecules by attaching relevant leaving groups. This decomposition simplifies the architecture, making its predictions more interpretable, and also amenable to manual correction. Our model achieves a top-1 accuracy of 53.7%, outperforming previous template-free and semi-template-based methods.
accept
This paper proposes a new method, GraphRetro, for single-step retrosynthesis. GraphRetro treats retrosynthesis as a two-stage problem. The first stage predicts the reaction edit, similar to previous methods. The second stage predicts which leaving group should be attached to the synthons. Different from existing methods, this paper treats the second stage as a multi-class classification problem, which is simpler and achieves better top-k accuracy for k < 5. However, the paper should compare GraphRetro with G2Gs more thoroughly and provide more insights. In addition, it should provide large-scale experiments on USPTO-full.
train
[ "s4r-5Sc2x7H", "NQ97cFgEZO-", "SqpI55_sDL", "ytuZSbkC-Bx", "ADbqXl0m8rM", "3h3aMDiR4K4", "ljXblImFww4", "HyD5Nip3PES", "SSfj6q8kZc1", "Irip7Vkyx1S", "RKFzaEgsgrB", "flmMPD1JrmY", "JPID7-jnbh3", "xz8QkAXP5_C", "W0l4c-yRSL", "Oq27wmRPw0M" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper deals with molecular graph generation via synthon, incomplete molecular graphs. \nUnlike existing works, this paper formulates the completion of incomplete molecular graphs not as a generation problem but as a classification problem to select one of a set of pre-computed leaving groups fragments.\nThe t...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 4 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "nips_2021_SnONpXZ_uQ_", "ADbqXl0m8rM", "ljXblImFww4", "RKFzaEgsgrB", "SSfj6q8kZc1", "HyD5Nip3PES", "Irip7Vkyx1S", "RKFzaEgsgrB", "RKFzaEgsgrB", "s4r-5Sc2x7H", "Oq27wmRPw0M", "W0l4c-yRSL", "xz8QkAXP5_C", "nips_2021_SnONpXZ_uQ_", "nips_2021_SnONpXZ_uQ_", "nips_2021_SnONpXZ_uQ_" ]
nips_2021_umuW_b77q9A
Differentiable Equilibrium Computation with Decision Diagrams for Stackelberg Models of Combinatorial Congestion Games
We address Stackelberg models of combinatorial congestion games (CCGs); we aim to optimize the parameters of CCGs so that the selfish behavior of non-atomic players attains desirable equilibria. This model is essential for designing such social infrastructures as traffic and communication networks. Nevertheless, computational approaches to the model have not been thoroughly studied due to two difficulties: (I) bilevel-programming structures and (II) the combinatorial nature of CCGs. We tackle them by carefully combining (I) the idea of \textit{differentiable} optimization and (II) data structures called \textit{zero-suppressed binary decision diagrams} (ZDDs), which can compactly represent sets of combinatorial strategies. Our algorithm numerically approximates the equilibria of CCGs, which we can differentiate with respect to parameters of CCGs by automatic differentiation. With the resulting derivatives, we can apply gradient-based methods to Stackelberg models of CCGs. Our method is tailored to induce Nesterov's acceleration and can fully utilize the empirical compactness of ZDDs. These technical advantages enable us to deal with CCGs with a vast number of combinatorial strategies. Experiments on real-world network design instances demonstrate the practicality of our method.
accept
The reviewers and I have positive opinions of the paper, and no major concerns were raised during the reviewing and discussion phases. Furthermore, the reviewers found the rebuttals very useful, as they illustrated and clarified several points. In particular, I encourage the authors to add the example provided in the rebuttal to Reviewer 2skQ to the camera-ready version of their paper (the authors could add the example to the supplementary material if no space is available in the main body of the paper).
train
[ "MjO6pdbCwHT", "948bR7J-H_O", "JOQliCOBnTH", "W5UIdxVkuT4", "BcJrja0CoTM", "xcc3Hub6pZ7", "dK87tMRT2t9", "FrpE1Uce1o-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the problem of designing parameters of combinatorial congestion games, where selfish non-atomic agents choose an optimal route/network to optimize their own objectives. The combinatorial congestion games are a special type of potential games. The equilibrium of a congestion game with given param...
[ 6, 7, 7, -1, -1, -1, -1, 6 ]
[ 3, 4, 3, -1, -1, -1, -1, 2 ]
[ "nips_2021_umuW_b77q9A", "nips_2021_umuW_b77q9A", "nips_2021_umuW_b77q9A", "948bR7J-H_O", "MjO6pdbCwHT", "JOQliCOBnTH", "FrpE1Uce1o-", "nips_2021_umuW_b77q9A" ]
nips_2021_x3RPoH3bCQ-
Inverse Optimal Control Adapted to the Noise Characteristics of the Human Sensorimotor System
Computational level explanations based on optimal feedback control with signal-dependent noise have been able to account for a vast array of phenomena in human sensorimotor behavior. However, commonly a cost function needs to be assumed for a task and the optimality of human behavior is evaluated by comparing observed and predicted trajectories. Here, we introduce inverse optimal control with signal-dependent noise, which allows inferring the cost function from observed behavior. To do so, we formalize the problem as a partially observable Markov decision process and distinguish between the agent’s and the experimenter’s inference problems. Specifically, we derive a probabilistic formulation of the evolution of states and belief states and an approximation to the propagation equation in the linear-quadratic Gaussian problem with signal-dependent noise. We extend the model to the case of partial observability of state variables from the point of view of the experimenter. We show the feasibility of the approach through validation on synthetic data and application to experimental data. Our approach enables recovering the costs and benefits implicit in human sequential sensorimotor behavior, thereby reconciling normative and descriptive approaches in a computational framework.
accept
The paper introduces a theoretical framework for estimating cost functions in linear, quadratic, Gaussian systems with action- and state-dependent noise. Algorithms for inference are derived for the complete and partial observation cases and distinguish between the agent's and the experimenter's inference tasks. The approach is validated on synthetic and experimental data and applied to human sequential sensorimotor behavior. Whilst the proposed approach is a simple application of dynamic Bayesian inference and maximum likelihood estimation, the proposed inference formulation was considered original. The theory was derived under linear, quadratic, Gaussian assumptions, but the approach is demonstrated to work well in real-world applications, such as real arm and eye movement data. The clarity of the paper could be improved, especially with respect to some of the literature.
train
[ "xlolco6BX0E", "u0lCHoZndKI", "nPo5SLCvR_f", "-jO7F5IJvf7", "NePAsE2oEP0", "9WGsquv2F_3", "biN9cZUoPs", "uJB-5N6mF16", "96L-TyiMSkU", "H9v_nxOycVj" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " *** I thank the authors for addressing my concerns with respect to the impact of the moment matching approximation and incorporating two baselines. ***\n\nWe are happy this helped.\n\n*** Nonetheless, I believe that using a single experiment is not sufficient for this conference. ***\n\nWe are a bit unclear about...
[ -1, 4, 6, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "u0lCHoZndKI", "nips_2021_x3RPoH3bCQ-", "nips_2021_x3RPoH3bCQ-", "nPo5SLCvR_f", "H9v_nxOycVj", "96L-TyiMSkU", "u0lCHoZndKI", "nips_2021_x3RPoH3bCQ-", "nips_2021_x3RPoH3bCQ-", "nips_2021_x3RPoH3bCQ-" ]
nips_2021_svlanLvYsTd
Deep Neural Networks as Point Estimates for Deep Gaussian Processes
Neural networks and Gaussian processes are complementary in their strengths and weaknesses. Having a better understanding of their relationship comes with the promise to make each method benefit from the strengths of the other. In this work, we establish an equivalence between the forward passes of neural networks and (deep) sparse Gaussian process models. The theory we develop is based on interpreting activation functions as interdomain inducing features through a rigorous analysis of the interplay between activation functions and kernels. This results in models that can either be seen as neural networks with improved uncertainty prediction or deep Gaussian processes with increased prediction accuracy. These claims are supported by experimental results on regression and classification datasets.
accept
The paper shows an equivalence between forward passes of a neural network and deep Gaussian processes. It received a positive review, and low-borderline reviews that were appreciative of the method but concerned about novelty over a workshop paper appearing this year. This paper was discussed extensively with the AC, Senior AC, and an external expert who provided an additional opinion on the paper. Quoting from the communication with the external expert: "I would accept this paper. The points the authors make at the very top are significant by themselves: addressing the issue of variance starvation; the model is proper and not degenerate; training of the model is possible." The AC and Senior AC are in agreement that there is significant additional novelty and development over Sun et al. [45] to warrant publication of this paper. The authors should take into account the detailed comments in the reviews pointing out the relationship between the works, and appropriately include the relationship in the camera-ready version. The paper provides a valuable contribution to unifying DGPs and DNNs, and shows significant novel theoretical and practical results.
train
[ "OGZ_v29Vf9F", "aW3I2I-wO8f", "v9Hkhx74Pcv", "UNv7YecU6PB", "wAlNRZl8LvU", "KfkW_coG84g", "BtZ2Sm4f-cg", "XTF-n0weovL", "4qr7EqcXbOD", "0PFi7s9B1H_", "Z5FCu2iwb6b", "WQaZT3HfXI", "hF-A8MpDcrW" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nI appreciate your clarification. I tend to keep the current score because I feel that the paper needs to be significantly revised regarding the issues I raised, which I believe you also agreed in the response. More precisely, the claim that deep NNs are point estimates of deep GPs now seems to be...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "aW3I2I-wO8f", "4qr7EqcXbOD", "UNv7YecU6PB", "wAlNRZl8LvU", "KfkW_coG84g", "XTF-n0weovL", "Z5FCu2iwb6b", "hF-A8MpDcrW", "WQaZT3HfXI", "nips_2021_svlanLvYsTd", "nips_2021_svlanLvYsTd", "nips_2021_svlanLvYsTd", "nips_2021_svlanLvYsTd" ]
nips_2021_sBBnfOFtPc
Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, Matthieu Wyart
accept
The paper addresses a topic which has drawn a fair amount of attention recently, namely, whether and how locality and translation invariance help convnets ease the curse of dimensionality. The paper does so by studying a teacher-student framework for kernel regression (see [4]). The reviewers found the implications of the analysis (i.e., 'locality is important') very interesting, and generally appreciated the analytic technique employed in the paper (modulo standard caveats of the replica method). That said, the paper provoked considerable discussion among the reviewers regarding the acceptable extent to which conclusions drawn in the kernel ('lazy') regime should be regarded as valid for ReLU networks, and raised several concerns in terms of presentation and clarity. All in all, I think this is a reasonable paper to accept if there is room.
train
[ "gcr9kSr0AYE", "C1BXC99aDSr", "LkL4TVTqGKt", "s2jgimq71mi", "PAOIoiLA2cg", "7pd8cWpsAuc", "0VWX6lgRRVt", "l1ld1ljU0AX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors study the generalization error of kernel regression in a teacher-student scenario where the kernels are obtained as NTK of either one-hidden layer locally-connected NNs or convolutional NNs. They consider the teacher as a Gaussian random field of covariance the same kernel as the student...
[ 7, 5, 7, -1, -1, -1, -1, 8 ]
[ 4, 3, 3, -1, -1, -1, -1, 5 ]
[ "nips_2021_sBBnfOFtPc", "nips_2021_sBBnfOFtPc", "nips_2021_sBBnfOFtPc", "gcr9kSr0AYE", "C1BXC99aDSr", "LkL4TVTqGKt", "l1ld1ljU0AX", "nips_2021_sBBnfOFtPc" ]
nips_2021_wfJCeMS-jH
Causal Identification with Matrix Equations
Causal effect identification is concerned with determining whether a causal effect is computable from a combination of qualitative assumptions about the underlying system (e.g., a causal graph) and distributions collected from this system. Many identification algorithms exclusively rely on graphical criteria made of a non-trivial combination of probability axioms, do-calculus, and refined c-factorization (e.g., Lee & Bareinboim, 2020). In a sequence of increasingly sophisticated results, it has been shown how proxy variables can be used to identify certain effects that would not be otherwise recoverable in challenging scenarios through solving matrix equations (e.g., Kuroki & Pearl, 2014; Miao et al., 2018). In this paper, we develop a new causal identification algorithm which utilizes both graphical criteria and matrix equations. Specifically, we first characterize the relationships between certain graphically-driven formulae and matrix multiplications. With such characterizations, we broaden the spectrum of proxy variable based identification conditions and further propose novel intermediary criteria based on the pseudoinverse of a matrix. Finally, we devise a causal effect identification algorithm, which accepts as input a collection of marginal, conditional, and interventional distributions, integrating enriched matrix-based criteria into a graphical identification approach.
accept
The paper contains a solid and novel theoretical contribution. The authors should make sure that the remarks that convinced a reviewer to raise the score are sufficiently accounted for in the final version. Please take the remarks on readability seriously, to help readers understand the challenging material. The abstract and introduction should be more explicit about the fact that the full joint distribution of the observed variables is not given; hence this is a method for merging information from distributions over subsets.
train
[ "k0QXpT1YmoW", "l9qRx3NPv9D", "yX8w1qk3UE0", "6805-NyIoqy", "vYgHVmsAX-u", "YgibhLvyn_", "p77W0vMLkVU", "eEEISxaYhmG", "kk7m0Tc-hYE" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the clarification. Your responses are very helpful.", " We appreciate you taking the time to read our rebuttal and reconsider our paper, thank you. \n\nRegarding the first questions, the set differences will be evaluated from left to right. \n\n For example, let $\\mathbf{A}=\\\\{1,2,3,...
[ -1, -1, -1, 6, -1, -1, -1, 7, 8 ]
[ -1, -1, -1, 3, -1, -1, -1, 4, 3 ]
[ "l9qRx3NPv9D", "yX8w1qk3UE0", "YgibhLvyn_", "nips_2021_wfJCeMS-jH", "kk7m0Tc-hYE", "6805-NyIoqy", "eEEISxaYhmG", "nips_2021_wfJCeMS-jH", "nips_2021_wfJCeMS-jH" ]
nips_2021_lMrwT4C93eT
Private and Non-private Uniformity Testing for Ranking Data
Róbert Busa-Fekete, Dimitris Fotakis, Emmanouil Zampetakis
accept
This paper considers the problem of testing uniformity of rankings vs a Mallows model, given sample access to these rankings. The paper presents novel algorithms for this problem, both private and non-private. It is a nice contribution to the literature. As one of the reviewers pointed out, it can be helpful to add some remarks in the paper about lower bounds for this problem.
train
[ "i00BjLkCAP", "-f-OoaEA4Ez", "QyLzS1sWJyX", "H07aOzuE8qZ", "33If9nfUFww", "LGH3YNkvxec", "wAr0yP8HYT", "PnRWcAD1L0G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the feedback and the clarifications! ", "The paper studies uniformity testing for certain structured distributions on large domains. The classic uniformity testing problem is to decide, using sample access, whether an unknown distribution over a domain of size $n$ is the uniform distribution or $\\ep...
[ -1, 6, 7, -1, -1, -1, -1, 7 ]
[ -1, 3, 3, -1, -1, -1, -1, 4 ]
[ "33If9nfUFww", "nips_2021_lMrwT4C93eT", "nips_2021_lMrwT4C93eT", "wAr0yP8HYT", "PnRWcAD1L0G", "-f-OoaEA4Ez", "QyLzS1sWJyX", "nips_2021_lMrwT4C93eT" ]
nips_2021_jeATherHHGj
Model-Based Reinforcement Learning via Imagination with Derived Memory
Model-based reinforcement learning aims to improve the sample efficiency of policy learning by modeling the dynamics of the environment. Recently, the latent dynamics model has been further developed to enable fast planning in a compact space. It summarizes the high-dimensional experiences of an agent, which mimics the memory function of humans. Learning policies via imagination with the latent model shows great potential for solving complex tasks. However, considering only memories from true experiences during imagination could limit its advantages. Inspired by the memory prosthesis proposed by neuroscientists, we present a novel model-based reinforcement learning framework called Imagining with Derived Memory (IDM). It enables the agent to learn a policy from enriched, diverse imagination with prediction-reliability weights, thus improving sample efficiency and policy robustness. Experiments on various high-dimensional visual control tasks in the DMControl benchmark demonstrate that IDM outperforms previous state-of-the-art methods in terms of policy robustness and further improves the sample efficiency of the model-based method.
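A minimal sketch of the derived-memory idea, with toy stand-ins (a random linear-tanh latent dynamics and a hand-written reliability score) in place of the learned world model and discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, horizon, sigma = 8, 5, 0.1

A = rng.normal(scale=0.3, size=(latent_dim, latent_dim))  # toy latent dynamics

def step(z):
    """One imagination step in latent space (stand-in for the world model)."""
    return np.tanh(A @ z)

def reliability(z):
    """Stand-in for the learned prediction-reliability weight, in (0, 1]."""
    return float(np.exp(-np.linalg.norm(z)))

z_real = rng.normal(size=latent_dim)                      # latent of a true experience
z_derived = z_real + sigma * rng.normal(size=latent_dim)  # a "derived memory"

for name, z0 in [("real", z_real), ("derived", z_derived)]:
    traj, z = [z0], z0
    for _ in range(horizon):
        z = step(z)
        traj.append(z)
    # Derived rollouts contribute to the policy loss with a reliability weight.
    w = 1.0 if name == "real" else float(np.mean([reliability(s) for s in traj]))
    print(f"{name}: rollout length {len(traj)}, weight {w:.3f}")
```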
accept
The submission introduces Imagining with Derived Memory (IDM), a novel extension of the Dreamer agent, a model-based RL methodology using rollouts in latent space to improve policy training. The proposed method aims to improve both training efficiency and robustness, and at its core involves regularising the dynamics model by explicitly constraining the imagined trajectories to be smoother with respect to perturbations of the latents, i.e. perturbed latents should map to similar reconstructions. The smooth model is used to generate a richer set of trajectories, compared to Dreamer, by perturbing initial states collected interacting with the environment, and reweighting them in the final training loss using a learned discriminator function, ensuring plausibility of the sampled trajectories. I expect the new elements presented in this paper to be of broad interest to the community; in particular, the way training data is augmented is widely applicable, and could motivate more groups to refine the presented technique, addressing different sources of uncertainty in the trajectory data. One drawback of the paper is that the experimental section only presents limited (although quite promising!) results on continuous control tasks; while more experiments and ablations would make the paper more immediately impactful, reviewers agreed that the current experimental setup is sufficient to support the claims made in the submission. Finally, the reviewers made numerous suggestions on how to improve the presentation of both the methods and results, and the authors have already incorporated this feedback or committed to including it in the next iteration of the paper.
train
[ "hs-7rHt1ln", "uLwyUbBcXdT", "dum_KxAPzP4", "U_5m1lCQZpS", "XCMsWkaegkx", "831zbbO-64z", "D8ZWpOJ99wQ", "uEgzynDQxsu", "b23yc89WuhC", "ZDe3xOjT6mC", "kvBdO2w7c69", "KpjeiCHgpM1", "RnrB3iz5eYm", "_gxwMfEjyk1", "KjlCGMrVghC", "ZvE2qpcE4D", "8MpOaF4TW-i" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your responsible reply and understanding. We will continue to polish our paper by your suggestion. Thanks a lot!\n\nThanks for your hard work. The authors.", " Thank you for confirming this. I've updated my score to reflect the improved clarity of the paper after the revisions. ", "...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "uLwyUbBcXdT", "U_5m1lCQZpS", "nips_2021_jeATherHHGj", "831zbbO-64z", "dum_KxAPzP4", "XCMsWkaegkx", "RnrB3iz5eYm", "RnrB3iz5eYm", "dum_KxAPzP4", "KpjeiCHgpM1", "nips_2021_jeATherHHGj", "KjlCGMrVghC", "8MpOaF4TW-i", "ZvE2qpcE4D", "kvBdO2w7c69", "nips_2021_jeATherHHGj", "nips_2021_jeAT...
nips_2021_YQeWoRnwTnE
Compositional Transformers for Scene Generation
We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling. The network incorporates strong and explicit structural priors, to reflect the compositional nature of visual scenes, and synthesizes images through a sequential process. It operates in two stages: a fast and lightweight planning phase, where we draft a high-level scene layout, followed by an attention-based execution phase, where the layout is being refined, evolving into a rich and detailed picture. Our model moves away from conventional black-box GAN architectures that feature a flat and monolithic latent space towards a transparent design that encourages efficiency, controllability and interpretability. We demonstrate GANformer2's strengths and qualities through a careful evaluation over a range of datasets, from multi-object CLEVR scenes to the challenging COCO images, showing it successfully achieves state-of-the-art performance in terms of visual quality, diversity and consistency. Further experiments demonstrate the model's disentanglement and provide a deeper insight into its generative process, as it proceeds step-by-step from a rough initial sketch, to a detailed layout that accounts for objects' depths and dependencies, and up to the final high-resolution depiction of vibrant and intricate real-world scenes. See https://github.com/dorarad/gansformer for model implementation.
accept
- The proposed method tackles an important problem, takes a reasonable and novel approach (the loss function and recurrent generation), and the experiments are well planned, executed, and show SOTA results. - The rebuttal addressed the major concerns from the reviewers well, and thus some reviewers raised their scores. - The clarity is good enough, but some figures and the details of the architecture need to be clarified further.
train
[ "qjOyfhTpj-x", "QXbdRQkxNCo", "NYlcjzc5R_d", "jDXPKVcIy82", "mtP_7zkcT63", "2jCPXKmcYYr", "XMK3po-Qrm", "_xYuP2ElPEc", "9VtP151XB6o", "Tf1LL_Czci7", "X3LGmW4Ftc", "pOcPYz1lLSN", "mhBZ2dGcu2G", "BtH-C4ksKt", "qi24y1My_mK", "efNJ1ukgrdO", "UjP2x0ml7nV", "usv3QXSByu-" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you so much for reviewing our response and for raising the score! we truly appreciate your time and effort, and your feedback is very helpful to us.\n\n**Order Invariance**: \nIn the CNN experiments we performed, we make the ablation only on the transformation from the intermediate latents (w1,...,wk) to t...
[ -1, -1, -1, -1, 6, -1, -1, 8, 7, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, 5, -1, -1, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "X3LGmW4Ftc", "2jCPXKmcYYr", "XMK3po-Qrm", "BtH-C4ksKt", "nips_2021_YQeWoRnwTnE", "Tf1LL_Czci7", "jDXPKVcIy82", "nips_2021_YQeWoRnwTnE", "nips_2021_YQeWoRnwTnE", "mtP_7zkcT63", "pOcPYz1lLSN", "9VtP151XB6o", "nips_2021_YQeWoRnwTnE", "mtP_7zkcT63", "mhBZ2dGcu2G", "usv3QXSByu-", "_xYuP2...
nips_2021_WnJXcebN7hX
An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
Yuanhao Wang, Ruosong Wang, Sham Kakade
accept
The reviewers all liked the paper very much. It will be a very nice addition to the conference program, presenting interesting and important additions to the growing body of literature on the theoretical analysis of RL algorithms with function approximation under the assumption that the action-value function of the optimal policy ($Q^*$) is linearly realizable.
train
[ "u4hDH_tv_k", "3jpW60Ig6c", "mu4vlNR8QQc", "MwB5cUiZ1lf", "1EcynD76gzG", "_uCiK6a-nw6", "hemBtggKzib", "tba6cZN55vU", "ljs7aS9iE65" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for addressing my questions and concerns. With this in mind, I have changed my rating from 7 to 8.", "This paper considers online RL with linear realizability of Q* and a suboptimality gap assumption. It shows an exponential sample complexity lower bound holds for this setting,...
[ -1, 8, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "mu4vlNR8QQc", "nips_2021_WnJXcebN7hX", "3jpW60Ig6c", "ljs7aS9iE65", "hemBtggKzib", "tba6cZN55vU", "nips_2021_WnJXcebN7hX", "nips_2021_WnJXcebN7hX", "nips_2021_WnJXcebN7hX" ]
nips_2021_m7XHyicfGTq
Combating Noise: Semi-supervised Learning by Region Uncertainty Quantification
Semi-supervised learning aims to leverage a large amount of unlabeled data for performance boosting. Existing works primarily focus on image classification. In this paper, we delve into semi-supervised learning for object detection, where labeled data are more labor-intensive to collect. Current methods are easily distracted by noisy regions generated by pseudo labels. To combat the noisy labeling, we propose noise-resistant semi-supervised learning by quantifying the region uncertainty. We first investigate the adverse effects brought by different forms of noise associated with pseudo labels. Then we propose to quantify the uncertainty of regions by identifying the noise-resistant properties of regions over different strengths. By incorporating the region uncertainty quantification and promoting multi-peak probability distributions in the output, we introduce uncertainty into training and further achieve noise-resistant learning. Experiments on both PASCAL VOC and MS COCO demonstrate the extraordinary performance of our method.
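One way to instantiate the region-weighting idea is sketched below; combining classification confidence with an IoU-based localization-consistency score is an illustrative assumption, not necessarily the paper's exact score:

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def region_weight(cls_score, box, pseudo_boxes):
    """Soft training weight for a region: classification confidence times a
    localization-consistency score, instead of a hard confidence threshold."""
    loc = max((iou(box, p) for p in pseudo_boxes), default=0.0)
    return cls_score * loc

pseudo = [np.array([10.0, 10.0, 50.0, 50.0])]
print(region_weight(0.9, np.array([12.0, 11.0, 49.0, 52.0]), pseudo))  # well localized
print(region_weight(0.9, np.array([40.0, 40.0, 90.0, 90.0]), pseudo))  # poorly localized
```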
accept
As mentioned by multiple reviewers, the ideas in this work, and especially the analysis of errors/failure modes, were found to be interesting and novel. However, the reviewers pointed out significant and legitimate empirical and experimental deficiencies, which was surprising. These include the lack of standard experimental settings (e.g., varying the amount of labels from 1-10%) and comparisons to recent state-of-the-art methods published before the NeurIPS deadline (Unbiased Teacher, Instant-Teaching, and Humble Teacher). Such adherence to standard practices and fair comparisons to related work are necessary to move the field forward and should have been included initially. During the rebuttal, the authors provided these results and comparisons and addressed the empirical concerns. The authors should add these results, in addition to descriptions of how the proposed method differs from ideas presented in those papers. Unfortunately, this did not leave much room for discussion of other aspects of the work. For example, there is a question of novelty (mentioned by 123Z), and furthermore there are several uncertainty-based pseudo-labeling methods (in the context of classification or even segmentation, e.g. [A, B]) that have come out that are not even discussed. The authors should clearly address this and discuss how their methods differ (besides utilizing a similar idea in a new task, object detection). Finally, reviewers pointed out several specific writing improvements that should be addressed. These should be included in the final version. Overall, the paper did provide interesting analysis and addressed the experimental concerns, and so can be accepted. However, all of the above must be addressed. [A] In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning, ICLR 2021. [B] Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation, IJCV 2021.
val
[ "nqJtWurM3fc", "wf3ABreFik8", "n0qZzwONFd", "VddAhFaZ_3X", "H5CAoY2hINr", "R1w0eoeIxJD", "FwJ09N7txGb", "bS9GAQcRVfM", "JBIJ8XbFmvU", "Yh8gE0znIGp" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "A method for semi-supervised 2D object detection is proposed. The main contribution is an uncertainty estimate of pseudo-labels that comes from a combination of classification score (s) \nand a localization score. The latter is estimated by the IoU score with the set of pseudo-labels and is normalized to values be...
[ 7, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, 5, -1, -1, -1, -1, -1, 3, 5 ]
[ "nips_2021_m7XHyicfGTq", "VddAhFaZ_3X", "nips_2021_m7XHyicfGTq", "R1w0eoeIxJD", "nqJtWurM3fc", "n0qZzwONFd", "Yh8gE0znIGp", "JBIJ8XbFmvU", "nips_2021_m7XHyicfGTq", "nips_2021_m7XHyicfGTq" ]
nips_2021_Ur2B8gSfZm3
Reducing the Covariate Shift by Mirror Samples in Cross Domain Alignment
Eliminating the covariate shift across domains is one of the common methods to deal with the issue of domain shift in visual unsupervised domain adaptation. However, current alignment methods, especially prototype-based or sample-level methods, neglect the structural properties of the underlying distribution and even break the condition of covariate shift. To relieve these limitations and conflicts, we introduce a novel concept named the (virtual) mirror, which represents the equivalent sample in another domain. The equivalent sample pairs, named mirror pairs, reflect the natural correspondence of the empirical distributions. Then a mirror loss, which aligns the mirror pairs across domains, is constructed to enhance the alignment of the domains. The proposed method does not distort the internal structure of the underlying distribution. We also provide a theoretical proof that the mirror samples and mirror loss have better asymptotic properties in reducing the domain shift. By applying the virtual mirror and mirror loss to a generic unsupervised domain adaptation model, we achieve consistently superior performance on several mainstream benchmarks.
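A toy sketch of mirror-pair alignment; nearest-neighbour matching stands in for the paper's cluster-based, optimal-transport-flavoured mirror construction:

```python
import numpy as np

def mirror_loss(src, tgt):
    """For each source feature, take its nearest target feature as the
    (virtual) mirror and penalize the distance between mirror pairs.
    Nearest-neighbour matching is a simplifying assumption here."""
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1)  # (n_s, n_t)
    mirror_idx = d.argmin(axis=1)
    return float(d[np.arange(len(src)), mirror_idx].mean())

rng = np.random.default_rng(0)
src = rng.normal(size=(16, 8))
tgt = src + 0.05 * rng.normal(size=(16, 8))   # nearly-aligned domains
print(mirror_loss(src, tgt))                  # small when domains correspond
```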
accept
The paper proposes a method for unsupervised domain adaptation (UDA) by constructing "virtual mirror" samples of the source domain in the target domain (and vice versa) and then "aligning" the corresponding mirror pairs across domains for matching feature distributions. The method is novel (reviewer zzRU raised a concern about limited novelty by pointing out the connection with CyCADA, but I think the proposed idea is sufficiently different) and shows strong empirical performance. The method has some heuristic components, as pointed out by reviewers (e.g., reliance on k-means to get clusters, the method to construct mirrors); however, the authors have pointed out connections to optimal transport theory in their response. I suggest the authors make this more prominent in the revised version of the paper. Overall, the paper is above the acceptance threshold in my view.
val
[ "1HUbZxx2dJd", "tnkw1vIH1YX", "zyNx3kCtNpZ", "V-iEg8L8x_", "LY90EIxRcS", "2z3b9P8rSZ4", "TURRY9tUYL", "-iCl6NcuoSK", "5zTBc4e4r8H", "evF0wU3ZY5A", "F5Yp7PCQEo", "s-geOx0kx41" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I read the authors' discussion with Reviewer ZzRU. Looking forward to reading the revised version.", "In this paper, the author proposed a sample-level alignment method for solving the covariate shift problem in unsupervised domain adaptation. They noticed that the existing methods do not work well since they n...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "LY90EIxRcS", "nips_2021_Ur2B8gSfZm3", "V-iEg8L8x_", "2z3b9P8rSZ4", "TURRY9tUYL", "-iCl6NcuoSK", "5zTBc4e4r8H", "tnkw1vIH1YX", "s-geOx0kx41", "F5Yp7PCQEo", "nips_2021_Ur2B8gSfZm3", "nips_2021_Ur2B8gSfZm3" ]
nips_2021_pTmYjQadg9
Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning
Robin Winter, Frank Noe, Djork-Arné Clevert
accept
This paper proposes a new variational autoencoder for unsupervised graph representation learning. The main contribution is the permutation invariance, which is missing in most existing works on graph generative modeling. After the rebuttal, the reviewers found that most of the concerns had been properly addressed, and during the committee discussion we all agreed that the paper can be accepted. Although there are still challenges in this direction, this work makes a nontrivial contribution. The authors are encouraged to refine the paper according to the reviewers' comments in the final version.
train
[ "BdCHqS9uWB-", "Nh20QKQ8XsX", "q5YpIddhAIh", "hxWDVpyUggd", "GjzYY0YskU7", "F3Ur-LLaP6p", "kyatCuKkxp", "DPV2y9dkvWp", "mvAZO4MhNEc", "G8zByLcuGzh", "mG7-tsFH3LX", "S2A8uabVt-T", "mVI_yYpHn-k", "h-EJCbbj_tD", "q4hUsKv5Py", "yYGFHeCNfY2", "vB8k7By211W", "aqx7Lcxb65w", "Ml91B3vTlxP...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "a...
[ " We thank the reviewer for the clarification of the mentioned references. We will add those with a discussion to the manuscript. Thanks for the suggestion.\n", " The first reference I referred to is “Provably Powerful Graph Networks“, NeurIPS 2019. The model there essentially is kind of the transformer on edges ...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "Nh20QKQ8XsX", "DPV2y9dkvWp", "hxWDVpyUggd", "F3Ur-LLaP6p", "nips_2021_pTmYjQadg9", "kyatCuKkxp", "mvAZO4MhNEc", "mG7-tsFH3LX", "G8zByLcuGzh", "y6UpauCWb21", "S2A8uabVt-T", "q4hUsKv5Py", "h-EJCbbj_tD", "yYGFHeCNfY2", "_UXR4o3eYJ", "_UXR4o3eYJ", "nips_2021_pTmYjQadg9", "y6UpauCWb21"...
nips_2021_RmuXDtjDhG
Causal Abstractions of Neural Networks
Structural analysis methods (e.g., probing and feature attribution) are increasingly important tools for neural network analysis. We propose a new structural analysis method grounded in a formal theory of causal abstraction that provides rich characterizations of model-internal representations and their roles in input/output behavior. In this method, neural representations are aligned with variables in interpretable causal models, and then interchange interventions are used to experimentally verify that the neural representations have the causal properties of their aligned variables. We apply this method in a case study to analyze neural models trained on the Multiply Quantified Natural Language Inference (MQNLI) corpus, a highly complex NLI dataset that was constructed with a tree-structured natural logic causal model. We discover that a BERT-based model with state-of-the-art performance successfully realizes parts of the natural logic model's causal structure, whereas a simpler baseline model fails to show any such structure, demonstrating that neural representations encode the compositional structure of MQNLI examples.
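A minimal sketch of an interchange intervention on a toy two-stage network; the network, the chosen hidden units, and the inputs are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

def hidden(x):
    return np.tanh(W1 @ x)   # intermediate representation to intervene on

def output(h):
    return W2 @ h

def interchange(base_x, source_x, idx):
    """Run base_x, but overwrite hidden units `idx` with the values they take
    on source_x. If the aligned causal variable predicts the same counterfactual
    output, those units behave like that variable."""
    h = hidden(base_x)
    h[idx] = hidden(source_x)[idx]
    return output(h)

base, source = rng.normal(size=3), rng.normal(size=3)
print(output(hidden(base)))               # unintervened behaviour
print(interchange(base, source, [0, 1]))  # counterfactual behaviour
```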
accept
I don't think this paper requires a long metareview. They introduce a method for aligning the predictions of a network with an induced causal model of the task's "dynamics". They show it works on some NLI tasks which is respectable (although I would have liked to see some other domain too... maybe something to do in future work!). The reviewers were excited about the work, and several were ready to champion its acceptance after discussion with the authors. I recommend acceptance, and hope that ACL's loss here will be NeurIPS' gain.
train
[ "kf0UGZr4Phw", "vjw0oy48TTH", "JbH7pm44JtY", "7xMsFGpjwj3", "mXTaNyRSxZm", "mEnX-u3ndsN", "iSYlc4HxrcQ", "N6lRfxxsOwB", "ssOeMIHdPd", "LC-ZXHo3S39", "votW2T1g-6-", "OMr_9LJHU-k", "N94_VIRh2I9", "216pmtWiOqR", "_4YsZ7XwuO2", "nVB4spThAN3", "Mba1NBdxVOW", "0_ZGBxyA1Lu" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you so much! We benefitted greatly from all the feedback, and we are grateful to have such invested and attentive reviewers. We are passionate about this method and are currently hard at work applying it to new domains such as navigation and mathematical reasoning!", "see main This paper aims to formaliz...
[ -1, 7, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "JbH7pm44JtY", "nips_2021_RmuXDtjDhG", "LC-ZXHo3S39", "nips_2021_RmuXDtjDhG", "_4YsZ7XwuO2", "ssOeMIHdPd", "ssOeMIHdPd", "nips_2021_RmuXDtjDhG", "Mba1NBdxVOW", "nVB4spThAN3", "N94_VIRh2I9", "N6lRfxxsOwB", "OMr_9LJHU-k", "0_ZGBxyA1Lu", "7xMsFGpjwj3", "vjw0oy48TTH", "nips_2021_RmuXDtjD...
nips_2021_5BVsfC0goqI
Conic Blackwell Algorithm: Parameter-Free Convex-Concave Saddle-Point Solving
Julien Grand-Clément, Christian Kroer
accept
The paper introduces parameter- and scale-free algorithms for convex-concave saddle-point problems, based on the Blackwell approachability algorithm. The resulting algorithmic schemes exhibit standard 1/sqrt(T) regret but, in comparison to existing algorithms, do not require knowledge of the problem parameters. The techniques for proving convergence rely on results from conic optimization. Numerical results on real and synthetic data demonstrate the efficacy of the introduced algorithms. Overall, the reviewers appreciated the novelty and the simplicity of the framework. However, there were several issues raised by the reviewers relating to the relationship with related work and to overall clarity. The authors should take these comments into account when preparing the revised version of the paper.
test
[ "T5HyJz5vbVE", "OEvH3jBeNhk", "Paetsc47lw5", "t7VdqM7NYN", "cjZuXsGDF5J", "tQt-ZTWwR-", "XPjzri0m3a0", "XRv_KWfF-o2", "s2JDyBGo3HD", "V_eOvfhzs_1", "4zSvEuwm6Nc", "WG2s3sSEmRG", "PPysnPh_pdb", "hFkt9N-3_t", "eMx8V0I1Ifj", "-S_FMusUHe_", "AFai8zk6Jc6", "CyaZGsElKdM", "PcVas-sykGB"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " I would like to thank the authors for the detailed explanation. As the response does not give a convincing answer to such discrepancy, I would remain my score. Overall, I believe blackwell approachabity has great potential for new algorithm design and worst/avetage case analysis would be valuable.", " We thank...
[ -1, -1, -1, 7, -1, -1, 4, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 3, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "WG2s3sSEmRG", "XPjzri0m3a0", "cjZuXsGDF5J", "nips_2021_5BVsfC0goqI", "tQt-ZTWwR-", "s2JDyBGo3HD", "nips_2021_5BVsfC0goqI", "4zSvEuwm6Nc", "PcVas-sykGB", "nips_2021_5BVsfC0goqI", "PPysnPh_pdb", "hFkt9N-3_t", "eMx8V0I1Ifj", "wWtvVUnDm-", "-S_FMusUHe_", "V_eOvfhzs_1", "XPjzri0m3a0", ...
nips_2021_ZBhZDNaiww
3DP3: 3D Scene Perception via Probabilistic Programming
We present 3DP3, a framework for inverse graphics that uses inference in a structured generative model of objects, scenes, and images. 3DP3 uses (i) voxel models to represent the 3D shape of objects, (ii) hierarchical scene graphs to decompose scenes into objects and the contacts between them, and (iii) depth image likelihoods based on real-time graphics. Given an observed RGB-D image, 3DP3's inference algorithm infers the underlying latent 3D scene, including the object poses and a parsimonious joint parametrization of these poses, using fast bottom-up pose proposals, novel involutive MCMC updates of the scene graph structure, and, optionally, neural object detectors and pose estimators. We show that 3DP3 enables scene understanding that is aware of 3D shape, occlusion, and contact structure. Our results demonstrate that 3DP3 is more accurate at 6DoF object pose estimation from real images than deep learning baselines and shows better generalization to challenging scenes with novel viewpoints, contact, and partial observability.
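A toy sketch of likelihood-driven pose inference in this spirit: random-walk Metropolis over a single depth parameter under a Gaussian depth-image likelihood. The one-dimensional "renderer" is an assumption for illustration; 3DP3's involutive moves over scene graphs are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(pose):
    """Toy 1-D 'renderer': the scene is a fronto-parallel plane at depth `pose`."""
    return np.full(16, pose)

def log_lik(pose, observed, sigma=0.05):
    """Depth-image likelihood: Gaussian pixel noise around the rendered depth."""
    return -np.sum((render(pose) - observed) ** 2) / (2 * sigma ** 2)

observed = render(1.5) + 0.05 * rng.normal(size=16)

pose = 0.0                                   # initial guess
for _ in range(2000):                        # random-walk Metropolis over the pose
    proposal = pose + 0.1 * rng.normal()
    if np.log(rng.random()) < log_lik(proposal, observed) - log_lik(pose, observed):
        pose = proposal
print(pose)                                  # close to the true depth 1.5
```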
accept
This paper presents a method for multi-object 6DoF pose estimation with a "probabilistic programming" model. Before inference, the model learns priors on 3D occupancy/shape for objects. Then, given the number of objects in the scene and the classes for objects in a test scene (observed in a depth map), inference proceeds by MCMC, sampling scene graphs, poses, and occupancies to minimize a depth reconstruction error. The reviewers raised concerns regarding the fairness of comparisons to baselines, the known versus unknown number of objects at inference time, violation of priors, and the (lack of) comparisons to other object-centric generative models. The authors' rebuttal addressed those concerns and generated extensive discussion. The paper is suggested for publication.
train
[ "5bTLYCH_Xko", "jRNpuhOQMMP", "32WdRfnk3y", "U-JeROuOJcm", "iaLbPYp2fGA", "_wphLHkuyut", "-N_d_9F7iCW", "3flXfZFWJPv", "n1pAF3jkrpU", "P_06Z-LcHQ", "ISYteR2A0-", "MbnhUUd-YDv", "0gkR7ET6nzZ", "HoMQYfshFE3", "X3K3J8Dwp4K" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We are checking in again to see if there is any feedback/comments in response to the additional experiments and clarifications we have provided. We appreciate the feedback you have provided thus far and want to ensure that we have properly addressed all your concerns!", "This paper proposes a probabilistic mode...
[ -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "0gkR7ET6nzZ", "nips_2021_ZBhZDNaiww", "3flXfZFWJPv", "nips_2021_ZBhZDNaiww", "-N_d_9F7iCW", "n1pAF3jkrpU", "n1pAF3jkrpU", "MbnhUUd-YDv", "ISYteR2A0-", "nips_2021_ZBhZDNaiww", "U-JeROuOJcm", "jRNpuhOQMMP", "X3K3J8Dwp4K", "nips_2021_ZBhZDNaiww", "nips_2021_ZBhZDNaiww" ]
nips_2021_-_D-ss8su3
Novel Upper Bounds for the Constrained Most Probable Explanation Task
Tahrima Rahman, Sara Rouhani, Vibhav Gogate
accept
The authors present two approaches for bounding "constrained" most probable explanation (CMPE) tasks in graphical models, in which we seek an MPE solution to one model, constrained to a subset of configurations by another model. Both methods are simple (in a good way), relaxing the CMPE to an unconstrained MPE problem, or to a multi-choice knapsack problem, then using Lagrangian optimization to tighten the resulting bounds. Reviewers were unanimously positive, highlighting the novelty of the work as a strength. Several reviewers did bring up points that should be addressed in a final version, however, including comparison to LP-based techniques, and some issues with the presentation (see individual reviews for details).
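A tiny numerical sketch of the Lagrangian-tightening idea on a generic finite constrained maximization; the data is illustrative, not the CMPE encoding itself:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
f, g, c = rng.random(100), rng.random(100), 0.3   # maximize f(x) s.t. g(x) <= c

exact = f[g <= c].max()

def upper_bound(lam):
    """For lam >= 0, max_x [f(x) + lam * (c - g(x))] upper-bounds the
    constrained optimum: the added term is nonnegative on feasible x."""
    return (f + lam * (c - g)).max()

res = minimize_scalar(upper_bound, bounds=(0.0, 50.0), method="bounded")
print(exact, upper_bound(0.0), res.fun)   # exact <= tightened bound <= loose bound
```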
test
[ "OeD6H7OG6X", "MXHxgSFoIl", "CWx-ENs0jx", "n63jupPO0AB", "vRUUI87WB3-", "qSNsOQYS_E0", "e3TkU9g8b9", "WHCR35Nogae" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for a thoughtful, detailed review. Here are answers to your questions:\n\n***\nQ1: One thing the is perhaps missing is a discussion on the usefulness of these upper bounds. Although it is mentioned in the introduction that in principle they can be used to guide a heuristic search algorithm, it would be ...
[ -1, -1, -1, -1, 7, 7, 7, 8 ]
[ -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "WHCR35Nogae", "e3TkU9g8b9", "qSNsOQYS_E0", "vRUUI87WB3-", "nips_2021_-_D-ss8su3", "nips_2021_-_D-ss8su3", "nips_2021_-_D-ss8su3", "nips_2021_-_D-ss8su3" ]
nips_2021_MLT9wFYMlJ9
Why Spectral Normalization Stabilizes GANs: Analysis and Improvements
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs). However, current understanding of SN's efficacy is limited. In this work, we show that SN controls two important failure modes of GAN training: exploding and vanishing gradients. Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization. This connection helps to explain why the most popular implementation of SN for GANs requires no hyper-parameter tuning, whereas stricter implementations of SN have poor empirical performance out-of-the-box. Unlike LeCun initialization which only controls gradient vanishing at the beginning of training, SN preserves this property throughout training. Building on this theoretical understanding, we propose a new spectral normalization technique: Bidirectional Scaled Spectral Normalization (BSSN), which incorporates insights from later improvements to LeCun initialization: Xavier initialization and Kaiming initialization. Theoretically, we show that BSSN gives better gradient control than SN. Empirically, we demonstrate that it outperforms SN in sample quality and training stability on several benchmark datasets.
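For reference, spectral normalization itself is simple to sketch with power iteration; the rescaling shown afterwards is a placeholder fan-in/fan-out correction in the spirit of the scaled variant, not necessarily the paper's exact constant:

```python
import numpy as np

def spectral_norm(W, n_iter=50, seed=0):
    """Largest singular value of W via power iteration, as in SN."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v = v / (np.linalg.norm(v) + 1e-12)
        u = W @ v
        u = u / (np.linalg.norm(u) + 1e-12)
    return float(u @ W @ v)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))

W_sn = W / spectral_norm(W)                      # strict SN: sigma(W_sn) ~= 1
scale = np.sqrt(W.shape[1] / W.shape[0])         # placeholder fan-in/fan-out scale
W_scaled = scale * W_sn                          # a scaled-SN variant (illustrative)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # ~1.0
```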
accept
The submission discusses the benefits of spectral normalization for GAN training. The initial reviewer assessment was mostly positive, while one reviewer was a bit more concerned. After a discussion with the authors, the reviewer appreciated the contributions while remaining unconvinced that controlling the variance via LeCun initialization is the key to SN's success. The AC thinks this paper provides interesting insights that could spur future research.
val
[ "2YBzFsHx4jg", "onqDPDoW14o", "HhJYoiftiiT", "LPKz1CD6wdt", "bYeEUAJLMEL", "k_FFd3Kiv8s", "GM3FYHhgLqe", "q5E6tcMf_Y", "aDYaDbp-VzR", "Pt2nMsJt8v0", "mMkjS3ScRxc", "VU4otRKb81W" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper analyzes the benefits of the spectral norm on GAN: preventing gradient exploding and gradient vanishing. They prove that SN can control the variance of weights, which can prevent gradient vanishing, following the logic of LeCun initialization. They further propose a bidirectional SN, which is an improvem...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nips_2021_MLT9wFYMlJ9", "HhJYoiftiiT", "LPKz1CD6wdt", "k_FFd3Kiv8s", "nips_2021_MLT9wFYMlJ9", "2YBzFsHx4jg", "VU4otRKb81W", "mMkjS3ScRxc", "Pt2nMsJt8v0", "nips_2021_MLT9wFYMlJ9", "nips_2021_MLT9wFYMlJ9", "nips_2021_MLT9wFYMlJ9" ]
nips_2021_AcoMwAU5c0s
$(\textrm{Implicit})^2$: Implicit Layers for Implicit Representations
Zhichun Huang, Shaojie Bai, J. Zico Kolter
accept
After the discussion and rebuttal process, all reviewers now recommend acceptance, albeit with several borderline ratings. Several reviewers flagged limitations of the experimental evaluation, which the authors have responded to by running additional experiments. The discussion also cleared up some confusion about memory savings, which seems to have stemmed from a misinterpretation of Figure 5. Overall, reviewers seem to agree that the combination of implicit computation and implicit representations is interesting and creates some nice synergies, and also that this idea deserves the attention of the research community. I am therefore inclined to recommend acceptance, though this is conditional on the authors adding the new results to the paper and addressing the various other remarks and recommendations made by the reviewers (e.g. with respect to Figure 5).
test
[ "88KldYQuaaA", "OwJImmaHpFH", "qU9ZNWzrml2", "HPkPl1Fd_M", "_MvrmMHZxV", "lJkAC7deakv", "G-BZLQx4seb", "JubKdtT1ZtO", "9iG6GU5vlzZ", "MlvMrpGXWVO", "1t6JpnpCvTq", "yQt5y1Xf9pM", "IhlFtuhpzY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for addressing my concerns. I do believe Figure~5 needs to be substantially revised as suggested by the other reviewers too and that your comment about the \"representational power of dynamic system\" be included in some form in the paper, with appropriate references.\n\n", " Thank you for clarifyi...
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, -1, 4, 5, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "9iG6GU5vlzZ", "MlvMrpGXWVO", "nips_2021_AcoMwAU5c0s", "nips_2021_AcoMwAU5c0s", "G-BZLQx4seb", "JubKdtT1ZtO", "HPkPl1Fd_M", "IhlFtuhpzY", "yQt5y1Xf9pM", "qU9ZNWzrml2", "nips_2021_AcoMwAU5c0s", "nips_2021_AcoMwAU5c0s", "nips_2021_AcoMwAU5c0s" ]
nips_2021_LKntyz8tp1S
Best Arm Identification in Contaminated Stochastic Bandits
Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, Payel Das
accept
Overall, the reviewers are mostly positive about this paper (especially after the author response). The reviewers agree that the paper studies a new and relevant problem, and that the concentration bound result is interesting. According to the reviewers' comments and discussions, it would be better if the authors improved the presentation to highlight the challenges and the improvements over prior work.
val
[ "s8KqDeey_hM", "M-aypwNCPem", "c6D9ZY0aDE", "ZBCP9XIggca", "uQZSWD6KrsT", "rDzYnOXLuH5", "N6YRtDtL7vG", "XZLFQ2G_z2E", "6MrjjdbGH8N", "5gIqHgADttT", "UJGMvmaXsf-", "18fzQTX7n-A", "GeJQQufwE5", "ZPKTZeolnDM", "_PyQs92lepF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper studies the best-arm-identification problem in the Huber contamination model where each arm has some probability epsilon of being sampled from some adversarial distribution. \n\nThe setting in [4] was modified by replacing the median estimator with a trimmed-mean estimator. Since adversarial corruptions...
[ 5, -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_LKntyz8tp1S", "ZBCP9XIggca", "nips_2021_LKntyz8tp1S", "ZPKTZeolnDM", "nips_2021_LKntyz8tp1S", "N6YRtDtL7vG", "5gIqHgADttT", "6MrjjdbGH8N", "_PyQs92lepF", "UJGMvmaXsf-", "GeJQQufwE5", "_PyQs92lepF", "uQZSWD6KrsT", "c6D9ZY0aDE", "s8KqDeey_hM" ]
nips_2021_DTVfEJIL3DB
MADE: Exploration via Maximizing Deviation from Explored Regions
In online reinforcement learning (RL), efficient exploration remains particularly challenging in high-dimensional environments with sparse rewards. In low-dimensional environments, where tabular parameterization is possible, count-based upper confidence bound (UCB) exploration methods achieve minimax near-optimal rates. However, it remains unclear how to efficiently implement UCB in realistic RL tasks that involve non-linear function approximation. To address this, we propose a new exploration approach via maximizing the deviation of the occupancy of the next policy from the explored regions. We add this term as an adaptive regularizer to the standard RL objective to balance exploration vs. exploitation. We pair the new objective with a provably convergent algorithm, giving rise to a new intrinsic reward that adjusts existing bonuses. The proposed intrinsic reward is easy to implement and combine with other existing RL algorithms to conduct exploration. As a proof of concept, we evaluate the new intrinsic reward on tabular examples across a variety of model-based and model-free algorithms, showing improvements over count-only exploration strategies. When tested on navigation and locomotion tasks from MiniGrid and DeepMind Control Suite benchmarks, our approach significantly improves sample efficiency over state-of-the-art methods.
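In the tabular special case, the familiar count-based form of such a bonus can be sketched in a few lines; MADE's actual reward is an adjusted version derived from the deviation-maximization objective, so the 1/sqrt(N) form below is only the classical baseline it modifies:

```python
import numpy as np
from collections import defaultdict

counts = defaultdict(int)   # visitation counts N(s, a)
beta = 0.1                  # bonus coefficient

def intrinsic_reward(s, a):
    """Classical 1/sqrt(N) count bonus; MADE rescales bonuses of this kind."""
    counts[(s, a)] += 1
    return 1.0 / np.sqrt(counts[(s, a)])

for t, (s, a) in enumerate([(0, 1), (0, 1), (2, 0), (0, 1)]):
    r_env = 0.0                                   # sparse task reward
    shaped = r_env + beta * intrinsic_reward(s, a)
    print(f"step {t}: shaped reward {shaped:.3f}")
```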
accept
The initial reviews were overall positive (and raised concerns that were well addressed in the rebuttal), except for one review that raised concerns about the proof of the provided theoretical result. This led to an engaged discussion between the authors and the reviewers that clarified many aspects of the proof and of the approach. Some questions remained about the interpretation of some quantities ($\rho_{cover}$, which is part of the iteration-dependent regularizer, and $d^{\pi_{mix}}$, the iterate computed by the algorithm). Perhaps drawing a parallel with mirror descent could help clarify this aspect (for the idea of regularizing with respect to the last iterates, even though $\rho_{cover}$ is not the last iterate due to the different weighting of the involved occupancy measures, and even though the regularizer is not a Bregman divergence). Overall, I do think that this helpful discussion sufficiently addressed the initial concerns, and that the theoretical result is technically correct. Besides, this is not the sole contribution of the paper: it also provides a strong empirical analysis, where the approach compares favourably to strong baselines. To sum up, I think this paper would be a solid addition to the NeurIPS program, and I recommend its acceptance. I strongly encourage the authors to revise their paper taking into account the discussion with reviewer dMyV as well as the other reviews (notably, the method is more than Frank-Wolfe, but less general); I think this will widen the audience and increase the impact of this contribution.
train
[ "GL4NnNYc-5s", "avS7X6Qpxr3", "8FPxf4zuC7Z", "S3OZ-Gy8Cmd", "CFDggTqeEHB", "H5cO5GNQ4-y", "a0KwzRQuQ2", "mSZJAmdF6ms", "baoL7tIwBew", "5xyBkI9ngD", "9qDytsa_pCp", "am9t8SJsHHb", "ScjOyYXdD-F", "Y1YVB7QESus", "pJu8-C7zLbx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an new approach for exploration in reinforcement learning (RL). The idea is to give an exploration bonus to visiting with the new policy state-action pairs which have low visitation density compared to previously visited state-action pairs. The paper provides a convergence proof assuming a perfe...
[ 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_DTVfEJIL3DB", "nips_2021_DTVfEJIL3DB", "S3OZ-Gy8Cmd", "H5cO5GNQ4-y", "pJu8-C7zLbx", "mSZJAmdF6ms", "ScjOyYXdD-F", "baoL7tIwBew", "5xyBkI9ngD", "a0KwzRQuQ2", "Y1YVB7QESus", "GL4NnNYc-5s", "avS7X6Qpxr3", "nips_2021_DTVfEJIL3DB", "nips_2021_DTVfEJIL3DB" ]
nips_2021_zlhpIYub2d0
Variational Automatic Curriculum Learning for Sparse-Reward Cooperative Multi-Agent Problems
We introduce an automatic curriculum algorithm, Variational Automatic Curriculum Learning (VACL), for solving challenging goal-conditioned cooperative multi-agent reinforcement learning problems. We motivate our curriculum learning paradigm through a variational perspective, where the learning objective can be decomposed into two terms: task learning on the current curriculum, and curriculum update to a new task distribution. Local optimization over the second term suggests that the curriculum should gradually expand the training tasks from easy to hard. Our VACL algorithm implements this variational paradigm with two practical components, task expansion and entity curriculum, which produce a series of training tasks over both the task configurations as well as the number of entities in the task. Experimental results show that VACL solves a collection of sparse-reward problems with a large number of agents. Particularly, using a single desktop machine, VACL achieves a 98% coverage rate with 100 agents in the simple-spread benchmark and reproduces the ramp-use behavior originally shown in OpenAI's hide-and-seek project.
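A toy sketch of easy-to-hard task expansion; the success-rate thresholds, Gaussian perturbations, and 2-D goal tasks are illustrative assumptions, not the paper's particle-based update:

```python
import numpy as np

rng = np.random.default_rng(0)

def expand_tasks(active, success, rng, n_new=4, sigma=0.1, lo=0.1, hi=0.9):
    """Keep tasks of intermediate difficulty and grow the curriculum by
    perturbing them outward (easy to hard)."""
    frontier = [t for t, s in zip(active, success) if lo <= s <= hi] or list(active)
    idx = rng.integers(len(frontier), size=n_new)
    return frontier + [frontier[i] + sigma * rng.normal(size=2) for i in idx]

tasks = [rng.random(2) * 0.2 for _ in range(8)]           # start from easy goals

def success_rate(t):
    return float(np.exp(-3 * np.linalg.norm(t)))          # stand-in evaluator

for _ in range(3):
    tasks = expand_tasks(tasks, [success_rate(t) for t in tasks], rng)
print(len(tasks), max(np.linalg.norm(t) for t in tasks))  # tasks drift harder
```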
accept
This work introduces a new curriculum learning algorithm for cooperative multi-agent reinforcement learning. All reviewers appreciated the method's novelty and that the paper was well-written. All found the theoretical motivation convincing and insightful. There were some initial concerns about the experimental results raised by reviewer spUJ, but they found the author's response convincing and thus increased their score.
train
[ "X2QGKk2WYr", "JT9tBH3kfGZ", "9tAWbnlvLKd", "W3Noforf_uS", "mVfWW-Lirdt", "LjZcO6Kebwo", "X9NMzUYEJzm", "SL7E2XerlO", "Z-qmDpsG8-p", "HgsLNrMhUpn", "K3M2hWq3Lpp", "JQPx9WntisM", "jBHZpmu7RDZ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ " We additionally conduct experiments in Speaker-Listener which is one of the basic tasks in the MADDPG[1] paper. This task consists of two cooperative agents, a speaker, and a listener, and three landmarks of differing colors. The speaker and listener obtain +1 reward when the listener covers the correct landmark...
[ -1, 7, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1 ]
[ -1, 5, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "LjZcO6Kebwo", "nips_2021_zlhpIYub2d0", "jBHZpmu7RDZ", "HgsLNrMhUpn", "nips_2021_zlhpIYub2d0", "JQPx9WntisM", "mVfWW-Lirdt", "HgsLNrMhUpn", "nips_2021_zlhpIYub2d0", "K3M2hWq3Lpp", "Z-qmDpsG8-p", "mVfWW-Lirdt", "JT9tBH3kfGZ" ]
nips_2021_OJLaKwiXSbx
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, Steven Chu Hong Hoi
accept
This submission proposes to tackle vision-language tasks by combining a contrastive loss used to first “align” the vision & language representations before “fusing” them via a cross-attention model trained via masked language modeling and image-text matching losses. Reviewers were generally in agreement that this is a strong and well-written submission, proposing a well-motivated approach with convincing state-of-the-art quantitative and qualitative results. While the overall objective and training procedure is rather complex, each component is ablated and demonstrated to be necessary for the method’s strong performance. Some reviewers raised a few questions and concerns about missing ablations and design decisions. These questions were addressed satisfactorily by the authors’ responses. I strongly encourage the authors to incorporate the feedback and their provided answers into the camera-ready version of the submission where appropriate. Given these strengths and especially given the recent interest in the area of vision-language representation and transfer learning, I recommend the submission for a spotlight presentation at NeurIPS.
train
[ "Pz9q6qQwr2V", "_PvmPBYyIkX", "LjYVolIGYbF", "DZx_hZJYag9", "DiS1Oy7hwMs", "vuvTPNJhuRf", "UM-VfWV-FvO", "fLyjDZenoJ3", "qZLBo2wew1", "jVO2S0ogGGB", "iT2YPf1DH6r", "UdR3mB_etLj", "CN-D2fqzw0M", "KXtqfzHNVO2" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. After reading other reviewer's comments and the author's responses, I would like to remain my initial positive rating of accept.", " Thank you for your response to my questions. With the author response and other positive reviews, I will be leaving my initial rating as is. ", " I ...
[ -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 9, 7, 7, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "UM-VfWV-FvO", "jVO2S0ogGGB", "fLyjDZenoJ3", "qZLBo2wew1", "nips_2021_OJLaKwiXSbx", "DiS1Oy7hwMs", "KXtqfzHNVO2", "CN-D2fqzw0M", "UdR3mB_etLj", "iT2YPf1DH6r", "nips_2021_OJLaKwiXSbx", "nips_2021_OJLaKwiXSbx", "nips_2021_OJLaKwiXSbx", "nips_2021_OJLaKwiXSbx" ]
nips_2021_c0O9vBVSvIl
Variational Model Inversion Attacks
Given the ubiquity of deep neural networks, it is important that these models do not reveal information about sensitive data that they have been trained on. In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy. In order to optimize this variational objective, we choose a variational family defined in the code space of a deep generative model, trained on a public auxiliary dataset that shares some structural similarity with the target dataset. Empirically, our method substantially improves performance in terms of target attack accuracy, sample realism, and diversity on datasets of faces and chest X-ray images.
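A minimal sketch of the variational objective with toy stand-ins for the generator and target classifier (both frozen); the diagonal-Gaussian family in code space and the KL weight are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
latent_dim, n_classes, target_class = 16, 10, 3

generator = torch.nn.Linear(latent_dim, 32)     # stand-in for a pretrained GAN
target_clf = torch.nn.Linear(32, n_classes)     # stand-in for the private model
for p in list(generator.parameters()) + list(target_clf.parameters()):
    p.requires_grad_(False)

# Variational family q(z) = N(mu, diag(exp(logvar))) in the generator's code space.
mu = torch.zeros(latent_dim, requires_grad=True)
logvar = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([mu, logvar], lr=1e-2)

labels = torch.full((64,), target_class, dtype=torch.long)
for _ in range(100):
    z = mu + (0.5 * logvar).exp() * torch.randn(64, latent_dim)  # reparameterize
    nll = F.cross_entropy(target_clf(generator(z)), labels)      # accuracy term
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum()   # stay near prior
    loss = nll + 1e-3 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()
```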
accept
The paper proposes a well-motivated method with technical novelty, and good performance is shown. The reviewers' major concerns, including the questionable assumption and the baseline results that seemed inconsistent with the original papers, have been addressed. The mathematical notation should be improved in the camera-ready version.
train
[ "hIx-eM5S1fQ", "wdyKIwWeypa", "nXuaMP2CV48", "leRN_Zon839", "BneiXG_jCE1", "ExORqvtfrIw", "DQElXvwfYQR", "o03qKPIw_h6", "GJ1Thd3zRhJ", "4mtiG7EgeSi" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper frames the model inversion problem as a variational inference problem, where the goal is to approximate the posterior distribution p^{tar}(x|y) from the known target classifier p^{tar}(y|x). From a class of candidate distributions q(x), the aim is to find one that is close to p^{tar}(x|y), and this is a...
[ 5, -1, -1, -1, 6, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_c0O9vBVSvIl", "nXuaMP2CV48", "GJ1Thd3zRhJ", "ExORqvtfrIw", "nips_2021_c0O9vBVSvIl", "o03qKPIw_h6", "4mtiG7EgeSi", "BneiXG_jCE1", "hIx-eM5S1fQ", "nips_2021_c0O9vBVSvIl" ]
nips_2021_hfkER_KJiNw
Graph Neural Networks with Adaptive Residual
Graph neural networks (GNNs) have shown their power in graph representation learning for numerous tasks. In this work, we discover an interesting phenomenon: although residual connections in the message passing of GNNs help improve performance, they immensely amplify GNNs' vulnerability to abnormal node features. This is undesirable because in real-world applications, node features in graphs can often be abnormal, e.g., naturally noisy or adversarially manipulated. We analyze possible reasons to understand this phenomenon and aim to design GNNs with stronger resilience to abnormal features. Our understanding motivates us to propose and derive a simple, efficient, interpretable, and adaptive message passing scheme, leading to a novel GNN with Adaptive Residual, AirGNN. Extensive experiments under various abnormal feature scenarios demonstrate the effectiveness of the proposed algorithm.
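A simplified sketch of adaptive-residual propagation; the norm-dependent gate below is an assumption standing in for the shrinkage AirGNN derives from its regularized smoothing objective:

```python
import numpy as np

def adaptive_residual_step(X, A_hat, lam=1.0, alpha=0.5):
    """One propagation step with a per-node adaptive residual weight: nodes
    whose features deviate strongly from their neighbourhood estimate (e.g.
    abnormal features) lean more on the aggregated message."""
    prop = A_hat @ X
    res = np.linalg.norm(X - prop, axis=1, keepdims=True)
    gate = np.minimum(1.0, lam / (res + 1e-9))          # small gate for outliers
    return gate * (alpha * X + (1 - alpha) * prop) + (1 - gate) * prop

rng = np.random.default_rng(0)
n = 6
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))         # symmetrically normalized adjacency
X = rng.normal(size=(n, 4))
X[0] += 10.0                                # node 0 carries abnormal features
print(adaptive_residual_step(X, A_hat)[0])  # outlier pulled toward its neighbours
```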
accept
The paper discovers the phenomenon that residual connections can amplify GNNs' vulnerability to abnormal node features. It analyzes possible reasons and, based on this understanding, designs an effective message passing scheme to help solve the issue. The observed phenomenon is interesting and important for related applications, and the study and design are reasonable, with impressive results. The good response from the authors has clarified several concerns of the reviewers (better presentation, more thorough experiments, comparison with related work, etc.). The clarifications in the response should be incorporated into the final version if accepted.
train
[ "29Jy9iCnjug", "xkwnCwU10Gc", "oPfd-P9DTX8", "hSt0b5hHzlu", "mVaK_FjjYaC", "y5OPDpU_vLO", "q8FlF_3HU4h", "Pj5zZ3VPEF", "UJNiU_Ge_v", "tS2_XbbrgYR", "0knVXtgonb-", "tiv5JRjswk", "mX9QyClwMwz", "K1j1CkRSZtS", "caxUgK7ZDkN", "-i9idSZopSH" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \nDear reviewer, \n\nThanks for your quick reply and we really appreciate that you carefully read our responses.\n\nWe can understand your suspicion regarding the directions of the two deviations. We want to clarify that (1) the fixed selection is randomly selected among numerous selections; (2) we have consisten...
[ -1, -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "xkwnCwU10Gc", "hSt0b5hHzlu", "q8FlF_3HU4h", "mVaK_FjjYaC", "0knVXtgonb-", "nips_2021_hfkER_KJiNw", "tiv5JRjswk", "tS2_XbbrgYR", "nips_2021_hfkER_KJiNw", "K1j1CkRSZtS", "-i9idSZopSH", "y5OPDpU_vLO", "caxUgK7ZDkN", "UJNiU_Ge_v", "nips_2021_hfkER_KJiNw", "nips_2021_hfkER_KJiNw" ]
nips_2021_B9NOIHl2Z6K
Efficient Active Learning for Gaussian Process Classification by Error Reduction
Active learning sequentially selects the best instance for labeling by optimizing an acquisition function to enhance data/label efficiency. The selection can be either from a discrete instance set (pool-based scenario) or from a continuous instance space (query synthesis scenario). In this work, we study both active learning scenarios for Gaussian Process Classification (GPC). The existing active learning strategies that maximize the Estimated Error Reduction (EER) aim at reducing the classification error after training with the newly acquired instance in a one-step-look-ahead manner. The computation of EER-based acquisition functions is typically prohibitive as it requires retraining the GPC with every new query. Moreover, as the EER is not smooth, it cannot be combined with gradient-based optimization techniques to efficiently explore the continuous instance space for query synthesis. To overcome these critical limitations, we develop computationally efficient algorithms for EER-based active learning with GPC. We derive the joint predictive distribution of label pairs as a one-dimensional integral, as a result of which the computation of the acquisition function avoids retraining the GPC for each query, remarkably reducing the computational overhead. We also derive the gradient chain rule to efficiently calculate the gradient of the acquisition function, which leads to the first query synthesis active learning algorithm implementing EER-based strategies. Our experiments clearly demonstrate the computational efficiency of the proposed algorithms. We also benchmark our algorithms on both synthetic and real-world datasets, which show superior performance in terms of sampling efficiency compared to the existing state-of-the-art algorithms.
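The key computational device, reducing a GPC predictive quantity to a one-dimensional Gaussian integral, can be sketched with Gauss-Hermite quadrature; the single-label probit predictive below is a stand-in for the paper's pairwise quantity, chosen because it has a closed form to check against:

```python
import numpy as np
from scipy.stats import norm

def gpc_predictive(mu, var, n=64):
    """p(y=1 | x) = integral of Phi(f) * N(f; mu, var) df, evaluated with
    Gauss-Hermite quadrature (probabilists' rule, weight exp(-x^2/2))."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    return float((weights * norm.cdf(mu + np.sqrt(var) * nodes)).sum()
                 / np.sqrt(2 * np.pi))

mu, var = 0.7, 1.3
print(gpc_predictive(mu, var))              # quadrature
print(norm.cdf(mu / np.sqrt(1.0 + var)))    # closed form for the probit link
```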
accept
The reviewers generally liked the novel ideas presented in the paper about speeding up active learning. Concerns expressed originally in the reviews were mitigated by the rebuttal and a good discussion between the reviewers. The contributions made in the paper are sufficiently strong to merit publication at NeurIPS. The authors should, however, make sure to clearly incorporate the answers to the reviewers' questions in the final version.
train
[ "AWrtokLSZBj", "ZhTs4B0uzT", "OW44u-xRP2o", "ykgwjTmZa7", "_3uRLn_Rqu", "Y_ZLYaDJw6H", "krI670ab6h", "_Au7cPx6vvv", "qUeZT5aCrkj", "ySXFJvLp2sG", "F09pORUSlxY", "muOUtLT_EXO", "WdQVyGZfkS", "MFNsqT7e_A-" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the reply. We would like to emphasize that we consider sample efficiency as the most important criterion for active learning. In particular in real-world problems, such as materials science applications, obtaining labelled samples can be either difficult or expensive (cost and time). In ...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "OW44u-xRP2o", "nips_2021_B9NOIHl2Z6K", "F09pORUSlxY", "nips_2021_B9NOIHl2Z6K", "Y_ZLYaDJw6H", "ySXFJvLp2sG", "nips_2021_B9NOIHl2Z6K", "ZhTs4B0uzT", "MFNsqT7e_A-", "WdQVyGZfkS", "muOUtLT_EXO", "nips_2021_B9NOIHl2Z6K", "nips_2021_B9NOIHl2Z6K", "nips_2021_B9NOIHl2Z6K" ]
nips_2021_SBNs7EULzqq
Non-Asymptotic Analysis for Two Time-scale TDC with General Smooth Function Approximation
Yue Wang, Shaofeng Zou, Yi Zhou
accept
This paper provides non-asymptotic analysis for TDC algorithm with general smooth function approximation, which is technically sound and novel. The authors’ responses have well addressed the reviewers’ concerns. All the reviewers have reached a consensus on the acceptance of this work. We suggest that the authors further polish their paper to prepare a camera-ready version based on the reviewers’ comments.
train
[ "8ATsRnJGxfe", "tW-Q_fnRvHi", "yBRtiAQ678", "O4qiEsCMGDY", "mHcYmpTl623", "lhbdXikiSHi" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper provides finite-sample analysis on the two time-scale TDC algorithm with non-linear function approximation, proposed by [5] Bhatnagar et al. Both the i.i.d. setting and the Markovian setting are considered. The convergence rate is near-optimal given the target MSPBE function is generally non-convex. Th...
[ 6, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, 2, 5 ]
[ "nips_2021_SBNs7EULzqq", "8ATsRnJGxfe", "lhbdXikiSHi", "mHcYmpTl623", "nips_2021_SBNs7EULzqq", "nips_2021_SBNs7EULzqq" ]
nips_2021_XXxoCgHsiRv
A Little Robustness Goes a Long Way: Leveraging Robust Features for Targeted Transfer Attacks
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures. However, targeted adversarial examples—optimized to be classified as a chosen target class—tend to be less transferable between architectures. While prior research on constructing transferable targeted attacks has focused on improving the optimization procedure, in this work we examine the role of the source classifier. Here, we show that training the source classifier to be "slightly robust"—that is, robust to small-magnitude adversarial examples—substantially improves the transferability of class-targeted and representation-targeted adversarial attacks, even between architectures as different as convolutional neural networks and transformers. The results we present provide insight into the nature of adversarial examples as well as the mechanisms underlying so-called "robust" classifiers.
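A sketch of the attack side of the pipeline: targeted L-infinity PGD crafted on a source model, with transfer then measured by evaluating the result on a different classifier. Plain PGD and the toy model stand in for the paper's exact optimizer settings and architectures:

```python
import torch
import torch.nn.functional as F

def targeted_pgd(source_model, x, target, eps=8 / 255, step=2 / 255, iters=40):
    """Targeted L_inf PGD on the source model: descend the cross-entropy
    toward the chosen target class, projecting back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(source_model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()        # move toward the target class
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

torch.manual_seed(0)
source = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
x = torch.rand(4, 3, 8, 8)
y_target = torch.full((4,), 7, dtype=torch.long)
x_adv = targeted_pgd(source, x, y_target)
print(source(x_adv).argmax(dim=1))  # ideally all 7; transfer = same check elsewhere
```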
accept
This paper observes that classifiers with a certain degree of robustness lead to more transferable (targeted) adversarial examples. The manuscript shows how features learned by robust classifiers transfer better to other classifiers. This provides some further insights (building on https://arxiv.org/abs/2102.05110) into transferability, a phenomenon which remains poorly understood in the adversarial ML community. Thus, there is merit to the work proposed here. I encourage the authors to take into account the discussions from the reviews while preparing the camera-ready version of their manuscript, and to open-source their code. In particular, the authors acknowledged that their discussion of universality is hard to follow, if not a bit circular, so I recommend following the suggestions made in the author response to rewrite Section 4. Finally, the authors conducted their experiments using a single optimizer (see line 71); I would encourage them to confirm their findings with additional optimizers to obtain a more complete result.
val
[ "yLX2rlnApm1", "ScrfPNYZpVA", "6-TqbLrZhNg", "VKBn5t2Kb_3", "OYpSf4lZBK", "u6dOmAiYHmJ", "GsQgGj925h", "OREk_dd1Ot_", "T3L0lsd8wxB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper demonstrates that strong transferability across models can be achieved by using slightly-robust source networks through a comprehensive study and few analyses.\n \n- Originality: \n \n * This work seems to be a follow up study of [67] with a more comprehensive study, including analysis on CNN and trans...
[ 5, 7, -1, -1, -1, -1, -1, 6, 5 ]
[ 2, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "nips_2021_XXxoCgHsiRv", "nips_2021_XXxoCgHsiRv", "OYpSf4lZBK", "OREk_dd1Ot_", "ScrfPNYZpVA", "yLX2rlnApm1", "T3L0lsd8wxB", "nips_2021_XXxoCgHsiRv", "nips_2021_XXxoCgHsiRv" ]
nips_2021_YP1ham75vml
TriBERT: Human-centric Audio-visual Representation Learning
The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visual-linguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT -- a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak-supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning. In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy.
accept
This work proposes a new transformer-based model that can learn across three modalities -- vision, pose, and audio -- to tackle the task of sound source separation. The authors did a good job during the rebuttal and turned two slightly negative reviewers into positive ones. The final score is very borderline (three borderline accepts and one borderline reject). This work could clearly benefit from a more extensive experimental evaluation, but the AC feels this work is very interesting and deserves to be published at NeurIPS. The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. The authors are also encouraged to make other necessary changes.
train
[ "mNYsyXJDQwR", "XNEoUguthyN", "dULdB1slNb1", "Z9daTkW4yn", "nFX2nsBouIZ", "qq6KHKqyX09", "FCL_GOumbDS", "ExA3iXJUc3", "edKoEyTKNsT", "lzjWrFCpNyn", "WZUiMlobBiU", "RKd9ZxrTVJM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "By pointing out the recent transformer-based models are mostly designed for visual-language data, this paper introduce TriBERT which specifically targeting on audio-visual modalities to address such limitation. Inspired by ViLBERT, TriBERT is a transformer based model which enables contextual feature learning acro...
[ 6, -1, 6, -1, -1, -1, 5, -1, -1, -1, -1, 6 ]
[ 4, -1, 3, -1, -1, -1, 5, -1, -1, -1, -1, 3 ]
[ "nips_2021_YP1ham75vml", "WZUiMlobBiU", "nips_2021_YP1ham75vml", "edKoEyTKNsT", "qq6KHKqyX09", "nips_2021_YP1ham75vml", "nips_2021_YP1ham75vml", "RKd9ZxrTVJM", "dULdB1slNb1", "FCL_GOumbDS", "mNYsyXJDQwR", "nips_2021_YP1ham75vml" ]
nips_2021_Ir-WwGboFN-
How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
Noisy labels are inevitable in large real-world datasets. In this work, we explore an area understudied by previous works --- how the network's architecture impacts its robustness to noisy labels. We provide a formal framework connecting the robustness of a network to the alignments between its architecture and target/noise functions. Our framework measures a network's robustness via the predictive power in its representations --- the test performance of a linear model trained on the learned representations using a small set of clean labels. We hypothesize that a network is more robust to noisy labels if its architecture is more aligned with the target function than the noise. To support our hypothesis, we provide both theoretical and empirical evidence across various neural network architectures and different domains. We also find that when the network is well-aligned with the target function, its predictive power in representations could improve upon state-of-the-art (SOTA) noisy-label-training methods in terms of test accuracy and even outperform sophisticated methods that use clean labels.
accept
The paper studies the connection between the architecture of deep neural networks and their robustness to noisy labels, which has not been studied before. A take-away message is that a well-designed architecture can help learn good representations even when the training samples have label noise. Theoretical and empirical analyses are provided. Although Reviewers RRS5 and NWJ7 are concerned that the theoretical analysis is limited, the paper has clear merits in its empirical contributions. All the reviewers agree that the paper is interesting. One reviewer commented that the work can potentially be very useful in several areas, such as network architecture search, training with noisy data, and representation learning, a point with which the meta-reviewer agrees. Since the paper brings a unique spark that may enlighten other researchers and benefit the machine learning community, the meta-reviewer is happy to recommend acceptance. We ask the authors to carefully incorporate the useful comments from the reviewers in the final version; e.g., some statements that look overconfident should be revised. We also suggest the authors review an important topic in learning with noisy labels, i.e., modelling the label noise [r1], which has been exploited for loss correction [r2, r3] and should be useful in designing and validating the architecture of deep models using only the noisy data. [r1] Yao et al. "Dual T: Reducing estimation error for transition matrix in label-noise learning." In NeurIPS 2020. [r2] Natarajan et al. "Learning with noisy labels." In NeurIPS 2013. [r3] Liu et al. "Classification with noisy labels by importance reweighting." IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3): 447-461, 2015.
test
[ "iPDlx9yTNbo", "6nTXKjlmoW", "PY51O5NXopD", "ce5r6fGs8Or", "6S3m0xOIST", "vx069RXEzFk", "xn-nDfc1Shd", "DTpEjIf3rNc", "Qy0JmTOQA5l", "M5-zT1ViBBb", "srvzkKuiMHH", "yEh-H-qe92x" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on the connection between the neural network’s architecture and its robustness when learning with noisy labels. The authors provide both theoretical and experimental results to justify their claims. Learning with noisy labels is one of the hottest topics in weakly supervised learning. This pap...
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 2 ]
[ "nips_2021_Ir-WwGboFN-", "xn-nDfc1Shd", "6nTXKjlmoW", "yEh-H-qe92x", "srvzkKuiMHH", "M5-zT1ViBBb", "Qy0JmTOQA5l", "iPDlx9yTNbo", "nips_2021_Ir-WwGboFN-", "nips_2021_Ir-WwGboFN-", "nips_2021_Ir-WwGboFN-", "nips_2021_Ir-WwGboFN-" ]
nips_2021_sNw3VBPL7rg
Calibration and Consistency of Adversarial Surrogate Losses
Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong
accept
I agree with the reviewers that this paper makes substantive contributions to understanding the role of surrogate losses in adversarial learning. To make the paper more accessible, I encourage the authors to make the exposition (at least the first few sections) more friendly to readers who don’t have an extensive background in learning theory.
train
[ "NlL4jybSQEy", "fkxb9a2oJv_", "m4X_TIODJpK", "GJ6oyVfKcdC", "KWOuOC3lVU3", "Gco1L3BSMnq", "Gs7BvaokSH6", "dntf1xpLKsy", "cpKngmFE8yH", "95MHEK6Vmxd", "Ec8u907kDry", "aEpEiD-s00_", "2YUJm74mvCD", "994nj8Vtuv9" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors provide an extensive characterization of calibration and consistency for surrogates to the l2 robust 0-1 loss, along the lines of what Bartlett et al. and Steinwart provided for the standard 0-1 loss. They demonstrate that convex losses and supremum-based convex losses are not calibrated, and further ...
[ 8, -1, 9, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1 ]
[ 3, -1, 3, -1, -1, -1, 3, -1, -1, 2, -1, -1, -1, -1 ]
[ "nips_2021_sNw3VBPL7rg", "GJ6oyVfKcdC", "nips_2021_sNw3VBPL7rg", "2YUJm74mvCD", "dntf1xpLKsy", "cpKngmFE8yH", "nips_2021_sNw3VBPL7rg", "aEpEiD-s00_", "Ec8u907kDry", "nips_2021_sNw3VBPL7rg", "95MHEK6Vmxd", "Gs7BvaokSH6", "m4X_TIODJpK", "NlL4jybSQEy" ]
nips_2021_BR5bZGhzrel
The Value of Information When Deciding What to Learn
All sequential decision-making agents explore so as to acquire knowledge about a particular target. It is often the responsibility of the agent designer to construct this target which, in rich and complex environments, constitutes an onerous burden; without full knowledge of the environment itself, a designer may forge a sub-optimal learning target that poorly balances the amount of information an agent must acquire to identify the target against the target's associated performance shortfall. While recent work has developed a connection between learning targets and rate-distortion theory to address this challenge and empower agents that decide what to learn in an automated fashion, the proposed algorithm does not optimally tackle the equally important challenge of efficient information acquisition. In this work, building upon the seminal design principle of information-directed sampling (Russo & Van Roy, 2014), we address this shortcoming directly to couple optimal information acquisition with the optimal design of learning targets. Along the way, we offer new insights into learning targets from the literature on rate-distortion theory before turning to empirical results that confirm the value of information when deciding what to learn.
accept
After discussions between the authors and reviewers, it appears as though the reviewers have all reached a consensus that this paper is worthy of acceptance, so I am happy to support that consensus. I want to thank the authors for their detailed responses and thank the reviewers for engaging in discussion with the authors.
train
[ "9R2aRw9fs6q", "XpKBP1iz0Hp", "a6_fNTFPh92", "_eLCSVtbAnt", "o3YhnCbWFJv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Update after author response and discussion**: I am happy to see most of my assumptions positively confirmed and clarified by the authors, as well as the criticism by other reviewers being sufficiently addressed. Some of my improvements remain valid, but I do not consider them crucial for publication. I therefor...
[ 6, 6, 6, 7, 6 ]
[ 4, 3, 2, 2, 2 ]
[ "nips_2021_BR5bZGhzrel", "nips_2021_BR5bZGhzrel", "nips_2021_BR5bZGhzrel", "nips_2021_BR5bZGhzrel", "nips_2021_BR5bZGhzrel" ]
nips_2021_vLyI__SoeAe
Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning
Recently, many algorithms have been devised for reinforcement learning (RL) with function approximation. While they have clear algorithmic distinctions, they also have many implementation differences that are algorithm-independent and sometimes under-emphasized. Such mixing of algorithmic novelty and implementation craftsmanship makes rigorous analyses of the sources of performance improvements across algorithms difficult. In this work, we focus on a series of off-policy inference-based actor-critic algorithms -- MPO, AWR, and SAC -- to decouple their algorithmic innovations and implementation decisions. We present unified derivations through a single control-as-inference objective, where we can categorize each algorithm as based on either Expectation-Maximization (EM) or direct Kullback-Leibler (KL) divergence minimization and treat the rest of the specifications as implementation details. We performed extensive ablation studies, and identified substantial performance drops whenever implementation details are mismatched for algorithmic choices. These results show which implementation or code details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: as examples, we identified that the tanh Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU are critical for MPO's performance but also transfer to noticeable gains in SAC. We hope our work can inspire future work to further demystify sources of performance improvements across multiple algorithms and allow researchers to build on one another's algorithmic and implementational innovations.
accept
While the reviewers thought there were various ways that this paper could be improved, there was also general consensus that this framework, which unites multiple existing algorithms, was interesting and useful, prompting one reviewer to comment on the new insights it brought them. The authors are encouraged to address the feedback from the reviewers in their camera-ready version.
train
[ "A4pTwoVgEtU", "NYspl1Sk5v", "HcrAn2Uwh56", "eo9APdYLuen", "8rR_6_0Xpcb", "5qrFkTyI-XP", "2XysYWkHwEs", "HiD7p0bY7hV", "mAydCn0OWRQ", "8KqhoUKeH8J", "EbZH4Ek4XeN", "Mv2M8xEWd0E", "ytByHR67JcA", "3jE4SoiDe4Y", "GfXV4plJsgo", "yt8jMT4R5St", "RHZ2J1NlZzQ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for your update. As the reviewer pointed out, we agree that there are still a lot of margins to validate the co-dependent properties (maybe exponential combinations). We will continually address these problems current deep RL research faces, and hope that our works could build the foundation...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "HcrAn2Uwh56", "nips_2021_vLyI__SoeAe", "8rR_6_0Xpcb", "Mv2M8xEWd0E", "mAydCn0OWRQ", "2XysYWkHwEs", "EbZH4Ek4XeN", "EbZH4Ek4XeN", "8KqhoUKeH8J", "HiD7p0bY7hV", "ytByHR67JcA", "nips_2021_vLyI__SoeAe", "NYspl1Sk5v", "RHZ2J1NlZzQ", "yt8jMT4R5St", "nips_2021_vLyI__SoeAe", "nips_2021_vLyI...
nips_2021_fCjd2bXG5iI
Can fMRI reveal the representation of syntactic structure in the brain?
While studying semantics in the brain, neuroscientists use two approaches. One is to identify areas that are correlated with semantic processing load. Another is to find areas that are predicted by the semantic representation of the stimulus words. However, most studies of syntax have focused only on identifying areas correlated with syntactic processing load. One possible reason for this discrepancy is that representing syntactic structure in an embedding space such that it can be used to model brain activity is a non-trivial computational problem. Another possible reason is that it is unclear if the low signal-to-noise ratio of neuroimaging tools such as functional Magnetic Resonance Imaging (fMRI) can allow us to reveal the correlates of complex (and perhaps subtle) syntactic representations. In this study, we propose novel multi-dimensional features that encode information about the syntactic structure of sentences. Using these features and fMRI recordings of participants reading a natural text, we model the brain representation of syntax. First, we find that our syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture processing load. At the same time, we see that regions well-predicted by syntactic features are distributed in the language system and are not distinguishable from those processing semantics. Our code and data will be available at https://github.com/anikethjr/brainsyntacticrepresentations.
accept
This paper spurred a lot of discussion and back-and-forth with the authors, leading two reviewers to raise their scores. One reviewer thought the work was of limited interest and focused on a small community. I do not think this is grounds for reducing its score, since neuroscience is a main area at NeurIPS, both historically and today. A major issue was that the reviewers questioned whether the proposed embeddings are truly disentangled syntax embeddings. This point was partly addressed by the authors providing more analysis. However, the reviewers would still like to see a more careful and nuanced discussion of this issue and of how to interpret the results in light of the syntax-semantics debate, and to make sure that the limitations of the present work are discussed. The reviewers also mentioned problems with clarity and paper organization -- specific questions were mostly answered, but the paper organization should be improved in the next revision. Given the discussion, I believe the paper passes the bar for acceptance. My low confidence here is mostly due to my insufficient familiarity with neurolinguistics.
train
[ "1FLzIS4dOi5", "cSXYWouwgUt", "_AqYlTr2Nl0", "7PI1stJfYG8", "wH_J54XYkP9", "L_lUjNCCwo", "ZdPilc0OroS", "EvvGXhOlv2x", "QoZY5xda6EV", "63YXkjQ8_W7", "C3A-swR11Zr", "Zo1bdkxTF0h", "UAyIy1b_dk", "7lG9Q_TxFyh", "QPa4kd50TZ", "56vCV8NFyXT", "gUHKC6ZAYIN", "DXHq7X5C0hu", "tUkHjLyL2Ab"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "...
[ "* The authors conduct a brain encoding study testing whether fine-grained information about the content of incremental syntactic representations contribute to predictions of neural activity, over and above measures of syntactic complexity.\n* They design custom subgraph embedding representations to operationalize ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 2, 5 ]
[ "nips_2021_fCjd2bXG5iI", "UAyIy1b_dk", "7PI1stJfYG8", "euz2GanBeub", "L_lUjNCCwo", "QoZY5xda6EV", "euz2GanBeub", "UAyIy1b_dk", "C3A-swR11Zr", "tUkHjLyL2Ab", "tUkHjLyL2Ab", "56vCV8NFyXT", "7lG9Q_TxFyh", "DXHq7X5C0hu", "nips_2021_fCjd2bXG5iI", "gUHKC6ZAYIN", "QPa4kd50TZ", "1FLzIS4dOi...
nips_2021_SwfsoPuGYku
Robust Implicit Networks via Non-Euclidean Contractions
Saber Jafarpour, Alexander Davydov, Anton Proskurnikov, Francesco Bullo
accept
The paper originally received two "Marginally above the acceptance threshold" ratings and two "Marginally below the acceptance threshold" ratings, all with relatively high confidence. The major concerns included missing comparisons to existing works in both the theory and the experiments, relatively weak experiments, and some missing important details. The authors provided extensive rebuttals, including extra experiments, which seemed to take effect: Reviewer xt3G raised his/her score from 5 to 7, and Reviewer ivrZ also raised his/her score to 7. Considering that the extra experiments and clarifications can be easily incorporated in the revision, the AC deemed the paper acceptable and thus recommended acceptance.
val
[ "0rCMIiig4Sp", "kD2uUopS6Ng", "aJmzUQO9CKb", "OUSFxTU6X5b", "emtFZks-ovU", "Wk6ANZonjDb", "iii7ANRp6oG", "kP6KEduxjd", "M-wwEJawrcX", "aKi7W9sbYgg", "YMdOFO8Syju", "rytMrNZZpxL", "_HaZrvCtRs4", "iDjgV5VtzSU", "UVv7tVrmpxS", "Sv-gt7PfBm", "MFZKiro7dCH", "C9iJF6sQfG5", "veTar8P7CM"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "au...
[ "This paper studies the implicit nn with non-Euclidean norms. In particular, the authors give the sufficient condition for the well-posedness of the fixed point problem under any norms, and provide the corresponding contraction factor. Consequently, the author derives the Lipschiz constant of their implicit model i...
[ 7, -1, 6, -1, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "nips_2021_SwfsoPuGYku", "kP6KEduxjd", "nips_2021_SwfsoPuGYku", "iii7ANRp6oG", "nips_2021_SwfsoPuGYku", "nips_2021_SwfsoPuGYku", "bNNqWOdDlX", "0rCMIiig4Sp", "emtFZks-ovU", "aJmzUQO9CKb", "Wk6ANZonjDb", "nips_2021_SwfsoPuGYku", "iDjgV5VtzSU", "Sv-gt7PfBm", "nips_2021_SwfsoPuGYku", "UVv...
nips_2021_MOkjFCuLxlc
A Kernel-based Test of Independence for Cluster-correlated Data
The Hilbert-Schmidt Independence Criterion (HSIC) is a powerful kernel-based statistic for assessing the generalized dependence between two multivariate variables. However, independence testing based on the HSIC is not directly possible for cluster-correlated data. Such a correlation pattern among the observations arises in many practical situations, e.g., family-based and longitudinal data, and requires proper accommodation. Therefore, we propose a novel HSIC-based independence test to evaluate the dependence between two multivariate variables based on cluster-correlated data. Using the previously proposed empirical HSIC as our test statistic, we derive its asymptotic distribution under the null hypothesis of independence between the two variables but in the presence of sample correlation. Based on both simulation studies and real data analysis, we show that, with clustered data, our approach effectively controls type I error and has a higher statistical power than competing methods.
accept
The paper proposes a test of independence for cluster-correlated data based on HSIC and derives an asymptotic distribution for HSIC under cluster correlation. In evaluations, the test is slightly conservative. The test is applied to a microbiome study. The paper is well organized and well written. After discussion with the authors, only minor issues remain. However, the reviewers agree that this is a good contribution and should be accepted.
train
[ "gohKfOVwy-", "JQBiVTbU97F", "VKbIKih30wi", "dzSZmqVh_jO", "I6DwYZxc1i", "ihs1_-dXs-I", "FCJxtTPR3cP", "pW6dt5SN0A", "tzhLKHkB7O" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThank you for taking the time to write this thorough and detailed response. It answered all of my questions.\n\nAll of the best.", "The paper proposed a modified version of Hilbert-Schmidt Independence Criterion (HSIC) to test the dependence between two multivariate variables. The method is sui...
[ -1, 6, 7, -1, -1, -1, -1, 7, 6 ]
[ -1, 3, 3, -1, -1, -1, -1, 3, 2 ]
[ "dzSZmqVh_jO", "nips_2021_MOkjFCuLxlc", "nips_2021_MOkjFCuLxlc", "pW6dt5SN0A", "JQBiVTbU97F", "tzhLKHkB7O", "VKbIKih30wi", "nips_2021_MOkjFCuLxlc", "nips_2021_MOkjFCuLxlc" ]
nips_2021_d2CejHDZJh
Efficient methods for Gaussian Markov random fields under sparse linear constraints
Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRF) are computationally prohibitive when the number of constraints is large. In some cases, such as for intrinsic GMRFs, they may even be unfeasible. We propose a new class of methods to overcome these challenges in the common case of sparse constraints, where one has a large number of constraints and each only involves a few elements. Our methods rely on a basis transformation into blocks of constrained versus non-constrained subspaces, and we show that the methods greatly outperform existing alternatives in terms of computational cost. By combining the proposed methods with the stochastic partial differential equation approach for Gaussian random fields, we also show how to formulate Gaussian process regression with linear constraints in a GMRF setting to reduce computational cost. This is illustrated in two applications with simulated data.
accept
Following the reviewers' discussion, I recommend accepting the paper under the condition that the authors explain that its main contributions are applications for the projective clustering problem in probability/statistics, and cite the relevant existing solutions from different fields (coresets in computational geometry, subspace clustering in DB, dictionary learning in signal processing). It should also be clarified that the suggested solution is given as a simple and possibly inefficient solver of projective clustering for the special case where no noise exists (the points lie on the subspace). In this case, the paper can serve as an interesting bridge between the CS, ML, and statistics communities.
val
[ "_5SZgUKnAX", "Vcq1S4lKP_O", "1QC6VcCiTfQ", "L3p94_ogKf0", "CoC4Bc2Tt43", "JEoPNuTS5r", "npvYAGa0pRm", "i7eI0Jgy4BD", "1VHnF46_Y4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces efficient methods for GP inference under the Markov assumption when conditioning on sparse linear combinations. The approach relies on astute changes of basis. It assumes that the matrix A representing the linear mapping involved in the conditioning can be represented in block diagonal form, ...
[ 7, -1, 6, 6, -1, -1, -1, -1, 6 ]
[ 3, -1, 4, 4, -1, -1, -1, -1, 3 ]
[ "nips_2021_d2CejHDZJh", "JEoPNuTS5r", "nips_2021_d2CejHDZJh", "nips_2021_d2CejHDZJh", "L3p94_ogKf0", "1QC6VcCiTfQ", "_5SZgUKnAX", "1VHnF46_Y4", "nips_2021_d2CejHDZJh" ]
nips_2021_-b5OSCydOMe
Sparse is Enough in Scaling Transformers
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach. We address this problem by leveraging sparsity. We study sparse variants for all layers in the Transformer and propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer as we scale up the model size. Surprisingly - the sparse layers are enough to obtain the same perplexity as the standard Transformer. We also integrate with prior sparsity approaches to enable fast inference on long sequences even with limited memory, resulting in performance competitive to the state-of-the-art on long text summarization.
accept
The paper introduces a technique for improving the inference speed of transformers by sparsifying all the linear layers. During training, a controller module predicts a sparse mask for the activations (trained with a straight-through Gumbel softmax), and during decoding this mask is used to prune the weights of the linear layer. Reviewers appreciated that the method is simple and dramatically improves the speed of decoding. Reviewer vUEu raises reasonable concerns about the lack of comparison with mixture-of-experts approaches (which improve both training and inference speed); however, the authors argue that their method is more expressive, and they include an empirical comparison in their response. I think the title and abstract should be clearer that the paper addresses purely unbatched decoding speed. Overall, I recommend acceptance.
train
[ "1KQtNy4mj_7", "20MORGaGFXr", "NgNXLNoMVMi", "eMuEsx4sg6W", "cTtJ-TVYNiB", "22hy-ukYk1r", "YSsvMkG1DTz", "r1BglfIra9w", "ZkXFZPisS_K", "R9yKk5EFJ1B", "BrwIvrqbOIf", "rJsnqyGkK1", "5VDgo4_5wj", "P5qBjcPTnXm", "HDUX8IdSclc", "MlxS64Fse8a" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We want to thank the reviewer for the insightful comments and a lot of thought put into our paper. In particular, we believe this is the key doubt, as stated by the reviewer: *\"However, the selection of each block is highly correlated since the selection is decided by a low-rank compressed version of the activat...
[ -1, -1, -1, 8, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "NgNXLNoMVMi", "P5qBjcPTnXm", "rJsnqyGkK1", "nips_2021_-b5OSCydOMe", "5VDgo4_5wj", "P5qBjcPTnXm", "nips_2021_-b5OSCydOMe", "ZkXFZPisS_K", "R9yKk5EFJ1B", "BrwIvrqbOIf", "YSsvMkG1DTz", "MlxS64Fse8a", "eMuEsx4sg6W", "HDUX8IdSclc", "nips_2021_-b5OSCydOMe", "nips_2021_-b5OSCydOMe" ]
nips_2021_MNVjrDpu6Yo
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn a lot of attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former method suffers from an extremely large computation cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
accept
This paper studies how to optimize the performance of sparse models which are pruned during the training process by combining during-training pruning with the growth criterion originally proposed in RigL. All reviewers felt the paper was clear and covered a timely and important topic, though there were some concerns regarding clarity and comparison to prior work which were resolved through the author discussion. As such, I recommend the paper be accepted.
train
[ "a2JgOxHIb7x", "wwzcFhXCEuC", "QynLtw53mDr", "CjFRd4GU3i", "A4AM8q8q-yt", "fZ4Ql7S89vk", "uKWd5zUMoa0", "DpDjav09sQW", "ftBba7gkhxr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your valuable comments and your acceptance! \n\nAll your comments are constructive, helping to make our paper stronger and clearer. We will name both versions of the method as GraNet and incorporate all the discussions in the next version.", "This work aims to combine ideas presented in rec...
[ -1, 7, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 5, -1, -1, -1, -1, -1, 3, 4 ]
[ "wwzcFhXCEuC", "nips_2021_MNVjrDpu6Yo", "DpDjav09sQW", "nips_2021_MNVjrDpu6Yo", "wwzcFhXCEuC", "wwzcFhXCEuC", "ftBba7gkhxr", "nips_2021_MNVjrDpu6Yo", "nips_2021_MNVjrDpu6Yo" ]
nips_2021_oqKC5A7iq_k
Low-Fidelity Video Encoder Optimization for Temporal Action Localization
Most existing temporal action localization (TAL) methods rely on a transfer learning pipeline: by first optimizing a video encoder on a large action classification dataset (i.e., source domain), followed by freezing the encoder and training a TAL head on the action localization dataset (i.e., target domain). This results in a task discrepancy problem for the video encoder – trained for action classification, but used for TAL. Intuitively, joint optimization with both the video encoder and TAL head is a strong baseline solution to this discrepancy. However, this is not operable for TAL subject to the GPU memory constraints, due to the prohibitive computational cost in processing long untrimmed videos. In this paper, we resolve this challenge by introducing a novel low-fidelity (LoFi) video encoder optimization method. Instead of always using the full training configurations in TAL learning, we propose to reduce the mini-batch composition in terms of temporal, spatial, or spatio-temporal resolution so that jointly optimizing the video encoder and TAL head becomes operable under the same memory conditions of a mid-range hardware budget. Crucially, this enables the gradients to flow backwards through the video encoder conditioned on a TAL supervision loss, favourably solving the task discrepancy problem and providing more effective feature representations. Extensive experiments show that the proposed LoFi optimization approach can significantly enhance the performance of existing TAL methods. Encouragingly, even with a lightweight ResNet18 based video encoder in a single RGB stream, our method surpasses two-stream (RGB + optical-flow) ResNet50 based alternatives, often by a good margin.
accept
This paper presents work on temporal action localization. The main idea is to use low fidelity (e.g., lower temporal resolution) to enable end-to-end training within the constraints imposed by large video batches/models in GPU memory. The reviewers appreciate the simplicity and effectiveness of this idea. While it builds on related efforts in training image and other video models, identifying this approach to enable end-to-end training is interesting to researchers working in this field, as it challenges a standard assumption about the difficulty of end-to-end training.
train
[ "TD4NdfTzib", "0dyOmRI1gkB", "WZHmfh57OCJ", "3jJwtSZ7G4I", "k0J7NJBsgt2", "bqRmHiXbZoZ", "08ZjP7xDjf", "6oxgpO0gf6k" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Updating after the rebuttal:\nThe rebuttal well addressed some of my concerns. I would raise the rating to 6: Marginally above the acceptance threshold\n\n---\n\nThe paper presents a low-fidelity video encoder optimization approach to relieve the large memory constraints in the temporal action localization problem...
[ 6, 6, -1, -1, -1, -1, 7, 8 ]
[ 3, 3, -1, -1, -1, -1, 2, 5 ]
[ "nips_2021_oqKC5A7iq_k", "nips_2021_oqKC5A7iq_k", "TD4NdfTzib", "6oxgpO0gf6k", "08ZjP7xDjf", "0dyOmRI1gkB", "nips_2021_oqKC5A7iq_k", "nips_2021_oqKC5A7iq_k" ]
nips_2021_r-oRRT-ElX
On Provable Benefits of Depth in Training Graph Convolutional Networks
Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing. Despite the apparent consensus, we observe that there exists a discrepancy between the theoretical understanding of over-smoothing and the practical capabilities of GCNs. Specifically, we argue that over-smoothing does not necessarily happen in practice: a deeper model is provably expressive, can converge to the global optimum at a linear convergence rate, and can achieve very high training accuracy as long as it is properly trained. Despite being capable of achieving high training accuracy, deeper models are empirically shown to generalize poorly at the testing stage, and the existing theoretical understanding of such behavior remains elusive. To achieve a better understanding, we carefully analyze the generalization capability of GCNs, and show that the training strategies used to achieve high training accuracy significantly deteriorate the generalization capability of GCNs. Motivated by these findings, we propose a decoupled structure for GCNs that detaches weight matrices from feature propagation to preserve the expressive power and ensure good generalization performance. We conduct empirical evaluations on various synthetic and real-world datasets to validate the correctness of our theory.
accept
This work provides theoretical and empirical evidence that over-smoothing does not necessarily happen in practice: it is shown that a deep GCN is expressive as long as properly trained, as well as that it can converge to a globally optimal solution. The paper also discusses the generalization capability of GCNs. The reviewers and AC agree that these contributions are valuable to the GNN community and non-trivial. The paper contained some small bugs, but these should be easily fixable in the camera-ready version.
val
[ "CsKamW4ZAx", "EA4FoFIKsG", "MX_Y6WCC1Ej", "ThCn4gw_qXj", "txXk7uTdFKT", "oR_mrttNkHI", "VTC3tf-aOg", "Z6eMKzLPoSa", "87tee_hQNah", "5wj0a4Sh7Vn", "ZDVwiynGvp_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Based on the response from the authors and the comments from other reviewers, I'll keep my rating and recommend this paper to be accepted.", " Thank you, for the clarification. I increased my score, under the author's promise of sufficiently polishing the final version of the paper. ", "The deals with semi-su...
[ -1, -1, 5, 7, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "oR_mrttNkHI", "87tee_hQNah", "nips_2021_r-oRRT-ElX", "nips_2021_r-oRRT-ElX", "Z6eMKzLPoSa", "ZDVwiynGvp_", "5wj0a4Sh7Vn", "ThCn4gw_qXj", "MX_Y6WCC1Ej", "nips_2021_r-oRRT-ElX", "nips_2021_r-oRRT-ElX" ]
nips_2021_ebIORrYImx
Practical Near Neighbor Search via Group Testing
We present a new algorithm for the approximate near neighbor problem that combines classical ideas from group testing with locality-sensitive hashing (LSH). We reduce the near neighbor search problem to a group testing problem by designating neighbors as "positives," non-neighbors as "negatives," and approximate membership queries as group tests. We instantiate this framework using distance-sensitive Bloom Filters to Identify Near-Neighbor Groups (FLINNG). We prove that FLINNG has sub-linear query time and show that our algorithm comes with a variety of practical advantages. For example, FLINNG can be constructed in a single pass through the data, consists entirely of efficient integer operations, and does not require any distance computations. We conduct large-scale experiments on high-dimensional search tasks such as genome search, URL similarity search, and embedding search over the massive YFCC100M dataset. In our comparison with leading algorithms such as HNSW and FAISS, we find that FLINNG can provide up to a 10x query speedup with substantially smaller indexing time and memory.
accept
The submission was deemed significant and well-executed, and the reviewers expect it to have impact. While some reviewers had reservations, the author rebuttals addressed their concerns.
train
[ "AkORhdbI5F0", "zEDVlYO8XL", "ULzbs1ht_xn", "-jucIXJdKtt", "nLaesMD2Rs", "GBIPUeyQrjS", "K3FOZQ-C2QX", "mtrTUjhuvCz", "8lVlIj8psR", "Y8ZkmxWsE6t", "jhS8FFOJzTU", "0sG44X3KdG1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I acknowledge reading the response and would be happy to see the promised edits being implemented. I maintain my score.", " Thanks for the response. \nGiven the response as well as other reviews, I maintain my original rating. I think the paper is valuable and makes good contributions, in particular I appreciat...
[ -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "Y8ZkmxWsE6t", "mtrTUjhuvCz", "nips_2021_ebIORrYImx", "8lVlIj8psR", "nips_2021_ebIORrYImx", "K3FOZQ-C2QX", "nLaesMD2Rs", "0sG44X3KdG1", "ULzbs1ht_xn", "jhS8FFOJzTU", "nips_2021_ebIORrYImx", "nips_2021_ebIORrYImx" ]
nips_2021_TFEFvU0ZV6Q
Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and actions of others
To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of other agents in the environment. By the end of their first year of life, human infants intuitively achieve such common sense, and these cognitive achievements lay the foundation for humans' rich and complex understanding of the mental states of others. Can machines achieve generalizable, commonsense reasoning about other agents like human infants? The Baby Intuitions Benchmark (BIB) challenges machines to predict the plausibility of an agent's behavior based on the underlying causes of its actions. Because BIB's content and paradigm are adopted from developmental cognitive science, BIB allows for direct comparison between human and machine performance. Nevertheless, recently proposed, deep-learning-based agency reasoning models fail to show infant-like reasoning, leaving BIB an open challenge.
accept
This paper introduces a new benchmark set for visual cognition tasks, inspired by looking-time experiments done with human babies. I think this work would be useful even if its only role were to highlight the questions and methodologies used by developmental psychology to study non-verbal intelligence, but care was taken to provide a broad benchmark that I think will provoke interesting ML work. The reviewers generally agree, though they note a few requests for clarification and context. (Note, it's probably the case that this paper fits better into the D&B track, but since that track is brand new this year I don't think the authors should be penalized for choosing the main track.)
train
[ "Wzr1c-TSRP1", "To-0_vk-Oc", "gS390isnhO", "oTdIsbeRSdj", "6GgtbQYeJA-", "CWLjYTzuvlx", "wkgNtXWdKFI", "E7og1LCT3EX", "P__XSJYtC_", "MlEffJUxQJQ", "LsB7wRBVeSP", "TOlPzppg4zj" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents the Baby Intuitions Benchmark (BIB) to test deep learning methods abilities’ to reason about how other agents will behave. BIB is inspired by human infants’ expectations about other agents, as clearly laid out in the paper. The authors include several different capabilities/tasks within the ben...
[ 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 3 ]
[ "nips_2021_TFEFvU0ZV6Q", "wkgNtXWdKFI", "oTdIsbeRSdj", "MlEffJUxQJQ", "nips_2021_TFEFvU0ZV6Q", "TOlPzppg4zj", "Wzr1c-TSRP1", "6GgtbQYeJA-", "LsB7wRBVeSP", "nips_2021_TFEFvU0ZV6Q", "nips_2021_TFEFvU0ZV6Q", "nips_2021_TFEFvU0ZV6Q" ]
nips_2021_PZ3TnxaC-PT
Neural Hybrid Automata: Learning Dynamics With Multiple Modes and Stochastic Transitions
Effective control and prediction of dynamical systems require appropriate handling of continuous-time and discrete, event-triggered processes. Stochastic hybrid systems (SHSs), common across engineering domains, provide a formalism for dynamical systems subject to discrete, possibly stochastic, state jumps and multi-modal continuous-time flows. Despite the versatility and importance of SHSs across applications, a general procedure for the explicit learning of both discrete events and multi-mode continuous dynamics remains an open problem. This work introduces Neural Hybrid Automata (NHAs), a recipe for learning SHS dynamics without a priori knowledge on the number, mode parameters, and inter-modal transition dynamics. NHAs provide a systematic inference method based on normalizing flows, neural differential equations, and self-supervision. We showcase NHAs on several tasks, including mode recovery and flow learning in systems with stochastic transitions, and end-to-end learning of hierarchical robot controllers.
accept
After clarifications made by the authors in their rebuttal, all reviewers agreed about the merits of this work, and recommended acceptance. In the final version, please incorporate the feedback given in the reviews, and add the important clarifications of the rebuttal.
train
[ "OxULYYHvufy", "yRBqcjljV5Q", "xmH9YbFsdmw", "Hjv_DgNY3C", "99JPQ8LaMef", "6NR0EzupiDV", "Zn9tL53taB", "f5l6TwfeTB1" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a solution called 'Neural Hybrid Automata' to model real world dynamical systems with multiple modes of operation and stochastic switching between modes. Such problems are plentiful in a variety of engineering domains. The authors address this using three modules: Dynamics model, Latent model t...
[ 6, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, 3, 2 ]
[ "nips_2021_PZ3TnxaC-PT", "xmH9YbFsdmw", "OxULYYHvufy", "OxULYYHvufy", "f5l6TwfeTB1", "Zn9tL53taB", "nips_2021_PZ3TnxaC-PT", "nips_2021_PZ3TnxaC-PT" ]
nips_2021_5af9FHClUZu
Fast Projection onto the Capped Simplex with Applications to Sparse Regression in Bioinformatics
We consider the problem of projecting a vector onto the so-called k-capped simplex, which is a hyper-cube cut by a hyperplane. For an n-dimensional input vector with bounded elements, we found that a simple algorithm based on Newton's method is able to solve the projection problem to high precision with a complexity of roughly O(n), which has a much lower computational cost compared with the existing sorting-based methods proposed in the literature. We provide a theory for partial explanation and justification of the method. We demonstrate that the proposed algorithm can produce a solution of the projection problem to high precision on large-scale datasets, and that the algorithm is able to significantly outperform the state-of-the-art methods in terms of runtime (about 6-8 times faster than a commercial software package with respect to CPU time for input vectors with 1 million variables or more). We further illustrate the effectiveness of the proposed algorithm on solving sparse regression in a bioinformatics problem. Empirical results on the GWAS dataset (with 1,500,000 single-nucleotide polymorphisms) show that, when using the proposed method to accelerate the Projected Quasi-Newton (PQN) method, the accelerated PQN algorithm is able to handle huge-scale regression problems and is more efficient (about 3-6 times faster) than the current state-of-the-art methods.
accept
This paper poses a new method for projecting vectors onto the k-capped simplex. This method reformulates the quadratic program into a form that can harness Newton's method, a second-order iterative optimization algorithm, to solve the projection problem. The key novelty is using a second-order method to iteratively solve for a minimum, as opposed to sorting-based methods with a cubic complexity. Out of the 4 reviewers, 3 support accepting the manuscript. Detailed technical comments are provided in the reviews, and the authors responded to these reviews in detail and appropriately. One reviewer suggests rejecting the paper based on the fact that the manuscript has typos and on the opinion that some elements of the manuscript would be better moved to the supplement. In essence, I think the low score by that reviewer is not well justified, and I have excluded it from my own assessment of this manuscript. What remains are three reviewers who support the paper quite strongly overall (scores 7, 7, 6). I therefore recommend accepting this manuscript.
val
[ "Mo18OuvytBK", "hAhZxRLUZvw", "igKnr1GckX", "2DT79LV7mK7", "jLrYtG0wWOK", "DQ_M8h_sw4E", "7vDceYLGPmi", "uRfLrjIBrbe", "1Membaysg8h", "cEdTt9ECmjL", "Zl7Mr2fx5J0", "GCvP7RTuFxC" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper poses a new method for projecting vectors onto the k-capped simplex. This method reformulates the quadratic program into a form that can harness Newton method, a second-order iterative optimization algorithm, to solve the projection goal. The key novelty is using a second-order method to iteratively sol...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2021_5af9FHClUZu", "igKnr1GckX", "2DT79LV7mK7", "jLrYtG0wWOK", "DQ_M8h_sw4E", "Mo18OuvytBK", "GCvP7RTuFxC", "Zl7Mr2fx5J0", "cEdTt9ECmjL", "nips_2021_5af9FHClUZu", "nips_2021_5af9FHClUZu", "nips_2021_5af9FHClUZu" ]
nips_2021_-8QSntMuqBV
The Many Faces of Adversarial Risk
Muni Sreenivas Pydi, Varun Jog
accept
All the reviewers agreed that this paper provides important contributions in the area of robust learning. I recommend that the authors incorporate all the comments made by the reviewers in the updated version of the paper. The reviews have indeed brought up many important points.
train
[ "ifDjGh3O_P5", "K7oMaSc3hsF", "ezRkNGXFucK", "Do2dZ9k4d2O", "B2O8YDsAWDx", "qxNouhZ8tvw", "7y-ksMYQD8D", "tK_VwP3-Nt_", "A5_Vg0FM--h", "hM6M0zYEaB7", "pWVs8HyJZa7", "9Qu-zgQP41e", "F7P44pYKdES", "AWpNyG00PFg", "qAIGTWAaSqA", "ChyHjvp4ZKA", "bs74ae4hbjU", "xs2jA9UUmCw", "r2WkzeASS...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "...
[ "The paper discusses different definitions of adversarial risk that have been used in various papers throughout the years, where, by using tools from optimal transport theory, robust statistics, functional analysis, and game theory, these definitions are viewed under a different lens, and in several ways new connec...
[ 6, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "nips_2021_-8QSntMuqBV", "B2O8YDsAWDx", "qxNouhZ8tvw", "tK_VwP3-Nt_", "xs2jA9UUmCw", "bs74ae4hbjU", "nips_2021_-8QSntMuqBV", "hM6M0zYEaB7", "nips_2021_-8QSntMuqBV", "F7P44pYKdES", "9Qu-zgQP41e", "Y6LuaXdUxpO", "AWpNyG00PFg", "qAIGTWAaSqA", "r2WkzeASSAs", "nips_2021_-8QSntMuqBV", "SUS...