paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_xIAxm1b4pWc | Improving Sentiment Classification Using 0-Shot Generated Labels for Custom Transformer Embeddings | We present an approach to improve sentiment classification for transformers (based on BERT and DistilBERT) using additional embeddings to represent emotion inputs. We used HuggingFace's 0-shot prediction pipeline to generate probabilities of whether emotions apply to a given sample. We generated 0-shot probabilities for 1.6 million samples from a sentiment classification dataset and a smaller sentiment airline dataset using 63 emotions. Then we added custom tokens to BERT's embeddings and tokenizers representing various levels of emotion for each predicted emotion. Finally, depending on the probability of each emotion, the respective custom token representing that level was prepended to the text input of the model to process and train for classification. We additionally test direct classification layer addition of emotion inputs and an ensemble of BERT and DistilBERT models both using emotion inputs achieving a modest increase in sentiment prediction accuracy. Our results show modest improvement in all cases over the original model for both BERT and DistilBERT tested with added emotion inputs generated from 0-shot pretrained models. | Reject | The paper proposes to incorporate unsupervisedly extracted emotion-related tokens/embeddings to improve sentiment classifiers trained on top of BERT. The strengths of the paper, as identified by reviewers, are in the importance of the problem, a relatively easy-to-reproduce method, and a clear write-up. However, all the reviewers identify several major weaknesses, including the lack of a clear research question, unclear contribution, limited novelty of the method, missing baselines, and relatively small gains in the downstream task of sentiment analysis. In sum, all the reviewers agree that the draft is not yet ready for publication. | train | [
"khQHlAoiGIc",
"WhFl9g1TIsS",
"A-lTOJ7cMpL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary: \n\nThis works proposes a sentiment classification approach for transformer-based models that employs additional embeddings to represent emotion inputs. These additional emotion inputs are generated using pre-trained transformer... | [
3,
3,
3
] | [
4,
3,
5
] | [
"iclr_2022_xIAxm1b4pWc",
"iclr_2022_xIAxm1b4pWc",
"iclr_2022_xIAxm1b4pWc"
] |
iclr_2022__dXmN3FV--0 | Lottery Ticket Structured Node Pruning for Tabular Datasets | In this paper we presented two pruning approaches on tabular neural networks based on the lottery ticket hypothesis that went beyond masking nodes by resizing the models accordingly. We showed top performing models in 6 of 8 datasets tested in terms of F1/RMSE. We also showed in 6 of 8 datasets a total reduction of over 85% of nodes and many over 98% reduced with minimal effect on accuracy. In one dataset the model reached a total size of one node per layer while still improving RMSE compared to the larger model used for pruning. We presented results for two approaches, iterative pruning using two styles, and oneshot pruning. Iterative pruning gradually reduces nodes in each layer based on norm pruning until we reach the smallest state, while oneshot will prune the model directly to the smallest state. We showed that the iterative approach will obtain the best result more consistently than oneshot. | Reject | ### Summary
The paper demonstrates the applicability of pruning to tabular datasets, which aren't typically explored in the literature on pruning. The work identifies that yes, pruning can indeed be applied to this domain with some success.
### Discussion
#### Strengths
An unconventional domain that, nonetheless, should be studied.
#### Weaknesses
The empirical setup does not include comparisons to baselines or ablations (e.g., different importance metrics).
### Decision
I recommend Reject. Reviewer k3Jq provides a precise and constructive set of criticisms that, if addressed, would make for an interesting and significant piece of work. | train | [
"fBV9kUiBOiV",
"q3OJPiLdCth",
"8BeTQSLwAVi",
"m8yWndCrHma",
"r0LUhIRbENf",
"CgRF3IWmRWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" There is no rebuttal, so I keep my score",
" Since there is no feedback from the authors, I decide to keep my score. ",
"This paper investigates model pruning and the Lottery Ticket Hypothesis in the context of tabular datasets and model training. The authors apply a set of pruning techniques to the tabular n... | [
-1,
-1,
5,
1,
3,
3
] | [
-1,
-1,
4,
5,
3,
3
] | [
"m8yWndCrHma",
"8BeTQSLwAVi",
"iclr_2022__dXmN3FV--0",
"iclr_2022__dXmN3FV--0",
"iclr_2022__dXmN3FV--0",
"iclr_2022__dXmN3FV--0"
] |
iclr_2022_mKsMcL8FfsV | Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles | Pretraining convolutional neural networks via self-supervision, and applying them in transfer learning, is an incredibly fast-growing field that is rapidly and iteratively improving performance across practically all image domains.
Meanwhile, model ensembling is one of the most universally applicable techniques in supervised learning literature and practice, offering a simple solution to reliably improve performance. But how to optimally combine self-supervised models to maximize representation quality has largely remained unaddressed.
In this work, we provide a framework to perform self-supervised model ensembling via a novel method of learning representations directly through gradient descent at inference time.
This technique improves representation quality, as measured by k-nearest neighbors, both on the in-domain dataset and in the transfer setting, with models transferable from the former setting to the latter.
Additionally, this direct learning of features through backpropagation improves representations from even a single model, echoing the improvements found in self-distillation. | Reject | The paper proposes a method to perform self-supervised model ensembling by learning representations directly through gradient descent at inference. The effectiveness is evaluated by k-nearest neighbors accuracy.
The reviewers agreed that the paper studies an important and interesting problem of leveraging model ensembling for self-supervised learning, which could improve both the performance and robustness of the learned representations. However, the reviewers also agreed that there were issues with the soundness of the empirical evaluation, which was a key reason for rejection. | train | [
"Ty67-J_yW50",
"Xj47sq9lEkH",
"QmhlPuRBb35",
"w6f6O5MST3j"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a representation learning method to optimize individual data representation vectors as well as an MLP encoder so that the learned representation vectors can recover the features from multiple pre-trained self-supervised learning models. Experimental studies show that such learning method can ou... | [
5,
5,
3,
5
] | [
4,
3,
4,
4
] | [
"iclr_2022_mKsMcL8FfsV",
"iclr_2022_mKsMcL8FfsV",
"iclr_2022_mKsMcL8FfsV",
"iclr_2022_mKsMcL8FfsV"
] |
iclr_2022_4QUoBU27oXN | Cognitively Inspired Learning of Incremental Drifting Concepts | Humans continually expand their learned knowledge to new domains and learn new concepts without any interference with past learned experiences. In contrast, machine learning models perform poorly in a continual learning setting, where input data distribution changes over time. Inspired by the nervous system learning mechanisms, we develop a computational model that enables a deep neural network to learn new concepts and expand its learned knowledge to new domains incrementally in a continual learning setting. We rely on the Parallel Distributed Processing theory to encode abstract concepts in an embedding space in terms of a multimodal distribution. This embedding space is modeled by internal data representations in a hidden network layer. We also leverage the Complementary Learning Systems theory to equip the model with a memory mechanism to overcome catastrophic forgetting through implementing pseudo-rehearsal. Our model can generate pseudo-data points for experience replay and accumulate new experiences to past learned experiences without causing cross-task interference. | Reject | This paper tackles the challenge of continual learning. It approaches the problem by combining a Gaussian Mixture Model (GMM) to model concepts in a latent space and a decoder system to generate new data points for pseudo-rehearsal and maintenance of previous information. When new concepts arrive, the GMM can be updated with rehearsal serving to prevent forgetting. The authors show competitive results on incremental learning of MNIST and FMNIST.
The scores were mostly below threshold, with one above threshold (5,3,5,6). The reviewers generally agreed the approach was interesting and they appreciated the theoretical treatments. However, there were a number of concerns, the central ones being the lack of clarity and the lack of convincing empirical demonstrations of scalability. The authors attempted to address the concerns, but they were not able to show good performance on larger datasets. They suggested this was due to the complexity of the encoding model, but they were unable to demonstrate this concretely. The reviewers' scores did not change, though, and the consensus was that this paper was not quite ready for publication. Given these considerations, and an average final score of 4.75, a decision of reject was reached. | train | [
"Axs8W6DVSgd",
"4RTGz-IhVnH",
"p55iNFmSou2",
"YgZUokXpc9a"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose ICLA, an approach to tackle the problem of incremental and continual learning. By keeping track of a model's \"internal\" representations of data, they identify and overcome the \"drift\" issue (as in continual learning). Particularly, they adopt an encoder-decoder style architec... | [
5,
3,
5,
6
] | [
4,
4,
3,
3
] | [
"iclr_2022_4QUoBU27oXN",
"iclr_2022_4QUoBU27oXN",
"iclr_2022_4QUoBU27oXN",
"iclr_2022_4QUoBU27oXN"
] |
iclr_2022_9HXfisrWl1 | DeepDebug: Fixing Python Bugs Using Stack Traces, Backtranslation, and Code Skeletons | The joint task of bug localization and program repair is an integral part of the software development process. In this work we present DeepDebug, an approach to automated debugging using large, pretrained transformers. We begin by training a bug-creation model on reversed commit data for the purpose of generating synthetic bugs. We apply these synthetic bugs toward two ends. First, we directly train a backtranslation model on all functions from 200K repositories. Next, we focus on 10K repositories for which we can execute tests, and create buggy versions of all functions in those repositories that are covered by passing tests. This provides us with rich debugging information such as stack traces and print statements, which we use to finetune our model which was pretrained on raw source code. Finally, we strengthen all our models by expanding the context window beyond the buggy function itself, and adding a skeleton consisting of that function's parent class, imports, signatures, docstrings, and method bodies, in order of priority. On the QuixBugs benchmark, we increase the total number of fixes found by over 50%, while also decreasing the false positive rate from 35% to 5% and decreasing the timeout from six hours to one minute. On our own benchmark of executable tests, our model fixes 68% of all bugs on its first attempt without using traces, and after adding traces it fixes 75% on first attempt. | Reject | Metareview:
This paper proposes a transformer-based automated program debugger, called DeepDebug. All reviewers agree that the addressed problem is interesting. However, there was a consensus among the reviewers regarding concerns about the novelty and the lack of comparisons with other tools and datasets.
In general, all reviewers consistently gave a score that is below the acceptance threshold. This paper is of interest to the ICLR audience, but its current form is not ready for acceptance.
Summary Of Reasons To Publish:
- Good results on QuixBugs
- Synthetic bug generation
Summary Of Suggested Revisions:
- Comparisons in/with other datasets/tools | train | [
"ZOEzs0MXk0E",
"vMn2INu5r1e",
"U_Y7EDvXUFl",
"8K9b2NajG8c",
"0kWE8r3abnx",
"egju1nXs3_",
"5YYbQq2ig6n",
"NuQqEp_Esus",
"Z61BaoLoTPZ",
"39ii48lv5mo"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for elaboration on the paper's contributions.",
" > Allamanis et al. 2021 wasn't out when we completed this work.\n\nJust to clarify, that paper has been online since May, 2021, so I wasn't trying to ask for something unreasonable. Since your submission is closely related to that work, I believe it is ap... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"egju1nXs3_",
"U_Y7EDvXUFl",
"5YYbQq2ig6n",
"NuQqEp_Esus",
"39ii48lv5mo",
"Z61BaoLoTPZ",
"iclr_2022_9HXfisrWl1",
"iclr_2022_9HXfisrWl1",
"iclr_2022_9HXfisrWl1",
"iclr_2022_9HXfisrWl1"
] |
iclr_2022_K9KiBYAthi9 | DMSANET: DUAL MULTI SCALE ATTENTION NETWORK | Attention mechanism of late has been quite popular in the computer vision community. A lot of work has been done to improve the performance of the network,
although almost always it results in increased computational complexity. In this paper, we propose a new attention module that not only achieves the best performance
but also has fewer parameters than most existing models. Our attention
module can easily be integrated with other convolutional neural networks because
of its lightweight nature. The proposed network named Dual Multi Scale Attention
Network (DMSANet) is comprised of two parts: the first part is used to extract
features at various scales and aggregate them, the second part uses spatial and
channel attention modules in parallel to adaptively integrate local features with
their global dependencies. We benchmark our network performance for Image
Classification on ImageNet dataset, Object Detection and Instance Segmentation
both on the MS COCO dataset. | Reject | This paper proposes to improve base CNN models with a dual multi-scale attention module. To achieve better feature representational ability, the authors consider the multi-scale mechanism along both the channel and spatial dimensions. The proposed method has been verified on several benchmarks, including ImageNet and MS COCO. However, all reviewers recommend rejecting this paper because the work lacks novelty, the results are suspicious, and the writing is poor. No responses were submitted by the authors to address the reviewers' concerns. | val | [
"3RddyLH2-P_",
"AMJW4tuAvF8",
"VIuifvJngVe",
"0v1upQ4OfkM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors proposed a new attention module, named Dual Multi-Scale Attention. Three commonly used methods are combined: multi-scale, channel-attention, and position attention. By introducing the new module (which is a simple combination without any novelty) to ResNet, it SIGNIFICANTLY improves the ... | [
1,
1,
3,
3
] | [
5,
4,
3,
5
] | [
"iclr_2022_K9KiBYAthi9",
"iclr_2022_K9KiBYAthi9",
"iclr_2022_K9KiBYAthi9",
"iclr_2022_K9KiBYAthi9"
] |
iclr_2022_m5EBN92vjN | AASEG: ATTENTION AWARE NETWORK FOR REAL TIME SEMANTIC SEGMENTATION | In this paper, we present a new network named Attention Aware Network (AASeg)
for real time semantic image segmentation. Our network incorporates spatial and
channel information using Spatial Attention (SA) and Channel Attention (CA)
modules respectively. It also uses dense local multi-scale context information
using Multi Scale Context (MSC) module. The feature maps are concatenated
individually to produce the final segmentation map. We demonstrate the effectiveness of our method using a comprehensive analysis, quantitative experimental
results and ablation study using Cityscapes, ADE20K and Camvid datasets. Our
network performs better than most previous architectures with a 74.4% Mean IOU
on Cityscapes test dataset while running at 202.7 FPS. | Reject | All reviewers recommended reject, and there were no responses from authors. | train | [
"ICw8ulgSWs4",
"P4KE4ADX7Oj",
"WpBes0go01Z",
"kEFkocuT7R"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This manuscript proposes an Attention Aware Network for real-time semantic segmentation. This network is designed from scratch, which contains a spatial attention module, a channel attention module and a multi scale context module. It achieves an impressive tradeoff between accuracy and inference speed on three da... | [
1,
3,
1,
1
] | [
5,
4,
4,
5
] | [
"iclr_2022_m5EBN92vjN",
"iclr_2022_m5EBN92vjN",
"iclr_2022_m5EBN92vjN",
"iclr_2022_m5EBN92vjN"
] |
iclr_2022_PyBp6nFfzuj | UNCERTAINTY QUANTIFICATION USING VARIATIONAL INFERENCE FOR BIOMEDICAL IMAGE SEGMENTATION | Deep learning motivated by convolutional neural networks has been highly successful in a range of medical imaging problems like image classification, image
segmentation, image synthesis etc. However for validation and interpretability, not
only do we need the predictions made by the model but also how confident it is
while making those predictions. This is important in safety critical applications
for the people to accept it. In this work, we used an encoder decoder architecture
based on variational inference techniques for segmenting brain tumour images. We
evaluate our work on the publicly available BRATS dataset using Dice Similarity
Coefficient (DSC) and Intersection Over Union (IOU) as the evaluation metrics.
Our model is able to segment brain tumours while taking into account both aleatoric
uncertainty and epistemic uncertainty in a principled Bayesian manner. | Reject | The paper introduces a method for uncertainty quantification for medical applications, which quantifies both aleatoric and epistemic components.
The paper initially received three strong reject recommendations. The main limitations pointed out by reviewers relate to the limited contributions (either methodological or applicative and clinical), the lack of positioning with respect to related works, the presentation needing improvement and the lack of experimental comparison with respect to recent relevant baselines.
No rebuttal was provided.
The AC carefully read the submission and agrees that the paper is premature for publication in the current form. Therefore, the AC recommends rejection. | train | [
"M6PynPdDyGF",
"j2oZu4QbhG8",
"fihsBPxLk5S",
"ymvLcG5y_d9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method to quantify uncertainty in medical imaging, which is an important task for clinical applications, with variational inference. It uses the U-Net architecture and BRATS18 dataset for evaluation. It quantifies uncertainty with 3 methods and evaluates the predictions with Dice score (DSC) a... | [
1,
1,
1,
1
] | [
3,
5,
4,
5
] | [
"iclr_2022_PyBp6nFfzuj",
"iclr_2022_PyBp6nFfzuj",
"iclr_2022_PyBp6nFfzuj",
"iclr_2022_PyBp6nFfzuj"
] |
iclr_2022_Aot3sKdraW | AA-PINN: ATTENTION AUGMENTED PHYSICS INFORMED NEURAL NETWORKS | Physics Informed Neural Networks have been quite successful in modelling the complex nature of fluid flow. Computational Fluid Dynamics using parallel processing
algorithms on GPUs has considerably reduced the time to solve the Navier Stokes
Equations. CFD-based approaches use approximations to make the modelling easier,
but this comes at the cost of decreased accuracy. In this paper, we propose an
attention based network architecture named AA-PINN to model PDEs behind fluid
flow. We use a combination of channel and spatial attention module. We propose a
novel loss function which is more robust in handling the initial as well as boundary
conditions imposed. Using evaluation metrics like RMSE, divergence and thermal
kinetic energy, our network outperforms previous PINNs for modelling Navier
Stokes and Burgers Equation. | Reject | All four reviewers agree that the paper should be rejected in its current form, but make numerous suggestions for improving it. The main points of concern were the motivation of the proposed method, novelty and the quality of the presentation of the work. The authors did not provide a response. The AC agrees with the reviewers and recommends rejecting the paper. | test | [
"bHOEfHBfWgs",
"n6AXwNqIBXN",
"JbBhmRV-j-S",
"mMT1mM164g8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors consider Navier-Stokes equation in 2D and the 1D Burger's equation.\nUsing a PINN approach to model the solution with neural networks, they propose an architecture that alternates spatial and channel attention blocks to convolutional blocks. \n \n\nThe presentation is very poor, experimental details ar... | [
3,
3,
3,
3
] | [
3,
4,
4,
3
] | [
"iclr_2022_Aot3sKdraW",
"iclr_2022_Aot3sKdraW",
"iclr_2022_Aot3sKdraW",
"iclr_2022_Aot3sKdraW"
] |
iclr_2022_TxIXgcP3yp- | Decouple and Reconstruct: Mining Discriminative Features for Cross-domain Object Detection | In recent years, great progress has been witnessed in cross-domain object detection. Most state-of-the-art methods strive to handle the relation between local regions by calibrating cross-channel and spatial information to enable better alignment. They succeed in improving the generalization of the model, but implicitly drive networks to pay more attention to the shared attributes and ignore the domain-specific features, which limits the performance of the algorithm. In order to search for the equilibrium between transferability and discriminability, we propose a novel adaptation framework for cross-domain object detection. Specifically, we adopt a style-aware feature fusion method and design two plug-and-play feature component regularization modules, which reposition the focus of the model on domain-specific features by restructuring the style and content of features. Our key insight is that while it is difficult to extract discriminative features in the target domain, it is feasible to assign the underlying details to the model via feature style transfer. Without bells and whistles, our method significantly boosts the performance of existing Domain Adaptive Faster R-CNN detectors, and achieves state-of-the-art results on several benchmark datasets for cross-domain object detection. | Reject | This paper presents a method for unsupervised domain adaptation, focusing on the object detection problem. Under this framework, the paper proposes modules of domain adaptive instance normalization, global style alignment and local content alignment. The proposed method is evaluated on multiple datasets.
Several reviewers have pointed out that the paper lacks discussion and comparison to related methods. The paper has some merits, but the lack of a proper presentation and the fact that there are not enough experimental results to support all claims in the paper result in a submission that does not meet the bar for ICLR publication. Hence, the current paper is not recommended for publication at ICLR. | train | [
"tnBgp5h-dqt",
"a932sBM889n",
"qSoC9u7-A3",
"KimX2sPaxgQ",
"e-cZ-lROTjL",
"24WVrF4DV16"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Given the absence of any feedback from the authors regarding the raised doubts and questions, as well as, the feedback from fellow referees, I am keeping my initial recommendation, thus considering the current submission below the acceptance bar, despite the\nsome merits that have to be acknowledged. The lack of ... | [
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
2,
4,
3,
3
] | [
"KimX2sPaxgQ",
"e-cZ-lROTjL",
"iclr_2022_TxIXgcP3yp-",
"iclr_2022_TxIXgcP3yp-",
"iclr_2022_TxIXgcP3yp-",
"iclr_2022_TxIXgcP3yp-"
] |
iclr_2022_ZnUHvSyjstv | On the Capacity and Superposition of Minima in Neural Network Loss Function Landscapes | Minima of the loss function landscape of a neural network are locally optimal sets of
weights that extract and process information from the input data to make outcome predictions.
In underparameterised networks, the capacity of the weights may be insufficient to fit all the relevant information.
We demonstrate that different local minima specialise in certain aspects of the learning problem, and process the input
information differently. This effect can be exploited using a meta-network in
which the predictive power from multiple minima of the LFL is combined to produce a better
classifier. With this approach, we can increase the area under the receiver operating characteristic curve
(AUC) by around $20\%$ for a complex learning problem.
We propose a theoretical basis for combining minima and show how a meta-network can
be trained to select the representative that is used for classification of a
specific data item. Finally, we present an analysis of symmetry-equivalent
solutions to machine learning problems, which provides a systematic means to improve the
efficiency of this approach. | Reject | All reviewers agree that the paper is below the acceptance threshold and the authors did not respond to the reviews.
In summary, this is a clear reject. | val | [
"5EyW0ZfEG0n",
"GkgnBAP59_C",
"VdHu-_lpri",
"q14EiUPS4Bj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method to select a subset of local minima of loss function landscape and combine the minima using a second meta-network which learns to select a set of weights for each input data point. The subset of local minima is selected based on two criteria, minima that are distant in classification spa... | [
5,
3,
3,
3
] | [
3,
4,
5,
4
] | [
"iclr_2022_ZnUHvSyjstv",
"iclr_2022_ZnUHvSyjstv",
"iclr_2022_ZnUHvSyjstv",
"iclr_2022_ZnUHvSyjstv"
] |
iclr_2022_rlYiXFdSy70 | Graph-Enhanced Exploration for Goal-oriented Reinforcement Learning | Goal-oriented Reinforcement Learning (GoRL) is a promising approach for scaling up RL techniques on sparse reward environments requiring long horizon planning. Recent works attempt to build a suitable abstraction graph of the environment and enhance GoRL with classical graphical methods such as shortest path searching; however, these approaches mainly focus on either graph construction or agent exploitation, but leave exploration largely unstudied. This paper proposes Graph-enhanced GoRL (G2RL), a new GoRL framework for effective exploration and efficient training based on the state-transition graph. We first introduce the optimal goals for exploration on the graph and then use them as supervised signals to train the goal generator in G2RL in a hindsight manner. Furthermore, we define relevant trajectories of a state based on its graph neighborhood and show that giving high priority to these trajectories would lead to efficient policy learning. In addition to the theoretical results regarding optimal goal generation, our empirical results on standard discrete and continuous control benchmarks show that leveraging the state-transition graph is beneficial for GoRL to learn an effective and informative exploration strategy and outperform the state-of-the-art methods. | Accept (Poster) | This paper proposes a new approach to goal-oriented RL (GoRL) which constructs a state-transition graph from experience and uses this to guide exploration. Compared to prior approaches, this work innovates on (1) how subgoals are selected and (2) how relevant experience is sampled from the replay buffer. This approach outperforms a number of baseline methods across a fairly wide range of environments. The paper also includes extensive ablation and generalization studies.
The reviewers agreed the problem tackled by the paper was important, and were impressed by the empirical evaluation. Several reviewers (6obU and Sjwy) appreciated the theoretical grounding of the method as well, and generally found the approach to the problem compelling, with Reviewer 6obU writing that “defining the optimal goals to explore on the graph by looking one-step (or one-episode) ahead is a very interesting idea” and Reviewer EMRL noting a strength of the paper was a “goal generation method that explicitly considers only having access to a partial graph of the MDP”. The main concerns raised by the reviewers had to do with clarity and the simplicity of the experiments; however, the reviewers felt these were sufficiently addressed by the rebuttal. I agree the approach seems well-motivated and interesting and recommend acceptance as a poster. | train | [
"qVNwhMZ3y0J",
"Uob3VaExlgW",
"V76DL9nhelJ",
"ZIG2IzVPdAc",
"X4pCqMe1JCf",
"veci6iRFE5L",
"mCx6s70SOTj",
"EYtTJg630RG",
"AFF6tyhpiuw",
"O06_-gv1sb",
"SjfyfANwaDt",
"SUqf4EYZxk1",
"gg-f67FnGUM",
"9DwXiDTAxh",
"ugouAzEwJZY",
"2C0TdF2D4Ur",
"hfJSp-uU0q4",
"FTNPtzWrQY"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank again all reviewers for their constructive comments and suggestions. As reviewers also mentioned, our responses, new experiments, and revised version have adequately addressed all the questions raised by the reviewers . We hope the reviewers will champion our paper for acceptance.",
" Thanks ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"2C0TdF2D4Ur",
"ZIG2IzVPdAc",
"X4pCqMe1JCf",
"9DwXiDTAxh",
"iclr_2022_rlYiXFdSy70",
"O06_-gv1sb",
"O06_-gv1sb",
"X4pCqMe1JCf",
"iclr_2022_rlYiXFdSy70",
"SjfyfANwaDt",
"AFF6tyhpiuw",
"AFF6tyhpiuw",
"FTNPtzWrQY",
"hfJSp-uU0q4",
"hfJSp-uU0q4",
"iclr_2022_rlYiXFdSy70",
"iclr_2022_rlYiXFd... |
iclr_2022_KdcLdLuIjQT | Goal Randomization for Playing Text-based Games without a Reward Function | Playing text-based games requires language understanding and sequential decision making. The objective of a reinforcement learning agent is to behave so as to maximise the sum of a suitable scalar reward function. In contrast to current RL methods, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. We propose a goal randomization method that uses random basic goals to train a policy in the absence of environment rewards. Specifically, through simple but effective goal generation, our method learns to continuously propose challenging -- yet temporal and achievable -- goals that allow the agent to learn general skills for acting in a new environment, independent of the task to be solved. In a variety of text-based games, we show that this simple method results in competitive performance for agents. We also show that our method can learn policies that generalize across different text-based games. Furthermore, we demonstrate the interesting result that our method works better than GATA, one of the state-of-the-art agents that uses environment rewards, for some text-based games. | Reject | This paper proposes intrinsic rewards to train agents without environment rewards in text-based games. The key contribution is a goal generation method that samples random goals from a set of valid goals in natural language, which are obtained based on commonsense rules. Reviewers generally agree that the proposed method is intuitive and simple to implement, and appreciate the new results added during discussion.
However, there are two main concerns: 1) the goal creation process is largely rule-based and task-specific, therefore it's unclear how well this method would generalize to other tasks; 2) related to 1), the generated goals carry a significant amount of domain knowledge about the task that is not available to the baselines, making the comparison a bit unfair. A future submission would benefit from demonstrating the generalizability of the proposed approach, e.g., by using more generic resources such as game metadata or generic knowledge/commonsense bases. | train | [
"Pzstjeti-vK",
"R2w5rH1vig6",
"zyGJbFbO23t",
"Mw5FOMQvuRf",
"FimFvlIIvF",
"blITI0ZZ9lJ",
"NT5OM_0bSrr",
"iwnOpfLzMw"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces GoalRand, an algorithm for playing text-based games in the absence of extrinsic reward. In particular, it uses generated goals to train a goal-conditioned reinforcement learning agent. These goals are generated by extracting factual information from the agent's knowledge graph. Given a goal s... | [
5,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_KdcLdLuIjQT",
"zyGJbFbO23t",
"Mw5FOMQvuRf",
"Pzstjeti-vK",
"iwnOpfLzMw",
"NT5OM_0bSrr",
"iclr_2022_KdcLdLuIjQT",
"iclr_2022_KdcLdLuIjQT"
] |
iclr_2022_UxTR9Z2DW8R | Reinforcement Learning State Estimation for High-Dimensional Nonlinear Systems | In high-dimensional nonlinear systems such as fluid flows, the design of state estimators such as Kalman filters relies on a reduced-order model (ROM) of the dynamics. However, ROMs are prone to large errors, which negatively affects the performance of the estimator. Here, we introduce the reinforcement learning reduced-order estimator (RL-ROE), a ROM-based estimator in which the data assimilation feedback term is given by a nonlinear stochastic policy trained through reinforcement learning. The flexibility of the nonlinear policy enables the RL-ROE to compensate for errors of the ROM, while still taking advantage of the imperfect knowledge of the dynamics. We show that the trained RL-ROE is able to outperform a Kalman filter designed using the same ROM, and displays robust estimation performance with respect to different reference trajectories and initial state estimates. | Reject | The paper received splitted scores, 3,3,5,8. While reviewers thought that using reinforcement learning for state estimation is interesting, they are not convinced if the proposed technique makes sense. The paper assumes that the high-dimensional state vectors are known, in which case one may use supervised learning directly. Also there's a debate regarding suboptimality of stationarity policies, which made the reviewers confusing. Although the authors try to argue that this is mostly a clarity issue, the reviewers were not convinced and didn't change their decisions. | train | [
"OoNAu4Rzd9",
"PZXcQvpshFp",
"AYf9Snr_fN-",
"tc2DxOy5n-l",
"NL8r4TC3VTR",
"EAbk1IAhp9Z",
"I0eqI9JoxcA",
"rC6mG5zUMyA",
"MB3vIvIFc6v",
"fUVnjgvlG1f",
"KPmusH9YStc",
"1PSH9rahmCS",
"dBdFKvlY7A",
"CVUMMwBT95B",
"aVXXqzjcVK7",
"FHZpNR1eWeD"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThank you for your patience with us. We discussed again among ourselves your concern, and we think that the misunderstanding stems from the following:\n\n1- We perhaps did not emphasize enough that in solving for an optimal policy, we are in fact solving for the whole trajectory from k=0, to k=K... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"PZXcQvpshFp",
"AYf9Snr_fN-",
"tc2DxOy5n-l",
"NL8r4TC3VTR",
"EAbk1IAhp9Z",
"1PSH9rahmCS",
"rC6mG5zUMyA",
"iclr_2022_UxTR9Z2DW8R",
"FHZpNR1eWeD",
"dBdFKvlY7A",
"aVXXqzjcVK7",
"CVUMMwBT95B",
"iclr_2022_UxTR9Z2DW8R",
"iclr_2022_UxTR9Z2DW8R",
"iclr_2022_UxTR9Z2DW8R",
"iclr_2022_UxTR9Z2DW8R... |
iclr_2022_Cy0n0WCvLPU | Topic Aware Neural Language Model: Domain Adaptation of Unconditional Text Generation Models | Our goal is to adapt pre-trained neural language models (NLMs) to the unconditional text generation task within the target domain.
Because many Transformer based NLMs are trained on more massive and heterogeneous corpora than this target domain,
the difference between these corpora and the target domain raises the question of whether these NLMs can provide their benefits to this task even after fine-tuning.
To tackle these problems, our approach focuses on topics to bridge the semantic gap between these corpora and the target domain corpus,
and relates them at a topic level.
That is, this approach injects topics into these NLMs and trains them via topics behind these dependencies over segments,
introducing both topic alignment (TA) and training tasks (TDM and TEM),
while previous Transformer based NLMs are better at learning from the predefined segment length such as the context.
Experiments show that this approach contributes to resolving the imbalance between these corpora,
and can tailor previous pre-trained NLMs to generate coherent and semantically valid text reflecting a given small fine-tuning corpus. | Reject | This paper proposes topic-aware NLMs, an approach to adapt pre-trained transformer based language models to a specific domain/topic. The work seems novel and interesting, and the paper presents good results. However, it bypasses many technical details that would enable a good understanding and appreciation of the work. In the responses, the authors clarify some of these issues, however, it would be beneficial to re-review the paper given the suggested changes. In addition to clarity, lack of detailed experimental analysis is an issue, and it would be useful to integrate the analysis suggested in the reviews. | train | [
"rw00HhQASl",
"YCb7cWamh8Y",
"BFIcUTf2-o_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new topic-aware transformer-based language model with better domain adaptation ability. The paper introduces topic alignment to the NLMs. The paper treats the topic as an additional latent variable in the NLMs. The authors test their models on two datasets. The contribution is to introduce top... | [
5,
5,
1
] | [
3,
2,
2
] | [
"iclr_2022_Cy0n0WCvLPU",
"iclr_2022_Cy0n0WCvLPU",
"iclr_2022_Cy0n0WCvLPU"
] |
iclr_2022__MO2xzOZXv | Count-GNN: Graph Neural Networks for Subgraph Isomorphism Counting | The prevalence of graph structures has attracted a surge of research interest in graph data. As many graph-based tasks exploit recurring subgraph patterns on graphs, subgraph isomorphism counting becomes an important problem. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational cost due to the NP-completeness of the problem. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query subgraph and the input graph, in order to predict the number of query subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing mechanism that receives and aggregates messages on nodes. While effective on node-oriented tasks, they become inadequate in complex structure matching for isomorphism counting. Moreover, given an input graph, the space of query subgraphs is enormous, and thus expecting a single model to fit the diverse range of query subgraphs is unrealistic. In this paper, we propose a novel GNN called Count-GNN for subgraph isomorphism counting, to deal with the above challenges at two levels. At the edge level, we resort to an edge-centric message passing scheme, where messages on edges are propagated and aggregated based on the edge adjacency. By treating edges as first-class citizens, Count-GNN is able to preserve finer-grained structural information, given that an edge is an atomic unit of encoding graph structures. At the graph level, we modulate the graph representation conditioned on the query subgraph, so that the model can be adapted to each unique query for better matching with the input graph. To demonstrate the effectiveness and efficiency of Count-GNN, we conduct extensive experiments on a number of benchmark graphs. 
Results show that Count-GNN achieves superior performance in comparison to the state-of-the-art baselines. | Reject | This work proposes a graph neural network (GNN) model for subgraph isomorphism counting. The work cannot be accepted in its current form. The reviewers provided good comments to authors that can improve their work.
Strengths/Weaknesses:
(+) Experimental results are reasonably detailed with an extensive ablation study.
(-) Experimental results are limited to two synthetic and one small MUTAG dataset.
(-) The reviewers found the work to be missing a motivation. While the authors are correct that subgraph isomorphism frequencies and counts are extremely important in many areas of science, including graph representation learning, the reviewers found it unclear why GNNs are the right tool for the task (rather than a specialized randomized algorithm, for instance).
(-) Regarding specialized algorithms, there are state-of-the-art randomized algorithms, cited in the paper, that are not backtracking-based search, such as Motivo (Bressan et al., 2020) and PSRW (Wang et al., 2014) and their derivatives (as one of the reviewers pointed out).
(-) The reviewers were not particularly excited with the technical depth of the paper.
The work definitely has potential. I would encourage the authors to consider better motivating the work. | train | [
"npB7PfhT5Fs",
"0kSy1tviva",
"k1QLvAnqc_X",
"r0Yo30mwyz1",
"P2C67ztUN6s",
"sqcJWxIDWRM",
"pa0KzDBX5Wi",
"MdVusBRitSu",
"X5ZNJ6gY9ip",
"SXdwbOdkuGz",
"rAyrKecb_C3",
"S3Y4Mw0P6on"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for responding to my comments. I believe that some concrete example or case study where the model facilitates counting subgraphs within an acceptable tolerance is needed to strengthen the paper. I will maintain my current score.",
" Thank you for addressing my comments.\nI maintain my score ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
8,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4,
4
] | [
"r0Yo30mwyz1",
"P2C67ztUN6s",
"S3Y4Mw0P6on",
"rAyrKecb_C3",
"SXdwbOdkuGz",
"X5ZNJ6gY9ip",
"MdVusBRitSu",
"iclr_2022__MO2xzOZXv",
"iclr_2022__MO2xzOZXv",
"iclr_2022__MO2xzOZXv",
"iclr_2022__MO2xzOZXv",
"iclr_2022__MO2xzOZXv"
] |
iclr_2022_VZAgsLaP3or | Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation | In recent years, the adversarial vulnerability of deep neural networks (DNNs) has raised increasing attention.
Among all the threat models, no-box attacks are the most practical but extremely challenging, since they neither rely on any knowledge of the target model or a similar substitute model, nor access the dataset for training a new substitute model. Although a recent method has attempted such an attack in a loose sense, its performance is not good enough and the computational overhead of training is high.
In this paper, we move a step forward and show the existence of a \textbf{training-free} adversarial perturbation under the no-box threat model, which can be successfully used to attack different DNNs in real-time.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features and plays a crucial role in classification, we attack an image mainly by manipulating its frequency components. Specifically, the perturbation combines the suppression of the original HFC with the addition of noisy HFC.
We empirically and experimentally analyze the requirements of effective noisy HFC and show that it should be regionally homogeneous, repeating and dense.
Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our proposed no-box method. It attacks ten well-known models with a success rate of \textbf{98.13\%} on average, which outperforms state-of-the-art no-box attacks by \textbf{29.39\%}. Furthermore, our method is even competitive with mainstream transfer-based black-box attacks. Our code is available in our appendix. | Reject | This paper proposed a no-box attack method, with mixed reviews. While I appreciate the authors' efforts, which attempted to address the questions raised by the reviewers, they were not sufficient to remove the concerns. Based on the reviewers' comments and author responses, I won't be able to recommend the paper to be accepted in its current form. | train | [
"QKOs2RnW2WV",
"HvBt2T8VYgs",
"D1EojSdPu3O",
"zrGvB-7Bcxh",
"NloaKpYIvC3",
"xCAB9RY3-L",
"o36Wx8Sj-Wh",
"6hA070kYXtN",
"UdXosolwsNl",
"2O2cPImZH2G",
"6frXc8U_71j",
"ACnQVQr8PP",
"d4Mxz--DaRt",
"A5DJKgSnpsN",
"GnogNoHMsAy",
"U2ouh1qTVNa",
"y98Lw2yYSq",
"o83tLMmL6sv",
"-ShRnCPLdi",... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors have proposed an attack based on the modification of high-frequency components of an image through different types of masks such as circular, square, and rhombus. The experiments are performed using multiple networks on a subset of the ImageNet database. It is well known that the high-frequency compon... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_VZAgsLaP3or",
"D1EojSdPu3O",
"A5DJKgSnpsN",
"UdXosolwsNl",
"o36Wx8Sj-Wh",
"6frXc8U_71j",
"2O2cPImZH2G",
"iclr_2022_VZAgsLaP3or",
"U2ouh1qTVNa",
"QKOs2RnW2WV",
"-ShRnCPLdi",
"d4Mxz--DaRt",
"iclr_2022_VZAgsLaP3or",
"o83tLMmL6sv",
"iclr_2022_VZAgsLaP3or",
"y98Lw2yYSq",
"6hA07... |
iclr_2022_LZVXOnSrD0Y | Pareto Frontier Approximation Network (PA-Net) Applied to Multi-objective TSP | Multi-objective optimization is used in various areas of robotics, such as control and planning. Their solutions are dependent on multiple objective functions, which can be conflicting in nature. In such cases, the optimality is defined in terms of Pareto optimality. A set of these Pareto optimal solutions in the objective space forms a Pareto front (or frontier). Each solution has its own trade-off. For instance, the travelling salesman problem (TSP) is used in robotics for task/resource allocation. Often this allocation is influenced by multiple objective functions and is solved using the multi-objective travelling salesman problem (MOTSP). In this work, we present PA-Net, a network that generates good approximations of the Pareto front for multi-objective optimization problems. Our training framework is applicable to other multi-objective optimization problems; however, in this work, we focus on solving MOTSP. Firstly, MOTSP is converted into a constrained optimization problem. We then train our network to solve this constrained problem using the Lagrangian relaxation and policy gradient. With PA-Net we are able to generate better quality Pareto fronts with fast inference times as compared to other learning-based and classical methods. Finally, we present the application of PA-Net to finding the optimal visiting order in coverage planning. | Reject | The paper introduces a method to approximate a Pareto front for multi-objective TSP. The proposed method first converts the MOTSP into a set of constrained single-objective optimization problems with different preference-based constraints. Then it builds a modified TSP-Net with preference augmentation to solve all the constrained problems. The method is empirically compared with multi-objective genetic algorithms and a DRL-based approach, showing it to be competitive in the approximation of the Pareto front.
After reading the authors' feedback and discussing their concerns, the reviewers reached a consensus that the paper is still not ready for publication. The authors need to improve their experimental evaluation in order to make it more robust and fair. | train | [
"x7eSCzMOAQq",
"Q1mhuBYbBO_",
"WduvUgUewcA",
"k5iCWTcoTAk",
"3kMpvusZuTL",
"ZAWand47uw1",
"1cKRqsPVx7e",
"o3Ll0-SeZlD"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a Pareto frontier approximation network (PA-Net) to learn the Pareto frontier for multi-objective traveling salesman problem (MOTSP). The proposed method first converts the MOTSP into a set of constrained single-objective optimization problems with different preference-based constraints. Then it... | [
3,
-1,
-1,
-1,
-1,
6,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2022_LZVXOnSrD0Y",
"o3Ll0-SeZlD",
"x7eSCzMOAQq",
"1cKRqsPVx7e",
"ZAWand47uw1",
"iclr_2022_LZVXOnSrD0Y",
"iclr_2022_LZVXOnSrD0Y",
"iclr_2022_LZVXOnSrD0Y"
] |
iclr_2022_VNdFPD5wqjh | Generalizable Person Re-identification Without Demographics | Generalizable Person Re-Identification (DG ReID) aims to learn ready-to-use cross-domain representations for direct cross-data evaluation. It typically fully exploits demographic information, e.g., domain information and camera IDs, to learn features that are domain-invariant. However, the protected demographic features are not often accessible due to privacy and regulation issues. Under this more realistic setting, distributionally robust optimization (DRO) provides a promising way for learning robust models that are able to perform well on a collection of possible data distributions (the ``uncertainty set”) without demographics. However, the convex condition of KL DRO may not hold for overparameterized neural networks, such that applying KL DRO often fails to generalize under distribution shifts in real scenarios. Instead, by applying the change-of-measure technique and the analytical solution of KL DRO, we propose a simple yet efficient approach, Unit DRO. Unit DRO minimizes the loss over a reweighted dataset where important samples (i.e. samples on which models perform poorly) will be upweighted and others will be downweighted. Empirical results show that Unit DRO achieves superior performance on large-scale DG ReID and cross-domain ReID benchmarks compared to standard baselines. | Reject | This paper receives recommendations from four experts who are actively working on the Re-ID problem. Two reviewers give more positive comments, while the comments of the remaining reviewers are relatively negative. The paper aims to achieve generalizable Re-ID without demographics (DGWD-ReID). However, two reviewers, umsc and Cuay, pointed out that the claimed novelty of such a problem setting is oversold, and some relevant works are not cited and compared in the original submission. 
More specifically, the concept of DGWD-ReID in the paper is not fully convincing, and reviewers umsc and Cuay both think the DGWD-ReID setting seems to be a special case of DG-ReID, and may not need to be considered totally independently. Though the reviewer eLS6 gives a more positive recommendation, an issue is still raised by this reviewer - the method is general and does not have much specific design for Re-ID, but the authors only used Re-ID datasets for evaluation. In experiments, the authors used powerful techniques/tricks in their backbone model to improve the performance. This also makes it difficult for readers to assess which part of their method is really working. It would be good if the authors could further improve the paper. | train | [
"KQ8SlccwjWZ",
"LPog74hlzPm",
"7g1bar_RzWK",
"50hLcGHeczG",
"Q91gG0V4UXe",
"nrMOAylsEUD",
"nq2UdlPUU4t",
"5GBZiqnTUb5",
"valU6Ep_OIp",
"_0538TiF6OS",
"jlFv6tZk5Fg",
"CznfuQHQ7VJ",
"uolBlAWKfZW",
"ZhHNpgZfXvE",
"XzxPKyJt212",
"D1hxeIz0zkN"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your constructive comments! We conduct experiments in general DG benchmarks. Comprehensive experiments show that, for general DG tasks, UnitDRO consistently outperforms all the baselines by a considerable margin even without demographics.\n\n\n| Domain | | | PACS | ... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"5GBZiqnTUb5",
"50hLcGHeczG",
"50hLcGHeczG",
"nq2UdlPUU4t",
"iclr_2022_VNdFPD5wqjh",
"5GBZiqnTUb5",
"uolBlAWKfZW",
"_0538TiF6OS",
"jlFv6tZk5Fg",
"XzxPKyJt212",
"D1hxeIz0zkN",
"ZhHNpgZfXvE",
"Q91gG0V4UXe",
"iclr_2022_VNdFPD5wqjh",
"iclr_2022_VNdFPD5wqjh",
"iclr_2022_VNdFPD5wqjh"
] |
iclr_2022_MXrIVw-F_a4 | FLOAT: FAST LEARNABLE ONCE-FOR-ALL ADVERSARIAL TRAINING FOR TUNABLE TRADE-OFF BETWEEN ACCURACY AND ROBUSTNESS | Training a model that can be robust against adversarially-perturbed images with-out compromising accuracy on clean-images has proven to be challenging. Recent research has tried to resolve this issue by incorporating an additional layer after each batch-normalization layer in a network, that implements feature-wise linear modulation (FiLM). These extra layers enable in-situ calibration of a trained model, allowing the user to configure the desired priority between robustness and clean-image performance after deployment. However, these extra layers significantly increase training time, parameter count, and add latency which can prove costly for time or memory constrained applications. In this paper, we present Fast Learnable Once-for-all Adversarial Training (FLOAT) which transforms the weight tensors without using extra layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to a standard adversarial training. In particular, we add configurable scaled noise to the weight tensors that enables a ‘continuous’ trade-off between clean and adversarial performance. Additionally, we extend FLOAT to slimmable neural networks to enable a three-way in-situ trade-off between robustness, accuracy, and complexity. Extensive experiments show that FLOAT can yield state-of-the-art performance improving both clean and perturbed image classification by up to ∼6.5% and ∼14.5%, respectively, while requiring up to 1.47x fewer parameters with similar hyperparameter settings compared to FiLM-based alternatives. | Reject | This paper presents Fast Learnable Once-for-all Adversarial Training (FLOAT) which transforms the weight tensors without using extra layers, thereby incurring no significant increase in parameter count, training time, or network latency compared to a standard adversarial training. 
Compared to existing SOTA, FLOAT is better in many metrics including training time, training parameters and hyperparameters, storage cost, potential inference latency, speed, and task accuracy.
This paper received highly mixed scores of 8-6-5-3. During the private discussion, Reviewer DN3 stated that she/he was willing to raise the score from 3 to 5. Although I am not sure why the reviewer did not actually make the change, I consider the rating increase as having happened (i.e., "factually" 8-6-5-5).
After reading this paper, AC agrees that FLOAT solved an important limitation of the previous state-of-the-art method OAT: reducing the FiLM overhead by using a more efficient model conditioning method, i.e. adding configurable scaled noise on model weights. The lukewarm part is that the method is no doubt heavily based on the OAT paper. Even if one can argue the "method as a whole" is novel, the contributions (though interesting) remain slightly incremental.
In view of the above, AC currently places this paper as a borderline rejection. | test | [
"SxBnCQYM682",
"pAnlSZ5yvzU",
"h2WT1QZHdp",
"yYZb15mrh5i",
"qPkhXDLjMb",
"kahv8EfpBvE",
"f4xBliYV2BU",
"-CzRwdDwq7M",
"BXVZXzc7toU",
"pFn1vMhuWIw",
"P22UcZP5ftI",
"zTAnnYQDJDD",
"lvkMc2qq-L0"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper argues feature-wise linear modulation (FiLM) methods to perform in-situ calibration are too computationally expensive. Motivated by this, the authors propose the FLOAT method which adds scaled binary noise to the weight tensor instead of using extra layers. This paper also extends FLOAT to balance the 3... | [
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2022_MXrIVw-F_a4",
"iclr_2022_MXrIVw-F_a4",
"SxBnCQYM682",
"qPkhXDLjMb",
"iclr_2022_MXrIVw-F_a4",
"-CzRwdDwq7M",
"zTAnnYQDJDD",
"qPkhXDLjMb",
"h2WT1QZHdp",
"f4xBliYV2BU",
"lvkMc2qq-L0",
"iclr_2022_MXrIVw-F_a4",
"iclr_2022_MXrIVw-F_a4"
] |
iclr_2022_8Z7-NG11HY | Constrained Density Matching and Modeling for Effective Contextualized Alignment | Multilingual representations pre-trained with monolingual data offer unmatched task performances between languages. While this has been tackled through the lens of contextualized alignments, these techniques require large parallel data, thereby leaving under-represented language communities behind. In this work, we analyze the limitations according to which previous alignments become very resource-intensive, \emph{viz.,} (i) the inability to sufficiently leverage data and (ii) that alignments are not trained properly. To address them, we present density-based approaches to perform alignments, and we complement them with our validation criteria accounting for downstream task performances. Our experiments encompass 16 alignment techniques (including ours), evaluated across 6 language pairs, synthetic and 4 NLP tasks. We demonstrate that our solutions are particularly effective in the scenarios of limited and no parallel data. More importantly, we show, both theoretically and empirically, the advantages of our bootstrapping procedures, by which unsupervised approaches rival supervised counterparts.
 | Reject | The paper investigates various methods for cross-lingual alignment of contextual word embeddings. It also introduces a new method based on density matching via normalizing flows to align contextual representations in two languages. The paper has many strengths, but reviewers also identify several major weaknesses, including the lack of strong baselines and the lack of extrinsic evaluation. These concerns were not addressed by the authors during the discussion period. | train | [
"HQVUCDxVPiZ",
"kzctxg0FrZF",
"vMbCINlzDnJ",
"5idJn57PCPt"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThis paper propose methods for contextualized vector alignment, using normalizing flow and earth mover distance for supervised and unsupervised alignment respsectively. \nExperiments demonstrate the effectiveness of the proposed approaches. ### Strengths \n- This is a well-written paper in general, with reasona... | [
5,
6,
3,
8
] | [
4,
3,
4,
2
] | [
"iclr_2022_8Z7-NG11HY",
"iclr_2022_8Z7-NG11HY",
"iclr_2022_8Z7-NG11HY",
"iclr_2022_8Z7-NG11HY"
] |
iclr_2022_Ub1BQTKiwqg | Learning sparse DNNs with soft thresholding of weights during training | This paper proposes a new and simple way of training sparse neural networks. Our method is based on a differentiation of the forward and backward paths: the weights in the forward path are a thresholded version of the weights maintained in the backward path. This decoupling allows for micro-updates, produced by gradient descent, to stack up, leading to the possible re-activation of weights that were set to zero in earlier training steps. At the end of training, links with zero weights are pruned away.
Additional critical specificities of our approach lie (i) in the progressive increase of the zeroed-weight ratio during training, and (ii) in the use of soft-thresholding rather than hard-thresholding to derive the forward-path weights from the ones maintained in the backward path.
At constant accuracy, our approach reduces the number of training cycles to 1 compared to the state-of-the-art recursive pruning methods. At high pruning rates, it also improves the model accuracy compared to other single cycle pruning approaches (66.18% top-1 accuracy when training a ResNet-50 on ImageNet at 98% sparsity).
| Reject | This submission proposes a method for learning sparse DNNs which consists of three components: First, a "dense" network is maintained and updated in each backwards pass, but the forward pass is done via a sparsified version of the network; sparsification is done via "soft" thresholding; and the sparsity ratio is increased over the course of training. Reviewers noted that each of these components had been previously proposed, and that the state-of-the-art baselines are not actually state-of-the-art anymore. They also noted that the paper read more like a draft and needs substantial improvement. The consensus was therefore to reject. | train | [
"IEjY5CCugf",
"xKovEQXkHck",
"5xLZjh4hkq",
"mP2HIzEwRI9",
"B-6fdkWu--"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for the time they devoted to our submission. Their constructive and valuable comments will definitely help us to strengthen our paper.\n\nWe indeed missed the method presented in [1], published a few months before our ICLR submission. After careful analysis, it appears that we need more tim... | [
-1,
3,
3,
3,
3
] | [
-1,
4,
4,
4,
4
] | [
"iclr_2022_Ub1BQTKiwqg",
"iclr_2022_Ub1BQTKiwqg",
"iclr_2022_Ub1BQTKiwqg",
"iclr_2022_Ub1BQTKiwqg",
"iclr_2022_Ub1BQTKiwqg"
] |
iclr_2022_UQBEkRO0_-M | Softmax Gradient Tampering: Decoupling the Backward Pass for Improved Fitting | We introduce Softmax Gradient Tampering, a technique for modifying the gradients in the backward pass of neural networks in order to enhance their accuracy. Our approach transforms the predicted probability values using a power-based probability transformation and then recomputes the gradients in the backward pass. This modification results in a smoother gradient profile, which we demonstrate empirically and theoretically. We do a grid search for the transform parameters on residual networks. We demonstrate that modifying the softmax gradients in ConvNets may result in increased training accuracy, thus increasing the fit across the training data and maximally utilizing the learning capacity of neural networks. We get better test metrics and lower generalization gaps when combined with regularization techniques such as label smoothing. Softmax gradient tampering improves ResNet-50's test accuracy by $0.52\%$ over the baseline on the ImageNet dataset. Our approach is very generic and may be used across a wide range of different network architectures and datasets. | Reject | This paper introduces Softmax Gradient Tampering, a technique for modifying the gradient of the softmax loss to make the loss more smooth. On standard benchmarks the authors demonstrate improved training and test accuracy.
The reviewers are unanimous in their recommendation to not accept the paper. They identify the following problems:
* a lack of theoretical understanding and rigor, and a lack of support for the claims that are made
* insufficient experimental results to convince the reviewers of the merit of the proposed technique in the absence of theoretical understanding
The authors did not provide a rebuttal, and I see no special reasons to question the assessment made by the reviewers. I therefore recommend to not accept this paper. | train | [
"0aGi_L3utd8",
"RgFVC1KMr8e",
"f15sa8Gm3mD",
"2Y8nqdBoWr",
"gWafz2Oj2VR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Given that there's no rebuttal from the authors, and my fellow reviewers are also not very positive, I am keeping my score and voting for a reject for this conference. \n\nFor the authors I found this paper: https://arxiv.org/abs/2010.07344\nWhich describes things similar to this paper, but in a very comprehensiv... | [
-1,
5,
3,
3,
1
] | [
-1,
4,
4,
4,
3
] | [
"f15sa8Gm3mD",
"iclr_2022_UQBEkRO0_-M",
"iclr_2022_UQBEkRO0_-M",
"iclr_2022_UQBEkRO0_-M",
"iclr_2022_UQBEkRO0_-M"
] |
iclr_2022_RMv-5wMMrE3 | Cell2State: Learning Cell State Representations From Barcoded Single-Cell Gene-Expression Transitions | Genetic barcoding coupled with single-cell sequencing technology enables direct measurement of cell-to-cell transitions and gene-expression evolution over a long timespan. This new type of data reveals explicit state transitions of cell dynamics. Motivated by dimension reduction methods for dynamical systems, we develop a *cell-to-state* (cell2state) learning method that, through learning from such multi-modal data, maps single-cell gene expression profiles to low-dimensional state vectors that are predictive of cell dynamics. We evaluate the cell2state method using a barcoded stem cell dataset (Biddy et al. (2018)) and simulation studies, compared with baseline approaches using features that are not dynamic-aware. We demonstrate the merits of cell2state in challenging downstream tasks including cell state prediction and finding dynamically stable clusters. Further, our method reveals potential latent meta-states of the underlying evolution process. For each of the meta-states, we identify a set of marker genes and development pathways that are biologically meaningful and potentially expand existing knowledge. | Reject | While the problem tackled in this paper is interesting, there is a consensus among reviewers that the writing of the paper does not allow the reader to fully understand the method developed, nor the biological context and results obtained by the method. We encourage the authors to take into account the reviewers' comments to prepare a future improved version of the manuscript. | train | [
"EVhSg72E5-v",
"GCy1qRQXsYN",
"aUTj3V4sjV4",
"2lCvAsjAeH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce cell2state, an algorithm that incorporates both genetic barcoding coupled with single-cell sequenced data to model explicit state transitions of cell dynamics over time. Single-cell gene expression profiles are mapped to low-dimensional state vectors that are predictive of cell dynamics. Cell... | [
3,
6,
3,
5
] | [
4,
5,
3,
4
] | [
"iclr_2022_RMv-5wMMrE3",
"iclr_2022_RMv-5wMMrE3",
"iclr_2022_RMv-5wMMrE3",
"iclr_2022_RMv-5wMMrE3"
] |
iclr_2022_vQ58AMOw4Il | Hermitry Ratio: Evaluating the validity of perturbation methods for explainable deep learning | Perturbation methods are model-agnostic methods used to generate heatmaps to explain black-box algorithms such as deep neural networks. Perturbation methods work by perturbing the input image. However, by perturbing parts of the input image we are changing the underlying structure of the image, potentially generating out-of-distribution (OOD) data. This would violate one of the core assumptions in supervised learning, namely that the train and test data come from the same distribution.
In this study, we coin the term hermitry ratio to quantify the utility of perturbation methods by looking at the amount of OOD samples they produce. Using this metric, we observe the utility of XAI methods (Occlusion analysis, LIME, Anchor LIME, Kernel SHAP) for image classification models ResNet50, DenseNet121 and MnasNet1.0 on three classes of the ImageNet dataset. Our results show that, to some extent, \emph{all} four perturbation methods generate OOD data regardless of architecture or image class. Occlusion analysis primarily produces in-distribution perturbations while LIME produces mostly OOD perturbations. | Reject | The paper questions perturbation methods as a means of extracting explainable information about a learner, focusing on the hypothesis that perturbation may create out of distribution examples. To this end, the paper introduces the hermitry ratio to detect such out of distribution samples.
The reviewers have raised the following concerns:
- questionable motivation and premises
- discussed related work outdated, important and relevant recent works are not discussed.
- limited contribution
- limited experimental evidence
The author response did not sufficiently address the concerns of the reviewers. The reviewers agree that the paper should be rejected. | train | [
"mfskZPiVCP2",
"0ZsHaSbkohj",
"hwjy4iXR5cB",
"xMphIm4LxX",
"L-BGvvHr1m0",
"DdFC5ypmSNb",
"qEv86dbZF72",
"N12KhNNRo-S",
"5mgiglSEPyD"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifying. Given the other reviews and the authors rebuttal, I decided to keep my score. ",
" Dear reviewer, thank you for taking the time and for your comments. \n\nWe think that you make some good points, unfortunately we don't have the opportunity to address these with additional experiments. ... | [
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"hwjy4iXR5cB",
"5mgiglSEPyD",
"N12KhNNRo-S",
"qEv86dbZF72",
"DdFC5ypmSNb",
"iclr_2022_vQ58AMOw4Il",
"iclr_2022_vQ58AMOw4Il",
"iclr_2022_vQ58AMOw4Il",
"iclr_2022_vQ58AMOw4Il"
] |
iclr_2022_CA51pvZJ0xX | Robust Feature Selection using Sparse Centroid-Encoder | We develop a sparse optimization problem for the determination of the total set of features that discriminate two or more classes. This is a sparse implementation of the centroid-encoder for nonlinear data reduction and visualization called Sparse Centroid-Encoder (SCE). We also provide an iterative feature selection algorithm that first ranks each feature by its occurrence, and the optimal number of features is chosen using a validation set. The algorithm is applied to a wide variety of data sets including single-cell biological data, high dimensional infectious disease data, hyperspectral data, image data, and GIS data. We compared our method to various state-of-the-art feature selection techniques, including three neural network-based models (DFS, SG-L1-NN, G-L1-NN), Sparse SVM, and Random Forest. We empirically showed that SCE features produced better classification accuracy on the unseen test data, often with fewer features. | Reject | All reviewers point out the lack of significant novelty in the proposed method. In addition, there is a lack of theoretical justification, and relatively weak empirical results, to support the claim that the proposed method is competitive or outperforms the myriad of methods existing for feature selection. | train | [
"XkKjFFDbOyv",
"JcQIi142wZZ",
"M1nGEho-KQ",
"cyPlXuyCf5P"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a neural network based approach for feature selection. The presented approach uses the existing work from the literature (based on centroid-encoder neural networks). As the primary novelty, the authors introduce the sparsity term (which is the L_1 norm of the parameters as a regularization term... | [
3,
3,
3,
3
] | [
5,
3,
4,
5
] | [
"iclr_2022_CA51pvZJ0xX",
"iclr_2022_CA51pvZJ0xX",
"iclr_2022_CA51pvZJ0xX",
"iclr_2022_CA51pvZJ0xX"
] |
iclr_2022_2DT7DptUiXv | ConVAEr: Convolutional Variational AutoEncodeRs for incremental similarity learning | Due to catastrophic forgetting, incremental similarity learning in neural networks remains an open challenge. Previous work has shown that keeping image exemplars during incremental similarity learning is effective for preserving base knowledge (past learnt features and embeddings). It is also generally accepted that the output layers learn more task-specific feature embeddings during the later training stages compared to the input layers’ general features earlier on. Building on these insights, we start by freezing the input layers of a neural network. We then investigate the viability of generating “embedding” exemplars from a VAE that can protect base knowledge in the intermediate to output layers of the neural networks. These generated exemplars replace the necessity for retaining images from previously learned classes. We experimented with three metric learning loss functions on the CUB-200 and CARS-196 in an incremental similarity learning setup. We train different VAEs to generate exemplars from the intermediate convolution layers and linear output layers. We use these generated exemplars to represent base knowledge. We compared our work to a previous technique that stores image exemplars. The comparison is done for base knowledge, new knowledge and average knowledge preservation as metrics. The results show that generating exemplars from the linear and convolutional layers retained the highest ratio of base knowledge. We note that using embeddings from the linear layers leads to better performance on new knowledge than convolutional embeddings. Overall our methods yield better average knowledge performance across all experiments. These results support the view that for incremental similarity learning to overcome catastrophic forgetting, emphasis can be placed on learning embedding exemplars for intermediate to output layers.
Further, we note that most incremental similarity learning for new classes depends on the linear layers rather than the convolutions. Further investigation into the relationship between transfer learning and similarity learning and the protection of intermediate layer embedding space for catastrophic forgetting is required. | Reject | The paper studies the problem of metric learning for handling catastrophic forgetting. All the reviewers recommended clear reject because of writing issues and lack of experimental investigation to support the ideas. The authors did not provide a rebuttal. Hence, the reviewers' opinion still remains the same. AC agrees with the reviewers and believes that the paper is not yet ready. | train | [
"Rh0ko0PZF8x",
"pDRd6HHElVD",
"LSHEXRdaus"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates the viability of generating “embedding” exemplars from a VAE that can protect base knowledge in the intermediate to output layers of the neural networks. Experiments on the CUB-200 and CARS-196 datasets with an incremental similarity learning setup are conducted. Generally, I think the nove... | [
5,
1,
1
] | [
3,
3,
4
] | [
"iclr_2022_2DT7DptUiXv",
"iclr_2022_2DT7DptUiXv",
"iclr_2022_2DT7DptUiXv"
] |
iclr_2022_Sb4hTI15hUZ | Data-oriented Scene Recognition | Most deep learning backbones are evaluated on ImageNet. Using scenery images as an example, we conducted extensive experiments to demonstrate the widely accepted principles in network design may result in dramatic performance differences when the data is altered. Exploratory experiments are engaged to explain the underlining cause of the differences. Based on our observation, this paper presents a novel network design methodology: data-oriented network design. In other words, instead of designing universal backbones, the scheming of the networks should treat the characteristics of data as a crucial component. We further proposed a Deep-Narrow Network and Lossless Pooling module, which improved the scene recognition performance using less than half of the computational resources compared to the benchmark network architecture ResNets. | Reject | All reviewers recommended reject. No responses from the authors. | train | [
"sONZc8_Hp5x",
"O2n6dwGDf2q",
"KCJgvZN7g1D",
"KFYrazhN1ln"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper suggests that for certain kinds of data, width is much less important to a network's performance than depth. This assumption, the authors claim, is a result of overfitting or over-reliance on ImageNet as a benchmark. Experimental results show width being much more important to the task of object classifi... | [
3,
5,
3,
3
] | [
4,
4,
4,
4
] | [
"iclr_2022_Sb4hTI15hUZ",
"iclr_2022_Sb4hTI15hUZ",
"iclr_2022_Sb4hTI15hUZ",
"iclr_2022_Sb4hTI15hUZ"
] |
iclr_2022_eJyt4hJzOLk | Discrepancy-Optimal Meta-Learning for Domain Generalization | This work attempts to tackle the problem of domain generalization (DG) via learning to reduce domain shift with an episodic training procedure. In particular, we measure the domain shift with $\mathcal{Y}$-discrepancy and learn to optimize $\mathcal{Y}$-discrepancy between the unseen target domain and source domains only using source-domain samples. Theoretically, we give a PAC-style generalization bound for discrepancy-optimal meta-learning and further make comparisons with other DG bounds including ERM and domain-invariant learning. The theoretical analyses show that there is a tradeoff between classification performance and computational complexity for discrepancy-optimal meta-learning. The theoretical results also shed light on a bilevel optimization algorithm for DG. Empirically, we evaluate the algorithm with DomainBed and achieves state-of-the-art results on two DG benchmarks. | Reject | The reviewers had a number of concerns which remain since the authors did not provide any response nor they have updated the paper. Hopefully, once the paper is improved in terms of clarification (significance and correctness of the theoretical results and the technical approach approach), it will be ready for publication in one of the ML venues. | train | [
"swQOqC226j7",
"Yy0-q0_CXEk",
"yYyLXYkZUtH",
"S_auxG7lu--"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper investigates a meta-learning approach to domain generalisation. Proposed theoretical contributions include a generalisation bound for domain generalisation that depends on the $\\mathcal{Y}$-discrepancy between the source domains available during training, the training error on these source domains, and ... | [
3,
6,
5,
5
] | [
5,
2,
3,
4
] | [
"iclr_2022_eJyt4hJzOLk",
"iclr_2022_eJyt4hJzOLk",
"iclr_2022_eJyt4hJzOLk",
"iclr_2022_eJyt4hJzOLk"
] |
iclr_2022_MbmwYwhD0Vy | A Novel Convergence Analysis for the Stochastic Proximal Point Algorithm | In this paper, we study the stochastic proximal point algorithm (SPPA) for general empirical risk minimization (ERM) problems as well as deep learning problems. We present an efficient implementation of SPPA with minor modification for different problem definitions and we observe that efficiently implemented SPPA has faster and more stable convergence than the celebrated stochastic gradient descent (SGD) algorithm, and its many variations, for both convex and non-convex problems. Due to the fact that the per-iteration update of SPPA is defined abstractly and has long been considered expensive, its convergence proof has not been well-studied until recently. In this paper, we close the theoretical gap by providing its convergence for convex problems. Our proof technique is different from some of the recent attempts. As a result, we present a surprising result that SPPA for convex problems may converge \emph{arbitrarily fast}, depending on how the step sizes are chosen. As a second contribution, we also show that for some of the canonical ERM problems and deep learning problems, each iteration of SPPA can be efficiently calculated either in closed form or close to closed form via bisection---the resulting complexity is exactly the same as that of SGD. Real data experiments showcase its effectiveness in terms of convergence compared to SGD and its variants. | Reject | The paper considers the stochastic proximal point algorithm, the main contribution is a convergence proof (in addition to arguing that it is a practical algorithm for ERM problems). However, reviewers EvJ6 and xJB8 pointed out a fatal flaw in Lemma 1 that affects all the downstream results, and the other two reviewers and myself agree that this invalidates the core results. Reviewer EvJ6 gives extensive examples of how the lemma cannot be simply fixed.
So while this is an interesting stochastic extension of the famous deterministic method and may warrant further study, this paper doesn't yet provide a rigorous convergence proof. | train | [
"b0ra94gSKoM",
"BVS6YVnL8h",
"Cj2syYLHut",
"MeaowY8h3-",
"DCyJZAenvJN",
"gJQgG9EGuvq",
"aNC0oyz_VP0",
"l5vd7OwH9ON"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"# === Update ===\n\nI have read the authors reviews as well as the response by the authors.\n\nI'd like to thank the authors for engaging in a discussion around Lemma 1. It is unfortunate that the result does not appear to hold even in a restricted setting.\n\nI will maintain my score. I wish the authors the best ... | [
3,
3,
5,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_MbmwYwhD0Vy",
"iclr_2022_MbmwYwhD0Vy",
"iclr_2022_MbmwYwhD0Vy",
"DCyJZAenvJN",
"gJQgG9EGuvq",
"aNC0oyz_VP0",
"b0ra94gSKoM",
"iclr_2022_MbmwYwhD0Vy"
] |
iclr_2022_847CwJv9Vx | Benchmarking person re-identification approaches and training datasets for practical real-world implementations | Person Re-Identification (Re-ID) is receiving a lot of attention recently. Large datasets containing labeled images of various individuals have been released, allowing researchers to develop and test many successful approaches. However, when such Re-ID models are deployed in a new city or environment, the task of searching people within a network of security cameras is likely to face an important domain shift, thus resulting in decreased performance. Indeed, while most public datasets were collected in a limited geographic area, images from a new city present different features (e.g., people's ethnicities and clothing style, weather, architecture, etc.). In addition, the whole frames of the video streams must be converted into cropped images of people using pedestrian detection models, which behave differently from the human annotators who created the dataset used for training. To better understand the extent of this issue, this paper introduces a complete methodology to evaluate Re-ID approaches and training datasets with respect to their suitability for deployment for live operations. This method is used to benchmark four Re-ID approaches and three datasets and provides interesting insight and guidelines that can help designing better Re-ID pipelines in the future. | Reject | All reviewers agree that the presented investigation of existing person re-identification approaches is easy to read and can be used as a tutorial in the field. However, the reviewers raised a number of major concerns including inadequate discussion of insights made from the conducted survey, lack of some related important experimental studies and inadequate/ unconvincing conclusions made in the presented work. The authors’ rebuttal addressed some of the reviewers’ questions but failed to alleviate all reviewers’ concerns. 
Hence, I cannot suggest this paper for presentation at ICLR. | train | [
"lK3KcnY6Qr8",
"Yk6REqaKpCc",
"fH09n5KFvJ7",
"bLCI7xdy8Y",
"yXCXxdYIL7l"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors conduct a systematic analysis of some existing person re-identification approaches. Specifically, they evaluate four publicly available models against three publicly available datasets using standard metrics. They also check how the models generalize to other datasets when not trained on them. Finally,... | [
5,
-1,
-1,
5,
3
] | [
4,
-1,
-1,
4,
4
] | [
"iclr_2022_847CwJv9Vx",
"yXCXxdYIL7l",
"bLCI7xdy8Y",
"iclr_2022_847CwJv9Vx",
"iclr_2022_847CwJv9Vx"
] |
iclr_2022_K1m0oSiGasn | Adaptive Region Pooling for Fine-Grained Representation Learning | Fine-grained recognition aims to discriminate the sub-categories of the images within one general category. It is fundamentally difficult due to the requirement to extract fine-grained features from subtle regions. Nonetheless, a Convolutional Neural Network typically applies strided operations to downsample the representation, which would excessively spoil the feature resolution and lead to a significant loss of fine-grained information. In this paper, we propose Adaptive Region Pooling (ARP): a novel downsampling algorithm that makes the network only focus on a smaller but more critical region, and simultaneously increase the resolution of sub-sampled feature. ARP owns a trade-off mechanism that allows users to actively balance the scale of receptive field and the granularity of feature. Also, without any learning-based parameters, ARP provides the network a stabler training process and an earlier convergence. Extensive experiments qualitatively and quantitatively validate the effectiveness and efficiency of the proposed pooling operation and show superior performance against the state-of-the-arts in both the tasks of image classification and image retrieval. | Reject | Four reviewers have reviewed this submission. Three of them recommended to reject the paper and one was marginally above the acceptance threshold. The authors have not responded to the criticisms or questions of reviewers. Among many concerns were the issues with the use of '`lean and single target object' images, lack of discussions on related models such as adaptive bilinear pooling and multi-domain pooling, lack of evaluations on datasets such as large-scale iNaturalist. Given the above criticisms and the lack of authors' response, this submission falls below the acceptance bar. | val | [
"nFpTXaoOioJ",
"wdq5ZAG4DDU",
"4VKmaLeVVCU",
"olcRhhiyCUj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper is tackling the topic of fine-grained representation. In contrast to comparable methods, ARP focus on sub-sampled via strided operation to represent fine-grained information. \nARP is an intuitive strategy to extract critical regions and is more concise compared with multi-stage cascade architecture. Thi... | [
6,
5,
5,
3
] | [
3,
5,
4,
5
] | [
"iclr_2022_K1m0oSiGasn",
"iclr_2022_K1m0oSiGasn",
"iclr_2022_K1m0oSiGasn",
"iclr_2022_K1m0oSiGasn"
] |
iclr_2022_IOA9fJUUa0 | How does BERT address polysemy of Korean adverbial postpositions -ey, -eyse, and -(u)lo? | The present study reports computational accounts of resolving word-level polysemy in a lesser-studied language—Korean. Postpositions, which are characterized as multiple form-function mapping and thus polysemous, pose a challenge to automatic analysis and model performance in identifying their functions. In this study, we devised a classification model by employing BERT and introduce a computational simulation that interactively demonstrates how a BERT model simulates human interpretation of word-level polysemy involving Korean adverbial postpositions -ey, -eyse, and -(u)lo. Results reveal that (i) there is an inverse relationship between the classification accuracy and the number of functions that each postposition manifests, (ii) the model performance is affected by the corpus size of each function, and (iii) the performance gradually improves as the epoch proceeds. | Reject | This paper presents a study on identifying word-level polysemy in Korean based on a language-specific BERT model. It is always very good to see language-specific studies especially in non-English scenarios. However, as the reviewers point out, there could be significant improvements to the experimental design of the paper (see comments from reviewer eqSx) and also the HCI aspects of the visualization setup (see comments from reviewer MV2W) to transform it to an acceptable ICLR paper. | train | [
"3Dp1CcC3-yO",
"7VZ3gDkNAAm",
"pthUhSkBsB0",
"DWQQi7MpU8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper builds a BERT classifier for word-level polysemy of Korean language. Adverbial positions such as -ey, eyes, and –(u)lo are involved in the investigation. There are three findings during the experiments, if there are more functions then the classification accuracy is lower, corpus size is sensitive and m... | [
3,
3,
1,
3
] | [
4,
4,
3,
4
] | [
"iclr_2022_IOA9fJUUa0",
"iclr_2022_IOA9fJUUa0",
"iclr_2022_IOA9fJUUa0",
"iclr_2022_IOA9fJUUa0"
] |
iclr_2022_27aftiBeius | $$Research on fusion algorithm of multi-attribute decision making and reinforcement learning based on intuitionistic fuzzy number in wargame environment$$ | Intelligent games have seen an increasing interest within the research community on artificial intelligence. The article proposes an algorithm that combines the multi-attribute management and reinforcement learning methods, and that joined their effect on wargaming, it solves the problem of the agent’s low rate of winning against specific rules and its inability to quickly converge during intelligent wargame training. At the same time, this paper studied a multi-attribute decision making and reinforcement learning algorithm in a wargame simulation environment, yielding data on the conflict between red and blue sides. We calculate the weight of each attribute based on the intuitionistic fuzzy number weight calculations. And then we determine the threat posed by each opponent’s game agents. Using the red side reinforcement learning reward function, the AC framework is trained on the reward function, and an algorithm combining multi-attribute decision making with reinforcement learning is obtained. A simulation experiment confirms that the algorithm of multi-attribute decision making combined with reinforcement learning presented in this paper is significantly more intelligent than the pure reinforcement learning algorithm. By resolving the shortcomings of the agent’s neural network, coupled with sparse rewards in large-map combat games, this robust algorithm effectively reduces the difficulties of convergence. It is also the first time in this field that an algorithm design for intelligent wargaming combines multi-attribute decision making with reinforcement learning. Finally, another novelty of this research is the interdisciplinarity, like designing intelligent wargames and improving reinforcement learning algorithms. | Reject | All reviewers agreed on rejection. 
Unfortunately, there was no author response so there was nothing to drive further discussion on the paper. The reviewers gave very detailed advice on improving the work. | test | [
"pdQufGcar3X",
"QF9OVcy_9Mv",
"aubAwZIN0Zy",
"Ltjo-9sb0zj"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a method for using fuzzy logic and domain knowledge to generate the reward signal for RL algorithms. The method is evaluated on a war-game domain. The paper isn't easy to read and I am not sure I fully understand the method that is proposed. The abstract and introduction are somewhat hard to re... | [
3,
1,
1,
5
] | [
4,
4,
4,
2
] | [
"iclr_2022_27aftiBeius",
"iclr_2022_27aftiBeius",
"iclr_2022_27aftiBeius",
"iclr_2022_27aftiBeius"
] |
iclr_2022_ZC1s7bdR9bD | Path Integrals for the Attribution of Model Uncertainties | Understanding model uncertainties is of key importance in Bayesian machine learning applications. This often requires to meaningfully attribute a model's predictive uncertainty to its input features, however, popular attribution methods are primarily targeted at model scores for classification and regression tasks. Thus, in order to explain uncertainties, state-of-the-art alternatives commonly procure counterfactual feature vectors associated with low uncertainty and proceed by making direct comparisons. Here, we present a novel algorithm for uncertainty attribution in differentiable models, via path integrals which leverage in-distribution curves connecting feature vectors to counterfactual counterparts. We validate our method on benchmark image data sets with varying resolution, and demonstrate that (i) it produces meaningful attributions that significantly simplify interpretability over the existing alternatives and (ii) retains desirable properties from popular attribution methods. | Reject | The paper explores a method to identify features in an input that can explain uncertainties in the model prediction. The proposed approach is similar to Integrated Gradients (IG),with a different explanation target and integration path. Overall, the idea seems fairly incremental and the experimental evaluation is lacking and does not sufficiently demonstrate the advantages of the proposed approach. Evaluation metrics could be improved (see suggestions by reviewer n3ei) to strengthen the paper. | test | [
"qaRkloY3S7i",
"sx3FvgKUK-",
"CgUZqqKUnB",
"rre9pTUaZhh",
"xA8iNO0FPYd",
"QKJKspqxXwf",
"UAi9PrGlEu"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi authors, thanks for reading the reviewer comments and taking them into account. About the metrics question - this seems pretty important because multiple reviewers brought it up.\n\nThe approach used in the XRAI paper seems pretty reasonable. There are different variations on this approach though, making it a ... | [
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
4,
4,
3,
3
] | [
"sx3FvgKUK-",
"rre9pTUaZhh",
"iclr_2022_ZC1s7bdR9bD",
"iclr_2022_ZC1s7bdR9bD",
"iclr_2022_ZC1s7bdR9bD",
"iclr_2022_ZC1s7bdR9bD",
"iclr_2022_ZC1s7bdR9bD"
] |
nips_2021_CU8qQMhB3dh | Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning | We provide improved gap-dependent regret bounds for reinforcement learning in finite episodic Markov decision processes. Compared to prior work, our bounds depend on alternative definitions of gaps. These definitions are based on the insight that, in order to achieve a favorable regret, an algorithm does not need to learn how to behave optimally in states that are not reached by an optimal policy. We prove tighter upper regret bounds for optimistic algorithms and accompany them with new information-theoretic lower bounds for a large class of MDPs. Our results show that optimistic algorithms can not achieve the information-theoretic lower bounds even in deterministic MDPs unless there is a unique optimal policy.
| accept | This work proposes a new gap notion and provides corresponding instance-dependent regret upper bound and lower bound. The new gap definition gives a better characterization of the complexity of an RL problem. The paper is well-written and structured. The reviewers believe the paper is good for publication in NeurIPS. | train | [
"9BD6ttshQZ_",
"dU_1LZHM0_N",
"PIWIR1O6gby",
"inzfDICSZAR",
"JhIHS-NBqL9",
"et_HyzwBVEU",
"j6Tuj-hS6vw",
"MhA0TB0R0F",
"7TWYFn75mwD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThank you for your detailed response. It clarified all my doubts. I still think this is a good paper and thus keep my initial view.",
" Thanks for your detailed response. It clearly answers my questions. I decide to maintain my score as good paper.",
"This paper is on instance-dependent regre... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"JhIHS-NBqL9",
"j6Tuj-hS6vw",
"nips_2021_CU8qQMhB3dh",
"et_HyzwBVEU",
"7TWYFn75mwD",
"PIWIR1O6gby",
"MhA0TB0R0F",
"nips_2021_CU8qQMhB3dh",
"nips_2021_CU8qQMhB3dh"
] |
nips_2021_2a96Bf7Qdrg | Learning One Representation to Optimize All Rewards | We introduce the forward-backward (FB) representation of the dynamics of a reward-free Markov decision process. It provides explicit near-optimal policies for any reward specified a posteriori. During an unsupervised phase, we use reward-free interactions with the environment to learn two representations via off-the-shelf deep learning methods and temporal difference (TD) learning. In the test phase, a reward representation is estimated either from reward observations or an explicit reward description (e.g., a target state). The optimal policy for that reward is directly obtained from these representations, with no planning. We assume access to an exploration scheme or replay buffer for the first phase. The corresponding unsupervised loss is well-principled: if training is perfect, the policies obtained are provably optimal for any reward function. With imperfect training, the sub-optimality is proportional to the unsupervised approximation error. The FB representation learns long-range relationships between states and actions, via a predictive occupancy map, without having to synthesize states as in model-based approaches. This is a step towards learning controllable agents in arbitrary black-box stochastic environments. This approach compares well to goal-oriented RL algorithms on discrete and continuous mazes, pixel-based MsPacman, and the FetchReach virtual robot arm. We also illustrate how the agent can immediately adapt to new tasks beyond goal-oriented RL.
| accept | The majority of the reviews agrees that the paper presents an interesting new approach and votes for a clear accept. Accordingly, I recommend acceptance. As most reviewers also pointed out a few issues with the presentation and organisation of the paper, please take these comments into account when preparing the final version. Finally, one reviewer was not completely convinced by the suggested representation. In the discussion (s)he said (I think expressing the point better than in the original review):
"My concern is that the representation carries too much information. To be specific, (please correct me if my statements are wrong), in their tabular MDP example, the function $F$ has a large continuous input space (dimension>SA) and thus requires function approximations and large computation power. In contrast, learning the MDP only involves $S^2A$ exact parameters, and the planning under a given MDP and reward function is usually not the bottleneck (using dynamic programming and given a discount factor in (0,1)). This issue can be even worse for general MDPs, and the paper seems to not discuss too much on the approximation power of such a representative for other MDPs."
Although I understand that the paper suggests not to learn a particular MDP but a range of tasks, the question of whether the arising complexity of the final problem is reasonable when compared to the complexity of a set of single tasks is, I think, a valid one and should be discussed.
"SZEzJvdlXLa",
"Iq5u6t_vMBh",
"XoBRlbP_S1h",
"97ZSLgJmOaa",
"PpmzD6b5iSy",
"kJg0Hwwi8e",
"9CiGpA_8vkZ",
"xr0BGoDazhm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes an algorithm learning a kind of universal value function (UVFA), so that given a buffer with a good coverage of transitions it returns the optimal behavior for any reward function or any goal state by computing a simple average.\n\nThe idea comes from the fact that the Q-function for a given po... | [
7,
5,
8,
-1,
-1,
-1,
-1,
7
] | [
4,
5,
2,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_2a96Bf7Qdrg",
"nips_2021_2a96Bf7Qdrg",
"nips_2021_2a96Bf7Qdrg",
"SZEzJvdlXLa",
"Iq5u6t_vMBh",
"XoBRlbP_S1h",
"xr0BGoDazhm",
"nips_2021_2a96Bf7Qdrg"
] |
nips_2021_gwP8pc1OgN_ | Matrix factorisation and the interpretation of geodesic distance | Given a graph or similarity matrix, we consider the problem of recovering a notion of true distance between the nodes, and so their true positions. Through new insights into the manifold geometry underlying a generic latent position model, we show that this can be accomplished in two steps: matrix factorisation, followed by nonlinear dimension reduction. This combination is effective because the point cloud obtained in the first step lives close to a manifold in which latent distance is encoded as geodesic distance. Hence, a nonlinear dimension reduction tool, approximating geodesic distance, can recover the latent positions, up to a simple transformation. We give a detailed account of the case where spectral embedding is used, followed by Isomap, and provide encouraging experimental evidence for other combinations of techniques.
| accept | The paper revisits the problem of recovering positions of points given the similarity information. It proposes two steps: first, a spectral embedding, and second, a nonlinear embedding to obtain the positions. As reviewers note, the interesting contributions are in connecting the geometry between spectral approximations (X) and true embeddings (Z). Overall, the theory and experiments are interesting. A minor comment: currently, the paper assumes that A is fully known and the theory goes through. What happens when A is only partially known (akin to recovering positions from partial similarity information)? What kinds of consistency results are required for the approximations to succeed?
| val | [
"tjJ_4IEKYso",
"OMQhVG_-2ye",
"v8orqTQSOsC",
"mO4u1fgbdNU",
"AN7Xl-rl8_j",
"cSRU9MTQ4Xw",
"hGN9_6NB0Vd",
"8oZszLo_BWQ",
"opMEHFySzpO",
"1-6WWONXTbZ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful response.",
" We would like to thank all the reviewers for their time and expertise. We have responded to each review separately.",
" * The authors start with problem statement (1) but later make a few more assumptions on f (e.g., pos. def., f is C^2, Hessian is pos. def., and a ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"mO4u1fgbdNU",
"nips_2021_gwP8pc1OgN_",
"1-6WWONXTbZ",
"opMEHFySzpO",
"8oZszLo_BWQ",
"hGN9_6NB0Vd",
"nips_2021_gwP8pc1OgN_",
"nips_2021_gwP8pc1OgN_",
"nips_2021_gwP8pc1OgN_",
"nips_2021_gwP8pc1OgN_"
] |
nips_2021_UMcd6l1msUK | UniDoc: Unified Pretraining Framework for Document Understanding | Document intelligence automates the extraction of information from documents and supports many business applications. Recent self-supervised learning methods on large-scale unlabeled document datasets have opened up promising directions towards reducing annotation efforts by training models with self-supervised objectives. However, most of the existing document pretraining methods are still language-dominated. We present UniDoc, a new unified pretraining framework for document understanding. UniDoc is designed to support most document understanding tasks, extending the Transformer to take multimodal embeddings as input. Each input element is composed of words and visual features from a semantic region of the input document image. An important feature of UniDoc is that it learns a generic representation by making use of three self-supervised losses, encouraging the representation to model sentences, learn similarities, and align modalities. Extensive empirical analysis demonstrates that the pretraining procedure learns better joint representations and leads to improvements in downstream tasks.
| accept | This paper presents a new framework for pre-training for document understanding tasks like document and entity classification. The framework combines visual information from the input document image with textual information from OCR output using several self-supervised objectives. Results demonstrate improvements over state-of-the-art document encoding baselines on several downstream tasks.
The majority of reviewers are in favor of acceptance. Generally, reviewers praised the paper's approach as well-motivated and clearly described, and results as compelling. Several reviewers question the novelty of the ML contribution of this paper. I discount this criticism to some extent given that this is clearly a novel *application* of existing pre-training components, arranged into a new framework, demonstrating gains on important downstream tasks -- this seems squarely fair game for NeurIPS.
Other more minor criticisms include:
(1) Concerns about outdated baselines, e.g. why not use more recent MLMs as baselines like RoBERTa and ELECTRA? Rebuttal argues that these are not comparable because they focus on text representation alone. I don't completely buy this since BERT was used as a baseline.
(2) Some concerns about clarity (more formal mathematical descriptions would help) and ablations (there are many moving parts here, which model components are indispensable?).
(3) One reviewer was concerned that there were no comparisons with word-level pre-training -- author response points out that region-level often is word-level, but also include additional experiments with word-level MLM for comparison showing worse performance. Thus, I believe this concern was mostly addressed in rebuttal.
(4) Two reviewers were concerned that there was no improvement on document classification over the top baseline. This has been resolved in author response with new experiments demonstrating the effect of underlying OCR system.
Overall, I agree with the majority of reviewers and recommend acceptance. The issues mentioned by reviewers (and summarized above) absolutely need to be addressed in revision -- e.g. further ablations, improvement to mathematical clarity, comparisons with more recent MLMs, and the inclusion of all additional experiments provided in rebuttal. However, in this case, I do not believe the revisions are so substantial as to warrant another round of review. | train | [
"WE2EKh4p4YJ",
"Dftclqhlc3b",
"6nn7S2JmS1S",
"6RyHRMyo-Z4",
"fwxWgegUYTp",
"9RzaiIc732",
"qLnqLhxn957",
"QZIUFBPizUb",
"VOCspinvJ-M",
"3jJ42vaGeLr",
"a_omtb_za2A",
"ShrRiRUbNJV"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your helpful comments and support.",
" **[1]** We respectfully disagree with the comments on putting together existing methods and the incompatibility of our paper with NeurIPS. Firstly, as mentioned in the paper and response, the framework and pretraining tasks (e.g., vector quantization) propos... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"6RyHRMyo-Z4",
"6nn7S2JmS1S",
"fwxWgegUYTp",
"qLnqLhxn957",
"3jJ42vaGeLr",
"VOCspinvJ-M",
"ShrRiRUbNJV",
"a_omtb_za2A",
"nips_2021_UMcd6l1msUK",
"nips_2021_UMcd6l1msUK",
"nips_2021_UMcd6l1msUK",
"nips_2021_UMcd6l1msUK"
] |
nips_2021_az0BBDjDvwD | Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution | Recent blind super-resolution (SR) methods typically consist of two branches, one for degradation prediction and the other for conditional restoration. However, our experiments show that a one-branch network can achieve comparable performance to the two-branch scheme. Then we wonder: how can one-branch networks automatically learn to distinguish degradations? To find the answer, we propose a new diagnostic tool -- Filter Attribution method based on Integral Gradient (FAIG). Unlike previous integral gradient methods, our FAIG aims at finding the most discriminative filters instead of input pixels/features for degradation removal in blind SR networks. With the discovered filters, we further develop a simple yet effective method to predict the degradation of an input image. Based on FAIG, we show that, in one-branch blind SR networks, 1) we could find a very small number (1%) of discriminative filters for each specific degradation; 2) the weights, locations and connections of the discovered filters are all important in determining the specific network function; 3) the task of degradation prediction can be implicitly realized by these discriminative filters without explicit supervised learning. Our findings can not only help us better understand network behaviors inside one-branch blind SR networks, but also provide guidance on designing more efficient architectures and diagnosing networks for blind SR.
| accept | All reviewers are enthusiastic about the paper's findings and gave it high scores. The use of one-branch instead of two for blind super resolution is interesting, and one reviewer was particularly interested in the analytical aspect of the paper. | val | [
"bIFN9LUfnhl",
"MNwE-RBX6eZ",
"WHQvBskdQl0",
"Lmo4Z76iBHw",
"AJokq5lRjS",
"ruMzkUmsRAL",
"hkBRre8uzfF",
"oobZHJBv-pr",
"tTU3_hQBy5d",
"bcjL5NCCHWv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my comments. I'm happy to stand by my initial decision to recommend accepting the paper.",
" The authors have addressed all my questions. Thanks for the additional experiments and clear answers!\nThis paper provides deeper insights in interpreting blind SR models and I particularly like it... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"ruMzkUmsRAL",
"WHQvBskdQl0",
"bcjL5NCCHWv",
"oobZHJBv-pr",
"tTU3_hQBy5d",
"hkBRre8uzfF",
"nips_2021_az0BBDjDvwD",
"nips_2021_az0BBDjDvwD",
"nips_2021_az0BBDjDvwD",
"nips_2021_az0BBDjDvwD"
] |
nips_2021_iaO_IH7CnGJ | Counterfactual Explanations Can Be Manipulated | Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g. law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of these methods and find ways to address them. However, there is little understanding of the vulnerabilities and shortcomings of counterfactual explanations. In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated. More specifically, we show counterfactual explanations may converge to drastically different counterfactuals under a small perturbation, indicating they are not robust. Leveraging this insight, we introduce a novel objective to train seemingly fair models where counterfactual explanations find much lower cost recourse under a slight perturbation. We describe how these models can unfairly provide low-cost recourse for specific subgroups in the data while appearing fair to auditors. We perform experiments on loan and violent crime prediction data sets where certain subgroups achieve up to 20x lower cost recourse under the perturbation. These results raise concerns regarding the dependability of current counterfactual explanation techniques, which we hope will inspire investigations in robust counterfactual explanations.
| accept | This paper contributes to the growing literature documenting the instability and manipulability of different model explanation techniques. In this work the authors specifically study counterfactual explanations, which commonly describe how an individual's inputs could change to receive a positive classification. Overall the reviewers agree that the specific problem formulation tackled in this work is novel and that the work improves our understanding of how models can be adversarially manipulated to produce biased recourse costs by hiding low cost recourse paths. Reviewers also agreed that the paper is well-developed, overall clearly written, and comprehensive.
The authors have already proposed additional text discussing the implications of developing adversarial models and issues with the common practice of relying on crime-related data in experiments. Beyond this, the set of revisions in preparation for the camera-ready is fairly small. For readers who are unfamiliar with the prior literature on "counterfactual" explanations, I agree with Reviewer EEmF that it would be worth clarifying that the term "counterfactual" doesn't have the same meaning here as it does in the context of causal inference. Other reviewers provided lists of requested clarifications/modifications that are all actionable and would improve the clarity and accuracy of the manuscript.
Congratulations on an excellent submission! | train | [
"NYt2CHBq8_",
"6dH1xgxZO_V",
"DZt6jH195Xd",
"Bcg9YBVXutj",
"eBZ7Ys_9wKd",
"WmDrRqRdij_",
"o_JgXpljCFw",
"YWGFjIJ0Lh7",
"SC_yTp_GpSb",
"iQHRCPSeiC9",
"IPBwJy-XF7K"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are very glad to know that the reviewer found our rebuttal helpful. Thank you so much for keeping your positive score. ",
" Thank you very much for the detailed responses/discussions. They are very helpful for me to understand the relevant details. I will keep my positive score on the paper.",
"The paper i... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"6dH1xgxZO_V",
"o_JgXpljCFw",
"nips_2021_iaO_IH7CnGJ",
"nips_2021_iaO_IH7CnGJ",
"IPBwJy-XF7K",
"DZt6jH195Xd",
"iQHRCPSeiC9",
"SC_yTp_GpSb",
"nips_2021_iaO_IH7CnGJ",
"nips_2021_iaO_IH7CnGJ",
"nips_2021_iaO_IH7CnGJ"
] |
nips_2021_X3TdREzbZN | From Canonical Correlation Analysis to Self-supervised Graph Neural Networks | We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows the previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires none of the parameterized mutual information estimator, additional projector, asymmetric structures, and most importantly, negative samples which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerated solutions by decorrelating features in different dimensions. Our theoretical analysis further provides an understanding for the new objective which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets.
| accept | This paper proposes a new objective and framework for self-supervised representation learning on graphs. It formalizes unsupervised node representation learning as a multi-view learning task. Results show that the proposed approach outperforms many competitive baseline methods. The idea presented in the paper is interesting. The proposed approach depends on several hyperparameters. The effects of these hyperparameters are not studied, and a study of them should be included.
"-SY2LcxbQoR",
"yEfSesgRlvv",
"ddaFc5p39kY",
"vEjiBLfKc3s",
"Q82YvK4-o4",
"ZV7-o1Q4VqH",
"r31GAtCTJCM",
"gtbcQRLHC5J",
"PNc5M4neF5c",
"36mO2CFl0Fl",
"02JK7ociBn8",
"BL1zdCIZPdH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors propose CCA-SSG for self-supervised representation learning on graphs. The method is simple and efficient. Experimental results show high performance of the method on the node classification task. \n The problem setting is not clear.\nThe basic objective of CCA is to extract two different correlated fe... | [
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_X3TdREzbZN",
"vEjiBLfKc3s",
"PNc5M4neF5c",
"-SY2LcxbQoR",
"nips_2021_X3TdREzbZN",
"gtbcQRLHC5J",
"-SY2LcxbQoR",
"Q82YvK4-o4",
"BL1zdCIZPdH",
"-SY2LcxbQoR",
"Q82YvK4-o4",
"nips_2021_X3TdREzbZN"
] |
nips_2021_Yw7ZNeDVpBS | BAST: Bayesian Additive Regression Spanning Trees for Complex Constrained Domain | Nonparametric regression on complex domains has been a challenging task as most existing methods, such as ensemble models based on binary decision trees, are not designed to account for intrinsic geometries and domain boundaries. This article proposes a Bayesian additive regression spanning trees (BAST) model for nonparametric regression on manifolds, with an emphasis on complex constrained domains or irregularly shaped spaces embedded in Euclidean spaces. Our model is built upon a random spanning tree manifold partition model as each weak learner, which is capable of capturing any irregularly shaped spatially contiguous partitions while respecting intrinsic geometries and domain boundary constraints. Utilizing many nice properties of spanning tree structures, we design an efficient Bayesian inference algorithm. Equipped with a soft prediction scheme, BAST is demonstrated to significantly outperform other competing methods in simulation experiments and in an application to the chlorophyll data in Aral Sea, due to its strong local adaptivity to different levels of smoothness.
| accept | This article introduces a novel Bayesian method for nonparametric regression on manifolds, referred to as Bayesian Additive Regression Spanning Trees (BAST). The basic idea is to build a sparse graph $\mathcal{G}$ on the observed covariate data points, and use a BART-like ensemble of tree-based models on $\mathcal{G}$. The graph $\mathcal{G}$ captures the lower-dimensional geometry of the manifold on which the covariates live, and the tree-based models yield piecewise-constant functions on partitions of the data. A Gibbs sampler algorithm is provided for posterior inference. In particular, a well-chosen prior on spanning trees facilitates exact sampling from the full conditional on trees for each tree-based model component in the ensemble, using a previously known result from the literature. Experiments are performed on simulated and real data.
Generally speaking, the reviewers and I found the paper to be interesting and novel. The paper is well-written and the method appears to provide a valuable contribution to this area of research. The method provides a compelling improvement in performance relative to competing methods.
In my view, the main limitations are:
1) The computation time is quite costly compared to competitors. For instance, on the U-shape example, BAST takes 651.49 seconds while BART takes only 15.83 seconds (see authors' reply to Reviewer Tbpc). The authors mention that a more computationally efficient implementation of BAST is under investigation.
2) Performance is likely to degrade significantly as the dimensionality of the data grows; see point 5 by Reviewer Tbpc and the authors' reply.
3) The method for out-of-sample predictions seems *ad hoc* and could potentially be improved. The method does not provide a coherent (i.e., projective) model for all points in the space, since it is defined only on the observed points due to reliance on the graph $\mathcal{G}$. Thus, the model does not lead to a natural technique for making out-of-sample predictions, which is presumably why an *ad hoc* method was used.
The only score below the acceptance threshold was from Reviewer R75X, with a score of 5. Although R75X has not responded further, I found the authors' reply to satisfactorily address the main criticisms and questions. | val | [
"OcGnPVqCc3O",
"dUhmoWECYnG",
"gOYdcRNqP3l",
"R7UQvC3buI",
"qcFgG3i-Eqc",
"y7P8LOUl3M4",
"Ug610_ssfyR",
"KZQH7yjRMbt",
"-Qaq6-bGLfH",
"jpOrqDU6Kds",
"gzCrHNgEQu"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, \n\n We are very grateful to the reviewer for the valuable and constructive comments and suggestions, which greatly helped us to improve the manuscript. We have provided a point-by-point response to the comments. We are wondering if there are any other comments or feedback to help further improve... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
4
] | [
"qcFgG3i-Eqc",
"gOYdcRNqP3l",
"y7P8LOUl3M4",
"KZQH7yjRMbt",
"gzCrHNgEQu",
"jpOrqDU6Kds",
"-Qaq6-bGLfH",
"nips_2021_Yw7ZNeDVpBS",
"nips_2021_Yw7ZNeDVpBS",
"nips_2021_Yw7ZNeDVpBS",
"nips_2021_Yw7ZNeDVpBS"
] |
nips_2021_c_XcmuxwAY | Hyperbolic Busemann Learning with Ideal Prototypes | Hyperbolic space has become a popular choice of manifold for representation learning of various datatypes from tree-like structures and text to graphs. Building on the success of deep learning with prototypes in Euclidean and hyperspherical spaces, a few recent works have proposed hyperbolic prototypes for classification. Such approaches enable effective learning in low-dimensional output spaces and can exploit hierarchical relations amongst classes, but require privileged information about class labels to position the hyperbolic prototypes. In this work, we propose Hyperbolic Busemann Learning. The main idea behind our approach is to position prototypes on the ideal boundary of the Poincar\'{e} ball, which does not require prior label knowledge. To be able to compute proximities to ideal prototypes, we introduce the penalised Busemann loss. We provide theory supporting the use of ideal prototypes and the proposed loss by proving its equivalence to logistic regression in the one-dimensional case. Empirically, we show that our approach provides a natural interpretation of classification confidence, while outperforming recent hyperspherical and hyperbolic prototype approaches.
| accept | The paper proposes a new approach for classification in hyperbolic space by combining ideal prototypes at the boundary of hyperbolic space and a penalized Busemann loss. The paper is well written, clear, and relevant to the NeurIPS community. All reviewers and the AC support acceptance, especially due to the novel and interesting ideas that advance research on hyperbolic neural networks and their applications, as well as the promising empirical results of the method. Reviewers also highlighted that the approach is technically sound, that claims are supported in the paper, and that the empirical evaluation is good. Furthermore, questions and concerns related to the derivation of the approach, motivation, and significance of results were resolved after rebuttal. When preparing the camera-ready version, please incorporate the overall feedback and suggestions of reviewers into the new revision.
"CzPoGqc3obw",
"6mypbjGgY7I",
"li1joC2xrA1",
"Pljg-orIFj1",
"giFI_V_V5XI",
"PEtpp7EZqMc",
"e4clHRgAW1w",
"DGLykeJluFz",
"ewr5VjaC0-e",
"6evEGroXN_9",
"HzJn_nX3HUe",
"ngpT4TblB0X"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" We thank the reviewer for the suggestion. We will include the hyperparameter tuning in the supplementary materials and the standard errors to the paper, as well as the low-dimensionality discussion suggested by reviewer 2AsY.\n\nRegarding the hierarchical prior, the mAP refers to performance over all leaf nodes, ... | [
-1,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4
] | [
"6mypbjGgY7I",
"6evEGroXN_9",
"giFI_V_V5XI",
"nips_2021_c_XcmuxwAY",
"HzJn_nX3HUe",
"DGLykeJluFz",
"nips_2021_c_XcmuxwAY",
"ewr5VjaC0-e",
"e4clHRgAW1w",
"ngpT4TblB0X",
"Pljg-orIFj1",
"nips_2021_c_XcmuxwAY"
] |
nips_2021_YjZoWjTKYvH | Backward-Compatible Prediction Updates: A Probabilistic Approach | When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML model becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; our demand is to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as an answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates.
| accept | This work introduces a Bayesian approach to the prediction update problem. In it, they consider an interesting use case in which they want to label a large unlabelled dataset while taking advantage of new pretrained classifiers as they become available. Overall, reviewers were positive about the paper, but had several good suggestions when it comes to improving the presentation. I encourage the authors to carefully incorporate reviewer feedback at the revision stage.
"_0c9jDVX1g",
"JdfBZtAelW5",
"Yp9V9vO1mxi",
"ty_AuAqQG3",
"62fg68me43",
"MNZ--whWsi_",
"iJuLms111Cp",
"Qz7Lf3m0QcY",
"Qe9OELgBN6u",
"J5MaP8TYUMx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an approach for updating predictions on an unlabeled dataset as new, pretrained classifiers become available. Calling it the \"Prediction Update problem\", the authors seek to 1) make updates which improve accuracy, 2) avoid updates which introduce new errors, and 3) update predictions on data ... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_YjZoWjTKYvH",
"Yp9V9vO1mxi",
"ty_AuAqQG3",
"MNZ--whWsi_",
"nips_2021_YjZoWjTKYvH",
"J5MaP8TYUMx",
"Qe9OELgBN6u",
"_0c9jDVX1g",
"nips_2021_YjZoWjTKYvH",
"nips_2021_YjZoWjTKYvH"
] |
nips_2021_VA18aFPYfkd | Truncated Marginal Neural Ratio Estimation | Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulation-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability, which is unique among modern algorithms. Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior and by proposing simulations targeted to an observation of interest via a prior suitably truncated by an indicator function. Furthermore, by estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results. Since scientists cannot access the ground truth, these tests are necessary for trusting inference in real-world applications. We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors.
| accept | The paper proposes a method for simulation-based inference that is based on neural ratio estimation and estimates marginals of the posterior distribution.
The reviewers are generally positive about the submission, citing the usefulness of the proposed method, the clarity of the presentation, and the good variety of experiments as strengths. There were some initial concerns, but the reviewers appreciated the extensive and thorough responses provided by the authors and most concerns were allayed. Overall, I'm happy to recommend acceptance. | train | [
"RAd6-2BWL9t",
"pn2Fg7-Y21L",
"vZDlWRcx-Sj",
"Hj0i5g4MirP",
"l1qYpgYTYu",
"H5G-2ZoZN2s",
"ZOICbbGnDDM",
"8v4dQ3NYMSi",
"4GSoaVjWMHW",
"FmJwIs9roDi",
"3-gqwVMKblE",
"3KwHWNgQSb",
"1veYn5qslv3",
"jWHKZLtwqV",
"LVMOX8hX3DB",
"ta9TR3C8KmY",
"UqdvJ_LoTcZ",
"1vdIVN86czL",
"JuXWEwYoGdh"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for their lengthy and detailed response, and for getting on it and running so many new experiments in such a short time. It sounds like the results are broadly positive, but without being able to see the results, settings etc I am hesitant to upgrade my score based solely off of this dis... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"vZDlWRcx-Sj",
"nips_2021_VA18aFPYfkd",
"Hj0i5g4MirP",
"l1qYpgYTYu",
"JuXWEwYoGdh",
"pn2Fg7-Y21L",
"nips_2021_VA18aFPYfkd",
"ZOICbbGnDDM",
"8v4dQ3NYMSi",
"3-gqwVMKblE",
"3KwHWNgQSb",
"1veYn5qslv3",
"JuXWEwYoGdh",
"pn2Fg7-Y21L",
"ta9TR3C8KmY",
"1vdIVN86czL",
"nips_2021_VA18aFPYfkd",
... |
nips_2021_IBVBtz_sRSm | ReAct: Out-of-distribution Detection With Rectified Activations | Out-of-distribution (OOD) detection has received much attention lately due to its practical importance in enhancing the safe deployment of neural networks. One of the primary challenges is that models often produce highly confident predictions on OOD data, which undermines the driving principle in OOD detection that the model should only be confident about in-distribution samples. In this work, we propose ReAct—a simple and effective technique for reducing model overconfidence on OOD data. Our method is motivated by novel analysis on internal activations of neural networks, which displays highly distinctive signature patterns for OOD distributions. Our method can generalize effectively to different network architectures and different OOD detection scores. We empirically demonstrate that ReAct achieves competitive detection performance on a comprehensive suite of benchmark datasets, and give theoretical explication for our method’s efficacy. On the ImageNet benchmark, ReAct reduces the false positive rate (FPR95) by 25.05% compared to the previous best method.
| accept | This paper observes that OOD samples tend to have overconfident activations and it proposes a simple activation clipping mechanism to reduce model overconfidence on OOD data. Although the proposed method is simple and straightforward, the reviewers found the theoretical analysis and insightful discussion valuable to the research field. Given the strong results and clear presentation of the ideas, I recommend acceptance. | train | [
"ylNTyMJUj_",
"KKGFuCCK1zN",
"CEGG8OIgGRd",
"PMlVXgkEpX0",
"Uhq4k2FyLi5",
"y1csmLg5SBf",
"puN-KmBBuFm",
"ppFiOAWlr-",
"HITOFd-3oxL",
"xXOHoeA0iB3",
"iJlTRyaFg2O",
"NwSlEs_syMC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response.\nI believe my initial score was too low and I have raised it.",
"The paper proposes a very simple way of improving out-of-distribution detection performance in neural network classifiers by rectifying the pre-logit layer's activations.\n Strengths:\nThe approach is very ... | [
-1,
4,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
3
] | [
"CEGG8OIgGRd",
"nips_2021_IBVBtz_sRSm",
"PMlVXgkEpX0",
"Uhq4k2FyLi5",
"KKGFuCCK1zN",
"nips_2021_IBVBtz_sRSm",
"iJlTRyaFg2O",
"nips_2021_IBVBtz_sRSm",
"y1csmLg5SBf",
"NwSlEs_syMC",
"nips_2021_IBVBtz_sRSm",
"nips_2021_IBVBtz_sRSm"
] |
nips_2021_AIQOddM5Xm | Non-local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation | Available 3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision. Barring synthetic or in-studio domains, acquiring such supervision for each new target environment is highly inconvenient. To this end, we cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target. We propose to infer image-to-pose via two explicit mappings viz. image-to-latent and latent-to-pose where the latter is a pre-learned decoder obtained from a prior-enforcing generative adversarial auto-encoder. Next, we introduce relation distillation as a means to align the unpaired cross-modal samples i.e., the unpaired target videos and unpaired 3D pose sequences. To this end, we propose a new set of non-local relations in order to characterize long-range latent pose interactions, unlike general contrastive relations where positive couplings are limited to a local neighborhood structure. Further, we provide an objective way to quantify non-localness in order to select the most effective relation set. We evaluate different self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on standard benchmarks.
| accept | This paper studies the problem of 3D human pose estimation by combining information from labeled sources and unlabeled data in the wild. The paper initially received mixed reviews, tending toward borderline. The reviewers' major concern was presentation clarity; multiple reviewers found the paper hard to follow. The authors provided a rebuttal that addressed some of the reviewers' concerns and promised to improve the presentation. The paper was discussed and most reviewers responded to the rebuttal. One of the reviewers raised their rating but one reviewer did not. The main complaint is still around the writing and presentation. The AC agrees with the reviewers that the paper makes a contribution strong enough to be accepted and urges the authors to look at the reviewers' feedback, incorporate their comments, and clarify the writing as promised in the camera-ready. | test | [
"0SO6rYzQbx",
"jgIhJ6lSehe",
"2QsIRBeB6v-",
"1TnfeIfs5qf",
"llvlkt57zpN",
"rh3Aw_opCGG",
"XeySTR__8W0",
"xDrjJaaCfA0",
"rd0SdUUzctV",
"S9cYhMmhYYY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This work regards monocular 3D human pose estimation as a domain adaptation problem to solve the problem of insufficient outdoor training data. The framework is composed of an image-to-latent encoder, which is pre-trained on the source domain and adapted to the target domain using unlabeled videos, and a latent-to... | [
6,
6,
-1,
6,
-1,
4,
-1,
-1,
-1,
-1
] | [
5,
2,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1
] | [
"nips_2021_AIQOddM5Xm",
"nips_2021_AIQOddM5Xm",
"llvlkt57zpN",
"nips_2021_AIQOddM5Xm",
"XeySTR__8W0",
"nips_2021_AIQOddM5Xm",
"0SO6rYzQbx",
"jgIhJ6lSehe",
"rh3Aw_opCGG",
"1TnfeIfs5qf"
] |
nips_2021_XCaZKu00a_D | Fast Training of Neural Lumigraph Representations using Meta Learning | Novel view synthesis is a long-standing problem in machine learning and computer vision. Significant progress has recently been made in developing neural scene representations and rendering techniques that synthesize photorealistic images from arbitrary views. These representations, however, are extremely slow to train and often also slow to render. Inspired by neural variants of image-based rendering, we develop a new neural rendering approach with the goal of quickly learning a high-quality representation which can also be rendered in real-time. Our approach, MetaNLR++, accomplishes this by using a unique combination of a neural shape representation and 2D CNN-based image feature extraction, aggregation, and re-projection. To push representation convergence times down to minutes, we leverage meta learning to learn neural shape and image feature priors which accelerate training. The optimized shape and image features can then be extracted using traditional graphics techniques and rendered in real time. We show that MetaNLR++ achieves similar or better novel view synthesis results in a fraction of the time that competing methods require.
| accept | All four reviewers attest to the clarity of presentation, the importance of the problem, and solid engineering. There is a large spread in the judgements of the significance of the work, explaining the different scores of 3, 5, 6, 8. Overall, the paper has been defended sufficiently well. | train | [
"yd7QS09z1j2",
"D3ZvxJvssvq",
"KfquTMH_12",
"tirxKrMniXK",
"Ag7xic31UMp",
"o7m-_ETL8Mi",
"HMpEcXnVBr",
"QM0WvRHarN",
"RNBRs_7J6VI",
"sT8f3fe0U_E"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method for capturing the 3D shape and appearance of objects using an implicit model for the shape, and an aggregation of reprojected CNN features for the appearance. The authors highlight two problems with existing approaches that they try to address - slow training and rendering times. The k... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
2
] | [
"nips_2021_XCaZKu00a_D",
"Ag7xic31UMp",
"nips_2021_XCaZKu00a_D",
"sT8f3fe0U_E",
"yd7QS09z1j2",
"RNBRs_7J6VI",
"QM0WvRHarN",
"nips_2021_XCaZKu00a_D",
"nips_2021_XCaZKu00a_D",
"nips_2021_XCaZKu00a_D"
] |
nips_2021_KnAMQ3nH8Pq | Analytical Study of Momentum-Based Acceleration Methods in Paradigmatic High-Dimensional Non-Convex Problems | The optimization step in many machine learning problems rarely relies on vanilla gradient descent but it is common practice to use momentum-based accelerated methods. Despite these algorithms being widely applied to arbitrary loss functions, their behaviour in generically non-convex, high dimensional landscapes is poorly understood. In this work, we use dynamical mean field theory techniques to describe analytically the average dynamics of these methods in a prototypical non-convex model: the (spiked) matrix-tensor model. We derive a closed set of equations that describe the behaviour of heavy-ball momentum and Nesterov acceleration in the infinite dimensional limit. By numerical integration of these equations, we observe that these methods speed up the dynamics but do not improve the algorithmic threshold with respect to gradient descent in the spiked model.
| accept | The reviewers and AC agree that the idea of considering a high-dimensional limit of optimization dynamics using mean-field methodology to heuristically derive computational thresholds for important statistical recovery methods is interesting and may be conducive towards stronger theoretical results. The experiments demonstrating the accuracy of these predictions are very helpful in supporting the mathematical approach. A number of useful suggestions on presentation were made by the reviewers. The authors should make sure to incorporate these in their final version.
| train | [
"oBHe7b5wvxF",
"BXh6Nn4DyT8",
"swDjrYWNawo",
"sR4l9OgPYFk",
"I0rX_tqPjH",
"5PdUT7nH6da",
"2yzNbLOkbB",
"IJeMZZYhlG",
"AesNgJVyXvm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This mismatch may be due to the fact that quadratic convergence is sometimes used for sublinearly-convergent algorithms whose error goes down as $\\frac{1}{T^2}$, such as Nesterov's method for smooth convex functions.",
" We thank the reviewer for reviewing our work. The first point of the reviewer is very rele... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"sR4l9OgPYFk",
"AesNgJVyXvm",
"IJeMZZYhlG",
"2yzNbLOkbB",
"5PdUT7nH6da",
"nips_2021_KnAMQ3nH8Pq",
"nips_2021_KnAMQ3nH8Pq",
"nips_2021_KnAMQ3nH8Pq",
"nips_2021_KnAMQ3nH8Pq"
] |
nips_2021_WtmMyno9Tq2 | Multimodal Few-Shot Learning with Frozen Language Models | When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model presented with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of any number of interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.
| accept | This work presents a proof-of-concept for multimodal few-shot training of frozen language models — where the parameters of a pre-trained language model are kept fixed while an image encoder is trained to prompt it to generate text. Despite its simplicity, the approach demonstrates surprising generalization capabilities of the model when presented unseen visual concepts, although as pointed out by the authors themselves, the results are still far from what current models can achieve on these tasks and more work is needed to refine these ideas further. | test | [
"yg4BBdEahgE",
"DGtgqJSpPI4",
"1A7dJWN8e6O",
"LyDxmv_ddVQ",
"P2DtyUvvmG",
"vYlZ7iZQ9TJ",
"AuXgH6-kvpR",
"Vw5aD5tjPwm",
"5I1avLv6wvW",
"3l1483VOGy",
"76weWBDVbM",
"wBDJuU2cBi",
"9Hi8koUpNx_",
"x4BzVhmbTJ",
"c8TlBPpOSTx",
"yOmFsKt0pZG",
"BC0Oe0P3IZH",
"eBsfRceT_gM",
"_gvjlEojNAr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Although we agree that doing \"close-ended\" evaluation could reduce the variance, we don’t think it would **necessarily** give a more accurate assessment of the models. It can reward models that are trained as classifiers (using the same set of possible answers as targets) - even though they would necessarily fa... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"P2DtyUvvmG",
"Vw5aD5tjPwm",
"vYlZ7iZQ9TJ",
"5I1avLv6wvW",
"yOmFsKt0pZG",
"9Hi8koUpNx_",
"nips_2021_WtmMyno9Tq2",
"3l1483VOGy",
"c8TlBPpOSTx",
"76weWBDVbM",
"wBDJuU2cBi",
"AuXgH6-kvpR",
"_gvjlEojNAr",
"eBsfRceT_gM",
"BC0Oe0P3IZH",
"nips_2021_WtmMyno9Tq2",
"nips_2021_WtmMyno9Tq2",
"... |
nips_2021_mgkxmKYW62 | Approximating the Permanent with Deep Rejection Sampling | Juha Harviainen, Antti Röyskö, Mikko Koivisto | accept | This paper gives a generalization of an algorithm of Huber and Law for approximating the permanent of a non-negative matrix (an important and basic counting/sampling problem). The idea is to combine the rejection sampling method of Huber and Law with a deep (parameterized by some depth parameter d) "look-ahead" that uses a linear combination of sub-problems at depths <= d in the recursion tree as an upper bound on the permanent. This theoretically strengthens the guarantees of Huber and Law (and beats them on the "toy" example of random Bernoulli matrices). In experiments it beats the guarantees of a recent work of Kuck et al. in approximating the permanent.
There were concerns that the theoretical contributions of the paper are not strong enough on top of prior work. However, the relatively clean and simple look-ahead method (deep AR) developed in this paper does give a significant advantage in terms of practical speed-ups. The deep AR heuristic, as a result, appears to be an appealing avenue to explore further. We recommend acceptance. | train | [
"NG7Zs_ggAbf",
"xOske1CG1BD",
"RSlRbdyKx4B",
"xw0BvlaANLW",
"0Yrz1z6-9S",
"_cR4pi3N4WT",
"obCp-66LQxb",
"q7iTSakm2CB",
"_-cDWVB1_W",
"IDxWgJCOxq"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. We’d like to further emphasize that AdaPart-0 wins *nowhere* in the results reported in Figure 2: On Uniform, each variant of HL is everywhere better than the respective variant of AdaPart, and AdaPart-0-DS is slightly better than AdaPart-0 on the smallest instances. On Block Diagonal, on... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
9,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"xOske1CG1BD",
"RSlRbdyKx4B",
"IDxWgJCOxq",
"obCp-66LQxb",
"q7iTSakm2CB",
"_-cDWVB1_W",
"nips_2021_mgkxmKYW62",
"nips_2021_mgkxmKYW62",
"nips_2021_mgkxmKYW62",
"nips_2021_mgkxmKYW62"
] |
nips_2021_ak06J5jNR4 | Revisiting Model Stitching to Compare Neural Representations | Yamini Bansal, Preetum Nakkiran, Boaz Barak | accept | This paper explores a previously described technique called "model stitching", in which two trained models can be joined together. Through several empirical experiments the authors show that this technique provides insight into the representational spaces learned by the separate models.
The reviewers thought this idea was interesting, and the paper was creatively written and clear. The strongest critiques were about the thoroughness of experiments, stating that the authors did not explore the stitching of very diverse architectures or training on very different datasets. However, I feel the analyses in this paper are extensive, and so I think it is reasonable to leave some of that further experimentation for future work. | train | [
"AswDFanvjsu",
"VPVvH9G-9wG",
"cWqaOnDfAtL",
"3kMU1USn2I5",
"bTtmNWwqPxr",
"AGrZKqo9pZ",
"GEJllZ4ODYI",
"DuPrjqEsD6a",
"a6FnZ3-Orki"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors studied model stitching as a methodology to examine the internal representations of neural networks. Particularly, they use model stitching to verify several statements such as “good networks learn similar representations” and “representations learned with (1) more data, (2) bigger width, or (3) more t... | [
7,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_ak06J5jNR4",
"a6FnZ3-Orki",
"DuPrjqEsD6a",
"AswDFanvjsu",
"GEJllZ4ODYI",
"nips_2021_ak06J5jNR4",
"nips_2021_ak06J5jNR4",
"nips_2021_ak06J5jNR4",
"nips_2021_ak06J5jNR4"
] |
nips_2021_P5MtdcVdFZ4 | AugMax: Adversarial Composition of Random Augmentations for Robust Training | Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs). Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness. For example, AugMix explores random compositions of a diverse set of augmentations to enhance broader coverage, while adversarial training generates adversarially hard samples to spot the weakness. Motivated by this, we propose a data augmentation framework, termed AugMax, to unify the two aspects of diversity and hardness. AugMax first randomly samples multiple augmentation operators and then learns an adversarial mixture of the selected operators. Being a stronger form of data augmentation, AugMax leads to a significantly augmented input distribution which makes model training more challenging. To solve this problem, we further design a disentangled normalization module, termed DuBIN (Dual-Batch-and-Instance Normalization), that disentangles the instance-wise feature heterogeneity arising from AugMax. Experiments show that AugMax-DuBIN leads to significantly improved out-of-distribution robustness, outperforming prior arts by 3.03%, 3.49%, 1.82% and 0.71% on CIFAR10-C, CIFAR100-C, Tiny ImageNet-C and ImageNet-C. Codes and pretrained models are available: https://github.com/VITA-Group/AugMax.
| accept | This paper focuses on an interesting new data augmentation called AugMax. The proposal is to reconcile diversity with hardness by learning an adversarially weighted combination of multiple random augmentations. Since this stronger augmentation leads to a more diverse input distribution that is harder to fit, they further designed a new normalization strategy called DuBIN to disentangle the instance-wise feature heterogeneity of AugMax samples. The philosophy behind it sounds quite interesting to me, namely, the diversity-hardness tradeoff of data augmentations. This philosophy leads to a novel algorithm design I have never seen.
The clarity and novelty are clearly above the bar of NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication! Please include the additional experimental results in the next version. | train | [
"a9gSV8wY1AD",
"E9zhTKsElQn",
"wkWm0erAwoK",
"F9aOzYBBwEO",
"RsORcLaWT8",
"17uQ8gVbces",
"QgNm0qWTxGz",
"3P0oFrXJw-S",
"rEtXeOLg0P"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Q: Why is AdvMix worse than AugMax?\n\nA: To achieve good model robustness, the augmented training data should contain both enough diversity and enough adversarially hard cases. We discussed this aspect in lines 47-50 in the manuscript. \n\nIn AugMax, we optimize the worst-case mixing parameters (w,m) by gradient... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"QgNm0qWTxGz",
"nips_2021_P5MtdcVdFZ4",
"3P0oFrXJw-S",
"17uQ8gVbces",
"rEtXeOLg0P",
"nips_2021_P5MtdcVdFZ4",
"nips_2021_P5MtdcVdFZ4",
"nips_2021_P5MtdcVdFZ4",
"nips_2021_P5MtdcVdFZ4"
] |
nips_2021_DPHsCQ8OpA | Habitat 2.0: Training Home Assistants to Rearrange their Habitat | We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack – data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g. cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing 100x speed-ups over prior work; and, (iii) Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, stock groceries, set the table) that test a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from ‘hand-off problems’, and (3) SPA pipelines are more brittle than RL policies.
| accept | The paper presents the second version of a well-known and well-used simulator, environment, and benchmark for embodied computer vision, in particular navigation tasks.
The paper received 4 expert reviews with a wide spread (minimum rating 4, maximum rating), and was initially on the fence. The main weakness raised was a lack of methodology, as the paper mainly describes an engineering contribution.
In the discussion phase, a consensus emerged on the usefulness of this kind of contribution for the community, as the impact of the new simulator can be estimated to be large.
The AC concurs and recommends acceptance. | train | [
"ONcAujxWh7d",
"8Ha2_4MyqjK",
"1n4GD4fx2vA",
"hP_fTMiAYD0",
"fo03Ye_fOl9",
"AgNy6ou2t-Z",
"5L_iAKgvqcx",
"b0h_CIZcQvO",
"l9mOiv39q4",
"c3slS2yXrrv",
"dmOvlc1y--M",
"BnTPFUV1sSW",
"kgHSoTUT7th",
"P7RfEHnN_Wb",
"Akv6Ig-qX5i",
"5Ba9wrCWfl4",
"PTslu8txEv1",
"-3zr9XCjYYc",
"_nEaFEIYd2... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" We thank the reviewer for further engaging in the discussion. We will update the paper with these details of HAB and ReplicaCAD versus prior work. We discuss ReplicaCAD dataset biases in L398-401.",
" Thanks for the replies!\n\nYes! This comparison is very informative! Adding that info to the table or somewhere... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
10,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"8Ha2_4MyqjK",
"hP_fTMiAYD0",
"fo03Ye_fOl9",
"AgNy6ou2t-Z",
"kgHSoTUT7th",
"5L_iAKgvqcx",
"l9mOiv39q4",
"nips_2021_DPHsCQ8OpA",
"Akv6Ig-qX5i",
"dmOvlc1y--M",
"_nEaFEIYd21",
"nips_2021_DPHsCQ8OpA",
"PTslu8txEv1",
"-3zr9XCjYYc",
"b0h_CIZcQvO",
"_nEaFEIYd21",
"nips_2021_DPHsCQ8OpA",
"... |
nips_2021_xNmhYNQruJX | Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods | Seohong Park, Jaekyeom Kim, Gunhee Kim | accept | This paper proposes a novel trick to address the time discretization issue with continuous-time policy gradient methods. As a small time interval delta is used, the authors show that the variance of the PG estimator can explode when delta->0. To address the issue, they propose a Safe Action Repetition method, which lets the action repeat when the state changes within a small ball with diameter d. This approach guarantees safe and robust policy optimization that is delta-invariant and adaptive to exploration speed. All reviewers appreciate that this solution is interesting and potentially useful.
The paper could still be improved. For example, the delta-invariance is achieved by reducing the time discretization problem to identifying a small neighborhood of the state space. This technique depends heavily on a state distance metric that is available a priori. Finding a good state distance metric is nontrivial, and would require further research beyond this paper.
"UUosIEL4nS",
"mDQCZ6c3oP",
"qNz0YV1uQvH",
"fibUUFW-Pcp",
"T9nPW1oxUdZ",
"I9U-1trmwy9",
"ukmWIY-_VX_",
"GOqQ2YAi1Y7",
"i9o2suwOxe",
"7WAuRi10cNv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Real world tasks are often in continuous time, whereas typical RL algorithms operate in discrete time. In such situations the algorithms are often sensitive to the chosen discretization time step.\n\nSmall discretization scales can lead to various problems such as increases in the gradient variance, high computati... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_xNmhYNQruJX",
"qNz0YV1uQvH",
"I9U-1trmwy9",
"nips_2021_xNmhYNQruJX",
"fibUUFW-Pcp",
"UUosIEL4nS",
"7WAuRi10cNv",
"i9o2suwOxe",
"nips_2021_xNmhYNQruJX",
"nips_2021_xNmhYNQruJX"
] |
nips_2021_H_qljL8t_A | Meta-Learning Reliable Priors in the Function Space | Meta-Learning promises to enable more data-efficient inference by harnessing previous experience from related learning tasks. While existing meta-learning methods help us to improve the accuracy of our predictions in face of data scarcity, they fail to supply reliable uncertainty estimates, often being grossly overconfident in their predictions. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines. Even in a challenging lifelong BO setting, where optimization tasks arrive one at a time and the meta-learner needs to build up informative prior knowledge incrementally, our proposed method demonstrates strong positive transfer.
| accept | This paper uses PAC-Bayes meta-learning to do meta-learning using the functional KL. They demonstrate this in a series of experiments where they meta-learn hyperpriors of Gaussian processes for the purpose of doing Bayesian optimization among related tasks.
The reviewers thought the paper was well written, technically strong, and interesting. The review scores were 6, 7, 4, 6. The main concern here seems to be novelty. The reviewer arguing for rejection expressed that they thought it was a straightforward extension of functional KL and meta-learning ("The combination of functional-KL and meta-learning problem seem fairly marginal.").
One major criticism shared by all reviewers seems to be the scale of the experiments, in that they’re limited to low-dimensional problems due to limitations of the function space view. One reviewer lowered their score from a 7 to a 6 as a result of this during the discussion period. It seems that the authors are aware of this concern and have promised to address this in their discussion. From the authors: “We agree with the reviewers’ concern about high-dimensional data. For this reason, we have added a discussion of the method’s limitations to lower-dimensional data domains to the updated version of the paper.”
The majority vote among the reviewers was to accept the paper, citing that the paper is well written, technically well-executed and empirically strong (although limited to low-D problems). The application to meta-Bayesian optimization seems well motivated and practically useful. Thus the recommendation is to accept the paper as a poster. Hopefully the reviewers feedback can be incorporated into the camera ready version (including discussion of low-D limitations) to make the paper stronger. | train | [
"iOwdjVKdr3j",
"Xot3l2_cH-9",
"BKLdJvG_MOT",
"mT2o7gCnQZu",
"oPKF2XXLeEQ",
"aiNsOFiNnW7",
"5cVxe8-p2T",
"QUnYhbKi8Ov",
"1MnV_8eghRT",
"8OWd3PML12s",
"KaUq6pEMZl",
"QXzdehu-DeH",
"WyG623xfCDx",
"k-9aHAuFyZB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. Unfortunately, I don't believe this changes my evaluation of the relative novelty of the method, which is a relatively straightforward extension of two related methods. This doesn't change the fact that your paper is well-executed.\n\nI understand that you are motivated primarily by BO, ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"5cVxe8-p2T",
"nips_2021_H_qljL8t_A",
"QUnYhbKi8Ov",
"1MnV_8eghRT",
"WyG623xfCDx",
"k-9aHAuFyZB",
"k-9aHAuFyZB",
"Xot3l2_cH-9",
"WyG623xfCDx",
"QXzdehu-DeH",
"nips_2021_H_qljL8t_A",
"nips_2021_H_qljL8t_A",
"nips_2021_H_qljL8t_A",
"nips_2021_H_qljL8t_A"
] |
nips_2021_1Sy9EwFCyFQ | VoiceMixer: Adversarial Voice Style Mixup | Although recent advances in voice conversion have shown significant improvement, there still remains a gap between the converted voice and target voice. A key factor that maintains this gap is the insufficient decomposition of content and voice style from the source speech. This insufficiency leads to the converted speech containing source speech style or losing source speech content. In this paper, we present VoiceMixer which can effectively decompose and transfer voice style through a novel information bottleneck and adversarial feedback. With self-supervised representation learning, the proposed information bottleneck can decompose the content and style with only a small loss of content information. Also, for adversarial feedback of each information, the discriminator is decomposed into content and style discriminator with self-supervision, which enable our model to achieve better generalization to the voice style of the converted speech. The experimental results show the superiority of our model in disentanglement and transfer performance, and improve audio quality by preserving content information.
| accept | UPDATE: The revision was reviewed and the paper was accepted.
----
This paper was discussed at length between the SACs, ACs, ethics reviewers, ethics review chairs, and program chairs. In the end, a decision was made to conditionally accept the paper. The list of conditions for acceptance is as follows:
1. Meaningful broader impacts statement. Moving beyond superficial discussion of obvious harms into a more detailed and thorough reflection on ethical issues, especially possible broader impact and potential for misuse.
2. Restricted release of model through some type of licensing or form-restricted access (e.g., private repo accessed via request, model code and data use restricted by licenses, etc.)
3. Discussion of possible theoretical and practical mitigation strategies for minimizing harm of such technologies in the future. If this is not possible to discuss, include a clear articulation of the limits of such models in the absence of mitigation approaches. It is not necessary to implement the mitigation strategies discussed though some current work in this area should be highlighted by authors in the main text.
The original meta-review from the AC follows.
---
The reviewers have found this work technically sound overall, with a solid experimental section and thorough evaluation. The proposed approach is quite complex relative to the natural baseline (AutoVC), but the convincing experimental results adequately justify this additional complexity.
With regards to ethics however, there were some major concerns. I personally believe voice conversion has legitimate applications, and I do not think this work should be rejected simply because it genuinely improves on the state of the art in an area that is unfortunately fraught with potential misuse. It is imperative, however, that the authors appreciate the ethical implications of their work, and that it is correctly framed in this light in the manuscript.
The authors have responded to the ethics reviewers' concerns with a draft paragraph to include in the revised manuscript. The proposed changes include an overview of various potential nefarious uses, as well as a discussion of some potential countermeasures. Nevertheless, this has left some reviewers unconvinced.
After giving this a lot of thought however, I think this should be an opportunity for the authors to genuinely engage with the potential impact of their work, and therefore I prefer a positive approach. I would like to recommend acceptance, but this will be conditional on a further expansion of this section of the manuscript, demonstrating an exploration of the larger ethical concerns and implications of inappropriate use, as well as a clear, objective and exhaustive analysis showing that the benefits of publishing this work outweigh the risks. Please refer to the ethics reviewers' comments for further details on what should be included. | train | [
"cogAa0yjs9v",
"cZJFPllSQZo",
"N1vOgKmySEs",
"ikB69ja8Lu",
"-LTlL9r8o7j",
"zoiysKxqVSR",
"9MGFVGqNbwV",
"cYy4bAzsvX7",
"1o04QON45AF",
"zKqin9apVv",
"I_RQNkpLVkK",
"8rHO_51NhPA",
"8LF8SrtUZQu",
"EzBKxb6IQiD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate for your helpful comments and suggestions. We have provided responses to your questions below to address your concerns.\n \n[About the Parrotron]\nThank you for your advice. We will include this paper [1] to explain the model using text transcriptions.\n\n[About the proper bottleneck size]\nWe use “... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"I_RQNkpLVkK",
"cogAa0yjs9v",
"zKqin9apVv",
"1o04QON45AF",
"nips_2021_1Sy9EwFCyFQ",
"nips_2021_1Sy9EwFCyFQ",
"nips_2021_1Sy9EwFCyFQ",
"8LF8SrtUZQu",
"EzBKxb6IQiD",
"8rHO_51NhPA",
"nips_2021_1Sy9EwFCyFQ",
"nips_2021_1Sy9EwFCyFQ",
"nips_2021_1Sy9EwFCyFQ",
"nips_2021_1Sy9EwFCyFQ"
] |
nips_2021_Yx1OzVU_SRi | Predicting What You Already Know Helps: Provable Self-Supervised Learning | Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks), that do not require labeled data, to learn semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image from context, or predicting missing words, yet predicting this \textit{known} information helps in learning representations effective for downstream prediction tasks. This paper posits a mechanism based on approximate conditional independence to formalize how solving certain pretext tasks can learn representations that provably decrease the sample complexity of downstream supervised tasks. Formally, we quantify how the approximate independence between the components of the pretext task (conditional on the label and latent variables) allows us to learn representations that can solve the downstream task with drastically reduced sample complexity by just training a linear layer on top of the learned representation.
| accept | Three of four reviewers generally agree to recommend this paper for acceptance, and I agree with the strengths that they highlight, especially as reviewer Y1dq summarizes them.
Reviewer hGAR, who recommends rejection, makes a valid point as well: the experimental section and the discussion of assumptions leave room for improvement. I encourage the authors to take the committee's suggestions about these parts into consideration when revising the draft.
For instance, two reviewers would have liked to see an empirical signal for whether the conditional independence assumption holds in a real data setting (not simulation). Even if the authors choose not to do this, it is natural to ask and it is lacking from the experiments, and so it is ultimately a limitation. It's best to be clear about whether the experiments are meant to closely verify theory (including its assumptions), or whether they are meant only to show that the analysis is at least not at odds with empirical observations. The latter seems to better describe this paper's experiments, but it is a weaker point than the former, and discussing limitations ultimately helps with clarity.
I will add that there are other ways to support an assumption than by experiment. One is to identify example mathematical constructions that satisfy them (e.g. topic models, HMMs). Another is to point to similar assumptions made in related prior work. The paper does the latter reasonably well. If it is easy enough to do, perhaps the authors can make use of an example construction or two as well.
Overall I second Y1dq's remarks in discussion that this is a field with little theoretical analysis, and that what constitutes a reasonable abstraction and assumption is not yet clear. Making assumptions for progress is fine, especially if similar ones have been made in related work. The validation and discussion of assumptions in this paper could be improved as hGAR emphasizes, but overall the support from the remaining reviewers leads me to recommend acceptance.
| test | [
"SYi4DwDcA5i",
"xPRozjOrRiB",
"NgGHl31NqWG",
"EGhvkLHKXn",
"K66tt2Ew3X-",
"Nx3wdHiaD1L",
"xNu_KDWTcEz",
"zFmp9z7hfJl",
"fqSLQCRO5cz",
"KwV2j0Du_Ri",
"SfkxZP-82NQ",
"Dq6dbV5bwNy",
"LZ5dKCWNjj",
"0sGQemoJ20W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"- The paper theoretically analyzes self-supervision under some simplifying assumptions. They begin with a conditional independent setting, where X1 is conditionally independent of X2 given Y.\n- The high-level idea is to first pre-train a model that predicts X2 from X1. This can be done on unlabeled data. Then the... | [
8,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
3,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_Yx1OzVU_SRi",
"nips_2021_Yx1OzVU_SRi",
"nips_2021_Yx1OzVU_SRi",
"KwV2j0Du_Ri",
"zFmp9z7hfJl",
"Dq6dbV5bwNy",
"SfkxZP-82NQ",
"fqSLQCRO5cz",
"0sGQemoJ20W",
"NgGHl31NqWG",
"SYi4DwDcA5i",
"0sGQemoJ20W",
"xPRozjOrRiB",
"nips_2021_Yx1OzVU_SRi"
] |
nips_2021_aMZJBOiOOPg | Oracle Complexity in Nonsmooth Nonconvex Optimization | Guy Kornowski, Ohad Shamir | accept | This paper provides oracle complexity results for the nonconvex and nonsmooth optimization problem. The paper provides useful and interesting results about the complexity of finding near-approximate stationary points, and the techniques and results are new to the literature.
Overall the paper is very well written and easy to follow, yet the results are substantial. I recommend acceptance of this paper, at least as a spotlight paper. | train | [
"Vdphw4GlL3N",
"QIkdfJR-dX9",
"G80FctjPfl",
"3ST2heLYjCP",
"M42oPTGX92x",
"gaX-8XXx-gS",
"PLaoAv1tPK",
"uMm4alQ9Gl_",
"3qlntVUbTFy",
"Hu1D-bXFdf",
"X7k4aNQi6eY",
"s8c-c1WvzIK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Update\n\nI'd like to thank the authors for their response. \n\nI have decided to increase my score to 9 in light of the other reviews and these comments. I look forward to seeing this work published.\n\n========\n\nThis submission investigates two problems central to non-smooth, non-convex optimization: i) th... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8,
9
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"nips_2021_aMZJBOiOOPg",
"3ST2heLYjCP",
"gaX-8XXx-gS",
"s8c-c1WvzIK",
"X7k4aNQi6eY",
"Vdphw4GlL3N",
"Hu1D-bXFdf",
"3qlntVUbTFy",
"nips_2021_aMZJBOiOOPg",
"nips_2021_aMZJBOiOOPg",
"nips_2021_aMZJBOiOOPg",
"nips_2021_aMZJBOiOOPg"
] |
nips_2021_z1F9G4VnGZ- | CentripetalText: An Efficient Text Instance Representation for Scene Text Detection | Scene text detection remains a grand challenge due to the variation in text curvatures, orientations, and aspect ratios. One of the hardest problems in this task is how to represent text instances of arbitrary shapes. Although many methods have been proposed to model irregular texts in a flexible manner, most of them lose simplicity and robustness. Their complicated post-processings and the regression under Dirac delta distribution undermine the detection performance and the generalization ability. In this paper, we propose an efficient text instance representation named CentripetalText (CT), which decomposes text instances into the combination of text kernels and centripetal shifts. Specifically, we utilize the centripetal shifts to implement pixel aggregation, guiding the external text pixels to the internal text kernels. The relaxation operation is integrated into the dense regression for centripetal shifts, allowing the correct prediction in a range instead of a specific value. The convenient reconstruction of text contours and the tolerance of prediction errors in our method guarantee the high detection accuracy and the fast inference speed, respectively. Besides, we shrink our text detector into a proposal generation module, namely CentripetalText Proposal Network (CPN), replacing Segmentation Proposal Network (SPN) in Mask TextSpotter v3 and producing more accurate proposals. To validate the effectiveness of our method, we conduct experiments on several commonly used scene text benchmarks, including both curved and multi-oriented text datasets. For the task of scene text detection, our approach achieves superior or competitive performance compared to other existing methods, e.g., F-measure of 86.3% at 40.0 FPS on Total-Text, F-measure of 86.1% at 34.8 FPS on MSRA-TD500, etc. 
For the task of end-to-end scene text recognition, our method outperforms Mask TextSpotter v3 by 1.1% in F-measure on Total-Text.
| accept | This paper presents a new method for scene text detection and recognition based on the integration of individual local responses and their centripetal shifts.
The paper has received 4 expert reviews, which were quite positive, with one reviewer in particular championing the paper. While the reviewers (and the AC) agreed that the paper had weaknesses in presentation and writing, there was general agreement that the paper has merits, sufficient novelty, and convincing results.
The AC concurs and proposes acceptance. | val | [
"SIssFjthheJ",
"H081nzUaZ3",
"pZ1WUyWsxXp",
"iHP77oxluor",
"cyefS44nD9c",
"I330Yr1Rna",
"39LpeHEEShX",
"feawPnBVTR0",
"P3HAfAXh7cR",
"UCv7CpoteIl",
"b_mWjk3nLUF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Update after discussion among reviewers: I am still strongly in favor of accepting this paper. \n\nThe paper presents a new method to represent arbitrary shaped text in images and shows how this representation can be used in the output layer of some of the recent text detection methods. \n\nThe results look qualit... | [
8,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_z1F9G4VnGZ-",
"feawPnBVTR0",
"nips_2021_z1F9G4VnGZ-",
"nips_2021_z1F9G4VnGZ-",
"I330Yr1Rna",
"39LpeHEEShX",
"iHP77oxluor",
"pZ1WUyWsxXp",
"b_mWjk3nLUF",
"SIssFjthheJ",
"nips_2021_z1F9G4VnGZ-"
] |
nips_2021_MckiHYXsBT | Learning to Select Exogenous Events for Marked Temporal Point Process | Marked temporal point processes (MTPPs) have emerged as a powerful modeling tool for a wide variety of applications which are characterized using discrete events localized in continuous time. In this context, the events are of two types: endogenous events, which occur due to the influence of the previous events, and exogenous events, which occur due to the effect of the externalities. However, in practice, the events do not come with endogenous or exogenous labels. To this end, our goal in this paper is to identify the set of exogenous events from a set of unlabelled events. To do so, we first formulate the parameter estimation problem in conjunction with the exogenous event set selection problem and show that this problem is NP-hard. Next, we prove that the underlying objective is a monotone and \alpha-submodular set function with respect to the candidate set of exogenous events. Such a characterization subsequently allows us to use a stochastic greedy algorithm which was originally proposed in~\cite{greedy} for submodular maximization. However, we show that it also admits an approximation guarantee for maximizing an \alpha-submodular set function, even when the learning algorithm provides imperfect estimates of the trained parameters. Finally, our experiments with synthetic and real data show that our method performs better than the existing approaches built upon superposition of endogenous and exogenous MTPPs.
| accept | The submission provides useful theory and experimental support for methods to label Marked Temporal Point Processes. There was extensive engagement between the authors and reviewers, and the reviewers came away well satisfied by the extended experiments. We ask that the authors please update the paper to incorporate as much of the useful additional exposition as possible, updating the main paper where possible or including supplemental material where appropriate. | train | [
"09x7AKfMnxy",
"lxJPd0aYGp",
"dhB5hryDLS",
"QSvWaJ2Gstd",
"n2MTXl_9KC7",
"AElvrjCijVo",
"1q5ucLoZmd2",
"245EfSyCUx3",
"1WajLK-oigV",
"GoW-1RGDnCA",
"JbI1JBUSslQ",
"GkyrwZRmdCf",
"uJM7MUaa6FO",
"CxIB4cjGn-C",
"Q4ZyyMRj7Sv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again to the authors for their responses. I don't think the prediction experiments reveal much that is new to the field - they indicate that it can be better to use simpler models for some event types; this is well known for multivariate point process models. However, my initial reaction was that the predi... | [
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"JbI1JBUSslQ",
"nips_2021_MckiHYXsBT",
"nips_2021_MckiHYXsBT",
"uJM7MUaa6FO",
"AElvrjCijVo",
"1q5ucLoZmd2",
"JbI1JBUSslQ",
"1WajLK-oigV",
"JbI1JBUSslQ",
"Q4ZyyMRj7Sv",
"lxJPd0aYGp",
"dhB5hryDLS",
"CxIB4cjGn-C",
"nips_2021_MckiHYXsBT",
"nips_2021_MckiHYXsBT"
] |
nips_2021_KXRTmcv3dQ8 | DRIVE: One-bit Distributed Mean Estimation | Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, Michael Mitzenmacher | accept | The paper proposes a new 1-bit mean estimation algorithm. While the idea of using random rotation matrices have been explored in this problem, the paper proposes a new elegant algorithm that has better performance both in theory and experiments. The paper is well written and I recommend acceptance.
I am curious to see how this method compares empirically to the variable length encoding presented in Distributed mean estimation with limited communication paper. I encourage authors to add this comparison and incorporate other reviewer comments in the final version. | val | [
"Vd5EitPF-n4",
"DMFPqDy-LWc",
"TpFBXYpACus",
"Fww3uzWiM-9",
"NRlzDkHH4uf",
"txJOSeypv6M",
"_g39rpnPaC6",
"RBlmljYuQPx",
"g9b3Rh8fw_I",
"CgAuo_-H5jz"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for pointing out these observations and suggestions.\n\nAs suggested, we will move some of the experimental results to the supplementary material to make room for addressing the reviews.\n\nDRIVE with a Structured Random Rotation satisfies the relative \"bounded variance\" assumption according to Lemma ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
1
] | [
"DMFPqDy-LWc",
"NRlzDkHH4uf",
"g9b3Rh8fw_I",
"_g39rpnPaC6",
"RBlmljYuQPx",
"CgAuo_-H5jz",
"nips_2021_KXRTmcv3dQ8",
"nips_2021_KXRTmcv3dQ8",
"nips_2021_KXRTmcv3dQ8",
"nips_2021_KXRTmcv3dQ8"
] |
nips_2021_LT5QcAeuM15 | Learning Space Partitions for Path Planning | Path planning, the problem of efficiently discovering high-reward trajectories, often requires optimizing a high-dimensional and multimodal reward function. Popular approaches like CEM and CMA-ES greedily focus on promising regions of the search space and may get trapped in local maxima. DOO and VOOT balance exploration and exploitation, but use space partitioning strategies independent of the reward function to be optimized. Recently, LaMCTS empirically learns to partition the search space in a reward-sensitive manner for black-box optimization. In this paper, we develop a novel formal regret analysis for when and why such an adaptive region partitioning scheme works. We also propose a new path planning method LaP3 which improves the function value estimation within each sub-region, and uses a latent representation of the search space. Empirically, LaP3 outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima, and shows benefits when plugged into the planning components of model-based RL such as PETS. These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 39% on average across 9 tasks, and in molecular design by up to 0.4 on properties on a 0-1 scale. Code is available at https://github.com/yangkevin2/neurips2021-lap3.
| accept | The paper focuses on a black-box search approach to planning problems. The paper provides a theoretical analysis of an existing method in this space - La-MCTS - and suggests a modification of that method grounded in their analysis, resulting in a new approach called PlaLam. This analysis was valued by all reviewers and considered technically sound. And of course, planning problems are a relevant problem to the machine learning community.
Although the authors did compare their method to various method on a variety of planning problems, initially, it was unclear how the performance of the proposed 'PlaLam' modification compares to the original La-MCTS. This was brought up in review and was addressed by the authors in their reply. Other, minor, concerns of the reviewers were also satisfactorily addressed by the authors.
The paper was considered to be well written. | val | [
"KE5L6uxnhy7",
"ptZp1OH2len",
"8JP2y5mPPam",
"I5NGglrwHR9",
"2PQDaedUfmv",
"saAiycYQd5W",
"1Anmylr0M6S",
"KIGGBy9nnf0",
"baeSiuk8mwn",
"Gig_tHRb5Aq",
"qIJ4qvtPFG",
"_cc4GcXJ6XO",
"7VuyPO6HBE",
"0iXxS_LAISV",
"Twt5LtHcutx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This work proposes a new path planning method PlaLaM, which is an extension of the LaMCTS algorithm.\nThe authors provide a novel regret analysis of adaptive region partitioning schemes. \n\nThe authors demonstrate that the PlaLam method improves performance in toy 2D navigation tasks. The approach is also be appl... | [
7,
-1,
-1,
7,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
-1,
-1,
3,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_LT5QcAeuM15",
"8JP2y5mPPam",
"_cc4GcXJ6XO",
"nips_2021_LT5QcAeuM15",
"1Anmylr0M6S",
"nips_2021_LT5QcAeuM15",
"0iXxS_LAISV",
"Gig_tHRb5Aq",
"nips_2021_LT5QcAeuM15",
"7VuyPO6HBE",
"KE5L6uxnhy7",
"I5NGglrwHR9",
"baeSiuk8mwn",
"saAiycYQd5W",
"nips_2021_LT5QcAeuM15"
] |
nips_2021_rl2FreDHTb0 | Progressive Feature Interaction Search for Deep Sparse Network | Deep sparse networks (DSNs), of which the crux is exploring the high-order feature interactions, have become the state-of-the-art on the prediction task with high-sparsity features. However, these models suffer from low computation efficiency, including large model size and slow model inference, which largely limits these models' application value. In this work, we approach this problem with neural architecture search by automatically searching the critical component in DSNs, the feature-interaction layer. We propose a distilled search space to cover the desired architectures with fewer parameters. We then develop a progressive search algorithm for efficient search on the space and well capture the order-priority property in sparse prediction tasks. Experiments on three real-world benchmark datasets show promising results of PROFIT in both accuracy and efficiency. Further studies validate the feasibility of our designed search space and search algorithm.
| accept | This paper proposes using NAS for learning the feature interaction space for deep sparse networks, which are used for the high-dimensional sparse features commonly encountered in recommendation systems. The discovered architectures have higher efficiency compared to the baselines.
During the discussion period, some points of confusion consistently came up regarding order vs. rank and the fact that it was not explicitly clear that the search space in feature interactions is several orders of magnitude bigger than that used in DARTS for CNNs, due to the cardinality of the operators (7 or so in DARTS vs. 39 here). This is a point that the authors can make clearer at the beginning of the paper. Otherwise the difficulty of the problem does not become apparent until later.
Reviewers have also pointed out various ways to make presentation better like not using forward references to acronyms before definition. Since the main contribution of this paper is the search space and the progressive search over feature orders and not the NAS search technique the authors should emphasize that part explicitly and play to the strengths of the paper in presentation. | train | [
"0wl9d0pSxfg",
"MHT8BHwOFHo",
"sS40FeIePoJ",
"fxrGLFZzeL9",
"AAwUfwj8RiB",
"vnn9NPALkF",
"FIzAuO2LIm",
"2sbp0g4EocP",
"er7Tz55PcaR",
"5eJni3K5xmc",
"wL23hB0dKLa",
"Zut3XHARM2e",
"p6AzvSoDgLc",
"NQ4nCk3stzz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors consider the problem of designing accurate and computationally efficient deep sparse networks (DSNs), which are an important problem in applications with sparse features such as click-through rate or movie recommendation. The authors propose a new approach based on neural architecture search (NAS) usin... | [
6,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_rl2FreDHTb0",
"sS40FeIePoJ",
"0wl9d0pSxfg",
"nips_2021_rl2FreDHTb0",
"vnn9NPALkF",
"2sbp0g4EocP",
"nips_2021_rl2FreDHTb0",
"5eJni3K5xmc",
"NQ4nCk3stzz",
"fxrGLFZzeL9",
"FIzAuO2LIm",
"p6AzvSoDgLc",
"nips_2021_rl2FreDHTb0",
"nips_2021_rl2FreDHTb0"
] |
nips_2021_1Av2E0EugkA | Local Explanation of Dialogue Response Generation | Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang | accept | The paper studies the problem of generating model-agnostic explanations of dialog responses. The proposed method (LERG) estimates importance scores between every input-output segment pair by exploring perturbations of the input. LERG uses two optimization variations (Shapley value and LIME) originally proposed for classification and newly extended to sequence-to-sequence problems. The paper also comes with theoretical justifications, showing that LERG has the following desirable properties: unbiased approximation, consistency, cause identification (proofs are provided in the appendix). While the paper is evaluated only on dialogue, the methods of this paper seem applicable to other tasks such as machine translation.
The paper addresses a very important task, as the black-box nature of most current conversational AI systems is likely hindering our understanding of what goes wrong and what could be improved. Reviewers found the approach to be well motivated, technically sound, and the presentation of the work is quite clear. The only two concerns are:
* Alvarez-Melis and Jaakkola (2017) [1] presents a somewhat similar model-agnostic explanation generation method for sequence-to-sequence models. That said, there are some technical differences, and the paper offers technical contributions relative to [1]: the adaption of established model-agnostic explanations methods to seq2seq problems (LIME and Shapley value), and theoretical justifications (unbiased approximation, etc.). Furthermore, [1] is mainly focused on translation and its application to dialog seems a bit preliminary (the authors of [1] seem frank about that as they call their system “mediocre”). The few examples of dialog responses in [1] seem to suffer from a lack of diversity (“I don’t know”, etc.).
* The paper performs all experiments on gold responses, which is a bit unrealistic. The authors provide some good justifications (i.e., “we are explaining a reasonable response”, mitigating the effects of the sampling process typically used during inference, evaluating on the same gold outputs makes results more comparable), but I think evaluating both on gold and generated responses would have made the paper stronger.
Compared to previous work (e.g., [1]), the paper offers some empirical contributions: evaluation of "necessity" and "sufficiency" (“How is the model influenced after removing explanations?” and “How does the model perform when only the explanations are given?”, respectively). The paper also offers a user study to evaluate the effectiveness of LERG with real users, and the improvement over several baselines seems quite significant.
| train | [
"CfTevaWRQLS",
"Y3Cabxeqo2O",
"-j7_RA77Yqy",
"vZ2uE_uM-mq",
"N3ME4VMvuxv",
"np7FZa37jem",
"F1dKxrkJcqU",
"m93-iLKWxYZ",
"Wdcs4FZGGB",
"srtdEwHnTKN",
"EV7ZdtwzlrP",
"HqbnqsimwxQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a model-agnostic explanation model, local explanation of response generation (LERG), for the dialogue response generation task. Due to the sequence-to-sequence nature of the task, previous works, which normally produce a single label as output, are no longer suitable for this task. This paper regards the\nexp... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"nips_2021_1Av2E0EugkA",
"np7FZa37jem",
"vZ2uE_uM-mq",
"HqbnqsimwxQ",
"EV7ZdtwzlrP",
"CfTevaWRQLS",
"srtdEwHnTKN",
"Wdcs4FZGGB",
"nips_2021_1Av2E0EugkA",
"nips_2021_1Av2E0EugkA",
"nips_2021_1Av2E0EugkA",
"nips_2021_1Av2E0EugkA"
] |
nips_2021_admg0sZZm1e | Scalable Inference in SDEs by Direct Matching of the Fokker–Planck–Kolmogorov Equation | Simulation-based techniques such as variants of stochastic Runge–Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning. These methods are general-purpose and used with parametric and non-parametric models, and neural SDEs. Stochastic Runge–Kutta relies on the use of sampling schemes that can be inefficient in high dimensions. We address this issue by revisiting the classical SDE literature and derive direct approximations to the (typically intractable) Fokker–Planck–Kolmogorov equation by matching moments. We show how this workflow is fast, scales to high-dimensional latent spaces, and is applicable to scarce-data applications, where a non-parametric SDE with a driving Gaussian process velocity field specifies the model.
| accept | This paper is concerned with inference of neural stochastic differential equations. In general, exact inference in these is intractable. This paper proposes a pragmatic technique, whereby the discretization is modeled by a recursive (deterministic) Gaussian approximation. They show computational advantages, particularly in higher dimensions.
All reviewers felt this was a valuable method, with a worthy contribution. The main concerns that were identified were relationships to some existing work (largely addressed in feedback), some unclear notation (also addressed in feedback), and some questions about the computational cost of the Jacobian step. Here, we echo the reviewer in that it is not reasonable to treat function evaluations and Jacobian evaluations as equal. The authors are trusted to fulfill their commitment to including a large neural-network model with strong dependencies to consider this timing issue experimentally. Still, overall the method appears correct and plausibly useful, and the paper is well written; thus I recommend acceptance. | train | [
"rw9EfaRiFyx",
"bBi2mOv36XT",
"wv2VEEJ4hnl",
"N4OsGvB4efh",
"pxJEBuSGb2u",
"0Da5ZQIp1n",
"aIcvqwW7NQn",
"jOKbx3zqCHX",
"aSwJ59YOK0X",
"kw5FoZgRHkK",
"I1-UxMQnM4D",
"e5GdR2CHRo-",
"e6g3eK37XzO",
"QtIeLy1u3s4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I thank the authors for the response. All of my questions are addressed and my outlook of the work remains positive after reading the other reviews and discussions. I maintain my score of 8.",
"The paper presents a practical methodology for SDE-based inference in machine learning that avoids the use of Monte-Ca... | [
-1,
6,
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"kw5FoZgRHkK",
"nips_2021_admg0sZZm1e",
"e5GdR2CHRo-",
"pxJEBuSGb2u",
"aSwJ59YOK0X",
"nips_2021_admg0sZZm1e",
"e6g3eK37XzO",
"nips_2021_admg0sZZm1e",
"I1-UxMQnM4D",
"QtIeLy1u3s4",
"jOKbx3zqCHX",
"bBi2mOv36XT",
"0Da5ZQIp1n",
"nips_2021_admg0sZZm1e"
] |
nips_2021_vY2HsMWG2b_ | The Complexity of Bayesian Network Learning: Revisiting the Superstructure | We investigate the parameterized complexity of Bayesian Network Structure Learning (BNSL), a classical problem that has received significant attention in empirical but also purely theoretical studies. We follow up on previous works that have analyzed the complexity of BNSL w.r.t. the so-called superstructure of the input. While known results imply that BNSL is unlikely to be fixed-parameter tractable even when parameterized by the size of a vertex cover in the superstructure, here we show that a different kind of parameterization - notably by the size of a feedback edge set - yields fixed-parameter tractability. We proceed by showing that this result can be strengthened to a localized version of the feedback edge set, and provide corresponding lower bounds that complement previous results to provide a complexity classification of BNSL w.r.t. virtually all well-studied graph parameters. We then analyze how the complexity of BNSL depends on the representation of the input. In particular, while the bulk of past theoretical work on the topic assumed the use of the so-called non-zero representation, here we prove that if an additive representation can be used instead then BNSL becomes fixed-parameter tractable even under significantly milder restrictions to the superstructure, notably when parameterized by the treewidth alone. Last but not least, we show how our results can be extended to the closely related problem of Polytree Learning.
 | accept | There is clear consensus among all involved in the assessment. I would be very glad to see the talk about this paper myself too. Please do take the suggestions into consideration. | train | [
"XSxxmyxoKPc",
"5bsMVMrX18H",
"YQ0Dgxp60T",
"7ZC7956CKPb",
"7eXBYXc-Tq4",
"wItJs3DITHH",
"4d-Nzbo4v71",
"K6b6hjZ6DY8",
"piQ1PuC_Bf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and deciding to add the remarks you mention. I don't think we have any real disagreement to discuss.",
"Summary. \nThe paper is about the parameterized complexity of Bayesian Network Structure Learning.\nIn particular, it is focused to analyze and develop previous contributions on super... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"wItJs3DITHH",
"nips_2021_vY2HsMWG2b_",
"piQ1PuC_Bf",
"5bsMVMrX18H",
"K6b6hjZ6DY8",
"4d-Nzbo4v71",
"nips_2021_vY2HsMWG2b_",
"nips_2021_vY2HsMWG2b_",
"nips_2021_vY2HsMWG2b_"
] |
nips_2021_RwdHpzTTGl | Fast Tucker Rank Reduction for Non-Negative Tensors Using Mean-Field Approximation | We present an efficient low-rank approximation algorithm for non-negative tensors. The algorithm is derived from our two findings: First, we show that rank-1 approximation for tensors can be viewed as a mean-field approximation by treating each tensor as a probability distribution. Second, we theoretically provide a sufficient condition for distribution parameters to reduce Tucker ranks of tensors; interestingly, this sufficient condition can be achieved by iterative application of the mean-field approximation. Since the mean-field approximation is always given as a closed formula, our findings lead to a fast low-rank approximation algorithm without using a gradient method. We empirically demonstrate that our algorithm is faster than the existing non-negative Tucker rank reduction methods and achieves competitive or better approximation of given tensors.
| accept | This paper gives a new way to do Tucker rank reduction for non-negative tensors. The new approach puts additional constraints on the form of the decomposition (characterized by "bingo-space" defined in the paper) and allows the algorithm to find the optimal solution under these constraints very efficiently. The approach is quite novel and has some reasonable guarantees. The authors should clarify these guarantees in detail in the revised version. | train | [
"KH01ClVMMA3",
"7BPWmC-WmsP",
"8QM7gqS_G5",
"wu_IFiEFTgk",
"P9-k7WYY7k6",
"gFciFWCL6N",
"0gbsuYAoIe0",
"zbxuCgM356k",
"GCEI0hgAGqq",
"CnACPN2mSxa"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper derives a tensor decomposition method using mean-field approximation and information geometry. The main claim of this paper is the speed of computation compared to some basic baseline methods. An interesting direction but there are several directions that the paper can improve further. This paper is te... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"nips_2021_RwdHpzTTGl",
"8QM7gqS_G5",
"wu_IFiEFTgk",
"KH01ClVMMA3",
"CnACPN2mSxa",
"GCEI0hgAGqq",
"zbxuCgM356k",
"nips_2021_RwdHpzTTGl",
"nips_2021_RwdHpzTTGl",
"nips_2021_RwdHpzTTGl"
] |
nips_2021_2Lq5mDVwBdJ | Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound | We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties. While our approach holds for arbitrary distributions, we instantiate it with Dirichlet distributions: this allows for a closed-form and differentiable expression for the expected risk, which then turns the generalization bound into a tractable training objective. The resulting stochastic majority vote learning algorithm achieves state-of-the-art accuracy and benefits from (non-vacuous) tight generalization bounds, in a series of numerical experiments when compared to competing algorithms which also minimize PAC-Bayes objectives -- both with uninformed (data-independent) and informed (data-dependent) priors.
| accept | The reviewers like the approach proposed in the paper and recommend acceptance. There were several clarity issues raised during the discussion and I hope they will be addressed in the final revision. In particular, the fixes to benchmark experiments that were discussed and revision of the two-moon experiment. | train | [
"9MHuo0b4nY",
"ZCLNMrEO-F",
"J7mOdblQyN5",
"AqauT3CpQ7e",
"NkThBigyX8",
"Dp1gIE-3GaG",
"ilzFgM53RA4",
"vmjT3Kt2rHU",
"K_rB6YWK70V",
"SRLabYMrKAW",
"voxVLtfNo7-",
"_S0r0ySJI2Y"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer is right that good predictor accuracy is achievable by optimizing a relaxation of the zero-one loss. This is exactly what most of the usual learning algorithms perform on classification tasks. However, in the quest of designing a “self-bounding” learning algorithm that provides the tightest PAC-Bayes... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"J7mOdblQyN5",
"nips_2021_2Lq5mDVwBdJ",
"vmjT3Kt2rHU",
"Dp1gIE-3GaG",
"nips_2021_2Lq5mDVwBdJ",
"NkThBigyX8",
"voxVLtfNo7-",
"ZCLNMrEO-F",
"_S0r0ySJI2Y",
"nips_2021_2Lq5mDVwBdJ",
"nips_2021_2Lq5mDVwBdJ",
"nips_2021_2Lq5mDVwBdJ"
] |
nips_2021_urrcVI-_jRm | Numerical influence of ReLU’(0) on backpropagation | In theory, the choice of ReLU(0) in [0, 1] for a neural network has a negligible influence both on backpropagation and training. Yet, in the real world, 32 bits default precision combined with the size of deep learning problems makes it a hyperparameter of training methods. We investigate the importance of the value of ReLU'(0) for several precision levels (16, 32, 64 bits), on various networks (fully connected, VGG, ResNet) and datasets (MNIST, CIFAR10, SVHN, ImageNet). We observe considerable variations of backpropagation outputs which occur around half of the time in 32 bits precision. The effect disappears with double precision, while it is systematic at 16 bits. For vanilla SGD training, the choice ReLU'(0) = 0 seems to be the most efficient. For our experiments on ImageNet the gain in test accuracy over ReLU'(0) = 1 was more than 10 points (two runs). We also evidence that reconditioning approaches as batch-norm or ADAM tend to buffer the influence of ReLU'(0)’s value. Overall, the message we convey is that algorithmic differentiation of nonsmooth problems potentially hides parameters that could be tuned advantageously.
 | accept | The ReLU nonlinearity is popular in deep learning. Its derivative at 0, ReLU’(0), is undefined. This paper presents the surprising observation that the chosen value of the derivative of ReLU at 0 has a substantial effect on the performance achieved in neural network training at typical numerical precision. After the initial round of reviews some concerns surfaced about how robust this observation is to the choice of learning rate and other hyperparameters. The authors addressed these concerns in their response and the reviewers now seem convinced the observed effect is real and meaningful. Most of the reviewers engaged in discussion with the authors and with the rest of the committee: They reached a consensus recommendation to accept the paper.
Reviewer kt6J is the sole remaining negative reviewer: Unfortunately they did not engage in discussion with the committee or the authors. Their review did not point out any serious problems that would invalidate the contribution made by the paper. I do not believe the review of kt6J provides enough reason to not accept the paper, given that the paper presents a clearly surprising and interesting result that is appreciated by the other reviewers.
Authors, please integrate all new results and discussion from the author response phase into the camera ready version of the paper. | train | [
"w-NIoU-FoY9",
"hsJrZdbsvG",
"k3QOResn1A6",
"fgBtA805En",
"Nj110P1DY-i",
"QGzMfPalU7s",
"TZF8Mcgz-v-",
"cidjF4Tjmd",
"t5DqB84gC8G",
"RobUjsYU6XG",
"Uyj1pRztSjo",
"Ri9j6SGHgt_",
"Hcq7qte4bSn",
"nEZG0Q43-b5",
"5haw4F5RPSD",
"pvDQF4pts9",
"_iDjFadQl7V"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ### Response to j8rk\n\nYes 0.2 is missing, it is about to be corrected. \n\nWe thank you for your constructive comments and for the positive feedback on the additional experiments! We will, of course, implement all the modifications you suggested.",
" \n\n### Response to kt6J\n\nWe would like to thank the revi... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"QGzMfPalU7s",
"5haw4F5RPSD",
"5haw4F5RPSD",
"5haw4F5RPSD",
"nips_2021_urrcVI-_jRm",
"cidjF4Tjmd",
"Hcq7qte4bSn",
"t5DqB84gC8G",
"Ri9j6SGHgt_",
"Hcq7qte4bSn",
"_iDjFadQl7V",
"Nj110P1DY-i",
"nips_2021_urrcVI-_jRm",
"pvDQF4pts9",
"nips_2021_urrcVI-_jRm",
"nips_2021_urrcVI-_jRm",
"nips_... |
nips_2021_LcSfRundgwI | A Contrastive Learning Approach for Training Variational Autoencoder Priors | Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in many domains. However, they struggle to generate high-quality images, especially when samples are obtained from the prior without any tempering. One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior. Due to this mismatch, there exist areas in the latent space with high density under the prior that do not correspond to any encoded image. Samples from those areas are decoded to corrupted images. To tackle this issue, we propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior. We train the reweighting factor by noise contrastive estimation, and we generalize it to hierarchical VAEs with many latent variable groups. Our experiments confirm that the proposed noise contrastive priors improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets. Our method is simple and can be applied to a wide variety of VAEs to improve the expressivity of their prior distribution.
| accept | The paper proposes a new class of priors for Variational Auto-Encoders (VAEs). The main idea is to use Noise Contrastive Estimation and a two-stage training method. The reviewers highlighted that:
- The paper is clearly written.
- The idea is simple (in the positive sense!) and easy to implement.
- The proposed prior could be plugged into any VAE.
However, the reviewers raised some concerns:
- Sampling from the VAE with the suggested parameters is very slow.
- The experimental results are not overly convincing.
- The organization of the paper could be improved.
- The method cannot reliably be used to evaluate a bound on the log-likelihood.
- The only performance metric used is the FID, which arguably favors the quality of samples over generalization.
Moreover, I find it rather surprising that the authors do not compare their approach to at least one of the following two methods:
- Tomczak, J., & Welling, M. (2018). VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics (pp. 1214-1223). PMLR.
- Norouzi, S., Fleet, D. J., & Norouzi, M. (2020). Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation. arXiv preprint arXiv:2004.04795.
It seems rather natural to focus on a thorough comparison against other priors for VAE. Instead, the authors have a limited comparison in Table 2. Moreover, the authors decided to compare their method to various deep generative models. I totally agree that it is also important; however, without a proper comparison against various priors, it is hard to properly evaluate their idea.
The authors provided a thorough rebuttal and they promised to improve the paper. Overall, the paper is solid and the idea is neat, therefore, I tend to accept the paper.
| test | [
"-a1lMFKgTY",
"Oe571cZD4E",
"G0jKgkW4Bn",
"pep9omg3N9e",
"1cPVGwzFN3x",
"v9OS8XbUfj",
"bcBsBAGO9x",
"javgySy4kGN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In trained VAEs, mismatch between the prior and the variational posterior can lead to low-quality generated samples.\n\nTo address this issue, the authors suggest the following procedure:\n1) given a trained VAE, train a classifier to distinguish images generated from the variational posterior vs images generated ... | [
6,
-1,
7,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_LcSfRundgwI",
"G0jKgkW4Bn",
"nips_2021_LcSfRundgwI",
"-a1lMFKgTY",
"javgySy4kGN",
"bcBsBAGO9x",
"nips_2021_LcSfRundgwI",
"nips_2021_LcSfRundgwI"
] |
nips_2021_RcjW7p7z8aJ | What training reveals about neural network complexity | This work explores the Benevolent Training Hypothesis (BTH) which argues that the complexity of the function a deep neural network (NN) is learning can be deduced by its training dynamics. Our analysis provides evidence for BTH by relating the NN's Lipschitz constant at different regions of the input space with the behavior of the stochastic training procedure. We first observe that the Lipschitz constant close to the training data affects various aspects of the parameter trajectory, with more complex networks having a longer trajectory, bigger variance, and often veering further from their initialization. We then show that NNs whose 1st layer bias is trained more steadily (i.e., slowly and with little variation) have bounded complexity even in regions of the input space that are far from any training point. Finally, we find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters. Overall, our results support the intuition that good training behavior can be a useful bias towards good generalization.
| accept | The majority of reviewers were positive about this paper, commending the connection it makes between generalization and training dynamics. Nonetheless, several technical concerns were raised, in particular:
* The paper equates "complexity" with "small Lipschitz constant". Obviously a small Lipschitz constant is sufficient to ensure generalization, but as far as the committee is aware, it is not necessary. Identifying the two will potentially mislead the uninformed readership.
* The learned parameter to which the analysis mostly applies is the bias in the first layer. It is perhaps a bit trivial to associate Lipschitz constant with the first layer bias, as the gradient with respect to this bias and with respect to the input are equal. Reviewers felt that this might be an indication of the analysis being somewhat fragile and not representing overarching phenomena.
Despite the above, the paper was deemed interesting and significant enough to warrant acceptance. I encourage the authors to be more transparent with regards to the above points and clearly highlight the limitations of their framework in the text. | val | [
"WgScGrXo3DY",
"9E4t1aX_khA",
"f7BsfXUWeS3",
"uIWm-S2-84x",
"OXB-3bxvrrC",
"xSI3d21wsC_",
"5VJW5n_rJH0",
"cmmG8FQ2laQ",
"ChFrYJOxeYK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies the relation between the distribution of Lipschitz constants in a neural network during training, the trajectory of gradient descent and the generalization of the network. The authors relate the change in the bias of the first layer to the Lipschitz constant of the network around the training dat... | [
5,
8,
-1,
8,
-1,
-1,
-1,
-1,
6
] | [
4,
3,
-1,
3,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_RcjW7p7z8aJ",
"nips_2021_RcjW7p7z8aJ",
"5VJW5n_rJH0",
"nips_2021_RcjW7p7z8aJ",
"9E4t1aX_khA",
"uIWm-S2-84x",
"ChFrYJOxeYK",
"WgScGrXo3DY",
"nips_2021_RcjW7p7z8aJ"
] |
nips_2021_OP6ihHjllEc | Class-agnostic Reconstruction of Dynamic Objects from Videos | We introduce REDO, a class-agnostic framework to REconstruct the Dynamic Objects from RGBD or calibrated videos. Compared to prior work, our problem setting is more realistic yet more challenging for three reasons: 1) due to occlusion or camera settings an object of interest may never be entirely visible, but we aim to reconstruct the complete shape; 2) we aim to handle different object dynamics including rigid motion, non-rigid motion, and articulation; 3) we aim to reconstruct different categories of objects with one unified framework. To address these challenges, we develop two novel modules. First, we introduce a canonical 4D implicit function which is pixel-aligned with aggregated temporal visual cues. Second, we develop a 4D transformation module which captures object dynamics to support temporal propagation and aggregation. We study the efficacy of REDO in extensive experiments on synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, and on real-world video data 3DPW. We find REDO outperforms state-of-the-art dynamic reconstruction methods by a margin. In ablation studies we validate each developed component.
| accept | Three expert reviewers were initially positive about the paper, whereas a fourth reviewer was borderline negative. After rebuttal the positive reviewers became more positive, especially with additional results on real-world data. The more negative reviewer had requested these as well but did not interact in the reviewing process further.
Overall this paper seems to provide a positive step in trying to tackle the very difficult problem of spatio-temporal class-agnostic object reconstruction from video, so I am recommending acceptance. | val | [
"8hQGRtoqX2L",
"qYPvOxKL19",
"UYf-_lnqMfO",
"g_3d7VPVhGB",
"T64ELVI92wo",
"sUIwuanFuXS",
"7WdSMs0g2uc",
"5_XaUGr3lP",
"95NLoFRZTK",
"qXWfptq7sI9",
"g5K2dmZppf",
"ef7fMlU9l0x"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors present a class-agnostic framework to reconstruct the dynamic objects (FREDO) from unconstrained monocular videos. The proposed pipeline is designed to recover the accurate shape and articulation and deal with partial visibility. Experimental results on SAIL-VOS 3D dataset show that each... | [
6,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_OP6ihHjllEc",
"nips_2021_OP6ihHjllEc",
"nips_2021_OP6ihHjllEc",
"sUIwuanFuXS",
"nips_2021_OP6ihHjllEc",
"7WdSMs0g2uc",
"5_XaUGr3lP",
"UYf-_lnqMfO",
"ef7fMlU9l0x",
"qYPvOxKL19",
"8hQGRtoqX2L",
"nips_2021_OP6ihHjllEc"
] |
nips_2021_2GapPLFKvA | Unique sparse decomposition of low rank matrices | Dian Jin, Xin Bing, Yuqian Zhang | accept | Thank you for your submission to NeurIPS. There is broad consensus among the reviewers that the paper presents a significant step forward on a difficult problem. The reviewers found the paper well-written and easy to read.
Three of the four reviewers felt that the experimental aspects of the paper could have been improved. One reviewer raised concerns regarding the Bernoulli--Gaussian assumption on $X$. While the assumption is standard in the area, it nevertheless appears to be removed from practical applications. This reviewer suggested that more thorough simulations could have better covered this weakness.
Given the interesting and nontrivial nature of the theoretical contributions, the authors are encouraged to follow up by submitting a long version of the paper to a traditional math journal. | test | [
"apHRjMyzDvB",
"O1ysThMEgPY",
"M551Cp8gO2N",
"yQ5-6aXVIIa",
"ugMIX3rMGO",
"2SpEWyzgK1",
"1gRxaV1yQXC",
"JI90PBi4hBo",
"_CiW5WjfRBs",
"aXEXFk6-pfp",
"k69IHY6aLjG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree with the reviewer that our current analysis only addresses vector case and algorithmically requires a deflation procedure to recover the whole matrix factorization, which is a common approach in global nonconvex sparse matrix / tensor recovery literature [17,34,45,38,37]. We expect the deflation procedur... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
5,
9
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"O1ysThMEgPY",
"JI90PBi4hBo",
"nips_2021_2GapPLFKvA",
"2SpEWyzgK1",
"aXEXFk6-pfp",
"M551Cp8gO2N",
"k69IHY6aLjG",
"_CiW5WjfRBs",
"nips_2021_2GapPLFKvA",
"nips_2021_2GapPLFKvA",
"nips_2021_2GapPLFKvA"
] |
nips_2021__kaH2bAI3O | Neighborhood Reconstructing Autoencoders | Vanilla autoencoders often produce manifolds that overfit to noisy training data, or have the wrong local connectivity and geometry. Autoencoder regularization techniques, e.g., the denoising autoencoder, have had some success in reducing overfitting, whereas recent graph-based methods that exploit local connectivity information provided by neighborhood graphs have had some success in mitigating local connectivity errors. Neither of these two approaches satisfactorily reduce both overfitting and connectivity errors; moreover, graph-based methods typically involve considerable preprocessing and tuning. To simultaneously address the two issues of overfitting and local connectivity, we propose a new graph-based autoencoder, the Neighborhood Reconstructing Autoencoder (NRAE). Unlike existing graph-based methods that attempt to encode the training data to some prescribed latent space distribution -- one consequence being that only the encoder is the object of the regularization -- NRAE merges local connectivity information contained in the neighborhood graphs with local quadratic approximations of the decoder function to formulate a new neighborhood reconstruction loss. Compared to existing graph-based methods, our new loss function is simple and easy to implement, and the resulting algorithm is scalable and computationally efficient; the only required preprocessing step is the construction of the neighborhood graph. Extensive experiments with standard datasets demonstrate that, compared to existing methods, NRAE improves both overfitting and local connectivity in the learned manifold, in some cases by significant margins. Code for NRAE is available at https://github.com/Gabe-YHLee/NRAE-public.
 | accept | This work introduces NRAE -- Neighborhood Reconstructing Autoencoder. The high level goal is regularization to avoid overfitting to noise and learning a smooth manifold with the 'correct' geometry. The authors introduce an objective that encourages a small 'neighbourhood reconstruction loss' based on a kernel. The paper is easy to follow, clearly written and includes an extensive set of experiments on a variety of datasets.
During the rebuttal, the authors were able to successfully address the concerns raised by the reviewers, in particular, comparisons with GRAE, SPAE and nuanced discussion of the claims about decoder regularization. There is a consensus among the reviewers that the paper deserves acceptance. | train | [
"BqVAng9NWkd",
"vcRvny9wbuo",
"McC0OD40BvQ",
"ojc55lrugmP",
"yrUbSZsqERX",
"4A63Joh5nNw",
"nLSGCWE2vbq",
"i9738lRMYOf",
"5mQF9QQnERb",
"VudLIJZJj4x",
"jfYLQD5Xmz",
"1foqaDAcqyQ",
"8S2iQma02d9"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' response. With the proposed changes and experimental additions, I am willing to change my score to accept.",
" Thank you! We will try to add those discussion points in the revised manuscript!",
" Yes. By solving the optimization problem, we can not obtain an explicit functional form ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"4A63Joh5nNw",
"yrUbSZsqERX",
"ojc55lrugmP",
"nLSGCWE2vbq",
"5mQF9QQnERb",
"1foqaDAcqyQ",
"8S2iQma02d9",
"VudLIJZJj4x",
"jfYLQD5Xmz",
"nips_2021__kaH2bAI3O",
"nips_2021__kaH2bAI3O",
"nips_2021__kaH2bAI3O",
"nips_2021__kaH2bAI3O"
] |
nips_2021_ZB8Du-E1KUz | TopicNet: Semantic Graph-Guided Topic Discovery | Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy. However, it is unclear how to incorporate prior belief such as knowledge graph to guide the learning of the topic hierarchy. To address this issue, we introduce TopicNet as a deep hierarchical topic model that can inject prior structural knowledge as inductive bias to influence the learning. TopicNet represents each topic as a Gaussian-distributed embedding vector, projects the topics of all layers into a shared embedding space, and explores both the symmetric and asymmetric similarities between Gaussian embedding vectors to incorporate prior semantic hierarchies. With a variational auto-encoding inference network, the model parameters are optimized by minimizing the evidence lower bound and supervised loss via stochastic gradient descent. Experiments on widely used benchmark show that TopicNet outperforms related deep topic models on discovering deeper interpretable topics and mining better document representations.
| accept | This paper develops TopicNet, a generative model that can learn hierarchical topics by incorporating a pre-defined semantic graph. The main contribution of the paper seems to be on the application side, and all reviewers agreed the paper is above the acceptance threshold.
I would like to encourage the authors to incorporate the reviewers' feedback in their revised version. In particular:
+ Incorporate the new results from the response to reviewer aZKr.
+ Add the metrics of inter-topic (dis)similarity and topic specificity.
+ Add an experiment to showcase that the method can benefit from prior knowledge in terms of both interpretability and performance on downstream tasks.
+ Add a more detailed figure to illustrate the Gaussian SawETM and TopicNet models.
+ Clarify the other points raised in the reviews. | val | [
"u_ZKxOCqXmN",
"2WHDCvsDGgL",
"UQ1TdiGImxn",
"aTf7DOz_Yz9",
"6H9JhickrX1",
"WjuS4JeMPfU",
"OZCskkD2n7d",
"Xoi2C0hm8VP",
"KLJFteu7kO",
"p4nKczd-u-_",
"tNwL4y1wmCt",
"-xx_jRzA22W",
"cZ1JOLPHSdj",
"3u3uuY8Gswd",
"tgYxZhrmEzg",
"yFH_b28sJ9W",
"qO-rI8cvOtR",
"jlZw1cE8dF0",
"iaFlYeWXdo... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"a... | [
"(Summary)\nThis paper proposes a generative model that can learn hierarchical topics that coherently incorporate a structured prior knowledge graph. By introducing Gaussian embeddings to an existing SawETM, a neural topic model where topics in different levels can relate each other, the authors first propose the G... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_ZB8Du-E1KUz",
"nips_2021_ZB8Du-E1KUz",
"aTf7DOz_Yz9",
"6H9JhickrX1",
"jlZw1cE8dF0",
"OZCskkD2n7d",
"tgYxZhrmEzg",
"nips_2021_ZB8Du-E1KUz",
"tNwL4y1wmCt",
"-xx_jRzA22W",
"qO-rI8cvOtR",
"iaFlYeWXdoj",
"nips_2021_ZB8Du-E1KUz",
"nips_2021_ZB8Du-E1KUz",
"u_ZKxOCqXmN",
"nips_2021_... |
nips_2021_2lBhfVPYOM | (Almost) Free Incentivized Exploration from Decentralized Learning Agents | Incentivized exploration in multi-armed bandits (MAB) has witnessed increasing interests and many progresses in recent years, where a principal offers bonuses to agents to do explorations on her behalf. However, almost all existing studies are confined to temporary myopic agents. In this work, we break this barrier and study incentivized exploration with multiple and long-term strategic agents, who have more complicated behaviors that often appear in real-world applications. An important observation of this work is that strategic agents' intrinsic needs of learning benefit (instead of harming) the principal's explorations by providing "free pulls". Moreover, it turns out that increasing the population of agents significantly lowers the principal's burden of incentivizing. The key and somewhat surprising insight revealed from our results is that when there are sufficiently many learning agents involved, the exploration process of the principal can be (almost) free. Our main results are built upon three novel components which may be of independent interest: (1) a simple yet provably effective incentive-provision strategy; (2) a carefully crafted best arm identification algorithm for rewards aggregated under unequal confidences; (3) a high-probability finite-time lower bound of UCB algorithms. Experimental results are provided to complement the theoretical analysis.
 | accept | All reviewers are positive about this submission. The reviewers agree that this paper is studying a relevant and new problem, and it is well-written. The reviewers also mention some suggestions for motivating the problem and the presentation order. I think the paper can benefit from these suggestions. | train | [
"Q8JH11tvLhy",
"gYlsy-HERLK",
"3ZyTfm-lTuU",
"5BcYPuUw3Xk",
"5tuz4PVd93r",
"kIcefvj6MhA"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the thoughtful suggestions, and address them in the following brief discussions. \n\n- **Best Arm Identification.**\n\n We agree that personalized recommendation is indeed an interesting topic as well. However, we believe that our current objective of best arm identification is also equ... | [
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
3,
2,
3
] | [
"kIcefvj6MhA",
"5tuz4PVd93r",
"5BcYPuUw3Xk",
"nips_2021_2lBhfVPYOM",
"nips_2021_2lBhfVPYOM",
"nips_2021_2lBhfVPYOM"
] |
nips_2021_yWd42CWN3c | Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers | Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, Christopher Ré | accept | The authors propose a continuous time model combining recurrent and convolutional structures. Overall, reviewers are supportive of the paper. The main remaining concerns, after discussion, are mostly with respect to the presentation. It was felt that the paper is dense, heavily relying on the appendix, and could be more clearly communicated in the main text. There was also a perception that the main contributions were in engineering. While the authors push back this notion, there is no need for this to be perceived as a drawback, and could be reasonable to highlight engineering contributions in revisions. There were also some concerns that it was hard to derive clear takeaways from some of the experiments (details in the reviews). Please try to be receptive of reviewer comments in preparing revisions. But generally, this is great work! | test | [
"YJ7Msw6ofF2",
"UgcI99gqb6D",
"2iP6hIT_c3n",
"UZg8z3S6aPt",
"o7tsYR98U4W",
"wxVZbmDIaQc",
"j4ubBz5awse",
"ImS81GZQF21",
"VFi0D6fMgbx",
"MbgEvOsmHoF",
"HU0NVz0VRD8",
"ZwA-znDLycy",
"Dmux66I1wFk",
"OZvMEVY2o8c",
"uI43yeKDHb",
"X6M9_nt8Thc",
"AmlgRx0LKim",
"B9-vnW2FlJ",
"UubWS0rRoy6... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the detailed response and the experiments conducted.\n\nI have read all the reviews and the authors' responses. I choose to keep my original score of strong accept.\n\nIf accepted, I would suggest for the camera ready version to address the concerns raised by other reviewers, specifically, \nconside... | [
-1,
7,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
5,
-1,
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"Dmux66I1wFk",
"nips_2021_yWd42CWN3c",
"o7tsYR98U4W",
"nips_2021_yWd42CWN3c",
"ImS81GZQF21",
"nips_2021_yWd42CWN3c",
"ImS81GZQF21",
"VFi0D6fMgbx",
"UgcI99gqb6D",
"UgcI99gqb6D",
"UgcI99gqb6D",
"UgcI99gqb6D",
"Ec7SRc98W48",
"Ec7SRc98W48",
"wxVZbmDIaQc",
"UZg8z3S6aPt",
"UZg8z3S6aPt",
... |
nips_2021_OThHxQUDzkp | Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness | We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. In addition to the usual cross-entropy loss, we add regularization terms for every intermediate layer to ensure that the latent representations retain useful information for output prediction while reducing redundant information. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. In particular, we prove that the HSIC bottleneck regularizer reduces the sensitivity of the classifier to adversarial examples. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training. Our code and adversarially robust models are publicly available.
| accept | Adversarial robustness is an important problem for neural network models. This paper proposed a new regularization based on the Hilbert-Schmidt Independence Criterion (HSIC) to improve adversarial robustness. The regularization consists of two HSIC terms: one aims to reduce the nonlinear dependence between input and features, while the other enhances the dependence between features and outputs. Such a regularization is shown to enhance adversarial robustness in both natural training and adversarial training. The reviewers unanimously accept this paper after discussions. There are several minor points raised by reviewers that the authors promise to revise accordingly. In preparing the final version, the authors are expected to incorporate these points suggested by the reviewers. | val | [
"z56A9xNQhg",
"h4xJAo_7Kzb",
"dvcAfWO978C",
"iX7RyGHI5r2",
"uY_fmUcLlMW",
"QwXSGs7fY--",
"OxTPhHG_yK3",
"wFYzKrfF30T",
"pht4GSSkGmx",
"rU8j86WLVvd",
"wadRJVDdwds",
"yyYtNi8RxJx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. All my concerns are addressed. I will keep my score of 7.",
" Thanks a lot for engaging in the discussion, your points are certainly valid. I would appreciate continuing this discussion, but for now all my concerns have been addressed.",
" I thank authors for their response. I kee... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"rU8j86WLVvd",
"iX7RyGHI5r2",
"OxTPhHG_yK3",
"QwXSGs7fY--",
"nips_2021_OThHxQUDzkp",
"pht4GSSkGmx",
"wadRJVDdwds",
"nips_2021_OThHxQUDzkp",
"uY_fmUcLlMW",
"yyYtNi8RxJx",
"nips_2021_OThHxQUDzkp",
"nips_2021_OThHxQUDzkp"
] |
nips_2021_Kvef55YMkm3 | T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs | Graphs have been commonly used to represent complex data structures. In models dealing with graph-structured data, multivariate parameters may not only exhibit sparse patterns but have structured sparsity and smoothness in the sense that both zero and non-zero parameters tend to cluster together. We propose a new prior for high-dimensional parameters with graphical relations, referred to as the Tree-based Low-rank Horseshoe (T-LoHo) model, that generalizes the popular univariate Bayesian horseshoe shrinkage prior to the multivariate setting to detect structured sparsity and smoothness simultaneously. The T-LoHo prior can be embedded in many high-dimensional hierarchical models. To illustrate its utility, we apply it to regularize a Bayesian high-dimensional regression problem where the regression coefficients are linked by a graph, so that the resulting clusters have flexible shapes and satisfy the cluster contiguity constraint with respect to the graph. We design an efficient Markov chain Monte Carlo algorithm that delivers full Bayesian inference with uncertainty measures for model parameters such as the number of clusters. We offer theoretical investigations of the clustering effects and posterior concentration results. Finally, we illustrate the performance of the model with simulation studies and a real data application for anomaly detection on a road network. The results indicate substantial improvements over other competing methods such as the sparse fused lasso.
| accept | The reviewers all agree that the paper makes novel and valuable contributions. One the main concerns of the reviewers was that the paper was missing a sufficiently compelling discussion of how the method proposed in the paper compares with the state of the art, and an experimental comparison with more alternative methods. The responses to the different reviewers addresses well these concerns. The authors are strongly encouraged to include the new experimental results and the elements of discussion provided in the final version of the paper. | train | [
"Z6mziX604Mx",
"9Jl5Os-SIMA",
"3umlktB0sM5",
"46x89_yXA0b",
"x13e0Orvi9z",
"PROoNiGEGvH",
"ZsgieENcog",
"hp0QN3jJQaY",
"zr-zLrkBics",
"xr0SiGzimBy",
"74hKVwhoUUD"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the authors' response I would like to confirm my score.",
" We thank the review for taking the time to read our response. We will take your suggestions to revise our manuscript. \n",
" Thank you for the detailed response. The additional simulation results address my major concerns. I encourage t... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
1,
2,
3
] | [
"x13e0Orvi9z",
"3umlktB0sM5",
"ZsgieENcog",
"nips_2021_Kvef55YMkm3",
"xr0SiGzimBy",
"zr-zLrkBics",
"46x89_yXA0b",
"74hKVwhoUUD",
"nips_2021_Kvef55YMkm3",
"nips_2021_Kvef55YMkm3",
"nips_2021_Kvef55YMkm3"
] |
nips_2021_w6U6g5Bvug | The Utility of Explainable AI in Ad Hoc Human-Machine Teaming | Rohan Paleja, Muyleng Ghuy, Nadun Ranawaka Arachchige, Reed Jensen, Matthew Gombolay | accept | This paper presents a study on the effect of two types of interpretable/explainable models in human-machine teaming. By varying the type of explanation (showing the model for human intention recognition used by the robot and showing the robot's action-selection model), the authors conclude that showing both models increased participants' situation awareness, but that only the human intention recognition model decreased the overall task completion time. Furthermore, for 'expert' participants, the performance degraded with.
The paper has strengths in several aspects: The problem is significant; The study is well designed; and the paper is well-written. However, there are a few reservations: It is unclear if results can generalize to more complex settings; Some important references are missing.
After a thorough discussion among reviewers, we feel the paper contains significant contributions, and the strengths outweigh the weaknesses, provided the authors are clear and explicit about them. In particular, please add a more thorough literature review to alleviate the feeling that the paper is out of context and overclaiming. Further, try to explain the scope of the setting and emphasize the importance of one of the first analyses of xAI in sequential decision-making settings.
Lastly, the issue of fitness of the work to NeurIPS raised by one of the reviewers was discussed with the SAC and Program Chairs, and the committee was encouraged to take a broad view of the scope of NeurIPS. Our goal is also to encourage a more experimental and data-driven type of AI and Machine Learning research. | train | [
"UDhsY99-wgb",
"-6xkasQWnTk",
"d4pHAcsP1xQ",
"OR3SM8b4xOt",
"x-hNK5rgRf_",
"WKQyQFcs1_c",
"oCy9hG_JMxj",
"qMMgx_dF_dN",
"pslNKRXK9XY",
"Vog_oGYR_YT",
"3kC-JkXC7Ct",
"g8tQUoe1dr-",
"10Wh0Cfod_R",
"4YBPIoSxT5",
"uP0rK8zjCC7",
"0PgLP_k8QM7",
"xC6nN42EVhu",
"Ehs21GqaqFx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a study on the effect of two types of interpretable/explainable model in human-machine teaming. By varying the type of explanation (showing the model for human intention recognition used by the robot and showing the robot's action-selection model), the authors conclude that showing both models ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
2
] | [
"nips_2021_w6U6g5Bvug",
"10Wh0Cfod_R",
"oCy9hG_JMxj",
"Vog_oGYR_YT",
"3kC-JkXC7Ct",
"g8tQUoe1dr-",
"qMMgx_dF_dN",
"pslNKRXK9XY",
"UDhsY99-wgb",
"Ehs21GqaqFx",
"xC6nN42EVhu",
"0PgLP_k8QM7",
"uP0rK8zjCC7",
"nips_2021_w6U6g5Bvug",
"nips_2021_w6U6g5Bvug",
"nips_2021_w6U6g5Bvug",
"nips_20... |
nips_2021_5KCvuCYGi7G | Subgoal Search For Complex Reasoning Tasks | Konrad Czechowski, Tomasz Odrzygóźdź, Marek Zbysiński, Michał Zawalski, Krzysztof Olejnik, Yuhuai Wu, Lukasz Kucinski, Piotr Miłoś | accept | This paper presents an interesting approach to problems with long decision horizons, by generating and planning to subgoals on the path to the full solution.
After a discussion largely around clarifying some points in the paper, the reviewers all felt that this paper should be accepted.
The authors should update the descriptions in the paper as suggested by the reviewers. | train | [
"H987r53Tp9",
"vTtf2guD3DV",
"N7u9ufkDRn3",
"SG5vi3HHI8",
"Lf7MGfXEwe7",
"o1N6LwXO6GP",
"jqo8TmP508-",
"CS4oCeoL3J7",
"6ogqzr4OFS_",
"GZF5yTlowrg",
"iicz22mmOu6",
"MOBB63-G9lZ",
"arvkrubOwio",
"7sQRFF6kuB",
"iSaHZVggORL",
"wUTdAq0hgl4"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response and being receptive. My overall evaluation remains positive. ",
" Thank you for your suggestion. We agree that further analysis would be valuable; this might be indeed the case to find some well-isolated small experiments.\n\nLet us consider the following grid-world setup. Let the sta... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"arvkrubOwio",
"Lf7MGfXEwe7",
"o1N6LwXO6GP",
"nips_2021_5KCvuCYGi7G",
"7sQRFF6kuB",
"GZF5yTlowrg",
"6ogqzr4OFS_",
"nips_2021_5KCvuCYGi7G",
"iicz22mmOu6",
"MOBB63-G9lZ",
"CS4oCeoL3J7",
"iSaHZVggORL",
"wUTdAq0hgl4",
"SG5vi3HHI8",
"nips_2021_5KCvuCYGi7G",
"nips_2021_5KCvuCYGi7G"
] |
nips_2021_YsZQhCJunjl | MCMC Variational Inference via Uncorrected Hamiltonian Annealing | Given an unnormalized target distribution we want to obtain approximate samples from it and a tight lower bound on its (log) normalization constant log Z. Annealed Importance Sampling (AIS) with Hamiltonian MCMC is a powerful method that can be used to do this. Its main drawback is that it uses non-differentiable transition kernels, which makes tuning its many parameters hard. We propose a framework to use an AIS-like procedure with Uncorrected Hamiltonian MCMC, called Uncorrected Hamiltonian Annealing. Our method leads to tight and differentiable lower bounds on log Z. We show empirically that our method yields better performances than other competing approaches, and that the ability to tune its parameters using reparameterization gradients may lead to large performance improvements.
| accept | The reviewers all came to a clear consensus that this paper should be accepted. The authors addressed all points raised by reviewers thoroughly. In the camera ready, please make sure to follow through on the edits discussed during the review process. In particular, multiple reviewers pointed out that the experiments should include runtimes, and that there should also be an experiment that provides a comparison to the true log marginal -- please ensure these are included in the camera-ready. Please also improve the rigour of the proof of Theorem 2 using measure-theoretic arguments.
| train | [
"xQNA95niHIS",
"OWW_4aqlDte",
"9p2WsCP_cYx",
"-ghLj22EARw",
"RKnfJsBvp33",
"EUgZiYaRSz-",
"2o6phdNjU5b",
"46WEfHBzj2u",
"MImWjVRBxtX",
"YEoQ3rRXYsz",
"cUVceKz45Ml",
"ae1pS2qBKJe"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns. I think these additional experiments will really strengthen the paper. ",
"The authors introduce a variational inference method that leverages Annealed Importance Sampling (AIS) and Langevin/HMC MCMC. In particular HMC dynamics are used to target a sequence of intermediate ... | [
-1,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"9p2WsCP_cYx",
"nips_2021_YsZQhCJunjl",
"-ghLj22EARw",
"nips_2021_YsZQhCJunjl",
"46WEfHBzj2u",
"OWW_4aqlDte",
"ae1pS2qBKJe",
"cUVceKz45Ml",
"YEoQ3rRXYsz",
"nips_2021_YsZQhCJunjl",
"nips_2021_YsZQhCJunjl",
"nips_2021_YsZQhCJunjl"
] |
nips_2021_41QJ--DLjoD | Landmark-RxR: Solving Vision-and-Language Navigation with Fine-Grained Alignment Supervision | In Vision-and-Language Navigation (VLN) task, an agent is asked to navigate inside 3D indoor environments following given instructions. Cross-modal alignment is one of the most critical challenges in VLN because the predicted trajectory needs to match the given instruction accurately. In this paper, we address the cross-modal alignment challenge from a fine-grained perspective. Firstly, to alleviate weak cross-modal alignment supervision from coarse-grained data, we introduce a human-annotated fine-grained VLN dataset, namely Landmark-RxR. Secondly, to further enhance local cross-modal alignment under fine-grained supervision, we investigate the focal-oriented rewards with soft and hard forms, by focusing on the critical points sampled from fine-grained Landmark-RxR. Moreover, to fully evaluate the navigation process, we also propose a re-initialization mechanism that makes metrics insensitive to difficult points, which can cause the agent to deviate from the correct trajectories. Experimental results show that our agent has superior navigation performance on Landmark-RxR, en-RxR and R2R. Our dataset and code are available at https://github.com/hekj/Landmark-RxR.
| accept | This work introduces a new resource for the English fold of RxR and shows how these annotations lead to significant gains in the original related domains/tasks. The authors have provided substantial clarifications and new results during the discussion which are critical to understanding their work. These need to be included in the final revision. Note, in particular, many of the concerns raised by dQiv and the results for gPfu. Some discussion of whether the resource can provide utility to the full RxR task (not just English) would also be appreciated. | train | [
"o5aT2UJiah",
"_8FI5oU8Kom",
"fq3RnXyMzBj",
"u1N-jZxfL_4",
"2uDcTnURps",
"4dCIQ-pUge",
"_7uN_Bd1vQ7",
"wp_tRHr--lS",
"7r56PmcXby1",
"gt13QMX1EkI",
"JXdgGSJO5V",
"KrVgmaeEl6W",
"oAuXH51O-fN"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the further clarifications. I still believe the writing and clarity of the submission need to be improved. Due to the usefulness of the dataset and the clarifications, I have increased my score to 5.",
"The paper proposes to improve vision and language navigation by augmenting the RxR dataset wi... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"2uDcTnURps",
"nips_2021_41QJ--DLjoD",
"4dCIQ-pUge",
"nips_2021_41QJ--DLjoD",
"4dCIQ-pUge",
"wp_tRHr--lS",
"_8FI5oU8Kom",
"_8FI5oU8Kom",
"u1N-jZxfL_4",
"oAuXH51O-fN",
"KrVgmaeEl6W",
"nips_2021_41QJ--DLjoD",
"nips_2021_41QJ--DLjoD"
] |
nips_2021_YygA0yppTR | A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness | Successful adoption of deep learning (DL) in the wild requires models to be: (1) compact, (2) accurate, and (3) robust to distributional shifts. Unfortunately, efforts towards simultaneously meeting these requirements have mostly been unsuccessful. This raises an important question: Is the inability to create Compact, Accurate, and Robust Deep neural networks (CARDs) fundamental? To answer this question, we perform a large-scale analysis of popular model compression techniques which uncovers several intriguing patterns. Notably, in contrast to traditional pruning approaches (e.g., fine tuning and gradual magnitude pruning), we find that ``lottery ticket-style'' approaches can surprisingly be used to produce CARDs, including binary-weight CARDs. Specifically, we are able to create extremely compact CARDs that, compared to their larger counterparts, have similar test accuracy and matching (or better) robustness---simply by pruning and (optionally) quantizing. Leveraging the compactness of CARDs, we develop a simple domain-adaptive test-time ensembling approach (CARD-Decks) that uses a gating module to dynamically select appropriate CARDs from the CARD-Deck based on their spectral-similarity with test samples. The proposed approach builds a "winning hand'' of CARDs that establishes a new state-of-the-art (on RobustBench) on CIFAR-10-C accuracies (i.e., 96.8% standard and 92.75% robust) and CIFAR-100-C accuracies (80.6% standard and 71.3% robust) with better memory usage than non-compressed baselines (pretrained CARDs and CARD-Decks available at https://github.com/RobustBench/robustbench). Finally, we provide theoretical support for our empirical findings.
| accept | This work studies whether or not model pruning can be leveraged to improve the out-of-distribution robustness of image models. The work notes that the benefits of pruning strongly depend on details on how pruning is applied. This observation is valuable in light of past literature which were largely unable to improve robustness via model pruning. Reviewers initially had a number of criticisms of the work. For example the question of why model compression sometimes improves robustness remained unanswered and also the work only provides experiments on variants of CIFAR-10 and CIFAR-100. However, the authors had several extended back and forth discussions with reviewers during the rebuttal period and made several improvements to the work while addressing a number of concerns that reviewers have. Although I am quite hesitant to accept a primarily empirical work with experiments only on CIFAR, I overall agree with reviewers that the provided experiments are quite through and recommend accepting the work. | train | [
"_c5xxCbKw3Z",
"QIZA1RnAOCJ",
"WHB6jSsgyC",
"fw9giPU6naa",
"tWfiOd6bQ8a",
"1wXIY5BFDNs",
"83Xo9Fanupd",
"lxUnpZHy3s6",
"YbR53BeGLre",
"uQyQcp9SzlD",
"SY3suZa9Mg",
"0p5hOZmbeub",
"V7YfsWtVHx",
"1_7W1ohIdeb",
"evwwNUuTX2h",
"UGByvc5ygoz",
"fikG9LxmWB1",
"AvH0GMRbouV",
"MQ_1cYXxPkK"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
"This paper performs an empirical study on the OOD robustness of compact models and shows the existence of compact, accurate, and robust DNNs (CARDs). It gives a frequency-domain analysis with the Fourier sensitivity method to explain why different pruning methods lead to compact models with varying OOD robustness.... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_YygA0yppTR",
"nips_2021_YygA0yppTR",
"fw9giPU6naa",
"tWfiOd6bQ8a",
"SY3suZa9Mg",
"QIZA1RnAOCJ",
"_c5xxCbKw3Z",
"Z_H18FuBPiR",
"0p5hOZmbeub",
"0p5hOZmbeub",
"0p5hOZmbeub",
"V7YfsWtVHx",
"QIZA1RnAOCJ",
"QIZA1RnAOCJ",
"QIZA1RnAOCJ",
"Z_H18FuBPiR",
"7jhwppnIz7",
"_c5xxCbKw3Z... |
nips_2021_fmiwLdJCmLS | On the Importance of Gradients for Detecting Distributional Shifts in the Wild | Detecting out-of-distribution (OOD) data has become a critical component in ensuring the safe deployment of machine learning models in the real world. Existing OOD detection approaches primarily rely on the output or feature space for deriving OOD scores, while largely overlooking information from the gradient space. In this paper, we present GradNorm, a simple and effective approach for detecting OOD inputs by utilizing information extracted from the gradient space. GradNorm directly employs the vector norm of gradients, backpropagated from the KL divergence between the softmax output and a uniform probability distribution. Our key idea is that the magnitude of gradients is higher for in-distribution (ID) data than that for OOD data, making it informative for OOD detection. GradNorm demonstrates superior performance, reducing the average FPR95 by up to 16.33% compared to the previous best method.
| accept | This paper proposes using the KL divergence between the softmax output and the uniform distribution to detect OOD examples (OOD examples are expected to have lower KL divergence). Authors further show extensive empirical results in support of their method.
All reviewers agree that this work is novel, well-written, and a significant contribution to the community. Reviewers have provided detailed comments to improve this work and increase its impact, and I highly recommend that the authors take all these suggestions into account. In particular, I suggest further explanation and investigation of the effectiveness of this technique when the class label sets are distinct, as well as when there is covariate shift.
Given reviewers' consensus, I recommend accepting the paper. | train | [
"A7HcuOqOuZW",
"WMTnaknHe2q",
"DQBiat8jgdJ",
"xnq47R1AL0f",
"5GdGdD7xzDu",
"1bTsXur4nHo",
"UUodirRk2gC",
"OGxaFJHQLt4",
"w8lU_cW69PC",
"rkT9Y34ElXR",
"qP-5kJeW6wQ",
"d-9LhR_VAV",
"WfTe141vWZy",
"5Zsf2C-Nf-U",
"AsbGasNeUIO"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The work presents GradNorm which uses the norm of gradient, back-propagated from the KL divergence between the softmax output and a uniform probability distribution, as the uncertainty score for OOD detection. The experimental results show that the proposed method outperforms the baselines on ImageNet1k vs OOD and... | [
7,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_fmiwLdJCmLS",
"5GdGdD7xzDu",
"OGxaFJHQLt4",
"nips_2021_fmiwLdJCmLS",
"1bTsXur4nHo",
"rkT9Y34ElXR",
"nips_2021_fmiwLdJCmLS",
"w8lU_cW69PC",
"UUodirRk2gC",
"xnq47R1AL0f",
"WfTe141vWZy",
"nips_2021_fmiwLdJCmLS",
"AsbGasNeUIO",
"A7HcuOqOuZW",
"nips_2021_fmiwLdJCmLS"
] |
nips_2021_jcCatp6oWZK | Iterative Methods for Private Synthetic Data: Unifying Framework and New Methods | We study private synthetic data generation for query release, where the goal is to construct a sanitized version of a sensitive dataset, subject to differential privacy, that approximately preserves the answers to a large collection of statistical queries. We first present an algorithmic framework that unifies a long line of iterative algorithms in the literature. Under this framework, we propose two new methods. The first method, private entropy projection (PEP), can be viewed as an advanced variant of MWEM that adaptively reuses past query measurements to boost accuracy. Our second method, generative networks with the exponential mechanism (GEM), circumvents computational bottlenecks in algorithms such as MWEM and PEP by optimizing over generative models parameterized by neural networks, which capture a rich family of distributions while enabling fast gradient-based optimization. We demonstrate that PEP and GEM empirically outperform existing algorithms. Furthermore, we show that GEM nicely incorporates prior information from public data while overcoming limitations of PMW^Pub, the existing state-of-the-art method that also leverages public data.
| accept | The paper got overall strong feedback from the reviewers. There were a few concerns regarding the presentation which, once addressed, will make the paper stronger. Some of those are:
1. A more formal treatment of the Private Entropy Projection algorithm. In particular the distinction between the constrained version, and the Lagrangian formulation.
2. The lack of theoretical analysis of utility. Since almost all the prior works have a formal utility analysis, it is important for the authors to point out any technical difficulty towards proving a formal analysis. | train | [
"oD7YHZzSS4b",
"PhlaukHFAP",
"kI6VfUNZb4B",
"BxlHD3khiuF",
"zLwsbHacgjr",
"gSLJe3HVjTH",
"_vmVIMHPukj",
"M5i6-r6geC",
"0JHpW4cmnFP",
"u3Jjo7khq6_"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the helpful comments. \n\nRegarding the number of iterations or projection steps: Running with more iterations (max_iter) does not hurt the algorithm's accuracy. Therefore, in principle, we can apply the projection step until the error on the previous measurements stops decreasing, but we cap the nu... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"PhlaukHFAP",
"kI6VfUNZb4B",
"u3Jjo7khq6_",
"0JHpW4cmnFP",
"M5i6-r6geC",
"_vmVIMHPukj",
"nips_2021_jcCatp6oWZK",
"nips_2021_jcCatp6oWZK",
"nips_2021_jcCatp6oWZK",
"nips_2021_jcCatp6oWZK"
] |
nips_2021_xj2sE--Q90e | Understanding End-to-End Model-Based Reinforcement Learning Methods as Implicit Parameterization | Estimating the per-state expected cumulative rewards is a critical aspect of reinforcement learning approaches, however the experience is obtained, but standard deep neural-network function-approximation methods are often inefficient in this setting. An alternative approach, exemplified by value iteration networks, is to learn transition and reward models of a latent Markov decision process whose value predictions fit the data. This approach has been shown empirically to converge faster to a more robust solution in many cases, but there has been little theoretical study of this phenomenon. In this paper, we explore such implicit representations of value functions via theory and focused experimentation. We prove that, for a linear parametrization, gradient descent converges to global optima despite non-linearity and non-convexity introduced by the implicit representation. Furthermore, we derive convergence rates for both cases which allow us to identify conditions under which stochastic gradient descent (SGD) with this implicit representation converges substantially faster than its explicit counterpart. Finally, we provide empirical results in some simple domains that illustrate the theoretical findings.
| accept | Initially some of the reviewers were concerned with the limiting assumptions used. However, after the author response, all of the reviewers agree that the theoretical results while limited in some ways are interesting and significant enough for acceptance. As reviewer 6XLT points out, adding a balanced discussion of the additional complexities of the full MDP case and the additional experiments will strengthen the paper significantly. | train | [
"MInWodiLzj_",
"UQ4R4eYnMdt",
"nPeduCT4pLc",
"3WZTj7KvRT6",
"cOEgM43FAu",
"i4MLr2oHAC9",
"HgVx_qq7sID",
"gPAElRsFZO7",
"vb9ykob6MTo",
"8wo4FVbt-SW",
"8r1gO1shp0Q",
"gknHSNSR2aN",
"U-cWEJCXA8k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the concise and specific response to the concerns raised in the review. I have updated my score on the basis of this response. I would still strongly urge the authors to devote some space to a discussion of the possible drawbacks of the simplifying assumptions (as per the rebuttal), and clarify the ... | [
-1,
6,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nPeduCT4pLc",
"nips_2021_xj2sE--Q90e",
"UQ4R4eYnMdt",
"gPAElRsFZO7",
"nips_2021_xj2sE--Q90e",
"nips_2021_xj2sE--Q90e",
"8wo4FVbt-SW",
"U-cWEJCXA8k",
"nips_2021_xj2sE--Q90e",
"cOEgM43FAu",
"i4MLr2oHAC9",
"UQ4R4eYnMdt",
"nips_2021_xj2sE--Q90e"
] |
nips_2021_9In970xfmNk | Mirror Langevin Monte Carlo: the Case Under Isoperimetry | Motivated by the connection between sampling and optimization, we study a mirror descent analogue of Langevin dynamics and analyze three different discretization schemes, giving nonasymptotic convergence rates under functional inequalities such as Log-Sobolev in the corresponding metric. Compared to the Euclidean setting, the result reveals an intricate relationship between the underlying geometry and the target distribution and suggests that care might need to be taken in order for the discretized algorithm to achieve vanishing bias with diminishing stepsize for sampling from potentials under weaker smoothness/convexity regularity conditions.
| accept | The paper studies discretization schemes for the mirror-descent version of Langevin dynamics, under a mirror log-Sobolev inequality (LSI). We thank the authors for a lively discussion. We have decided to recommend acceptance, because the theoretical contribution is of interest to the community. That being said, we still had concerns about 1) knowing when mirror LSI is likely to hold, and 2) the thinness of the experimental section. We thus encourage the authors to update the paper before the final version, so as to include:
* a paragraph on when mirror LSI holds, or might hold, following the discussion that was held during the reviewing process.
* More quantitative experiments that illustrate the theoretical results on the various discretization schemes. Please also make the figure labels more readable. | train | [
"c5b2emx0YMl",
"Kf1pLwQG_zs",
"ky1yfYwzg_4",
"jKzUdhiwMBp",
"GYyG0kxdxts",
"fdl6Y8Q-Pci",
"NL9V0eVtlDb",
"lWZVdxMc7pS",
"6fXXvHkW9p0",
"5sIj7-MxDPQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies discretized mirror Langevin diffusions under the assumption of mirror-log Sobolev inequalities as well as relative smoothness assumptions. In particular, they show that a new backward method obtains perhaps the best dependence in terms of finite sample guarantees in this setting. On the other ha... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"nips_2021_9In970xfmNk",
"c5b2emx0YMl",
"6fXXvHkW9p0",
"GYyG0kxdxts",
"fdl6Y8Q-Pci",
"5sIj7-MxDPQ",
"lWZVdxMc7pS",
"nips_2021_9In970xfmNk",
"nips_2021_9In970xfmNk",
"nips_2021_9In970xfmNk"
] |
nips_2021_HShLSEcVZJ4 | Do Different Tracking Tasks Require Different Appearance Models? | Tracking objects of interest in a video is one of the most popular and widely applicable problems in computer vision. However, with the years, a Cambrian explosion of use cases and benchmarks has fragmented the problem in a multitude of different experimental setups. As a consequence, the literature has fragmented too, and now novel approaches proposed by the community are usually specialised to fit only one specific setup. To understand to what extent this specialisation is necessary, in this work we present UniTrack, a solution to address five different tasks within the same framework. UniTrack consists of a single and task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion, and multiple ``heads'' that address individual tasks and do not require training. We show how most tracking tasks can be solved within this framework, and that the same appearance model can be successfully used to obtain results that are competitive against specialised methods for most of the tasks considered. The framework also allows us to analyse appearance models obtained with the most recent self-supervised methods, thus extending their evaluation and comparison to a larger variety of important problems.
| accept | Initially the paper received mixed positive reviews: three accept (7) and one reject (3). The reviewers appreciated the originality (unified framework for 5 tracking tasks, new similarity measurements and association algorithm, evaluation of supervised and self-supervised representations on 5 tracking tasks). The reviewers' main concerns were:
1. additional results with new unsupervised methods.
2. some writing/clarity issues
3. didn't use multi-task learning to train the framework.
4. comparison on SOT uses out-of-date dataset and older methods. need further evaluation on more recent datasets.
5. for tracking model, appearance model needs to be specialized for the current environment, rather than a unified appearance model.
6. missing theoretical analysis that appearance models can cover most changes in target appearance.
7. possible unfair comparisons since some methods not pre-trained on imagenet.
In the response, authors provided more experiment results (point 4: SOT results on 5 larger datasets; point 1: another SSL method) and further clarifications and explanations (points 2, 3, 5, 6, 7). The 3 positive reviewers were satisfied with the response, while the negative reviewer was only partially satisfied and raised their rating to "4". The remaining concern of the negative reviewer was mainly about the theoretical analysis (point 6). The AC tends to discount this concern, since theoretical analysis seems outside of the scope of this paper, given the large amount of empirical results. The AC agrees with the 3 positive reviewers that the paper provides an interesting new way to unify tracking tasks, and to evaluate representation learning methods on tracking tasks. Thus the AC recommends accept. The authors should update the paper according to the review comments and discussion. | train | [
"-GJqlaPiVXc",
"5litofwQ8j",
"cKYfn_OCjFV",
"j1bH97sDwPN",
"g3o9EZ5p1Yw",
"kUNz6J4G5sh",
"vikLSSKQNGN",
"MwuFoq4gMjm",
"_8tFdMG_MtQ",
"TPyHEUkF4sd",
"0Lby00Jy5WM",
"ifahbkK8hZT"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed answers to all my questions. I am satisfied with their responses and my doubts about the network stride modification being consistent for most networks have been clarified. \n\nOverall, I retain my positive feedback for this paper after the rebuttal as well and will keep my ... | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"MwuFoq4gMjm",
"cKYfn_OCjFV",
"vikLSSKQNGN",
"nips_2021_HShLSEcVZJ4",
"ifahbkK8hZT",
"0Lby00Jy5WM",
"j1bH97sDwPN",
"TPyHEUkF4sd",
"nips_2021_HShLSEcVZJ4",
"nips_2021_HShLSEcVZJ4",
"nips_2021_HShLSEcVZJ4",
"nips_2021_HShLSEcVZJ4"
] |
nips_2021_3KhhJxaufVF | Towards robust vision by multi-task learning on monkey visual cortex | Deep neural networks set the state-of-the-art across many tasks in computer vision, but their generalization ability to simple image distortions is surprisingly fragile. In contrast, the mammalian visual system is robust to a wide range of perturbations. Recent work suggests that this generalization ability can be explained by useful inductive biases encoded in the representations of visual stimuli throughout the visual cortex. Here, we successfully leveraged these inductive biases with a multi-task learning approach: we jointly trained a deep network to perform image classification and to predict neural activity in macaque primary visual cortex (V1) in response to the same natural stimuli. We measured the out-of-distribution generalization abilities of our resulting network by testing its robustness to common image distortions. We found that co-training on monkey V1 data indeed leads to increased robustness despite the absence of those distortions during training. Additionally, we showed that our network's robustness is often very close to that of an Oracle network where parts of the architecture are directly trained on noisy images. Our results also demonstrated that the network's representations become more brain-like as their robustness improves. Using a novel constrained reconstruction analysis, we investigated what makes our brain-regularized network more robust. We found that our monkey co-trained network is more sensitive to content than noise when compared to a Baseline network that we trained for image classification alone. Using DeepGaze-predicted saliency maps for ImageNet images, we found that the monkey co-trained network tends to be more sensitive to salient regions in a scene, reminiscent of existing theories on the role of V1 in the detection of object borders and bottom-up saliency. 
Overall, our work expands the promising research avenue of transferring inductive biases from biological to artificial neural networks on the representational level, and provides a novel analysis of the effects of our transfer.
| accept | This paper shows that co-training on primate neural data improves the robustness of CNNs. There were some questions about the relevance of the work with respect to other previous work, but this was handled well by the authors and the reviewers were satisfied by those clarifications (which should also be worked into the paper). There were some ethical concerns raised, but these also were handled well. In general, I believe the review process worked well here, the paper is stronger because of it, and should be accepted. | train | [
"6gdF2DkSI3",
"ADlFdDXy5bD",
"x-7zCSEFWvX",
"ulbI4Z_raRR",
"XRKt-1SEIc",
"UzUQdqaktsA",
"S3NQ-JnKjJ",
"OiSOW3gIYtT",
"If4iUBKqCBj",
"aqodYgNJCDo",
"2j4NwFHGKop",
"x7YTEVfiUSH",
"wi9HQYcZ9WB",
"hhWtTx_Tf40",
"Cv0peS_NN7",
"Svgs3o_VkTS",
"jWXyYTTm5Jc",
"sY0Q6ZJtTTv",
"0W8JfSvDxqb",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
... | [
" Dear reviewer 81tw, \n\nthanks for raising the score. We appreciate your constructive feedback which helped us improve the paper. \n",
"This study follows a recent trend of results that use similarity to neural representations in early visual areas as a regularizer during training of deep neural networks (DNNs)... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"ulbI4Z_raRR",
"nips_2021_3KhhJxaufVF",
"XRKt-1SEIc",
"x-7zCSEFWvX",
"UzUQdqaktsA",
"aqodYgNJCDo",
"If4iUBKqCBj",
"nips_2021_3KhhJxaufVF",
"JEtTMbpdrSN",
"jWXyYTTm5Jc",
"OiSOW3gIYtT",
"Cv0peS_NN7",
"Svgs3o_VkTS",
"Cv0peS_NN7",
"nips_2021_3KhhJxaufVF",
"nips_2021_3KhhJxaufVF",
"sY0Q6Z... |
nips_2021__idcJrecij | Arbitrary Conditional Distributions with Energy | Ryan Strauss, Junier B. Oliva | accept | This paper proposes an approach for performing conditional density estimation of the form $p(x_u|x_o)$ for any arbitrary choice of the observed ($x_o$) and unobserved ($x_u$) variables by using an energy function parametrized by a neural network to estimate the 1D distributions.
Three reviewers recommend acceptance, and one reviewer believes the paper is below the acceptance threshold. After reading the reviews and the rebuttal, I recommend this paper for acceptance. I also encourage the authors to improve the paper by considering the reviewers' comments. In particular:
+ Add an analysis of the number of samples needed for the importance sampling approximations.
+ As noted by reviewer AEG1, the imputation method may not be appropriate for certain applications because the imputation is performed using the marginal means, e.g., the means of $p(X_2|X_1=x_1)$ and $p(X_3|X_1=x_1)$, therefore, ignoring the dependences between $X_2$ and $X_3$. This should be mentioned/discussed in Section 4.4.2 or 5.1.2.
+ Add a sentence motivating importance sampling over the grid.
+ Add some discussion on the skip-connection and multiple latent vectors.
+ Include a derivation or discussion of the stochastic approximation of the autoregressive score in the Appendix. | test | [
"eV1vys059f",
"KzCzQ77yMcP",
"uYFyi5hBuUl",
"9i7V0ffZnQW",
"zjy3VUm1ItI",
"Cd_yC6LI0JY",
"exrOCywaz7c",
"6lnVtwCbozy",
"9jeknTbpmjB",
"42Azf7SSsjM",
"-bLP8Yr1UqZ",
"NSo0JnycLtb"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your helpful comments.",
" The non-negativity requirement is placed on $\\mathcal{E}(x)$, the energy. The unnormalized likelihood, $e^{-\\mathcal{E}(x)}$, is therefore a value between 0 and 1. You are correct that we have no loss of generality though, due to the normalizing constant. This is a typica... | [
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"9jeknTbpmjB",
"9i7V0ffZnQW",
"nips_2021__idcJrecij",
"6lnVtwCbozy",
"nips_2021__idcJrecij",
"exrOCywaz7c",
"zjy3VUm1ItI",
"uYFyi5hBuUl",
"NSo0JnycLtb",
"-bLP8Yr1UqZ",
"nips_2021__idcJrecij",
"nips_2021__idcJrecij"
] |
nips_2021_oepSB9bsoCF | Learning Domain Invariant Representations in Goal-conditioned Block MDPs | Deep Reinforcement Learning (RL) is successful in solving many complex Markov Decision Processes (MDPs) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for visual input agents. Unfortunately, deep RL policies are usually sensitive to these changes and fail to act robustly against them. This resembles the problem of domain generalization in supervised learning. In this work, we study this problem for goal-conditioned RL agents. We propose a theoretical framework in the Block MDP setting that characterizes the generalizability of goal-conditioned policies to new environments. Under this framework, we develop a practical method PA-SkewFit that enhances domain generalization. The empirical evaluation shows that our goal-conditioned RL agent can perform well in various unseen test environments, improving by 50\% over baselines.
| accept | The paper focuses on an interesting problem, goal-conditioned policy generalization in block MDPs (a kind of POMDP where the current state is uniquely identifiable from the current observation, even though a state can emit many different observations). This is a mostly theoretical work, but its experiments are convincing. The reviewers have closely examined the paper's theory and, on the one hand, didn't find errors in it but, on the other hand, found theory gaps in it that could be explained either by the paper being not fully clear or not sufficiently rigorous. Please see the discussions with reviewers WvNf and S2fL, especially reviewer S2fL's "Thanks for your response" comment.
Nonetheless, due to this work being one of the early ones on this topic and being likely to become a stepping stone for further exploration of this area, the metareviewer recommends acceptance, trusting that the authors incorporate the points that came up in the discussion into the final version (it's hard to think of a reason not to do this). | test | [
"d0Tmb3OM980",
"FgAd_E3X-Sl",
"BwE8d2cqnmx",
"qaateapCF9D",
"ADqho1LH-z9",
"9c7oi7P8AJr",
"pXEssANRSGq",
"GiM9w9TZVej",
"wE32zz3xGAe",
"NkRB8hHIY9F",
"0jFRc9VnbdE",
"_T1Wo1vnA3H",
"IuF3N-zhTZ",
"BJDKXdY13va",
"n1iwfSQQQqe",
"SXe0B_cWMtG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your valuable feedbacks. We will clarify the assumptions (e.g., near-deterministic dynamic and initial distribution) and remove the tokens' and statements' ambiguities in the next version.\n\nFurthermore, we will also incorporate a more rigorous analysis on the approximate perfect alignment, which our ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"FgAd_E3X-Sl",
"IuF3N-zhTZ",
"nips_2021_oepSB9bsoCF",
"_T1Wo1vnA3H",
"9c7oi7P8AJr",
"pXEssANRSGq",
"GiM9w9TZVej",
"BJDKXdY13va",
"nips_2021_oepSB9bsoCF",
"nips_2021_oepSB9bsoCF",
"n1iwfSQQQqe",
"NkRB8hHIY9F",
"SXe0B_cWMtG",
"BwE8d2cqnmx",
"wE32zz3xGAe",
"nips_2021_oepSB9bsoCF"
] |
nips_2021_mbm8YOsoSER | Near-Optimal Multi-Perturbation Experimental Design for Causal Structure Learning | Causal structure learning is a key problem in many domains. Causal structures can be learnt by performing experiments on the system of interest. We address the largely unexplored problem of designing a batch of experiments that each simultaneously intervene on multiple variables. While potentially more informative than the commonly considered single-variable interventions, selecting such interventions is algorithmically much more challenging, due to the doubly-exponential combinatorial search space over sets of composite interventions. In this paper, we develop efficient algorithms for optimizing different objective functions quantifying the informativeness of a budget-constrained batch of experiments. By establishing novel submodularity properties of these objectives, we provide approximation guarantees for our algorithms. Our algorithms empirically perform superior to both random interventions and algorithms that only select single-variable interventions.
| accept | This paper describes a methodology for causal structure learning while varying multiple variables simultaneously. Individually, the fields of structure learning, causal inference, and experiment design are well-studied areas of statistics and machine learning. The novelty of this work is bringing such ideas together to address causal structure learning. The paper makes interesting connections to submodularity to solve the problem.
The reviewers were supportive of acceptance of the paper and found numerous merits. First, the proposed approximation algorithm is analyzed theoretically using set function submodularity. This lays the foundation for further analysis and perhaps improvements. Second, the empirical results show that there are significant improvements to be found by sequential multi-perturbations in causal structure learning. There were some clarifying comments around so-called Meek rules. The authors provided clarification in their response and the associated adjustments to the paper seem straightforward. Given that the paper was found to be technically sound and potentially impactful for not only gene regulatory networks, but also for other problem domains, I agree with the reviewers and favor acceptance.
| train | [
"1mIDLVJo2Ig",
"s0JZuTiobxG",
"yqGqGdKR95t",
"zLaj-tMnwCX",
"CPMhDAkLuEf",
"yw2DbO0IZ0H",
"_-hJpaaGZ77",
"8OfbATjNsF1",
"gUF1fhEhd4",
"6W-RFjysZsz",
"VYT92tF2yCs",
"DMqJ2mtKQsd",
"SHSf2Xo4yB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for taking the time to discuss these details with us. \n\nWhen we can do $N$ total interventions, a batch size of $k=1$ is an adaptive experiment setup and a batch size of $k=N$ (rather than infinity) is a totally offline setup. Any batch size $k \\in (1, N)$, exactly as you said, leads to des... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"yqGqGdKR95t",
"nips_2021_mbm8YOsoSER",
"yw2DbO0IZ0H",
"nips_2021_mbm8YOsoSER",
"8OfbATjNsF1",
"_-hJpaaGZ77",
"6W-RFjysZsz",
"SHSf2Xo4yB",
"DMqJ2mtKQsd",
"s0JZuTiobxG",
"zLaj-tMnwCX",
"nips_2021_mbm8YOsoSER",
"nips_2021_mbm8YOsoSER"
] |
nips_2021_slvWAZohje | Fuzzy Clustering with Similarity Queries | Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal | accept | The authors study the fuzzy k-means problem with oracle queries and present algorithm that outperform previous work. The reviewers find the result interesting and novel and so, after the discussion phase, we think that the paper should be accepted.
One limitation of the paper is that it is a bit hard to read, but the authors should be able to address the reviewers' concerns for the final version of the paper. One important point raised by the reviewers is that Lemma 2 in the current form is very confusing and it should be rewritten and probably moved to the end of the paper.
"1eWdwkz40p",
"3eJv8hio1W",
"_QH9JbZfZYd",
"96lQupwHGYQ",
"AFndIJPLX9e",
"LFCyL-64Czf",
"OjxN2LyGctw",
"71t6ul7LxN",
"Fbw2YIR_PNp",
"elpPitOPmWO",
"FMXsr7rNjd6",
"HBorTfIOyl1",
"acDFLHZzeDi",
"NghidU8Kv-v"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their detailed response! I've read the response and the other reviews. I see that similar points were raised regarding the clarity (like the argument of approximations bounds for the metrics), but I think the authors' response shows this is more of a problem of writing being obscure at s... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"FMXsr7rNjd6",
"nips_2021_slvWAZohje",
"Fbw2YIR_PNp",
"AFndIJPLX9e",
"LFCyL-64Czf",
"OjxN2LyGctw",
"elpPitOPmWO",
"nips_2021_slvWAZohje",
"3eJv8hio1W",
"acDFLHZzeDi",
"NghidU8Kv-v",
"71t6ul7LxN",
"nips_2021_slvWAZohje",
"nips_2021_slvWAZohje"
] |
nips_2021_F7LYy9FnK2x | Improving black-box optimization in VAE latent space using decoder uncertainty | Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions). However, existing methods lack robustness as they may decide to explore areas of the latent space for which no data was available during training and where the decoder can be unreliable, leading to the generation of unrealistic or invalid objects. We propose to leverage the epistemic uncertainty of the decoder to guide the optimization process. This is not trivial though, as a naive estimation of uncertainty in the high-dimensional and structured settings we consider would result in high estimator variance. To solve this problem, we introduce an importance sampling-based estimator that provides more robust estimates of epistemic uncertainty. Our uncertainty-guided optimization approach does not require modifications of the model architecture nor the training process. It produces samples with a better trade-off between black-box objective and validity of the generated samples, sometimes improving both simultaneously. We illustrate these advantages across several experimental settings in digit generation, arithmetic expression approximation and molecule generation for drug design.
| accept | Generation of discrete objects from a conditional density (a decoder) with a continuous latent is of interest in many applications.
This work proposes to use importance sampling to estimate the uncertainty of the decoder, "and use this quantity to avoid latent space regions with high uncertainty when generating high-dimensional discrete data."
The key contribution of the work is an importance-sampling-based estimate to measure decoder uncertainty.
This is used to avoid latent codes that correspond to invalid objects.
The paper is found solid and insightful by the reviewers. During the rebuttal, the authors successfully answered several concerns, providing a detailed performance comparison with prior methods for molecular generation, as well as results on an additional metric to illustrate the performance of the proposed method.
Overall, there is a consensus for acceptance and I also agree with this decision.
| val | [
"qTE2QB6YMEF",
"AyryQDOWw_N",
"TFQOvjYfIP2",
"lX-PUpNQKEW",
"qR6u55Nzmc",
"XLnQ7HHBSHs",
"HT94pkrBTyh",
"41_YNf2auP",
"dM7INjjyZyB",
"gOttuNRH-ru",
"oIfctDQ-0YO",
"y-NwGt9xKYY",
"dWhmGIlp12",
"JWo37-ZmohA",
"hVKagozCDGq",
"7iwYmqDdCy9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors use importance sampling to estimate the epistemic uncertainty of the decoder, and use this quantity to avoid latent space regions with high uncertainty when generating high-dimensional discrete data. I believe that if certain concerns are addressed the authors method can be of practical use to importa... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_F7LYy9FnK2x",
"lX-PUpNQKEW",
"y-NwGt9xKYY",
"JWo37-ZmohA",
"oIfctDQ-0YO",
"41_YNf2auP",
"nips_2021_F7LYy9FnK2x",
"dM7INjjyZyB",
"HT94pkrBTyh",
"nips_2021_F7LYy9FnK2x",
"dWhmGIlp12",
"7iwYmqDdCy9",
"gOttuNRH-ru",
"qTE2QB6YMEF",
"nips_2021_F7LYy9FnK2x",
"nips_2021_F7LYy9FnK2x"... |
nips_2021_IZNR0RDtGp3 | Sample Selection for Fair and Robust Training | Fairness and robustness are critical elements of Trustworthy AI that need to be addressed together. Fairness is about learning an unbiased model while robustness is about learning from corrupted data, and it is known that addressing only one of them may have an adverse effect on the other. In this work, we propose a sample selection-based algorithm for fair and robust training. To this end, we formulate a combinatorial optimization problem for the unbiased selection of samples in the presence of data corruption. Observing that solving this optimization problem is strongly NP-hard, we propose a greedy algorithm that is efficient and effective in practice. Experiments show that our method obtains fairness and robustness that are better than or comparable to the state-of-the-art technique, both on synthetic and benchmark real datasets. Moreover, unlike other fair and robust training baselines, our algorithm can be used by only modifying the sampling step in batch selection without changing the training algorithm or leveraging additional clean data.
| accept | Reviewers broadly appreciated the focus of the paper, and agree that the intersection of fairness and robust is a rich and relatively underexplored space. Yet, reviewers found the work lacking in scope (e.g., with respect to the specific types of robustness considered and the characterization of the methods/approach) and novelty (e.g., again with respect to robustness, reviewers found that this was a very straightforward application of a known method). Our private discussion largely followed the public commentary that occurred between Reviewer 1QFb and the authors, so I would encourage the authors to take that sentiment and those comments into account moving forward. Broadly speaking, all reviewers (and the AC) feel that this work is promising but, at present, underdeveloped. | train | [
"9G4hAjhuN4",
"7glctn9cXPp",
"dvK7gGLmgs8",
"T3DCzpGy9tn",
"1DXka5wHISl",
"D9vgdehoz3",
"WqX7cKN56hN",
"I5ciN11kzri",
"NtVXJ4z6G_1",
"jG61QDrT8Fl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new framework of fair sample selection. The paper introduces fairness constraints into clean selection and formalize the optimization problem in the same format as the knapsack problem. Then the authors propose an algorithm iteratively uses FairBatch to decide the fraction of the sample size ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"nips_2021_IZNR0RDtGp3",
"dvK7gGLmgs8",
"WqX7cKN56hN",
"jG61QDrT8Fl",
"9G4hAjhuN4",
"NtVXJ4z6G_1",
"I5ciN11kzri",
"nips_2021_IZNR0RDtGp3",
"nips_2021_IZNR0RDtGp3",
"nips_2021_IZNR0RDtGp3"
] |
nips_2021_5sCVR3Lq6F | NeurWIN: Neural Whittle Index Network For Restless Bandits Via Deep RL | Whittle index policy is a powerful tool to obtain asymptotically optimal solutions for the notoriously intractable problem of restless bandits. However, finding the Whittle indices remains a difficult problem for many practical restless bandits with convoluted transition kernels. This paper proposes NeurWIN, a neural Whittle index network that seeks to learn the Whittle indices for any restless bandits by leveraging mathematical properties of the Whittle indices. We show that a neural network that produces the Whittle index is also one that produces the optimal control for a set of Markov decision problems. This property motivates using deep reinforcement learning for the training of NeurWIN. We demonstrate the utility of NeurWIN by evaluating its performance for three recently studied restless bandit problems. Our experiment results show that the performance of NeurWIN is significantly better than other RL algorithms.
| accept | The paper studies the important problem of restless bandit. While the Whittle index is known to optimally solve the problem, it remains hard to compute in many practical problems. The authors propose a solution leveraging the generalization capabilities of neural networks in a non trivial way and obtains a practically feasible and accurate method.
After the rebuttal and the discussion, the reviewers agree that the problem is significant and the contribution is interesting and it is likely to set an important baseline for future work on the topic. For this reason I recommend acceptance. Yet, there remains a number of aspects that the authors should improve in the final version:
- Add the comparison to WIBQL detailing how it is implemented and clarifying that defining a scalable version (i.e., with neural networks) is outside the scope of the paper.
- Add the improved version of the theory and possibly include an additional paragraph discussing in detail its relevance.
- Clearly acknowledge the need for a simulator and discuss how this could be possibly relaxed as a future work.
- Integrate reviewers' suggestions while improving the structure and overall clarity of the paper.
| train | [
"zld-V1O7RsD",
"RPRBpJuQ082",
"RvXYA8D2CZH",
"nvcIK2AdqKE",
"vCamkiSc1Ay",
"86Lrusas7fj",
"rgj4eTnJZtE",
"OE-s93Bj0tc",
"n_zfFsaOdls"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors develop a reinforcement learning based method for computing the Whittle Index heuristic policy for solving Restless Bandit problems. This is a novel methodological contribution in the space of restless bandits. Several experimental results are provided demonstrating the good performance and general app... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_5sCVR3Lq6F",
"nips_2021_5sCVR3Lq6F",
"nips_2021_5sCVR3Lq6F",
"rgj4eTnJZtE",
"RPRBpJuQ082",
"n_zfFsaOdls",
"OE-s93Bj0tc",
"RvXYA8D2CZH",
"zld-V1O7RsD"
] |
nips_2021_rA9HFxFT7th | Sageflow: Robust Federated Learning against Both Stragglers and Adversaries | While federated learning (FL) allows efficient model training with local data at edge devices, among major issues still to be resolved are: slow devices known as stragglers and malicious attacks launched by adversaries. While the presence of both of these issues raises serious concerns in practical FL systems, no known schemes or combinations of schemes effectively address them at the same time. We propose Sageflow, staleness-aware grouping with entropy-based filtering and loss-weighted averaging, to handle both stragglers and adversaries simultaneously. Model grouping and weighting according to staleness (arrival delay) provides robustness against stragglers, while entropy-based filtering and loss-weighted averaging, working in a highly complementary fashion at each grouping stage, counter a wide range of adversary attacks. A theoretical bound is established to provide key insights into the convergence behavior of Sageflow. Extensive experimental results show that Sageflow outperforms various existing methods aiming to handle stragglers/adversaries.
| accept | This paper proposes a robust approach to dealing with stragglers and adversaries in federated learning by model grouping and staleness-dependent weighting of models at the server, combined with entropy-based filtering. Stragglers and adversaries are two key concerns in federated learning, and this paper makes progress along both fronts. The reviewers generally received this work well, and the authors’ responses helped address many of their concerns, leading reviewers to increase their scores. Overall, the consensus was that this paper is worthy of acceptance and makes a solid contribution to the federated learning literature. When preparing the camera ready, please make sure to address the questions that came up during the review process, especially around clarifying the proposed approach. | train | [
"1T9jaEr15z6",
"iz1R2a3OUf",
"oliC_V9dhz8",
"qgJBIoycyfP",
"jqMWwzNx6Sk",
"pj5Ujcmiqj",
"a4oqlUo1-6t",
"YSaOV4b1cRi",
"_lis9hbEHu6",
"tu1d24bKkvT",
"6ukhuVllCq2",
"EighbCS_Oq3",
"eD1ja6MDP0D",
"FIJmlx8MTZ1",
"FF7q26gaoXw",
"3sQAa60OfcD",
"J2l1bJ_PPcP",
"nzCKF3mkRw",
"d6a0r32kE_u"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
... | [
" Thank you for your time and efforts. We will clearly mention the related issue and possible remedies.",
" 1- The guarantee of secure aggregation depends on the number of models being aggregated. When only two models are aggregated for instance, the server can estimate these local models through their average. T... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iz1R2a3OUf",
"oliC_V9dhz8",
"qgJBIoycyfP",
"nzCKF3mkRw",
"s1FBb7ak4F",
"nips_2021_rA9HFxFT7th",
"alUEaRL9bY",
"_lis9hbEHu6",
"alUEaRL9bY",
"pj5Ujcmiqj",
"pj5Ujcmiqj",
"eD1ja6MDP0D",
"FF7q26gaoXw",
"nips_2021_rA9HFxFT7th",
"J2l1bJ_PPcP",
"J2l1bJ_PPcP",
"nzCKF3mkRw",
"FIJmlx8MTZ1",
... |
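The rows above follow a parallel-list schema: each paper carries aligned lists of review writers, ratings, and confidences, with `-1` marking entries that have no score (e.g. author replies). A minimal sketch of filtering such a row down to scored reviews, using an abridged, illustrative subset of the Sageflow row shown above (not the full record):

```python
# Minimal sketch of working with one row in this schema. The lists line up by
# index, and -1 is the sentinel for "no score" (author replies, comments).
# The values below are an abridged subset of the row above, for illustration.
row = {
    "paper_id": "nips_2021_rA9HFxFT7th",
    "review_writers": ["author", "official_reviewer", "official_reviewer"],
    "review_ratings": [-1, 6, 7],
    "review_confidences": [-1, 3, 4],
}

# Keep only entries that carry an actual reviewer score.
scored = [
    (writer, rating, conf)
    for writer, rating, conf in zip(
        row["review_writers"], row["review_ratings"], row["review_confidences"]
    )
    if rating != -1
]
print(scored)  # [('official_reviewer', 6, 3), ('official_reviewer', 7, 4)]
```

The same index-aligned filtering applies to the other list columns (`review_ids`, `review_contents`, `review_reply_tos`) when a full row is loaded.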