paper_id
stringlengths
19
21
paper_title
stringlengths
8
170
paper_abstract
stringlengths
8
5.01k
paper_acceptance
stringclasses
18 values
meta_review
stringlengths
29
10k
label
stringclasses
3 values
review_ids
list
review_writers
list
review_contents
list
review_ratings
list
review_confidences
list
review_reply_tos
list
iclr_2021_Xb8xvrtB8Ce
Bag of Tricks for Adversarial Training
Adversarial training (AT) is one of the most effective strategies for promoting model robustness. However, recent benchmarks show that most of the proposed improvements on AT are less effective than simply early stopping the training procedure. This counter-intuitive fact motivates us to investigate the implementation details of tens of AT methods. Surprisingly, we find that the basic settings (e.g., weight decay, training schedule, etc.) used in these methods are highly inconsistent. In this work, we provide comprehensive evaluations on CIFAR-10, focusing on the effects of mostly overlooked training tricks and hyperparameters for adversarially trained models. Our empirical observations suggest that adversarial robustness is much more sensitive to some basic training settings than we thought. For example, a slightly different value of weight decay can reduce the model robust accuracy by more than 7%, which is probable to override the potential promotion induced by the proposed methods. We conclude a baseline training setting and re-implement previous defenses to achieve new state-of-the-art results. These facts also appeal to more concerns on the overlooked confounders when benchmarking defenses.
poster-presentations
The authors have conducted a thorough empirical study on the hyperparameters of representative adversarial training methods. The technical novelty of this paper might be insufficient. But the empirical findings in this paper explain the strange and inconsistent reported algorithm results in the literature to some extent and remind the necessity and importance of a careful study on hyperparameters. The authors have actively interacted with the reviewers and through the discussions, many unclear issues have been fixed.
train
[ "0KlHdsIIfo", "82ZHkBJXDLm", "emJOfS4cuhT", "rl5WQU6Qdc", "8cBDb-aZfOG", "sDGdEdw0AoL", "hwBXrl3pYnn", "7jexVf_ji34", "vzXzPPj8zV1", "5hCTA-rR4X7", "5NikKR9pw00", "YUiVhdaqosM", "2q69CZ2nh1C" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper provides an evaluation of different hyperparameter settings for adversarial training. Specifically, it evaluates combinations of warmup, early stopping, weight decay, batch size and other parameters on adversarially trained models. The paper states that its overarching goal is to ``investigate how the im...
[ 5, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_Xb8xvrtB8Ce", "iclr_2021_Xb8xvrtB8Ce", "rl5WQU6Qdc", "iclr_2021_Xb8xvrtB8Ce", "2q69CZ2nh1C", "hwBXrl3pYnn", "vzXzPPj8zV1", "82ZHkBJXDLm", "rl5WQU6Qdc", "0KlHdsIIfo", "82ZHkBJXDLm", "rl5WQU6Qdc", "iclr_2021_Xb8xvrtB8Ce" ]
iclr_2021_2VXyy9mIyU3
Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
Human-annotated labels are often prone to noise, and the presence of such noise will degrade the performance of the resulting deep neural network (DNN) models. Much of the literature (with several recent exceptions) of learning with noisy labels focuses on the case when the label noise is independent of features. Practically, annotations errors tend to be instance-dependent and often depend on the difficulty levels of recognizing a certain task. Applying existing results from instance-independent settings would require a significant amount of estimation of noise rates. Therefore, providing theoretically rigorous solutions for learning with instance-dependent label noise remains a challenge. In this paper, we propose CORES2 (COnfidence REgularized Sample Sieve), which progressively sieves out corrupted examples. The implementation of CORES2 does not require specifying noise rates and yet we are able to provide theoretical guarantees of CORES2 in filtering out the corrupted examples. This high-quality sample sieve allows us to treat clean examples and the corrupted ones separately in training a DNN solution, and such a separation is shown to be advantageous in the instance-dependent noise setting. We demonstrate the performance of CORES2 on CIFAR10 and CIFAR100 datasets with synthetic instance-dependent label noise and Clothing1M with real-world human noise. As of independent interests, our sample sieve provides a generic machinery for anatomizing noisy datasets and provides a flexible interface for various robust training techniques to further improve the performance. Code is available at https://github.com/UCSC-REAL/cores.
poster-presentations
Dear Authors, Thank you very much for your very detailed feedback to the reviewers. They have highly contributed to clarifying some of the concerns raised by the reviewers and improved their understanding of this paper. Overall, all the reviewers acknowledge the merit of this paper and thus I suggest acceptance of this paper. However, as Reviewer #4 pointed out, there are conceptual and theoretical issues that need to be more carefully addressed. Please clarify these issues in the final version of the paper.
test
[ "zrDwoVgd76Z", "Z5wZa5cn-h", "XzOzBHpviQG", "UQ4bxPhGPDK", "C9K37bJulu9", "FY_RwlxcXrB", "3tMdy1bHux6", "FgWCp1XbU-K", "9QsBPu3JleW", "ynH8ODjaloC", "tGBH-ibREtv", "-HxmbXMRET0", "OTVC32bqI-a" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary\n\nThe paper introduces a noise-robust loss function CORES2, motivated by peer loss. The novel loss adds a regularization term that promotes confident prediction and pushes the model prediction away from the prior of the label. Using this loss function, the authors propose a dynamic sample sieve to separat...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_2VXyy9mIyU3", "XzOzBHpviQG", "-HxmbXMRET0", "iclr_2021_2VXyy9mIyU3", "OTVC32bqI-a", "iclr_2021_2VXyy9mIyU3", "UQ4bxPhGPDK", "zrDwoVgd76Z", "zrDwoVgd76Z", "UQ4bxPhGPDK", "UQ4bxPhGPDK", "UQ4bxPhGPDK", "iclr_2021_2VXyy9mIyU3" ]
iclr_2021_fmtSg8591Q
Efficient Reinforcement Learning in Factored MDPs with Application to Constrained RL
Reinforcement learning (RL) in episodic, factored Markov decision processes (FMDPs) is studied. We propose an algorithm called FMDP-BF, which leverages the factorization structure of FMDP. The regret of FMDP-BF is shown to be exponentially smaller than that of optimal algorithms designed for non-factored MDPs, and improves on the best previous result for FMDPs~\citep{osband2014near} by a factor of nH|Si|, where |Si| is the cardinality of the factored state subspace, H is the planning horizon and n is the number of factored transition. To show the optimality of our bounds, we also provide a lower bound for FMDP, which indicates that our algorithm is near-optimal w.r.t. timestep T, horizon H and factored state-action subspace cardinality. Finally, as an application, we study a new formulation of constrained RL, known as RL with knapsack constraints (RLwK), and provides the first sample-efficient algorithm based on FMDP-BF.
poster-presentations
This paper establishes the currently sharpest regret bounds for reinforcement learning in episodic factored MDP. The result improve the result by Osband and Van Roy 2014. The proposed FMDP-BF is a model-based algorithm that construct confidence sets of the transition distributions using Bernstein and adapt policies by optimistic planning. The regret bounds holds with high probability. They also provide a lower bound for this class of problems. Reviewers all see merit in the theoretical results of the paper and reach a consensus that this is a good paper. We'd still like to request that the authors make all corrections and clarifications following the reviewers's suggestions, especially to improve the clarity of the formulation and proof sketches. A separate suggestion: Model-based RL is a long existing approaches. For MDP belongs to a specific family, there exist regret bounds that depend on Eluder dimension of the the MDP family, see eg. https://arxiv.org/abs/1406.1853 and https://arxiv.org/abs/2006.01107. Can these results be applied to the factored MDP family and yield similar regret bounds? It would be necessary to add discussions about these papers, and explain why or why not these general regret bounds can apply to analyze HMDP.
train
[ "vlzUEh81F-2", "YDzC_A1BbrF", "E0ogezbP3kW", "DC1eFm2l6f0", "Td2i6mPbZQK", "JCZBQvA-mLU", "QhemEEZBDGO", "c0kkpuT9kpg", "1zl3w42gLn5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies RL in episodic factored MDPs (FMDPs) in the regret setting. The paper introduces an algorithm called FMDP-BF, which is a model-based algorithm implementing the optimistic principle by maintaining upper and lower confidence bounds derived using empirical Bernstein-type confidence sets.\nThe paper...
[ 7, -1, 6, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_fmtSg8591Q", "DC1eFm2l6f0", "iclr_2021_fmtSg8591Q", "E0ogezbP3kW", "c0kkpuT9kpg", "1zl3w42gLn5", "vlzUEh81F-2", "iclr_2021_fmtSg8591Q", "iclr_2021_fmtSg8591Q" ]
iclr_2021_MJIve1zgR_
Unbiased Teacher for Semi-Supervised Object Detection
Semi-supervised learning, i.e., training networks with both labeled and unlabeled data, has made significant progress recently. However, existing works have primarily focused on image classification tasks and neglected object detection which requires more annotation effort. In this work, we revisit the Semi-Supervised Object Detection (SS-OD) and identify the pseudo-labeling bias issue in SS-OD. To address this, we introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually-beneficial manner. Together with a class-balance loss to downweight overly confident pseudo-labels, Unbiased Teacher consistently improved state-of-the-art methods by significant margins on COCO-standard, COCO-additional, and VOC datasets. Specifically, Unbiased Teacher achieves 6.8 absolute mAP improvements against state-of-the-art method when using 1% of labeled data on MS-COCO, achieves around 10 mAP improvements against the supervised baseline when using only 0.5, 1, 2% of labeled data on MS-COCO.
poster-presentations
This paper proposed a new semi-supervised object detection approach using Unbiased Teacher to jointly address the pseudo-labeling bias and overfitting issues. Significant improvements over SOTA were reported on COCO and VOC. Reviewers agree that the proposed method is simple and effective, and the experimental results are solid and convincing. While the novelty of technical contributions for individual components may not be very significant, the idea is simple and well executed with strong results and good presentation. Overall, the paper is recommended for acceptance (poster).
train
[ "Z4h3F5P6bS", "s_VugpC9XfT", "Zrvn-HSJseE", "vW4wbplw9c", "HIyV_tFTMr3", "Q8_CfNai_qg", "YIWWTBaxacq", "s3DL8OeXEYl", "Tws7Z0U6667", "l-b9NP1HUh6", "QU0_W0jZcmn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "+This paper presents a good work on semi-supervised object detection (SSOD), which is a very challenging task. Although there are great progresses on semi-supervised classification, the SSOD is lying behind. This paper shows very good results over the supervised baselines, even when all annotations are used in COC...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 9, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_MJIve1zgR_", "iclr_2021_MJIve1zgR_", "Z4h3F5P6bS", "Zrvn-HSJseE", "Zrvn-HSJseE", "Tws7Z0U6667", "l-b9NP1HUh6", "s_VugpC9XfT", "QU0_W0jZcmn", "iclr_2021_MJIve1zgR_", "iclr_2021_MJIve1zgR_" ]
iclr_2021_9l0K4OM-oXE
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a training time attack that injects a trigger pattern into a small proportion of training data so as to control the model's prediction at the test time. Backdoor attacks are notably dangerous since they do not affect the model's performance on clean examples, yet can fool the model to make the incorrect prediction whenever the trigger pattern appears during testing. In this paper, we propose a novel defense framework Neural Attention Distillation (NAD) to erase backdoor triggers from backdoored DNNs. NAD utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network. The teacher network can be obtained by an independent finetuning process on the same clean subset. We empirically show, against 6 state-of-the-art backdoor attacks, NAD can effectively erase the backdoor triggers using only 5\% clean training data without causing obvious performance degradation on clean examples. Our code is available at https://github.com/bboylyg/NAD.
poster-presentations
This paper introduces neural attention-distillation; a new scheme for erasing backdoors in a poisoned neural network. The paper performs an empirical evaluation of their proposed method against 6 state-of-the-art backdoor attacks. The authors show that attention-distillation succeeds by using only a small fraction of clean training data without any performance degradation. In addition, the authors have provided ablation studies to clarify the contribution of each component in their proposed approach. Reviewers find the simplicity and effectiveness of the approach an important attribute that may lead this work to have a high impact in the field. The paper is well-written, and all reviewers rate it on the accept side. I concur with their opinions and comments and I recommend accept.
test
[ "DrG7VOy-0v9", "ZKwWt9rZwdn", "N5y1p8nA2_", "07krcUSIKrN", "VgFw7yGKcEL", "JgvA7HhGRlj", "liVtSieNeh9", "8qcHhbfD0Av", "YYJLwNy3xoP", "jQmhfAPos2V", "zn-M4B5BntH", "_az6A3hgIbT", "qZrNVjY95oG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "## Overview \n\nThe paper proposes a simple yet effective approach for purifying a neural network poisoned with backdoor attacks, AKA backdoor erasing. In short, the authors propose a two-step process: 1) fine-tuning the poisoned model on a small portion of clean data, which is a commonly used defense, and 2) trea...
[ 7, -1, 6, -1, -1, 7, 7, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, -1, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_9l0K4OM-oXE", "iclr_2021_9l0K4OM-oXE", "iclr_2021_9l0K4OM-oXE", "qZrNVjY95oG", "zn-M4B5BntH", "iclr_2021_9l0K4OM-oXE", "iclr_2021_9l0K4OM-oXE", "jQmhfAPos2V", "N5y1p8nA2_", "liVtSieNeh9", "JgvA7HhGRlj", "DrG7VOy-0v9", "N5y1p8nA2_" ]
iclr_2021_Wga_hrCa3P3
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation
Recently, sequence-to-sequence (seq2seq) models with the Transformer architecture have achieved remarkable performance on various conditional text generation tasks, such as machine translation. However, most of them are trained with teacher forcing with the ground truth label given at each time step, without being exposed to incorrectly generated tokens during training, which hurts its generalization to unseen inputs, that is known as the "exposure bias" problem. In this work, we propose to solve the conditional text generation problem by contrasting positive pairs with negative pairs, such that the model is exposed to various valid or incorrect perturbations of the inputs, for improved generalization. However, training the model with naïve contrastive learning framework using random non-target sequences as negative examples is suboptimal, since they are easily distinguishable from the correct output, especially so with models pretrained with large text corpora. Also, generating positive examples requires domain-specific augmentation heuristics which may not generalize over diverse domains. To tackle this problem, we propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models. Specifically, we generate negative examples by adding small perturbations to the input sequence to minimize its conditional likelihood, and positive examples by adding large perturbations while enforcing it to have a high conditional likelihood. Such `"hard'' positive and negative pairs generated using our method guides the model to better distinguish correct outputs from incorrect ones. We empirically show that our proposed method significantly improves the generalization of the seq2seq on three text generation tasks --- machine translation, text summarization, and question generation.
poster-presentations
This paper proposes a new method for conditional text generation that uses contrastive learning to mitigate the exposure bias problem in order to improve the performance. Specifically, negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood, while positive examples are generated by adding large perturbations while enforcing it to have a high conditional likelihood. This paper receives 2 reject and 2 accept recommendations, which is a borderline case. The reviewers have raised many useful questions during the review process, while the authors has also done a good job during the rebuttal to address the concerns. After checking the paper and all the discussions, the AC feels that all the major concerns have been solved, such as more clarification in the paper, more results on non-pretrained models, and small-scale human evaluation. On one hand, reviewers found that the proposed method is interesting and novel to a certain extent, the paper is also well written. On the other hand, even after adding all the additional results, the reviewers still feel it is not super-clear that results would extend to better models, as most of the experiments are conducted on T5-small, and the final reported numbers in the paper are far from SOTA. As shown in Table 1 & 2, the AC agrees that the final results are far from SOTA, and the authors should probably also study the incorporation of CLAPS into stronger backbones. On the other hand, the AC also thinks that T5 is already a relatively strong baseline to start with (though it is T5-small), and it may not be necessary to chase SOTA. Under a fair comparison, the AC thinks that the authors have done a good job at demonstrating its improvements over T5-MLE baselines. As a summary, the AC thinks that the authors have done a good job during the rebuttal. On balance, the AC is happy to recommend acceptance of the paper. The authors should add more careful discussions to reflect the reviewers' comments when preparing the camera ready.
train
[ "TUuLieKGGGu", "N8NAY7hz1cS", "Mtb-arBYU9_", "tufl_O3cxnF", "v6XME-kYtG3", "fQXJJ1OOIIc", "o_MRKsX2KYu", "jvgUNPU8fx", "9uDyjfWwkhY", "UAecDXYOCrk", "RAniN_ZeFrI", "dN_9QrybpT", "CY9OMjU06se", "d9-Do-DHn6S", "Q-innmCjHq", "nsiJSBbqBh3", "3ZUtkO-lGLl", "2p_BJUq0ZA", "D4WCfIfgP2C",...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ...
[ "This paper presents a method for conditional text generation tasks that aims to over the \"exposure bias\" problem through contrastive learning where negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood, and positive examples are generated by ad...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_Wga_hrCa3P3", "Mtb-arBYU9_", "N4Lq73J_MU-", "N4Lq73J_MU-", "9uDyjfWwkhY", "o_MRKsX2KYu", "Amc1Z0tVJy8", "N4Lq73J_MU-", "D4WCfIfgP2C", "iclr_2021_Wga_hrCa3P3", "EM-E8jm1By0", "Amc1Z0tVJy8", "TUuLieKGGGu", "N4Lq73J_MU-", "iclr_2021_Wga_hrCa3P3", "iclr_2021_Wga_hrCa3P3", "icl...
iclr_2021_WesiCoRVQ15
When Optimizing f-Divergence is Robust with Label Noise
We show when maximizing a properly defined f-divergence measure with respect to a classifier's predictions and the supervised labels is robust with label noise. Leveraging its variational form, we derive a nice decoupling property for a family of f-divergence measures when label noise presents, where the divergence is shown to be a linear combination of the variational difference defined on the clean distribution and a bias term introduced due to the noise. The above derivation helps us analyze the robustness of different f-divergence functions. With established robustness, this family of f-divergence functions arises as useful metrics for the problem of learning with noisy labels, which do not require the specification of the labels' noise rate. When they are possibly not robust, we propose fixes to make them so. In addition to the analytical results, we present thorough experimental evidence. Our code is available at https://github.com/UCSC-REAL/Robust-f-divergence-measures.
poster-presentations
The paper tackles a very important problem. The formulation of the paper is sound as under lightweight assumptions, the supervised loss follows an f-divergence formulation (see "Information, Divergence and Risk for Binary Experiments" by Reid and Williamson (JMLR 2011), in particular Section 4.7). It would make sense to dig in the loss in the context of label noise; the variational formulation provides an interesting direction along those lines. The rebuttal on the experimental concerns of reviewers is appreciated (Cf authors’ rebuttal summary).
train
[ "M1s2xMj2ZO", "yO5rHb8yOT7", "vCnzSmYvKVM", "m9mLZ0wPT5-", "K9kG4BGZ4J", "0-Nv6GS7wbI", "8q8UPptPrhr", "HyWaV06BPrz", "qJIP8n49ibc", "f7M25IBb8aZ" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers,\n\nWe thank all reviewers for their thoughtful and helpful comments! We have uploaded our revised main paper and appendix. Changes with respect to the previous version are highlighted in light-blue. To summarize, we revise the following in our paper:\n\n1. In our abstract, we mention that the f-di...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "iclr_2021_WesiCoRVQ15", "qJIP8n49ibc", "m9mLZ0wPT5-", "8q8UPptPrhr", "HyWaV06BPrz", "f7M25IBb8aZ", "iclr_2021_WesiCoRVQ15", "iclr_2021_WesiCoRVQ15", "iclr_2021_WesiCoRVQ15", "iclr_2021_WesiCoRVQ15" ]
iclr_2021_VJnrYcnRc6
Conditional Generative Modeling via Learning the Latent Space
Although deep learning has achieved appealing results on several machine learning tasks, most of the models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces, that uses latent variables to model generalizable learning patterns while minimizing a family of regression cost functions. At inference, the latent variables are optimized to find solutions corresponding to multiple output modes. Compared to existing generative solutions, our approach demonstrates faster and more stable convergence, and can learn better representations for downstream tasks. Importantly, it provides a simple generic model that can perform better than highly engineered pipelines tailored using domain expertise on a variety of tasks, while generating diverse outputs. Code available at https://github.com/samgregoost/cGML.
poster-presentations
The paper proposes a model and a training mechanism for multimodal generation. The reviews are generally positive: they praise the generality of the method, the extensive experimental evaluation, and the good empirical results. Overall, no major concerns were raised, and all reviewers recommend acceptance. A couple of concerns remain, in my view: - The method is generally heuristic, and intuitively rather than theoretically motivated. This is compensated of course by the empirical evaluation, which is thorough. - The paper could be better written. The reviewers suggested some minor improvements which were implemented in the updated version, but I believe there is room for further improvement. Due to the above concerns, I consider the rating of reviewer #3 (10: Top 5% of accepted papers, seminal paper) to be unjustifiably high. On balance, however, I'm happy to recommend acceptance. Message to the authors: In the abstract you write: "a simple generic model that can beat highly engineered pipelines". Please be aware that the word "beat" evokes competition, winners and losers, so it's not appropriate in the context of scientific evaluation. Please consider replacing it with something neutral, such as "a simple generic model that can perform better than ...".
train
[ "zL5AHm0azHV", "7Pe3C1zGkIR", "fyz5di4M7hc", "1pOdEcSzrzf", "fO3bjWUwHlf", "h9APquaUgks", "owqPRvyqAr8", "pL7cEXOg1T", "PqvdAIZa-EL" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Quality:\nThe proposed general-purpose framework for modeling CMM spaces is worthwhile and insightful. By using a set of domain-agnostic regression cost functions instead of the adversarial loss, it improves both the stability and eliminates the incompatibility between the adversarial and reconstruction losses, al...
[ 7, -1, -1, -1, -1, -1, 6, 10, 7 ]
[ 3, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2021_VJnrYcnRc6", "PqvdAIZa-EL", "owqPRvyqAr8", "zL5AHm0azHV", "pL7cEXOg1T", "iclr_2021_VJnrYcnRc6", "iclr_2021_VJnrYcnRc6", "iclr_2021_VJnrYcnRc6", "iclr_2021_VJnrYcnRc6" ]
iclr_2021_RovX-uQ1Hua
Text Generation by Learning from Demonstrations
Current approaches to text generation largely rely on autoregressive models and maximum likelihood estimation. This paradigm leads to (i) diverse but low-quality samples due to mismatched learning objective and evaluation metric (likelihood vs. quality) and (ii) exposure bias due to mismatched history distributions (gold vs. model-generated). To alleviate these problems, we frame text generation as an offline reinforcement learning (RL) problem with expert demonstrations (i.e., the reference), where the goal is to maximize quality given model-generated histories. We propose GOLD (generation by off-policy learning from demonstrations): an easy-to-optimize algorithm that learns from the demonstrations by importance weighting. Intuitively, GOLD upweights confident tokens and downweights unconfident ones in the reference during training, avoiding optimization issues faced by prior RL approaches that rely on online data collection. According to both automatic and human evaluation, models trained by GOLD outperform those trained by MLE and policy gradient on summarization, question generation, and machine translation. Further, our models are less sensitive to decoding algorithms and alleviate exposure bias.
poster-presentations
The paper is well-written, it is clear and concise. The idea of learning to generate text from off-policy demonstrations is interesting. The results experimental results are good. The authors seem to address the concerns raised by the authors during the rebuttal.
train
[ "tbZ7bJZMYlu", "KIITg3knKI", "N9N1sqUt7SB", "ujqWUwOINXC", "QJNhOkOZYgr", "sSBdQdiIFlz", "oHQd4MA1yqq", "bcYxzWEq0U", "t_nGw1nMyQ", "ovowdA465Jd", "5QgTGUsiWYN", "V_DM3GVMbL", "CG4NpkwacE", "kOApDRzSBQt", "7_yqI5jkBMJ", "Ip-ZxxLSzen", "A8yjZ_4kvWm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "TEXT GENERATION BY LEARNING FROM OFF-POLICY DEMONSTRATIONS\n\nThe authors propose an `````\"off-policy\" approach to training sequence generation models. The approach is based on using ``\"behaviour policy'' or demonstration state distributions and corrects for the action distribution, through a local importance w...
[ 5, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_RovX-uQ1Hua", "kOApDRzSBQt", "ovowdA465Jd", "QJNhOkOZYgr", "V_DM3GVMbL", "iclr_2021_RovX-uQ1Hua", "5QgTGUsiWYN", "iclr_2021_RovX-uQ1Hua", "tbZ7bJZMYlu", "Ip-ZxxLSzen", "7_yqI5jkBMJ", "sSBdQdiIFlz", "tbZ7bJZMYlu", "A8yjZ_4kvWm", "iclr_2021_RovX-uQ1Hua", "iclr_2021_RovX-uQ1Hua...
iclr_2021__X_4Akcd8Re
Learning Long-term Visual Dynamics with Region Proposal Interaction Networks
Learning long-term dynamics models is the key to understanding physical common sense. Most existing approaches on learning dynamics from visual input sidestep long-term predictions by resorting to rapid re-planning with short-term models. This not only requires such models to be super accurate but also limits them only to tasks where an agent can continuously obtain feedback and take action at each step until completion. In this paper, we aim to leverage the ideas from success stories in visual recognition tasks to build object representations that can capture inter-object and object-environment interactions over a long range. To this end, we propose Region Proposal Interaction Networks (RPIN), which reason about each object's trajectory in a latent region-proposal feature space. Thanks to the simple yet effective object representation, our approach outperforms prior methods by a significant margin both in terms of prediction quality and their ability to plan for downstream tasks, and also generalize well to novel environments. Code, pre-trained models, and more visualization results are available at https://haozhi.io/RPIN.
poster-presentations
This paper was reviewed by four experts in the field. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance to ICLR 2021. The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. The authors are encouraged to make the necessary changes and include the missing references.
train
[ "mFTOEK4cOk", "VwNvezuJSAG", "RG2VbFU4JNh", "HaFZWGjmjb", "V8JRWkrBBK", "VN2tzLTZHN-", "pGE3l8TUYs", "mMfaOsz4F4w", "yZ-YzvjOP93" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for providing the clarifications. I really appreciate your detailed responses. Having read the other reviews and the author's responses, I feel that the paper makes a good contribution in integrating object-centric representations into the prediction process. Overall, I think this is a good paper and would ...
[ -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 2 ]
[ "VwNvezuJSAG", "pGE3l8TUYs", "mMfaOsz4F4w", "yZ-YzvjOP93", "VN2tzLTZHN-", "iclr_2021__X_4Akcd8Re", "iclr_2021__X_4Akcd8Re", "iclr_2021__X_4Akcd8Re", "iclr_2021__X_4Akcd8Re" ]
iclr_2021_xCxXwTzx4L1
ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations
Structured pruning methods are among the effective strategies for extracting small resource-efficient convolutional neural networks from their dense counterparts with minimal loss in accuracy. However, most existing methods still suffer from one or more limitations, that include 1) the need for training the dense model from scratch with pruning-related parameters embedded in the architecture, 2) requiring model-specific hyperparameter settings, 3) inability to include budget-related constraint in the training process, and 4) instability under scenarios of extreme pruning. In this paper, we present ChipNet, a deterministic pruning strategy that employs continuous Heaviside function and a novel crispness loss to identify a highly sparse network out of an existing dense network. Our choice of continuous Heaviside function is inspired by the field of design optimization, where the material distribution task is posed as a continuous optimization problem, but only discrete values (0 or 1) are practically feasible and expected as final outcomes. Our approach's flexible design facilitates its use with different choices of budget constraints while maintaining stability for very low target budgets. Experimental results show that ChipNet outperforms state-of-the-art structured pruning methods by remarkable margins of up to 16.1% in terms of accuracy. Further, we show that the masks obtained with ChipNet are transferable across datasets. For certain cases, it was observed that masks transferred from a model trained on feature-rich teacher dataset provide better performance on the student dataset than those obtained by directly pruning on the student data itself.
poster-presentations
This paper proposed a new method to prune neural networks using a continuous penalty function. All reviewers suggest acceptance (some are on borderline though) as the authors did a good job in the rebuttal phase. AC also could not find any particular reason to reject the paper (in particular, the overall writing is clear) and thinks that this paper is a meaningful addition to ICLR 2021.
train
[ "0Vh050D0u5U", "qX-o0YGpZTl", "BaJ9gWA1S_", "uNbvC7PBaEB", "5gk8gyJH8xA", "IVF0eLHahbQ", "4RSMaFOwgo-", "hzbSswnnPEP", "9rpjjgmN8Z8", "P2_BNQHzgrh", "fG9hsUk20k" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "##########################################################################\n\n\nSummary:\n\nThe paper provides a budget-aware regularizer method to train the network to be pruned. Existing methods based-on the regularizer method suffer from satisfying the user-specified constraints and resort to trial-and-error ap...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_xCxXwTzx4L1", "iclr_2021_xCxXwTzx4L1", "9rpjjgmN8Z8", "iclr_2021_xCxXwTzx4L1", "IVF0eLHahbQ", "qX-o0YGpZTl", "P2_BNQHzgrh", "0Vh050D0u5U", "fG9hsUk20k", "iclr_2021_xCxXwTzx4L1", "iclr_2021_xCxXwTzx4L1" ]
iclr_2021_b7g3_ZMHnT0
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
poster-presentations
The paper's main message is that some existing NLP techniques that claim to improve performance by the use of a knowledge graph may not achieve this improved performance because of the knowledge graph or at least the explanation given may be questionable. This is thought provoking and it will incite the community to think more carefully about the real factors of improved performance. The initial version of the paper was not well written, but the authors improved the writing significantly. The paper includes a thorough empirical evaluation to support the main message. I have read the paper and I believe that this work will be of interest to a diverse audience.
train
[ "GTYGkGa3CNI", "V2S7twGSoM", "Ow0oQrYXFW", "hGVgA7pgtO", "klhIFY2ugbR", "IMBRfDH124", "RLMOu_LlD-V", "uouO_FGGil4", "KDInAOqt025", "f_jpJqJN77K", "NNVVqZl35K7", "tF6wKc1gkld" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents an interesting finding that some of the existing KG-augmented models, such as those for QA and item recommendation, may not actually capture or leverage the semantics in KGs, and their performance improvement cannot be attributed to the usage of additional knowledge. I think this finding is of s...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_b7g3_ZMHnT0", "iclr_2021_b7g3_ZMHnT0", "RLMOu_LlD-V", "GTYGkGa3CNI", "KDInAOqt025", "KDInAOqt025", "NNVVqZl35K7", "iclr_2021_b7g3_ZMHnT0", "V2S7twGSoM", "tF6wKc1gkld", "iclr_2021_b7g3_ZMHnT0", "iclr_2021_b7g3_ZMHnT0" ]
iclr_2021_xzqLpqRzxLq
IEPT: Instance-Level and Episode-Level Pretext Tasks for Few-Shot Learning
The need of collecting large quantities of labeled training data for each new task has limited the usefulness of deep neural networks. Given data from a set of source tasks, this limitation can be overcome using two transfer learning approaches: few-shot learning (FSL) and self-supervised learning (SSL). The former aims to learn `how to learn' by designing learning episodes using source tasks to simulate the challenge of solving the target new task with few labeled samples. In contrast, the latter exploits an annotation-free pretext task across all source tasks in order to learn generalizable feature representations. In this work, we propose a novel Instance-level and Episode-level Pretext Task (IEPT) framework that seamlessly integrates SSL into FSL. Specifically, given an FSL episode, we first apply geometric transformations to each instance to generate extended episodes. At the instance-level, transformation recognition is performed as per standard SSL. Importantly, at the episode-level, two SSL-FSL hybrid learning objectives are devised: (1) The consistency across the predictions of an FSL classifier from different extended episodes is maximized as an episode-level pretext task. (2) The features extracted from each instance across different episodes are integrated to construct a single FSL classifier for meta-learning. Extensive experiments show that our proposed model (i.e., FSL with IEPT) achieves the new state-of-the-art.
poster-presentations
The submission proposes instance-level and episode-level pretext tasks as an unsupervised data augmentation mechanism for few-shot learning. Furthermore, transformer are proposed to integrate features from different images and augmentations. The paper received one clear accept, one accept, one borderline accept and two borderline reject recommendations. The main concerns of the R5 and R2 were weak ablation study and the lack of a clear advantage of the method in terms of results compared to the prior state of the art. In the rebuttal, the authors provided more ablation studies. Similarly, the reviewers were concerned about the novelty of the paper being incremental compared to the prior works. Based on the majority vote, the meta reviewer recommends acceptance.
train
[ "u1sQ-9IPiMY", "WFioOP7iq0e", "kpP9_LEsDu", "ALMJsT4I9Ob", "uI5ucVsxROw", "wQINKeeoM6r", "5Z0rUfeXfwy", "ps8VqECshur", "dXxtsySWXzt", "axe-cTSzKx", "ppmjBbA2u_", "WASybGbsmAL", "OOgQIBg-19" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes both Instance-level and episode-level pretext task. In comparison to existing works (Gidaris et al., 2019; Su et al., 2020), the main novelty is to design the episode-level pretext task, which enforces consistent predictions for images with different rotations. \n\nThe paper is clearly written w...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_xzqLpqRzxLq", "iclr_2021_xzqLpqRzxLq", "OOgQIBg-19", "uI5ucVsxROw", "axe-cTSzKx", "WFioOP7iq0e", "wQINKeeoM6r", "ppmjBbA2u_", "u1sQ-9IPiMY", "WASybGbsmAL", "iclr_2021_xzqLpqRzxLq", "iclr_2021_xzqLpqRzxLq", "iclr_2021_xzqLpqRzxLq" ]
iclr_2021_L7WD8ZdscQ5
The Role of Momentum Parameters in the Optimal Convergence of Adaptive Polyak's Heavy-ball Methods
The adaptive stochastic gradient descent (SGD) with momentum has been widely adopted in deep learning as well as convex optimization. In practice, the last iterate is commonly used as the final solution. However, the available regret analysis and the setting of constant momentum parameters only guarantee the optimal convergence of the averaged solution. In this paper, we fill this theory-practice gap by investigating the convergence of the last iterate (referred to as {\it individual convergence}), which is a more difficult task than convergence analysis of the averaged solution. Specifically, in the constrained convex cases, we prove that the adaptive Polyak's Heavy-ball (HB) method, in which the step size is only updated using the exponential moving average strategy, attains an individual convergence rate of O(1t), as opposed to that of O(log⁡tt) of SGD, where t is the number of iterations. Our new analysis not only shows how the HB momentum and its time-varying weight help us to achieve the acceleration in convex optimization but also gives valuable hints how the momentum parameters should be scheduled in deep learning. Empirical results validate the correctness of our convergence analysis in optimizing convex functions and demonstrate the improved performance of the adaptive HB methods in training deep networks.
poster-presentations
This paper studies the *last iterate* convergence of the projected Heavy-ball method (and an adaptive variant) for convex problems, and propose a specific coefficient schedule. All reviewers thought that looking at the last-iterate convergence of the HB method was interesting and that the proofs, while simple, were interestingly novel. Several concerns were raised by the quality of the writing. Several were addressed in a revision and the rebuttal. While R1 did not update their score, the AC thinks that the rebuttal has addressed appropriately their initial concerns. The AC recommends the paper for acceptance, *but* it is important that the authors make an appropriate careful pass over their paper for the camera ready version. ### comments about the write-up - The paper still contains many typos (e.g. missing $1/t$ term in the average after equation (2); many misspelled words, etc.), please carefully proofread your paper again. - The AC agrees with R1 that the quality of presentation still needs improvement. $\beta_{1t}$ is still used in the introduction without being defined -- please define it properly first e.g. - The word "optimal" and "optimality" is usually misused in the manuscript. To refer to the convergence rate of an optimization algorithm, the standard terminology is to talk about the "suboptimality" or the "error" (e.g. see the terminology used by the cited [Harvey et al. 2019, Jain et al. 2019] papers). For example, one would say that the error or suboptimality of SGD has a $O(1/\sqrt{t})$ convergence rate. Saying "optimality of" or "optimal individual convergence rate" is quite confusing, and should be corrected. The adjective "optimal" (when talking about a convergence rate) should be restricted to when a matching lower bound exists. - Finally, the text introducing the experimental section should be fixed to clarify the actual results and motivation. Specifically, the "validate the correctness of our convergence analysis" only applies in the convex setting. I recommend that a high level description of the convex experiment and the main message of the results is moved from the appendix to the main paper there (there is space). And then, the deep learning experiments can be introduced as just investigating the practical performance of the suggested coefficient schedule for HB.
train
[ "dIYYegEuAi", "RQ1qRKHr2m", "ygRDPbItk5o", "R8eXAJr0cpp", "ZHSMs3fVufn", "zT_dO_17uR", "dMuWteVPYZB", "x2cHXCgddCd", "XWUCRxcKD60", "0E4RZoVI_cl" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors investigate the convergence of the projected Heavy-ball method (and an adaptive variant) for convex problems with convex constraints. The authors prove 4 results: 2 individual (last iterate) convergence rates and 2 rates using averaging. Notably, in their proofs they require an increasing (from 1/2 to ...
[ 6, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_L7WD8ZdscQ5", "dMuWteVPYZB", "iclr_2021_L7WD8ZdscQ5", "XWUCRxcKD60", "x2cHXCgddCd", "0E4RZoVI_cl", "dIYYegEuAi", "iclr_2021_L7WD8ZdscQ5", "iclr_2021_L7WD8ZdscQ5", "iclr_2021_L7WD8ZdscQ5" ]
iclr_2021_dV19Yyi1fS3
Training with Quantization Noise for Extreme Model Compression
We tackle the problem of producing compact models, maximizing their accuracy for a given model size. A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator. In this paper, we extend this approach to work with extreme compression methods where the approximations introduced by STE are severe. Our proposal is to only quantize a different random subset of weights during each forward, allowing for unbiased gradients to flow through the other weights. Controlling the amount of noise and its form allows for extreme compression rates while maintaining the performance of the original model. As a result we establish new state-of-the-art compromises between accuracy and model size both in natural language processing and image classification. For example, applying our method to state-of-the-art Transformer and ConvNet architectures, we can achieve 82.5% accuracy on MNLI by compressing RoBERTa to 14 MB and 80.0% top-1 accuracy on ImageNet by compressing an EfficientNet-B3 to 3.3 MB.
poster-presentations
Quantization is an important practical problem to address. The proposed method which quantizes a different random subset of weights during each forward is simple and interesting. The empirical results on RoBERTa and EfficientNet-B3 are good, in particular, for int4 quantization. During the rebuttal, the authors further included quantization results on ResNet which were suggested by the reviewers. This additional experiment is important for comparing this proposed approach with the existing methods which do not have quantization results on the models in this paper.
test
[ "dxRy7-0fMlW", "UkgjWXRggO5", "nFM4EN2_NcD", "YhXrk5RiBqD", "91qMfcea22q", "QetG1eFlmFU", "KCB4cN6LlH", "zHKXS4gw9-", "9cCz030j9v0", "XtGXMln4whC", "0fiTY9cDs-o" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "re: equations \n- Regarding Equation (2), it performs “fake quantization” in the sense that it successively quantizes and dequantizes the weights. Therefore, the reviewer is right that when successively quantizing and de-quantizing the weights, the zero point or bias $z$ cancels out (which is normal, see [1]). How...
[ -1, -1, -1, -1, -1, -1, 4, 4, 6, 10, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3, 5, 4 ]
[ "zHKXS4gw9-", "iclr_2021_dV19Yyi1fS3", "0fiTY9cDs-o", "XtGXMln4whC", "9cCz030j9v0", "KCB4cN6LlH", "iclr_2021_dV19Yyi1fS3", "iclr_2021_dV19Yyi1fS3", "iclr_2021_dV19Yyi1fS3", "iclr_2021_dV19Yyi1fS3", "iclr_2021_dV19Yyi1fS3" ]
iclr_2021_R0a0kFI3dJx
Adaptive Extra-Gradient Methods for Min-Max Optimization and Games
We present a new family of min-max optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations to perform more informative extra-gradient steps in later ones. Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smooth or not, without requiring any prior tuning by the optimizer. As a result, the algorithm simultaneously achieves order-optimal convergence rates, \ie it converges to an ε-optimal solution within O(1/ε) iterations in smooth problems, and within O(1/ε2) iterations in non-smooth ones. Importantly, these guarantees do not require any of the standard boundedness or Lipschitz continuity conditions that are typically assumed in the literature; in particular, they apply even to problems with singularities (such as resource allocation problems and the like). This adaptation is achieved through the use of a geometric apparatus based on Finsler metrics and a suitably chosen mirror-prox template that allows us to derive sharp convergence rates for the methods at hand.
poster-presentations
The paper introduces a new step size rule for the extragradient/mirror-prox algorithm, building upon and improving the results of Bach & Levy for the deterministic convex-concave setups. The proposed adaptation of EG/Mirror-prox -- dubbed AdaProx in the submitted paper -- has the rate interpolation property, which means that it provides order-optimal rates for both smooth and nonsmooth problems, without any knowledge of the problem class or the problem parameters for the input instance. The paper also demonstrates that the same algorithm can handle certain barrier-based problems, using regularizers based on the Finsler metric. The consensus of the reviews was that the theory presented in the paper is solid and interesting. The main concerns shared by a subset of the reviews were regarding the practical usefulness of the proposed method. In particular, the method exhibits large constants in the convergence bounds and cannot handle stochastic setups. Further, the empirical evidence provided in the paper was deemed insufficient to demonstrate the algorithm's competitiveness on learning problems. If possible, the authors are advised to provide more convincing empirical results in a revised version, or, alternatively, to tone down the claims regarding the practical performance of the method.
test
[ "67sxapIZ80B", "0FzGLr9hkH", "VaNj814m4UV", "y1Zvqx6sGbH", "piJCQl59iMo", "4qoPybZ8IUO", "fmWuykWu5_F", "Tq6KYHmLk-N", "e-VZiVcKB1u", "RSAeE339Fv", "GQv_Ky5FIY", "3iBnqKlt-H8", "ipr1DsizLWK", "k3iqgx5Mjht", "pMUi-93tzte", "auFhGwZG6S9", "7SePegHKcJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks again for the prompt input, we are in turn providing replies below.\n- For your points (2) and (3), taken together: \"[It's] not clear to me that this problem is either smooth or has bounded gradients (if $y$ is unbounded, it does not seem to be)\". Indeed, the lack of bounded gradients and smoothness in th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "0FzGLr9hkH", "VaNj814m4UV", "y1Zvqx6sGbH", "GQv_Ky5FIY", "fmWuykWu5_F", "ipr1DsizLWK", "Tq6KYHmLk-N", "k3iqgx5Mjht", "pMUi-93tzte", "pMUi-93tzte", "7SePegHKcJ", "auFhGwZG6S9", "7SePegHKcJ", "iclr_2021_R0a0kFI3dJx", "iclr_2021_R0a0kFI3dJx", "iclr_2021_R0a0kFI3dJx", "iclr_2021_R0a0kFI...
iclr_2021_NTEz-6wysdb
Distilling Knowledge from Reader to Retriever for Question Answering
The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
poster-presentations
The paper attempts to improve retrieval in open domain question answering systems, which is a very important problem. In this regards, the authors propose to utilize cross-attention scores from a seq2seq reader models as signal for training retrieval systems. This approach overcomes typical low amount of labelled data available for retriever model. The reviewers reached a consensus that the proposed approach are interesting and novel. The proposed approach establish new state-of-the-art performance on three QA datasets, although the improvements over previous methods are marginal. Overall, reviewers agree that the paper will be beneficial to the community and thus I recommend an acceptance to ICLR.
test
[ "9qCTC_TzOEh", "RP5G9e74fux", "BklcmpupAm-", "gn-4RqXoivp", "rJJkGw0gXOt", "kMGp4O_QiKu", "oUZzDTUkOY-", "ihKwhpYZkca", "PWS7IuRysr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Paper Summary:\n* This paper proposes a technique to learn retriever models for question answering that does not require annotated pairs of query and documents. The proposed technique uses attention scores of a reader model to obtain synthetic labels for the retriever. Experimental results with NaturalQuestions,...
[ 7, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_NTEz-6wysdb", "9qCTC_TzOEh", "oUZzDTUkOY-", "RP5G9e74fux", "ihKwhpYZkca", "PWS7IuRysr", "iclr_2021_NTEz-6wysdb", "iclr_2021_NTEz-6wysdb", "iclr_2021_NTEz-6wysdb" ]
iclr_2021_lvRTC669EY_
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization
We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover a set of multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents.
poster-presentations
All the reviewers are in favor of accepting this paper, which demonstrates both theoretically and empirically the value of reward randomization in solving multi-agent reinforcement learning problems. The rebuttal phase was crucial in improving the quality and evaluation of the submission. I am glad to recommend acceptance.
val
[ "EqTICpIKFnt", "4qMNkRZoZWj", "gI3txuFbzLf", "MukRkA9-B2i", "4cPYGwEa7Ua", "xm49AtZikRX", "ukbNNa-7K0g", "bntvoOJ1uH", "xQ4kV94ppgq", "ItIpu6vELjB", "RYpNArOhv8R" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of finding a nash equilibrium in two player games where each of the algorithm runs an RL algorithm. In this paper they ask the question -- which nash equilibria does the dynamics converge to in this two player game (where each player optimizes based on a policy gradient algorithm)....
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_lvRTC669EY_", "iclr_2021_lvRTC669EY_", "MukRkA9-B2i", "ukbNNa-7K0g", "iclr_2021_lvRTC669EY_", "EqTICpIKFnt", "4qMNkRZoZWj", "ItIpu6vELjB", "RYpNArOhv8R", "iclr_2021_lvRTC669EY_", "iclr_2021_lvRTC669EY_" ]
iclr_2021_tu29GQT0JFy
not-MIWAE: Deep Generative Modelling with Missing not at Random Data
When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account while doing likelihood-based inference. We present an approach for building and fitting deep latent variable models (DLVMs) in cases where the missing process is dependent on the missing data. Specifically, a deep neural network enables us to flexibly model the conditional distribution of the missingness pattern given the data. This allows for incorporating prior information about the type of missingness (e.g.~self-censoring) into the model. Our inference technique, based on importance-weighted variational inference, involves maximising a lower bound of the joint likelihood. Stochastic gradients of the bound are obtained by using the reparameterisation trick both in latent space and data space. We show on various kinds of data sets and missingness patterns that explicitly modelling the missing process can be invaluable.
poster-presentations
All the reviewers highlight that the paper addresses the important issue of extending deep latent variable models to handle missing non at random data, which are known to be very difficult. The authors suggest modeling the mechanism of missing values and perform inference using amortized importance weighted variational inference and demonstrate the capacities of their approach on many experiments. The paper highlight the trade-off between the complexity of the data model and that of the missing data mechanism. The authors appropriately answer reviewers comments, add new experiments varying the percentage of missing values, and give more details on the methodological part. I also think that this is a valuable contribution to the community, that the literature is well covered (the historical statistical litterature and the ML one), and that it provides new insights and methods to tackle this difficult problem.
test
[ "N9k_aTGl3Kb", "TJx0oeJU_Nf", "dfMo2T9sTE1", "vfMBHMigbEi", "I2ixncFJstn", "hKvczOZEed", "Hrds5skluOz", "xLp5A3Mxfxe", "db7Qm2K2Rim", "zNe9pk8uoTH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "thanks for your clarifications, especially that one reference by Ivanov was helpful I wasn't aware of that. \n\nIt makes sense to leave out GAIN or GAN approaches in the comparisons, given those results.\n\nAlso, thanks for the additional experiments!\n\n", "Many thanks for your comments and assessment of our pa...
[ -1, -1, -1, -1, -1, -1, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "TJx0oeJU_Nf", "xLp5A3Mxfxe", "db7Qm2K2Rim", "zNe9pk8uoTH", "iclr_2021_tu29GQT0JFy", "Hrds5skluOz", "iclr_2021_tu29GQT0JFy", "iclr_2021_tu29GQT0JFy", "iclr_2021_tu29GQT0JFy", "iclr_2021_tu29GQT0JFy" ]
iclr_2021_MBOyiNnYthd
IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression
In this paper we analyse and improve integer discrete flows for lossless compression. Integer discrete flows are a recently proposed class of models that learn invertible transformations for integer-valued random variables. Their discrete nature makes them particularly suitable for lossless compression with entropy coding schemes. We start by investigating a recent theoretical claim that states that invertible flows for discrete random variables are less flexible than their continuous counterparts. We demonstrate with a proof that this claim does not hold for integer discrete flows due to the embedding of data with finite support into the countably infinite integer lattice. Furthermore, we zoom in on the effect of gradient bias due to the straight-through estimator in integer discrete flows, and demonstrate that its influence is highly dependent on architecture choices and less prominent than previously thought. Finally, we show how different architecture modifications improve the performance of this model class for lossless compression, and that they also enable more efficient compression: a model with half the number of flow layers performs on par with or better than the original integer discrete flow model.
poster-presentations
The reviewers of this paper unanimously agreed that this paper adds an interesting theoretical and practical discussion to discrete flows. The paper has improved from the first version to the final one, in which the comments and suggestions by the reviewers have been followed. The paper is still incremental with respect to the previous paper and the reviewers all recommended a poster presentation.
train
[ "aJzsnzOfdK5", "YvF3FRo0fWW", "E7xnBhnln1T", "YmY5jVvVBDw", "Dzesfq5ZFqt", "NsRPK7lkSb1", "Pf_IpAc7df4", "6F95CTuToka", "qk9DndA0rO7", "DhL--EEwreq", "hy8TPHs0QYS", "n_t8hz0odAY", "LX46oqrWguZ", "3INK3e2RH3n", "MkVf6c-ytBW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims to analyze and improve IDFs for lossless compression. The authors claim is mainly three following points:\n1. IDFs treat the random variable as integers with a countably infinite number of classes, so it does not have a limited factorization capacity. \n2. Hoogeboom et al. (2019) demonstrated that ...
[ 7, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_MBOyiNnYthd", "Dzesfq5ZFqt", "hy8TPHs0QYS", "iclr_2021_MBOyiNnYthd", "LX46oqrWguZ", "qk9DndA0rO7", "DhL--EEwreq", "iclr_2021_MBOyiNnYthd", "3INK3e2RH3n", "MkVf6c-ytBW", "n_t8hz0odAY", "YmY5jVvVBDw", "aJzsnzOfdK5", "iclr_2021_MBOyiNnYthd", "iclr_2021_MBOyiNnYthd" ]
iclr_2021_unI5ucw_Jk
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning
Understanding human behavior from observed data is critical for transparency and accountability in decision-making. Consider real-world settings such as healthcare, in which modeling a decision-maker’s policy is challenging—with no access to underlying states, no knowledge of environment dynamics, and no allowance for live experimentation. We desire learning a data-driven representation of decision- making behavior that (1) inheres transparency by design, (2) accommodates partial observability, and (3) operates completely offline. To satisfy these key criteria, we propose a novel model-based Bayesian method for interpretable policy learning (“Interpole”) that jointly estimates an agent’s (possibly biased) belief-update process together with their (possibly suboptimal) belief-action mapping. Through experiments on both simulated and real-world data for the problem of Alzheimer’s disease diagnosis, we illustrate the potential of our approach as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
poster-presentations
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning The topic is maximally timely and important: Understanding human decision-making behaviour based on observational data. Any tangible steps towards this challenging goal are bound to be significant, and those this paper makes. A Bayesian policy-learning method is introduced for this task, and validated on both simulated data and user exeperiments in a real decision-making task. The novel contribution is on learning interpretable decision dynamics The paper is written clearly enough.. The updated paper clarified most major concerns the reviewers had. In particular, they added a user study. The biggest remaining weaknesses are that - relationship to the AMM model did not become completely clear yet - the real user study has been carried out with only a small set of users. But a large-cohort study would be too much work to ask for a paper which has also a strong methodological contribution.
train
[ "KfgMuSHoye", "I_82zwwAbX0", "jS1iRcCGXD0", "pVCCba3a3sB", "LLZpdtTtvv", "2RHfaOlFx34", "l7DyEFQDtSG", "aHLiI0FA0ds", "t3z9BeUExv", "W_uJC40OA7y", "0pt1FXAXbZ6", "C0t7b9ql_V", "4TrvsYEO-fM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis work proposes an approach for understanding and explaining decision-making behavior. The authors aim to make the method 1) transparent, 2) able to handle partial observability, and 3) work with offline data. To do this, they develop INTERPOLE, which uses Bayesian techniques to estimate decision dyna...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_unI5ucw_Jk", "0pt1FXAXbZ6", "C0t7b9ql_V", "4TrvsYEO-fM", "KfgMuSHoye", "C0t7b9ql_V", "4TrvsYEO-fM", "4TrvsYEO-fM", "4TrvsYEO-fM", "KfgMuSHoye", "KfgMuSHoye", "iclr_2021_unI5ucw_Jk", "iclr_2021_unI5ucw_Jk" ]
iclr_2021_ETBc_MIMgoX
Learning with AMIGo: Adversarially Motivated Intrinsic Goals
A key challenge for reinforcement learning (RL) consists of learning in environments with sparse extrinsic rewards. In contrast to current RL methods, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. We propose AMIGo, a novel agent incorporating -- as form of meta-learning -- a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals to train a goal-conditioned "student" policy in the absence of (or alongside) environment reward. Specifically, through a simple but effective "constructively adversarial" objective, the teacher learns to propose increasingly challenging -- yet achievable -- goals that allow the student to learn general skills for acting in a new environment, independent of the task to be solved. We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally-generated tasks where other forms of intrinsic motivation and state-of-the-art RL methods fail.
poster-presentations
This paper was reviewed by four experts in the field. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance to ICLR 2021. The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. The authors are encouraged to make the necessary changes and include the missing references.
train
[ "dubUuHZkLz", "h1x4I1Ckev", "8d337YwqpEx", "52uBkiZaa1A", "JkExO9Emgvj", "l7NiqxKePTX", "_59tuNHdpk", "MKemtPRW_Pj", "E4MKpM-VbjC", "06CvNz6w1w0", "JMde7SZ75tT", "iQReL0AxncU", "Kc-VxtHOUm6", "F8a9cb5IzG", "rg4BGD7sYo", "OyUS1iBH7B", "4dnHW4XDd_M", "0H9w0X0FfK", "ty9UTnqkG6", "...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ "Thank you for your detailed responses. I think I'm satisfied now, and I've updated my score to reflect that.", "The authors introduce AMIGo, an approach to curiosity in which an adversarial \"teacher\" agent proposes goals that the \"student\" agent attempts to achieve. The student obtains an intrinsic reward of...
[ -1, 7, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "8d337YwqpEx", "iclr_2021_ETBc_MIMgoX", "JkExO9Emgvj", "JkExO9Emgvj", "_59tuNHdpk", "iclr_2021_ETBc_MIMgoX", "MKemtPRW_Pj", "Kc-VxtHOUm6", "iQReL0AxncU", "iQReL0AxncU", "iclr_2021_ETBc_MIMgoX", "0H9w0X0FfK", "rg4BGD7sYo", "rg4BGD7sYo", "4dnHW4XDd_M", "h1x4I1Ckev", "OyUS1iBH7B", "l7...
iclr_2021_wta_8Hx2KD
Incorporating Symmetry into Deep Dynamics Models for Improved Generalization
Recent work has shown deep learning can accelerate the prediction of physical dynamics relative to numerical solvers. However, limited physical accuracy and an inability to generalize under distributional shift limit its applicability to the real world. We propose to improve accuracy and generalization by incorporating symmetries into convolutional neural networks. Specifically, we employ a variety of methods each tailored to enforce a different symmetry. Our models are both theoretically and experimentally robust to distributional shift by symmetry group transformations and enjoy favorable sample complexity. We demonstrate the advantage of our approach on a variety of physical dynamics including Rayleigh–Bénard convection and real-world ocean currents and temperatures. Compare with image or text applications, our work is a significant step towards applying equivariant neural networks to high-dimensional systems with complex dynamics.
poster-presentations
Symmetries play an important role in physics, and more and more papers show that they also play an important role in statistical machine learning. In particular, employing symmetries might be the key to improve training and predictive performance of machine learning models. In this context, the present paper shows how previous physical knowledge can be leveraged to improve neural network performance, in particular within Deep dynamic models. To this end, they show how to incorporate equivariance into resnets and u-nets for dynamical systems. On a technical level, as pointed out by the reviews and also clearly mentioned by the authors, the basic building blocks are well known in the literature. However, dynamical systems also raises their own challenges resp. laws when it comes to modelling symmetries, as the authors argue in the paper and also clarified in the rebuttal. For instance, it pays off to adapt the techniques known from the literature deal better with scale, magnitude and uniform motion equivariance. This is a solid contributions and will help many other who want to apply DNNs to dynamic and physical models.
train
[ "KYWZwtyqlF5", "8sNP_96dp2", "HRUmXimmCfy", "ABirmcQ-RaD", "jMu6PdPCoJa", "FpqQRPD2i_4", "DCeeIHIaUIp", "hGr2C1zFLoJ", "q0QgqK9DRKz", "MaMIj8u1o4E", "KR5ki_O3S2X", "lO8ZHqy_NFp", "ThYQvD25BG-" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper demonstrates that incorporating equivariance (i.e. symmetries) into model for predicting fluid dynamics improves its performance, especially when the test distribution is transformed by those symmetry groups. Leveraging the recent literature on equivariant CNNs, the paper proposes a CNN model th...
[ 7, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 6, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 2, 2 ]
[ "iclr_2021_wta_8Hx2KD", "HRUmXimmCfy", "q0QgqK9DRKz", "jMu6PdPCoJa", "KYWZwtyqlF5", "lO8ZHqy_NFp", "ThYQvD25BG-", "DCeeIHIaUIp", "iclr_2021_wta_8Hx2KD", "KR5ki_O3S2X", "q0QgqK9DRKz", "iclr_2021_wta_8Hx2KD", "iclr_2021_wta_8Hx2KD" ]
iclr_2021_h2EbJ4_wMVq
CaPC Learning: Confidential and Private Collaborative Learning
Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other's data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multi-party computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.
poster-presentations
This work describes a system for collaborative learning in which several agents holding data want to improve their models by asking other agents to label their points. The system preserves confidentiality of queries using MPC and also throws in differentially private aggregation of labels (taken from the PATE framework). It provides expriments showing computational feasibility of the system. The techniques use active learning to improve the models. Overall the ingredients are fairly standard but are put together in a new (to the best of my , admittedly limited, knowledge of this area). This seems like a solid attempt to explore approaches for learning in a federated setting with strong limitations on data sharing.
train
[ "kEyCnUFpg9k", "yGDQx-JWW3a", "QS-CxVM6Npv", "WcDh1RGvLW3", "kTbK9Wf75W", "wgveyYLPxk", "kBZmdWip1j9" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback, we uploaded a new PDF that discusses the papers mentioned in your review. \n\nWe find the InstaHide work interesting. We now clarify in our paper that we achieve confidentiality at test time. Instead, InstaHide modifies the model during training and is thus orthogonal to our approach. ...
[ -1, 7, -1, -1, -1, 7, 7 ]
[ -1, 4, -1, -1, -1, 4, 4 ]
[ "wgveyYLPxk", "iclr_2021_h2EbJ4_wMVq", "kTbK9Wf75W", "yGDQx-JWW3a", "kBZmdWip1j9", "iclr_2021_h2EbJ4_wMVq", "iclr_2021_h2EbJ4_wMVq" ]
iclr_2021_UwGY2qjqoLD
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds
In the present work we study classifiers' decision boundaries via Brownian motion processes in ambient data space and associated probabilistic techniques. Intuitively, our ideas correspond to placing a heat source at the decision boundary and observing how effectively the sample points warm up. We are largely motivated by the search for a soft measure that sheds further light on the decision boundary's geometry. En route, we bridge aspects of potential theory and geometric analysis (Maz'ya 2011, Grigor'Yan and Saloff-Coste 2002) with active fields of ML research such as adversarial examples and generalization bounds. First, we focus on the geometric behavior of decision boundaries in the light of adversarial attack/defense mechanisms. Experimentally, we observe a certain capacitory trend over different adversarial defense strategies: decision boundaries locally become flatter as measured by isoperimetric inequalities (Ford et al 2019); however, our more sensitive heat-diffusion metrics extend this analysis and further reveal that some non-trivial geometry invisible to plain distance-based methods is still preserved. Intuitively, we provide evidence that the decision boundaries nevertheless retain many persistent "wiggly and fuzzy" regions on a finer scale. Second, we show how Brownian hitting probabilities translate to soft generalization bounds which are in turn connected to compression and noise stability (Arora et al 2018), and these bounds are significantly stronger if the decision boundary has controlled geometric features.
poster-presentations
Four reviewers have reviewed this paper and after rebuttal, they were overall positive about the proposed idea. We congratulate authors on the paper.
train
[ "nHKnaRqGBc1", "QyKpVXwzhqT", "vWPSVn8OynI", "zuEfSIiEK4z", "ltrXxyFbKk", "9L0ezHMtPHY", "0OTOaJaKInE", "g50cLA0o0yL", "tumesXDIXWw", "VTc2Q458BNX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The contributions of the paper center on i) the introduction of diffusion-related tools for studying classifier decision boundaries; and ii) using those tools to connect decision boundary geometry, adversarial robustness, and generalization. The paper provides an analysis that provides insight into how curvature a...
[ 7, -1, -1, -1, -1, -1, -1, 6, 8, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_UwGY2qjqoLD", "iclr_2021_UwGY2qjqoLD", "g50cLA0o0yL", "tumesXDIXWw", "VTc2Q458BNX", "iclr_2021_UwGY2qjqoLD", "nHKnaRqGBc1", "iclr_2021_UwGY2qjqoLD", "iclr_2021_UwGY2qjqoLD", "iclr_2021_UwGY2qjqoLD" ]
iclr_2021_TR-Nj6nFx42
A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
In this paper, we derive generalization bounds for two primary classes of graph neural networks (GNNs), namely graph convolutional networks (GCNs) and message passing GNNs (MPGNNs), via a PAC-Bayesian approach. Our result reveals that the maximum node degree and the spectral norm of the weights govern the generalization bounds of both models. We also show that our bound for GCNs is a natural generalization of the results developed in \citep{neyshabur2017pac} for fully-connected and convolutional neural networks. For MPGNNs, our PAC-Bayes bound improves over the Rademacher complexity based bound \citep{garg2020generalization}, showing a tighter dependency on the maximum node degree and the maximum hidden dimension. The key ingredients of our proofs are a perturbation analysis of GNNs and the generalization of PAC-Bayes analysis to non-homogeneous GNNs. We perform an empirical study on several synthetic and real-world graph datasets and verify that our PAC-Bayes bound is tighter than others.
poster-presentations
This paper gives a new PAC-Bayesian generalization error bound for graph neural networks (GCN and MPGNN). The bound improves the previously known Rademacher complexity based bound given by Garg et al. (2020). In particular, its dependency on the maximum node degree and the maximum hidden dimension is improved. This paper gives an interesting improvement on the generalization analysis of GNNs. The writing is clear, where its connection to existing work and its technical contribution are well discussed. The biggest concern is its technical novelty. Indeed, the proof follows the out-line of Neyshabur et al. (2017). Given that the technical novelty would be a bit limited, however, the analysis should properly deal with the complicated structure specific to GNNs which makes the analysis more difficult than usual CNN/MLP and requires subtle and careful manipulations. In addition to that, the improvement of the generalization bound is valuable for the literature (while the improvement seems a bit minor for graphs with small maximum degree). For these reasons, I recommend acceptance for this paper.
train
[ "dei6AaGk6v0", "tsRI8PaqCkV", "GVl5Vu6YTJx", "1mF8v_UI-TN", "BWnmVrJyjKn", "C_l9bTpUbFm", "o_xgnWjtSX6", "G8xFKZN3M", "jLvKR85QTD", "ODoOHfashBE" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the valuable and constructive feedback. We respond to the individual questions as below.\n\n\nQ1: The proof techniques in this paper are mainly from Neyshabur et al., 2017. The theoretical contribution and novelty are limited. \n\n\n> A1: Please refer to A1 in the common response.\n\n\nQ2...
[ -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 2, 4 ]
[ "o_xgnWjtSX6", "jLvKR85QTD", "ODoOHfashBE", "G8xFKZN3M", "iclr_2021_TR-Nj6nFx42", "iclr_2021_TR-Nj6nFx42", "iclr_2021_TR-Nj6nFx42", "iclr_2021_TR-Nj6nFx42", "iclr_2021_TR-Nj6nFx42", "iclr_2021_TR-Nj6nFx42" ]
iclr_2021_xnC8YwKUE3k
Clairvoyance: A Pipeline Toolkit for Medical Time Series
Time-series learning is the bread and butter of data-driven *clinical decision support*, and the recent explosion in ML research has demonstrated great potential in various healthcare settings. At the same time, medical time-series problems in the wild are challenging due to their highly *composite* nature: They entail design choices and interactions among components that preprocess data, impute missing values, select features, issue predictions, estimate uncertainty, and interpret models. Despite exponential growth in electronic patient data, there is a remarkable gap between the potential and realized utilization of ML for clinical research and decision support. In particular, orchestrating a real-world project lifecycle poses challenges in engineering (i.e. hard to build), evaluation (i.e. hard to assess), and efficiency (i.e. hard to optimize). Designed to address these issues simultaneously, Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a (i) software toolkit, (ii) empirical standard, and (iii) interface for optimization. Our ultimate goal lies in facilitating transparent and reproducible experimentation with complex inference workflows, providing integrated pathways for (1) personalized prediction, (2) treatment-effect estimation, and (3) information acquisition. Through illustrative examples on real-world data in outpatient, general wards, and intensive-care settings, we illustrate the applicability of the pipeline paradigm on core tasks in the healthcare journey. To the best of our knowledge, Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
poster-presentations
The paper addresses a pressing problem for applications involving clinical time series and introduce a pipeline that handle many of the issues pertaining to data preprocessing. An important contribution is the software that makes the processing more seamless, which will, without a doubt, be useful to the community given the need for reproducibility. The authors have responded suitably to reviewer comments with the main 'leftover criticism' being that such a paper may not be the best fit for ICLR. This isn't a typical paper. However, something that introduces this level of automation and flexibility in handling time series has not been presented at this conference (or other ML conferences) to the best of my knowledge. It seems it could work in conjunction (as opposed to competing) with any new time series models/techniques that may be introduced.
train
[ "-cKlp1jlinV", "3trvxEJ7swX", "X6wc3TWOQPH", "deN1sFyiGu9", "P5r4dZCu_T", "rIRDVgsxucG", "LII0mVi0wVs", "jpnCPZU_5-", "R3Usw-zyE8", "10PnH69B1xQ", "Au_rk4aSJBp", "zBVVpWWs4w", "GTCw7Fw0jpY", "hUCBGgd3vys", "mo14K5ls5k", "mEzB2nPGJfp", "OHANJpZrB36", "PUWwJHc3f_n", "oUGNcScR2n-", ...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "offici...
[ "---\n\nWe are sincerely grateful for your time and energy in the review process.\n\nIn light of our responses (Nov 19) and revisions (Nov 19), we would appreciate if the reviewer kindly let us know of any leftover concerns in the very limited time remaining. We would be happy to do our utmost to address them.\n\nT...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 2 ]
[ "RvLeelbIlfT", "mC5gsJsIW4o", "EmxX_YVHkCJ", "iclr_2021_xnC8YwKUE3k", "deN1sFyiGu9", "mC5gsJsIW4o", "mC5gsJsIW4o", "QAIL2GPu7tL", "RvLeelbIlfT", "RvLeelbIlfT", "mC5gsJsIW4o", "mC5gsJsIW4o", "RvLeelbIlfT", "RvLeelbIlfT", "RvLeelbIlfT", "EmxX_YVHkCJ", "EmxX_YVHkCJ", "EmxX_YVHkCJ", ...
iclr_2021_068E_JSq9O
Self-supervised Representation Learning with Relative Predictive Coding
This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance.
poster-presentations
The paper presents new contrastive based self-supervised objective based on Chi squared divergence that helps with mini batch sensitivity, training stability and improved downstream performance. An accept.
val
[ "uxG_1fi-7Y6", "EEPVege3mj4", "FaKxjH5ZSIt", "BxNElSXIQD", "vSRENrJMuo", "8tS8lxEp3MV", "MP5lKW3cVo", "JsD7JOsYnYu", "s6BcvfYM79j", "1sN2gUI1jnw", "EfU1QGmi2qs" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new contrastive representation objective that has good training stability, minibatch size sensitivity, and downstream task performance. This objective is a generalization of Chi-square divergence, the optimal solution is the density ratio of joint distribution and product of marginal distribu...
[ 6, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_068E_JSq9O", "FaKxjH5ZSIt", "JsD7JOsYnYu", "s6BcvfYM79j", "EfU1QGmi2qs", "iclr_2021_068E_JSq9O", "1sN2gUI1jnw", "uxG_1fi-7Y6", "iclr_2021_068E_JSq9O", "iclr_2021_068E_JSq9O", "iclr_2021_068E_JSq9O" ]
iclr_2021_FmMKSO4e8JK
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
In this work we consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points. This problem setting emerges in many domains where function evaluation is a complex and expensive process, such as in the design of materials, vehicles, or neural network architectures. Because the available data typically only covers a small manifold of the possible space of inputs, a principal challenge is to be able to construct algorithms that can reason about uncertainty and out-of-distribution values, since a naive optimizer can easily exploit an estimated model to return adversarial inputs. We propose to tackle the MBO problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs. While in the standard formulation NML is intractable, we propose a tractable approximation that allows us to scale our method to high-capacity neural network models. We demonstrate that our method can effectively optimize high-dimensional design problems in a variety of disciplines such as chemistry, biology, and materials engineering.
poster-presentations
This work proposes a model-based optimization using an approximated normalized maximum likelihood (NML). It is an interesting idea and has the advantage of scaling to large datasets. The reviewers are generally positive and are satisfied with authors' response.
train
[ "zubXksL0NPx", "fpGZAusvPdJ", "TBhMYJxLjnG", "J5wHCifLcmX", "TzK0uFSFu7i", "kXngjR2X2sk", "5ckGtAKKEjM", "-DQb-BnTgsd", "yKO8leLOtal", "ywD5v8EmqXc", "ptAfxVd8fsD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "Updated review\n----\n----\n\n# Summary\n\nThis work proposes an approach for model-based optimization based on learning a density function through an approximation of the normalized maximum likelihood (NML). This is done by discretizing the space and fitting distinct model parameters for each value. To lower the ...
[ 8, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_FmMKSO4e8JK", "iclr_2021_FmMKSO4e8JK", "iclr_2021_FmMKSO4e8JK", "yKO8leLOtal", "ptAfxVd8fsD", "5ckGtAKKEjM", "-DQb-BnTgsd", "ywD5v8EmqXc", "fpGZAusvPdJ", "zubXksL0NPx", "TBhMYJxLjnG" ]
iclr_2021_NQbnPjPYaG6
On the Impossibility of Global Convergence in Multi-Loss Optimization
Under mild regularity conditions, gradient-based methods converge globally to a critical point in the single-loss setting. This is known to break down for vanilla gradient descent when moving to multi-loss optimization, but can we hope to build some algorithm with global guarantees? We negatively resolve this open problem by proving that desirable convergence properties cannot simultaneously hold for any algorithm. Our result has more to do with the existence of games with no satisfactory outcomes, than with algorithms per se. More explicitly we construct a two-player game with zero-sum interactions whose losses are both coercive and analytic, but whose only simultaneous critical point is a strict maximum. Any 'reasonable' algorithm, defined to avoid strict maxima, will therefore fail to converge. This is fundamentally different from single losses, where coercivity implies existence of a global minimum. Moreover, we prove that a wide range of existing gradient-based methods almost surely have bounded but non-convergent iterates in a constructed zero-sum game for suitably small learning rates. It nonetheless remains an open question whether such behavior can arise in high-dimensional games of interest to ML practitioners, such as GANs or multi-agent RL.
poster-presentations
This paper presents a series of negative results regarding the convergence of deterministic, "reasonable" algorithms in min-max games. The defining characteristic of such algorithms is that (a) the algorithm's fixed points are critical points of the game; and (b) they avoid strict maxima from almost any initialization. The authors then construct a range of simple $2$-dimensional "market games" in which every reasonable algorithm fails to converge, from almost any initialization. The paper received three positive recommendations and one negative, with all reviewers indicating high confidence. After my own reading of the paper, I concur with the majority view that the paper's message is an interesting one for the community and will likely attract interest in ICLR. In more detail, I view the authors' result as a cautionary tale, not unlike the NeurIPS 2019 spotlight paper of Vlatakis-Gkaragkounis et al, and a concurrent arxiv preprint by Hsieh et al. (2020). In contrast to the type of cycling/recurrence phenomena that are well-documented in bilinear games (and which can be resolved through the use of extra-gradient methods), the non-convergence phenomena described by the authors of this paper appear to be considerably more resilient, as they apply to all "reasonable" algorithms. Determining whether GANs (or other practical applications of min-max optimization) can exhibit such phenomena is an important open question, and one which needs to be informed by a deeper understanding of the theory. I find this paper successful in this regard and I am happy to recommend acceptance.
test
[ "2Y03NADFE0g", "44ORizNn3p", "O9uYqa4T4rd", "Hp-RusoLmR8", "20yrgHochKG", "7tP8Ix3bbus", "w01IQE1Upfw", "DKr1SDLtYi", "9vRKMxOqyJn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the authors' effort of trying to address my comments. However, my concerns still persist and have not been fully addressed. One of the biggest concerns is the practical use of the results in the paper. All the majors results are built upon specifically designed losses and looks like no one are using the...
[ -1, 6, -1, -1, -1, -1, 8, 7, 4 ]
[ -1, 4, -1, -1, -1, -1, 5, 4, 4 ]
[ "O9uYqa4T4rd", "iclr_2021_NQbnPjPYaG6", "9vRKMxOqyJn", "DKr1SDLtYi", "44ORizNn3p", "w01IQE1Upfw", "iclr_2021_NQbnPjPYaG6", "iclr_2021_NQbnPjPYaG6", "iclr_2021_NQbnPjPYaG6" ]
iclr_2021_6zaTwpNSsQ2
A Block Minifloat Representation for Training Deep Neural Networks
Training Deep Neural Networks (DNN) with high efficiency can be difficult to achieve with native floating-point representations and commercially available hardware. Specialized arithmetic with custom acceleration offers perhaps the most promising alternative. Ongoing research is trending towards narrow floating-point representations, called minifloats, that pack more operations for a given silicon area and consume less power. In this paper, we introduce Block Minifloat (BM), a new spectrum of minifloat formats capable of training DNNs end-to-end with only 4-8 bit weight, activation and gradient tensors. While standard floating-point representations have two degrees of freedom, via the exponent and mantissa, BM exposes the exponent bias as an additional field for optimization. Crucially, this enables training with fewer exponent bits, yielding dense integer-like hardware for fused multiply-add (FMA) operations. For ResNet trained on ImageNet, 6-bit BM achieves almost no degradation in floating-point accuracy with FMA units that are 4.1×(23.9×) smaller and consume 2.3×(16.1×) less energy than FP8 (FP32). Furthermore, our 8-bit BM format matches floating-point accuracy while delivering a higher computational density and faster expected training times.
poster-presentations
This paper proposes a new approach to training networks with low precision called Block Minifloat. The reviewers found the paper well written and found that the empirical results were sufficient. In particular, they found the hardware implementation was a strong contribution. Furthermore, the rebuttal properly addressed the comments of the reviewer.
val
[ "Wgf1zfl6SqZ", "g4Jir8CZZ6a", "UsFf72AJDl", "-5UXZW608wZ", "KicWDF7omYC", "pYHwUaesTTd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a family of numerical representations for training neural networks based on minifloats that share a common exponent across blocks. The authors perform a lot of software simulations to explore the design space on many different models and tasks. Hardware designs have also been synthesized and re...
[ 7, 7, -1, -1, -1, 6 ]
[ 3, 5, -1, -1, -1, 4 ]
[ "iclr_2021_6zaTwpNSsQ2", "iclr_2021_6zaTwpNSsQ2", "Wgf1zfl6SqZ", "g4Jir8CZZ6a", "pYHwUaesTTd", "iclr_2021_6zaTwpNSsQ2" ]
iclr_2021_8nl0k08uMi
Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
The properties of individual neurons are often analyzed in order to understand the biological and artificial neural networks in which they're embedded. Class selectivity—typically defined as how different a neuron's responses are across different classes of stimuli or data samples—is commonly used for this purpose. However, it remains an open question whether it is necessary and/or sufficient for deep neural networks (DNNs) to learn class selectivity in individual units. We investigated the causal impact of class selectivity on network function by directly regularizing for or against class selectivity. Using this regularizer to reduce class selectivity across units in convolutional neural networks increased test accuracy by over 2% in ResNet18 and 1% in ResNet50 trained on Tiny ImageNet. For ResNet20 trained on CIFAR10 we could reduce class selectivity by a factor of 2.5 with no impact on test accuracy, and reduce it nearly to zero with only a small (~2%) drop in test accuracy. In contrast, regularizing to increase class selectivity significantly decreased test accuracy across all models and datasets. These results indicate that class selectivity in individual units is neither sufficient nor strictly necessary, and can even impair DNN performance. They also encourage caution when focusing on the properties of single units as representative of the mechanisms by which DNNs function.
poster-presentations
This paper has received three positive reviews. In general, the reviewers have commented on the importance of the question related to how much selectivity is needed from units of a neural network for good classification -- from both the neuroscience and ML perspectives. The reviewers also commented on the thoroughness of the experiments and the general readability of the paper. This paper should be accepted if possible.
train
[ "01prQ1ivwci", "WLoTb1nq30E", "NfGMzcxMtYJ", "6RiDy2YPDc", "O1qtxczyvz", "eWHzz_EO4IJ", "lJzjvy58un", "MVHs42nL9I2", "EksZGg6gi7", "OLb5GeiCcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper examines the impact of forcing units in a CNN to be more or less “class-selective” – i.e. respond preferentially to one image class compared to another. The approach taken is to include a regularizer in the loss that directly penalizes or encourages class selectivity in individual units. They report th...
[ 6, -1, 7, 6, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_8nl0k08uMi", "MVHs42nL9I2", "iclr_2021_8nl0k08uMi", "iclr_2021_8nl0k08uMi", "6RiDy2YPDc", "01prQ1ivwci", "eWHzz_EO4IJ", "EksZGg6gi7", "NfGMzcxMtYJ", "iclr_2021_8nl0k08uMi" ]
iclr_2021_WEHSlH5mOk
Discrete Graph Structure Learning for Forecasting Multiple Time Series
Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
poster-presentations
This paper presents a graph neural network-based approach to forecasting multiple time series. It incorporates structure learning similar in some ways to NRI (Kipf et al.) and a recurrent graph convolution forecaster given the inferred graph based on DCRNN (Li et al.). The paper shows consistently improved performance over several different kinds of baselines (classical, deep learning, and graph-based deep learning) on three datasets, two of which are public and one proprietary. Reviewer 1 thought the paper was “well presented and easy to understand”. I agree. The reviewer liked the simplicity of the approach but wondered whether the empirical improvement was sufficient. The reviewer asked about applicability outside the time series domain and the authors provided a satisfying response. Reviewer 3 thought that simultaneously learning graph structure and forecasting was an “understudied topic”. The reviewer pointed out several strengths including: the end-to-end nature of the approach, the reduction in training cost due to the direct parameterization of the adjacency matrix structure, the optimal structural regularization scheme, and the extensive experimentation. Like R1, they thought the paper was well written. They suggested several points of improvement, which were mainly seeking clarification. They made a good point regarding the PMU dataset in that only one month was considered, which was not enough to capture long-term seasonalities. This seems like a limitation to me. The authors responded to each of the points in turn. Regarding the point about the PMU dataset, the authors stated that the data was extremely noisy, so they settled on a month that was comparatively clean. The review from R4 was not as positive as the other reviews. R4 proposed that the present work may be a “simplification of NRI” rather than being novel/different because NRI was a windowed-based approach, while GTS was based on the entire time series. The reviewer also pointed out what appears to be a highly relevant paper (Wu et al.); though this appeared in KDD 2020 and in my opinion it’s understandable if the authors missed it. Finally, R4 raised several issues with the empirical evaluation. They make a very good point that the analysis on regularization used the kNN graph where a “ground truth” graph was not available. Why not evaluate the regularizer on the other datasets where ground truth is available? The authors responded with an updated paper addressing this point. The authors responded to other points of criticism and a fairly extensive debate ensued. The key points of the debate were: A difference in opinion between the significance in departure between the structure learning mechanism in this work (GTS) and NRI (Kipf et al.) Overlap between a recently proposed paper (Wu et al., KDD 2020). I am fairly sympathetic with the authors here. That work is fairly late-breaking and they do point out a key difference: "the advantage of our method over MTGNN is clear: we can get structures we desire, not restricted to a degree-k graph. This advantage is an important contribution, because MTGNN hard-wires the graph parameterization and enforces that all nodes have the same number of (out) neighbors. What if one desires instead a graph that approximately obeys spatial proximity, like that in the case of METR-LA? Our method yields such a graph." Some minor concerns with Fig. 2 and the structural prior/regularization analysis I have read the paper, and while R4’s concerns are legitimate, I think this paper is clearly over the bar. I support this paper’s acceptance and ask the authors to take the reviews into consideration when revising their paper. As R4 suggests, they could add a controlled set of experiments on a perhaps synthetic dataset to show that the proposed GTS is better than NRI. If the scalability of NRI is a concern, then this can be highlighted to the same extent as LDS.
train
[ "DCaAMS71Uk3", "0ON8TmGerON", "HTeDz0UymjB", "S4TWbaAj47K", "e52biSmuv2l", "g5mPxVK1-J", "etMHVGbN31Q", "2kZ3auO9ECt", "Tv2guRK2w9f", "o422tncF9cr", "07INhWEkW-Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer, your comments are well taken but you may have missed several points.\n \nDifferences from prior work lead positively to the improved empirical results. Such an improvement underlies the contribution of the work, as it points to updated knowledge of what design probably works better than others.\n \n...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "0ON8TmGerON", "Tv2guRK2w9f", "iclr_2021_WEHSlH5mOk", "e52biSmuv2l", "2kZ3auO9ECt", "HTeDz0UymjB", "o422tncF9cr", "o422tncF9cr", "07INhWEkW-Q", "iclr_2021_WEHSlH5mOk", "iclr_2021_WEHSlH5mOk" ]
iclr_2021_CR1XOQ0UTh-
Contrastive Learning with Hard Negative Samples
We consider the question: how can you sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge toward using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that use label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples where the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only few additional lines of code to implement, and introduces no computational overhead.
poster-presentations
This paper proposes a contrastive learning framework that leverages hard negative samples for self-supervised training. The proposed framework is theoretically analyzed and its efficacy is examined on several datasets/problems. A group of expert reviewers reviewed the paper and provided positive ratings for this paper. I agree with the reviewers and I recommend accepting this submission. One of the main discussion points among the reviewers was to what degree Pr1 is "approximately" satisfied in the proposed framework. There are several approximations in this paper that are not fully analyzed. Some of these approximations could be examined assuming that labeled data is available during training. For example, $p_x^+$ is approximated using a set of semantics-preserving transformations. In practice, the distribution induced by augmenting $x$ is very different than the distribution that samples from the instances in the class of $x$. The effect of this approximation could be easily examined by sampling from true class labels. Additionally, it would be very helpful to visualize how $q$ samples from the negative instances and how much it follows Pr1. I would like to ask the authors to add a small limitations section to the final camera-ready version that lists all the assumptions and approximations made in this paper. Please provide a high-level analysis on how such assumptions could be validated or such approximations could be measured if labeled data or additional information was provided. This discussion is extremely important for future practitioners to understand the basic assumptions that may not hold in reality and it will enable them to improve upon this work.
train
[ "5NXH8O4f6wP", "LHA7j68nHef", "D7jhFBUCFzl", "JE3kG9BEIaE", "TshwQ1p1Lp0", "MUFmcRc_yby", "2gGXGyTDud3", "hMs2EDs8oj", "ueMtuZgIiKB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update:\n\nThe revisions are good. The paper is very easy to follow and most of the story is pretty clear. Theory sections are clearer as well. So I'll improve my score as the authors followed through with both mine and other reviewer's comments. There's one hitch: Pr1, the one having to do with the labels, is not...
[ 6, 7, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_CR1XOQ0UTh-", "iclr_2021_CR1XOQ0UTh-", "iclr_2021_CR1XOQ0UTh-", "LHA7j68nHef", "5NXH8O4f6wP", "hMs2EDs8oj", "ueMtuZgIiKB", "iclr_2021_CR1XOQ0UTh-", "iclr_2021_CR1XOQ0UTh-" ]
iclr_2021_tqOvYpjPax2
Intraclass clustering: an implicit learning ability that regularizes DNNs
Several works have shown that the regularization mechanisms underlying deep neural networks' generalization performances are still poorly understood. In this paper, we hypothesize that deep neural networks are regularized through their ability to extract meaningful clusters among the samples of a class. This constitutes an implicit form of regularization, as no explicit training mechanisms or supervision target such behaviour. To support our hypothesis, we design four different measures of intraclass clustering, based on the neuron- and layer-level representations of the training data. We then show that these measures constitute accurate predictors of generalization performance across variations of a large set of hyperparameters (learning rate, batch size, optimizer, weight decay, dropout rate, data augmentation, network depth and width).
poster-presentations
The paper proposes intra-class clustering as an indicator of generalization performance and validates this by extensive empirical evaluation. All reviewers have found this connection highly interesting. The author response has also duly addressed most of the reviewers' concerns. Given the importance of studying generalization performance of overparameterized deep models, the paper will potentially generate interesting discussion at the conference.
train
[ "7_lLnBCutOE", "0TqlJnMAS-Q", "TPLk6id30xS", "pp23GrLBCk2", "PxxGZJDsEQv", "yI50wvWM_L", "B3yPfhcCDrA", "EZofeRRAJ4d", "UpGs-u_b-hY", "DAk0QdAZss9", "lOnZ_NsqL8", "m_B3KzThbW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "**Summary**:\n\nThis paper introduces a notion coined “intraclass clustering” that describes a deep neural network’s implicit ability to cluster within data. 4 different quantities that measure the networks’ clustering ability are proposed, and large scale experiments show that they are highly effective at predict...
[ 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_tqOvYpjPax2", "iclr_2021_tqOvYpjPax2", "iclr_2021_tqOvYpjPax2", "iclr_2021_tqOvYpjPax2", "0TqlJnMAS-Q", "7_lLnBCutOE", "7_lLnBCutOE", "TPLk6id30xS", "TPLk6id30xS", "TPLk6id30xS", "m_B3KzThbW", "iclr_2021_tqOvYpjPax2" ]
iclr_2021_t0TaKv0Gx6Z
Sliced Kernelized Stein Discrepancy
Kernelized Stein discrepancy (KSD), though being extensively used in goodness-of-fit tests and model learning, suffers from the curse-of-dimensionality. We address this issue by proposing the sliced Stein discrepancy and its scalable and kernelized variants, which employs kernel-based test functions defined on the optimal one-dimensional projections. When applied to goodness-of-fit tests, extensive experiments show the proposed discrepancy significantly outperforms KSD and various baselines in high dimensions. For model learning, we show its advantages by training an independent component analysis when compared with existing Stein discrepancy baselines. We further propose a novel particle inference method called sliced Stein variational gradient descent (S-SVGD) which alleviates the mode-collapse issue of SVGD in training variational autoencoders.
poster-presentations
This paper propses a slice method for approximaing the Kernel Stein Discrepancy, which has been popularly used for learning and inference with unnormalized density models. The proposed method uses a finite set from the orthogonal bases for the slice to approximate the Stein Discrepancy. The experimental results show that they outperform exsiting methods in high-dimensional cases in the applications of goodness-of-fit tests and learning of energy-based models. The proposed slice idea is novel and significant. Especially, unlike sliced Wasserstein, the slices are taken from the limited number of vectors, which should be an advantageous feature of the method. Experiments demonstrate clear advantages in high-dimensional cases, as expected. The paper is worth accepting in ICLR.
train
[ "nUn2944QYR", "U-TQ5IP6o6P", "ARJos-Un10V", "JNwPQrzfi2a", "bZNKqg-HHgo", "K63iUU4UK5U", "WdnV5thhQn7", "HWeMKPXxSB0", "Q0cRD4WgEtE" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for the updated score, and I am glad that the initial response addressed most of the concerns.\n\nThe time reported in the initial response is the time per epoch (including the inner loop optimization). For the Boston housing data set, we run 2000 epochs to make sure both algorithms converged. Th...
[ -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "ARJos-Un10V", "iclr_2021_t0TaKv0Gx6Z", "JNwPQrzfi2a", "U-TQ5IP6o6P", "iclr_2021_t0TaKv0Gx6Z", "HWeMKPXxSB0", "Q0cRD4WgEtE", "iclr_2021_t0TaKv0Gx6Z", "iclr_2021_t0TaKv0Gx6Z" ]
iclr_2021_St1giarCHLP
Denoising Diffusion Implicit Models
Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps in order to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a particular Markovian diffusion process. We generalize DDPMs via a class of non-Markovian diffusion processes that lead to the same training objective. These non-Markovian processes can correspond to generative processes that are deterministic, giving rise to implicit models that produce high quality samples much faster. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, perform semantically meaningful image interpolation directly in the latent space, and reconstruct observations with very low error.
poster-presentations
This work provides additional insights into a class of generative models that is rapidly gaining traction, and extends it by potentially providing a faster sampling mechanism, as well as a way to meaningfully interpolate between samples (an ability which adversarial models, currently the most popular class of generative models, also have). The revised manuscript includes an extension to discrete data, which could potentially amplify the impact of this work. The authors have also run additional experiments in response to the reviewers' comments. Reviewer 1 raised several concerns about the choice of language (i.e. referring to the proposed model as a diffusion model, and the precise meaning of 'implicit' in the context of generative models). This is a fair point, as the authors introduce changes that affect the Markovian nature of the "diffusion" process, and a diffusion process is supposed to be Markovian by definition. However, I think there is something to be said for the authors' argument of using the word 'diffusion' to clearly link this work to the prior work on which it is based. Given that technically speaking, the original DDPM work already 'abuses' the term to refer to a discrete-time process, it is difficult to argue compellingly that 'diffusion' should not feature in the name of the proposed model. Referring to 'non-Markovian diffusion processes' however seems more problematic, as this is a direct contradiction. If the authors wish to use this phrase, adding a few sentences to the introduction that justify this use would be helpful, and personally I feel this would be sufficient to address the issue (I noted that Section 4.1 already acknowledges that the forward process is no longer a diffusion). Plenty of work in our field abuses notation and this is justified simply with the phrase "with (slight) abuse of notation..."; I don't think this would be any different. Reviewer 1 is technically correct that 'stochastic' is an absolute adjective, i.e. something can only be stochastic or deterministic, there is nothing in between, and there are no degrees/levels of stochasticity or determinism. In practice however, it is quite often used in a comparative sense, and I believe I have in fact been guilty of this myself! I do not feel that it causes any ambiguity in this case. Indeed, the phrase 'degree of stochasticity' seems to be in relatively common use in literature. While there may be more correct terms to use, I subscribe to the descriptivist view on language, and I do not think the comparative use of 'stochastic' is a major issue here. The alternatives I can think of seem potentially more cumbersome (e.g. I wager that 'more/less entropic' would be more poorly understood than 'more/less stochastic'). Still, I recommend that the authors consider potential alternatives in the future, to avoid any confusion. Overall, I think the reviewers' major concerns have been addressed in the revised manuscript. Given that all reviewers consider the idea worthwhile, I will join them in recommending acceptance.
train
[ "R-Q7ndlC_LD", "-OEVebC7R1n", "eRb_TjIRmP", "4QmfFJVMCrx", "bXEmBU5Og7f", "re7j-liM6vI", "fbp54zwj721", "H_F-YtuXKxI", "32Iem2FenH_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors,\n\nThank you for responding to my comments and clarifying my understanding. I am glad that you have made the changes you did to the paper. I believe these changes have slightly improved the work. I initially reviewed this paper at a 7 and I do not feel these new changes warrant me to further raise this sc...
[ -1, 6, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "fbp54zwj721", "iclr_2021_St1giarCHLP", "4QmfFJVMCrx", "bXEmBU5Og7f", "-OEVebC7R1n", "H_F-YtuXKxI", "32Iem2FenH_", "iclr_2021_St1giarCHLP", "iclr_2021_St1giarCHLP" ]
iclr_2021_r-gPPHEjpmw
Hierarchical Reinforcement Learning by Discovering Intrinsic Options
We propose a hierarchical reinforcement learning method, HIDIO, that can learn task-agnostic options in a self-supervised manner while jointly learning to utilize them to solve sparse-reward tasks. Unlike current hierarchical RL approaches that tend to formulate goal-reaching low-level tasks or pre-define ad hoc lower-level policies, HIDIO encourages lower-level option learning that is independent of the task at hand, requiring few assumptions or little knowledge about the task structure. These options are learned through an intrinsic entropy minimization objective conditioned on the option sub-trajectories. The learned options are diverse and task-agnostic. In experiments on sparse-reward robotic manipulation and navigation tasks, HIDIO achieves higher success rates with greater sample efficiency than regular RL baselines and two state-of-the-art hierarchical RL methods. Code at: https://github.com/jesbu1/hidio.
poster-presentations
This paper presents an approach to hierarchical RL which automatically learns intrinsic task-agnostic options. The approach involves a two-level hierarchy, with policies learned by lower-layer Workers and selected by a higher-layer Scheduler. The approach is evaluated on four complex tasks and is shown to outperform existing methods. There were initial concerns with this paper around clarity of a number of points. These included the contributions of this work and questions around the experimental results, such as discussing the learned options themselves. The authors provided extensive responses to these concerns, and updated the paper accordingly, including addition results and analysis. I believe the paper is now much clearer with interesting contributions.
train
[ "8wsets3KPgd", "dSO2vcxiaf0", "jhzfZmqj33K", "d2lIiFAQ0SD", "yWz7w5ff0S", "-1-o81p6SUS", "W83nRjry-rE", "8oCJbmzHJPx", "0xpw4WwiDQH", "9EzHASQRauK", "Bn2AziXMkRG", "T4Fi5cWSHrt", "fGyHGG_ZvV1" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive comments. Both AnonReviewer3 and AnonReviewer5 think that the method is novel and technically sound, the experiment results are empirically strong, and the paper is generally well written. While AnonReviewer1 believes that our work “makes some reasonably sensible e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "iclr_2021_r-gPPHEjpmw", "9EzHASQRauK", "9EzHASQRauK", "fGyHGG_ZvV1", "Bn2AziXMkRG", "Bn2AziXMkRG", "Bn2AziXMkRG", "9EzHASQRauK", "T4Fi5cWSHrt", "iclr_2021_r-gPPHEjpmw", "iclr_2021_r-gPPHEjpmw", "iclr_2021_r-gPPHEjpmw", "iclr_2021_r-gPPHEjpmw" ]
iclr_2021_EMHoBG0avc1
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER. Contrary to previous work, our method does not require access to any corpus-specific information, such as inter-document hyperlinks or human-annotated entity markers, and can be applied to any unstructured text corpus. Our system also yields a much better efficiency-accuracy trade-off, matching the best published accuracy on HotpotQA while being 10 times faster at inference time.
poster-presentations
The paper introduces improving passage retrieval for multi-hop QA datasets by recursively retrieving passages, adding previously retrieved passages to the input (in addition to a query). This simple method shows gains on multiple QA benchmark datasets, and the evaluation presented in the paper on multiple competitive benchmark datasets (HotpotQA, FEVER) is very thorough (R1, R3, R4). While the application is pretty narrow, the performance gain (considering both efficiency and accuracy) is fairly significant, and the paper presents a simple model with less assumption (e.g., inter-document hyperlinks), that could be useful for future research. [1] also seems like a relevant line of work. [1] Generation-Augmented Retrieval for Open-domain Question Answering https://arxiv.org/pdf/2009.08553.pdf
train
[ "_iE1m5OT5JF", "VjZ_kUcoR_d", "8MvJA1-FD1M", "Zqhw9RdPLNJ", "vH_dUkmQOkm", "QjbIH3ao90P", "K_xTI-30tIT", "QZuOvRK4oxg", "1E_OfXesdga", "-4ssfVHi-x-" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary; The paper proposes a simple, clever, and as far as I can tell novel, combination of dense retrieval techniques and pseudo relevance feedback for multi-hop (complex) open-domain QA. The basic idea is to concatenate the passages returned for the first query to the original question, to form a new query to b...
[ 7, -1, -1, -1, -1, -1, -1, 6, 5, 9 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_EMHoBG0avc1", "-4ssfVHi-x-", "QZuOvRK4oxg", "_iE1m5OT5JF", "_iE1m5OT5JF", "1E_OfXesdga", "iclr_2021_EMHoBG0avc1", "iclr_2021_EMHoBG0avc1", "iclr_2021_EMHoBG0avc1", "iclr_2021_EMHoBG0avc1" ]
iclr_2021_gIHd-5X324
Rethinking Soft Labels for Knowledge Distillation: A Bias–Variance Tradeoff Perspective
Knowledge distillation is an effective approach to leverage a well-trained network or an ensemble of them, named as the teacher, to guide the training of a student network. The outputs from the teacher network are used as soft labels for supervising the training of a new network. Recent studies (M ̈uller et al., 2019; Yuan et al., 2020) revealed an intriguing property of the soft labels that making labels soft serves as a good regularization to the student network. From the perspective of statistical learning, regularization aims to reduce the variance, however how bias and variance change is not clear for training with soft labels. In this paper, we investigate the bias-variance tradeoff brought by distillation with soft labels. Specifically, we observe that during training the bias-variance tradeoff varies sample-wisely. Further, under the same distillation temperature setting, we observe that the distillation performance is negatively associated with the number of some specific samples, which are named as regularization samples since these samples lead to bias increasing and variance decreasing. Nevertheless, we empirically find that completely filtering out regularization samples also deteriorates distillation performance. Our discoveries inspired us to propose the novel weighted soft labels to help the network adaptively handle the sample-wise bias-variance tradeoff. Experiments on standard evaluation benchmarks validate the effectiveness of our method. Our code is available in the supplementary.
poster-presentations
The paper investigates the effect of soft labels in knowledge distillation from the perspective of sample-wise bias-variance tradeoff. They observe that during training the bias-variance tradeoff varies sample-wisely. and under the same distillation temperature setting, we distillation performance is negatively associated with the number of regularization samples. But removing them altogether hurts the performance (the authors show empirical evidence of this). Based on some observations about regularization samples, the authors propose the weighted soft labels to handle the tradeoff. Experiments on standard datasets show that the proposed method can improve the standard knowledge distillation. pros. -the paper is written clearly. -through the review period the authors added additional experiments suggested by the reviewers and enhances experimental results. The experiment results are convincing and the authors have now added explanations on hyperparameter choices. -the mathematical setting is now clear after incorporating reviewer's comments. -the missing related work as suggested by reviewers is added cons. -comparison with results of Zitong Yang et al 2020[1] is missing. I thank the authors for incorporating the changes requested by reviewers. Please add comparison with result of [1] in the final version. [1] Rethinking Bias-Variance Trade-off for Generalization of Neural Networks Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi Ma
val
[ "3InVsLbJmjQ", "Adz4ppD_fER", "zE0YvjQbtTt", "D-14M1HITL8", "eXJYJnl6PVa", "_T0KSI8hLgl", "t7mbShCWTJ", "Q2IEO7ThO2v", "FK3TtnlDSje", "lNnmoxxb_cF" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive comments. We have revised our paper accordingly and the revised parts are highlighted as brown. Specifically, we have made the following changes:\n\n1. We revise our mathematical definitions to avoid abuse of notations.\n2. We add experiments about the intermediate...
[ -1, -1, -1, -1, -1, -1, 7, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "iclr_2021_gIHd-5X324", "t7mbShCWTJ", "FK3TtnlDSje", "Q2IEO7ThO2v", "lNnmoxxb_cF", "D-14M1HITL8", "iclr_2021_gIHd-5X324", "iclr_2021_gIHd-5X324", "iclr_2021_gIHd-5X324", "iclr_2021_gIHd-5X324" ]
iclr_2021_GMgHyUPrXa
A Design Space Study for LISTA and Beyond
In recent years, great success has been witnessed in building problem-specific deep networks from unrolling iterative algorithms, for solving inverse problems and beyond. Unrolling is believed to incorporate the model-based prior with the learning capacity of deep learning. This paper revisits \textit{the role of unrolling as a design approach for deep networks}: to what extent its resulting special architecture is superior, and can we find better? Using LISTA for sparse recovery as a representative example, we conduct the first thorough \textit{design space study} for the unrolled models. Among all possible variations, we focus on extensively varying the connectivity patterns and neuron types, leading to a gigantic design space arising from LISTA. To efficiently explore this space and identify top performers, we leverage the emerging tool of neural architecture search (NAS). We carefully examine the searched top architectures in a number of settings, and are able to discover networks that consistently better than LISTA. We further present more visualization and analysis to ``open the black box", and find that the searched top architectures demonstrate highly consistent and potentially transferable patterns. We hope our study to spark more reflections and explorations on how to better mingle model-based optimization prior and data-driven learning.
poster-presentations
The paper initially received mixed ratings, with one reviewer strongly supporting the paper given that the idea of combining unrolled algorithms and NAS is new and interesting, and one reviewer not convinced by the significance of the results. His/her main concern was the use of synthetic data only, which is not realistic. This was a legitimate concern as the performance of sparse estimation algorithms can change drastically when there is correlation in the design matrix. See for instance, the benchmarks in F. Bach, R. Jenatton, J. Mairal and G. Obozinski. Optimization with Sparsity-Inducing Penalties. The rebuttal addresses this concern in a satisfactory manner and the area chair is happy to recommend an accept.
train
[ "oZILNiRwWcc", "JGjyQhTE13", "7a4ETRjxOMp", "o8ZcpKWyEX5", "m7KBemKvMVr", "72K_wZecMig", "LNRpAG6cOs", "FplBy-gXAiI", "Q1jN7QfKPO", "woYZQkDY2lV", "Xu7UakCtKjJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper conducts an empirical study on unrolling architecture design, by applying NAS to search the connectivity pattern based on LISTA, and compares the searched model to the original LISTA.\n\nPros:\n+ The paper is clearly written and not hard to follow\n+ Experiments show that the searched architecture perfo...
[ 6, -1, -1, -1, -1, -1, -1, -1, 8, 4, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_GMgHyUPrXa", "FplBy-gXAiI", "Xu7UakCtKjJ", "woYZQkDY2lV", "woYZQkDY2lV", "oZILNiRwWcc", "oZILNiRwWcc", "Q1jN7QfKPO", "iclr_2021_GMgHyUPrXa", "iclr_2021_GMgHyUPrXa", "iclr_2021_GMgHyUPrXa" ]
iclr_2021_CZ8Y3NzuVzO
What Should Not Be Contrastive in Contrastive Learning
Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations. However, these methods implicitly assume a particular set of representational invariances (e.g., invariance to color), and can perform poorly when a downstream task violates this assumption (e.g., distinguishing red vs. yellow cars). We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances. Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation. We use a multi-head network with a shared backbone which captures information across each augmentation and alone outperforms all baselines on downstream tasks. We further find that the concatenation of the invariant and varying spaces performs best across all tasks we investigate, including coarse-grained, fine-grained, and few-shot downstream classification tasks, and various data corruptions.
poster-presentations
There was a predominantly positive feedback from the reviewers so I recommend acceptance of the paper. It is well-written and well-motivated tackling an important problem: That in self-supervised learning one might encode different invariances by default, even if some of these invariances are useful for downstream tasks (e.g. being rotation invariant may be detrimental to predicting if an image has the correct rotation on a phone). For this, they propose a simple, yet elegant approach and validate it on many downstream tasks. Given the recent interest in self-supervised learning, this appears to be a relevant and interesting paper for the ICLR community.
train
[ "v9-QOkg7Ye", "2OoUuI7ZlvH", "kzWHCyMHnpS", "bbY3Sac593y", "QBDhkiIs46m", "Y9NKJPABN1I", "wDKOeRilwVl", "49EOFRfJCgs", "AUuU4qB0vGR", "Y-rfIIEWV6Y", "oQaLkNrKPao" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "== Summary ==\n\nThe paper proposes a contrastive learning approach for self-supervised learning in which multiple heads are trained to be invariant to all but one type of data augmentation. The rationale is that different downstream tasks may require different types of invariances (e.g. we may want to be rotation...
[ 5, 7, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_CZ8Y3NzuVzO", "iclr_2021_CZ8Y3NzuVzO", "bbY3Sac593y", "QBDhkiIs46m", "v9-QOkg7Ye", "2OoUuI7ZlvH", "Y-rfIIEWV6Y", "oQaLkNrKPao", "iclr_2021_CZ8Y3NzuVzO", "iclr_2021_CZ8Y3NzuVzO", "iclr_2021_CZ8Y3NzuVzO" ]
iclr_2021_KJNcAkY8tY4
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and is indicative of the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for features learned by different models, namely, representations outside the block structure are often similar across architectures with varying widths and depths, but the block structure is unique to each model. We analyze the output predictions of different model architectures, finding that even when the overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.
poster-presentations
This paper studies whether neural networks with different architectures, especially different width and depth, learn similar representations. All reviewers agree that the investigations are thorough and the experimental discoveries are convincing and well explained. Good work. I recommend accept.
train
[ "JvwH0QQBiz2", "ZDoqKuM-Cux", "sl1KJLj7PZE", "UHYvPMmVdKb", "Eeue4NSTACR", "dlMMMBw6nPO", "TuKst880uy", "iZiAM4GPYof" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time and feedback! We are very happy to hear that you enjoyed the paper!\n\n**Similarity analysis and CKA:** In the submitted version of the paper, due to space constraints, we referred the reader to [1] for further explanation of the rationale and the details of the similarity technique (CKA). ...
[ -1, -1, -1, -1, 7, 6, 8, 6 ]
[ -1, -1, -1, -1, 3, 3, 5, 3 ]
[ "TuKst880uy", "iZiAM4GPYof", "dlMMMBw6nPO", "Eeue4NSTACR", "iclr_2021_KJNcAkY8tY4", "iclr_2021_KJNcAkY8tY4", "iclr_2021_KJNcAkY8tY4", "iclr_2021_KJNcAkY8tY4" ]
iclr_2021_cR91FAodFMe
Learning to Set Waypoints for Audio-Visual Navigation
In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio observations. We introduce a reinforcement learning approach to audio-visual navigation with two key novel elements: 1) waypoints that are dynamically set and learned end-to-end within the navigation policy, and 2) an acoustic memory that provides a structured, spatially grounded record of what the agent has heard as it moves. Both new ideas capitalize on the synergy of audio and visual data for revealing the geometry of an unmapped space. We demonstrate our approach on two challenging datasets of real-world 3D scenes, Replica and Matterport3D. Our model improves the state of the art by a substantial margin, and our experiments reveal that learning the links between sights, sounds, and space is essential for audio-visual navigation.
poster-presentations
The paper considers a variant of the point-goal navigation problem in which the agent additionally receives an audio signal emitted from the goal. The proposed framework incorporates a form of acoustic memory to build a map of acoustic signals over time. This memory is used in combination with an egocentric depth map to choose waypoints that serve as intermediate subgoals for planning. The method is shown to outperform state-of-the-art baselines in two navigation domains. The reviewers all agree that the paper is very well written and that the evaluations are thorough, showing that the proposed framework offers clear performance gains. The idea of combining acoustic memory as a form of map with an occupancy grid representation as a means of choosing intermediate goals is interesting. However, the significance of the contributions and their relevance are limited by the narrow scope of the audio-video navigation task, which seems a bit contrived. The paper also overstates the novelty of the work at times (e.g., being the first use of end-to-end learned subgoals for navigation). The author response resolves some of these concerns, but others remain.
train
[ "hc8Bt5Qch2", "u625kkWy1zz", "aN5dAGl6P15", "kjCa6a73irM", "xbUf-bYKu8S", "eOlCEE5YVVd", "zNerllBqvcN", "e-W8aB43f8", "jRezrVYFhWW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper tackles the AudioGoal task of navigating to a acoustic source in a 3D environment. It introduces the idea of an acoustic memory, which maps and aggregates acoustic intensity over time. An agent’s acoustic memory, in tandem with its egocentric depth view, is then used to select navigation waypoints in an...
[ 7, 7, 7, -1, -1, -1, -1, -1, 6 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_cR91FAodFMe", "iclr_2021_cR91FAodFMe", "iclr_2021_cR91FAodFMe", "jRezrVYFhWW", "aN5dAGl6P15", "hc8Bt5Qch2", "u625kkWy1zz", "iclr_2021_cR91FAodFMe", "iclr_2021_cR91FAodFMe" ]
iclr_2021_yFJ67zTeI2
Semi-supervised Keypoint Localization
Knowledge about the locations of keypoints of an object in an image can assist in fine-grained classification and identification tasks, particularly for the case of objects that exhibit large variations in poses that greatly influence their visual appearance, such as wild animals. However, supervised training of a keypoint detection network requires annotating a large image dataset for each animal species, which is a labor-intensive task. To reduce the need for labeled data, we propose to learn simultaneously keypoint heatmaps and pose invariant keypoint representations in a semi-supervised manner using a small set of labeled images along with a larger set of unlabeled images. Keypoint representations are learnt with a semantic keypoint consistency constraint that forces the keypoint detection network to learn similar features for the same keypoint across the dataset. Pose invariance is achieved by making keypoint representations for the image and its augmented copies closer together in feature space. Our semi-supervised approach significantly outperforms previous methods on several benchmarks for human and animal body landmark localization.
poster-presentations
This paper received three borderline reviews (2+ / 1-) and one positive review. Having read through the reviews and author responses, the AC recommends the paper to be accepted. The method, while simple, is proven experimentally to be effectively and will add to the body of work on key-point localization. The authors are requested to add their additional baselines in the response text to the revision of their paper if it has not already been done.
train
[ "LVzk2pcF13K", "OuoQtSA3niV", "Rk1ELcXzknN", "_WpvDEP5pCy", "SGJWN1Dw1R", "tdJ5PF83kY0", "4etN2AwuCbf", "TYQ8n1FzYdT", "2uMZ4MvuUCd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "It can be applied to point heatmaps based network by adding a semantic representation learn by a three loss terms: one supervised and two semi-supervised. The proposed architecture combines a Keypoint Localization Network with a Keypoint Classification Network. Experiments are achieved on four public datasets. \n\...
[ 6, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_yFJ67zTeI2", "_WpvDEP5pCy", "LVzk2pcF13K", "4etN2AwuCbf", "TYQ8n1FzYdT", "2uMZ4MvuUCd", "iclr_2021_yFJ67zTeI2", "iclr_2021_yFJ67zTeI2", "iclr_2021_yFJ67zTeI2" ]
iclr_2021_Cnon5ezMHtu
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
Neural Architecture Search (NAS) has been explosively studied to automate the discovery of top-performer neural networks. Current works require heavy training of supernet or intensive architecture evaluations, thus suffering from heavy resource consumption and often incurring search bias due to truncated training or approximations. Can we select the best neural architectures without involving any training and eliminate a drastic portion of the search cost? We provide an affirmative answer, by proposing a novel framework called \textit{training-free neural architecture search} (TE-NAS). TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK), and the number of linear regions in the input space. Both are motivated by recent theory advances in deep networks, and can be computed without any training. We show that: (1) these two measurements imply the trainability and expressivity of a neural network; and (2) they strongly correlate with the network's actual test accuracy. Further on, we design a pruning-based NAS mechanism to achieve a more flexible and superior trade-off between the trainability and expressivity during the search. In NAS-Bench-201 and DARTS search spaces, TE-NAS completes high-quality search but only costs 0.5 and 4 GPU hours with one 1080Ti on CIFAR-10 and ImageNet, respectively. We hope our work to inspire more attempts in bridging between the theoretic findings of deep networks and practical impacts in real NAS applications.
poster-presentations
The authors propose training-free neural architecture search using two theoretically inspired heuristics: the condition number of the Neural Tangent Kernel (to measure "trainability" of the architecture), and the number of linear regions in the input space (to measure "expressivity"). These two heuristics are negatively and positively correlated with test accuracy, respectively, allowing for fast, training-free Neural Architecture Search. It is certainly not the first training-free NAS proposal, but achieves competitive results with much more expensive NAS methods. A few reviewers mentioned limited novelty of the method, a claim with which I agree. The contribution of the paper, however, is something different than how it was presented. The core message seems to be that the two proposed heuristics can greatly speed up NAS, and should be a baseline method against which more expensive methods should test. I feel like this is a borderline paper, but may be of interest to researchers in the field.
train
[ "k5D2_jGDcoi", "H3yihVp5NOe", "QgXff8NKMWb", "TWaN1RrxU5s", "cv7YHzJ7Mc", "IXmPvLCAGsy", "Z4K7nB0zgSP", "qvK_QjBfthX", "JVFO247SteB", "hrVthIh1ZVw", "WWzrLU7d3U", "9I268n7hSCr", "ZaKgIi6t0TS", "8xx650L2NYq", "QatydljzLNa", "KKoqWzeCOPg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "# Summary\nThis paper aims to speed up NAS with a training-free performance estimation strategy, i.e., estimate the performance of an architecture at initialization without training (not necessarily the absolute performance, but the relative ranking of architectures).\n\nThe proposed strategy estimates an architec...
[ 6, 4, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_Cnon5ezMHtu", "iclr_2021_Cnon5ezMHtu", "hrVthIh1ZVw", "iclr_2021_Cnon5ezMHtu", "k5D2_jGDcoi", "k5D2_jGDcoi", "k5D2_jGDcoi", "H3yihVp5NOe", "H3yihVp5NOe", "TWaN1RrxU5s", "H3yihVp5NOe", "KKoqWzeCOPg", "KKoqWzeCOPg", "QatydljzLNa", "iclr_2021_Cnon5ezMHtu", "iclr_2021_Cnon5ezMHt...
iclr_2021_eqBwg3AcIAK
Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building off of a probabilistic view of RL, we achieve this goal by compensating for the difference in dynamics by modifying the reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the agent is penalized for transitions that would indicate that the agent is interacting with the source domain, rather than the target domain. Formally, we prove that applying our method in the source domain is guaranteed to obtain a near-optimal policy for the target domain, provided that the source and target domains satisfy a lightweight assumption. Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics. On discrete and continuous control tasks, we illustrate the mechanics of our approach and demonstrate its scalability to high-dimensional~tasks.
poster-presentations
This paper presents an approach to domain adaptation in reinforcement learning. The main idea behind this approach, DARC, is to modify the reward function in the source domain so that the learned policy is optimal in the target domain. This is achieved by learning a classifier that learns to discriminate between the data from the source domain and those from the target domain. Overall, reviewers appreciated the intuitiveness of the approach as well as its formal analysis. They had some concerns with respect to experiments, which was sorted out in the author response period. Given the overall positive reviews, I recommend accepting the paper.
train
[ "nRW6psxKExh", "MVTJ_g2ZaOH", "UcVnL7m8uSR", "7xVU1n33Unx", "wo8DB6qQDlE", "2jEI7WeNfg9", "FNkLL_Dxrt", "wvtbqCsKCOb", "vtzuqg1fmmz", "eGPsaj3lVj" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "## Summary\nThis paper proposes a method for domain adaptation in RL where the source and target domains differ only in the transition distriubtions. A theoretical derivation based on RL as probabilistic inference is presented that starts with the objective of matching the desired distribution of trajectories in t...
[ 8, 6, -1, 7, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_eqBwg3AcIAK", "iclr_2021_eqBwg3AcIAK", "wo8DB6qQDlE", "iclr_2021_eqBwg3AcIAK", "7xVU1n33Unx", "eGPsaj3lVj", "MVTJ_g2ZaOH", "nRW6psxKExh", "iclr_2021_eqBwg3AcIAK", "iclr_2021_eqBwg3AcIAK" ]
iclr_2021_ce6CFXBh30h
Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning
While existing federated learning approaches mostly require that clients have fully-labeled data to train on, in realistic settings, data obtained at the client-side often comes without any accompanying labels. Such deficiency of labels may result from either high labeling cost, or difficulty of annotation due to the requirement of expert knowledge. Thus the private data at each client may be either partly labeled, or completely unlabeled with labeled data being available only at the server, which leads us to a new practical federated learning problem, namely Federated Semi-Supervised Learning (FSSL). In this work, we study two essential scenarios of FSSL based on the location of the labeled data. The first scenario considers a conventional case where clients have both labeled and unlabeled data (labels-at-client), and the second scenario considers a more challenging case, where the labeled data is only available at the server (labels-at-server). We then propose a novel method to tackle the problems, which we refer to as Federated Matching (FedMatch). FedMatch improves upon naive combinations of federated learning and semi-supervised learning approaches with a new inter-client consistency loss and decomposition of the parameters for disjoint learning on labeled and unlabeled data. Through extensive experimental validation of our method in the two different scenarios, we show that our method outperforms both local semi-supervised learning and baselines which naively combine federated learning with semi-supervised learning.
poster-presentations
This work proposes the Federated Matching algorithm as a novel method to tackle the problems in federated learning. The paper is well-written and original, and it contributes to the state-of-the-art.
train
[ "O2_Zp9owIZ", "hF1s5e1Cgc", "BsfRd4Cj8Pj", "XAj9C1qGFVw", "7gI79dr2oyU", "2BygUJTZlsb", "rdOF7MtwV5w", "H-xRprDKYwX", "nATmDuQJZs0", "O-45l6oQrN", "j58LHlYZ4-N", "v-oz52yQ8tE", "C4eLm_43Q8u", "yJmqbNr4cqf", "vXd586WcUsW", "nIUI7yfle-Q" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for getting back to us. As you requested, we formalize our contributions, such as the introduction of a novel problem, and proposal of an inter-client consistency loss, and disjoint learning algorithms. Please also refer to **Section 2** (Problem Definition), **4** (Labels-at-Client), and **5** (Labels-a...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "hF1s5e1Cgc", "nATmDuQJZs0", "yJmqbNr4cqf", "v-oz52yQ8tE", "v-oz52yQ8tE", "iclr_2021_ce6CFXBh30h", "iclr_2021_ce6CFXBh30h", "2BygUJTZlsb", "v-oz52yQ8tE", "yJmqbNr4cqf", "nIUI7yfle-Q", "C4eLm_43Q8u", "vXd586WcUsW", "iclr_2021_ce6CFXBh30h", "iclr_2021_ce6CFXBh30h", "iclr_2021_ce6CFXBh30h...
iclr_2021_P6_q1BRxY8Q
Learning Safe Multi-agent Control with Decentralized Neural Barrier Certificates
We study the multi-agent safe control problem where agents should avoid collisions to static obstacles and collisions with each other while reaching their goals. Our core idea is to learn the multi-agent control policy jointly with learning the control barrier functions as safety certificates. We propose a new joint-learning framework that can be implemented in a decentralized fashion, which can adapt to an arbitrarily large number of agents. Building upon this framework, we further improve the scalability by incorporating neural network architectures that are invariant to the quantity and permutation of neighboring agents. In addition, we propose a new spontaneous policy refinement method to further enforce the certificate condition during testing. We provide extensive experiments to demonstrate that our method significantly outperforms other leading multi-agent control approaches in terms of maintaining safety and completing original tasks. Our approach also shows substantial generalization capability in that the control policy can be trained with 8 agents in one scenario, while being used on other scenarios with up to 1024 agents in complex multi-agent environments and dynamics. Videos and source code can be found at https://realm.mit.edu/blog/learning-safe-multi-agent-control-decentralized-neural-barrier-certificates.
poster-presentations
Initially there were some shared concerns about the work being too incremental, lack of technical clarity on the algorithmic side and experiments, and lack of clear mathematical formulations. The authors did a good effort and cleared up many questions and remarks satisfactorily, and several reviewers have increased their scores as a consequence. In its current state I recommend to accept the paper.
train
[ "AGRqzJeYMc", "Phnvhk90d8S", "3vU3QirrTxm", "Jl8To_oK57O", "CZTxDzJe3lc", "r3COOXwmmz4", "zg6GxX3QWZu", "zsjBb74jNuB", "a6b2lEelTf", "H_clP6U3A6l", "oGuqwfMh2xv", "g5V80XdMo1y", "fQX-WvQ5ef", "jYKPaSVXaD6", "0ITPEjxz--d", "iPpSKDl9Wcu", "AVCuOvOI4l", "Bna7X9HVchB", "n4Hde0l7ua", ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ "We appreciate the reviewers for their valuable comments. We also really appreciate that 4 out of 5 reviewers have raised their score to “strong accept” (8), “accept” (7) and “weak accept” (6) during the rebuttal phase, after we added the important missing information on the experimental setup and supplementary exp...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 8, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4 ]
[ "iclr_2021_P6_q1BRxY8Q", "eDneawgo68K", "Jl8To_oK57O", "zg6GxX3QWZu", "iclr_2021_P6_q1BRxY8Q", "fQX-WvQ5ef", "CZTxDzJe3lc", "g5V80XdMo1y", "fQX-WvQ5ef", "CZTxDzJe3lc", "iclr_2021_P6_q1BRxY8Q", "CHnrudX-GHl", "iPpSKDl9Wcu", "Bna7X9HVchB", "CZTxDzJe3lc", "n4Hde0l7ua", "Y1qoPyEu34", "...
iclr_2021_3X64RLgzY6O
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Understanding the algorithmic bias of stochastic gradient descent (SGD) is one of the key challenges in modern machine learning and deep learning theory. Most of the existing works, however, focus on very small or even infinitesimal learning rate regime, and fail to cover practical scenarios where the learning rate is moderate and annealing. In this paper, we make an initial attempt to characterize the particular regularization effect of SGD in the moderate learning rate regime by studying its behavior for optimizing an overparameterized linear regression problem. In this case, SGD and GD are known to converge to the unique minimum-norm solution; however, with the moderate and annealing learning rate, we show that they exhibit different directional bias: SGD converges along the large eigenvalue directions of the data matrix, while GD goes after the small eigenvalue directions. Furthermore, we show that such directional bias does matter when early stopping is adopted, where the SGD output is nearly optimal but the GD output is suboptimal. Finally, our theory explains several folk arts in practice used for SGD hyperparameter tuning, such as (1) linearly scaling the initial learning rate with batch size; and (2) overrunning SGD with high learning rate even when the loss stops decreasing.
poster-presentations
This work compares and contrasts the learning rate dynamics of GD and SGD and shows that under practical learning rate settings, SGD is biased to approach the minimum along the direction of steepest descent, leading to better performance. Reviewers agree that the theoretical results are significant. The authors satisfactorily responded to reviewers’ questions and improved the paper’s clarity during the discussion phase.
train
[ "lNa0aMCzlv2", "Y9Uzby4nMr5", "9SPHWPy_En", "TzXjjeh1meN", "E9OnOatR2iN", "BNw52hjvPfI", "NLZgXBb1iEh", "jgw_IKjC_sR", "37xqSHbVv60", "xoHcO5LHGLH", "UqZosz3hwMi", "NqD68YmtYM1", "U9Q-AtFHi8A", "jIbSzCcEK4q" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary:\nIn this paper, an implicit bias of SGD and GD in terms of the direction of convergence points is studied. This study shows that, in a setting of linear regression, SGD and GD converge to different directions, which are determined by the largest/smallest eigenvectors of a data matrix when the learning rat...
[ 6, -1, -1, 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_3X64RLgzY6O", "E9OnOatR2iN", "BNw52hjvPfI", "iclr_2021_3X64RLgzY6O", "NqD68YmtYM1", "NLZgXBb1iEh", "UqZosz3hwMi", "iclr_2021_3X64RLgzY6O", "xoHcO5LHGLH", "U9Q-AtFHi8A", "TzXjjeh1meN", "lNa0aMCzlv2", "jgw_IKjC_sR", "iclr_2021_3X64RLgzY6O" ]
iclr_2021_Lc28QAB4ypz
Fast And Slow Learning Of Recurrent Independent Mechanisms
Decomposing knowledge into interchangeable pieces promises a generalization advantage when there are changes in distribution. A learning agent interacting with its environment is likely to be faced with situations requiring novel combinations of existing pieces of knowledge. We hypothesize that such a decomposition of knowledge is particularly relevant for being able to generalize in a systematic way to out-of-distribution changes. To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks. An attention mechanism dynamically selects which modules can be adapted to the current task, and the parameters of the \textit{selected} modules are allowed to change quickly as the learner is confronted with variations in what it experiences, while the parameters of the attention mechanisms act as stable, slowly changing, meta-parameters. We focus on pieces of knowledge captured by an ensemble of modules sparsely communicating with each other via a bottleneck of attention. We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup involving navigation in a partially observed grid world with image-level input. We also find that reversing the role of parameters and meta-parameters does not work nearly as well, suggesting a particular role for fast adaptation of the dynamically selected modules.
poster-presentations
The paper examines an idea that knowledge and rewards are stationary and reusable across tasks. An interesting paper that combines number of related topics (meta RL, HRL, time scale in RL, and attention), improving the speed of training. The authors have addressed the reviewer comments, strengthening the paper. The reviewers agree, and I concur, that the paper contributes a novel model, valuable to the ICLR community. It is well thought-out, presented, and evaluated.
train
[ "kkEGkU_KVj2", "y3Up_9woLjm", "xQaL7nXhjFr", "DtoJMg2hzAs", "rn4ZbnlAfL5", "MuI5Y2y29-n", "LghxhU6iva0", "nAozgUTnyig", "nQsFH2ON3-z", "V_saFdETA8l", "QLX3jgo_2yO", "YywMDtPMVcI", "69Ac3ZXymtg", "k-zOHJdrbmK", "g8SAOzBqh3L", "XVtd-ROT4DZ", "qNp3-yCDxJA", "AbBF1-xz2ZY", "EVtLDYb6Q...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary of the paper:\n\nThe paper introduces a meta-learning approach to recurrent independent modules, with the goal to make RIMs adapt better to changing distributions, increasing generalization, and increasing sample efficiency. It postulates that RIMs, with their decomposition of independent information, can ...
[ 7, -1, 7, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, -1, 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_Lc28QAB4ypz", "iclr_2021_Lc28QAB4ypz", "iclr_2021_Lc28QAB4ypz", "QLX3jgo_2yO", "nQsFH2ON3-z", "LghxhU6iva0", "V_saFdETA8l", "iclr_2021_Lc28QAB4ypz", "YywMDtPMVcI", "AbBF1-xz2ZY", "k-zOHJdrbmK", "69Ac3ZXymtg", "nAozgUTnyig", "qNp3-yCDxJA", "kkEGkU_KVj2", "xQaL7nXhjFr", "EVt...
iclr_2021_pzpytjk3Xb2
Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples
To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback. Existing black-box attacks generally suffer from high query complexity, especially when only the top-1 decision (i.e., the hard-label prediction) of the victim model is available. In this paper, we propose a novel hard-label black-box attack named Policy-Driven Attack, to reduce the query complexity. Our core idea is to learn promising search directions of the adversarial examples using a well-designed policy network in a novel reinforcement learning formulation, in which the queries become more sensible. Experimental results demonstrate that our method can significantly reduce the query complexity in comparison with existing state-of-the-art hard-label black-box attacks on various image classification benchmark datasets. Code and models for reproducing our results are available at https://github.com/ZiangYan/pda.pytorch
poster-presentations
The paper proposes an RL-based approach for decision-based attack. All the reviewers like the paper after the rebuttal phase, and we would like to encourage the authors to incorporate the new experiments in the camera-ready version. Furthermore, some recent decision-based attacks should also be included in the comparisons, such as Li et al., QEBA: Query-Efficient Boundary-Based Blackbox Attack. (CVPR 2020) Cheng et al., Sign-OPT: A Query-Efficient Hard-label Adversarial Attack. (ICLR 2020)
train
[ "5JS_UEcm2f", "xXjwzZce02", "JRIYqHEf43P", "dtL_N5JO9H2", "P7IEjW0Ribr", "EDtpP5y_rNd", "Q0R9s_bIAm", "o4PqOfGec_J", "UNZgy8IZCFm", "rOGh-6QLJ2D", "iCjCYRo9xeI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThis paper proposes a new hard-label black-box adversarial attack method based on reinforcement learning. The authors formulate the black-box attacking problem as a reinforcement learning problem, and design a policy network to learn the appropriate attack directions, in order to achieve more efficient a...
[ 6, -1, 7, -1, 7, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 3, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_pzpytjk3Xb2", "5JS_UEcm2f", "iclr_2021_pzpytjk3Xb2", "UNZgy8IZCFm", "iclr_2021_pzpytjk3Xb2", "rOGh-6QLJ2D", "5JS_UEcm2f", "iCjCYRo9xeI", "JRIYqHEf43P", "P7IEjW0Ribr", "iclr_2021_pzpytjk3Xb2" ]
iclr_2021_vVjIW3sEc1s
A Mathematical Exploration of Why Language Models Help Solve Downstream Tasks
Autoregressive language models, pretrained using large text corpora to do well on next word prediction, have been successful at solving many downstream tasks, even with zero-shot usage. However, there is little theoretical understanding of this success. This paper initiates a mathematical study of this phenomenon for the downstream task of text classification by considering the following questions: (1) What is the intuitive connection between the pretraining task of next word prediction and text classification? (2) How can we mathematically formalize this connection and quantify the benefit of language modeling? For (1), we hypothesize, and verify empirically, that classification tasks of interest can be reformulated as sentence completion tasks, thus making language modeling a meaningful pretraining task. With a mathematical formalization of this hypothesis, we make progress towards (2) and show that language models that are ϵ-optimal in cross-entropy (log-perplexity) learn features that can linearly solve such classification tasks with O(ϵ) error, thus demonstrating that doing well on language modeling can be beneficial for downstream tasks. We experimentally verify various assumptions and theoretical findings, and also use insights from the analysis to design a new objective function that performs well on some classification tasks.
poster-presentations
The paper attempts to provide a theoretical explanation for benefit of language model pretraining on downstream classification task. In this regard, the authors provide a mathematical framework which seems to indicate that the distribution of the next word, conditional on the context, can provide a strong discriminative signal for the downstream task. The reviewers found the formulation insightful, interesting, and novel. Also reviewers enjoyed reading the well written paper and appreciated its cautious in its tone. As correctly pointed out by reviewers, the proposed framework might not directly align with techniques used in practice. Applicability of the framework to other pre-training approaches is limited. Also, there are some unresolved concerns about $O(\sqrt{\epsilon})$ assumption still. Nevertheless, reviewers reached a consensus that the framework would be beneficial for the community and attract follow-up works. Thus, I recommend an acceptance to ICLR. Following reviewer suggestion, it is strongly recommended that extensions section be expanded in the revised version using the extra page.
train
[ "7s1JHlVw-Ua", "T6O2wO-1p9L", "7rDfZQH8oqF", "7c16GW4v4AU", "b7VIdIDlHl", "Y71DxXRf538", "QRkHSjyxG6t", "iVLKsjLgiIv", "Bw2z_yIFp56", "QuPOPmX_9aP", "kV4De0HxrSz", "QTtvYSI-WD3", "Ljbfz8JWK8D", "3cDis2cmU1g" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary.\n\nThis work tries to understand why features from trained language models can be used to solve classification tasks effectively. A language model (LM) in the analysis is modeled as a feature map $f : S \\rightarrow \\mathbb{R}^d$, a word embedding $\\Phi \\in \\mathbb{R}^{d \\times V}$, and a trained lan...
[ 7, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, 2, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_vVjIW3sEc1s", "iclr_2021_vVjIW3sEc1s", "b7VIdIDlHl", "iclr_2021_vVjIW3sEc1s", "Y71DxXRf538", "QRkHSjyxG6t", "iVLKsjLgiIv", "7c16GW4v4AU", "Ljbfz8JWK8D", "T6O2wO-1p9L", "3cDis2cmU1g", "7s1JHlVw-Ua", "iclr_2021_vVjIW3sEc1s", "iclr_2021_vVjIW3sEc1s" ]
iclr_2021_Naqw7EHIfrv
Representation Learning for Sequence Data with Deep Autoencoding Predictive Components
We propose Deep Autoencoding Predictive Components (DAPC) -- a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of \emph{predictive information} of latent feature sequences, which is the mutual information between the past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and keep useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
poster-presentations
The paper combines a few different ideas for representation learning on sequential data and is able to achieve competitive WER on the Librispeech ASR dataset. I appreciate the fact that the authors engaged with reviewers and tried to improve the paper. While I get a sense that the final system has many moving parts, I believe the paper meets the bar for acceptance at ICLR.
train
[ "wn9e44RKFDz", "TzHhD1-gade", "t_91-ZiIcRy", "oFF5JGkr2w", "taaTKrcbXC2", "vl4mag7rZyS", "i9G5ur1Arx4", "L_f7JX0azt", "7rSo2YKDnXc", "p1zxcPR4px2", "n6M11jWwHo", "TtEng02JQ4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper builds on the prior work of Dynamical Components Analysis (DCA) which maximizes the mutual information between past and future temporal windows around the current time step, referred to as the the Predictive Information (PI) loss. \nIn this paper, the PI loss is used to train a neural encoder that lear...
[ 5, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, 7 ]
[ 4, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_Naqw7EHIfrv", "iclr_2021_Naqw7EHIfrv", "oFF5JGkr2w", "taaTKrcbXC2", "i9G5ur1Arx4", "iclr_2021_Naqw7EHIfrv", "vl4mag7rZyS", "TzHhD1-gade", "wn9e44RKFDz", "TtEng02JQ4", "iclr_2021_Naqw7EHIfrv", "iclr_2021_Naqw7EHIfrv" ]
iclr_2021_ZsZM-4iMQkH
A unifying view on implicit bias in training linear neural networks
We study the implicit bias of gradient flow (i.e., gradient descent with infinitesimal step size) on linear neural network training. We propose a tensor formulation of neural networks that includes fully-connected, diagonal, and convolutional networks as special cases, and investigate the linear version of the formulation called linear tensor networks. With this formulation, we can characterize the convergence direction of the network parameters as singular vectors of a tensor defined by the network. For L-layer linear tensor networks that are orthogonally decomposable, we show that gradient flow on separable classification finds a stationary point of the ℓ2/L max-margin problem in a "transformed" input space defined by the network. For underdetermined regression, we prove that gradient flow finds a global minimum which minimizes a norm-like function that interpolates between weighted ℓ1 and ℓ2 norms in the transformed input space. Our theorems subsume existing results in the literature while removing standard convergence assumptions. We also provide experiments that corroborate our analysis.
poster-presentations
This paper suggests an extension of previous implicit bias results on linear networks to a tensor formulation and arguably weakens some of the assumptions of previous works (e.g. loss going to zero is replaced with initialization assumptions). The reviewers were all positive about this work, saying it is clearly written and an original significant contribution. There were a few issues raised (e.g. the novelty of the proof techniques) and the authors responded. The reviewers did not clarify if this response satisfied these concerns, but did not change their positive scores. I will take this to indicate they still recommend acceptance.
val
[ "QaY6xEpYeB", "CnzDlyBR_O_", "TdMfRNtKMf6", "MpcoON5mTAF", "O1oJey8TPUS", "d2kMU7FaVCu" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewer for the review and the thoughtful comments. Below, we reply to the comments raised by the reviewer. Also, please take a look at our revised manuscript.\n\nRe. proof techniques: It is true that our classification results rely on the results from [Ji & Telgarsky, 2020] and a rather “standa...
[ -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, 2, 4, 4 ]
[ "d2kMU7FaVCu", "MpcoON5mTAF", "O1oJey8TPUS", "iclr_2021_ZsZM-4iMQkH", "iclr_2021_ZsZM-4iMQkH", "iclr_2021_ZsZM-4iMQkH" ]
iclr_2021_tC6iW2UUbJf
What Makes Instance Discrimination Good for Transfer Learning?
Contrastive visual pretraining based on the instance discrimination pretext task has made significant progress. Notably, recent work on unsupervised pretraining has shown to surpass the supervised counterpart for finetuning downstream applications such as object detection and segmentation. It comes as a surprise that image annotations would be better left unused for transfer learning. In this work, we investigate the following problems: What makes instance discrimination pretraining good for transfer learning? What knowledge is actually learned and transferred from these models? From this understanding of instance discrimination, how can we better exploit human annotation labels for pretraining? Our findings are threefold. First, what truly matters for the transfer is low-level and mid-level representations, not high-level representations. Second, the intra-category invariance enforced by the traditional supervised model weakens transferability by increasing task misalignment. Finally, supervised pretraining can be strengthened by following an exemplar-based approach without explicit constraints among the instances within the same category.
poster-presentations
The paper aims at understanding why self-supervised/contrastive learning methods transfer well when used as pretraining for fine-tuning downstream tasks (compared to e.g., supervised pretraining based on the cross-entropy loss). Three reviewers recommend acceptance, whereas one reviewer recommends borderline rejection, arguing the take home message of the paper is not very clear. While this is a legitimate concern, the AC agrees with the majority that the paper does shed light on the differences between supervised and self-supervised pretraining (based on interesting empirical findings) and recommends acceptance.
train
[ "GwONnCL97dW", "JIhkbpmMIc", "LilYpscbrgK", "3Gb-elrp35y", "j4cWKt3eQvP", "SWtxG_ZVTkP", "bgrgBaXPoLB", "09VovtiUIOA", "WpyoPr4hJsa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Authors,\n\nThanks for your response to my concerns.\n\nI think all of them have been addressed and I will keep my original score.\n\nThanks!\n", "We thank the reviewer for the valuable feedback. \n\n \nQ1: It would have been interesting to see whether this presented result comparing MoCo with supervised pr...
[ -1, -1, -1, -1, -1, 8, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "JIhkbpmMIc", "SWtxG_ZVTkP", "bgrgBaXPoLB", "WpyoPr4hJsa", "09VovtiUIOA", "iclr_2021_tC6iW2UUbJf", "iclr_2021_tC6iW2UUbJf", "iclr_2021_tC6iW2UUbJf", "iclr_2021_tC6iW2UUbJf" ]
iclr_2021_cTbIjyrUVwJ
Learning Accurate Entropy Model with Global Reference for Image Compression
In recent deep image compression neural networks, the entropy model plays a critical role in estimating the prior distribution of deep image encodings. Existing methods combine hyperprior with local context in the entropy estimation function. This greatly limits their performance due to the absence of a global vision. In this work, we propose a novel Global Reference Model for image compression to effectively leverage both the local and the global context information, leading to an enhanced compression rate. The proposed method scans decoded latents and then finds the most relevant latent to assist the distribution estimating of the current latent. A by-product of this work is the innovation of a mean-shifting GDN module that further improves the performance. Experimental results demonstrate that the proposed model outperforms the rate-distortion performance of most of the state-of-the-art methods in the industry.
poster-presentations
This paper received moderately good reviews, 3 positives (6, 6, 7) and 1 negative (5). The reviewers are generally positive about the main idea but identified several limitations; performance improvement is marginal compared to existing approaches, the proposed method incurs higher computational complexity, and the presentation is not clear enough. Some of these issues are addressed in the rebuttal, though. Overall, the merits of this work outweigh the drawbacks and I recommend accepting this paper.
train
[ "3HMNfq5Mye9", "HC3QMLshK52", "_HJFDsrk2j", "_jhFaovs8HW", "kuY_XliBUQF", "pYsFFKBNebA", "33y33CMjzl5", "-Zno45rCp4G", "zCuTR69TEHt" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper propose two methods for improve deep image compression performance: (i) Global Reference Module and (ii) Mean-shifting GDN Module (GSDN). (i) Global Reference Module searches over the decoded latents to find the relevant latents to the target latent for improve accuracy of entropy estimate. Authors exte...
[ 6, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2021_cTbIjyrUVwJ", "kuY_XliBUQF", "33y33CMjzl5", "zCuTR69TEHt", "3HMNfq5Mye9", "-Zno45rCp4G", "iclr_2021_cTbIjyrUVwJ", "iclr_2021_cTbIjyrUVwJ", "iclr_2021_cTbIjyrUVwJ" ]
iclr_2021_5jzlpHvvRk
Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search
Designing proper loss functions for vision tasks has been a long-standing research direction to advance the capability of existing models. For object detection, the well-established classification and regression loss functions have been carefully designed by considering diverse learning challenges (e.g. class imbalance, hard negative samples, and scale variances). Inspired by the recent progress in network architecture search, it is interesting to explore the possibility of discovering new loss function formulations via directly searching the primitive operation combinations. So that the learned losses not only fit for diverse object detection challenges to alleviate huge human efforts, but also have better alignment with evaluation metric and good mathematical convergence property. Beyond the previous auto-loss works on face recognition and image classification, our work makes the first attempt to discover new loss functions for the challenging object detection from primitive operation levels and finds the searched losses are insightful. We propose an effective convergence-simulation driven evolutionary search algorithm, called CSE-Autoloss, for speeding up the search progress by regularizing the mathematical rationality of loss candidates via two progressive convergence simulation modules: convergence property verification and model optimization simulation. CSE-Autoloss involves the search space (i.e. 21 mathematical operators, 3 constant-type inputs, and 3 variable-type inputs) that cover a wide range of the possible variants of existing losses and discovers best-searched loss function combination within a short time (around 1.5 wall-clock days with 20x speedup in comparison to the vanilla evolutionary algorithm). We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses across diverse architectures and various datasets. Our experiments show that the best-discovered loss function combinations outperform default combinations (Cross-entropy/Focal loss for classification and L1 loss for regression) by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors on COCO respectively. Our searched losses are available at https://github.com/PerdonLiu/CSE-Autoloss.
poster-presentations
This paper received borderline scores but overall lean positive. The reviewers point out that the paper presents interesting new ideas and an effective solution to the problem of automatically searching for loss functions. The empirical results are convincing, although the baselines are not the strongest possible in terms of absolute performance. Overall, the ACs find that the paper has sufficient novelty and technical contribution to be accepted.
val
[ "NQQKsOgvNdZ", "2hvAPw350MM", "rDnT1FW45T", "rF9SacaL9RF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use an evolutionary search algorithm to search for better loss functions for the classification and regression branch of an object detector. The algorithm starts with 20 primitive mathematical operations. Due to the highly sparse action space, the vanilla evolutionary algorithm would take a ...
[ 7, 6, 6, 5 ]
[ 5, 4, 4, 4 ]
[ "iclr_2021_5jzlpHvvRk", "iclr_2021_5jzlpHvvRk", "iclr_2021_5jzlpHvvRk", "iclr_2021_5jzlpHvvRk" ]
iclr_2021_ldxlzGYWDmW
Effective Abstract Reasoning with Dual-Contrast Network
As a step towards improving the abstract reasoning capability of machines, we aim to solve Raven’s Progressive Matrices (RPM) with neural networks, since solving RPM puzzles is highly correlated with human intelligence. Unlike previous methods that use auxiliary annotations or assume hidden rules to produce appropriate feature representation, we only use the ground truth answer of each question for model learning, aiming for an intelligent agent to have a strong learning capability with a small amount of supervision. Based on the RPM problem formulation, the correct answer filled into the missing entry of the third row/column has to best satisfy the same rules shared between the first two rows/columns.Thus we design a simple yet effective Dual-Contrast Network (DCNet) to exploit the inherent structure of RPM puzzles. Specifically, a rule contrast module is designed to compare the latent rules between the filled row/column and the first two rows/columns; a choice contrast module is designed to increase the relative differences between candidate choices. Experimental results on the RAVEN and PGM datasets show that DCNet outperforms the state-of-the-art methods by a large margin of 5.77%. Further experiments on few training samples and model generalization also show the effectiveness of DCNet. Code is available at https://github.com/visiontao/dcnet.
poster-presentations
This work proposes a new architecture for solving Ravens Progressive Matrices (RPM), a well known form of visual reasoning problem. The method relies on an operation that directly compares the final row and column (completed with different candidate answers) with the first two rows and columns. Doing this allows the network to perform better than previous approaches when measured on an in-distribution test set on two RPM datasets, RAVEN and PGM. As the reviewers pointed out, a strength of this work is the strong performance on the neutral split of these datasets and the fact that the methods do not (unlike some other approaches) require access to any annotations from the dataset other that knowledge of the structure of the RPM task and access to the candidate answers and the correct answer. However, a noted weakness is the fact that the network reflects the structure of the task more directly than other approaches, which means that the insight is specific to the problem of solving RPMs. Another weakness is that the authors focus on the neutral (in-distribution) splits. Reading the PGM paper, it is clear that the neutral split is not really the main focus of that dataset (it accounts for only 1/7 of the dataset), which seems to have been specifically developed as a benchmark for out-of-distribution generalisation. Indeed, the whole point of the RPM task is to measure the ability to induce abstract rules and principles from pixels, but without measuring out-of-distribution generalisation, can we really claim that any model has induced a 'rule'? The authors mitigate this issue to a small degree during the rebuttal by adding scores on the 'interpolation' and 'extrapolation' splits of the PGM dataset, but still do not consider the other splits where rule application is most clearly tested. I note that the weakness described above also applies to lots of other published work involving PGM and RAVEN datasets. In summary, this is a well-executed, neat piece of work that shows a better way to fit a large dataset by incorporating knowledge of the structure of the data into the task. Because it does not consider the full benchmarks, only the in-distribution splits, it falls short of showing that this enables better induction of abstract principles or rules. On the majority opinion of the reviewers, and because there are no scientific flaws in the work, I recommend acceptance with weak confidence pending wider calibration across the program.
train
[ "pD3aiZwoINn", "YUtq7Q4YqjE", "LNOjjwGqldd", "SZRua2NEaRQ", "-EntW5kB12o", "OZOzQASibO4", "g9CBp7SFcU5", "QoEunSd4N_P", "_Mhretjr7SU", "XlCtZdtw2yV", "DWIkTm_fLc", "JjwqKkV3m0s", "yBJfOmcrOq", "b-P1pswHCov" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "public", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes a neural network based approach called Dual-Contrast Network (DCNet) to solve Raven’s Progressive Matrices (RPM). The approach consists of a rule contrast module that compares the latent rules between the unfilled (third) row/column and the filled (first and second) rows/columns, a c...
[ 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_ldxlzGYWDmW", "iclr_2021_ldxlzGYWDmW", "YUtq7Q4YqjE", "JjwqKkV3m0s", "yBJfOmcrOq", "b-P1pswHCov", "pD3aiZwoINn", "XlCtZdtw2yV", "DWIkTm_fLc", "YUtq7Q4YqjE", "JjwqKkV3m0s", "iclr_2021_ldxlzGYWDmW", "iclr_2021_ldxlzGYWDmW", "iclr_2021_ldxlzGYWDmW" ]
iclr_2021_7aogOj_VYO0
Do not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning
The privacy leakage of the model about the training data can be bounded in the differential privacy mechanism. However, for meaningful privacy parameters, a differentially private model degrades the utility drastically when the model comprises a large number of trainable parameters. In this paper, we propose an algorithm \emph{Gradient Embedding Perturbation (GEP)} towards training differentially private deep models with decent accuracy. Specifically, in each gradient descent step, GEP first projects individual private gradient into a non-sensitive anchor subspace, producing a low-dimensional gradient embedding and a small-norm residual gradient. Then, GEP perturbs the low-dimensional embedding and the residual gradient separately according to the privacy budget. Such a decomposition permits a small perturbation variance, which greatly helps to break the dimensional barrier of private learning. With GEP, we achieve decent accuracy with low computational cost and modest privacy guarantee for deep models. Especially, with privacy bound ϵ=8, we achieve 74.9% test accuracy on CIFAR10 and 95.1% test accuracy on SVHN, significantly improving over existing results.
poster-presentations
The paper introduces a method for differentially private deep learning, which the authors term Gradient Embedding Perturbation. This is similar to several (roughly) concurrent works, which project gradients to a subspace based on some auxiliary public data. However, a crucial difference involves the use of the residual gradients, which allows the method to achieve the first significant accuracy gains using subspace projection. The reviewers believe this method will be important for the practice of DP deep learning.
train
[ "-bb4xa9RGI", "C5SQHBLtkUv", "BrpndPeRZl8", "_vjFj6qYYGL", "NdIXO-xkomL", "zIFt3PO584P", "bdzLXBlhCHD", "WfoUQgmM4W0", "KIZVtQhMeA5", "yf7VfQMhwOW", "saTHmd3H_-j", "nlt5w-nbNvf", "JIXNisX95-c", "jOWcVg2ddVb", "HAt2F2hylk", "pMvuMpXMbXS", "yO0OQoDZxI" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper focuses on training deep learning models under differential privacy. The main contribution is a technique based on projecting the gradient into a non-sensitive anchor subspace and then perturbing the projected gradient (and the residual).\n\nComments:\n\nIt is well-known that a straightforward ...
[ 6, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 9 ]
[ 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_7aogOj_VYO0", "BrpndPeRZl8", "HAt2F2hylk", "NdIXO-xkomL", "KIZVtQhMeA5", "iclr_2021_7aogOj_VYO0", "WfoUQgmM4W0", "nlt5w-nbNvf", "jOWcVg2ddVb", "pMvuMpXMbXS", "yf7VfQMhwOW", "yO0OQoDZxI", "iclr_2021_7aogOj_VYO0", "zIFt3PO584P", "-bb4xa9RGI", "iclr_2021_7aogOj_VYO0", "iclr_2...
iclr_2021_04ArenGOz3
Set Prediction without Imposing Structure as Conditional Density Estimation
Set prediction is about learning to predict a collection of unordered variables with unknown interrelations. Training such models with set losses imposes the structure of a metric space over sets. We focus on stochastic and underdefined cases, where an incorrectly chosen loss function leads to implausible predictions. Example tasks include conditional point-cloud reconstruction and predicting future states of molecules. In this paper we propose an alternative to training via set losses, by viewing learning as conditional density estimation. Our learning framework fits deep energy-based models and approximates the intractable likelihood with gradient-guided sampling. Furthermore, we propose a stochastically augmented prediction algorithm that enables multiple predictions, reflecting the possible variations in the target set. We empirically demonstrate on a variety of datasets the capability to learn multi-modal densities and produce different plausible predictions. Our approach is competitive with previous set prediction models on standard benchmarks. More importantly, it extends the family of addressable tasks beyond those that have unambiguous predictions.
poster-presentations
The paper proposes to predict sets using conditional density estimates. The conditional densities of the reponse set given the observed features is modeled through an energy based function. The energy function can be specified using tailored neural nets like deep sets and is trained trough approximate negative log likelihoods using sampling. The paper was nice to read and was liked by all the reviewers. The one thing that stood out to me was the emphasis on multi-modality. (multi appears 51 times). This could be toned down because little is said about the quality relative to the true p(Y | x) and the focus is mainly on the lack of this in existing work.
train
[ "Tgwd_d-_Npm", "NSqu2Ip5xx_", "4ZSFBXRO5Cu", "LUUQGAVwzbO", "N9NGTg0Ied-", "TB6N8X3LAwB", "bihrmXMqbzO", "MRXiw9V6aS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors propose a new method for formulating set prediction tasks. They propose to use a noisy energy-based model with langevin mcmc + noisy startup as their model. The can approximate the gradient of the likelihood function by computing the enery of ground truth pairs and energy of synthesized pairs where the tar...
[ 7, 6, -1, -1, -1, -1, 7, 6 ]
[ 3, 2, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_04ArenGOz3", "iclr_2021_04ArenGOz3", "Tgwd_d-_Npm", "bihrmXMqbzO", "NSqu2Ip5xx_", "MRXiw9V6aS", "iclr_2021_04ArenGOz3", "iclr_2021_04ArenGOz3" ]
iclr_2021_e12NDM7wkEY
Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively. We also show that the softmax-formulated constraints are compatible with various neural networks.
poster-presentations
This paper received mostly positive reviews. The reviewers praised the strong performance when compared with previous work. Also, the evaluation clearly shows the benefit of the proposed contributions in terms of performance. Most concerns raised by reviewers were properly addressed in the rebuttal. Lack of comparison to several previous works has been noted in a comment, but the authors clarified this concern, stating that the current work is a “large deviation from prior works”. The authors promised to include the missing references into the comparison. Given the reviews, comments, and author's answers, I suggest acceptance.
train
[ "9j-_Y0msw1e", "LRDUq_iFQul", "fKNhBOoO2LG", "fqlDtSHdCs", "C1PwZ8g84Qa", "0j1wGWTRiIz", "duL-NYc2EeF", "XI8l6irhAbr", "DOs_-Cjr11r", "ZVDdaaQyBpl" ]
[ "author", "public", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We really appreciate your time and valuable comments. We have carefully revised our paper, taking into consideration reviewers’ and public comments. The main revisions are summarized as follows.\n\n1. We added the description about two previous works, one of which has achieved the state-of-the art clustering accur...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2021_e12NDM7wkEY", "0j1wGWTRiIz", "XI8l6irhAbr", "DOs_-Cjr11r", "ZVDdaaQyBpl", "duL-NYc2EeF", "iclr_2021_e12NDM7wkEY", "iclr_2021_e12NDM7wkEY", "iclr_2021_e12NDM7wkEY", "iclr_2021_e12NDM7wkEY" ]
iclr_2021_Xh5eMZVONGF
Language-Agnostic Representation Learning of Source Code from Structure and Context
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. Traditionally, designers of machine learning models have relied predominantly either on Structure or Context. We propose a new model, which jointly learns on Context and Structure of source code. In contrast to previous approaches, our model uses only language-agnostic features, i.e., source code and features that can be computed directly from the AST. Besides obtaining state-of-the-art on monolingual code summarization on all five programming languages considered in this work, we propose the first multilingual code summarization model. We show that jointly training on non-parallel data from multiple programming languages improves results on all individual languages, where the strongest gains are on low-resource languages. Remarkably, multilingual training only from Context does not lead to the same improvements, highlighting the benefits of combining Structure and Context for representation learning on code.
poster-presentations
The paper gives an extension of the transformer model that is suited to computing representations of source code. The main difference from transformers is that the model takes in a program's abstract syntax tree (AST) in addition to its sequence representation, and utilizes several pairwise distance measures between AST nodes in the self-attention operation. The model is evaluated on the task of code summarization for 5 different languages and shown to beat two state-of-the-art models. One interesting observation is that a model trained on data from all languages outperforms the monolingual version of the model. The reviewers generally liked the paper. The technical idea is simple, but the evaluation is substantial and makes a convincing case about setting a new state of the art. The observation about multilingual models is also interesting. While there were a few concerns, many of these were addressed in the authors' responses, and the ones that remain seem minor. Given this, I am recommending acceptance as a poster. Please incorporate the reviewers' comments in the final version.
train
[ "Li5x6Q5t-JD", "xhMv44ghYUe", "TGqn-7QjgEX", "11MKzS3uG3R", "wvJPql8D16", "HRNIxYxA1dg", "EsFKHnruw3n", "O-VCZP01tJk", "57tZ0O0gW5s", "oFK3_Nlt77M", "zeg1z985Jj9", "7u0_66yUWG", "qIsImVMxo3B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the explanations. I appreciate the fixed. The only small remaining remark is that it is still not correct to call a baseline system to be the GREAT algorithm if it doesn't use the same (or at least reimplemented similar) algorithm. It can be called transformer or anything else, but it is not the same...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "EsFKHnruw3n", "wvJPql8D16", "iclr_2021_Xh5eMZVONGF", "HRNIxYxA1dg", "TGqn-7QjgEX", "zeg1z985Jj9", "7u0_66yUWG", "qIsImVMxo3B", "oFK3_Nlt77M", "iclr_2021_Xh5eMZVONGF", "iclr_2021_Xh5eMZVONGF", "iclr_2021_Xh5eMZVONGF", "iclr_2021_Xh5eMZVONGF" ]
iclr_2021_eo6U4CAwVmg
Training GANs with Stronger Augmentations via Contrastive Discriminator
Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting. It is still unclear, however, that which augmentations could actually improve GANs, and in particular, how to apply a wider range of augmentations in training. In this paper, we propose a novel way to address these questions by incorporating a recent contrastive representation learning scheme into the GAN discriminator, coined ContraD. This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability, thereby preventing the discriminator overfitting issue in GANs more effectively. Even better, we observe that the contrastive learning itself also benefits from our GAN training, i.e., by maintaining discriminative features between real and fake samples, suggesting a strong coherence between the two worlds: good contrastive representations are also good for GAN discriminators, and vice versa. Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations, still maintaining highly discriminative features in the discriminator in terms of the linear evaluation. Finally, as a byproduct, we also show that our GANs trained in an unsupervised manner (without labels) can induce many conditional generative models via a simple latent sampling, leveraging the learned features of ContraD. Code is available at https://github.com/jh-jeong/ContraD.
poster-presentations
This paper aims to improve the training of generative adversarial networks (GANs) by incorporating the principle of contrastive learning into the training of discriminators in GANs. Unlike in an ordinary GAN which seeks to minimize the GAN loss directly, the proposed GAN variant with a contrastive discriminator (ContraD) uses the discriminator network to first learn a contrastive representation from a given set of data augmentations and real/generated examples and then train a discriminator based on the learned contrastive representation. It is noticed that a side effect of such blending is the improvement in contrastive learning as a result of GAN training. The resulting GAN model with a contrastive discriminator is shown to outperform other techniques using data augmentation. **Strengths:** * It proposes a new way of training the discriminators of GANs based on the principle of contrastive learning. * The paper is generally well written to articulate the main points that the authors want to convey. * The experimental evaluation is well designed and comprehensive. **Weaknesses:** * Even though the proposed learning scheme is novel, the building blocks are based on existing techniques in GAN and contrastive learning. * The claim that GAN helps contrastive learning is not fully substantiated. * It is claimed in the paper that the proposed contrastive discriminator can lead to much stronger augmentations *without catastrophic forgetting*. However, this “catastrophic forgetting” aspect is not really empirically validated in the experiments. * The writing has room for improvement. Despite its weaknesses, this paper explores a novel direction of training GANs that would be of interest to the research community.
train
[ "sYACn1zjun8", "9EI99yALVt", "gbroPcaasof", "_SmulYOzFmr", "CHppbK8-kOS", "ksybXJH7sR_", "cZYbo5eZadr", "thlGma-9lFm", "cCY1tXOW5Y9", "wBJI5oXyTL4", "SjEC9UWf-Vg", "udrw3p_xDjk", "CBVk3av3VB7", "bQdXheOIF0n", "wWtYkekwup", "fqyvlqreIbH" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**Summary**\nThe manuscript proposes ContraD - a method that incorporates the recent SimCLR self-supervised learning method for images into the GAN training framework. Experimental results show that the proposed method consistently improves on strong baselines in terms of FID scores.\n\n**Score justification**\nTh...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_eo6U4CAwVmg", "_SmulYOzFmr", "iclr_2021_eo6U4CAwVmg", "ksybXJH7sR_", "cZYbo5eZadr", "thlGma-9lFm", "CBVk3av3VB7", "udrw3p_xDjk", "iclr_2021_eo6U4CAwVmg", "wWtYkekwup", "fqyvlqreIbH", "gbroPcaasof", "wBJI5oXyTL4", "sYACn1zjun8", "iclr_2021_eo6U4CAwVmg", "iclr_2021_eo6U4CAwVmg...
iclr_2021_xHKVVHGDOEk
Influence Functions in Deep Learning Are Fragile
Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation. A commonly-used (first-order) influence function can be implemented efficiently as a post-hoc method requiring access only to the gradients and Hessian of the model. For linear models, influence functions are well-defined due to the convexity of the underlying loss function and are generally accurate even across difficult settings where model changes are fairly large such as estimating group influences. Influence functions, however, are not well-understood in the context of deep learning with non-convex loss functions. In this paper, we provide a comprehensive and large-scale empirical study of successes and failures of influence functions in neural network models trained on datasets such as Iris, MNIST, CIFAR-10 and ImageNet. Through our extensive experiments, we show that the network architecture, its depth and width, as well as the extent of model parameterization and regularization techniques have strong effects in the accuracy of influence functions. In particular, we find that (i) influence estimates are fairly accurate for shallow networks, while for deeper networks the estimates are often erroneous; (ii) for certain network architectures and datasets, training with weight-decay regularization is important to get high-quality influence estimates; and (iii) the accuracy of influence estimates can vary significantly depending on the examined test points. These results suggest that in general influence functions in deep learning are fragile and call for developing improved influence estimation methods to mitigate these issues in non-convex setups.
poster-presentations
This paper examines under what conditions influence estimation can be applied to deep networks and finds that, among of items, that influence estimates are poorer for deeper architectures, perhaps due to poor inverse Hessian vector approximations for poor for deeper models. The authors provide an extensive experimental evaluation across datasets and architectures, and demonstrates the fragility of influence estimates in a number of conditions. Although the reviewers noted that these issues are now "folk knowledge", there has been less scientific effort in identifying these failures. Of course, more theoretical understanding would help the community better understand where these fragilities lie, but the experimental evaluation is sufficiently strong to be of broad interest to the community.
train
[ "UNFotJ1fIl0", "n2gNeYmGg0b", "cGidtFC-q7r", "Ri4x9RFmNB", "LRTbACL1ys", "bnJ5XDP-yIT", "4E0iuhFrBCe", "0w4PFxiE1pY", "hZSUGrCtoG" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Strength: \n\n+ The paper provides an interesting application of Influence functions that were introduced in Koh et al. for studying data poisoning attacks. The main idea of the paper given in Section 3 is that an approximation of the influence or impact of a training sample on the test set can be obtained using s...
[ 6, 7, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_xHKVVHGDOEk", "iclr_2021_xHKVVHGDOEk", "n2gNeYmGg0b", "UNFotJ1fIl0", "0w4PFxiE1pY", "hZSUGrCtoG", "Ri4x9RFmNB", "iclr_2021_xHKVVHGDOEk", "iclr_2021_xHKVVHGDOEk" ]
iclr_2021_8HhkbjrWLdE
Separation and Concentration in Deep Networks
Numerical experiments demonstrate that deep neural network classifiers progressively separate class distributions around their mean, achieving linear separability on the training set, and increasing the Fisher discriminant ratio. We explain this mechanism with two types of operators. We prove that a rectifier without biases applied to sign-invariant tight frames can separate class means and increase Fisher ratios. On the opposite, a soft-thresholding on tight frames can reduce within-class variabilities while preserving class means. Variance reduction bounds are proved for Gaussian mixture models. For image classification, we show that separation of class means can be achieved with rectified wavelet tight frames that are not learned. It defines a scattering transform. Learning 1×1 convolutional tight frames along scattering channels and applying a soft-thresholding reduces within-class variabilities. The resulting scattering network reaches the classification accuracy of ResNet-18 on CIFAR-10 and ImageNet, with fewer layers and no learned biases.
poster-presentations
After reading the author’s response, all reviewers recommend accepting the paper. The authors provided an extensive response carefully considering all reviewers' comments. After incorporating the feedback, the manuscript improved in terms of presentation, relation to the literature and empirical results. The paper is very well written and motivated. On top of the insightful analysis, experimental results are strong, obtaining comparable performance to that of a ResNet-18 on ImageNet. R1 and R3 strongly support the paper while R2 and R4 consider it borderline. R2 raised questions about experimental details and reproducibility. While R2 did not comment, these concerns were very clearly addressed by the authors in the view of the AC. R4 was initially concerned with the novelty of the approach, but changed their mind after the author's response. The AC encourages the authors to further consider the feedback provided by the reviewer after the discussion period was over.
val
[ "fX2exoNJPZY", "DtvMSsbVNYJ", "TewlBjG5bQi", "qCTALg_G2VC", "vqO0brBHD34", "pcH5NFdKJ0h", "ImRfhJkkDdH", "UG-X9MdzA3c", "CMLBUWUA74u", "uY8kPP2gTA1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes using networks that are composed of tight frames to analyze the clustering property of networks across layers.\nYet, the main focus of the paper in the first part is to construct the tight frame-based networks and then in the second part to train scattering transforms based networks. \nWhile the...
[ 6, 7, -1, 8, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_8HhkbjrWLdE", "iclr_2021_8HhkbjrWLdE", "pcH5NFdKJ0h", "iclr_2021_8HhkbjrWLdE", "fX2exoNJPZY", "qCTALg_G2VC", "iclr_2021_8HhkbjrWLdE", "DtvMSsbVNYJ", "uY8kPP2gTA1", "iclr_2021_8HhkbjrWLdE" ]
iclr_2021_5NA1PinlGFu
Colorization Transformer
We present the Colorization Transformer, a novel approach for diverse high fidelity image colorization based on self-attention. Given a grayscale image, the colorization proceeds in three steps. We first use a conditional autoregressive transformer to produce a low resolution coarse coloring of the grayscale image. Our architecture adopts conditional transformer layers to effectively condition grayscale input. Two subsequent fully parallel networks upsample the coarse colored low resolution image into a finely colored high resolution image. Sampling from the Colorization Transformer produces diverse colorings whose fidelity outperforms the previous state-of-the-art on colorising ImageNet based on FID results and based on a human evaluation in a Mechanical Turk test. Remarkably, in more than 60\% of cases human evaluators prefer the highest rated among three generated colorings over the ground truth. The code and pre-trained checkpoints for Colorization Transformer are publicly available at https://github.com/google-research/google-research/tree/master/coltran
poster-presentations
The paper initially received a mixed rating, with two reviewers rate the paper below the bar and two above the bar. The raised concerns include the need for an autoregressive model for upsampling and the effect of batch sizes. These concerns were well-addressed in the rebuttal. Both of the reviewers that originally rated the paper below the bar raise the scores. After consulting the paper, the reviews, and the rebuttal, the AC agrees that the paper has its merits and is happy to accept the paper.
val
[ "IUQ6RLQFgRR", "Ch646_YhDE1", "lSNRY4-0hgd", "v6Tpi142mBb", "CXBmHZzufTC", "o3RTc03AFF_", "TPTSk_hj0NC", "pa0_QH7KcHr", "5MoaqioViEY", "eb8g8LJaz96", "82NU_eMgcsV", "WDPGI8V0y5", "ZCar2M-smP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Update: I really appreciate the authors' efforts to address my original concerns. I believe that this work is a nice application of transformers to image colorization. The paper is well-written and the performance of proposed transformer architecture is strong. I think that this work is above the threshold of acce...
[ 7, 7, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_5NA1PinlGFu", "iclr_2021_5NA1PinlGFu", "iclr_2021_5NA1PinlGFu", "iclr_2021_5NA1PinlGFu", "iclr_2021_5NA1PinlGFu", "iclr_2021_5NA1PinlGFu", "v6Tpi142mBb", "Ch646_YhDE1", "v6Tpi142mBb", "v6Tpi142mBb", "lSNRY4-0hgd", "IUQ6RLQFgRR", "lSNRY4-0hgd" ]
iclr_2021_kmqjgSNXby
Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization
Standard dynamics models for continuous control make use of feedforward computation to predict the conditional distribution of next state and reward given current state and action using a multivariate Gaussian with a diagonal covariance structure. This modeling choice assumes that different dimensions of the next state and reward are conditionally independent given the current state and action and may be driven by the fact that fully observable physics-based simulation environments entail deterministic transition dynamics. In this paper, we challenge this conditional independence assumption and propose a family of expressive autoregressive dynamics models that generate different dimensions of the next state and reward sequentially conditioned on previous dimensions. We demonstrate that autoregressive dynamics models indeed outperform standard feedforward models in log-likelihood on heldout transitions. Furthermore, we compare different model-based and model-free off-policy evaluation (OPE) methods on RL Unplugged, a suite of offline MuJoCo datasets, and find that autoregressive dynamics models consistently outperform all baselines, achieving a new state-of-the-art. Finally, we show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer through data augmentation and improving performance using model-based planning.
poster-presentations
The paper is about the use of autoregressive dynamics models in the context of offline model-based reinforcement learning. After reading the authors' responses and the other reviews, the reviewers agree that this paper has several strengths (well written, easy to follow, the approach is novel and simple to implement, the empirical evaluation is well executed and the results are reproducible) and it deserves acceptance. The authors need to update their manuscript by keeping into considerations all the suggestions provided by the reviewers (clarifications and additional empirical comparisons).
train
[ "adb2Dse6KU6", "SVU48SmVlI3", "8tZr90JtFdr", "Er2amADQ1cj", "55FU7_M59yA", "BCBi16wVPVu", "pbLhyelQya", "-E2PeTM2At" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "#### Summary\n\nThe authors consider the usage of autoregressive dynamics models for batch model-based RL, where state-variable/reward predictions are performed sequentially conditioned on previously-predicted variables. Extensive numerical results are provided in several continuous domains for both policy evaluat...
[ 7, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_kmqjgSNXby", "Er2amADQ1cj", "BCBi16wVPVu", "adb2Dse6KU6", "pbLhyelQya", "-E2PeTM2At", "iclr_2021_kmqjgSNXby", "iclr_2021_kmqjgSNXby" ]
iclr_2021_6YEQUn0QICG
FedBN: Federated Learning on Non-IID Features via Local Batch Normalization
The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data and hence improving data privacy. In most cases, the assumption of independent and identically distributed samples across local clients does not hold for federated learning setups. Under this setting, neural network training performance may vary significantly according to the data distribution and even hurt training convergence. Most of the previous work has focused on a difference in the distribution of labels or client shifts. Unlike those settings, we address an important problem of FL, e.g., different scanners/sensors in medical imaging, different scenery distribution in autonomous driving (highway vs. city), where local clients store examples with different distributions compared to other clients, which we denote as feature shift non-iid. In this work, we propose an effective method that uses local batch normalization to alleviate the feature shift before averaging models. The resulting scheme, called FedBN, outperforms both classical FedAvg, as well as the state-of-the-art for non-iid data (FedProx) on our extensive experiments. These empirical results are supported by a convergence analysis that shows in a simplified setting that FedBN has a faster convergence rate than FedAvg. Code is available at https://github.com/med-air/FedBN.
poster-presentations
The paper addresses the problem of batch normalization (BN) in federated learning, which is of great interest to the community including practitioners. The proposed method here simply excludes the BN parameters from the aggregation, and evolves them locally. As a main contribution, reviewers particularly liked the solid justification of the proposed scheme, both with substantial theory and extensive experiments. Presentation style can be slightly improved, the usage at test time can be clarified more, and some references mentioned by R3 should be added, but this overall does not affect the strong level of contributions present in the work, and the discussion phase with the authors was already constructive.
train
[ "4AaSIhpxO5M", "ElopFJca-KR", "kKunqdk3PDQ", "oQV8ZtnwoFb", "DB4WnBhMJtL", "ugDU0fozHAW", "KmXG4Xu3cV_", "SSuYHDeo6q0", "on_38WH2Ljb", "WsQKX4gM6VC", "u0r7kLlnLbg", "vvnrBagGFwg", "JuE5EFkV_m5", "JkIAsoDLW9m", "ZTJezEskkiJ", "1iOoO327bM2" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update following answer:\n\nThanks for your detailed answer, which confirms me in my initial assessment.\n\n------\n\n\n1/ Summary of the paper\n\nThis paper introduces the use of local batch normalisation layers in order to circumvent data shifts issues in FL, called FedBN.\nBuilding on a simplified model of BN (...
[ 8, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_6YEQUn0QICG", "ZTJezEskkiJ", "1iOoO327bM2", "1iOoO327bM2", "JkIAsoDLW9m", "iclr_2021_6YEQUn0QICG", "1iOoO327bM2", "1iOoO327bM2", "4AaSIhpxO5M", "ugDU0fozHAW", "ZTJezEskkiJ", "ZTJezEskkiJ", "iclr_2021_6YEQUn0QICG", "WsQKX4gM6VC", "iclr_2021_6YEQUn0QICG", "iclr_2021_6YEQUn0QIC...
iclr_2021_fmOOI2a3tQP
Learning Robust State Abstractions for Hidden-Parameter Block MDPs
Many control tasks exhibit similar dynamics that can be modeled as having common latent structure. Hidden-Parameter Markov Decision Processes (HiP-MDPs) explicitly model this structure to improve sample efficiency in multi-task settings. However, this setting makes strong assumptions on the observability of the state that limit its application in real-world scenarios with rich observation spaces. In this work, we leverage ideas of common structure from the HiP-MDP setting, and extend it to enable robust state abstractions inspired by Block MDPs. We derive instantiations of this new framework for both multi-task reinforcement learning (MTRL) and meta-reinforcement learning (Meta-RL) settings. Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks, rather than the number of tasks, a significant improvement over prior work. To further demonstrate efficacy of the proposed method, we empirically compare and show improvement over multi-task and meta-reinforcement learning baselines.
poster-presentations
The paper addresses the problem of learning and exploiting common (latent) task structure in multi-task reinforcement learning settings. The authors introduce a new formalism for capturing this type of structure and derive a gradient-based learning algorithm. They provide novel theoretical insights and strong empirical results. Reviewers initially raised several concerns, regarding assumptions and especially accessibility of the paper (and in particular theoretical discussions). The majority of these concerns have been addressed in the detailed rebuttal. The resulting consensus is to accept the paper. Authors are encouraged to continue to improve accessibility of the paper for the camera ready submission.
train
[ "yCsUtB6tqac", "NIQ82AGSfpI", "gURa6X5Yhb", "k6JJrFGW0Gu", "83FzptQ-LTz", "eGmCqeBShWG", "o0ilfeZEh7", "FCdJFgI2o9d", "E5H5PhnOHJa", "y8ngIgT-X15", "Q5ZyXOYR507", "RQmm50Xo4a5" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for reading our rebuttal and changing your score. And thank you for the reference to a meta-RL paper from pixels! We will try HiP-BMDP in this setting for a comparison to strengthen the paper.", "### Summary\n\nThe authors combine hidden-parameter MDPs and state abstractions to model multi-task problem...
[ -1, 7, -1, 7, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "83FzptQ-LTz", "iclr_2021_fmOOI2a3tQP", "o0ilfeZEh7", "iclr_2021_fmOOI2a3tQP", "E5H5PhnOHJa", "RQmm50Xo4a5", "FCdJFgI2o9d", "NIQ82AGSfpI", "k6JJrFGW0Gu", "Q5ZyXOYR507", "iclr_2021_fmOOI2a3tQP", "iclr_2021_fmOOI2a3tQP" ]
iclr_2021_Ti87Pv5Oc8
Meta-Learning with Neural Tangent Kernels
Model Agnostic Meta-Learning (MAML) has emerged as a standard framework for meta-learning, where a meta-model is learned with the ability of fast adapting to new tasks. However, as a double-looped optimization problem, MAML needs to differentiate through the whole inner-loop optimization path for every outer-loop training step, which may lead to both computational inefficiency and sub-optimal solutions. In this paper, we generalize MAML to allow meta-learning to be defined in function spaces, and propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK). Within this paradigm, we introduce two meta-learning algorithms in the RKHS, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework. We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory. Extensive experimental studies demonstrate advantages of our paradigm in both efficiency and quality of solutions compared to related meta-learning algorithms. Another interesting feature of our proposed methods is that they are demonstrated to be more robust to adversarial attacks and out-of-distribution adaptation than popular baselines, as demonstrated in our experiments.
poster-presentations
This paper considers meta-learning based on MAML. The authors use Neural Tangent Kernels (NTKs) to develop two meta-learning algorithms that avoid the inner-loop adaptation, which makes MAML computationally intensive. Experimental results demonstrate favorable empirical performance over existing methods. The paper is generally well written and readable. The proposed methods are well motivated and based on solid theoretical ground. The emprirical performance shows advantages in efficiency and quality. This work is worth acceptence in ICLR 2021.
train
[ "Tog3OhiiwzV", "yDhq4jYxDS", "dCDqhGeGMf6", "28LkwDjPWQs", "dTDx2XDhWV", "Gx0jnaY_ITu", "VkqbBywbB60", "FGQqZ62sqEX", "eFn5g3RtvPv", "CLoUOYU6DJ2" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nIn this paper, the authors view MAML from the lens of Reproducing Hilbert Kernel Hilbert Spaces (RKHS) by applying tools from the theory of Neural Tangent Kernels (NTKs). Based on these insights, they develop two meta-learning algorithms that avoid gradient-based inner-loop adaptation. Their algorithms ...
[ 7, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 2, 4 ]
[ "iclr_2021_Ti87Pv5Oc8", "CLoUOYU6DJ2", "iclr_2021_Ti87Pv5Oc8", "dTDx2XDhWV", "FGQqZ62sqEX", "eFn5g3RtvPv", "Tog3OhiiwzV", "iclr_2021_Ti87Pv5Oc8", "iclr_2021_Ti87Pv5Oc8", "iclr_2021_Ti87Pv5Oc8" ]
iclr_2021_8xeBUgD8u9
Continual learning in recurrent neural networks
While a diverse collection of continual learning (CL) methods has been proposed to prevent catastrophic forgetting, a thorough investigation of their effectiveness for processing sequential data with recurrent neural networks (RNNs) is lacking. Here, we provide the first comprehensive evaluation of established CL methods on a variety of sequential data benchmarks. Specifically, we shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs. In contrast to feedforward networks, RNNs iteratively reuse a shared set of weights and require working memory to process input samples. We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements, which lead to an increased need for stability at the cost of decreased plasticity for learning subsequent tasks. We additionally provide theoretical arguments supporting this interpretation by studying linear RNNs. Our study shows that established CL methods can be successfully ported to the recurrent case, and that a recent regularization approach based on hypernetworks outperforms weight-importance methods, thus emerging as a promising candidate for CL in RNNs. Overall, we provide insights on the differences between CL in feedforward networks and RNNs, while guiding towards effective solutions to tackle CL on sequential data.
poster-presentations
I agree with the reviewers, and I find the careful analysis of CL approaches relying on regularization for RNN useful and insightful. I do feel that a lot of the interesting content is still in the appendix (from a quick skim and looking at the plots in the appendix) but I think something like this can potentially be unavoidable. I do like the separation between sequence length and memory requirements. I think making observations about different types of recurrent architectures is hard, but I think the paper does a good job to raise some interesting questions. A note that I would make (that I haven't seen raised through a quick look in the paper) is that is not clear how the Fisher Information Matrix should be computed in case of a recurrent model (which is a problem in general). E.g. a typical thing is to compute it as for a feed-forward model (using the gradients coming from BPTT) which is feasible computationally, but actually that is problematic as you first sum gradients before taking their outer-product rather than summing the outer-products corresponding of the different terms in the gradient. I'm wondering if that plays a role here as well. Overall I think the paper does careful analysis and ablation studies and raises some interesting observation of how one should approach CL algorithms for RNN models.
train
[ "2x05K3R-8TX", "STMFQzmD9g0", "_si-54gYUFS", "FJ921npMm1W", "pDxMuv5UwjV", "VfWl6R-fUXg", "yvG2fdeCsSO", "RPpv2sMbfTx" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe authors do an evaluation of the application of weight-importance continual learning methods to recurrent neural networks (RNNs). They draw out the tradeoff between complexity of precessing and just remembering (working memory) in terms of the applicability of these weight importance methods. They a...
[ 7, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_8xeBUgD8u9", "yvG2fdeCsSO", "RPpv2sMbfTx", "pDxMuv5UwjV", "2x05K3R-8TX", "iclr_2021_8xeBUgD8u9", "iclr_2021_8xeBUgD8u9", "iclr_2021_8xeBUgD8u9" ]
iclr_2021_ZK6vTvb84s
A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention
We address the problem of learning on sets of features, motivated by the need of performing pooling operations in long biological sequences of varying sizes, with long-range dependencies, and possibly few labeled data. To address this challenging task, we introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference. Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost. Our aggregation technique admits two useful interpretations: it may be seen as a mechanism related to attention layers in neural networks, or it may be seen as a scalable surrogate of a classical optimal transport-based kernel. We experimentally demonstrate the effectiveness of our approach on biological sequences, achieving state-of-the-art results for protein fold recognition and detection of chromatin profiles tasks, and, as a proof of concept, we show promising results for processing natural language sequences. We provide an open-source implementation of our embedding that can be used alone or as a module in larger learning models at https://github.com/claying/OTK.
poster-presentations
All reviewers agreed that the paper proposes some interesting and novel ideas on the use of OT for pooling. It also provides some nice insights and strong experimental results. As suggested by one of the reviewer, a discussion about the impact of the number of references may be of interest though.
test
[ "d7rq4cqrlo", "tlLgYH504Wf", "5bIbdP_46Gw", "V0GRWo7dJl", "1MPUyQrLNX", "hXvqmKu8dgG", "1sPmTCsRkR", "38vfg9_DABA", "5vM8iwoHn5D", "0z-mF1x74-k" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a kernel embedding for a set/sequence of features, based on the optimal transport distance to a set of references, leading to a fixed-dimensional embedding of variable length sequences.\nThe set of references can be obtained as cluster centers over the full dataset (unsupervised), or learned ba...
[ 6, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, 2, 4, 2 ]
[ "iclr_2021_ZK6vTvb84s", "iclr_2021_ZK6vTvb84s", "5vM8iwoHn5D", "iclr_2021_ZK6vTvb84s", "0z-mF1x74-k", "38vfg9_DABA", "d7rq4cqrlo", "iclr_2021_ZK6vTvb84s", "iclr_2021_ZK6vTvb84s", "iclr_2021_ZK6vTvb84s" ]
iclr_2021_h0de3QWtGG
Learning "What-if" Explanations for Sequential Decision-Making
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior--i.e. trajectories of observations and actions made by an expert maximizing some unknown reward function--is essential for introspecting and auditing policies in different institutions. In this paper, we propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to ``"what if'' outcomes: Given the current history of observations, what would happen if we took a particular action? To learn these cost-benefit tradeoffs associated with the expert's actions, we integrate counterfactual reasoning into batch inverse reinforcement learning. This offers a principled way of defining reward functions and explaining expert behavior, and also satisfies the constraints of real-world decision-making---where active experimentation is often impossible (e.g. in healthcare). Additionally, by estimating the effects of different actions, counterfactuals readily tackle the off-policy nature of policy evaluation in the batch setting, and can naturally accommodate settings where the expert policies depend on histories of observations rather than just current states. Through illustrative experiments in both real and simulated medical environments, we highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
poster-presentations
This paper presents a counterfactual approach to interpret aspects within a sequential decision-making setup. The reviewers have reacted to each others' comments as well as the authors' response to their views. I am recommending acceptance of this paper, as it targets an interesting problem and presents an intriguing approach. I think the community would appreciate further discussing this paper at the conference.
train
[ "7hG8k1Kr40z", "jVbNtjdf1W", "2v6J0s5m3-H", "3aqHqlgwIwY", "SxnrLjkd5OR", "AOFDQVjiUC8", "39iqXJ9-_Xq", "hGRDllQHiIw", "WpxKxBBqlpd", "hu9dLd9zTA7", "uJVP6oCyQz", "Eu0pel5MHD", "YGwj2n-Mdt-", "VvmiBePAuM7", "9Xny4R-QbAD", "yNMLfZnnc30" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer,\n\nWe would like to thank you once again for your useful comments and constructive feedback on our paper. Please let us know if our revised manuscript and replies have addressed your concerns. If you have any additional comments, we are very eager to address them.\n\nThank you very much!", "Dear r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "YGwj2n-Mdt-", "9Xny4R-QbAD", "VvmiBePAuM7", "yNMLfZnnc30", "hGRDllQHiIw", "YGwj2n-Mdt-", "9Xny4R-QbAD", "VvmiBePAuM7", "hu9dLd9zTA7", "yNMLfZnnc30", "AOFDQVjiUC8", "WpxKxBBqlpd", "iclr_2021_h0de3QWtGG", "iclr_2021_h0de3QWtGG", "iclr_2021_h0de3QWtGG", "iclr_2021_h0de3QWtGG" ]
iclr_2021_NomEDgIEBwE
Improving Transformation Invariance in Contrastive Representation Learning
We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test time representations are generated by introducing a feature averaging approach that combines encodings from multiple transformations of the original input, finding that this leads to across the board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial.
poster-presentations
The paper recieved three consistently positive reviews. While I agree with most of them, I have two major concerns regarding the novelty of the paper, which the authors are strongly recommended to address in the final version. 1. Taking derivative with respect to the parameters of transformation isn't novel. The standard tangent prop algorithm has been around for over a decade: P. Simard, Y. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition-tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade. 1996. (see Eq 26 in https://halshs.archives-ouvertes.fr/halshs-00009505/document) Salah Rifai, Yann N. Dauphin, Pascal Vincent, Yoshua Bengio, Xavier Muller. The Manifold Tangent Classifier. NIPS 2011. (see Eq 6 therein) I understand there is some normalization in Eq 5, sampling of \alpha, \alpha', and the direction, and using the expectation to approximate the norm of the gradient. But such novelty is really incremental, or at least some empirical comparison will be necessary. It will also be necessary to cite the tangent distance/prop literature. 2. The new gradient based regularizer in Eq 11 and 12 appears completely decoupled with contrastive learning. It can be applied to any representation learning where f_\theta is an encoder. It does not use any substantial element from contrastive learning, although it might be "inspired" by contrastive learning. One may argue that such generality is an advantage, but 1) there is really no need to take such a big detour into contrastive learning just in order to derive the invariance regularizer in Eq 12, and 2) writing in this way can be quite confusing and/or misleading.
train
[ "kABLHqQ5Pxc", "IAfGi7oYcfk", "1ys9k2s1lWa", "QubcDCGHHNV", "dOZBxLWiZZE", "4eJff1Di7O", "_PoHiGTLFAR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "1. Summary\nGiven one image, the paper first generates different views which are controlled by differentiable parameter \\alpha, and then minimizes the additional \"conditional variance\" term~(expectation of these views' squared differences). Therefore, the paper encourages representations of the same image remai...
[ 6, 7, -1, -1, -1, -1, 7 ]
[ 4, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2021_NomEDgIEBwE", "iclr_2021_NomEDgIEBwE", "IAfGi7oYcfk", "kABLHqQ5Pxc", "_PoHiGTLFAR", "iclr_2021_NomEDgIEBwE", "iclr_2021_NomEDgIEBwE" ]
iclr_2021_OPyWRrcjVQw
Shapley explainability on the data manifold
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model’s predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of Shapley explainability make an untenable assumption: that the model’s features are uncorrelated. In this work, we demonstrate unambiguous drawbacks of this assumption and develop two solutions to Shapley explainability that respect the data manifold. One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility. While “off-manifold” Shapley values can (i) give rise to incorrect explanations, (ii) hide implicit model dependence on sensitive attributes, and (iii) lead to unintelligible explanations in higher-dimensional data, on-manifold explainability overcomes these problems.
poster-presentations
A good paper with significant contribution on XAI and the on- vs off- data manifold explainability. Reviewers have appreciated authors’ feedback and update of the paper (R1, R2, R4). I would like to personally thank the authors for a smooth, extensive and focused interaction w/ updates.
train
[ "ofSzxRbY6vo", "HBf5DGxrs1t", "0RB6xDuKsj0", "L43CgSKi7A", "qhJrIHV38V", "Nos0IyTzRai", "RM2JiQ8sY4w", "_BWyimKcRsM", "qA7ZFJFmWy", "tTa5atxxt0K", "0qGylSCeg9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper is focused on the off-data manifold problem with Shapley values which is created by sampling data that is out of distribution. The goal is to develop efficient methods. Two main algorithms are proposed: Generative models to approximate conditional distributions and training supervised models for direct ...
[ 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_OPyWRrcjVQw", "iclr_2021_OPyWRrcjVQw", "iclr_2021_OPyWRrcjVQw", "HBf5DGxrs1t", "iclr_2021_OPyWRrcjVQw", "iclr_2021_OPyWRrcjVQw", "0RB6xDuKsj0", "0qGylSCeg9", "HBf5DGxrs1t", "ofSzxRbY6vo", "iclr_2021_OPyWRrcjVQw" ]
iclr_2021_gl3D-xY7wLq
Noise or Signal: The Role of Image Backgrounds in Object Recognition
We assess the tendency of state-of-the-art object recognition models to depend on signals from image backgrounds. We create a toolkit for disentangling foreground and background signal on ImageNet images, and find that (a) models can achieve non-trivial accuracy by relying on the background alone, (b) models often misclassify images even in the presence of correctly classified foregrounds--up to 88% of the time with adversarially chosen backgrounds, and (c) more accurate models tend to depend on backgrounds less. Our analysis of backgrounds brings us closer to understanding which correlations machine learning models use, and how they determine models' out of distribution performance.
poster-presentations
The paper investigates the tendency of image recognition models to depend on image backgrounds, and propose a suite of datasets to study this phenomenon. All the reviewers agree that the paper investigates an important problem, is well-written and contains several interesting insights that should be of interest to the community. I recommend acceptance.
train
[ "WsTxwdJ1OV0", "9qreAo4r_MQ", "a4nRkRCQ6D6", "Oc2VoXeE3JY", "L6Ql8CsihT", "398x0VE_Wh7", "uW50EwqqSEv", "MkXP6cFaAJR", "VZhVC5VCpK_", "6Lvt4NUls-H", "w2lr9L4Djoz", "I_uVPti8_B8", "inbcZ3I-fEk", "L59zUHrNp0M", "_QiYUV2Ek83", "jNzCrbDiNln" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe submission performs similar foreground-background analysis for object recognition as in [1], but with more modern networks in mind. As such, the main takeaways indicate that this phenomenon still exists - networks today continue to suffer from background bias as they did four years ago with AlexNet, although...
[ 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_gl3D-xY7wLq", "iclr_2021_gl3D-xY7wLq", "L6Ql8CsihT", "iclr_2021_gl3D-xY7wLq", "I_uVPti8_B8", "uW50EwqqSEv", "MkXP6cFaAJR", "WsTxwdJ1OV0", "_QiYUV2Ek83", "Oc2VoXeE3JY", "Oc2VoXeE3JY", "Oc2VoXeE3JY", "WsTxwdJ1OV0", "jNzCrbDiNln", "iclr_2021_gl3D-xY7wLq", "iclr_2021_gl3D-xY7wLq...
iclr_2021_HOFxeCutxZR
Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation
Controllable semantic image editing enables a user to change entire image attributes with a few clicks, e.g., gradually making a summer scene look like it was taken in winter. Classic approaches for this task use a Generative Adversarial Net (GAN) to learn a latent space and suitable latent-space transformations. However, current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism. To address these concerns, we learn multiple attribute transformations simultaneously, integrate attribute regression into the training of transformation functions, and apply a content loss and an adversarial loss that encourages the maintenance of image identity and photo-realism. We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work, which primarily focuses on qualitative evaluation. Our model permits better control for both single- and multiple-attribute editing while preserving image identity and realism during transformation. We provide empirical results for both natural and synthetic images, highlighting that our model achieves state-of-the-art performance for targeted image manipulation.
poster-presentations
All the reviewers rate the paper above the bar. They like the experiment results and think the proposed latent space editing approach makes intuitive sense. While several weakness points were raised, including a lack of continuous editing comparison and sometimes vague descriptions, they were not considered major to reject the paper. After consolidating the reviews and rebuttal, the AC agrees with the reviewer assessment and recommends accepting the paper.
train
[ "0kg4ffQ3mr", "YT9z9TI22w", "s8ds5-G6uqk", "5vgWqS2cOom", "ApNvNCJW4-c", "ho7cKZcVYQ1", "vBJy1XGzom", "iVgoiFh7UTf" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose an image attribute editing method by manipulating the GAN latent vector. Specifically, this paper uses a pre-trained GAN to synthesize images, a pre-trained regressor to get the image attributes, and trains a network T to find meaningful latent-space directions. It then edits ima...
[ 6, -1, -1, -1, -1, 6, 6, 8 ]
[ 3, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_HOFxeCutxZR", "ho7cKZcVYQ1", "vBJy1XGzom", "0kg4ffQ3mr", "iVgoiFh7UTf", "iclr_2021_HOFxeCutxZR", "iclr_2021_HOFxeCutxZR", "iclr_2021_HOFxeCutxZR" ]
iclr_2021_dFwBosAcJkN
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception, used in the definition of adversarial attacks that are imperceptible to human eyes. Most current attacks and defenses try to get around this issue by considering restrictive adversarial threat models such as those bounded by L2 or L∞ distance, spatial perturbations, etc. However, models that are robust against any of these restrictive threat models are still fragile against other threat models, i.e. they have poor generalization to unforeseen attacks. Moreover, even if a model is robust against the union of several restrictive threat models, it is still susceptible to other imperceptible adversarial examples that are not contained in any of the constituent threat models. To resolve these issues, we propose adversarial training against the set of all imperceptible adversarial examples. Since this set is intractable to compute without a human in the loop, we approximate it using deep neural networks. We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images. Through an extensive perceptual study, we show that the neural perceptual distance correlates well with human judgements of perceptibility of adversarial examples, validating our threat model. Under the NPTM, we develop novel perceptual adversarial attacks and defenses. Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks. We test PAT on CIFAR-10 and ImageNet-100 against five diverse adversarial attacks: L2, L∞, spatial, recoloring, and JPEG. We find that PAT achieves state-of-the-art robustness against the union of these five attacks—more than doubling the accuracy over the next best model—without training against any of them. That is, PAT generalizes well to unforeseen perturbation types. This is vital in sensitive applications where a particular threat model cannot be assumed, and to the best of our knowledge, PAT is the first adversarial defense with this property. Code and data are available at https://github.com/cassidylaidlaw/perceptual-advex
poster-presentations
This paper focuses on the adversarial robustness of deep neural networks against multiple and unforeseen threat models, which proposes a threat model called Neural Perceptual Threat Model (NPTM). The philosophy behind sounds quite interesting to me, namely, approximating human perception with a neural neural "neural perceptual distance". This philosophy leads to a novel algorithm design I have never seen, i.e., Perceptual Adversarial Training (PAT) which achieves good robustness against various types of adversarial attacks and even could generalize well to unforeseen perturbation types. The clarity and novelty are clearly above the bar of ICLR. While the reviewers had some concerns on the significance, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication! Please carefully address all comments in the final version.
train
[ "8I-FNv3DrZH", "2_ZAXUspf5E", "fku7ZrsAc3q", "QNtzZmqnHvc", "7OoHeB3crwP", "kOO8bUejkL", "bzvFgVnGZLx", "c7Mccdfzhf", "GPyRSgxK-WN", "n4XLvOHaoeG", "XvJ_0QqiAgo", "_vUNWSB4vj", "44yYvjYvwTr", "Ntr1s2pgIcb", "Ufe-gWvDSeh", "70O0E9B3JsN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This work proposes a new form of adversarial training, supported by two proposed adversarial attacks based off a perceptual distance. The choice of perceptual distance (LPIPS), is computed by comparing the activations of (possibly different) two neural networks with respect to a pair of inputs. The authors propose...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 3, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_dFwBosAcJkN", "XvJ_0QqiAgo", "iclr_2021_dFwBosAcJkN", "kOO8bUejkL", "kOO8bUejkL", "bzvFgVnGZLx", "n4XLvOHaoeG", "Ufe-gWvDSeh", "fku7ZrsAc3q", "fku7ZrsAc3q", "70O0E9B3JsN", "8I-FNv3DrZH", "Ntr1s2pgIcb", "iclr_2021_dFwBosAcJkN", "iclr_2021_dFwBosAcJkN", "iclr_2021_dFwBosAcJkN"...
iclr_2021_0cmMMy8J5q
Zero-Cost Proxies for Lightweight NAS
Neural Architecture Search (NAS) is quickly becoming the standard methodology to design neural network models. However, NAS is typically compute-intensive because multiple models need to be evaluated before choosing the best one. To reduce the computational power and time needed, a proxy task is often used for evaluating each model instead of full training. In this paper, we evaluate conventional reduced-training proxies and quantify how well they preserve ranking between neural network models during search when compared with the rankings produced by final trained accuracy. We propose a series of zero-cost proxies, based on recent pruning literature, that use just a single minibatch of training data to compute a model's score. Our zero-cost proxies use 3 orders of magnitude less computation but can match and even outperform conventional proxies. For example, Spearman's rank correlation coefficient between final validation accuracy and our best zero-cost proxy on NAS-Bench-201 is 0.82, compared to 0.61 for EcoNAS (a recently proposed reduced-training proxy). Finally, we use these zero-cost proxies to enhance existing NAS search algorithms such as random search, reinforcement learning, evolutionary search and predictor-based search. For all search methodologies and across three different NAS datasets, we are able to significantly improve sample efficiency, and thereby decrease computation, by using our zero-cost proxies. For example on NAS-Bench-101, we achieved the same accuracy 4× quicker than the best previous result. Our code is made public at: https://github.com/mohsaied/zero-cost-nas.
poster-presentations
This is a well-written paper proposing a promising a series of zero-cost proxies for NAS. Overall, the reviewers were convinced that the approach is sound and the results overall support the use of zero-cost proxies (although they are a bit weak in some cases, e.g. rank correlations in A.3). Despite some concerns amongst the reviewers around the technical novelty of the method, mostly due to the use of estimators from the pruning-at-init literature, this is promising work at the intersection of different sub-communities in ML.
train
[ "wDJZbKKki2", "JJq7Rwt2MM", "d3j5fvblcmf", "kpsEcw2wx-f", "Q6W15AO--O6", "O56E1WMqM27", "Sob2dcXfQBw", "odrxPiB2f6o", "Gea4Pzz_mo2", "LNF1PIQEANg", "nbMvUgPGNEf" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary\nTo reduce the cost of NAS, this paper focuses on zero-cost proxies to estimate the performance of network architectures without any training.\n\nSpecifically, the authors propose a series of zero-cost proxies based on recent pruning literature. They also propose two practical strategies: zero-cost warmu...
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_0cmMMy8J5q", "Q6W15AO--O6", "iclr_2021_0cmMMy8J5q", "nbMvUgPGNEf", "O56E1WMqM27", "LNF1PIQEANg", "wDJZbKKki2", "Gea4Pzz_mo2", "iclr_2021_0cmMMy8J5q", "iclr_2021_0cmMMy8J5q", "iclr_2021_0cmMMy8J5q" ]
iclr_2021_p8agn6bmTbr
Usable Information and Evolution of Optimal Representations During Training
We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for the task emerge during training. We show that the implicit regularization coming from training with Stochastic Gradient Descent with a high learning-rate and small batch size plays an important role in learning minimal sufficient representations for the task. In the process of arriving at a minimal sufficient representation, we find that the content of the representation changes dynamically during training. In particular, we find that semantically meaningful but ultimately irrelevant information is encoded in the early transient dynamics of training, before being later discarded. In addition, we evaluate how perturbing the initial part of training impacts the learning dynamics and the resulting representations. We show these effects on both perceptual decision-making tasks inspired by neuroscience literature, as well as on standard image classification tasks.
poster-presentations
This paper proposes that we can understand the evolution of representations in deep neural networks during training using the concept of "usable information". This is effectively an indirect measure of how much information the network maintains about a given categorical variable, Y, and the authors show that it is in fact a variational lower bound on the amount of mutual information that the network's representations have with Y. The authors show that in deep neural networks the usable information that is maintained for different variables during training depends on the task, such that task irrelevant variables (but not task relevant variables) eventually have their usable information reduced, leading to "minimal sufficient representations". The initial reviews were mixed. A common theme in the critiques was the lack of evidence of the generalization and scalability of these results. The authors addressed these concerns by including new experiments on different architectures and the CIFAR datasets, leading one reviewer to increase their score. The final scores stood at 3, 7 ,7, 7. Given the overall positive reviews, interesting subject matter, and relevance to understanding learned representations in deep networks, this paper seems appropriate for acceptance in the AC's opinion.
train
[ "LTsahI-y2dK", "iB4JfblGLD", "6FcMJWhUYLk", "DWT0TBaKjaN", "qKSZRpZQAK4", "hEZgzPTlRd", "zAY0XjMCXKc", "u7FYoC7dZPS", "b1Ryv0oAt3", "ArQLAMQgTW1", "lr9ubnQ1CjD", "YSrVSP67rBH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Broadly, this work is an attempt to understand how neural networks can form generalizable representations while being severely overparameterized. This work proposes an information theoretic measure, called the \"usable information\", and use it to quantify the amount of relevant information in different layers of ...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_p8agn6bmTbr", "iclr_2021_p8agn6bmTbr", "iclr_2021_p8agn6bmTbr", "ArQLAMQgTW1", "YSrVSP67rBH", "iB4JfblGLD", "LTsahI-y2dK", "lr9ubnQ1CjD", "lr9ubnQ1CjD", "iclr_2021_p8agn6bmTbr", "iclr_2021_p8agn6bmTbr", "iclr_2021_p8agn6bmTbr" ]
iclr_2021_MjvduJCsE4
Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit
Modern deep learning models have achieved great success in predictive accuracy for many data modalities. However, their application to many real-world tasks is restricted by poor uncertainty estimates, such as overconfidence on out-of-distribution (OOD) data and ungraceful failing under distributional shift. Previous benchmarks have found that ensembles of neural networks (NNs) are typically the best calibrated models on OOD data. Inspired by this, we leverage recent theoretical advances that characterize the function-space prior of an infinitely-wide NN as a Gaussian process, termed the neural network Gaussian process (NNGP). We use the NNGP with a softmax link function to build a probabilistic model for multi-class classification and marginalize over the latent Gaussian outputs to sample from the posterior. This gives us a better understanding of the implicit prior NNs place on function space and allows a direct comparison of the calibration of the NNGP and its finite-width analogue. We also examine the calibration of previous approaches to classification with the NNGP, which treat classification problems as regression to the one-hot labels. In this case the Bayesian posterior is exact, and we compare several heuristics to generate a categorical distribution over classes. We find these methods are well calibrated under distributional shift. Finally, we consider an infinite-width final layer in conjunction with a pre-trained embedding. This replicates the important practical use case of transfer learning and allows scaling to significantly larger datasets. As well as achieving competitive predictive accuracy, this approach is better calibrated than its finite width analogue.
poster-presentations
This paper presents an empirical study focusing on Bayesian inference on NNGP - a Gaussian process where the kernel is defined by taking the width of a Bayesian neural network (BNN) to the infinity limit. The baselines include a finite width BNN with the same architecture, and a proposed GP-BNN hybrid (NNGP-LL) which is similar to GPDNN and deep kernel learning except that the last-layer GP has its kernel defined by the width-limit kernel. Experiments are performed on both regression and classification tasks, with a focus on OOD data. Results show that NNGP can obtain competitive results comparing to their BNN counterpart, and results on the proposed NNGP-LL approach provides promising supports on the hybrid design as to combine the best from both GP and deep learning fields. Although the proposed approach is a natural extension of the recent line of work on GP-BNN correspondence, reviewers agreed that the paper presented a good set of empirical studies, and the NNGP-LL approach, evaluated in section 5 with SOTA deep learning architectures, provides a promising direction of future for scalable uncertainty estimation. This is the main reason that leads to my decision on acceptance. Concerns on section 3's results on under-performing CNN & NNGP results on CIFAR-10 has been raised, which hinders the significance of the results there (since they are way too far from expected CNN accuracy). The compromise for model architecture in order to enable NNGP posterior sampling is understandable, although this does raise questions about the robustness of posterior inference for NNGP in large architectures.
train
[ "N4LHtd8I6e9", "5VdYhkcpuNv", "x6lvC3wE6_I", "qdCycRyfT_H", "2sw3ggn3TWt", "bWVfaDNAy-o", "xYzJ9uo3bwV", "zzD8wEuffco", "FaB9VbVQhEE", "9bVJJsYCKRb", "d35PXhAEEmH", "ZfXG2Jw8hr4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "After rebuttal: I am uneasy about the overstated claims made in section 2. That the architectures are small should really be mentioned more prominently. However reviewers #1 and #2 make a good case that what matters are the improvements presented in Section 5. Thus, I reluctantly recommend acceptance.\n\n# Paper s...
[ 6, 6, 7, -1, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_MjvduJCsE4", "iclr_2021_MjvduJCsE4", "iclr_2021_MjvduJCsE4", "iclr_2021_MjvduJCsE4", "iclr_2021_MjvduJCsE4", "d35PXhAEEmH", "zzD8wEuffco", "N4LHtd8I6e9", "x6lvC3wE6_I", "5VdYhkcpuNv", "ZfXG2Jw8hr4", "2sw3ggn3TWt" ]
iclr_2021_V8jrrnwGbuc
On the geometry of generalization and memorization in deep neural networks
Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance. To examine the structure of when and where memorization occurs in a deep network, we use a recently developed replica-based mean field theoretic geometric analysis method. We find that all layers preferentially learn from examples which share features, and link this behavior to generalization performance. Memorization predominately occurs in the deeper layers, due to decreasing object manifolds’ radius and dimension, whereas early layers are minimally affected. This predicts that generalization can be restored by reverting the final few layer weights to earlier epochs before significant memorization occurred, which is confirmed by the experiments. Additionally, by studying generalization under different model sizes, we reveal the connection between the double descent phenomenon and the underlying model geometry. Finally, analytical analysis shows that networks avoid memorization early in training because close to initialization, the gradient contribution from permuted examples are small. These findings provide quantitative evidence for the structure of memorization across layers of a deep neural network, the drivers for such structure, and its connection to manifold geometric properties.
poster-presentations
The paper offers novel insights about memorization, the process by which deep neural networks are able to learn examples with incorrect labels. The core insight is that late layers are responsible for memorization. The paper presents a thorough examination of this claim from different angles. The experiments involving rewinding late layers are especially innovative. The reviewers found the insights valuable and voted unanimously for accepting the paper. The sentiment is well summarized by R2: "The findings of the paper are interesting. It shows the heterogeneity in layers and training stage of the neural net". I would like to bring to your attention the Coherent Gradients paper (see also R1 comment). This and other related papers already discusses the effect of label permutation on the gradient norm. Please make sure you discuss this related work. As a minor comment, please improve the resolution of all figures in the paper. In summary, it is my pleasure to recommend the acceptance of the paper. Thank you for submitting your work to ICLR, and please make sure you address all remarks of the reviewers in the camera-ready version.
test
[ "kN0N5Ep_NcS", "tNw-vUDSOPh", "pC06qjQc5f", "DM5mETmhMJ", "7FyJniTl_8F", "z6H6g2au5v3", "rimCQW7yYki", "hXWN4UqX-yh", "djl0JLE9qT", "V151H2sR6O2" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their insightful and constructive comments. We appreciate that each reviewer found our approach and findings to be a useful step towards a better understanding of generalization and memorization in DNNs. In the posted revision, we have started incorporating some of the changes suggested ...
[ -1, 7, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_V8jrrnwGbuc", "iclr_2021_V8jrrnwGbuc", "DM5mETmhMJ", "hXWN4UqX-yh", "V151H2sR6O2", "djl0JLE9qT", "tNw-vUDSOPh", "iclr_2021_V8jrrnwGbuc", "iclr_2021_V8jrrnwGbuc", "iclr_2021_V8jrrnwGbuc" ]
iclr_2021_YUGG2tFuPM
Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks
Adversarial poisoning attacks distort training data in order to corrupt the test-time behavior of a classifier. A provable defense provides a certificate for each test sample, which is a lower bound on the magnitude of any adversarial distortion of the training set that can corrupt the test sample's classification. We propose two novel provable defenses against poisoning attacks: (i) Deep Partition Aggregation (DPA), a certified defense against a general poisoning threat model, defined as the insertion or deletion of a bounded number of samples to the training set --- by implication, this threat model also includes arbitrary distortions to a bounded number of images and/or labels; and (ii) Semi-Supervised DPA (SS-DPA), a certified defense against label-flipping poisoning attacks. DPA is an ensemble method where base models are trained on partitions of the training set determined by a hash function. DPA is related to both subset aggregation, a well-studied ensemble method in classical machine learning, as well as to randomized smoothing, a popular provable defense against evasion (inference) attacks. Our defense against label-flipping poison attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition. SS-DPA significantly outperforms the existing certified defense for label-flipping attacks (Rosenfeld et al., 2020) on both MNIST and CIFAR-10: provably tolerating, for at least half of test images, over 600 label flips (vs. < 200 label flips) on MNIST and over 300 label flips (vs. 175 label flips) on CIFAR-10. Against general poisoning attacks where no prior certified defenses exists, DPA can certify ≥ 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10. These results establish new state-of-the-art provable defenses against general and label-flipping poison attacks. Code is available at https://github.com/alevine0/DPA
poster-presentations
The authors develop a novel strategy, Deep Partition Aggregation, to train models to be certifiably robust to data poisoning attacks based on flipping labels of a small subset of the training data or introducing poisoned input features. They improve upon existing certified defences against data poisoning and are the first to establish certified guarantees against general poisoning attacks. Most reviewers were in support of acceptance. Reviewer concerns were raised in the rebuttal phase but were convincingly addressed in the rebuttal phase. One reviewer did raise concerns on the weakness of experimental results on CIFAR-10, but the fact that this method has established the first certified defence in the general poisoning setting and that the results are stronger on other datasets certainly warrant acceptance. I would encourage the authors to clarify this in the final version.
train
[ "y2p6HMUtGom", "BKWc7xVl-2p", "coNt1hpORR3", "MTwUcxwBlPY", "k2KjeV8GkPZ", "r8MQcscWR3R", "5y9B88cQVeI", "EJlP-O_-mLZ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Context and summary of the results:\n\nData poisoning attacks deal with adversaries who change the training set, up to a certain degree (controlled in different ways), with the aim of lowering the \"quality\" of the produced model. Previously methods were designed to tolerate adversarial perturbations while preser...
[ 8, 7, -1, -1, -1, -1, 6, 4 ]
[ 4, 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_YUGG2tFuPM", "iclr_2021_YUGG2tFuPM", "BKWc7xVl-2p", "5y9B88cQVeI", "y2p6HMUtGom", "EJlP-O_-mLZ", "iclr_2021_YUGG2tFuPM", "iclr_2021_YUGG2tFuPM" ]
iclr_2021_V1ZHVxJ6dSS
DC3: A learning method for optimization with hard constraints
Large optimization problems with hard constraints arise in many settings, yet classical solvers are often prohibitively slow, motivating the use of deep networks as cheap "approximate solvers." Unfortunately, naive deep learning approaches typically cannot enforce the hard constraints of such problems, leading to infeasible solutions. In this work, we present Deep Constraint Completion and Correction (DC3), an algorithm to address this challenge. Specifically, this method enforces feasibility via a differentiable procedure, which implicitly completes partial solutions to satisfy equality constraints and unrolls gradient-based corrections to satisfy inequality constraints. We demonstrate the effectiveness of DC3 in both synthetic optimization tasks and the real-world setting of AC optimal power flow, where hard constraints encode the physics of the electrical grid. In both cases, DC3 achieves near-optimal objective values while preserving feasibility.
poster-presentations
The paper proposes an approach for solving constrained optimization problems using deep learning. The key idea is to separate equality and inequality constraints and "solve" for the equality constraints separately. Empirical results are given for convex QPs and for a non-convex problem that arises in AC optimal power flow. There was much discussion of this paper between the reviewers and the area chair. THe key question was whether the empirical evaluation is sufficient to convince that the method is more effective than existing solvers. The current experiments do not show that the method achieves better solutions than existing solvers. For the convex case this is to be expected since solvers are optimal. But in the non-convex case, it would have been nice to see that the method indeed can find better solutions. This leaves the advantage of the method in its speedup over existing methods. However, as the authors acknowledge, it is possible that this speedup is due to better use of parallelization than the methods they compare to. It is true that deep learning is particularly easy to parallelize, but this is not impossible for other methods (e.g., for linear algebra operations etc). Thus, taken together the empirical support for the current method is somewhat limited. The method itself does make sense, and this was indeed appreciated by the reviewers.
train
[ "yUJk-C_Pjt", "H2TMNeVeUW", "7H_u-x7pCz0", "mVeqgczXr15", "H5FKTfhTu90", "UlqDChX5FNS", "QC4UVyBiR5w", "mTf0XpydC07", "uIAvWMKCmNf", "q6EQfbAjVZO", "SIU6t2PkHYA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Strength:\n+ The paper proposes a general framework to deal with constraints in optimization problems using neural networks. In my opinion this is an important problem since there exists no standard method in many existing deep neural network frameworks to deal with constraints, which are also inapplicable even if...
[ 4, 7, 8, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_V1ZHVxJ6dSS", "iclr_2021_V1ZHVxJ6dSS", "iclr_2021_V1ZHVxJ6dSS", "H5FKTfhTu90", "7H_u-x7pCz0", "QC4UVyBiR5w", "H2TMNeVeUW", "uIAvWMKCmNf", "SIU6t2PkHYA", "yUJk-C_Pjt", "iclr_2021_V1ZHVxJ6dSS" ]
iclr_2021_PObuuGVrGaZ
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
This work aims to empirically clarify a recently discovered perspective that label smoothing is incompatible with knowledge distillation. We begin by introducing the motivation behind on how this incompatibility is raised, i.e., label smoothing erases relative information between teacher logits. We provide a novel connection on how label smoothing affects distributions of semantically similar and dissimilar classes. Then we propose a metric to quantitatively measure the degree of erased information in sample's representation. After that, we study its one-sidedness and imperfection of the incompatibility view through massive analyses, visualizations and comprehensive experiments on Image Classification, Binary Networks, and Neural Machine Translation. Finally, we broadly discuss several circumstances wherein label smoothing will indeed lose its effectiveness.
poster-presentations
This paper studies the effect of label smoothing on knowledge-distillation. A previous work on this topic (Muller et al.) has claimed that label smoothing can hurt the performance of the student model in knowledge-distillation. The rationale behind this argument is that label smoothing erases information encoded in the labels. This work shows that such claimed effect does not necessarily happen. Specifically, by a comprehensive study on image classification, binary neural networks, and neural machine translation, the authors show that label smoothing can be compatible with knowledge distillation. However, they conclude that label smoothing will lose its effectiveness with long-tailed distribution and increased number of classes. Overall ratings of this paper are all on the positive side, and R2 finding this paper an important step toward understanding the interaction between knowledge-distillation and label smoothing. I concur with the reviewers about the importance of this research direction and I think this submission provides a reasonable empirical evidence to change our earlier perspectives. I recommend accept. While the paper specifically studies the effect of label smoothing on knowledge-distillation, I think providing a bigger context and reviewing some of the recent demystifying efforts on understanding knowledge-distillation could allow paper to communicate with a broader audience. I hope this can be accommodated in the final version.
val
[ "7OOzCVlcrs7", "Jpg8a2QZV3", "AVhxJrfY9TW", "_gs0Ndd-nsR", "5DrWugDl8bE", "Rh7qT5JMw8i", "HEo6fd8Urh1", "n2ejS0yNLyZ", "CNQY2PvnNJ", "v-hAiDcUaY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper empirically discusses the relationship between Label Smoothing (LS) and Knowledge Distillation (KD). It designs a stability metric to measure the degree of erasing information and finds that LS can be compatible with knowledge distillation except in long-tailed distribution and increased number of class...
[ 6, 6, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_PObuuGVrGaZ", "iclr_2021_PObuuGVrGaZ", "7OOzCVlcrs7", "Jpg8a2QZV3", "Jpg8a2QZV3", "Jpg8a2QZV3", "v-hAiDcUaY", "CNQY2PvnNJ", "iclr_2021_PObuuGVrGaZ", "iclr_2021_PObuuGVrGaZ" ]
iclr_2021_Db4yerZTYkz
Shape-Texture Debiased Neural Network Training
Shape and texture are two prominent and complementary cues for recognizing objects. Nonetheless, Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset. Our ablation shows that such bias degenerates model performance. Motivated by this observation, we develop a simple algorithm for shape-texture debiased learning. To prevent models from exclusively attending on a single cue in representation learning, we augment training data with images with conflicting shape and texture information (eg, an image of chimpanzee shape but with lemon texture) and, most importantly, provide the corresponding supervisions from shape and texture simultaneously. Experiments show that our method successfully improves model performance on several image recognition benchmarks and adversarial robustness. For example, by training on ImageNet, it helps ResNet-152 achieve substantial improvements on ImageNet (+1.2%), ImageNet-A (+5.2%), ImageNet-C (+8.3%) and Stylized-ImageNet (+11.1%), and on defending against FGSM adversarial attacker on ImageNet (+14.4%). Our method also claims to be compatible with other advanced data augmentation strategies, eg, Mixup, and CutMix. The code is available here: https://github.com/LiYingwei/ShapeTextureDebiasedTraining.
poster-presentations
After the rebuttal stage, three of four reviewers recommend acceptance, and one gives a borderline score but argues they lean positive. Concerns seem well addressed; the method is simple yet effective.
train
[ "4IvNnYISE-g", "DJ3tO-GTYAx", "U97u2kwAvcY", "DHwLcyN8ed7", "n7_hdtMRpzq", "KQ32n5w-XU-", "8-Hm2HA81M", "QzTMgnaPtFU", "blW4lRCluWM", "pq0HNGLN2z1", "LtPy2_SWrql" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your suggestions which help us improve the quality of this paper. We are glad to see your concerns are addressed. We additionally comment on Q3 & Q7 as below:\n\nRe Q3: Thanks for your great recommendations! This dataset seems to be a promising candidate for extending our method in the future!\n\nRe Q7:...
[ -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "U97u2kwAvcY", "QzTMgnaPtFU", "n7_hdtMRpzq", "blW4lRCluWM", "blW4lRCluWM", "pq0HNGLN2z1", "LtPy2_SWrql", "iclr_2021_Db4yerZTYkz", "iclr_2021_Db4yerZTYkz", "iclr_2021_Db4yerZTYkz", "iclr_2021_Db4yerZTYkz" ]
iclr_2021_sjuuTm4vj0
Using latent space regression to analyze and leverage compositionality in GANs
In recent years, Generative Adversarial Networks have become ubiquitous in both research and public perception, but how GANs convert an unstructured latent code to a high quality output is still an open question. In this work, we investigate regression into the latent space as a probe to understand the compositional properties of GANs. We find that combining the regressor and a pretrained generator provides a strong image prior, allowing us to create composite images from a collage of random image parts at inference time while maintaining global consistency. To compare compositional properties across different generators, we measure the trade-offs between reconstruction of the unrealistic input and image quality of the regenerated samples. We find that the regression approach enables more localized editing of individual image parts compared to direct editing in the latent space, and we conduct experiments to quantify this independence effect. Our method is agnostic to the semantics of edits, and does not require labels or predefined concepts during training. Beyond image composition, our method extends to a number of related applications, such as image inpainting or example-based image editing, which we demonstrate on several GANs and datasets, and because it uses only a single forward pass, it can operate in real-time. Code is available on our project page: https://chail.github.io/latent-composition/.
poster-presentations
The scores here are bimodal. The low-scoring reviewers have problems with the evaluation, and I agree it could be improved. The high scoring reviewers seem to mostly agree with those complaints, but think that the paper is interesting enough to be accepted anyway. One of the low-scoring reviewers has some complaints about novelty that I don't find super convincing. The other low-scoring reviewer has suggested that they'd be OK with a decision of Accept. Part of me thinks that I should reject this paper with a message of "come back later with the experiments improved", and that that would be the best thing for the field, because the paper can already be publicized on arXiv anyway. But the other part of me thinks: what if they do that and get unlucky with a bad batch of reviews the next time (the current reviewers were great and had a really thorough discussion)? With some amount of trepidation, I'm recommending accept, but *please* reward my faith in you (the authors) and make an effort to fix the things reviewers complained about before the camera ready.
train
[ "3d46jKDjKCW", "lih2xQupC7", "HxbjmvT7pk", "yEiLvrDojSJ", "6gPb5J172tY", "f_4gNONfPd3", "atz0WscN7JS", "Knv-8-3WUIQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "I like this paper and I think it represents a very through-provoking and promising idea. The key aspect of this work is a (screen-space) masked encoder that learns to complete the loop with a previously trained generator. This allows for image completion, editing, collages, and essentially creates a prior for the ...
[ 7, 5, 8, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2021_sjuuTm4vj0", "iclr_2021_sjuuTm4vj0", "iclr_2021_sjuuTm4vj0", "iclr_2021_sjuuTm4vj0", "HxbjmvT7pk", "3d46jKDjKCW", "yEiLvrDojSJ", "lih2xQupC7" ]
iclr_2021_RqCC_00Bg7V
Blending MPC & Value Function Approximation for Efficient Reinforcement Learning
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective strategy, but real-time performance requirements warrant the use of simple models. If the model is not sufficiently accurate, then the resulting controller can be biased, limiting performance. We present a framework for improving on MPC with model-free reinforcement learning (RL). The key insight is to view MPC as constructing a series of local Q-function approximations. We show that by using a parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade-off learned value estimates against the local Q-function approximations. We present a theoretical analysis that shows how error from inaccurate models in MPC and value function estimation in RL can be balanced. We further propose an algorithm that changes λ over time to reduce the dependence on MPC as our estimates of the value function improve, and test the efficacy our approach on challenging high-dimensional manipulation tasks with biased models in simulation. We demonstrate that our approach can obtain performance comparable with MPC with access to true dynamics even under severe model bias and is more sample efficient as compared to model-free RL.
poster-presentations
The authors put a lot of effort in replying to questions and improving the paper (to a point that the reviewers felt overwhelmed). Pros: - An interesting way of dealing with model bias in MPC - They successfully managed to address the most important concerns of the reviewers, with lots of additional experiments and insights - R3's concerns have also been successfully addressed by the authors, the review & score were unfortunately not updated Cons: - The only remaining point is that the simulations seem to be everything but physically realistic (update at end of R1's review), which is probably a problem of the benchmarks and not the authors faults.
val
[ "tguXwGZOPCT", "p2gZumHDGWi", "A6pYK9FhVJ1", "DUWW5oLImdN", "dgnxoWGl6Cc", "IwVRXCUB9G_", "zXMuXxLKqQ", "ZEiuDPqXGfk", "Jr1_Nxhh50K", "8CY25nChE4m", "ctXFvjm6bdm", "_KAMa2ftGKC", "X-7Sk30iyL1", "IOrQzKf5sVz", "aLPXoWfYbln", "e1srX7cgX47" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "+1 for character limit. I am trying to keep up with all the discussions, but there is a lot here when reviewing multiple papers and having a life. I think the authors did it with good intents, but it is hard.\n\nI agree with this reviewers conclusions.", "1. Leveraging the model could also be used for other thin...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "dgnxoWGl6Cc", "ZEiuDPqXGfk", "iclr_2021_RqCC_00Bg7V", "IwVRXCUB9G_", "iclr_2021_RqCC_00Bg7V", "A6pYK9FhVJ1", "ZEiuDPqXGfk", "IwVRXCUB9G_", "zXMuXxLKqQ", "dgnxoWGl6Cc", "iclr_2021_RqCC_00Bg7V", "aLPXoWfYbln", "ctXFvjm6bdm", "e1srX7cgX47", "iclr_2021_RqCC_00Bg7V", "iclr_2021_RqCC_00Bg7V...
iclr_2021_9YlaeLfuhJF
Model Patching: Closing the Subgroup Performance Gap with Data Augmentation
Classifiers in machine learning are often brittle when deployed. Particularly concerning are models with inconsistent performance on specific subgroups of a class, e.g., exhibiting disparities in skin cancer classification in the presence or absence of a spurious bandage. To mitigate these performance differences, we introduce model patching, a two-stage framework for improving robustness that encourages the model to be invariant to subgroup differences, and focus on class information shared by subgroups. Model patching first models subgroup features within a class and learns semantic transformations between them, and then trains a classifier with data augmentations that deliberately manipulate subgroup features. We instantiate model patching with CAMEL, which (1) uses a CycleGAN to learn the intra-class, inter-subgroup augmentations, and (2) balances subgroup performance using a theoretically-motivated subgroup consistency regularizer, accompanied by a new robust objective. We demonstrate CAMEL’s effectiveness on 3 benchmark datasets, with reductions in robust error of up to 33% relative to the best baseline. Lastly, CAMEL successfully patches a model that fails due to spurious features on a real-world skin cancer dataset.
poster-presentations
This paper presents an approach for mitigating subgroup performance gap in images in cases when a classifier relies on subgroup specific features. The authors propose a data augmentation approach, where synthetically produced examples (by GANs) act as instantiations of the real samples in all possible subgroups. By matching the predictions of original and augmented examples, the prediction model is forced to ignore subgroup differences encouraging invariance. The proposed method of ‘controlled data augmentations’ (as precisely called by R4) is relevant and well-motivated, the theoretical justifications support the main claims, and the experimental results are diverse and demonstrate merits of the proposed approach. As rightly pointed out by R3, ‘The appendices are also very thorough, and the code is organized well’. In the initial evaluation, the reviewers have raised (in unison) concerns regarding overlapping subgroups per class, and an imbalance problem in the subgroups when training GANs. There were also questions reg. theoretical justifications, and empirical evaluations of the baseline methods. The authors have addressed all major concerns in the rebuttal. Pleased to report that based on the author respond with extra experiments and explanations, R2 has raised the score from 6 to 7. In conclusion, all four reviewers were convinced by the author’s rebuttal, and AC recommends acceptance of this paper – congratulations to the authors! There is a colossal effort in the community addressing a goal similar to this work – learning invariant representations w.r.t. sensitive features by means of algorithmic fairness methods. (R1 and R3 relate to it). When preparing the final version, the authors are encouraged to elaborate more on the discussion/comparison to fairness-based methods, ideally including empirical evidence where possible (where subgroups overlap, e.g. CelebA). The AC believes this will strengthen the final revision and will have an even broader impact in the community.
val
[ "NFcmEqN9PES", "b2OwGDxJWJc", "t5upy_kaoud", "0947n7qoac4", "bggPSPzmo-a", "qyMgrShyJac", "AjvwbIUrgfo", "nLhiuMV0iOh", "SDF4s3X66g", "iQsC1W67P_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Summary\n\nThis paper introduces a method (CAMEL) to make CNN models robust to the effect of subgroups in classes. CAMEL uses CycleGAN to transfer the subgroup of each input image in each class and applies consistency regularization among transferred images. This paper additionally introduces a novel objective (...
[ 7, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "iclr_2021_9YlaeLfuhJF", "qyMgrShyJac", "NFcmEqN9PES", "nLhiuMV0iOh", "SDF4s3X66g", "iQsC1W67P_", "iclr_2021_9YlaeLfuhJF", "iclr_2021_9YlaeLfuhJF", "iclr_2021_9YlaeLfuhJF", "iclr_2021_9YlaeLfuhJF" ]
iclr_2021_dKg5D1Z1Lm
Non-asymptotic Confidence Intervals of Off-policy Evaluation: Primal and Dual Bounds
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies. Therefore, OPE is a key step in applying reinforcement learning to real-world domains such as medical treatment, where interactive data collection is expensive or even unsafe. As the observed data tends to be noisy and limited, it is essential to provide rigorous uncertainty quantification, not just a point estimation, when applying OPE to make high stakes decisions. This work considers the problem of constructing non-asymptotic confidence intervals in infinite-horizon off-policy evaluation, which remains a challenging open question. We develop a practical algorithm through a primal-dual optimization-based approach, which leverages the kernel Bellman loss (KBL) of Feng et al. 2019 and a new martingale concentration inequality of KBL applicable to time-dependent data with unknown mixing conditions. Our algorithm makes minimum assumptions on the data and the function class of the Q-function, and works for the behavior-agnostic settings where the data is collected under a mix of arbitrary unknown behavior policies. We present empirical results that clearly demonstrate the advantages of our approach over existing methods.
poster-presentations
The paper introduces new tighter non-asymptotic confidence intervals for off-policy evaluation, and all reviewers generally liked the results. I recommend acceptance of this paper. Some concerns of Reviewer2 and Reviewer3 are not fully addressed in your rebuttal. Please make sure to address all remaining issues.
train
[ "0EPsQrMrEjH", "-OpUIBt8782", "FZFRjTAZn4K", "Qw1cPoeGmau", "profKaV5u8", "UwQC7xBGL6h", "C69Gd0_9uIc", "V_FdNLfYuM-", "WAVAlCI27ib", "7QoGg0wLa3Q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The objective of this paper is to provide a method to produce tighter confidence intervals for off-policy evaluation. The paper claims to develop a new primal-dual perspective on OPE confidence intervals and a tight concentration inequality. It develops both theoretical and empirical evidence to support its claims...
[ 7, -1, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_dKg5D1Z1Lm", "UwQC7xBGL6h", "7QoGg0wLa3Q", "V_FdNLfYuM-", "WAVAlCI27ib", "0EPsQrMrEjH", "iclr_2021_dKg5D1Z1Lm", "iclr_2021_dKg5D1Z1Lm", "iclr_2021_dKg5D1Z1Lm", "iclr_2021_dKg5D1Z1Lm" ]
iclr_2021_Fmg_fQYUejf
Linear Mode Connectivity in Multitask and Continual Learning
Continual (sequential) training and multitask (simultaneous) training are often attempting to solve the same overall objective: to find a solution that performs well on all considered tasks. The main difference is in the training regimes, where continual learning can only have access to one task at a time, which for neural networks typically leads to catastrophic forgetting. That is, the solution found for a subsequent task does not perform well on the previous ones anymore. However, the relationship between the different minima that the two training regimes arrive at is not well understood. What sets them apart? Is there a local structure that could explain the difference in performance achieved by the two different schemes? Motivated by recent work showing that different minima of the same task are typically connected by very simple curves of low error, we investigate whether multitask and continual solutions are similarly connected. We empirically find that indeed such connectivity can be reliably achieved and, more interestingly, it can be done by a linear path, conditioned on having the same initialization for both. We thoroughly analyze this observation and discuss its significance for the continual learning process. Furthermore, we exploit this finding to propose an effective algorithm that constrains the sequentially learned minima to behave as the multitask solution. We show that our method outperforms several state of the art continual learning algorithms on various vision benchmarks.
poster-presentations
The paper is presenting an important empirical finding. When the learning algorithms are initialized with the same point, the continual and multitask solutions are connected by linear and low-error paths. Motivated by this finding, the paper proposes a new continual learning algorithm based on path regularization. The paper received unanimously good scores. I agree with the reviews and recommend acceptance.
val
[ "PXyqsDafCOb", "gs1LY7S8UTv", "uCnLOhNlAUL", "OWoC4GSElBC", "PoaFfBPPVWD", "ND0YD0w5F5S", "6J2slrBwTuW", "i5ViUokelNP" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors provided an adequate response to most of my points. In particular, I am glad they included Figures 18 and 19 in the Appendix, demonstrating that EWC and stable SGD do not find linearly connected solutions, while their proposed MC-SGD does. The writing of Section 3 also seems to have been much improved....
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "OWoC4GSElBC", "iclr_2021_Fmg_fQYUejf", "ND0YD0w5F5S", "6J2slrBwTuW", "i5ViUokelNP", "iclr_2021_Fmg_fQYUejf", "iclr_2021_Fmg_fQYUejf", "iclr_2021_Fmg_fQYUejf" ]
iclr_2021_BVSM0x3EDK6
Robust and Generalizable Visual Representation Learning via Random Convolutions
While successful for various computer vision tasks, deep neural networks have shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local texture. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
poster-presentations
Reviewers concurred that this is an interesting paper with contributions worthy of publication. The authors also provided many details in the rebuttal which makes the paper even more strong.
train
[ "Jnn6wx7ExqU", "QEhLtBtebec", "fen1WXCUy2e", "XTruPumnCvN", "p2P7DgGDL3F", "a5RGZYMxPUO", "QQUpEyIcx6", "-B3NFtp5Mp9", "IzZersT2KAU", "LeFLfRbUGVp", "vvYVH7Pwvzq", "eFXmjdrI0RB", "Db6lgFwrCB", "4BQoSUinQx_", "ePhHgHUU6Fv", "7GQvGH93Mtw" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update\n-------\n\nI've updated by score in light of the discussion; as I said in the comments, from a purely experimental point of view there are good results, however the presentation of the paper confounds too many aspects. If the authors can address the terminology issues then it would make the work stronger.\...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_BVSM0x3EDK6", "XTruPumnCvN", "p2P7DgGDL3F", "QQUpEyIcx6", "LeFLfRbUGVp", "iclr_2021_BVSM0x3EDK6", "Jnn6wx7ExqU", "Db6lgFwrCB", "7GQvGH93Mtw", "4BQoSUinQx_", "LeFLfRbUGVp", "ePhHgHUU6Fv", "iclr_2021_BVSM0x3EDK6", "iclr_2021_BVSM0x3EDK6", "iclr_2021_BVSM0x3EDK6", "iclr_2021_BV...
iclr_2021_l0mSUROpwY
Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures
Proteins perform a large variety of functions in living organisms and thus play a key role in biology. However, commonly used algorithms in protein representation learning were not specifically designed for protein data, and are therefore not able to capture all relevant structural levels of a protein during learning. To fill this gap, we propose two new learning operators, specifically designed to process protein structures. First, we introduce a novel convolution operator that considers the primary, secondary, and tertiary structure of a protein by using n-D convolutions defined on both the Euclidean distance, as well as multiple geodesic distances between the atoms in a multi-graph. Second, we introduce a set of hierarchical pooling operators that enable multi-scale protein analysis. We further evaluate the accuracy of our algorithms on common downstream tasks, where we outperform state-of-the-art protein learning algorithms.
poster-presentations
Protein molecule structure analysis is an important problem in biology that has recently become of increasing interest in the ML field. The paper proposes a new architecture using a new type of convolution and pooling both on Euclidean as well as intrinsic representations of the proteins, and applies it to several standard tasks in the field. Overall the reviews were strong, with the reviewers commending the authors for an important result on the intersection of biology and ML. The reviewers raised the points of: - weak baselines (The authors responded with adding suggested comparison, which were not completely satisfactory) - focus mostly on recent protein literature - the reliance of the method on the 3D structure. The AC however does not find this as a weakness, as there are multiple problems that rely on 3D structure, which with recent methods can be predicted computationally rather than experimentally. We believe this to be an important paper and thus our recommendation is Accept. As the AC happens to have expertise in both 3D geometric ML and structural biology, he/she would strongly encourage the authors to better do their homework as there have been multiple recent works on convolutional operators on point clouds, as well as intrinsic representation-based ML methods for proteins.
val
[ "lvFug2S-F5I", "9TRw3pMaZdM", "8c_y__Kbh3g", "K47SH45Iqg3", "2ASyYz3k_GC", "cpwMRq9YYZw", "Ggx-od3JbAT", "ItyLAjusH0Q", "yau7nsQ6n7P", "_z76ad0_t6Z", "IFYd1JRAH3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents new convolutional and pooling operators for protein structures. These components are used to design an architecture that shows strong performance on several downstream tasks.\n\nThe main strength of the paper is the presentation of new ideas for modeling protein structures. The proposed operato...
[ 6, 5, -1, -1, -1, -1, -1, -1, 9, 8, 9 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_l0mSUROpwY", "iclr_2021_l0mSUROpwY", "yau7nsQ6n7P", "_z76ad0_t6Z", "9TRw3pMaZdM", "IFYd1JRAH3", "lvFug2S-F5I", "iclr_2021_l0mSUROpwY", "iclr_2021_l0mSUROpwY", "iclr_2021_l0mSUROpwY", "iclr_2021_l0mSUROpwY" ]
iclr_2021_XAS3uKeFWj
Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF
We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model. Our approach leverages both learning and domain knowledge from multiple-view geometry and rigid-body dynamics. This results in an expressive predictive model of the world, often missing in current state-of-the-art visual SLAM solutions. The combination of variational inference, neural networks and a differentiable raycaster ensures that our model is amenable to end-to-end gradient-based optimisation. We evaluate our approach on realistic unmanned aerial vehicle flight data, nearing the performance of state-of-the-art visual-inertial odometry systems. We demonstrate the applicability of the model to generative prediction and planning.
poster-presentations
The paper proposes a method for SLAM like dense 3D mapping (colored occupancy grid) based on differentiable rendering with a possibility to provide a probabilistic generative predictive distribution, evaluated on UAVs. Initially this paper has a wide spread of reviews, with ratings between 4 and 9. Reviewers appreciated the elegant and principled formulation and the interest of the predictive distribution. On the downside, several issues were raised on the incremental nature wrt to DVBF-LM; presentation and writing being very dense and difficult to follow; positioning wrt to prior art; performance with respect to known visual SLAM SOTA baselines; limited evaluations. The authors provided responses to many of this questions and also updates to the paper, which convinced several reviewers, who unanimously recommended acceptance after discussion. The AC concurs.
train
[ "scwBHMqKfi-", "BpGKLJN-Xj-", "f2ut39lY-t", "FoOk7uzIxEN", "0nSi6WIAPYt", "PNP3GXtoFt3", "uvE_kYm5Sz", "xBMl4Vm985", "o204HUSetg" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n## Summary\n\nThe paper proposes a framework built on DVBF-LM, extended to dense 3D mapping. Overall I find the work somewhat incremental over DVBF-LM and that the methodology lacks clarity. \n\n## Strengths\n\n - The work builds on a fundamentally new and interesting line of generative variational approaches to...
[ 6, 7, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_XAS3uKeFWj", "iclr_2021_XAS3uKeFWj", "iclr_2021_XAS3uKeFWj", "xBMl4Vm985", "o204HUSetg", "scwBHMqKfi-", "BpGKLJN-Xj-", "iclr_2021_XAS3uKeFWj", "iclr_2021_XAS3uKeFWj" ]
iclr_2021_Iz3zU3M316D
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights
Normalization techniques, such as batch normalization (BN), are a boon for modern deep learning. They let weights converge more quickly with often better generalization performances. It has been argued that the normalization-induced scale invariance among the weights provides an advantageous ground for gradient descent (GD) optimizers: the effective step sizes are automatically reduced over time, stabilizing the overall training procedure. It is often overlooked, however, that the additional introduction of momentum in GD optimizers results in a far more rapid reduction in effective step sizes for scale-invariant weights, a phenomenon that has not yet been studied and may have caused unwanted side effects in the current practice. This is a crucial issue because arguably the vast majority of modern deep neural networks consist of (1) momentum-based GD (e.g. SGD or Adam) and (2) scale-invariant parameters (e.g. more than 90% of the weights in ResNet are scale-invariant due to BN). In this paper, we verify that the widely-adopted combination of the two ingredients lead to the premature decay of effective step sizes and sub-optimal model performances. We propose a simple and effective remedy, SGDP and AdamP: get rid of the radial component, or the norm-increasing direction, at each optimizer step. Because of the scale invariance, this modification only alters the effective step sizes without changing the effective update directions, thus enjoying the original convergence properties of GD optimizers. Given the ubiquity of momentum GD and scale invariance in machine learning, we have evaluated our methods against the baselines on 13 benchmarks. They range from vision tasks like classification (e.g. ImageNet), retrieval (e.g. CUB and SOP), and detection (e.g. COCO) to language modelling (e.g. WikiText) and audio classification (e.g. DCASE) tasks. We verify that our solution brings about uniform gains in performances in those benchmarks. Source code is available at https://github.com/clovaai/adamp
poster-presentations
Clarity: The paper is well-written with illustrative figures. Originality: The originality of the paper is relatively restricted, mainly due to the resemblance with the work [1]. However, there are important differences, that the authors nicely pointed out, and we encourage them to include these in the final version of the paper. Significance: The paper points out a relevant issue in using normalization techniques such as batch normalization together with momentum-based optimization algorithms in training deep neural networks. While the paper could be considered "another algorithms for training NNs", the papers illustrates nicely the main arguments, and is backed up with more than sufficient experimental results. Main pros: - In the main pros, AC and reviewers admit the phenomenal job in responding to reviewers' questions and requests - The paper provides experimental results on various tasks and datasets to demonstrate the advantage of the proposed method. - After the reviews, The authors also reinforced their empirical investigation by reporting standard deviation of the results, which allows to better appreciate the performances of SGDP and AdamP. Finally, they also added the experiments with higher weight decay, showing that indeed 1e-4 was the best value. Main cons: - One reviewer requires more explanation why the proposed update in equation (12) yields smaller norms ||w_{t+1}|| than the momentum-based update in equation (8).
test
[ "aBNDGVAVq0o", "QnPQsdqu9jZ", "g3oxMgVEy82", "evA0-1WkeH", "N-dL0XkI3_R", "RlRszIRF1Fb", "H9RyuHEhaG2", "fgN10f-6TpX", "pjv276UYR2c", "QmEGkbygxr", "4YzdQIFkliU", "-GaEEn7hncU", "JmI4PhEPpL", "mw-oWyQMNtx", "IlRZ_6u_jYN", "iWpivC6hGoC", "g1Wr1YOfoTX", "LZyf9OUZgwT", "UnLLBCtGBLR"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public", "author", "public", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ "###################################################################\n\nSummary:\n\nThis paper shows that momentum-based gradient descent optimizers reduce the effective step size in training scale-invariant models including deep neural networks normalized by batch normalization, layer normaliztion, instance normal...
[ 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2021_Iz3zU3M316D", "iclr_2021_Iz3zU3M316D", "aBNDGVAVq0o", "LZyf9OUZgwT", "fgN10f-6TpX", "LJbD-rOArZd", "iclr_2021_Iz3zU3M316D", "iWpivC6hGoC", "-GaEEn7hncU", "LZyf9OUZgwT", "aBNDGVAVq0o", "JmI4PhEPpL", "mw-oWyQMNtx", "iclr_2021_Iz3zU3M316D", "iWpivC6hGoC", "aBNDGVAVq0o", "NZl1...
iclr_2021_gV3wdEOGy_V
MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model. Motivated by the mixture of experts, MiCE employs a gating function to partition an unlabeled dataset into subsets according to the latent semantics and multiple experts to discriminate distinct subsets of instances assigned to them in a contrastive learning manner. To solve the nontrivial inference and learning problems caused by the latent variables, we further develop a scalable variant of the Expectation-Maximization (EM) algorithm for MiCE and provide proof of the convergence. Empirically, we evaluate the clustering performance of MiCE on four widely adopted natural image datasets. MiCE achieves significantly better results than various previous methods and a strong contrastive learning baseline.
poster-presentations
Thanks for your submission to ICLR! This paper considers a novel unsupervised image clustering framework based on a mixture of contrastive experts framework. Most of the reviewers were overall positive about the paper. On the positive side, they noted that the paper had an interesting idea, was well motivated, written well, and had solid results. Also, the authors provided detailed and useful responses to the reviews, which further strengthened the case for accepting the paper. On the negative side, one reviewer felt that the paper seemed a bit preliminary and its presentation could improve. Also, there was some concern about missing comparisons / discussion to previous work (including from a public comment) or data sets (e.g. ImageNet-10). Again, the authors responded well to these concerns. Given that the overall response was quite positive with the paper, I'm happy to recommend accepting it.
train
[ "y6ae8EP3Q4a", "3vGvW2pRSk", "1NEWQLTjQ4w", "d_F-XYddXq9", "4XDRwdhBR2y", "rIQGC9DvDDY", "mN_zZ00mbkd", "ZUdbytSlref", "M38Hb9ZSLyn", "UIp6Fn9KsOd", "tsGQ9ejnuB", "5TbJtJEVvzt", "Ae8ZpVmVTQ", "7tmry5-AJSk", "1ChMTRII-Nw" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an image clustering methodology based on Mixture of Experts (MoE) for image clustering. \nAlthough MoE has been proposed for supervised learning problems, the authors exploit the instance discrimination framwork to apply the MoE idea for image clustering. \n\nThis is a novel aspect of the propos...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2021_gV3wdEOGy_V", "iclr_2021_gV3wdEOGy_V", "ZUdbytSlref", "Ae8ZpVmVTQ", "UIp6Fn9KsOd", "mN_zZ00mbkd", "y6ae8EP3Q4a", "M38Hb9ZSLyn", "7tmry5-AJSk", "1ChMTRII-Nw", "5TbJtJEVvzt", "iclr_2021_gV3wdEOGy_V", "iclr_2021_gV3wdEOGy_V", "iclr_2021_gV3wdEOGy_V", "iclr_2021_gV3wdEOGy_V" ]
iclr_2021_9GBZBPn0Jx
HalentNet: Multimodal Trajectory Forecasting with Hallucinative Intents
Motion forecasting is essential for making intelligent decisions in robotic navigation. As a result, the multi-agent behavioral prediction has become a core component of modern human-robot interaction applications such as autonomous driving. Due to various intentions and interactions among agents, agent trajectories can have multiple possible futures. Hence, the motion forecasting model's ability to cover possible modes becomes essential to enable accurate prediction. Towards this goal, we introduce HalentNet to better model the future motion distribution in addition to a traditional trajectory regression learning objective by incorporating generative augmentation losses. We model intents with unsupervised discrete random variables whose training is guided by a collaboration between two key signals: A discriminative loss that encourages intents' diversity and a hallucinative loss that explores intent transitions (i.e., mixed intents) and encourages their smoothness. This regulates the neural network behavior to be more accurately predictive on uncertain scenarios due to the active yet careful exploration of possible future agent behavior. Our model's learned representation leads to better and more semantically meaningful coverage of the trajectory distribution. Our experiments show that our method can improve over the state-of-the-art trajectory forecasting benchmarks, including vehicles and pedestrians, for about 20% on average FDE and 50% on road boundary violation rate when predicting 6 seconds future. We also conducted human experiments to show that our predicted trajectories received 39.6% more votes than the runner-up approach and 32.2% more votes than our variant without hallucinative mixed intent loss. The code will be released soon.
poster-presentations
The paper presents a method for future trajectory generation. The main contribution is in proposing a technique for data augmentation in the latent space which encourages prediction of trajectories that are both plausible, but also different from the training set. The results clearly show superior performance on standard benchmarks. The evaluation is thorough and ablations show that the proposed innovation matters. R2, R3, R4 recommend that the paper be accepted with scores 6, 8, and 6 respectively. R1 recommends the paper be rejected with a score of 5. The main concern of reviewers are: R1: " In summary, the paper suffers from lack of a clear justification of the proposed contributions, unfair evaluations, and questionable significance of the results." The authors addressed this concern in their rebuttal. R2: "Some other points remain still open such as the limited focus on Trajectron in evaluations." Since trajectron is a recent SOTA, I think this is not a big concern. Authors compare against other baseline methods too. R4: Comparison to Mercat, Jean, et al., ICRA 2020 is missing. The authors mention that their code is unavailable and therefore cannot compare. R4: "underlying reasons for the success of different components (classification of latent intent and hallucinative latent intent) are hard to explain". I agree with this and this is also my major concern which I detail below. The paper proposes to find diverse trajectories by generating two latent vectors: z, z'. The first h time steps are generated by latent vector z and the remainder using z'. The generated trajectory is evaluated by a discriminator that ensures plausibility. The latent vectors are chosen to be discrete and a classifier is trained to recognize z from ground truth trajectories. To encourage diverse trajectories, authors use a loss that encourages mis-classification of the latent variable inferred from the generated trajectory. Since the generated trajectory cannot be classified well, it is assumed to be different from the training set. This formulation is rather adhoc. If the trajectory is indeed different from the training distribution, then it will also fool the discriminator. If it doesnot, then it's not very different. The mis-classification, is akin to encouraging high entropy in the z space inferred from predicted trajectories. With this view, it is possible that there is no need to generate two latent vectors z, z', but simply generate one and use the entropy penalty. I would love to see this experiment and see the authors demystify their method. It would also lead to significant changes in writing. Even now, writing needs improvement. Due to the proposed method being a adhoc trick, that is not well justified, I would normally not recommend acceptance. However, the empirical results are strong, tilting the recommendation to acceptance.
train
[ "G_Rg1DfhBP_", "rTSHJI_qgI-", "ob-K8kr6Sbr", "wJE8vDTUj4g", "-8p-cpS8Qzp", "kduPdNu9jve", "mRdP5xo5wiL", "nGc2tg3Yxse", "5_BDavsshj", "fMPLe0SWP5R", "XdnOihdyKKU", "uDwmrU-tEM", "J38X45DGR6F" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**SUMMARY**\n\nThe present work considers the problem of multi-agent trajectory prediction. Its main contribution is incorporating generative augmentation losses for improving the quality of a trajectory predictor. This is achieved by allowing trajetcory predictors to model intent as an unobserved latent variable ...
[ 6, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_9GBZBPn0Jx", "J38X45DGR6F", "mRdP5xo5wiL", "J38X45DGR6F", "G_Rg1DfhBP_", "mRdP5xo5wiL", "iclr_2021_9GBZBPn0Jx", "mRdP5xo5wiL", "G_Rg1DfhBP_", "uDwmrU-tEM", "iclr_2021_9GBZBPn0Jx", "iclr_2021_9GBZBPn0Jx", "iclr_2021_9GBZBPn0Jx" ]
iclr_2021_p5uylG94S68
Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose?
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models using a fixed (random shooting) control agent. We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin. When multimodality is not required, our surprising finding is that we do not need probabilistic posterior predictives: deterministic models are on par, in fact they consistently (although non-significantly) outperform their probabilistic counterparts. We also found that heteroscedasticity at training time, perhaps acting as a regularizer, improves predictions at longer horizons. At the methodological side, we design metrics and an experimental protocol which can be used to evaluate the various models, predicting their asymptotic performance when using them on the control problem. Using this framework, we improve the state-of-the-art sample complexity of MBRL on Acrobot by two to four folds, using an aggressive training schedule which is outside of the hyperparameter interval usually considered.
poster-presentations
**Overview** This paper performs detailed ablation studies over different dynamics prediction methods for MBRL. It proposes metrics for models to evaluate how different types of uncertainty impact predictions. The paper also measures control performance with random shooting MPC. The paper further implements a new hyper parameter schedule to achieve new SOTA performance on the acrobat task. **Pro** - The paper is well-written. - The analysis in this paper is very warranted. - The paper provides a very detailed ablation study. - The authors do a great job defining and arguing for evaluation metrics. - The seven properties and metrics are mostly well-motivated and well-defined. - The authors discussed the results clearly with implications. - The result of the necessity of probabilistic vs. deterministic models in different scenarios is a good contribution to this field. **Con** - The methodology might be hard generalizable, i.e., there is difficulty in matching the paper to the literature based on its own defined metric. - The scope might be limited. **Recommendation** The paper provides a significant contribution to MBRL by providing a detailed empirical study. During the rebuttal phase, the authors addresses many reviewers' concerns in a satisfactory way. The paper is well-written and easy to read. The recommendation is an accept.
train
[ "7jQ4oxpG6lr", "9Bsjs5vOB5f", "KB8L2rcabko", "anehz-fNBru", "CkmLxg1zemC", "uKAxzPIFFp3", "8bhUWb00ZOE", "G-5d1mG9sy", "IK1lsoNQIYK", "gX68UiAPdG4", "-Ov7frnqdLK", "RtO-FPVPZmR", "UY2zSpfoXSr", "4eUEV-D4rnP", "CxK6-XlsIOO", "SwzNBxuaZm-", "IZzHuZ-a1T9", "4XiZYqENesI", "B-_6LlLfO9...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_r...
[ "This paper tackles a key problem in model-based RL that is to identify the essential properties a predictive model needs to have to achieve good control performance. They solve this problem by systematically accessing the performance of a family of autoregressive mixtures learned by deep neural nets (DARMDN) using...
[ 7, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2021_p5uylG94S68", "uKAxzPIFFp3", "G-5d1mG9sy", "IK1lsoNQIYK", "iclr_2021_p5uylG94S68", "4XiZYqENesI", "IZzHuZ-a1T9", "SwzNBxuaZm-", "CxK6-XlsIOO", "TzdsshA04eZ", "6optgf_7jJz", "7jQ4oxpG6lr", "fby9jcGNyTn", "4XiZYqENesI", "CkmLxg1zemC", "CkmLxg1zemC", "CkmLxg1zemC", "CkmLxg1...
iclr_2021_y06VOYLcQXa
Private Image Reconstruction from System Side Channels Using Generative Models
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel signals. Given the ever-growing adoption of machine learning as a service (MLaaS), image analysis software on cloud platforms has been exploited by reconstructing private user images from system side channels. Nevertheless, to date, SCA is still highly challenging, requiring technical knowledge of victim software's internal operations. For existing SCA attacks, comprehending such internal operations requires heavyweight program analysis or manual efforts. This research proposes an attack framework to reconstruct private user images processed by media software via system side channels. The framework forms an effective workflow by incorporating convolutional networks, variational autoencoders, and generative adversarial networks. Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. We also show surprising results that even one-bit data read/write pattern side channels, which are deemed minimally informative, can be used to reconstruct quality images using our framework.
poster-presentations
The paper shows that it is possible to reconstruct private images from CPU cache line and OS page table accesses side channels, using a generative model on top of side channel traces. The reviewers agree that the problem is interesting and the experimental evaluation makes a convincing case that such an attack is possible. The author rebuttal was useful in clarifying some aspects of the paper, and the discussion on possible mitigation strategies is a nice addition to the paper.
train
[ "Vv2x7zT8CB", "8BBaGFkHmtU", "_xlddZXghlz", "1EGA6EQk5Uy", "MMdtOAQ_OOm", "PkOk8_8CDWp", "ttKXHWdC-J6", "eNtwgUTVZX", "CHIUu6vBFbE", "BiliaUVZ5sz", "hOHUlNCoB1A", "Welrpis-leY", "1fwcExbR4W", "gFY5H6_Pls" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors proposed a representation and generation model to reconstruct the image signals from system side channel signals. The task itself is interesting and novel, demonstrating the first efforts and impressive performance on recovering noisy side channel signals. The work will potentially inspire more attempt...
[ 5, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_y06VOYLcQXa", "iclr_2021_y06VOYLcQXa", "iclr_2021_y06VOYLcQXa", "1fwcExbR4W", "CHIUu6vBFbE", "8BBaGFkHmtU", "gFY5H6_Pls", "Vv2x7zT8CB", "iclr_2021_y06VOYLcQXa", "PkOk8_8CDWp", "Welrpis-leY", "1fwcExbR4W", "_xlddZXghlz", "iclr_2021_y06VOYLcQXa" ]