Dataset schema (per record; string ranges are min–max lengths):

paper_id            string (19–21 chars)
paper_title         string (8–170 chars)
paper_abstract      string (8–5.01k chars)
paper_acceptance    string (18 classes)
meta_review         string (29–10k chars)
label               string (3 classes)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
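Each record stores its reviews as parallel lists: index i of review_ids, review_writers, review_contents, review_ratings, review_confidences, and review_reply_tos all describe the same forum post, with a rating/confidence of -1 marking posts that carry no score (e.g. author replies). A minimal sketch of how such a record might be consumed; the field names come from the schema above, while the record values and the helper function are illustrative, not part of the dataset:

```python
# Hypothetical record mirroring the schema above; field names are from the
# column summary, values are made up for illustration.
record = {
    "paper_id": "iclr_2021_XXXX",
    "review_ids": ["r1", "r2", "r3"],
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [4, -1, 6],        # -1 = no rating (e.g. an author reply)
    "review_confidences": [4, -1, 4],
}

def official_ratings(rec):
    """Pair each official review id with its rating, skipping -1 placeholders."""
    return [
        (rid, rating)
        for rid, writer, rating in zip(
            rec["review_ids"], rec["review_writers"], rec["review_ratings"]
        )
        if writer == "official_reviewer" and rating != -1
    ]

print(official_ratings(record))  # [('r1', 4), ('r3', 6)]
```

Filtering on both the writer role and the -1 sentinel is important: in the records below, author posts always carry -1 in the rating and confidence lists.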
iclr_2021_VErQxgyrbfn
Convex Regularization behind Neural Reconstruction
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems. The non-convex and opaque nature of neural networks, however, hinders their utility in sensitive applications such as medical imaging. To cope with this challenge, this paper advocates a convex duality framework that makes a two-layer fully-convolutional ReLU denoising network amenable to convex optimization. The convex dual network not only offers optimal training with convex solvers, but also facilitates interpreting training and prediction. In particular, it implies that training neural networks with weight decay regularization induces path sparsity, while prediction amounts to piecewise linear filtering. A range of experiments with MNIST and fastMRI datasets confirm the efficacy of the dual network optimization problem.
poster-presentations
This paper is motivated by figuring out what regularization popular neural network reconstruction techniques correspond to. In particular, it studies a convex duality framework that characterizes the global optima of a two-layer fully-convolutional ReLU denoising network via convex optimization. The authors use this regularization to interpret the obtained training results. The reviewers raised a variety of concerns regarding the tractability of the optimization problem (the number of constraints appears to be exponential), the utility for interpretation, and the significance of the results compared to the existing literature. Some of these concerns were alleviated but not fully resolved. One reviewer had concerns about the correctness of a proof, which were resolved by the authors' response. I share many of the above concerns. However, I do think that having a computationally feasible way to figure out the exact regularization in these simple settings (at least in small dimensions) could provide insights to guide further theoretical development. Therefore I am recommending acceptance. However, I strongly urge the authors to further revise the paper based on the above comments.
train
[ "8DwTrujDcl-", "8EUQTIxLB7J", "cAU1zxFrH4V", "vLL-tVpGwwu", "uTGaHcU_QLk", "1yTQ8-ptiVb", "3UAGpsZWjZZ", "u0b_9McfcIA", "qOO6ih469fg", "oNuarmgCdQ7", "GWwQiRnz6_-" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response and the new filter visualization for MRI training. I would suggest to include your response about the scalable convex solvers such as PyTorch and CVXPY to the paper. \n\nConsidering authors' responses to me and other reviewers, I think, this is a solid paper with sufficient evaluations, ...
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "oNuarmgCdQ7", "iclr_2021_VErQxgyrbfn", "qOO6ih469fg", "oNuarmgCdQ7", "GWwQiRnz6_-", "u0b_9McfcIA", "u0b_9McfcIA", "iclr_2021_VErQxgyrbfn", "iclr_2021_VErQxgyrbfn", "iclr_2021_VErQxgyrbfn", "iclr_2021_VErQxgyrbfn" ]
iclr_2021_iKQAk8a2kM0
Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes. Specifically, our goal is to misclassify a specific sample into a target class without any sample modification, while not significantly reducing the prediction accuracy of other samples, to ensure stealthiness. To this end, we formulate this problem as binary integer programming (BIP), since the parameters are stored as binary bits (i.e., 0 and 1) in memory. By utilizing the latest techniques in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM). Consequently, the flipped critical bits can be easily determined through optimization, rather than using a heuristic strategy. Extensive experiments demonstrate the superiority of our method in attacking DNNs.
poster-presentations
The major concerns about this paper are that (1) there are too many hyper-parameters, such as those needed for ADMM. I'd point out that there are adaptive variants of ADMM and heuristic methods for choosing optimization hyper-parameters, although it would be nice if the authors addressed these issues in the paper. (2) Some reviewers are concerned that, compared to other related attacks, it's unclear why flipping fewer bits is an important objective: an attacker might only care about poisoning performance and clean-data performance. The authors respond that flipping fewer bits makes the attack more effective when bits are manipulated by a physical method such as manipulating memory. Despite these criticisms, reviewers agree that the paper presents a well thought-out approach that improves the state of the art on some metrics.
train
[ "QQVm8hygpV", "9kLdaVdxnj0", "OjB7EVqtZ4K", "i2hwyXOyVMm", "Szk6vBJRKFy", "m4m67DpFxx", "08TL6WuK8YF", "wIrPpEa32DA", "-QEZB02xvjw", "MmHRoSJzQNK", "0MrD1z4id3" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your additional feedback. We understand your concern about the hyper-parameters in our method and this concern is insightful. The introduced hyper-parameters can be divided into two parts, including (1) hyper-parameters in the $\\ell_p$-Box ADMM algorithm, and (2) attacker-specified hyper-parameters ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "9kLdaVdxnj0", "i2hwyXOyVMm", "iclr_2021_iKQAk8a2kM0", "wIrPpEa32DA", "MmHRoSJzQNK", "-QEZB02xvjw", "0MrD1z4id3", "iclr_2021_iKQAk8a2kM0", "iclr_2021_iKQAk8a2kM0", "iclr_2021_iKQAk8a2kM0", "iclr_2021_iKQAk8a2kM0" ]
iclr_2021_5Y21V0RDBV
Generalized Multimodal ELBO
Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research. However, existing self-supervised generative models approximating an ELBO are not able to fulfill all desired requirements of multimodal models: their posterior approximation functions lead to a trade-off between the semantic coherence and the ability to learn the joint data distribution. We propose a new, generalized ELBO formulation for multimodal data that overcomes these limitations. The new objective encompasses two previous methods as special cases and combines their benefits without compromises. In extensive experiments, we demonstrate the advantage of the proposed method compared to state-of-the-art models in self-supervised, generative learning tasks.
poster-presentations
After a bit of discussion, all reviewers are in favor of accepting the paper. Strengths: + Clarity (agreed by R4, R2, R1). The paper is easy to read and its core contributions are easy to follow. R3 had a concern about the correctness of a derivation, which was resolved in the discussion. + The work solves a core problem of generative models for multimodal applications, building on prior work with mixture and product of experts models. As R1 notes: "By combining MVAE and MMVAE under one framework, this may provide insights to researchers in this area." + On the datasets studied, the details for reproducibility are transparent, and multiple metrics and uncertainty over the metric results are reported. Weaknesses: + Multi-modality of the benchmarks. The experiments evaluate on MNIST-SVHN, "PolyMNIST", and CelebA. As R4 notes, it is arguable whether the latter two benchmarks are really multimodal: e.g., CelebA has two "modalities" of image and attribute pairs, and it is arguable whether a multimodal approach is even needed there. + Scale of the benchmarks. Language models (especially with Transformer architectures) have been studied quite a bit over multiple modalities, and these works scale significantly better using simple strategies. It remains to be seen empirically what the utility of multimodal latent variable models really is.
train
[ "EmY-4M4fvQN", "eg3ejBgO-76", "tePQKcX2vya", "77ilZ-StlyU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on providing a more generalize multimodal ELBO to encompass previous PoE and MoE as special cases and combines their benefits. To this end, the authors first define the new ELBO L_{MoPoE} which is an interesting extension of PoE and MoE. Different from PoE (product of experts) and MoE (mixture o...
[ 6, 6, 7, 6 ]
[ 4, 4, 4, 3 ]
[ "iclr_2021_5Y21V0RDBV", "iclr_2021_5Y21V0RDBV", "iclr_2021_5Y21V0RDBV", "iclr_2021_5Y21V0RDBV" ]
iclr_2021_0aW6lYOYB7d
Large-width functional asymptotics for deep Gaussian neural networks
In this paper, we consider fully connected feed-forward deep neural networks where weights and biases are independent and identically distributed according to Gaussian distributions. Extending previous results (Matthews et al., 2018a;b;Yang, 2019) we adopt a function-space perspective, i.e. we look at neural networks as infinite-dimensional random elements on the input space RI. Under suitable assumptions on the activation function we show that: i) a network defines a continuous Gaussian process on the input space RI; ii) a network with re-scaled weights converges weakly to a continuous Gaussian process in the large-width limit; iii) the limiting Gaussian process has almost surely locally γ-Hölder continuous paths, for 0<γ<1. Our results contribute to recent theoretical studies on the interplay between infinitely wide deep neural networks and Gaussian processes by establishing weak convergence in function-space with respect to a stronger metric.
poster-presentations
This article provides an analysis of feedforward neural networks with iid Gaussian weights and biases in the infinite-width limit. The paper complements earlier work on this topic by taking a function-space approach, considering neural networks as infinite-dimensional random elements on the input space. This is a well-written and rigorous theoretical paper. Although, as noted by a reviewer, there are no direct practical implications, the result is interesting in itself, highly relevant to the ICLR audience, and likely to lead to further exploration of the connections between Gaussian processes and neural networks. There were a few questions regarding the proofs that have been answered satisfactorily by the authors. I recommend acceptance.
train
[ "9BXMFsswkU9", "jkI3d0CJYA3", "4T2qhRbnSh4", "gQaH7piHkyk", "gibk2S8d1M4", "ual4cunockB", "fwDllTP7f5e", "uhDkqhJghcS", "cyauPNHe3IA", "HhMOiNpgNvE", "_KBAqS_SfE9" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "POST-REBUTTAL\n\nThank you for your answers and incorporated modifications! I think you've succeeded in addressing my major concerns, so I'm raising my score and recommending accept as promised.\n\n---\n\nThis paper is a theoretical investigation of the asymptotic behaviour of deep fully connected neural networks ...
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_0aW6lYOYB7d", "iclr_2021_0aW6lYOYB7d", "cyauPNHe3IA", "9BXMFsswkU9", "9BXMFsswkU9", "9BXMFsswkU9", "HhMOiNpgNvE", "_KBAqS_SfE9", "iclr_2021_0aW6lYOYB7d", "iclr_2021_0aW6lYOYB7d", "iclr_2021_0aW6lYOYB7d" ]
iclr_2021_H8UHdhWG6A3
Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent
Byzantine-resilient Stochastic Gradient Descent (SGD) aims at shielding model training from Byzantine faults, be they ill-labeled training datapoints, exploited software/hardware vulnerabilities, or malicious worker nodes in a distributed setting. Two recent attacks have been challenging state-of-the-art defenses though, often successfully precluding the model from even fitting the training set. The main identified weakness in current defenses is their requirement of a sufficiently low variance-norm ratio for the stochastic gradients. We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness. We assess the effectiveness of our method over 736 different training configurations, comprising the 2 state-of-the-art attacks and 6 defenses. For confidence and reproducibility purposes, each configuration is run 5 times with specified seeds (1 to 5), totalling 3680 runs. In our experiments, when the attack is effective enough to decrease the highest observed top-1 cross-accuracy by at least 20% compared to the unattacked run, our technique systematically increases back the highest observed accuracy, and is able to recover at least 20% in more than 60% of the cases.
poster-presentations
The authors present a simple modification of existing Byzantine-resistant techniques for training in the presence of worst-case failures/attacks. The paper studies two of the strongest attacks to date, which no other method, until now, has been able to address. The novelty is significant for the related Byzantine ML literature. The authors further do a fantastic job in their experiments and in sharing reproducible code. Some weak aspects of the theory are in fact attributable to the metrics and guarantees that the related literature studies. The novelty of this paper does not lie so much in the theory contribution, but more in the experiments and the presented intuition. I believe this will be a paper that people will build on, and the ideas presented here are of solid value and importance.
train
[ "EHkv1EtpefS", "cS2BQi2LXU5", "pc7wb6cmQkt", "VWq_mjFPEpo", "v49IPADMjz8", "pg5JgNwCKju", "LyZs96-L23-", "VwJCrWb7zwb" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "# Contributions\n\nThis paper presents a novel method to tackle the Byzantine faults problem. By using a local momentum, this method can be extended to all other existing robust algorithms. The authors also provide some theoretical analysis of the effect of their algorithm. Finally, comprehensive experiments are s...
[ 7, -1, -1, -1, -1, 4, 6, 4 ]
[ 4, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2021_H8UHdhWG6A3", "LyZs96-L23-", "pg5JgNwCKju", "EHkv1EtpefS", "VwJCrWb7zwb", "iclr_2021_H8UHdhWG6A3", "iclr_2021_H8UHdhWG6A3", "iclr_2021_H8UHdhWG6A3" ]
iclr_2021_agHLCOBM5jP
Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders
Deep Learning based methods have emerged as the indisputable leaders for virtually all image restoration tasks. Especially in the domain of microscopy images, various content-aware image restoration (CARE) approaches are now used to improve the interpretability of acquired data. Naturally, there are limitations to what can be restored in corrupted images, and like for all inverse problems, many potential solutions exist, and one of them must be chosen. Here, we propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs), overcoming the problem of having to choose a single solution by predicting a whole distribution of denoised images. First we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder. Our approach is fully unsupervised, only requiring noisy images and a suitable description of the imaging noise distribution. We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training. If desired, consensus predictions can be inferred from a set of DivNoising predictions, leading to competitive results with other unsupervised methods and, on occasion, even with the supervised state-of-the-art. DivNoising samples from the posterior enable a plethora of useful applications. We are (i) showing denoising results for 13 datasets, (ii) discussing how optical character recognition (OCR) applications can benefit from diverse predictions, and are (iii) demonstrating how instance cell segmentation improves when using diverse DivNoising predictions.
poster-presentations
A simple but sensible idea to improve VAE with good experimental results.
train
[ "hF48vV86MjP", "M5SQwAT8fyT", "bl-M8MAdge", "Aof1j7GlDDA", "Px2-lv-rPBg", "PaY3ydPm8_Q", "8OG-aBgVA0k", "DCEYcRSiLf", "9yhM7Ovcu0T", "PUtktqJT-Pn", "c7VSaG0nlMX", "-g8JIhYEBqN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new method of noise removal using convolutional VAE. An observed image with noise is input to VAE, and after the expression $z$ in the latent space, the noise removed image is finally output. After that, it is possible to generate a pseudo noisy observation image according to the noise mode...
[ 7, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 3, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_agHLCOBM5jP", "iclr_2021_agHLCOBM5jP", "iclr_2021_agHLCOBM5jP", "9yhM7Ovcu0T", "iclr_2021_agHLCOBM5jP", "M5SQwAT8fyT", "hF48vV86MjP", "-g8JIhYEBqN", "PaY3ydPm8_Q", "Px2-lv-rPBg", "8OG-aBgVA0k", "iclr_2021_agHLCOBM5jP" ]
iclr_2021_n7wIfYPdVet
Auxiliary Learning by Implicit Differentiation
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. Here, we propose a novel framework, AuxiLearn, that targets both challenges based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn non-linear interactions between tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn in a series of tasks and domains, including image segmentation and learning with attributes in the low data regime, and find that it consistently outperforms competing methods.
poster-presentations
The paper proposes a novel framework to develop useful auxiliary tasks and combine auxiliary tasks into a single coherent loss. The idea is good and the experiments are sufficient to verify the arguments. All the reviewers agree to accept the paper.
train
[ "b_aq1LJhSKh", "p77UfreS62", "LoVIfNGLxNv", "TFkb2jNdejG", "9njOAFFTBQR", "RfjHqjoa4dh", "nS7gB-eMMv", "TxmyaNxTT_o", "4hTy4SxESlT", "rGNYyRYBaYD", "HJ8hXa1LJf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper pinpoints the key issues of Auxiliary Learning: (1). how to design useful auxiliary tasks, (2) how to combine auxiliary tasks into a single coherent loss. Motived by the issues, this paper proposes a novel Auxiliary Learning frame work, named AuxiLearn. The paper is globally well organized and clearly w...
[ 7, 6, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_n7wIfYPdVet", "iclr_2021_n7wIfYPdVet", "TFkb2jNdejG", "rGNYyRYBaYD", "HJ8hXa1LJf", "p77UfreS62", "4hTy4SxESlT", "iclr_2021_n7wIfYPdVet", "iclr_2021_n7wIfYPdVet", "iclr_2021_n7wIfYPdVet", "iclr_2021_n7wIfYPdVet" ]
iclr_2021_TQt98Ya7UMP
Balancing Constraints and Rewards with Meta-Gradient D4PG
Deploying Reinforcement Learning (RL) agents to solve real-world applications often requires satisfying complex system constraints. Often the constraint thresholds are incorrectly set due to the complex nature of a system or the inability to verify the thresholds offline (e.g., no simulator or reasonable offline evaluation procedure exists). This results in solutions where a task cannot be solved without violating the constraints. However, in many real-world cases, constraint violations are undesirable yet not catastrophic, motivating the need for soft-constrained RL approaches. We present two soft-constrained RL approaches that utilize meta-gradients to find a good trade-off between expected return and minimizing constraint violations. We demonstrate the effectiveness of these approaches by showing that they consistently outperform the baselines across four different Mujoco domains.
poster-presentations
The paper looks at soft-constrained RL techniques and proposes a meta-gradient approach. One of the biggest problems with Lagrangian-optimization-based CMDP algorithms is that the optimization of the Lagrange multiplier is tricky. The proposed solution and empirical results have promise. The reviewers broadly agree in their evaluation, and the major concerns on comprehension, additional experiments, as well as comparison with baselines have been addressed in the rebuttal. - Convergence rate and quality of the fixed point reached. The authors mention convergence to local optima but omit the quality of this solution from the perspective of safety. It would be useful to include a discussion on the topic, with potential references to concurrent work. Other relevant and concurrent papers to potentially take note of: - Risk-Averse Offline Reinforcement Learning (https://openreview.net/forum?id=TBIzh9b5eaz) - Distributional Reinforcement Learning for Risk-Sensitive Policies (https://openreview.net/forum?id=19drPzGV691) - Conservative Safety Critics for Exploration (https://openreview.net/forum?id=iaO86DUuKi) I would recommend acceptance of the paper based on the empirical results, conditional on the release of a sufficiently documented and easy-to-use implementation. Given that the main argument is the empirical utility of the method, it would limit the impact of this work if readers could not readily build on it.
train
[ "GUAqlaLS4PU", "TjB1rpPRtFJ", "7EotMguKW9I", "HmwVXUqP7JD", "txZzSgJumQa", "WzO4IqqKQvt", "IQSmsd-Srtv", "KReeDizK5A", "eN-Ak1uomP", "bYSLbn36EUi" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper addresses the soft constraints problem in RL. The problem is formulated as a Lagrangian optimization following Tesslaer et al. (2018) where the constraint is treated as a penalty in the reward. The base solution to the Lagrangian optimization is D4PG. \nTo adapt the learning rate of the Lagrangian multi...
[ 6, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_TQt98Ya7UMP", "7EotMguKW9I", "txZzSgJumQa", "bYSLbn36EUi", "GUAqlaLS4PU", "KReeDizK5A", "eN-Ak1uomP", "iclr_2021_TQt98Ya7UMP", "iclr_2021_TQt98Ya7UMP", "iclr_2021_TQt98Ya7UMP" ]
iclr_2021__mQp5cr_iNy
Adversarially Guided Actor-Critic
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
poster-presentations
This work addresses the sparse reward problem in RL. The authors augment actor-critic algorithms with an adversarial policy. The adversary tries to mimic the actor, while the actor itself tries to differentiate itself from the adversary in addition to learning to solve the task. This provides diversity in exploration behavior. Reviewers liked the paper in general but had several clarification questions. The authors provided a rebuttal and addressed some of the concerns. Considering the reviews and rebuttal, the AC and reviewers believe that the paper provides insights that are useful to share with the community. That being said, the paper would still benefit immensely from more extensive experimentation on standard benchmark environments such as Atari. Please refer to the reviews for other feedback and suggestions.
train
[ "NOZyXQTbzsu", "XSp0nxeu7k", "e4HT-wIBxz", "SXrOPxeXJaA", "qA3VjpeKHzE", "idtYLvxEKlF", "9htLjuFTGhS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents AGAC, an architecture for efficient, and generalisable, exploration in RL in settings with very sparse rewards. The model is compared against a number of SOTA methods for hard exploration problems on a number of procedurally generated environments, with very good performance results compared to ...
[ 7, -1, -1, -1, -1, 5, 7 ]
[ 2, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2021__mQp5cr_iNy", "iclr_2021__mQp5cr_iNy", "9htLjuFTGhS", "idtYLvxEKlF", "NOZyXQTbzsu", "iclr_2021__mQp5cr_iNy", "iclr_2021__mQp5cr_iNy" ]
iclr_2021_KLH36ELmwIB
DARTS-: Robustly Stepping out of Performance Collapse Without Indicators
Despite the fast development of differentiable architecture search (DARTS), it suffers from a long-standing instability issue in search performance, which severely limits its application. Existing robustifying methods draw clues from the outcome instead of identifying the causing factor. Various indicators such as Hessian eigenvalues have been proposed as signals of performance collapse, and the search should be stopped once an indicator reaches a preset threshold. However, these methods tend to reject good architectures if thresholds are inappropriately set, not to mention that the search is intrinsically noisy. In this paper, we undertake a more subtle and direct approach to resolve the collapse. We first demonstrate that skip connections with a learnable architectural coefficient can easily recover from a disadvantageous state and become dominant. We conjecture that skip connections profit too much from this privilege, hence causing the collapse for the derived model. Therefore, we propose to factor out this benefit with an auxiliary skip connection, ensuring a fairer competition for all operations. Extensive experiments on various datasets verify that our approach can substantially improve the robustness of DARTS. Our code is available at https://github.com/Meituan-AutoML/DARTS-
poster-presentations
The paper proposes an interesting method to improve the robustness of DARTS and hence to alleviate its mode collapse. The idea consists of adding an auxiliary skip-connection branch that complements the output of the cell function, together with an in-depth analysis of the effect of the auxiliary branch. The approach is validated on several benchmarks, showing its effectiveness. All reviewers agreed that the idea is simple, efficient, and interesting. The author response satisfactorily addressed most of the points raised by the reviewers, and most of them increased their original scores, accepting the paper. Therefore, I recommend acceptance.
train
[ "Sxscak9yIiU", "vO6AbLrmzuu", "7dIDo-ytqvG", "di6K1I2rCPO", "7Fzja22iUx", "jLYWs55zdam", "Yl4dstNDCPl", "72v29HFFMPf", "4vyb_Jq8POh", "et15Bdbr-qa" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents an interesting method to alleviate the mode collapse of DARTS (all operations degenerate to skip-connect). This is done by simply adding a skip-connect operation to complement the output of the cell function and making the coefficient of the auxiliary operation decay with time. The method is te...
[ 6, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 3, 2 ]
[ "iclr_2021_KLH36ELmwIB", "di6K1I2rCPO", "Sxscak9yIiU", "72v29HFFMPf", "4vyb_Jq8POh", "iclr_2021_KLH36ELmwIB", "et15Bdbr-qa", "iclr_2021_KLH36ELmwIB", "iclr_2021_KLH36ELmwIB", "iclr_2021_KLH36ELmwIB" ]
iclr_2021__zx8Oka09eF
Are wider nets better given the same number of parameters?
Empirical studies demonstrate that the performance of neural networks improves with an increasing number of parameters. In most of these studies, the number of parameters is increased by increasing the network width. This raises the question: is the observed improvement due to the larger number of parameters, or is it due to the larger width itself? We compare different ways of increasing model width while keeping the number of parameters constant. We show that for models initialized with a random, static sparsity pattern in the weight tensors, network width is the determining factor for good performance, while the number of weights is secondary, as long as the model achieves high training accuracy. As a step towards understanding this effect, we analyze these models in the framework of Gaussian process kernels. We find that the distance between the sparse finite-width model kernel and the infinite-width kernel at initialization is indicative of model performance.
poster-presentations
The paper investigates the interesting question of whether increasing the width or the number of parameters is responsible for improved test accuracy. The paper is very well written, and the question is novel and innovative. From a methodological point of view, the experiments are well conducted, too. The theoretical part of the paper is somewhat detached from the experimental part and constitutes more of a heuristic conjecture. In addition, more experiments on a variety of other datasets would have been great. Ideally, the theoretical section would thus be replaced by such additional experiments, but this is of course not an option in a conference reviewing system. Given the innovative question and well-conducted experiments, I think that the pros outweigh the cons, and for this reason I recommend accepting the paper. Reviewer concerns have been well addressed by the authors in their rebuttal and updated version of the paper.
train
[ "_EIOVAcxR7", "RYdbBWN_PQ2", "SfxQGshhlY", "0FTIZ8oUnf", "6ZcIkx5Y0W3", "rvE4u8266Nk", "3YYBqhVLdZ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thanks all reviewers for their valuable feedback! We have uploaded a revised version that incorporates improvements based on the reviewers' suggestions.", "We thank the reviewer for their helpful comments! As suggested, we have updated section 3 with a review of the Neural Tangent Kernel result.\n\nFor your c...
[ -1, -1, -1, -1, 4, 5, 6 ]
[ -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2021__zx8Oka09eF", "3YYBqhVLdZ", "6ZcIkx5Y0W3", "rvE4u8266Nk", "iclr_2021__zx8Oka09eF", "iclr_2021__zx8Oka09eF", "iclr_2021__zx8Oka09eF" ]
iclr_2021_FZ1oTwcXchK
Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous discrete signals. While more efficient in power consumption and inference speed on neuromorphic hardware, SNNs are usually difficult to train directly from scratch with spikes due to the discreteness. As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs. Researchers have designed new SNN architectures and conversion algorithms to diminish the conversion error. However, an effective conversion should address the difference between the SNN and ANN architectures with an efficient approximation of the loss function, which is missing in the field. In this work, we analyze the conversion error by recursive reduction to layer-wise summation and propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms. This pipeline enables almost no accuracy loss between the converted SNNs and conventional ANNs with only ∼1/10 of the typical SNN simulation time. Our method is promising for deployment on embedded platforms with limited energy and memory, offering better support for SNNs. Codes are available at https://github.com/Jackn0/snn_optimal_conversion_pipeline.
poster-presentations
The work tackles the task of converting an artificial neural network (ANN) to a spiking neural network (SNN). The topic is potentially important for energy-efficient hardware implementations of neural networks. There is already quite some literature available on this topic. Compared to these, the manuscript exhibits a number of strong contributions: It presents a theoretical analysis of the conversion error and consequently arrives at a principled way to reduce the conversion error. The authors test the performance of the conversion on a number of challenging data sets. Their method achieves excellent performance with reduced simulation time / latency (usually, in order to achieve performance comparable to ANNs, one needs to run the SNN for many simulated time steps; this simulation time is reduced by their model). One reviewer criticized that the article was hard to read, but this opinion was not shared by other reviewers and the authors have improved the readability in a revision. In summary, I believe that this manuscript presents a very good contribution to the field.
train
[ "kLtqAtZypi", "8d5tvAq6XzW", "Nzg6I9vupip", "Vm85FI-0kp5", "Z4mgbIx5jR5", "HxhMCtNSinJ", "cilB2YSEAvA", "IniFjENRw0w", "mNDgu2mqQvE", "Q-qB6_aVp75", "oKSNjjleynB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the acknowledgment of our current work and constructive suggestions for future directions. We will explore more on the RNN in the following works. ", "I appreciate the addition of the RNN study in the appendix. That topic needs to be fleshed out more -- but in another paper. For the scope of this p...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "8d5tvAq6XzW", "cilB2YSEAvA", "iclr_2021_FZ1oTwcXchK", "HxhMCtNSinJ", "iclr_2021_FZ1oTwcXchK", "mNDgu2mqQvE", "oKSNjjleynB", "Q-qB6_aVp75", "Nzg6I9vupip", "iclr_2021_FZ1oTwcXchK", "iclr_2021_FZ1oTwcXchK" ]
iclr_2021_aDjoksTpXOP
Deep Equals Shallow for ReLU Networks in Kernel Regimes
Deep networks are often considered to be more expressive than shallow ones in terms of approximation. Indeed, certain functions can be approximated by deep networks provably more efficiently than by shallow ones; however, no tractable algorithms are known for learning such deep models. Separately, a recent line of work has shown that deep networks trained with gradient descent may behave like (tractable) kernel methods in a certain over-parameterized regime, where the kernel is determined by the architecture and initialization, and this paper focuses on approximation for such kernels. We show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their shallow two-layer counterpart, namely the same eigenvalue decay for the corresponding integral operator. This highlights the limitations of the kernel framework for understanding the benefits of such deep architectures. Our main theoretical result relies on characterizing such eigenvalue decays through differentiability properties of the kernel function, which also easily applies to the study of other kernels defined on the sphere.
poster-presentations
This paper analyzes the expressive power of the NTK corresponding to a deep neural network. It is shown that the depth hardly affects the behavior of the spectrum of the corresponding integral operator, which indicates that depth separation does not occur as long as the NTK is considered. The analysis is novel and gives significant insight into the NTK research literature. The theoretical framework considered in this paper is considerably broad and can potentially be applied to several types of activation functions (while only ReLU is analyzed as a concrete example in the paper). Moreover, some numerical experiments are conducted that support the validity of the theoretical analysis. All reviewers are positive about this paper. I agree with their evaluations. For these reasons, I think this paper is worth acceptance.
train
[ "g7j0Gmp6IVX", "MX9AQk2XsCk", "LZHgAm04GCX", "dMNY77O45rq", "Dqksoir4EsH", "-ojbv1Rdwp", "qnGo0DXAws", "fBANV54B_ya", "jT7Da9X7YwL" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their encouraging reviews and thoughtful comments. We addressed specific comments in separate responses to each reviewer.\n\nWe have uploaded a revised version of the paper with some clarifying modifications based on the reviews, including a more extended comparison to the concurrent wor...
[ -1, -1, -1, -1, -1, 6, 7, 6, 9 ]
[ -1, -1, -1, -1, -1, 4, 5, 3, 5 ]
[ "iclr_2021_aDjoksTpXOP", "jT7Da9X7YwL", "qnGo0DXAws", "-ojbv1Rdwp", "fBANV54B_ya", "iclr_2021_aDjoksTpXOP", "iclr_2021_aDjoksTpXOP", "iclr_2021_aDjoksTpXOP", "iclr_2021_aDjoksTpXOP" ]
iclr_2021_uxpzitPEooJ
Graph Coarsening with Neural Networks
As large-scale graphs become increasingly prevalent, processing, extracting, and analyzing large graph data poses significant computational challenges. Graph coarsening is one popular technique to reduce the size of a graph while maintaining essential properties. Despite the rich graph coarsening literature, there is only limited exploration of data-driven methods in the field. In this work, we leverage the recent progress of deep learning on graphs for graph coarsening. We first propose a framework for measuring the quality of coarsening algorithms and show that depending on the goal, we need to carefully choose the Laplace operator on the coarse graph and associated projection/lift operators. Motivated by the observation that the current choice of edge weights for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way. Through extensive experiments on both synthetic and real networks, we demonstrate that our method significantly improves common graph coarsening methods under various metrics, reduction ratios, graph sizes, and graph types. It generalizes to graphs of larger size (more than 25× the size of the training graphs), adapts to different losses (both differentiable and non-differentiable), and scales to much larger graphs than previous work.
poster-presentations
This paper presents a way to use GNNs to learn edge weights of a coarsened graph given the node mapping from the original graph to the coarsened graph. The paper is well-written and the approach is well-motivated, as learning makes it easy to adapt the edge weights to different tasks and objectives, as illustrated in the graph Laplacian and Rayleigh quotient examples. All the reviewers gave positive reviews for this paper, hence I recommend accepting this paper. The reason for not promoting this paper further to spotlight or oral is that the paper addresses a relatively small problem, learning the edge weights given the node mapping, and the proposed method is quite simple. Therefore this paper’s impact could be limited. One suggestion to the authors is to present more results on downstream tasks, i.e. how does the proposed coarsening algorithm improve downstream task performance, instead of just losses defined without a downstream task in mind. Example things to consider: does this approach improve graph classification accuracy? Does it improve a downstream GNN model’s efficiency without sacrificing accuracy?
test
[ "Fpf0PuXmWr", "J5iuoElp4e", "MTinSxrMLpG", "kUJNeAOKu-V", "xeABvKWGkWW", "S7TXqAmjCD-", "wC3Mt61eHaR", "9TQjPsDyFcD", "aBLmGxTrzH", "V70WqmsgR1p", "YQapnG9LlYv", "EUxGJTtMfiT" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your question. Indeed, we controlled the number of parameters: In fact, the number of parameters of MLP is roughly 30 percent larger than that of GOREN. As far as training error is concerned, the MLP and GOREN have similar training errors. In the table below, we list the training error and the improv...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "J5iuoElp4e", "kUJNeAOKu-V", "iclr_2021_uxpzitPEooJ", "9TQjPsDyFcD", "V70WqmsgR1p", "YQapnG9LlYv", "EUxGJTtMfiT", "aBLmGxTrzH", "MTinSxrMLpG", "iclr_2021_uxpzitPEooJ", "iclr_2021_uxpzitPEooJ", "iclr_2021_uxpzitPEooJ" ]
iclr_2021_tlV90jvZbw
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
Over-parameterized models, such as large deep networks, often exhibit a double descent phenomenon, where, as a function of model size, error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent occurs for a different reason: It is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different epochs, and mitigating this by proper scaling of stepsizes can significantly improve the early stopping performance. We show this analytically for i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and for ii) a wide two-layer neural network, where the first and second layers govern bias-variance tradeoffs. Inspired by this theory, we study two standard convolutional networks empirically and show that eliminating epoch-wise double descent through adjusting stepsizes of different layers improves the early stopping performance.
poster-presentations
This paper provides a novel theoretical analysis of epoch-wise double descent for a linear model and a two-layer non-linear model in the constant-NTK regime. Some reviewers noted that these models may be too simple to offer a full explanation for the phenomenon in state-of-the-art practical models, for which the NTK is known to change significantly. While this may be true, I believe that the detailed understanding derived in these simple settings provides an important first step and will surely be of interest to the community. I therefore recommend acceptance.
train
[ "eudYEAreoJC", "BlR5bryCebj", "UbPxvYid2gR", "oJNiIQgnSW8", "lIMYioC-v5V", "6NGZN-dsg84", "LNWCidiZzdi", "blnfLLjl8UC", "2CEEKP67S97", "338YOJw5TU4", "V4Ok_gVCjhi", "sSi7gI4LNOS", "QkDZtz82JY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper provides an interesting analysis and direction to improve generalization capability by eliminating double decent during training by setting learning rates differently for each feature and using early stopping. In terms of technical contributions, the authors prove double decent phenomenon during training...
[ 4, 8, -1, -1, 6, -1, -1, -1, 7, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2021_tlV90jvZbw", "iclr_2021_tlV90jvZbw", "LNWCidiZzdi", "6NGZN-dsg84", "iclr_2021_tlV90jvZbw", "blnfLLjl8UC", "sSi7gI4LNOS", "lIMYioC-v5V", "iclr_2021_tlV90jvZbw", "V4Ok_gVCjhi", "2CEEKP67S97", "eudYEAreoJC", "BlR5bryCebj" ]
iclr_2021_aGfU_xziEX8
Efficient Inference of Flexible Interaction in Spiking-neuron Networks
The Hawkes process provides an effective statistical framework for analyzing the time-dependent interaction of neuronal spiking activities. Although utilized in many real applications, the classic Hawkes process is incapable of modelling inhibitory interactions among neurons. Instead, the nonlinear Hawkes process allows for a more flexible influence pattern with excitatory or inhibitory interactions. In this paper, three sets of auxiliary latent variables (Polya-Gamma variables, latent marked Poisson processes and sparsity variables) are augmented to give functional connection weights a Gaussian form, which allows for a simple iterative algorithm with analytical updates. As a result, an efficient expectation-maximization (EM) algorithm is derived to obtain the maximum a posteriori (MAP) estimate. We demonstrate the accuracy and efficiency of our algorithm on synthetic and real data. For real neural recordings, we show our algorithm can estimate the temporal dynamics of interaction and reveal the interpretable functional connectivity underlying neural spike trains.
poster-presentations
This article proposes a latent-variable augmentation scheme for inference in nonlinear multivariate Hawkes processes. It combines existing approaches (Polya-Gamma and sparsity-inducing variables) in a sensible way and is clearly written. Concerns were raised with respect to the comparison to alternative baselines, and answered by the authors. As a result, some reviewers have increased their scores, and I recommend acceptance.
train
[ "G1sUwyVW3w", "-uT5jlg7VzR", "Hp3gyHjFEu", "L7NdSkAGBSH", "UeFNwOIVNA", "kyLbH_2dlEZ", "vUAduCe1x58", "bHtz6o5VLNC", "i91wjsJwZyM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors here present an extension the Hawkes process to incorporate negative interactions. This allows for inference of excitatory and inhibitory interactions among point-processes using the Hawkes process framework. They present a novel inference procedure for this model using three augmentations to pieces of...
[ 6, -1, 7, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_aGfU_xziEX8", "kyLbH_2dlEZ", "iclr_2021_aGfU_xziEX8", "G1sUwyVW3w", "bHtz6o5VLNC", "Hp3gyHjFEu", "i91wjsJwZyM", "iclr_2021_aGfU_xziEX8", "iclr_2021_aGfU_xziEX8" ]
iclr_2021_R2ZlTVPx0Gk
DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation
Deep ensembles perform better than a single network thanks to the diversity among their members. Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members’ performances. In this paper, we argue that learning strategies for deep ensembles need to tackle the trade-off between ensemble diversity and individual accuracies. Motivated by arguments from information theory and leveraging recent advances in neural estimation of conditional mutual information, we introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features. The main idea is that features extracted from pairs of members should only share information useful for target class prediction without being conditionally redundant. Therefore, besides the classification loss with information bottleneck, we adversarially prevent features from being conditionally predictable from each other. We manage to reduce simultaneous errors while protecting class information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for example, an ensemble of 5 networks trained with DICE matches an ensemble of 7 networks trained independently. We further analyze the consequences on calibration, uncertainty estimation, out-of-distribution detection and online co-distillation.
poster-presentations
This paper proposes a new method of learning ensembles of neural networks based on the Information Bottleneck theory, which increases the diversity in an ensemble by minimizing the mutual information between latent features of the different ensemble models. It shows promising results on classification, calibration and uncertainty estimation. The paper is well-written and the comments were properly addressed.
train
[ "lCOLVv7IHyR", "xtGuPOeO-1-", "yiVxOvWj0iE", "HHZCekB9RS1", "ly_4IVgwXA1", "n6sB9qgBeUX", "38p0ZK6k32F", "yhY_usU5T8_", "t6WnxUgC06w", "BpjVTRnNfl", "jVRbYv4lmO0", "rLUyEViEv18", "Pr9RusQvVKG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\nThis paper introduces a training procedure for ensembles of neural networks that improves intra-member diversity to achieve better accuracy and calibration.\n\n## Originality\nThis paper augments the VIB training objective of (Alemi et al., 2017) with two modifications: \n- adding a mutual information ...
[ 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_R2ZlTVPx0Gk", "HHZCekB9RS1", "iclr_2021_R2ZlTVPx0Gk", "n6sB9qgBeUX", "yhY_usU5T8_", "yiVxOvWj0iE", "rLUyEViEv18", "38p0ZK6k32F", "lCOLVv7IHyR", "Pr9RusQvVKG", "iclr_2021_R2ZlTVPx0Gk", "iclr_2021_R2ZlTVPx0Gk", "iclr_2021_R2ZlTVPx0Gk" ]
iclr_2021_KYPz4YsCPj
Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks
Temporal networks serve as abstractions of many real-world dynamic systems. These networks typically evolve according to certain laws, such as the law of triadic closure, which is universal in social networks. Inductive representation learning of temporal networks should be able to capture such laws and further be applied to systems that follow the same laws but have not been seen during the training stage. Previous works in this area depend on either network node identities or rich edge attributes and typically fail to extract these laws. Here, we propose {\em Causal Anonymous Walks (CAWs)} to inductively represent a temporal network. CAWs are extracted by temporal random walks and work as automatic retrieval of temporal network motifs to represent network dynamics while avoiding the time-consuming selection and counting of those motifs. CAWs adopt a novel anonymization strategy that replaces node identities with the hitting counts of the nodes based on a set of sampled walks to keep the method inductive, and simultaneously establish the correlation between motifs. We further propose a neural-network model CAW-N to encode CAWs, and pair it with a CAW sampling strategy with constant memory and time cost to support online training and inference. CAW-N is evaluated on link prediction over 6 real temporal networks and uniformly outperforms previous SOTA methods by an average 15\% AUC gain in the inductive setting. CAW-N also outperforms previous methods on 5 out of the 6 networks in the transductive setting.
poster-presentations
The paper introduces a new method for encoding dynamics of temporal networks. The approach, while not ground-breaking, is interesting and the results are fairly convincing. The submission raised a number of concerns from the reviewers. They questioned the complexity of the proposed approach (R3 and R4), the clarity/readability (R2 and R1), and appropriateness of the link sampling strategy (R2), as well as raised several more minor (from my perspective) issues. I believe that the authors adequately addressed most of these concerns in their rebuttal and the revision. R2 has confirmed that they read the rebuttal and raised their score to strong accept. Unfortunately, the other reviewers have not engaged during the discussion period, and it is unclear if they are satisfied with the clarifications and changes. Nevertheless, after reading the authors' responses and skimming through the manuscript, I believe that most concerns have been addressed, and this is a good paper that deserves to be accepted. That being said, the issue of readability has been raised by the reviewers, and, while I do not think the paper is unreadable, I do agree that there is much room for improvement. I would encourage the authors to polish the manuscript for the camera-ready version, as well as try to address the remaining concerns raised by the reviewers.
train
[ "pVcIu3xdpvK", "e0o0JmzPBnW", "f1P6UgSbaz5", "bVpEHmbAEP", "wFIgW-IKfHX", "LGOVK2gzyMO", "2fCdT0ISMcx", "qrLOPDFbdmV", "ztOQW8_Aa-E", "uEKobAY-kxh", "8oL77rt82e9" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors would like to thank R2 for appreciating this work and voting for an acceptance. ", "The authors provide in-depth analysis on the critical topic of capturing dynamic laws for the inductive representation learning of temporal graphs. The authors leverage the causal anonymous walk to capture the topolog...
[ -1, 7, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "f1P6UgSbaz5", "iclr_2021_KYPz4YsCPj", "bVpEHmbAEP", "e0o0JmzPBnW", "8oL77rt82e9", "ztOQW8_Aa-E", "uEKobAY-kxh", "iclr_2021_KYPz4YsCPj", "iclr_2021_KYPz4YsCPj", "iclr_2021_KYPz4YsCPj", "iclr_2021_KYPz4YsCPj" ]
iclr_2021_YNnpaAKeCfx
FairBatch: Batch Selection for Model Fairness
Training a fair machine learning model is essential to prevent demographic disparity. Existing techniques for improving model fairness require broad changes in either data preprocessing or model training, rendering them difficult to adopt for potentially already complex machine learning systems. We address this problem via the lens of bilevel optimization. While keeping the standard training algorithm as an inner optimizer, we incorporate an outer optimizer so as to equip the inner problem with an additional functionality: Adaptively selecting minibatch sizes for the purpose of improving model fairness. Our batch selection algorithm, which we call FairBatch, implements this optimization and supports prominent fairness measures: equal opportunity, equalized odds, and demographic parity. FairBatch comes with a significant implementation benefit -- it does not require any modification to data preprocessing or model training. For instance, a single-line change of PyTorch code for replacing the batch selection part of model training suffices to employ FairBatch. Our experiments conducted on both synthetic and benchmark real data demonstrate that FairBatch can provide such functionalities while achieving comparable (or even better) performance than the state of the art. Furthermore, FairBatch can readily improve the fairness of any pre-trained model simply via fine-tuning. It is also compatible with existing batch selection techniques intended for different purposes, such as faster convergence, thus gracefully achieving multiple purposes.
poster-presentations
All the reviewers and I agree that the proposed approach is interesting and the paper is overall well written. However, I agree with R3 that the paper needs further reworking of its theoretical part (see the post-rebuttal comments of R4). Thus, I would encourage the authors to carefully address the comments of the reviewers in the revised version of the paper, which would ultimately improve the quality of the paper.
train
[ "eDL6yfhF_9", "178ikXm2zA8", "g8O7-4jIhGh", "TXz208ZRX6", "bftA7P17fu9", "8N3DwN9pVU", "izJPu1NCsPV", "sDe09X7uNcV" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors study the problem of training fair machine learning models through the lens of bi-level optimization. In particular, they propose a method, denoted FairBatch, that adaptively selects different batch-sizes for different protected groups to impose a certain measure of fairness. This is ach...
[ 4, -1, -1, -1, -1, 7, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_YNnpaAKeCfx", "eDL6yfhF_9", "sDe09X7uNcV", "8N3DwN9pVU", "izJPu1NCsPV", "iclr_2021_YNnpaAKeCfx", "iclr_2021_YNnpaAKeCfx", "iclr_2021_YNnpaAKeCfx" ]
iclr_2021_QpNz8r_Ri2Y
Representation Balancing Offline Model-based Reinforcement Learning
One of the main challenges in offline and off-policy reinforcement learning is to cope with the distribution shift that arises from the mismatch between the target policy and the data collection policy. In this paper, we focus on a model-based approach, particularly on learning the representation for a robust model of the environment under distribution shift, which was first studied in Representation Balancing MDP (RepBM). Although this prior work has shown promising results, a number of shortcomings still hinder its applicability to practical tasks. In particular, we address the curse of horizon exhibited by RepBM, which rejects most of the pre-collected data in long-term tasks. We present a new objective for model learning motivated by recent advances in the estimation of stationary distribution corrections. This effectively overcomes the aforementioned limitation of RepBM, as well as naturally extending to continuous action spaces and stochastic policies. We also present an offline model-based policy optimization method using this new objective, yielding state-of-the-art performance on a representative set of benchmark offline RL tasks.
poster-presentations
The paper studies offline RL, which is an important topic in high-risk domains. Compared with existing works, this paper gives a tractable method to explicitly learn the model representation w.r.t. the stationary distributions of two policies. This method is quite general and could be paired with other pessimistic model-based RL methods. The experiments are limited to simpler domains, and could be extended to include harder tasks from other continuous control domains. Some examples could be domains such as in Robosuite (http://robosuite.ai/) or Robogym (https://github.com/openai/robogym). These environments have higher-dimensional systems with clearer implications for representation learning. There are concerns about writing style and comprehensibility. - The work is on the one hand very specialized, and on the other hand just an incremental modification of existing methods. - The presentation is very dense and quite hard to grasp, even with the Appendix. - The formalism, while important, can be very loose in terms of bounds. While that does open questions in RL theory, it would be useful for the authors to be more candid about this fact in the paper. I would recommend including the response to R1 in the paper. Other relevant and concurrent papers to potentially take note of: - Fine-Tuning Offline Reinforcement Learning with Model-Based Policy Optimization (https://openreview.net/forum?id=wiSgdeJ29ee) - Robust Offline Reinforcement Learning from Low-Quality Data (https://openreview.net/forum?id=uOjm_xqKEoX) Given the overall positive reviews, I would recommend acceptance. However, the method would benefit from an additional rewriting pass to make the manuscript more accessible, which would in turn increase the impact of this work.
train
[ "qbc92xrLMg4", "alZI3EEHCOQ", "nA3L4OZ8WnX", "OmX82___iHN", "bCip_qLEjK2", "8EMgrdQfzYx", "TaWq5n13hfr", "KUaMuxbuPKZ", "QrVEScFtBtW", "k77ZuwzNoBW", "yD2I3Br4PD", "FFVGONJo6I6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "- Clarity and Originality:\nThis paper is well-written and easy to read. The motivation is clearly stated: the original paper [Liu, et al., 2018] highly relies on the marginal action probability ratios to calculate the IPM metric, which suffers the curse of horizon issue. This paper addresses this by utilizing the...
[ 7, -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_QpNz8r_Ri2Y", "k77ZuwzNoBW", "iclr_2021_QpNz8r_Ri2Y", "QrVEScFtBtW", "iclr_2021_QpNz8r_Ri2Y", "iclr_2021_QpNz8r_Ri2Y", "FFVGONJo6I6", "qbc92xrLMg4", "bCip_qLEjK2", "nA3L4OZ8WnX", "iclr_2021_QpNz8r_Ri2Y", "iclr_2021_QpNz8r_Ri2Y" ]
iclr_2021_iOnhIy-a-0n
Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration. To address this issue, we study variance reduction for noisy energy estimators, which promotes much more effective swaps. Theoretically, we provide a non-asymptotic analysis of the exponential convergence for the underlying continuous-time Markov jump process; moreover, we consider a generalized Girsanov theorem which includes the change of Poisson measure to overcome the crude discretization based on Gr\"{o}nwall's inequality and yields a much tighter error in the 2-Wasserstein (W2) distance. Numerically, we conduct extensive experiments and obtain state-of-the-art results in optimization and uncertainty estimates for synthetic experiments and image data.
poster-presentations
This work aims at doing Bayesian inference via Langevin dynamics with data subsampling. This builds on previous work with "replica exchange", where parallel chains are run at different temperatures and can be swapped to encourage moving between modes. The main technical novelty here is a scheme to reduce variance. This is done in the style of SVRG, by periodically computing the gradient on all data and then using those values as control variates. This is shown to reduce variance. Reviewers generally felt that this represented a sensible combination of known ideas aimed at an important and timely problem, with sufficient empirical evaluation. There was consensus that the paper was clearly written. I concur that even if the combination is "expected" to work, the presence of performance guarantees represents sufficient technical novelty. I particularly applaud the fact that the paper does not over-claim and generously gives credit to related work. This is helpful to the reader and encourages the flow of ideas. For these reasons I recommend acceptance of the paper.
In reading the paper, I had a couple of questions about the experiments:
1. It's not obvious to me from the experiments how specific the method is to the replica exchange setting. The main control-variate idea appears to be applicable without replica exchange. I would very much like to see a "VR-SGHMC" row in Table 1 unless there is a good reason that this cannot be done. It would be very beneficial to understand the contributions of these different algorithmic components.
2. The CIFAR experiments directly test variance. That's fine; the paper is aimed at reducing variance, after all. However, I would like to see more tests of the follow-on improvements in optimization speed. It has been my experience that improvements in variance sometimes produce surprisingly small improvements in optimization speed. My intuition for this is that reduced variance mostly helps by making it possible to use a larger step size without the same penalty in the stationary distribution. In practice, the step size typically ends up being imperfect, meaning that changes in variance produce only small changes in performance.
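The control-variate scheme the meta-review describes — periodically computing a quantity on the full dataset at an anchor point, then correcting minibatch estimates with the minibatch value at that anchor — can be illustrated with a minimal pure-Python sketch. The toy quadratic per-example losses and all numerical values here are illustrative assumptions, not the paper's actual energy estimator:

```python
import random

random.seed(0)
# toy dataset: the "energy" of a parameter theta is a sum of per-example losses
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

def loss(theta, x):          # per-example squared loss (illustrative choice)
    return (theta - x) ** 2

def full_energy(theta):
    return sum(loss(theta, x) for x in data)

theta, anchor = 0.52, 0.50   # current iterate near a saved anchor point
n, m = len(data), 32         # dataset size, minibatch size
anchor_energy = full_energy(anchor)   # computed once per epoch at the anchor

naive, vr = [], []
for _ in range(2000):
    batch = random.sample(data, m)
    est = n / m * sum(loss(theta, x) for x in batch)
    # control variate: subtract the minibatch estimate at the anchor and add
    # back its exactly known full-data value; the estimator stays unbiased
    cv = est - n / m * sum(loss(anchor, x) for x in batch) + anchor_energy
    naive.append(est)
    vr.append(cv)

def var(v):
    mu = sum(v) / len(v)
    return sum((a - mu) ** 2 for a in v) / len(v)

print(var(vr) < var(naive))  # the variance-reduced estimator fluctuates far less
```

Because the losses at `theta` and at the nearby `anchor` are strongly correlated, their difference has far smaller variance than either term alone, which is what makes the correction (and hence the replica swaps) more effective.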
train
[ "Z2Fy7TcLbz", "_-e7_fOqZpo", "f-S3Yf8FhJz", "Lbt6UaQMsKo", "c_35NgVa005", "xx78WKR2Rzl", "xmyKe1d0JQW", "5R9M62gbVow" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the detailed and valuable comments.\n\nQ1: How much tweaking do these parameters such as $F, m, n, \\eta, \\gamma, \\tau$ require?\n\n$F$ is an important hyperparameter and the tuning directly affects the empirical swapping rates. We would like to study the extension of a more user-friendly replica e...
[ -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "c_35NgVa005", "xx78WKR2Rzl", "5R9M62gbVow", "xmyKe1d0JQW", "iclr_2021_iOnhIy-a-0n", "iclr_2021_iOnhIy-a-0n", "iclr_2021_iOnhIy-a-0n", "iclr_2021_iOnhIy-a-0n" ]
iclr_2021_E3Ys6a1NTGT
The Importance of Pessimism in Fixed-Dataset Policy Optimization
We study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. Our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. This analysis reveals that for naive approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. To avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. We show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. These theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four MinAtar environments.
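The pessimism principle described in the abstract — choose the policy that acts optimally in the worst plausible world — can be sketched with a tiny example. The visit counts, empirical returns, and the `1/sqrt(visits)` uncertainty penalty below are all hypothetical illustrations of the idea, not the paper's specific algorithm:

```python
import math

# hypothetical dataset coverage and empirical returns for candidate policies
policies = {
    "well_covered": {"visits": 500, "mean_return": 0.70},
    "rarely_seen":  {"visits": 4,   "mean_return": 0.95},
    "moderate":     {"visits": 60,  "mean_return": 0.75},
}

def pessimistic_value(p, c=1.0):
    # penalize the empirical return by an uncertainty term that shrinks with
    # dataset coverage -- a lower bound on plausible performance
    return p["mean_return"] - c / math.sqrt(p["visits"])

naive = max(policies, key=lambda k: policies[k]["mean_return"])
pess = max(policies, key=lambda k: pessimistic_value(policies[k]))
print(naive, pess)  # naive selection chases the barely-observed policy;
                    # the pessimistic rule prefers the well-covered one
```

The naive maximizer is fooled by value overestimation on the rarely observed policy, exactly the failure mode the paper's worst-case analysis identifies; the pessimistic rule avoids it without requiring the dataset to be informative of every policy.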
poster-presentations
The reviewers agree in their positive evaluation of the paper. A weakness of the paper pointed out by several reviewers was its presentation, which has, however, improved. Thus, I'm glad to recommend acceptance.
train
[ "y1onO5W1NGG", "_NJ87T34PYO", "qUIfLNbAbne", "WTAdctltVsw", "BiTZx391q_g", "no73rN8kyQ6", "rVrOrKefALC", "eyt6C-TqAg-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes a theoretical framework for analyzing the error of reinforcement learning algorithms in a fixed dataset policy optimization (FDPO) setting. In such settings, data has been collected by a single policy that may not be optimal and the learner puts together a model or value function th...
[ 6, 6, -1, -1, -1, -1, -1, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_E3Ys6a1NTGT", "iclr_2021_E3Ys6a1NTGT", "WTAdctltVsw", "BiTZx391q_g", "_NJ87T34PYO", "y1onO5W1NGG", "eyt6C-TqAg-", "iclr_2021_E3Ys6a1NTGT" ]
iclr_2021_gLWj29369lW
Interpreting Knowledge Graph Relation Representation from Word Embeddings
Many models learn representations of knowledge graph data by exploiting its low-rank latent structure, encoding known relations between entities and enabling unknown facts to be inferred. To predict whether a relation holds between entities, embeddings are typically compared in the latent space following a relation-specific mapping. Whilst their predictive performance has steadily improved, how such models capture the underlying latent structure of semantic information remains unexplained. Building on recent theoretical understanding of word embeddings, we categorise knowledge graph relations into three types and for each derive explicit requirements of their representations. We show that empirical properties of relation representations and the relative performance of leading knowledge graph representation methods are justified by our analysis.
poster-presentations
This paper extends the recent theoretical understanding of geometric properties of word embeddings to the relations and entities of knowledge graphs. It categorizes relations into different types and derives requirements for their representations. Empirically, the authors experiment with several graph embedding approaches and show that better performance is achieved when the loss function is aligned with the requirements of the relation type. The reviewers generally find the paper to be solid, well executed, and a source of useful insights. The authors are encouraged to strengthen the discussion of the motivation of this work and improve the presentation based on the reviewers' comments.
train
[ "eYsnoT3WX8D", "dMSKxUENazf", "2RI4n3e-i88", "4tIwidb1gM", "fQsJIjmExaO", "_stedc535wj", "Rb0isLk9cVD", "7pgcuislNmb" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for your review.\n\n**“Motivation”**\\\nOur main motivation is to try to understand different KG model performance by developing a theory supported by empirical evidence. Our aim is that this contributes to a theoretical foundation for a largely empirical field, offering a principled direction for KG r...
[ -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "7pgcuislNmb", "fQsJIjmExaO", "_stedc535wj", "Rb0isLk9cVD", "iclr_2021_gLWj29369lW", "iclr_2021_gLWj29369lW", "iclr_2021_gLWj29369lW", "iclr_2021_gLWj29369lW" ]
iclr_2021_tL89RnzIiCd
Hopfield Networks is All You Need
We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: \url{https://github.com/ml-jku/hopfield-layers}
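The retrieval behavior the abstract describes — one update that converges to a stored pattern, with the update rule equal to attention — amounts to `xi_new = P^T softmax(beta * P xi)` where the rows of `P` are stored patterns. A minimal pure-Python sketch (toy ±1 patterns; `beta` is an illustrative inverse temperature, not a value from the paper):

```python
import math

def softmax(v):
    mx = max(v)
    e = [math.exp(x - mx) for x in v]
    s = sum(e)
    return [x / s for x in e]

def hopfield_update(patterns, query, beta=8.0):
    # one step of the continuous Hopfield update, with stored patterns as
    # rows of P: xi_new = P^T softmax(beta * P xi) -- the attention mechanism
    scores = softmax([beta * sum(p_i * q_i for p_i, q_i in zip(p, query))
                      for p in patterns])
    dim = len(query)
    return [sum(scores[k] * patterns[k][i] for k in range(len(patterns)))
            for i in range(dim)]

stored = [[1, -1, 1, -1], [1, 1, -1, -1], [-1, 1, 1, 1]]
noisy = [0.9, -1.1, 0.8, -0.7]          # corrupted copy of the first pattern
out = hopfield_update(stored, noisy)
print([round(x, 2) for x in out])       # ~ the first stored pattern
```

With a large `beta` the softmax is nearly one-hot and a single update lands on the stored pattern (exponentially small retrieval error); with a small `beta` the same update averages over several patterns, which is the metastable regime the paper associates with lower transformer layers.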
poster-presentations
The novel contributions of the paper are:
+ introduces a new Hopfield network with continuous states, which can hence be trained end-to-end with differentiation and backpropagation
+ derives efficient update rules
+ reveals a connection between the update rules and transformers
+ illustrates how the network can be used as a layer in a deep neural network that can perform different functions
The presentation was clear enough for the reviewers to understand and appreciate the novelty, although there were a few points of confusion. I would recommend that the authors address several suggestions that came up in the discussions, including:
- additional analysis to highlight when and how the network is able to outperform other competing models
- intuitions about the proofs for the theorems (it is fine to leave the detailed derivations in the appendix)
train
[ "d0DwejhNgq", "NIctU6e5MxV", "e432F1tQpzC", "01xnbsXe2b1", "m4JXwVy-pCR", "z4pH0FlkxHI", "DzD5Lil_BJT" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work extends the binary Hopfield network (Demircigil et al., 2017) to continuous patterns and states. Connections are drawn between the result model to the attention layers of the transformers, the pooling operation of LSTM, similarity search, and fully connected layers. Experimental results are briefly descr...
[ 7, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_tL89RnzIiCd", "d0DwejhNgq", "z4pH0FlkxHI", "DzD5Lil_BJT", "iclr_2021_tL89RnzIiCd", "iclr_2021_tL89RnzIiCd", "iclr_2021_tL89RnzIiCd" ]
iclr_2021_9EKHN1jOlA
Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs
Uncertainty quantification is crucial for building reliable and trustable machine learning systems. We propose to estimate uncertainty in recurrent neural networks (RNNs) via stochastic discrete state transitions over recurrent timesteps. The uncertainty of the model can be quantified by running a prediction several times, each time sampling from the recurrent state transition distribution, leading to potentially different results if the model is uncertain. Alongside uncertainty quantification, our proposed method offers several advantages in different settings. The proposed method can (1) learn deterministic and probabilistic automata from data, (2) learn well-calibrated models on real-world classification tasks, (3) improve the performance of out-of-distribution detection, and (4) control the exploration-exploitation trade-off in reinforcement learning. An implementation is available.
poster-presentations
This paper proposes a method to quantify the uncertainty for RNN, which is an important problem in various applications. It provides results in a variety of domains demonstrating that the proposed method outperforms baselines. However, these experiments would benefit greatly from a comparison with SOTA methods for the specific tasks in addition to the considered baselines (e.g. covariance propagation, prior network, and orthonormal certificates). The paper could also be improved by adding a theoretical justification to explain how the Gumbel softmax function is able to capture the underlying data and model uncertainty.
val
[ "gSP20LdcKgC", "6ZcFkkyTCM", "tRTdJHJsBXj", "uPJ7_la94LU", "y0ZP78Pzndu", "bXCPJH8fVy", "P91H-QgugFX", "6CNs_YwhX6", "NF56lMZNHs8", "NqAUpprQvRY", "b_RhBr5q0PX", "pSP5cz2E4DZ", "vmtF4mdMQck" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n----------\n\nThis paper presents an approach to uncertainty modeling in recurrent neural networks through a discrete hidden state. The training of this discrete model is done using a reparameterizable approximation (in particular, using the Gumbel-Softmax trick). The authors show the utility of this meth...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_9EKHN1jOlA", "uPJ7_la94LU", "iclr_2021_9EKHN1jOlA", "P91H-QgugFX", "bXCPJH8fVy", "6CNs_YwhX6", "tRTdJHJsBXj", "pSP5cz2E4DZ", "vmtF4mdMQck", "gSP20LdcKgC", "iclr_2021_9EKHN1jOlA", "iclr_2021_9EKHN1jOlA", "iclr_2021_9EKHN1jOlA" ]
iclr_2021_fSTD6NFIW_b
Understanding the failure modes of out-of-distribution generalization
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this work, we identify the fundamental factors that give rise to this behavior, by explaining why models fail this way even in easy-to-learn tasks where one would expect these models to succeed. In particular, through a theoretical study of gradient-descent-trained linear classifiers on some easy-to-learn tasks, we uncover two complementary failure modes. These modes arise from how spurious correlations induce two kinds of skews in the data: one geometric in nature and another, statistical. Finally, we construct natural modifications of image classification datasets to understand when these failure modes can arise in practice. We also design experiments to isolate the two failure modes when training modern neural networks on these datasets.
poster-presentations
This paper studies the reasons for failure of trained neural network models on out of distribution tasks. While the reviewers liked the theoretical aspects of the paper, one important concern is about the applicability of these insights to real datasets. The authors added an appendix to the paper showing results on a real dataset that mitigates this concern to an extent. Further, there are interesting insights in the paper to merit acceptance.
train
[ "rKG488D3hDL", "aZBUzUlAIFD", "ldVbCs0BTwx", "mVvCNoQMmlE", "K7S_678G6WR", "dXXaSVrAUvw", "lLtZ-91jH5S", "Oc3XJLPw833", "IOSq7uBSyBv", "B4zP1N0H6zI", "1TlXAScbQA7", "XcZLol5e95l", "uMBTV_iJV6S", "7BIVfXBSQaN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I stand by my initial review that this is a strong submission, and having read through the other reviews and author responses, I am raising my confidence level as well (I think I have a solid grasp of this work's potential import). I disagree with critiques of the paper's novelty and practicality -- I think it pro...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_fSTD6NFIW_b", "iclr_2021_fSTD6NFIW_b", "IOSq7uBSyBv", "uMBTV_iJV6S", "7BIVfXBSQaN", "7BIVfXBSQaN", "aZBUzUlAIFD", "aZBUzUlAIFD", "rKG488D3hDL", "rKG488D3hDL", "uMBTV_iJV6S", "7BIVfXBSQaN", "iclr_2021_fSTD6NFIW_b", "iclr_2021_fSTD6NFIW_b" ]
iclr_2021_45uOPa46Kh
Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule
Vision-and-language navigation (VLN) is a task in which an agent is embodied in a realistic 3D environment and follows an instruction to reach the goal node. While most of the previous studies have built and investigated a discriminative approach, we notice that there are in fact two possible approaches to building such a VLN agent: discriminative and generative. In this paper, we design and investigate a generative language-grounded policy which uses a language model to compute the distribution over all possible instructions, i.e. all possible sequences of vocabulary tokens, given the action and transition history. In experiments, we show that the proposed generative approach outperforms the discriminative approach in the Room-2-Room (R2R) and Room-4-Room (R4R) datasets, especially in the unseen environments. We further show that the combination of the generative and discriminative policies achieves close to state-of-the-art results in the R2R dataset, demonstrating that the generative and discriminative policies capture different aspects of VLN.
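The generative policy in the abstract scores actions by how well a language model explains the instruction given the action and history; by Bayes' rule with a flat prior over actions, `argmax_a p(a | instr) = argmax_a p(instr | a)`. A minimal sketch, using hypothetical per-token log-probabilities in place of a real language model:

```python
# hypothetical per-token log p(token | action, history) scores that a language
# model would assign to the instruction under each candidate action
candidates = {
    "turn_left":  [-0.2, -0.5, -0.3],
    "go_forward": [-1.2, -0.9, -1.5],
    "turn_right": [-2.0, -1.1, -0.8],
}

def instruction_log_prob(token_log_probs):
    # log-likelihood of the whole instruction = sum of token log-probs
    return sum(token_log_probs)

# generative action selection with a uniform prior over actions (Bayes' rule):
# pick the action whose continuation best "generates" the instruction
best = max(candidates, key=lambda a: instruction_log_prob(candidates[a]))
print(best)
```

A discriminative policy would instead directly model `p(a | instr, history)`; the paper's combination of the two corresponds to mixing these two scoring rules.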
poster-presentations
The authors propose to take a token-level generative approach to the task of vision-and-language navigation (R2R/R4R). The reviewers raise a number of concerns which should be noted in the final version of this work. The primary concern revolves around generality. How will this approach generalize to more sophisticated generative and discriminative models? To what extent does the model rely on short instruction/action sequences to succeed, such that it would not perform well on longer instructions, longer trajectories, or more abstract language? Finally, the discussion of the uninformative prior is interesting because, while "clean", reviewers note that there is no realistic grounded-language scenario in which an uninformative prior makes sense.
test
[ "aXJeQQwERR", "4tFsBhXoya8", "UoVwh4ougjK", "ocrhWc4ulZ", "_9nUWVEaTRy", "EZkY9hSPfen", "NoW89Zw2z9h", "zq2BtaSe2f4", "JaD5EsdLtCJ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Paper Summary**\n\nThe paper addresses the problem of vision-and-language navigation (Anderson et al., 2018). The idea of the paper is to use a generative policy where a distribution over all instruction tokens given the previous actions is computed. The agent takes the action that maximizes the probability of t...
[ 4, -1, -1, -1, -1, -1, 5, 8, 8 ]
[ 4, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_45uOPa46Kh", "iclr_2021_45uOPa46Kh", "zq2BtaSe2f4", "JaD5EsdLtCJ", "NoW89Zw2z9h", "aXJeQQwERR", "iclr_2021_45uOPa46Kh", "iclr_2021_45uOPa46Kh", "iclr_2021_45uOPa46Kh" ]
iclr_2021_d8Q1mt2Ghw
Emergent Road Rules In Multi-Agent Driving Environments
For autonomous vehicles to safely share the road with human drivers, autonomous vehicles must abide by specific "road rules" that human drivers have agreed to follow. "Road rules" include rules that drivers are required to follow by law – such as the requirement that vehicles stop at red lights – as well as more subtle social rules – such as the implicit designation of fast lanes on the highway. In this paper, we provide empirical evidence that suggests that – instead of hard-coding road rules into self-driving algorithms – a scalable alternative may be to design multi-agent environments in which road rules emerge as optimal solutions to the problem of maximizing traffic flow. We analyze what ingredients in driving environments cause the emergence of these road rules and find that two crucial factors are noisy perception and agents’ spatial density. We provide qualitative and quantitative evidence of the emergence of seven social driving behaviors, ranging from obeying traffic signals to following lanes, all of which emerge from training agents to drive quickly to destinations without colliding. Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
poster-presentations
This paper shows how "road rules" (e.g., implicit designation of fast lanes on a highway) naturally emerge in a multi-agent MDP. The paper shows that interesting traffic rules do emerge, and it presents a detailed analysis of the factors that lead to this emergence. The paper is complemented by documented source code, with the aim to encourage the community to further work on the topic. The reviewers agreed that this is original work, and appreciated its simplicity. Two concerns that were recurrently voiced were that 1) there is no algorithmic innovation and 2) there is no comparison to baseline models, or, more generally, better placement in the context of the existing literature. The authors provided a detailed and, to my eyes, convincing response. With respect to the two concerns above, I would go as far as saying that 1) (no algorithmic innovation) is a feature, not a bug. The paper is interesting exactly because it studies emergent phenomena after framing multi-agent driving as a standard RL problem. Concerning 2) (lack of baselines), it seems to me somewhat beside the point: The paper is not claiming state of the art on some benchmark for a new algorithm, but studying how certain implicit rules emerge in a given setup. In this sense, as the authors point out, rather than looking at alternative baselines, it is informative to look at which aspects of the setup contribute to rule emergence, which is what the paper does. Although I realize that in proposing this I am going beyond the reviewers' ratings, I found this to be an original and exciting paper, and I would strongly like to see it accepted at the conference.
train
[ "tW1G-dCCy2B", "RfcSRB06HHa", "k5sMKkdriwB", "kRvgK8HfjG", "Sc8xN6rewgH", "dim10D7VG6J", "OZFzAiJkmIX", "0-DoixvECaM", "39Si_rU_2y6", "XvE0zpxlnah", "U9QpjLeXtLc", "8DuD3QuadM9" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigate how to design simulation environments so the the agent trained with them can master social rules.\nCons:\n1. The paper is well written and easy to read and understand. Thanks!\n2. The experiments are solid and well defined. \n\nMy major concern of this paper are:\n1. The authors seems to onl...
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 3, -1, 2, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2021_d8Q1mt2Ghw", "kRvgK8HfjG", "iclr_2021_d8Q1mt2Ghw", "39Si_rU_2y6", "tW1G-dCCy2B", "8DuD3QuadM9", "8DuD3QuadM9", "U9QpjLeXtLc", "k5sMKkdriwB", "iclr_2021_d8Q1mt2Ghw", "iclr_2021_d8Q1mt2Ghw", "iclr_2021_d8Q1mt2Ghw" ]
iclr_2021_bEoxzW_EXsa
Wasserstein-2 Generative Networks
We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance). The algorithm uses input convex neural networks and a cycle-consistency regularization to approximate Wasserstein-2 distance. In contrast to popular entropic and quadratic regularizers, cycle-consistency does not introduce bias and scales well to high dimensions. From the theoretical side, we estimate the properties of the generative mapping fitted by our algorithm. From the practical side, we evaluate our algorithm on a wide range of tasks: image-to-image color transfer, latent space optimal transport, image-to-image style transfer, and domain adaptation.
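The cycle-consistency regularization in the abstract penalizes `||grad_psi(grad_phi(x)) - x||^2`, where `grad_phi` is the forward transport map and `grad_psi` the gradient of (an approximation to) the convex conjugate potential. A 1-D sketch with explicit quadratic potentials, where the exact conjugate is known in closed form (an illustration of the loss, not the paper's input-convex network training):

```python
# forward potential phi(x) = a*x^2/2 gives the transport map grad_phi(x) = a*x;
# its convex conjugate psi(y) = y^2/(2a) gives the inverse map grad_psi(y) = y/a
a = 2.5

def grad_phi(x):   # forward optimal transport map
    return a * x

def grad_psi(y):   # inverse map (here: the exact conjugate's gradient)
    return y / a

xs = [-1.0, 0.3, 2.0]
# cycle-consistency loss: zero exactly when the two maps are mutual inverses,
# so minimizing it drives the learned psi toward the true conjugate of phi
cycle_loss = sum((grad_psi(grad_phi(x)) - x) ** 2 for x in xs)
print(cycle_loss)
```

Unlike entropic or quadratic regularizers, a term of this form vanishes at the exact solution, which is the sense in which the paper argues cycle-consistency introduces no bias.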
poster-presentations
The reviewers have different views on the paper but agreed that it can be accepted. However, they suggested some points of improvement, including the writing (clarity and style) and experiments showing strong improvements compared to WGANs.
test
[ "yRMmEAZB2Q8", "kkia4K447V8", "NrCemgimoKR", "RWsJ97gqK_T", "CI5vndLT0m9", "S4LferGWJrC", "4cC-NEYbitQ", "T9GE6ceWu-", "3w1i4YXmG5z", "gGgvTmhlas2", "hMeYR1X2XI", "YSEXyEu3H_5" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes Wasserstein-2 Generative Networks (W2GNs) which is an optimal transport framework for learning generative models. Unlike minimax problems of Wasserstein GANs, the proposed approach which is based on minimizing the 2-Wasserstein distance reduces to a single-level optimization problem. The paper ...
[ 5, -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 2, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_bEoxzW_EXsa", "RWsJ97gqK_T", "iclr_2021_bEoxzW_EXsa", "3w1i4YXmG5z", "T9GE6ceWu-", "iclr_2021_bEoxzW_EXsa", "YSEXyEu3H_5", "hMeYR1X2XI", "NrCemgimoKR", "yRMmEAZB2Q8", "S4LferGWJrC", "iclr_2021_bEoxzW_EXsa" ]
iclr_2021_9r30XCjf5Dt
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
Poisoning attacks on Reinforcement Learning (RL) systems could take advantage of RL algorithms' vulnerabilities and cause the learning to fail. However, prior works on poisoning RL usually either unrealistically assume the attacker knows the underlying Markov Decision Process (MDP), or directly apply poisoning methods from supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for on-policy deep RL agents, closing the gap left by the absence of poisoning methods for policy-based RL agents. VA2C-P uses a novel metric, the stability radius in RL, which measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy or teaches the agents to converge to a target policy, with a limited attacking budget.
poster-presentations
The paper focuses on adversarial attacks for RL, which is an exciting, understudied research direction and can be of interest to the community. All the reviewers are (mildly) positive about the paper, and the authors competently replied to the concerns expressed by the reviewers.
train
[ "ccvpovxlD5J", "apigGL9EkK", "jthw0_5YMM", "vpgwIE5zBJO", "bgfXohBzWUy", "muhv0PEoaJY", "EaVTiUvEE9B", "doCA-8QRou", "Y9Og8-BmNx3", "Al7e_5Dbmdv", "K58tPzTHxeR", "XYrAy0PPC_o", "w79kDGrm--", "sfoffRAXG1C", "cXifAnO47U", "bLQKQEpsJ8F", "mNmB4JqmQiL", "oBoAHPiA9JG", "ixGTYdwVK0", ...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "> **Q7: Clarification of the episodic attacking, and whether our algorithm generalizes to a fully online setting.**\n> And **Question d** : *\"Which practical application would support the episodic poisoning setting described in this paper?\"\"*\n\nA7: \n**Why episodic.** Although our algorithm is amenable to full...
[ -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "5W2JK_t_gTI", "jthw0_5YMM", "5W2JK_t_gTI", "iclr_2021_9r30XCjf5Dt", "EaVTiUvEE9B", "iclr_2021_9r30XCjf5Dt", "cXifAnO47U", "iclr_2021_9r30XCjf5Dt", "Al7e_5Dbmdv", "K58tPzTHxeR", "XYrAy0PPC_o", "bLQKQEpsJ8F", "muhv0PEoaJY", "cXifAnO47U", "muhv0PEoaJY", "muhv0PEoaJY", "fcmTIw-N-5T", ...
iclr_2021_YtMG5ex0ou
Tomographic Auto-Encoder: Unsupervised Bayesian Recovery of Corrupted Data
We propose a new probabilistic method for unsupervised recovery of corrupted data. Given a large ensemble of degraded samples, our method recovers accurate posteriors of clean values, allowing the exploration of the manifold of possible reconstructed data and hence characterising the underlying uncertainty. In this setting, direct application of classical variational methods often gives rise to collapsed densities that do not adequately explore the solution space. Instead, we derive our novel reduced entropy condition approximate inference method that results in rich posteriors. We test our model in a data recovery task under the common setting of missing values and noise, demonstrating superior performance to existing variational methods for imputation and denoising with different real data sets. We further show higher classification accuracy after imputation, proving the advantage of propagating uncertainty to downstream tasks with our model.
poster-presentations
Summary: The authors propose a Bayesian approach to data cleaning, implemented via a variational auto-encoder. They argue that a common problem in this context are posteriors that overfit by concentrating on a low-dimensional subset and introduce an optimization target intended to discourage that behavior. Discussion: Arguably the main concern brought up in the reviews was how much novelty there is in addressing latent variable posterior collapse, solutions for which have been proposed. The authors were able to clarify that this was due to a misunderstanding (the collapse they address is not in latent space), and the reviewer considers the matter resolved. Recommendation: I recommend publication. The reviewers are all positive, agree that the method is interesting, and seems novel. The writing is clear, and remaining doubts have been addressed in the discussion.
train
[ "6gRiC-o7dh", "AHvRLtRYqFf", "IxjXYj4oc_Z", "MV3XHtsfMpz", "B8-yFl_2CR9", "BD07M08FeqX", "9CVtA2gbvvS", "YNsZ87I9iKV", "5qBJW-gB-Vf", "SzkLvzTD-HO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear authors, \nThanks for the clarification and sorry for my misunderstandings. Now my concern is resolved so I raise my score.\n", "This paper proposes a Tomographic auto-encoder (TAE) for unsupervised recovery of corrupted data. More specifically, TAE takes a Bayesian approach to recover the posterior distrib...
[ -1, 7, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "BD07M08FeqX", "iclr_2021_YtMG5ex0ou", "YNsZ87I9iKV", "5qBJW-gB-Vf", "SzkLvzTD-HO", "AHvRLtRYqFf", "iclr_2021_YtMG5ex0ou", "iclr_2021_YtMG5ex0ou", "iclr_2021_YtMG5ex0ou", "iclr_2021_YtMG5ex0ou" ]
iclr_2021_0pxiMpCyBtr
Monotonic Kronecker-Factored Lattice
It is computationally challenging to learn flexible monotonic functions that guarantee model behavior and provide interpretability beyond a few input features; at a time when minimizing resource use is increasingly important, we must also be able to learn such models efficiently. In this paper we show how to effectively and efficiently learn such functions using Kronecker-Factored Lattice (KFL), an efficient reparameterization of flexible monotonic lattice regression via Kronecker product. Both computational and storage costs scale linearly in the number of input features, which is a significant improvement over existing methods that grow exponentially. We also show that we can still properly enforce monotonicity and other shape constraints. The KFL function class consists of products of piecewise-linear functions, and the size of the function class can be further increased through ensembling. We prove that the function class of an ensemble of M base KFL models strictly increases as M increases up to a certain threshold. Beyond this threshold, every multilinear interpolated lattice function can be expressed. Our experimental results demonstrate that KFL trains faster with fewer parameters while still achieving accuracy and evaluation speeds comparable to or better than the baseline methods and preserving monotonicity guarantees on the learned model.
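The linear-versus-exponential scaling and the "products of piecewise-linear functions" structure from the abstract can be made concrete with a small sketch. The keypoint values below are hypothetical, and the evaluation shown is a simplified KFL-style product (one rank-1 factor, no ensembling):

```python
# storage comparison: a full lattice with L keypoints per input and D inputs
# stores one weight per lattice vertex; the Kronecker factorization stores
# only D one-dimensional pieces of L keypoints each
L, D = 5, 20
full_lattice_params = L ** D     # 5**20 = 95,367,431,640,625 weights
kfl_params = L * D               # 100 weights

def piecewise_linear(params, t):
    # evaluate a 1-D piecewise-linear function on [0, 1] with L keypoints
    seg = min(int(t * (len(params) - 1)), len(params) - 2)
    frac = t * (len(params) - 1) - seg
    return params[seg] * (1 - frac) + params[seg + 1] * frac

def kfl(per_dim, x):
    # KFL-style evaluation: a product of per-dimension 1-D functions; with
    # nonnegative increasing keypoints, each factor (and hence the product)
    # is monotonically increasing in every input coordinate
    out = 1.0
    for params, t in zip(per_dim, x):
        out *= piecewise_linear(params, t)
    return out

per_dim = [[0.1, 0.5, 1.0], [0.2, 0.6, 0.9]]   # hypothetical keypoint values
print(kfl(per_dim, [0.2, 0.3]) < kfl(per_dim, [0.7, 0.3]))  # True: monotone
```

This also illustrates why monotonicity can still be enforced cheaply: constraining each 1-D factor to be nonnegative and increasing suffices, without touching the exponentially many vertices of the full lattice.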
poster-presentations
The focus of the submission is shape-constrained regression; particularly, the goal is to learn monotonic, 'reasonably rich' functions. In order to tackle this task, the authors extend the monotonic regression framework (Gupta et al., 2016), which scales less benignly in the input dimension. They propose to use lattice functions with parameters having Kronecker product structure (and their ensembles). The resulting function class can be (i) stored and evaluated in linear time (Proposition 1), and (ii) characterized and checked from a monotonicity perspective (Proposition 2). The efficiency of the approach is demonstrated in three real-world examples. Shape-constrained regression is a central topic in machine learning and statistics. The authors propose a parametric family to learn monotonically constrained functions. The storage and the evaluation of the resulting functions are both fast (linear), and the numerical experiments are encouraging. The submission can be of definite interest to the ICLR community.
train
[ "p27g10Gz16", "ztKN3kNDc2U", "jD81mF2M9eL", "hCwMX38Dd7n", "V2I07EuNwYX", "SS-O-gpXNEG", "PNGJT_W90mv", "YBTm2OYys7n", "wWyDe6ZlB6n", "cK5wirYeKPy", "ouomYiRD1JN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "(Added on 11/29/2020) **Post Rebuttal Comment**\n\nI thank the authors for sincerely replying my review comments. I think the authors answered my questions. \n\nAddtional Comments\n\n- Section 3.4: $(c_1(x[1]),...,c_D(x[D])$ →$(c_1(x[1]),...,c_D(x[D]))$ (Add a right parenthesis)\n\n--------------------------------...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 2, 3, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2021_0pxiMpCyBtr", "iclr_2021_0pxiMpCyBtr", "YBTm2OYys7n", "V2I07EuNwYX", "p27g10Gz16", "ouomYiRD1JN", "cK5wirYeKPy", "ztKN3kNDc2U", "iclr_2021_0pxiMpCyBtr", "iclr_2021_0pxiMpCyBtr", "iclr_2021_0pxiMpCyBtr" ]
iclr_2021_jM76BCb6F9m
LEAF: A Learnable Frontend for Audio Classification
Mel-filterbanks are fixed, engineered audio features which emulate human perception and have been used through the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limitations of handmade representations. In this work we show that we can train a single learnable frontend that outperforms mel-filterbanks on a wide range of audio signals, including speech, music, audio events and animal sounds, providing a general-purpose learned frontend for audio classification. To do so, we introduce a new principled, lightweight, fully learnable architecture that can be used as a drop-in replacement of mel-filterbanks. Our system learns all operations of audio features extraction, from filtering to pooling, compression and normalization, and can be integrated into any neural network at a negligible parameter cost. We perform multi-task training on eight diverse audio classification tasks, and show consistent improvements of our model over mel-filterbanks and previous learnable alternatives. Moreover, our system outperforms the current state-of-the-art learnable frontend on Audioset, with orders of magnitude fewer parameters.
poster-presentations
All Reviewers agree that the paper has a clear and solid contribution. Furthermore, all of them highlight that the paper has improved significantly after revision. Hence, my recommendation is to ACCEPT the paper. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes. Pros: - Comparison across network architectures. - Comparison across a broad range of different data sets. - Compactness of the representation (few parameters to learn). - Authors will share code. Cons: - Role of L2 normalization could be further discussed.
train
[ "NYlNefvv5X", "Q_5PI0NaIvZ", "HpOvseLs8gH", "V0A-RVtS77T", "eKNe3ONfrEx", "QnyWGuG7dFe", "S24VG38uxSN", "Pmp3tl3dcej", "vgQYI2SlOS9", "AR3JT8RiR71" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a learnable frontend for classification tasks on audio signals. The proposed learnable audio frontend (LEAF) is a generalization of a mel filterbank, used commonly in machine audition.\nLEAF consists of a Gabor filterbank, magnitude-squared nonlinearity, Gaussian lowpass filter and previously-pr...
[ 7, 8, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_jM76BCb6F9m", "iclr_2021_jM76BCb6F9m", "NYlNefvv5X", "iclr_2021_jM76BCb6F9m", "vgQYI2SlOS9", "S24VG38uxSN", "Q_5PI0NaIvZ", "AR3JT8RiR71", "iclr_2021_jM76BCb6F9m", "iclr_2021_jM76BCb6F9m" ]
iclr_2021_GFsU8a0sGB
Federated Learning via Posterior Averaging: A New Perspective and Practical Algorithms
Federated learning is typically approached as an optimization problem, where the goal is to minimize a global loss function by distributing computation across client devices that possess local data and specify different parts of the global objective. We present an alternative perspective and formulate federated learning as a posterior inference problem, where the goal is to infer a global posterior distribution by having client devices each infer the posterior of their local data. While exact inference is often intractable, this perspective provides a principled way to search for global optima in federated settings. Further, starting with the analysis of federated quadratic objectives, we develop a computation- and communication-efficient approximate posterior inference algorithm—federated posterior averaging (FedPA). Our algorithm uses MCMC for approximate inference of local posteriors on the clients and efficiently communicates their statistics to the server, where the latter uses them to refine a global estimate of the posterior mode. Finally, we show that FedPA generalizes federated averaging (FedAvg), can similarly benefit from adaptive optimizers, and yields state-of-the-art results on four realistic and challenging benchmarks, converging faster, to better optima.
poster-presentations
The reviewers raised a number of concerns which are addressed by the authors. The paper provides an interesting/novel perspective for federated learning (as a posterior inference problem rather than an optimization problem) which can potentially allow for faster and more accurate solutions.
train
[ "561sFyfUl-d", "ShFCjZdKvAs", "cIBzXe1pB_L", "-Rz5qjp1KVk", "sLluvQR7inE", "8dBWt3UAeMD", "XCbkfYRtUDq", "MwJP-xQX-jG" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for their feedback and helpful comments.\n\nWe have responded to each individual reviewer in the comments below. AnonReviewer2 brought up a few pointers to the literature, and we have additionally provided 1-sentence summaries of all of them in a separate comment for other disc...
[ -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "iclr_2021_GFsU8a0sGB", "8dBWt3UAeMD", "XCbkfYRtUDq", "XCbkfYRtUDq", "MwJP-xQX-jG", "iclr_2021_GFsU8a0sGB", "iclr_2021_GFsU8a0sGB", "iclr_2021_GFsU8a0sGB" ]
iclr_2021_MtEE0CktZht
Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments
Exploration under sparse reward is a long-standing challenge of model-free reinforcement learning. The state-of-the-art methods address this challenge by introducing intrinsic rewards to encourage exploration in novel states or uncertain environment dynamics. Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once. Motivated by how humans distinguish good exploration behaviors by looking into the entire episode, we introduce RAPID, a simple yet effective episode-level exploration method for procedurally-generated environments. RAPID regards each episode as a whole and gives an episodic exploration score from both per-episode and long-term views. Those highly scored episodes are treated as good exploration behaviors and are stored in a small ranking buffer. The agent then imitates the episodes in the buffer to reproduce the past good exploration behaviors. We demonstrate our method on several procedurally-generated MiniGrid environments, a first-person-view 3D Maze navigation task from MiniWorld, and several sparse MuJoCo tasks. The results show that RAPID significantly outperforms the state-of-the-art intrinsic reward strategies in terms of sample efficiency and final performance. The code is available at https://github.com/daochenzha/rapid
poster-presentations
In order to learn good exploratory behaviors in settings where agents encounter diverse environments, the authors propose an approach which involves learning from episodes that exhibit good episode-level exploratory behaviors. The innovation is in the scoring and learning from these episode-level behaviors rather than trying to come up with shorter timescale proxies of exploration. In making this concrete, the authors propose to score trajectories based effectively on state coverage within an episode (i.e. good exploration corresponds to good state coverage) as well as by scoring episodes relative to one another and giving preference to episodes that explore less often encountered states. To learn, the core algorithm interleaves standard RL updates with behavioral cloning updates using the best episodes of data, thereby training the policy to both solve the task and explore well at the episode level. A weakness is that the paper uses low-level state in grid worlds and there is some ambiguity in applying this to settings with continuous states. The authors discuss general strategies for dealing with these limitations as potential future work. The reviewers were positive about the clarity of the text and felt the core idea that was proposed was simple and effective. The authors put in solid effort to address reviewer concerns. The most salient remaining concern, which I share, is that there will be challenges in scaling this approach to more complex environments with continuous state/observation spaces. Overall, this paper had a consensus "accept" rating (7,7,7,6), and I endorse this as my decision.
train
[ "xqK8Mero5ZA", "a7cdAE1Gf7a", "qis7XKIhKsE", "G2tZhCSlJx", "g5jk5xf1kM4", "1SwEFsvMBXY", "6A7GNJUDSIR", "ffSrLyLQ-71", "BRi8kbxx8yc", "x9V_mxKNjhl", "bwvoQcvoJsM", "KFAlDgyHkmH", "jjezGmvhxH8", "ymO8bcCnU6V", "I96W0Ucpy4C" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We sincerely thank all the reviewers for the support and the insightful comments to help improve the paper. As a closing remark, we highlight the contributions of our paper as follows.\n\n1. We have explored a new exploration strategy at the episode-level. Unlike the previous studies that mainly focus on state-lev...
[ -1, 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_MtEE0CktZht", "iclr_2021_MtEE0CktZht", "x9V_mxKNjhl", "iclr_2021_MtEE0CktZht", "KFAlDgyHkmH", "bwvoQcvoJsM", "jjezGmvhxH8", "iclr_2021_MtEE0CktZht", "I96W0Ucpy4C", "a7cdAE1Gf7a", "ymO8bcCnU6V", "G2tZhCSlJx", "BRi8kbxx8yc", "iclr_2021_MtEE0CktZht", "iclr_2021_MtEE0CktZht" ]
iclr_2021_6BRLOfrMhW
Partitioned Learned Bloom Filters
Bloom filters are space-efficient probabilistic data structures that are used to test whether an element is a member of a set, and may return false positives. Recently, variations referred to as learned Bloom filters were developed that can provide improved performance in terms of the rate of false positives, by using a learned model for the represented set. However, previous methods for learned Bloom filters do not take full advantage of the learned model. Here we show how to frame the problem of optimal model utilization as an optimization problem, and using our framework derive algorithms that can achieve near-optimal performance in many cases.
poster-presentations
All of the reviewers thought that this paper addresses an interesting and important problem. Several of the reviewers thought that the paper gave a creative approach for training bloom filters and this would be of interest to the community.
train
[ "JZ9kl8xb-mr", "SQDxMuUejI", "RfOeXKErb8r", "r58HeAu1fF-", "7NZ4G1E-1dZ", "-e0h7pw3BP2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "A clear exposition of the problem and proposed solution, the paper key strength is in the formulation of the partitioned bloom filter as an optimization problem that generalizes previously proposed architectures, and prescribes an interpretable solution for the choice of the optimal partition-thresholds in terms o...
[ 7, 7, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, 3 ]
[ "iclr_2021_6BRLOfrMhW", "iclr_2021_6BRLOfrMhW", "-e0h7pw3BP2", "SQDxMuUejI", "JZ9kl8xb-mr", "iclr_2021_6BRLOfrMhW" ]
iclr_2021_zeFrfgyZln
Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval
Conducting text retrieval in a learned dense representation space has many intriguing advantages. Yet dense retrieval (DR) often underperforms word-based sparse retrieval. In this paper, we first theoretically show the bottleneck of dense retrieval is the domination of uninformative negatives sampled in mini-batch training, which yield diminishing gradient norms, large gradient variances, and slow convergence. We then propose Approximate nearest neighbor Negative Contrastive Learning (ANCE), which selects hard training negatives globally from the entire corpus. Our experiments demonstrate the effectiveness of ANCE on web search, question answering, and in a commercial search engine, showing ANCE dot-product retrieval nearly matches the accuracy of BERT-based cascade IR pipeline. We also empirically validate our theory that negative sampling with ANCE better approximates the oracle importance sampling procedure and improves learning convergence.
poster-presentations
The paper explores how to effectively conduct negative sampling in learning for text retrieval. The paper shows that negative examples sampled locally are not informative, and proposes ANCE, a new learning mechanism that samples hard negative examples globally, using an asynchronously updated ANN index. Pros • The problem studied is important. • Paper is generally clearly written. • Solid experimental results. • There is theoretical analysis. Cons • The idea might not be so new. The contribution is mainly from its empirical part. During rebuttal, the authors have addressed the clarity issues pointed out by the reviewers. They have also added additional experimental results.
train
[ "VCSlCma9bFM", "tkGH5MnSBEo", "HBi_QkvmpbB", "0t8m6vKC82g", "OxkklhJPbY-", "tp8S5VcPIB", "uLf22SM8Nr8", "08iPx7Tk8Ma", "ClJCjM897G4" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 1: \n\nWe have updated a new version in the system including additional experiments and revisions to improve clarity. The full list of updates is in the general comments. Note that we have added new results in end-to-end OpenQA showing that ANCE’s improved text retrieval can propagate to later Questi...
[ -1, -1, -1, -1, -1, 6, 7, 9, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "tp8S5VcPIB", "08iPx7Tk8Ma", "ClJCjM897G4", "uLf22SM8Nr8", "iclr_2021_zeFrfgyZln", "iclr_2021_zeFrfgyZln", "iclr_2021_zeFrfgyZln", "iclr_2021_zeFrfgyZln", "iclr_2021_zeFrfgyZln" ]
iclr_2021_1GTma8HwlYp
AUXILIARY TASK UPDATE DECOMPOSITION: THE GOOD, THE BAD AND THE NEUTRAL
While deep learning has been very beneficial in data-rich settings, tasks with smaller training set often resort to pre-training or multitask learning to leverage data from other tasks. In this case, careful consideration is needed to select tasks and model parameterizations such that updates from the auxiliary tasks actually help the primary task. We seek to alleviate this burden by formulating a model-agnostic framework that performs fine-grained manipulation of the auxiliary task gradients. We propose to decompose auxiliary updates into directions which help, damage or leave the primary task loss unchanged. This allows weighting the update directions differently depending on their impact on the problem of interest. We present a novel and efficient algorithm for that purpose and show its advantage in practice. Our method leverages efficient automatic differentiation procedures and randomized singular value decomposition for scalability. We show that our framework is generic and encompasses some prior work as particular cases. Our approach consistently outperforms strong and widely used baselines when leveraging out-of-distribution data for Text and Image classification tasks.
poster-presentations
After engaging in some good interactive discussions, all but one reviewer settled on a rating of marginal accept. The most negative reviewer didn't really provide a clear enough explanation of what was lacking in the work. The other reviewers felt that the observed gains for this multi-task learning framework were clear enough that the work is worthy of some attention by the community. The AC recommends acceptance, but one may consider this a just-past-the-line recommendation.
train
[ "9fXahovi5GL", "aI_k3y3S0yt", "oXs6XuKEpG8", "qciojB2hIL8", "CnFSU5MBZD", "I7viOlzc_-M", "DoxTKl1wckg", "XeBEyJt5Dz", "Vde2r-IFvsI", "r1CUpI3QzsX", "CuBHK5kZR_I" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The work studies the auxiliary task selection in deep learning to resolve the burden of selecting relevant tasks for pre-training or the multitask learning. By decomposing the auxiliary updates, one can reweight separately the beneficial and harmful directions so that the net contribution to the update of the prim...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_1GTma8HwlYp", "iclr_2021_1GTma8HwlYp", "qciojB2hIL8", "CnFSU5MBZD", "aI_k3y3S0yt", "9fXahovi5GL", "r1CUpI3QzsX", "CuBHK5kZR_I", "iclr_2021_1GTma8HwlYp", "iclr_2021_1GTma8HwlYp", "iclr_2021_1GTma8HwlYp" ]
iclr_2021_v5gjXpmR8J
SSD: A Unified Framework for Self-Supervised Outlier Detection
We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., detecting samples that lie far away from training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detectors based on only unlabeled in-distribution data. However, we observe that most existing detectors based on unlabeled data perform poorly, often equivalent to a random prediction. In contrast, existing state-of-the-art OOD detectors achieve impressive performance but require access to fine-grained data labels for supervised training. We propose SSD, an outlier detector based on only unlabeled in-distribution data. We use self-supervised representation learning followed by a Mahalanobis distance based detection in the feature space. We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin. Additionally, SSD even achieves performance on par, and sometimes even better, with supervised training based detectors. Finally, we expand our detection framework with two key extensions. First, we formulate few-shot OOD detection, in which the detector has access to only one to five samples from each class of the targeted OOD dataset. Second, we extend our framework to incorporate training data labels, if available. We find that our novel detection framework based on SSD displays enhanced performance with these extensions, and achieves state-of-the-art performance. Our code is publicly available at https://github.com/inspire-group/SSD.
poster-presentations
There is some positive consensus on this paper, which improved somewhat after the very detailed rebuttal comments by the authors. The use of limited amounts of OOD data is interesting and novel. There were some experimental design problems, but these were well-addressed in rebuttal. A reviewer points out that anomaly/outlier detection does not explicitly assume that there is only one class within the normal class (or in-distribution data). The one-class assumption is mainly made in some popular anomaly detection methods, such as one-class classification-based approaches for anomaly detection. The authors should take this into careful consideration when preparing a final version of this work.
train
[ "xDBLcIlb6rF", "XWoP_v4oYoA", "KotkAENpPmk", "le3TWOP-Cd", "v1aNfLLwSRR", "vIYdWSk7ZmV", "6Gexz5rWah", "8fvWGIvhyN", "IpikhaQbjzJ", "EN0oRTt5JO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "[Summary] \nThe paper addresses the OOD problem without learning from class labels. It proposes to learn the feature representation with unsupervised learning, then applies Mahalanobis distance to measure how far a test data is away from the in-distribution data. The proposed framework also considers the cases of ...
[ 6, 6, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_v5gjXpmR8J", "iclr_2021_v5gjXpmR8J", "le3TWOP-Cd", "IpikhaQbjzJ", "EN0oRTt5JO", "xDBLcIlb6rF", "8fvWGIvhyN", "XWoP_v4oYoA", "iclr_2021_v5gjXpmR8J", "iclr_2021_v5gjXpmR8J" ]
iclr_2021_Y87Ri-GNHYu
Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning
Complex, multi-task problems have proven to be difficult to solve efficiently in a sparse-reward reinforcement learning setting. In order to be sample efficient, multi-task learning requires reuse and sharing of low-level policies. To facilitate the automatic decomposition of hierarchical tasks, we propose the use of step-by-step human demonstrations in the form of natural language instructions and action trajectories. We introduce a dataset of such demonstrations in a crafting-based grid world. Our model consists of a high-level language generator and low-level policy, conditioned on language. We find that human demonstrations help solve the most complex tasks. We also find that incorporating natural language allows the model to generalize to unseen tasks in a zero-shot setting and to learn quickly from a few demonstrations. Generalization is not only reflected in the actions of the agent, but also in the generated natural language instructions in unseen tasks. Our approach also gives our trained agent interpretable behaviors because it is able to generate a sequence of high-level descriptions of its actions.
poster-presentations
Although some reviewers still had concerns about the novelty of the proposed method, most of the other concerns have been addressed in a satisfying manner, according to the reviewers. Overall, they have a positive opinion of the paper after revision.
train
[ "57Ide7u1cGH", "wuCOkIvHLWZ", "gBZUIipu8Uu", "1o5tJBaOKfT", "_VHhW1dySvo", "xEM2HchmK8d", "bLsHNsAKtL", "1vmJ959aC2J", "IwRT9XJZmb0", "JNRsWw90Q8x" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies whether a model can generate its language instruction to solve long and complex sequential tasks. Importantly, ground-truth (natural) language instruction is only provided during a pretraining phase. The contributions of the authors are the following:\n - The extension of the grid-world Minecraf...
[ 7, -1, -1, -1, -1, -1, -1, 8, 7, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_Y87Ri-GNHYu", "1vmJ959aC2J", "IwRT9XJZmb0", "JNRsWw90Q8x", "57Ide7u1cGH", "bLsHNsAKtL", "iclr_2021_Y87Ri-GNHYu", "iclr_2021_Y87Ri-GNHYu", "iclr_2021_Y87Ri-GNHYu", "iclr_2021_Y87Ri-GNHYu" ]
iclr_2021_cO1IH43yUF
Revisiting Few-sample BERT Fine-tuning
This paper is a study of fine-tuning of BERT contextual representations, with focus on commonly observed instabilities in few-sample scenarios. We identify several factors that cause this instability: the common use of a non-standard optimization method with biased gradient estimation; the limited applicability of significant parts of the BERT network for down-stream tasks; and the prevalent practice of using a pre-determined, and small number of training iterations. We empirically test the impact of these factors, and identify alternative practices that resolve the commonly observed instability of the process. In light of these observations, we re-visit recently proposed methods to improve few-sample fine-tuning with BERT and re-evaluate their effectiveness. Generally, we observe the impact of these methods diminishes significantly with our modified process.
poster-presentations
This paper addresses some of the well-documented instabilities that can arise from fine-tuning BERT on a dataset with few samples. Through a thorough investigation, they highlight various bizarre behaviors that have a negative impact on stability: First, that BERT inexplicably uses an unusual variant of Adam that, in fact, harms behavior; and second, that people tend to undertrain BERT on some downstream tasks. Separately, they find that reinitializing some of the final layers in BERT can be helpful. Since fine-tuning BERT has become such a common way to attack NLP problems, these practical recommendations will be quite welcome to the community. These findings address issues raised by recent work, so the paper is timely and relevant. The paper has thorough empirical analysis and is clear to read. There is a concurrent ICLR submission with similar findings, and this paper stands on its own. Reviewers all agreed that this paper should be published.
train
[ "ryUq4JI8E4j", "cZOpL5yucoH", "GQUEWRObiiI", "jnAxehnNp1K", "bREXKvIAUx", "-5X8P9agrUX", "OAGXe-tusXs", "DviyGBgd1TU", "Es4HZNa83vJ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "All reviewers suggest experimenting on MNLI, as an instance of a large dataset, to confirm if Adam with bias correction gets similar results, as we hypothesize in our work. Here are the numbers we get with three random runs:\n\n| Setup | Dev Accuracy (%) | Test Accuracy (%) |\n|-...
[ -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "iclr_2021_cO1IH43yUF", "-5X8P9agrUX", "OAGXe-tusXs", "DviyGBgd1TU", "Es4HZNa83vJ", "iclr_2021_cO1IH43yUF", "iclr_2021_cO1IH43yUF", "iclr_2021_cO1IH43yUF", "iclr_2021_cO1IH43yUF" ]
iclr_2021_K5YasWXZT3O
Tilted Empirical Risk Minimization
Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers, generalize poorly, or treat subgroups unfairly. While many methods aim to address these problems individually, in this work, we explore them through a unified framework---tilted empirical risk minimization (TERM). In particular, we show that it is possible to flexibly tune the impact of individual losses through a straightforward extension to ERM using a hyperparameter called the tilt. We provide several interpretations of the resulting framework: We show that TERM can increase or decrease the influence of outliers, respectively, to enable fairness or robustness; has variance-reduction properties that can benefit generalization; and can be viewed as a smooth approximation to a superquantile method. We develop batch and stochastic first-order optimization methods for solving TERM, and show that the problem can be efficiently solved relative to common alternatives. Finally, we demonstrate that TERM can be used for a multitude of applications, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance. TERM is not only competitive with existing solutions tailored to these individual problems, but can also enable entirely new applications, such as simultaneously addressing outliers and promoting fairness.
poster-presentations
Dear Authors, Thank you very much for your detailed feedback on the initial reviews and also for further answering additional questions raised by a reviewer. Your effort has certainly contributed to clarifying some of the concerns raised by the reviewers and improving their understanding of this paper. Overall, all the reviewers found merit in this paper and thus I suggest its acceptance. However, as Reviewer #2 suggested, investigating the convergence in the stochastic case is very important. More discussion on this would be a valuable addition to the paper, which the authors can incorporate in the final version.
train
[ "PO-XJZYyXNX", "UDJeH2eg16", "So_YTxS-RTq", "9jZhWL7_hNy", "jscOrXZLOlA", "Je01Pff3BFs", "8vjso2io2t", "95m1TKIkDUz", "qLJgnLurpD", "XV4Nntx_9Ac", "-nFZF8lNkM6" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your additional feedback. We hope our response has addressed the rest of the reviewer's comments in the last round, and we respond to additional comments below.\n\n**[convergence guarantees for stochastic TERM]** In this work, our aim was to rigorously understand the properties of the TERM objective ...
[ -1, 6, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "So_YTxS-RTq", "iclr_2021_K5YasWXZT3O", "Je01Pff3BFs", "XV4Nntx_9Ac", "qLJgnLurpD", "UDJeH2eg16", "-nFZF8lNkM6", "iclr_2021_K5YasWXZT3O", "iclr_2021_K5YasWXZT3O", "iclr_2021_K5YasWXZT3O", "iclr_2021_K5YasWXZT3O" ]
iclr_2021_vK9WrZ0QYQ
Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS
We prove that the reproducing kernel Hilbert spaces (RKHS) of a deep neural tangent kernel and the Laplace kernel include the same set of functions, when both kernels are restricted to the sphere S^{d-1}. Additionally, we prove that the exponential power kernel with a smaller power (making the kernel less smooth) leads to a larger RKHS, when it is restricted to the sphere S^{d-1} and when it is defined on the entire R^d.
poster-presentations
The paper closes an important gap in our understanding of neural tangent kernels. In addition, the techniques used are novel. My low confidence is mainly based on the fact that the review process at a conference is not perfectly suited to dealing with such papers, since their review would require both expert reviewers and substantially longer reviewing periods.
train
[ "NO9H8Cq1UW9", "D0O-xeS_yx", "VWHQQ_tYLGf", "otun2LhOxsm", "fl4O5Z2MbnZ", "Ij5nD_wwWPy", "byvLOZa1SqV", "nckpNsGN1Sn" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for your time, insightful feedback, positive evaluation :-) We have incorporated the changes into the revised paper, and address the issues in detail here:\n\nQ: As the authors themselves allude to in the discussion, showing that two RKHS are the same as vector spaces ignores the Hilbert spaces s...
[ -1, -1, -1, -1, 8, 7, 5, 7 ]
[ -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "fl4O5Z2MbnZ", "Ij5nD_wwWPy", "nckpNsGN1Sn", "byvLOZa1SqV", "iclr_2021_vK9WrZ0QYQ", "iclr_2021_vK9WrZ0QYQ", "iclr_2021_vK9WrZ0QYQ", "iclr_2021_vK9WrZ0QYQ" ]
iclr_2021_8VXvj1QNRl1
On the Transfer of Disentangled Representations in Realistic Settings
Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning. While disentangled representations were found to be useful for diverse tasks such as abstract reasoning and fair classification, their scalability and real-world impact remain questionable. We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images of the same setup. In contrast to previous work, this new dataset exhibits correlations, a complex underlying structure, and allows to evaluate transfer to unseen simulated and real-world settings where the encoder i) remains in distribution or ii) is out of distribution. We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings and conduct a large-scale empirical study of disentangled representations on this dataset. We observe that disentanglement is a good predictor for out-of-distribution (OOD) task performance.
poster-presentations
This paper introduces a new dataset for evaluating disentanglement and its impact on out of distribution generalization based on the trifinger robotics platform. Using this dataset, the authors rigorously investigate the performance of beta-VAEs in this setting under a number of conditions, finding that weak supervision is necessary to induce disentangled representations, and that, perhaps surprisingly, disentanglement does not help for sim2real settings despite the similarity between the simulator and the real data. Reviewers were divided on the work, but had a number of concerns related to the claims of novel architecture, comparisons to baselines, and issues with the clarity of the paper, some of which were addressed in the authors' response. I agree with some of these concerns, particularly with respect to the claims of novel architectures since the modifications could simply be viewed as tweaking hyperparameters and are not rigorously compared to baselines. However, I think the novelty of the dataset and the rigorous evaluation of OOD generalization settings is likely to be valuable enough to the community to merit acceptance. I'd encourage the authors, however, to tone down some of the claims regarding the architecture (or provide sufficient baseline comparisons), and instead focus on the dataset and the OOD results. I recommend acceptance.
train
[ "CxilaU-B9I0", "ci2ulrcgR-O", "72WCnWWbzH1", "EBI-WThX2ri", "ksDsS89iYk9", "lgy2qms_w0f", "RdIANJDfnSG", "DFD-INSEhnN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper identifies that traditional datasets used for learning disentangled representation have several shortcomings such as no correlation between variables and simple structure.\nIt proposes a new dataset that has 1M higher-resolution simulated images along with 1K annotated real-world images of th...
[ 5, 7, -1, -1, -1, -1, 9, 2 ]
[ 4, 5, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_8VXvj1QNRl1", "iclr_2021_8VXvj1QNRl1", "DFD-INSEhnN", "RdIANJDfnSG", "ci2ulrcgR-O", "CxilaU-B9I0", "iclr_2021_8VXvj1QNRl1", "iclr_2021_8VXvj1QNRl1" ]
iclr_2021_-bxf89v3Nx
Calibration tests beyond classification
Most supervised machine learning tasks are subject to irreducible prediction errors. Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets, rather than point estimates. Such models can be a valuable tool in decision-making under uncertainty, provided that the model output is meaningful and interpretable. Calibrated models guarantee that the probabilistic predictions are neither over- nor under-confident. In the machine learning literature, different measures and statistical tests have been proposed and studied for evaluating the calibration of classification models. For regression problems, however, research has been focused on a weaker condition of calibration based on predicted quantiles for real-valued targets. In this paper, we propose the first framework that unifies calibration evaluation and tests for general probabilistic predictive models. It applies to any such model, including classification and regression models of arbitrary dimension. Furthermore, the framework generalizes existing measures and provides a more intuitive reformulation of a recently proposed framework for calibration in multi-class classification. In particular, we reformulate and generalize the kernel calibration error, its estimators, and hypothesis tests using scalar-valued kernels, and evaluate the calibration of real-valued regression problems.
poster-presentations
This is a well written paper addressing a challenging problem with an original approach. While one reviewer claims there is not a strong call for calibration of regression tasks, this may well be because methods don't exist. Certainly, calibration is a critical tool for classification. The major failing of the paper, however, is the empirical evaluation. Given that no prior work exists, it is arguably OK to not do this, but one could easily reject the paper on this issue alone, as AnonReviewer4 was inclined to do. One reviewer, however, thought highly of the paper, which bumped its average score higher than I think it deserved (given the weak experimental evaluation). The abstract could be improved by mentioning the use of kernels, as the nature of this solution is a substantial part of the paper.
train
[ "5dmTeP-VWiZ", "bH5jfoiYwxF", "f366_PbLCS_", "ffanilP0v4F", "pL4xg9h7e-m", "QEGvhaBQg51", "GSZuabszvv", "yredTQOClAY", "FHjZ-Fwvi4n" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "*Regarding the significance of the work, I can add that in practice, I find that relatively few ML users are concerned with the calibration of their models, and these are entirely restricted to problems of classification (almost always binary classification) or quantile regression. The novelty of this work seems t...
[ -1, -1, -1, -1, -1, -1, 9, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "yredTQOClAY", "yredTQOClAY", "GSZuabszvv", "FHjZ-Fwvi4n", "yredTQOClAY", "FHjZ-Fwvi4n", "iclr_2021_-bxf89v3Nx", "iclr_2021_-bxf89v3Nx", "iclr_2021_-bxf89v3Nx" ]
iclr_2021_jphnJNOwe36
Overparameterisation and worst-case generalisation: friend or foe?
Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples, while still generalising to unseen test samples. However, several recent works have revealed that such models' good average performance does not always translate to good worst-case performance: in particular, they may perform poorly on subgroups that are under-represented in the training set. In this paper, we show that in certain settings, overparameterised models' performance on under-represented subgroups may be improved via post-hoc processing. Specifically, such models' bias can be restricted to their classification layers, and manifest as structured prediction shifts for rare subgroups. We detail two post-hoc correction techniques to mitigate this bias, which operate purely on the outputs of standard model training. We empirically verify that with such post-hoc correction, overparameterisation can improve average and worst-case performance.
poster-presentations
This paper studies how to improve the worst-case subgroup error in overparameterized models using two simple post-hoc processing techniques. All reviewers were positive about the paper, though R5 questioned the novelty of the paper which built heavily on a few previous papers (in particular, it builds heavily on Sagawa et al. 2020a,b). The AC is satisfied with the authors' response clarifying the novelty. Given that this topic is quite timely and of interest to the ICLR community, and that this paper presented a clean investigation on it, the AC recommends acceptance.
train
[ "PUGFzGK_Yvu", "WUBxVRNnR2e", "roDanqj9l3S", "pKkzkE4e3DU", "XFTo9ZrMlR0", "wk3b5CrwFh" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies how to improve the worst-case subgroup error in overparameterized models using two simple post-hoc processing techniques: (1) learning a new linear classification layer of a network, or (2) learning new per-group threshold on the logits. The efficacy of these techniques is evaluated on three syn...
[ 7, -1, -1, -1, 5, 6 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2021_jphnJNOwe36", "XFTo9ZrMlR0", "PUGFzGK_Yvu", "wk3b5CrwFh", "iclr_2021_jphnJNOwe36", "iclr_2021_jphnJNOwe36" ]
iclr_2021_yvQKLaqNE6M
You Only Need Adversarial Supervision for Semantic Image Synthesis
Despite their recent successes, GAN models for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Historically, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limiting the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity with better alignment to their input label maps, making the use of the perceptual loss superfluous. Moreover, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image change. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve an average improvement of 6 FID and 5 mIoU points over the state of the art across different datasets using only adversarial supervision.
poster-presentations
The paper received 3 reviews with positive ratings: 7, 6, 7. The reviewers appreciated the overall quality of the manuscript, the thoroughness of the evaluation, and the practical importance of this work (mentioning though that the technical novelty is still not high). They also acknowledged impressive empirical performance. The authors provided detailed responses to each of the reviews separately, which seemed to have resolved the remaining concerns. As a result, the final recommendation is to accept this work for presentation at ICLR as a poster.
train
[ "edh39Rikp7", "xAEqnGGAJC", "qk5LfDM9aK-", "ecIagUiNsg", "FJzsuIwp4T", "CP-5ptRs2Ij", "ypjDh7FZlwk", "1rOUZzEzrk1", "sWlHES_DBvN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "# Post Rebuttal:\n\nThank you for the detailed rebuttal. The comment made about being able to use this in a semi-supervised setting is an exciting direction and I encourage the authors to pursue it on larger less-labeled datasets mentioned in the review in a future work/final submission. I am glad that removing VG...
[ 7, 7, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_yvQKLaqNE6M", "iclr_2021_yvQKLaqNE6M", "edh39Rikp7", "xAEqnGGAJC", "edh39Rikp7", "sWlHES_DBvN", "xAEqnGGAJC", "xAEqnGGAJC", "iclr_2021_yvQKLaqNE6M" ]
iclr_2021_PS3IMnScugk
Learning to Recombine and Resample Data For Compositional Generalization
Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of tasks. However, neural models perform poorly in settings requiring compositional generalization beyond the training data—particularly to rare or unseen subsequences. Past work has found symbolic scaffolding (e.g. grammars or automata) essential in these settings. We describe R&R, a learned data augmentation scheme that enables a large category of compositional generalizations without appeal to latent symbolic structure. R&R has two components: recombination of original training examples via a prototype-based generative model and resampling of generated examples to encourage extrapolation. Training an ordinary neural sequence model on a dataset augmented with recombined and resampled examples significantly improves generalization in two language processing problems—instruction following (SCAN) and morphological analysis (SIGMORPHON 2018)—where R&R enables learning of new constructions and tenses from as few as eight initial examples.
poster-presentations
The paper addresses generalization to compositions of rare and unseen sequences. It proposes an unstructured data augmentation, that achieves comparable generalization to structured approaches (e.g. using grammars). The idea is based on recombining prototypes and oversampling in the tail. The paper provides a novel approach to an important problem. All four reviewers recommended accept.
train
[ "AWIkIvYdf15", "fcp-t5CfEme", "EWgtR4r4jLo", "xefKR6TDM0x", "zT15vxcOKah", "DjfMJ3wQEC", "FqVRn5cK8KL", "VNmLOclumTt", "5nE_CBF5ZLh", "TqhM7h1Ob7f" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "####Summary: \nTo tackle situations where compositionality is mostly required at inference time, the paper proposes a novel data augmentation method with an RNN based generator (recombination); to make the generator generate highly compositional patterns, the paper proposes a resampling method. The methods have be...
[ 6, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_PS3IMnScugk", "iclr_2021_PS3IMnScugk", "xefKR6TDM0x", "AWIkIvYdf15", "VNmLOclumTt", "5nE_CBF5ZLh", "TqhM7h1Ob7f", "iclr_2021_PS3IMnScugk", "iclr_2021_PS3IMnScugk", "iclr_2021_PS3IMnScugk" ]
iclr_2021_FOyuZ26emy
A Critique of Self-Expressive Deep Subspace Clustering
Subspace clustering is an unsupervised clustering technique designed to cluster data that is supported on a union of linear subspaces, with each subspace defining a cluster with dimension lower than the ambient space. Many existing formulations for this problem are based on exploiting the self-expressive property of linear subspaces, where any point within a subspace can be represented as linear combination of other points within the subspace. To extend this approach to data supported on a union of non-linear manifolds, numerous studies have proposed learning an embedding of the original data using a neural network which is regularized by a self-expressive loss function on the data in the embedded space to encourage a union of linear subspaces prior on the data in the embedded space. Here we show that there are a number of potential flaws with this approach which have not been adequately addressed in prior work. In particular, we show the model formulation is often ill-posed in that it can lead to a degenerate embedding of the data, which need not correspond to a union of subspaces at all and is poorly suited for clustering. We validate our theoretical results experimentally and also repeat prior experiments reported in the literature, where we conclude that a significant portion of the previously claimed performance benefits can be attributed to an ad-hoc post processing step rather than the deep subspace clustering model.
poster-presentations
The authors carefully study a class of unsupervised learning models called self-expressive deep subspace clustering (SEDSC) models, which involve clustering data arising from mixtures of complex nonlinear manifolds. The main contribution is to show that the SEDSC formulation itself suffers from fundamental degeneracies, and that the experimental gains reported in the literature may be due to ad-hoc post-processing. The contributions are compelling, and all reviewers appreciated the paper. Despite the paper being of somewhat narrow focus, my belief is that negative results of this nature are useful and timely. I recommend an accept.
test
[ "BLAIcxu4lJd", "GcprFO47VMR", "9yFOV9zXSSA", "2SbZ96_MWZc", "l4tR2C5pGMz", "ngbvbXiZVT-", "LsXydRj547U", "E7NtWtC6lF1", "QqeSnXmX7TL", "tJBQZJ5M3zI", "bIhQxSherAa" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Pros:\n1)\tAuthors theoretically studied a class of self-expression deep subspace clustering methods and found that the optimization problem is typically ill-posed.\n2)\tVarious normalization approaches are studied, including dataset and batch/channel normalization, and instance normalization. However, even with t...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_FOyuZ26emy", "iclr_2021_FOyuZ26emy", "iclr_2021_FOyuZ26emy", "l4tR2C5pGMz", "GcprFO47VMR", "GcprFO47VMR", "BLAIcxu4lJd", "tJBQZJ5M3zI", "bIhQxSherAa", "iclr_2021_FOyuZ26emy", "iclr_2021_FOyuZ26emy" ]
iclr_2021_O6LPudowNQm
INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving
In learning-assisted theorem proving, one of the most critical challenges is to generalize to theorems unlike those seen at training time. In this paper, we introduce INT, an INequality Theorem proving benchmark designed to test agents’ generalization ability. INT is based on a theorem generator, which provides theoretically infinite data and allows us to measure 6 different types of generalization, each reflecting a distinct challenge, characteristic of automated theorem proving. In addition, INT provides a fast theorem proving environment with sequence-based and graph-based interfaces, conducive to performing learning-based research. We introduce baselines with architectures including transformers and graph neural networks (GNNs) for INT. Using INT, we find that transformer-based agents achieve stronger test performance for most of the generalization tasks, despite having much larger out-of-distribution generalization gaps than GNNs. We further find that the addition of Monte Carlo Tree Search (MCTS) at test time helps to prove new theorems.
poster-presentations
This work proposes an algorithm for generating training data to train automatic theorem proving models. In particular, it allows users to pose specific generalization challenges to theorem proving models and evaluate their performance. In doing so, it provides a degree of control over the task space that is greater than when working with 'organic' corpora of real theorems and proofs. The authors demonstrated the utility of their generated data by training well-known models such as transformers and GNNs, and were able to derive insights such as the value of MCTS-style planning for finding proofs in particular settings. After the rebuttal period, all reviewers agreed that the work was well executed and that the algorithm creates datasets that will be of value to the (learning-based) theorem proving community. As such, they all recommended acceptance to a greater or lesser degree. I am convinced by their arguments, because I think there is real value in using controlled synthetic data alongside real data when making scientific progress on hard problems like theorem proving. I am particularly convinced by the observation that the data generated by this method has already led to improved performance on a real corpus of proofs, as the authors state in their rebuttal. If they have not done so already, I encourage the authors to report this fact in the camera ready version of their paper citing the relevant work.
train
[ "-zsSR2KX2Sg", "2SGGAT-6SC", "vizQohQGzTH", "XqxkCFSA5r-", "47BM9D7ECq", "efykozOj2Ko", "fwuGZ6GGMca", "R97OFC0VFQU", "BqUpZRF2uJ8", "wgY6a_ZT10m", "ws5SHPi_jMN", "yRufVjVbK3C", "W3N-CMbBw3", "4Q5lyjLuerz" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a simple theorem proving framework (supporting a limited logic e.g., no disjunctions) along with a sample set of axioms and algorithms that can be used to generate unlimited training data for proving theorems in this system.\n\nAlthough the proof system itself is not of much practical interest,...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2 ]
[ "iclr_2021_O6LPudowNQm", "iclr_2021_O6LPudowNQm", "R97OFC0VFQU", "efykozOj2Ko", "iclr_2021_O6LPudowNQm", "4Q5lyjLuerz", "ws5SHPi_jMN", "BqUpZRF2uJ8", "wgY6a_ZT10m", "2SGGAT-6SC", "-zsSR2KX2Sg", "W3N-CMbBw3", "iclr_2021_O6LPudowNQm", "iclr_2021_O6LPudowNQm" ]
iclr_2021_BUlyHkzjgmA
Improved Estimation of Concentration Under ℓp-Norm Distance Metrics Using Half Spaces
Concentration of measure has been argued to be the fundamental cause of adversarial vulnerability. Mahloujifar et al. (2019) presented an empirical way to measure the concentration of a data distribution using samples, and employed it to find lower bounds on intrinsic robustness for several benchmark datasets. However, it remains unclear whether these lower bounds are tight enough to provide a useful approximation for the intrinsic robustness of a dataset. To gain a deeper understanding of the concentration of measure phenomenon, we first extend the Gaussian Isoperimetric Inequality to non-spherical Gaussian measures and arbitrary ℓp-norms (p≥2). We leverage these theoretical insights to design a method that uses half-spaces to estimate the concentration of any empirical dataset under ℓp-norm distance metrics. Our proposed algorithm is more efficient than Mahloujifar et al. (2019)'s, and experiments on synthetic datasets and image benchmarks demonstrate that it is able to find much tighter intrinsic robustness bounds. These tighter estimates provide further evidence that rules out intrinsic dataset concentration as a possible explanation for the adversarial vulnerability of state-of-the-art classifiers.
poster-presentations
High-quality theoretical paper that studies the connection between concentration of the data distribution and adversarial robustness. It contributes a method for more accurate estimation of concentration, which allows drawing stronger conclusions about adversarial robustness compared to previous work. The paper is highly technical, but written clearly and precisely. All reviewers give positive scores, with only minor negative comments. One minor concern I have is that the potential audience of the paper might be small, given its highly technical nature and very specialized line of research it follows. Still, I believe it's a solid contribution, so I'm happy to recommend acceptance.
train
[ "H8YLFRYge-3", "K3yQVbTqndJ", "OeNawXOAYdi", "sPaun0I4wqs", "w96sBUsvDLO", "-4FEQtjybFl", "VAJBSvhT0pY", "h1sSzrlJWlN" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback and constructive comments. For comments regarding the presentation of our paper, we have revised them accordingly based on your suggestions and highlighted the revision in blue in the updated paper.\n \n> I would expect that Theorem 3.3 to be already known in the literature, ...
[ -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "-4FEQtjybFl", "w96sBUsvDLO", "iclr_2021_BUlyHkzjgmA", "VAJBSvhT0pY", "iclr_2021_BUlyHkzjgmA", "iclr_2021_BUlyHkzjgmA", "iclr_2021_BUlyHkzjgmA", "iclr_2021_BUlyHkzjgmA" ]
iclr_2021_LkFG3lB13U5
Adaptive Federated Optimization
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general non-convex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
poster-presentations
The paper proposes adaptive optimization algorithms for federated learning that are federated versions of existing adaptive algorithms such as Adam, Adagrad, and Yogi. The paper establishes convergence guarantees for the proposed algorithms and performs an extensive experimental evaluation. Following the discussion, the reviewers were positive about the paper and felt that the author responses addressed their concerns. I recommend accept.
train
[ "Mloyim_mx1Z", "pOkj6QSTCjJ", "ycTbjqrI-6b", "7D7SAZdA-V4", "moVUkg-HZAI", "zY9a39aHoFo", "dXlZgiWlRFs", "JP2YOWdjHj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the convergence of well-known adaptive methods, ADAM, ADAGRAD, and YOGI, for the federated learning problem. In particular, while the nodes (clients) still use SGD for their local computations (same as Fed-Avg), the server uses one of the three adaptive methods mentioned above to update the mode...
[ 6, 6, -1, -1, -1, -1, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_LkFG3lB13U5", "iclr_2021_LkFG3lB13U5", "dXlZgiWlRFs", "pOkj6QSTCjJ", "JP2YOWdjHj", "Mloyim_mx1Z", "iclr_2021_LkFG3lB13U5", "iclr_2021_LkFG3lB13U5" ]
iclr_2021_1OCTOShAmqB
On the Dynamics of Training Attention Models
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great success established empirically, the working mechanism of attention has not been investigated at a sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for the discriminative words that the model should attend to, a persisting identity exists relating its embedding and the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed, which validate our theoretical analysis and provide further insights.
poster-presentations
This paper investigates the training dynamics of simple neural attention mechanisms, in a controlled setting with clear (but rather strict) assumptions. Some reviewers expressed caution about the applicability of the assumptions in practice, but nevertheless there is agreement that the results deepen our understanding and enrich our toolkit for reasoning about attention. In support of this, in the discussion period, it was emphasized that the work uses different techniques than most current work in this direction. I am therefore confident that the paper will be useful, and recommend acceptance. I strongly encourage the authors to improve the clarity of the work and thorough citation, as suggested by the reviewers.
test
[ "qKj2jEu4Ddh", "_wQ5rikuuj", "_-5dmgbBWzJ", "bQNbIU-Zk4b", "LGSI_iGD3Wv", "6CjQOdbhWsg", "QW5kKEJ6cEN", "klRMUDWxKbA", "ITjHwKuKK_U", "WubtFz3lHa3", "3fb-G5uoGHb", "6MKP6tI_QnN", "v3NnK2xeVZI", "-HBoOeN8VAe", "8AywpQXCBYw", "MgaGm9f4Xs", "_eJr-JvhLa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper provides theoretical insight into the mechanisms by which a simplified attention model trained with gradient descent learns to allocate more mass to relevant words in the input.\n\nIn a limited toy setting, the authors derive a closed form relationship between word-score and word embedding norm in a sim...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2021_1OCTOShAmqB", "6CjQOdbhWsg", "bQNbIU-Zk4b", "QW5kKEJ6cEN", "klRMUDWxKbA", "QW5kKEJ6cEN", "3fb-G5uoGHb", "ITjHwKuKK_U", "8AywpQXCBYw", "ITjHwKuKK_U", "6MKP6tI_QnN", "qKj2jEu4Ddh", "MgaGm9f4Xs", "_eJr-JvhLa", "iclr_2021_1OCTOShAmqB", "iclr_2021_1OCTOShAmqB", "iclr_2021_1OCTO...
iclr_2021_84gjULz1t5
Linear Convergent Decentralized Optimization with Compression
Communication compression has become a key strategy to speed up distributed optimization. However, existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms. They are unsatisfactory in terms of convergence rate, stability, and the capability to handle heterogeneous data. Motivated by primal-dual algorithms, this paper proposes the first \underline{L}in\underline{EA}r convergent \underline{D}ecentralized algorithm with compression, LEAD. Our theory describes the coupled dynamics of the inexact primal and dual update as well as compression error, and we provide the first consensus error bound in such settings without assuming bounded gradients. Experiments on convex problems validate our theoretical analysis, and empirical study on deep neural nets shows that LEAD is applicable to non-convex problems.
poster-presentations
The paper introduces LEAD, a decentralized optimizer with communication compression that can achieve linear convergence rate in the strongly convex setting. In terms of novelty, the authors should still add a discussion of `Magnusson et al., 2019, On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication`, which is a related linear convergence result in the deterministic (full gradient) case, and relates to the analysis here which is stochastic but also exploits the deterministic case. Nevertheless, reviewers reached consensus (*with communication compression in the given time*) that the paper in its current form is well written and the results are presented clearly in both experiments and theory (which builds up on the earlier NIDS algorithm). The presentation of the algorithm can be slightly improved. We hope the authors will incorporate the remaining smaller open points mentioned by R1, such as making the constants in the convergence bounds explicit when comparing with other methods.
train
[ "hoS9ziBE-lq", "brxWTiPUxQ", "-gihB3B8wS1", "0zQpjh5KF7l", "qaSaUZWgkEj", "XWIjUG9S578", "aJqEi10ELCG", "To412n6QUHP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for adding corollary 1, it is very useful. The mentioned rate of $O(\\frac{1}{\\epsilon})$ that is achieved by both LEAD and Choco-SGD does not have explicit constants. I believe the rate will not be the same if you include the dependency on C. Please include it in the updated version.", "Thanks for th...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "-gihB3B8wS1", "0zQpjh5KF7l", "XWIjUG9S578", "aJqEi10ELCG", "To412n6QUHP", "iclr_2021_84gjULz1t5", "iclr_2021_84gjULz1t5", "iclr_2021_84gjULz1t5" ]
iclr_2021_uR9LaO_QxF
Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
Many real-world applications such as robotics provide hard constraints on power and compute that limit the viable model complexity of Reinforcement Learning (RL) agents. Similarly, in many distributed RL settings, acting is done on un-accelerated hardware such as CPUs, which likewise restricts model size to prevent intractable experiment run times. These "actor-latency" constrained settings present a major obstruction to the scaling up of model complexity that has recently been extremely successful in supervised learning. To be able to utilize large model capacity while still operating within the limits imposed by the system during acting, we develop an "Actor-Learner Distillation" (ALD) procedure that leverages a continual form of distillation that transfers learning progress from a large capacity learner model to a small capacity actor model. As a case study, we develop this procedure in the context of partially-observable environments, where transformer models have had large improvements over LSTMs recently, at the cost of significantly higher computational complexity. With transformer models as the learner and LSTMs as the actor, we demonstrate in several challenging memory environments that using Actor-Learner Distillation largely recovers the clear sample-efficiency gains of the transformer learner model while maintaining the fast inference and reduced total training time of the LSTM actor model.
poster-presentations
I thank the authors for their submission and participation in the author response period. The reviewers unanimously agree that the paper proposes an interesting and original approach: using a costly model on a learner node while distilling to a cheaper model run on actor nodes to gather experiences in a distributed RL framework. During discussion, R1 and myself emphasized the concern that the experiments in this paper leave open the question whether the approach will work beyond toy environments. However, I side with R2 and R3 in that the paper presents a valuable contribution to the community as it stands, and that the experiments prove the concept to the point that the paper should be accepted. I therefore recommend acceptance.
train
[ "RtTDPG1m26P", "3Bkqvt5KSiQ", "stR535vFiGI", "Cw0aqCG8cyg", "1WomZoQ9o_H", "dRSSNQH63Ai", "JXVa8jytB0A", "5cHw7OlTLiC", "XK_EFNKoaY", "1tYXPQJY7H_", "-pAkg-6dDYj", "B_OeYoOmev", "m3vWKT8wvMf", "Cgp0XpUAS8j", "CBy-U42P3Jh", "tu531o1GA0L" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nThe paper proposes an original idea to use distillation to speed up modern distributed RL settings, when data collection is done on CPUs with the learning happening on accelerated hardware, e.g. GPU. More specifically, the authors propose to use a transformer for the learner and distil the policy in...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_uR9LaO_QxF", "XK_EFNKoaY", "m3vWKT8wvMf", "5cHw7OlTLiC", "JXVa8jytB0A", "B_OeYoOmev", "-pAkg-6dDYj", "iclr_2021_uR9LaO_QxF", "RtTDPG1m26P", "RtTDPG1m26P", "Cgp0XpUAS8j", "CBy-U42P3Jh", "tu531o1GA0L", "iclr_2021_uR9LaO_QxF", "iclr_2021_uR9LaO_QxF", "iclr_2021_uR9LaO_QxF" ]
iclr_2021_X4y_10OX-hX
Large Associative Memory Problem in Neurobiology and Machine Learning
Dense Associative Memories or modern Hopfield networks permit storage and reliable retrieval of an exponentially large (in the dimension of feature space) number of memories. At the same time, their naive implementation is non-biological, since it seemingly requires the existence of many-body synaptic junctions between the neurons. We show that these models are effective descriptions of a more microscopic (written in terms of biological degrees of freedom) theory that has additional (hidden) neurons and only requires two-body interactions between them. For this reason our proposed microscopic theory is a valid model of large associative memory with a degree of biological plausibility. The dynamics of our network and its reduced dimensional equivalent both minimize energy (Lyapunov) functions. When certain dynamical variables (hidden neurons) are integrated out from our microscopic theory, one can recover many of the models that were previously discussed in the literature, e.g. the model presented in "Hopfield Networks is All You Need" paper. We also provide an alternative derivation of the energy function and the update rule proposed in the aforementioned paper and clarify the relationships between various models of this class.
poster-presentations
This paper attends to the problem of how to implement dense associative memories (i.e. modern Hopfield networks) using only two-body synapses. This is interesting because modern Hopfield networks have much higher capacity, but at face value, they require synapses with cubic interactions between neurons, which to the best of our knowledge, is not a common feature in neurophysiology (though it should be noted: it is not by any means impossible from a physiological perspective to have cubic interactions at synapses, see e.g. Halassa, M. M., Fellin, T., & Haydon, P. G. (2007). The tripartite synapse: roles for gliotransmission in health and disease. Trends in molecular medicine, 13(2), 54-63.). The authors show how the use of a layer of hidden neurons, akin to a restricted Boltzmann machine architecture, coupled with the right energy function, can be used to recover dense associative memory models using only two-body synapses. They also demonstrate how this connects to recent work on the relationship between attention mechanisms in modern ML models and Hopfield network dynamics. Overall, the reviewers were positive about this paper. The most common critique related to the question of "biological plausibility". The authors addressed these concerns by adding some more recognition as to the lack of biological plausibility and more discussions of the relevance to neuroscience. To be candid with the authors, if the goal is indeed to make a more biologically plausible model of modern Hopfield networks, then a fair bit more work would be needed to connect the paper to biology well. As it stands, the only connection is the shift to two-body synapses by using hidden neurons, but this provides limited insight for most neuroscientists, as noted by Reviewer 2. Also, some of the biological examples provided seem strained (e.g. 
the colour memory example, where there is no physiological reason to posit that we store colour memories using our retina, or the MNIST example, since there is no reason to suppose that animals can memorise thousands of specific MNIST images). But overall, the critique regarding biological plausibility was attended to. The other concerns raised were also largely addressed. Given the interesting contributions from this paper, the overall positive reviews, and the decent job at addressing reviewer concerns, the AC believes that this paper should certainly be accepted. A decision of "Accept (Poster)" (as opposed to an oral or spotlight) seems appropriate, given the lack of biological connections in a paper with a stated goal of achieving a more biologically realistic model.
train
[ "B8VGUaQdJGy", "2kb_TIsVQXO", "a6LwJJ17nYh", "tLZx1K4Hz2B", "KrqCfh2D10z", "3UsunVU4nkh", "fk30n1lUP2Z", "IaQLxR864sn", "X4nCJmlYW3U", "Xjf-ZvH5fSo", "QtmEoJm9Jgq", "cQ__o27Vz3f", "rtWl-8JfdAF", "e40Qv7kQ1jg" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper provides a mechanism by which recent extensions to the classic Hopfield network model, which as written involve many-body interactions, can be implemented by a more biologically plausible network that uses only two-body synaptic interactions (but more neurons). \n\nPros:\n\n-- The derivations are sound...
[ 6, -1, -1, -1, 8, -1, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_X4y_10OX-hX", "IaQLxR864sn", "tLZx1K4Hz2B", "QtmEoJm9Jgq", "iclr_2021_X4y_10OX-hX", "X4nCJmlYW3U", "iclr_2021_X4y_10OX-hX", "Xjf-ZvH5fSo", "rtWl-8JfdAF", "fk30n1lUP2Z", "KrqCfh2D10z", "e40Qv7kQ1jg", "B8VGUaQdJGy", "iclr_2021_X4y_10OX-hX" ]
iclr_2021_LucJxySuJcE
Protecting DNNs from Theft using an Ensemble of Diverse Models
Several recent works have demonstrated highly effective model stealing (MS) attacks on Deep Neural Networks (DNNs) in black-box settings, even when the training data is unavailable. These attacks typically use some form of Out of Distribution (OOD) data to query the target model and use the predictions obtained to train a clone model. Such a clone model learns to approximate the decision boundary of the target model, achieving high accuracy on in-distribution examples. We propose Ensemble of Diverse Models (EDM) to defend against such MS attacks. EDM is made up of models that are trained to produce dissimilar predictions for OOD inputs. By using a different member of the ensemble to service different queries, our defense produces predictions that are highly discontinuous in the input space for the adversary's OOD queries. Such discontinuities cause the clone model trained on these predictions to have poor generalization on in-distribution examples. Our evaluations on several image classification tasks demonstrate that EDM defense can severely degrade the accuracy of clone models (up to 39.7%). Our defense has minimal impact on the target accuracy, negligible computational costs during inference, and is compatible with existing defenses for MS attacks.
poster-presentations
This paper proposed an ensemble of diverse models as a mechanism to protect models from theft. The idea is quite novel. There are some concerns regarding the robustness of the hashing function (which I share); however, not every paper has to be perfect, especially when it introduces a novel setup. AC
train
[ "U4DPhdE0Gor", "AO6nTNFioQg", "s6R91OkixB", "sDozuRDAae1", "LYpkQTWja9V", "39Y2ORadZUr", "40VEF1c43jT", "wvJwBdRdmA" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "## Summary\n- The paper proposes a defense against recent flavours of model stealing attacks by exploiting the insight that the recent effective attack query out of distribution examples to the victim model.\n- The approach introduces discontinuities in the input-prediction space by (i) training an ensemble of mod...
[ 6, -1, -1, -1, -1, 7, 5, 6 ]
[ 4, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2021_LucJxySuJcE", "U4DPhdE0Gor", "39Y2ORadZUr", "40VEF1c43jT", "wvJwBdRdmA", "iclr_2021_LucJxySuJcE", "iclr_2021_LucJxySuJcE", "iclr_2021_LucJxySuJcE" ]
iclr_2021_LVotkZmYyDi
Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry
The gradient descent-ascent (GDA) algorithm has been widely applied to solve minimax optimization problems. In order to achieve convergent policy parameters for minimax optimization, it is important that GDA generates convergent variable sequences rather than convergent sequences of function value or gradient norm. However, the variable convergence of GDA has been proved only under convexity geometries, and it is not well understood in general nonconvex minimax optimization. This paper fills such a gap by studying the convergence of a more general proximal-GDA for regularized nonconvex-strongly-concave minimax optimization. Specifically, we show that proximal-GDA admits a novel Lyapunov function, which monotonically decreases in the minimax optimization process and drives the variable sequences to a critical point. By leveraging this Lyapunov function and the KL geometry that parameterizes the local geometries of general nonconvex functions, we formally establish the variable convergence of proximal-GDA to a certain critical point x∗, i.e., xt→x∗,yt→y∗(x∗). Furthermore, over the full spectrum of the KL-parameterized geometry, we show that proximal-GDA achieves different types of convergence rates ranging from sublinear convergence up to finite-step convergence, depending on the geometry associated with the KL parameter. This is the first theoretical result on the variable convergence for nonconvex minimax optimization.
poster-presentations
The paper studies nonconvex-strongly concave min-max optimization using proximal gradient descent-ascent (GDA), assuming the Kurdyka-Łojasiewicz (KŁ) condition holds. The main contribution is a novel Lyapunov function, which leads to a clean analysis. The main downsides of the paper, as discussed by the reviewers, are the lack of experiments and the somewhat stringent assumptions needed in the analysis. Nevertheless, the paper was overall viewed favorably by the reviewers, who considered it a worthwhile contribution to the area of min-max optimization.
train
[ "AabcP1xC0Ky", "wCRYHCRpEkE", "dAWsKj20iXz", "fQSP4diKHFS", "2ZDVNi8A2T7", "MuhvoXWoM1", "6Zi1CbqsMX", "QrXC-IJ8Mvs", "IB3g5HFmEuE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors analyze a proximal gradient descent-ascent method for nonconvex minimax problem under specific assumptions (another name would be a forward-backward algorithm). The authors prove subsequence convergence of iterates to critical points and furthermore, additionally under the Kurdyka-Łojasi...
[ 5, 8, -1, -1, -1, -1, -1, 8, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_LVotkZmYyDi", "iclr_2021_LVotkZmYyDi", "fQSP4diKHFS", "IB3g5HFmEuE", "AabcP1xC0Ky", "QrXC-IJ8Mvs", "wCRYHCRpEkE", "iclr_2021_LVotkZmYyDi", "iclr_2021_LVotkZmYyDi" ]
iclr_2021_ct8_a9h1M
Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Dropout has been demonstrated as a simple and effective module to not only regularize the training process of deep neural networks, but also provide the uncertainty estimation for prediction. However, the quality of uncertainty estimation is highly dependent on the dropout probabilities. Most current models use the same dropout distributions across all data samples due to its simplicity. Despite the potential gains in the flexibility of modeling uncertainty, sample-dependent dropout, on the other hand, is less explored as it often encounters scalability issues or involves non-trivial model changes. In this paper, we propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module, which can be applied to a wide range of models at the expense of only slightly increased memory and computational cost. We learn the dropout probabilities with a variational objective, compatible with both Bernoulli dropout and Gaussian dropout. We apply the contextual dropout module to various models with applications to image classification and visual question answering and demonstrate the scalability of the method with large-scale datasets, such as ImageNet and VQA 2.0. Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
poster-presentations
This paper proposes an input-dependent dropout strategy, using variational inference to infer the rates. While the idea is a fairly straightforward variant of recent probabilistic dropout methods, the paper demonstrates consistent improvements across several types of NN layers (dense, convolutional, and attention) in large-scale experiments (e.g. ImageNet). The reviewers unanimously agreed on accepting the paper.
train
[ "sCemgA99YN1", "uO0GEL-hnhp", "V_Gn0UVV29p", "gSjUB7GE9ar" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes a way of estimating data-dependent dropout probabilities. This is done in each layer using a small auxiliary neural network which takes the data (on which dropout is going to be applied) as input and outputs dropout probabilities, which are sampled and multiplied into the data.\n\nPros...
[ 7, 7, -1, 6 ]
[ 4, 4, -1, 4 ]
[ "iclr_2021_ct8_a9h1M", "iclr_2021_ct8_a9h1M", "iclr_2021_ct8_a9h1M", "iclr_2021_ct8_a9h1M" ]
iclr_2021_W1G1JZEIy5_
MIROSTAT: A NEURAL TEXT DECODING ALGORITHM THAT DIRECTLY CONTROLS PERPLEXITY
Neural text decoding algorithms strongly influence the quality of texts generated using language models, but popular algorithms like top-k, top-p (nucleus), and temperature-based sampling may yield texts that have objectionable repetition or incoherence. Although these methods generate high-quality text after ad hoc parameter tuning that depends on the language model and the length of generated text, not much is known about the control they provide over the statistics of the output. This is important, however, since recent reports show that humans prefer text whose perplexity is neither too high nor too low, and since we experimentally show that cross-entropy (log of perplexity) has a near-linear relation with repetition. First, we provide a theoretical analysis of perplexity in top-k, top-p, and temperature sampling, under Zipfian statistics. Then, we use this analysis to design a feedback-based adaptive top-k text decoding algorithm called mirostat that generates text (of any length) with a predetermined target value of perplexity without any tuning. Experiments show that for low values of k and p, perplexity drops significantly with generated text length and leads to excessive repetitions (the boredom trap). Contrarily, for large values of k and p, perplexity increases with generated text length and leads to incoherence (confusion trap). Mirostat avoids both traps. Specifically, we show that setting the target perplexity value beyond a threshold yields negligible sentence-level repetitions. Experiments with human raters for fluency, coherence, and quality further verify our findings.
poster-presentations
This work presents a novel approach to improving text decoding. This is backed up by a solid analysis of cross-entropy growth with top-k vs top-p and an interesting demonstration of repetition correlating with probability. The paper is well written and well organized. The authors' rebuttal was effective in convincing the reviewers. The human evaluation (added during the rebuttal phase) is a good demonstration of the effectiveness of the approach and so this paper's proposed decoding algorithm is likely to be impactful. Pros: - Well written. - Solid theoretical analysis of cross-entropy and its relation to top-p and top-k decoding. Good demonstration of how repetition is related to probability. - Interesting, novel and effective decoding algorithm. - Human evaluation of the algorithm's output. Cons: - The approach has not been tested with a variety of language models. - Decoding quality still depends on a target perplexity which may need to be tuned. - Unnecessary dependence on Zipf's law in the basic decoding algorithm.
train
[ "kkwlc5OHVVj", "hV1TPu-fSlX", "vPLbGWKPSJj", "4BsRSn_90Xd", "pzAPksk4OoO", "7XUDfJDwUZc", "v7DtRkp64ef", "RH9LZyWA_LG", "d3jrnqN1fmC", "zsfpZNqhtz7" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n\nNeural text generation models typically rely on sampling schemes for autoregressive decoding. This may range from pure sampling, top-k, top-p to temperature modulated sampling. These methods are mostly heuristic schemes and lack theoretical analysis. This paper tries to fill that gap by analyzing these...
[ 7, -1, -1, -1, 6, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, 5, -1, -1, -1, -1, 3 ]
[ "iclr_2021_W1G1JZEIy5_", "v7DtRkp64ef", "RH9LZyWA_LG", "7XUDfJDwUZc", "iclr_2021_W1G1JZEIy5_", "d3jrnqN1fmC", "kkwlc5OHVVj", "zsfpZNqhtz7", "pzAPksk4OoO", "iclr_2021_W1G1JZEIy5_" ]
iclr_2021_kDnal_bbb-E
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues
To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential. While modern dialogue agents excel at generating fluent sentences, they still lack pragmatic grounding and cannot reason strategically. We present DialoGraph, a negotiation system that incorporates pragmatic strategies in a negotiation dialogue using graph neural networks. DialoGraph explicitly incorporates dependencies between sequences of strategies to enable improved and interpretable prediction of next optimal strategies, given the dialogue context. Our graph-based method outperforms prior state-of-the-art negotiation models both in the accuracy of strategy/dialogue act prediction and in the quality of downstream dialogue response generation. We qualitatively show further benefits of learned strategy-graphs in providing explicit associations between effective negotiation strategies over the course of the dialogue, leading to interpretable and strategic dialogues.
poster-presentations
In the context of constructing negotiation dialogue strategies/policies, the authors explore the use of graph attention networks (GATs) for determining the sequence of negotiation dialogue acts -- specifically leading to a (1) hierarchical dialogue encoder via pooled BERT + GRU encoding -> (2) GAT over dialogue strategies/acts (many technical details around graph usage) -> (3) GRU decoder. While a relatively straightforward replacement relative to similar architectures with other 'structural' encoders, they provide a sound end-to-end training strategy that is shown to perform well on the buyer-seller negotiation task via CraigslistBargain dataset where they demonstrate SoTA performance. == Pros == + Studying the pragmatics component of negotiation dialogue strategies has received recent interest and this seems a good milepost that demonstrates mainstream methodological approaches for this task (i.e., this is a good baseline for future innovations) + The paper is well-written in that it is easy to understand intuitively while having sufficient detail to understand the details. + The empirical results appear promising and meet the standard within this sub-community -- showing improvements with automatic and human evaluation. == Cons == - This builds on existing datasets, which are known to have undesirable properties (e.g., automatic evaluation, small number of dialogue datasets, use of explicit dialogues acts, etc.) While it still meets the standards of this sub-community, it still isn't a completely convincing task. - While the use of GATs is novel in this setting and they get it to work within the overall architecture, this is something that many people are likely trying at this time -- so there isn't an exciting 'disruptive' step here. 
- The empirical results, while satisfactory from a quantitative perspective, even in reading the Appendices, it isn't clear that these are significantly better from a planning perspective or if it is just 'pattern recognition' gains. Evaluating along the requested dimensions: - Quality: The underlying method is fairly straightforward and the authors incorporate up-to-date GAT-related methods to get this to work in this setting. The empirical results are sound if predicated on the general quality in this sub-community where you have the standard machine translation evaluation problem for meaning vs. lexical closeness. To mitigate, they use BERTScore and human evaluation -- which is at the higher end of what can be reasonably expected. - Clarity: The paper is written clearly overall, especially if considering the appendices where there is significant detail. Related to empirical evaluation, it isn't easy to intuitively interpret the results, but this is again par for the course. Additionally, I believe the authors did a good job responding to reviewer concerns. - Originality: While all of the reviewers agreed that the approach was novel in this setting, one of the reviewers explicitly pointed out that using GATs in negotiation dialogues isn't that exciting -- and I mostly agree. I view this as something that somebody would have done and will serve as a good baseline; although I think this sub-field is going to need more datasets to continue progressing. - Significance: As stated above, it is a good baseline that I think many are likely thinking of (as the TOD community has been doing this for a bit now). However, it is done well. Honestly, I agree with the reviewers that this is a somewhat borderline paper -- mostly due to it being a fairly 'obvious' idea and the nature of the subfield making it not entirely clear if the improvements are due to knowing the target performance while training or due to the methodological advance. 
Personally, I am convinced, but it isn't totally clear. That being said, it is a well-written paper and I think the reviewer issues were sufficiently addressed. Thus, I would prefer to see it accepted as I think it will be a strong methodological baseline for this problem (which hopefully will accumulate more convincing datasets and standard evaluation).
val
[ "Qg8dCirwUeL", "cjRCgYsb0xM", "1QySsJNjtbZ", "jce9u-XYw1v", "E7Mx_1iszVr", "u_eZyNExVCT", "Jo9rFblLRGx", "_CfFZdscWN", "VhzrZSil0aD", "VU0zlCFpyr", "asi8eiLtBSO", "wIoTpjblNht", "mEHXq7kwUo7", "dQSoP66oaJ3" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a end-to-end dialogue system that leverages Graph Attention Networks to model complex negotiation strategies. The main contributions that the author claims are to model negotiation strategies through a GNN and using these learned strategies to predict future strategies and generate a response l...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_kDnal_bbb-E", "1QySsJNjtbZ", "u_eZyNExVCT", "mEHXq7kwUo7", "mEHXq7kwUo7", "Qg8dCirwUeL", "wIoTpjblNht", "wIoTpjblNht", "Qg8dCirwUeL", "dQSoP66oaJ3", "dQSoP66oaJ3", "iclr_2021_kDnal_bbb-E", "iclr_2021_kDnal_bbb-E", "iclr_2021_kDnal_bbb-E" ]
iclr_2021_4c0J6lwQ4_
Multi-Time Attention Networks for Irregularly Sampled Time Series
Irregular sampling occurs in many time series modeling applications where it presents a significant challenge to standard deep learning models. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. In this paper, we propose a new deep learning framework for this setting that we call Multi-Time Attention Networks. Multi-Time Attention Networks learn an embedding of continuous time values and use an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. We investigate the performance of this framework on interpolation and classification tasks using multiple datasets. Our results show that the proposed approach performs as well or better than a range of baseline and recently proposed models while offering significantly faster training times than current state-of-the-art methods.
poster-presentations
The work proposes a new approach to encoding time series that are irregularly sampled and multivariate, using a time attention module and a VAE-based encoder-decoder framework. All the reviewers find the approach novel and the experiments extensive, with encouraging results. Please continue to improve the presentation of the paper. I would suggest moving the diagram showing the overall architecture to the main text to assist the explanation. Reviewers would also like to see more explanation of the experimental results and some ablation studies to show the importance of each component of the proposed architecture.
train
[ "SnwZ2GXGuJG", "lrbv169PyD", "ThjZL8pzIjd", "Y9EbjgrEeH", "3EEVTPmzGHV", "TL-abfUPiGb", "fekomknc7AH", "6S1Tt6Vb1H7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel approach to learn an embedding of continuous time values and use an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. In particular, it proposes an mTAN network to leverage the mTAN module in an encoder-decoder fr...
[ 6, 7, -1, -1, -1, -1, 7, 7 ]
[ 4, 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_4c0J6lwQ4_", "iclr_2021_4c0J6lwQ4_", "fekomknc7AH", "6S1Tt6Vb1H7", "lrbv169PyD", "SnwZ2GXGuJG", "iclr_2021_4c0J6lwQ4_", "iclr_2021_4c0J6lwQ4_" ]
iclr_2021_aD1_5zowqV
Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling
Energy-based models (EBMs) parameterized by neural networks can be trained by the Markov chain Monte Carlo (MCMC) sampling-based maximum likelihood estimation. Despite the recent significant success of EBMs in image generation, the current approaches to train EBMs are unstable and have difficulty synthesizing diverse and high-fidelity images. In this paper, we propose to train EBMs via a multistage coarse-to-fine expanding and sampling strategy, which starts with learning a coarse-level EBM from images at low resolution and then gradually transits to learn a finer-level EBM from images at higher resolution by expanding the energy function as the learning progresses. The proposed framework is computationally efficient with smooth learning and sampling. It achieves the best performance on image generation amongst all EBMs and is the first successful EBM to synthesize high-fidelity images at 512×512 resolution. It can also be useful for image restoration and out-of-distribution detection. Lastly, the proposed framework is further generalized to the one-sided unsupervised image-to-image translation and beats baseline methods in terms of model size and training budget. We also present a gradient-based generative saliency method to interpret the translation dynamics.
poster-presentations
This work proposes to train EBMs using multi-stage sampling. The EBMs are then used for generating high dimensional images, performing image to image translation, and out-of-distribution detection. The reviewers are impressed with the results, but indicate that the novelty is limited. While I agree that the work can be seen as a combination of previously proposed techniques, demonstrating that this combination can be made to work well is still a significant contribution to the field. In addition, the paper demonstrates strong results in using Langevin dynamics to translate between images, which I do think is novel. I therefore recommend accepting the paper for a poster presentation.
train
[ "8V1eV1lgcz", "89rVHsl6FIf", "q8sohARZJWm", "JEXr-7Kyxsm", "eIq6oCQQ6K2", "KjXLJybotXx", "mAz_YR_Co4z", "qb2Hii7vr-f", "Pn7R05EXRC3", "eRS5BvWmIwI", "-pVwqc9uY-m", "ZZ8m1ejpHFg", "OgfENpyAj4L", "EiEPhGDNF_I", "R-Ucu6UUl3x", "lsPk9sAiwpg", "hXpR4bh01ob", "dtnZBNmxHn", "D9iHglS_E1_...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "author", "public",...
[ "This work proposes a method to improve generation with energy based models. The work further shows how energy based models can also be extended to cross class image translation.\n\nStrong Points:\nGenerated samples appear to look good\n\nWeak Points:\nMy most major concern is that the overall technical novelty of ...
[ 4, 6, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_aD1_5zowqV", "iclr_2021_aD1_5zowqV", "89rVHsl6FIf", "qb2Hii7vr-f", "8V1eV1lgcz", "iclr_2021_aD1_5zowqV", "eRS5BvWmIwI", "q8sohARZJWm", "iclr_2021_aD1_5zowqV", "lsPk9sAiwpg", "q8sohARZJWm", "OgfENpyAj4L", "8V1eV1lgcz", "R-Ucu6UUl3x", "8HzDPuzm9e", "D9iHglS_E1_", "dtnZBNmxHn...
iclr_2021_43VKWxg_Sqr
Unsupervised Audiovisual Synthesis via Exemplar Autoencoders
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially-infinitely many output speakers. Our approach builds on simple autoencoders that project out-of-sample data onto the distribution of the training set. We use exemplar autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target exemplar speech. In contrast to existing methods, the proposed approach can be easily extended to an arbitrarily large number of speakers and styles using only 3 minutes of target audio-video data, without requiring any training data for the input speaker. To do so, we learn audiovisual bottleneck representations that capture the structured linguistic content of speech. We outperform prior approaches on both audio and video synthesis.
poster-presentations
This paper proposes and investigates an approach for audiovisual synthesis based on so-called exemplar autoencoders. The proposed approach is shown to be able to convert an audio input to audiovisual outputs using only a very small amount of training data. All reviewers consider the paper interesting, with a lot of potential in a variety of applications, and appreciate the novelty of the work in this domain. But there are also concerns about the technical presentation and the quality of the samples in the demo. The authors addressed most of the concerns in the rebuttal but agreed that the quality of the results still had room for further improvement. Overall, the work presented is interesting. The paper can be accepted.
train
[ "GfLMi-jHUp2", "1jqrxEgctAA", "9i4Y8FFU80S", "x7_Sw1_PWnP", "ZiG-N9jdFhb", "64xXqDOhwP7" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for bringing up these points: \n\n**Results could be better**\nWe agree that our current results can still be improved. Our method is based on manipulations of Mel-spectrograms. This makes the quality of our generation results heavily dependent on the wavenet vocoders, which reconstruct raw a...
[ -1, -1, -1, 6, 9, 6 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "64xXqDOhwP7", "x7_Sw1_PWnP", "ZiG-N9jdFhb", "iclr_2021_43VKWxg_Sqr", "iclr_2021_43VKWxg_Sqr", "iclr_2021_43VKWxg_Sqr" ]
iclr_2021_7aL-OtQrBWD
A Learning Theoretic Perspective on Local Explainability
In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations. First, we tackle the traditional problem of performance generalization and bound the test-time predictive accuracy of a model using a notion of how locally explainable it is. Second, we explore the novel problem of explanation generalization which is an important concern for a growing class of finite sample-based local approximation explanations. Finally, we validate our theoretical results empirically and show that they reflect what can be seen in practice.
poster-presentations
This paper presents an interesting connection between learning theory and local explainability. The reviewers have reacted to each others' thoughts, as well as the authors' comments; they are largely in favor of acceptance. I think the ICLR community will enjoy discussing this paper at the conference.
train
[ "ik6czSXbqJG", "TsaX0mbKIxq", "HUkmbkDRf7n", "Cunelt6HBQU", "X0yqUHHjSfC", "ayshlCFr54p", "qi0U6s4fj9b" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper presents two main theoretical results as its main contributions. First, the authors provide a bound on the model generalization in terms its local explainability. This bound relates the model generalization, the training accuracy, local explainability, and the complexity of the explanation...
[ 7, -1, -1, -1, -1, 7, 5 ]
[ 3, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_7aL-OtQrBWD", "HUkmbkDRf7n", "ik6czSXbqJG", "ayshlCFr54p", "qi0U6s4fj9b", "iclr_2021_7aL-OtQrBWD", "iclr_2021_7aL-OtQrBWD" ]
iclr_2021_AHm3dbp7D1D
SEED: Self-supervised Distillation For Visual Representation
This paper is concerned with self-supervised learning for small models. The problem is motivated by our empirical studies that while the widely used contrastive self-supervised learning method has shown great progress on large model training, it does not work well for small models. To address this problem, we propose a new learning paradigm, named SElf-SupErvised Distillation (SEED), where we leverage a larger network (as Teacher) to transfer its representational knowledge into a smaller architecture (as Student) in a self-supervised fashion. Instead of directly learning from unlabeled data, we train a student encoder to mimic the similarity score distribution inferred by a teacher over a set of instances. We show that SEED dramatically boosts the performance of small networks on downstream tasks. Compared with self-supervised baselines, SEED improves the top-1 accuracy from 42.2% to 67.6% on EfficientNet-B0 and from 36.3% to 68.2% on MobileNet-v3-Large on the ImageNet-1k dataset.
poster-presentations
There is definite consensus on this paper, with all reviewers expressing very favorable opinions. The author responses are very well articulated and address the main concerns expressed by the reviewers. The paper is very well-written and the ablation study well-executed. Some recent related work was missed in the original submission, but this was adequately addressed in the rebuttal. The proposed approach is a novel technique for feature representation learning. The clarifications to the manuscript and the new analyses are especially appreciated.
train
[ "iKHBF4lZN-c", "U69Xa6-H_4M", "rcDd4pfFXf", "tN2Oe-tty0_", "R7b7-3ILkl5", "1XoaxQiosYw" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewer for his/her comments.\n \n**Q13**: a notation change in equation (2).\n\n**A13**: We thank the reviewer for his/her careful reading, and we will address the confusion in our updated version accordingly.\n\n**Q14**: Please move the table’s caption to the top of the table.\n\n**A14**: We w...
[ -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "tN2Oe-tty0_", "R7b7-3ILkl5", "1XoaxQiosYw", "iclr_2021_AHm3dbp7D1D", "iclr_2021_AHm3dbp7D1D", "iclr_2021_AHm3dbp7D1D" ]
iclr_2021_-mWcQVLPSPy
Isometric Propagation Network for Generalized Zero-shot Learning
Zero-shot learning (ZSL) aims to classify images of an unseen class only based on a few attributes describing that class but no access to any training sample. A popular strategy is to learn a mapping between the semantic space of class attributes and the visual space of images based on the seen classes and their data. Thus, an unseen class image can be ideally mapped to its corresponding class attributes. The key challenge is how to align the representations in the two spaces. For most ZSL settings, the attributes for each seen/unseen class are only represented by a vector while the seen-class data provide much more information. Thus, the imbalanced supervision from the semantic and the visual space can make the learned mapping easily overfit to the seen classes. To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces. Specifically, IPN learns to propagate the class representations on an auto-generated graph within each space. In contrast to only aligning the resulting static representations, we regularize the two dynamic propagation procedures to be isometric in terms of the two graphs' edge weights per step by minimizing a consistency loss between them. IPN achieves state-of-the-art performance on three popular ZSL benchmarks. To evaluate the generalization capability of IPN, we further build two larger benchmarks with more diverse unseen classes and demonstrate the advantages of IPN on them.
poster-presentations
Three experts in the field recommend accepting the paper (ratings 7,7,6) after the author response, appreciating the improvements the authors made. [Note: The AC is mainly disregarding R3's rating, as R3 did not respond to the early request of the AC to clarify their review, did not respond to the authors' request for clarification, and did not participate in any discussion past their initial short review.] The solid experimental evaluation and an original methodology for zero-shot learning speak for accepting the paper. [The area chair is certain about accepting the paper, but not fully confident if it should be Poster or Spotlight.]
test
[ "hUu8PxcFHha", "4nqd6gouRN8", "mHPhYnmbUP", "5YP8TJ37K57", "zwrb1g-Tf3A", "fuXwVOMhle", "D-Syo76fgx2", "Ni_0YuzXEla", "leC20jNkH6", "qe842yvj7bQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper focuses on improving zero-shot classification by reducing the bias of the classifier towards seen classes. The bias occurs since the embedding is trained with visual examples from the seen classes, while using only the attribute information from unseen classes for testing. Authors propose an isometric p...
[ 7, 6, 7, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 3, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_-mWcQVLPSPy", "iclr_2021_-mWcQVLPSPy", "iclr_2021_-mWcQVLPSPy", "D-Syo76fgx2", "hUu8PxcFHha", "mHPhYnmbUP", "Ni_0YuzXEla", "4nqd6gouRN8", "qe842yvj7bQ", "iclr_2021_-mWcQVLPSPy" ]
iclr_2021_33rtZ4Sjwjn
Effective and Efficient Vote Attack on Capsule Networks
Standard Convolutional Neural Networks (CNNs) can be easily fooled by images with small quasi-imperceptible artificial perturbations. As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) are shown to be more robust to white-box attacks than CNNs under popular attack protocols. Besides, the class-conditional reconstruction part of CapsNets is also used to detect adversarial examples. In this work, we investigate the adversarial robustness of CapsNets, especially how the inner workings of CapsNets change when the output capsules are attacked. The first observation is that adversarial examples mislead CapsNets by manipulating the votes from primary capsules. Another observation is the high computational cost of directly applying multi-step attack methods designed for CNNs to CapsNets, due to the computationally expensive routing mechanism. Motivated by these two observations, we propose a novel vote attack where we attack votes of CapsNets directly. Our vote attack is not only effective, but also efficient by circumventing the routing process. Furthermore, we integrate our vote attack into the detection-aware attack paradigm, which can successfully bypass the class-conditional reconstruction based detection method. Extensive experiments demonstrate the superior attack performance of our vote attack on CapsNets.
poster-presentations
This paper studies the robustness of CapsNets under adversarial attacks. It is found that the votes from primary capsules in CapsNets are manipulated by adversarial examples and that the computationally expensive routing mechanism in CapsNets incurs high computational cost. As such, a new adversarial attack is specially designed by attacking the votes of CapsNets without having to involve the routing mechanism, making the method both effective and efficient. **Strengths:** * This is the first work which proposes an attack specifically designed for CapsNets by exploiting their special properties. * The proposed vote attack is more effective and efficient than the other attacks originally proposed for CNNs rather than CapsNets. * The paper is generally well written. * The experimental study is quite comprehensive. * The code will be made available to facilitate reproducibility. **Weaknesses:** * The study is mostly for only one type of CapsNets. It is not clear whether the observations in this paper still hold generally for other types of CapsNets even after some additional experiments have been added. * The presentation of the paper has room for improvement. The authors are recommended to proofread the references thoroughly to ensure style consistency such as the consistent use of capitalization, e.g. * “Star-caps” -> “STAR-Caps” * “ieee symposium on security and privacy (sp)” -> “IEEE Symposium on Security and Privacy (SP)” Despite its weaknesses especially those pointed out by Reviewer 2, this paper would be of interest to other researchers as it is the first paper that studies adversarial attacks on CapsNets.
train
[ "TeJp--8mLI2", "YVog82YZQ4K", "sFQ1XHm0dh", "rGxgAcCoYTc", "ftUfuo6cC1Y", "K1CsH54WVvf", "dgRugoK3KFS", "KSSYeN6vKTj", "bPwysO9DTM", "_pYtdked5Ef", "yIM_GernJz" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Authors argue that in order to attack Capsule network more effectively one should consider their inner workings, i.e. iterative routing. They propose two reasons behind the relative robustness of capsnets vs cnns: 1) gradient obfuscating 2) being more computationally intensive. Therefore, they propose a new attack...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2021_33rtZ4Sjwjn", "iclr_2021_33rtZ4Sjwjn", "rGxgAcCoYTc", "KSSYeN6vKTj", "TeJp--8mLI2", "YVog82YZQ4K", "_pYtdked5Ef", "yIM_GernJz", "iclr_2021_33rtZ4Sjwjn", "iclr_2021_33rtZ4Sjwjn", "iclr_2021_33rtZ4Sjwjn" ]
iclr_2021_mEdwVCRJuX4
Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization
Real-world large-scale datasets are heteroskedastic and imbalanced --- labels have varying levels of uncertainty and label distributions are long-tailed. Heteroskedasticity and imbalance challenge deep learning algorithms due to the difficulty of distinguishing among mislabeled, ambiguous, and rare examples. Addressing heteroskedasticity and imbalance simultaneously is under-explored. We propose a data-dependent regularization technique for heteroskedastic datasets that regularizes different regions of the input space differently. Inspired by the theoretical derivation of the optimal regularization strength in a one-dimensional nonparametric classification setting, our approach adaptively regularizes the data points in higher-uncertainty, lower-density regions more heavily. We test our method on several benchmark tasks, including a real-world heteroskedastic and imbalanced dataset, WebVision. Our experiments corroborate our theory and demonstrate a significant improvement over other methods in noise-robust deep learning.
poster-presentations
This paper received 2 clear acceptance recommendations and 2 borderline recommendations. The main concerns lie in the clarity of the experiment results and settings (AR3). The authors address these questions in their response. AR2 has two important questions. One is whether the simplified assumption holds in the considered very complicated settings (i.e., the labels are noisy and long-tailed). The other is the lack of comparison with SOTA methods for long-tailed classification. The authors did a good job in their response. They provide additional experiment results to address these questions. Overall, the quality of this submission meets the bar of ICLR acceptance, though the AC has concerns about the complicated settings and the marginal performance improvement over the existing long-tailed works.
train
[ "3rXJ7-motOd", "coYqaM1C8Jr", "1_9M3Wx7Ch", "YxQZByaNbA_", "OpyvNrEVFdd", "maC5M5oAY5n", "UqxoajiRz-", "LyP6-Yp_3Y", "KKsM1ZrG7dP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an adaptive regularization method to handle heteroskedastic and imbalanced datasets, which are closer to real-world large-scale settings. The framework applies a Lipschitz regularizer with varying regularization strength depending on the particular data point. The authors first theoretically st...
[ 6, 7, -1, -1, -1, -1, -1, 9, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_mEdwVCRJuX4", "iclr_2021_mEdwVCRJuX4", "coYqaM1C8Jr", "KKsM1ZrG7dP", "3rXJ7-motOd", "3rXJ7-motOd", "LyP6-Yp_3Y", "iclr_2021_mEdwVCRJuX4", "iclr_2021_mEdwVCRJuX4" ]
iclr_2021_3tFAs5E-Pe
Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport. In this paper, we present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures, which are not restricted to being discrete. While past approaches rely on entropic or quadratic regularization, we employ input convex neural networks and cycle-consistency regularization to avoid introducing bias. As a result, our approach does not resort to minimax optimization. We provide theoretical analysis on error bounds as well as empirical evidence of the effectiveness of the proposed approach in low-dimensional qualitative scenarios and high-dimensional quantitative experiments.
poster-presentations
The authors address the 2-Wasserstein barycenter problem between measures. They propose a novel formulation that leverages a condition (congruence) that the optimal transport (Monge) maps, here parameterized as potentials, must obey at optimality. They introduce various regularizers to encourage that property. The idea is demonstrated on convincing synthetic experiments and on a simple color transfer problem. Although experiments are a bit limited, I do believe, and follow here the opinion of all reviewers, that there is novelty in this approach, and that this paper is a worthy addition to the recent line of work trying to leverage ICNNs/Brenier's theorem to solve OT problems.
train
[ "k0xNV0z_iUi", "pazVXCu6nfW", "ayPO8P3pMnZ", "HZk2xHW8wj", "WO0Wjhtxges", "W1vt8lIqjg", "vC5faUVLpqs", "3EGRNkES8fz", "TKiBIqQGhEm", "YDsxS0hDBKY", "e406EvNzDIN", "tTB4109eyKo" ]
[ "official_reviewer", "author", "author", "public", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work introduces a new Wasserstein-2 barycenter computation method. The authors first derive the dual formulation of the Wasserstein-2 barycenter problem, and then parametrize the convex potentials by ICNNs. The congruent and conjugacy conditions are enforced by regularization terms, respectively. They then sh...
[ 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_3tFAs5E-Pe", "YDsxS0hDBKY", "HZk2xHW8wj", "YDsxS0hDBKY", "iclr_2021_3tFAs5E-Pe", "k0xNV0z_iUi", "tTB4109eyKo", "WO0Wjhtxges", "e406EvNzDIN", "iclr_2021_3tFAs5E-Pe", "iclr_2021_3tFAs5E-Pe", "iclr_2021_3tFAs5E-Pe" ]
iclr_2021_tkAtoZkcUnm
Neural Thompson Sampling
Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-armed bandit problems. In this paper, we propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation. At the core of our algorithm is a novel posterior distribution of the reward, where its mean is the neural network approximator, and its variance is built upon the neural tangent features of the corresponding neural network. We prove that, provided the underlying reward function is bounded, the proposed algorithm is guaranteed to achieve a cumulative regret of O(T^{1/2}), which matches the regret of other contextual bandit algorithms in terms of total round number T. Experimental comparisons with other benchmark bandit algorithms on various data sets corroborate our theory.
poster-presentations
All reviewers tend towards accepting the paper, and I agree.
train
[ "tge1iOd-HOE", "yaDcQ4EEY1j", "O5mfJhD2fHY", "dAsxMmOSij", "BifyH4e5hg5", "lERCPJBZU7O", "UgfxVCRYdv", "q0VN1wLVZ5d", "LDBwsAmn1VA" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "***** Paper's Summary *****\n\nThe authors proposed an algorithm named Neural Thompson Sampling (NeuralTS) for solving contextual multi-armed bandit problems. NeuralTS uses deep neural networks for dealing with exploration and exploitation. In the paper, the authors proved the sub-linear regret of NeuralTS, whic...
[ 7, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ 3, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2021_tkAtoZkcUnm", "LDBwsAmn1VA", "iclr_2021_tkAtoZkcUnm", "q0VN1wLVZ5d", "tge1iOd-HOE", "UgfxVCRYdv", "iclr_2021_tkAtoZkcUnm", "iclr_2021_tkAtoZkcUnm", "iclr_2021_tkAtoZkcUnm" ]
iclr_2021_q8qLAbQBupm
Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics
Understanding the dynamics of neural network parameters during training is one of the key challenges in building a theoretical foundation for deep learning. A central obstacle is that the motion of a network in high-dimensional parameter space undergoes discrete finite steps along complex stochastic gradients derived from real-world datasets. We circumvent this obstacle through a unifying theoretical framework based on intrinsic symmetries embedded in a network's architecture that are present for any dataset. We show that any such symmetry imposes stringent geometric constraints on gradients and Hessians, leading to an associated conservation law in the continuous-time limit of stochastic gradient descent (SGD), akin to Noether's theorem in physics. We further show that finite learning rates used in practice can actually break these symmetry induced conservation laws. We apply tools from finite difference methods to derive modified gradient flow, a differential equation that better approximates the numerical trajectory taken by SGD at finite learning rates. We combine modified gradient flow with our framework of symmetries to derive exact integral expressions for the dynamics of certain parameter combinations. We empirically validate our analytic expressions for learning dynamics on VGG-16 trained on Tiny ImageNet. Overall, by exploiting symmetry, our work demonstrates that we can analytically describe the learning dynamics of various parameter combinations at finite learning rates and batch sizes for state of the art architectures trained on any dataset.
poster-presentations
The paper offers a more systematic treatment of various symmetry-related results in the current literature. Concretely, the invariance properties exhibited by loss functions associated with neural networks give rise to various dynamical invariants of gradient flows. The authors address these dynamical invariants in a unified manner and study them with respect to different variants of gradient flows aimed at reflecting different algorithmic aspects of real training processes. The simplicity and the generality of dynamical invariants are both the strength and the weakness of the approach. On one hand, they provide a simple way of obtaining non-trivial generalities for the dynamics of learning processes. On the other hand, they abstract away the very structure of neural networks from which they derive, and hence only allow relatively generic statements. Perhaps the approach should be positioned more as a conceptual method for studying invariant loss functions. Overall, although the technical contributions in the paper are rather incremental, the conceptual contribution of using dynamical invariants to unify and somewhat simplify existing analyses in a clear and clean symmetry-based approach is appreciated by the reviewers and warrants a recommendation for borderline acceptance.
test
[ "a27_KGlIlo", "74Tdl_eXuSQ", "mB9TwkPWk2J", "JIhoowVgfA", "aEBvSkU-km0", "20zskMO_Nq4", "wuK9tp_Qy_", "8AhZoGfUJR", "q0fO_lEA4F4", "eYJkUS3I9ee", "FiWCOp5pDBp", "Mrh_wcnxRKY", "fIF4xs4em9f", "3NHruIAUwk", "IuHg86njQxY", "PZnR3CgAja7", "XEbv4iQjnES", "0xL88ho6NxO", "LDk_3oOVHSp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_re...
[ "Pros: \n- This paper is very well-written and motivated. \n- The train of thoughts is explained very clearly, such that I (admittedly not being an expert in this field) was able to follow. \n- The idea to unify invariances of the loss function by using symmetries and derive corresponding conservation laws (for $\...
[ 8, -1, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_q8qLAbQBupm", "q0fO_lEA4F4", "iclr_2021_q8qLAbQBupm", "aEBvSkU-km0", "20zskMO_Nq4", "8AhZoGfUJR", "iclr_2021_q8qLAbQBupm", "IuHg86njQxY", "eYJkUS3I9ee", "XEbv4iQjnES", "Mrh_wcnxRKY", "PZnR3CgAja7", "iclr_2021_q8qLAbQBupm", "wuK9tp_Qy_", "wuK9tp_Qy_", "LDk_3oOVHSp", "mB9Twk...
iclr_2021_EoFNy62JGd
Neural gradients are near-lognormal: improved quantized and sparse training
While training can mostly be accelerated by reducing the time needed to propagate neural gradients (loss gradients with respect to the intermediate neural layer outputs) back throughout the model, most previous works focus on the quantization/pruning of weights and activations. These methods are often not applicable to neural gradients, which have very different statistical properties. Distinguished from weights and activations, we find that the distribution of neural gradients is approximately lognormal. Considering this, we suggest two closed-form analytical methods to reduce the computational and memory burdens of neural gradients. The first method optimizes the floating-point format and scale of the gradients. The second method accurately sets sparsity thresholds for gradient pruning. Each method achieves state-of-the-art results on ImageNet. To the best of our knowledge, this paper is the first to (1) quantize the gradients to 6-bit floating-point formats, or (2) achieve up to 85% gradient sparsity --- in each case without accuracy degradation. Reference implementation accompanies the paper in the supplementary material.
poster-presentations
This work makes the observation that gradients in neural network training are approximately distributed according to a log-normal distribution. This observation is then used to compress and sparsify the gradients, which can be useful in distributed optimization of neural nets. The reviewers indicate that this contribution is novel and useful and they do not find any major issues with the presented work. I recommend accepting the paper for a poster presentation.
train
[ "4pX8DZCLRN2", "VnCDbkewfqV", "iI-OnrqEVd0", "WY1x3OJChV_", "6is03X6wqm", "-GRvbZV9pIX", "yc1ERoAOul6", "S3mlUgIugA", "ZEHQF8Aji10" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors suggest an interesting finding where the gradient distribution in each layer is close to log-normal distribution instead of normal distribution. Given the suggestion, the authors propose two closed-form analytical methods to produce a better low-precision floating-point format and an optimized sparsity...
[ 7, 7, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_EoFNy62JGd", "iclr_2021_EoFNy62JGd", "VnCDbkewfqV", "4pX8DZCLRN2", "S3mlUgIugA", "iclr_2021_EoFNy62JGd", "ZEHQF8Aji10", "iclr_2021_EoFNy62JGd", "iclr_2021_EoFNy62JGd" ]
iclr_2021_TTUVg6vkNjK
RODE: Learning Roles to Decompose Multi-Agent Tasks
Role-based learning holds the promise of achieving scalable multi-agent learning by decomposing complex tasks using roles. However, it is largely unclear how to efficiently discover such a set of roles. To solve this problem, we propose to first decompose joint action spaces into restricted role action spaces by clustering actions according to their effects on the environment and other agents. Learning a role selector based on action effects makes role discovery much easier because it forms a bi-level learning hierarchy: the role selector searches in a smaller role space and at a lower temporal resolution, while role policies learn in significantly reduced primitive action-observation spaces. We further integrate information about action effects into the role policies to boost learning efficiency and policy generalization. By virtue of these advances, our method (1) outperforms the current state-of-the-art MARL algorithms on 9 of the 14 scenarios that comprise the challenging StarCraft II micromanagement benchmark and (2) achieves rapid transfer to new environments with three times the number of agents. Demonstrative videos can be viewed at https://sites.google.com/view/rode-marl.
poster-presentations
The paper proposes a two-level hierarchical algorithm for efficient and scalable multi-agent learning where the high-level policy decides a reduced space for low-level to explore in. All the reviewers liked the premise and the experimental evaluation. Reviewers had some clarification questions which were answered in the authors' rebuttal. After discussing the rebuttal, AC as well as reviewers believe that the paper provides insights that will be useful for the multi-agent learning community and recommend acceptance.
train
[ "aZIIL96-JrH", "5g1ohWEpSDe", "_j7soKK-7yY", "Ne_cPC7h7hU", "YPdvER1GyUT", "0HUcTWolnd1" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper describes a role-based learning model for the DEC-POMDPs. The main contribution lies in the efficient discovery of roles from the joint action spaces and then learning a bi-level role assignment for each achievement. This is achieved in two steps. First, the joint action space is clustered into differen...
[ 8, 7, -1, -1, -1, 6 ]
[ 3, 4, -1, -1, -1, 4 ]
[ "iclr_2021_TTUVg6vkNjK", "iclr_2021_TTUVg6vkNjK", "0HUcTWolnd1", "5g1ohWEpSDe", "aZIIL96-JrH", "iclr_2021_TTUVg6vkNjK" ]
iclr_2021_3RLN4EPMdYd
Revisiting Hierarchical Approach for Persistent Long-Term Video Prediction
Learning to predict the long-term future of video frames is notoriously challenging due to the inherent ambiguities in a distant future and dramatic amplification of prediction error over time. Despite the recent advances in the literature, existing approaches are limited to moderately short-term prediction (less than a few seconds), while extrapolating it to a longer future quickly leads to destruction in structure and content. In this work, we revisit the hierarchical models in video prediction. Our method generates future frames by first estimating a sequence of dense semantic structures and subsequently translating the estimated structures to pixels by a video-to-video translation model. Despite the simplicity, we show that modeling structures and their dynamics in categorical structure space with a stochastic sequential estimator leads to surprisingly successful long-term prediction. We evaluate our method on two challenging video prediction scenarios, car driving and human dancing, and demonstrate that it can generate complicated scene structures and motions over a very long time horizon (i.e., thousands of frames), setting a new standard of video prediction with orders of magnitude longer prediction time than existing approaches. Video results are available at https://1konny.github.io/HVP/.
poster-presentations
This paper proposes a new implementation of a previously proposed two-stage process for video prediction: first predict future segmentation maps, then map them to video frames. Combined with other advances in video prediction and image generation, this simple idea is shown empirically to work very well, producing video predictions up to many hundreds of frames into the future in real stochastic settings with unprecedented quality. Strong ablation studies over the course of the review process further serve to confirm the value of various design choices involved in the implementation.
train
[ "KpGEUM4GBer", "8K9eLKC7GEo", "uND5rCu9JOJ", "j6gMFPyV4y", "OJW59sW_gQd", "yoAeedDCUyI", "mpvfbwpCCV8", "xSoqMhhlT28", "bWvitgCfd5z" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a VAE based hierarchical model for video prediction. The model employs recurrent model to predict intermediate representations (in the form of label maps) and these representations are mapped to pixel level information, i.e., videos. The paper presents an interesting idea of using representatio...
[ 7, 6, -1, -1, -1, -1, -1, 6, 5 ]
[ 5, 4, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_3RLN4EPMdYd", "iclr_2021_3RLN4EPMdYd", "bWvitgCfd5z", "yoAeedDCUyI", "KpGEUM4GBer", "xSoqMhhlT28", "8K9eLKC7GEo", "iclr_2021_3RLN4EPMdYd", "iclr_2021_3RLN4EPMdYd" ]
iclr_2021_vyY0jnWG-tK
Physics-aware, probabilistic model order reduction with guaranteed stability
Given (small amounts of) time-series' data from a high-dimensional, fine-grained, multiscale dynamical system, we propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model that is predictive of the fine-grained system's long-term evolution but also of its behavior under different initial conditions. We target fine-grained models as they arise in physical applications (e.g. molecular dynamics, agent-based models), the dynamics of which are strongly non-stationary but their transition to equilibrium is governed by unknown slow processes which are largely inaccessible by brute-force simulations. Approaches based on domain knowledge heavily rely on physical insight in identifying temporally slow features and fail to enforce the long-term stability of the learned dynamics. On the other hand, purely statistical frameworks lack interpretability and rely on large amounts of expensive simulation data (long and multiple trajectories) as they cannot infuse domain knowledge. The generative framework proposed achieves the aforementioned desiderata by employing a flexible prior on the complex plane for the latent, slow processes, and an intermediate layer of physics-motivated latent variables that reduces reliance on data and imbues inductive bias. In contrast to existing schemes, it does not require the a priori definition of projection operators from the fine-grained description and addresses simultaneously the tasks of dimensionality reduction and model estimation. We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics where probabilistic, long-term predictions of phenomena not contained in the training data are produced.
poster-presentations
The consensus of the reviews is to accept the paper. I agree. Reviewers highlighted many strengths, including a compelling main idea:
* R5: "The paper presents an interesting and motivating case for Bayesian inference in probabilistic generative models: a problem that has inherent uncertainty along with the ability to incorporate domain knowledge that can reduce the inference complexity."
* R3: "Overall, the idea is interesting and supported by correct mathematical derivations and experimental proofs of concept."
* R4: "the generative approach is novel. Adding domain knowledge is relevant and significant when dealing with real world applications"
As well as compelling experiments, substantially improved in the discussion period:
* R1: "The authors have shown some promising results in modeling particle dynamics."
* R5: "The addition of Appendix H, in my opinion, considerably strengthens the paper's story and case for acceptance. [... T]he authors have addressed most of my major concerns."
And clear writing:
* R5: "In general, the paper is well written (apart from some higher-level structural issues discussed below) and the notation is clear and unambiguous."
* R4: "The paper is very well written, clear"
The main weaknesses highlighted were in experiments (lacking good baselines, as well as ablations), and in discussing some choices in the model's construction. These were effectively addressed in the discussion (though R5 still points to some places that could be improved).
train
[ "iQRRZKI1aW6", "gZ11_l6TwB", "jjaRzWc6duh", "hrVhbwQl8hk", "dnH0G-Nqrpn", "iujFumXoUAa", "f_QLrI7eDw", "CLZRb6WkZQu", "k4B4VN1XRx", "ny1lA_xqjMF", "nDAEBjmLAQI", "-gbBGE2uvXi" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper presents a generative approach to modeling physical systems with high-dimensional, nonlinear dynamical systems such as those found in fluid mechanics. The authors provide a physics-motivated hierarchical model for high-dimensional time series and a variational inference method for inferring l...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2021_vyY0jnWG-tK", "iclr_2021_vyY0jnWG-tK", "gZ11_l6TwB", "ny1lA_xqjMF", "nDAEBjmLAQI", "iQRRZKI1aW6", "jjaRzWc6duh", "-gbBGE2uvXi", "iclr_2021_vyY0jnWG-tK", "iclr_2021_vyY0jnWG-tK", "iclr_2021_vyY0jnWG-tK", "iclr_2021_vyY0jnWG-tK" ]
iclr_2021_kLbhLJ8OT12
Modelling Hierarchical Structure between Dialogue Policy and Natural Language Generator with Option Framework for Task-oriented Dialogue System
Designing task-oriented dialogue systems is a challenging research topic, since they need not only to generate utterances fulfilling user requests but also to guarantee comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL); however, the bias in annotated system utterances remains a bottleneck. Reinforcement learning (RL) deals with this problem by using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances can be corrupted when improving the performance on fulfilling user requests. In our work, we (1) propose modelling the hierarchical structure between the dialogue policy and the natural language generator (NLG) with the option framework, called HDNO, where a latent dialogue act is applied to avoid designing specific dialogue act representations; (2) train HDNO via hierarchical reinforcement learning (HRL), and suggest asynchronous updates between the dialogue policy and the NLG during training to theoretically guarantee their convergence to a local maximizer; and (3) propose using a discriminator modelled with language models as an additional reward to further improve comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, datasets of multi-domain dialogues, in comparison with a word-level E2E model trained with RL, LaRL and HDSA, showing improvements in performance as measured by automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of the latent dialogue acts to show the explainability of HDNO.
poster-presentations
As the title states (and reads somewhat like an openreview review title), the authors apply the options framework from the RL community to perform hierarchical RL, where the option is the dialogue act and the subproblem is the NLG component in task-oriented dialogue (TOD) policy learning. The two technical contributions (beyond the conceptual connection above) are showing that asynchronous updates between the hierarchy levels guarantee convergence, and a language-model-based discriminator to densify the reward structure. Empirical results are solid improvements over recent SoTA findings.
== Pros ==
+ This is a conceptually appealing application of RL to TOD, and the authors had to make additional modifications to get it to work, which will help other researchers in this space.
+ There are both theoretical and empirical contributions. The theoretical contributions are also insightful and not superfluous to the problem being studied.
+ Using a language-model-based discriminator for reward shaping isn't completely new (although I haven't seen it in this setting and stated exactly the same way), but it is interesting and effective.
== Cons ==
+ The writing could use significant work; while the reviewers/rebuttal cleared up many issues, I actually didn't appreciate the value of this paper on my first read due to the writing (even if the motivation, etc. is sufficiently clear).
+ Human evaluation is treated as somewhat of an afterthought, and there isn't a deep dive into error analysis of the results. The visualization is a good first step, but there isn't really a when/why this method works better than others, which is important for a problem where evaluation isn't conclusive in the best cases. This is also significant since the authors claim 'comprehensibility'.
Evaluating along the requested dimensions:
- Quality: The conceptual and theoretical contributions are both of high quality. This is a promising approach to TOD, and the authors' additions (e.g., async optimization, LM reward shaping) are good examples of applied research. The empirical results are sufficient to above average, but not as strong (although this is partially an artifact of TOD evaluation).
- Clarity: The motivation is good, but the paper could use some work in writing. Some examples include (1) stating precisely how the option choices are derived (latent variables), (2) mapping out notation in something like a preliminaries section, and (3) sketching the proofs in the main body for continuity. If the reader is familiar with the closest cited work, it is a bit easier, but I think some effort in making the paper more self-contained would increase its impact.
- Originality: Options in HRL are widely known, but applying them to TOD is novel to the best of my (and the reviewers') knowledge. I think many could have come up with the basic idea, but it took some effort to get it to work.
- Significance: This is a widely studied problem and the approach is fairly convincing. I don't think it will be 'disruptive' or cross-pollinate to other application areas, but it will almost certainly be cited within the conversational agent community.
In summary, the reviewers like this paper a bit more than I do personally: I think it is borderline with a preference to accept, while the reviews are a more confident accept. However, the reviewers are also experienced experts in this area. I also do think that the authors handled concerns well in the rebuttal stages and addressed my more pressing concerns. I would encourage the authors to improve the writing if accepted, but I would prefer to accept this if possible.
train
[ "TP9xHrl2tm_", "sWn4OWereZp", "H0GsDO5r8cp", "OIrZP6msnCS", "tolDH2OMsBB", "FBnw7byIwLN", "_6E3VMlofRR", "ieBrqjzQLa5", "g1LJUGvxUm9", "7I3yfrB2YIS", "uC0w9QWGchx", "9KeQmRVmjF-", "ZOwNZ0CqOuV", "FyZ6NxC2cC4", "NhXcWZ-3hkb" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear R4,\n \n \nFirst, we agree with this statement that “Although the system is modular, the POL and NLG components are downstream, and therefore, would be more affected by errors propagating from the upstream state tracker and database modules”. However, in [1] the downstream module ignoring POL (replacing POL a...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "sWn4OWereZp", "FyZ6NxC2cC4", "tolDH2OMsBB", "iclr_2021_kLbhLJ8OT12", "OIrZP6msnCS", "OIrZP6msnCS", "OIrZP6msnCS", "OIrZP6msnCS", "FyZ6NxC2cC4", "NhXcWZ-3hkb", "FyZ6NxC2cC4", "ZOwNZ0CqOuV", "iclr_2021_kLbhLJ8OT12", "iclr_2021_kLbhLJ8OT12", "iclr_2021_kLbhLJ8OT12" ]
iclr_2021_hb1sDDSLbV
Learning explanations that are hard to vary
In this paper, we investigate the principle that good explanations are hard to vary in the context of deep learning. We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and `patchwork' solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND, that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
poster-presentations
The paper shows that if the goal is to find invariant mechanisms in the data, these can be identified by finding explanations (e.g. model parameters) that are hard to vary across examples. To find those "explanations", it then proposes to combine gradients across examples in a "logical AND" fashion, i.e., pooling gradients using a geometric mean with a logical AND masking. All reviewers agree that the direction is very interesting. While mentioning sums and products of experts might indeed be good, the overall idea is still very much interesting, also to the ICLR community, since it paves the way to applying this to a larger set of machine learning methods, as actually shown in the experimental evaluation. Still, the authors should make the link to causality more obvious from the very beginning. This should also involve clarifying that "explanations" here do not refer to "explanations" as used in Explainable AI. Overall, this is an interesting and simple (in a positive sense) contribution to the question of getting at least "more" causal models.
train
[ "z1fYJoxbEns", "7pfUVhmxwK0", "LOh5DYD7mCc", "cKtNFoCqrbj", "eH6greNpHj7", "aViaseS5dkw", "GP4glD0T8I", "XB5zTIGSxjK", "J_z40-zjvpp", "bUd_-ZUmbqI" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are glad to see you found the method interesting, the experiments well documented, the connection to IRM and causality properly addressed, and the paper largely well written, except for some presentation issues in the introduction and method.\nHowever, we are having a hard time reconciling the review with the i...
[ -1, -1, -1, -1, -1, -1, 5, 7, 2, 9 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "J_z40-zjvpp", "bUd_-ZUmbqI", "XB5zTIGSxjK", "eH6greNpHj7", "GP4glD0T8I", "iclr_2021_hb1sDDSLbV", "iclr_2021_hb1sDDSLbV", "iclr_2021_hb1sDDSLbV", "iclr_2021_hb1sDDSLbV", "iclr_2021_hb1sDDSLbV" ]
iclr_2021_rWZz3sJfCkm
Efficient Generalized Spherical CNNs
Many problems across computer vision and the natural sciences require the analysis of spherical data, for which representations may be learned efficiently by encoding equivariance to rotational symmetries. We present a generalized spherical CNN framework that encompasses various existing approaches and allows them to be leveraged alongside each other. The only existing non-linear spherical CNN layer that is strictly equivariant has complexity $O(C^2 L^5)$, where $C$ is a measure of representational capacity and $L$ the spherical harmonic bandlimit. Such a high computational cost often prohibits the use of strictly equivariant spherical CNNs. We develop two new strictly equivariant layers with reduced complexity $O(C L^4)$ and $O(C L^3 \log L)$, making larger, more expressive models computationally feasible. Moreover, we adopt efficient sampling theory to achieve further computational savings. We show that these developments allow the construction of more expressive hybrid models that achieve state-of-the-art accuracy and parameter efficiency on spherical benchmark problems.
poster-presentations
This paper proposes an efficient approach for computing equivariant spherical CNNs, significantly reducing the memory and computation costs. Experiments validate the effectiveness of the proposed approach.
Pros:
1. Speeding up equivariant spherical CNNs is a valuable topic in deep learning.
2. The proposed approach is effective in terms of parameter size, memory footprint, and computation time.
3. The theory underpinning the speedup method is sound.
Cons:
1. The readability should be improved. Two of the reviewers complained that the paper is hard to read, and only Reviewer #2 felt that it is "easy" to read (and only on the condition that the reader is familiar with the relevant mathematics). This situation improved after the rebuttal; nonetheless, readability should be improved further.
2. The experiments are a bit limited. This may partially be due to the limited number of benchmark datasets for spherical data, but among the existing datasets used for comparison, Esteves et al. (2020) is not compared on all of them: Esteves et al. (2020) is only reported on spherical MNIST, where its performance is very close to that of the proposed method. This worries the AC, who is eager to see whether the results on QM7 and SHREC'17 would be similar.
After the rebuttal, three of the reviewers raised their scores, so the AC recommends acceptance.
train
[ "hf9JjN8IWpz", "EwuNng7dIAf", "xTWYk96CIM0", "U0y8uGYjYRj", "5WiqP8b4WHV", "PN1KIV6var1", "3C8oxTfRrvA", "jnAcoWOHarq", "TRSa7hQmhUR", "Hsi2LxsuSsX", "WnuBqrR80-u", "hcqIgP3QFdt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper introduces a framework for computationally efficient and exactly rotation-equivariant spherical CNNs. The work most closely resembles the Fourier space method of Kondor et al., but improves on it in a number of ways: firstly, a channel-wise structure is introduced for the tensor product nonlinearities, w...
[ 8, 6, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 2, 2, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_rWZz3sJfCkm", "iclr_2021_rWZz3sJfCkm", "iclr_2021_rWZz3sJfCkm", "iclr_2021_rWZz3sJfCkm", "hf9JjN8IWpz", "hf9JjN8IWpz", "U0y8uGYjYRj", "U0y8uGYjYRj", "xTWYk96CIM0", "xTWYk96CIM0", "EwuNng7dIAf", "EwuNng7dIAf" ]
iclr_2021_ULQdiUTHe3y
Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks
In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e. a single graph, image, or document respectively. Existing adversarial robustness certificates consider each prediction independently and are thus overly pessimistic for such tasks. They implicitly assume that an adversary can use different perturbed inputs to attack different predictions, ignoring the fact that we have a single shared input. We propose the first collective robustness certificate which computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation, i.e. cannot be attacked. We focus on Graph Neural Networks and leverage their locality property - perturbations only affect the predictions in a close neighborhood - to fuse multiple single-node certificates into a drastically stronger collective certificate. For example, on the Citeseer dataset our collective certificate for node classification increases the average number of certifiable feature perturbations from 7 to 351.
poster-presentations
This paper considers a new robustness setting, where multiple predictions are made simultaneously based on a single input. Unlike existing robustness certificates, which consider the perturbation of each prediction independently, the authors propose a collective robustness certificate that computes the number of predictions that are simultaneously guaranteed to remain stable under perturbation. This yields more optimistic results. Most reviewers think this is a very interesting work, and the authors present an effective method to combine individual certificates. The experimental results are convincing. I recommend acceptance.
val
[ "OQdINjSLlrE", "wHQEEyOf0e7", "sr4wM_mrkqR", "CYLsAx5t9wd", "I36iuweguqe", "Lft3TtcKDb7", "c5YLnE9omq", "Z8FbJxzBmOj", "lqtzTmNj6Ms" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\n-------------\nCurrent methods on adversarial robustness certificates consider data points independently which are highly pessimistic for structured data. This work proposes the first collective robustness certificate that considers the structure of the graph by modeling locality in order to derive strong...
[ 8, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ 2, -1, -1, -1, -1, -1, 3, 1, 1 ]
[ "iclr_2021_ULQdiUTHe3y", "iclr_2021_ULQdiUTHe3y", "OQdINjSLlrE", "lqtzTmNj6Ms", "c5YLnE9omq", "Z8FbJxzBmOj", "iclr_2021_ULQdiUTHe3y", "iclr_2021_ULQdiUTHe3y", "iclr_2021_ULQdiUTHe3y" ]
iclr_2021_xjXg0bnoDmS
Entropic gradient descent algorithms and wide flat minima
The properties of flat minima in the empirical risk landscape of neural networks have been debated for some time. Increasing evidence suggests they possess better generalization capabilities with respect to sharp ones. In this work we first discuss the relationship between alternative measures of flatness: The local entropy, which is useful for analysis and algorithm development, and the local energy, which is easier to compute and was shown empirically in extensive tests on state-of-the-art networks to be the best predictor of generalization capabilities. We show semi-analytically in simple controlled scenarios that these two measures correlate strongly with each other and with generalization. Then, we extend the analysis to the deep learning scenario by extensive numerical validations. We study two algorithms, Entropy-SGD and Replicated-SGD, that explicitly include the local entropy in the optimization objective. We devise a training schedule by which we consistently find flatter minima (using both flatness measures), and improve the generalization error for common architectures (e.g. ResNet, EfficientNet).
poster-presentations
This paper studies the link between generalization behavior and "flatness" of the loss landscape in deep networks. Specifically, the authors study two measures of flatness (local entropy and local energy), and show that these two measures are strongly correlated with one another. Moreover, they show via a careful set of numerical experiments that two previously proposed algorithms (entropy SGD and replica SGD) that optimize for local entropy tend both to find flatter minima and to provide better generalization. Despite the fact that the paper proposes no new models or algorithms, the experiments are compelling and provide non-trivial insights into predicting the generalization behavior of deep networks, as well as solid evidence on the benefits of entropy regularization in SGD. The authors also seem to have satisfactorily answered the (numerous) initial concerns raised by the reviewers. Overall, I recommend an accept.
val
[ "-ssXagImwDA", "iX6oNZ-HcRg", "E9tIGc1JvZw", "gMSYZ7Q1JzS", "3bYul9FOrnr", "KLYYVM4Ltb", "Z5OMSX91aaU", "XThqxFd7lVn", "zazvip4H6A" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update after response: I appreciate the authors making their contributions clearer, and adding details about the training loss and error. I have increased my score accordingly.\n\nOriginal Review:\n\nThis paper presents an empirical evaluation of whether flatness correlates with generalization using a few differen...
[ 6, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_xjXg0bnoDmS", "XThqxFd7lVn", "-ssXagImwDA", "zazvip4H6A", "Z5OMSX91aaU", "iclr_2021_xjXg0bnoDmS", "iclr_2021_xjXg0bnoDmS", "iclr_2021_xjXg0bnoDmS", "iclr_2021_xjXg0bnoDmS" ]
iclr_2021_jDdzh5ul-d
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency and a linear speedup for convergence in training (i.e., convergence performance increases linearly with respect to the number of workers). However, existing studies on linear speedup for convergence are limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. So far, it remains an open question whether or not the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL. In this paper, we show that the answer is affirmative. Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate $O(1/\sqrt{mKT} + 1/T)$ for full worker participation and a convergence rate $O(\sqrt{K}/\sqrt{nT} + 1/T)$ for partial worker participation, where $K$ is the number of local steps, $T$ is the number of total communication rounds, $m$ is the total number of workers, and $n$ is the number of workers in one communication round under partial worker participation. Our results also reveal that the local steps in FL can help convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.
poster-presentations
The authors have addressed the issues raised by the reviewers. All the reviewers think that the paper deserves to be published at ICLR 2021. The authors should incorporate all of the reviewers' suggestions into the final version, especially regarding clarity issues and clear explanations. One reviewer also encourages the authors to further investigate $m$'s effect on the convergence rate, to see whether there is a structural limitation in federated learning settings, as future work.
train
[ "gjsCYDqU3bw", "Ut9r_uo2iIp", "lXTGP4KavSD", "rfSyOZvmZ0z", "ABxiQLzZh5u", "D3T95dHiKkg", "4HyQeiHSaZ0", "K7AeHKVx7hH", "LSYKcLCZWno" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "#### Summary of the paper:\nThis paper analyzes convergence rates of the standard \"FedAvg\" algorithm (with two sided learning rates) that is widely used in federated learning in multiple orthogonal perspectives:\na) non I.I.D datasets - with reasonable assumptions about the heterogeneity and suitable parameters...
[ 7, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_jDdzh5ul-d", "iclr_2021_jDdzh5ul-d", "rfSyOZvmZ0z", "LSYKcLCZWno", "Ut9r_uo2iIp", "gjsCYDqU3bw", "ABxiQLzZh5u", "D3T95dHiKkg", "iclr_2021_jDdzh5ul-d" ]
iclr_2021_-GLNZeVDuik
Categorical Normalizing Flows via Continuous Transformations
Despite their popularity, to date the application of normalizing flows to categorical data remains limited. The current practice of using dequantization to map discrete data to a continuous space is inapplicable, as categorical data has no intrinsic order. Instead, categorical data have complex and latent relations that must be inferred, like the synonymy between words. In this paper, we investigate Categorical Normalizing Flows, that is, normalizing flows for categorical data. By casting the encoding of categorical data in continuous space as a variational inference problem, we jointly optimize the continuous representation and the model likelihood. Using a factorized decoder, we introduce an inductive bias that any interactions are modeled in the normalizing flow. As a consequence, we not only simplify the optimization compared to having a joint decoder, but also make it possible to scale up to a large number of categories, which is currently impossible with discrete normalizing flows. Based on Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant generative model for graphs. GraphCNF implements a three-step approach, modeling the nodes, edges, and adjacency matrix stepwise to increase efficiency. On molecule generation, GraphCNF outperforms both one-shot and autoregressive flow-based state-of-the-art models.
poster-presentations
A well-written paper that proposes a flow-based model for categorical data and applies it to graph generation with good results. Extending flow models to handle types of data that are not continuous is a useful contribution, and graph generation is an important application. Overall, the reviewers were positive about the paper and only a few negative points were raised, so I'm happy to recommend acceptance.
train
[ "0_y4dflcdsi", "pbpNH3wZazb", "ZdzHWMKRfqh", "fTlAQUGNFUb", "Y_fREubZl7", "bjX-QdTs4H", "uVeikX3sXHq", "MU8YGR778R", "4-bSS_H_j7", "Y1o98Y3bgSs", "mkVl0fF7CBq" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary:\n\nThis work uses the idea of variational inference to map categorical data to continuous space affording the use of normalizing flows. Authors use several ideas to increase their framework's applicability--factorized distribution assumption, use of multi-scale architecture for step-generation, and pe...
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_-GLNZeVDuik", "iclr_2021_-GLNZeVDuik", "fTlAQUGNFUb", "0_y4dflcdsi", "bjX-QdTs4H", "4-bSS_H_j7", "Y1o98Y3bgSs", "mkVl0fF7CBq", "iclr_2021_-GLNZeVDuik", "iclr_2021_-GLNZeVDuik", "iclr_2021_-GLNZeVDuik" ]
iclr_2021_Xv_s64FiXTv
Learning to Represent Action Values as a Hypergraph on the Action Vertices
Action-value estimation is a critical component of many reinforcement learning (RL) methods whereby sample complexity relies heavily on how fast a good estimator for action value can be learned. By viewing this problem through the lens of representation learning, good representations of both state and action can facilitate action-value estimation. While advances in deep learning have seamlessly driven progress in learning state representations, given the specificity of the notion of agency to RL, little attention has been paid to learning action representations. We conjecture that leveraging the combinatorial structure of multi-dimensional action spaces is a key ingredient for learning good representations of action. To test this, we set forth the action hypergraph networks framework---a class of functions for learning action representations in multi-dimensional discrete action spaces with a structural inductive bias. Using this framework we realise an agent class based on a combination with deep Q-networks, which we dub hypergraph Q-networks. We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and discretised physical control benchmarks.
poster-presentations
After reading the reviews and rebuttal and looking over the paper, I feel that the results are indeed strong, and the paper could have an impact in terms of exploiting the relationship among action dimensions. Perhaps the only detail I would add is that walking completely through the example given in Fig. 1 could be useful, since it might not be perfectly obvious to someone unfamiliar with this particular topic how one retrieves the Q-values (e.g., considering a simple mixing function like summation).
train
[ "SRAjBw5WSq1", "zluB-_JhFH", "40K5YkVTgc", "uwhEYK4p9Y3", "N7RF6wqP9by", "68ucaUNYNj5", "1lOscog--nu", "7aEVdN25D4w", "cg_ne69jZP8", "YGlmQGdmYTE", "LbHwINtDh8", "2FFkaifoUDC", "RGprVBeTTle", "o_oui06xlQ4", "O09ZCxmJsBs", "TcQ1oqldPi2", "FnGM2heLzB", "bvrYc51tyHK" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work focuses on learning action representations for problems involving high-dimensional action spaces. The aim is to build a flexible and general methodology for learning representations of multidimensional actions that can be combined with existing architectures (which mostly focus on learning state represen...
[ 8, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "iclr_2021_Xv_s64FiXTv", "uwhEYK4p9Y3", "iclr_2021_Xv_s64FiXTv", "N7RF6wqP9by", "68ucaUNYNj5", "1lOscog--nu", "7aEVdN25D4w", "cg_ne69jZP8", "YGlmQGdmYTE", "O09ZCxmJsBs", "TcQ1oqldPi2", "FnGM2heLzB", "SRAjBw5WSq1", "bvrYc51tyHK", "40K5YkVTgc", "iclr_2021_Xv_s64FiXTv", "iclr_2021_Xv_s6...
iclr_2021_6puUoArESGp
Debiasing Concept-based Explanations with Causal Analysis
The concept-based explanation approach is a popular model interpretability tool because it expresses the reasons for a model's predictions in terms of concepts that are meaningful to domain experts. In this work, we study the problem of the concepts being correlated with confounding information in the features. We propose a new causal prior graph for modeling the impact of unobserved variables, and a method to remove the impact of confounding information and noise using a two-stage regression technique borrowed from the instrumental variable literature. We also model the completeness of the concept set and show that our debiasing method works when the concepts are not complete. Our synthetic and real-world experiments demonstrate the success of our method in removing biases and improving the ranking of the concepts in terms of their contribution to the explanation of the predictions.
poster-presentations
The paper has merits in providing a particular way of understanding a prediction model based on auxiliary data (concepts). I have a generally more positive view of it, aligned with the higher-scoring reviews. However, I feel a bit uncomfortable with framing it as "causal", in the sense that it does not aim to provide any causal predictions; it is more of a smoothing method for capturing signal contaminated with "uninteresting" latent sources. This is more akin to regression with measurement error (see e.g. Carroll, Ruppert and Stefanski's "Nonlinear regression with measurement error"), where, as in this paper, different definitions of "instrumental variables" also exist and differ from the causal inference definition. I can see, though, why we may want to provide a causal interpretation in order to justify particular assumptions, not unlike interesting lines of work from Scholkopf's take on causality. The paper could be strengthened by some further discussion of the assumptions made about additivity in equations (2) and (3), which feel strong and not particularly welcome in many applications. The proposed title is still a bit clunky; I feel that the two-stage approach is less important than the structural assumptions made, so a title emphasizing the latter rather than the former would be more promising.
train
[ "rGlgTQYkrR", "u-t-8czwGvQ", "ku5HTMJjHfB", "h48i7tu535", "F7i2N6FDlrN", "2pAEmsqXtLQ", "TwywvSjOV5N", "BJEN13JNR8K", "NatCqAo6gVT" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overview of the Paper\nThis paper proposes a procedure to 'debias' and account for confounding while using 'concepts' as interpretations for black-box models. The approach proposes a causal graph, and then notes based on the proposed graph that sample labels satisfy conditions required of an instrumental variable....
[ 6, -1, -1, -1, -1, -1, 7, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_6puUoArESGp", "BJEN13JNR8K", "TwywvSjOV5N", "rGlgTQYkrR", "BJEN13JNR8K", "NatCqAo6gVT", "iclr_2021_6puUoArESGp", "iclr_2021_6puUoArESGp", "iclr_2021_6puUoArESGp" ]
iclr_2021_ADWd4TJO13G
Lifelong Learning of Compositional Structures
A hallmark of human intelligence is the ability to construct self-contained chunks of knowledge and adequately reuse them in novel combinations for solving different yet structurally related problems. Learning such compositional structures has been a significant challenge for artificial systems, due to the combinatorial nature of the underlying search problem. To date, research into compositional learning has largely proceeded separately from work on lifelong or continual learning. We integrate these two lines of work to present a general-purpose framework for lifelong learning of compositional structures that can be used for solving a stream of related tasks. Our framework separates the learning process into two broad stages: learning how to best combine existing components in order to assimilate a novel problem, and learning how to adapt the set of existing components to accommodate the new problem. This separation explicitly handles the trade-off between the stability required to remember how to solve earlier tasks and the flexibility required to solve new tasks, as we show empirically in an extensive evaluation.
poster-presentations
The paper addresses lifelong/continual learning (CL) by combining reusable components. The algorithm is based on updating components, updating how they are combined for a given task, and adding new components. Reviewers had concerns about the learning workflow, how it could scale to harder CL streams, and how it differs from existing LL/CL work. They also asked for clarifications about compositionality. They highlighted the experiments as a point of strength. After the rebuttal, all reviewers found the paper to be above the acceptance bar.
train
[ "j6eaxhAwM1L", "abYyri-WEzl", "dbQHvLiXRby", "ekm_xQHp8UU", "IqvL8Au220U", "6jj1Zp4Fktk", "p2aV2nYkdAM", "s2PUtvZiKta", "1Oiq9IO8PsF", "yOmre4iEObR", "JqGA9SPV7Ag", "A_lSa6TJ1i7", "n2wgd_Orucc", "bPZucmPNj3k", "Erv0WGRWE3Q", "VXyuKMXvnAC", "qGJZuMaFYo", "wo6CjWCQXWQ", "ajiJlmznRM...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "** Summary **\n\nThe paper proposes a novel approach to lifelong learning, which builds upon the idea of gradually construcing a set of reusable components.\nThrough extensive experiments on different underlying architectures, the authors demonstrate the promise of their approach.\n\n** Strengths **\n\nThe paper a...
[ 7, -1, -1, -1, -1, 6, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 9 ]
[ 3, -1, -1, -1, -1, 4, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_ADWd4TJO13G", "ekm_xQHp8UU", "A_lSa6TJ1i7", "bPZucmPNj3k", "1Oiq9IO8PsF", "iclr_2021_ADWd4TJO13G", "iclr_2021_ADWd4TJO13G", "JqGA9SPV7Ag", "n2wgd_Orucc", "iclr_2021_ADWd4TJO13G", "iclr_2021_ADWd4TJO13G", "qGJZuMaFYo", "6jj1Zp4Fktk", "Erv0WGRWE3Q", "yOmre4iEObR", "ajiJlmznRMm...
iclr_2021_xpFFI_NtgpW
Rethinking Embedding Coupling in Pre-trained Language Models
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
poster-presentations
This paper presents a thorough investigation of the idea of decoupling the input and output word embeddings of pre-trained language models. The research shows that decoupling can improve the performance of pre-trained LMs by reallocating the input word embedding parameters to the Transformer layers, while further improvements can be obtained by increasing the output embedding size. Experiments were conducted on the XTREME benchmark over a strong mBERT. R1&R2&R3 gave rather positive comments while R4 raised concerns about the model size. The authors gave a detailed response to these concerns, but R4 still thought the paper was overclaimed because the experiments were only conducted in a multilingual scenario.
train
[ "4V290Jqach0", "pgXnmzWQ7uz", "9JJBO0Lb9Jj", "VjmZsGDjJ1F", "QrtlLdOK8j", "qgMqDIpzqsF", "LlWEQtK73-r", "959iTeqc8ov", "0r5Css9Y2qM", "HN8h7VWbbR7" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their time and thoughtful feedback. All reviewers noted that our empirical results were strong and well supported by analysis. Reviewers further highlighted that the paper is well written and strongly motivated (R1, R3), that our strategy is novel (R3) and simple yet effective (R4), and ...
[ -1, -1, -1, -1, -1, -1, 4, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 3, 5 ]
[ "iclr_2021_xpFFI_NtgpW", "9JJBO0Lb9Jj", "LlWEQtK73-r", "959iTeqc8ov", "0r5Css9Y2qM", "HN8h7VWbbR7", "iclr_2021_xpFFI_NtgpW", "iclr_2021_xpFFI_NtgpW", "iclr_2021_xpFFI_NtgpW", "iclr_2021_xpFFI_NtgpW" ]
iclr_2021_gwnoVHIES05
Creative Sketch Generation
Sketching or doodling is a popular creative activity that people engage in. However, most existing work in automatic sketch understanding or generation has focused on sketches that are quite mundane. In this work, we introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations. We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances. Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches. In fact, in Creative Birds, subjects prefer sketches generated by DoodlerGAN over those drawn by humans!
poster-presentations
While much of generative modeling is tasked with the goal of generating content within the data distribution, the motivation of this work is to examine whether ML techniques can generate creative content. This work has two core contributions: 1) Two new datasets of creative sketches, birds and creatures, that have part annotations (size ~ 10K samples for each set). The way the datasets are structured with the body part annotations will facilitate the creativity aspect of the approach later described in the paper. 2) This paper proposes a part-based GAN model, which they call DoodlerGAN. It is inspired partly by the human creative process of sequential drawing. Here, the trained model determines the appropriate order of parts to generate, which makes the model well suited for human-in-the-loop interactive interfaces in creative applications, where it can make suggestions based on user-drawn partial sketches. They show that the proposed model, trained on the part-annotated datasets, is able to generate unseen compositions of birds and creatures with novel body part configurations for creative sketches. They conduct human evaluations and also use quantitative metrics to show the superiority of their approach (for human preference, and also FID score). Many reviewers, including R1 and myself, observe that the datasets provided, along with the parts-based labeling and modeling approach, are a clear advantage over existing datasets and methodology. With the ever-growing importance of generative models used in real-world applications, including the creative industry, I believe this paper provides a much-needed fresh take on creative ways of using our generative models besides making them larger, or achieving better log-likelihood scores. Many reviewers, including R3, found that this work is indeed a "Delightful, well written paper! I have concerns about its fit here." 
I strongly believe such works in fact definitely *do* belong at ICLR, and I think this work has the potential to get researchers in the generative modeling community to rethink what they are really optimizing for. I believe this paper will be a great addition to ICLR2021, and I look forward to seeing their presentation to the community to spark more creativity in our research endeavors. For this reason, I'm strongly recommending an acceptance (Poster).
train
[ "NtraifPTXTo", "Blhp9wVn75", "EJ-dHrve8mc", "GtqsazETUI", "TF-NCOILleO", "XNrzkg4x9df", "7OtFwLzBBPU", "o0GOj6fvIc", "sCdvfMwKe1j", "wd_EiseeBT0", "NMS0SnBOUiv", "nPbTeeOR7z6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\nThis paper introduces two creative sketch datasets of birds and creatures, segmented into parts, each with ~10k doodles collected from Amazon MTurk workers. In a user study, people tend to favor sketches from their dataset over the similar Google QuickDraw sketches. Additionally, the authors propose a...
[ 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_gwnoVHIES05", "7OtFwLzBBPU", "iclr_2021_gwnoVHIES05", "XNrzkg4x9df", "sCdvfMwKe1j", "EJ-dHrve8mc", "nPbTeeOR7z6", "NMS0SnBOUiv", "NtraifPTXTo", "iclr_2021_gwnoVHIES05", "iclr_2021_gwnoVHIES05", "iclr_2021_gwnoVHIES05" ]
iclr_2021_eJIJF3-LoZO
Concept Learners for Few-Shot Learning
Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance. The core of human cognition lies in the structured, reusable concepts that help us to rapidly adapt to new tasks and provide reasoning behind our decisions. However, existing meta-learning methods learn complex representations across prior labeled tasks without imposing any structure on the learned representations. Here we propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions. Instead of learning a joint unstructured metric space, COMET learns mappings of high-level concepts into semi-structured metric spaces, and effectively combines the outputs of independent concept learners. We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation on a novel dataset from a biological domain developed in our work. COMET significantly outperforms strong meta-learning baselines, achieving 6-15% relative improvement on the most challenging 1-shot learning tasks, while unlike existing methods providing interpretations behind the model's predictions.
poster-presentations
The paper introduces "Concept Embeddings" to Prototypical Networks, which are part-based representations learnt by a set of independent networks (which can share weights). The method first computes the concept embeddings of an input, and then takes the summation of the distances between those concept embeddings and their corresponding concept prototypes in each class to estimate the class probability. The experiments validate the proposed method on four benchmarks in three different domains, including vision, language and biology. For the biology task, the authors also develop a new benchmark on cross-organ cell type classification. The key novel idea of transferable concepts results in significantly improved generalization ability over the existing few-shot learning methods. Although some reviewers raised concerns about not using other few-shot image classification datasets such as MiniImageNet, these are not appropriate benchmarks, as the method requires the “part-based concepts” to reasonably span the space of all images, which is a characteristic of fine-grained image classification problems. Although this does limit the scope of the method, the fact that it is applicable to multiple tasks is a strong counterargument to the claim that it is too limited, so overall I disagree with the assessment of one reviewer that the choice of benchmarks is insufficient.
train
[ "w01y9Bl3tQs", "wUgaVWaaW9-", "l_v3lZn5rJ1", "CgwLovahSYG", "aWF3kPPHH08", "IRI5y-DVJr6", "CioF1RY_yW1", "VAY0Qn2siwa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a knowledge-driven prototypical learning strategy for few-shot classification tasks. The main idea of this work is to introduce a set of concepts defined in the subspaces of inputs and represent each class as a group of concept prototypes for few-shot learning. Following the prototypical network...
[ 6, -1, -1, -1, -1, 7, 5, 6 ]
[ 3, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_eJIJF3-LoZO", "IRI5y-DVJr6", "w01y9Bl3tQs", "CioF1RY_yW1", "VAY0Qn2siwa", "iclr_2021_eJIJF3-LoZO", "iclr_2021_eJIJF3-LoZO", "iclr_2021_eJIJF3-LoZO" ]
iclr_2021_6xHJ37MVxxp
Domain Generalization with MixStyle
Though convolutional neural networks (CNNs) have demonstrated remarkable ability in learning discriminative features, they often generalize poorly to unseen domains. Domain generalization aims to address this problem by learning from a set of source domains a model that is generalizable to any unseen domain. In this paper, a novel approach is proposed based on probabilistically mixing instance-level feature statistics of training samples across source domains. Our method, termed MixStyle, is motivated by the observation that visual domain is closely related to image style (e.g., photo vs.~sketch images). Such style information is captured by the bottom layers of a CNN where our proposed style-mixing takes place. Mixing styles of training instances results in novel domains being synthesized implicitly, which increase the domain diversity of the source domains, and hence the generalizability of the trained model. MixStyle fits into mini-batch training perfectly and is extremely easy to implement. The effectiveness of MixStyle is demonstrated on a wide range of tasks including category classification, instance retrieval and reinforcement learning.
poster-presentations
All three reviewers recommend acceptance after the rebuttal stage, and the AC found no reason to disagree with them. The proposed method is simple and effective, and the concerns raised about experimental validation and novelty seem well addressed in the rebuttal.
train
[ "S94HuhmWRwh", "ZBtzm7IqoV", "eaBLacKTvC_", "DE3f18co2hV", "--ls-mdu-U-", "g2YJDUhcz-b" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes a technique for domain generalization by mixing style of images from different domains. This work adopts a mix up style approach [A] for domain generalization. Different from [A], the paper proposes to conduct mix-up in the intermediate layers, in particular, instance normalization layers. The p...
[ 6, -1, -1, -1, 7, 7 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2021_6xHJ37MVxxp", "g2YJDUhcz-b", "--ls-mdu-U-", "S94HuhmWRwh", "iclr_2021_6xHJ37MVxxp", "iclr_2021_6xHJ37MVxxp" ]
iclr_2021_ujmgfuxSLrO
DeLighT: Deep and Light-weight Transformer
We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block using the DeLighT transformation, a deep and light-weight transformation and (2) across blocks using block-wise scaling, that allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average.
poster-presentations
This paper presents some innovations to transformers allowing some significant reductions in parameter count. While some reviewers were concerned that the proposed innovations seem incremental and may not stand the test of time, all reviewers recommended acceptance after engaging in a rich and interactive author discussion. Given the clear importance of making transformers more efficient I think this paper will be of interest to the community and is worthy of acceptance at ICLR.
train
[ "-FzMy9xHS4g", "leDAD3GdnTv", "-I8gd5OFrJd", "mrePIBXt2k-", "Qxt76zz4nq0", "GxB-RoPlqQc", "4pd1BhCzLF9", "OoMSBo8ZhNZ", "JOG2PkZjwmI", "g_M3hAEbCq_", "n79UYt76p6d", "T3hRBYPI7Ai", "UtXeiW9-V6", "SHBKMMb8a_C", "o9ZDHkoh5e0", "AAfEI01pLHM", "7tmV42ZbZz", "-dz3v47udUC", "Q3BWJfbGdLM...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "au...
[ "The paper replaces the standard MLP block with a novel building block called DeLight. DeLight is a deeper MLP of partially group-wise linear layers, which leads to parameter and potential compute savings. The authors show that their approach outperforms other transformer-like architectures with more parameters on ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_ujmgfuxSLrO", "iclr_2021_ujmgfuxSLrO", "leDAD3GdnTv", "iclr_2021_ujmgfuxSLrO", "T3hRBYPI7Ai", "-FzMy9xHS4g", "n79UYt76p6d", "JOG2PkZjwmI", "UtXeiW9-V6", "SHBKMMb8a_C", "o9ZDHkoh5e0", "AAfEI01pLHM", "2gyfmHvCZw", "_S1cdeeyrQ", "1whIP5OpW-6", "Q3BWJfbGdLM", "-dz3v47udUC", ...
iclr_2021_pqZV_srUVmK
Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy
We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings where both the actor and critic are represented by linear or deep neural networks. For both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear O(K^{-1/2}) rate, where K is the number of iterations. To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time. Moreover, under the broader scope of policy optimization with nonlinear function approximation, we prove that actor-critic with deep neural network finds the globally optimal policy at a sublinear rate for the first time.
poster-presentations
Most reviewers agree that the paper makes a valuable contribution in analyzing single-timescale actor-critic algorithms. There were some doubts about the theoretical advantage over two-timescale algorithms and the realizability assumptions, but the authors made satisfactory clarifications. Therefore, acceptance is recommended, though I strongly suggest that the authors explicitly state the key assumptions required to ensure global optimality in the abstract and introduction to avoid confusion.
train
[ "gZciFfzTWV_", "q5gpaJJm0s", "ryhAnor2zuH", "Qxb39Y3hYXg", "I_lEOsa8WsI", "yj3aaFUTzG2", "gNYO437h66W", "zQwbSHbta8z", "VYmzFFCYPt", "AljIbLyWOjd", "O0oO0Ac7hDf", "qHB96PfSkd" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the finite time performance of actor-critic algorithm with PPO-type update. Global convergence is provided by exploring the convex-like property of the objective function in the distributional space. Different from previous studies that focus either on two time-scale update or nested-loop updat...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 4 ]
[ "iclr_2021_pqZV_srUVmK", "I_lEOsa8WsI", "iclr_2021_pqZV_srUVmK", "ryhAnor2zuH", "gZciFfzTWV_", "O0oO0Ac7hDf", "AljIbLyWOjd", "VYmzFFCYPt", "qHB96PfSkd", "iclr_2021_pqZV_srUVmK", "iclr_2021_pqZV_srUVmK", "iclr_2021_pqZV_srUVmK" ]
iclr_2021_0oabwyZbOu
Mastering Atari with Discrete World Models
Intelligent agents need to generalize from past experience to achieve goals in complex environments. World models facilitate such generalization and allow learning behaviors from imagined outcomes to increase sample-efficiency. While learning world models from image inputs has recently become feasible for some tasks, modeling Atari games accurately enough to derive successful behaviors has remained an open challenge for many years. We introduce DreamerV2, a reinforcement learning agent that learns behaviors purely from predictions in the compact latent space of a powerful world model. The world model uses discrete representations and is trained separately from the policy. DreamerV2 constitutes the first agent that achieves human-level performance on the Atari benchmark of 55 tasks by learning behaviors inside a separately trained world model. With the same computational budget and wall-clock time, DreamerV2 reaches 200M frames and surpasses the final performance of the top single-GPU agents IQN and Rainbow. DreamerV2 is also applicable to tasks with continuous actions, where it learns an accurate world model of a complex humanoid robot and solves stand-up and walking from only pixel inputs.
poster-presentations
The main contribution of the paper is showing that a model-based approach can be competitive with (and even outperform) strong model-free methods on the 200M Atari benchmark. This is achieved through a set of improvements over the original Dreamer algorithm. Reviewers have been polarized over this submission (4,5,8,9). After reading the paper, reviews, rebuttals, and engaging with all reviewers in private conversations, I am recommending acceptance as a poster. I agree with R3 and R4 that « this is impressive work », « results are a convincing demonstration of its utility », « it is an important setup from the perspective of model-based RL », « the model is elegant », and « the benchmarking discussion is very useful for the community ». Although it is true, as R1 puts it, that this work can be seen as « an incremental set of tricks over a prior published approach », these tricks are not obvious and lead to very substantial empirical performance gains. Since the authors described them in detail and have also committed to sharing their code, I expect them to be quite valuable to other researchers. Finally, although I respect R2’s choice to stick to their rating of 4, I believe that their main concern, related to not fully understanding why this work improves on the existing SimPLE algorithm, is indeed justified, but is not enough for rejection. DreamerV2 has a lot of differences compared to SimPLE and it would be very costly to investigate in detail the impact of each of them. Hopefully, this work will motivate further research in model-based RL that will shed more light on such questions. I would encourage the authors, however, to elaborate a bit more on the differences vs. SimPLE in the « Related work » section (or Appendix, if there is not enough room in the main text).
train
[ "2rVyGMguhoK", "liCjv8y4851", "WyIK2qTc1H", "lnS6a4Y8IiH", "0e3V9V8fpLF", "IHkpABdBCdJ", "ANO0ZW1ccl1", "7-jrUqdWMGd", "c40bF2oj2tg", "EKBz7UzRK3k", "u5sjmiskkw8", "TKW3HgrCsht", "2gLbE6RAVi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "The authors introduce DreamerV2, a modification of the influential Dreamer RL agent (hereafter refered to as DreamerV1). The primary changes from DreamerV1 are a discrete latent space and a modified loss function (and with it, a modified optimization scheme). As in DreamerV1, the agent trains a world model with en...
[ 9, 8, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_0oabwyZbOu", "iclr_2021_0oabwyZbOu", "iclr_2021_0oabwyZbOu", "IHkpABdBCdJ", "iclr_2021_0oabwyZbOu", "c40bF2oj2tg", "liCjv8y4851", "0e3V9V8fpLF", "0e3V9V8fpLF", "liCjv8y4851", "2rVyGMguhoK", "WyIK2qTc1H", "WyIK2qTc1H" ]
iclr_2021_kW_zpEmMLdP
Learning Neural Event Functions for Ordinary Differential Equations
The existing Neural ODE formulation relies on an explicit knowledge of the termination time. We extend Neural ODEs to implicitly defined termination criteria modeled by neural event functions, which can be chained together and differentiated through. Neural Event ODEs are capable of modeling discrete and instantaneous changes in a continuous-time system, without prior knowledge of when these changes should occur or how many such changes should exist. We test our approach in modeling hybrid discrete- and continuous- systems such as switching dynamical systems and collision in multi-body systems, and we propose simulation-based training of point processes with applications in discrete control.
poster-presentations
This paper presents an extension of the neural ODE approach to include discrete changes in the continuous-time dynamics. All reviewers agree the contribution made by this paper is worth publishing. Most of the reviewers' concerns have been answered in the rebuttal and I therefore recommend accepting this paper.
train
[ "ALy5vLupVAe", "QKtQR4ZjMfr", "e_7bL36pQ2", "3Gts8HrB-Q7", "ni95cRKrgD", "G-25hcdlBe", "nclahBFi-nO", "Wcrr8mkdDH9", "dlCeSAMEKVI", "DhE_7lTQwHb", "zbOhuMKpDye" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply. I think the paper is a very valuable contribution and the additional comments make it more clear and accessible.", "I thank the authors for addressing most of my concerns by adding the details about architectures, algorithm and extra examples. The updated Table 2 now shows more clearly ...
[ -1, -1, 6, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nclahBFi-nO", "G-25hcdlBe", "iclr_2021_kW_zpEmMLdP", "Wcrr8mkdDH9", "dlCeSAMEKVI", "e_7bL36pQ2", "DhE_7lTQwHb", "zbOhuMKpDye", "iclr_2021_kW_zpEmMLdP", "iclr_2021_kW_zpEmMLdP", "iclr_2021_kW_zpEmMLdP" ]
iclr_2021_Q4EUywJIkqr
Contemplating Real-World Object Classification
Deep object recognition models have been very successful over benchmark datasets such as ImageNet. How accurate and robust are they to distribution shifts arising from natural and synthetic variations in datasets? Prior research on this problem has primarily focused on ImageNet variations (e.g., ImageNetV2, ImageNet-A). To avoid potential inherited biases in these studies, we take a different approach. Specifically, we reanalyze the ObjectNet dataset recently proposed by Barbu et al. containing objects in daily life situations. They showed a dramatic performance drop of the state of the art object recognition models on this dataset. Due to the importance and implications of their results regarding the generalization ability of deep models, we take a second look at their analysis. We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement. Relative to the numbers reported in Barbu et al., around 10-15% of the performance loss is recovered, without any test time data augmentation. Despite this gain, however, we conclude that deep models still suffer drastically on the ObjectNet dataset. We also investigate the robustness of models against synthetic image perturbations such as geometric transformations (e.g., scale, rotation, translation), natural image distortions (e.g., impulse noise, blur) as well as adversarial attacks (e.g., FGSM and PGD-5). Our results indicate that limiting the object area as much as possible (i.e., from the entire image to the bounding box to the segmentation mask) leads to consistent improvement in accuracy and robustness. Finally, through a qualitative analysis of ObjectNet data, we find that i) a large number of images in this dataset are hard to recognize even for humans, and ii) easy (hard) samples for models match with easy (hard) samples for humans. 
Overall, our analysis shows that ObjectNet is still a challenging test platform that can be used to measure the generalization ability of models. The code and data are available in [masked due to blind review].
poster-presentations
Reviewers agreed that overall the two-pronged message of the submission has utility. 1. That ObjectNet continues to be difficult for models to understand and is a challenging test platform even when objects are isolated from their backgrounds. This is significant and not obvious. Cropping objects makes the distribution shift between ObjectNet and ImageNet far smaller, but the large remaining performance gap points to the fact that detectors are limited by their ability to recognize the foregrounds of objects, not by their ability to isolate objects from their backgrounds. 2. That segmentation could be a promising direction for robustness to adversarial perturbations, which has so far been overlooked.
train
[ "XlynfQFTzr", "4MFn1zi8XZy", "rcztgbL_Cta", "hSDHPEbkKN", "mjj8BFjJR3o", "P9YoPi6Qxg9", "N5o478jC4yP", "yChvojl7ops", "h3ReLoWq6wa", "WTDDyFHGi15" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper revisits the ObjectNet dataset closely and finds that applying classifiers on object bounding boxes significantly reduces the gap between ImageNet and ObjectNet. The authors further investigate the robustness of CNNs against image perturbations and adversarial attacks, and find that limiting the object area to their ...
[ 6, 5, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_Q4EUywJIkqr", "iclr_2021_Q4EUywJIkqr", "iclr_2021_Q4EUywJIkqr", "mjj8BFjJR3o", "XlynfQFTzr", "h3ReLoWq6wa", "4MFn1zi8XZy", "WTDDyFHGi15", "iclr_2021_Q4EUywJIkqr", "iclr_2021_Q4EUywJIkqr" ]
iclr_2021_XQQA6-So14
Neural Spatio-Temporal Point Processes
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space. Central to our approach is a combination of continuous-time neural networks with two novel neural architectures, i.e., Jump and Attentive Continuous-time Normalizing Flows. This approach allows us to learn complex distributions for both the spatial and temporal domain and to condition non-trivially on the observed event history. We validate our models on data sets from a wide variety of contexts such as seismology, epidemiology, urban mobility, and neuroscience.
poster-presentations
This paper presents a model for spatiotemporal point processes using neural ODEs. Some technical innovations are introduced to allow the conditional intensity to change discontinuously in response to new events. Likewise, the spatial intensity model expands upon that proposed in prior work on neural SDEs. Reviewers were generally positive about the contributions and the empirical assessments, and the authors made substantial improvements during the discussion phase.
train
[ "NqwXVem8OKg", "djNCj7E3Oqb", "PTxDZIBZO-L", "iiR72qBdL7n", "tidqVvVZsY8", "oPD0NjtASOG", "3uMtylN3QjN", "0sJCGvVsevt", "8Qm3AD1IEU", "I5Mj9xZGkog", "k4KSafGcdGI" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a neural-ODE-based point process for spatio-temporal data. Under the general framework, three particular variants are proposed: they handle data with different characteristics and have different computational efficiency. \n\nPros: \n\nThe idea is interestingly novel. \n\nThe proposed model archi...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_XQQA6-So14", "iclr_2021_XQQA6-So14", "iclr_2021_XQQA6-So14", "I5Mj9xZGkog", "oPD0NjtASOG", "djNCj7E3Oqb", "0sJCGvVsevt", "NqwXVem8OKg", "k4KSafGcdGI", "iclr_2021_XQQA6-So14", "iclr_2021_XQQA6-So14" ]
iclr_2021_PpshD0AXfA
Generative Time-series Modeling with Fourier Flows
Generating synthetic time-series data is crucial in various application domains, such as medical prognosis, wherein research is hamstrung by the lack of access to data due to concerns over privacy. Most of the recently proposed methods for generating synthetic time-series rely on implicit likelihood modeling using generative adversarial networks (GANs)—but such models can be difficult to train, and may jeopardize privacy by “memorizing” temporal patterns in training data. In this paper, we propose an explicit likelihood model based on a novel class of normalizing flows that view time-series data in the frequency domain rather than the time domain. The proposed flow, dubbed a Fourier flow, uses a discrete Fourier transform (DFT) to convert variable-length time-series with arbitrary sampling periods into fixed-length spectral representations, then applies a (data-dependent) spectral filter to the frequency-transformed time-series. We show that, by virtue of the analytic properties of the DFT, the Jacobian determinants and inverse mapping for the Fourier flow can be computed efficiently in linearithmic time, without imposing explicit structural constraints as in existing flows such as NICE (Dinh et al. (2014)), RealNVP (Dinh et al. (2016)) and GLOW (Kingma & Dhariwal (2018)). Experiments show that Fourier flows perform competitively with state-of-the-art baselines.
poster-presentations
Nice ideas with practical advantages.
train
[ "PRQMT_SktCF", "k_LoXt_XItu", "GV1VMRz8Qk8", "MZekb0bAD-g", "FBSb7JexmH0", "FB-6augP3r_", "DHerFMlLHSR", "ppePakm9Ohd", "GqtR0Zj7d3A", "JFS4b02OhVs", "OStEOPLX9kO", "0zUcPiEoCfA", "uFwyahg9-c1", "ATdkVTGH8B", "elrATjkRWSx", "aSyqt4z7ms9" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nThe paper presents Fourier Flows (FF), which is a time series generative model in the frequency domain. It shows that the Jacobian of the DFT is equal to 1, which means that DFT does not add too much overhead. The results on the real-world datasets are encouraging and expected because the predictive...
[ 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_PpshD0AXfA", "iclr_2021_PpshD0AXfA", "FB-6augP3r_", "k_LoXt_XItu", "PRQMT_SktCF", "iclr_2021_PpshD0AXfA", "ppePakm9Ohd", "GqtR0Zj7d3A", "PRQMT_SktCF", "elrATjkRWSx", "0zUcPiEoCfA", "aSyqt4z7ms9", "ATdkVTGH8B", "k_LoXt_XItu", "iclr_2021_PpshD0AXfA", "iclr_2021_PpshD0AXfA" ]
iclr_2021_6FqKiVAdI3Y
DOP: Off-Policy Multi-Agent Decomposed Policy Gradients
Multi-agent policy gradient (MAPG) methods have recently witnessed vigorous progress. However, there is a significant performance discrepancy between MAPG methods and state-of-the-art multi-agent value-based approaches. In this paper, we investigate the causes that hinder the performance of MAPG algorithms and present a multi-agent decomposed policy gradient method (DOP). This method introduces the idea of value function decomposition into the multi-agent actor-critic framework. Based on this idea, DOP supports efficient off-policy learning and addresses the issues of centralized-decentralized mismatch and credit assignment in both discrete and continuous action spaces. We formally show that DOP critics have sufficient representational capability to guarantee convergence. In addition, empirical evaluations on the StarCraft II micromanagement benchmark and multi-agent particle environments demonstrate that DOP outperforms both state-of-the-art value-based and policy-based multi-agent reinforcement learning algorithms. Demonstrative videos are available at https://sites.google.com/view/dop-mapg/.
poster-presentations
The paper presents a decomposition of the value function in the context of CCDA. Most reviewers find this paper clear and well written, although one reviewer suggests changing the paper structure. The method presented in this paper is simple and well justified by a theoretical section. Experiments on several domains, including Starcraft 2 micro-management tasks, support the claims of that section. After some reviewers pointed out that the tabular setup is not useful in practice, the authors extended the empirical and theoretical results to a more general setup. Some reviewers point out that some theoretical results may not be directly related to the experimental findings. In particular, reviewer 3 does not support a central claim of the paper, and finds that CDM is misleading and does not provably represent the core problem. In general, reviewer 3 does not support acceptance of this paper, but I still believe this paper should be accepted based on the other reviews (clearly in favour of acceptance). I hope that the authors and reviewer 3 will be able to further discuss and reach an understanding, which hopefully should lead to fruitful results.
train
[ "HQ5E8hMQVTy", "_wr6Wzb9h4l", "ZV3eFQ7J3x", "VBIlCqEdfq4", "KbQLtFdbKRU", "ugYU6qM8h3l", "R2dzz0goQ-G", "9O4g7FXyvx2", "isTA35yheEL", "d3m1wYkeqa" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the review and comments.\n\n**Q1**: \"How exactly is a regular critic bad?\"\n\n**A1**: A regular critic has three drawbacks: (1) It does not support off-policy learning well for discrete action spaces. To calculate policy gradients, we need to estimate $Q_{tot}^{\\mathbf\\pi}$ instead o...
[ -1, -1, -1, -1, -1, -1, 7, 3, 9, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "9O4g7FXyvx2", "HQ5E8hMQVTy", "_wr6Wzb9h4l", "d3m1wYkeqa", "isTA35yheEL", "R2dzz0goQ-G", "iclr_2021_6FqKiVAdI3Y", "iclr_2021_6FqKiVAdI3Y", "iclr_2021_6FqKiVAdI3Y", "iclr_2021_6FqKiVAdI3Y" ]
iclr_2021_BbNIbVPJ-42
The Risks of Invariant Risk Minimization
Invariant Causal Prediction (Peters et al., 2016) is a technique for out-of-distribution generalization which assumes that some aspects of the data distribution vary across the training set but that the underlying causal mechanisms remain constant. Recently, Arjovsky et al. (2019) proposed Invariant Risk Minimization (IRM), an objective based on this idea for learning deep, invariant features of data which are a complex function of latent variables; many alternatives have subsequently been suggested. However, formal guarantees for all of these works are severely lacking. In this paper, we present the first analysis of classification under the IRM objective—as well as these recently proposed alternatives—under a fairly natural and general model. In the linear case, we show simple conditions under which the optimal solution succeeds or, more often, fails to recover the optimal invariant predictor. We furthermore present the very first results in the non-linear regime: we demonstrate that IRM can fail catastrophically unless the test data is sufficiently similar to the training distribution—this is precisely the issue that it was intended to solve. Thus, in this setting we find that IRM and its alternatives fundamentally do not improve over standard Empirical Risk Minimization.
poster-presentations
The authors have made significant efforts to thoroughly address all the concerns. Due to the amount of discussion, I had to go through the paper myself, and I agree with the authors on many of the points. In my opinion, this is a solid theoretical work on the pitfalls of IRM.
train
[ "ThTgSL_Nqme", "t_vlSgM-0zE", "0c9LzlEb9rY", "tL7gpqKeC2", "Rleo4H8UG9f", "I6Al0ja-nT0", "2oDYQzgDyyt", "YG5HxAeNeKY", "ha9q82E9nR", "cxsggtTchmv", "Oagd_8DmX0Q", "d0tHCqsxcO3", "yEmiMYmSB8w", "huUlZkp8hQ2", "BmLO0wy3oAX", "G3AdrU_JZ_V", "2iwA7nLuPi", "QBzTuF_HnjR", "XEYCgdQvHig"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_re...
[ "Pros:\n* The work gives extended theoretical analysis on the effect of invariant risk minimization scheme, which is an increasingly popular framework for robust prediction. The work considerably extends the results in the original IRM paper. The results seem reasonable, and clarify implausible beliefs on the frame...
[ 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2 ]
[ "iclr_2021_BbNIbVPJ-42", "iclr_2021_BbNIbVPJ-42", "tL7gpqKeC2", "Rleo4H8UG9f", "t_vlSgM-0zE", "d0tHCqsxcO3", "cxsggtTchmv", "d0tHCqsxcO3", "d0tHCqsxcO3", "Oagd_8DmX0Q", "G3AdrU_JZ_V", "yEmiMYmSB8w", "huUlZkp8hQ2", "BmLO0wy3oAX", "t_vlSgM-0zE", "XEYCgdQvHig", "I9U_Pj4o45W", "ThTgSL_...
iclr_2021_GTGb3M_KcUl
DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation
Recently, the DL compiler, together with Learning to Compile, has proven to be a powerful technique for optimizing deep learning models. However, existing methods focus on accelerating the convergence speed of the individual tensor operator rather than the convergence speed of the entire model, which results in long optimization times to obtain a desired latency. In this paper, we present a new method called DynaTune, which provides significantly faster convergence speed to optimize a DNN model. In particular, we consider a Multi-Armed Bandit (MAB) model for the tensor program optimization problem. We use UCB to handle the decision-making of time-slot-based optimization, and we devise a Bayesian belief model that allows predicting the potential performance gain of each operator with uncertainty quantification, which guides the optimization process. We evaluate and compare DynaTune with the state-of-the-art DL compiler. The experimental results show that DynaTune is 1.2-2.4 times faster to achieve the same optimization quality for a range of models across different hardware architectures.
poster-presentations
This paper applies multi-armed bandits to tuning deep learning code optimization. All reviewers agreed that this is an exploratory paper that opens up a new research area. My main criticism is algorithmic. In particular, the paper applies a 20-year-old algorithm to a problem with a small number of arms. It is definitely not as impressive as https://papers.nips.cc/paper/2018/file/f33ba15effa5c10e873bf3842afb46a6-Paper.pdf, which studied a different (but related) problem. The tuning problem in this paper also seems non-stochastic and contextual, while the authors apply a stochastic non-contextual bandit algorithm. I shared these concerns with the reviewers, who insisted that the application is important enough to justify the acceptance of the paper. I respect their opinion and therefore suggest acceptance. I encourage the authors to take my comments into account when revising the paper.
train
[ "m1Vs7FffRqT", "UyMTDbY4wrO", "udcIN0vsQF7", "0X8y5uGwq9U", "ZIMVPPjMoK-", "wewmKr5XB1z", "Fn3VGKYoXNI", "BXuDO2r6I4m", "c9oH_hWi7Cg", "icXZ-7Mb49n", "wYY0ZdrdlMs", "509H3QIz0-l", "n6SA-HD5j4c", "PD9kK6AM66M", "W2Eo_6827zB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We sincerely thank all reviewers for their positive and constructive comments, time, and effort for improving the paper! Apart from the revision that has already been made to the manuscript, we will incorporate the reviewers' additional suggestions in the final version of the paper. We also very much appreciate al...
[ -1, -1, 6, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ -1, -1, 5, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2021_GTGb3M_KcUl", "509H3QIz0-l", "iclr_2021_GTGb3M_KcUl", "c9oH_hWi7Cg", "iclr_2021_GTGb3M_KcUl", "n6SA-HD5j4c", "BXuDO2r6I4m", "W2Eo_6827zB", "icXZ-7Mb49n", "PD9kK6AM66M", "509H3QIz0-l", "udcIN0vsQF7", "ZIMVPPjMoK-", "iclr_2021_GTGb3M_KcUl", "iclr_2021_GTGb3M_KcUl" ]