paper_id: stringlengths, 19–21
paper_title: stringlengths, 8–170
paper_abstract: stringlengths, 8–5.01k
paper_acceptance: stringclasses, 18 values
meta_review: stringlengths, 29–10k
label: stringclasses, 3 values
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
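The schema above can be sketched as a minimal record-validation routine. This is an illustrative example, not part of the dataset tooling: the field names come from the schema, but the `validate_record` helper, the sample values, and the assumed label classes (`train`/`val`/`test` — only `train` and `val` appear below, the third of the 3 classes is a guess) are hypothetical.

```python
# Hypothetical sketch: validate one record against the schema above.
# Constraints mirror the stringlengths/stringclasses hints (e.g. paper_id
# is 19-21 chars, label has 3 classes, the review_* lists run in parallel).

def validate_record(rec):
    assert 19 <= len(rec["paper_id"]) <= 21
    assert 8 <= len(rec["paper_title"]) <= 170
    assert rec["label"] in {"train", "val", "test"}  # assumed 3 classes
    list_fields = ["review_ids", "review_writers", "review_contents",
                   "review_ratings", "review_confidences", "review_reply_tos"]
    # The six review_* lists are parallel: one entry per forum post.
    lengths = {len(rec[f]) for f in list_fields}
    assert len(lengths) == 1, "parallel review lists must align"
    return True

# Abbreviated sample record (long text fields elided).
record = {
    "paper_id": "iclr_2020_HkxYzANYDB",
    "paper_title": "CLEVRER: Collision Events for Video Representation and Reasoning",
    "paper_abstract": "...",
    "paper_acceptance": "accept-spotlight",
    "meta_review": "...",
    "label": "train",
    "review_ids": ["BJesxs8dYS"],
    "review_writers": ["official_reviewer"],
    "review_contents": ["..."],
    "review_ratings": [6],       # -1 marks non-review posts (author replies)
    "review_confidences": [4],
    "review_reply_tos": ["iclr_2020_HkxYzANYDB"],
}
print(validate_record(record))  # True
```

Note the `-1` sentinel in the rating and confidence lists: author and public posts occupy the same index positions as reviews, so the lists stay aligned with `review_ids`.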
iclr_2020_HkxYzANYDB
CLEVRER: Collision Events for Video Representation and Reasoning
The ability to reason about temporal and causal events from videos lies at the core of human intelligence. Most video reasoning benchmarks, however, focus on pattern recognition from complex visual and language input, instead of on causal structure. We study the complementary problem, exploring the temporal and causal structures behind videos of objects with simple visual appearance. To this end, we introduce the CoLlision Events for Video REpresentation and Reasoning (CLEVRER) dataset, a diagnostic video dataset for systematic evaluation of computational models on a wide range of reasoning tasks. Motivated by the theory of human causal judgment, CLEVRER includes four types of question: descriptive (e.g., ‘what color’), explanatory (‘what’s responsible for’), predictive (‘what will happen next’), and counterfactual (‘what if’). We evaluate various state-of-the-art models for visual reasoning on our benchmark. While these models thrive on the perception-based task (descriptive), they perform poorly on the causal tasks (explanatory, predictive and counterfactual), suggesting that a principled approach for causal reasoning should incorporate the capability of both perceiving complex visual and language inputs, and understanding the underlying dynamics and causal relations. We also study an oracle model that explicitly combines these components via symbolic representations.
accept-spotlight
The reviewers are unanimous in their opinion that this paper offers a novel approach to causal learning. I concur.
train
[ "rylwb13YiH", "Syx9iRiYsr", "rkgwUAsKiS", "rJgTxAiKoS", "BJesxs8dYS", "H1lez5_nFr", "SJl5uAax5B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your helpful comments and suggestions about our manuscript. We address your specific concerns and questions below.\n\n1. Multiple choice versus fine-grained physical reasoning.\n\nEven though the tasks in CLEVRER are grounded to multiple choices on natural language inputs, identification and predi...
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 4, 4, 1 ]
[ "BJesxs8dYS", "H1lez5_nFr", "SJl5uAax5B", "iclr_2020_HkxYzANYDB", "iclr_2020_HkxYzANYDB", "iclr_2020_HkxYzANYDB", "iclr_2020_HkxYzANYDB" ]
iclr_2020_r1lZ7AEKvB
The Logical Expressiveness of Graph Neural Networks
The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow updating the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings, showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.
accept-spotlight
The paper focuses on characterizing the expressiveness of graph neural networks. The reviewers were satisfied that the authors answered their questions sufficiently and uniformly agree that this is a strong paper that should be accepted.
val
[ "r1x495CRqH", "HJgRoh5dsS", "r1eHD2cOiH", "HygY42cOoB", "HJlFy2cdir", "r1gFmXtGsS", "r1egYwC6tr", "H1ezHHmAtB" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper establishes novel theoretical connections between Boolean node classifiers on aggregate-combine Graph Neural Networks (AC-GNNs) and first-order predicate logic (FOC2). It shows that current boolean node classifiers on AC-GNNs can only represent a subset of FOC2 but that a simple extension taking into gl...
[ 8, -1, -1, -1, -1, -1, 8, 8 ]
[ 1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_r1lZ7AEKvB", "r1gFmXtGsS", "r1x495CRqH", "H1ezHHmAtB", "r1egYwC6tr", "iclr_2020_r1lZ7AEKvB", "iclr_2020_r1lZ7AEKvB", "iclr_2020_r1lZ7AEKvB" ]
iclr_2020_r1g87C4KwB
The Break-Even Point on Optimization Trajectories of Deep Neural Networks
The early phase of training of deep neural networks is critical for their final performance. In this work, we study how the hyperparameters of stochastic gradient descent (SGD) used in the early phase of training affect the rest of the optimization trajectory. We argue for the existence of the "break-even" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD. In particular, we demonstrate on multiple classification tasks that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients. These effects are beneficial from the optimization perspective and become visible after the break-even point. Complementing prior work, we also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers. In short, our work shows that key properties of the loss surface are strongly influenced by SGD in the early phase of training. We argue that studying the impact of the identified effects on generalization is a promising future direction.
accept-spotlight
This is an interesting study analyzing learning trajectories and their dependence on hyperparameters, important for better understanding of learning in deep neural networks. All reviewers agree that the paper has a useful message to the ICLR community, and appreciate changes made by the authors in response to the initial reviews.
train
[ "Syg9XXmgoB", "SJekcJOCtH", "B1gbRl6tjr", "H1xAFABVjB", "Syl1RTS4jH", "SJgdUABNsS", "H1ltHRrEsB", "Bkx_caS4oB", "r1ly56SEoB", "HkgQJa95FH", "Hklp8J-nvH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author" ]
[ "This paper studies two objects that quantify the optimization trajectory: the Hessian of the training loss (H) that describes the curvature of the loss surface, and the covariance of gradients that quantifies noise induced by noisy estimate of the full-batch gradient. \n\nThe authors predict and demonstrate that l...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, 6, -1 ]
[ 4, 1, -1, -1, -1, -1, -1, -1, -1, 3, -1 ]
[ "iclr_2020_r1g87C4KwB", "iclr_2020_r1g87C4KwB", "iclr_2020_r1g87C4KwB", "Syg9XXmgoB", "SJekcJOCtH", "Syg9XXmgoB", "Syg9XXmgoB", "HkgQJa95FH", "HkgQJa95FH", "iclr_2020_r1g87C4KwB", "iclr_2020_r1g87C4KwB" ]
iclr_2020_H1eA7AEtvS
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT~\citep{devlin2018bert}. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and \squad benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
accept-spotlight
This paper proposes three modifications of BERT-type models, two of which concern parameter sharing and one a new auxiliary loss. New SOTA results on downstream tasks are demonstrated. All reviewers liked the paper, as did many public commenters. Acceptance is recommended.
train
[ "rygf22FniS", "HkxZGWJisB", "B1x9O5W-jr", "B1gKTRWbjH", "ByxuKYW-jB", "Bkx3GYb-iB", "BkxJzVhyoB", "rJlMUMIoFH", "SygC0QIhFB", "Hkl2NgFecr", "B1lKRMYeqH", "BJex-gQg5B", "SklNxFQ6KB", "rJe1YLXptS", "SkgS1BMsKB", "HkelX735YH", "HkeS39ottH", "SyxfFRUKYH", "ryeI2ojwKS", "Hygq1uOBtS"...
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "public", "public", "author", "author", "public", "author", "public", "author", "public"...
[ "Okay thanks for the clarification!", "We want to thank the reviewers again for their suggestions! We have updated the paper with the following changes: \n - Addressing the typo pointed out by Reviewer 2.\n - Addressing Reviewer 3’s concern on the overgeneralization problem of this sentence “dropout can hurt pe...
[ -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "ByxuKYW-jB", "iclr_2020_H1eA7AEtvS", "rJlMUMIoFH", "BkxJzVhyoB", "SygC0QIhFB", "Hkl2NgFecr", "iclr_2020_H1eA7AEtvS", "iclr_2020_H1eA7AEtvS", "iclr_2020_H1eA7AEtvS", "iclr_2020_H1eA7AEtvS", "SklNxFQ6KB", "rJe1YLXptS", "SkgS1BMsKB", "HkelX735YH", "iclr_2020_H1eA7AEtvS", "Hygq1uOBtS", ...
iclr_2020_HJxrVA4FDS
Disentangling neural mechanisms for perceptual grouping
Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence. This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons. However, the relative contributions of these connections to perceptual grouping are poorly understood. We address this question by systematically evaluating neural network architectures featuring combinations of bottom-up, horizontal, and top-down connections on two synthetic visual tasks, which stress low-level "Gestalt" vs. high-level object cues for perceptual grouping. We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up connections. Horizontal connections resolve straining on tasks with Gestalt cues by supporting incremental grouping, whereas top-down connections rescue learning on tasks with high-level object cues by modifying coarse predictions about the position of the target object. Our findings dissociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups.
accept-spotlight
All the reviewers recommend acceptance. The reviews found the paper to be interesting with substantial insights.
train
[ "H1xpB_j6KB", "H1eArVEioH", "SygWsyQjsH", "S1xq_1Qosr", "BklhB17siB", "BJg5myXoiS", "ryesUNO59H", "SkeA5ZK6FH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a dataset and designed a relevant network structure to analyze the function of horizontal and top-down connections for perceptual grouping. The used two datasets smartly isolate the requirements for exploiting Gestalt cues and object-based strategies. Appendix A detailed describes the cABC data...
[ 8, -1, -1, -1, -1, -1, 6, 8 ]
[ 1, -1, -1, -1, -1, -1, 5, 1 ]
[ "iclr_2020_HJxrVA4FDS", "BJg5myXoiS", "SkeA5ZK6FH", "H1xpB_j6KB", "ryesUNO59H", "iclr_2020_HJxrVA4FDS", "iclr_2020_HJxrVA4FDS", "iclr_2020_HJxrVA4FDS" ]
iclr_2020_rJgJDAVKvB
Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees
We propose a meta path planning algorithm named \emph{Neural Exploration-Exploitation Trees~(NEXT)} for learning from prior experience for solving new path planning problems in high dimensional continuous state and action spaces. Compared to more classical sampling-based methods like RRT, our approach achieves much better sample efficiency in high-dimensions and can benefit from prior experience of planning in similar environments. More specifically, NEXT exploits a novel neural architecture which can learn promising search directions from problem structures. The learned prior is then integrated into a UCB-type algorithm to achieve an online balance between \emph{exploration} and \emph{exploitation} when solving a new problem. We conduct thorough experiments to show that NEXT accomplishes new planning problems with more compact search trees and significantly outperforms state-of-the-art methods on several benchmarks.
accept-spotlight
All reviewers unanimously accept the paper.
train
[ "SkewetU15S", "ByxknBAjoB", "SJgwUURsjr", "rygR18Roir", "Hylq3SQaYr", "Bkey_3u7qS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nMotion-planning in high dimensional spaces is challenging due to the curse of dimensionality. Sampling-based motion planners like PRM, PRM*, RRT, RRT*, BIT* etc have been the go-to solution family. But often these algorithms solve every planning problem tabula rasa. This work combines learning with sam...
[ 8, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, 3, 1 ]
[ "iclr_2020_rJgJDAVKvB", "Bkey_3u7qS", "Hylq3SQaYr", "SkewetU15S", "iclr_2020_rJgJDAVKvB", "iclr_2020_rJgJDAVKvB" ]
iclr_2020_BkgYPREtPr
Symplectic Recurrent Neural Networks
We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories. SRNNs model the Hamiltonian function of the system with a neural network, and leverage symplectic integration, multiple-step training and initial state optimization to address the challenging numerical issues associated with Hamiltonian systems. We show SRNNs succeed reliably on complex and noisy Hamiltonian systems. Finally, we show how to augment the SRNN integration scheme in order to handle stiff dynamical systems such as bouncing billiards.
accept-spotlight
This paper proposes a novel architecture for learning Hamiltonian dynamics from data. The model outperforms the existing state of the art Hamiltonian Neural Networks on challenging physical datasets. It also goes further by proposing a way to deal with observation noise and a way to model stiff dynamical systems, like bouncing balls. The paper is well written, the model works well and the experimental evaluation is solid. All reviewers agree that this is an excellent contribution to the field, hence I am happy to recommend acceptance as an oral.
train
[ "BJlTuVBRFr", "Hyg1X9VTqS", "H1lAbHqTYH", "rJlZW2ZqoB", "B1xyAt34ir", "S1gCz_hEsH", "rkleXPh4iB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes to represent a Hamiltonian model of a physical system by a neural network. The parameters are then adjusted, so that the observations are considered maximally likely under a probabilistic model. The novelty is to consider a symplectic Leapfrog integration scheme for the Hamiltonian system, whic...
[ 8, 8, 6, -1, -1, -1, -1 ]
[ 1, 1, 3, -1, -1, -1, -1 ]
[ "iclr_2020_BkgYPREtPr", "iclr_2020_BkgYPREtPr", "iclr_2020_BkgYPREtPr", "H1lAbHqTYH", "Hyg1X9VTqS", "BJlTuVBRFr", "H1lAbHqTYH" ]
iclr_2020_S1gFvANKDS
Asymptotics of Wide Networks from Feynman Diagrams
Understanding the asymptotic behavior of wide networks is of considerable interest. In this work, we present a general method for analyzing this large width behavior. The method is an adaptation of Feynman diagrams, a standard tool for computing multivariate Gaussian integrals. We apply our method to study training dynamics, improving existing bounds and deriving new results on wide network evolution during stochastic gradient descent. Going beyond the strict large width limit, we present closed-form expressions for higher-order terms governing wide network training, and test these predictions empirically.
accept-spotlight
This submission presents bounds on the training dynamics (including gradient evolution) for deep linear (and in some cases nonlinear) networks as a function of the width of the layers or number of convolutional layers. The work also presents experimental results that provide evidence that the bounds are tight. Strengths: The work provides interesting insights into these training dynamics, particularly for the wide-but-not-infinite setting, which is less studied. The work also adapts cluster graphs and Feynman diagrams to derive these bounds, which could be useful tools for researchers in this field. Weaknesses: The validity and applicability of some of the results for nonlinear networks was not entirely clear at first but has been clarified in the revision. The reviewer consensus was to accept this submission.
train
[ "SyesRMTujB", "B1x_upx_sr", "HklO4Zydir", "B1el0lJusr", "BJefC4YVqr", "SJxzoDYLcS", "Byl5H39iqS" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We will try to clarify the relationship between activations, Feynman diagrams (FDs), and cluster graphs (CGs). We are happy to add these clarifications to the paper if they are useful!\n\nFDs and CGs are complementary tools. CGs provide a correct upper bound in all cases we prove, as well as in all cases we tested...
[ -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 1, 1, 4 ]
[ "B1x_upx_sr", "B1el0lJusr", "BJefC4YVqr", "Byl5H39iqS", "iclr_2020_S1gFvANKDS", "iclr_2020_S1gFvANKDS", "iclr_2020_S1gFvANKDS" ]
iclr_2020_Sklgs0NFvr
Learning The Difference That Makes A Difference With Counterfactually-Augmented Data
Despite alarm over the reliance of machine learning systems on so-called spurious patterns, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are due to confounding (e.g., a common cause), but not direct or indirect causal effects. In this paper, we focus on natural language processing, introducing methods and resources for training models less sensitive to spurious patterns. Given documents and their initial labels, we task humans with revising each document so that it (i) accords with a counterfactual target label; (ii) retains internal coherence; and (iii) avoids unnecessary changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal. Both datasets are publicly available.
accept-spotlight
This paper introduces the idea of a counterfactually augmented dataset, in which each example is paired with a manually constructed example with a different label that makes the minimal possible edit to the original example that makes that label correct. The paper justifies the value of these datasets as an aid in both understanding and building classifiers that are robust to spurious features, and releases two small examples. On my reading, this paper presents a very substantially new idea that is relevant to a major ongoing debate in the applied machine learning literature: How do we build models that learn some intended behavior, where the primary evidence we have of that behavior comes in the form of datasets with spurious correlations/artifacts. One reviewer argued for rejection on the grounds that dataset papers are not appropriate for publication at a main conference. I don't find that argument compelling, and I'm also not sure that it's accurate to call this paper primarily a dataset paper. We could not reach a complete consensus after further discussion. The other reviews raised some additional concerns about the paper, but the revised manuscript appears to have addressed them to the extent possible.
train
[ "HJxbp3D9jB", "rkg6_3D5jr", "S1eSrnDqsB", "H1e7QnDciS", "rke_9oPqir", "rylDGP7Zjr", "HJgaPkj0YH", "rylen8dl5H", "B1eNnTeLqS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all four reviewers for thoughtful reviews. We are glad to see that 3 reviewers vote for acceptance and that two champion the paper, with the reviewers recognizing the paper to be “timely”, to address “an important problem”, to contribute an “exciting” resource, and to be “extremely valuable ...
[ -1, -1, -1, -1, -1, 8, 8, 1, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "iclr_2020_Sklgs0NFvr", "rylDGP7Zjr", "HJgaPkj0YH", "rylen8dl5H", "B1eNnTeLqS", "iclr_2020_Sklgs0NFvr", "iclr_2020_Sklgs0NFvr", "iclr_2020_Sklgs0NFvr", "iclr_2020_Sklgs0NFvr" ]
iclr_2020_r1genAVKPB
Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?
Modern deep learning methods provide effective means to learn good representations. However, is a good representation itself sufficient for sample efficient reinforcement learning? This question has largely been studied only with respect to (worst-case) approximation error, in the more classical approximate dynamic programming literature. With regards to the statistical viewpoint, this question is largely unexplored, and the extant body of literature mainly focuses on conditions which \emph{permit} sample efficient reinforcement learning with little understanding of what are \emph{necessary} conditions for efficient reinforcement learning. This work shows that, from the statistical viewpoint, the situation is far subtler than suggested by the more traditional approximation viewpoint, where the requirements on the representation that suffice for sample efficient RL are even more stringent. Our main results provide sharp thresholds for reinforcement learning methods, showing that there are hard limitations on what constitutes good function approximation (in terms of the dimensionality of the representation), where we focus on natural representational conditions relevant to value-based, model-based, and policy-based learning. These lower bounds highlight that having a good (value-based, model-based, or policy-based) representation in and of itself is insufficient for efficient reinforcement learning, unless the quality of this approximation passes certain hard thresholds. Furthermore, our lower bounds also imply exponential separations on the sample complexity between 1) value-based learning with perfect representation and value-based learning with a good-but-not-perfect representation, 2) value-based learning and policy-based learning, 3) policy-based learning and supervised learning and 4) reinforcement learning and imitation learning.
accept-spotlight
The authors challenge the idea that good representations in RL are sufficient for learning good policies with an interesting negative result -- they show that there exist MDPs which require an exponential number of samples to learn a near-optimal policy even if a good-but-not-perfect representation is given to the agent for both value-based and policy-based learning. Reviewers had some minor technical questions which were clarified sufficiently by the authors, leading to a consensus on the contribution and quality of this work. Thus, I recommend this paper for acceptance.
train
[ "rklhKttoiB", "Skl9pF8vKB", "Bkll1ra_sB", "Hyx1asnwoS", "HyxeRKo4oS", "H1lxegn4oS", "r1xLt1n4sB", "SyeLBWVRFH", "HJxjXgv0tS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We are glad that our clarification helps your understanding. Please find our responses to your questions below.\n\nLinear regression is actually doable with $O(d)$ samples in the setting studied in this paper. As mentioned on Page 7, the features constructed in Lemma A.1 are in fact random unit vectors. Standard r...
[ -1, 6, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "Bkll1ra_sB", "iclr_2020_r1genAVKPB", "HyxeRKo4oS", "iclr_2020_r1genAVKPB", "Skl9pF8vKB", "SyeLBWVRFH", "HJxjXgv0tS", "iclr_2020_r1genAVKPB", "iclr_2020_r1genAVKPB" ]
iclr_2020_B1xm3RVtwB
Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning
In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, success in the real world commonly requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. In the last year, the card game Hanabi has been established as a new benchmark environment for AI to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e. the ability to effectively reason over the intentions, beliefs and point of view of other agents when observing their actions. Learning to be informative when observed by others is an interesting challenge for Reinforcement Learning (RL): Fundamentally, RL requires agents to explore in order to discover good policies. However, when done naively, this randomness will inherently make their actions less informative to others during training. We present a new deep multi-agent RL method, the Simplified Action Decoder (SAD), which resolves this contradiction exploiting the centralized training phase. During training SAD allows other agents to not only observe the (exploratory) action chosen, but agents instead also observe the greedy action of their team mates. By combining this simple intuition with an auxiliary task for state prediction and best practices for multi-agent learning, SAD establishes a new state of the art for 2-5 players on the self-play part of the Hanabi challenge.
accept-spotlight
The method presented, the simplified action decoder, is a clever way of addressing the influence of exploratory actions in multi-agent RL. It's shown to enable state of the art performance in Hanabi, an interesting and relatively novel cooperative AI challenge. It seems, however, that the method has wider applicability than that. All reviewers agree that this is good and interesting work. Reviewer 2 had some issues with the presentation of the results and certain assumptions, but the authors responded so as to alleviate any concerns. This paper should definitely be accepted, if possible as oral.
train
[ "HJxoIwDojS", "Hkxfch4ssr", "Bkl2ONldFr", "B1lP5nVwor", "B1lfiqEvjS", "SygklZ5DYr", "SJxjUgaEsB", "BkxXJxNWoH", "Hkge_T7WjB", "SJeSyT7WsB", "r1x4m2mWir", "H1g0gKQlqr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Once again, many thanks for the detailed feedback and also for the fast response. ", "Thanks for the fruitful comments and explanations. The presentation of the results with mean and max of 13 seeds seems much more solid. The performance gain solely due to greedy output is now clearly demonstrated. I have updat...
[ -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 1, -1, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "Hkxfch4ssr", "Hkge_T7WjB", "iclr_2020_B1xm3RVtwB", "Hkge_T7WjB", "SJxjUgaEsB", "iclr_2020_B1xm3RVtwB", "BkxXJxNWoH", "SygklZ5DYr", "Bkl2ONldFr", "H1g0gKQlqr", "iclr_2020_B1xm3RVtwB", "iclr_2020_B1xm3RVtwB" ]
iclr_2020_rkeu30EtvS
Network Deconvolution
Convolution is a central operation in Convolutional Neural Networks (CNNs), which applies a kernel to overlapping regions shifted across the image. However, because of the strong correlations in real-world image data, convolutional kernels are in effect re-learning redundant data. In this work, we show that this redundancy has made neural network training challenging, and propose network deconvolution, a procedure which optimally removes pixel-wise and channel-wise correlations before the data is fed into each layer. Network deconvolution can be efficiently calculated at a fraction of the computational cost of a convolution layer. We also show that the deconvolution filters in the first layer of the network resemble the center-surround structure found in biological neurons in the visual regions of the brain. Filtering with such kernels results in a sparse representation, a desired property that has been missing in the training of neural networks. Learning from the sparse representation promotes faster convergence and superior results without the use of batch normalization. We apply our network deconvolution operation to 10 modern neural network models by replacing batch normalization within each. Extensive experiments show that the network deconvolution operation is able to deliver performance improvement in all cases on the CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Cityscapes, and ImageNet datasets.
accept-spotlight
This paper presents a feature normalization method for CNNs that decorrelates channel-wise and spatial correlation simultaneously. Overall, all reviewers are positive about acceptance and I support their opinions. The idea and implementation are relatively straightforward but well-motivated and reasonable. Experiments are well-organized and intensive, providing enough evidence to convince readers of its effectiveness in terms of final accuracy and convergence speed. Also, its analogy to the biological center-surround structure is thought-provoking. The novelty of the method seems somewhat incremental considering that there already exists a channel-wise decorrelation method, but I think the findings of the paper are interesting and valuable enough for the ICLR community and would like to recommend acceptance. Minor comments: I recommend the authors mention zero-component analysis (ZCA) normalization, which has been a standard input normalization method for CIFAR datasets. I guess it is quite similar to the proposed method considering the 1x1 convolution. Also, comparison with other recent normalization methods (e.g., Group Norm) would be useful.
train
[ "rklB-cN5iB", "rkx0a6N5sr", "SyeuhK4cjB", "BJgMW2npKH", "r1lY9K6aYS", "Bklx8agbqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your in-depth review of the paper and the useful feedback. \n\nThe use of large kernels has not been an issue for training on ImageNet for two reasons: (1) We can use a larger subsampling stride (e.g. 7) to compute the covariance matrix to amortize the cost of larger kernel size, so that each pixel w...
[ -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "r1lY9K6aYS", "BJgMW2npKH", "Bklx8agbqS", "iclr_2020_rkeu30EtvS", "iclr_2020_rkeu30EtvS", "iclr_2020_rkeu30EtvS" ]
iclr_2020_ryxjnREFwH
Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetic, but most existing approaches are hard to scale to more domains or more complex reasoning. In this work, we propose the Neural Symbolic Reader (NeRd), which includes a reader, e.g., BERT, to encode the passage and question, and a programmer, e.g., LSTM, to generate a program that is executed to produce the answer. Compared to previous works, NeRd is more scalable in two aspects: (1) domain-agnostic, i.e., the same neural architecture works for different domains; (2) compositional, i.e., when needed, complex programs can be generated by recursively applying the predefined operators, which become executable and interpretable representations for more complex reasoning. Furthermore, to overcome the challenge of training NeRd with weak supervision, we apply data augmentation techniques and hard Expectation-Maximization (EM) with thresholding. On DROP, a challenging reading comprehension dataset that requires discrete reasoning, NeRd achieves 1.37%/1.18% absolute improvement over the state-of-the-art on EM/F1 metrics. With the same architecture, NeRd significantly outperforms the baselines on MathQA, a math problem benchmark that requires multiple steps of reasoning, by a 25.5% absolute increase in accuracy when trained on all the annotated programs. More importantly, NeRd still beats the baselines even when only 20% of the program annotations are given.
accept-spotlight
Main content: Blind review #1 summarizes it well: This paper presents a semantic parser that operates over passages of text instead of a structured data source. This is the first time anyone has demonstrated such a semantic parser (Siva Reddy and several others have essentially used unstructured text as an information source for a semantic parser, similar to OpenIE methods, but this is qualitatively different). The key insight is to let the semantic parser point to locations in the text that can be used in further symbolic operations. This is excellent work, and it should definitely be accepted. I have a ton of questions about this method, but they are good questions. -- Discussion: The reviews all agree on a generally positive assessment, and focus on details that have been addressed, rather than major problems. -- Recommendation and justification: This paper should be accepted. Even though novelty in terms of fundamental machine learning components is minimal, the architecture employing neural models to do symbolic work is a good contribution in a crucial direction (especially in the theme of ICLR).
val
[ "rkg414ZnoB", "B1gAuEW3oB", "ByeMhgWnsr", "HkeLMAxnor", "HJxHCbZ3or", "SklrjkWnjH", "Skx3myW3iB", "ByeXCAe3iH", "r1lSnj8XjH", "H1eIfQ5odH", "rJlyvMZy5B", "SkgcD3zecB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your interest in our work!\n\n1. We have compared extensively with previous methods such as BERT with calculator, MTMSN, etc, in our paper. To further clarify the differences, we copy part of our response to Reviewer 2 that emphasized the differences below. \n\n\"We would like to point out some importan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "r1lSnj8XjH", "iclr_2020_ryxjnREFwH", "rJlyvMZy5B", "H1eIfQ5odH", "SkgcD3zecB", "Skx3myW3iB", "ByeXCAe3iH", "HkeLMAxnor", "iclr_2020_ryxjnREFwH", "iclr_2020_ryxjnREFwH", "iclr_2020_ryxjnREFwH", "iclr_2020_ryxjnREFwH" ]
iclr_2020_B1lPaCNtPB
Real or Not Real, that is the Question
While generative adversarial networks (GANs) have been widely adopted across various applications, in this paper we generalize the standard GAN from a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insights on adversarial learning. More importantly, compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.
accept-spotlight
The paper proposes a novel GAN formulation where the discriminator outputs discrete distributions instead of a scalar. The objective uses two "anchor" distributions that correspond to real and fake data. There were some concerns about the choice of these distributions but authors have addressed it in their response. The empirical results are impressive and the method will be of interest to the wide generative models community.
train
[ "S1eQF85RYB", "HyeHXe5jor", "rJlMay5ssS", "B1xOTmgjiB", "Byxf8pIOir", "HJlJQdEKiB", "rJldMhpadH", "BJegLH-Fjr", "B1lOLNOdiH", "HyeBTewdiB", "BklbY-v_iB", "SJldG7XbcS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "Update: I raised my score from 3 to 6 after the authors addressed most of my comments.\n\n====================================================\n\nThis paper propose a new GAN formulation where the Discriminator outputs a discrete probability distribution instead of a scalar for each inputs. This discrete probabili...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 8 ]
[ 5, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2020_B1lPaCNtPB", "HJlJQdEKiB", "B1xOTmgjiB", "Byxf8pIOir", "S1eQF85RYB", "BJegLH-Fjr", "iclr_2020_B1lPaCNtPB", "B1lOLNOdiH", "HyeBTewdiB", "rJldMhpadH", "SJldG7XbcS", "iclr_2020_B1lPaCNtPB" ]
iclr_2020_S1lOTC4tDS
Dream to Control: Learning Behaviors by Latent Imagination
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
accept-spotlight
This paper presents an approach to model-based reinforcement learning in high-dimensional tasks. The approach involves learning a latent dynamics model, and performing rollouts thereof with an actor-critic model to learn behaviours. This is extensively evaluated on 20 visual control tasks. This paper was favourably received, but there were concerns around it being incremental (relative to PlaNet and SVG). The authors highlighted the differences in the rebuttal, clarifying the novelty of this work. Given the interesting ideas presented, and the convincing results, this paper should be accepted.
val
[ "H1xSQUW2tS", "r1lYlete9S", "SJeRQb-oFH", "SkxXFusisH", "HJezKUjisr", "HylXRrsijS", "BklFDSojir", "r1lsMBCdFB", "BJx3Y0dsKB", "H1lUufdiFS", "rygkDfsDtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "This paper introduced a latent space model for reinforcement learning in vision-based control tasks. It first learns a latent dynamics model, in which the transition model and the reward model can be learned on the latent state representations. Using the learned latent state representations, it used an actor-criti...
[ 6, 8, 6, -1, -1, -1, -1, 8, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, 3, -1, -1, -1 ]
[ "iclr_2020_S1lOTC4tDS", "iclr_2020_S1lOTC4tDS", "iclr_2020_S1lOTC4tDS", "SJeRQb-oFH", "H1xSQUW2tS", "r1lsMBCdFB", "r1lYlete9S", "iclr_2020_S1lOTC4tDS", "H1lUufdiFS", "rygkDfsDtH", "iclr_2020_S1lOTC4tDS" ]
iclr_2020_HJlA0C4tPS
A Probabilistic Formulation of Unsupervised Text Style Transfer
We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.
accept-spotlight
This paper proposes an unsupervised text style transfer model which combines a language model prior with an encoder-decoder transducer. They use a deep generative model which hypothesises a latent sequence that generates the observed sequences. It is trained on non-parallel data, and they report good results on unsupervised sentiment transfer, formality transfer, word decipherment, author imitation, and machine translation. The authors responded in depth to reviewer comments, and the reviewers took this into consideration. This is a well written paper with an elegant model, and I would like to see it accepted at ICLR.
train
[ "SJl7ALxqYH", "BkxAC3mhoH", "H1xpboQniH", "HkgM05m3sS", "rkeR9572sr", "SkxpKw0nYB", "B1gcj6Fe5r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The main contribution of this paper is a principled probabilistic framework of unsupervised sequence to sequence transfer (text to text in particular). \n\nHowever, I believe there is a large disconnect between the probabilistic formulation written it section 3 and whats actually happening experimentally in sectio...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HJlA0C4tPS", "iclr_2020_HJlA0C4tPS", "SJl7ALxqYH", "SkxpKw0nYB", "B1gcj6Fe5r", "iclr_2020_HJlA0C4tPS", "iclr_2020_HJlA0C4tPS" ]
iclr_2020_SkxpxJBKwS
Emergent Tool Use From Multi-Agent Autocurricula
Through multi-agent competition, the simple objective of hide-and-seek, and standard reinforcement learning algorithms at scale, we find that agents create a self-supervised autocurriculum inducing multiple distinct rounds of emergent strategy, many of which require sophisticated tool use and coordination. We find clear evidence of six emergent phases in agent strategy in our environment, each of which creates a new pressure for the opposing team to adapt; for instance, agents learn to build multi-object shelters using moveable boxes which in turn leads to agents discovering that they can overcome obstacles using ramps. We further provide evidence that multi-agent competition may scale better with increasing environment complexity and leads to behavior that centers around far more human-relevant skills than other self-supervised reinforcement learning methods such as intrinsic motivation. Finally, we propose transfer and fine-tuning as a way to quantitatively evaluate targeted capabilities, and we compare hide-and-seek agents to both intrinsic motivation and random initialization baselines in a suite of domain-specific intelligence tests.
accept-spotlight
This paper describes how multi-agent reinforcement learning at scale leads to the evolution of complex behaviors. Actually, "at scale" may be an understatement - a lot of computing power was used here. But the amount of compute used is not the point, rather the point is that complex and fascinating behavior can emerge from a long co-evolutionary process (though gradient-based RL is used here, the principle is the same) where the arms race forms an implicit curriculum. This is the existence proof that people in artificial life and adaptive behavior have been looking for for so long. Two reviewers were positive about the paper, with a third being negative because the paper does not give any new insights about how to do RL at scale. But that was not the stated aim of the paper, as the authors clarify in a response. This paper will draw quite some attention and deserves an oral presentation.
train
[ "H1lbBl1SjH", "HylGoCREir", "BJxWUARNsr", "B1l0zA0NsH", "S1eB3pAVsS", "BJlKi8Lkor", "rkxcmygpFr", "BJlxTZh6Yr", "SkxqkHzRYH", "Skev0gOVYS", "ryeCRG40Or", "SJx87rUm_r" ]
[ "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "In response to reviews, we’ve made the following updates to the paper:\n\n1. Slightly modified language of contribution statement\n2. Changed figures 1 and 3 to plot the mean across 3 seeds and show the seeds\n3. Add sentence describing out of bounds condition and reward to Section 3\n4. Add more description of po...
[ -1, -1, -1, -1, -1, -1, 6, 8, 3, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, -1, -1, -1 ]
[ "iclr_2020_SkxpxJBKwS", "BJlxTZh6Yr", "rkxcmygpFr", "rkxcmygpFr", "SkxqkHzRYH", "SkxqkHzRYH", "iclr_2020_SkxpxJBKwS", "iclr_2020_SkxpxJBKwS", "iclr_2020_SkxpxJBKwS", "ryeCRG40Or", "SJx87rUm_r", "iclr_2020_SkxpxJBKwS" ]
iclr_2020_HJxyZkBKDr
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years. It could be time to take a step back and analyze the good and bad aspects of the field of NAS. A variety of algorithms search for architectures in different search spaces. These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization. This raises a comparability problem when comparing the performance of various NAS algorithms. NAS-Bench-101 has shown success in alleviating this problem. In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201, with a different search space, results on multiple datasets, and more diagnostic information. NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm. The design of our search space is inspired by the one used in the most popular cell-based search algorithms, where a cell is represented as a directed acyclic graph. Each edge here is associated with an operation selected from a predefined operation set. To make it applicable to all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 neural cell candidates in total. The training log using the same setup and the performance of each architecture candidate are provided for three datasets. This allows researchers to avoid unnecessary repetitive training of selected architectures and focus solely on the search algorithm itself. The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and makes NAS research more computationally affordable for a broader range of researchers. We provide additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
In further support of the proposed NAS-Bench-201, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, which verifies its applicability.
accept-spotlight
This paper presents a new benchmark for architecture search. Reviewers put this paper in the top tier. I encourage the authors to also cite https://openreview.net/forum?id=SJx9ngStPH in their final version.
train
[ "SkeS0qLcjr", "SJg7AweAtr", "SkgxgbqpKr", "Bklfj97VjB", "HJew4IW3iB", "Hyx3zwWhjS", "SJxnpyihoH", "S1lTBO3joB", "BkguOjcsjH", "SyePFIlijB", "S1xT6pIciS", "rkxaXqr9iS", "Bkxx6rUNsr", "S1etfqUEiH", "HkgKKgwLoS", "H1l32DUEoH", "SyemyjsPKr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "Thanks for your recognition. We just uploaded all codes and data to the anonymous links as follows:\n\n1. Codes at https://github.com/D-X-Y/NAS-Projects include\n- instruction on how to re-generate our dataset\n- usages of 10 re-implemented NAS algorithms\n- instruction on how to use our API\n\n2. The data for API...
[ -1, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "HkgKKgwLoS", "iclr_2020_HJxyZkBKDr", "iclr_2020_HJxyZkBKDr", "iclr_2020_HJxyZkBKDr", "iclr_2020_HJxyZkBKDr", "HJew4IW3iB", "Hyx3zwWhjS", "SkeS0qLcjr", "SyePFIlijB", "S1etfqUEiH", "iclr_2020_HJxyZkBKDr", "Bklfj97VjB", "SyemyjsPKr", "SkgxgbqpKr", "Bkxx6rUNsr", "SJg7AweAtr", "iclr_2020...
iclr_2020_HJlWWJSFDH
Strategies for Pre-training Graph Neural Networks
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naïve strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.
accept-spotlight
All three reviewers are consistently positive about this paper; thus, acceptance is recommended.
train
[ "rJgKcidXir", "BJgEvjdmsH", "SJlhViuQsH", "rkesRlWXoH", "BJeDySijKr", "SkxAD0ECtB", "BkxPCGTX9B", "HJeOL_2WtS", "BkgPVKOq_S", "HkgiRWKV_S" ]
[ "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "We thank the reviewer for acknowledging the novelty of our work and for noting that our experiments are thorough.\n\nThank you for pointing out a related preprint by Z. Hu et al. [arXiv:1905.13728]. We note the work by Z. Hu et al. was developed independently and concurrently to our work here, and we were not awar...
[ -1, -1, -1, -1, 6, 6, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1 ]
[ "BJeDySijKr", "SkxAD0ECtB", "BkxPCGTX9B", "iclr_2020_HJlWWJSFDH", "iclr_2020_HJlWWJSFDH", "iclr_2020_HJlWWJSFDH", "iclr_2020_HJlWWJSFDH", "BkgPVKOq_S", "HkgiRWKV_S", "iclr_2020_HJlWWJSFDH" ]
iclr_2020_rygf-kSYwH
Behaviour Suite for Reinforcement Learning
This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open source this http URL, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines, Dopamine as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.
accept-spotlight
This paper proposes a platform for benchmarking and evaluating reinforcement learning algorithms. While reviewers had some concerns about whether such a tool was necessary given existing tools, reviewers who interacted with the tool found it easy to use and useful. Making such tools is often an engineering task and rarely aligned with typical research value systems, despite potentially acting as a public good. The success or failure of similar tools relies on community acceptance, and it is my belief that this tool surpasses the bar to be promoted to the community at a top-tier venue.
train
[ "SJgEVpbAFr", "BJx_KB2tiH", "H1xBKQnFor", "HygW6_h_sr", "rJlY979dsS", "BkeF_Lt_jH", "HyxWDXpLsH", "rJxjmH6otS", "H1l-LcTGjS", "BJl7GyRZsr", "B1gnI0T-iH", "ryx14o6WoH", "r1xvfq6WiH", "rkxk2BR3YH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents the « Behavior Suite for Reinforcement Learning » (bsuite), which is a set of RL tasks (called « experiments ») meant to evaluate an algorithm’s ability to solve various key challenges in RL. Importantly, these experiments are designed to run fast enough that one can benchmark a new algorithm w...
[ 8, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_rygf-kSYwH", "iclr_2020_rygf-kSYwH", "HyxWDXpLsH", "rJlY979dsS", "BkeF_Lt_jH", "H1l-LcTGjS", "BJl7GyRZsr", "iclr_2020_rygf-kSYwH", "r1xvfq6WiH", "B1gnI0T-iH", "rJxjmH6otS", "rkxk2BR3YH", "SJgEVpbAFr", "iclr_2020_rygf-kSYwH" ]
iclr_2020_BygzbyHFvB
FreeLB: Enhanced Adversarial Training for Natural Language Understanding
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well.
accept-spotlight
The paper proposes a new algorithm for adversarial training of language models. This is an important research area and the paper is well presented, has great empirical results and a novel idea.
train
[ "Bkljp70sjB", "Bklg57RosH", "BJeQ87CjiS", "SyeaSzBZcS", "rkeZXiHs9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the generous acknowledgement of our work! We agree that it is a good idea to add “natural” into the title. However, we are only focusing on Natural Language Understanding tasks in this paper and have not tried deploying such a technology to pretraining language models for better feature representatio...
[ -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 3, 1 ]
[ "SyeaSzBZcS", "rkeZXiHs9r", "iclr_2020_BygzbyHFvB", "iclr_2020_BygzbyHFvB", "iclr_2020_BygzbyHFvB" ]
iclr_2020_Hklz71rYvS
Kernelized Wasserstein Natural Gradient
Many machine learning problems can be expressed as the optimization of some cost functional over a parametric family of probability distributions. It is often beneficial to solve such optimization problems using natural gradient methods. These methods are invariant to the parametrization of the family, and thus can yield more effective optimization. Unfortunately, computing the natural gradient is challenging as it requires inverting a high dimensional matrix at each iteration. We propose a general framework to approximate the natural gradient for the Wasserstein metric, by leveraging a dual formulation of the metric restricted to a Reproducing Kernel Hilbert Space. Our approach leads to an estimator for gradient direction that can trade-off accuracy and computational cost, with theoretical guarantees. We verify its accuracy on simple examples, and show the advantage of using such an estimator in classification tasks on \texttt{Cifar10} and \texttt{Cifar100} empirically.
accept-spotlight
This is a very interesting paper which extends natural gradient to output space metrics other than the Fisher-Rao metric (which is motivated by approximating KL divergence). It includes substantial mathematical and algorithmic insight. The method is shown to outperform various other optimizers on a neural net optimization problem that's artificially made ill-conditioned; while it's not clear how practically meaningful this setting is, it seems like a good way to study optimization. I think this paper will be of interest to a lot of researchers and could open up new research directions, so I recommend acceptance as an Oral.
train
[ "r1xIbmXaKr", "rJlR9JssoH", "HyxyYxijoS", "rye1IkoosB", "Bkli_CcsjH", "HylJnxCfoH", "HJgT19_foS", "BkxQyEJzsS", "r1eCca4-sH", "S1gMw5s2YH", "S1xohUaAFS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Thank you for your revision and for the rebuttal. This a strong submission with insightful angle on natural gradients and with provable guarantees. The authors improved a lot the manuscript and incorporated reviewers feedback. I am increasing my score to 8. \n \n###\nSummary of the paper: \n\nThe paper provides a...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_Hklz71rYvS", "r1xIbmXaKr", "S1gMw5s2YH", "S1xohUaAFS", "iclr_2020_Hklz71rYvS", "HJgT19_foS", "BkxQyEJzsS", "r1eCca4-sH", "iclr_2020_Hklz71rYvS", "iclr_2020_Hklz71rYvS", "iclr_2020_Hklz71rYvS" ]
iclr_2020_rJehVyrKwH
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using byte-aligned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5MB (20x compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26x factor.
accept-spotlight
This paper addresses compressing network weights by quantizing their values to a set of fixed codeword vectors. The paper is well written and overall easy to follow. The proposed algorithm is well-motivated and easy to apply. The method can be expected to perform well empirically, which the experiments verify, and to have potential impact. On the other hand, the novelty is not very high, though the paper applies existing techniques in a different setting.
train
[ "B1ezIjyQiH", "SJlzQjkXjB", "rJlnnKJQiH", "rkxdLt17jr", "H1gAi6GdtH", "ryegpfeaYS", "S1e82d0HqB", "rklaWryJoH", "S1ezZlil5B", "Bkg1AQulcH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank Reviewer 3 for raising important questions. We answer them below.\n\nUsing \\tilde x in the E- and M-steps. \nWe agree with Reviewer 3 that “the error arising from quantizing v into c is only affected by a subset of rows of \\tilde x”. However, we solve Equation (2) with this proxy algorithm for two reaso...
[ -1, -1, -1, -1, 6, 8, 6, 8, -1, -1 ]
[ -1, -1, -1, -1, 3, 3, 4, 1, -1, -1 ]
[ "H1gAi6GdtH", "ryegpfeaYS", "S1e82d0HqB", "rklaWryJoH", "iclr_2020_rJehVyrKwH", "iclr_2020_rJehVyrKwH", "iclr_2020_rJehVyrKwH", "iclr_2020_rJehVyrKwH", "Bkg1AQulcH", "iclr_2020_rJehVyrKwH" ]
iclr_2020_BJxSI1SKDH
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation
Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings.
accept-spotlight
This paper proposes a model for neural machine translation into morphologically rich languages by modeling word formation through a hierarchical latent variable model mimicking the process of morphological inflection. The model boils down to a VAE-like formulation with two latent representations: a continuous one (governed by a Gaussian), which captures lexical semantic aspects, and a discrete one (governed by the Kuma distribution), which captures the morphosyntactic function shared among different surface forms. Even though the empirical improvements in terms of BLEU scores are fairly small, I find this a very elegant model which may foster interesting future research directions on latent models for NMT. The reviewers had some concerns about experimental and model details that were properly addressed by the authors in their detailed response. In the discussion phase this alleviated the reviewers' concerns, which leads me to recommend acceptance. I urge the authors to follow the reviewers' recommendations to improve the final version of the paper.
train
[ "S1lPyVNjFS", "SJl2KhcNtB", "HkeTmSckqS", "rkgZFW2VYr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "\nThis paper proposes a method, Latent Morphology Model (LMM), for producing word representations for a (hierarchical) character-level decoder used in neural machine translation (NMT). The main motivation is to overcome vocabulary sparsity or highly inflectional languages such as Arabic, Czech, and Turkish (experi...
[ 6, 6, 8, -1 ]
[ 3, 3, 5, -1 ]
[ "iclr_2020_BJxSI1SKDH", "iclr_2020_BJxSI1SKDH", "iclr_2020_BJxSI1SKDH", "iclr_2020_BJxSI1SKDH" ]
iclr_2020_HyevIJStwH
Understanding Why Neural Networks Generalize Well Through GSNR of Parameters
As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have explored many aspects of why they generalize well. In this paper, we provide a novel perspective on these issues using the gradient signal to noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is simply defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that larger GSNR during the training process leads to better generalization performance. Further, we show that, different from that of shallow models (e.g. logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs' remarkable generalization ability.
accept-spotlight
Quoting a reviewer for a very nice summary: "In this work, the authors suggest a new point of view on generalization through the lens of the distribution of the per-sample gradients. The authors consider the variance and mean of the per-sample gradients for each parameter of the model and define for each parameter the Gradient Signal to Noise Ratio (GSNR). The GSNR of a parameter is the ratio between the squared mean of the gradient per parameter per sample (computed over the samples) and the variance of the gradient per parameter per sample (also computed over the samples). The GSNR is promising as a measure of generalization and the authors provide a nice leading-order derivation of the GSNR as a proxy for the generalization gap of the model." The majority of the reviewers vote to accept this paper. We can view the score of 3 as a weak signal, as that reviewer stated that he struggled to rate the paper because it contained a lot of math.
train
[ "HkxvaNRhFB", "rye6IOi42B", "HJgI26Q3iH", "S1xhXUI9jB", "SJxVhP5FoS", "B1ehzS5For", "B1lyj_jm5B" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, the authors suggest a new point of view on generalization through the lens of the distribution of the per-sample gradients. The authors consider the variance and mean of the per-sample gradients for each parameter of the model and define for each parameter the Gradient Signal to Noise ratio (GSNR). T...
[ 6, 6, -1, -1, -1, -1, 3 ]
[ 3, 1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HyevIJStwH", "iclr_2020_HyevIJStwH", "iclr_2020_HyevIJStwH", "iclr_2020_HyevIJStwH", "HkxvaNRhFB", "B1lyj_jm5B", "iclr_2020_HyevIJStwH" ]
iclr_2020_S1xCPJHtDB
Model Based Reinforcement Learning for Atari
Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction -- substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models, and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games in the low-data regime of 100k interactions between the agent and the environment, which corresponds to two hours of real-time play. In most games SimPLe outperforms state-of-the-art model-free algorithms, in some games by over an order of magnitude.
accept-spotlight
This paper presents a model-based RL approach to Atari games based on video prediction. The architecture performs remarkably well with a limited number of interactions. This is a very significant result on a question that engages many in the research community. The reviewers all agree that the paper is good and should be published. There is some disagreement about its novelty; however, as one reviewer states, the significance of the results is more important than the novelty. Many conference attendees would like to hear about it. Based on this, I think the paper can be accepted for oral presentation.
val
[ "Byg8y2bUKr", "HJeGC9wFsH", "HJey_9DFjH", "H1lipKPYir", "SJg3esx3tH", "H1lwnmopFH", "ByxgJ6Jj5H", "BJxJlLRpFr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Summary\n\nThis paper proposes a model-based reinforcement learning algorithm suitable for high-dimensional visual environments like Atari. The algorithmic loop is conceptually simple and comprises 1) collecting real data with the current policy 2) updating an environment model with old and newly acquired data and...
[ 6, -1, -1, -1, 8, 6, -1, -1 ]
[ 5, -1, -1, -1, 4, 4, -1, -1 ]
[ "iclr_2020_S1xCPJHtDB", "Byg8y2bUKr", "SJg3esx3tH", "H1lwnmopFH", "iclr_2020_S1xCPJHtDB", "iclr_2020_S1xCPJHtDB", "BJxJlLRpFr", "iclr_2020_S1xCPJHtDB" ]
iclr_2020_rkgbYyHtwB
Disagreement-Regularized Imitation Learning
We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.
accept-spotlight
This paper presents an approach for interactive imitation learning while avoiding an adversarial optimization by using ensembles. The reviewers agreed that the contributions were significant and the results were compelling. Hence, the paper should be accepted.
train
[ "Syx2DuzwFr", "ryxdF6c2sS", "rkeRVQ52ir", "rygcE4chiS", "S1lhsQchsB", "S1gya8oTKB", "SJgLU6tycB", "HJg9IX6v_B", "S1lv4r5qvS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "* Summary:\nThe paper aims to address the covariate shift issue of behavior cloning (BC). The main idea of the paper is to learn a policy by minimizing a BC loss and an uncertainty loss. This uncertainty loss is defined as a variance of a policy posterior given by demonstration. To approximate this posterior, the ...
[ 8, -1, -1, -1, -1, 8, 6, -1, -1 ]
[ 4, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_rkgbYyHtwB", "iclr_2020_rkgbYyHtwB", "SJgLU6tycB", "Syx2DuzwFr", "S1gya8oTKB", "iclr_2020_rkgbYyHtwB", "iclr_2020_rkgbYyHtwB", "S1lv4r5qvS", "iclr_2020_rkgbYyHtwB" ]
iclr_2020_H1enKkrFDB
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
Exciting new work on generalization bounds for neural networks (NNs) by Bartlett et al. (2017) and Neyshabur et al. (2018) depends closely on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of rank). Even though these bounds typically have minimal practical utility, they raise the question of whether controlling such quantities together could improve the generalization behaviour of NNs in practice. To this end, we propose stable rank normalization (SRN), a novel, provably optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly, we find that SRN, despite being non-convex, can be shown to have a unique optimal solution. We provide extensive analyses across a wide variety of NNs (DenseNet, WideResNet, ResNet, AlexNet, VGG), where applying SRN to their linear layers leads to improved classification accuracy, while simultaneously showing improvements in generalization, evaluated empirically using (a) shattering experiments (Zhang et al., 2016); and (b) three measures of sample complexity by Bartlett et al. (2017), Neyshabur et al. (2018), and Wei & Ma. Additionally, we show that, when applied to the discriminator of GANs, it improves Inception, FID, and Neural divergence scores, while learning mappings with low empirical Lipschitz constant.
accept-spotlight
The authors propose stable rank normalization, which minimizes the stable rank of a linear operator and apply this to neural network training. The authors present techniques for performing the normalization efficiently and evaluate it empirically in a range of situations. The only issues raised by reviewers related to the empirical evaluation. The authors addressed these in their revisions.
train
[ "BkepTNsk5H", "SJgMTtk3oS", "S1xLQnknoS", "H1xE9zwssr", "BkeTRc8Kir", "rklejkzciB", "BJlmSVqKsB", "ByeLvkDtor", "Bygq5FLFsS", "HJlyvpIYjH", "rJl2zKrmoH", "rJgwtQhPFH", "rylcn0iiYB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper proposes normalizing the stable rank (ratio of the Frobenius norm to the spectral norm) of weight matrices in neural networks. They propose an algorithm that provably finds the optimal solution efficiently and perform experiments to show the effectiveness of this normalization technique.\n\nStable rank ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_H1enKkrFDB", "iclr_2020_H1enKkrFDB", "H1xE9zwssr", "ByeLvkDtor", "rJgwtQhPFH", "iclr_2020_H1enKkrFDB", "BkeTRc8Kir", "HJlyvpIYjH", "rylcn0iiYB", "BkepTNsk5H", "iclr_2020_H1enKkrFDB", "iclr_2020_H1enKkrFDB", "iclr_2020_H1enKkrFDB" ]
iclr_2020_SJlpYJBKvH
Measuring the Reliability of Reinforcement Learning Algorithms
Lack of reliability is a well-known issue for reinforcement learning (RL) algorithms. This problem has gained increasing attention in recent years, and efforts to improve it have grown substantially. To aid RL researchers and production users with the evaluation and improvement of reliability, we propose a set of metrics that quantitatively measure different aspects of reliability. In this work, we focus on variability and risk, both during training and after learning (on a fixed policy). We designed these metrics to be general-purpose, and we also designed complementary statistical tests to enable rigorous comparisons on these metrics. In this paper, we first describe the desired properties of the metrics and their design, the aspects of reliability that they measure, and their applicability to different scenarios. We then describe the statistical tests and make additional practical recommendations for reporting results. The metrics and accompanying statistical tools have been made available as an open-source library. We apply our metrics to a set of common RL algorithms and environments, compare them, and analyze the results.
accept-spotlight
Main content: This paper provides a unified way to compute robust statistics for evaluating the reliability of RL algorithms, especially deep RL algorithms. Though the metrics are not particularly novel, the investigation should be useful to the broader community as it compares seven specific evaluation metrics, including 'Dispersion across Time (DT): IQR across Time', 'Short-term Risk across Time (SRT): CVaR on Differences', 'Long-term Risk across Time (LRT): CVaR on Drawdown', 'Dispersion across Runs (DR): IQR across Runs', 'Risk across Runs (RR): CVaR across Runs', 'Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts' and 'Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts'. The paper further proposed rankings and confidence intervals based on bootstrapped samples, and compared continuous-control and discrete-action algorithms on Atari and OpenAI Gym. -- Discussion: The reviews clearly agree on accepting the paper, with a weak accept coming from a reviewer who does not know much about this subarea. Comments are mostly directed at clarifications and completeness of description, which the authors have addressed. -- Recommendation and justification: This paper should be accepted due to its useful contributions toward doing a better job of measuring the performance of RL algorithms.
train
[ "SkxBZirTFS", "BJxox6QRYr", "rkxiwsoYsS", "SJxbtZsYiH", "HkelP-jYjB", "HkeVVWjtoB", "H1eChesYjr", "HkloOhgEKr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\n\nAuthors proposed a variety of metrics to measure the reliability of an RL algorithm. Mainly looking at Dispersion and Risk across time and runs while learning, and also in the evaluation phase. \nAuthors have further proposed ranking and also confidence intervals based on bootstrapped samples. They al...
[ 8, 8, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_SJlpYJBKvH", "iclr_2020_SJlpYJBKvH", "H1eChesYjr", "HkloOhgEKr", "HkeVVWjtoB", "SkxBZirTFS", "BJxox6QRYr", "iclr_2020_SJlpYJBKvH" ]
iclr_2020_Hke0K1HKwr
Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue
Knowledge-grounded dialogue is the task of generating an informative response based on both discourse context and external knowledge. Focusing on better modeling of knowledge selection in multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this problem. The model, named sequential knowledge transformer (SKT), can keep track of the prior and posterior distributions over knowledge; as a result, it can not only reduce the ambiguity caused by the diversity in knowledge selection of conversation but also better leverage the response information for a proper choice of knowledge. Our experimental results show that the proposed model improves knowledge selection accuracy and subsequently the performance of utterance generation. We achieve new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019), one of the largest and most challenging benchmarks. We further validate the effectiveness of our model over existing conversation methods on Holl-E (Moghe et al., 2018), another knowledge-grounded dialogue dataset.
accept-spotlight
This paper proposes a sequential latent variable model for the knowledge selection task in knowledge-grounded dialogues. Experimental results demonstrate improvements over the previous SOTA on WoW, a knowledge-grounded dialogue dataset, through both automated and human evaluation. All reviewers scored the paper highly, but they also made several suggestions for improving the presentation. The authors responded positively to all these suggestions and provided updated results and other statistics. The paper will be a good contribution to ICLR.
train
[ "ryxezvS0tB", "rJxe9OPnsr", "rJediVDhoS", "rkxOuxPhor", "B1ehFnYiYr", "Bkxtl24c5r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Post author response edit: The authors did a good job of addressing many of the concerns of reviewers. I believe with these new results (esp to reviewer 4), they will have a stronger version for the camera ready. I'm bumping up my recommendation for this reason.\n\n\n\nThe authors propose a novel architecture for ...
[ 8, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Hke0K1HKwr", "B1ehFnYiYr", "ryxezvS0tB", "Bkxtl24c5r", "iclr_2020_Hke0K1HKwr", "iclr_2020_Hke0K1HKwr" ]
iclr_2020_SklD9yrFPS
Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Neural Tangents is a library for working with infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite-width as usual or in their infinite-width limit. Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel. Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space. The entire library runs out-of-the-box on CPU, GPU, or TPU. All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. In addition to the repository below, we provide an accompanying interactive Colab notebook at https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb
accept-spotlight
This paper presents a software library for dealing with neural networks either in the (usual) finite limit or in the infinite limit. The latter is obtained by using the Neural Tangent Kernel theory. There is variance in the reviewers' scores; however, there has also been quite a lot of discussion, which has been facilitated by the authors' elaborate rebuttal. The main points in favor and against are clear: on the positive side, the library is demonstrated well (especially after rebuttal) and is equipped with desirable properties such as usage of GPU/TPU, scalability, etc. On the other hand, a lot of the key insights build heavily on the prior work of Lee et al., 2019. However, judging novelty when it comes to a software paper is trickier, especially given that not many such papers appear at ICLR and therefore calibration is difficult. This has been discussed among reviewers. It would help if some further theoretical insights were included in this paper; these insights could come by working backwards from the implementation (i.e. what more can we learn about infinite-width networks now that we can experiment easily with them?). Overall, this paper should still be of interest to the ICLR community.
train
[ "H1emYsn6Yr", "ryxYhr0aKr", "BylarJj2iH", "rygwzyihjB", "HyxHSX5hjr", "HJe9z9V3iH", "HyxjHFEnsr", "ryl8AD42oB", "BkgVyRg2jB", "S1eqydg2iB", "Bke8siZhiB", "S1g-lnZhir", "B1gt-wb3or", "HJxkA_b3sS", "BJgNczUaFB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "POST-REBUTTAL COMMENTS\n\nI appreciate the response from the authors. \n\nI particularly like the comparison table in the response to the other reviewer and ought to be highlighted in the paper.\n\nIf I were to start this line of research, I would be inclined to expand on the codebase. The contribution is signific...
[ 8, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SklD9yrFPS", "iclr_2020_SklD9yrFPS", "ryl8AD42oB", "ryxYhr0aKr", "HyxjHFEnsr", "HJxkA_b3sS", "Bke8siZhiB", "S1g-lnZhir", "BJgNczUaFB", "H1emYsn6Yr", "ryxYhr0aKr", "ryxYhr0aKr", "ryxYhr0aKr", "ryxYhr0aKr", "iclr_2020_SklD9yrFPS" ]
iclr_2020_Hyx-jyBFPr
Self-labelling via simultaneous clustering and representation learning
Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state of the art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
accept-spotlight
The paper focuses on supervised and self-supervised learning. Its originality is to formulate the self-supervised criterion in terms of optimal transport, where the trained representation is required to induce $K$ equidistributed clusters. The formulation is well founded; in practice, the approach proceeds by alternately optimizing the cross-entropy loss (SGD) and the pseudo-loss, through a fast version of the Sinkhorn-Knopp algorithm, and scales up to millions of samples and thousands of classes. Some concerns about the robustness w.r.t. imbalanced classes, the ability to deliver SOTA supervised performance, and the computational complexity were answered by the rebuttal and handled through new experiments. Convergence toward a local minimum is shown; however, increasing the number of pseudo-label optimization rounds might degrade the results. Overall, I recommend to accept the paper as an oral presentation. A fancier title would do better justice to this very nice paper ("Self-labelling learning via optimal transport"?).
train
[ "H1g4TD85Fr", "Hyxdbiu3iH", "BJxkFWAesH", "rJeZ_eAeir", "rJgW41CxsH", "ByeCQC6xoH", "B1gMQL-2tB", "SylFxBcV5S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary & Pros\n- This paper proposes a representation learning method based on clustering. The proposed method performs clustering and representation learning alternatively and simultaneously. This approach requires only a few domain-specific prior (precisely, CNN prior) while self-supervised learning requires mo...
[ 8, -1, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Hyx-jyBFPr", "iclr_2020_Hyx-jyBFPr", "H1g4TD85Fr", "B1gMQL-2tB", "SylFxBcV5S", "iclr_2020_Hyx-jyBFPr", "iclr_2020_Hyx-jyBFPr", "iclr_2020_Hyx-jyBFPr" ]
iclr_2020_S1e4jkSKvB
The intriguing role of module criticality in the generalization of deep networks
We study the phenomenon that some modules of deep neural networks (DNNs) are more critical than others, meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape, which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connect the initial and final values of the module parameters. We formulate how generalization relates to module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so.
accept-spotlight
The paper analyses the importance of different DNN modules for generalization performance, explaining why certain architectures may be much better performing than others. All reviewers agree that this is an interesting paper with a novel and important contribution.
train
[ "SJlrtEmAKS", "ByxzqndmsB", "ByxYbrVioB", "Skl10NNsor", "B1gk2JoDoH", "ByeWYJswsH", "BJxOkkjvsH", "SJeMgKD5tS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces a new way to reason about neural network generalization using a module criticality measure. The measure is tangible and intuitive. It leads to some formal bounds on the generalization of deep networks, and is able to better rank trained image classification architectures than previous measure...
[ 8, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_S1e4jkSKvB", "iclr_2020_S1e4jkSKvB", "B1gk2JoDoH", "ByeWYJswsH", "SJeMgKD5tS", "SJlrtEmAKS", "ByxzqndmsB", "iclr_2020_S1e4jkSKvB" ]
iclr_2020_rkl8sJBYvH
Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under l2 loss by gradient descent with infinitesimally small learning rate, and (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying the performance of infinitely wide nets on datasets like CIFAR-10. However, the super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting neural tangent kernels perform strongly on low-data tasks. 1. On a standard testbed of classification/regression tasks from the UCI database, NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets. 2. On CIFAR-10 with 10 – 640 training samples, Convolutional NTK consistently beats ResNet-34 by 1% - 3%. 3. On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance. 4. Comparing the performance of NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis (Arora et al., 2019a). NTK's efficacy may trace to lower variance of output.
accept-spotlight
This paper carries out extensive experiments on Neural Tangent Kernels (NTKs), kernel methods based on infinitely wide neural nets, on small-data tasks. I recommend acceptance.
train
[ "SJx5zKqaKS", "Sklj47KpFS", "HJeJkFCisr", "B1xFj_Rijr", "rJgJBORiiH", "SJg-qDAjjS", "r1eG5udtKS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper conducts very interesting and meaningful study of kernels induced by infinitely wide neural networks on small data tasks. They show that on a variety of tasks performance of these kernels are superior to both finite neural networks and Random Forest methods.\n\nWhile neural tangent kernel (NTK) [1] is m...
[ 8, 6, -1, -1, -1, -1, 8 ]
[ 5, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rkl8sJBYvH", "iclr_2020_rkl8sJBYvH", "iclr_2020_rkl8sJBYvH", "r1eG5udtKS", "Sklj47KpFS", "SJx5zKqaKS", "iclr_2020_rkl8sJBYvH" ]
iclr_2020_BkevoJSYPB
Differentiation of Blackbox Combinatorial Solvers
Achieving fusion of deep learning with combinatorial algorithms promises transformative changes to artificial intelligence. One possible approach is to introduce combinatorial building blocks into neural networks. Such end-to-end architectures have the potential to tackle combinatorial problems on raw input data such as ensuring global consistency in multi-object tracking or route planning on maps in robotics. In this work, we present a method that implements an efficient backward pass through blackbox implementations of combinatorial solvers with linear objective functions. We provide both theoretical and experimental backing. In particular, we incorporate the Gurobi MIP solver, Blossom V algorithm, and Dijkstra's algorithm into architectures that extract suitable features from raw inputs for the traveling salesman problem, the min-cost perfect matching problem and the shortest path problem.
accept-spotlight
This paper proposes a method for efficiently training neural networks combined with blackbox implementations of exact combinatorial solvers. Reviewers and AC agree that it is a well written paper with a novel idea supported by good experimental results. Experimental results are of small scale and can be further improved, but the authors acknowledged this aspect well. Hence, I recommend acceptance.
train
[ "H1l7e7n2jB", "SyepKWTAKB", "ByxYxfZaKr", "B1gviFQvoB", "H1l8znxvjS", "r1ebFcgwoB", "HkgC2txwoH", "B1lpzueDoS", "Hke2mmXCYB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Thanks for the additional details and clarifications! It is surprising the constant \\lambda baseline is quite strong. I have gone through the other reviews and discussions and this thread and agree with R3 that this is a clear accept and have also updated my score to an 8.", "This paper shows how end-to-end lea...
[ -1, 8, 8, -1, -1, -1, -1, -1, 8 ]
[ -1, 5, 4, -1, -1, -1, -1, -1, 1 ]
[ "HkgC2txwoH", "iclr_2020_BkevoJSYPB", "iclr_2020_BkevoJSYPB", "iclr_2020_BkevoJSYPB", "ByxYxfZaKr", "Hke2mmXCYB", "SyepKWTAKB", "iclr_2020_BkevoJSYPB", "iclr_2020_BkevoJSYPB" ]
iclr_2020_rJgsskrFwH
Scaling Autoregressive Video Models
Due to the statistical complexity of video, the high degree of inherent stochasticity, and the sheer amount of data, generating natural video remains a challenging task. State-of-the-art video generation models attempt to address these issues by combining sometimes complex, often video-specific neural network architectures, latent variable models, adversarial training and a range of other methods. Despite their often high complexity, these approaches still fall short of generating high quality video continuations outside of narrow domains and often struggle with fidelity. In contrast, we show that conceptually simple, autoregressive video generation models based on a three-dimensional self-attention mechanism achieve highly competitive results across multiple metrics on popular benchmark datasets for which they produce continuations of high fidelity and realism. Furthermore, we find that our models are capable of producing diverse and surprisingly realistic continuations on a subset of videos from Kinetics, a large scale action recognition dataset comprised of YouTube videos exhibiting phenomena such as camera movement, complex object interactions and diverse human movement. To our knowledge, this is the first promising application of video-generation models to videos of this complexity.
accept-spotlight
This paper presents an approach for scalable autoregressive video generation based on a three-dimensional self-attention mechanism. As rightly pointed out by R3, the proposed approach ’is individually close to ideas proposed elsewhere before in other forms ... but this paper does the important engineering work of selecting and combining these ideas in this specific video synthesis problem setting.’ The proposed method is relevant and well-motivated, and the experimental results are strong. All reviewers agree that the experiments on the Kinetics dataset are particularly appealing. In the initial evaluation, the reviewers raised several concerns, such as performance metrics, the ablation study, training time comparison, and empirical evaluation of the baseline methods on Kinetics, which were addressed by the authors in the rebuttal. In conclusion, all three reviewers were convinced by the authors’ rebuttal, and the AC recommends acceptance of this paper – congratulations to the authors!
train
[ "Hyg-BQCqsB", "HyeO0bRqjB", "rkl5dCEDsB", "SyxPJQzwsr", "HyeTaL-PjS", "BygwQS-DiS", "r1gMc17TtH", "B1g90yoRtH", "ryeoIONqKB" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We updated our related work section slightly so that it becomes clearer that many of the mentioned works (after discussing VAE based approaches) are actually completely different directions and not additions upon VAEs.\n\nWe added a reference for Figure 2 in section 4.2, paragraph \"Qualitative Observations\".", ...
[ -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "ryeoIONqKB", "B1g90yoRtH", "r1gMc17TtH", "HyeTaL-PjS", "ryeoIONqKB", "B1g90yoRtH", "iclr_2020_rJgsskrFwH", "iclr_2020_rJgsskrFwH", "iclr_2020_rJgsskrFwH" ]
iclr_2020_rJe2syrtvS
The Ingredients of Real World Robotic Reinforcement Learning
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.
accept-spotlight
This is a very interesting paper which discusses practical issues and solutions around deploying RL on real physical robotic systems, specifically involving questions on the use of raw sensory data, crafting reward functions, and not having resets at the end of episodes. Many of the issues raised in the reviews and discussion were concerned with experimental details and settings, as well as relation to different areas of related work. These were all sufficiently handled in the rebuttal, and all reviewers were in favour of acceptance.
train
[ "r1l7Uvpn_r", "r1xnxPghjS", "HylKcAZijH", "Skgx4yZqoH", "S1x6e1-5sB", "SJgaARl5oH", "Hyl29EPXtr", "r1gseP0nKH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper takes seriously the question of having a robotic system learning continuously without manual reset nor state or reward engineering. The authors propose a first approach using vison-based SAC, shown visual goals and VICE, and show that it does not provide a satisfactory solution. Then they add a random pe...
[ 8, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJe2syrtvS", "HylKcAZijH", "SJgaARl5oH", "r1l7Uvpn_r", "Hyl29EPXtr", "r1gseP0nKH", "iclr_2020_rJe2syrtvS", "iclr_2020_rJe2syrtvS" ]
iclr_2020_ryeYpJSKwr
Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization
Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the field of global black-box optimization. Readily available algorithms are typically designed to be universal optimizers and, therefore, often suboptimal for specific tasks. We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. We present experiments on a simulation-to-real transfer task as well as on several synthetic functions and on two hyperparameter search problems. The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no particular structure is present.
accept-spotlight
This paper explores the idea of using meta-learning for acquisition functions. It is an interesting and novel research direction with promising results. The paper could be strengthened by adding more insights about the new acquisition function and performing more comparisons e.g. to Chen et al. 2017. But in any case, the current form of the paper should already be of high interest to the community
train
[ "S1xux0YRKr", "SkgpLghVYr", "BJl7k5fPtH", "B1xy9hr3sB", "HkgRB7vsoS", "r1l_8VDiiH", "H1xSRBvior", "S1xV6bwjsS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "\nThis paper proposes a framework for meta learning neural acquisition functions for the Bayesian optimization of various underivable functions. The neural acquisition functions are learned using proximal policy optimization in an outer loop on different problems on the same domain, and the learned acquisition fun...
[ 8, 8, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2020_ryeYpJSKwr", "iclr_2020_ryeYpJSKwr", "iclr_2020_ryeYpJSKwr", "iclr_2020_ryeYpJSKwr", "BJl7k5fPtH", "BJl7k5fPtH", "SkgpLghVYr", "S1xux0YRKr" ]
iclr_2020_BJliakStvH
Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning
While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.
accept-spotlight
The paper introduces a novel way of doing IRL based on learning constraints. The topic of IRL is an important one in RL and the approach introduced is interesting and forms a fundamental contribution that could lead to relevant follow-up work.
train
[ "H1e1ksZioB", "HJxa7qWisr", "HygPYt-isH", "B1x8n_bior", "rJlIrVJk9B", "S1xYIh4v9r", "SkxBRWYCFS", "HkeSgyVa9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and for pointing out that this is an interesting problem to address! We address your question about applications below, and we’ve added a bit of these ideas to the introduction of the updated version as well.\n\nTo continue with the car example introduced in the text, it could be possible...
[ -1, -1, -1, -1, 3, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 3, 1, 4 ]
[ "SkxBRWYCFS", "rJlIrVJk9B", "S1xYIh4v9r", "HkeSgyVa9r", "iclr_2020_BJliakStvH", "iclr_2020_BJliakStvH", "iclr_2020_BJliakStvH", "iclr_2020_BJliakStvH" ]
iclr_2020_H1l_0JBYwS
Spectral Embedding of Regularized Block Models
Spectral embedding is a popular technique for the representation of graph data. Several regularization techniques have been proposed to improve the quality of the embedding with respect to downstream tasks like clustering. In this paper, we explain on a simple block model the impact of the complete graph regularization, whereby a constant is added to all entries of the adjacency matrix. Specifically, we show that the regularization forces the spectral embedding to focus on the largest blocks, making the representation less sensitive to noise or outliers. We illustrate these results on both synthetic and real data, showing how regularization improves standard clustering scores.
accept-spotlight
The paper proposes a nice and easy way to regularize spectral graph embeddings, and explains the effect through a nice set of experiments. Therefore, I recommend acceptance.
train
[ "BJgNvYBgiH", "rJlOcA2CKS", "Bke7Sj6f9r" ]
[ "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments and suggestions.\n\n* We have detailed the derivation of Eq (7) (see the revised version).\n* Selecting good values for alpha is an interesting question, that is indeed not addressed in our paper. We simply recommend to use the relative value of alpha with respect to the total weight of th...
[ -1, 6, 8 ]
[ -1, 3, 3 ]
[ "rJlOcA2CKS", "iclr_2020_H1l_0JBYwS", "iclr_2020_H1l_0JBYwS" ]
iclr_2020_BkxRRkSKwr
Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models
The impressive performance of neural networks on natural language processing tasks is attributed to their ability to model complicated word and phrase compositions. To explain how the model handles semantic compositions, we study hierarchical explanation of neural network predictions. We identify non-additivity and context-independent importance attributions within hierarchies as two desirable properties for highlighting word and phrase compositions. We show some prior efforts on hierarchical explanations, e.g. contextual decomposition, do not satisfy the desired properties mathematically, leading to inconsistent explanation quality in different models. In this paper, we start by proposing a formal and general way to quantify the importance of each word and phrase. Following the formulation, we propose the Sampling and Contextual Decomposition (SCD) algorithm and the Sampling and Occlusion (SOC) algorithm. Human and metric evaluations on both LSTM models and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms. Our algorithms help to visualize semantic composition captured by models, extract classification rules, and improve human trust of models.
accept-spotlight
The authors present a hierarchical explanation model for understanding the underlying representations produced by LSTMs and Transformers. Using human evaluation, they find that their explanations are better, which could lead to better trust of these opaque models. The reviewers raised some issues with the derivations, but the author response addressed most of these.
train
[ "r1gOVjXf5H", "r1xK3FH2sS", "SyeYa8TisS", "SJeysJXPor", "ryx68k7wsr", "S1lcdXmDor", "ByljpJ7wiS", "S1x7E1QwoH", "HJgBoPd0Kr", "BJey9CJQqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe authors proposed a method for generating hierarchical importance attribution for any neural sequence models (LSTM, BERT, etc.) Towards this goal, the authors propose two desired properties: 1) non-additivity, which means the importance of a phrase should be a non-linear function over the importance o...
[ 6, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BkxRRkSKwr", "S1x7E1QwoH", "SJeysJXPor", "HJgBoPd0Kr", "r1gOVjXf5H", "iclr_2020_BkxRRkSKwr", "BJey9CJQqB", "r1gOVjXf5H", "iclr_2020_BkxRRkSKwr", "iclr_2020_BkxRRkSKwr" ]
iclr_2020_HkxARkrFwB
word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we use approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
accept-spotlight
This paper proposes quantum-inspired methods for increasing the parametric efficiency of word embeddings. While a little heavy in terms of quantum jargon, and perhaps a little ignorant of loosely related work in this sub-field (e.g. see the work of Coecke and colleagues from 2008 onwards), the majority of reviewers were broadly convinced the work and results were of sufficient merit to be published.
train
[ "Hyl12frTtr", "rkg2avmniH", "B1lpmFXhoH", "B1xfxOX2oH", "r1lY-EZJqH", "HJlANh2NqH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents two methods to learn word embedding matrices that can be stored in much less space compared to traditional d x p embedding matrices, where d is the vocabulary size and p is the embedding size. Two methods are proposed: the first method estimates a p-dimensional embedding for a word as a sum of r...
[ 8, -1, -1, -1, 8, 3 ]
[ 4, -1, -1, -1, 1, 1 ]
[ "iclr_2020_HkxARkrFwB", "HJlANh2NqH", "Hyl12frTtr", "r1lY-EZJqH", "iclr_2020_HkxARkrFwB", "iclr_2020_HkxARkrFwB" ]
iclr_2020_rJxbJeHFPS
What Can Neural Networks Reason About?
Neural networks have succeeded in many reasoning tasks. Empirically, these tasks require specialized network structures, e.g., Graph Neural Networks (GNNs) perform well on many such tasks, but less structured networks fail. Theoretically, there is limited understanding of why and when a network structure generalizes better than others, although they have equal expressive power. In this paper, we develop a framework to characterize which reasoning tasks a network can learn well, by studying how well its computation structure aligns with the algorithmic structure of the relevant reasoning process. We formally define this algorithmic alignment and derive a sample complexity bound that decreases with better alignment. This framework offers an explanation for the empirical success of popular reasoning models, and suggests their limitations. As an example, we unify seemingly different reasoning tasks, such as intuitive physics, visual question answering, and shortest paths, via the lens of a powerful algorithmic paradigm, dynamic programming (DP). We show that GNNs align with DP and thus are expected to solve these tasks. On several reasoning tasks, our theory is supported by empirical results.
accept-spotlight
This paper proposes a framework which characterizes how well given neural architectures can perform on reasoning tasks. From this, they show a number of interesting empirical results, including the ability of graph neural network architectures to learn dynamic programming. This substantial theoretical and empirical study impressed the reviewers, who strongly lean towards acceptance. My view is that this is exactly the sort of work we should be showcasing at the conference, both in terms of focus and of quality. I am happy to recommend this for acceptance.
test
[ "B1l03Tksjr", "HJe7pdvCYr", "SygwyYiqoS", "BJgzlu59iB", "rkgdaugtjS", "rJePttBWor", "HJeoqFHZsS", "rJgE4iCNjH", "rklnp2qziS", "SygoUFBWor", "ByehQFSWsH", "ryg30_SWiB", "HJe-F_CljB", "Hylr6tgjKB", "rJeOFDKjFr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Thank you for your updates. I am satisfied with the quality of this work and I recommend its acceptance.", "This paper presents a framework, dubbed algorithmic alignment, based on PAC learning and sample complexity, with the aim to explain generalization on reasoning tasks for different neural architectures. The...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "rkgdaugtjS", "iclr_2020_rJxbJeHFPS", "SygoUFBWor", "HJeoqFHZsS", "iclr_2020_rJxbJeHFPS", "rJeOFDKjFr", "Hylr6tgjKB", "rklnp2qziS", "ryg30_SWiB", "HJe7pdvCYr", "iclr_2020_rJxbJeHFPS", "HJe-F_CljB", "iclr_2020_rJxbJeHFPS", "iclr_2020_rJxbJeHFPS", "iclr_2020_rJxbJeHFPS" ]
iclr_2020_B1gdkxHFDH
Training individually fair ML models with sensitive subspace robustness
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
accept-spotlight
The paper addresses the individual fairness scenario (treating similar users similarly) and proposes a new definition of algorithmic fairness that is based on the idea of robustness, i.e. by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased. All reviewers and the AC agree that this work is clearly of interest to ICLR; however, the reviewers have noted the following potential weaknesses: (1) presentation clarity -- see R3’s detailed suggestions, e.g. the comparison to Dwork et al., and R2’s comments on how to improve; (2) empirical evaluations -- see R1’s question about using more complex models and R3’s question on the usefulness of the word embeddings. Pleased to report that, based on the authors’ response with extra experiments and explanations, R3 has raised the score to weak accept. All reviewers and the AC agree that the most crucial concerns have been addressed in the rebuttal, and the paper can be accepted -- congratulations to the authors! The authors are strongly urged to improve presentation clarity and to include the supporting empirical evidence when preparing the final revision.
train
[ "SJlzds52FS", "B1xTGh_ziB", "ryxUvjOMoH", "HJx13oOfjS", "HyeYKndMoS", "r1xmBTKkqH", "Syx0-bXNcB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new definition of algorithmic fairness that is based on the idea of individual fairness. They then present an algorithm that will provably find an ML model that satisfies the fairness constraint (if such a model exists in the search space). One needed ingredient for the fairness constraint is...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_B1gdkxHFDH", "SJlzds52FS", "Syx0-bXNcB", "r1xmBTKkqH", "iclr_2020_B1gdkxHFDH", "iclr_2020_B1gdkxHFDH", "iclr_2020_B1gdkxHFDH" ]
iclr_2020_SkeuexBtDr
Learning from Rules Generalizing Labeled Exemplars
In many applications, labeled data is not readily available and needs to be collected via painstaking human supervision. We propose a rule-exemplar method for collecting human supervision to combine the efficiency of rules with the quality of instance labels. The supervision is coupled such that it is both natural for humans and synergistic for learning. We propose a training algorithm that jointly denoises rules via latent coverage variables, and trains the model through a soft implication loss over the coverage and label variables. The denoised rules and trained model are used jointly for inference. Empirical evaluation on five different tasks shows that (1) our algorithm is more accurate than several existing methods of learning from a mix of clean and noisy supervision, and (2) the coupled rule-exemplar supervision is effective in denoising rules.
accept-spotlight
The paper addresses the problem of costly human supervision for training supervised learning methods. The authors propose a joint approach for more effectively collecting supervision data from humans, by extracting rules and their exemplars, and a model for training on this data. They demonstrate the effectiveness of their approach on multiple datasets by comparing to a range of baselines. Based on the reviews and my own reading I recommend to accept this paper. The approach makes intuitively a lot of sense and is well explained. The experimental results are convincing.
train
[ "Bkxefw05KS", "BkxHJx_3ir", "BygrBzoKsB", "SygQVgiYoB", "HklrTqcYir", "ByxNcIqtsB", "Byg7vz-pFr", "SyebtskccS", "SkleMEdwtB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "In case of a lack of labeled data, human-designed rules can be used to label the unlabelled data. This paper proposes a better rule-based labeling method by restricting the coverage of the rule, which is based on the assumption that the rules can be applied to a local region but can not be 'over-generalized' to th...
[ 6, -1, -1, -1, -1, -1, 8, 6, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, 3, -1 ]
[ "iclr_2020_SkeuexBtDr", "HklrTqcYir", "iclr_2020_SkeuexBtDr", "SyebtskccS", "Byg7vz-pFr", "Bkxefw05KS", "iclr_2020_SkeuexBtDr", "iclr_2020_SkeuexBtDr", "iclr_2020_SkeuexBtDr" ]
iclr_2020_B1eWbxStPH
Directional Message Passing for Molecular Graphs
Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes). They do not, however, consider the spatial direction from one atom to another, despite directional information playing a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions and spherical harmonics to construct theoretically well-founded, orthogonal representations that achieve better performance than the currently prevalent Gaussian radial basis representations while using fewer than 1/4 of the parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 76% on MD17 and by 31% on QM9. Our implementation is available online.
accept-spotlight
This paper studies Graph Neural Networks for quantum chemistry by incorporating a number of physics-informed innovations into the architecture. In particular, it considers directional edge information while preserving equivariance. Reviewers were in agreement that this is an excellent paper with strong empirical results, thorough empirical evaluation and clear exposition. There were some concerns about limited novelty in terms of GNN methodology (for instance, directional message passing has appeared in previous GNN papers, see e.g. https://openreview.net/forum?id=H1g0Z3A9Fm, in a different context). Ultimately, the AC believes this is a strong, high-quality work that will be of broad interest, and thus recommends acceptance.
train
[ "B1xQUxXDsH", "HJl0NxmDiH", "r1lvxgXwjH", "rJx3xOahYB", "Sklq1fBatH", "H1eUT3mRtS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "As you correctly pointed out, going beyond the graph and incorporating the underlying spatial data is one of the main ideas behind our model. In our work we focus on molecular prediction and leverage the characteristics of this problem. However, many of the ideas we propose in our paper should be applicable to oth...
[ -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, 1, 5, 4 ]
[ "rJx3xOahYB", "Sklq1fBatH", "H1eUT3mRtS", "iclr_2020_B1eWbxStPH", "iclr_2020_B1eWbxStPH", "iclr_2020_B1eWbxStPH" ]
iclr_2020_H1xFWgrFPS
Explanation by Progressive Exaggeration
As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and explanation has become more critical. Classical approaches that assess feature importance (e.g. saliency maps) do not explain how and why a particular region of an image is relevant to the prediction. We propose a method that explains the outcome of a classification black-box by gradually exaggerating the semantic effect of a given class. Given a query input to a classifier, our method produces a progressive set of plausible variations of that query, which gradually change the posterior probability from its original class to its negation. These counter-factually generated samples preserve features unrelated to the classification decision, such that a user can employ our method as a ``tuning knob'' to traverse a data manifold while crossing the decision boundary. Our method is model agnostic and only requires the output value and gradient of the predictor with respect to its input.
accept-spotlight
This paper presents an idea for interpolating between two points in the decision space of a black-box classifier in the image space, while producing plausible images along the interpolation path. The presentation is clear and the experiments support the premise of the model. While the proposed technique can be used to help understand how a classifier works, I have strong reservations about calling the generated samples "explanations". In particular, there is no reason for the true explanation of how the classifier works to lie in the manifold of plausible images. This constraint is more of a feature to please humans rather than to explain the geometry of the decision boundary. I believe this paper will be well received and I suggest acceptance, but I believe it will be of limited usefulness for robust understanding of the decision boundary of classifiers.
train
[ "rJxvtP93sS", "SygEn1AVsS", "Syg6pAa4oH", "H1eKOkenFS", "rJgq6kTS5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We want to thank the reviewers for their valuable and constructive feedback. \n\nWe have made the following changes in the revision of our paper.\n1. Updated references to point to published articles.\n2. Added human evaluation experiment in Appendix section A.4. We used Amazon Mechanical Turk (AMT) to conduct hum...
[ -1, -1, -1, 8, 6 ]
[ -1, -1, -1, 5, 3 ]
[ "iclr_2020_H1xFWgrFPS", "H1eKOkenFS", "rJgq6kTS5B", "iclr_2020_H1xFWgrFPS", "iclr_2020_H1xFWgrFPS" ]
iclr_2020_ByeGzlrKwH
Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
One of the biggest issues in deep learning theory is the generalization ability of networks with huge model size. The classical learning theory suggests that overparameterized models cause overfitting. However, practically used large deep models avoid overfitting, which is not well explained by the classical approaches. To resolve this issue, several attempts have been made. Among them, the compression based bound is one of the promising approaches. However, the compression based bound can be applied only to a compressed network, and it is not applicable to the non-compressed original network. In this paper, we give a unified framework that can convert compression based bounds to those for non-compressed original networks. The bound gives an even better rate than the one for the compressed network by improving the bias term. By establishing the unified framework, we can obtain a data dependent generalization error bound which gives a tighter evaluation than the data independent ones.
accept-spotlight
This paper has a few interesting contributions: (a) a bound for un-compressed networks in terms of the compressed network (this is in contrast to some prior work, which only gives bounds on the compressed network); (b) the use of local Rademacher complexity to try to squeeze as much as possible out of the connection; (c) an application of the bound to a specific interesting favorable condition, namely low-rank structure. As a minor suggestion, I'd like to recommend that the authors go ahead and use their allowed 10th body page!
val
[ "SkgfGuL3or", "Bkg4y5FCYS", "HJgKa3Rijr", "BJl6G22osr", "Hyepa7Msir", "SJeWa-KcoB", "BJeq9Wt9jH", "BkgeHbYcjr", "SkeOxWYcjS", "Byevc7PaYS", "BygRLlp3cS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for clarifying the notation and answering my questions. I found the new empirical evaluation of intrinsic dimensionality and comparison to Arora et al. bound to be a valuable contribution (Appendix D). However, I do believe that the presentation of the theoretical results and the notation in the main tex...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "BJeq9Wt9jH", "iclr_2020_ByeGzlrKwH", "BJl6G22osr", "Hyepa7Msir", "BkgeHbYcjr", "Byevc7PaYS", "Bkg4y5FCYS", "BygRLlp3cS", "iclr_2020_ByeGzlrKwH", "iclr_2020_ByeGzlrKwH", "iclr_2020_ByeGzlrKwH" ]
iclr_2020_Bkeb7lHtvH
At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks?
Background: Recent developments have made it possible to accelerate neural network training significantly using large batch sizes and data parallelism. Training in an asynchronous fashion, where delay occurs, can make training even more scalable. However, asynchronous training has its pitfalls, mainly a degradation in generalization, even after convergence of the algorithm. This gap remains poorly understood, as theoretical analysis so far has mainly focused on the convergence rate of asynchronous methods. Contributions: We examine asynchronous training from the perspective of dynamical stability. We find that the degree of delay interacts with the learning rate to change the set of minima accessible by an asynchronous stochastic gradient descent algorithm. We derive closed-form rules on how the learning rate can be changed while keeping the accessible set the same. Specifically, for high delay values, we find that the learning rate should be kept inversely proportional to the delay. We then extend this analysis to include momentum. We find momentum should be either turned off, or modified to improve training stability. We provide empirical experiments to validate our theoretical findings.
accept-spotlight
The paper considers the problem of training neural networks asynchronously, and the gap in generalization due to different local minima being accessible with different delays. The authors derive a theoretical model for the delayed gradients, which provides prescriptions for setting the learning rate and momentum. All reviewers agreed that this is a nice paper with valuable theoretical and empirical contributions.
test
[ "BJxe-k_0Kr", "BygmdOItoB", "SkgYLuIwjr", "BygeIZlDoB", "HJlV0elDir", "SyeTfjp3YH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "The authors introduce a theoretical model for delayed gradients in asynchronous training. It is a very nice model and solving the corresponding differential equation allows to study its stability. Authors derive stability bounds for pure SGD (learning rate needs to decrease with delay) and for SGD with momentum, w...
[ 6, -1, 8, -1, -1, 8 ]
[ 3, -1, 3, -1, -1, 3 ]
[ "iclr_2020_Bkeb7lHtvH", "SkgYLuIwjr", "iclr_2020_Bkeb7lHtvH", "SyeTfjp3YH", "BJxe-k_0Kr", "iclr_2020_Bkeb7lHtvH" ]
iclr_2020_rygeHgSFDH
Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN)
A central question of representation learning asks under which conditions it is possible to reconstruct the true latent variables of an arbitrarily complex generative process. Recent breakthrough work by Khemakhem et al. (2019) on nonlinear ICA has answered this question for a broad class of conditional generative processes. We extend this important result in a direction relevant for application to real-world data. First, we generalize the theory to the case of unknown intrinsic problem dimension and prove that in some special (but not very restrictive) cases, informative latent variables will be automatically separated from noise by an estimating model. Furthermore, the recovered informative latent variables will be in one-to-one correspondence with the true latent variables of the generating process, up to a trivial component-wise transformation. Second, we introduce a modification of the RealNVP invertible neural network architecture (Dinh et al. (2016)) which is particularly suitable for this type of problem: the General Incompressible-flow Network (GIN). Experiments on artificial data and EMNIST demonstrate that theoretical predictions are indeed verified in practice. In particular, we provide a detailed set of exactly 22 informative latent variables extracted from EMNIST.
accept-spotlight
This paper builds on the recent theoretical work by Khemakhem et al. (2019) to propose a novel flow-based method for performing non-linear ICA. The paper is well written, includes theoretical justifications for the proposed approach and convincing experimental results. Many of the initial minor concerns raised by the reviewers were addressed during the discussion stage, and all of the reviewers agree that this paper is an important contribution to the field and hence should be accepted. Hence, I am happy to recommend the acceptance of this paper as an oral.
train
[ "HkemnUnqoH", "BylKIoj5oS", "Byx4zDN_oS", "HJejK84_sB", "Hye788EOiB", "rylG-P22FS", "H1eAihsatr", "SJlju2tx9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read the comments and the appendix is certainly improved. \n\nThe new added section 4.4.3 admits that selection of $u$ is not clear. I think this section and the subsequent one could also be merged into the conclusions, as a limitation and subject for future work. \n\nI realize that my final comment was inc...
[ -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, 3, 1, 5 ]
[ "Hye788EOiB", "Byx4zDN_oS", "SJlju2tx9r", "H1eAihsatr", "rylG-P22FS", "iclr_2020_rygeHgSFDH", "iclr_2020_rygeHgSFDH", "iclr_2020_rygeHgSFDH" ]
iclr_2020_BkgrBgSYDS
Kaleidoscope: An Efficient, Learnable Representation For All Structured Linear Maps
Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.
accept-spotlight
The paper generalizes several existing results for structured linear transformations in the form of K-matrices. This is an excellent paper and all reviewers confirmed that.
train
[ "HJxIw8m_iB", "rJgHQL7uir", "r1eYbUQ_oB", "rJl51UQ_iH", "BJlGTr7OsH", "BkltbF7QsS", "BJx4TPtRFB", "HyeOD450tr", "S1g74Sfq9H" ]
[ "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Since Kronecker-factored matrices have an efficient representation, they are automatically captured by a K-matrix with the correct number of parameters up to logarithmic factors.\nThere is actually a tighter bound that can be made in the case of the Kronecker products specifically, relating the K-matrix width of t...
[ -1, -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 1, 3, 1 ]
[ "BkltbF7QsS", "BJx4TPtRFB", "HyeOD450tr", "S1g74Sfq9H", "iclr_2020_BkgrBgSYDS", "iclr_2020_BkgrBgSYDS", "iclr_2020_BkgrBgSYDS", "iclr_2020_BkgrBgSYDS", "iclr_2020_BkgrBgSYDS" ]
iclr_2020_S1evHerYPr
Improving Generalization in Meta Reinforcement Learning using Learned Objectives
Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that decides how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.
accept-spotlight
This paper proposes a meta-RL algorithm that learns an objective function whose gradients can be used to efficiently train a learner on tasks entirely different from those seen during meta-training. Building off-policy gradient-based meta-RL methods is challenging, and had not been previously demonstrated. Further, the demonstrated generalization capabilities are a substantial improvement over prior meta-learning methods. There are a couple of related works that are quite relevant (and somewhat similar in methodology) but overlooked -- see [1,2]. Further, we strongly encourage the authors to run the method on multiple meta-training environments and to report results with more seeds, as promised. The contributions are significant and should be seen by the ICLR community. Hence, I recommend an oral presentation. [1] Yu et al. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning [2] Sung et al. Meta-critic networks
train
[ "HkxPiyXRKr", "SygfXwUnoB", "SylY3hWssS", "HJgkRUO6Kr", "HylZtS-9sS", "B1eo8QIEsS", "rkeLWm84iH", "BklpFf8VsS", "B1ln3l84sS", "ryxWqYyctr" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes to meta learn the objective function of a policy gradient algorithm using second order gradients of the objective function w.r.t the state-action value Q. \n\nThis is an interesting approach, however, I think the experimental evidence is not sufficiently convincing. \n\n- In particular, I think ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_S1evHerYPr", "iclr_2020_S1evHerYPr", "HylZtS-9sS", "iclr_2020_S1evHerYPr", "B1eo8QIEsS", "rkeLWm84iH", "HJgkRUO6Kr", "HkxPiyXRKr", "ryxWqYyctr", "iclr_2020_S1evHerYPr" ]
iclr_2020_BJxsrgStvr
Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks
(Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized networks, that can be trained alone to achieve comparable accuracies to the latter in a similar number of iterations. However, the identification of these winning tickets still requires the costly train-prune-retrain process, limiting their practical benefits. In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term as Early-Bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates. Our finding of EB tickets is consistent with recently reported observations that the key connectivity patterns of neural networks emerge early. Furthermore, we propose a mask distance metric that can be used to identify EB tickets with low computational overhead, without needing to know the true winning tickets that emerge after the full training. Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy. Experiments based on various deep networks and datasets validate: 1) the existence of EB tickets and the effectiveness of mask distance in efficiently identifying them; and 2) that the proposed efficient training via EB tickets can achieve up to 5.8x ~ 10.7x energy savings while maintaining comparable or even better accuracy as compared to the most competitive state-of-the-art training methods, demonstrating a promising and easily adopted method for tackling cost-prohibitive deep network training.
accept-spotlight
This work studies small but critical subnetworks, called winning tickets, that have very similar performance to the entire network, even with much less training. The authors show how to identify these tickets early in the training of the full network, saving computation both in finding them and, overall, in training for the prediction task as a whole. The reviewers agree this paper is well-presented and of general interest to the community. Therefore, we recommend that the paper be accepted.
train
[ "BJxIRropKH", "HkxzLrTjoB", "HkghlnSjjB", "SyeWhISisS", "B1xzRFWqjH", "r1lESd-qsr", "HJedJdZ9jr", "rk5HxSNKr", "Bkx7VRIhFH" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper empirically analyzed the wide existence of \"early-bird tickets\", e.g., the \"lottery tickets\" emerging and stabilizing in very early training stage. The potential connection to (Achille et al., 2019; Li et al.,2019) reads interesting. \n\nThe authors made several contributions in addition to the obser...
[ 8, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BJxsrgStvr", "HkghlnSjjB", "B1xzRFWqjH", "r1lESd-qsr", "rk5HxSNKr", "Bkx7VRIhFH", "BJxIRropKH", "iclr_2020_BJxsrgStvr", "iclr_2020_BJxsrgStvr" ]
iclr_2020_HyxyIgHFvr
Truth or backpropaganda? An empirical investigation of deep learning theory
We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
accept-spotlight
The authors take a closer look at widely held beliefs about neural networks. Using a mix of analysis and experiment, they shed some light on the ways these assumptions break down. The paper contributes to our understanding of various phenomena and their connection to generalization, and should be a useful paper for theoreticians searching for predictive theories.
train
[ "SJgN-L5mcH", "HJxg3bwisS", "S1l7D-voiB", "ryxZZWwsiS", "HJeQuWDTtr", "H1x75kATFr", "r1xCi-IjKr", "Bygm_Tz0KS", "SygoTRPiFB", "SJlPOVxutB", "BJgFUN6OB", "ryeEsMcLOS", "B1xvkIhNOB", "H1gFcHcmur" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "public", "author", "public", "author", "public" ]
[ "In this paper, the authors seek to examine carefully some assumptions investigated in the theory of deep neural networks. The paper attempts to answer the following theoretical assumptions: the existence of local minima in loss landscapes, the relevance of weight decay with small L2-norm solutions, the connection ...
[ 8, -1, -1, -1, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_HyxyIgHFvr", "HJeQuWDTtr", "SJgN-L5mcH", "H1x75kATFr", "iclr_2020_HyxyIgHFvr", "iclr_2020_HyxyIgHFvr", "SJlPOVxutB", "SygoTRPiFB", "BJgFUN6OB", "iclr_2020_HyxyIgHFvr", "ryeEsMcLOS", "iclr_2020_HyxyIgHFvr", "H1gFcHcmur", "iclr_2020_HyxyIgHFvr" ]
iclr_2020_H1gNOeHKPS
Neural Arithmetic Units
Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements from a vector, when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting parameter space, and regularizing for sparsity is important when optimizing the NAU and NMU. Our proposed units NAU and NMU, compared with previous neural units, converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values.
accept-spotlight
This paper extends work on NALUs, providing a pair of units which, in tandem, outperform NALUs. The reviewers were broadly in favour of the paper given the presentation and results. The one dissenting reviewer appears to not have had time to reconsider their score despite the main points of clarification being addressed in the revision. I am happy to err on the side of optimism here and assume they would be satisfied with the changes that came as an outcome of the discussion, and recommend acceptance.
train
[ "HkxOeWlCYS", "ryxm9VxmiS", "ryxPvIZiiH", "H1gWAnqFsr", "S1gJpwkpqB", "HJx7vjkjiB", "SyxSQuqFjS", "rklWB8gQor", "B1eJmNlmoS", "BJx8Y7x7sS", "Skg7-Qe7sB", "ryglXQgXir", "ByxDNmx7oB", "BkgVknOCtB", "B1l3vxgLcS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose the Neural Multiplication Unit (NMU), which can learn to solve a family of arithmetic operations using -, + and * atomic operations over real numbers from examples. They show that a combination of careful initialization, regularization and structural choices allows their model to learn more rel...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_H1gNOeHKPS", "HkxOeWlCYS", "HJx7vjkjiB", "iclr_2020_H1gNOeHKPS", "iclr_2020_H1gNOeHKPS", "ByxDNmx7oB", "Skg7-Qe7sB", "iclr_2020_H1gNOeHKPS", "BkgVknOCtB", "B1l3vxgLcS", "S1gJpwkpqB", "S1gJpwkpqB", "S1gJpwkpqB", "iclr_2020_H1gNOeHKPS", "iclr_2020_H1gNOeHKPS" ]
iclr_2020_B1e3OlStPB
DeepSphere: a graph-based spherical CNN
Designing a convolution for a spherical neural network requires a delicate tradeoff between efficiency and rotation equivariance. DeepSphere, a method based on a graph representation of the discretized sphere, strikes a controllable balance between these two desiderata. This contribution is twofold. First, we study both theoretically and empirically how equivariance is affected by the underlying graph with respect to the number of pixels and neighbors. Second, we evaluate DeepSphere on relevant problems. Experiments show state-of-the-art performance and demonstrates the efficiency and flexibility of this formulation. Perhaps surprisingly, comparison with previous work suggests that anisotropic filters might be an unnecessary price to pay. Our code is available at https://github.com/deepsphere.
accept-spotlight
This paper proposes a novel methodology for applying convolutional networks to spherical data through a graph-based discretization. The reviewers all found the methodology sensible and the experiments convincing. A common concern of the reviewers was the amount of novelty in the approach, in that it involves the combination of established methods, but ultimately they found that the empirical performance compared to baselines outweighed this.
train
[ "HJxzpfinsr", "r1xyJ3DDjS", "Sye2UiPDjS", "SJxXi9wPoS", "BkeilpU-sB", "r1xdde8nKH", "BkgixEgAKr", "Bkx5kg6RKH" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We uploaded an improved manuscript thanks to the reviewers' comments.\n\nThe main update is the addition of theorem 3.2 that formalizes the relation between theorem 3.1 and rotation equivariance. Small changes across the text have been made to clarify the exposition further.\n\nA link to a public git repository co...
[ -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, 1, 4, 3 ]
[ "iclr_2020_B1e3OlStPB", "r1xdde8nKH", "BkgixEgAKr", "Bkx5kg6RKH", "iclr_2020_B1e3OlStPB", "iclr_2020_B1e3OlStPB", "iclr_2020_B1e3OlStPB", "iclr_2020_B1e3OlStPB" ]
iclr_2020_SylkYeHtwr
SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost. This estimator also allows use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.
accept-spotlight
The paper proposes a new way to train latent variable models. The standard way of training using the ELBO produces biased estimates for many quantities of interest. The authors introduce an unbiased estimate for the log marginal probability and its derivative to address this. The new estimator is based on the importance weighted autoencoder, correcting the remaining bias using Russian roulette sampling. The model is empirically shown to give better test set likelihood, and can be used in tasks where unbiased estimates are needed. All reviewers are positive about the paper. Support for the main claims is provided through empirical and theoretical results. The reviewers had some minor comments, especially about the theory, which the authors have addressed with additional clarification, which was appreciated by the reviewers. The paper was deemed to be well organized. There were some points of confusion about variance issues and bias from gradient clipping, which have been addressed by the authors with additional explanation as well as an additional plot. The approach is novel and addresses a very relevant problem for the ICLR community: optimizing latent variable models, especially in situations where unbiased estimates are required. The method results in marginally better optimization compared to IWAE with a much smaller average number of samples. The method was deemed by the reviewers to open up new possibilities such as entropy minimization.
train
[ "SkxKwPjosS", "rJgxdFWhFS", "Bke1TwjiiB", "rJea5tnioS", "H1ldrdnssS", "HJeJxFiijB", "SJlZ3eOBYB", "HklgDk-gcB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful review and comments. We’ve added comparisons to previous work on bias reduced estimators (e.g. jackknife variational inference (JVI) (Nowozin, 2018)) in the related work and experiments. We have cited multiple bias compensation works with RRE, across multiple areas of application, as ...
[ -1, 8, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, -1, -1, 3, 5 ]
[ "HklgDk-gcB", "iclr_2020_SylkYeHtwr", "rJgxdFWhFS", "H1ldrdnssS", "Bke1TwjiiB", "SJlZ3eOBYB", "iclr_2020_SylkYeHtwr", "iclr_2020_SylkYeHtwr" ]
iclr_2020_S1eZYeHFDS
Deep Learning For Symbolic Mathematics
Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing these mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
accept-spotlight
The paper presents a deep learning approach for tasks such as symbolic integration and solving differential equations. The reviewers were positive and the paper has had extensive discussion, which we hope has been positive for the authors. We look forward to seeing the engagement with this work at the conference.
train
[ "HJgxJL_CKS", "SygTQ8Pqjr", "rygDLq24jr", "BylfbY3Esr", "BJeaNOh4jr", "SJgIgO3VoS", "Hklgq824ir", "HJlE_rhVsS", "r1g8MHnVoS", "H1ltQLJMoH", "rkec-H1UKS", "SJgm1yNFFS", "H1xysAfSqH", "H1eE5hROOH", "rygXFOtvOr", "HkgfEuFwdB", "SJeO4Wa7_H", "SyeEzcnXdB", "SJlw8Scm_H", "B1xF7rqQdH"...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "public", "public", "author", "author", "author", "author", "author"...
[ "In this paper, the authors propose a method for generating two types of symbolic mathematics problems, integration and differential equations, and their solutions. The purpose of the method is to generate datasets for training transformer neural networks that solve integration and differential-equation problems. T...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_S1eZYeHFDS", "Hklgq824ir", "iclr_2020_S1eZYeHFDS", "H1ltQLJMoH", "SJgIgO3VoS", "rkec-H1UKS", "HJgxJL_CKS", "SJgm1yNFFS", "H1eE5hROOH", "iclr_2020_S1eZYeHFDS", "iclr_2020_S1eZYeHFDS", "iclr_2020_S1eZYeHFDS", "HkgfEuFwdB", "rygXFOtvOr", "SyeEzcnXdB", "SJeO4Wa7_H", "ryeOMfcmO...
iclr_2020_S1xitgHtvS
Making Sense of Reinforcement Learning and Probabilistic Inference
Reinforcement learning (RL) combines a control problem with statistical estimation: The system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts ‘RL as inference’ and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces a key shortcoming in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: The exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We demonstrate that the popular ‘RL as inference’ approximation can perform poorly in even very basic problems. However, we show that with a small modification the framework does yield algorithms that can provably perform well, and we show that the resulting algorithm is equivalent to the recently proposed K-learning, which we further connect with Thompson sampling.
accept-spotlight
The paper explores in more detail the "RL as inference" viewpoint and highlights some issues with this approach, as well as ways to address these issues. The new version of the paper has effectively addressed some of the reviewers' initial concerns, resulting in an overall well-written paper with interesting insights.
train
[ "S1gAoVaTYH", "rkevha9_ir", "ryx5s-wNoB", "HJg7MbPNjH", "ryx8nxwNsr", "S1xZvxv4sS", "B1eIQQVUtS", "ryez-G1Lqr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper at hand presents an alternative view on reinforcement learning as probabilistic inference (or equivalently maximum entropy reinforcement learning). With respect to other formulations of this view (e.g. Levine, 2018; I am referring to the references of the paper here), the paper identifies a shortcoming i...
[ 6, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_S1xitgHtvS", "S1xZvxv4sS", "ryez-G1Lqr", "S1gAoVaTYH", "B1eIQQVUtS", "iclr_2020_S1xitgHtvS", "iclr_2020_S1xitgHtvS", "iclr_2020_S1xitgHtvS" ]
iclr_2020_r1eyceSYPr
Unbiased Contrastive Divergence Algorithm for Training Energy-Based Latent Variable Models
The contrastive divergence algorithm is a popular approach to training energy-based latent variable models, which has been widely used in many machine learning models such as the restricted Boltzmann machines and deep belief nets. Despite its empirical success, the contrastive divergence algorithm is also known to have biases that severely affect its convergence. In this article we propose an unbiased version of the contrastive divergence algorithm that completely removes its bias in stochastic gradient methods, based on recent advances on unbiased Markov chain Monte Carlo methods. Rigorous theoretical analysis is developed to justify the proposed algorithm, and numerical experiments show that it significantly improves the existing method. Our findings suggest that the unbiased contrastive divergence algorithm is a promising approach to training general energy-based latent variable models.
accept-spotlight
Main content: Blind review #1 summarizes it well: The paper proposes an algorithmic improvement that significantly simplifies training of energy-based models, such as the Restricted Boltzmann Machine. The key issue in training such models is computing the gradient of the log partition function, which can be framed as computing the expected value of f(x) = dE(x; theta) / d theta over the model distribution p(x). The canonical algorithm for this problem is Contrastive Divergence which approximates x ~ p(x) with k steps of Gibbs sampling, resulting in biased gradients. In this paper, the authors apply the recently introduced unbiased MCMC framework of Jacob et al. to completely remove the bias. The key idea is to (1) rewrite the expectation as a limit of a telescopic sum: E f(x_0) + \sum_t E f(x_t) - E f(x_{t-1}); (2) run two coupled MCMC chains, one for the “positive” part of the telescopic sum and one for the “negative” part, until they converge. After convergence, all remaining terms of the sum are zero and we can stop iterating. However, the number of time steps until convergence is now random. Other contributions of the paper are: 1. Proof that Bernoulli RBMs and other models satisfying certain conditions have finite expected number of steps and finite variance of the unbiased gradient estimator. 2. A shared random variables method for the coupled Gibbs chains that should result in faster convergence of the chains. 3. Verification of the proposed method on two synthetic datasets and a subset of MNIST, demonstrating more stable training compared to contrastive divergence and persistent contrastive divergence. -- Discussion: The main objection in reviews was to have meaningful empirical validation of the strong theoretical aspect of the paper, which the authors did during the rebuttal period to the satisfaction of reviewers. -- Recommendation and justification: As review #1 said, "I am very excited about this paper and strongly support its acceptance, since the proposed method should revitalize research in energy-based models."
train
[ "BkxfqjaaFH", "Syx7qLS3sB", "SJx1krBhir", "BygfyEHhiS", "r1xivrLjYS", "rkgmSvqotB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Based on recent progress in unbiased MCMC sampling the paper proposes an unbiased contrastive divergence (UCD) algorithm for training energy based models. Specifically they developed an unbiased version of the gibbs sampling contrastive divergence algorithm for training restricted Boltzman machines. The authors de...
[ 6, -1, -1, -1, 8, 8 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2020_r1eyceSYPr", "BkxfqjaaFH", "rkgmSvqotB", "r1xivrLjYS", "iclr_2020_r1eyceSYPr", "iclr_2020_r1eyceSYPr" ]
iclr_2020_Syx79eBKwr
A Mutual Information Maximization Perspective of Language Representation Learning
We show state-of-the-art word representation learning methods maximize an objective function that is a lower bound on the mutual information between different parts of a word sequence (i.e., a sentence). Our formulation provides an alternative perspective that unifies classical word embedding models (e.g., Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to enhancing our theoretical understanding of these methods, our derivation leads to a principled framework that can be used to construct new self-supervised tasks. We provide an example by drawing inspirations from related methods based on mutual information maximization that have been successful in computer vision, and introduce a simple self-supervised objective that maximizes the mutual information between a global sentence representation and n-grams in the sentence. Our analysis offers a holistic view of representation learning methods to transfer knowledge and translate progress across multiple domains (e.g., natural language processing, computer vision, audio processing).
accept-spotlight
This paper explores several embedding models (Skip-gram, BERT, XLNet) and describes a framework for comparing, and in the end, unifying them. The framework is such that it actually suggests new ways of creating embeddings, and draws connections to methodology from computer vision. One of the reviewers had several questions about the derivations in your paper and was worried about the paper's clarity. But all of the reviewers appreciated the contributions of the paper, which joins multiple seemingly disparate models into one theoretical framework. The reviewers were positive about the paper, and in particular were happy to see the active response of authors to their questions and willingness to update the paper with their suggested improvements.
val
[ "SklzdV82jr", "H1eWGbyRYB", "HyxUKCS2oB", "B1eMJ9-msH", "r1xKd5Z7or", "rJlgPc-mjB", "Hyemr5VVtB", "Skx9YZG-qB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarification. I hope we have answered your question above.\n\nRegarding novelty, the main contribution of the paper is a unifying framework of language representation learning models based on mutual information maximization. The framework also allows us to easily construct new self-supervised ta...
[ -1, 8, -1, -1, -1, -1, 8, 6 ]
[ -1, 5, -1, -1, -1, -1, 5, 3 ]
[ "HyxUKCS2oB", "iclr_2020_Syx79eBKwr", "rJlgPc-mjB", "Hyemr5VVtB", "H1eWGbyRYB", "Skx9YZG-qB", "iclr_2020_Syx79eBKwr", "iclr_2020_Syx79eBKwr" ]
iclr_2020_S1e_9xrFvS
Energy-based models for atomic-resolution protein conformations
We propose an energy-based model (EBM) of protein conformations that operates at atomic scale. The model is trained solely on crystallized protein data. By contrast, existing approaches for scoring conformations use energy functions that incorporate knowledge of physical principles and features that are the complex product of several decades of research and tuning. To evaluate the model, we benchmark on the rotamer recovery task, the problem of predicting the conformation of a side chain from its context within a protein structure, which has been used to evaluate energy functions for protein design. The model achieves performance close to that of the Rosetta energy function, a state-of-the-art method widely used in protein structure prediction and design. An investigation of the model’s outputs and hidden representations finds that it captures physicochemical properties relevant to protein energy.
accept-spotlight
The paper proposes a data-driven approach to learning atomic-resolution energy functions. Experimental results show that the proposed energy function performs similarly to the state-of-the-art method (Rosetta) based on physical principles and engineered features. The paper addresses an interesting and challenging problem. The results are very promising. It is a good showcase of how ML can be applied to solve an important application problem. For the final version, we suggest that the authors tone down some claims in the paper to fairly reflect the contribution of the work.
train
[ "Bylp_tzhsr", "ByeQ4s-3oS", "ByxLpbGhjr", "SJerAWZhiB", "BJldk-Z2sS", "Skgupl9TYH", "SJxY2dsAFB", "Hkg4BP-N5H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for addressing my comments - as reflected in my score ( I'll stay by my original accept-score) I find this paper interesting and hope to see it at ICLR2020.", "We thank the reviewer for constructive criticism and questions. This feedback has been very helpful and we've made a number of alterations to the...
[ -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "ByeQ4s-3oS", "Skgupl9TYH", "iclr_2020_S1e_9xrFvS", "SJxY2dsAFB", "Hkg4BP-N5H", "iclr_2020_S1e_9xrFvS", "iclr_2020_S1e_9xrFvS", "iclr_2020_S1e_9xrFvS" ]
iclr_2020_BJe55gBtvH
Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem
Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarsky’s work reveals the limitations of shallow neural networks, it doesn’t inform us on why these functions are difficult to represent and in fact he states it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths. In this work, we point to a new connection between DNNs expressivity and Sharkovsky’s Theorem from dynamical systems, that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). Motivated by our observation that the triangle waves used in Telgarsky’s work contain points of period 3 – a period that is special in that it implies chaotic behaviour based on the celebrated result by Li-Yorke – we proceed to give general lower bounds for the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is based on an eigenvalue analysis of the dynamical systems associated with such functions.
accept-spotlight
The article is concerned with depth width tradeoffs in the representation of functions with neural networks. The article presents connections between expressivity of neural networks and dynamical systems, and obtains lower bounds on the width to represent periodic functions as a function of the depth. These are relevant advances and new perspectives for the theoretical study of neural networks. The reviewers were very positive about this article. The authors' responses also addressed comments from the initial reviews.
train
[ "rJloHCKhjr", "BJegW4y3iH", "H1g-XB1hsB", "ryl5620g5S", "HJgnwZWW5H" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I greatly appreciate the author's thorough response! I also appreciate the inclusion of the synthetic dataset! Unfortunately, it isn't possible for me to raise my score any higher (because it is already at the maximum). ", "First we thank Reviewer 1 for their time, positive feedback and valuable comments. Bot...
[ -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 3, 1 ]
[ "BJegW4y3iH", "HJgnwZWW5H", "ryl5620g5S", "iclr_2020_BJe55gBtvH", "iclr_2020_BJe55gBtvH" ]
iclr_2020_H1gBsgBYwH
Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint
This paper investigates the generalization properties of two-layer neural networks in high-dimensions, i.e. when the number of samples n, features d, and neurons h tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at h≈n and further overparameterization decreases the risk. In contrast, when the first layer weights are optimized, we highlight how different scales of initialization lead to different inductive bias, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks.
accept-spotlight
This paper studies the double descent phenomenon in training two-layer neural networks in an asymptotic regime where various dimensions go to infinity together with fixed ratios. The authors provide a precise asymptotic characterization of the risk and use it to study various phenomena. In particular they characterize the role of various scales of the initialization and their effects. The reviewers all agree that this is an interesting paper with nice contributions. I concur with this assessment. I think this is a solid paper with very precise and concise theory. I recommend acceptance.
train
[ "Bke6usYzKr", "Bklc6FAsoH", "B1xu2HCssS", "Hkee-HRoiH", "ryxeE1cjYH", "rylsXynatr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors study the generalization error of two-layer neural nets, where an asymptotic point of view is taken. Their main results can be summarized as follows.\n1. If only the second layer is optimized, they observe the double-descent phenomenon.\n2. However, if only the first layer is optimized, the double-desc...
[ 8, -1, -1, -1, 6, 8 ]
[ 1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_H1gBsgBYwH", "ryxeE1cjYH", "Bke6usYzKr", "rylsXynatr", "iclr_2020_H1gBsgBYwH", "iclr_2020_H1gBsgBYwH" ]
iclr_2020_SJxUjlBtwB
Reconstructing continuous distributions of 3D protein structure from cryo-EM images
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution. In single particle cryo-EM, the central problem is to reconstruct the 3D structure of a macromolecule from 10^4–10^7 noisy and randomly oriented 2D projection images. However, the imaged protein complexes may exhibit structural variability, which complicates reconstruction and is typically addressed using discrete clustering approaches that fail to capture the full range of protein dynamics. Here, we introduce a novel method for cryo-EM reconstruction that extends naturally to modeling continuous generative factors of structural heterogeneity. This method encodes structures in Fourier space using coordinate-based deep neural networks, and trains these networks from unlabeled 2D cryo-EM images by combining exact inference over image orientation with variational inference for structural heterogeneity. We demonstrate that the proposed method, termed cryoDRGN, can perform ab-initio reconstruction of 3D protein complexes from simulated and real 2D cryo-EM image data. To our knowledge, cryoDRGN is the first neural network-based approach for cryo-EM reconstruction and the first end-to-end method for directly reconstructing continuous ensembles of protein structures from cryo-EM images.
accept-spotlight
The paper introduces a generative approach to reconstruct 3D images for cryo-electron microscopy (cryo-EM). All reviewers really liked the paper, appreciate the challenging problem tackled and the proposed solution. Acceptance is therefore recommended.
test
[ "B1ewkdOijr", "HkxVYD_jiH", "HJgXXPdsjr", "S1lOMkH6FH", "HyxRW6rTFr", "rJetvQrJcB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments and questions. Classical cryo-EM reconstruction algorithms (e.g. cryoSPARC) are described in Section 2.2 at a high level and we refer the reader to its reference (Punjani et al. 2017) for more details on their implementation.\n\nTo clarify the relationship between the cryoSPARC and cryo...
[ -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, 3, 4, 1 ]
[ "S1lOMkH6FH", "HyxRW6rTFr", "rJetvQrJcB", "iclr_2020_SJxUjlBtwB", "iclr_2020_SJxUjlBtwB", "iclr_2020_SJxUjlBtwB" ]
iclr_2020_SJxpsxrYPS
PROGRESSIVE LEARNING AND DISENTANGLEMENT OF HIERARCHICAL REPRESENTATIONS
Learning rich representation from data is an important task for deep generative models such as variational auto-encoder (VAE). However, by extracting high-level abstractions in the bottom-up inference process, the goal of preserving all factors of variations for top-down generation is compromised. Motivated by the concept of “starting small”, we present a strategy to progressively learn independent hierarchical representations from high- to low-levels of abstractions. The model starts with learning the most abstract representation, and then progressively grow the network architecture to introduce new representations at different levels of abstraction. We quantitatively demonstrate the ability of the presented model to improve disentanglement in comparison to existing works on two benchmark datasets using three disentanglement metrics, including a new metric we proposed to complement the previously-presented metric of mutual information gap. We further present both qualitative and quantitative evidence on how the progression of learning improves disentangling of hierarchical representations. By drawing on the respective advantage of hierarchical representation learning and progressive learning, this is to our knowledge the first attempt to improve disentanglement by progressively growing the capacity of VAE to learn hierarchical representations.
accept-spotlight
This paper proposes a novel way to learn hierarchical disentangled latent representations by building on the previously published Variational Ladder AutoEncoder (VLAE) work. The proposed extension involves learning disentangled representations in a progressive manner, from the most abstract to the more detailed. While at first the reviewers expressed some concerns about the paper, in terms of its main focus (whether it was the disentanglement or the hierarchical aspect of the learnt representation), connections to past work, and experimental results, these concerns were fully alleviated during the discussion period. All of the reviewers now agree that this is a valuable contribution to the field and should be accepted to ICLR. Hence, I am happy to recommend this paper for acceptance as an oral.
train
[ "ryx1b4oaYH", "Bkxu1H8y9B", "BJxLfm2wsH", "r1l9QfnwoB", "H1gEzb3DoS", "HkxsYDz-qB", "S1xo6JQzjS", "rJg_917MsB", "BJe7sjL9tB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper introduce pro-VLAE, an extension to VAE that promotes disentangled representation learning in a hierarchical fashion.\nEncoder and decoder are made of multiple layers and latent variables are not only present in the bottleneck but also between intermediate layers; in such a way, it is possible to encode...
[ 8, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_SJxpsxrYPS", "iclr_2020_SJxpsxrYPS", "BJe7sjL9tB", "ryx1b4oaYH", "HkxsYDz-qB", "Bkxu1H8y9B", "Bkxu1H8y9B", "Bkxu1H8y9B", "iclr_2020_SJxpsxrYPS" ]
iclr_2020_rJg8TeSFDH
An Exponential Learning Rate Schedule for Deep Learning
Intriguing empirical evidence exists that deep learning can work well with exotic schedules for varying the learning rate. This paper suggests that the phenomenon may be due to Batch Normalization or BN (Ioffe & Szegedy, 2015), which is ubiquitous and provides benefits in optimization and generalization across all standard architectures. The following new results are shown about BN with weight decay and momentum (in other words, the typical use case which was not considered in earlier theoretical analyses of stand-alone BN (Ioffe & Szegedy, 2015; Santurkar et al., 2018; Arora et al., 2018)). • Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., learning rate increases by some (1 + α) factor in every epoch for some α > 0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up network weights, but the net stays well-behaved due to normalization. • Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well, Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc. • A worked-out toy example illustrating the above linkage of hyperparameters. Using either weight decay or BN alone reaches global minimum, but convergence fails when both are used.
accept-spotlight
After the revision, the reviewers agree on acceptance of this paper. Let's do it.
val
[ "rJlhm_ETtH", "rklU4eINhr", "r1e-C3cUor", "HygVdTqLir", "SJl1vjc8oB", "Hyle1ic8sS", "rJxVktSXsr", "HJxfYraZsB", "HkxH37tRFH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "%%% Update to the review %%%\nThanks for your clarification and the revision - the paper looks good! With regards to your comment on accelerating hyper-parameter search, note that there are fairly subtle issues owing to the use of SGD - refer to a recent work of Ge et al \"The Step Decay Schedule: A Near Optimal, ...
[ 6, 8, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJg8TeSFDH", "iclr_2020_rJg8TeSFDH", "HkxH37tRFH", "rJlhm_ETtH", "HJxfYraZsB", "iclr_2020_rJg8TeSFDH", "iclr_2020_rJg8TeSFDH", "iclr_2020_rJg8TeSFDH", "iclr_2020_rJg8TeSFDH" ]
iclr_2020_S1e2agrFvS
Geom-GCN: Geometric Graph Convolutional Networks
Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of applications in the real world. However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs. Few studies have noticed the weaknesses from different perspectives. From the observations on classical neural network and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses. The basic idea behind it is that aggregation on a graph can benefit from a continuous space underlying the graph. The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation. We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs. Experimental results show that the proposed Geom-GCN achieves state-of-the-art performance on a wide range of open datasets of graphs.
accept-spotlight
This paper is consistently supported by all three reviewers and thus an accept is recommended.
val
[ "rJgofPeTKr", "rJxqK542oS", "Byx9Lj4hiH", "S1giVoY2iB", "Bkgpjq4hsr", "HklyXj4niS", "SkxLWoVhsr", "SJluLVxXqr", "rkeT4UtIcS", "rkxArIisYH", "SJg0E9eqKS", "B1lX_zzXdr", "BJx-XvXk_B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "\nThis work proposes geometric aggregation scheme for GCNs, which aims to overcome the limitations in traditional GCNs; those are lacking long distance dependencies and structure information in nodes. In particular, each node is transformed into a latent space. To overcome the first limitation, some nodes that are...
[ 6, -1, -1, -1, -1, -1, -1, 8, 6, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2020_S1e2agrFvS", "rJgofPeTKr", "SJluLVxXqr", "Byx9Lj4hiH", "rJxqK542oS", "SkxLWoVhsr", "rkeT4UtIcS", "iclr_2020_S1e2agrFvS", "iclr_2020_S1e2agrFvS", "SJg0E9eqKS", "iclr_2020_S1e2agrFvS", "BJx-XvXk_B", "iclr_2020_S1e2agrFvS" ]
iclr_2020_HJgzt2VKPB
CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning
Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, many of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures.
accept-talk
The paper proposes a new synthetically generated video dataset (CATER) for benchmarking temporal reasoning. The dataset is based on the CLEVR dataset and provides videos made up of primitive actions ("rotate", "pick-place", "slide", "contain") that can be combined to form more complex actions. The paper also benchmarks a variety of methods on three proposed tasks (atomic action classification, composite action classification, and 'snitch' localization) and demonstrates that while it is possible to get high performance on atomic action classification, the other two tasks are still challenging and require temporal modeling. Overall, all reviewers found the paper to be well written and easy to follow, with care given to the dataset construction, as well as the task definitions and experiment setup and analysis. The paper received strong scores from all reviewers (3 accepts). Based on the reviewer comments, the authors further improved the paper by adding additional relevant datasets for comparison and providing missing details pointed out by the reviewers. After the rebuttal, the reviewers remained positive.
train
[ "SkgDmQoLir", "BJgyw-jIiH", "SklLOxjIjH", "H1xVYHDAKS", "BJe4G_vz9S", "SylGEECj5r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time and insightful feedback! We have incorporated the changes into the revised paper, and address the issues in detail here:\n\n- Additional dataset comparisons: Thanks for pointing those out! We have added additional datasets to Table 1.\n\n- Train-Val-Test set: We will add that to the code re...
[ -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "H1xVYHDAKS", "BJe4G_vz9S", "SylGEECj5r", "iclr_2020_HJgzt2VKPB", "iclr_2020_HJgzt2VKPB", "iclr_2020_HJgzt2VKPB" ]
iclr_2020_BJlrF24twB
BackPACK: Packing more into Backprop
Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first-and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization.
accept-talk
The paper efficiently computes quantities, such as variance estimates of the gradient or various Hessian approximations, jointly with the gradient, and the paper also provides a software package for this. All reviewers agree that this is a very good paper and should be accepted.
train
[ "HJeZEQVDor", "ByelwmEvjr", "SkeZCGVvjH", "SJxJeLvmcS", "S1eDIl2NqH", "SJeQXw-85r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your detailed reading, all typographic and related work remarks will of course be addressed! On your two specific comments:\n\nWe appreciate your comment on the title and will try to find a more descriptive one.\nHowever, like you, we so far have struggled to find a sufficiently compact one.\n\nAnd than...
[ -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, 4, 3, 3 ]
[ "S1eDIl2NqH", "SJxJeLvmcS", "SJeQXw-85r", "iclr_2020_BJlrF24twB", "iclr_2020_BJlrF24twB", "iclr_2020_BJlrF24twB" ]
iclr_2020_HkxlcnVFwB
GenDICE: Generalized Offline Estimation of Stationary Values
An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.
accept-talk
The authors develop a framework for off-policy value estimation for infinite horizon RL tasks, for estimating the stationary distribution of a Markov chain. Reviewers were uniformly impressed by the work, and satisfied by the author response. Congratulations!
train
[ "BkgNEfo2ir", "SJxp7c5hjr", "S1gt9jdjjH", "BkeIkxtijH", "HklHP-tjjH", "Skli-sussr", "r1geCyJTKB", "H1l3zHj6Yr", "Hyluc57aFB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank the author for the detailed response. The ablation study answers all my questions and the current paper is quite solid. It would be great if the author can open source the code, which will definitely benefit the OPE community.", "Thank you for the response, the ablation study is clear and provide importan...
[ -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "BkeIkxtijH", "HklHP-tjjH", "H1l3zHj6Yr", "Hyluc57aFB", "r1geCyJTKB", "iclr_2020_HkxlcnVFwB", "iclr_2020_HkxlcnVFwB", "iclr_2020_HkxlcnVFwB", "iclr_2020_HkxlcnVFwB" ]
iclr_2020_H1lma24tPB
Principled Weight Initialization for Hypernetworks
Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.
accept-talk
All the reviewers agreed that this was a sensible application of mostly existing ideas from standard neural net initialization to the setting of hypernetworks. The main criticism was that this method was used to improve existing applications of hypernets, instead of extending their limits of applicability.
train
[ "BJlfwPbitH", "Hkx755_3jH", "rkxlxvd2KB", "BkgkWGvnjH", "SklvzTessH", "HyeLv6goiS", "B1laB3xsiH", "H1eb3jxosH", "HJlpawuhFB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Review of “Principled Weight Initialization for Hypernetworks”\n\nThere has been a lot of existing work on neural network initialization, and much of this work has made large impact in making deep learning models easier to train in practice. There has also been a line of work on indirect encoding of neural works (...
[ 8, -1, 8, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_H1lma24tPB", "BkgkWGvnjH", "iclr_2020_H1lma24tPB", "B1laB3xsiH", "BJlfwPbitH", "iclr_2020_H1lma24tPB", "rkxlxvd2KB", "HJlpawuhFB", "iclr_2020_H1lma24tPB" ]
iclr_2020_HJxNAnVtDS
On the Convergence of FedAvg on Non-IID Data
Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of O(1/T) for strongly convex and smooth problems, where T is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate η must decay, even if full-gradient is used; otherwise, the solution will be Ω(η) away from the optimal.
accept-talk
This manuscript analyzes the convergence of federated learning with stragglers, and provides convergence rates. The proof techniques involve bounding the effects of the non-identical distribution due to stragglers and related issues. The manuscript also includes a thorough empirical evaluation. Overall, the reviewers were quite positive about the manuscript, with a few details that should be improved.
train
[ "Bkxft2PptH", "SkxS2tZ-sH", "Hyx97qWbsS", "B1gL_cb-jS", "rkl4PWCsKB", "SyeCOwkk9r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper analyzes the convergence of FedAvg, the most popular algorithm for federated learning. The highlight of the paper is removing the following two assumptions: (i) the data are iid across devices, and (ii) all the devices are active. For smooth and strongly convex problems, the paper proves an O(1/T) conve...
[ 8, -1, -1, -1, 8, 6 ]
[ 1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HJxNAnVtDS", "rkl4PWCsKB", "Bkxft2PptH", "SyeCOwkk9r", "iclr_2020_HJxNAnVtDS", "iclr_2020_HJxNAnVtDS" ]
iclr_2020_S1efxTVYDr
Data-dependent Gaussian Prior Objective for Language Generation
For typical sequence prediction problems such as language generation, maximum likelihood estimation (MLE) has commonly been adopted as it encourages the predicted sequence most consistent with the ground-truth sequence to have the highest probability of occurring. However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect. We refer to this drawback as {\it negative diversity ignorance} in this paper. Treating all incorrect predictions as equal unfairly downplays the nuance of these sequences' detailed token-wise structure. To counteract this, we augment the MLE loss by introducing an extra Kullback--Leibler divergence term derived by comparing a data-dependent Gaussian prior and the detailed training prediction. The proposed data-dependent Gaussian prior objective (D2GPo) is defined over a prior topological order of tokens and is poles apart from the data-independent Gaussian prior (L2 regularization) commonly adopted in smoothing the training of MLE. Experimental results show that the proposed method makes effective use of a more detailed prior in the data and has improved performance in typical language generation tasks, including supervised and unsupervised machine translation, text summarization, storytelling, and image captioning.
accept-talk
This paper addresses the problem of poor generation quality in models for text generation that results from the use of the maximum likelihood (ML) loss, in particular the fact that the ML loss does not differentiate between different "incorrect" generated outputs (ones that do not match the corresponding training sequence). The authors propose to train text generation models with an additional loss term that measures the distance from the ground truth via a Gaussian distribution based on embeddings of the ground-truth tokens. This is not the first attempt to address drawbacks of ML training for text generation, but it is simple and intuitive, and produces improvements over the state of the art on a range of tasks. The reviewers are all quite positive, and are in agreement that the author responses and revisions have improved the paper quality and addressed initial concerns. I think this work will be broadly appreciated by the ICLR audience. One negative point is that the writing quality still needs improvement.
train
[ "H1gsgn90YB", "Sygtn69ssB", "HJgBWHejir", "B1xZJjwqjH", "rklUouAKjH", "SkeC04MptH", "HkxacnIYsB", "HJx_vvUYjS", "HkgSXP8FiB", "r1griLLtjH", "Hkx6OwB0YB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces the use of data-dependent Gaussian prior, to overcome negative diversity ignorance problem that includes the exposure bias problem for sequence generation models. In addition to the usual MLE (teacher forcing) criteria, the authors add the KL divergence between the prediction and the Gaussian...
[ 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, 8 ]
[ 5, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2020_S1efxTVYDr", "HJgBWHejir", "HkgSXP8FiB", "iclr_2020_S1efxTVYDr", "HJx_vvUYjS", "iclr_2020_S1efxTVYDr", "iclr_2020_S1efxTVYDr", "SkeC04MptH", "Hkx6OwB0YB", "H1gsgn90YB", "iclr_2020_S1efxTVYDr" ]
iclr_2020_H1gax6VtDB
Contrastive Learning of Structured World Models
A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
accept-talk
This paper presents an approach to learn state representations of the scene as well as their action-conditioned transition model, applying contrastive learning on top of a graph neural network. The reviewers unanimously agree that this paper contains a solid research contribution and the authors' response to the reviews further clarified their concerns.
train
[ "S1gKxxeZ5B", "B1l195thiH", "B1gr78U2jH", "H1gu47mniS", "ByxR2zlsoH", "H1l81UwCYS", "SJlwMQTdsS", "SkxeBpMDjS", "ryee-pMwiH", "BkxrThzwor", "B1l07hfwoS", "H1gve3zDjS", "rJgl-c_P_B" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper aims to learn a structured latent space for images, which is made up of objects and their relations. The method works by (1) extracting object masks via a CNN, (2) turning those masks into feature vectors via an MLP, (3) estimating an action-conditioned delta for each feature via a GNN. Learning happens...
[ 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_H1gax6VtDB", "B1gr78U2jH", "B1l07hfwoS", "ByxR2zlsoH", "ryee-pMwiH", "iclr_2020_H1gax6VtDB", "SkxeBpMDjS", "rJgl-c_P_B", "BkxrThzwor", "H1l81UwCYS", "S1gKxxeZ5B", "iclr_2020_H1gax6VtDB", "iclr_2020_H1gax6VtDB" ]
iclr_2020_B1evfa4tPB
Neural Network Branching for Neural Network Verification
Formal verification of neural networks is essential for their deployment in safety-critical areas. Many available formal verification methods have been shown to be instances of a unified Branch and Bound (BaB) formulation. We propose a novel framework for designing an effective branching strategy for BaB. Specifically, we learn a graph neural network (GNN) to imitate the strong branching heuristic behaviour. Our framework differs from previous methods for learning to branch in two main aspects. Firstly, our framework directly treats the neural network we want to verify as a graph input for the GNN. Secondly, we develop an intuitive forward and backward embedding update schedule. Empirically, our framework achieves roughly 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy. In addition, we show that our GNN model enjoys both horizontal and vertical transferability. Horizontally, the model trained on easy properties performs well on properties of increased difficulty levels. Vertically, the model trained on small neural networks achieves similar performance on large neural networks.
accept-talk
The authors develop a strategy to learn branching strategies for branch-and-bound based neural network verification algorithms, based on GNNs that imitate strong branching. This allows the authors to obtain significant speedups in branch and bound based neural network verification algorithms relative to strong baselines considered in prior work. The reviewers were in consensus and the quality of the paper and minor concerns raised in the initial reviews were adequately addressed in the rebuttal phase. Therefore, I strongly recommend acceptance.
train
[ "H1eFFHskqS", "BklC9jJior", "HJetcZYmqB", "BkgLWeCcoH", "Byguwkscir", "rklKyWicsH", "S1ehKxsqsB", "HylpJei5oS", "rJxXbtWRFH", "rJeqfkJJjr", "r1enywL3YB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "This paper proposes to use graph neural networks (GNNs) to replace the\nsplitting heuristic in branch and bound (BaB) based neural network verification\nalgorithms. The paper follows the general BaB framework by Bunel et al., but\nconsiders only splitting ReLU neurons, not input domains. The GNN is built by\nrepla...
[ 6, -1, 8, -1, -1, -1, -1, -1, 8, -1, -1 ]
[ 5, -1, 5, -1, -1, -1, -1, -1, 3, -1, -1 ]
[ "iclr_2020_B1evfa4tPB", "BkgLWeCcoH", "iclr_2020_B1evfa4tPB", "HylpJei5oS", "HJetcZYmqB", "rJxXbtWRFH", "H1eFFHskqS", "Byguwkscir", "iclr_2020_B1evfa4tPB", "r1enywL3YB", "iclr_2020_B1evfa4tPB" ]
iclr_2020_BJgnXpVYwS
Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity
We provide a theoretical explanation for the effectiveness of gradient clipping in training deep neural networks. The key ingredient is a new smoothness condition derived from practical neural network training examples. We observe that gradient smoothness, a concept central to the analysis of first-order optimization algorithms that is often assumed to be a constant, demonstrates significant variability along the training trajectory of deep neural networks. Further, this smoothness positively correlates with the gradient norm, and contrary to standard assumptions in the literature, it can grow with the norm of the gradient. These empirical observations limit the applicability of existing theoretical analyses of algorithms that rely on a fixed bound on smoothness. These observations motivate us to introduce a novel relaxation of gradient smoothness that is weaker than the commonly used Lipschitz smoothness assumption. Under the new condition, we prove that two popular methods, namely, gradient clipping and normalized gradient, converge arbitrarily faster than gradient descent with fixed stepsize. We further explain why such adaptively scaled gradient methods can accelerate empirical convergence and verify our results empirically in popular neural network training settings.
accept-talk
Gradient clipping is increasingly popular and it's nice to see a paper theoretically exploring its nice performance. All reviewers appreciated the work and the results. Please make sure to incorporate all of their comments for the final version.
train
[ "ryx_yx-HtS", "rylSdgmeqB", "SJgnrqooiS", "HyeE2sosjr", "rkgxn0mosS", "SylJvkx5oB", "rJeemQXLjS", "HJgzsbXUsB", "Skei0fQ8ir", "BylX_gm8iS", "rkxmDNtEYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors relax the generally used Lipschitz smoothness condition in optimization, to a more general smoothness condition that may depend on norm of the gradient. The authors proved that, with this relaxed condition, under such cases, both GD and clipped GD can converge within O(1/\\epsilon^2) tim...
[ 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJgnXpVYwS", "iclr_2020_BJgnXpVYwS", "BylX_gm8iS", "SylJvkx5oB", "Skei0fQ8ir", "HJgzsbXUsB", "rkxmDNtEYH", "rylSdgmeqB", "ryx_yx-HtS", "iclr_2020_BJgnXpVYwS", "iclr_2020_BJgnXpVYwS" ]
iclr_2020_Syg-ET4FPS
Posterior sampling for multi-agent reinforcement learning: solving extensive games with imperfect information
Posterior sampling for reinforcement learning (PSRL) is a useful framework for making decisions in an unknown environment. PSRL maintains a posterior distribution of the environment and then makes planning on the environment sampled from the posterior distribution. Though PSRL works well on single-agent reinforcement learning problems, how to apply PSRL to multi-agent reinforcement learning problems is relatively unexplored. In this work, we extend PSRL to two-player zero-sum extensive-games with imperfect information (TEGI), which is a class of multi-agent systems. More specifically, we combine PSRL with counterfactual regret minimization (CFR), which is the leading algorithm for TEGI with a known environment. Our main contribution is a novel design of interaction strategies. With our interaction strategies, our algorithm provably converges to the Nash Equilibrium at a rate of O(log T / T). Empirical results show that our algorithm works well.
accept-talk
The paper extends posterior sampling to the multi-agent RL setting, and develops a novel algorithm with convergence guarantees to a Nash Equilibrium strategy in two-player zero sum games. Reviewers raised several questions, many of which were well addressed by the authors and which helped further clarify the approach and contribution of the paper. The paper is timely in that novel connections between Game Theory and RL are being explored in fruitful ways, and the paper provides valuable new insights and directions for future research.
train
[ "BkgacWb0cS", "H1eFXW0siH", "r1ghagRiiS", "HJg09qHzqS", "rJxjp58sjS", "Hyx2jpeojS", "Skeg6SmOjB", "Byx45BmdsS", "H1lzrHXOor", "r1xCiV7_sB", "HkxcDKmIFB" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Review for \"Posterior Sampling for Multi-Agent Reinforcement Learning\".\n\nThe paper proposes a sample-efficient way to compute a Nash equilibrium of an extensive form game. The algorithm works by maintaining a probability distribution over the chance player / reward pair (i.e. an environment model).\n\nI give a...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_Syg-ET4FPS", "Hyx2jpeojS", "rJxjp58sjS", "iclr_2020_Syg-ET4FPS", "Byx45BmdsS", "Skeg6SmOjB", "HkxcDKmIFB", "H1lzrHXOor", "HJg09qHzqS", "BkgacWb0cS", "iclr_2020_Syg-ET4FPS" ]
iclr_2020_SJe5P6EYvS
Mogrifier LSTM
Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur. Recurrent networks, which have enjoyed a modicum of success, still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output. This mechanism affords the modelling of a richer space of interactions between inputs and their context. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments demonstrate markedly improved generalization on language modelling in the range of 3–4 perplexity points on Penn Treebank and Wikitext-2, and 0.01–0.05 bpc on four character-based datasets. We establish a new state of the art on all datasets with the exception of Enwik8, where we close a large gap between the LSTM and Transformer models.
accept-talk
This paper presents a new twist on the typical LSTM that applies several rounds of gating on the history and input, with the end result that the LSTM's transition function is effectively context-dependent. The performance of the model is illustrated on several datasets. In general, the reviews were positive, with one score being upgraded during the rebuttal period. One of the reviewers complained that the baselines were not adequate, but in the end conceded that the results were still worthy of publication. One reviewer argued very hard for the acceptance of this paper "Papers that are as clear and informative as this one are few and far between. ... As such, I vehemently argue in favor of this paper being accepted to ICLR."
test
[ "rJe0f83x9B", "rylMKNabsr", "B1ehRohZjB", "S1efyYn-jH", "BylUf0l6YB", "HklaAK10FS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I have read the authors' response. Their points regarding baseline comparisons are sensible in that there isn't a reason to expect the observations to *not* generalize to other datasets. It is odd that mLSTM is outperformed by LSTM in Table 3, but as the authors note in section 4.2 this may be due to instabili...
[ 6, -1, -1, -1, 8, 8 ]
[ 1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_SJe5P6EYvS", "BylUf0l6YB", "HklaAK10FS", "rJe0f83x9B", "iclr_2020_SJe5P6EYvS", "iclr_2020_SJe5P6EYvS" ]
iclr_2020_B1elCp4KwH
Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech
In this paper, we present a method for learning discrete linguistic units by incorporating vector quantization layers into neural models of visually grounded speech. We show that our method is capable of capturing both word-level and sub-word units, depending on how it is configured. What differentiates this paper from prior work on speech unit learning is the choice of training objective. Rather than using a reconstruction-based loss, we use a discriminative, multimodal grounding objective which forces the learned units to be useful for semantic image retrieval. We evaluate the sub-word units on the ZeroSpeech 2019 challenge, achieving a 27.3% reduction in ABX error rate over the top-performing submission, while keeping the bitrate approximately the same. We also present experiments demonstrating the noise robustness of these units. Finally, we show that a model with multiple quantizers can simultaneously learn phone-like detectors at a lower layer and word-like detectors at a higher layer. We show that these detectors are highly accurate, discovering 279 words with an F1 score of greater than 0.5.
accept-talk
The paper is extremely well-written with a clear motivation (Section 1). The approach is novel. But I think the paper's biggest strength is in its very thorough experimental investigation. Their approach is compared to other very recent speech discretization methods on the same data using the same (ABX) evaluation metric. But the work goes further in that it systematically attempts to actually understand what types of structures are captured in the intermediate discrete layers, and it is able to answer this question convincingly. Finally, very good results on standard benchmarks are achieved. To authors: Please do include the additional discussions and results in the final paper.
train
[ "SJgUrG29or", "BJeC_3Lmir", "H1ebk6U7jS", "Hye3K58QsS", "BkxgxcImoB", "SyxAgKUmoB", "HJlEmq1jFS", "SyxuhAn2FS", "HJxsX9aL9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for very thorough answers to all my questions, and for the substantial additional experiments which further strengthens the paper.", "Detailed response to Reviewer #1 (part 2 of 2):\n\nQ1.6: My second point is that it is unclear why word-like units only appear when the higher-level discrete layers are ...
[ -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "BJeC_3Lmir", "HJlEmq1jFS", "HJlEmq1jFS", "SyxuhAn2FS", "HJxsX9aL9B", "iclr_2020_B1elCp4KwH", "iclr_2020_B1elCp4KwH", "iclr_2020_B1elCp4KwH", "iclr_2020_B1elCp4KwH" ]
iclr_2020_HkxQRTNYPH
Mirror-Generative Neural Machine Translation
Training neural machine translation models (NMT) requires a large amount of parallel corpus, which is scarce for many language pairs. However, raw non-parallel corpora are often easy to obtain. Existing approaches have not exploited the full potential of non-parallel bilingual data either in training or decoding. In this paper, we propose the mirror-generative NMT (MGNMT), a single unified architecture that simultaneously integrates the source to target translation model, the target to source translation model, and two language models. Both translation models and language models share the same latent semantic space, therefore both translation directions can learn from non-parallel data more effectively. Besides, the translation models and language models can collaborate together during decoding. Our experiments show that the proposed MGNMT consistently outperforms existing approaches in a variety of scenarios and language pairs, including resource-rich and low-resource languages.
accept-talk
This paper proposes a novel method for considering translations in both directions within the framework of generative neural machine translation, significantly improving accuracy. All three reviewers appreciated the paper, although they noted that the gains were somewhat small for the increased complexity of the model. Nonetheless, the baselines presented are already quite competitive, so improvements on these datasets are likely to never be extremely large. Overall, I found this to be a quite nice paper, and strongly recommend acceptance, perhaps as an oral presentation.
val
[ "rkeLk2RpFB", "SyeFqSD2sH", "HJlMuQwnsH", "r1eqJfw2sr", "BkgmuTUnjB", "HJgU75IhjB", "rJe2KI8hiS", "r1x8BrKCFB", "H1ei7P7g9H", "B1lUjQp_9r" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an approach to neural MT in which the joint (source, target) distribution is modeled as an average over two different factorizations: target given source and source given target. This gives rise to four distributions - two language models and two translation models - which are parameterized sep...
[ 8, -1, -1, -1, -1, -1, -1, 8, 8, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 5, -1 ]
[ "iclr_2020_HkxQRTNYPH", "HJlMuQwnsH", "r1eqJfw2sr", "B1lUjQp_9r", "rkeLk2RpFB", "H1ei7P7g9H", "r1x8BrKCFB", "iclr_2020_HkxQRTNYPH", "iclr_2020_HkxQRTNYPH", "iclr_2020_HkxQRTNYPH" ]
iclr_2020_rkeS1RVtPS
Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning
The posteriors over neural network weights are high dimensional and multimodal. Each mode typically characterizes a meaningfully different representation of the data. We develop Cyclical Stochastic Gradient MCMC (SG-MCMC) to automatically explore such distributions. In particular, we propose a cyclical stepsize schedule, where larger steps discover new modes, and smaller steps characterize each mode. We prove non-asymptotic convergence theory of our proposed algorithm. Moreover, we provide extensive experimental results, including ImageNet, to demonstrate the effectiveness of cyclical SG-MCMC in learning complex multimodal distributions, especially for fully Bayesian inference with modern deep neural networks.
accept-talk
This paper proposes a novel stochastic gradient Markov chain Monte Carlo method incorporating a cyclical step size schedule (cyclical SG-MCMC). The authors argue that this step size schedule allows the sampler to cross modes (when the step size is large) and locally explore modes (when the step size is smaller). SG-MCMC is a very promising method for Bayesian deep learning as it is both scalable and easy to incorporate into existing models. However, the stochastic setting often leads to the sampler getting stuck in a local mode due to a requirement of a small step size (which itself is often due to leaving out the Metropolis-Hastings accept / reject step). The cyclic learning rate intuitively helps the sampler escape local modes. This property is demonstrated on synthetic problems in comparison to existing SG-MCMC baselines. The authors demonstrate improved negative log likelihood on larger scale deep learning benchmarks, which is appreciated as the related literature often restricts experiments to small scale problems. The reviewers all found the paper compelling and argued for acceptance and thus the recommendation is to accept. Some questions remain for future work. E.g. all experiments were performed using a very low temperature, which implies that the methods are not sampling from the true Bayesian posterior. Why is such a low temperature needed for reasonable performance? In any case a very nice paper.
train
[ "Skl9cJDjjH", "S1eVwgy15B", "HJg-lpehiB", "BkehOR8oiB", "BJxT9hLoiH", "SyxU3oUosS", "SJxD4x_TYH", "HJlFucOe5S" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We have incorporated reviewers’ suggestions and comments into the new version. The changes are the following:\n\n1. Appendix G.3. Future Direction for the Wasserstein gradient flows. We have added a discussion about the relationship between MCMC and the Wasserstein gradient flow.\n\n2. Appendix H. Sensitivity of H...
[ -1, 8, -1, -1, -1, -1, 8, 6 ]
[ -1, 1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_rkeS1RVtPS", "iclr_2020_rkeS1RVtPS", "BkehOR8oiB", "S1eVwgy15B", "HJlFucOe5S", "SJxD4x_TYH", "iclr_2020_rkeS1RVtPS", "iclr_2020_rkeS1RVtPS" ]
iclr_2020_Hkxzx0NtDB
Your classifier is secretly an energy based model and you should treat it like one
We propose to reinterpret a standard discriminative classifier of p(y|x) as an energy based model for the joint distribution p(x, y). In this setting, the standard class probabilities can be easily computed as well as unnormalized values of p(x) and p(x|y). Within this framework, standard discriminative architectures may be used and the model can also be trained on unlabeled data. We demonstrate that energy based training of the joint distribution improves calibration, robustness, and out-of-distribution detection while also enabling our models to generate samples rivaling the quality of recent GAN approaches. We improve upon recently proposed techniques for scaling up the training of energy based models and present an approach which adds little overhead compared to standard classification training. Our approach is the first to achieve performance rivaling the state-of-the-art in both generative and discriminative learning within one hybrid model.
accept-talk
This paper uses an energy-based model to reinterpret the standard discriminative classifier and demonstrates that energy-based training of the joint distribution improves calibration, robustness, and out-of-distribution detection while generating samples with better quality than GAN-based approaches. The reviewers are very excited about this work, and the energy-based perspective of generative and discriminative learning. There is a unanimous agreement to strongly accept this paper after author response.
train
[ "Skglf_h_9H", "r1gdG_HosH", "S1evFK77jH", "HJlZdaG7iS", "r1gouRzmiS", "rkx3C6zXoB", "HJxPlnMXir", "Hyela9GXoS", "B1lbVT9liB", "HJlmF6qoOr", "Hklnbfn6tr", "S1xyit7s9B", "HJghWRO5qS", "r1eebQv1cr", "S1efqSUJ5H", "H1lIa3Hkcr", "SklBGzqbuB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "public" ]
[ "This paper introduces the idea of energy based model to the traditional classifier, and proposes a new framework to improve the performances of the model in multiple aspects. The idea of reinterpreting the traditional classifier is very interesting, and the experiments show some good results of the proposed method...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Hkxzx0NtDB", "Hyela9GXoS", "SklBGzqbuB", "Skglf_h_9H", "iclr_2020_Hkxzx0NtDB", "HJlZdaG7iS", "Hklnbfn6tr", "HJlmF6qoOr", "S1efqSUJ5H", "iclr_2020_Hkxzx0NtDB", "iclr_2020_Hkxzx0NtDB", "HJghWRO5qS", "iclr_2020_Hkxzx0NtDB", "SklBGzqbuB", "H1lIa3Hkcr", "iclr_2020_Hkxzx0NtDB", ...
iclr_2020_HJgLZR4KvH
Dynamics-Aware Unsupervised Discovery of Skills
Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.
accept-talk
This is a very interesting paper on unsupervised skill learning based on the predictability of skill effects, with the incorporation of these ideas into model-based RL. This is a clear accept, based on the clarity of the ideas presented and the writing, as well as the thorough and convincing experiments.
train
[ "rylloP4qjr", "Byl81OV9iB", "B1lfrbNcjH", "SJg5q-L5Yr", "BylOrNEmqH", "ByxhdaGN5r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "(a) While there are other methods using intermediate level primitives, prior work in unsupervised skill/option discovery has primarily been demonstrated in discrete control. DIAYN [2] (which matches our assumptions of unsupervised learning + continuous control) was shown to perform better than prior work, for exam...
[ -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, 4, 1, 1 ]
[ "BylOrNEmqH", "SJg5q-L5Yr", "ByxhdaGN5r", "iclr_2020_HJgLZR4KvH", "iclr_2020_HJgLZR4KvH", "iclr_2020_HJgLZR4KvH" ]
iclr_2020_BkgzMCVtPB
Optimal Strategies Against Generative Attacks
Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a "fake reality"? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.
accept-talk
This paper concerns the problem of defending against generative "attacks": that is, falsification of data for malicious purposes through the use of synthesized data based on "leaked" samples of real data. The paper casts the problem formally and assesses the problem of authentication in terms of the sample complexity at test time and the sample budget of the attacker. The authors prove a Nash equillibrium exists, derive a closed form for the special case of multivariate Gaussian data, and propose an algorithm called GAN in the Middle leveraging the developed principles, showing an implementation to perform better than authentication baselines and suggesting other applications. Reviewers were overall very positive, in agreement that the problem addressed is important and the contribution made is significant. Most criticisms were superficial. This is a dense piece of work, and presentation could still be improved. However this is clearly a significant piece of work addressing a problem of increasing importance, and is worthy of acceptance.
train
[ "r1lRr8jocr", "BkeCz02Lsr", "Bkepx22LjS", "HJxoCK3UoS", "BJlsYOhIoS", "BJge4FdRFH", "SygqiKVl5r", "rJliBd199H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper addresses the issue of malicious use of generative models to fool authentication/anomaly detection systems that rely on sensor data. The authors formulate the scenario as a maxmin game between an authenticator and an attacker, with limitations on the number of samples available to the authenticator to f...
[ 8, -1, -1, -1, -1, 8, 8, 8 ]
[ 3, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2020_BkgzMCVtPB", "rJliBd199H", "r1lRr8jocr", "BJge4FdRFH", "SygqiKVl5r", "iclr_2020_BkgzMCVtPB", "iclr_2020_BkgzMCVtPB", "iclr_2020_BkgzMCVtPB" ]
iclr_2020_r1lGO0EKDH
GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding
Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding methods to be applied to the coarsened graph, before it progressively refines the embeddings obtained at the coarsest level to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8×, when compared to the state-of-the-art unsupervised embedding methods.
accept-talk
The authors present an approach for learning graph embeddings by first fusing the graph to generate a new graph which encodes structural information as well as node attribute information. They then iteratively merge nodes based on spectral similarities to obtain coarser graphs. They then use existing methods to learn embeddings from this coarse graph and progressively refine the embeddings to finer graphs. They demonstrate the performance of their method on standard graph datasets. This paper has received positive reviews from all reviewers. The authors did a good job of addressing the reviewers' concerns and managed to convince the reviewers about their contributions. I request the authors to take the reviewers' suggestions into consideration while preparing the final draft of the paper and recommend that the paper be accepted.
train
[ "H1lQ2mwCKr", "SylX3XyW5B", "SklP0HP8oS", "SkemlgjsjB", "Syg21booor", "H1lIM0vcjS", "rklAtq0OsB", "B1gxvG18oH", "SyeI1Gk8oS", "Hyl2lAASoH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: The authors propose a way to fuse information on nodes of a graph with the topology of the graph in the large scale setting. The proposed approach is done in four phases where (i) the covariates in the nodes of the graph is first mapped in the graph space for fusion and fused using linear combination of t...
[ 6, 8, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_r1lGO0EKDH", "iclr_2020_r1lGO0EKDH", "iclr_2020_r1lGO0EKDH", "H1lIM0vcjS", "iclr_2020_r1lGO0EKDH", "rklAtq0OsB", "SklP0HP8oS", "SyeI1Gk8oS", "H1lQ2mwCKr", "SylX3XyW5B" ]
iclr_2020_rklHqRVKvH
Harnessing Structures for Value-Based Planning and Reinforcement Learning
Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks. As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions. This leads to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to value-based RL techniques to consistently achieve better performance on "low-rank" tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.
accept-talk
The paper shows empirical evidence that the optimal action-value function Q* often has a low-rank structure. It uses ideas from the matrix estimation/completion literature to provide a modification of value iteration that benefits from such a low-rank structure. The reviewers are all positive about this paper. They find the idea novel and the writing clear. There have been some questions about the relation of this concept of rank to other definitions and usage of rank in the RL literature. The authors' rebuttal seems to be satisfactory to the reviewers. Given these, I recommend acceptance of this paper.
train
[ "ryeeSstqsB", "S1xspKRKiH", "SkxfrKh_sB", "Hyepb_3dir", "r1glJ_nusH", "HyxOjDn_oH", "r1emLw2_sS", "r1xKGw3usH", "SJl0rjD2tr", "r1liO4xTtH", "rke4w8yIKr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would like to thank the authors for the time they took to answer my questions. They have addressed all my comments. ", "I thank the authors for their thorough response, clarifications, and additional experiments.\n\nRe Q1): The results presented in Appendix F1 provide a satisfying investigation into the effect...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 4 ]
[ "HyxOjDn_oH", "Hyepb_3dir", "iclr_2020_rklHqRVKvH", "r1glJ_nusH", "rke4w8yIKr", "SJl0rjD2tr", "r1xKGw3usH", "r1liO4xTtH", "iclr_2020_rklHqRVKvH", "iclr_2020_rklHqRVKvH", "iclr_2020_rklHqRVKvH" ]
iclr_2020_S1gSj0NKvB
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al., (2019)), rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques.
accept-talk
Reviewers unanimously accepted this paper.
val
[ "Skl8Xog0cB", "r1l5tjbjtH", "BJxOXBkhiB", "ryxgprk3iH", "ryg4srJnoH", "HyxKYBJhoB", "r1g3S9Nycr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\nExtending the observations of Frankle et al. (2019, \"The Lottery Ticket Hypothesis\"), this paper examines \"rewinding\" as an alternative to fine-tuning in a typical network pruning process. After training to convergence for T iterations, the k% of weights with the smallest magnitude are pruned (set t...
[ 8, 8, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, 3 ]
[ "iclr_2020_S1gSj0NKvB", "iclr_2020_S1gSj0NKvB", "iclr_2020_S1gSj0NKvB", "Skl8Xog0cB", "r1g3S9Nycr", "r1l5tjbjtH", "iclr_2020_S1gSj0NKvB" ]
iclr_2020_SJeD3CEFPH
Meta-Q-Learning
This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL.
accept-talk
This paper’s contribution is twofold: 1) it proposes a new meta-RL method that leverages off-policy meta-learning by importance weighting, and 2) it demonstrates that current popular meta-RL benchmarks don’t necessarily require meta-learning, as a simple non-meta-learning algorithm (TD3) conditioned on a context variable of the trajectory is competitive with SoTA meta-learning approaches. The reviewers all agreed that the approach is interesting and the contributions are significant. I’d like to thank the reviewers for engaging in a spirited discussion about this paper, both with each other and with the authors. There was also a disagreement about the semantics of whether the approach can be classified as “meta-learning”, but in my opinion this argument is orthogonal to the practical contributions. After the revisions and rebuttal, reviewers agreed that the paper was improved and increased their ratings as a result, with all recommending accept. There’s a good chance this work will make an impactful contribution to the field of meta-reinforcement learning and therefore I recommend it for an oral presentation.
val
[ "SJxhwmZqFB", "rJeUksY3jr", "Skx-VcF2or", "SJxOyn32KB", "rJxzNJ7nir", "BJgWhC1CtS", "rkgGjKNiiS", "ByliWd75iS", "SJeN18m9ir", "S1g8pEmcsr", "S1x9SU79iH", "SJxyoUQ5jH", "Hyl7frmcir" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The authors investigate meta-learning in reinforcement learning with respect to sample efficiency and the necessity of meta-learning an adaptation scheme. Based on their findings, they propose a new algorithm 'MQL' (Meta-Q-Learning) that is off-policy and has a fixed adaptation scheme but is still competitive on m...
[ 6, -1, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SJeD3CEFPH", "rJxzNJ7nir", "rkgGjKNiiS", "iclr_2020_SJeD3CEFPH", "SJeN18m9ir", "iclr_2020_SJeD3CEFPH", "Hyl7frmcir", "iclr_2020_SJeD3CEFPH", "SJxOyn32KB", "BJgWhC1CtS", "SJxhwmZqFB", "S1x9SU79iH", "S1g8pEmcsr" ]
iclr_2020_Ske31kBtPr
Mathematical Reasoning in Latent Space
We design and conduct a simple experiment to study whether neural networks can perform several steps of approximate reasoning in a fixed dimensional latent space. The set of rewrites (i.e. transformations) that can be successfully performed on a statement represents essential semantic features of the statement. We can compress this information by embedding the formula in a vector space, such that the vector associated with a statement can be used to predict whether a statement can be rewritten by other theorems. Predicting the embedding of a formula generated by some rewrite rule is naturally viewed as approximate reasoning in the latent space. In order to measure the effectiveness of this reasoning, we perform approximate deduction sequences in the latent space and use the resulting embedding to inform the semantic features of the corresponding formal statement (which is obtained by performing the corresponding rewrite sequence using real formulas). Our experiments show that graph neural networks can make non-trivial predictions about the rewrite-success of statements, even when they propagate predicted latent representations for several steps. Since our corpus of mathematical formulas includes a wide variety of mathematical disciplines, this experiment is a strong indicator for the feasibility of deduction in latent space in general.
accept-talk
This paper was very well received by the reviewers with solid Accept ratings across the board. The subject matter is quite interesting - mathematical reasoning in latent space, and it was suggested by a reviewer that this could be a good candidate for an oral. The AC agrees and recommends acceptance as an oral. Some of the intuitions of what is being done in this paper could be better visualized and presented and I encourage the authors to think carefully about how to present this work if an oral presentation is granted by the PCs.
val
[ "Bkx-GQrhtS", "HyxcG0Phor", "ryx8LTFijr", "HJl5cVvoYS", "r1eZtFVKir", "BJeJJl0doB", "H1g9CATusr", "BJeoaTpujH", "BkgqHnp_oS", "rJxGlK6Ttr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a method to do math reasoning purely using formula embeddings. The proposed method employs a graph neural network to embed math formulas to a latent space. The formula embeddings are then combined with theorem embeddings (also formulas, computed in the same way as formula embeddings) to predict ...
[ 8, -1, -1, 8, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_Ske31kBtPr", "BkgqHnp_oS", "BJeoaTpujH", "iclr_2020_Ske31kBtPr", "H1g9CATusr", "iclr_2020_Ske31kBtPr", "HJl5cVvoYS", "rJxGlK6Ttr", "Bkx-GQrhtS", "iclr_2020_Ske31kBtPr" ]
iclr_2020_r1eBeyHFDH
A Theory of Usable Information under Computational Constraints
We propose a new framework for reasoning about information in complex systems. Our foundation is based on a variational extension of Shannon’s information theory that takes into account the modeling power and computational constraints of the observer. The resulting predictive V-information encompasses mutual information and other notions of informativeness such as the coefficient of determination. Unlike Shannon’s mutual information and in violation of the data processing inequality, V-information can be created through computation. This is consistent with deep neural networks extracting hierarchies of progressively more informative features in representation learning. Additionally, we show that by incorporating computational constraints, V-information can be reliably estimated from data even in high dimensions with PAC-style guarantees. Empirically, we demonstrate predictive V-information is more effective than mutual information for structure learning and fair representation learning. Codes are available at https://github.com/Newbeeer/V-information .
accept-talk
All reviewers unanimously accept the paper.
train
[ "S1lklsZcor", "Sket3aNBoB", "Bkx9phEroS", "ryl1OrQ0KS", "r1eqrvhf5S" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for addressing all issues raised in a convincing and thorough manner and preparing a revised manuscript!", "Thank you for your review and suggestions\n\nQ: Suppose that Y is a scalar and X is a noisy estimate of Y. Suppose we restrict F to the family of Gaussian distributions. That is, with side inform...
[ -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 3, 4 ]
[ "Bkx9phEroS", "r1eqrvhf5S", "ryl1OrQ0KS", "iclr_2020_r1eBeyHFDH", "iclr_2020_r1eBeyHFDH" ]
iclr_2020_rygixkHKDH
Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning
Learning overcomplete representations finds many applications in machine learning and data analytics. In the past decade, despite the empirical success of heuristic methods, theoretical understandings and explanations of these algorithms are still far from satisfactory. In this work, we provide new theoretical insights for several important representation learning problems: learning (i) sparsely used overcomplete dictionaries and (ii) convolutional dictionaries. We formulate these problems as ℓ4-norm optimization problems over the sphere and study the geometric properties of their nonconvex optimization landscapes. For both problems, we show the nonconvex objective has benign (global) geometric structures, which enable the development of efficient optimization methods finding the target solutions. Finally, our theoretical results are justified by numerical simulations.
accept-talk
This paper investigates the use of non-convex optimization for two dictionary learning problems, i.e., over-complete dictionary learning and convolutional dictionary learning. The paper provides theoretical results, supported by empirical experiments, showing that formulating the problem as an ℓ4 optimization gives rise to a landscape with strict saddle points, and as such, they can be escaped with negative curvature. As a result, descent methods can be used for learning with provable guarantees. All reviewers found the work extremely interesting, highlighting the importance of the results that constitute "a solid improvement over the prior understandings on over-complete DL" and "extends our understanding of provable methods for dictionary learning". This is an interesting submission on non-convex optimization, and as such of interest to the ICLR ML community. I'm recommending this work for acceptance.
val
[ "BygCupVAYH", "H1eJ2sF2or", "ryxV5ne2jB", "ByeYfqGlcH", "S1xicX0uiS", "rJxa2N0djH", "SkeexDRusH", "rJe375X3KB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the dictionary learning problem for two popular settings involving sparsely used over-complete dictionaries and convolutional dictionaries.\n\nFor the over-complete dictionary setting, given the measurements of the form $Y = A X$, where $A$ and $X$ denote the over-complete dictionary and the spa...
[ 8, -1, -1, 8, -1, -1, -1, 8 ]
[ 4, -1, -1, 1, -1, -1, -1, 4 ]
[ "iclr_2020_rygixkHKDH", "ryxV5ne2jB", "S1xicX0uiS", "iclr_2020_rygixkHKDH", "ByeYfqGlcH", "BygCupVAYH", "rJe375X3KB", "iclr_2020_rygixkHKDH" ]
iclr_2020_ryghZJBKPS
Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds
We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real world active learning problems.
accept-talk
The paper provides a simple method of active learning for classification using deep nets. The method is motivated by choosing examples based on an embedding that represents the last-layer gradients, which is shown to have a connection to a lower bound on model change if labeled. The algorithm is simple and easy to implement. The method is justified by convincing experiments. The reviewers agree that the rebuttal and revisions cleared up any misunderstandings. This is a solid empirical work on an active learning technique that seems to have a lot of promise. Accept.
train
[ "B1loQbi2YB", "HJgcG9o2FB", "HJxiKftooH", "SJloeGs3oS", "S1lUYBYoiB", "S1xcbNFijB", "HJeqfZW0YB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces an algorithm for active learning in deep neural networks named BADGE. It consists basically of two steps: (1) computing how uncertain the model is about the examples in the dataset (by looking at the gradients of the loss with respect to the parameters of the last layer of the network), and ...
[ 8, 6, -1, -1, -1, -1, 8 ]
[ 1, 1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_ryghZJBKPS", "iclr_2020_ryghZJBKPS", "HJeqfZW0YB", "iclr_2020_ryghZJBKPS", "B1loQbi2YB", "HJgcG9o2FB", "iclr_2020_ryghZJBKPS" ]
iclr_2020_H1gDNyrKDS
Understanding and Robustifying Differentiable Architecture Search
Differentiable Architecture Search (DARTS) has attracted a lot of attention due to its simplicity and small search costs achieved by a continuous relaxation and an approximation of the resulting bi-level optimization problem. However, DARTS does not work robustly for new problems: we identify a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. We study this failure mode and show that, while DARTS successfully minimizes validation loss, the found solutions generalize poorly when they coincide with high validation loss curvature in the architecture space. We show that by adding one of various types of regularization we can robustify DARTS to find solutions with less curvature and better generalization properties. Based on these observations, we propose several simple variations of DARTS that perform substantially more robustly in practice. Our observations are robust across five search spaces on three image classification tasks and also hold for the very different domains of disparity estimation (a dense regression task) and language modelling.
accept-talk
This paper studies the properties of Differentiable Architecture Search, and in particular when it fails, and then proposes modifications that improve its performance for several tasks. The reviews were all very supportive with three Accept opinions, and authors have addressed their comments and suggestions. Given the unanimous reviews, this appears to be a clear Accept.
train
[ "SylDRJcpYr", "rJg2Lwr3ir", "S1gi6wS2oS", "SyeYBOShoH", "B1eoXuH3iB", "HJgy-_ShjB", "HJgGSAKk9H", "HylLUgPU5S", "HkeEK8eZdS", "HkgLJuJRvS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "----- Updated after rebuttal period ---\n\nThe author's detailed response effectively addressed my concerns. I am moving my score to Accept. This paper proposes an interesting systematic study of differentiable approach in NAS.\n\n------ Original Review ----\n\nSummary \n\nThis paper presents a systematic evaluati...
[ 8, -1, -1, -1, -1, -1, 8, 8, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 1, 4, -1, -1 ]
[ "iclr_2020_H1gDNyrKDS", "SylDRJcpYr", "rJg2Lwr3ir", "HylLUgPU5S", "HJgGSAKk9H", "S1gi6wS2oS", "iclr_2020_H1gDNyrKDS", "iclr_2020_H1gDNyrKDS", "HkgLJuJRvS", "iclr_2020_H1gDNyrKDS" ]
iclr_2020_ryxdEkHtPS
A Closer Look at Deep Policy Gradients
We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the "true" gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.
accept-talk
The paper empirically studies the behaviour of deep policy gradient algorithms, and reveals several unexpected observations that are not explained by the current theory. All three reviewers are excited about this work and recommend acceptance.
test
[ "rkgkefNAYr", "rygtBhTOor", "BylmMh6djH", "B1lOJhaOir", "Sye656FTFr", "r1g2J6P6tr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This is an interesting and important paper, it emphasizes and analyzes how policy gradient methods modify their objective functions and how this leads to training differences (and often errors w.r.t. the true objective). I have some minor comments on terminology used that I would like to see properly defined withi...
[ 8, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ryxdEkHtPS", "r1g2J6P6tr", "Sye656FTFr", "rkgkefNAYr", "iclr_2020_ryxdEkHtPS", "iclr_2020_ryxdEkHtPS" ]
iclr_2020_r1etN1rtPB
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate the consequences of "code-level optimizations:" algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning.
accept-talk
This paper provides a careful and well-executed evaluation of the code-level details of two leading policy search algorithms, which are typically considered implementation details and therefore often unstated or brushed aside in papers. These are revealed to have major implications for the performance of both algorithms. The reviewers are all in agreement that this paper has important reproducibility and evaluation implications for the field, and adds substantially to our body of knowledge on policy gradient algorithms. I therefore recommend it be accepted. However, a serious limitation is that only 3 random seeds were used to get average performance in the first, key experiment. Experiments are expensive, but that result is not meaningful without more runs, and arguably could be misleading rather than informative. The authors should increase the number of runs as much as possible, at least to 10 but ideally more.
train
[ "H1xMvclFtr", "HJeISRZAKS", "rkxSRfF3iB", "rylcHmbnoS", "BkeUPN-2oS", "HJxMBiZ9sS", "HJgUz6HFor", "SylZgZgFor", "Byeyga6OjH", "rJeMhnp_sr", "HkeJq2ausH", "Hyx12SsVsB", "HJexEd0ZoB", "BJeHgxi6tr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "Summary\n\nThis paper calls to attention the importance of specifying all performance altering implementation details that are current inherent in the state-of-the-art deep policy gradient community. Specifically, this paper builds very closely on the work started by Henderson et al. 2017, building a conversation...
[ 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_r1etN1rtPB", "iclr_2020_r1etN1rtPB", "HkeJq2ausH", "iclr_2020_r1etN1rtPB", "SylZgZgFor", "HJgUz6HFor", "rJeMhnp_sr", "Byeyga6OjH", "HJeISRZAKS", "BJeHgxi6tr", "H1xMvclFtr", "HJexEd0ZoB", "iclr_2020_r1etN1rtPB", "iclr_2020_r1etN1rtPB" ]
iclr_2020_BJeAHkrYDS
Fast Task Inference with Variational Intrinsic Successor Features
It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies. However, one limitation of this formulation is the difficulty to generalize beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks. Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space. In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation. To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework. We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase. Achieving human-level performance on 12 games and beating all baselines, we believe VISR represents a step towards agents that rapidly learn from limited feedback.
accept-talk
This work uses a variational autoencoder-based approach to combine the benefits of recent methods that learn policies with behavioral diversity with the advantages of successor representations, addressing the generalization and slow inference problems of competing methods such as DIAYN. After discussion of the author rebuttal, the reviewers all agreed on the significant contribution of the paper and that concerns about clarity were sufficiently addressed. Thus, I recommend this paper for acceptance.
train
[ "BJx2exIRFS", "Skg5-FY2iH", "r1lY0dtnjH", "Hyl1FOK3sr", "B1lpxwFhoB", "rygRKCvOtr", "SyekDcTg9B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors address the problem of finding optimal policies in reinforcement learning problems after an initial unsupervised phase in which the agent can interact with the environment without receiving rewards. After this initial phase, the agent can again interact with the environment while having access to the ...
[ 6, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_BJeAHkrYDS", "SyekDcTg9B", "rygRKCvOtr", "BJx2exIRFS", "iclr_2020_BJeAHkrYDS", "iclr_2020_BJeAHkrYDS", "iclr_2020_BJeAHkrYDS" ]
iclr_2020_rkeZIJBYvr
Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
While tasks in realistic settings may come with varying numbers of instances and classes, the existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed. Due to such restriction, they learn to equally utilize the meta-knowledge across all the tasks, even when the number of instances per task and class largely varies. Moreover, they do not consider distributional difference in unseen tasks, on which the meta-knowledge may have less usefulness depending on the task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on two realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework.
accept-talk
The reviewers generally agreed that the paper presents a compelling method that addresses an important problem. This paper should clearly be accepted, and I would suggest for it to be considered for an oral presentation. I would encourage the authors to take into account the reviewers' suggestions (many of which were already addressed in the rebuttal period) and my own suggestion. The main suggestion I would have in regard to improving the paper is to position it a bit more carefully with respect to prior work on Bayesian meta-learning. This is an active research field, with quite a number of papers. There are two that are especially close to the VI method that the authors are proposing: Gordon et al. and Finn et al. (2018). For example, the graphical model in Figure 2 looks nearly identical to the ones presented in these two prior papers, as does the variational inference procedure. There is nothing wrong with that, but it would be appropriate for the authors to discuss this prior work a bit more diligently -- currently the relationship to these prior works is not at all apparent from their discussion in the related work section. A more appropriate way to present this would be to begin Section 3.2 by stating that this framework follows prior work -- there is nothing wrong with building on prior work, and the significant and important contribution of this paper is in no way diminished by being up-front about which parts are inspired by previous papers.
test
[ "rJep4Itotr", "B1xVgzZ3iS", "SJxmPQW3jH", "rkgunfZ2sS", "rked9EZ2oH", "HygSWGg2tS", "S1lLvaCTYr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n-------------\nThis paper proposed to improve existing meta learning algorithms in the presence of task imbalance, class imbalance, and out-of-distribution tasks. Starting from the model-agnostic meta-learning (MAML) algorithm (Finn et al. 2017), to tackle task imbalance, where the number of training exam...
[ 8, -1, -1, -1, -1, 8, 8 ]
[ 1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rkeZIJBYvr", "S1lLvaCTYr", "rJep4Itotr", "HygSWGg2tS", "iclr_2020_rkeZIJBYvr", "iclr_2020_rkeZIJBYvr", "iclr_2020_rkeZIJBYvr" ]
iclr_2020_S1eALyrYDH
RNA Secondary Structure Prediction By Learning Unrolled Algorithms
In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem. The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled algorithm for constrained programming as the template for deep architectures to enforce constraints. With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (especially for pseudoknotted structures), while being as efficient as the fastest algorithms in terms of inference time.
accept-talk
This paper proposes an RNA structure prediction algorithm based on an unrolled inference algorithm. The proposed approach overcomes limitations of previous methods, such as dynamic programming (which does not work for molecular configurations that do not factorize) or energy-based models (which require a minimization step, e.g. using MCMC to traverse the energy landscape and find minima). Reviewers agreed that the method presented here is novel in this application domain, has an excellent empirical evaluation setup with strong numerical results, and has the potential to be of interest to the wider deep learning community. The AC shares these views and recommends an enthusiastic acceptance.
train
[ "B1g-0PdH3r", "B1lJZQthiB", "r1eofYOniB", "BkxrY85dsH", "HJgMRKJuiB", "S1eeyK1_iH", "rJge8iyuoB", "HJeLA51_iS", "SyxL1Okuor", "HJxLplGwor", "HJelWTWPsB", "H1gcAa_yoS", "Bklpej_AKH", "SJlmuE8k5S", "S1xh9zkV9H", "r1e3VSpiqr", "Hyx-fHaiqS", "B1xm6dZqqS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "RNA Secondary Structure Prediction by Learning Unrolled Algorithms\n\nThis paper proposes E2Efold, which is an RNA secondary structure prediction algorithm based on an unrolled algorithm. Previous methods rely on dynamic programming (which does not work for molecular configurations that do not factorize) or rely o...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 1, 1, -1, -1, -1 ]
[ "iclr_2020_S1eALyrYDH", "r1eofYOniB", "HJeLA51_iS", "iclr_2020_S1eALyrYDH", "SJlmuE8k5S", "S1xh9zkV9H", "Bklpej_AKH", "HJgMRKJuiB", "iclr_2020_S1eALyrYDH", "HJelWTWPsB", "H1gcAa_yoS", "r1e3VSpiqr", "iclr_2020_S1eALyrYDH", "iclr_2020_S1eALyrYDH", "iclr_2020_S1eALyrYDH", "Hyx-fHaiqS", ...
iclr_2020_BJlQtJSKDB
Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search
Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go). However, they generally require a large number of rollouts, making their applications costly. Furthermore, it is also extremely challenging to parallelize MCTS due to its inherent sequential nature: each rollout heavily relies on the statistics (e.g., node visitation counts) estimated from previous simulations to achieve an effective exploration-exploitation tradeoff. In spite of these difficulties, we develop an algorithm, WU-UCT, to effectively parallelize MCTS, which achieves linear speedup and exhibits only limited performance loss with an increasing number of workers. The key idea in WU-UCT is a set of statistics that we introduce to track the number of ongoing yet incomplete simulation queries (termed unobserved samples). These statistics are used to modify the UCT tree policy in the selection steps in a principled manner to retain an effective exploration-exploitation tradeoff when we parallelize the most time-consuming expansion and simulation steps. Experiments on a proprietary benchmark and the Atari Game benchmark demonstrate the linear speedup and the superior performance of WU-UCT compared with existing techniques.
accept-talk
The paper investigates parallelizing MCTS. The authors propose a simple method based on only updating the exploration bonus in (P)-UCT by taking into account the number of currently ongoing / unfinished simulations. The approach is extensively tested on a variety of environments, notably including ATARI games. This is a good paper. The approach is simple, well motivated and effective. The experimental results are convincing and the authors made a great effort to further improve the paper during the rebuttal period. I recommend an oral presentation of this work, as MCTS has become a core method in RL and planning, and therefore I expect a lot of interest in the community for this work.
val
[ "r1gt3TnWqB", "r1l3oj5nsB", "SJlF9m02qr", "BJlI8uR5jS", "rklIb1bFjS", "S1xeFWdSsH", "HJe0YmUrsr", "H1x_y7UHir", "B1lF5ZUHoS", "SJxPJ-IHjH", "BJgtnAHHiS", "H1eaaz-3dr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper introduces a new algorithm for parallelizing monte carlo tree search (MCTS). MCTS is hard to parallelize as we have to keep track of the statistics of the node of the tree, which are typically not up-to-date in a parallel execution. The paper introduces a new algorithm that updates the visitation counts ...
[ 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_BJlQtJSKDB", "B1lF5ZUHoS", "iclr_2020_BJlQtJSKDB", "rklIb1bFjS", "S1xeFWdSsH", "HJe0YmUrsr", "H1x_y7UHir", "SJlF9m02qr", "r1gt3TnWqB", "H1eaaz-3dr", "iclr_2020_BJlQtJSKDB", "iclr_2020_BJlQtJSKDB" ]
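The WU-UCT abstract in the last record above describes modifying the UCT selection rule with counts of ongoing, incomplete simulations. The sketch below illustrates that idea under stated assumptions: the function name, argument names, and the exact placement of the unobserved counts inside the standard UCT bonus are illustrative guesses, not the paper's exact formulation.

```python
import math

def wu_uct_score(q_value, parent_visits, parent_unobserved,
                 child_visits, child_unobserved, c=1.414):
    """Illustrative WU-UCT-style selection score.

    Standard UCT exploration bonus, with visit counts augmented by the
    number of ongoing ("unobserved") simulations at the parent and child,
    so that parallel workers are steered away from nodes that are already
    being simulated. A sketch only, not the paper's exact rule.
    """
    n_parent = parent_visits + parent_unobserved
    n_child = child_visits + child_unobserved
    return q_value + c * math.sqrt(2.0 * math.log(n_parent) / n_child)

# A child whose simulations have all completed scores higher than an
# otherwise identical child with several pending simulations, which is
# the intended effect: pending work counts as "already explored".
free = wu_uct_score(0.5, 100, 0, 10, 0)
busy = wu_uct_score(0.5, 100, 0, 10, 5)
```

In this toy comparison, `free > busy`, so a selection step would prefer the node with no outstanding workers, spreading exploration across the tree.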