Features (one record per paper):
- paper_id: string (length 19–21)
- paper_title: string (length 8–170)
- paper_abstract: string (length 8–5.01k)
- paper_acceptance: string (18 classes)
- meta_review: string (length 29–10k)
- label: string (3 classes)
- review_ids: list
- review_writers: list
- review_contents: list
- review_ratings: list
- review_confidences: list
- review_reply_tos: list
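A record with this schema can be checked programmatically. The following is a minimal sketch, assuming rows are parsed into Python dicts; the field names come from the feature list above, while `validate_record` and the abridged sample row are illustrative, not part of the dataset's own tooling:

```python
# Field names taken from the schema above; everything else is illustrative.
LIST_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]
STRING_FIELDS = [
    "paper_id", "paper_title", "paper_abstract",
    "paper_acceptance", "meta_review", "label",
]

def validate_record(record: dict) -> bool:
    """Check that a parsed row has the expected field types and that
    the review-level lists are parallel (same length)."""
    for field in STRING_FIELDS:
        if not isinstance(record.get(field), str):
            return False
    for field in LIST_FIELDS:
        if not isinstance(record.get(field), list):
            return False
    lengths = {len(record[field]) for field in LIST_FIELDS}
    return len(lengths) == 1

# Hypothetical sample row, abridged from the first record below.
row = {
    "paper_id": "iclr_2020_SklKcRNYDH",
    "paper_title": "Extreme Tensoring for Low-Memory Preconditioning",
    "paper_abstract": "...",
    "paper_acceptance": "accept-poster",
    "meta_review": "...",
    "label": "train",
    "review_ids": ["SyleXNLiiH", "r1lCAy65cH"],
    "review_writers": ["author", "official_reviewer"],
    "review_contents": ["...", "..."],
    "review_ratings": [-1, 8],
    "review_confidences": [-1, 5],
    "review_reply_tos": ["r1lCAy65cH", "iclr_2020_SklKcRNYDH"],
}
print(validate_record(row))  # True for a well-formed row
```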
iclr_2020_SklKcRNYDH
Extreme Tensoring for Low-Memory Preconditioning
State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption. This has created a recent demand for memory-efficient optimizers. To this end, we investigate the limits and performance tradeoffs of memory-efficient adaptively preconditioned gradient methods. We propose \emph{extreme tensoring} for high-dimensional stochastic optimization, showing that an optimizer needs very little memory to benefit from adaptive preconditioning. Our technique applies to arbitrary models (not necessarily with tensor-shaped parameters), and is accompanied by regret and convergence guarantees, which shed light on the tradeoffs between preconditioner quality and expressivity. On a large-scale NLP model, we reduce the optimizer memory overhead by three orders of magnitude, without degrading performance.
accept-poster
After the author rebuttal, the score of this paper increased. Discussions with reviewers were substantive, and the AC recommends acceptance.
train
[ "SyleXNLiiH", "r1lCAy65cH", "HJx3ljfTFB", "rkgDq6IijS", "B1eZlFLisH", "SkeEI4Uijr", "BJlvN4IsiB", "H1grZ3oJ9r" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "@Camera-ready update: Done. We replicated the results for SM3, and incorporated the very helpful suggestions for discussing how this fits in with other literature.\n\n===\n\nThanks for the particularly thoughtful review. We have incorporated the minor points into the current revision. Responses to the major points...
[ -1, 8, 6, -1, -1, -1, -1, 6 ]
[ -1, 5, 1, -1, -1, -1, -1, 3 ]
[ "r1lCAy65cH", "iclr_2020_SklKcRNYDH", "iclr_2020_SklKcRNYDH", "B1eZlFLisH", "SkeEI4Uijr", "HJx3ljfTFB", "H1grZ3oJ9r", "iclr_2020_SklKcRNYDH" ]
iclr_2020_HylpqA4FwS
RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.
accept-poster
In this paper, the authors propose the incremental RNN, a novel recurrent neural network architecture that resolves the exploding/vanishing gradient problem. While the reviewers initially had various concerns, the paper has been substantially improved during the discussion period and all questions by the reviewers have been resolved. The main idea of the paper is elegant, the theoretical results interesting, and the empirical evaluation extensive. The reviewers and the AC recommend acceptance of this paper to ICLR-2020.
train
[ "SklIxwXhiH", "S1l7cek0FS", "SJg6kpf3iH", "HkexDJynsS", "S1lSiEwMjH", "SJgqAKBGsB", "SJgGchHfjr", "SJgfNIk3YB", "SyeP2jSaYH" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We again thank the reviewer for the detailed feedback. We feel that we did not emphasize a central point in our response sufficiently. Perhaps identity gradient is over-emphasized, but in reality what we seek is that over the length T, we can circumvent vanishing/exploding gradient. This means that we neither want...
[ -1, 8, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "HkexDJynsS", "iclr_2020_HylpqA4FwS", "S1lSiEwMjH", "SJgqAKBGsB", "S1l7cek0FS", "SJgfNIk3YB", "SyeP2jSaYH", "iclr_2020_HylpqA4FwS", "iclr_2020_HylpqA4FwS" ]
iclr_2020_Hkl1iRNFwS
The Early Phase of Neural Network Training
Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state and its updates during these early iterations of training, and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are label-agnostic, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.
accept-poster
This paper studies numerous ways in which the statistics of network weights evolve during network training. Reviewers are not entirely sure what conclusions to make from these studies, and training dynamics can be strongly impacted by arbitrary choices made in the training process. Despite these issues, the reviewers think the observed results are interesting enough to clear the bar for publication.
val
[ "SJlvCqBatr", "rke5xYS7iB", "HkeEPdBXoS", "HJlSyuSQjB", "HJx_-RnntS", "H1ggWiM15r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims at exploring the properties of neural network training during the early phase. By some studies on the lottery ticket hypothesis, something important happens during the early phase of training so rewinding the network should go to these early phases instead of the initial phase. So, what is importan...
[ 8, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, 3, 4 ]
[ "iclr_2020_Hkl1iRNFwS", "H1ggWiM15r", "SJlvCqBatr", "HJx_-RnntS", "iclr_2020_Hkl1iRNFwS", "iclr_2020_Hkl1iRNFwS" ]
iclr_2020_ryxgsCVYPr
NeurQuRI: Neural Question Requirement Inspector for Answerability Prediction in Machine Reading Comprehension
Real-world question answering systems often retrieve potentially relevant documents to a given question through a keyword search, followed by a machine reading comprehension (MRC) step to find the exact answer from them. In this process, it is essential to properly determine whether an answer to the question exists in a given document. This task often becomes complicated when the question involves multiple different conditions or requirements which are to be met in the answer. For example, in a question "What was the projection of sea level increases in the fourth assessment report?", the answer should properly satisfy several conditions, such as "increases" (but not decreases) and "fourth" (but not third). To address this, we propose a neural question requirement inspection model called NeurQuRI that extracts a list of conditions from the question, each of which should be satisfied by the candidate answer generated by an MRC model. To check whether each condition is met, we propose a novel, attention-based loss function. We evaluate our approach on SQuAD 2.0 dataset by integrating the proposed module with various MRC models, demonstrating the consistent performance improvements across a wide range of state-of-the-art methods.
accept-poster
This paper extracts a list of conditions from the question, each of which should be satisfied by the candidate answer generated by an MRC model. All reviewers agree that this approach is interesting (verification and validation) and the experiments are solid. Concerns raised by one of the reviewers were promptly answered by the authors, raising the average score to accept.
train
[ "HkeaCf55FH", "H1gdyfH2oB", "ryxVdWr3sS", "Hkg8reH2iB", "SklFQPvNsr", "S1x9_DP4iB", "ryeN6wD4sB", "rkxcXY9JFB", "BJxQM0_CYr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nPaper Summary:\n\nThis paper proposes a neural question requirement inspection models called NeurQuRI. It is different from existing answer verifiers in that NeurQuRI pinpoints where the mismatch occurs between the question and the candidate answer in unanswerable cases. Experiments with SQuAD 2.0 show the effe...
[ 6, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ryxgsCVYPr", "S1x9_DP4iB", "SklFQPvNsr", "iclr_2020_ryxgsCVYPr", "BJxQM0_CYr", "HkeaCf55FH", "rkxcXY9JFB", "iclr_2020_ryxgsCVYPr", "iclr_2020_ryxgsCVYPr" ]
iclr_2020_SkgGjRVKDS
Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization
Batch Normalization (BN) is one of the most widely used techniques in the Deep Learning field. But its performance can degrade severely with insufficient batch size. This weakness limits the usage of BN on many computer vision tasks like detection or segmentation, where batch size is usually small due to the constraint of memory consumption. Therefore many modified normalization techniques have been proposed, which either fail to restore the performance of BN completely, or have to introduce additional nonlinear operations in the inference procedure at huge extra cost. In this paper, we reveal that there are two extra batch statistics involved in the backward propagation of BN, which have never been well discussed before. The extra batch statistics associated with gradients can also severely affect the training of deep neural networks. Based on our analysis, we propose a novel normalization method, named Moving Average Batch Normalization (MABN). MABN can completely restore the performance of vanilla BN in small batch cases, without introducing any additional nonlinear operations in the inference procedure. We prove the benefits of MABN by both theoretical analysis and experiments. Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO. The code has been released at https://github.com/megvii-model/MABN.
accept-poster
This work introduces the Moving Average Batch Normalization (MABN) method to address performance issues of batch normalization in small batch cases. The method is theoretically analyzed and empirically verified on ImageNet and COCO. Some issues were raised by the reviewers, such as the restrictive nature of some of the assumptions in the analysis, as well as performance degradation due to the lack of centralizing feature maps. Nevertheless, all the reviewers found the contributions of this paper interesting and important, and they all recommended acceptance.
train
[ "BkeYyKYnKS", "rJxHNwjTKr", "BJeCWt3KiH", "Syx9qFhtjB", "ByehHF2KjS", "Bke_CWyaFH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper provides a new method to deal with the small batch size problem of BN, called MABN. Compared to BRN, MABN has two main contributions: 1) find two statistics, g and ψ, in BP to apply moving average operation without introducing too much overhead; 2) reduce the number of statistics of BN via centralizing ...
[ 6, 6, -1, -1, -1, 8 ]
[ 5, 5, -1, -1, -1, 1 ]
[ "iclr_2020_SkgGjRVKDS", "iclr_2020_SkgGjRVKDS", "rJxHNwjTKr", "BkeYyKYnKS", "Bke_CWyaFH", "iclr_2020_SkgGjRVKDS" ]
iclr_2020_rJeQoCNYDS
Single Episode Policy Transfer in Reinforcement Learning
Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot. In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.
accept-poster
This is an interesting paper that is concerned with single episode transfer to reinforcement learning problems with different dynamics models, assuming they are parameterised by a latent variable. Given some initial training tasks to learn about this parameter, and a new test task, they present an algorithm to probe and estimate the latent variable on the test task, whereafter the inferred latent variable is used as input to a control policy. There were several issues raised by the reviewers. Firstly, there were questions with the number of runs and the baseline implementations, which were all addressed in the rebuttals. Then, there were questions around the novelty and the main contribution being wall-clock time. These issues were also adequately addressed. In light of this, I recommend acceptance of this paper.
val
[ "Hyl52JbTYS", "SJgRcIL3iB", "SklGwkk2jB", "ryxaY3AoiS", "rJeJD0Ajsr", "BklO1ACssB", "rklESaCjir", "BJlzDiCoir", "ryewBvJGoH", "H1xwruy6tH", "HJgzZIPrcr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThis paper addresses the problem of transfer in RL. After an agent is given an opportunity to train from a distribution of environments, we want an agent to perform well on the test environment. This paper specifically focuses on the setting where the state space, action space, reward space, and discoun...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJeQoCNYDS", "BklO1ACssB", "ryewBvJGoH", "Hyl52JbTYS", "H1xwruy6tH", "rklESaCjir", "ryxaY3AoiS", "HJgzZIPrcr", "iclr_2020_rJeQoCNYDS", "iclr_2020_rJeQoCNYDS", "iclr_2020_rJeQoCNYDS" ]
iclr_2020_HklBjCEKvH
Generalization through Memorization: Nearest Neighbor Language Models
We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this transformation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 -- a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.
accept-poster
This paper proposes an idea of using a pre-trained language model on a potentially smaller set of text, and interpolating it with a k-nearest neighbor model over a large datastore. The authors provide extensive evaluation and insightful results. Two reviewers vote for accepting the paper, and one reviewer is negative. After considering the points made by reviewers, the AC decided that the paper carries value for the community and should be accepted.
train
[ "Bke5gwsosr", "Hke8y2cPjH", "S1xWjs9woH", "Hylodjcvor", "H1gQzS6XjS", "BygRmOwXjB", "rkluvmvXjr", "B1ejI7Rgor", "SJeUqRdlsH", "Byl93YXyor", "B1gMnYs8tS", "ryghQUjTYH", "H1eELYG-cS", "Hke_6-MAqB", "Hkg-ndcpcS", "Bkxs8w5adr" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author" ]
[ "Hello Reveiwer1, we just wanted to follow up on our previous comment about generating results for a CNN-based model. Our experiments are currently running, but unfortunately we won’t have the results before the end of the discussion period. \n", "Hello Reviewer1,\n\nThanks for your comments. We’re glad you enjoy...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, -1, -1, -1 ]
[ "Hke8y2cPjH", "H1eELYG-cS", "ryghQUjTYH", "H1gQzS6XjS", "SJeUqRdlsH", "SJeUqRdlsH", "B1ejI7Rgor", "iclr_2020_HklBjCEKvH", "B1gMnYs8tS", "Hke_6-MAqB", "iclr_2020_HklBjCEKvH", "iclr_2020_HklBjCEKvH", "iclr_2020_HklBjCEKvH", "Hkg-ndcpcS", "iclr_2020_HklBjCEKvH", "iclr_2020_HklBjCEKvH" ]
iclr_2020_r1eIiCNYwS
Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention
Transformers have achieved new heights modeling natural language as a sequence of text tokens. However, in many real world scenarios, textual data inherently exhibits structures beyond a linear sequence such as trees and graphs; many tasks require reasoning with evidence scattered across multiple pieces of texts. This paper presents Transformer-XH, which uses eXtra Hop attention to enable intrinsic modeling of structured texts in a fully data-driven way. Its new attention mechanism naturally “hops” across the connected text sequences in addition to attending over tokens within each sequence. Thus, Transformer-XH better conducts joint multi-evidence reasoning by propagating information between documents and constructing global contextualized representations. On multi-hop question answering, Transformer-XH leads to a simpler multi-hop QA system which outperforms previous state-of-the-art on the HotpotQA FullWiki setting. On FEVER fact verification, applying Transformer-XH provides state-of-the-art accuracy and excels on claims whose verification requires multiple evidence.
accept-poster
This work examines a problem that is of considerable interest to the community and does a good job of presenting the work. The AC recommends acceptance.
train
[ "BJlHcZMptB", "BklJuRInoB", "S1eTNW12iS", "rkgEwBfosS", "BkedcOJNjS", "BJxvTZf5sH", "HkeM5zCEtH", "SklrDYyVsH", "rJenPSy4jH", "H1e7zOy4iS", "B1ghg2ACYr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Summary: This paper introduces a way to train transformer models over document graphs, where each node is a document and edges connect related documents. It is inspired by the transformer-XL model, as well as Graph Neural networks. They apply this model to answer multi-hop questions on the HotPotQA dataset and out...
[ 8, -1, -1, -1, -1, -1, 6, -1, -1, -1, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, -1, -1, -1, 5 ]
[ "iclr_2020_r1eIiCNYwS", "S1eTNW12iS", "H1e7zOy4iS", "BJxvTZf5sH", "iclr_2020_r1eIiCNYwS", "SklrDYyVsH", "iclr_2020_r1eIiCNYwS", "HkeM5zCEtH", "B1ghg2ACYr", "BJlHcZMptB", "iclr_2020_r1eIiCNYwS" ]
iclr_2020_S1l8oANFDH
Synthesizing Programmatic Policies that Inductively Generalize
Deep reinforcement learning has successfully solved a number of challenging control tasks. However, learned policies typically have difficulty generalizing to novel environments. We propose an algorithm for learning programmatic state machine policies that can capture repeating behaviors. By doing so, they have the ability to generalize to instances requiring an arbitrary number of repetitions, a property we call inductive generalization. However, state machine policies are hard to learn since they consist of a combination of continuous and discrete structures. We propose a learning framework called adaptive teaching, which learns a state machine policy by imitating a teacher; in contrast to traditional imitation learning, our teacher adaptively updates itself based on the structure of the student. We show that our algorithm can be used to learn policies that inductively generalize to novel environments, whereas traditional neural network policies fail to do so.
accept-poster
The authors consider control tasks that require "inductive generalization", i.e., the ability to repeat certain primitive behaviors. They propose state-machine policies, which switch between low-level policies based on learned transition criteria. The approach is tested on multiple continuous control environments and compared to RL baselines as well as an ablation. The reviewers appreciated the general idea of the paper. During the rebuttal, the authors addressed most of the issues raised in the reviews, and hence the reviewers increased their scores. The paper is marginally above acceptance. On the positive side: learning structured policies is clearly desirable but difficult, and the paper proposes an interesting set of ideas to tackle this challenge. My main concern about this work is: the approach uses the true environment simulator, as the training relies on gradients of the reward function. This makes the tasks planning rather than RL problems; this needs to be highlighted, as it severely limits the applicability of the proposed approach. Furthermore, this also means that the comparison to the model-free PPO baselines is less meaningful. The authors should clearly mention this. Overall, however, I think there are enough good ideas presented here to warrant acceptance.
train
[ "H1xaHcTk9S", "SJeOWYCjsB", "B1gnOqBijH", "SkeqDHc3jS", "Hyl3Cynojr", "HyxIstrjoB", "r1gRsIjqiS", "HyeFWUlkqr", "SygGYjyFir", "BygnP9JYoS", "rkl2Rcytir", "S1lBrc1Yor", "Hye-NYyKjS", "B1xDWh2Pcr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper can be viewed as being related to two bodies of work:\n\n(A) The first is training programmatic policies (e.g., https://arxiv.org/abs/1907.05431). The most popular idea is to use program synthesis & imitation learning to distill from a programmatic policy from some oracle policy. \n\n(B) The second is...
[ 8, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_S1l8oANFDH", "Hyl3Cynojr", "r1gRsIjqiS", "HyxIstrjoB", "BygnP9JYoS", "Hye-NYyKjS", "SygGYjyFir", "iclr_2020_S1l8oANFDH", "HyeFWUlkqr", "S1lBrc1Yor", "H1xaHcTk9S", "B1xDWh2Pcr", "iclr_2020_S1l8oANFDH", "iclr_2020_S1l8oANFDH" ]
iclr_2020_HklOo0VFDH
Decoding As Dynamic Programming For Recurrent Autoregressive Models
Decoding in autoregressive models (ARMs) consists of searching for a high scoring output sequence under the trained model. Standard decoding methods, based on unidirectional greedy algorithm or beam search, are suboptimal due to error propagation and myopic decisions which do not account for future steps in the generation process. In this paper we present a novel decoding approach based on the method of auxiliary coordinates (Carreira-Perpinan & Wang, 2014) to address the aforementioned shortcomings. Our method introduces discrete variables for output tokens, and auxiliary continuous variables representing the states of the underlying ARM. The auxiliary variables lead to a factor graph approximation of the ARM, whose maximum a posteriori (MAP) inference is found exactly using dynamic programming. The MAP inference is then used to recreate an improved factor graph approximation of the ARM via updated auxiliary variables. We then extend our approach to decode in an ensemble of ARMs, possibly with different generation orders, which is out of reach for the standard unidirectional decoding algorithms. Experiments on the text infilling task over SWAG and Daily Dialogue datasets show that our decoding method is superior to strong unidirectional decoding baselines.
accept-poster
This paper proposes an approximate inference approach for decoding in autoregressive models, based on the method of auxiliary coordinates, which uses iterative factor graph approximations of the model. The approach leads to nice improvements in performance on a text infilling task. The reviewers were generally positive about this paper, though there was a concern that more baselines are needed and discussion was very limited following the author responses. I tend to agree with the authors that their results are convincing on the infilling task. The impact of the paper is a bit limited by the lack of experiments on more standard decoding tasks, which, as the authors point out, would be challenging as their approach is computationally demanding. Overall I believe this would be an interesting contribution to the ICLR community.
train
[ "H1gEW1rhor", "BkxjgFZ3oH", "Skg3_uWniS", "ByeEgdZhjr", "BygmgjVnYr", "SJlQ7OR3FB", "rJgHLapy9r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to know. The authors still have one more day to respond/revise further....
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 3, 1, 1 ]
[ "iclr_2020_HklOo0VFDH", "BygmgjVnYr", "SJlQ7OR3FB", "rJgHLapy9r", "iclr_2020_HklOo0VFDH", "iclr_2020_HklOo0VFDH", "iclr_2020_HklOo0VFDH" ]
iclr_2020_B1g5sA4twr
Deep Double Descent: Where Bigger Models and More Data Hurt
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity, and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
accept-poster
This paper experimentally analyzes the double descent phenomenon for deep models. While, as the reviewers have mentioned, this phenomenon has been observed for some time, some of its specificities still elude us. As a consequence, I am happy to see this paper presented to ICLR. That being said, given the original lack of proper references as well as the recent public announcements about this paper giving it visibility, I want to make it absolutely clear that this paper is accepted with the assumption that proper credit will be given to past work and that efforts will be made to draw connections between all these works.
train
[ "HkxGyDxhoS", "H1lrZBqMoH", "BJeZYOWbjB", "H1gFwdZbjB", "H1xTYre6tB", "rJxvG0y6KH", "BJebj9ZrqB", "B1lSR-pl5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "I thank the authors for considering my comments and taking them into account in the revision. \nI am still not convinced about the relevance of the defined Effective Model Complexity. But the experimental part of the paper is very nice and the paper should be published in ICLR. ", "The paper defines the effecti...
[ -1, 8, -1, -1, 6, 6, -1, -1 ]
[ -1, 3, -1, -1, 3, 3, -1, -1 ]
[ "BJebj9ZrqB", "iclr_2020_B1g5sA4twr", "rJxvG0y6KH", "H1xTYre6tB", "iclr_2020_B1g5sA4twr", "iclr_2020_B1g5sA4twr", "B1lSR-pl5r", "iclr_2020_B1g5sA4twr" ]
iclr_2020_HyxJhCEFDS
Intriguing Properties of Adversarial Training at Scale
Adversarial training is one of the main defenses against adversarial attacks. In this paper, we provide the first rigorous study on diagnosing elements of large-scale adversarial training on ImageNet, which reveals two intriguing properties. First, we study the role of normalization. Batch normalization (BN) is a crucial element for achieving state-of-the-art performance on many vision tasks, but we show it may prevent networks from obtaining strong robustness in adversarial training. One unexpected observation is that, for models trained with BN, simply removing clean images from training data largely boosts adversarial robustness, i.e., 18.3%. We relate this phenomenon to the hypothesis that clean images and adversarial images are drawn from two different domains. This two-domain hypothesis may explain the issue of BN when training with a mixture of clean and adversarial images, as estimating normalization statistics of this mixture distribution is challenging. Guided by this two-domain hypothesis, we show disentangling the mixture distribution for normalization, i.e., applying separate BNs to clean and adversarial images for statistics estimation, achieves much stronger robustness. Additionally, we find that enforcing BNs to behave consistently at training and testing can further enhance robustness. Second, we study the role of network capacity. We find our so-called "deep" networks are still shallow for the task of adversarial learning. Unlike traditional classification tasks where accuracy is only marginally improved by adding more layers to "deep" networks (e.g., ResNet-152), adversarial training exhibits a much stronger demand on deeper networks to achieve higher adversarial robustness. This robustness improvement can be observed substantially and consistently even by pushing the network capacity to an unprecedented scale, i.e., ResNet-638.
accept-poster
This paper studies the properties of adversarial training in the large scale setting. The reviewers found the properties identified by the paper to be of interest to the ICLR community - in particular the robustness community. We encourage the authors to release their models to help jumpstart future work building on this study.
train
[ "rJxeamAAKB", "rkeOvhchjr", "B1eYW3c2sB", "rJgNWej3jB", "Skg65jc2jr", "B1xyoFBGqH", "Skl9fUh4qB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper reveals some interesting properties of neural networks when trained adversarially at ImageNet scale. The total cost of the experiments is quite impressive, therefore the results are valuable references. With extensive experiments, the authors reveals two intriguing properties of neural networks when tra...
[ 6, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_HyxJhCEFDS", "B1xyoFBGqH", "Skl9fUh4qB", "rJxeamAAKB", "iclr_2020_HyxJhCEFDS", "iclr_2020_HyxJhCEFDS", "iclr_2020_HyxJhCEFDS" ]
iclr_2020_Bkxe2AVtPS
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Training with larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out of the box for representative models: ResNet50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization.
accept-poster
Main description: the paper focuses on training neural networks using 8-bit floating-point numbers (FP8). The goal is highly motivated: training neural networks faster, with smaller memory footprint and energy consumption. Discussions: reviewer 3 gives a very short review and is not knowledgeable in this area (rating is weak accept); reviewer 4 finds the paper well written and convincing, with some minor technical flaws (not very knowledgeable); reviewer 1 finds the paper interesting but argues it is not very practical (not very knowledgeable); reviewer 2 gives the most thorough and knowledgeable review, and likes the scope of the paper and its interest to ICLR. Recommendation: going mainly by reviewer 2, I vote to accept this as a poster.
train
[ "SJloyd_PjH", "BJgIkv_vsH", "BklmFrEDoB", "SJxv7Z7PjB", "Skg9PnTzjB", "B1xG2Yfljr", "SkeEF48CKS", "Bygzt_NR9S", "Byl9vmZJir" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks to the reviewer for the detailed comments regarding hardware implementation. As pointed out by Reviewer #2, the bandwidth savings of moving to lower precision significantly improve both performance and power efficiency. Moving bytes is expensive compared to the relatively few additional FLOPS required for t...
[ -1, -1, -1, -1, -1, 6, 6, 1, 8 ]
[ -1, -1, -1, -1, -1, 1, 3, 1, 1 ]
[ "Bygzt_NR9S", "SkeEF48CKS", "Skg9PnTzjB", "Byl9vmZJir", "iclr_2020_Bkxe2AVtPS", "iclr_2020_Bkxe2AVtPS", "iclr_2020_Bkxe2AVtPS", "iclr_2020_Bkxe2AVtPS", "iclr_2020_Bkxe2AVtPS" ]
iclr_2020_SJxZnR4YvB
Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication
We study the problem of regret minimization for distributed bandits learning, in which M agents work collaboratively to minimize their total regret under the coordination of a central server. Our goal is to design communication protocols with near-optimal regret and little communication cost, which is measured by the total amount of transmitted data. For distributed multi-armed bandits, we propose a protocol with near-optimal regret and only O(Mlog⁡(MK)) communication cost, where K is the number of arms. The communication cost is independent of the time horizon T, has only logarithmic dependence on the number of arms, and matches the lower bound except for a logarithmic factor. For distributed d-dimensional linear bandits, we propose a protocol that achieves near-optimal regret and has communication cost of order O((Md+dlog⁡log⁡d)log⁡T), which has only logarithmic dependence on T.
accept-poster
This paper tackles the problem of regret minimization in a multi-agent bandit problem, where distributed learning bandit algorithms collaborate in order to minimize their total regret. More specifically, the work focuses on efficient communication protocols and the regret corresponds to the communication cost. The goal is therefore to design protocols with little communication cost. The authors first establish lower bounds on the communication cost, and then introduce an algorithm with provable near-optimal regret. The only concern with the paper is that ICLR may not be the appropriate venue given that this work lacks representation learning contributions. However, all reviewers being otherwise positive about the quality and contributions of this work, I would recommend acceptance.
train
[ "S1lV0dNpYS", "S1ekI4WBsH", "SyejNmWror", "B1eOKfbroH", "HJeHVjyb5S", "HJlMtWL79H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper considers the problem of distributed multi-arm bandit, where M players are playing in the same stochastic environment. The goal of the paper is to have small over-all regret for all the players without a significant amount of communication between the players. \n\n\n\nThe main contribution of this paper ...
[ 6, -1, -1, -1, 6, 8 ]
[ 5, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SJxZnR4YvB", "S1lV0dNpYS", "HJlMtWL79H", "HJeHVjyb5S", "iclr_2020_SJxZnR4YvB", "iclr_2020_SJxZnR4YvB" ]
iclr_2020_r1xGnA4Kvr
Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as in increasing ANN classification robustness. We compare the sleep algorithm's performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.
accept-poster
"Sleep" is introduced as a way of increasing robustness in neural network training. To sleep, the network is converted into a spiking network and goes through phases of more and less intense activation. The results are quite good when it comes to defending against adversarial examples. Reviewers agree that the method is novel and interesting. Authors responded to the reviewers' questions (one of the reviewers had a quite extensive set of questions) satisfactorily, and improved the paper significantly in the process. I think the paper should be accepted on the grounds of novelty and good results.
train
[ "H1e-LL3_iB", "HJgcU4nuoS", "HkgkZZnOjS", "BkgoQYmKKS", "HJerk1855H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Section 5: results\n5.\tThis is a good point which we believe is composed of 2 related questions: \n-Why don’t the accuracy values all converge to 0? Indeed, as more targeted noise is added (noise that increases the loss for a specific example), we would expect to observe more misclassifications until all examples...
[ -1, -1, -1, 8, 6 ]
[ -1, -1, -1, 3, 1 ]
[ "BkgoQYmKKS", "BkgoQYmKKS", "HJerk1855H", "iclr_2020_r1xGnA4Kvr", "iclr_2020_r1xGnA4Kvr" ]
iclr_2020_HJeVnCEKwH
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
Generative adversarial networks have been very successful in generative modeling, however they remain relatively challenging to train compared to standard deep neural networks. In this paper, we propose new visualization techniques for the optimization landscapes of GANs that enable us to study the game vector field resulting from the concatenation of the gradient of both players. Using these visualization techniques we try to bridge the gap between theory and practice by showing empirically that the training of GANs exhibits significant rotations around LSSP, similar to the one predicted by theory on toy examples. Moreover, we provide empirical evidence that GAN training seems to converge to a stable stationary point which is a saddle point for the generator loss, not a minimum, while still achieving excellent performance.
accept-poster
This is an interesting contribution that sheds some light on a well-studied but still poorly understood problem. I think it might be of interest to the community.
val
[ "HkxNcJcqYr", "S1xyowehYr", "rkgswvwoir", "SyeHJT29sS", "rJgoGwU9oB", "HklwX_KPiH", "rkl0guYDiB", "SJlhivFvor", "r1xU1wKDoS", "HyeEo544tB" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: \n\nThis paper proposes visualization techniques for the optimization landscape in GANs. The primary tool presented in this paper is a quantity called path-angle, which looks at the angle between the game vector field and the linear path between a point away from a stationary point and a point near a stat...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HJeVnCEKwH", "iclr_2020_HJeVnCEKwH", "SyeHJT29sS", "HklwX_KPiH", "SJlhivFvor", "HyeEo544tB", "HkxNcJcqYr", "r1xU1wKDoS", "S1xyowehYr", "iclr_2020_HJeVnCEKwH" ]
iclr_2020_HJxEhREKDH
On the Global Convergence of Training Deep Linear ResNets
We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training L-hidden-layer linear residual networks (ResNets). We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets. Compared with the global convergence result of GD for training standard deep linear networks \citep{du2019width}, our condition on the neural network width is sharper by a factor of O(κL), where κ denotes the condition number of the covariance matrix of the training data. We further propose a modified identity input and output transformations, and show that a (d+k)-wide neural network is sufficient to guarantee the global convergence of GD/SGD, where d,k are the input and output dimensions respectively.
accept-poster
This paper provides further analysis of convergence in deep linear networks. I recommend acceptance.
train
[ "SyluQ28J9S", "rkgnmE0ptS", "HJg_GdOTKr", "rJllHnhsiH", "SyxL6inoiH", "HkxmYG-isB", "B1eGf3FFir", "HJlVjoYFsr", "Hylp8Q_vjS", "H1grcQOvsS", "H1lutMuvsH", "rJlPx1ODoH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\n*Summary* \nThis paper deals with the global convergence of deep linear ResNets. The author show that under some initialization conditions for the first and the last layer (that are not optimized !) GD and SGD does converge to a global minimum of the min squared error. The closed related work seems to be Bartlet...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_HJxEhREKDH", "iclr_2020_HJxEhREKDH", "iclr_2020_HJxEhREKDH", "B1eGf3FFir", "HJlVjoYFsr", "rJlPx1ODoH", "H1lutMuvsH", "Hylp8Q_vjS", "rkgnmE0ptS", "HJg_GdOTKr", "rkgnmE0ptS", "SyluQ28J9S" ]
iclr_2020_Hklr204Fvr
Towards a Deep Network Architecture for Structured Smoothness
We propose the Fixed Grouping Layer (FGL); a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model. FGL achieves this goal by connecting nodes across layers based on spatial similarity. The use of structured smoothness, as implemented by FGL, is motivated by applications to structured spatial data, which is, in turn, motivated by domain knowledge. The proposed model architecture outperforms conventional neural network architectures across a variety of simulated and real datasets with structured smoothness.
accept-poster
The AC has carefully looked at the paper/comments/discussion in order to arrive at this meta-review. Looking over the paper, the FGL layer is an interesting idea, but its utility is only evaluated in a limited setting (fMRI data), rather than other types of images/data. Also, the approach seems to work on some of the fMRI datasets, while on others the performance is on par with the baselines. Overall, the paper is borderline, but the AC believes the paper would be a good contribution to the conference.
train
[ "Ske3zbFnoH", "ryxh1xY2ir", "BygdP4u6FH", "H1emraEJcB" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Relation to MRF’s and CRF’s:\nProbabilistic graphical models are a great approach for capturing known structure in data. We believe our approach is complementary and distinct from the probabilistic approaches. Specifically, the relationship between FGL and MRF/CRF is analogous to the relationship between RNN/LSTM/...
[ -1, -1, 6, 6 ]
[ -1, -1, 4, 1 ]
[ "BygdP4u6FH", "H1emraEJcB", "iclr_2020_Hklr204Fvr", "iclr_2020_Hklr204Fvr" ]
iclr_2020_SJgdnAVKDH
Revisiting Self-Training for Neural Sequence Generation
Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's prediction (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. Such effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise to the input space, resulting in a noisy version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.
accept-poster
This paper analyzes self-training for sequence-to-sequence models and proposes a noisy version of self training. An empirical study shows the proposed noisy version improves results for machine translation and summarization tasks. All reviewers appreciate the interesting contributions of the research, as well as clear writing. They also offer several comments for the revision of the paper. We look forward to seeing this paper presented at the conference!
train
[ "H1l-spz2iH", "HJlgTWmhsS", "H1eCQlX3iH", "HkleTRfniS", "Hygxm9Q7Fr", "rke6Rf_pKS", "Bke-eZJ0YH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the time and comments. Due to time limitations we could only address major points, but we’ll make sure to reflect all advice in future revisions. We also appreciate the reviewer’s understanding that fully validating the actual reason is difficult, we will keep working on this and hopefull...
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 5, 4, 3 ]
[ "Bke-eZJ0YH", "iclr_2020_SJgdnAVKDH", "Hygxm9Q7Fr", "rke6Rf_pKS", "iclr_2020_SJgdnAVKDH", "iclr_2020_SJgdnAVKDH", "iclr_2020_SJgdnAVKDH" ]
iclr_2020_HJeqhA4YDS
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias towards natural images is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent it generates an almost uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of other inverse problems. In this paper, we attribute this effect to a particular architectural choice of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. Our proof relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
accept-poster
This paper studies the question of why a network trained to reproduce a single image often de-noises the image early in training. This is an interesting question and, post discussion, all three reviewers agree that it will be of general interest to the community and is worth publishing. Therefore I recommend it be accepted.
train
[ "HklY2HdfqS", "rJeixQ_osS", "H1xLtg4qoH", "H1e6BljOsr", "B1eSQejdoS", "r1elbgsdoH", "H1xrIN8NYH", "SJxWlnNpFB" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the situation in which a two-layer CNN with RELU nonlinearity is fit to a single image and the observation that it is able to fit a \"natural\" image in fewer iterations than a \"noisy\" image. Theorems on the convergence of this fitting are discussed and proven in the appendices. Intermediate r...
[ 6, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_HJeqhA4YDS", "H1xLtg4qoH", "r1elbgsdoH", "H1xrIN8NYH", "SJxWlnNpFB", "HklY2HdfqS", "iclr_2020_HJeqhA4YDS", "iclr_2020_HJeqhA4YDS" ]
iclr_2020_B1lj20NFDS
Variational Autoencoders for Highly Multivariate Spatial Point Processes Intensities
Multivariate spatial point process models can describe heterotopic data over space. However, highly multivariate intensities are computationally challenging due to the curse of dimensionality. To bridge this gap, we introduce a declustering based hidden variable model that leads to an efficient inference procedure via a variational autoencoder (VAE). We also prove that this model is a generalization of the VAE-based model for collaborative filtering. This leads to an interesting application of spatial point process models to recommender systems. Experimental results show the method's utility on both synthetic data and real-world data sets.
accept-poster
This paper presents a novel VAE-based model for multivariate spatial point processes which can realize efficient inference by amortization and handle missing points via smooth intensity estimation. The authors also provide an interesting theoretical analysis connecting their method to a popular VAE-based collaborative filtering method. Overall, all reviewers appreciate the methodological and theoretical contributions of the paper. During the reviewer discussion, one reviewer decided to update the score to Weak Acceptance. While there are still some concerns regarding experimental validation, I think the paper provides enough theoretical contribution to the community, and I would like to recommend acceptance.
train
[ "SkxY1KDaKH", "BkgknwV9jr", "Bke8PvN9or", "H1xqmDNqjH", "rylSHQ6aYH", "rygQxkld9B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a VAE model for spatial point processes. The model generalizes the kernel density-based intensity and applies variational inference. The model is applied to synthetic datasets, a location-based social network dataset, and a recommender system dataset. \n\nThe paper is well motiva...
[ 6, -1, -1, -1, 8, 6 ]
[ 1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_B1lj20NFDS", "SkxY1KDaKH", "rylSHQ6aYH", "rygQxkld9B", "iclr_2020_B1lj20NFDS", "iclr_2020_B1lj20NFDS" ]
iclr_2020_Skln2A4YDB
Model-Augmented Actor-Critic: Backpropagating through Paths
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning. In this paper, we show how to make more effective use of the model by exploiting its differentiability. We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps. Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion. Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function. We show that our approach (i) is consistently more sample efficient than existing state-of-the-art model-based algorithms, (ii) matches the asymptotic performance of model-free algorithms, and (iii) scales to long horizons, a regime where typically past model-based approaches have struggled.
accept-poster
The authors propose a novel model-based reinforcement learning algorithm. The key difference with previous approaches is that the authors use gradients through the learned model. They present theoretical results on error bounds for their approach and a monotonic improvement theorem. In the small sample regime, they show improved performance over previous approaches. After the revisions, reviewers raised a few concerns: The results are only for 100,000 steps, which does not support the claim that the model achieves the same asymptotic performance as model-free algorithms. The results would also be stronger if the experiments were run with more than 3 random seeds. In the revised version of the text, it's unclear if the authors are using target networks. Overall, I think the paper introduces some interesting ideas and shows improved performance over existing approaches. I recommend acceptance on the condition that the authors tone down their claims or back them up with empirical evidence. Currently, I don't see evidence for the claim that the method achieves similar asymptotic performance to model-free algorithms, or the claim that their approach allows for longer horizons than previous approaches.
train
[ "H1gzAjBWqH", "S1e3iVZltS", "HJe9j9xhor", "HJeE09g3sS", "SkgKZolhjr", "HklOqV3pKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "#rebuttal responses\nThanks for the clarification! However, I will keep my original score for these reasons:\n(1) Only 3 random seeds are used for each environment, which is not convincing as the variance of MAAC is large in some figures.\n(2) Baselines are only trained with 10^5 steps and do not converge. Thus i...
[ 3, 8, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, 4 ]
[ "iclr_2020_Skln2A4YDB", "iclr_2020_Skln2A4YDB", "H1gzAjBWqH", "HklOqV3pKH", "S1e3iVZltS", "iclr_2020_Skln2A4YDB" ]
iclr_2020_Hkx6hANtwH
LambdaNet: Probabilistic Type Inference using Graph Neural Networks
As gradual typing becomes increasingly popular in languages like Python and TypeScript, there is a growing need to infer type annotations automatically. While type annotations help with tasks like code completion and static error catching, these annotations cannot be fully inferred by compilers and are tedious to annotate by hand. This paper proposes a probabilistic type inference scheme for TypeScript based on a graph neural network. Our approach first uses lightweight source code analysis to generate a program abstraction called a type dependency graph, which links type variables with logical constraints as well as name and usage information. Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions. Our neural architecture can predict both standard types, like number or string, as well as user-defined types that have not been encountered during training. Our experimental results show that our approach outperforms prior work in this space by 14% (absolute) on library types, while having the ability to make type predictions that are out of scope for existing techniques.
accept-poster
This paper proposes an approach to type inference in dynamically typed languages using graph neural networks. The reviewers (and the area chair) love this novel and useful application of GNNs to a practical problem, the presentation, the results. Clear accept.
train
[ "Sklm0w_6tr", "r1eQ_Rw9jS", "BJeyJw-dsH", "ByxVe3vnOS", "r1egLBhwoB", "B1gVAxAfsS", "SJe-F2gGoB", "HygAu6xMsS", "rkek9lWzsS", "HyeEUJbMsr", "BJgF7g7jFr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposed to use Graph Neural Networks (GNN) to do type inference for dynamically typed languages. The key technique is to construct a type dependency graph and infer the type on top of it. The type dependency graph contains edges specifying hard constraints derived from the static analysis, as well as s...
[ 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_Hkx6hANtwH", "HygAu6xMsS", "r1egLBhwoB", "iclr_2020_Hkx6hANtwH", "B1gVAxAfsS", "rkek9lWzsS", "Sklm0w_6tr", "Sklm0w_6tr", "ByxVe3vnOS", "BJgF7g7jFr", "iclr_2020_Hkx6hANtwH" ]
iclr_2020_H1guaREYPr
From Inference to Generation: End-to-end Fully Self-supervised Generation of Human Face from Speech
This work seeks the possibility of generating the human face from voice solely based on the audio-visual data without any human-labeled annotations. To this end, we propose a multi-modal learning framework that links the inference stage and generation stage. First, the inference networks are trained to match the speaker identity between the two different modalities. Then the pre-trained inference networks cooperate with the generation network by giving conditional information about the voice. The proposed method exploits the recent development of GANs techniques and generates the human face directly from the speech waveform making our system fully end-to-end. We analyze the extent to which the network can naturally disentangle two latent factors that contribute to the generation of a face image one that comes directly from a speech signal and the other that is not related to it and explore whether the network can learn to generate natural human face image distribution by modeling these factors. Experimental results show that the proposed network can not only match the relationship between the human face and speech, but can also generate the high-quality human face sample conditioned on its speech. Finally, the correlation between the generated face and the corresponding speech is quantitatively measured to analyze the relationship between the two modalities.
accept-poster
The authors propose a conditional GAN-based approach for generating faces consistent with given input speech. The technical novelty is not large, as the approach is mainly putting together existing ideas, but the application is a fairly new one and the experiments and results are convincing. The approach might also have broader applicability beyond this task.
train
[ "BJeqNfIsqB", "HJeh26vqsr", "ryxRwxZ4iH", "rylhl0_7oS", "rkeDaadmsB", "HJl7wpOXjH", "rJeQ281nFr", "r1lI1bEZcH", "HyxY1VP_tr", "rJx4D7zuKB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Summary - \nIn this work the authors propose a two-stage procedure for training a GAN which generates plausible faces conditioned on the raw waveform of a speech signal. In the first stage two embedding functions are trained, one taking as input a frame from a video (of a person speaking), the other taking as inpu...
[ 8, -1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, 5, 4, -1, -1 ]
[ "iclr_2020_H1guaREYPr", "iclr_2020_H1guaREYPr", "iclr_2020_H1guaREYPr", "rJeQ281nFr", "r1lI1bEZcH", "BJeqNfIsqB", "iclr_2020_H1guaREYPr", "iclr_2020_H1guaREYPr", "rJx4D7zuKB", "iclr_2020_H1guaREYPr" ]
iclr_2020_BJxt60VtPr
Learning from Unlabelled Videos Using Contrastive Predictive Neural 3D Mapping
Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our retinas. This paper explores the role of view prediction in the development of 3D visual recognition. We propose neural 3D mapping networks, which take as input 2.5D (color and depth) video streams captured by a moving camera, and lift them to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model also projects its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses to replace the standard color regression loss, and show that this leads to better performance on complex photorealistic data. We show that the proposed model learns visual representations useful for (1) semi-supervised learning of 3D object detectors, and (2) unsupervised learning of 3D moving object detectors, by estimating the motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a scalable self-supervised task beneficial to 3D object detection.
accept-poster
The authors propose to learn space-aware 3D feature abstractions of the world given 2.5D input, by minimizing 3D and 2D view contrastive prediction objectives. The work builds upon Tung et al. (2019) but extends it by removing some of its limitations, making it more general. To do so, they learn an inverse graphics network which takes as input 2.5D video and maps it to 3D feature maps of the scene. The authors present experiments on both real and simulated datasets, and their proposed approach is tested on feature learning, 3D moving object detection, and 3D motion estimation with good performance. All reviewers agree that this is an important problem in computer vision and the paper provides a working solution. The authors have done a good job with comparisons and make a clear case for the superiority of their model (large datasets, multiple tasks). Moreover, the rebuttal period has been quite productive, with the authors incorporating reviewers' comments in the manuscript, thus resulting in a stronger submission. Based on the reviewers' comments and my own assessment, I think this paper should be accepted, as the experiments are solid, with good results that the CV audience of ICLR would find relevant.
train
[ "S1egW78noB", "SJxB1qoujS", "HyeXtWiuoS", "ryx6zbjdjr", "BygH1gjOir", "Hye041iujB", "S1gQLacujS", "BJxP4n5usH", "SJgeVwIWqH", "SJgZnF9b9S", "HkgbmrE95S", "SkgYMyY65r" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your detailed response. \n\nI encourage the authors to add the remarks about \"Latent map update\", \"data transformation\" and other failed trials in the paper. They gave a lot of insight while reproducing/enhancing the results, and if they confirm other paper observation, it is still usef...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 3 ]
[ "Hye041iujB", "iclr_2020_BJxt60VtPr", "ryx6zbjdjr", "SJgeVwIWqH", "SJgZnF9b9S", "HkgbmrE95S", "BJxP4n5usH", "SkgYMyY65r", "iclr_2020_BJxt60VtPr", "iclr_2020_BJxt60VtPr", "iclr_2020_BJxt60VtPr", "iclr_2020_BJxt60VtPr" ]
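The view-contrastive objective in the record above replaces color regression with a matching loss between predicted and target view features. The sketch below is a generic InfoNCE-style contrastive loss on flat feature vectors (assumes NumPy; the paper applies this idea to 3D/2D feature maps, and the function name and temperature value are illustrative, not the authors'):

```python
import numpy as np

def view_contrastive_loss(pred, target, temperature=0.1):
    """InfoNCE-style loss: each predicted feature row should match its
    corresponding target-view feature against all other targets in the batch.
    pred, target: (n, d) L2-normalized feature rows."""
    logits = pred @ target.T / temperature          # (n, n) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # correct pairs on the diagonal

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 16))
F /= np.linalg.norm(F, axis=1, keepdims=True)
loss_matched = view_contrastive_loss(F, F)          # targets correctly paired
loss_shuffled = view_contrastive_loss(F, F[::-1])   # targets mispaired
```

When predictions line up with their targets the diagonal carries the largest similarity, so the matched loss is far lower than the mispaired one.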
iclr_2020_r1gRTCVFvB
Decoupling Representation and Classifier for Long-Tailed Recognition
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
accept-poster
This paper presents an approach for long-tailed image classification, where the class frequencies during (supervised) training of an image classifier are heavily skewed, so that the classifier underfits on under-represented classes. The authors' responses to the reviews clarified most of the reviewers' concerns, although some reviewers pointed out that certain experimental details, such as the construction of the validation set and the selection of balanced/imbalanced sets, remain unclear. Overall, we believe this paper contains interesting observations to be shared.
val
[ "SyxbjD5jsB", "S1ldZD9sjB", "rkgQjIqosS", "SJx9gIcsoS", "H1ejK57YKB", "rJeu7N5hKS", "rJeKbzl1cr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "A1: [tau normalization] Please refer to general response. \n\nA2: [train a cosine classifier as done in the paper that you cite \"Dynamic few-shot visual learning without forgetting\" by Gidaris et al.]\n\nWe tried to replace the linear classifier with a cosine similarity classifier with (denoted by Cos in the tab...
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 3, 1, 3 ]
[ "H1ejK57YKB", "rJeu7N5hKS", "rJeKbzl1cr", "iclr_2020_r1gRTCVFvB", "iclr_2020_r1gRTCVFvB", "iclr_2020_r1gRTCVFvB", "iclr_2020_r1gRTCVFvB" ]
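The classifier-adjustment step this record's abstract and reviews discuss (tau-normalization) rescales each class's weight vector by its norm raised to a power tau. A minimal sketch, assuming NumPy and omitting bias handling (the `eps` guard is an implementation detail of this sketch, not the paper's):

```python
import numpy as np

def tau_normalize(W, tau=1.0, eps=1e-12):
    """Rescale each class weight vector w_i by ||w_i||^tau.
    tau=1 fully balances the norms; tau=0 leaves W unchanged."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)  # one norm per class row
    return W / (norms ** tau + eps)

# Toy check: a "head" class with a large-norm weight vector dominates logits
# before normalization, much less so after.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))
W[0] *= 10.0                       # simulate a frequent (head) class
W_bal = tau_normalize(W, tau=1.0)
```

Because only the classifier is rescaled, the representation learned with instance-balanced sampling is left untouched.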
iclr_2020_HJgC60EtwB
Robust Reinforcement Learning for Continuous Control with Model Misspecification
We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms. We specifically focus on incorporating robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO). We achieve this by learning a policy that optimizes for a worst case, entropy-regularized, expected return objective and derive a corresponding robust entropy-regularized Bellman contraction operator. In addition, we introduce a less conservative, soft-robust, entropy-regularized objective with a corresponding Bellman operator. We show that both, robust and soft-robust policies, outperform their non-robust counterparts in nine Mujoco domains with environment perturbations. In addition, we show improved robust performance on a challenging, simulated, dexterous robotic hand. Finally, we present multiple investigative experiments that provide a deeper insight into the robustness framework; including an adaptation to another continuous control RL algorithm. Performance videos can be found online at https://sites.google.com/view/robust-rl.
accept-poster
The authors provide a framework for incorporating robustness (to perturbations of the dynamics model) into RL methods, and provide nice experimental results, especially in the updated version. I am happy to see that the discussion for this paper went in a totally positive and constructive way, which led to (a) constructive criticism from the reviewers, (b) significant changes in the paper, and (c) correspondingly better scores from the reviewers. Good work and an obvious accept.
train
[ "B1gRd9T3YH", "ryePWBCTtB", "rkxmdUuG9r", "rJeKjkZ2jr", "rJel5PbsjS", "HylmGzZsiS", "HJlDYbWooH", "r1gUe6tKsS", "ryx3m47_jH", "BkgcYR6zjH", "B1xQzP6zsr", "BygvrjafoB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "AFTER REBUTTAL:\nThe authors answered all my questions and (significantly) modified the paper according to all the comments from all reviewers. The paper has significant value given the extensive experimental results and I believe could provide insight to other researchers in the field. I still have (like other re...
[ 8, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_HJgC60EtwB", "iclr_2020_HJgC60EtwB", "iclr_2020_HJgC60EtwB", "rJel5PbsjS", "HJlDYbWooH", "ryx3m47_jH", "r1gUe6tKsS", "BkgcYR6zjH", "BygvrjafoB", "ryePWBCTtB", "rkxmdUuG9r", "B1gRd9T3YH" ]
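The robust vs. soft-robust distinction in this record's abstract boils down to how next-state values under an uncertainty set of transition models are aggregated in the Bellman backup: worst case (min) vs. a fixed weighting. An illustrative one-step sketch (assumes NumPy; this omits the paper's entropy regularization and full operator):

```python
import numpy as np

def robust_backup(reward, next_values, gamma=0.99, weights=None):
    """One-step value backup over K perturbed transition models.
    next_values: shape (K,), expected next-state value under each model.
    Returns (robust, soft_robust) targets."""
    robust = reward + gamma * float(np.min(next_values))       # worst case
    if weights is None:
        weights = np.full(len(next_values), 1.0 / len(next_values))
    soft_robust = reward + gamma * float(np.dot(weights, next_values))
    return robust, soft_robust

r_target, sr_target = robust_backup(1.0, np.array([0.0, 1.0, 2.0]), gamma=0.5)
```

The robust target is conservative (driven by the worst model), while the soft-robust target trades some of that conservatism for average-case performance.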
iclr_2020_S1l-C0NtwS
Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework
Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks. There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly. In this paper, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks. Our empirical results reveal a set of pros and cons for both methods, and show that the relative performance of alignment versus joint training is task-dependent. Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually-exclusive approaches. Extensive experiments demonstrate that our proposed framework alleviates limitations of both approaches, and outperforms existing methods on the MUSE bilingual lexicon induction (BLI) benchmark. We further show that this framework can generalize to contextualized representations such as Multilingual BERT, and produces state-of-the-art results on the CoNLL cross-lingual NER benchmark.
accept-poster
Reviewer worries include: whether the approach scales to distant language pairs, overselling of the paper as a "framework", and a few missing citations and comparisons. I agree and encourage the authors not to use the word "framework" here. I would also encourage the authors to evaluate on more interesting language pairs, to analyze which vocabulary items are relocated, and to analyze what their method is better at compared to previous work.
train
[ "HJl6n0XnjS", "SJga_nQ3oB", "r1e53TX3iB", "SJlqPpQ3jB", "rkxAK85sKB", "HJeJ2SZ6FH", "ByezsH96YH", "H1xnIvJRYS", "r1xXNfUpYS", "HyeSj2hqOH", "S1xdi6dKOr", "Byxf6n-KuS", "rkg82y9XdH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "Thank you for your comprehensive review and valuable feedback. We address your comments one by one as following:\n\n[Manipulate sharing/analyze what is shared]\n\nWe agree that controlling and analyzing the amount of sharing is an interesting direction. However, this is a non-trivial task as we explained in our re...
[ -1, -1, -1, -1, 8, 8, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 1, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ "rkxAK85sKB", "HJeJ2SZ6FH", "SJlqPpQ3jB", "ByezsH96YH", "iclr_2020_S1l-C0NtwS", "iclr_2020_S1l-C0NtwS", "iclr_2020_S1l-C0NtwS", "r1xXNfUpYS", "iclr_2020_S1l-C0NtwS", "S1xdi6dKOr", "Byxf6n-KuS", "rkg82y9XdH", "iclr_2020_S1l-C0NtwS" ]
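The alignment paradigm this record compares against has a classic closed-form core: map independently trained monolingual embeddings into a shared space with an orthogonal Procrustes solution. A minimal sketch of that baseline step (assumes NumPy; the paper's unified framework does considerably more than this single step):

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal W minimizing ||X W - Y||_F, for paired (n, d) embedding
    matrices X (source language) and Y (target language)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: recover a known orthogonal map exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
R_true, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal map
Y = X @ R_true
W = procrustes_align(X, Y)
```

Joint training avoids this post-hoc mapping entirely, which is exactly the trade-off the paper's comparison probes.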
iclr_2020_SJgmR0NKPr
Training Recurrent Neural Networks Online by Learning Explicit State Variables
Recurrent neural networks (RNNs) allow an agent to construct a state-representation from a stream of experience, which is essential in partially observable problems. However, there are two primary issues one must overcome when training an RNN: the sensitivity of the learning algorithm's performance to truncation length, and long training times. There are a variety of strategies to improve training in RNNs, most notably Backprop Through Time (BPTT) and Real-Time Recurrent Learning. These strategies, however, are typically computationally expensive and focus computation on computing gradients back in time. In this work, we reformulate the RNN training objective to explicitly learn state vectors; this breaks the dependence across time and so avoids the need to estimate gradients far back in time. We show that for a fixed buffer of data, our algorithm---called Fixed Point Propagation (FPP)---is sound: it converges to a stationary point of the new objective. We investigate the empirical performance of our online FPP algorithm, particularly in terms of computation compared to truncated BPTT with varying truncation levels.
accept-poster
The paper proposes an alternative to BPTT for training recurrent neural networks based on an explicit state variable, which is trained to improve both the prediction accuracy and the prediction of the next state. One of the benefits of the methods is that it can be used for online training, where BPTT cannot be used in its exact form. Theoretical analysis is developed to show that the algorithm converges to a fixed point. Overall, the reviewers appreciate the clarity of the paper, and find the theory and the experimental evaluation to be reasonably well balanced. After a round of discussion, the authors improved the paper according to the reviews. The final assessments are overall positive, and I’m therefore recommending accepting this paper.
val
[ "HyxtIoqnor", "BJx0at9nor", "Hyg5UOInjH", "SyxwiITsoS", "Bkl8xW2_oB", "H1l-QdmPjr", "BJxcb6KLsS", "S1xrhnFIjS", "HkeOFnFIoS", "H1lOzsLcFH", "SyeisMCatH", "SkeYFd2EqS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I'm a bit skeptical of the results mentioned about PTB and Sequential MNIST.\n\nFor PTB as well as sequential MNIST, can authors report their baselines results (i.e when you do full back propagation ?). I just want to make sure, baselines are not \"faulty\". \n\nFor reference, authors can see result in Zoneout (ht...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 1 ]
[ "Hyg5UOInjH", "Bkl8xW2_oB", "SyxwiITsoS", "H1l-QdmPjr", "BJxcb6KLsS", "S1xrhnFIjS", "SyeisMCatH", "H1lOzsLcFH", "SkeYFd2EqS", "iclr_2020_SJgmR0NKPr", "iclr_2020_SJgmR0NKPr", "iclr_2020_SJgmR0NKPr" ]
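The reformulation in this record's abstract treats the hidden states themselves as free variables: the objective penalizes both prediction error from each state and inconsistency with the learned transition, with no gradient chain through time. A schematic evaluation of such an objective (assumes NumPy; `f`, `g`, and the quadratic penalties are stand-ins, and the paper's stochastic buffer-based updates are not shown):

```python
import numpy as np

def decoupled_rnn_objective(states, inputs, targets, f, g, lam=1.0):
    """Objective with explicit state variables s_1..s_T:
    (a) prediction error of readout g(s_t) vs targets, plus
    (b) consistency penalty ||s_t - f(s_{t-1}, x_t)||^2.
    Each term touches at most two adjacent states, so no BPTT is needed."""
    pred = sum(np.sum((g(s) - y) ** 2) for s, y in zip(states[1:], targets))
    cons = sum(np.sum((s_next - f(s_prev, x)) ** 2)
               for s_prev, s_next, x in zip(states[:-1], states[1:], inputs))
    return float(pred + lam * cons)

# Toy check: a trajectory consistent with the transition has zero objective.
f = lambda s, x: s + x          # stand-in transition
g = lambda s: s[0]              # stand-in readout
states = [np.array([0.0]), np.array([1.0]), np.array([3.0])]
inputs = [np.array([1.0]), np.array([2.0])]
targets = [1.0, 3.0]
```

Optimizing states and parameters on this objective is what lets training proceed online without choosing a truncation length.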
iclr_2020_HklUCCVKDB
Uncertainty-guided Continual Learning with Bayesian Neural Networks
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per tasks. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e. it does not presume knowledge of which task a sample belongs to.
accept-poster
While prior work has shown the potential of using uncertainty to tackle catastrophic forgetting (e.g. by appropriate updates to the posterior), this paper goes further and proposes a strategy to adapt the learning rate based on the uncertainty. This is a very reasonable idea since, in practice, learning rate control is one of the simplest and most understood techniques to fight catastrophic forgetting. The overall approach ends up being a well-motivated strategy for controlling the learning rate of the parameters according to a notion of their "importance". Of course now the question is if this work uses a good proxy for "importance" so further ablation studies would help, but the current results already show a clear benefit.
test
[ "S1ewYgHaKr", "SJlbynqTYS", "rylM6-AsoB", "Bke__ZAosB", "rygv8aTojr", "BkxDC2pjiB", "HJeE5n6ior", "Byl4mBDl9B" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "**** Post Rebuttal ****\n\nI have read the author's response and other reviewers' comments. In light of comments by other reviewers, I am increasing the score. The paper reports decent empirical results in some challenging settings which might be useful to the continual learning community. \n\n**** End ****\n\nThe...
[ 6, 6, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HklUCCVKDB", "iclr_2020_HklUCCVKDB", "Bke__ZAosB", "S1ewYgHaKr", "SJlbynqTYS", "Byl4mBDl9B", "iclr_2020_HklUCCVKDB", "iclr_2020_HklUCCVKDB" ]
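The core mechanism in this record's abstract is learning-rate modulation by weight uncertainty: parameters the Bayesian network is certain about (small posterior std) take small steps and are protected from forgetting. A simplified sketch of that idea (assumes NumPy; the max-normalization is one plausible scaling choice, not necessarily the paper's exact rule):

```python
import numpy as np

def uncertainty_scaled_lr(base_lr, sigma):
    """Per-parameter learning rates scaled by posterior std sigma.
    Certain parameters (small sigma) get small rates; uncertain ones stay
    plastic. Normalizing by max(sigma) keeps the largest rate at base_lr."""
    sigma = np.asarray(sigma, dtype=float)
    return base_lr * sigma / sigma.max()

lrs = uncertainty_scaled_lr(0.1, [0.01, 0.1, 1.0])
```

With these rates, a standard SGD step naturally concentrates updates on the weights whose posterior leaves room to change.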
iclr_2020_rkgt0REKwS
Curriculum Loss: Robust Learning and Generalization against Label Corruption
Deep neural networks (DNNs) have great expressive power, which can even memorize samples with wrong labels. It is vitally important to revisit robustness and generalization in DNNs against label corruption. To this end, this paper studies the 0-1 loss, which has a monotonic relationship with the empirical adversary (reweighted) risk (Hu et al. 2018). Although the 0-1 loss is robust to outliers, it is also difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robust properties, we propose a very simple and efficient loss, i.e. curriculum loss (CL). Our CL is a tighter upper bound of the 0-1 loss compared with conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for stagewise training. As a result, our loss can be deemed a novel perspective on curriculum sample selection, which bridges curriculum learning and robust learning. Experimental results on noisy MNIST, CIFAR10 and CIFAR100 datasets validate the robustness of the proposed loss.
accept-poster
This paper studies learning with noisy labels by integrating the idea of curriculum learning. All reviewers and the AC are happy with the novelty, the clear write-up, and the experimental results. I recommend acceptance.
val
[ "Skg2CS-Q3S", "SkxvXnoatH", "S1efYqIiKr", "rJezxPnojH", "BJlutLhioB", "BJe6rL3jsS", "HJghRHVAKr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for the clarification! I would like to clarify that in my previous review about motivation, I did not misunderstand the motivation but I want to emphasize that the message of Theorem 1 basically says that \n\nthe minimizer of the clean distribution is identical to that of the worst-case distribution aro...
[ -1, 6, 6, -1, -1, -1, 6 ]
[ -1, 4, 4, -1, -1, -1, 3 ]
[ "rJezxPnojH", "iclr_2020_rkgt0REKwS", "iclr_2020_rkgt0REKwS", "S1efYqIiKr", "SkxvXnoatH", "HJghRHVAKr", "iclr_2020_rkgt0REKwS" ]
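The adaptive sample selection this record's abstract describes can be caricatured as small-loss selection: under a bounded, 0-1-style criterion, samples with large loss are likely mislabeled and are held out of the current training stage. This sketch is a generic selection rule in the spirit of that idea, not the paper's exact curriculum-loss minimization (assumes NumPy; the threshold is an arbitrary stand-in):

```python
import numpy as np

def select_easy_samples(losses, threshold=1.0):
    """Keep indices of samples whose surrogate loss is below a threshold;
    high-loss samples are treated as likely label noise for this stage."""
    losses = np.asarray(losses)
    return np.flatnonzero(losses < threshold)

chosen = select_easy_samples([0.2, 2.5, 0.7, 3.1])
```

In the paper the selection emerges from minimizing a tight upper bound on the 0-1 loss rather than from a hand-set threshold.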
iclr_2020_SkgsACVKPH
Picking Winning Tickets Before Training by Preserving Gradient Flow
Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: https://github.com/alecwangcq/GraSP.
accept-poster
This paper proposes a method to improve the training of sparse networks by ensuring the gradient is preserved at initialization. The reviewers found that the approach was well motivated and well explained. The experimental evaluation considers challenging benchmarks such as ImageNet and includes strong baselines.
test
[ "HkgFzZMcFB", "BJlsJthO5S", "rJeE_952sS", "HyeCnTKnsH", "HyezXScnsH", "HJxYY6OhsH", "ByxUSu5soH", "S1xyP07isS", "BJlWGFWsjr", "BJl1D6WiiB", "HJxKfTbisB", "Syen2_ZjiS", "S1eT_9uVjr", "SJg-rWYfiB", "S1e9hXy0FS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThe paper proposes a new prunning criterion that performs better than Single-shot Network Pruning (SNIP) in prunning a network at the initalization. This is an important and potentially very impactful research direction, The key idea is to optimize the mask for the loss decrease after an infinimitesal step, rath...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SkgsACVKPH", "iclr_2020_SkgsACVKPH", "HJxYY6OhsH", "HJxYY6OhsH", "ByxUSu5soH", "BJlsJthO5S", "SJg-rWYfiB", "iclr_2020_SkgsACVKPH", "BJlsJthO5S", "BJlsJthO5S", "BJlsJthO5S", "BJlsJthO5S", "S1e9hXy0FS", "HkgFzZMcFB", "iclr_2020_SkgsACVKPH" ]
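GraSP, per this record's abstract, scores each weight by its effect on gradient flow, which requires the Hessian-gradient product Hg. That product never needs the full Hessian; a finite-difference sketch (assumes NumPy; the paper uses autodiff Hessian-vector products, and the sign convention of the final score `theta * (Hg)` follows the paper rather than this sketch):

```python
import numpy as np

def hessian_grad_product(grad_fn, theta, eps=1e-4):
    """Approximate H g via finite differences:
    H g ~ (grad(theta + eps*g) - grad(theta)) / eps.
    GraSP then ranks weights by theta * (H g) to estimate how pruning each
    one would change the gradient norm."""
    g = grad_fn(theta)
    return (grad_fn(theta + eps * g) - grad_fn(theta)) / eps

# Toy quadratic loss L = 0.5 * theta^T A theta, so grad = A theta and H = A.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
theta = np.array([1.0, -1.0])
hg = hessian_grad_product(lambda t: A @ t, theta)
```

On this quadratic the finite difference is exact, so `hg` matches `A @ (A @ theta)`.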
iclr_2020_SJgaRA4FPH
Generative Models for Effective ML on Private, Decentralized Datasets
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact. Manual inspection of raw data—of representative samples, of outliers, of misclassifications—is an essential tool in a) identifying and fixing problems in the data, b) generating new modeling hypotheses, and c) assigning or refining human-provided labels. However, manual data inspection is risky for privacy-sensitive datasets, such as those representing the behavior of real-world individuals. Furthermore, manual data inspection is impossible in the increasingly important setting of federated learning, where raw examples are stored at the edge and the modeler may only access aggregated outputs such as metrics or model parameters. This paper demonstrates that generative models—trained using federated methods and with formal differential privacy guarantees—can be used effectively to debug data issues even when the data cannot be directly inspected. We explore these methods in applications to text with differentially private federated RNNs and to images using a novel algorithm for differentially private federated GANs.
accept-poster
The paper provides methods for training generative models by combining federated learning techniques with differential privacy. The paper also provides two concrete applications for the problem of debugging models. Even though the method in the paper seems to be a standard combination of DP deep learning and federated learning, the paper is well-written and presents interesting use cases.
train
[ "Bkl4cLB15S", "SkeZlzEDqr", "rJgi-kwNoH", "rkeTWC8EiB", "Bkl15TIEjB", "Byg96ATCtS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This work presents a method for using generative models to gain insight into sensitive user data, while maintaining guarantees about the privacy of that data via differential privacy (DP) techniques. This scheme takes place in the federated learning (FL) setting, where the data in question remains on a local devic...
[ 6, 8, -1, -1, -1, 3 ]
[ 1, 4, -1, -1, -1, 4 ]
[ "iclr_2020_SJgaRA4FPH", "iclr_2020_SJgaRA4FPH", "Byg96ATCtS", "Bkl4cLB15S", "SkeZlzEDqr", "iclr_2020_SJgaRA4FPH" ]
iclr_2020_rJeW1yHYwH
Inductive representation learning on temporal graphs
Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features to learn the time-feature interactions. For TGAT, we use the self-attention mechanism as building block and develop a novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both node classification and link prediction tasks, and can be naturally extended to include temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings with two benchmark datasets and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as to previous temporal graph embedding approaches.
accept-poster
The major contribution of this paper is the use of random Fourier features as temporal (positional) encoding for dynamic graphs. The reviewers all find the proposed method interesting, and believe that this is a paper with reasonable contributions. One comment pointed out that the connection between Time2Vec and harmonic analysis has been discussed in previous work, and we suggest the authors include this discussion/comparison in the paper.
train
[ "HklEYpDwoS", "Hyg83ASHoS", "SJgLWABBir", "SkeUKTBrjB", "S1gCZwh2Kr", "B1e4OGa6tB", "HJgpzq645S", "S1ef0VW4FS", "BJlz1NRkur", "rJxsCdcJ_r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "The authors responded satisfactorily and updated the paper with better clarity. I appreciate the job and maintain the assessment. A minor point to clarify regarding the question of the number of embedding vectors, is that I actually meant the other methods (not this paper). But since other methods do not take a fu...
[ -1, -1, -1, -1, 8, 6, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, 5, 1, 3, -1, -1, -1 ]
[ "Hyg83ASHoS", "S1gCZwh2Kr", "B1e4OGa6tB", "HJgpzq645S", "iclr_2020_rJeW1yHYwH", "iclr_2020_rJeW1yHYwH", "iclr_2020_rJeW1yHYwH", "BJlz1NRkur", "rJxsCdcJ_r", "iclr_2020_rJeW1yHYwH" ]
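The Bochner-based functional time encoding in this record's abstract maps a timestamp to cosine/sine features so that inner products between encodings depend only on the time difference. A minimal sketch (assumes NumPy; in TGAT the frequencies are learned, whereas here they are fixed for illustration):

```python
import numpy as np

def time_encode(t, omegas):
    """Phi(t) = sqrt(1/d) * [cos(w_1 t), ..., cos(w_d t),
                             sin(w_1 t), ..., sin(w_d t)].
    <Phi(t1), Phi(t2)> = (1/d) * sum_i cos(w_i (t1 - t2)), i.e. a
    translation-invariant kernel over time spans."""
    d = len(omegas)
    phases = np.outer(np.atleast_1d(t), omegas)
    return np.sqrt(1.0 / d) * np.concatenate(
        [np.cos(phases), np.sin(phases)], axis=-1)

omegas = np.array([0.5, 1.0, 2.0])
```

Because only relative time matters, the encoding lets self-attention reason about how long ago a neighbor interacted, regardless of absolute timestamps.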
iclr_2020_Sklf1yrYDr
BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning
Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural networks. However, an ensemble’s cost for both training and testing increases linearly with the number of networks, which quickly becomes untenable. In this paper, we propose BatchEnsemble, an ensemble method whose computational and memory costs are significantly lower than typical ensembles. BatchEnsemble achieves this by defining each weight matrix to be the Hadamard product of a shared weight among all ensemble members and a rank-one matrix per member. Unlike ensembles, BatchEnsemble is not only parallelizable across devices, where one device trains one member, but also parallelizable within a device, where multiple ensemble members are updated simultaneously for a given mini-batch. Across CIFAR-10, CIFAR-100, WMT14 EN-DE/EN-FR translation, and out-of-distribution tasks, BatchEnsemble yields competitive accuracy and uncertainties as typical ensembles; the speedup at test time is 3X and memory reduction is 3X at an ensemble of size 4. We also apply BatchEnsemble to lifelong learning, where on Split-CIFAR-100, BatchEnsemble yields comparable performance to progressive neural networks while having much lower computational and memory costs. We further show that BatchEnsemble can easily scale up to lifelong learning on Split-ImageNet, which involves 100 sequential learning tasks.
accept-poster
This paper proposed an improved ensemble method called BatchEnsemble, where the weight matrix is decomposed as the element-wise product of a shared weight matrix and a rank-one matrix for each member. The effectiveness of the proposed method has been verified by experiments on various tasks including image classification, machine translation, lifelong learning, and uncertainty modeling. The idea is simple and easy to follow. Although some reviewers thought it lacks in-depth analysis, I would like to see it accepted so the community can benefit from it.
train
[ "r1giCa_isr", "r1eWgJYijB", "HkeAWH3XoH", "BygEHAlcoS", "Syl-w72Ssr", "B1eTCr27jS", "HJxBdw3msH", "Syg4ew37iH", "S1eCXLhXor", "SJxoU252KH", "rkljAl1hKS", "r1xd6xjpFH" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The revision includes:\n1. An additional experiments about neural network calibration on CIFAR-10 corruption dataset in Appendix D. Figure 7 shows that BatchEnsemble achieves the best trade-off among accuracy, calibration, computational and memory costs. Moreover, it shows that BatchEnsemble is orthogonal to exist...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2020_Sklf1yrYDr", "rkljAl1hKS", "rkljAl1hKS", "Syl-w72Ssr", "Syg4ew37iH", "HkeAWH3XoH", "r1xd6xjpFH", "SJxoU252KH", "B1eTCr27jS", "iclr_2020_Sklf1yrYDr", "iclr_2020_Sklf1yrYDr", "iclr_2020_Sklf1yrYDr" ]
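The key identity behind the BatchEnsemble layer described above: modulating a shared weight matrix W by a rank-one factor per member is equivalent to elementwise scaling of the layer's input and output, so each member costs only two extra vectors. A minimal sketch (assumes NumPy; shapes and the absence of bias/nonlinearity are simplifications, and the row/column convention here may differ from the paper's):

```python
import numpy as np

def batch_ensemble_forward(x, W, s, r):
    """Member forward pass: y = ((x * s) @ W) * r, which equals
    x @ (W * outer(s, r)) -- the shared W Hadamard-multiplied by a
    rank-one matrix. x: (batch, d_in); W: (d_in, d_out);
    s: (d_in,); r: (d_out,)."""
    return ((x * s) @ W) * r

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 5))
s = rng.normal(size=3)
r = rng.normal(size=5)
fast = batch_ensemble_forward(x, W, s, r)
slow = x @ (W * np.outer(s, r))     # explicit member weight, for comparison
```

The `fast` form never materializes a per-member weight matrix, which is why all members can share one mini-batch on one device.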
iclr_2020_ByxGkySKwH
Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data. Thus, ReLU networks do not know when they don't know. However, this is a highly important property in safety critical applications. In the context of out-of-distribution detection (OOD) there have been a number of proposals to mitigate this problem but none of them are able to make any mathematical guarantees. In this paper we propose a new approach to OOD which overcomes both problems. Our approach can be used with ReLU networks and provides provably low confidence predictions far away from the training data as well as the first certificates for low confidence predictions in a neighborhood of an out-distribution point. In the experiments we show that state-of-the-art methods fail in this worst-case setting whereas our model can guarantee its performance while retaining state-of-the-art OOD performance.
accept-poster
This paper tackles the problem of confidence in neural network predictions for out-of-distribution (OOD) samples. The authors propose an approach for training neural networks such that the OOD prediction is uniform across classes. The approach requires samples from in- and out-of-distribution data and relies on a mixture of Gaussians for modelling the distributions, allowing theoretical guarantees on detecting OOD samples (unlike existing techniques). The main concerns of the reviewers were addressed during the rebuttal. Even if this approach does not outperform the state of the art in practice, providing such theoretical guarantees is an important contribution. All reviewers agree that this paper should be accepted. I therefore recommend acceptance.
train
[ "Sye38MQptH", "BkxIutEqtS", "ByxzUJ1siS", "rJlSam-ciH", "ryxadQZ5jH", "H1gtSXbqiS", "BygIxqKcYr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper uses a generative model to assign anomaly scores. By its construction, it can provide provable performance guarantees. Experiments do not make unreasonable assumptions such as the ability to peak at the test data, unlike much previous work.\nMy primary concern is that they should show performance on CIF...
[ 6, 6, -1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2020_ByxGkySKwH", "iclr_2020_ByxGkySKwH", "H1gtSXbqiS", "Sye38MQptH", "BygIxqKcYr", "BkxIutEqtS", "iclr_2020_ByxGkySKwH" ]
iclr_2020_HJx81ySKwr
Iterative energy-based projection on a normal data manifold for anomaly localization
Autoencoder reconstructions are widely used for the task of unsupervised anomaly localization. Indeed, an autoencoder trained on normal data is expected to only be able to reconstruct normal features of the data, allowing the segmentation of anomalous pixels in an image via a simple comparison between the image and its autoencoder reconstruction. In practice however, local defects added to a normal image can deteriorate the whole reconstruction, making this segmentation challenging. To tackle the issue, we propose in this paper a new approach for projecting anomalous data onto an autoencoder-learned normal data manifold, by using gradient descent on an energy derived from the autoencoder's loss function. This energy can be augmented with regularization terms that model priors on what constitutes the user-defined optimal projection. By iteratively updating the input of the autoencoder, we bypass the loss of high-frequency information caused by the autoencoder bottleneck. This makes it possible to produce images of higher quality than classic reconstructions. Our method achieves state-of-the-art results on various anomaly localization datasets. It also shows promising results at an inpainting task on the CelebA dataset.
accept-poster
This paper proposed an autoencoder-based approach for anomaly localization. The method shows promising results on an inpainting task compared with a traditional autoencoder. The first two reviewers recommend this paper for acceptance. The last reviewer has some concerns about the experimental design and whether VAE is a suitable baseline. The authors provided a reasonable explanation in the rebuttal, while the reviewer did not give further comments. Overall, the paper proposes a promising approach for anomaly localization; thus, I recommend it for acceptance.
val
[ "SJxhkqf7iB", "H1lbmOfQir", "HyeKGwMQsr", "ryxfYSfmoH", "BJeUAnt7qr", "rJgi-7gN9H", "B1eyRHZV5H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer, thank you for your time and comments. First, we would like to draw your attention to our general comment, answering questions about overall baselines for anomaly localization and statistics on the benefits of our method. This comment also explains changes in the last revision of the paper.\n\nWe wi...
[ -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, -1, 1, 1, 3 ]
[ "BJeUAnt7qr", "rJgi-7gN9H", "B1eyRHZV5H", "iclr_2020_HJx81ySKwr", "iclr_2020_HJx81ySKwr", "iclr_2020_HJx81ySKwr", "iclr_2020_HJx81ySKwr" ]
iclr_2020_Skxuk1rFwB
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound propagation (IBP) based training is efficient and significantly outperforms linear relaxation based methods on many tasks, yet it may suffer from stability issues since the bounds are much looser especially at the beginning of training. In this paper, we propose a new certified adversarial training method, CROWN-IBP, by combining the fast IBP bounds in a forward bounding pass and a tight linear relaxation based bound, CROWN, in a backward bounding pass. CROWN-IBP is computationally efficient and consistently outperforms IBP baselines on training verifiably robust neural networks. We conduct large scale experiments on MNIST and CIFAR datasets, and outperform all previous linear relaxation and bound propagation based certified defenses in L_inf robustness. Notably, we achieve 7.02% verified test error on MNIST at epsilon=0.3, and 66.94% on CIFAR-10 with epsilon=8/255.
accept-poster
This paper presents a method that hybridizes the strategies of linear programming and interval bound propagation to improve adversarial robustness. While some reviewers have concerns about the novelty of the underlying ideas presented, the method is an improvement to the SOTA in certifiable robustness, and has become a benchmark method within this class of defenses.
test
[ "S1e-LkOnjS", "B1lqtOGcsr", "Hyl0oIC7iB", "HylAHLAXsS", "H1eLlPRmjr", "Skx-gURXsS", "BklFgrR7jH", "H1eplpk-5H", "HJgnxLMMcH", "SJeZNqlpqH", "S1eGg-n9YB", "B1xHnjsIdS", "r1geHD2NOH", "S1eG0peVuB", "HJlktCsXdH" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public" ]
[ "Dear Reviewers and Area Chair,\n\nWe summarize the changes we made to address reviewer concerns, and the new results added to the revision of our paper as follows.\n\n1. The use of TPUs to achieve SOTA results may be a concern, so we have implemented a multi-GPU version of CROWN-IBP. To train the SOTA CIFAR model,...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 8, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 1, 1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Skxuk1rFwB", "HJgnxLMMcH", "HylAHLAXsS", "Skx-gURXsS", "H1eplpk-5H", "HJgnxLMMcH", "SJeZNqlpqH", "iclr_2020_Skxuk1rFwB", "iclr_2020_Skxuk1rFwB", "iclr_2020_Skxuk1rFwB", "iclr_2020_Skxuk1rFwB", "r1geHD2NOH", "S1eG0peVuB", "HJlktCsXdH", "iclr_2020_Skxuk1rFwB" ]
iclr_2020_B1gskyStwr
Frequency-based Search-control in Dyna
Model-based reinforcement learning has been empirically demonstrated as a successful strategy to improve sample efficiency. In particular, Dyna is an elegant model-based architecture integrating learning and planning that provides great flexibility in using a model. One of the most important components in Dyna is called search-control, which refers to the process of generating state or state-action pairs from which we query the model to acquire simulated experiences. Search-control is critical in improving learning efficiency. In this work, we propose a simple and novel search-control strategy by searching high frequency regions of the value function. Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high frequency signal requires more samples to reconstruct. We empirically show that a high frequency function is more difficult to approximate. This suggests a search-control strategy: we should use states from high frequency regions of the value function to query the model to acquire more samples. We develop a simple strategy to locally measure the frequency of a function by gradient and Hessian norms, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its property and effectiveness on benchmark domains.
accept-poster
The reviewers are unanimous in their evaluation of this paper, and I concur.
val
[ "r1eEdLMyjH", "HJlqzZ8voS", "SJej20SDsS", "rkl4BYTBiB", "HJglemDstB", "Byx_XzzJir" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Disclaimer: \nThis is an emergency review that I got assigned on Nov 5th, so I unfortunately had only a few hours to assess the paper.\n\nSummary:\nThis paper proposes a new mechanism for “search control”, that is, choosing the planning-start-states from which to roll out the model, in the context of Dyna-style up...
[ 6, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2020_B1gskyStwr", "HJglemDstB", "Byx_XzzJir", "r1eEdLMyjH", "iclr_2020_B1gskyStwr", "iclr_2020_B1gskyStwr" ]
iclr_2020_Bke61krFvS
Learning representations for binary-classification without backpropagation
The family of feedback alignment (FA) algorithms aims to provide a more biologically motivated alternative to backpropagation (BP), by substituting the computations that are unrealistic to be implemented in physical brains. While FA algorithms have been shown to work well in practice, there is a lack of rigorous theory proving their learning capabilities. Here we introduce the first feedback alignment algorithm with provable learning guarantees. In contrast to existing work, we do not require any assumption about the size or depth of the network except that it has a single output neuron, i.e., as in binary classification tasks. We show that our FA algorithm can deliver its theoretical promises in practice, surpassing the learning performance of existing FA methods and matching backpropagation in binary classification tasks. Finally, we demonstrate the limits of our FA variant when the number of output neurons grows beyond a certain quantity.
accept-poster
This paper provides a rigorous analysis of feedback alignment under two restrictions: 1) all layers except the first are constrained to realize monotone functions, and 2) the task is binary classification. Overall, all reviewers agree that this is an interesting submission providing important results on the topic, and as such all agree that it should feature in the ICLR program. Thus, I recommend acceptance. However, I ask the authors to take into account the reviewers' concerns and include a discussion about the limitations (and general applicability) of this work.
train
[ "HJlPbgIsKB", "H1gC5r7mjS", "SkxYmEmQoS", "BylAPQmQjr", "BylesNeRKr", "rJxE1TVAFH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "------update------\n\nAfter talking to other reviews and reading the rebuttal, I am convinced that the paper contributes sufficiently to the theoretical understanding of the FA algorithm and should be accepted as a conference paper. \n\nI hope that, in the next revision, the authors could include more about the li...
[ 6, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Bke61krFvS", "HJlPbgIsKB", "BylesNeRKr", "rJxE1TVAFH", "iclr_2020_Bke61krFvS", "iclr_2020_Bke61krFvS" ]
iclr_2020_HygegyrYwH
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size n, the (inverse) target error 1/ϵ, and the (inverse) failure probability 1/δ. This work shows that Θ̃(1/ϵ) iterations of gradient descent with Ω̃(1/ϵ²) training examples on two-layer ReLU networks of any width exceeding polylog(n, 1/ϵ, 1/δ) suffice to achieve a test misclassification error of ϵ. We also prove that stochastic gradient descent can achieve ϵ test error with polylogarithmic width and Θ̃(1/ϵ) samples. The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting.
accept-poster
This paper studies how much overparameterization is required to achieve zero training error via gradient descent in one-hidden-layer neural nets. In particular, the paper studies the effect of margin in the data on the required amount of overparameterization. While the paper does not improve in the worst case, in the presence of margin the paper shows that sometimes even logarithmic width is sufficient. The reviewers all seem to agree that this is a nice paper but had a few mostly technical concerns. These concerns were sufficiently addressed in the response. Based on my own reading I also find the paper to be interesting and well written, with clever proofs. So I recommend acceptance. I would like to suggest that the authors clarify in the abstract and intro that this improvement cannot be achieved in the worst case, as a shallow reading of the manuscript may cause some confusion (that logarithmic width suffices in general).
test
[ "HyeBXPOpFr", "r1l6L345sS", "BkxvVm4ciS", "r1gHwzfwsS", "r1lil-NcjS", "Hkl7GrRFoH", "Byer9Q0tiB", "rkldAVatsr", "rJeP142Kir", "ByxJi5sYiB", "r1eSFGzDiS", "H1xIGMGwjS", "rJeEcZzPoB", "HkeBVZMPjH", "rkgnCl8MKS", "Bkgha_aM9r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the author shows that for a two-layer ReLU network, it only requires a network width that is poly-logarithmic in the sample size n to get good optimization and generalization error bounds, which is better than prior results. \n\nOverall, the paper is well written and easy to follow. However I still...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_HygegyrYwH", "HyeBXPOpFr", "Byer9Q0tiB", "HyeBXPOpFr", "rkldAVatsr", "HyeBXPOpFr", "r1gHwzfwsS", "rJeP142Kir", "HkeBVZMPjH", "r1eSFGzDiS", "rkgnCl8MKS", "rJeEcZzPoB", "Bkgha_aM9r", "iclr_2020_HygegyrYwH", "iclr_2020_HygegyrYwH", "iclr_2020_HygegyrYwH" ]
iclr_2020_r1gelyrtwH
Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics
Sparsely available data points cause numerical error on finite differences which hinders us from modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed or defined on an unstructured grid, making it hard to build deep learning models to handle physics-governing observations on the unstructured grid. In this paper, we propose a novel architecture, Physics-aware Difference Graph Networks (PA-DGN), which exploits neighboring information to learn finite differences inspired by physics equations. PA-DGN leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given sequential observations. We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations.
accept-poster
All reviewers agree that this research is novel and well carried out, so this is a clear accept. Please ensure that the final version reflects the reviewer comments and the new information provided during the rebuttal.
train
[ "S1x-cmNsiB", "HJlGM8Fwsr", "SJg0q7YDjH", "H1lB2JFwiS", "S1gtmEPptS", "Bkgw7nNRKH", "ByetFVF0tB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We tested our proposed method and baselines on the NEMO sea surface temperature (SST) dataset (available at http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=GLOBAL_ANALYSIS_FORECAST_PHY_001_024)\nWe first download the data in the area between 50N-65N and 75W...
[ -1, -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "HJlGM8Fwsr", "Bkgw7nNRKH", "S1gtmEPptS", "ByetFVF0tB", "iclr_2020_r1gelyrtwH", "iclr_2020_r1gelyrtwH", "iclr_2020_r1gelyrtwH" ]
iclr_2020_r1lZgyBYwS
HiLLoC: lossless image compression with hierarchical latent variable models
We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based 'Bits-Back with ANS' algorithm for lossless compression to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
accept-poster
The paper proposes a lossless image compression method consisting of a hierarchical VAE and a bits-back version of ANS. Compared to previous work, the paper (i) improves the compression-rate performance by adapting the discretization of the latent space required for the entropy coder ANS, (ii) increases compression speed by implementing a vectorized version of ANS, and (iii) shows that a model trained on the low-resolution ImageNet 32 dataset can generalize its compression capabilities to higher resolutions. The authors properly addressed the reviewers' concerns. The main criticisms that remain are (i) the method is not practical yet (long compression time) and (ii) the results are not state of the art - but the contribution is nevertheless solid.
train
[ "B1lCFME2iH", "r1lq-uOFiB", "SJx8SmutiS", "HJl-XGOYir", "BygIsY5aKr", "BygJ9qoy9S", "BJeuOnp19S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I am satisfied with the author's rebuttal, and will keep my rating at \"Accept\".", "Thank you for your review. To address the points you raised:\n\n>I would like the authors to revise their statement of state of the art compression performance on page 7 directly below table 2. ... It would be beneficial for the...
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 4, 4, 1 ]
[ "r1lq-uOFiB", "BygIsY5aKr", "BJeuOnp19S", "BygJ9qoy9S", "iclr_2020_r1lZgyBYwS", "iclr_2020_r1lZgyBYwS", "iclr_2020_r1lZgyBYwS" ]
iclr_2020_BJeGlJStPr
IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks
The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often trade off between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to a 30% decrease in training wall-time compared to IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.
accept-poster
The authors propose a novel distributed reinforcement learning algorithm that includes 3 new components: a target network for the policy for stability, a circular buffer, and truncated importance sampling. The authors demonstrate that this improves performance while decreasing wall clock training time. Initially, reviewers were concerned about the fairness of hyperparameter tuning, the baseline implementation of algorithms, and the limited set of experiments done on the Atari games. After the author response, reviewers were satisfied with all 3 of those issues. I may have missed it, but I did not see that code was being released with this paper. I think it would greatly increase the impact of the paper if the authors release source code, so I strongly encourage them to do so. Generally, all the reviewers were in consensus that this is an interesting paper and I recommend acceptance.
train
[ "H1ePESipKS", "SJlsXw06tr", "ByeZ6yrjor", "Byl-Rcma9B", "B1x4iit9iS", "rygAOsKcsH", "B1eKVot9sB", "ryx_H5FcoS", "HJx4ydyAKr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper studies a novel way for distributed RL training which combines the data reuse of PPO with the asynchronous updates of IMPALA. The main contribution is the observation that using a target network is necessary for achieving stable learning. I think this is an important result which seems to be validated by...
[ 6, 6, -1, 6, -1, -1, -1, -1, 3 ]
[ 3, 1, -1, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJeGlJStPr", "iclr_2020_BJeGlJStPr", "ryx_H5FcoS", "iclr_2020_BJeGlJStPr", "H1ePESipKS", "SJlsXw06tr", "HJx4ydyAKr", "Byl-Rcma9B", "iclr_2020_BJeGlJStPr" ]
iclr_2020_BJewlyStDr
On Bonus Based Exploration Methods In The Arcade Learning Environment
Research on exploration in reinforcement learning, as applied to Atari 2600 game-playing, has emphasized tackling difficult exploration problems such as Montezuma's Revenge (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on Montezuma's Revenge, Bellemare et al.'s set of hard exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher scores on Montezuma's Revenge they do not provide meaningful gains over the simpler epsilon-greedy scheme. In fact, we find that methods that perform best on that game often underperform epsilon-greedy on easy exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games. Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in Montezuma's Revenge may be better attributed to architecture change, rather than better exploration schemes; and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain.
accept-poster
This paper presents a detailed comparison of different bonus-based exploration methods on a common evaluation framework (Rainbow) when used with the ATARI game suite. They find that while these bonuses help on Montezuma's Revenge (MR), they underperform relative to epsilon-greedy on other games. This suggests that architectural changes may be a more important factor than bonus-based exploration in recent advances on MR. The reviewers commented that this paper makes no effort to present new techniques, and the insights discovered could be expanded on. Despite this, it is an interesting paper that is generally well argued and would be a useful contribution to the field. I recommend acceptance.
train
[ "SklK7x6-cS", "BJg-P7A3KH", "rJxjPjINiH", "B1lnu5INiS" ]
[ "official_reviewer", "official_reviewer", "author", "author" ]
[ "#rebuttal responses\n \nI change the score to be weak accept as the authors do not provide any comparison result on Rainbow without the prioritized replay buffer during the rebuttal phase. I also agree with Reviewer 1's opinion that the authors do not provide some fixing method, such as combining the noisy network...
[ 6, 6, -1, -1 ]
[ 3, 3, -1, -1 ]
[ "iclr_2020_BJewlyStDr", "iclr_2020_BJewlyStDr", "SklK7x6-cS", "BJg-P7A3KH" ]
iclr_2020_r1lOgyrKDS
Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation
Sequence generation models are commonly refined with reinforcement learning over user-defined metrics. However, high gradient variance hinders the practical use of this method. To stabilize this method, we adapt to contextual generation of categorical sequences a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control. Due to the correlation, the number of unique rollouts is random and adaptive to model uncertainty; those rollouts naturally become baselines for each other, and hence are combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on both neural program synthesis and image captioning. The proposed methods yield lower gradient variance and consistent improvement over related baselines.
accept-poster
The paper presents a novel reinforcement learning-based algorithm for contextual sequence generation. Specifically, the paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The method consists of performing correlated Monte Carlo rollouts starting from each token in the generated sequence, and using the multiple rollouts to reduce gradient variance. Numerical experiments are presented with promising performance. Reviewers were in agreement that this is a non-trivial extension of previous work with broad potential application. Some concerns about better framing of the contributions were mostly resolved during the author rebuttal phase. Therefore, the AC recommends publication.
val
[ "HJgX78WIcr", "SkxFkND3iH", "S1eQqpL3oH", "H1lGR5UnoS", "B1xMsRpaFH", "HklBH2ICFr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents experimental results on the application of the gradient ARSM estimator of Yin et al. (2019) to challenging structured prediction problems (neural program synthesis and image captioning). The authors also propose two variants, ASR-K which is the ARS estimator computed on a random sample of K (amo...
[ 6, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2020_r1lOgyrKDS", "B1xMsRpaFH", "HklBH2ICFr", "HJgX78WIcr", "iclr_2020_r1lOgyrKDS", "iclr_2020_r1lOgyrKDS" ]
iclr_2020_HJeOekHKwr
Smoothness and Stability in GANs
Generative adversarial networks, or GANs, commonly display unstable behavior during training. In this work, we develop a principled theoretical framework for understanding the stability of various types of GANs. In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized by the GAN and the generator's architecture. We find that existing GAN variants satisfy some, but not all, of these conditions. Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously. In the process, we explain and clarify the need for various existing GAN stabilization techniques, including Lipschitz constraints, gradient penalties, and smooth activation functions.
accept-poster
The paper provides a theoretical study of what regularizations should be used in GAN training and why. The main focus is the conditions on the discriminator that need to be enforced to obtain the Lipschitz property of the corresponding function that is optimized for the generator. Quite a few theorems and propositions are provided. As noted by Reviewer 3, this adds insight to well-known techniques; Reviewer 1 rightfully notes that this does not lead to any practical conclusion. Moreover, the training of GANs never reaches the optimal discriminator, which could be a weak point; rather, it proceeds in an alternating fashion, and the evolution is then governed by the spectra of the local Jacobian (which is briefly mentioned). This is mentioned in future work, but it is not clear at all whether the results here can be helpful (or can be generalized). At some point the paper goes into "more theorems mode", which makes it not so easy and motivating to read. The theoretical results at the quantitative level are very interesting. I have looked at Figure 1 for a long time: does it support the claims? At first my impression was that it does not (there are better FID scores for larger learning rates). But in the end, I think it does: the convergence for a learning rate smaller than $\gamma_0$ to the same FID indicates convergence to the same local minima (probably). This is perfectly fine. Oscillations afterwards move us to a stochastic region, where FID oscillates. So, the theory has at least minor confirmation.
train
[ "BJeJ4c22tH", "ryerF4z3iH", "rkxozAqLoH", "HkgEeBw9sB", "rylKHzwYsS", "rJeTlA9LiS", "Skl_j_8FKB", "ryeniNsQYS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper provides a unified theoretical framework for regularizing GAN losses. It accounts for most regularization technics especially spectral normalization and gradient penalty and explains how those two methods are in fact complementary. So far this was only observed experimentally but without any theoretical...
[ 8, -1, -1, -1, -1, -1, 6, 1 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJeOekHKwr", "iclr_2020_HJeOekHKwr", "rJeTlA9LiS", "BJeJ4c22tH", "Skl_j_8FKB", "ryeniNsQYS", "iclr_2020_HJeOekHKwr", "iclr_2020_HJeOekHKwr" ]
iclr_2020_rJxtgJBKDr
SNOW: Subscribing to Knowledge via Channel Pooling for Transfer & Lifelong Learning of Convolutional Neural Networks
SNOW is an efficient learning method to improve training/serving throughput as well as accuracy for transfer and lifelong learning of convolutional neural networks based on knowledge subscription. SNOW selects the top-K useful intermediate feature maps for a target task from a pre-trained and frozen source model through a novel channel pooling scheme, and utilizes them in the task-specific delta model. The source model is responsible for generating a large number of generic feature maps. Meanwhile, the delta model selectively subscribes to those feature maps and fuses them with its local ones to deliver high accuracy for the target task. Since a source model takes part in both training and serving of all target tasks in an inference-only mode, one source model can serve multiple delta models, enabling significant computation sharing. The sizes of such delta models are a fraction of the source model's, thus SNOW also provides model-size efficiency. Our experimental results show that SNOW offers a superior balance between accuracy and training/inference speed for various image classification tasks compared to existing transfer and lifelong learning practices.
accept-poster
This paper proposes a method, SNOW, for improving the speed of training and inference for transfer and lifelong learning by subscribing the target delta model to the knowledge of a source pretrained model via channel pooling. Reviewers and AC agree that this paper is well written, with a simple but sound technique towards an important problem and with promising empirical performance. The main critique is that the approach can only tackle transfer learning while failing in the lifelong setting. The authors provided convincing feedback on this key point. Details requested by the reviewers were all well addressed in the revision. Hence I recommend acceptance.
train
[ "SygBFYrscB", "rkxoNbajir", "rygKCo52KH", "Syel4Qx2oH", "Hkl1amTooH", "r1gqfQasiS", "SyxUr7pjiH", "H1gdFC2osS", "SJlLFPEjFr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "After rebuttal:\nAuthors have addressed all my doubts. I recommend accepting this paper.\n\n=============================\nBefore rebuttal:\nSummary:\n\nThis paper proposes a new way to do transfer learning. Specifically, authors first train a big source ConvNet and then for each task, they train a small ConvNet i...
[ 8, -1, 8, -1, -1, -1, -1, -1, 3 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rJxtgJBKDr", "SygBFYrscB", "iclr_2020_rJxtgJBKDr", "SyxUr7pjiH", "SJlLFPEjFr", "rygKCo52KH", "r1gqfQasiS", "iclr_2020_rJxtgJBKDr", "iclr_2020_rJxtgJBKDr" ]
iclr_2020_SkeFl1HKwr
Empirical Studies on the Properties of Linear Regions in Deep Neural Networks
A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted. It is believed that the number of these regions represents the expressivity of a DNN. This paper provides a novel and meticulous perspective to look into DNNs: instead of just counting the number of the linear regions, we study their local properties, such as the inspheres, the directions of the corresponding hyperplanes, the decision boundaries, and the relevance of the surrounding regions. We empirically observed that different optimization techniques lead to completely different linear regions, even though they result in similar classification accuracies. We hope our study can inspire the design of novel optimization techniques, and help discover and analyze the behaviors of DNNs.
accept-poster
This paper studies the properties of the regions in which a DNN with piecewise linear activations behaves linearly. The authors develop a variety of techniques to characterize these properties and show how they correlate with various parameters of the network architecture and training method. The reviewers were in consensus on the quality of the paper: the paper is well written and contains a number of insights that would be of broad interest to the deep learning community. I therefore recommend acceptance.
train
[ "ryxYOd65jB", "rkeSGKa9jS", "SkgRXY6cir", "H1xQe_6qoH", "SylLjOT5iH", "SJx9dS5oFr", "H1xyI5totr", "Byg1GC30tH", "ByeMIYZJ5H", "rJeA5tIstS", "rJxwW57NtS", "SkgO4pZJYr", "r1xXflPEKS", "rygx_TkxFr", "HkxE7zQROB", "ByxNE7_aOB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "author", "public", "public", "public", "public" ]
[ "We thank the reviewer for the constructive comments, which helped improve the paper. We also apologize for our mistake for including the acknowledgement. It has been removed in our revision. Thank you for pointing this out!\nWe address your detailed comments below.\n\n- Figure 1 Top: What do the different colour r...
[ -1, -1, -1, -1, -1, 6, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 1, 1, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "SJx9dS5oFr", "H1xyI5totr", "H1xyI5totr", "Byg1GC30tH", "SJx9dS5oFr", "iclr_2020_SkeFl1HKwr", "iclr_2020_SkeFl1HKwr", "iclr_2020_SkeFl1HKwr", "rJeA5tIstS", "iclr_2020_SkeFl1HKwr", "HkxE7zQROB", "ByxNE7_aOB", "rJxwW57NtS", "SkgO4pZJYr", "iclr_2020_SkeFl1HKwr", "iclr_2020_SkeFl1HKwr" ]
iclr_2020_S1ltg1rFDS
Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning
Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, \citet{liu18breaking} proposed an approach that avoids the curse of horizon suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data being collected by a known behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a "backward flow" operator and show that the fixed point solution gives the desired importance ratios of stationary distributions between the target and behavior policies. We analyze its asymptotic consistency and finite-sample generalization. Experiments on benchmarks verify the effectiveness of our proposed approach.
accept-poster
This paper addresses an important and relevant problem in reinforcement learning: learning from off-policy data, taking into account the offsets in the visitation distribution of states. This has the promise of lowering variance even with long horizon roll-outs. Existing methods have required access to the behavior policy (or have required data from the stationary distribution). The novel proposed approach instead uses an alternative method, based on the fixed point of the "backward flow" operator, to calculate the importance ratios required for policy evaluation in discrete and continuous environments. In the initial version of the submission, several concerns were expressed regarding both the quality of the paper and clarity. The authors have updated the paper to address these concerns to the satisfaction of the reviewers, who are now unanimously in favor of acceptance.
test
[ "rkxhdccTFB", "SJlNlFo3oS", "rJlNHqc3oB", "B1gVuKknoH", "SkliKP1hsH", "r1gnmI1hjB", "B1xXv4knjr", "H1xjTu5iYH", "rklAXTxaKr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new algorithm to the off-policy evaluation problem in reinforcement learning, based on stationary state visitation. The proposed algorithm does not need to know the behavior policy probabilities, which provide a broader application scenario compared with previous work. It is based on directly...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_S1ltg1rFDS", "rJlNHqc3oB", "B1gVuKknoH", "H1xjTu5iYH", "rklAXTxaKr", "rkxhdccTFB", "iclr_2020_S1ltg1rFDS", "iclr_2020_S1ltg1rFDS", "iclr_2020_S1ltg1rFDS" ]
iclr_2020_rkecl1rtwB
PairNorm: Tackling Oversmoothing in GNNs
The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PairNorm, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PairNorm is fast, easy to implement without any change to network architecture nor any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PairNorm makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm.
accept-poster
The paper proposes a way to tackle oversmoothing in Graph Neural Networks. The authors do a good job of motivating their approach, which is straightforward and works well. The paper is well written and the experiments are informative and well carried out. Therefore, I recommend acceptance. Please make sure the final version reflects the discussion during the rebuttal.
train
[ "BygxL-UnjS", "HyeWZDB2oH", "BkxkcqhTtB", "rJg-wCfLqr" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for reading our paper thoroughly and giving constructive feedback, and we are glad that you found our paper interesting and contributing to a deeper understanding of the field. We respond to your questions one-by-one in the following. \n\n>> Can we interpret PairNorm (or the optimization proble...
[ -1, -1, 8, 3 ]
[ -1, -1, 4, 3 ]
[ "BkxkcqhTtB", "rJg-wCfLqr", "iclr_2020_rkecl1rtwB", "iclr_2020_rkecl1rtwB" ]
iclr_2020_rJlnxkSYPS
Unsupervised Clustering using Pseudo-semi-supervised Learning
In this paper, we propose a framework that leverages semi-supervised models to improve unsupervised clustering performance. To leverage semi-supervised models, we first need to automatically generate labels, called pseudo-labels. We find that prior approaches for generating pseudo-labels hurt clustering performance because of their low accuracy. Instead, we use an ensemble of deep networks to construct a similarity graph, from which we extract high accuracy pseudo-labels. The approach of finding high quality pseudo-labels using ensembles and training the semi-supervised model is iterated, yielding continued improvement. We show that our approach outperforms state of the art clustering results for multiple image and text datasets. For example, we achieve 54.6% accuracy for CIFAR-10 and 43.9% for 20news, outperforming state of the art by 8-12% in absolute terms.
accept-poster
The authors addressed the issues raised by the reviewers, so I suggest the acceptance of this paper.
val
[ "SJl0hUVhoH", "S1eyIhP_jH", "Byg-iO4dsS", "Bye0MLNusS", "SkeXcSE_sB", "HJgjIIMDtS", "B1etG5Xntr", "SyeYFWMb9r" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) Regarding your comment about using self-supervised networks as a baseline rather than starting from scratch or using pre-trained features. Yes, we agree that it would be an interesting middle point to evaluate. We would like to run this experiment but unfortunately we won’t have enough time to do it before the ...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 1 ]
[ "S1eyIhP_jH", "Byg-iO4dsS", "HJgjIIMDtS", "B1etG5Xntr", "SyeYFWMb9r", "iclr_2020_rJlnxkSYPS", "iclr_2020_rJlnxkSYPS", "iclr_2020_rJlnxkSYPS" ]
iclr_2020_Hke3gyHYwH
Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee
Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of data. Such over-fitting ability hinders generalization when mislabeled training examples are present. On the other hand, simple regularization methods like early-stopping can often achieve highly nontrivial performance on clean test data in these scenarios, a phenomenon not theoretically understood. This paper proposes and analyzes two simple and intuitive regularization methods: (i) regularization by the distance between the network parameters to initialization, and (ii) adding a trainable auxiliary variable to the network output for each training example. Theoretically, we prove that gradient descent training with either of these two methods leads to a generalization guarantee on the clean data distribution despite being trained using noisy labels. Our generalization analysis relies on the connection between wide neural network and neural tangent kernel (NTK). The generalization bound is independent of the network size, and is comparable to the bound one can get when there is no label noise. Experimental results verify the effectiveness of these methods on noisily labeled datasets.
accept-poster
This paper studies the effect of various regularization techniques for dealing with noisy labels. In particular, the authors study regularization techniques such as distance from initialization to mitigate this effect. The authors also provide theory in the NTK regime. All reviewers have a positive assessment of the paper and think it is clearly written with nice contributions, but do raise some questions about novelty given that it mostly follows the standard NTK regime. I agree that the paper is nicely written and well-motivated. I do not think the theory developed here fully captures all the nuances of practical observations in this problem. In particular, with label noise this theory suggests that test performance is not dramatically affected by label noise when using regularization or early stopping, whereas in practice what has been observed (and even proven in some cases) is that the performance is completely unaffected by small label noise. I think this paper is a good addition to ICLR and therefore recommend acceptance, but recommend the authors more clearly articulate the above nuances and limitations of their theory in the final manuscript.
train
[ "S1gq-N_pYH", "BJerRXTjor", "BkxSSXTsir", "rJxYvGpojr", "HJengQ5IYH", "S1xDkcCjtB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes two regularization methods for learning on noisily labeled data: the first penalizes the distance w.r.t. Euclidean norm from an initial point and the second uses an additional auxiliary variable for each example to learn a noise. In the theoretical part, the paper shows that an original clean d...
[ 6, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 5, 3 ]
[ "iclr_2020_Hke3gyHYwH", "HJengQ5IYH", "S1xDkcCjtB", "S1gq-N_pYH", "iclr_2020_Hke3gyHYwH", "iclr_2020_Hke3gyHYwH" ]
iclr_2020_H1laeJrKDB
Controlling generative models with continuous factors of variations
Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation. To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like position or scale of the object in the image. Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders.
accept-poster
Following the revision and the discussion, all three reviewers agree that the paper provides an interesting contribution to the area of generative image modeling. Accept.
train
[ "r1lnq4gCFB", "SJgA8ZLKYH", "BJeZ7_WXiH", "B1gfhLb7oH", "SJeyHrZXiB", "S1gDj4-7jB", "HkxRzJWpFS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an algorithm to find linear trajectories in the latent space of a generative model that correspond to a user-specified transformation T in image space. Roughly, the latent trajectory is obtained by inverting the generator at the transformed image and a clever recursive estimation strategy is pro...
[ 6, 6, -1, -1, -1, -1, 8 ]
[ 4, 4, -1, -1, -1, -1, 5 ]
[ "iclr_2020_H1laeJrKDB", "iclr_2020_H1laeJrKDB", "SJgA8ZLKYH", "HkxRzJWpFS", "r1lnq4gCFB", "r1lnq4gCFB", "iclr_2020_H1laeJrKDB" ]
iclr_2020_ryxmb1rKDS
Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control
In this paper, we introduce Symplectic ODE-Net (SymODEN), a deep learning framework which can infer the dynamics of a physical system, given by an ordinary differential equation (ODE), from observed state trajectories. To achieve better generalization with fewer training samples, SymODEN incorporates appropriate inductive bias by designing the associated computation graph in a physics-informed manner. In particular, we enforce Hamiltonian dynamics with control to learn the underlying dynamics in a transparent way, which can then be leveraged to draw insight about relevant physical aspects of the system, such as mass and potential energy. In addition, we propose a parametrization which can enforce this Hamiltonian formalism even when the generalized coordinate data is embedded in a high-dimensional space or we can only access velocity data instead of generalized momentum. This framework, by offering interpretable, physically-consistent models for physical systems, opens up new possibilities for synthesizing model-based control strategies.
accept-poster
This paper proposes a novel method for learning Hamiltonian dynamics from data. The data is obtained from systems subjected to an external control signal. The authors show the utility of their method for subsequent improved control in a reinforcement learning setting. The paper is well written, the method is derived from first principles, and the experimental validation is solid. The authors were also able to take into account the reviewers’ feedback and further improve their paper during the discussion period. Overall all of the reviewers agree that this is a great contribution to the field and hence I am happy to recommend acceptance.
val
[ "H1l1Ek92KB", "r1gsylCe9B", "HJxvhJ_0YS", "rkgZ6sVosr", "BkeTkhEisS", "S1gQ6lGKoH", "ryxW_CZFjB", "r1gT57zKoS", "HJlHKVMYsS", "ryxuD-JoDB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "In this paper, the authors propose a framework for learning the dynamics of a system with underlying Hamiltonian structure subjected to external control. Based on the extended equations of motion, the authors suggest how to apply NeuralODE in a way that makes use of the prior information that the unconstrained sys...
[ 8, 6, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_ryxmb1rKDS", "iclr_2020_ryxmb1rKDS", "iclr_2020_ryxmb1rKDS", "r1gT57zKoS", "HJxvhJ_0YS", "H1l1Ek92KB", "HJxvhJ_0YS", "r1gsylCe9B", "r1gsylCe9B", "iclr_2020_ryxmb1rKDS" ]
iclr_2020_SJeY-1BKDS
Understanding l4-based Dictionary Learning: Interpretation, Stability, and Robustness
Recently, the ℓ4-norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem. The simple MSP (matching, stretching, and projection) algorithm proposed by \cite{zhai2019a} has proved surprisingly efficient and effective. This paper aims to better understand this algorithm from its strong geometric and statistical connections with the classic PCA and ICA, as well as their associated fixed-point style algorithms. Such connections provide a unified way of viewing problems that pursue {\em principal}, {\em independent}, or {\em sparse} components of high-dimensional data. Our studies reveal additional good properties of ℓ4-maximization: not only is the MSP algorithm for sparse coding insensitive to small noise, but it is also robust to outliers and resilient to sparse corruptions. We provide statistical justification for such inherently nice properties. To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images.
accept-poster
Main content: Blind review #3 summarizes it well: This paper presents results on Dictionary Learning through l4 maximization. The authors base this paper heavily off of the formulation and algorithm in Zhai et. al. (2019) "Complete dictionary learning via l4-norm maximization over the orthogonal group". The paper draws connections between complete dictionary learning, PCA, and ICA by pointing out similarities between the objectives functions that are optimized as well as the algorithms used. The paper further presents results on dictionary learning in the presence of different types of noise (AWGN, sparse corruptions, outliers) and show that the l4 objective is robust to different types of noise. Finally the authors apply different types of noise to synthetic and real images and show that the dictionaries that they learn are robust to the noise applied. -- Discussion: Reviews agree about the interesting work, including the connections of complete dictionary learning with classic PCA and ICA (after further clarification during the rebuttal period). Additional empirical strengthening during the rebuttal period also addressed a reviewer concern. -- Recommendation and justification: As review #3 wrote, "Overall this paper makes significant contributions by extending the work in [Zhai et. al's (2019) "Complete dictionary learning via l4-norm maximization over the orthogonal group"] to noisy dictionary learning settings".
val
[ "H1gO_IImcB", "B1x_np7Ctr", "r1lCqgAWjH", "H1e1hkCWoH" ]
[ "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper presents results on Dictionary Learning through l4 maximization. The authors base this paper heavily off of the formulation and algorithm in Zhai et. al. (2019) \"Complete dictionary learning via l4-norm maximization over the orthogonal group\". The paper draws connections between complete dictionary le...
[ 8, 6, -1, -1 ]
[ 4, 4, -1, -1 ]
[ "iclr_2020_SJeY-1BKDS", "iclr_2020_SJeY-1BKDS", "B1x_np7Ctr", "H1gO_IImcB" ]
iclr_2020_Hygab1rKDS
Quantum Algorithms for Deep Convolutional Neural Networks
Quantum computing is a powerful computational paradigm with applications in several fields, including machine learning. In the last decade, deep learning, and in particular Convolutional Neural Networks (CNN), have become essential for applications in signal processing and image recognition. Quantum deep learning, however, remains a challenging problem, as it is difficult to implement non-linearities with quantum unitaries. In this paper we propose a quantum algorithm for evaluating and training deep convolutional neural networks with potential speedups over classical CNNs for both the forward and backward passes. The quantum CNN (QCNN) reproduces completely the outputs of the classical CNN and allows for non-linearities and pooling operations. The QCNN is in particular interesting for deep networks and could allow new frontiers in the image recognition domain, by allowing for many more convolution kernels, larger kernels, high dimensional inputs and high depth input channels. We also present numerical simulations for the classification of the MNIST dataset to provide practical evidence for the efficiency of the QCNN.
accept-poster
Four reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission. Especially, the authors should take care to make this paper accessible (understandable) to the ML community as ICLR is a ML venue (rather than quantum physics one). Failure to do so will likely discourage the generosity of reviewers toward this type of submissions in the future.
test
[ "rkg0c4VhiB", "HyxNzfvjor", "BJe0H1wKjB", "ByxRZyvKoB", "rygvnCLtsS", "Hketr08Yjr", "S1g6EebaKr", "HklnChipKr", "SylabRM85H", "SyeOQqnPqB" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the insightful comments.\n\nWe would like to clarify that our work provides a rigorous theoretical analysis showing that our quantum CNN is a faster and noise-robust adaptation of the classical CNN. As the quantum algorithm performs the same operations as the classical CNN (convolution, p...
[ -1, -1, -1, -1, -1, -1, 6, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 1, 1, 5, 1 ]
[ "HyxNzfvjor", "BJe0H1wKjB", "S1g6EebaKr", "HklnChipKr", "SylabRM85H", "SyeOQqnPqB", "iclr_2020_Hygab1rKDS", "iclr_2020_Hygab1rKDS", "iclr_2020_Hygab1rKDS", "iclr_2020_Hygab1rKDS" ]
iclr_2020_B1lJzyStvS
Self-Supervised Learning of Appliance Usage
Learning home appliance usage is important for understanding people's activities and optimizing energy consumption. The problem is modeled as an event detection task, where the objective is to learn when a user turns an appliance on, and which appliance it is (microwave, hair dryer, etc.). Ideally, we would like to solve the problem in an unsupervised way so that the method can be applied to new homes and new appliances without any labels. To this end, we introduce a new deep learning model that takes input from two home sensors: 1) a smart electricity meter that outputs the total energy consumed by the home as a function of time, and 2) a motion sensor that outputs the locations of the residents over time. The model learns the distribution of the residents' locations conditioned on the home energy signal. We show that this cross-modal prediction task allows us to detect when a particular appliance is used, and the location of the appliance in the home, all in a self-supervised manner, without any labeled data.
accept-poster
The authors proposed a multi-modal unsupervised algorithm to uncover the electricity usage of different appliances in a home. The detection of appliances was done by using both combined electricity consumption data and user location data from sensors. The unit of detection was set to be a 25-second window centered around any electricity usage spike. The authors used an encoder/decoder setup to model two different factors of usage: the type of appliance and the variety within the same appliance. This part of the model was trained by predicting actual consumption. Then only the type of appliance was used to predict the location of people in the house, which was also factored into appliance-related and unrelated factors. Locations are represented as images to avoid complicated modeling of multiple people. The reviewers were satisfied after the discussion with the authors, and therefore believe this work is of general interest to the ICLR community.
train
[ "BJgTpNPjjS", "ByxVf3fisS", "BJebqjMjor", "SyeDTqGjiS", "BkxK89GssB", "r1eYGqGijH", "HyxczJKYtH", "SygwxOtM5H", "BygE9mFVqB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers,\n\nThank you again for the thoughtful comments. We made minor changes to the paper (colored in blue) to address some of the issues. We clarified EL-kmeans and the sentence explaining our clustering algorithm. We also improved the figure caption, strengthened our motivation, and added clearer pointe...
[ -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, 1, 1, 5 ]
[ "iclr_2020_B1lJzyStvS", "BygE9mFVqB", "HyxczJKYtH", "SygwxOtM5H", "SygwxOtM5H", "SygwxOtM5H", "iclr_2020_B1lJzyStvS", "iclr_2020_B1lJzyStvS", "iclr_2020_B1lJzyStvS" ]
iclr_2020_HyeJf1HKvS
Deep Graph Matching Consensus
This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.
accept-poster
The paper proposed an end-to-end network architecture for graph matching problems, where first a GNN is applied to compute the initial soft correspondence, and then a message passing network is applied to attempt to resolve structural mismatch. The reviewers agree that the second component (message passing) is novel, and after the rebuttal period, additional experiments were provided by the authors to demonstrate the effectiveness of this. Overall this is an interesting network solution for graph-matching, and would be a worthwhile addition to the literature.
train
[ "Bkg1F6tZ5S", "HkgactQu9B", "S1gWx_K8jr", "SkgnRR_UjB", "Hye_QDtLsH", "SkeVqBYLir", "HJeaC7YUjH", "SJebbWtIjH", "Syg85NKpYB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a two-stage GNN-based architecture to establish correspondences between two graphs. The first step is to learn node embeddings using a GNN to obtain soft node correspondences between two graphs. The second step is to iteratively refine them using the constraints of matching consensus in local n...
[ 6, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_HyeJf1HKvS", "iclr_2020_HyeJf1HKvS", "HkgactQu9B", "Bkg1F6tZ5S", "HkgactQu9B", "Syg85NKpYB", "Bkg1F6tZ5S", "Bkg1F6tZ5S", "iclr_2020_HyeJf1HKvS" ]
iclr_2020_rkllGyBFPH
Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks
Recent theoretical work has established connections between over-parametrized neural networks and linearized models governed by the Neural Tangent Kernels (NTKs). NTK theory leads to concrete convergence and generalization results, yet the empirical performance of neural networks is observed to exceed that of their linearized models, suggesting insufficiency of this theory. Towards closing this gap, we investigate the training of over-parametrized neural networks that are beyond the NTK regime yet still governed by the Taylor expansion of the network. We bring forward the idea of randomizing the neural networks, which allows them to escape their NTK and couple with quadratic models. We show that the optimization landscape of randomized two-layer networks is nice and amenable to escaping-saddle algorithms. We prove concrete generalization and expressivity results on these randomized networks, which lead to sample complexity bounds (of learning certain simple functions) that match the NTK and can in addition be better by a dimension factor when mild distributional assumptions are present. We demonstrate that our randomization technique can be generalized systematically beyond the quadratic case, by using it to find networks that are coupled with higher-order terms in their Taylor series.
accept-poster
This paper studies the training of over-parameterized two-layer neural networks by considering a higher-order Taylor approximation, and randomizing the network to remove the first-order term in the network's Taylor expansion. This enables the neural network training to go beyond the recently so-called neural tangent kernel (NTK) regime. The authors also established results on the optimization landscape, generalization error, and expressive power under the proposed analysis framework. They showed that when learning polynomials, the proposed randomized networks with quadratic Taylor approximation outperform standard NTK by a factor of the input dimension. This is a very nice work, and provides a new perspective on NTK and beyond. All reviewers are in support of accepting this paper.
train
[ "B1eYDESfcH", "S1gHvMjMjS", "rJl42lFfor", "B1eV8eFMoH", "SylJRJYMsB", "HJxsMltzoB", "Hylx2510tH", "SyePj4XCtH" ]
[ "official_reviewer", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents an approach for going beyond NTK regime, namely the linear Taylor approximation for network function. By employing the idea of randomization this paper manages to reduce the magnitude of the linear part while the quadratic part is unaffected. This technique enables further analysis of both the ...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rkllGyBFPH", "iclr_2020_rkllGyBFPH", "iclr_2020_rkllGyBFPH", "Hylx2510tH", "B1eYDESfcH", "SyePj4XCtH", "iclr_2020_rkllGyBFPH", "iclr_2020_rkllGyBFPH" ]
iclr_2020_SJlbGJrtDB
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. These thresholds can have fine-grained layer-wise adjustments dynamically via backpropagation. We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same training epochs as dense models. Dynamic Sparse Training achieves prior art performance compared with other sparse training algorithms on various network architectures. Additionally, we have several surprising observations that provide strong evidence to the effectiveness and efficiency of our algorithm. These observations reveal the underlying problems of traditional three-stage pruning algorithms and present the potential guidance provided by our algorithm to the design of more compact network architectures.
accept-poster
The paper lies on the borderline. Acceptance is suggested based on the majority of the reviews and the authors' response.
train
[ "Bkl9Cpy0Yr", "ryeb52WxiB", "B1xLNtN-sB", "rkxycJcqsS", "ByxiKSw-or", "SJg4p8keoS", "rylmPotfjB", "BJgGbhJ-iB", "BkxVwrQM5S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Update after the rebuttal\nI appreciate the author's clarification in the rebuttal and the additional result on ImageNet, which addressed some of my concerns.\n\n# Summary\nThis paper proposes a trainable mask layer in neural networks for compressing neural networks end-to-end. The main idea is to apply a diffe...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_SJlbGJrtDB", "Bkl9Cpy0Yr", "BJgGbhJ-iB", "iclr_2020_SJlbGJrtDB", "Bkl9Cpy0Yr", "Bkl9Cpy0Yr", "BkxVwrQM5S", "iclr_2020_SJlbGJrtDB", "iclr_2020_SJlbGJrtDB" ]
iclr_2020_rJgzzJHtDB
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019). Such a dilemma is shown to be rooted in the inherently higher sample complexity (Schmidt et al., 2018) and/or model capacity (Nakkiran, 2019), for learning a high-accuracy and robust classifier. In view of that, given a classification task, growing the model capacity appears to help draw a win-win between accuracy and robustness, yet at the expense of model size and latency, therefore posing challenges for resource-constrained applications. Is it possible to co-design model accuracy, robustness and efficiency to achieve their triple wins? This paper studies multi-exit networks associated with input-adaptive efficient inference, showing their strong promise in achieving a “sweet point” in co-optimizing model accuracy, robustness, and efficiency. Our proposed solution, dubbed Robust Dynamic Inference Networks (RDI-Nets), allows for each input (either clean or adversarial) to adaptively choose one of the multiple output layers (early branches or the final one) to output its prediction. That multi-loss adaptivity adds new variations and flexibility to adversarial attacks and defenses, on which we present a systematic investigation. We show experimentally that by equipping existing backbones with such robust adaptive inference, the resulting RDI-Nets can achieve better accuracy and robustness, yet with over 30% computational savings, compared to the defended original models.
accept-poster
The authors develop a novel technique to train networks to be robust and accurate while still being efficient to train and evaluate. The authors propose "Robust Dynamic Inference Networks", which allow inputs to be adaptively routed to one of several output channels and thereby adjust the inference time used for any given input. The line of investigation initiated by the authors is very interesting and should open up a new set of research questions in the adversarial training literature. The reviewers were in consensus on the quality of the paper and voted in favor of acceptance. One of the reviewers had concerns about the evaluation in the paper, in particular about whether carefully crafted attacks could break the networks studied by the authors. However, the authors performed additional experiments and revised the paper to address this concern to the satisfaction of the reviewer. Overall, the paper contains interesting contributions and should be accepted.
test
[ "BylE8C1osH", "Skgy0H6woH", "Bke0kZ6vjB", "H1lCVUpwir", "HyxN_BoaFr", "ryeFuGK1qH", "HJgVmFG6qH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read the responses and I will consider the revised manuscript. I appreciate the reporting of the new results.", "\nWe sincerely appreciate your positive opinion and insightful suggestions about our work. \n\nRegarding the role of \"increasing capacity\", we draw the conclusions from the theoretical analys...
[ -1, -1, -1, -1, 8, 8, 3 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "Bke0kZ6vjB", "ryeFuGK1qH", "HJgVmFG6qH", "HyxN_BoaFr", "iclr_2020_rJgzzJHtDB", "iclr_2020_rJgzzJHtDB", "iclr_2020_rJgzzJHtDB" ]
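The input-adaptive inference idea behind RDI-Nets can be sketched as a confidence-thresholded walk over exits: stop at the first branch that is confident enough, otherwise fall through to the final output. The two-class logits and the 0.9 threshold below are illustrative assumptions, not values from the paper:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_infer(branch_logits, confidence=0.9):
    """Walk the exits in order; return (exit_index, prediction) at the first
    branch whose max softmax probability clears the threshold.
    The last branch always answers."""
    for i, logits in enumerate(branch_logits):
        probs = softmax(logits)
        if max(probs) >= confidence or i == len(branch_logits) - 1:
            return i, probs.index(max(probs))
```

An "easy" input exits early (saving computation), while a "hard" one traverses deeper branches.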
iclr_2020_BJgQfkSYDS
Neural Policy Gradient Methods: Global Optimality and Rates of Convergence
Policy gradient methods with actor-critic schemes demonstrate tremendous empirical successes, especially when the actors and critics are parameterized by neural networks. However, it remains less clear whether such "neural" policy gradient methods converge to globally optimal policies and whether they even converge at all. We answer both the questions affirmatively in the overparameterized regime. In detail, we prove that neural natural policy gradient converges to a globally optimal policy at a sublinear rate. Also, we show that neural vanilla policy gradient converges sublinearly to a stationary point. Meanwhile, by relating the suboptimality of the stationary points to the~representation power of neural actor and critic classes, we prove the global optimality of all stationary points under mild regularity conditions. Particularly, we show that a key to the global optimality and convergence is the "compatibility" between the actor and critic, which is ensured by sharing neural architectures and random initializations across the actor and critic. To the best of our knowledge, our analysis establishes the first global optimality and convergence guarantees for neural policy gradient methods.
accept-poster
The paper makes a solid contribution to understanding the convergence properties of policy gradient methods with over-parameterized neural network function approximators. This work is concurrent with and not subsumed by other strong work by Agarwal et al. on the same topic. There is sufficient novelty in this contribution to merit acceptance. The authors should nevertheless clarify the relationship between their work and the related work noted by AnonReviewer2, in addition to addressing the other comments of the reviewers.
train
[ "r1x5sJ_uiS", "HJen2hwdor", "SyepUav_oB", "SklqKJuOjS", "Skexj6DusB", "ByerOV_aFH", "ByxGBAtpKS", "HygSQAMAYr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate your review of our work. We have addressed the issues raised by the other reviewers and revised our work accordingly.", "We appreciate the valuable review and suggestions. We have revised our work accordingly. \n\nFirst, we would like to point out that the reviewer seems to have made a mistake by s...
[ -1, -1, -1, -1, -1, 8, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "ByerOV_aFH", "HygSQAMAYr", "HJen2hwdor", "ByxGBAtpKS", "SyepUav_oB", "iclr_2020_BJgQfkSYDS", "iclr_2020_BJgQfkSYDS", "iclr_2020_BJgQfkSYDS" ]
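As a toy counterpart to the vanilla policy gradient the paper analyzes, a tabular softmax REINFORCE update on a two-armed Bernoulli bandit (our own illustrative setup, not the paper's neural parameterization) looks like this; the run is stochastic, so only a loose concentration on the better arm is claimed:

```python
import math
import random

def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    total = sum(exps)
    return [e / total for e in exps]

def train(rewards=(0.2, 0.8), steps=5000, lr=0.1, seed=0):
    """REINFORCE with a softmax policy on a 2-armed Bernoulli bandit."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for _ in range(steps):
        probs = softmax(theta)
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < rewards[action] else 0.0
        # grad of log pi(action) w.r.t. theta_k is 1[k == action] - probs[k]
        for k in range(2):
            theta[k] += lr * reward * ((1.0 if k == action else 0.0) - probs[k])
    return softmax(theta)
```

The policy's probability mass drifts toward the higher-reward arm, the bandit analogue of convergence to a (here global) optimum.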
iclr_2020_ByedzkrKvH
Double Neural Counterfactual Regret Minimization
Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG). However, the original CFR algorithm only works for discrete states and action spaces, and the resulting strategy is maintained as a tabular representation. Such a tabular representation limits the method from being directly applied to large games. In this paper, we propose a double neural representation for the IIGs, where one neural network represents the cumulative regret, and the other represents the average strategy. Such neural representations allow us to avoid manual game abstraction and carry out end-to-end optimization. To make the learning efficient, we also developed several novel techniques including a robust sampling method and a mini-batch Monte Carlo Counterfactual Regret Minimization (MCCFR) method, which may be of independent interest. Empirically, on games tractable to tabular approaches, neural strategies trained with our algorithm converge comparably to their tabular counterparts, and significantly outperform those based on deep reinforcement learning. On extremely large games with billions of decision nodes, our approach achieved strong performance while using hundreds of times less memory than the tabular CFR. On head-to-head matches of heads-up no-limit Texas Hold'em, our neural agent beat the strong agent ABS-CFR by 9.8±4.1 chips per game. This is a successful application of neural CFR in large games.
accept-poster
Double counterfactual regret minimization is an extension of neural counterfactual regret minimization that uses separate policy and regret networks (reminiscent of similar extensions of the basic RL formula in reinforcement learning). Several new algorithmic modifications are added to improve the performance. The reviewers agree that this paper is novel, sound, and interesting. One of the reviewers had a set of questions that the authors responded to, seemingly satisfactorily. Given that this seems to be a high-quality paper with no obvious issues, it should be accepted.
train
[ "HkgqM83ojH", "Hkld3H2jir", "HJx2eN_6Kr", "ryg_h5Lk9r" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for your recognition of our work. \n\nExactly, we have made a lot of efforts to evaluate our method on the large-scale game (heads-up no-limit Texas Hold’em) in the past year. In the future, we plan to reproduce the latest works, such as ED, Deep CFR, Single Deep CFR, and compare them on HUNL, although it's...
[ -1, -1, 6, 8 ]
[ -1, -1, 3, 5 ]
[ "ryg_h5Lk9r", "HJx2eN_6Kr", "iclr_2020_ByedzkrKvH", "iclr_2020_ByedzkrKvH" ]
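The regret-matching rule underlying CFR can be demonstrated on rock–paper–scissors, where self-play average strategies approach the uniform Nash equilibrium. This tabular sketch omits the paper's neural representation and sampling schemes; the small initial-regret asymmetry is an arbitrary choice to kick the dynamics off:

```python
# Rock-paper-scissors payoffs for the row player.
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def regret_matching(cum_regret):
    """Mix actions in proportion to positive cumulative regret."""
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    n = len(cum_regret)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def action_utils(player, opp):
    """Expected utility of each action against the opponent's mixed strategy."""
    if player == 0:
        return [sum(PAYOFF[a][b] * opp[b] for b in range(3)) for a in range(3)]
    return [sum(-PAYOFF[b][a] * opp[b] for b in range(3)) for a in range(3)]

def selfplay(iters=50000):
    regret = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # slight asymmetry to start
    strat_sum = [[0.0] * 3, [0.0] * 3]
    for _ in range(iters):
        strats = [regret_matching(regret[p]) for p in (0, 1)]
        for p in (0, 1):
            utils = action_utils(p, strats[1 - p])
            ev = sum(s * u for s, u in zip(strats[p], utils))
            for a in range(3):
                regret[p][a] += utils[a] - ev
                strat_sum[p][a] += strats[p][a]
    return [[s / iters for s in strat_sum[p]] for p in (0, 1)]
```

The current strategies cycle, but the time-averaged strategies (what CFR actually outputs) converge toward (1/3, 1/3, 1/3).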
iclr_2020_S1esMkHYPr
GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation
Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention. The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime. Inspired by the recent progress in deep generative models, in this paper we propose a flow-based autoregressive model for graph generation called GraphAF. GraphAF combines the advantages of both autoregressive and flow-based approaches and enjoys: (1) high model flexibility for data density estimation; (2) efficient parallel computation for training; (3) an iterative sampling process, which allows leveraging chemical domain knowledge for valency checking. Experimental results show that GraphAF is able to generate 68\% chemically valid molecules even without chemical knowledge rules and 100\% valid molecules with chemical rules. The training process of GraphAF is two times faster than the existing state-of-the-art approach GCPN. After fine-tuning the model for goal-directed property optimization with reinforcement learning, GraphAF achieves state-of-the-art performance on both chemical property optimization and constrained property optimization.
accept-poster
All reviewers agreed that this paper is essentially a combination of existing ideas, making it a bit incremental, but is well-executed and a good contribution. Specifically, to quote R1: "This paper proposes a generative model architecture for molecular graph generation based on autoregressive flows. The main contribution of this paper is to combine existing techniques (auto-regressive BFS-ordered generation of graphs, normalizing flows, dequantization by Gaussian noise, fine-tuning based on reinforcement learning for molecular property optimization, and validity constrained sampling). Most of these techniques are well-established either for data generation with normalizing flows or for molecular graph generation and the novelty lies in the combination of these building blocks into a framework. ... Overall, the paper is very well written, nicely structured and addresses an important problem. The framework in its entirety is novel, but the building blocks of the proposed framework are established in prior work and the idea of using normalizing flows for graph generation has been proposed in earlier work. Nonetheless, I find the paper relevant for an ICLR audience and the quality of execution and presentation of the paper is good."
train
[ "ByxZqYuWcr", "r1gWIWfssH", "H1gP43WoiB", "ryxiyLlF_S", "rJgzzsWjjH", "SkxA45WosH", "SJg3GOZssr", "SyxTYv-iiS", "H1gY_VP_iB", "H1e1KNvYsS", "H1gdnWwFsr", "HklSxJ_diS", "BkxrWaDOor", "ByeVKuvusr", "Bygv3OP_ir", "rklLSrw_iB", "rJlKI8w_sS", "rJediPDdsS", "r1e4JLwdsS", "S1gjCzZAFr"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ "# Post Rebuttal\n\nThe authors have partially and satisfactorily addressed my concerns. In line of this I am raising my score to Weak Accept.\n\nThis paper proposes a new molecular graph generative model (GraphAF) which fuses the best of two worlds of generative networks - reversible flow and autoregressive mode. ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_S1esMkHYPr", "HklSxJ_diS", "rJgzzsWjjH", "iclr_2020_S1esMkHYPr", "SJg3GOZssr", "SyxTYv-iiS", "H1e1KNvYsS", "H1gdnWwFsr", "iclr_2020_S1esMkHYPr", "ByeVKuvusr", "Bygv3OP_ir", "BkxrWaDOor", "rJediPDdsS", "ryxiyLlF_S", "ryxiyLlF_S", "ByxZqYuWcr", "ByxZqYuWcr", "S1gjCzZAFr", ...
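The flow mechanics the abstract refers to, parallel density evaluation during training but sequential, invertible sampling, can be seen in a toy autoregressive affine flow. The conditioner below is a hand-fixed stand-in for a neural network:

```python
import math

def shift_and_scale(prefix):
    """Toy autoregressive conditioner: stands in for a neural network.
    Returns (mu, log_sigma) for the next dimension given the prefix."""
    mu = 0.5 * sum(prefix)
    log_sigma = 0.1 * len(prefix)
    return mu, log_sigma

def forward(x):
    """x -> z; each z_i depends only on x_<i, so training parallelizes."""
    z = []
    for i, xi in enumerate(x):
        mu, ls = shift_and_scale(x[:i])
        z.append((xi - mu) * math.exp(-ls))
    return z

def inverse(z):
    """z -> x; sequential, mirroring autoregressive (iterative) sampling."""
    x = []
    for i, zi in enumerate(z):
        mu, ls = shift_and_scale(x[:i])
        x.append(zi * math.exp(ls) + mu)
    return x
```

The exact invertibility of the affine transform is what gives flow models tractable likelihoods, and the sequential inverse is the hook that lets GraphAF apply valency checks during generation.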
iclr_2020_HyxnMyBKwB
The Gambler's Problem and Beyond
We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton and Barto (2018), where they mention an interesting pattern of the optimal value function with high-frequency components and repeating non-smooth points, but leave it without further investigation. We provide the exact formula for the optimal value function for both the discrete and the continuous cases. Simple as it might seem, the value function is pathological: fractal, self-similar, with derivative taking either zero or infinity, not smooth on any interval, and not expressible as elementary functions. It is in fact one of the generalized Cantor functions, whose complexity has been uncharted thus far. Our analyses could lead to insights into improving value function approximation, gradient-based algorithms, and Q-learning, in real applications and implementations.
accept-poster
This paper studies the optimal value function for the gambler's problem, and presents some interesting characterizations thereof. The paper is well written and should be accepted.
test
[ "ryeJ7a_TYB", "HJgFDrRioB", "BJlURV3jjB", "S1xIYUHRKH", "SkeF8NnojH", "BylyIg-Kir", "rkepo8xujr", "HygGTf3miS", "H1xcUBhZjH", "BkxXYmnbsS", "HJl_tG2WiB", "rkecKYVccH" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper revisits the Gambler's problem. It studies a generalized formulation with continuous state and action space and shows that the optimal value function is self-similar, fractal and non-rectifiable. That is, it cannot be described by any simple analytic formula. Based on this, it also deeply analysis the di...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HyxnMyBKwB", "SkeF8NnojH", "HygGTf3miS", "iclr_2020_HyxnMyBKwB", "H1xcUBhZjH", "rkepo8xujr", "BkxXYmnbsS", "S1xIYUHRKH", "S1xIYUHRKH", "ryeJ7a_TYB", "rkecKYVccH", "iclr_2020_HyxnMyBKwB" ]
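The discrete Gambler's problem from Sutton and Barto can be reproduced with plain value iteration (this is the standard setup, not the paper's exact-formula derivation). For a head probability of 0.4, bold play is optimal, so the optimal values at capitals 50 and 25 are exactly 0.4 and 0.4² = 0.16; the fractal fine structure the paper studies appears at the remaining states:

```python
def gambler_values(goal=100, p_head=0.4, sweeps=100):
    """In-place value iteration for the Gambler's problem.
    V[s] is the probability of reaching `goal` from capital s under
    the optimal policy; stakes range over 1..min(s, goal - s)."""
    V = [0.0] * (goal + 1)
    V[goal] = 1.0
    for _ in range(sweeps):
        for s in range(1, goal):
            V[s] = max(
                p_head * V[s + a] + (1.0 - p_head) * V[s - a]
                for a in range(1, min(s, goal - s) + 1)
            )
    return V
```

Plotting the returned values reveals the jagged, self-similar shape that motivates the paper's analysis.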
iclr_2020_r1xCMyBtPS
Multilingual Alignment of Contextual Word Representations
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkably matching pseudo-fully-supervised translate-train models for Bulgarian and Greek. Further, to measure the degree of alignment, we introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer. Using this word retrieval task, we also analyze BERT and find that it exhibits systematic deficiencies, e.g. worse alignment for open-class parts-of-speech and word pairs written in different scripts, that are corrected by the alignment procedure. These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
accept-poster
This paper proposes a method to improve alignments of a multilingual contextual embedding model (e.g., multilingual BERT) using parallel corpora as an anchor. The authors show the benefit of their approach in a zero-shot XNLI experiment and present a word retrieval analysis to better understand multilingual BERT. All reviewers agree that this is an interesting paper with valuable contributions. The authors and reviewers have been engaged in a thorough discussion during the rebuttal period and the revised paper has addressed most of the reviewers concerns. I think this paper would be a good addition to ICLR so I recommend accepting this paper.
test
[ "rylhOp_sjH", "ryxX7adssB", "BJlv3F9nYH", "S1gW5lfuoH", "rJe8h1ePjH", "ByghNKkSiB", "rye0GYyHsH", "SklXRuyrjr", "S1x3sOyHiS", "BJgZ3a56KH", "Hkeb5VNZ5B" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks again for the feedback. As an update, we have run our method on more distant languages (Chinese, Arabic, and Urdu), where we align a single BERT model on all eight languages (the five Europarl languages and the three additional ones) using 20K sentences per language. We see similar gains for the new languag...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "S1gW5lfuoH", "rJe8h1ePjH", "iclr_2020_r1xCMyBtPS", "ByghNKkSiB", "SklXRuyrjr", "BJlv3F9nYH", "BJlv3F9nYH", "BJgZ3a56KH", "Hkeb5VNZ5B", "iclr_2020_r1xCMyBtPS", "iclr_2020_r1xCMyBtPS" ]
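The contextual word retrieval task reduces to nearest-neighbor search under a similarity measure such as cosine. The 3-dimensional "embeddings" below are made up purely for illustration:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_word(query_vec, candidates):
    """Contextual word retrieval: return the candidate whose vector is
    closest to the query under cosine similarity."""
    return max(candidates, key=lambda w: cosine(query_vec, candidates[w]))

# Made-up 3-d 'contextual embeddings' for two German candidates.
GERMAN = {"Hund": [0.9, 0.3, 0.1], "Katze": [0.0, 1.0, 0.5]}
```

Better cross-lingual alignment means the nearest candidate vector more often corresponds to the correct translation in context, which is why this retrieval accuracy tracks zero-shot transfer.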
iclr_2020_rygGQyrFvH
The Curious Case of Neural Text Degeneration
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.
accept-poster
This paper presents nucleus sampling, a sampling method that truncates the tail of a probability distribution and samples from a dynamic nucleus containing the majority of the probability mass. Likelihood and human evaluations show that the proposed method is a better alternative to a standard sampling method and top-k sampling. This is a well-written paper and I think the proposed sampling method will be useful in language modeling. All reviewers agree that the paper addresses an important problem. Two reviewers have concerns regarding the technical contribution of the paper (i.e., nucleus sampling is a straightforward extension of top-k sampling), and whether it is enough for publication at a venue such as ICLR. R2 suggests having a better theoretical framework for nucleus sampling. I think these are valid concerns. However, given the potential widespread application of the proposed method and the strong empirical results, I recommend accepting the paper. Also, a minor comment, I think there is something wrong with your style file (e.g., the bottom margin appears too large compared to other submissions).
train
[ "rkelnCk3sH", "SJx4wSfjor", "Skxc3_RpFB", "BygK6is5jH", "BJeDcioqjH", "B1ldSoo5jS", "SygdmsocjB", "Hyxdbsj9jB", "rJeVL4b6KS", "S1lugevycS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your quick reply and engagement with our response.\n\nWe appreciate your acknowledgement of the empirical analyses performed: Our perspective is that such analyses are vital to understanding the current landscape of generation, and that the analysis of methods, metrics, and models are key to studying...
[ -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "SJx4wSfjor", "SygdmsocjB", "iclr_2020_rygGQyrFvH", "BJeDcioqjH", "S1lugevycS", "rJeVL4b6KS", "Hyxdbsj9jB", "Skxc3_RpFB", "iclr_2020_rygGQyrFvH", "iclr_2020_rygGQyrFvH" ]
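Nucleus (top-p) sampling itself is simple to state: keep the smallest set of most-probable tokens whose cumulative mass reaches p, renormalize over that set, and sample. A self-contained sketch (our own implementation, not the authors' code):

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Sample an index from the smallest set of highest-probability
    tokens whose cumulative mass reaches p, after renormalization."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:
            break
    r = rng.random() * mass  # sampling within the kept mass renormalizes
    acc = 0.0
    for i in nucleus:
        acc += probs[i]
        if r <= acc:
            return i
    return nucleus[-1]
```

Unlike top-k, the truncation point adapts to the distribution's shape: a peaked distribution yields a tiny nucleus (near-greedy), a flat one a large nucleus (near-full sampling).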
iclr_2020_HkxdQkSYDB
Graph Convolutional Reinforcement Learning
Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, where agents keep moving and their neighbors change quickly. This makes it hard to learn abstract representations of mutual interplay between agents. To tackle these difficulties, we propose graph convolutional reinforcement learning, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents by their relation representations. Latent features produced by convolutional layers from gradually increased receptive fields are exploited to learn cooperation, and cooperation is further improved by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios.
accept-poster
The work proposes a graph convolutional network based approach to multi-agent reinforcement learning. This approach is designed to be able to adaptively capture changing interactions between agents. Initial reviews highlighted several limitations but these were largely addressed by the authors. The resulting paper makes a valuable contribution by proposing a well-motivated approach, and by conducting extensive empirical validation and analysis that result in novel insights. I encourage the authors to take on board any remaining reviewer suggestions as they prepare the camera ready version of the paper.
val
[ "ByxZYj7hjr", "BJeVi97hsH", "HJg1W8g3sS", "BklJkMghjH", "SJx0QUYOdB", "BJgLDYsciB", "BJld6mP9or", "H1xLgxw9ir", "HylRNbw9iS", "SJlH50IcoS", "rygwdyD9sS", "Byxv13sAFS", "SygTyGcOcr", "Skga4mX4cS", "HJg3DQ7gcH", "SygMD_i_uB", "SklOSsFduH", "r1gtJmtu_r", "BJxoQu_udS" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "public" ]
[ "\n\n>>> Clarifying ‘these two methods require the objects in the environment are explicitly labeled, which is infeasible in many real-world applications.’\n\nThese two methods use the entities in the environment as the nodes of the graph. So, they need explicitly know what the entities are and where they are in th...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 1, 3, -1, -1, -1, -1, -1, -1 ]
[ "BklJkMghjH", "HJg1W8g3sS", "rygwdyD9sS", "H1xLgxw9ir", "iclr_2020_HkxdQkSYDB", "BJld6mP9or", "SJx0QUYOdB", "SygTyGcOcr", "Byxv13sAFS", "iclr_2020_HkxdQkSYDB", "SygTyGcOcr", "iclr_2020_HkxdQkSYDB", "iclr_2020_HkxdQkSYDB", "HJg3DQ7gcH", "iclr_2020_HkxdQkSYDB", "SklOSsFduH", "BJxoQu_ud...
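The relation-kernel aggregation can be sketched as one dot-product attention head over an agent's current neighbor set, which may change every step. The feature vectors are illustrative and the learned query/key/value projections of the paper are omitted:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def relation_aggregate(agent_feat, neighbor_feats):
    """One attention head: score self + neighbors by dot product with the
    agent's own features, then return the attention-weighted feature sum."""
    feats = [agent_feat] + neighbor_feats
    scores = [sum(a * b for a, b in zip(agent_feat, f)) for f in feats]
    weights = softmax(scores)
    dim = len(agent_feat)
    return [sum(w * f[d] for w, f in zip(weights, feats)) for d in range(dim)]
```

Because the neighbor list is rebuilt at each step, the same kernel adapts to the changing interaction graph, which is the point of applying graph convolution in a dynamic multi-agent environment.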
iclr_2020_SyljQyBFDH
Meta-Learning Deep Energy-Based Memory Models
We study the problem of learning an associative memory model -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version. Attractor networks provide a sound model of associative memory: patterns are stored as attractors of the network dynamics and associative retrieval is performed by running the dynamics starting from a query pattern until it converges to an attractor. In such models the dynamics are often implemented as an optimization procedure that minimizes an energy function, such as in the classical Hopfield network. In general it is difficult to derive a writing rule for a given dynamics and energy that is both compressive and fast. Thus, most research in energy-based memory has been limited either to tractable energy models not expressive enough to handle complex high-dimensional objects such as natural images, or to models that do not offer fast writing. We present a novel meta-learning approach to energy-based memory models (EBMM) that allows one to use an arbitrary neural architecture as an energy model and quickly store patterns in its weights. We demonstrate experimentally that our EBMM approach can build compressed memories for synthetic and natural data, and is capable of associative retrieval that outperforms existing memory systems in terms of the reconstruction error and compression rate.
accept-poster
Four knowledgeable reviewers recommend accept. Good job!
train
[ "HyekxA0EcS", "B1e_L5zpKH", "BJlwh7Xnsr", "Sye2RlX3sH", "SylSrLlojr", "rJgjRC1jor", "SyeMLFkijB", "HJeE4YJijS", "ryxTh1ncjr", "SJgY4RocoH", "BkeZXMcccB", "Hye1YmQTqr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the extensive answers. I updated my rating based on the provided clarification and extra experiments.\n\n=====================\nMy understanding of eq 1-5 is that the algorithm finds an energy landscape (by modifying \\theta) for each dataset (task) such that in this landscape, the inputs from the distr...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_SyljQyBFDH", "iclr_2020_SyljQyBFDH", "iclr_2020_SyljQyBFDH", "BkeZXMcccB", "BkeZXMcccB", "B1e_L5zpKH", "HyekxA0EcS", "HyekxA0EcS", "Hye1YmQTqr", "iclr_2020_SyljQyBFDH", "iclr_2020_SyljQyBFDH", "iclr_2020_SyljQyBFDH" ]
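Energy-based associative retrieval, running gradient descent on an energy from a corrupted query until it falls into an attractor, can be shown with a hand-built soft-min energy. This energy (and constants tau, lr) is an illustrative choice; in the paper the energy is an arbitrary neural network whose weights are written by meta-learning:

```python
import math

def energy_grad(x, patterns, tau=0.1):
    """Gradient of a soft-min energy whose local minima sit near the
    stored patterns (an illustrative energy, not the paper's learned one)."""
    d2 = [sum((xi - mi) ** 2 for xi, mi in zip(x, m)) for m in patterns]
    base = min(d2)
    w = [math.exp(-(d - base) / (2 * tau)) for d in d2]
    total = sum(w)
    w = [v / total for v in w]
    return [sum(wk * (x[j] - m[j]) for wk, m in zip(w, patterns)) / tau
            for j in range(len(x))]

def retrieve_pattern(query, patterns, steps=200, lr=0.05, tau=0.1):
    """Associative retrieval: descend the energy from a corrupted query."""
    x = list(query)
    for _ in range(steps):
        g = energy_grad(x, patterns, tau)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

Starting from a distorted version of a stored pattern, the dynamics converge to the nearby attractor, i.e., the remembered pattern.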
iclr_2020_rkl3m1BFDB
Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning
Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.
accept-poster
This was a contentious paper, with quite a large variance in the ratings, and ultimately a lack of consensus. After reading the paper myself, I found it to be a valuable synthesis of common usage of saliency maps and a critique of their improper interpretation. Further, the demonstration of more rigorous methods of evaluating agents based on salience maps using case studies is quite illustrative and compelling. I think we as a field can agree that we’d like to gain a better understanding of our deep RL models. This is not possible if we don’t have a good understanding of the analysis tools we’re using. R2 rightly pointed out a need for quantitative justification for their results, in the form of statistical tests, which the authors were able to provide, leading the reviewer to revise their score to the highest value of 8. I thank them for instigating the discussion. R1 continues to feel that the lack of a methodological contribution (in the form of improving learning within an agent) is a weakness. However, I don’t believe that all papers at deep learning conferences have to have the goal of empirically “learning better” on some benchmark task or dataset, and that there’s room at ICLR for more analysis papers. Indeed, it’d be nice to see more papers like this. For this reason, I’m inclined to recommend accept for this paper. However, this paper does have weaknesses, in that the framework proposed could be made more rigorous and formal. Currently it seems rather ad hoc and on a task-by-task basis (i.e., we need to have access to game states or define them ourselves for the task). It’s also disappointing that it doesn’t work for recurrent agents, which limits its applicability for analyzing current SOTA deep RL agents. I wonder if the authors can comment on possible extensions that would allow for this.
train
[ "r1xud4tiiS", "B1gg4VDstB", "Byl8RtUooB", "B1lXpW7ijH", "rygokYDpYr", "SyeeHuL9iH", "S1g5luL9jB", "SJlpnv8coB", "HygC_PIcsS", "r1lRNvb9sr", "B1l6TLZCKB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We appreciate the continued discussion with Reviewer 1 and the revised score. Below we address the additional questions raised in Reviewer 1’s latest comments: \n\nGoal of the paper — Reviewer 1 asks how the causal graphical model “improves learning the agent”? Our proposed method is intended to improve *explana...
[ -1, 8, -1, -1, 3, -1, -1, -1, -1, -1, 1 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "B1lXpW7ijH", "iclr_2020_rkl3m1BFDB", "SJlpnv8coB", "HygC_PIcsS", "iclr_2020_rkl3m1BFDB", "iclr_2020_rkl3m1BFDB", "B1l6TLZCKB", "B1gg4VDstB", "rygokYDpYr", "B1gg4VDstB", "iclr_2020_rkl3m1BFDB" ]
iclr_2020_rklTmyBKPH
Fast Neural Network Adaptation via Parameter Remapping and Architecture Search
Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art~(SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge though, is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network (e.g. a high performing manually designed backbone) to become a network with different depth, width, or kernels via a Parameter Remapping technique, making it possible to utilize NAS for detection/segmentation tasks a lot more efficiently. In our experiments, we conduct FNA on MobileNetV2 to obtain new networks for both segmentation and detection that clearly out-perform existing networks designed both manually and by NAS. The total computation cost of FNA is significantly less than SOTA segmentation/detection NAS approaches: 1737× less than DPC, 6.8× less than Auto-DeepLab and 7.4× less than DetNAS. The code is available at https://github.com/JaminFong/FNA .
accept-poster
Main content: The paper proposes a Fast Neural Network Adaptation (FNA) method, which takes a pre-trained image classification network and produces a network for the task of object detection/semantic segmentation. Summary of discussion: reviewer1: interesting paper with good results, specifically without the need to do pre-training on ImageNet. Cons are the need for better comparisons to existing methods and runs on more datasets. reviewer2: interesting idea on adapting a source network via parameter re-mapping that offers good results in both performance and training time. reviewer3: novel method overall, though some concerns on the concrete parameter remapping scheme. Results are impressive. Recommendation: Interesting idea and good results. Paper could be improved with better comparison to existing techniques. Overall recommend weak accept.
val
[ "S1xI2WlfYH", "BkebcochsB", "B1xwBkpZsr", "rJggnOBrjS", "B1eIspq2oH", "S1xwrYnWsr", "SJxfMYF2iH", "BylZzChZjS", "H1eZlc3WoS", "r1x08lBvYB", "SkekZOLsKB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors take a MobileNet v2 trained for ImageNet classification, and adapt it either (i) semantic segmentation on Cityscapes, or (ii) object detection on COCO. They do this by first expanding the network into a \"supernet\" and copy weights in an ad-hoc manner, then, they perform DARTS-style ar...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_rklTmyBKPH", "S1xI2WlfYH", "BylZzChZjS", "S1xI2WlfYH", "iclr_2020_rklTmyBKPH", "SkekZOLsKB", "B1xwBkpZsr", "S1xI2WlfYH", "r1x08lBvYB", "iclr_2020_rklTmyBKPH", "iclr_2020_rklTmyBKPH" ]
iclr_2020_BJl07ySKvS
Guiding Program Synthesis by Learning to Generate Examples
A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.
accept-poster
The paper considers the problem of program induction from a small dataset of input-output pairs; the small amount of available data results in a large set of valid candidate programs. The authors propose to train a neural oracle by unsupervised learning on the given data and to synthesize new pairs to augment the given data, thereby reducing the set of admissible programs. This is reminiscent of data augmentation schemes, e.g., elastic transforms for image data. The reviewers appreciate the simplicity and effectiveness of this approach, as demonstrated on an Android UI dataset. The authors successfully addressed most negative points raised by the reviewers in the rebuttal, except the lack of experimental validation on other datasets. I recommend accepting this paper, based on the reviews and my own reading. I think the manuscript could be further improved by more explicitly discussing (early in the paper) the intuition for why the authors think this approach is sensible: the additional information needed to more successfully infer the correct program has to come from somewhere; as no new information is given by, e.g., a human oracle, it was injected by the choice of prior over neural oracles. It is essential that the paper discuss this.
train
[ "SJgjq4tsjS", "rJl9NnOjjS", "ByxLP-NioH", "Hklcgt8KiS", "Skgbh4Untr", "Bkl2z1Htir", "rJlaeTn_sr", "rJljp4f4jB", "Skg_xQGNjr", "B1eTI1z4oB", "SJlNo_lk5B", "SJegExwycB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I appreciate the authors for pointing out the difficulty of obtaining a suitable dataset for program synthesis. However, I am entirely not convinced by this list. First of all, while many papers here use a single dataset, most of the papers use a commonly acceptable dataset such as Karel (not really a fixed datase...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 3, 8 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "rJl9NnOjjS", "Hklcgt8KiS", "Skg_xQGNjr", "rJljp4f4jB", "iclr_2020_BJl07ySKvS", "B1eTI1z4oB", "iclr_2020_BJl07ySKvS", "SJlNo_lk5B", "SJegExwycB", "Skgbh4Untr", "iclr_2020_BJl07ySKvS", "iclr_2020_BJl07ySKvS" ]
iclr_2020_Sye0XkBKvS
SNODE: Spectral Discretization of Neural ODEs for System Identification
This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics. The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods. The resulting optimization scheme is fully time-parallel and results in a low memory footprint. Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function. The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase.
accept-poster
This work proposes using spectral element methods to speed up training of ODE networks for system identification. The authors utilize truncated series of Legendre polynomials to analyze the dynamics and then conduct experiments showing that their proposed scheme achieves an order-of-magnitude improvement in training speed compared to baseline methods. Reviewers raised some concerns (e.g., the empirical comparison against adjoint methods in the multi-agent example) or asked for clarifications (e.g., details of the time sampling of the data). The authors adequately addressed most of these concerns via rebuttal responses as well as by revising the initial submission. In the end, all reviewers recommended acceptance based on the contributions of this work to improving the training speed of ODE networks. R4 hopes that the additional concerns not yet reflected in the current revision will be addressed in the camera-ready version.
train
[ "HkxrIqo9iH", "S1lJdRTWiS", "rJgELaTbiH", "Bke4X3zmiH", "rJeDrA6-sB", "rJxNmApWsH", "BklwF3abjr", "BJg92wGiYS", "H1lLFOChKS", "SkelTPIi5B", "ryeSS-kjwr" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "We have uploaded the final revision, containing the amendments suggested by the Reviewers and the results of the additional experiments using the adjoint method for the multi-agent example. Further details concerning all of the points raised in each review can be found in the comments that we posted last week belo...
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 1, 1, -1 ]
[ "iclr_2020_Sye0XkBKvS", "SkelTPIi5B", "H1lLFOChKS", "iclr_2020_Sye0XkBKvS", "H1lLFOChKS", "SkelTPIi5B", "BJg92wGiYS", "iclr_2020_Sye0XkBKvS", "iclr_2020_Sye0XkBKvS", "iclr_2020_Sye0XkBKvS", "iclr_2020_Sye0XkBKvS" ]
iclr_2020_H1lxVyStPH
Generalized Convolutional Forest Networks for Domain Generalization and Visual Recognition
When constructing random forests, it is of prime importance to ensure high accuracy and low correlation of individual tree classifiers for good performance. Nevertheless, it is typically difficult for existing random forest methods to strike a good balance between these conflicting factors. In this work, we propose generalized convolutional forest networks to learn a feature space to maximize the strength of individual tree classifiers while minimizing the respective correlation. The feature space is iteratively constructed by a probabilistic triplet sampling method based on the distribution obtained from the splits of the random forest. The sampling process is designed to pull the data of the same label together for higher strength and push away the data frequently falling to the same leaf nodes. We perform extensive experiments on five image classification and two domain generalization datasets with ResNet-50 and DenseNet-161 backbone networks. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods.
accept-poster
The authors introduce an approach to learn a random forest model and a representation simultaneously. The basic idea is to modify the representation so that subsequent trees in the random forest are less correlated. The authors evaluate the technique empirically and show some modest gains. While the reviews were mixed, the approach is quite different from the usual approaches published at ICLR and so I think it's worth highlighting this work.
train
[ "SJl4w6jqoB", "H1x-FTj5or", "HklZmToqiB", "HJxQzU9stS", "HJlFJWh6FS", "HklgYgX-5S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer 3 for the helpful comments and respond to them below.\n\n1. Backbone networks\nWe compare the proposed algorithm with state-of-the-art methods using the same backbone networks, such as ResNet-18 (Table 3), AlexNet (Table 4), ResNet-50 (Table 5, 6, 8 and 9), ResNet-152 (Table 7) a...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 3, 5, 4 ]
[ "HJlFJWh6FS", "HJxQzU9stS", "HklgYgX-5S", "iclr_2020_H1lxVyStPH", "iclr_2020_H1lxVyStPH", "iclr_2020_H1lxVyStPH" ]
iclr_2020_HylxE1HKwS
Once-for-All: Train One Network and Specialize it for Efficient Deployment
We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (causing as much CO2 emission as 5 cars' lifetimes) and thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution). It can obtain a surprisingly large number of sub-networks (>10^19) that can fit different hardware platforms and latency constraints while maintaining the same level of accuracy as training independently. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or same accuracy but 1.5x faster than MobileNetV3, 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and CO2 emission by many orders of magnitude. In particular, OFA achieves a new SOTA 80.0% ImageNet top-1 accuracy under the mobile setting (<600M MACs). OFA is the winning solution for the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track, and the 4th LPCVC, both classification track and detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at https://github.com/mit-han-lab/once-for-all.
accept-poster
The authors propose a new method in the spirit of neural architecture search, except that model training is decoupled from architecture search, which is the main point of the paper. Once this network is trained, sub-networks can be distilled from it and used for specific tasks. The paper as submitted omitted certain details, but after reviewers pointed this out, the authors described them satisfactorily. The idea of the paper is original and interesting. The paper is correct and, after the authors' revisions, complete. In my view, this is sufficient for acceptance.
val
[ "Bygt_gUQYr", "S1g4HP_njB", "rkgITIzrjS", "SyefqEGrsr", "rJeuZIGHiB", "Skx8DSMHoH", "BJgb4Ezrjr", "r1l72hZWir", "BklpzCV0FB", "HygXKK8RKB", "ByxzZ83IYH", "BJe47svjOB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "public" ]
[ "In this papers, the authors learn a Once-for-all net. This starts as a big neural network which is trained normally (albeit with input images of different resolutions). It is then fine-tuned while sampling sub-networks with progressively smaller kernels, then lower depth, then width (while still sampling larger ne...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2020_HylxE1HKwS", "iclr_2020_HylxE1HKwS", "BklpzCV0FB", "iclr_2020_HylxE1HKwS", "Bygt_gUQYr", "HygXKK8RKB", "r1l72hZWir", "iclr_2020_HylxE1HKwS", "iclr_2020_HylxE1HKwS", "iclr_2020_HylxE1HKwS", "BJe47svjOB", "iclr_2020_HylxE1HKwS" ]
iclr_2020_B1gZV1HYvS
Multi-Agent Interactions Modeling with Correlated Policies
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}.
accept-poster
The paper proposes an extension to the popular Generative Adversarial Imitation Learning framework that considers multi-agent settings with "correlated policies", i.e., where agents' actions influence each other. The proposed approach learns opponent models to consider possible opponent actions during learning. Several questions were raised during the review phase, including clarifying questions about key components of the proposed approach and theoretical contributions, as well as concerns about related work. These were addressed by the authors and the reviewers are satisfied that the resulting paper provides a valuable contribution. I encourage the authors to continue to use the reviewers' feedback to improve the clarity of their manuscript in time for the camera ready submission.
test
[ "Hyeeki6nKB", "B1gXtGEb5S", "r1ggNn9Zir", "Hyx3Roc-sB", "HkgZL35bsS", "ryxN9TcbiH", "HylafpcWir", "Hkgn995bjr", "rJezMXKS5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, a multi-agent imitation learning algorithm with opponent modeling is proposed, where each agent considers other agents’ expected actions in advance and uses them to generate their own actions. Assuming each agent can observe other agents’ actions, which is a reasonable assumption in MARL problems, a ...
[ 8, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_B1gZV1HYvS", "iclr_2020_B1gZV1HYvS", "rJezMXKS5r", "rJezMXKS5r", "rJezMXKS5r", "Hyeeki6nKB", "B1gXtGEb5S", "iclr_2020_B1gZV1HYvS", "iclr_2020_B1gZV1HYvS" ]
iclr_2020_BJgWE1SFwS
PCMC-Net: Feature-based Pairwise Choice Markov Chains
Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms unable to express empirical observations from modern behavior economics like context effects occurring when a choice between two options is altered by adding a third alternative. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives' and individual's features. We apply our construction to the complex case of airline itinerary booking where singletons are common (due to varying prices and individual-specific itineraries), and context effects and behaviors strongly dependent on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches.
accept-poster
This submission proposes to use neural networks in combination with pairwise choice Markov chain (PCMC) models for choice modelling. The deep network is used to parametrize the PCMC and, in so doing, improve generalization and inference. Strengths: The formulation and theoretical justifications are convincing. The improvements are non-trivial and the approach is novel. Weaknesses: The text was not always easy to follow, and the experimental validation was initially too limited; the latter was addressed during the discussion by adding an additional experiment. All reviewers recommend acceptance.
test
[ "H1l2wrZ_cr", "S1xfZCnTKB", "BygUPA_hoS", "BklhlAuniS", "Sklulsu2or", "S1xv9y4k5H", "BylOTtd3jH", "ryesrK_njr", "rJxDQbYstS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "1.The goal of the paper is to connect flexible choice modeling with a modern approach to ML architecture to make said choice modeling scalable, tractable, and practical. \n2. The approach of the paper is well motivated intuitively, but could more explicitly show that PCMC-Net is needed to fix inferential problems ...
[ 8, 6, -1, -1, -1, 6, -1, -1, 3 ]
[ 5, 1, -1, -1, -1, 1, -1, -1, 1 ]
[ "iclr_2020_BJgWE1SFwS", "iclr_2020_BJgWE1SFwS", "S1xfZCnTKB", "rJxDQbYstS", "BylOTtd3jH", "iclr_2020_BJgWE1SFwS", "S1xv9y4k5H", "H1l2wrZ_cr", "iclr_2020_BJgWE1SFwS" ]
iclr_2020_Byx4NkrtDS
Implementing Inductive bias for different navigation tasks through diverse RNN attrractors
Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks.
accept-poster
Navigation is learned in a two-stage process, where the (recurrent) network is first pre-trained in a task-agnostic stage and then fine-tuned using Q-learning. The analysis of the learned network confirms that what has been learned in the task-agnostic pre-training stage takes the form of attractors. The reviewers generally liked this work, but complained about a lack of comparison studies and baselines. The authors then carried out such studies and substantially updated the paper. Given that the extensive update of the paper seems to have addressed the reviewers' complaints, I think this paper can be accepted.
val
[ "HyetoPNhsr", "BkxSZ8x3jH", "rklL5zxssr", "rkl90g-mor", "Bkxi8gb7ir", "SkxCR1-mjS", "r1ldu7WAYS", "HJg9M2dCFH", "rkl_IG0e9S" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for the discussion and believe that our paper has improved through this process.\nWe have now uploaded a revised version of the paper, which we hope answers all the concerns raised by the reviewers. In particular, both points #1 and #2 address the novelty concerns raised by reviewer #3. \n\nThe mai...
[ -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 1, 4, 5 ]
[ "iclr_2020_Byx4NkrtDS", "rklL5zxssr", "rkl90g-mor", "r1ldu7WAYS", "HJg9M2dCFH", "rkl_IG0e9S", "iclr_2020_Byx4NkrtDS", "iclr_2020_Byx4NkrtDS", "iclr_2020_Byx4NkrtDS" ]
iclr_2020_BJgr4kSFDS
Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings
Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions (∧) and existential quantifiers (∃). Handling queries with logical disjunctions (∨) remains an open problem. Here we propose query2box, an embedding-based framework for reasoning over arbitrary queries with ∧, ∨, and ∃ operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embedding with dimension proportional to the number of KG entities. However, we show that by transforming queries into a Disjunctive Normal Form, query2box is capable of handling arbitrary logical queries with ∧, ∨, ∃ in a scalable manner. We demonstrate the effectiveness of query2box on two large KGs and show that query2box achieves up to 25% relative improvement over the state of the art.
accept-poster
This paper proposes a new method for answering queries using incomplete knowledge bases. The approach relies on learning embeddings of the vertices of the knowledge graph. The reviewers unanimously found the method well motivated and agreed that it convincingly outperforms previous work.
train
[ "S1geJGyitH", "SyxzjxvjsH", "S1xjDxPssS", "B1gHTJvjsB", "ryxU5kPssS", "Hklbu1aRKH", "rJxqx3Fy9r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper introduces an approach to answering queries on knowledge graphs, called Query2Box. The idea is to work with the embeddings of the vertices of the knowledge graph as if they were kind of sets. In this way, from a set, called box, of entities embeddings it is possible to project them to find other boxes us...
[ 8, -1, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_BJgr4kSFDS", "rJxqx3Fy9r", "rJxqx3Fy9r", "Hklbu1aRKH", "S1geJGyitH", "iclr_2020_BJgr4kSFDS", "iclr_2020_BJgr4kSFDS" ]
iclr_2020_B1g8VkHFPH
Rethinking the Hyperparameters for Fine-tuning
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for "dissimilar" datasets. Our findings challenge common practices of fine-tuning and encourages deep learning practitioners to rethink the hyperparameters for fine-tuning.
accept-poster
This paper presents a guide for setting hyperparameters when fine-tuning from one domain to another. This is an important problem, as many practical deep learning applications repurpose an existing model to a new setting through fine-tuning. All reviewers were positive, saying that this work provides new experimental insights, especially related to setting momentum parameters. Though other works may have previously discussed the effect of momentum during fine-tuning, this work presented new experiments that contribute to the overall understanding. Reviewer 3 had some concerns about the generalization of the findings to other backbone architectures, but these were resolved during the discussion phase. The authors provided detailed clarifications during the rebuttal, and we encourage them to incorporate any remaining discussion or new clarifications into the final draft.
train
[ "BkxO9K7JcH", "Hkxx5ys3jH", "Bkl_x05niH", "ByeXaaqhoS", "B1e2K9q3ir", "BJgIS_c3ir", "BklYgHvGsH", "SJev0nZKYB", "B1xkAOgitB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This submission studies the problem of transfer learning and fine tuning. This submission proposes four insights: Momentum hyperparameters are essential for fine-tuning; When the hyperparameters satisfy some certain relationships, the results of fine-tuning are optimal; The similarity between source and target dat...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_B1g8VkHFPH", "BklYgHvGsH", "BkxO9K7JcH", "BkxO9K7JcH", "B1xkAOgitB", "SJev0nZKYB", "iclr_2020_B1g8VkHFPH", "iclr_2020_B1g8VkHFPH", "iclr_2020_B1g8VkHFPH" ]
iclr_2020_H1edEyBKDS
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
accept-poster
This paper proposes a simple plug-and-play language model approach to the problem of controlled language generation. The problem is important and timely, and the approach is simple yet effective. Reviewers had some discussion about whether 1) there is enough novelty, 2) the evaluation task really shows effectiveness, and 3) this paper will inspire future research directions. After discussing the above points, the reviewers are leaning more positive, and I reflect their positive sentiment by recommending acceptance. I look forward to seeing this work presented at ICLR.
train
[ "H1x4jhziFS", "B1xorXFnjS", "BJeY0tInor", "Bkxgl5Toir", "BJgpGJ7ooH", "H1lsWy7ssS", "Sye1kV9qjB", "B1ljnf5csH", "HkegjQxaFH", "rklLgxopKB", "BklhMpxp_r", "SJxBXUyT_B", "r1emraXVOr", "HJlzgOuG_B", "SklON_OGdr", "ByeOsKdGOr", "BJgScu_zdr", "BklATnxydS", "H1lCPj1pwH", "rJxpps_swS"...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public"...
[ "The authors describe a method for training plug and play language models, a way to incorporate control elements into pre-trained LMs. In contrast to existing work, which often trains conditioned upon the control element, the authors emphasize that their method does not require re-training the initial LM. This is e...
[ 6, -1, -1, -1, -1, -1, -1, -1, 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_H1edEyBKDS", "BJeY0tInor", "Sye1kV9qjB", "iclr_2020_H1edEyBKDS", "H1lsWy7ssS", "H1x4jhziFS", "HkegjQxaFH", "rklLgxopKB", "iclr_2020_H1edEyBKDS", "iclr_2020_H1edEyBKDS", "ByeOsKdGOr", "iclr_2020_H1edEyBKDS", "iclr_2020_H1edEyBKDS", "SkezXrHiPr", "HJlzgOuG_B", "BklATnxydS", ...
iclr_2020_rkgqN1SYvr
Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks
The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.
accept-poster
The paper shows that initializing the parameters of a deep linear network from the orthogonal group speeds up learning, whereas sampling the parameters from a Gaussian may be harmful. The result of this paper can be interesting to the deep learning community. The main concern the reviewers raised is the huge overlap with the paper by Du & Hu (2019). It would have been nice to actually see whether the results for linear networks empirically also hold for nonlinear networks.
train
[ "SJeRmfcAKH", "SkgU7E06FS", "S1l4JT3joH", "B1x4O5hoiB", "Hkl2Bu2jor", "Sygdb88AFr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the role of initialization for training deep linear neural networks. The authors specifically consider the orthogonal initialization, and prove that with the orthogonal initialization proposed in equation (4), the gradient descent can achieve zero training error in a linear convergence rate. The...
[ 6, 8, -1, -1, -1, 3 ]
[ 4, 4, -1, -1, -1, 3 ]
[ "iclr_2020_rkgqN1SYvr", "iclr_2020_rkgqN1SYvr", "SkgU7E06FS", "Sygdb88AFr", "SJeRmfcAKH", "iclr_2020_rkgqN1SYvr" ]
iclr_2020_HyxjNyrtPr
RGBD-GAN: Unsupervised 3D Representation Learning From Natural Image Datasets via RGBD Image Synthesis
Understanding three-dimensional (3D) geometries from two-dimensional (2D) images without any labeled information is promising for understanding the real world without incurring annotation cost. We herein propose a novel generative model, RGBD-GAN, which achieves unsupervised 3D representation learning from 2D images. The proposed method enables camera parameter--conditional image generation and depth image generation without any 3D annotations, such as camera poses or depth. We use an explicit 3D consistency loss for two RGBD images generated from different camera parameters, in addition to the ordinal GAN objective. The loss is simple yet effective for any type of image generator such as DCGAN and StyleGAN to be conditioned on camera parameters. Through experiments, we demonstrated that the proposed method could learn 3D representations from 2D images with various generator architectures.
accept-poster
The paper initially received mixed reviews, with two reviewers being weakly positive and one being negative. Following the authors' revision, however, the negative reviewer was satisfied with the changes, and one of the positive reviewers increased the score as well. In general, the reviewers agree that the paper contains a simple and well-executed idea for recovering geometry in an unsupervised way with generative modeling from a collection of 2D images, even though the results are a bit underwhelming. The authors are encouraged to expand the related work section in the revision and to follow the suggestions of the reviewers.
val
[ "SJl2lItnjr", "H1lhwI5KiB", "rkgrqN9Fir", "S1e_fN9Ysr", "r1lqcDSiYS", "H1gPwuOTtH", "SJxNx3Optr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all the reviewers for their valuable comments.\n\nWe revised the paper. The major modifications are as follows.\n- Normalize depth images and visualize them with colormaps\n- Separate the related works section from the introduction section\n- Add some related works and discussions\n- Additio...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2020_HyxjNyrtPr", "r1lqcDSiYS", "H1gPwuOTtH", "SJxNx3Optr", "iclr_2020_HyxjNyrtPr", "iclr_2020_HyxjNyrtPr", "iclr_2020_HyxjNyrtPr" ]
iclr_2020_SyxhVkrYvr
Towards Verified Robustness under Text Deletion Interventions
Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural language inference by ensuring that models do not become more confident in their predictions as arbitrary subsets of words from the input text are deleted. We develop a novel technique for formal verification of this specification for models based on the popular decomposable attention mechanism by employing the efficient yet effective interval bound propagation (IBP) approach. Using this method we can efficiently prove, given a model, whether a particular sample is free from the under-sensitivity problem. We compare different training methods to address under-sensitivity, and compare metrics to measure it. In our experiments on the SNLI and MNLI datasets, we observe that IBP training leads to a significantly improved verified accuracy. On the SNLI test set, we can verify 18.4% of samples, a substantial improvement over only 2.8% using standard training.
accept-poster
This paper deals with the under-sensitivity problem in natural language inference tasks. An interval bound propagation (IBP) approach is applied to verify the confidence of the model when a subset of words from the input text is deleted. The paper is well written and easy to follow. The authors give a detailed rebuttal, and 3 of the 4 reviewers lean towards accepting the paper.
train
[ "B1xeKnauiH", "rJe5coadiH", "H1gdBspusH", "HJeGB96doS", "HJefc93Z5S", "ryxiFWbHcH", "rkxuhTae5S", "HJlme7pccr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and comments. \n\nComment:\n“[...] the technical contribution feels incremental over previous approaches, especially Huang (2019).”\n\nResponse:\nHuang et al. (2019) indeed also made use of IBP, but address a different NLP task, and an over-sensitivity specification. Their work is adapted...
[ -1, -1, -1, -1, 8, 6, 6, 3 ]
[ -1, -1, -1, -1, 1, 3, 3, 1 ]
[ "rkxuhTae5S", "HJefc93Z5S", "ryxiFWbHcH", "HJlme7pccr", "iclr_2020_SyxhVkrYvr", "iclr_2020_SyxhVkrYvr", "iclr_2020_SyxhVkrYvr", "iclr_2020_SyxhVkrYvr" ]
iclr_2020_Hke0V1rKPS
Jacobian Adversarially Regularized Networks for Robustness
Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce salient Jacobian can result in better robustness. This paper answers this question with affirmative empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training.
accept-poster
This paper extends previous observations (Tsipras, Etmann, etc.) on the relation between Jacobians and robustness, and directly trains a model that improves robustness using Jacobians that look like images. The questions regarding computation time (raised by two reviewers, including one of the most negative reviewers) are appropriately addressed by the authors (added experiments). Reviewers agree that the idea is novel, and some conjectured why the paper's idea is a very sensible one. We think this paper would be of interest to ICLR readers. Please address any remaining comments from the reviewers before the final copy.
train
[ "r1xkPfF6tr", "S1l4Bz3soH", "H1eCvDhiiH", "rkllMwhsjr", "SkeInMnsoH", "SklSSInojB", "Byg5Bb6GiH", "rklhsmOwFH", "ryxg23o2tS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Summary:\nIt was previously observed that models that were more robust to adversarial perturbation had more interpretable jacobian. The authors attempt to train for interpretable jacobian in order to improve the robustness of the model.\nThis is done by employing a GAN-like procedure where a discriminator attempts...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Hke0V1rKPS", "ryxg23o2tS", "iclr_2020_Hke0V1rKPS", "Byg5Bb6GiH", "rklhsmOwFH", "r1xkPfF6tr", "iclr_2020_Hke0V1rKPS", "iclr_2020_Hke0V1rKPS", "iclr_2020_Hke0V1rKPS" ]
iclr_2020_SJexHkSFPS
Thinking While Moving: Deep Reinforcement Learning with Concurrent Control
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving."
accept-poster
This paper studies the setting in reinforcement learning where the next action must be sampled while the current action is still executing. This refers to continuous time problems that are discretised to make them delay-aware in terms of the time taken for action execution. The paper presents adaptions of the Bellman operator and Q-learning to deal with this scenario. This is a problem that is of theoretical interest and also has practical value in many real-world problems. The reviewers found both the problem setting and the proposed solution to be valuable, particularly after the greatly improved technical clarity in the rebuttals. As a result, this paper should be accepted.
test
[ "rygdpEwhKS", "BygtJHzqsB", "ryg6d_-MqB", "H1x8xt1coH", "BkexhKk5iB", "Hye7aUyciB", "SkgRRIJqsS", "ByxoiRtUcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper tackles the problem of making decisions for the next action while still engaged in doing the previous actions. Such a delay could either be part of the design (like a robot deciding the next action before its actors and motors have come to full rest after the current action) or an artefact of the delays ...
[ 6, -1, 6, -1, -1, -1, -1, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, 1 ]
[ "iclr_2020_SJexHkSFPS", "Hye7aUyciB", "iclr_2020_SJexHkSFPS", "iclr_2020_SJexHkSFPS", "rygdpEwhKS", "ryg6d_-MqB", "ByxoiRtUcB", "iclr_2020_SJexHkSFPS" ]
iclr_2020_SJxbHkrKDH
Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning
In multi-agent games, the complexity of the environment can grow exponentially as the number of agents increases, so it is particularly challenging to learn good policies when the agent population is large. In this paper, we introduce Evolutionary Population Curriculum (EPC), a curriculum learning paradigm that scales up Multi-Agent Reinforcement Learning (MARL) by progressively increasing the population of training agents in a stage-wise manner. Furthermore, EPC uses an evolutionary approach to fix an objective misalignment issue throughout the curriculum: agents successfully trained in an early stage with a small population are not necessarily the best candidates for adapting to later stages with scaled populations. Concretely, EPC maintains multiple sets of agents in each stage, performs mix-and-match and fine-tuning over these sets and promotes the sets of agents with the best adaptability to the next stage. We implement EPC on a popular MARL algorithm, MADDPG, and empirically show that our approach consistently outperforms baselines by a large margin as the number of agents grows exponentially. The source code and videos can be found at https://sites.google.com/view/epciclr2020.
accept-poster
The paper proposes a curriculum approach to increasing the number of agents (and hence complexity) in MARL. The reviewers mostly agreed that this is a simple and useful idea to the MARL community. There was some initial disagreement about relationships with other RL + evolution approaches, but it got resolved in the rebuttal. Another concern was the slight differences in the environments considered by the paper compared to the literature, but the authors added an experiment with the unmodified version. Given the positive assessment and the successful rebuttal, I recommend acceptance.
train
[ "B1lZmZmkqH", "H1lUoOjaYr", "B1xuPXkKjH", "B1x_YM1tjH", "SyxjEpAOoB", "HkeDFh0OoS", "ryxYvoLRYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a kind of curriculum for large-scale multi-agent learning. The related work section mentions some obvious points of comparison (note: see also https://science.sciencemag.org/content/364/6443/859.abstract). However, the authors do not compare with ANY of this work (either in terms of algorithm de...
[ 6, 6, -1, -1, -1, -1, 8 ]
[ 4, 5, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SJxbHkrKDH", "iclr_2020_SJxbHkrKDH", "B1lZmZmkqH", "H1lUoOjaYr", "ryxYvoLRYH", "iclr_2020_SJxbHkrKDH", "iclr_2020_SJxbHkrKDH" ]
iclr_2020_r1xMH1BtvB
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
accept-poster
This paper investigates the tasks used to pretrain language models. The paper proposes not using a generative task ('filling in' masked tokens), but instead a discriminative task (recognising corrupted tokens). The authors empirically show that the proposed method leads to improved performance, especially in the "limited compute" regime. Initially, the reviewers had quite split opinions on the paper, but after the rebuttal and discussion phases all reviewers agreed on an "accept" recommendation. I am happy to agree with this recommendation based on the following observations: - The authors provide strong empirical results including relevant ablations. Reviews initially suggested a limitation to classification tasks and a lack of empirical analysis, but those issues have been addressed in the updated version. - The problem of pre-training language models is relevant for the ML and NLP communities, and it should be especially relevant for ICLR. The resulting method significantly outperforms existing methods, especially in the low compute regime. - The idea is quite simple, but at the same time it seems to be a quite novel idea.
test
[ "BkxelNPJqB", "SJgKAmShKH", "Skxd42Y2oB", "SkxNfzO9or", "SkeFTOXIsS", "B1eYvDmUiH", "r1ebpQmLoS", "rJxvxzXIiS", "r1xUDhfRtB", "Skg95kdCcS", "HJe2AfI4dB", "HJlI-6LZdr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public", "author", "public" ]
[ "The authors propose replaced token detection, a novel self-supervised task, for learning text representations.\n\nThe principle advantage of the approach is that, in contrast with the standard masked language model (MLM) objective used by BERT and derivatives, there is a training signal for all tokens of the input...
[ 8, 6, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1 ]
[ "iclr_2020_r1xMH1BtvB", "iclr_2020_r1xMH1BtvB", "SkeFTOXIsS", "iclr_2020_r1xMH1BtvB", "SJgKAmShKH", "r1xUDhfRtB", "BkxelNPJqB", "Skg95kdCcS", "iclr_2020_r1xMH1BtvB", "iclr_2020_r1xMH1BtvB", "HJlI-6LZdr", "iclr_2020_r1xMH1BtvB" ]
iclr_2020_SklGryBtwr
Environmental drivers of systematicity and generalization in a situated agent
The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the visual invariances afforded by the agent's perspective, or frame of reference; and (c) the variety of visual input inherent in the perceptual aspect of the agent's perception. Our findings indicate that the degree of generalisation that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalise in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.
accept-poster
The paper studies tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room, and analyzes factors which promote combinatorial generalization in such an environment. The paper is a thought-provoking work, and would make a valuable contribution to the line of work on systematic generalization in embodied agents. The draft has been improved significantly after the rebuttal. After the discussion, we agree that it is worthwhile presenting at ICLR.
train
[ "Byeq7rzTYr", "HyeAa6v9Fr", "rkecRlD2jB", "HkxvT5U2jH", "HyxGIMUnoS", "rJxuMyU1ir", "H1eAp6B2jH", "SygQojS3sS", "B1g6djrhiB", "ByxW-PBniS", "HyxG-PZooH", "BJlsCL1siS", "BJxmcLyioB", "HJeNbA-qjB", "H1xOA3DFiB", "SJxtvO5uoB", "SkxuMtcujB", "S1geqFcuoH", "BJgVAR1ziS", "HJg8zgAbjS"...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", ...
[ "\n=============================== Update after revisions =====================================================\n\nIn my initial review, I had raised some issues with the interpretation of the results and suggested some control experiments to tighten the conclusions. The authors chose to weaken their initial claims...
[ 6, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SklGryBtwr", "iclr_2020_SklGryBtwr", "H1eAp6B2jH", "HyxGIMUnoS", "B1g6djrhiB", "iclr_2020_SklGryBtwr", "ByxW-PBniS", "HyeAa6v9Fr", "Byeq7rzTYr", "HyxG-PZooH", "BJxmcLyioB", "H1xOA3DFiB", "HJeNbA-qjB", "SJxtvO5uoB", "iclr_2020_SklGryBtwr", "rJxuMyU1ir", "Byeq7rzTYr", "Hye...
iclr_2020_ByxQB1BKwH
Abstract Diagrammatic Reasoning with Multiplex Graph Networks
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.
accept-poster
This paper presents a new method of constructing graph neural networks for IQ-style diagrammatic reasoning tasks, in particular Raven Progressive Matrices. The model first learns an object representation for parts of the image and then tries to combine them together to represent relations between different objects of the image. Using this model they achieve SOTA results (ignoring a parallelly submitted paper) on the PGM and RAVEN datasets. The improvement in SOTA is substantial. Most of the critique made of the paper concerns writing style and presentation. The authors seem to have fixed several of these concerns in the newly uploaded version of the paper. I further request the authors to revise the paper for readability. However, since the paper presents both an interesting model and improved empirical results, I recommend acceptance.
train
[ "S1l2nYTLoS", "Syepv_6IjH", "HJlUfdaIiB", "SyxZXtpUoS", "rJllgtaUsH", "H1lN8OSAYr", "BJgYu7iQqH", "H1lWI44d9H", "B1eGUyDCKB", "r1xaSDIRKS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "In the revised version, we improved structuring and writing quality according to reviewers' comments. The major changes are:\n\n1. Combined Figure 1 and 2 to give more space for other sections.\n2. Moved parts of dataset description to Appendix to give more space for other sections.\n3. Improved explanation of mod...
[ -1, -1, -1, -1, -1, 6, 3, 6, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 1, 1, -1, -1 ]
[ "iclr_2020_ByxQB1BKwH", "HJlUfdaIiB", "BJgYu7iQqH", "H1lN8OSAYr", "H1lWI44d9H", "iclr_2020_ByxQB1BKwH", "iclr_2020_ByxQB1BKwH", "iclr_2020_ByxQB1BKwH", "r1xaSDIRKS", "iclr_2020_ByxQB1BKwH" ]
iclr_2020_rylXBkrYDS
A Baseline for Few-Shot Image Classification
Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
accept-poster
This paper introduces a simple baseline for few-shot image classification in the transductive setting, which includes a standard cross-entropy loss on the labeled support samples and a conditional entropy loss on the unlabeled query samples. Both losses are known in the literature (the seminal work on entropy minimization by Bengio should be cited properly). However, reviewers are positive about this paper, acknowledging the significant contributions of a novel few-shot baseline that establishes a new state-of-the-art on well-known public few-shot datasets as well as on the introduced large-scale benchmark ImageNet-21k. The comprehensive study of the methods and datasets in this domain will benefit the research practices in this area. Therefore, I make an acceptance recommendation.
train
[ "SJlVVzkRFB", "SyxCU0c3oS", "SJxmJaqnoB", "H1gn4s5hsS", "S1gBtOq2ir", "B1lohwq2jB", "HJgnmTMTYS", "rJejGJP_qH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a fine-tune-based few-shot classification baseline, which has been validated effectively on several datasets, including Mini-Imagenet, Tiered-Imagenet, CIFAR-FS, FC-100, and Imagenet-21k. In addition to the method, the authors also provide concrete experimental setting and new evaluation propos...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_rylXBkrYDS", "SJlVVzkRFB", "SJlVVzkRFB", "rJejGJP_qH", "HJgnmTMTYS", "iclr_2020_rylXBkrYDS", "iclr_2020_rylXBkrYDS", "iclr_2020_rylXBkrYDS" ]
iclr_2020_SJgVHkrYDH
Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering
Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.
accept-poster
The paper proposes a multi-hop machine reading method for the HotpotQA and SQuAD-Open datasets. The reviewers agreed that it is very interesting to learn to retrieve, and the paper presents an interesting solution. Some additional experiments as suggested by the reviewers will help improve the paper further.
train
[ "S1gd_SqAKr", "BJegSqBYsH", "rkeehtADiH", "HkglHuj3FS", "HJxL380PiS", "Bkl-WmRviB", "SklkStAPjH", "HylTvDCvsr", "HJlHgIAPiB", "rJgyjVRPir", "ryg3CRRRYS", "HJgZPKrFOS", "r1g-3abtdr", "BkxGvEnBuS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "public", "author", "public" ]
[ "Summary\n========\nThis paper introduces a graph-based recurrent retrieval model for retrieving evidence documents in a multi-hop reasoning question answering task. The main idea is that (1) the graph formed by Wikipedia links between passages can be used as constraint for constructing reasoning chains, and (2) th...
[ 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1 ]
[ "iclr_2020_SJgVHkrYDH", "HkglHuj3FS", "SklkStAPjH", "iclr_2020_SJgVHkrYDH", "HJlHgIAPiB", "iclr_2020_SJgVHkrYDH", "HkglHuj3FS", "HJxL380PiS", "S1gd_SqAKr", "ryg3CRRRYS", "iclr_2020_SJgVHkrYDH", "r1g-3abtdr", "BkxGvEnBuS", "iclr_2020_SJgVHkrYDH" ]
iclr_2020_BJlBSkHtDS
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
The performance of deep network learning strongly depends on the choice of the non-linear activation function associated with each neuron. However, deciding on the best activation is non-trivial and the choice depends on the architecture, hyper-parameters, and even on the dataset. Typically these activations are fixed by hand before training. Here, we demonstrate how to eliminate the reliance on first picking fixed activation functions by using flexible parametric rational functions instead. The resulting Padé Activation Units (PAUs) can both approximate common activation functions and also learn new ones while providing compact representations. Our empirical evidence shows that end-to-end learning deep networks with PAUs can increase the predictive performance. Moreover, PAUs pave the way to approximations with provable robustness.
accept-poster
The paper proposes a new learnable activation function called the Padé Activation Unit (PAU), based on a parameterization of rational functions. All the reviewers agree that the method is soundly motivated, and the empirical results are strong enough to suggest that this would be a good addition to the literature.
train
[ "SkxMQumTKB", "HJgsBTOLKr", "HkgclbXsjH", "rklNJvljoS", "BJemW-L5ir", "BJeUaB5YiH", "rkesLmKBoH", "rkg9m7YHiS", "BkxfeQYBor", "SkgoV1AnFH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The authors introduce an activation function based on learnable Padé approximations. The numerator and denominator of the learnable activation function are polynomials of m and n, respectively. The authors name them Padé activation units (PAUs). The authors also propose a randomized a version of these functions th...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BJlBSkHtDS", "iclr_2020_BJlBSkHtDS", "BJemW-L5ir", "BkxfeQYBor", "rkesLmKBoH", "rkg9m7YHiS", "HJgsBTOLKr", "SkgoV1AnFH", "SkxMQumTKB", "iclr_2020_BJlBSkHtDS" ]
iclr_2020_SJlKrkSFPH
A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES
Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results. Although most techniques developed so far require knowledge of the architecture of the machine learning model and remain hard to scale to complex prediction pipelines, the method of randomized smoothing has been shown to overcome many of these obstacles. By requiring only black-box access to the underlying model, randomized smoothing scales to large architectures and is agnostic to the internals of the network. However, past work on randomized smoothing has focused on restricted classes of smoothing measures or perturbations (like Gaussian or discrete) and has only been able to prove robustness with respect to simple norm bounds. In this paper we introduce a general framework for proving robustness properties of smoothed machine learning models in the black-box setting. Specifically, we extend randomized smoothing procedures to handle arbitrary smoothing measures and prove robustness of the smoothed classifier by using f-divergences. Our methodology improves upon the state of the art in terms of computation time or certified robustness on several image classification tasks and an audio classification task, with respect to several classes of adversarial perturbations.
accept-poster
This submission proposes a black-box method for certifying the robustness of smoothed classifiers in the presence of adversarial perturbations. This work goes beyond previous works in certifying robustness for arbitrary smoothing measures. Strengths: -Sound formulation and theoretical justification to tackle an important problem. Weaknesses -Experimental comparison was at times not fair. -The presentation and writing could be improved. These two weaknesses were sufficiently addressed during the discussion. All reviewers recommend acceptance.
train
[ "SylnaDVqYr", "r1eYLSiotr", "BygdAZ9_jH", "rklUqQ9dsB", "HyxxTiecsB", "rkxA4Q9uoS", "BJgnDzc_or", "B1gCq_6Msr", "rkged9hGsr", "rklQRwcfsr", "rygigJwkjH", "BkealMyptB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer" ]
[ "The paper introduces a generalization of the randomized smoothing approach for certifying robustness of black-box classifiers, allowing the smoothing measure to be an arbitrary distribution (whereas previous work almost exclusively focused on Gaussian noise), and facilitating the certification with respect to diff...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_SJlKrkSFPH", "iclr_2020_SJlKrkSFPH", "iclr_2020_SJlKrkSFPH", "SylnaDVqYr", "r1eYLSiotr", "r1eYLSiotr", "BkealMyptB", "rklQRwcfsr", "rygigJwkjH", "iclr_2020_SJlKrkSFPH", "iclr_2020_SJlKrkSFPH", "iclr_2020_SJlKrkSFPH" ]
iclr_2020_SkgpBJrtvS
Contrastive Representation Distillation
Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. When combined with knowledge distillation, our method sets a state of the art in many transfer tasks, sometimes even outperforming the teacher network.
accept-poster
This paper presents a new distillation method with theoretical and empirical support. Given the reviewers' comments and the AC's reading, the novelty/significance and application scope shown in the paper are arguably limited. However, the authors extensively verified and compared the proposed methods against existing ones, showing significant improvements in comprehensive experiments. As the distillation method can enjoy broader usage, I think the proposed method in this paper can be influential in future work. Hence, I consider this a borderline paper leaning toward acceptance.
train
[ "H1lvNCwcjS", "S1l0WCw9or", "SklFaaP5jr", "SkeiTRlTYS", "Hyl91UHRFB", "HkevRLo4cB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 3,\n\nThank you for your feedback.\n\n“the problem then becomes nothing but trying to minimize another distance metric between teacher and student networks on an intermediate layer”\nDifferent distance metrics can make a big difference. Theoretically, we show our objective is maximizing a lower bound...
[ -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, 1, 4, 4 ]
[ "SkeiTRlTYS", "Hyl91UHRFB", "HkevRLo4cB", "iclr_2020_SkgpBJrtvS", "iclr_2020_SkgpBJrtvS", "iclr_2020_SkgpBJrtvS" ]
iclr_2020_HyeaSkrYPH
Certified Defenses for Adversarial Patches
Adversarial patch attacks are among one of the most practical threat models against real-world computer vision systems. This paper studies certified and empirical defenses against patch attacks. We begin with a set of experiments showing that most existing defenses, which work by pre-processing input images to mitigate adversarial patches, are easily broken by simple white-box adversaries. Motivated by this finding, we propose the first certified defense against patch attacks, and propose faster methods for its training. Furthermore, we experiment with different patch shapes for testing, obtaining surprisingly good robustness transfer across shapes, and present preliminary results on certified defense against sparse attacks. Our complete implementation can be found on: https://github.com/Ping-C/certifiedpatchdefense.
accept-poster
This paper presents a certified defense method for adversarial patch attacks. The proposed approach provides certifiable guarantees against the attacks, and the reviewers particularly find its experimental results interesting and promising. The new experiments added during the rebuttal phase strengthened the paper. A remaining concern is that the novelty is limited, as this paper could be viewed as an application of the original IBP to patch attacks, but the reviewers believe that its empirical results are important.
train
[ "r1lTNkw6tS", "rkxL6t_njB", "B1gxVBPnir", "rJgZlGvhoH", "rkeTrpE2iH", "r1lKQwihYS", "SJxfCOehsr", "B1g8i7IoiB", "H1eib7IisS", "S1xAU9vqoB", "Byg569Dcsr", "rJefUZ0iFr" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper attempts to extend the Interval Bound Propagation algorithm from (Gowal et al. 2018) to defend against adversarial patch-based attacks. In order to defend against patches which could appear at any location, all the patches need to be considered. This is too computationally expensive, hence they proposed...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HyeaSkrYPH", "iclr_2020_HyeaSkrYPH", "rJgZlGvhoH", "rkeTrpE2iH", "SJxfCOehsr", "iclr_2020_HyeaSkrYPH", "S1xAU9vqoB", "r1lTNkw6tS", "r1lTNkw6tS", "r1lKQwihYS", "rJefUZ0iFr", "iclr_2020_HyeaSkrYPH" ]
iclr_2020_HJlxIJBFDr
Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires O(1/ϵ^{3/2})\footnote{O(⋅) notation hides constant factors.} episodes to find an ϵ-approximate stationary point of the nonconcave performance function J(θ) (i.e., θ such that ∥∇J(θ)∥_2^2 ≤ ϵ). This sample complexity improves the existing result O(1/ϵ^{5/3}) for stochastic variance reduced policy gradient algorithms by a factor of O(1/ϵ^{1/6}). In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
accept-poster
The paper introduces a policy gradient estimator that is based on the stochastic recursive gradient estimator. It provides a sample complexity result of O(eps^{-3/2}) trajectories for estimating the gradient with accuracy eps. This paper generated a lot of discussion among reviewers. The discussions were around the novelty of this work in relation to SARAH (Nguyen et al., ICML 2017), SPIDER (Fang et al., NeurIPS 2018), and the work of Papini et al. (ICML 2018). SARAH/SPIDER are stochastic variance reduced gradient estimators for convex/non-convex problems and have been studied in the optimization literature. To bring them to the RL literature, some adjustments are needed, for example the use of an importance sampling (IS) estimator. The work of Papini et al. uses IS, but does not use SARAH/SPIDER, and it does not use step-wise IS. Overall, I believe that even though the key algorithmic components of this work have been around, it is still a valuable contribution to the RL literature.
train
[ "rJxxBuwItB", "r1xTId76KH", "B1ljJcBFjS", "r1lWNm6Osr", "HyxX-XTusH", "HJgY3GadoS", "r1gEFzT_jr", "HJe3Ea2AYr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: \n\nThe paper proposed a policy gradient method called SRVR-PG, which based on stochastic recursive gradient estimator. It shows that the complexity is better than that of SVRPG. Some experiments on standard environments are provided to show the efficiency of the algorithm over GPOMDP and SVRPG. \n\nCom...
[ 6, 8, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HJlxIJBFDr", "iclr_2020_HJlxIJBFDr", "iclr_2020_HJlxIJBFDr", "r1xTId76KH", "HJe3Ea2AYr", "rJxxBuwItB", "rJxxBuwItB", "iclr_2020_HJlxIJBFDr" ]
iclr_2020_r1egIyBFPS
Deep Symbolic Superoptimization Without Human Knowledge
Deep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions. Existing approaches either perform supervised training on human-constructed datasets that define equivalent expression pairs, or apply reinforcement learning with human-defined equivalent transformation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which incurs a large labeling cost and learning bias, because it is almost impossible to define a comprehensive equivalent set. We thus propose HISS, a reinforcement learning framework for symbolic superoptimization that keeps humans outside the loop. HISS introduces a tree-LSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.
accept-poster
This work introduces a neural architecture and corresponding method for simplifying symbolic equations, which can be trained without requiring human input. This is an area somewhat outside most of our expertise, but the general consensus is that the paper is interesting and is an advance. The reviewers' concerns have been mostly resolved by the rebuttal, so I am recommending an accept.
train
[ "HJlylMRX5H", "Bygshft2sH", "HklviNintr", "H1g5MR8iiB", "rkegH68ijr", "HkeqEhUsjH", "S1lTyjUjsH", "Skl0FlWAtS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a method for symbolic superoptimization — the task of simplifying equations into equivalent expressions. The main goal is to design a method that does not rely on human input in defining equivalence classes, which should improve scalability of the simplification method to a larger set of expres...
[ 6, -1, 6, -1, -1, -1, -1, 3 ]
[ 1, -1, 1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_r1egIyBFPS", "H1g5MR8iiB", "iclr_2020_r1egIyBFPS", "HklviNintr", "Skl0FlWAtS", "HJlylMRX5H", "iclr_2020_r1egIyBFPS", "iclr_2020_r1egIyBFPS" ]
iclr_2020_SJgzLkBKPB
Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution
As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see: https://nikaashpuri.github.io/sarfa-saliency/.
accept-poster
A new method of calculating saliency maps for deep networks trained through RL (for example, to play games) is presented. The method is aimed at explaining why moves were taken by showing which salient features influenced the move, and seems to work well based on experiments with Chess, Go, and several Atari games. Reviewer 2 had a number of questions related to the performance of the method under various conditions, and these were answered satisfactorily by the authors. This is a solid paper with good reasoning and results, though perhaps not super novel, as the basic idea of explaining policies with saliency is not new. It should be accepted for poster presentation.
train
[ "rJx9cWb3YS", "SkekPRl2sS", "BylD_6l3sS", "BJeXh9ehsH", "Hyeu8gpssS", "Skx402YnYB", "HJe916UjsH", "SJg5oTWtsB", "Hyxvkl6uoH", "S1xBnuYNir", "Byx0WCDViS", "rJxZZaPEjS", "Bkeeh2vEoB", "rJlbK2tLOB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes an algorithm for explaining the move of the agents trained by reinforcement learning (RL) by generating a saliency map.\nThe authors proposed two desired properties for the saliency map, specificity and relevance.\nThe authors then pointed out that prior studies failed to capture one of the two...
[ 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SJgzLkBKPB", "Bkeeh2vEoB", "HJe916UjsH", "SJg5oTWtsB", "Hyxvkl6uoH", "iclr_2020_SJgzLkBKPB", "Byx0WCDViS", "S1xBnuYNir", "rJxZZaPEjS", "rJx9cWb3YS", "Skx402YnYB", "rJlbK2tLOB", "iclr_2020_SJgzLkBKPB", "iclr_2020_SJgzLkBKPB" ]