paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
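Each row pairs one ICLR 2020 submission with its full discussion thread: the paper metadata, the area chair's meta-review, a train/val/test split label, and parallel lists describing every forum comment (its id, author role, text, rating, confidence, and the id it replies to). The sketch below is a minimal, hypothetical Python reading of that schema — the `PaperRecord` class is ours, not part of the dataset; only the field names mirror the column headers above — and is meant only to make the per-row structure explicit.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PaperRecord:
    # One row of the table: a submission plus its review/discussion thread.
    paper_id: str          # OpenReview forum id, e.g. "iclr_2020_SyegvgHtwr"
    paper_title: str
    paper_abstract: str
    paper_acceptance: str  # final decision string, e.g. "reject"
    meta_review: str       # area chair's meta-review text
    label: str             # dataset split: "train", "val", or "test"
    # The remaining columns are parallel lists, one entry per forum comment.
    review_ids: List[str]          # comment ids
    review_writers: List[str]      # "official_reviewer", "author", or "public"
    review_contents: List[str]     # raw comment text (often truncated in this preview)
    review_ratings: List[int]      # reviewer score, or -1 for comments without a score
    review_confidences: List[int]  # reviewer confidence, or -1 likewise
    review_reply_tos: List[str]    # parent id: the paper_id for top-level posts,
                                   # otherwise the review_id being replied to
```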
iclr_2020_SyegvgHtwr | Localised Generative Flows | We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem. LGFs are composed of stacked continuous mixtures of bijections, which enables each bijection to learn a local region of the target rather than its entirety. Our method is a generalisation of existing flow-based methods, which can be used without modification as the basis for an LGF model. Unlike normalising flows, LGFs do not permit exact computation of log likelihoods, but we propose a simple variational scheme that performs well in practice. We show empirically that LGFs yield improved performance across a variety of common density estimation tasks. | reject | This paper proposes to overcome some fundamental limitations of normalizing flows by introducing auxiliary continuous latent variables. While the problem this paper is trying to address is mathematically legitimate, there is no strong evidence that this is a relevant problem in practice. Moreover, the proposed solution is not entirely novel, converting the flow in a latent-variable model. Overall, I believe this paper will be of minor relevance to the ICLR community. | train | [
"r1xPlCLSFS",
"HyltFrE2jS",
"H1gWPLNhsr",
"ryeRNDV2ir",
"S1xw68VhjS",
"HylQcUE3oB",
"B1gQMDEhiB",
"HJgjqGLwtH",
"BJlmkymy5S",
"S1eB7k4Zdr",
"r1x0meQ2vH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The paper introduces a straight-forward way to expand the flow models by considering mixture of flow distributions. The idea is not very novel since several previous work have tried the mixture of flow such as the mentioned RAD and Deep Mixture. The paper studies some further improvements such as using the continu... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"iclr_2020_SyegvgHtwr",
"iclr_2020_SyegvgHtwr",
"HyltFrE2jS",
"r1xPlCLSFS",
"BJlmkymy5S",
"H1gWPLNhsr",
"HJgjqGLwtH",
"iclr_2020_SyegvgHtwr",
"iclr_2020_SyegvgHtwr",
"r1x0meQ2vH",
"iclr_2020_SyegvgHtwr"
] |
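The -1 entries in review_ratings and review_confidences are sentinels for comments that carry no score (author and public posts, and reviewer replies), so the scored reviews are the positions whose writer is "official_reviewer" and whose rating is non-negative. A small, hypothetical helper illustrating that convention on the record above (the list values are copied from the row; the function name is ours, not part of the dataset):

```python
from typing import List, Tuple

def official_scores(writers: List[str],
                    ratings: List[int],
                    confidences: List[int]) -> List[Tuple[int, int]]:
    """Return (rating, confidence) pairs for scored official reviews only,
    skipping the -1 sentinels used for unscored comments."""
    return [(r, c) for w, r, c in zip(writers, ratings, confidences)
            if w == "official_reviewer" and r >= 0]

# Values taken from the "Localised Generative Flows" row above.
writers = ["official_reviewer", "author", "author", "author", "author", "author",
           "author", "official_reviewer", "official_reviewer", "author", "public"]
ratings = [3, -1, -1, -1, -1, -1, -1, 3, 1, -1, -1]
confidences = [5, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1]

print(official_scores(writers, ratings, confidences))
# -> [(3, 5), (3, 3), (1, 4)]
```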
iclr_2020_B1eZweHFwr | Statistical Verification of General Perturbations by Gaussian Smoothing | We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations. Concretely, our method produces a provable classifier which can establish statistical robustness against geometric perturbations (e.g., rotations, translations) as well as volume changes and pitch shifts on audio data. The generalization is non-trivial and requires careful handling of operations such as interpolation. Our method is agnostic to the choice of classifier and scales to modern architectures such as ResNet-50 on ImageNet. | reject | This paper proposes a smoothing-based certification against various forms of transformations, such as rotations, translations. The reviewers have concerns on the novelty of the work and several technical issues. The authors have made efforts to address some of issues, but the work may still significantly benefit from a throughout improvement in both presentation and technical contribution. | test | [
"BkxnCnHooS",
"HkloYnSosB",
"r1l0Tjrsjr",
"rkx4_jSsoS",
"SkeL435jtB",
"S1l6RuK2Kr",
"r1x1Xg40tH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"> Non-isotropic covariance matrices\n\nWe have addressed this point in our response to all reviewers.\n\n> To certify this attack, given a base classifier, we can simply do, for example, grid search, on the low dimensional space to find the worst case with very good accuracy. And it should be able to give much bet... | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"SkeL435jtB",
"S1l6RuK2Kr",
"r1x1Xg40tH",
"iclr_2020_B1eZweHFwr",
"iclr_2020_B1eZweHFwr",
"iclr_2020_B1eZweHFwr",
"iclr_2020_B1eZweHFwr"
] |
iclr_2020_HylfPgHYvr | Occlusion resistant learning of intuitive physics from videos | To reach human performance on complex tasks, a key ability for artificial systems is to understand physical interactions between objects, and predict future outcomes of a situation. This ability, often referred to as intuitive physics, has recently received attention and several methods were proposed to learn these physical rules from video sequences. Yet, most these methods are restricted to the case where no occlusions occur, narrowing the potential areas of application. The main contribution of this paper is a method combining a predictor of object dynamics and a neural renderer efficiently predicting future trajectories and explicitly modelling partial and full occlusions among objects. We present a training procedure enabling learning intuitive physics directly from the input videos containing segmentation masks of objects and their depth. Our results show that our model learns object dynamics despite significant inter-object occlusions, and realistically predicts segmentation masks up to 30 frames in the future. We study model performance for increasing levels of occlusions, and compare results to previous work on the tasks of future prediction and object following. We also show results on predicting motion of objects in real videos and demonstrate significant improvements over state-of-the-art on the object permanence task in the intuitive physics benchmark of Riochet et al. (2018). | reject | The paper studies the problem of modeling inter-object dynamics with occlusions. It provides proof-of-concept demonstrations on toy 3d scenes that occlusions can be handled by structured representations using object-level segmentation masks and depth information. However, the technical novelty is not high and the requirement of such structured information seems impractical real-world applications which thus limits the significance of the proposed method. | train | [
"rJejkkaRqr",
"Bklh6GB2sH",
"ryed9h43oS",
"HJeCH2V3iH",
"HyxH-2NhiH",
"HJlwAevmoB",
"Bylu2aeMoB",
"S1eWc4LTtS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to predict future trajectories by modeling partial and full occlusions. Although it is well-written and the topic sounds interesting, I failed to catch why this approach is required for this setting. So, to strengthen the message of this paper, I listed a couple of suggestions and comm... | [
3,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
5,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"iclr_2020_HylfPgHYvr",
"HJlwAevmoB",
"rJejkkaRqr",
"Bylu2aeMoB",
"S1eWc4LTtS",
"iclr_2020_HylfPgHYvr",
"iclr_2020_HylfPgHYvr",
"iclr_2020_HylfPgHYvr"
] |
iclr_2020_r1gfweBFPB | Learning by shaking: Computing policy gradients by physical forward-propagation | Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them. | reject | While the reviewers generally appreciated the idea behind the method in the paper, there was considerable concern about the experimental evaluation, which did not provide a convincing demonstration that the method works in interesting and relevant problem settings, and did not compare adequately to alternative approach. As such, I believe this paper is not quite ready for publication in its current form. | train | [
"rJe-f_H3jB",
"rJe5wjhIsB",
"BJlps92LsH",
"HJxb-c28jS",
"S1eQvcorsr",
"B1ggQEC6tS",
"HJeEH7Lf9B",
"H1xKZJ6f9B",
"S1eCOIEOqS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response. Ultimately I think this is an interesting direction, but perhaps the core idea (estimating the gradient of trajectories w.r.t. the policy parameters by fitting a GP to a set of noisy trajectories executing the same controller) is just innately difficult because of very high-variance gr... | [
-1,
-1,
-1,
-1,
-1,
1,
1,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
5,
1,
4,
1
] | [
"HJxb-c28jS",
"B1ggQEC6tS",
"HJeEH7Lf9B",
"H1xKZJ6f9B",
"S1eCOIEOqS",
"iclr_2020_r1gfweBFPB",
"iclr_2020_r1gfweBFPB",
"iclr_2020_r1gfweBFPB",
"iclr_2020_r1gfweBFPB"
] |
iclr_2020_rJx7wlSYvB | Differentiable Bayesian Neural Network Inference for Data Streams | While deep neural networks (NNs) do not provide the confidence of its prediction, Bayesian neural network (BNN) can estimate the uncertainty of the prediction. However, BNNs have not been widely used in practice due to the computational cost of predictive inference. This prohibitive computational cost is a hindrance especially when processing stream data with low-latency. To address this problem, we propose a novel model which approximate BNNs for data streams. Instead of generating separate prediction for each data sample independently, this model estimates the increments of prediction for a new data sample from the previous predictions. The computational cost of this model is almost the same as that of non-Bayesian deep NNs. Experiments including semantic segmentation on real-world data show that this model performs significantly faster than BNNs, estimating uncertainty comparable to the results of BNNs.
| reject | The main contribution is a Bayesian neural net algorithm which saves computation at test time using a vector quantization approximation. The reviewers are on the fence about the paper. I find the exposition somewhat hard to follow. In terms of evaluation, they demonstrate similar performance to various BNN architectures which require Monte Carlo sampling. But there have been lots of BNN algorithms that don't require sampling (e.g. PBP, Bayesian dark knowledge, MacKay's delta approximation), so it seems important to compare to these. I think there may be promising ideas here, but the paper needs a bit more work before it is to be published at a venue such as ICLR.
| test | [
"SJgiWVZPoH",
"Syx_8h1wiB",
"rkgsfR0Lir",
"H1x8fZ-Dsr",
"SyeZuXesFr",
"HJeDxzgnKS",
"Hkg457uTtS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We really appreciate your comments. We have revised the paper according to the suggestions and would like to clarify several things:\n\n1. How is the posterior distribution of the weights computed?\n\nIn this paper, as mentioned in the introduction and Section 4.2, we only consider predictive inference. We assume ... | [
-1,
-1,
-1,
-1,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"SyeZuXesFr",
"HJeDxzgnKS",
"Hkg457uTtS",
"HJeDxzgnKS",
"iclr_2020_rJx7wlSYvB",
"iclr_2020_rJx7wlSYvB",
"iclr_2020_rJx7wlSYvB"
] |
iclr_2020_rkxmPgrKwB | Weight-space symmetry in neural network loss landscapes revisited | Neural network training depends on the structure of the underlying loss landscape, i.e. local minima, saddle points, flat plateaus, and loss barriers. In relation to the structure of the landscape, we study the permutation symmetry of neurons in each layer of a deep neural network, which gives rise not only to multiple equivalent global minima of the loss function but also to critical points in between partner minima. In a network of d−1 hidden layers with nk neurons in layers k=1,…,d, we construct continuous paths between equivalent global minima that lead through a `permutation point' where the input and output weight vectors of two neurons in the same hidden layer k collide and interchange. We show that such permutation points are critical points which lie inside high-dimensional subspaces of equal loss, contributing to the global flatness of the landscape. We also find that a permutation point for the exchange of neurons i and j transits into a flat high-dimensional plateau that enables all nk! permutations of neurons in a given layer k at the same loss value. Moreover, we introduce higher-order permutation points by exploiting the hierarchical structure in the loss landscapes of neural networks, and find that the number of K-th order permutation points is much larger than the (already huge) number of equivalent global minima -- at least by a polynomial factor of order K. In two tasks, we demonstrate numerically with our path finding method that continuous paths between partner minima exist: first, in a toy network with a single hidden layer on a function approximation task and, second, in a multilayer network on the MNIST task. Our geometric approach yields a lower bound on the number of critical points generated by weight-space symmetries and provides a simple intuitive link between previous theoretical results and numerical observations. | reject | After communicating with each reviewer about the rebuttal, there seems to be a consensus that the paper contains a number of interesting ideas, but the motivation for the paper and the relationship to the literature needs to be expanded. The reviewers have not changed their scores, and so there is not currently enough support to accept this paper. | train | [
"Sylo4Kp9ir",
"Skg_YFaqiS",
"HkewfK69sS",
"ryxT3dTqor",
"SylvFuSmjH",
"BkluC-v3FS",
"BJxkDMC6YS",
"HklMZ14z5H"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback and ideas.\n\nThe symmetries you describe for linear networks are well-known in the field (see e.g. Baldi and Hornik 1989 https://doi.org/10.1016/0893-6080(89)90014-2 ) and generalizations to non-linear activation functions are possible thanks to local linear approximations. That is, it... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"BkluC-v3FS",
"SylvFuSmjH",
"BJxkDMC6YS",
"HklMZ14z5H",
"iclr_2020_rkxmPgrKwB",
"iclr_2020_rkxmPgrKwB",
"iclr_2020_rkxmPgrKwB",
"iclr_2020_rkxmPgrKwB"
] |
iclr_2020_H1lVvgHKDr | Knowledge Transfer via Student-Teacher Collaboration | Accompanying with the flourish development in various fields, deep neural networks, however, are still facing with the plight of high computational costs and storage. One way to compress these heavy models is knowledge transfer (KT), in which a light student network is trained through absorbing the knowledge from a powerful teacher network. In this paper, we propose a novel knowledge transfer method which employs a Student-Teacher Collaboration (STC) network during the knowledge transfer process. This is done by connecting the front part of the student network to the back part of the teacher network as the STC network. The back part of the teacher network takes the intermediate representation from the front part of the student network as input to make the prediction. The difference between the prediction from the collaboration network and the output tensor from the teacher network is taken into account of the loss during the train process. Through back propagation, the teacher network provides guidance to the student network in a gradient signal manner. In this way, our method takes advantage of the knowledge from the entire teacher network, who instructs the student network in learning process. Through plentiful experiments, it is proved that our STC method outperforms other KT methods with conventional strategy. | reject | This paper has been assessed by three reviewers scoring it as follows: 6, 3, 8. The submission however attracted some criticism post-rebuttal from the reviewers e.g., why concatenating teacher to student is better than the use l2 loss or how the choice of transf. layers has been made (ad-hoc). Similarly, other major criticism includes lack of proper referencing to parts of work that have been in fact developed earlier in preceding papers. On balance, this paper falls short of the expectations of ICLR 2020, thus it cannot be accepted at this time. The authors are encouraged to work through major comments and resolve them for a future submission. | train | [
"rkxVAH9LtB",
"HJlEDovqjS",
"rJgj05DqjS",
"B1ezUtD5sr",
"H1xKvcvqor",
"rkezle-NtH",
"HyxdAeIpFS",
"HylYU_cZKB",
"Bkeh1VJbFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"\nThis paper proposed a new method for knowledge distillation, which transfers knowledge from a large teacher network to a small student network in training to help network compression and acceleration. The proposed method concatenate the first a few layers of the student network with the last a few layers of the ... | [
6,
-1,
-1,
-1,
-1,
8,
3,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1
] | [
"iclr_2020_H1lVvgHKDr",
"iclr_2020_H1lVvgHKDr",
"rkezle-NtH",
"HyxdAeIpFS",
"rkxVAH9LtB",
"iclr_2020_H1lVvgHKDr",
"iclr_2020_H1lVvgHKDr",
"Bkeh1VJbFr",
"iclr_2020_H1lVvgHKDr"
] |
iclr_2020_S1ervgHFwS | Adversarial Training Generalizes Data-dependent Spectral Norm Regularization | We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks. Specifically, we present a data-dependent variant of spectral norm regularization and prove that it is equivalent to adversarial training based on a specific ℓ2-norm constrained projected gradient ascent attack. This fundamental connection confirms the long-standing argument that a network's sensitivity to adversarial examples is tied to its spectral properties and hints at novel ways to robustify and defend against adversarial attacks. We provide extensive empirical evidence to support our theoretical results. | reject | This paper shows an theoretical equivalence between the L2 PGD adversarial training and operator norm regularization. It gives an interesting observation and support it from both theoretical arguments and practical experiments. There has been a significant discussion between the reviewers and authors. Although the authors made efforts in rebuttal, it still leaves many places to improve and clarify, especially in improving the mathematical rigor of the proof and experiments using state-of-the-art networks.
| train | [
"r1eOQ7j2iB",
"BylmbysnjS",
"BklZ9jKKsS",
"rJetHsYKoH",
"S1lb22MUjS",
"HJgdP1MIoS",
"H1lL7yzIir",
"H1g6ACW8jS",
"HJxvL6Z8iB",
"r1lbZ6bUir",
"B1eJ6jbIjS",
"B1xIuisysH",
"B1laVbdjYH",
"SyeD7eqHcH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer\n\nIn response to your comment wondering what happens in the practical (PGA) algorithm when \\alpha goes to infinity, we empirically tested that the effect of Adversarial Training remains constant when provided with consecutively larger \\alpha-values. Please refer to the last plot in the Appendix in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"B1laVbdjYH",
"S1lb22MUjS",
"S1lb22MUjS",
"S1lb22MUjS",
"HJgdP1MIoS",
"B1xIuisysH",
"B1xIuisysH",
"B1xIuisysH",
"B1laVbdjYH",
"B1laVbdjYH",
"SyeD7eqHcH",
"iclr_2020_S1ervgHFwS",
"iclr_2020_S1ervgHFwS",
"iclr_2020_S1ervgHFwS"
] |
iclr_2020_SyeHPgHFDr | Finding Deep Local Optima Using Network Pruning | Artificial neural networks (ANNs) are very popular nowadays and offer reliable solutions to many classification problems. However, training deep neural networks (DNN) is time-consuming due to the large number of parameters. Recent research indicates that these DNNs might be over-parameterized and different solutions have been proposed to reduce the complexity both in the number of parameters and in the training time of the neural networks. Furthermore, some researchers argue that after reducing the neural network complexity via connection pruning, the remaining weights are irrelevant and retraining the sub-network would obtain a comparable accuracy with the original one.
This may hold true in most vision problems where we always enjoy a large number of training samples and research indicates that most local optima of the convolutional neural networks may be equivalent. However, in non-vision sparse datasets, especially with many irrelevant features where a standard neural network would overfit, this might not be the case and there might be many non-equivalent local optima. This paper presents empirical evidence for these statements and an empirical study of the learnability of neural networks (NNs) on some challenging non-linear real and simulated data with irrelevant variables.
Our simulation experiments indicate that the cross-entropy loss function on XOR-like data has many local optima, and the number of local optima grows exponentially with the number of irrelevant variables.
We also introduce a connection pruning method to improve the capability of NNs to find a deep local minimum even when there are irrelevant variables.
Furthermore, the performance of the discovered sparse sub-network degrades considerably either by retraining from scratch or the corresponding original initialization, due to the existence of many bad optima around.
Finally, we will show that the performance of neural networks for real-world experiments on sparse datasets can be recovered or even improved by discovering a good sub-network architecture via connection pruning. | reject | This paper provides empirical evidence on synthetic examples with a focus on understanding the relationship between the number of “good” local minima and number of irrelevant features. The reviewers find the problem discussed to be important. One of the reviewers has pointed out that the paper does not present deep insights and is more suitable for workshops. The authors did not provide a rebuttal, and it appears that the reviewers opinion has not changed.
The current score is clearly not sufficient to accept this paper in its current form. Due to this reason, I recommend to reject this paper.
| train | [
"H1exfgYCYB",
"H1lOu8zkcS",
"SkgJL1KqKr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This submission studies losses at local minima of a set of neural networks trained on an XOR-like synthetic dataset, finds that local minima are of varying quality, and proposes a network pruning method to find better local minima. The pruning method is evaluated on XOR-like datasets as well as real-world datasets... | [
3,
6,
3
] | [
3,
5,
3
] | [
"iclr_2020_SyeHPgHFDr",
"iclr_2020_SyeHPgHFDr",
"iclr_2020_SyeHPgHFDr"
] |
iclr_2020_r1gIwgSYwr | Localized Meta-Learning: A PAC-Bayes Analysis for Meta-Leanring Beyond Global Prior | Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption. However, such meta-knowledge is often represented as a fixed distribution, which is too restrictive to capture various specific task information. In this work, we present a localized meta-learning framework based on PAC-Bayes theory. In particular, we propose a LCC-based prior predictor that allows the meta learner adaptively generate local meta-knowledge for specific task. We further develop a pratical algorithm with deep neural network based on the bound. Empirical results on real-world datasets demonstrate the efficacy of the proposed method. | reject | This paper proposes PAC-Bayes bounds for meta-learning. The reviewers who are most knowledgeable about the subject and who read the paper most closely brought up several concerns regarding novelty (especially a description of how the proposed bounds relate to those in prior works (Pentina el al. (2014), Galanti et al. (2016) and Amit and Meir (2018))) and regarding clarity. The reviewers found theoretical analysis and proofs hard to follow. For these reasons, the paper isn't ready for publication at this time. See the reviewer's comments for details. | test | [
"BJelocwZ5H",
"HJxKbpLhiH",
"S1lQWNwhir",
"HJlljbvhoB",
"B1gMMeD3oS",
"SJeZu6U2iS",
"ByxKbnL3ir",
"Bye1xeI2ir",
"Syli4yL3or",
"HygYpKr2sr",
"S1gb7pTV5B",
"rkeqdjJOcS",
"SyeChbnu5S",
"HJljKWOXdB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Post-rebuttal:\n===========\nThank you to the authors for responding to my review, and for adding the comparison to other meta-learning methods besides Amit et al. (2018), which makes it clearer in which settings this technique outperforms purely discriminative approaches (in particular with few tasks & many sampl... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1
] | [
"iclr_2020_r1gIwgSYwr",
"rkeqdjJOcS",
"BJelocwZ5H",
"ByxKbnL3ir",
"SyeChbnu5S",
"HJxKbpLhiH",
"Syli4yL3or",
"HJljKWOXdB",
"HygYpKr2sr",
"S1gb7pTV5B",
"iclr_2020_r1gIwgSYwr",
"iclr_2020_r1gIwgSYwr",
"iclr_2020_r1gIwgSYwr",
"iclr_2020_r1gIwgSYwr"
] |
iclr_2020_rJePwgSYwB | SGD Learns One-Layer Networks in WGANs | Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax problem to global optimality, but in practice, are successfully trained with stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution in polynomial time and sample complexity. | reject | This article studies convergence of WGAN training using SGD and generators of the form $\phi(Ax)$, with results on convergence with polynomial time and sample complexity under the assumption that the target distribution can be expressed by this type of generator. This expands previous work that considered linear generators. An important point of discussion was the choice of the discriminator as a linear or quadratic function. The authors' responses clarified some of the initial criticism, and the scores improved slightly. Following the discussion, the reviewers agreed that the problem being studied is a difficult one and that the paper makes some important contributions. However, they still found that the considered settings are very restrictive, maintaining that quadratic discriminators would work only for the very simple type of generators and targets under consideration. Although the article makes important advances towards understanding convergence of WGAN training with nonlinear models, the relevance of the contribution could be greatly enhanced by addressing / discussing the plausibility or implications of the analysis in a practical setting, in the best case scenario addressing a more practical type of neural networks. | val | [
"B1xjCMHAtr",
"HJgk6-WCFr",
"r1gt9c0YjS",
"HJgRps0Ysr",
"SyxWQ30Ysr",
"Byl5lg8TKB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors provide a long text to justify their contributions and I have read it thoroughly. Unfortunately, I find the responses don't really address my concerns.\n\nMy major concern is that I cannot understand how quadratic discriminator can be treated as WGAN. The authors replied that the regularization conside... | [
3,
6,
-1,
-1,
-1,
3
] | [
3,
4,
-1,
-1,
-1,
3
] | [
"iclr_2020_rJePwgSYwB",
"iclr_2020_rJePwgSYwB",
"B1xjCMHAtr",
"HJgk6-WCFr",
"Byl5lg8TKB",
"iclr_2020_rJePwgSYwB"
] |
iclr_2020_ByxODxHYwB | Multi-source Multi-view Transfer Learning in Neural Topic Modeling with Pretrained Topic and Word Embeddings | Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address data sparsity problem in short text or small collection of documents. However, no prior work has employed (pretrained latent) topics in transfer learning paradigm. In this paper, we propose a framework to perform transfer learning in neural topic modeling using (1) pretrained (latent) topics obtained from a large source corpus, and (2) pretrained word and topic embeddings jointly (i.e., multiview) in order to improve topic quality, better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). Then, we identify one or multiple relevant source domain(s) and take advantage of corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. We have demonstrated the state-of-the-art results on topic modeling with the proposed transfer learning approaches. | reject | This paper presents a transfer learning framework in neural topic modeling. Authors claim and reviewers agree that this view of transfer learning in the realm of topic modeling is novel.
However, after much deliberation and discussion among the reviewers, we conclude that this paper does not contribute sufficient novelty in terms of the method. Also, reviewers find the experiments and results not sufficiently convincing.
I sincerely thank the authors for submitting to ICLR and hope to see a revised paper in a future venue. | train | [
"S1xyosr2iH",
"HyenXIMZoH",
"HylGDaWujB",
"SkgKHY_aKS",
"BJlcKuzbjr",
"HJeZYDGWir",
"BJl1wBf-iH",
"HyeGN5C85B",
"SygTrGoa9r"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for increasing your rating and leaning towards accept!\n\nThanks for acknowledging contribution of our proposed transfer learning approaches in topic modeling.",
"Thanks for your reviews and positive comments, e.g., \"well written\". \n\nThe extensive experimental results (Table 5, 6 and 7) have shown sig... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
3,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
3
] | [
"SkgKHY_aKS",
"HyeGN5C85B",
"iclr_2020_ByxODxHYwB",
"iclr_2020_ByxODxHYwB",
"SkgKHY_aKS",
"HyeGN5C85B",
"SygTrGoa9r",
"iclr_2020_ByxODxHYwB",
"iclr_2020_ByxODxHYwB"
] |
iclr_2020_B1lKDlHtwS | Measuring causal influence with back-to-back regression: the linear case | Identifying causes from observations can be particularly challenging when i) potential factors are difficult to manipulate individually and ii) observations are complex and multi-dimensional. To address this issue, we introduce “Back-to-Back” regression (B2B), a method designed to efficiently measure, from a set of co-varying factors, the causal influences that most plausibly account for multidimensional observations. After proving the consistency of B2B and its links to other linear approaches, we show that our method outperforms least-squares regression and cross-decomposition techniques (e.g. canonical correlation analysis and partial least squares) on causal identification. Finally, we apply B2B to neuroimaging recordings of 102 subjects reading word sequences. The results show that the early and late brain representations, caused by low- and high-level word features respectively, are more reliably detected with B2B than with other standard techniques.
| reject | The authors introduce a method for disentangling effects of correlated predictors in the context of high dimensional outcomes. While the paper contains interesting ideas and has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR due to its limitations in terms of limited applicability and experiments. The paper will benefit from a revision and resubmission to another venue. | train | [
"Bkgo2vCzcr",
"HyxooSwIYB",
"H1eor6rosr",
"S1x3LhHijH",
"HkxGB2rjoB",
"HygiRjHsoH",
"B1xmsjBsoS",
"rJg4DjHijr",
"rJl4BEwAYH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes \"Back-to-Back\" regression for estimating the causal influence between X and Y in the linear model Y=(XE+N)F, where the E denotes a diagonal matrix of causal influences. Furthermore, this work theoretically shows the consistency of B2B and the experiments also verify the effectiveness of this ... | [
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_B1lKDlHtwS",
"iclr_2020_B1lKDlHtwS",
"iclr_2020_B1lKDlHtwS",
"HyxooSwIYB",
"HyxooSwIYB",
"rJl4BEwAYH",
"rJl4BEwAYH",
"Bkgo2vCzcr",
"iclr_2020_B1lKDlHtwS"
] |
iclr_2020_HkgFDgSYPH | Adaptive Online Planning for Continual Lifelong Learning | We study learning control in an online lifelong learning scenario, where mistakes can compound catastrophically into the future and the underlying dynamics of the environment may change. Traditional model-free policy learning methods have achieved successes in difficult tasks due to their broad flexibility, and capably condense broad experiences into compact networks, but struggle in this setting, as they can activate failure modes early in their lifetimes which are difficult to recover from and face performance degradation as dynamics change. On the other hand, model-based planning methods learn and adapt quickly, but require prohibitive levels of computational resources. Under constrained computation limits, the agent must allocate its resources wisely, which requires the agent to understand both its own performance and the current state of the environment: knowing that its mastery over control in the current dynamics is poor, the agent should dedicate more time to planning. We present a new algorithm, Adaptive Online Planning (AOP), that achieves strong performance in this setting by combining model-based planning with model-free learning. By measuring the performance of the planner and the uncertainty of the model-free components, AOP is able to call upon more extensive planning only when necessary, leading to reduced computation times. We show that AOP gracefully deals with novel situations, adapting behaviors and policies effectively in the face of unpredictable changes in the world -- challenges that a continual learning agent naturally faces over an extended lifetime -- even when traditional reinforcement learning methods fail. | reject | A new setting for lifelong learning is analyzed and a new method, AOP, is introduced, which combines a model-free with a model-based approach to deal with this setting.
While the idea is interesting, the main claims are insufficiently demonstrated. A theoretical justification is missing, and the experiments alone are not rigorous enough to draw strong conclusions. The three environments are rather simplistic and there are concerns about the statistical significance, for at least some of the experiments. | train | [
"rkgwdyz3oS",
"H1lYy0nUiH",
"BygGoCnLsB",
"BkeaSa2Iir",
"rJl9W3nIsB",
"Hygm-g2oYS",
"rkgTvCJ2tB",
"BJlIWgx6Fr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Our replies to specific concerns were left in the comments below. We show a summary of our total changes since the paper was first submitted (> indicates change after 11/11 (after last summary), - indicates change before 11/11 (in last summary)):\n\n> There was an issue with the Ant environment causing learning to... | [
-1,
-1,
-1,
-1,
-1,
1,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
1,
1
] | [
"iclr_2020_HkgFDgSYPH",
"Hygm-g2oYS",
"iclr_2020_HkgFDgSYPH",
"rkgTvCJ2tB",
"BJlIWgx6Fr",
"iclr_2020_HkgFDgSYPH",
"iclr_2020_HkgFDgSYPH",
"iclr_2020_HkgFDgSYPH"
] |
iclr_2020_B1lqDertwr | Regularization Matters in Policy Optimization | Deep Reinforcement Learning (Deep RL) has been receiving increasingly more attention thanks to its encouraging performance on a variety of control tasks. Yet, conventional regularization techniques in training neural networks (e.g., L2 regularization, dropout) have been largely ignored in RL methods, possibly because agents are typically trained and evaluated in the same environment. In this work, we present the first comprehensive study of regularization techniques with multiple policy optimization algorithms on continuous control tasks. Interestingly, we find conventional regularization techniques on the policy networks can often bring large improvement on the task performance, and the improvement is typically more significant when the task is more difficult. We also compare with the widely used entropy regularization and find L2 regularization is generally better. Our findings are further confirmed to be robust against the choice of training hyperparameters. We also study the effects of regularizing different components and find that only regularizing the policy network is typically enough. We hope our study provides guidance for future practices in regularizing policy optimization algorithms. | reject | This paper proposes an analysis of regularization for policy optimization. While the multiple effects of regularization are well known in the statistics and optimization community, it is less the case in the RL community. This makes the novelty of the paper difficult to judge as it depends on the familiarity of RL researchers with the two aforementioned communities.
Besides the novelty aspect, which is debatable, reviewers had doubts on the significance of the results, and in particular on the metrics chosen (based on the rank). While defining a "best" algorithm is notoriously difficult, and could be considered outside of the scope of this paper, the fact is that the conclusions reached are still sensitive to that difficulty.
I thus regret to reject this paper as I feel not much more work is necessary to provide a compelling story. I encourage the authors to extend their choice of metrics to be more convincing in their conclusions. | train | [
"rkgs0schoS",
"HJlwM292sB",
"rkx1EyK5oH",
"rkg5D1F5or",
"SJeyVKXjor",
"HJx1P0MooS",
"rken3DMjjB",
"SJeBbpOqjB",
"SkgDRAd5jr",
"SJeZG0_9jr",
"S1lSCid9jS",
"SJes-2O5oH",
"rJgXCiRoFr",
"SJlYWS2hYB",
"rkeScHLTKr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"p-values:\n\nRegularization Versus Baseline:\n------------------------------------------------------------------------------------\n A2C TRPO PPO SAC TOTAL\n------------------------------------------------------------------------------------\nL2 ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"HJx1P0MooS",
"HJx1P0MooS",
"rJgXCiRoFr",
"rJgXCiRoFr",
"rken3DMjjB",
"SJeBbpOqjB",
"SJeZG0_9jr",
"SJlYWS2hYB",
"rJgXCiRoFr",
"SJlYWS2hYB",
"rkeScHLTKr",
"rkeScHLTKr",
"iclr_2020_B1lqDertwr",
"iclr_2020_B1lqDertwr",
"iclr_2020_B1lqDertwr"
] |
iclr_2020_HkliveStvH | Connectivity-constrained interactive annotations for panoptic segmentation | Large-scale ground truth data sets are of crucial importance for deep learning
based segmentation models, but annotating per-pixel
masks is prohibitively time consuming. In this paper, we investigate interactive graph-based segmentation algorithms that enforce connectivity. To be more precise, we introduce an instance-aware heuristic of a discrete Potts model, and a class-aware Integer Linear Programming (ILP) formulation that ensures global optimum. Both algorithms can take RGB, or utilize the feature maps from any DCNN, whether trained on the target dataset or not, as input. We present competitive semantic (and panoptic) segmentation results on the PASCAL VOC 2012 and Cityscapes dataset given initial scribbles. We also demonstrate that our interactive approach can reach 90.6% mIoU on VOC validation set with an overhead of just 3 correction scribbles. They are thus suitable for interactive annotation on new or existing datasets, or can be used inside any weakly supervised learning framework on new datasets. | reject | The paper proposes two methods for interactive panoptic segmentation (a combination of semantic and instance segmentation) that leverages scribbles as supervision during inference. Reviewers had concerns about the novelty of the paper as it applies existing algorithms for this task and limited empirical comparison with other methods. Reviewers also suggested that ICLR may not be a good fit for the paper and I encourage the authors to consider submitting to a vision oriented conference. | val | [
"BygLwR_cYH",
"BJlbD83nYr",
"Byl9phUFjr",
"ByxhFh8tjH",
"Skg_DhUKoB",
"HkgGijUFjH",
"HkgsejLYir",
"SkxvMZ5i9B",
"r1e_52FhcH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n- key problem: efficiently leveraging scribbles as interactive supervision (at test time) for panoptic segmentation;\n- contributions: 1) two algorithms leveraging scribbles via a superpixel connectivity constraint (one class-agnostic local diffusion heuristic, one class-aware with a MRF formulation), 2)... | [
1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_HkliveStvH",
"iclr_2020_HkliveStvH",
"BygLwR_cYH",
"BJlbD83nYr",
"SkxvMZ5i9B",
"r1e_52FhcH",
"iclr_2020_HkliveStvH",
"iclr_2020_HkliveStvH",
"iclr_2020_HkliveStvH"
] |
iclr_2020_SJxjPxSYDH | Discriminative Variational Autoencoder for Continual Learning with Generative Replay | Generative replay (GR) is a method to alleviate catastrophic forgetting in continual learning (CL) by generating previous task data and learning them together with the data from new tasks. In this paper, we propose discriminative variational autoencoder (DiVA) to address the GR-based CL problem. DiVA has class-wise discriminative latent embeddings by maximizing the mutual information between classes and latent variables of VAE. Thus, DiVA is directly applicable to classification and class-conditional generation which are efficient and effective properties in the GR-based CL scenario. Furthermore, we use a novel trick based on domain translation to cover natural images which is challenging to GR-based methods. As a result, DiVA achieved the competitive or higher accuracy compared to state-of-the-art algorithms in Permuted MNIST, Split MNIST, and Split CIFAR10 settings. | reject | The paper presents a method for continual learning with a variant of VAE. The proposed approach is reasonable but technical contribution is quite incremental. The experimental results are limited to comparisons among methods with generative replay, and experimental results on more complex datasets (e.g., CIFAR 100, CUB, ImageNet) are missing. Overall, the contribution of the work in the current form seems insufficient for acceptance at ICLR. | train | [
"rylcVXLnir",
"HklW0uH3jB",
"rJlpm6hSir",
"rygR36nHjr",
"SJlTY3nrsH",
"ryl1Y2nHoH",
"H1xBk23rjB",
"H1l4PVq4KH",
"S1eIP8MpYB",
"SkgI3OVTYH"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback. Since, the rebuttal deadline is almost end, we are not sure to report the additional comparison that you raise. Nevertheless, we are now going to start the CL experiment with the [1] based on CIFAR 10 dataset. However, we hope to say that the DGR in our paper is also based on the WGAN... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
1
] | [
"HklW0uH3jB",
"ryl1Y2nHoH",
"S1eIP8MpYB",
"H1l4PVq4KH",
"ryl1Y2nHoH",
"SkgI3OVTYH",
"iclr_2020_SJxjPxSYDH",
"iclr_2020_SJxjPxSYDH",
"iclr_2020_SJxjPxSYDH",
"iclr_2020_SJxjPxSYDH"
] |
iclr_2020_H1livgrFvr | Out-of-Distribution Image Detection Using the Normalized Compression Distance | On detection of the out-of-distribution images, whose underlying distribution is different from that of the training dataset, we tackle to apply out-of-distribution detection methods to already deployed convolutional neural networks. Most recent approaches have to utilize out-of-distribution samples for validation or retrain the model, which makes it less practical for real-world applications. We propose a novel out-of-distribution detection method MALCOM, which neither uses any out-of-distribution samples nor retrain the model. Inspired by the method using the global average pooling on the feature maps of the convolutional neural networks, the goal of our method is to extract informative sequential patterns from the feature maps. To this end, we introduce a similarity metric which focuses on the shared patterns between two sequences. In short, MALCOM uses both the global average and spatial pattern of the feature maps to accurately identify out-of-distribution samples. | reject | This paper proposes an out-of-distribution detection (OOD) method without assuming OOD in validation.
As reviewers mentioned, I think the idea is interesting and the proposed method has potential. However, I think the paper can be much improved and is not ready to publish due to the followings given reviewers' comments:
(a) The prior work also has some experiments without OOD in validation, i.e., use adversarial examples (AE) instead in validation. Hence, the main motivation of this paper becomes weak unless the authors justify enough why AE is dangerous to use in validation.
(b) The performance of their replication of the prior method is far lower than reported. I understand that sometimes it is not easy to reproduce the prior results. In this case, one can put the numbers in the original paper. Or, one can provide detailed analysis why the prior method should fail in some cases.
(c) The authors follow exactly same experimental settings in the prior works. But, the reported score of the prior method is already very high in the settings, and the gain can be marginal. Namely, the considered settings are more or less "easy problems". Hence, additional harder interesting OOD settings, e.g., motivated by autonomous driving, would strength the paper.
Hence, I recommend rejection. | train | [
"H1lDwHtCtH",
"BJxkDunM5S",
"HkgLRLYhoB",
"ByxS1FYhsH",
"rJg_udYnoS",
"BJgc_IKhjH",
"SJxip7thoH",
"S1epit-TFS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"After reading the other reviews and comments, I appreciate the effort by the Authors, but it looks like the paper still needs some work before being ready. So, I have decided to maintain my rating.\n\n===================\n\nThe work proposes a system for detecting out-of-distribution images for neural networks und... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_H1livgrFvr",
"iclr_2020_H1livgrFvr",
"BJgc_IKhjH",
"S1epit-TFS",
"H1lDwHtCtH",
"BJxkDunM5S",
"iclr_2020_H1livgrFvr",
"iclr_2020_H1livgrFvr"
] |
iclr_2020_Bye6weHFvB | Plan2Vec: Unsupervised Representation Learning by Latent Plans | Creating a useful representation of the world takes more than just rote memorization of individual data samples. This is because fundamentally, we use our internal representation to plan, to solve problems, and to navigate the world. For a representation to be amenable to planning, it is critical for it to embody some notion of optimality. A representation learning objective that explicitly considers some form of planning should generate representations which are more computationally valuable than those that memorize samples. In this paper, we introduce \textbf{Plan2Vec}, an unsupervised representation learning objective inspired by value-based reinforcement learning methods. By abstracting away low-level control with a learned local metric, we show that it is possible to learn plannable representations that inform long-range structures, entirely passively from high-dimensional sequential datasets without supervision. A latent space is learned by playing an ``Imagined Planning Game" on the graph formed by the data points, using a local metric function trained contrastively from context. We show that the global metric on this learned embedding can be used to plan with O(1) complexity by linear interpolation. This exponential speed-up is critical for planning with a learned representation on any problem containing non-trivial global topology. We demonstrate the effectiveness of Plan2Vec on simulated toy tasks from both proprioceptive and image states, as well as two real-world image datasets, showing that Plan2Vec can effectively plan using learned representations. Additional results and videos can be found at \url{https://sites.google.com/view/plan2vec}. | reject | The paper proposes a representation learning objective that makes it
amenable to planning.
The initial submission contained clear holes, such as missing related work and only containing very simplistic baselines. The authors have substantially updated the paper based on this feedback, resulting in a clear improvement.
Nevertheless, while the new version is a good step in the right direction, there is some additional work needed to fully address the reviewers' complaints. For example, the improved baselines are only evaluated in the most simple domain, while the more complex domains still only contain simplistic baselines that are destined to fail. There are also some unaddressed questions regarding the correctness of Eq. 4. Finally, the substantial rewrites have given the paper a less-than-polished feel.
In short, while the work is interesting, it still needs a few iterations before it's ready for publication. | train | [
"HyeuCKuCtB",
"Hylqshc2jr",
"HJlrunc2sS",
"B1gvgac3iH",
"BJgEsgciiS",
"BJxqc253iH",
"H1g_F7Z0KB",
"S1x9U4-I5r",
"B1eMLIT1KS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"## Paper Summary\n\nWhile cast slightly differently in the intro, it seems to me that this paper learns a goal-conditioned value function that is used at test time to construct visual plans by selecting an appropriate sequence of data points from the training data. Similar to prior work they learn a local distance... | [
3,
-1,
-1,
-1,
-1,
-1,
1,
1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1
] | [
"iclr_2020_Bye6weHFvB",
"S1x9U4-I5r",
"iclr_2020_Bye6weHFvB",
"HyeuCKuCtB",
"B1eMLIT1KS",
"H1g_F7Z0KB",
"iclr_2020_Bye6weHFvB",
"iclr_2020_Bye6weHFvB",
"iclr_2020_Bye6weHFvB"
] |
iclr_2020_HyeAPeBFwS | Quantifying uncertainty with GAN-based priors | Bayesian inference is used extensively to quantify the uncertainty in an inferred field given the measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically. In this work we demonstrate how the approximate distribution learned by a generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges. We demonstrate the efficacy of this approach by inferring and quantifying uncertainty in inference problems arising in computer vision and physics-based applications. In both instances we highlight the role of computing uncertainty in providing a measure of confidence in the solution, and in designing successive measurements to improve this confidence. | reject | This paper suggests a Bayesian approach to make inference about latent variables for image inference tasks. While the idea in the paper seems elegant and simple, reviewers pointed out a few concerns, including lack of comparisons, missing references, and requested for more extensive validations. While a few comments might have been misunderstandings (eg lack of quantification - seems to be resolved by author’s comments), other comments are not (eg equation (8) needs further justification even if the final results don’t use it). We encourage authors to carefully review comments and edit the manuscript (perhaps some appendix items should be in the main to reduce confusion) for resubmitting to future conferences. | train | [
"rklmUH52sH",
"BkexChE3jS",
"rJxFqsV3jr",
"r1xXycN3jB",
"HJgn1YNnsr",
"SyxQvoKmjS",
"HJenV_FQjr",
"r1lA7pumoH",
"Hker4juXsH",
"BygwEyIPKB",
"rJghfrNnKS",
"SJlld01kcS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Based on all the reviewer's response we have recognized that we were remiss in not clarifying our main contributions in the manuscript. We have done so in the revised version and repeat them below. \n\nThe main contribution of this paper can be summarized as follows:\n•\tA novel method for performing Bayesian infe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2020_HyeAPeBFwS",
"rJxFqsV3jr",
"BygwEyIPKB",
"rJghfrNnKS",
"SJlld01kcS",
"BygwEyIPKB",
"BygwEyIPKB",
"rJghfrNnKS",
"SJlld01kcS",
"iclr_2020_HyeAPeBFwS",
"iclr_2020_HyeAPeBFwS",
"iclr_2020_HyeAPeBFwS"
] |
iclr_2020_HJeRveHKDH | ADAPTIVE GENERATION OF PROGRAMMING PUZZLES | AI today is far from being able to write complex programs. What type of problems would be best for computers to learn to program, and how should such problems be generated? To answer the first question, we suggest programming puzzles as a domain for teaching computers programming. A programming puzzle consists of a short program for a Boolean function f(x) and the goal is, given the source code, to find an input that makes f return True. Puzzles are objective in that one can easily test the correctness of a given solution x by seeing whether it satisfies f, unlike the most common representations for program synthesis: given input-output pairs or an English problem description, the correctness of a given solution is not determined and is debatable. To address the second question of automatic puzzle generation, we suggest a GAN-like generation algorithm called “Troublemaker” which can generate puzzles targeted at any given puzzle-solver. The main innovation is that it adapts to one or more given puzzle-solvers: rather than generating a single dataset of puzzles, Tro | reject | The authors introducing programming puzzles as a way to help AI systems learn about reasoning. The authors then propose a GAN-like generation algorithm to generate diverse and difficult puzzles.
This is a very novel problem and the authors have made an interesting submission. However, at least 2 reviewers have raised severe concerns about the work. In particular, the relation to existing work as pointed by R2 was not very clear. Further, the paper was also lacking a strong empirical evaluation of the proposed ideas. The authors did agree with most of the comments of the reviewers and made changes wherever possible. However, some changes have been pushed to future work or are not feasible right now.
Based on the above observations, I recommended that the paper cannot be accepted now. The paper has a lot of potential and I would strongly encourage a revised submission addressing the questions/suggestions made by the reviewers. | train | [
"BygJYh9njH",
"SygfM2lOjB",
"HkeVtsgOor",
"S1xh1sl_jr",
"S1xQslvMiH",
"SJgZe8sAYH",
"HJgzFWw7qS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback --- it has helped us improve the paper immensely! In particular, we have uploaded a paper with several changes including:\n- We have expanded on the related work section situating our contributions; and moved it earlier in the paper.\n- We have added a game-theoretic analysis of the Pro... | [
-1,
-1,
-1,
-1,
3,
8,
3
] | [
-1,
-1,
-1,
-1,
1,
1,
1
] | [
"iclr_2020_HJeRveHKDH",
"SJgZe8sAYH",
"S1xQslvMiH",
"HJgzFWw7qS",
"iclr_2020_HJeRveHKDH",
"iclr_2020_HJeRveHKDH",
"iclr_2020_HJeRveHKDH"
] |
iclr_2020_BkeyOxrYwH | Imagine That! Leveraging Emergent Affordances for Tool Synthesis in Reaching Tasks | In this paper we investigate an artificial agent's ability to perform task-focused tool synthesis via imagination. Our motivation is to explore the richness of information captured by the latent space of an object-centric generative model - and how to exploit it. In particular, our approach employs activation maximisation of a task-based performance predictor to optimise the latent variable of a structured latent-space model in order to generate tool geometries appropriate for the task at hand. We evaluate our model using a novel dataset of synthetic reaching tasks inspired by the cognitive sciences and behavioural ecology. In doing so we examine the model's ability to imagine tools for increasingly complex scenario types, beyond those seen during training. Our experiments demonstrate that the synthesis process modifies emergent, task-relevant object affordances in a targeted and deliberate way: the agents often specifically modify aspects of the tools which relate to meaningful (yet implicitly learned) concepts such as a tool's length, width and configuration. Our results therefore suggest, that task relevant object affordances are implicitly encoded as directions in a structured latent space shaped by experience. | reject | This paper investigates the task of learning to synthesize tools for specific tasks (in this case, a simulated reaching task). The paper was reviewed by 3 experts and received Reject, Weak Reject, and Weak Reject opinions. The reviews are very encouraging of the topic and general approach taken by the paper -- e.g. R3 commenting on the "coolness" of the problem and R1 calling it an "important problem from a cognitive perspective" -- but also identify a number of concerns about baselines, novelty of proposed techniques, underwhelming performance on the task, whether experiments support the conclusions, and some missing or unclear technical details. Overall, the feeling of the reviewers is that they're "not sure what I am supposed to get out of the paper" (R3). The authors posted responses that addressed some of these issues, in particular clarifying their terminology and contribution, and clearing up some of the technical details. However, in post-rebuttal discussions, the reviewers still have concerns with the claims of the papers. In light of these reviews, we are not able to recommend acceptance at this time, but I agree with reviewers that this is a "cool" task and that authors should revise and submit to another venue. | train | [
"SyeF38jhjH",
"SygCHuJssB",
"Syx08TSOiB",
"Ske2zlrdjH",
"BklfCyrdiS",
"SJgplR4usr",
"HklqFh4diH",
"SJlp49V_sr",
"SJxPoldPtB",
"rJe7toL2Fr",
"rygLWwJ6Yr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your detailed explanations to the concerns.\nI think this is certainly an interesting topic. However, I still think the results demonstrated in the current work is not strong enough to convince people. So, I would stick with the current score.",
"Thanks for your detailed responses to my comments. A... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"SJgplR4usr",
"Syx08TSOiB",
"rJe7toL2Fr",
"rJe7toL2Fr",
"rJe7toL2Fr",
"SJxPoldPtB",
"rygLWwJ6Yr",
"iclr_2020_BkeyOxrYwH",
"iclr_2020_BkeyOxrYwH",
"iclr_2020_BkeyOxrYwH",
"iclr_2020_BkeyOxrYwH"
] |
iclr_2020_BylldxBYwH | Physics-Aware Flow Data Completion Using Neural Inpainting | In this paper we propose a physics-aware neural network for inpainting fluid flow data. We consider that flow field data inherently follows the solution of the Navier-Stokes equations and hence our network is designed to capture physical laws. We use a DenseBlock U-Net architecture combined with a stream function formulation to inpaint missing velocity data. Our loss functions represent the relevant physical quantities: velocity, velocity Jacobian, vorticity and divergence. Obstacles are treated as known priors, and each layer of the network receives the relevant information through concatenation with the previous layer's output. Our results demonstrate the network's capability for physics-aware completion tasks, and the presented ablation studies show the effectiveness of each proposed component. | reject | The authors present a physics-aware model for inpainting fluid data. In particular, the authors extend the vanilla U-net architecture and add losses that explicitly bias the network towards physically meaningful solutions.
While the reviewers found the work to be interesting, they raised a few questions/objections which are summarised below:
1) Novelty: The reviewers largely found the idea to be novel. I agree that this is indeed novel and a step in the right direction.
2) Experiments: The main objection was to the experimental methodology. In particular, since most of the experiments were on simulated data the reviewers expected simulations where the test conditions were a bit more different than the training conditions. It is not very clear whether the training and test conditions were different and it would have been useful if the authors had clarified this in the rebuttal. The reviewers have also suggested a more thorough ablation study.
3) Organisation: The authors could have used the space more effectively by providing additional details and ablation studies.
Unfortunately, the authors did not engage with the reviewers and respond to their queries. I understand that this could have been because of the poor ratings which would have made the authors believe that a discussion wouldn't help. The reviewers have asked very relevant Qs and made some interesting suggestions about the experimental setup. I strongly recommend the authors to consider these during subsequent submissions.
Based on the reviewer comments and lack of response from the authors, I recommend that the paper cannot be accepted. | test | [
"B1ehNl2iYS",
"r1gggNzRYH",
"HygM1raHcS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper the authors adopt prior work in image inpainting to the problem of 2d fluid velocity field inpainting by extending the network architecture and using additional loss functions. Specifically, the U-net network is extended with a DenseBlock in the middle, and a separate branch of the network is added w... | [
1,
3,
3
] | [
3,
3,
1
] | [
"iclr_2020_BylldxBYwH",
"iclr_2020_BylldxBYwH",
"iclr_2020_BylldxBYwH"
] |
iclr_2020_HkxedlrFwB | Accelerating First-Order Optimization Algorithms | Several stochastic optimization algorithms are currently available. In most cases, selecting the best optimizer for a given problem is not an easy task. Therefore, instead of looking for yet another 'absolute' best optimizer, accelerating existing ones according to the context might prove more effective. This paper presents a simple and intuitive technique to accelerate first-order optimization algorithms. When applied to first-order optimization algorithms, it converges much more quickly and achieves lower function/loss values when compared to traditional algorithms. The proposed solution modifies the update rule, based on the variation of the direction of the gradient during training. Several tests were conducted with SGD, AdaGrad, Adam and AMSGrad on three public datasets. Results clearly show that the proposed technique has the potential to improve the performance of existing optimization algorithms. | reject | All reviewers recommend rejection, and the authors have not provided a response.
| train | [
"Hkgw-7intr",
"HklyD34pKS",
"H1eOoyPTFr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a technique to speed up optimizers that rely on gradient\ninformation to find the optimum value of a function. The authors describe and\njustify their method and show its promise in an empirical evaluation.\n\nThe proposed method sounds interesting and promising, but the empirical\nevaluation i... | [
3,
3,
1
] | [
3,
3,
4
] | [
"iclr_2020_HkxedlrFwB",
"iclr_2020_HkxedlrFwB",
"iclr_2020_HkxedlrFwB"
] |
iclr_2020_BJxbOlSKPr | Learning Compact Embedding Layers via Differentiable Product Quantization | Embedding layers are commonly used to map discrete symbols into continuous embedding vectors that reflect their semantic meanings. Despite their effectiveness, the number of parameters in an embedding layer increases linearly with the number of symbols and poses a critical challenge on memory and storage constraints. In this work, we propose a generic and end-to-end learnable compression framework termed differentiable product quantization (DPQ). We present two instantiations of DPQ that leverage different approximation techniques to enable differentiability in end-to-end learning. Our method can readily serve as a drop-in alternative for any existing embedding layer. Empirically, DPQ offers significant compression ratios (14-238x) at negligible or no performance cost on 10 datasets across three different language tasks. | reject | The presented paper gives a differentiable product quantization framework to compress embeddings and supports the claim with experiments (the supporting materials are as large as the paper itself). Reviewers agreed that the idea is simple and interesting, and a nice and positive discussion followed. However, the main limiting factor is the small novelty over Chen 2018b, and I agree with that. Also, the comparison with low rank is rather formal: of course it would be of full rank, as the authors claim in the answer, but looking at singular values is needed to make this claim. Also, one can use low-rank tensor factorization to compress embeddings, and this can be compared.
To summarize, I think the contribution is not enough to be accepted. | train | [
"BJg9ZaL2iB",
"ByxFark3sB",
"r1eBRjEooB",
"BkeJQoNooH",
"S1erwsNjoH",
"B1eqa9EsoS",
"Hyet4X5p_S",
"r1lYKM7SKB",
"Hyl2WZpf5H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer,\n\nThanks for the prompt response!\n\nWe think it is reasonable that our method can achieve better task performance than full embedding in certain tasks/datasets, because DPQ can implicitly regularize the model with more efficient parameterization. We have observed this phenomenon consistently on so... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"ByxFark3sB",
"S1erwsNjoH",
"iclr_2020_BJxbOlSKPr",
"Hyl2WZpf5H",
"Hyet4X5p_S",
"r1lYKM7SKB",
"iclr_2020_BJxbOlSKPr",
"iclr_2020_BJxbOlSKPr",
"iclr_2020_BJxbOlSKPr"
] |
iclr_2020_r1eWdlBFwS | Isolating Latent Structure with Cross-population Variational Autoencoders | A significant body of recent work has examined variational autoencoders as a powerful approach for tasks which involve modeling the distribution of complex data such as images and text. In this work, we present a framework for modeling multiple data sets which come from differing distributions but which share some common latent structure. By incorporating architectural constraints and using a mutual information regularized form of the variational objective, our method successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors. This enables our model to learn useful shared structure across similar tasks and to disentangle cross-population representations in a weakly supervised way. We demonstrate the utility of our method on several applications including image denoising, sub-group discovery, and continual learning. | reject | The paper proposes a hierarchical Bayesian model over multiple data sets that
has both data set specific as well as shared parameters.
The data set specific parameters are further encouraged to only capture aspects
that vary across data sets by an additional mutual information contribution to the
training loss.
The proposed method is compared to standard VAEs on multiple data sets.
The reviewers agree that the main approach of the paper is sensible. However,
concerns were raised about general novelty, about the theoretical justification
for the proposed loss function and about the lack of non-trivial baselines.
The authors' rebuttal did not manage to fully address these points.
Based on the reviews and my own reading, I think this paper is slightly
below the acceptance threshold. | train | [
"r1lMN7j9FS",
"HJg6uOEatH",
"HJxnlUtN9B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studied the problem of learning the latent representation from a complex data set which followed the independent but not identically distributions. The main contributions of this paper are to explicitly learn the commonly shared and private latent factors for different data populations in a unified VAE ... | [
6,
3,
3
] | [
3,
4,
4
] | [
"iclr_2020_r1eWdlBFwS",
"iclr_2020_r1eWdlBFwS",
"iclr_2020_r1eWdlBFwS"
] |
iclr_2020_S1eZOeBKDS | Deep Spike Decoder (DSD) | Spike-sorting is of central importance for neuroscience research. We introduce a novel spike-sorting method comprising a deep autoencoder trained end-to-end with a biophysical generative model, biophysically motivated priors, and a self-supervised loss function. The encoder infers the action potential event times for each source, while the decoder parameters represent each source's spatiotemporal response waveform. We evaluate this approach in the context of real and synthetic multi-channel surface electromyography (sEMG) data, a noisy superposition of motor unit action potentials (MUAPs). Relative to an established spike-sorting method, this autoencoder-based approach shows superior recovery of source waveforms and event times. Moreover, the biophysical nature of the loss functions facilitates interpretability and hyperparameter tuning. Overall, these results demonstrate the efficacy and motivate further development of self-supervised spike-sorting techniques. | reject | The paper presents a model for learning spiking representations. The basic model is a deep autoencoder trained end-to-end with a biophysical generative model, and results are presented on EMG and sEMG data, with the aim to motivate further research in self-supervised learning.
The reviewers raised several points about the paper. Reviewer 1 raised concerns about the lack of context on surrounding work, the clarity of the model itself, and the motivation of the loss. Reviewer 2 pointed out strengths of the paper in its simplicity and the importance of this problem, but also raised concerns about the paper's clarity, again the motivation of the loss function, and the sensibility of design choices. The authors responded to the feedback from reviewer 1, but overall the reviewer did not think their scores should be changed.
The paper in its current form is not yet ready for acceptance, and we hope the reviewing process has provided useful feedback for their future research. | test | [
"r1gvTRU7jB",
"BJeR3p3xir",
"rJxSU_FbqB"
] | [
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for the review. In response to your points:\n\n- We avoided extensive literature review and explanation on KiloSort mostly due to page limitations. In the future revised version of the paper, we’ll try to provide more background information and also include a summary of KiloSort in the appendix.\n\n- ... | [
-1,
1,
1
] | [
-1,
1,
5
] | [
"BJeR3p3xir",
"iclr_2020_S1eZOeBKDS",
"iclr_2020_S1eZOeBKDS"
] |
iclr_2020_H1xzdlStvB | Multi-Precision Policy Enforced Training (MuPPET) : A precision-switching strategy for quantised fixed-point training of CNNs | Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity one approach of reducing training time is the use of low-precision data representation and computations during the training stage. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach employing two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtaining performance gains. This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The training strategy, named MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition between different precisions. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-of-the-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, the proposed method achieves the same accuracy as the standard full-precision training with an average training-time speedup of 1.28× across the networks. | reject | The submission presents an approach to speed up network training time by using lower precision representations and computation to begin with and then dynamically increasing the precision from 8 to 32 bits over the course of training. The results show that the same accuracy can be obtained while achieving a moderate speed up.
The reviewers agreed that the paper did not offer a significant advantage or novelty, and that the method was somewhat ad hoc and unclear. Unfortunately, the authors' rebuttal did not clarify all of these points, and the recommendation after discussion is for rejection. | val | [
"rkgs7O7hsS",
"Hkee75Q3jr",
"rkgKa_mhsB",
"r1gJesMatS",
"BJeQTHBTtr",
"r1xFvatycr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. Further details on the motivation behind the switching mechanism has been added to Section 3.3 in the revised version of the paper, particularly at the paragraph beginning “The likelihood of observing r gradients…”.\nAdditionally we would like to state that when the observed p-value is h... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"r1xFvatycr",
"r1gJesMatS",
"BJeQTHBTtr",
"iclr_2020_H1xzdlStvB",
"iclr_2020_H1xzdlStvB",
"iclr_2020_H1xzdlStvB"
] |
iclr_2020_SJeQdeBtwB | Adversarially learned anomaly detection for time series data | Anomaly detection in time series data is an important topic in many domains. However, time series are known to be particularly hard to analyze. Based on the recent developments in adversarially learned models, we propose a new approach for anomaly detection in time series data. We build upon the idea of using a combination of a reconstruction error and the output of a Critic network. To this end we propose a cycle-consistent GAN architecture for sequential data and a new way of measuring the reconstruction error. We then show in a detailed evaluation how the different parts of our model contribute to the final anomaly score and demonstrate how the method improves the results on several data sets. We also compare our model to other baseline anomaly detection methods to verify its performance. | reject | The paper proposes a cycle-consistent GAN architecture that measures the reconstruction error of time series for anomaly detection.
The paper aims to address an important problem, but the current version is not ready for publication. We suggest the authors consider the following aspects for improving the paper:
1. The novelty of the proposed model: motivate the design choices and compare them with state-of-the-art methods
2. Evaluation: formalize the target anomalies and identify datasets/examples where the proposed model can significantly outperform existing solutions.
| train | [
"ryevebv_Yr",
"BJe1BfZoor",
"SJxVCg-siB",
"ryeOhk-osr",
"r1xrGN9sFr",
"Sked-RZaFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper trains a GAN on univariate time series data and uses reconstruction errors in combination with the critic's output to predict anomalous subsequences. The method is applied on two real-world data sets and compared to three simple baselines.\n\nI have several reservations about this manuscript:\n- The meth... | [
1,
-1,
-1,
-1,
3,
1
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_SJeQdeBtwB",
"ryevebv_Yr",
"r1xrGN9sFr",
"Sked-RZaFr",
"iclr_2020_SJeQdeBtwB",
"iclr_2020_SJeQdeBtwB"
] |
iclr_2020_B1xmOgrFPS | Meta-RCNN: Meta Learning for Few-Shot Object Detection | Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named ``Meta-RCNN", which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. The novel loss objectives and learning strategy of Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on Pascal VOC dataset and achieve promising results. | reject | This paper develops a meta-learning approach for few-shot object detection. This paper is borderline and the reviewers are split. The problem is important, albeit somewhat specific to computer vision applications. The main concerns were that it was lacking a head-to-head comparison to RepMet and that it was missing important details (e.g. the image resolution was not clarified, nor was the paper updated to include the details). The authors suggested that the RepMet code was not available, but I was able to find the official code for RepMet via a simple Google search:
https://github.com/jshtok/RepMet
Reviewers also brought up concerns about an ICCV 2019 paper, though this should be considered as concurrent work, as it was not publicly available at the time of submission.
Overall, I think the paper is borderline. Given that many meta-learning papers compare on rather synthetic benchmarks, the study of a more realistic problem setting is refreshing. That said, it's unclear if the insights from this paper would transfer to other machine learning problem settings of interest to the ICLR community.
With all of this in mind, the paper is slightly below the bar for acceptance at ICLR. | train | [
"rJe_tNnijS",
"Bkeg5r3sir",
"rygUTX2isH",
"BygTZ2FdYH",
"HJeEHAb6tS",
"ryg6pPtQ5r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments! We agree with your concerns, and would like to offer clarifications for a clearer understanding. \n\nTo do a novel few-shot detection task, a prior needs to be acquired from some base data (e.g. meta train data in our case). To acquire this prior, we can follow two approaches: 1) Train a t... | [
-1,
-1,
-1,
6,
3,
8
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"BygTZ2FdYH",
"HJeEHAb6tS",
"ryg6pPtQ5r",
"iclr_2020_B1xmOgrFPS",
"iclr_2020_B1xmOgrFPS",
"iclr_2020_B1xmOgrFPS"
] |
iclr_2020_SJx4Ogrtvr | Random Bias Initialization Improving Binary Neural Network Training | Edge intelligence especially binary neural network (BNN) has attracted considerable attention of the artificial intelligence community recently. BNNs significantly reduce the computational cost, model size, and memory footprint. However, there is still a performance gap between the successful full-precision neural network with ReLU activation and BNNs. We argue that the accuracy drop of BNNs is due to their geometry.
We analyze the behaviour of the full-precision neural network with ReLU activation and compare it with its binarized counterpart. This comparison suggests random bias initialization as a remedy to activation saturation in full-precision networks and leads us towards an improved BNN training. Our numerical experiments confirm our geometric intuition. | reject | The article studies the behaviour of binary and full precision ReLU networks towards explaining differences in performance and suggests a random bias initialisation strategy. The reviewers agree that, while closing the gap between binary networks and full precision networks is an interesting problem, the article cannot be accepted in its current form. They point out that more extensive theoretical analysis and experiments would be important, as well as improving the writing. The authors did not provide a rebuttal nor a revision. | train | [
"H1lAIDb-Kr",
"Byg1CKm9KH",
"BJeE7ZOaKr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method for bias initialization and shows that it improves training for BNN.\n\nI vote to reject the paper. Main points against are: (1) is no theory and very limited experiments (2) Bad writing.\n\nDetailed remarks:\n - The level of english is not good enough all over the paper, example: \"It ... | [
1,
1,
3
] | [
4,
3,
1
] | [
"iclr_2020_SJx4Ogrtvr",
"iclr_2020_SJx4Ogrtvr",
"iclr_2020_SJx4Ogrtvr"
] |
iclr_2020_r1gBOxSFwr | Reweighted Proximal Pruning for Large-Scale Language Representation | Recently, pre-trained language representation flourishes as the mainstay of the natural language understanding community, e.g., BERT. These pre-trained language representations can create state-of-the-art results on a wide range of downstream tasks. Along with continuous significant performance improvement, the size and complexity of these pre-trained neural models continue to increase rapidly. Is it possible to compress these large-scale language representation models? How will the pruned language representation affect the downstream multi-task transfer learning objectives? In this paper, we propose Reweighted Proximal Pruning (RPP), a new pruning method specifically designed for a large-scale language representation model. Through experiments on SQuAD and the GLUE benchmark suite, we show that proximal pruned BERT keeps high accuracy for both the pre-training task and the downstream multiple fine-tuning tasks at high prune ratio. RPP provides a new perspective to help us analyze what large-scale language representation might learn. Additionally, RPP makes it possible to deploy a large state-of-the-art language representation model such as BERT on a series of distinct devices (e.g., online servers, mobile phones, and edge devices). | reject | This paper proposes a novel pruning method for use with transformer text encoding models like BERT, and show that it can dramatically reduce the number of non-zero weights in a trained model while only slightly harming performance.
This is one of the hardest cases in my pile. The topic is obviously timely and worthwhile. None of the reviewers was able to give a high-confidence assessment, but the reviews were all ultimately leaning positive. However, the reviewers didn't reach a clear consensus on the main strengths of the paper, even after some private discussion, and they raised many concerns. These concerns, taken together, make me doubt that the current paper represents a substantial, sound contribution to the model compression literature in NLP.
I'm voting to reject, on the basis of:
- Recurring concerns about missing strong baselines, which make it less clear that the new method is an ideal choice.
- Relatively weak motivations for the proposed method (pruning a pre-trained model before fine-tuning) in the proposed application domain (mobile devices).
- Recurring concerns about thin analysis. | train | [
"HJlbxZg2iH",
"H1lUDA-hor",
"rkeoILZ2iS",
"r1e2ID-noB",
"H1xKq1WhoS",
"S1xFzHl2ir",
"SkxYlayniS",
"SJlU2stRYH",
"HJlB_8x9cS",
"rkgN_clcqr"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers, \n\nThanks so much for the valuable comments. We make the responses to the questions on the issue of previous pruning method as below:\n\nIn the revision, we have updated the figure in Appendix D for better visualization, and have provided more details about the issue of previous methods. We summar... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"iclr_2020_r1gBOxSFwr",
"SJlU2stRYH",
"HJlB_8x9cS",
"HJlB_8x9cS",
"rkgN_clcqr",
"iclr_2020_r1gBOxSFwr",
"iclr_2020_r1gBOxSFwr",
"iclr_2020_r1gBOxSFwr",
"iclr_2020_r1gBOxSFwr",
"iclr_2020_r1gBOxSFwr"
] |
iclr_2020_BJg8_xHtPr | OBJECT-ORIENTED REPRESENTATION OF 3D SCENES | In this paper, we propose a generative model, called ROOTS (Representation of Object-Oriented Three-dimension Scenes), for unsupervised object-wise 3D-scene decomposition and rendering. For 3D scene modeling, ROOTS builds on the Generative Query Networks (GQN) framework but, unlike GQN, provides object-oriented representation decomposition. The inferred object-representation of ROOTS is 3D in the sense that it is viewpoint invariant, as the full scene representation of GQN is. ROOTS also provides hierarchical object-oriented representation: at the 3D global-scene level and at the 2D local-image level. We achieve this without performance degradation. In experiments on datasets of 3D rooms with multiple objects, we demonstrate the above properties by focusing on its abilities for disentanglement, compositionality, and generalization in comparison to GQN. | reject | The authors propose an object-oriented probabilistic generative model of 3D scenes. The model is based on the GQN with the key innovation being that there is a separate 3D representation per object (vs a single one for the entire scene). A scene-volume map is used to prevent two objects from occupying the same space. The authors show that using this model, it's possible to learn the scene representation in an unsupervised manner (without the 3D ground truth).
The submission has received relatively low scores with one weak accept and 3 weak rejects. All reviewers found the initial submission to be unclear and poorly written (with 1 reject and 3 weak rejects initially). The initial submission also failed to acknowledge prior work on object-based representations in the 3D vision community. Based on the reviewer feedback, the authors greatly improved the paper by reworking the notation and the description of the model, and included a discussion of related work from 3D vision. Overall, the exposition of the paper was substantially improved. Some of the reviewers recognized the improvement and lifted their scores.
However, the work still has some issues:
1. The experimental section is still weak
The reviewers (especially those from a computer vision background) questioned the lack of baseline comparisons and ablation studies, which the authors (in their rebuttal) felt to be unnecessary. It is this AC's opinion that comparisons against alternatives and ablations are critical for scientific rigor, and that high-quality work aims not just to propose new models, but also to demonstrate via experimental analysis how the model compares to previous models, and what parts of the model are necessary, coming up with new metrics, baselines, and evaluation when needed.
It is the AC's opinion that the authors should attempt to compare against other methods/baselines when appropriate. For instance, perhaps it would make sense to compare the proposed model against IODINE and MONet. Upon closer examination of the experimental results, the AC also finds the description of the object detection quality to be not very precise. Is the evaluation in 2D or 3D? The filtering of predictions that are too far away from any ground truth also seems unscientific.
2. The objects and arrangements considered in this paper are very simplistic.
3. The writing is still poor and needs improvement.
The paper needs an editing pass as the paper was substantially rewritten. There are still grammar/typos, and unresolved references to Table ?? (page 8,9).
After considering the author responses and the reviewer feedback, the AC believes this work shows great promise but still needs improvement. The authors have tackled a challenging and exciting problem, and have provided a very interesting model. The work can be strengthened by improving the experiments, analysis, and the writing. The AC recommends that the authors further iterate on the paper and resubmit. As the revised paper was significantly different from the initial submission, an additional review cycle will also help ensure that the revised paper is properly and fully evaluated. The current reviewers are to be commended for taking the time and effort to look over the revision. | train | [
"ByxbeCd0tB",
"ByxlkaX3jB",
"BkeDTCPhjH",
"B1ev9InFoB",
"r1lwoLTjoB",
"HyeF6pssjr",
"H1xdNJCEiS",
"HJgunr0Nir",
"SyxszVCVir",
"BJeh8bRNor",
"SyxsjzRVoS",
"Byl0r3K0Yr",
"HklkZ4gVqB"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a model building off of the generative query network model that takes in as input multiple images, builds a model of the 3D scene, and renders it. This can be trained end to end. The insight of the method is that one can factor the underlying representation into different objects. The system is ... | [
3,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_BJg8_xHtPr",
"iclr_2020_BJg8_xHtPr",
"B1ev9InFoB",
"iclr_2020_BJg8_xHtPr",
"B1ev9InFoB",
"H1xdNJCEiS",
"iclr_2020_BJg8_xHtPr",
"ByxbeCd0tB",
"ByxbeCd0tB",
"HklkZ4gVqB",
"Byl0r3K0Yr",
"iclr_2020_BJg8_xHtPr",
"iclr_2020_BJg8_xHtPr"
] |
iclr_2020_BJlPOlBKDB | Closed loop deep Bayesian inversion: Uncertainty driven acquisition for fast MRI | This work proposes a closed-loop, uncertainty-driven adaptive sampling framework (CLUDAS) for accelerating magnetic resonance imaging (MRI) via deep Bayesian inversion. By closed-loop, we mean that our samples adapt in real-time to the incoming data. To our knowledge, we demonstrate the first generative adversarial network (GAN) based framework for posterior estimation over a continuum of sampling rates of an inverse problem. We use this estimator to drive the sampling for accelerated MRI. Our numerical evidence demonstrates that the variance estimate strongly correlates with the expected MSE improvement for different acceleration rates even with few posterior samples. Moreover, the resulting masks bring improvements to the state-of-the-art fixed and active mask designing approaches across MSE, posterior variance and SSIM on real undersampled MRI scans. | reject | The author responses and notes to the AC are acknowledged. A fourth review was requested because this seemed like a tricky paper to review, given both the technical contribution and the application area. Overall, the reviewers were all in agreement in terms of score that the paper was just below borderline for acceptance. They found that the methodology seemed sensible and the application potentially impactful. However, a common thread was that the paper was hard to follow for non-experts on MRI and the reviewers weren't entirely convinced by the experiments (asking for additional experiments and comparison to Zhang et al.). The authors' comment on the challenge of implementing Zhang is acknowledged, and it's unfortunate that cluster issues prevented additional experimental results. While ICLR certainly accepts application papers and particularly ones with interesting technical contribution in machine learning, given that the reviewers struggled to follow the paper through the application-specific language, it does seem like this isn't the right venue for the paper as written. Thus the recommendation is to reject. Perhaps a more application-specific venue would be a better fit for this work. Otherwise, making the paper more accessible to the ML audience and providing experiments to justify the methodology beyond the application would make the paper much stronger. | val | [
"SJxsl2aa9r",
"ryxU31W_sr",
"SygQN1buir",
"HkxxWCluoH",
"Hyx1naedjH",
"HklOragdjr",
"SJxUphe_oS",
"rJx9Am1OYS",
"HkxcSGnntB",
"BylJ7Uoh9r"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a method for accelerating MRI scans by proposing lines in k-space to acquire next. The proposals are based on posterior uncertainty estimates obtained from GAN-based reconstructions from parts of the k-space acquired thus far. The authors address an interesting and important problem of speeding... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2020_BJlPOlBKDB",
"SJxsl2aa9r",
"BylJ7Uoh9r",
"rJx9Am1OYS",
"HkxcSGnntB",
"SJxUphe_oS",
"iclr_2020_BJlPOlBKDB",
"iclr_2020_BJlPOlBKDB",
"iclr_2020_BJlPOlBKDB",
"iclr_2020_BJlPOlBKDB"
] |
iclr_2020_HJxDugSFDB | Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model | Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these kinds of observation spaces present a number of challenges in practice, since the policy must now solve two problems: a representation learning problem, and a task learning problem. In this paper, we aim to explicitly learn representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC learns a compact latent representation space using a stochastic sequential latent variable model, and then learns a critic model within this latent space. By learning a critic within a compact state space, SLAC can learn much more efficiently than standard RL methods. The proposed model improves performance substantially over alternative representations as well, such as variational autoencoders. In fact, our experimental evaluation demonstrates that the sample efficiency of our resulting method is comparable to that of model-based RL methods that directly use a similar type of model for control. Furthermore, our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website. | reject | An actor-critic method is introduced that explicitly aims to learn a good representation using a stochastic latent variable model. There is disagreement among the reviewers regarding the significance of this paper. Two of the three reviewers argue that several strong claims made in the paper that are not properly backed up by evidence. In particular, it is not sufficiently clear to what degree the shown performance improvement is due to the stochastic nature of the model used, one of the key points of the paper. I recommend that the authors provide more empirical evidence to back up their claims and then resubmit. | test | [
"HJe0GqrssH",
"rylNAPAqsH",
"SJg3cwC9sS",
"SJe5StvycH",
"BJgddhA3YH",
"B1l4nYmjqH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the comments and feedback. We have revised the paper to address the points below.\n\n- '\"contrary to the conclusions in prior work (Hafner et al., 2019; Buesing et al., 2018), the fully stochastic model performs on par or better.\" Why?'\nWe revised the results from Figure 6 and the text... | [
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"SJe5StvycH",
"BJgddhA3YH",
"B1l4nYmjqH",
"iclr_2020_HJxDugSFDB",
"iclr_2020_HJxDugSFDB",
"iclr_2020_HJxDugSFDB"
] |
iclr_2020_SJeuueSYDH | Distributed Training Across the World | Traditional synchronous distributed training is performed inside a cluster, since it requires high bandwidth and low latency network (e.g. 25Gb Ethernet or InfiniBand). However, in many application scenarios, training data are often distributed across many geographic locations, where physical distance is long and latency is high. Traditional synchronous distributed training cannot scale well under such limited network conditions. In this work, we aim to scale distributed learning under high-latency networks. To achieve this, we propose delayed and temporally sparse (DTS) update that enables synchronous training to tolerate extreme network conditions without compromising accuracy. We benchmark our algorithms on servers deployed across three continents in the world: London (Europe), Tokyo (Asia), Oregon (North America) and Ohio (North America). Under such challenging settings, DTS achieves 90× speedup over traditional methods without loss of accuracy on ImageNet. | reject | The paper introduces a distributed algorithm for training deep nets in clusters with high-latency (i.e. very remote) nodes. While the motivation and clarity are the strengths of the paper, the reviewers have some concerns regarding novelty and insufficient theoretical analysis. | train | [
"BkgiDncnjr",
"HJgi2xt3sB",
"BJxcVFy2oS",
"ryxOXZKcsH",
"BJlVOGFcjS",
"BJxCgGFcjH",
"rJlp29pRtS",
"HyeMuX_J9H",
"HJeMe67H5S",
"r1eT_7l55r",
"SJxf5f3pDr"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We sincerely thank all reviewers for their comments. All reviewers agree that the paper is clearly written and has certain contributions to against latency. R2 & R3 mainly concern about the convergence guarantees, which we have justified through proof in the appendix (guarantee to converge and no slower than SGD).... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
-1,
-1
] | [
"iclr_2020_SJeuueSYDH",
"BJxcVFy2oS",
"BJxCgGFcjH",
"HJeMe67H5S",
"rJlp29pRtS",
"HyeMuX_J9H",
"iclr_2020_SJeuueSYDH",
"iclr_2020_SJeuueSYDH",
"iclr_2020_SJeuueSYDH",
"SJxf5f3pDr",
"iclr_2020_SJeuueSYDH"
] |
iclr_2020_Byeq_xHtwS | Neural Video Encoding | Deep neural networks have had unprecedented success in computer vision, natural language processing, and speech largely due to the ability to search for suitable task algorithms via differentiable programming. In this paper, we borrow ideas from Kolmogorov complexity theory and normalizing flows to explore the possibilities of finding arbitrary algorithms that represent data. In particular, algorithms which encode sequences of video image frames. Ultimately, we demonstrate neural video encoded using convolutional neural networks to transform autoregressive noise processes and show that this method has surprising cryptographic analogs for information security. | reject | The paper has several clarity and novelty issues. | test | [
"SJgISm7mYS",
"ryg6ZC3jYB",
"SJl7REkMcr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors show that CNNs are somewhat able to compress entire videos within their parameters that can be reconstructed by an autoregressive process. This is an interesting idea that has been explored in a different context before (e.g., Deep Image Prior, Ulyanov et al. 2017). There is also plenty of work in the ... | [
1,
1,
3
] | [
4,
5,
1
] | [
"iclr_2020_Byeq_xHtwS",
"iclr_2020_Byeq_xHtwS",
"iclr_2020_Byeq_xHtwS"
] |
iclr_2020_HygcdeBFvr | Score and Lyrics-Free Singing Voice Generation | Generative models for singing voice have been mostly concerned with the task of "singing voice synthesis," i.e., to produce singing voice waveforms given musical scores and text lyrics. In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time. In particular, we experiment with three different schemes: 1) free singer, where the model generates singing voices without taking any conditions; 2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and 3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices. We outline the associated challenges and propose a pipeline to tackle these new tasks. This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation. | reject | Main content:
Blind review #1 summarizes it well:
This paper claims to be the first to tackle unconditional singing voice generation. It is noted that previous singing voice generation approaches leverage explicit pitch information (either of an accompaniment via a score or for the voice itself), and/or specified lyrics the voice should sing. The authors first create their own dataset of singing voice data with accompaniments, then use a GAN to generate singing voice waveforms in three different settings:
1) Free singer - only noise as input, completely unconditional singing sampling
2) Accompanied singer - Providing the accompaniment *waveform* (not symbolic data like a score - the model needs to learn how to transcribe to use this information) as a condition for the singing voice
3) Solo singer - The same setting as 1 but the model first generates an accompaniment then, from that, generates singing voice
--
Discussion:
The reviews generally point out that while a lot of new work has been done, this paper bites off too much at once: it tackles many different open problems, in a generative art domain where evaluation is subjective.
--
Recommendation and justification:
This paper is a weak reject, not because it is uninteresting or bad work, but because the ambitious scope is really too large for a single conference paper. In a more specialized conference like ISMIR, it would still have a good chance. The authors should break it down into conference-sized chunks, and address more of the reviewer comments in each chunk. | train | [
"Hkgfl5cior",
"r1loQVQojB",
"r1xSbxL5sH",
"rkekOh5Kir",
"BJltegoFsS",
"r1xO5msFir",
"rJeJyJhYjr",
"rkgg0l5eiB",
"B1x9SOtgjH",
"r1xCpXLgjr",
"HklxPZIMcS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your comments. As you stated in the two review comments, the evaluation of this task is difficult because it is generative art and it is a new task. Furthermore, we would like to emphasize that it is difficult also because the evaluation of singing as a type of music is fairly subjective. \... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"r1loQVQojB",
"r1xO5msFir",
"iclr_2020_HygcdeBFvr",
"HklxPZIMcS",
"r1xCpXLgjr",
"B1x9SOtgjH",
"rkgg0l5eiB",
"iclr_2020_HygcdeBFvr",
"iclr_2020_HygcdeBFvr",
"iclr_2020_HygcdeBFvr",
"iclr_2020_HygcdeBFvr"
] |
iclr_2020_H1ls_eSKPH | Overcoming Catastrophic Forgetting via Hessian-free Curvature Estimates | Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires calculating the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work. | reject | The reviewers have provided thorough reviews of your work. I encourage you to read them carefully should you decide to resubmit it to a later conference. | val | [
"BJeuYr72jH",
"rkelpoAaKS",
"BJlV6dostB",
"BklI585CtH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewers for their feedback. Unfortunately, we are not able to address all comments to the extent and depth we would like to within the rebuttal period, but we will use the feedback as a guideline for improving on our paper and results and resubmit in the future.\n",
"The paper focuse... | [
-1,
3,
1,
3
] | [
-1,
1,
4,
3
] | [
"iclr_2020_H1ls_eSKPH",
"iclr_2020_H1ls_eSKPH",
"iclr_2020_H1ls_eSKPH",
"iclr_2020_H1ls_eSKPH"
] |
iclr_2020_BJes_xStwS | GRASPEL: GRAPH SPECTRAL LEARNING AT SCALE | Learning meaningful graphs from data plays important roles in many data mining and machine learning tasks, such as data representation and analysis, dimension reduction, data clustering, and visualization, etc. In this work, we present a scalable spectral approach to graph learning from data. By limiting the precision matrix to be a graph Laplacian, our approach aims to estimate ultra-sparse weighted graphs and has a clear connection with the prior graphical Lasso method. By interleaving nearly-linear time spectral graph sparsification, coarsening and embedding procedures, ultra-sparse yet spectrally-stable graphs can be iteratively constructed in a highly-scalable manner. Compared with prior graph learning approaches that do not scale to large problems, our approach is highly-scalable for constructing graphs that can immediately lead to substantially improved computing efficiency and solution quality for a variety of data mining and machine learning applications, such as spectral clustering (SC), and t-Distributed Stochastic Neighbor Embedding (t-SNE). | reject | This paper proposes a scalable approach for graph learning from data. The reviewers think the approach appears heuristic and it is not clear the algorithm is optimizing the proposed sparse graph recovery objective. | train | [
"HJx9csHQ5H",
"Bkx5ok9sjr",
"ryxCxkcjiB",
"Skg4aRtssH",
"r1l8rAKooH",
"S1e5fAq6tS",
"rkeVRxoCFB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors present a method that transforms data into graph. They emphasize on the fact that the proposed method is scalable, using a spectral embedding to construct the graph.\n\nWe think that the paper is not of enough quality to be accepted in ICLR. Without going in detail in the derivations, we... | [
1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_BJes_xStwS",
"S1e5fAq6tS",
"rkeVRxoCFB",
"HJx9csHQ5H",
"iclr_2020_BJes_xStwS",
"iclr_2020_BJes_xStwS",
"iclr_2020_BJes_xStwS"
] |
iclr_2020_rke3OxSKwr | Improved Training Techniques for Online Neural Machine Translation | Neural sequence-to-sequence models are at the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start the decoding process before the entire input is available, e.g. to reduce the latency in automatic speech recognition. We consider state-of-the-art wait-k decoders, that first read k tokens from the source and then alternate between reading tokens from the input and writing to the output. We investigate the sensitivity of such models to the value of k that is used during training and when deploying the model, and the effect of updating the hidden states in transformer models as new source tokens are read. We experiment with German-English translation on the IWSLT14 dataset and the larger WMT15 dataset. Our results significantly improve over earlier state-of-the-art results for German-English translation on the WMT15 dataset across different latency levels. | reject | The paper proposes a method of training latency-limited (wait-k) decoders for online machine translation. The authors investigate the impact of the value of k, and of recalculating the transformer's decoder hidden states when a new source token arrives. They significantly improve over state-of-the-art results for German-English translation on the WMT15 dataset, however there is limited novelty wrt previous approaches. The authors responded in depth to reviews and updated the paper with improvements, for which there was no reviewer response. The paper presents interesting results but IMO the approach is not novel enough to justify acceptance at ICLR.
| test | [
"BylTPwWioS",
"rJxSt6UWsr",
"r1xU1p8bsB",
"HJgHncMCtB",
"B1xFB-O0FB",
"SyxQenqMcH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Based on the suggestions of the reviewers, we made a few updates to the paper:\nMade the comparison to the original wait-k paper [1] clearer, highlighting the differences in the encoder side.\nAdded training time details of our models as compared to the baselines and our implementation of STACL [1].\nAdded decodin... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2020_rke3OxSKwr",
"HJgHncMCtB",
"B1xFB-O0FB",
"iclr_2020_rke3OxSKwr",
"iclr_2020_rke3OxSKwr",
"iclr_2020_rke3OxSKwr"
] |
iclr_2020_SkxaueHFPB | Implicit competitive regularization in GANs | Generative adversarial networks (GANs) are capable of producing high quality samples, but they suffer from numerous issues such as instability and mode collapse during training. To combat this, we propose to model the generator and discriminator as agents acting under local information, uncertainty, and awareness of their opponent. By doing so we achieve stable convergence, even when the underlying game has no Nash equilibria. We call this mechanism \emph{implicit competitive regularization} (ICR) and show that it is present in the recently proposed \emph{competitive gradient descent} (CGD).
When comparing CGD to Adam using a variety of loss functions and regularizers on CIFAR10, CGD shows a much more consistent performance, which we attribute to ICR.
In our experiments, we achieve the highest inception score when using the WGAN loss (without gradient penalty or weight clipping) together with CGD. This can be interpreted as minimizing a form of integral probability metric based on ICR. | reject | The paper proposes to study "implicit competitive regularization", a phenomenon borne of taking a more nuanced game theoretic perspective on GAN training, wherein the two competing networks are "model[ed] ... as agents acting with limited information and in awareness of their opponent". The meaning of this is developed through a series of examples using simpler games and didactic experiments on actual GANs. An adversary-aware variant employing a Taylor approximation to the loss.
Reviewer assessment amounted to 3 relatively light reviews, two of which reported little background in the area, and one more in-depth review, which happened to also be the most critical. R1, R2, R3 all felt the contribution was interesting and valuable. R1 felt the contribution of the paper may be on the light side given the original competitive gradient descent paper, on which this manuscript leans heavily, included GAN training (the authors disagreed); they also felt the paper would be stronger with additional datasets in the empirical evaluation (this was not addressed). R2 felt the work suffered for lack of evidence of consistency via repeated experiments, which the authors explained was due to the resource-intensity of the experiments.
R5 raised that Inception scores for both the method and being noticeably worse than those reported in the literature, a concern that was resolved in an update and seemed to center on the software implementation of the metric. R5 had several technical concerns, but was generally unhappy with the presentation and finishedness of the manuscript, in particular the degree to which details are deferred to the CGD paper. (The authors maintain that CGD is but one instantiation of a more general framework, but given that the empirical section of the paper relies on this instantiation I would concur that it is under-treated.)
Minor updates were made to the paper, but R5 remains unconvinced (other reviewers did not revisit their reviews at all). In particular: experiments seem promising but not final (repeatability is a concern), the single paragraph "intuitive explanation" and cartoon offered in Figure 3 were viewed as insufficiently rigorous. A great deal of the paper is spent on simple cases, but not much is said about ICR specifically in those cases.
This appears to have the makings of an important contribution, but I concur with R5 that it is not quite ready for mass consumption. As is, the narrative is locally consistent but quite difficult to follow section after section. It should also be noted that ICLR as a venue has a community that is not as steeped in the game theory literature as the authors clearly are, and the assumed technical background is quite substantial here. For a game theory novice, it is difficult to tell which turns of phrase refer to concepts from game theory and which may be more informally introduced herein. I believe the paper requires redrafting for greater clarity with a more rigorous theoretical and/or empirical characterization of ICR, perhaps involving small scale experiments which clearly demonstrates the effect. I also believe the authors have done themselves a disservice by not availing themselves of 10 pages rather than 8.
I recommend rejection at this time, but hope that the authors view this feedback as valuable and continue to improve their manuscript, as I (and the reviewers) believe this line of work has the potential to be quite impactful. | train | [
"SJx5UCR_iS",
"ryeDKp0_ir",
"H1gYHaCdjH",
"SkeEkTA_iH",
"rJxILrkIir",
"SkxA5ukeoB",
"H1xrzuygiS",
"B1xuDl80YB",
"B1e_HcwAFr",
"BkeoDK4qqH",
"HyeCPFQo9r"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank all reviewers for their assessment and feedback. The main purpose of this work was to illustrate a novel mechanism that could stabilize GAN training without the need for Lipschitz-regularization (for instance, through gradient penalties) and we were happy to see that this idea was appreciate... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
1,
4
] | [
"iclr_2020_SkxaueHFPB",
"B1xuDl80YB",
"B1e_HcwAFr",
"BkeoDK4qqH",
"HyeCPFQo9r",
"HyeCPFQo9r",
"HyeCPFQo9r",
"iclr_2020_SkxaueHFPB",
"iclr_2020_SkxaueHFPB",
"iclr_2020_SkxaueHFPB",
"iclr_2020_SkxaueHFPB"
] |
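For context on the stability issue this record discusses: plain simultaneous gradient descent-ascent is already non-convergent on the simplest smooth game, the bilinear game f(x, y) = xy, and opponent-aware updates such as CGD are designed to address exactly this failure mode. The snippet below is the standard textbook illustration of that divergence; it is not an implementation of CGD or of implicit competitive regularization.

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y
# (x minimises f, y maximises f). The iterates spiral outwards.
eta, x, y = 0.1, 1.0, 1.0
radii = []
for _ in range(200):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy  # simultaneous update
    radii.append(np.hypot(x, y))

print(radii[0], radii[-1])  # distance from the equilibrium (0, 0) keeps growing
```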
iclr_2020_rkg0_eHtDr | Benefits of Overparameterization in Single-Layer Latent Variable Generative Models | One of the most surprising and exciting discoveries in supervised learning was the benefit of overparameterization (i.e. training a very large model) to improving the optimization landscape of a problem, with minimal effect on statistical performance (i.e. generalization). In contrast, unsupervised settings have been under-explored, despite the fact that it has been observed that overparameterization can be helpful as early as Dasgupta & Schulman (2007). In this paper, we perform an exhaustive study of different aspects of overparameterization in unsupervised learning via synthetic and semi-synthetic experiments. We discuss benefits to different metrics of success (recovering the parameters of the ground-truth model, held-out log-likelihood), sensitivity to variations of the training algorithm, and behavior as the amount of overparameterization increases. We find that, when learning using methods such as variational inference, larger models can significantly increase the number of ground truth latent variables recovered. | reject | This paper studies over-parameterization for unsupervised learning. The paper does a series of empirical studies on this topic. Among other things the authors observe that larger models can increase the number of latent variables recovered when fitting larger variational inference models. The reviewers raised some concern about the simplicity of the models studied and also the lack of some theoretical justification. One reviewer also suggests that more experiments and ablation studies on more general models will further help clarify the role of over-parameterized models for latent generative models. I agree with the reviewers that this paper is "compelling reason for theoretical research on the interplay between overparameterization and parameter recovery in latent variable neural networks trained with gradient descent methods". I disagree with the reviewers that theoretical study is required as I think a good empirical paper with clear conjectures is as important. I do agree with the reviewers however that for an empirical paper I think the empirical studies would have to be a bit more thorough with more clear conjectures. In summary, I think the paper is nice and raises a lot of interesting questions but can be improved with more thorough studies and conjectures. I would have liked to have the paper accepted but based on the reviewer scores and other papers in my batch I cannot recommend acceptance at this time. I strongly recommend the authors to revise and resubmit. I really think this is a nice paper and has a lot of potential and can have impact with appropriate revision. | train | [
"r1g54dkzqH",
"B1gZcbv3sS",
"HygbeZDhsS",
"B1e0qeD2sS",
"HJledmXCtr",
"HJeivC_0YH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper performs empirical study on the influence of overparameterization to generalization performance of noisy-or networks and sparse coding, and points out overparameterization is indeed beneficial. I find the paper has some drawbacks.\n\n1. Overparameterization is better than underparamterization and exact ... | [
3,
-1,
-1,
-1,
3,
6
] | [
1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rkg0_eHtDr",
"r1g54dkzqH",
"HJeivC_0YH",
"HJledmXCtr",
"iclr_2020_rkg0_eHtDr",
"iclr_2020_rkg0_eHtDr"
] |
iclr_2020_HyxWteSFwS | Deep Interaction Processes for Time-Evolving Graphs | Time-evolving graphs are ubiquitous such as online transactions on an e-commerce platform and user interactions on social networks. While neural approaches have been proposed for graph modeling, most of them focus on static graphs. In this paper we present a principled deep neural approach that models continuous time-evolving graphs at multiple time resolutions based on a temporal point process framework. To model the dependency between latent dynamic representations of each node, we define a mixture of temporal cascades in which a node's neural representation depends on not only this node's previous representations but also the previous representations of related nodes that have interacted with this node. We generalize LSTM on this temporal cascade mixture and introduce novel time gates to model time intervals between interactions. Furthermore, we introduce a selection mechanism that gives important nodes large influence in both k−hop subgraphs of nodes in an interaction. To capture temporal dependency at multiple time-resolutions, we stack our neural representations in several layers and fuse them based on attention. Based on the temporal point process framework, our approach can naturally handle growth (and shrinkage) of graph nodes and interactions, making it inductive. Experimental results on interaction prediction and classification tasks -- including a real-world financial application -- illustrate the effectiveness of the time gate, the selection and attention mechanisms of our approach, as well as its
superior performance over the alternative approaches. | reject | All reviewers rated this paper as a weak reject.
The author response was just not enough to sway any of the reviewers to revise their assessment.
The AC recommends rejection. | train | [
"Byl3LKt9sH",
"r1lL75ucsS",
"rkgVbVOqjS",
"Bkex8DIYoS",
"Hye3y1j9jH",
"HkgAT1el5S",
"SyeMN893cr",
"HkxlmTgAqS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"we now answer your comments about DIP experiments.\nIn our updated version, we provide a comparison with your suggested baseline JODIE and provide results analysis in 5.3.2 and 5.4.2. Meanwhile we investigate the effects of different k and L on interaction prediction and interaction classification(Appendix B.3). ... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
1,
3,
4
] | [
"Bkex8DIYoS",
"rkgVbVOqjS",
"SyeMN893cr",
"HkxlmTgAqS",
"HkgAT1el5S",
"iclr_2020_HyxWteSFwS",
"iclr_2020_HyxWteSFwS",
"iclr_2020_HyxWteSFwS"
] |
iclr_2020_rkxMKerYwr | Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors | Deep neural networks (DNNs) have achieved unprecedented practical success in many applications.
However, how to interpret DNNs is still an open problem.
In particular, how hidden layers behave is not clearly understood.
In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by ``monitoring" both across-layer and single-layer distribution evolution to some target distribution during training. Here, ``across-layer" and ``single-layer" refer to the layer behavior \emph{along the depth} and to a specific layer \emph{along training epochs}, respectively.
Relying on optimal transport theory, we employ the Wasserstein distance (W-distance) to measure the divergence between the layer distribution and the target distribution.
Theoretically, we prove that i) the W-distance across layers to the target distribution tends to decrease along the depth. ii) the W-distance of a specific layer to the target distribution tends to decrease along training iterations. iii)
However, a deep layer is not always better than a shallow layer for some samples. Moreover, our results help to analyze the stability of layer distributions and explain why auxiliary losses help the training of DNNs. Extensive experiments on real-world datasets justify our theoretical findings. | reject | This paper studies the transfer of representations learned by deep neural networks across various datasets and tasks when the network is pre-trained on some dataset and subsequently fine-tuned on the target dataset. On the theoretical side the authors analyse two-layer fully connected networks. In an extensive empirical evaluation the authors argue that appropriately pre-trained networks enable better loss landscapes (improved Lipschitzness). Understanding the transferability of representations is an important problem and the reviewers appreciated some aspects of the extensive empirical evaluation and the initial theoretical investigation. However, we feel that the manuscript needs a major revision and that there is not enough empirical evidence to support the stated conclusions. As a result, I will recommend rejecting this paper in the current form. Nevertheless, as the problem is extremely important I encourage the authors to improve the clarity and provide more convincing arguments towards the stated conclusions by addressing the issues raised during the discussion phase. | test | [
"B1lfD3Ahtr",
"BJeibrI5iH",
"ByxCO489sH",
"rkgKoLL9jH",
"r1eMoS85iS",
"HkxW9dLptr",
"BJe8i8JLqB",
"H1eDnHP7_r",
"Sye009qhwr",
"B1lIWeW3wH",
"rkgUEKZ2PB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"public"
] | [
"This paper seeks to understand both across-layer and single-layer behavior within neural networks (i.e. layer behavior along the depth of a network, and behavior of a single layer along training epochs). Therefore, they resort to the optimal transport framework to compare predicted and target distributions. Theore... | [
3,
-1,
-1,
-1,
-1,
3,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rkxMKerYwr",
"BJe8i8JLqB",
"iclr_2020_rkxMKerYwr",
"B1lfD3Ahtr",
"HkxW9dLptr",
"iclr_2020_rkxMKerYwr",
"iclr_2020_rkxMKerYwr",
"B1lIWeW3wH",
"rkgUEKZ2PB",
"iclr_2020_rkxMKerYwr",
"B1lIWeW3wH"
] |
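The monitoring quantity used throughout the record above, the Wasserstein distance between a layer's output distribution and a target distribution, is straightforward to compute in the one-dimensional empirical case. A minimal sketch (scalar projections only; the paper works with richer distributions):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=10_000)     # target distribution
shallow = rng.normal(0.8, 1.5, size=10_000)    # stand-in for an early layer
deep = rng.normal(0.1, 1.1, size=10_000)       # stand-in for a deeper layer

print(wasserstein_distance(shallow, target))   # larger W-distance
print(wasserstein_distance(deep, target))      # smaller, i.e. closer to the target
```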
iclr_2020_HkgXteBYPB | Stochastic Neural Physics Predictor | Recently, neural-network based forward dynamics models have been proposed that attempt to learn the dynamics of physical systems in a deterministic way. While near-term motion can be predicted accurately, long-term predictions suffer from accumulating input and prediction errors which can lead to plausible but different trajectories that diverge from the ground truth. A system that predicts distributions of the future physical states for long time horizons based on its uncertainty is thus a promising solution. In this work, we introduce a novel robust Monte Carlo sampling based graph-convolutional dropout method that allows us to sample multiple plausible trajectories for an initial state given a neural-network based forward dynamics predictor. By introducing a new shape preservation loss and training our dynamics model recurrently, we stabilize long-term predictions. We show that our model’s long-term forward dynamics prediction errors on complicated physical interactions of rigid and deformable objects of various shapes are significantly lower than existing strong baselines. Lastly, we demonstrate how generating multiple trajectories with our Monte Carlo dropout method can be used to train model-free reinforcement learning agents faster and to better solutions on simple manipulation tasks. | reject | The paper presents a timely method for intuitive physics simulations that expands on the HTRN model, and is tested in several physical systems with rigid and deformable objects as well as other results later in the review.
Reviewer 3 was positive about the paper, and suggested improving the exposition to make it more self-contained. Reviewer 1 raised questions about the complexity of tasks and a concerns of limited advancement provided by the paper. Reviewer 2, had a similar concerns about limited clarity as to how the changes contribute to the results, and missing baselines. The authors provided detailed responses in all cases, providing some additional results with various other videos. After discussion and reviewing the additional results, the role of the stochastic elements of the model and its contributions to performance remained and the reviewers chose not to adjust their ratings.
The paper is interesting, timely and addresses important questions, but questions remain. We hope the review has provided useful information for their ongoing research. | train | [
"S1gbnPT3tS",
"S1gKVJ7isS",
"BygCcDMooS",
"BkgAQNfsoS",
"HJlYP0-jjS",
"SJxWh3W0KB",
"B1l8ZyHCtS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overview:\nThis paper introduces a method for physical dynamics prediction, which is a version of hierarchical relation network (Mrowca ‘18). HRNs work on top of hierarchical particle-based representations of objects and the corresponding physics (e.g. forces between different parts), and are essentially are graph... | [
3,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2020_HkgXteBYPB",
"iclr_2020_HkgXteBYPB",
"S1gbnPT3tS",
"SJxWh3W0KB",
"B1l8ZyHCtS",
"iclr_2020_HkgXteBYPB",
"iclr_2020_HkgXteBYPB"
] |
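The Monte Carlo dropout idea in the record above, keeping dropout active at prediction time so that repeated rollouts from the same initial state yield different plausible trajectories, can be sketched with a toy one-step model. The random linear map below is only a stand-in for the learned graph-convolutional predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.3 * rng.normal(size=(4, 4))     # toy one-step dynamics model
p_drop = 0.2

def step(state):
    # A fresh Bernoulli mask is sampled at *test* time (Monte Carlo dropout),
    # so every rollout from the same state follows a different trajectory.
    mask = rng.random(W.shape) > p_drop
    return np.tanh((W * mask) @ state / (1.0 - p_drop))

def rollout(state, horizon=20):
    traj = [state]
    for _ in range(horizon):
        state = step(state)
        traj.append(state)
    return np.stack(traj)

samples = np.stack([rollout(np.ones(4)) for _ in range(32)])  # shape (32, 21, 4)
print(samples.std(axis=0).mean())  # spread across rollouts = predictive uncertainty
```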
iclr_2020_B1gNKxrYPB | Attributed Graph Learning with 2-D Graph Convolution | Graph convolutional neural networks have demonstrated promising performance in attributed graph learning, thanks to the use of graph convolution that effectively combines graph structures and node features for learning node representations. However, one intrinsic limitation of the commonly adopted 1-D graph convolution is that it only exploits graph connectivity for feature smoothing, which may lead to inferior performance on sparse and noisy real-world attributed networks. To address this problem, we propose to explore relational information among node attributes to complement node relations for representation learning. In particular, we propose to use 2-D graph convolution to jointly model the two kinds of relations and develop a computationally efficient dimensionwise separable 2-D graph convolution (DSGC). Theoretically, we show that DSGC can reduce intra-class variance of node features on both the node dimension and the attribute dimension to facilitate learning. Empirically, we demonstrate that by incorporating attribute relations, DSGC achieves significant performance gain over state-of-the-art methods on node classification and clustering on several real-world attributed networks.
 | reject | The paper studies the problem of graph learning with attributes, and proposes a 2-D graph convolution that models the node relation graph and the attribute graph jointly. The paper proposes an efficient algorithm and models intra-class variation. Empirical performance on 20-NG, L-Cora, and Wiki shows the promise of the approach.
The authors responded to the reviews by updating the paper, but the reviewers unfortunately did not further engage during the discussion period. Therefore it is unclear whether their concerns have been adequately addressed.
Overall, there have been many strong submissions on graph neural networks at ICLR this year, and this submission as is currently stands does not quite make the threshold of acceptance. | test | [
"SylRjYchsS",
"B1gWE95njB",
"HkgwEaqhsH",
"Syg5Ic52oS",
"B1l3QabUKS",
"Hkg7Wc8QqS",
"H1lM6AJEqB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank all the reviewers for their valuable time and feedback. \n\nWe have incorporated their suggestion and revised the manuscript accordingly. Major changes include: 1) In section 7, we improve the presentation of experimental results and include comparison with a suggested baseline LDS; 2) We re... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"iclr_2020_B1gNKxrYPB",
"H1lM6AJEqB",
"B1l3QabUKS",
"Hkg7Wc8QqS",
"iclr_2020_B1gNKxrYPB",
"iclr_2020_B1gNKxrYPB",
"iclr_2020_B1gNKxrYPB"
] |
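One way to read the 2-D graph convolution in the record above is as smoothing the node-by-attribute feature matrix along two graphs at once: the node graph and an attribute-affinity graph. The sketch below shows only that basic double smoothing; the dimensionwise separable operator (DSGC) proposed in the paper is more elaborate than this.

```python
import numpy as np

def sym_normalize(A):
    """Symmetrically normalise an adjacency matrix with added self-loops."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def two_d_graph_conv(X, A_nodes, A_attrs):
    # X: (num_nodes, num_attributes). Left-multiplication smooths over the node
    # graph, right-multiplication smooths over the attribute graph.
    return sym_normalize(A_nodes) @ X @ sym_normalize(A_attrs)
```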
iclr_2020_Hye4KeSYDr | Evaluations and Methods for Explanation through Robustness Analysis | Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive way to explain a model. In this paper, we establish the link between a set of features to a prediction with a new evaluation criteria, robustness analysis, which measures the minimum tolerance of adversarial perturbation. By measuring the tolerance level for an adversarial attack, we can extract a set of features that provides most robust support for a current prediction, and also can extract a set of features that contrasts the current prediction to a target class by setting a targeted adversarial attack. By applying this methodology to various prediction tasks across multiple domains, we observed the derived explanations are indeed capturing the significant feature set qualitatively and quantitatively. | reject | The paper proposes an approach for finding an explainable subset of features by choosing features that simultaneously are: most important for the prediction task, and robust against adversarial perturbation. The paper provides quantitative and qualitative evidence that the proposed method works.
The paper had two reviews (both borderline), and while the authors responded enthusiastically, the reviewers did not further engage during the discussion period.
The paper has a promising idea, but the presentation and execution in its current form have been found to be not convincing by the reviewers. Unfortunately, the submission as it stands is not yet suitable for ICLR. | train | [
"B1l10XwCtS",
"SkeCBrK_jB",
"ryejC4tdiS",
"H1xHFDK_ir",
"BkeOnUYdjr",
"Hke7Y4KuiH",
"rkeXJZY_or",
"ryxIu2xOqS",
"H1xoUkao_H",
"BJxBpOUodr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"\n\nThe manuscript proposes a method for model explanation and two metrics for the evaluation of methods for model explanation based on robustness analysis. More specifically, two complementary, yet very related, criteria are proposed: i) robustness to perturbations on irrelevant features and ii) robustness to per... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2020_Hye4KeSYDr",
"ryejC4tdiS",
"Hke7Y4KuiH",
"BkeOnUYdjr",
"ryxIu2xOqS",
"B1l10XwCtS",
"iclr_2020_Hye4KeSYDr",
"iclr_2020_Hye4KeSYDr",
"BJxBpOUodr",
"iclr_2020_Hye4KeSYDr"
] |
iclr_2020_SyxIterYwS | Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents | Mutual Information between agent Actions and environment States (MIAS) quantifies the influence of agent on its environment. Recently, it was found that intrinsic motivation in artificial agents emerges from the maximization of MIAS.
For example, empowerment is an information-theoretic approach to intrinsic motivation, which has been shown to solve a broad range of standard RL benchmark problems. The estimation of empowerment for arbitrary dynamics is a challenging problem because it relies on the estimation of MIAS. Existing approaches rely on sampling, which have formal limitations, requiring exponentially many samples. In this work, we develop a novel approach for the estimation of empowerment in unknown arbitrary dynamics from visual stimulus only, without sampling for the estimation of MIAS. The core idea is to represent the relation between action sequences and future states by a stochastic dynamical system in latent space, which admits an efficient estimation of MIAS by the ``Water-Filling" algorithm from information theory. We construct this embedding with deep neural networks trained on a novel objective function and demonstrate our approach by numerical simulations of non-linear continuous-time dynamical systems. We show that the designed embedding preserves information-theoretic properties of the original dynamics, and enables us to solve the standard AI benchmark problems. | reject | The paper proposes a novel method for embedding sequences of states and actions into a latent representation that enables efficient estimation of empowerment for an RL system. They use empowerment as intrinsic reward for safe exploration. While the reviewers agree that this paper has promise, they also agree that it is not quite ready for publication in its current state. In particular, the paper is lacking a theoretical justification for the proposed approach, the definition of empowerment used by the authors raised questions, and the manuscript would benefit from more clear and detailed description of the method. For these reasons I recommend rejection. | train | [
"BylFEjQ3ir",
"Bkx5rSQhsr",
"ryx_eZX3sS",
"SJlRd9oGYr",
"S1xfo2e9tH",
"HJeX20UYqB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate your feedbacks and they helped in our revision of the paper. We focused on making the terms clearer and more intuitive. Here are some additional clarifications:\n\n#1 Empowerment is defined as the maximal of mutual information between action sequences and resulting final states. In previous studies o... | [
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
1,
4,
1
] | [
"SJlRd9oGYr",
"HJeX20UYqB",
"S1xfo2e9tH",
"iclr_2020_SyxIterYwS",
"iclr_2020_SyxIterYwS",
"iclr_2020_SyxIterYwS"
] |
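The record above relies on the "Water-Filling" algorithm from information theory to compute mutual information in the learned latent dynamical system. For reference, here is the generic textbook version for parallel Gaussian channels (allocate power above a common water level); the exact quantities the paper feeds into it are not reproduced here.

```python
import numpy as np

def water_filling(noise, total_power, tol=1e-9):
    """Allocate power p_i = max(0, nu - noise_i) with sum(p) = total_power,
    and return the allocation plus the resulting mutual information in nats."""
    lo, hi = 0.0, float(np.max(noise) + total_power)
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if np.maximum(0.0, nu - noise).sum() > total_power:
            hi = nu
        else:
            lo = nu
    p = np.maximum(0.0, 0.5 * (lo + hi) - noise)
    return p, 0.5 * float(np.log1p(p / noise).sum())

power, mi = water_filling(np.array([0.1, 0.5, 1.0, 2.0]), total_power=1.0)
print(power, mi)   # most power goes to the least noisy channels
```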
iclr_2020_r1lIKlSYvH | The Usual Suspects? Reassessing Blame for VAE Posterior Collapse | In narrow asymptotic settings Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomena on the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather, to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated. | reject | This manuscript investigates the posterior collapse in variational autoencoders and seeks to provide some explanations from the phenomenon. The primary contribution is to propose some previously understudied explanations for the posterior collapse that results from the optimization landscape of the log-likelihood portion of the ELBO.
The reviewers and AC agree that the problem studied is timely and interesting, and closely related to a variety of recent work investigating the landscape properties of variational autoencoders and other generative models. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the technical difficulty and importance of the results. In reviews and discussion, the reviewers noted issues with clarity of the presentation and sufficient justification of the results. There were also concerns about novelty. In the opinion of the AC, the manuscript in its current state is borderline, and should ideally be improved in terms of clarity of the discussion, and some more investigation of the insights that result from the analysis. | train | [
"ByeTIu0jsH",
"Byg76w0soB",
"Hke8eqHFsr",
"SklOYFSKsr",
"rye_eErYir",
"rJx4TQStoH",
"r1l0LxStjr",
"Hye7WxStjH",
"S1xJXhf6tr",
"rJg6KrUpKr",
"S1gXy3rXcB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have recently become aware that no paper revisions can be uploaded after November 15. Therefore we are now uploading a new version of our paper that addresses many of the reviewer comments. Among other things, we have clarified Section 5 and significantly expanded Section 7. Beyond this, we have also include... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"rye_eErYir",
"r1l0LxStjr",
"SklOYFSKsr",
"rJg6KrUpKr",
"rJx4TQStoH",
"S1xJXhf6tr",
"Hye7WxStjH",
"S1gXy3rXcB",
"iclr_2020_r1lIKlSYvH",
"iclr_2020_r1lIKlSYvH",
"iclr_2020_r1lIKlSYvH"
] |
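As background for the collapse phenomenon analysed in the record above: for a diagonal Gaussian encoder and a standard normal prior, the KL term of the ELBO has the standard closed form below, and posterior collapse corresponds to driving it to zero (all posterior means going to 0 and all variances to 1), so that the latent code carries no information about the input. This is the usual identity, not a result specific to the paper.

```latex
\mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,\mathcal{N}(0, I)\right)
  = \frac{1}{2}\sum_{j=1}^{d}\left(\mu_j^2(x) + \sigma_j^2(x) - 1 - \log \sigma_j^2(x)\right)
```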
iclr_2020_SJxDKerKDS | Reinforcement Learning with Structured Hierarchical Grammar Representations of Actions | From a young age humans learn to use grammatical principles to hierarchically combine words into sentences. Action grammars is the parallel idea; that there is an underlying set of rules (a "grammar") that govern how we hierarchically combine actions to form new, more complex actions. We introduce the Action Grammar Reinforcement Learning (AG-RL) framework which leverages the concept of action grammars to consistently improve the sample efficiency of Reinforcement Learning agents. AG-RL works by using a grammar inference algorithm to infer the “action grammar" of an agent midway through training, leading to a higher-level action representation. The agent's action space is then augmented with macro-actions identified by the grammar. We apply this framework to Double Deep Q-Learning (AG-DDQN) and a discrete action version of Soft Actor-Critic (AG-SAC) and find that it improves performance in 8 out of 8 tested Atari games (median +31%, max +668%) and 19 out of 20 tested Atari games (median +96%, maximum +3,756%) respectively without substantive hyperparameter tuning. We also show that AG-SAC beats the model-free state-of-the-art for sample efficiency in 17 out of the 20 tested Atari games (median +62%, maximum +13,140%), again without substantive hyperparameter tuning. | reject | The topic of macro-actions/hierarchical RL is an important one and the perspective this paper takes on this topic by drawing parallels with action grammars is intriguing. However, some more work is needed to properly evaluate the significance. In particular, a better evaluation of the strengths and weaknesses of the method would improve this paper a lot. | train | [
"r1lP_es5iH",
"HyecePWciH",
"Bylk4IZ5sH",
"r1lHABZ5jS",
"BygCV9NAYS",
"r1x8yFBCtS",
"rJeFXXNJcB",
"ryxS79KatH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
">The k-Sequitur algorithm runs in linear time in the length of the presented action sequence. Hence, in computational terms it is easily feasible. Furthermore, the entropy regularisation deployed in the technique makes it more than a greedy compression technique.\n\nI think it's fairly clear that k-Sequitur does m... | [
-1,
-1,
-1,
-1,
3,
8,
1,
-1
] | [
-1,
-1,
-1,
-1,
5,
3,
5,
-1
] | [
"HyecePWciH",
"BygCV9NAYS",
"r1x8yFBCtS",
"rJeFXXNJcB",
"iclr_2020_SJxDKerKDS",
"iclr_2020_SJxDKerKDS",
"iclr_2020_SJxDKerKDS",
"iclr_2020_SJxDKerKDS"
] |
iclr_2020_rJgPFgHFwr | Laconic Image Classification: Human vs. Machine Performance | We propose laconic classification as a novel way to understand and compare the performance of diverse image classifiers. The goal in this setting is to minimise the amount of information (aka. entropy) required in individual test images to maintain correct classification. Given a classifier and a test image, we compute an approximate minimal-entropy positive image for which the classifier provides a correct classification, becoming incorrect upon any further reduction. The notion of entropy offers a unifying metric that allows to combine and compare the effects of various types of reductions (e.g., crop, colour reduction, resolution reduction) on classification performance, in turn generalising similar methods explored in previous works. Proposing two complementary frameworks for computing the minimal-entropy positive images of both human and machine classifiers, in experiments over the ILSVRC test-set, we find that machine classifiers are more sensitive entropy-wise to reduced resolution (versus cropping or reduced colour for machines, as well as reduced resolution for humans), supporting recent results suggesting a texture bias in the ILSVRC-trained models used. We also find, in the evaluated setting, that humans classify the minimal-entropy positive images of machine models with higher precision than machines classify those of humans. | reject | The paper proposes and studies a task where the goal is to classify an image that has been intentionally degraded to reduce information content.
All the reviewers found the comparison of human and machine performance interesting and valuable. However the reviewers expressed concerns and noted the following weaknesses: the presented results are not convincing to support our understanding of the differences between human and machine perception (R1), using entropy to quantify the distortion is not well motivated and has been addressed before (R1), lack of empirical evidence (R2).
AC suggests, in its current state the manuscript is not ready for a publication. We hope the detailed reviews are useful for improving and revising the paper. | train | [
"SkxuGaE2jH",
"SJeA7cE3sH",
"H1xGd8N3iS",
"S1eqmSN2jH",
"rJxR8XGzFH",
"r1xx7qp5YB",
"rJxzyUwf9H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers and the chairs for their consideration and feedback. We have responded to each individual reviewer in turn and have submitted a revised version of the paper.",
"## Geodesics Of Learned Representations\n## Olivier J. Henaff & Eero P. Simoncelli\n\nWe thank the reviewer for the reference, wh... | [
-1,
-1,
-1,
-1,
1,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
3,
5
] | [
"iclr_2020_rJgPFgHFwr",
"rJxR8XGzFH",
"r1xx7qp5YB",
"rJxzyUwf9H",
"iclr_2020_rJgPFgHFwr",
"iclr_2020_rJgPFgHFwr",
"iclr_2020_rJgPFgHFwr"
] |
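The unifying quantity in the record above is the entropy of a reduced test image. One simple instantiation is the Shannon entropy of the pixel-value histogram, which indeed drops under colour-depth reduction; the paper's precise entropy measure over crops, colours and resolutions may be defined differently.

```python
import numpy as np

def pixel_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's pixel-value histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224))        # stand-in for a test image
reduced = (img // 64) * 64                         # colour-depth reduction to 4 levels
print(pixel_entropy(img), pixel_entropy(reduced))  # entropy decreases
```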
iclr_2020_SkluFgrFwH | Learning Mahalanobis Metric Spaces via Geometric Approximation Algorithms | Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. We consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial time approximation scheme (FPTAS) with nearly-linear running time. This result is obtained using tools from the theory of linear programming in low dimensions. We also discuss improvements of the algorithm in practice, and present experimental results on synthetic and real-world data sets. Our algorithm is fully parallelizable and performs favorably in the presence of adversarial noise. | reject | The paper proposes a method to handle Mahalanobis metric learning through linear programming.
All reviewers were unclear on what the novelty of the approach is compared to existing work.
I recommend rejection at this time, but encourage the authors to incorporate reviewers' feedback (in particular placing the work in better context and clarifying the motivations) and resubmit elsewhere.
| train | [
"rJl8AV3osS",
"SJlsP4hior",
"BkxQW4hjsS",
"H1xwygViFB",
"ryxFcwpnKr",
"H1lULCyZ5r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the insightful comments. Below are our responses to the specific points raised:\n\n- Our algorithm is the first with a provable guarantee on the number of violated constraints on arbitrary (that is, adversarial) inputs. The machinery for solving LP-type problems is well-known within the c... | [
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
3,
1,
3
] | [
"H1xwygViFB",
"ryxFcwpnKr",
"H1lULCyZ5r",
"iclr_2020_SkluFgrFwH",
"iclr_2020_SkluFgrFwH",
"iclr_2020_SkluFgrFwH"
] |
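The objective in the record above is to find a Mahalanobis matrix minimising the number of violated similarity/dissimilarity constraints. The checker below makes that objective concrete for given thresholds u and l (both illustrative values); the LP-type solver with the approximation guarantee is not reproduced here.

```python
import numpy as np

def num_violations(M, similar_pairs, dissimilar_pairs, u=1.0, l=4.0):
    """Count constraints violated by the Mahalanobis distance
    d_M(x, y) = (x - y)^T M (x - y): similar pairs must satisfy d_M <= u,
    dissimilar pairs must satisfy d_M >= l."""
    def d(x, y):
        diff = x - y
        return float(diff @ M @ diff)
    violations = sum(d(x, y) > u for x, y in similar_pairs)
    violations += sum(d(x, y) < l for x, y in dissimilar_pairs)
    return violations
```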
iclr_2020_S1eYKlrYvr | Diagnosing the Environment Bias in Vision-and-Language Navigation | Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are extremely useful in navigating new environments which the agent does not know about previously. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered as one of major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons of this environment bias. We observe that neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations which contain less low-level visual information, hence the agent learned with these features could be better generalized to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gap between seen and unseen on multiple datasets (i.e., 8.6% to 0.2% on R2R, 23.9% to 0.1% on R4R, and 3.74 to 0.17 on CVDN) and achieve competitive unseen results to previous state-of-the-art models. | reject | The submission is a detailed and extensive examination of overfitting in vision-and-language navigation domains. The authors evaluate several methods across multiple environments, using different splits of the environment data into training, validation-seen, and validation-unseen. The authors also present an approach using semantic features which is shown to have little or no gap between training and validation performance.
The reviewers had mixed reviews and there was substantial discussion about the merits of the paper. However, a significant issue was observed and confirmed with the authors, relating to tuning the semantic features and agent model on the unseen validation data. This is an important flaw, since the other methods were not tuned in this way, and there was no 'test' performance given in the paper. For this reason, the recommendation is to reject the paper. The authors are encouraged to fairly compare all models and resubmit their paper at another venue. | train | [
"r1l12CqnYr",
"BylcucJ2sS",
"SyefQahjiS",
"ryx96Khsjr",
"BJgFU69ijB",
"Bkg8e9SqsS",
"rylViYHcsr",
"HJg32DScjS",
"SJxImLB9oH",
"S1gbNt0nYr",
"HyeuvT_0Kr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper aims to identify the primary source of transfer error in vision&language navigation tasks in unseen environments. The authors tease apart the contributions of the out-of-distribution severity of language instructions, navigation graph (environmental structure), and visual features, and conclude that vis... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_S1eYKlrYvr",
"SyefQahjiS",
"ryx96Khsjr",
"BJgFU69ijB",
"rylViYHcsr",
"r1l12CqnYr",
"HJg32DScjS",
"S1gbNt0nYr",
"HyeuvT_0Kr",
"iclr_2020_S1eYKlrYvr",
"iclr_2020_S1eYKlrYvr"
] |
iclr_2020_HJgKYlSKvr | Unsupervised Generative 3D Shape Learning from Natural Images | In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, and ego-motion during the training. Our approach follows the general strategy of Generative Adversarial Networks, where an image generator network learns to create image samples that are realistic enough to fool a discriminator network into believing that they are natural images. In contrast, in our approach the image generation is split into 2 stages. In the first stage a generator network outputs 3D objects. In the second, a differentiable renderer produces an image of the 3D object from a random viewpoint. The key observation is that a realistic 3D object should yield a realistic rendering from any plausible viewpoint. Thus, by randomizing the choice of the viewpoint our proposed training forces the generator network to learn an interpretable 3D representation disentangled from the viewpoint. In this work, a 3D representation consists of a triangle mesh and a texture map that is used to color the triangle surface by using the UV-mapping technique. We provide analysis of our learning approach, expose its ambiguities and show how to overcome them. Experimentally, we demonstrate that our method can learn realistic 3D shapes of faces by using only the natural images of the FFHQ dataset. | reject | The paper proposes a GAN approach for unsupervised learning of 3d object shapes from natural images. The key idea is a two-stage generative process where the 3d shape is first generated and then rendered to pixel-level images. While the experimental results are promising, the experimental results are mostly focused on faces (that are well aligned and share roughly similar 3d structures across the dataset). Results on other categories are preliminary and limited, so it's unclear how well the proposed method will work for more general domains. In addition, comparison to the existing baselines (e.g., HoloGAN; Pix2Scene; Rezende et al., 2016) is missing. Overall, further improvements are needed to be acceptable for ICLR.
Extra note: Missing citation to a relevant work
Wang and Gupta, Generative Image Modeling using Style and Structure Adversarial Networks
https://arxiv.org/abs/1603.05631 | train | [
"Hygnc4E0dH",
"SylVFp23KH",
"rkg_nVQjoS",
"B1lAOE7ooB",
"r1g4847sjr",
"B1xF7NmojH",
"rkl8cfFotS",
"SJlW0u2EFS",
"HkgFYmnmKS",
"HJloQWkZur",
"rkeLjGjx_H",
"ryg5_65x_B",
"HkgODgJlOH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"public",
"author",
"author",
"public"
] | [
"The paper tries to solve the problem of recovering the 3D structure from 2D images. To this end, it describes a GAN-type model for generating realistic images, where the generator disentangles shape, texture, and background. Most notably, the shape is represented in three dimensions as a mesh made of triangles. Th... | [
3,
3,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_HJgKYlSKvr",
"iclr_2020_HJgKYlSKvr",
"iclr_2020_HJgKYlSKvr",
"Hygnc4E0dH",
"rkl8cfFotS",
"SylVFp23KH",
"iclr_2020_HJgKYlSKvr",
"HkgFYmnmKS",
"iclr_2020_HJgKYlSKvr",
"rkeLjGjx_H",
"HkgODgJlOH",
"iclr_2020_HJgKYlSKvr",
"iclr_2020_HJgKYlSKvr"
] |
iclr_2020_HygqFlBtPS | Improved Training of Certifiably Robust Models | Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical (PGD) robustness. In principle, relaxation can provide tight bounds if the convex relaxation solution is feasible for the original non-relaxed problem. Therefore, we propose two regularizers that can be used to train neural networks that yield convex relaxations with tighter bounds. In all of our experiments, the proposed regularizations result in tighter certification bounds than non-regularized baselines. | reject | The authors develop regularization schemes that aim to promote tightness of convex relaxations used to provides certificates of robustness to adversarial examples in neural networks.
While the paper makes some interesting contributions, the reviewers had several concerns about the paper:
1) The aim of the authors' work and the distinction with closely related prior work is not clear from the presentation. In particular, the relationship to the ReLU stability regularizer (Xiao et al ICLR 2019) and the FastLin/CROWN-IBP work (https://arxiv.org/abs/1906.06316) is not very well presented in the theoretical sections or the experiments.
2) The theoretical results (proposition 1) requires very strong conditions to apply, which are unlikely to be satisfied for real networks. This calls into question the effectiveness of the framework developed by the authors.
While the paper has some interesting ideas, it seems unfit for publication in its present form. | train | [
"HJlLJh4hir",
"r1eZ0iNniH",
"B1es6DN2iH",
"Hyl4nPE3sB",
"r1l226VnsS",
"Hkge-pN2iH",
"Hye3HcXoFB",
"HyeOL_lGcB",
"HyxG4FXV5B"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"> \"- * The crux of the contribution seems to rest on the premise that identifying the optimal perturbation in the input space with the relaxed model, … In general, it seems very unclear why this should work based on the evidence presented in the paper. Specifically with the relaxation, it might not even be guaran... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
5
] | [
"HyeOL_lGcB",
"HyeOL_lGcB",
"Hyl4nPE3sB",
"HyxG4FXV5B",
"iclr_2020_HygqFlBtPS",
"Hye3HcXoFB",
"iclr_2020_HygqFlBtPS",
"iclr_2020_HygqFlBtPS",
"iclr_2020_HygqFlBtPS"
] |
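For reference on the certification pipeline discussed in the record above, the simplest convex relaxation propagates elementwise input intervals through each layer; the paper's regularizers aim to tighten the gap such relaxations leave, and are not shown here.

```python
import numpy as np

def interval_linear(lb, ub, W, b):
    """Bound x -> W @ x + b when x lies elementwise in [lb, ub]."""
    mid, rad = (ub + lb) / 2.0, (ub - lb) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad
    return mid_out - rad_out, mid_out + rad_out

def interval_relu(lb, ub):
    """ReLU is monotone, so it maps interval bounds endpoint-wise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)
```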
iclr_2020_BJlqYlrtPB | Negative Sampling in Variational Autoencoders | We propose negative sampling as an approach to improve the notoriously bad out-of-distribution likelihood estimates of Variational Autoencoder models. Our model pushes latent images of negative samples away from the prior. When the source of negative samples is an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, we present a fully unsupervised variant that can also significantly improve detection performance: using the output of the generator as a source of negative samples results in a fully unsupervised model that can be interpreted as adversarially trained.
| reject | This paper proposes to improve VAEs' modeling of out-of-distribution examples, by pushing the latent representations of negative examples away from the prior. The general idea seems interesting, at least to some of the reviewers and to me. However, the paper seems premature, even after revision, as it leaves unclear some of the justification and analysis of the approach, especially in the fully unsupervised case. I think that with some more work it could be a very compelling contribution to a future conference. | train | [
"S1leviI0YB",
"BkgalbL3FH",
"Sylxlkrhsr",
"SJghYWEhsH",
"Hke5Kl43oS",
"HJg7vJE2ir",
"rkg5xRX2jH",
"H1gQtiyvtH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\nThe authors propose augmenting VAEs with an additional latent variable to allow them to detect out-of-distribution (OOD) data. They propose several measures based on this model to distinguish between inliers and outliers, and evaluate the model empirically, finding it successful.\n\nUnfortunately, the me... | [
3,
6,
-1,
-1,
-1,
-1,
-1,
3
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_BJlqYlrtPB",
"iclr_2020_BJlqYlrtPB",
"iclr_2020_BJlqYlrtPB",
"iclr_2020_BJlqYlrtPB",
"H1gQtiyvtH",
"BkgalbL3FH",
"S1leviI0YB",
"iclr_2020_BJlqYlrtPB"
] |
iclr_2020_BkghKgStPH | Continual Learning using the SHDL Framework with Skewed Replay Distributions | Humans and animals continuously acquire, adapt as well as transfer knowledge throughout their lifespan. The ability to learn continuously is crucial for the effective functioning of agents interacting with the real world and processing continuous streams of information. Continuous learning has been a long-standing challenge for neural networks as the repeated acquisition of information from non-uniform data distributions generally leads to catastrophic forgetting or interference. This work proposes a modular architecture capable of continuous acquisition of tasks while averting catastrophic forgetting. Specifically, our contributions are: (i) Efficient Architecture: a modular architecture emulating the visual cortex that can learn meaningful representations with limited labelled examples, (ii) Knowledge Retention: retention of learned knowledge via limited replay of past experiences, (iii) Forward Transfer: efficient and relatively faster learning on new tasks, and (iv) Naturally Skewed Distributions: The learning in the above-mentioned claims is performed on non-uniform data distributions which better represent the natural statistics of our ongoing experience. Several experiments that substantiate the above-mentioned claims are demonstrated on the CIFAR-100 dataset. | reject | The paper adapts a previously proposed modular deep network architecture (SHDL) for supervised learning in a continual learning setting. One problem in this setting is catastrophic forgetting. The proposed solution replays a small fraction of the data from old tasks to avoid forgetting, on top of a modular architecture that facilitates fast transfer when new tasks are added. The method is developed for image inputs and evaluated experimentally on CIFAR-100.
The reviews were in agreement that this paper is not ready for publication. All the reviews had concerns about the lack of explanation of the proposed solution and the experimental methods. The reviewers were concerned about the choice of metrics not being comparable or justified: Reviewer4 wanted an apples-to-apples comparison, Reviewer1 suggested the paper follow the evaluation paradigm used in earlier papers, and Reviewer2 described the absence of an explained baseline value. Two reviewers (Reviewer4 and Reviewer2) described the lack of details on the parameters, architecture, and training regime used for the experiments. The paper did not not justify which aspects of the modular system contributed to the observed performance (Reviewer4 and Reviewer1). Several additional concerns were also raised.
The authors did not respond to any of the concerns raised by the reviewers.
| train | [
"B1xG4i5TFH",
"Hkgn9zNX9r",
"HJgg5GG59B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper suggest to use the previously proposed ScatterNet Hybrid Deep Learning (SHDL) network in a continual learning setting. This is motivated by the fact that the SHDL needs less supervised data, so keeping a small replay buffer can be enough to maintain performance while avoiding catastrophic forgetting.\n\... | [
1,
1,
1
] | [
4,
3,
5
] | [
"iclr_2020_BkghKgStPH",
"iclr_2020_BkghKgStPH",
"iclr_2020_BkghKgStPH"
] |
iclr_2020_rkg6FgrtPB | Biologically Plausible Neural Networks via Evolutionary Dynamics and Dopaminergic Plasticity | Artificial neural networks (ANNs) lack in biological plausibility, chiefly because backpropagation requires a variant of plasticity (precise changes of the synaptic weights informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain. Here we propose that backpropagation can happen in evolutionary time, instead of lifetime, in what we call neural net evolution (NNE). In NNE the weights of the links of the neural net are sparse linear functions of the animal's genes, where each gene has two alleles, 0 and 1. In each generation, a population is generated at random based on current allele frequencies, and it is tested in the learning task. The relative performance of the two alleles of each gene over the whole population is determined, and the allele frequencies are updated via the standard population genetics equations for the weak selection regime. We prove that, under assumptions, NNE succeeds in learning simple labeling functions with high probability, and with polynomially many generations and individuals per generation. We test the NNE concept, with only one hidden layer, on MNIST with encouraging results. Finally, we explore a further version of biologically plausible ANNs inspired by the recent discovery in animals of dopaminergic plasticity: the increase of the strength of a synapse that fired if dopamine was released soon after the firing. | reject | Unfortunately the paper is confusingly written, and there is only agreement by all reviewers on the rejection of the paper. Indeed, if all reviewers and the area chair do not interpret the paper well, the authors' best response would be to rewrite the papers rather than disagree with all reviewers.
In the area chair's opinion, the paper in its current form does not merit publication. The authors are advised to address the reviewers' concerns, rework the paper, and submit to a conference again.
"Sygu74JooS",
"rygLcMyojS",
"S1l46g1ojr",
"S1eZ0ExCuH",
"ByedtDasKS",
"rygnNYcCFB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your valuable comments. We provide the answers below for the concerns raised in the review. \n \n(1)\nThe point of this paper is not to produce a new machine learning framework for classification tasks, but to understand how the animal brain could work, by studying biologically plausible neural ne... | [
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
3,
1,
1
] | [
"rygnNYcCFB",
"S1eZ0ExCuH",
"ByedtDasKS",
"iclr_2020_rkg6FgrtPB",
"iclr_2020_rkg6FgrtPB",
"iclr_2020_rkg6FgrtPB"
] |
iclr_2020_H1eRYxHYPB | Optimal Unsupervised Domain Translation | Unsupervised Domain Translation~(UDT) consists in finding meaningful correspondences between two domains, without access to explicit pairings between them. Following the seminal work of \textit{CycleGAN}, many variants and extensions of this model have been applied successfully to a wide range of applications. However, these methods remain poorly understood, and lack convincing theoretical guarantees. In this work, we define UDT in a rigorous, non-ambiguous manner, explore the implicit biases present in the approach and demonstrate the limits of these approaches. Specifically, we show that mappings produced by these methods are biased towards \textit{low energy} transformations, leading us to cast UDT into an Optimal Transport~(OT) framework by making this implicit bias explicit. This not only allows us to provide theoretical guarantees for existing methods, but also to solve UDT problems where previous methods fail. Finally, making the link between the dynamic formulation of OT and CycleGAN, we propose a simple approach to solve UDT, and illustrate its properties in two distinct settings. | reject | The paper examines the problem of unsupervised domain translation. It poses the problem in a rigorous way for the first time and examines the shortcomings of existing CycleGAN-based methods. Then the authors propose to consider the problem through the lens of Optimal Transport theory and formulate a practical algorithm.
The reviewers agree that the paper addresses an important problem, brings clarity to existing methods, and proposes an interesting approach / algorithm, and is well-written. However, there was a shared concern about whether the new approach just moves the complexity elsewhere (into the design of the cost function). The authors claim to have addressed in the rebuttal by adding an extra experiment, but the reviewers remained unconvinced.
Based on the reviewer discussion, I recommend rejection at this time, but look forward to seeing the revised paper at a future venue. | train | [
"SJe7q1o3or",
"HkeRpB93oB",
"r1lBXtcnsS",
"HygIgX93iH",
"SJg1jSWvtH",
"rkgyzs7iKB",
"r1xMmonntS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First of all, we would like to kindly thank the reviewers, who have all taken the time to give us useful feedback on the paper by writing detailed comments about different aspects of our work. We have answered each reviewer individually and will outline here the main changes in the revised version of our work:\n\n... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2020_H1eRYxHYPB",
"rkgyzs7iKB",
"SJg1jSWvtH",
"r1xMmonntS",
"iclr_2020_H1eRYxHYPB",
"iclr_2020_H1eRYxHYPB",
"iclr_2020_H1eRYxHYPB"
] |
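The record above recasts unsupervised domain translation in an optimal-transport framework with an explicit cost. As a point of reference only (it is not the training algorithm proposed in the paper), entropy-regularised OT between two histograms can be computed with Sinkhorn iterations:

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.5, iters=500):
    """Entropy-regularised optimal transport plan between histograms a and b
    for a given ground cost matrix; returns the plan and its transport cost."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return plan, float((plan * cost).sum())

# Tiny example: two 3-bin histograms with a squared-distance cost between bins.
x = np.arange(3.0)
cost = (x[:, None] - x[None, :]) ** 2
plan, total = sinkhorn(cost, np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5]))
print(total)
```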
iclr_2020_HJlRFlHFPS | Unsupervised Distillation of Syntactic Information from Contextualized Word Representations | Contextualized word representations, such as ELMo and BERT, were shown to perform well on a variety of semantic and structural (syntactic) tasks. In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that discards the lexical semantics, but keeps the structural information. To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use a metric-learning approach to learn a transformation that emphasizes the structural component that is encoded in the vectors. We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics. Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in a few-shot parsing setting. | reject | This paper aims to disentangle semantics and syntax inside of popular contextualized word embedding models. They use the model to generate sentences which are structurally similar but semantically different.
This paper generated a lot of discussion. The reviewers do like the method for generating structurally similar sentences, and the triplet loss. They felt the evaluation methods were clever. However, one reviewer raised several issues. First, they thought the idea of syntax had not been well defined. They also thought the evaluation did not support the claims. The reviewer also argued very hard for the need to compare performance to SOTA models. The authors argued that beating SOTA is not the goal of their work, rather it is to understand what SOTA models are doing. The reviewers also argue that nearest neighbors is not a good method for evaluating the syntactic information in the representations.
I hope all of the comments of the reviewers will help improve the paper as it is revised for a future submission. | train | [
"SJgn9R5hsS",
"SyekktO3jB",
"SJgOLrZhiB",
"rJg-gS-2iS",
"H1lgdg3siH",
"HyxJmJuUjB",
"S1xa5Zz7sH",
"rJg2I-fQjr",
"ryx6n0b7iH",
"rygVr0-7oS",
"BJesp3WXsB",
"SJeeN3WmoS",
"rJxPTT-IYH",
"SygRa5LTKS",
"SklmVR-fqH",
"Byey-Ljz5H"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for taking the time to carefully re-read our paper. We regret that you still do not \"get\" what we were trying to achieve in this work. Most notably, we were *not* aiming at beating any other system. That is simply not the intention of the works. Our intention was to distill the structural representation e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"SyekktO3jB",
"BJesp3WXsB",
"HyxJmJuUjB",
"H1lgdg3siH",
"rygVr0-7oS",
"rJxPTT-IYH",
"rJxPTT-IYH",
"rJxPTT-IYH",
"SygRa5LTKS",
"SklmVR-fqH",
"Byey-Ljz5H",
"iclr_2020_HJlRFlHFPS",
"iclr_2020_HJlRFlHFPS",
"iclr_2020_HJlRFlHFPS",
"iclr_2020_HJlRFlHFPS",
"iclr_2020_HJlRFlHFPS"
] |
iclr_2020_BJg15lrKvS | Towards Understanding the Spectral Bias of Deep Learning | An intriguing phenomenon observed during the training of neural networks is the spectral bias, where neural networks are biased towards learning less complex functions. The priority of learning functions with low complexity might be at the core of explaining the generalization ability of neural networks, and certain efforts have been made to provide a theoretical explanation for spectral bias. However, there are still no satisfying theoretical results justifying the existence of spectral bias. In this work, we give a comprehensive and rigorous explanation for spectral bias and relate it to the neural tangent kernel function proposed in recent work. We prove that the training process of neural networks can be decomposed along different directions defined by the eigenfunctions of the neural tangent kernel, where each direction has its own convergence rate and the rate is determined by the corresponding eigenvalue. We then provide a case study when the input data is uniformly distributed over the unit sphere, and show that lower degree spherical harmonics are learned more easily by over-parameterized neural networks. | reject | The authors propose to understand spectral bias during training of neural networks from the perspective of the NTK. While reviewers appreciated aspects of the work, the general consensus was that the current version is not ready for publication; some concerns stem from whether the NTK model and finite neural networks are sufficiently similar that we should be able to gain real practical insights into the behaviour of finite models. This is partly an empirical question, and stronger experiments are required to have a better sense of the answer. Nonetheless, the authors are encouraged to persist with this work, taking into account reviewer comments in future revisions.
| train | [
"S1e1WhjGFr",
"rkxx_cYzsB",
"rygGJvFzjH",
"SkgDAPKfiS",
"HJlZsUhVYS",
"SJlRf3oiKS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the training of overparametrized neural networks by gradient descent. More precisely, the authors consider the neural tangent regime (NTK regime). That is, the weights are chosen sufficiently large and the neural network is sufficiently overparametrized. It has been observed that in this scenari... | [
3,
-1,
-1,
-1,
6,
6
] | [
1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_BJg15lrKvS",
"S1e1WhjGFr",
"SJlRf3oiKS",
"HJlZsUhVYS",
"iclr_2020_BJg15lrKvS",
"iclr_2020_BJg15lrKvS"
] |
iclr_2020_HyleclHKvS | A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed | Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to slow asymptotic convergence. Several variance reduction algorithms, such as SVRG, introduce control variates to obtain a lower variance gradient estimate and faster convergence. Despite their appealing asymptotic guarantees, SVRG-like algorithms have not been widely adopted in deep learning. The traditional asymptotic analysis in stochastic optimization provides limited insight into training deep learning models under a fixed number of epochs. In this paper, we present a non-asymptotic analysis of SVRG under a noisy least squares regression problem. Our primary focus is to compare the exact loss of SVRG to that of SGD at each iteration t. We show that the learning dynamics of our regression model closely matches with that of neural networks on MNIST and CIFAR-10 for both the underparameterized and the overparameterized models. Our analysis and experimental results suggest there is a trade-off between the computational cost and the convergence speed in underparametrized neural networks. SVRG outperforms SGD after a few epochs in this regime. However, SGD is shown to always outperform SVRG in the overparameterized regime. | reject | Two reviewers as well as the AC are confused by the paper—perhaps because the readability of it should be improved? It is clear that the page limitation of conferences are problematic, with 7 pages of appendix (not part of the review) the authors may consider another venue to publish. In its current form, the usefulness for the ICLR community seems limited. | train | [
"B1ghYumwKH",
"Hkl2Dn5ssB",
"rJeMvjqiir",
"Skx-ZF9ijr",
"ByggwL9osH",
"H1lyVZU_dB",
"SyxZrW16Yr",
"r1x2--U6FB",
"SJxQMScXKr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper compares SGD and SVRG (as a representative variance reduced method) to explore tradeoffs. Although the computational complexity vs overall convergence performance tradeoff is well-known at this point, an interesting new perspective is the comparison in regions of interpolation (where SGD gradient varian... | [
6,
-1,
-1,
-1,
-1,
3,
1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1
] | [
"iclr_2020_HyleclHKvS",
"H1lyVZU_dB",
"B1ghYumwKH",
"SyxZrW16Yr",
"iclr_2020_HyleclHKvS",
"iclr_2020_HyleclHKvS",
"iclr_2020_HyleclHKvS",
"SJxQMScXKr",
"iclr_2020_HyleclHKvS"
] |
iclr_2020_HyeG9lHYwH | Compression without Quantization | Standard compression algorithms work by mapping an image to discrete code using an encoder from which the original image can be reconstructed through a decoder. This process, due to the quantization step, is inherently non-differentiable so these algorithms must rely on approximate methods to train the encoder and decoder end-to-end. In this paper, we present an innovative framework for lossy image compression which is able to circumvent the quantization step by relying on a non-deterministic compression codec. The decoder maps the input image to a distribution in continuous space from which a sample can be encoded with expected code length being the relative entropy to the encoding distribution, i.e. it is bits-back efficient. The result is a principled, end-to-end differentiable compression framework that can be straight-forwardly trained using standard gradient-based optimizers. To showcase the efficiency of our method, we apply it to lossy image compression by training Probabilistic Ladder Networks (PLNs) on the CLIC 2018 dataset and show that their rate-distortion curves on the Kodak dataset are competitive with the state-of-the-art on low bitrates. | reject | The paper proposes a method for lossy image compression. Based on the encoder-decoder framework, it replaces the discrete codes by continuous ones, so that the learning can be performed in an end-to-end way. The idea is interesting, but the motivation is based on a quantization "problem" that the authors show no evidence the competing method is actually suffering from. It is thus unclear how much does quantization in existing methods impact performance, and how much will fixing this benefit the overall system. Also, the authors may add some discussions on whether the proposed sampling of z_{c^\star} is indeed also a form of quantization.
Experimental results are not convincing. The proposed method is only compared with one method. While it works only slightly worse in the low bit-rate region, the gap becomes larger at higher bit rates. Another major concern is that the encoding time is significantly longer. An ablation study is also needed. Finally, the writing can be improved. | train | [
"HJlJd4gnsS",
"SyxKW6SuoB",
"Bygbj0SuiH",
"rkxeGAHOiB",
"B1gfOpBdiS",
"SkxaSBcaFH",
"HkgqED8k5H",
"SJgHF8Qt5B",
"B1x_DUuhcr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the rebuttal. \n\nWe agree that even though the competing method [1] uses quantization, it does not seem to suffer from it. You then argue that the other reason for removing quantization is that it circumvents the restriction for uniform distributions of the posterior, and that therefore your method ... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
1,
1
] | [
"Bygbj0SuiH",
"B1x_DUuhcr",
"SkxaSBcaFH",
"HkgqED8k5H",
"SJgHF8Qt5B",
"iclr_2020_HyeG9lHYwH",
"iclr_2020_HyeG9lHYwH",
"iclr_2020_HyeG9lHYwH",
"iclr_2020_HyeG9lHYwH"
] |
iclr_2020_B1g79grKPr | Goal-Conditioned Video Prediction | Many processes can be concisely represented as a sequence of events leading from a starting state to an end state. Given raw ingredients, and a finished cake, an experienced chef can surmise the recipe. Building upon this intuition, we propose a new class of visual generative models: goal-conditioned predictors (GCP). Prior work on video generation largely focuses on prediction models that only observe frames from the beginning of the video. GCP instead treats videos as start-goal transformations, making video generation easier by conditioning on the more informative context provided by the first and final frames. Not only do existing forward prediction approaches synthesize better and longer videos when modified to become goal-conditioned, but GCP models can also utilize structures that are not linear in time, to accomplish hierarchical prediction. To this end, we study both auto-regressive GCP models and novel tree-structured GCP models that generate frames recursively, splitting the video iteratively into finer and finer segments delineated by subgoals. In experiments across simulated and real datasets, our GCP methods generate high-quality sequences over long horizons. Tree-structured GCPs are also substantially easier to parallelize than auto-regressive GCPs, making training and inference very efficient, and allowing the model to train on sequences that are thousands of frames in length.Finally, we demonstrate the utility of GCP approaches for imitation learning in the setting without access to expert actions. Videos are on the supplementary website: https://sites.google.com/view/video-gcp | reject | The paper addresses a video generation setting where both initial and goal state are provided as a basis for long-term prediction. The authors propose two types of models, sequential and hierarchical, and obtain interesting insights into the performance of these two models. Reviewers raised concerns about evaluation metrics, empirical comparisons, and the relationship of the proposed model to prior work.
While many of the initial concerns have been addressed by the authors, reviewers remain concerned about two issues in particular. First, the proposed model is similar to previous approaches with sequential latent variable models, and it is unclear how such existing models would compare if applied in this setting. Second, there are remaining concerns on whether the model may learn degenerate solutions. I quote from the discussion here, as I am not sure this will be visible to authors [about Figure 12]: "now the two examples with two samples they show have the same door in the middle frame which makes me doubt the method learn[s] anything meaningful in terms of the agent walking through the door but just go to the middle of the screen every time." | train | [
"HyedaO2pKH",
"SygAcbAjiS",
"HkgEsgRooS",
"Skxd4xAojH",
"HygiHzCjiB",
"rJxnKiMaKB",
"HklBPG5AFS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The following work proposes a model for long-range video interpolation -- specifically targetting cases where the intermediate content trajectories may be highly non-linear. This is referred to as goal-conditioned in the paper. They present an autoregressive sequential model, as well as a hierarchical mod... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_B1g79grKPr",
"rJxnKiMaKB",
"HyedaO2pKH",
"HklBPG5AFS",
"iclr_2020_B1g79grKPr",
"iclr_2020_B1g79grKPr",
"iclr_2020_B1g79grKPr"
] |
iclr_2020_HJg4qxSKPB | Implicit Rugosity Regularization via Data Augmentation | Deep (neural) networks have been applied productively in a wide range of supervised and unsupervised learning tasks. Unlike classical machine learning algorithms, deep networks typically operate in the overparameterized regime, where the number of parameters is larger than the number of training data points. Consequently, understanding the generalization properties and the role of (explicit or implicit) regularization in these networks is of great importance. In this work, we explore how the oft-used heuristic of data augmentation imposes an implicit regularization penalty of a novel measure of the rugosity or “roughness” based on the tangent Hessian of the function fit to the training data. | reject | This paper aims to study the effect of data augmentation on generalization performance. The authors put forth a measure of rugosity or "roughness" based on the tangent Hessian of the function, reminiscent of a classic result by Donoho et al. The authors show that this measure changes in tandem with how much data augmentation helps. The reviewers and I concur that the rugosity measure is interesting. However, as the reviewers mention, the main drawback of this paper is that this measure of rugosity, when made explicit, does not improve generalization. I agree with the authors that this measure is interesting in itself. However, I think in its current form the paper is not ready for prime time and recommend rejection. That said, I believe this paper has a lot of potential and recommend that the authors rewrite and carry out more careful experiments for a future submission. | train | [
"rke91XJ5jH",
"HkgycV15sS",
"ryg5HmJ5iS",
"rJlfN3TjtS",
"Syl3tBeAtH",
"HJgBtISK5r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for positive comments and useful suggestions. We provide the following responses for the comments.\n\n1> The definition of rugosity is an extension of (Donoho & Grimes (2003)) in which the extension is not really improving anything or used anywhere in the paper.\n\nThe rugosity measure that w... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
4,
1,
3
] | [
"rJlfN3TjtS",
"HJgBtISK5r",
"Syl3tBeAtH",
"iclr_2020_HJg4qxSKPB",
"iclr_2020_HJg4qxSKPB",
"iclr_2020_HJg4qxSKPB"
] |
iclr_2020_r1xH5xHYwH | Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter! | We investigated the changes in visual representations learnt by CNNs when using different linguistic labels (e.g., trained with basic-level labels only, superordinate-level only, or both at the same time) and how they compare to human behavior when asked to select which of three images is most different. We compared CNNs with identical architecture and input, differing only in what labels were used to supervise the training. The results showed that in the absence of labels, the models learn very little categorical structure that is often assumed to be in the input. Models trained with superordinate labels (vehicle, tool, etc.) are most helpful in allowing the models to match human categorization, implying that human representations used in odd-one-out tasks are highly modulated by semantic information not obviously present in the visual input. | reject | This paper explores training CNNs with labels of differing granularity, and finds that the types of information learned by the method depends intimately on the structure of the labels provided.
Though the reviewers found value in the paper, they felt there were some issues with clarity, and didn't think the analyses were as thorough as they could be. I thank the authors for making changes to their paper in light of the reviews, and hope that they feel their paper is stronger because of the review process. | train | [
"HyefSpuJcS",
"BkxmYVKjiS",
"ByxdMQFijH",
"BkeI_zYssS",
"BJxYabYiiS",
"r1gZ4qdsjS",
"SJewdvuioS",
"SyedkNO0KH",
"Hkg40vTEqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper assesses the effects of training an image classifier with different label types: 1-hot coarse-grained labels (10 classes), 1-hot fine grained labels (30 labels which are all subcategories of the 10 coarse-grained categories), word vector representations of the 30 fine-grained labels. They also compare t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_r1xH5xHYwH",
"iclr_2020_r1xH5xHYwH",
"BJxYabYiiS",
"BJxYabYiiS",
"HyefSpuJcS",
"Hkg40vTEqH",
"SyedkNO0KH",
"iclr_2020_r1xH5xHYwH",
"iclr_2020_r1xH5xHYwH"
] |
iclr_2020_rJxHcgStwr | Handwritten Amharic Character Recognition System Using Convolutional Neural Networks | Amharic is an official language of the federal government of the Federal Democratic Republic of Ethiopia. Accordingly, there is a large collection of handwritten Amharic documents available in libraries, information centres, museums, and offices. Digitization of these documents enables us to harness already available language technologies for local information needs and developments. Converting these documents will have a lot of advantages, including (i) preserving and transferring the history of the country, (ii) saving storage space, (iii) proper handling of documents, and (iv) enhanced retrieval of information through the internet and other applications. Handwritten Amharic character recognition is a challenging task due to the inconsistency of individual writers, variability in the writing styles of different writers, the relatively large number of characters in the script, high interclass similarity, structural complexity, and degradation of documents for various reasons. In order to recognize handwritten Amharic characters, a novel method based on deep neural networks is used; such networks have recently shown exceptional performance in various pattern recognition and machine learning applications, but have not yet been applied to the Ethiopic script. The CNN model is trained and tested on our database, which contains 132,500 samples of handwritten Amharic characters. Common machine learning methods usually apply a combination of a feature extractor and a trainable classifier. The use of CNNs leads to significant improvements across different machine-learning classification algorithms. Our proposed CNN model achieves an accuracy of 91.83% on training data and 90.47% on validation data. | reject | The submission proposes to use CNNs for Amharic character recognition. The authors used a straightforward application of CNNs to go from images of Amharic characters to the corresponding character. There was no innovation on the CNN side. The main contribution of the work is the Amharic handwriting dataset and the experiments that were performed.
The reviewers indicated the following concerns:
1. There was no innovation in the method (a straightforward CNN is used), and it is likely not of interest to the ICLR community.
2. The dataset was divided into a train/val split and does not contain a held-out test set. Thus it was impossible to determine the generalization of the model.
3. The paper is poorly written, with the initial version having major formatting issues and missing references. The revised version has fixed some of the formatting issues. The paper still needs more paragraph breaks to help with readability (for instance, the introduction is still one big long paragraph). The terminology and writing can also be improved. For instance, in section 2.3, the authors write that "500 dataset for each character were collected". It would be clearer to say that "500 images for each character were collected".
The submission received low reviews overall (3 rejects), which was unchanged after the rebuttal. Due to the general consensus, there was limited discussion. There were also major formatting issues with the initial submission. The revised version was improved to have proper inclusion of Amharic characters in the text, missing figures, and references. However, even after the revision, the paper still had the above issues with methodology (as noted by R4) and is likely of low interest for the ICLR community.
The Amharic handwriting data and experiments using a CNN can be of interest to a different community, and I would recommend the authors work on improving their paper based on reviewer comments and submit it to a different venue (such as a workshop focused on character recognition for different languages).
| train | [
"H1ep80BntB",
"HkerBE8aYB",
"BygFLIHo5S",
"H1e6iYj4qH",
"r1gIAmkyKB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper try to use CNN to build recognizer for handwritten Amharic characters. The CNN they used is simple and standard. Apparently this paper no novelty at all. They just apply CNN to a new task. This kind of work is not qualified for ICLR at all.\n\nThere are also some problems in paper organization. They sho... | [
1,
1,
1,
-1,
-1
] | [
4,
4,
1,
-1,
-1
] | [
"iclr_2020_rJxHcgStwr",
"iclr_2020_rJxHcgStwr",
"iclr_2020_rJxHcgStwr",
"r1gIAmkyKB",
"iclr_2020_rJxHcgStwr"
] |
iclr_2020_BylD9eSYPS | Clustered Reinforcement Learning | Exploration strategy design is one of the challenging problems in reinforcement learning~(RL), especially when the environment contains a large state space or sparse rewards. During exploration, the agent tries to discover novel areas or high reward~(quality) areas. In most existing methods, the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent. To tackle this problem, we propose a novel RL framework, called \underline{c}lustered \underline{r}einforcement \underline{l}earning~(CRL), for efficient exploration in RL. CRL adopts clustering to divide the collected states into several clusters, based on which a bonus reward reflecting both novelty and quality in the neighboring area~(cluster) of the current state is given to the agent. Experiments on several continuous control tasks and several Atari-2600 games show that CRL can outperform other state-of-the-art methods, achieving the best performance in most cases. | reject | The paper discusses a simple but apparently effective clustering technique to improve exploration. There are no theoretical results, hence the reader relies fully on the experiments to evaluate the method. Unfortunately, an in-depth analysis of the results is missing, making it hard to properly evaluate the strengths and weaknesses. Furthermore, the authors have not provided any rebuttal to the reviewers' concerns. | train | [
"rkgyvJ3_uS",
"ryl70K8hYB",
"SklciHXTKB",
"HJx70oQouH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This paper presents a clear approach to improve the exploration strategy in reinforcement learning, which is named clustered reinforcement learning. The approach tries to push the agent to explore more states with high novelty and quality. It is done by adding a bonus reward shown in Eq. (3) to the reward functio... | [
3,
6,
3,
-1
] | [
1,
4,
5,
-1
] | [
"iclr_2020_BylD9eSYPS",
"iclr_2020_BylD9eSYPS",
"iclr_2020_BylD9eSYPS",
"iclr_2020_BylD9eSYPS"
] |
iclr_2020_H1MOqeHYvB | At Your Fingertips: Automatic Piano Fingering Detection | Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for piano fingering with more than 150K notes. We show that when running a previously proposed model for automatic piano fingering on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results.
In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to work on out-of-domain data, by fine-tuning it on out-of-domain augmentation proposed by a Generative Adversarial Network (GAN).
For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q | reject | The paper presents an automatic piano fingering algorithm. The idea is good, but the reviewers find that the novelty is limited and the work is incremental. All the reviewers agree to reject. | train | [
"BJxPvjsJjr",
"S1lelcj1iS",
"rkxDbgiWYS",
"ryeHZxXPYB",
"BJevc4E2tS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review.\n\nWe would like to address your review:\n1. We are not aware of any works using CycleGAN for pose estimation, or any works using sim2real in that manner (of fine-tuning the algorithms). It would be great to get some citations.\nRegarding the novelty of our method, here is a list of what... | [
-1,
-1,
1,
3,
1
] | [
-1,
-1,
5,
3,
1
] | [
"rkxDbgiWYS",
"ryeHZxXPYB",
"iclr_2020_H1MOqeHYvB",
"iclr_2020_H1MOqeHYvB",
"iclr_2020_H1MOqeHYvB"
] |
iclr_2020_S1gtclSFvr | Neural Phrase-to-Phrase Machine Translation | We present Neural Phrase-to-Phrase Machine Translation (\nppmt), a phrase-based translation model that uses a novel phrase-attention mechanism to discover relevant input (source) segments to generate output (target) phrases. We propose an efficient dynamic programming algorithm to marginalize over all possible segments at training time and use a greedy algorithm or beam search for decoding. We also show how to incorporate a memory module derived from an external phrase dictionary into \nppmt{} to improve decoding. Experimental results demonstrate that \nppmt{} outperforms the best neural phrase-based translation model \citep{huang2018towards} both in terms of model performance and speed, and is comparable to a state-of-the-art Transformer-based machine translation system \citep{vaswani2017attention}. | reject | This paper describes how the authors extend a previous phrase-based neural machine translation model to incorporate external dictionaries. The reviewers mention the small scale of the experiments, the lack of clarity in the writing, and the missing discussion of computational complexity. Even though the method seems to have the potential to impact the field, the paper is currently not strong enough for publication. The authors have not engaged in the discussion at all. | train | [
"HJx5Mz-xiH",
"SJlq8XkRFB",
"BJgZqngyqB",
"HkxaB1i7uS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This paper proposed an end-to-end phrase-to-phrase NMT model (NP2MT). I think the contribution of this paper is incremental and the idea is of less novelty. In general, the model is largely based on the NPMT model, where modification is the introduce of phrases in the source sentences. Then the author proposed the... | [
3,
3,
3,
-1
] | [
4,
5,
3,
-1
] | [
"iclr_2020_S1gtclSFvr",
"iclr_2020_S1gtclSFvr",
"iclr_2020_S1gtclSFvr",
"iclr_2020_S1gtclSFvr"
] |
iclr_2020_HyeKcgHFvS | Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces | We present an approach for efficiently training Gaussian Mixture Models (GMMs) with Stochastic Gradient Descent (SGD) on large amounts of high-dimensional data (e.g., images). In such a scenario, SGD is strongly superior in terms of execution time and memory usage, although it is conceptually more complex than the traditional Expectation-Maximization (EM) algorithm.
For enabling SGD training, we propose three novel ideas:
First, we show that minimizing an upper bound to the GMM log likelihood instead of the full one is feasible and numerically much more stable in high-dimensional spaces.
Secondly, we propose a new regularizer that prevents SGD from converging to pathological local minima.
And lastly, we present a simple method for enforcing the constraints inherent to GMM training when using SGD.
We also propose an SGD-compatible simplification to the full GMM model based on local principal directions, which avoids excessive memory use in high-dimensional spaces due to quadratic growth of covariance matrices.
Experiments on several standard image datasets show the validity of our approach, and we provide a publicly available TensorFlow implementation. | reject | The paper presents an SGD-based learning of a Gaussian mixture model, designed to match a data streaming setting.
The reviews state that the paper contains some quite good points, such as
* the simplicity and scalability of the method, and its robustness w.r.t. the initialization of the approach;
* the SOM-like approach used to avoid degenerate solutions;
Among the weaknesses are
* an insufficient discussion w.r.t. the state of the art, e.g. for online EM;
* the description of the approach seems not yet mature (e.g., the constraint enforcement boils down to considering that the $\pi_k$ are obtained using softmax; the discussion about the diagonal covariance matrix vs the use of local principal directions is not crystal clear);
* the fact that the experiments need to be strengthened.
I thus encourage the authors to rewrite and polish the paper, simplifying the description of the approach and better positioning it w.r.t. the state of the art (in particular, mentioning the data streaming motivation from the start). Also, more evidence, and a more thorough analysis thereof, must be provided to back up the approach and understand its limitations. | train | [
"r1e28s73iB",
"rkevwcX3iS",
"BkeHHcXnoB",
"r1gV3D7njr",
"ByxznYJAFB",
"HkeSb3jgqS",
"rkgY7q_4cS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the comprehensive review, it is the first time we publish on GMMs so this feedback is invaluable. Here our responses, we could not incorporate all the changes yet but they will come!\n\n- max-component: the use of the log(max p_k) instead of log(sum_k p_k) is sufficiently motivated to avoid well-know... | [
-1,
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ByxznYJAFB",
"HkeSb3jgqS",
"HkeSb3jgqS",
"rkgY7q_4cS",
"iclr_2020_HyeKcgHFvS",
"iclr_2020_HyeKcgHFvS",
"iclr_2020_HyeKcgHFvS"
] |
iclr_2020_BJxiqxSYPB | Learning to Prove Theorems by Learning to Generate Theorems | We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach significantly improves the theorem prover and advances the state of the art of automated theorem proving in Metamath. | reject | This paper proposes to augment training data for theorem provers by learning a deep neural generator that generates data to train a prover, resulting in an improvement over the Holophrasm baseline prover. The results were restricted to one particular mathematical formalism -- MetaMath, a limitation raised by one reviewer.
All reviewers agree that it's an interesting method for addressing an important problem. However there were some concerns about the strength of the experimental results from R4 and R1. R4 in particular wanted to see results on more datasets, an assessment with which I agree. Although the authors argued vigorously against using other datasets, I am not convinced. For instance, they claim that other datasets do not afford the opportunity to generate new theorems, or the human proofs provided cannot be understood by an automatic prover. In their words,
"The idea of theorem generation can be applied to other systems beyond Metamath, but realizing it on another system is highly nontrivial. It can even involve new research challenges. In particular, due to large differences in logic foundations, grammar, inference rules, and benchmarking environments, the generation process, which is a key component of our approach, would be almost completely different for a new system. And the entire pipeline essentially needs to be re-designed and re-coded from scratch for a new formal system, which can require an unreasonable amount of engineering."
It sounds like they've essentially tailored their approach for this one dataset, which limits the generality of their approach, a limitation that was not discussed in the paper.
There is also only one baseline considered, which renders their experimental findings rather weak. For these reasons, I think this work is not quite ready for publication at ICLR 2020, although future versions with stronger baselines and experiments could be quite impactful.
| train | [
"ryeWi65niH",
"HkePM05hiS",
"S1xrBR9hjS",
"SygFxR53jB",
"HJgDu05noH",
"Syl0Rpq2sH",
"SyxzOra6FS",
"rylsGeGYcH",
"BJe7FtbscH"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments and your time for reviewing our submission. We address your individual points below in a QA format. \n\nQ1: The main result of the paper is that an extra 35/2720 (1.2%) of the test theorems are proven, a 6% improvement over the Holophrasm baseline of 539. It is difficult to judge how re... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BJe7FtbscH",
"rylsGeGYcH",
"SyxzOra6FS",
"Syl0Rpq2sH",
"iclr_2020_BJxiqxSYPB",
"ryeWi65niH",
"iclr_2020_BJxiqxSYPB",
"iclr_2020_BJxiqxSYPB",
"iclr_2020_BJxiqxSYPB"
] |
iclr_2020_r1xo9grKPr | Flexible and Efficient Long-Range Planning Through Curious Exploration | Identifying algorithms that flexibly and efficiently discover temporally-extended multi-phase plans is an essential next step for the advancement of robotics and model-based reinforcement learning. The core problem of long-range planning is finding an efficient way to search through the tree of possible action sequences — which, if left unchecked, grows exponentially with the length of the plan. Existing non-learned planning solutions from the Task and Motion Planning (TAMP) literature rely on the existence of logical descriptions for the effects and preconditions for actions. This constraint allows TAMP methods to efficiently reduce the tree search problem but limits their ability to generalize to unseen and complex physical environments. In contrast, deep reinforcement learning (DRL) methods use flexible neural-network-based function approximators to discover policies that generalize naturally to unseen circumstances. However, DRL methods have had trouble dealing with the very sparse reward landscapes inherent to long-range multi-step planning situations. Here, we propose the Curious Sample Planner (CSP), which fuses elements of TAMP and DRL by using a curiosity-guided sampling strategy to learn to efficiently explore the tree of action effects. We show that CSP can efficiently discover interesting and complex temporally-extended plans for solving a wide range of physically realistic 3D tasks. In contrast, standard DRL and random sampling methods often fail to solve these tasks at all or do so only with a huge and highly variable number of training samples. We explore the use of a variety of curiosity metrics with CSP and analyze the types of solutions that CSP discovers. Finally, we show that CSP supports task transfer so that the exploration policies learned during experience with one task can help improve efficiency on related tasks. | reject | The authors consider planning problems with sparse rewards.
They propose an algorithm that performs planning based on an auxiliary reward
given by a curiosity score.
They test their approach on a range of tasks in simulated robotics environments
and compare to model-free baselines.
The reviewers mainly criticize the lack of competitive baselines; it comes as no
surprise that the baselines presented in the paper do not perform well, as they
make use of strictly less information about the problem.
The authors were very active in the rebuttal period; however, they eventually did not
fully manage to address the points raised by the reviewers.
Although the paper proposes an interesting approach, I think this paper is below
the acceptance threshold.
The experimental results lack baselines.
Furthermore, critical details of the algorithm are missing / hard to find. | train | [
"BkeEr613jB",
"rkekcIvosS",
"S1x6Hpeijr",
"S1gqiLITFH",
"Sye5LrlijB",
"S1xCZegiiB",
"SJx0n4p5jr",
"B1lo-SNciS",
"BkxfnUM9iH",
"rJgjoXfqoH",
"Hklo5ff5jB",
"B1gCTrWMiB",
"S1xenrWGoS",
"SkeVXHWfjr",
"rklSCEZMiS",
"rJemwNbGjH",
"BkggoJ_gir",
"ryeLTXvkiS",
"Syl0y5O6Fr",
"BkxtxbUJqH"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"official_reviewer",
"official_r... | [
"Maybe my view on the matter is too much that of an outsider. But still, it appears that dynamic programming (and appropriate approximations from the literature) should be applicable. Maybe the TAMP community has deemed these approaches insufficient, but the targeted community (i.e. not robot motion planning but re... | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6
] | [
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ryeLTXvkiS",
"S1x6Hpeijr",
"Sye5LrlijB",
"iclr_2020_r1xo9grKPr",
"S1xCZegiiB",
"SJx0n4p5jr",
"B1lo-SNciS",
"Hklo5ff5jB",
"rJgjoXfqoH",
"SkeVXHWfjr",
"iclr_2020_r1xo9grKPr",
"S1xenrWGoS",
"BkggoJ_gir",
"rklSCEZMiS",
"S1gqiLITFH",
"BkxtxbUJqH",
"ryeLTXvkiS",
"Syl0y5O6Fr",
"iclr_20... |
iclr_2020_HkxnclHKDr | Provable Representation Learning for Imitation Learning via Bi-level Optimization | A common strategy in modern learning systems is to learn a representation which is useful for many tasks, a.k.a, representation learning. We study this strategy in the imitation learning setting where multiple experts trajectories are available. We formulate representation learning as a bi-level optimization problem where the "outer" optimization tries to learn the joint representation and the "inner" optimization encodes the imitation learning setup and tries to learn task-specific parameters. We instantiate this framework for the cases where the imitation setting being behavior cloning and observation alone. Theoretically, we provably show using our framework that representation learning can reduce the sample complexity of imitation learning in both settings. We also provide proof-of-concept experiments to verify our theoretical findings. | reject | This paper proposes a methodology for learning a representation given multiple demonstrations, by optimizing the representation as well as the learned policy parameters. The paper includes some theoretical results showing that this is a sensible thing to do, and an empirical evaluation.
Post-discussion, the reviewers (and I!) agreed that this is an interesting approach that has a lot of promise. But there was still concern about the empirical evaluation and the writing. Hence I am recommending rejection. | train | [
"HygXYMhjoS",
"Hkl3lr2jsH",
"SJl0fuCtor",
"HkleVc2wjB",
"SJl5CNTDiH",
"HygbjvhwoB",
"rJx5HLM-oB",
"BJxHdQmqFS",
"r1lKVtETYH"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for providing useful feedback and comments! We made the following main changes in our revision\n\n- Clarified our precise contribution in the introduction (as pointed out by reviewer #4) and made a more detailed comparison with recents works in the related work section, based on feedback... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"iclr_2020_HkxnclHKDr",
"SJl0fuCtor",
"SJl5CNTDiH",
"r1lKVtETYH",
"rJx5HLM-oB",
"BJxHdQmqFS",
"iclr_2020_HkxnclHKDr",
"iclr_2020_HkxnclHKDr",
"iclr_2020_HkxnclHKDr"
] |
iclr_2020_rkx35lHKwB | Generalizing Reinforcement Learning to Unseen Actions | A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances. In this work, we address one such setting which requires solving a task with a novel set of actions. Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks. Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning. Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action. We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy. We illustrate the generalizability of the representation learning method and policy, to enable zero-shot generalization to previously unseen actions on challenging sequential decision-making environments. Our results and videos can be found at sites.google.com/view/action-generalization/ | reject | This paper proposes a method for reinforcement learning with unseen actions. More precisely, the problem setting considers a partitioned action space. The actions available during training (known actions) are a subset of all the actions available during evaluation (known and unknown actions). The method can choose unknown actions during evaluation through an embedding space over the actions, which defines a distance between actions. The action embedding is trained by a hierarchical variational autoencoder. The proposed method and algorithmic variants are applied to several domains in the experiments section.
The reviewers discussed both strengths and weaknesses of the paper. The strengths described by the reviewers include the use of the hierarchical VAE and the explanatory videos. The primary weakness is the absence of sufficient detail when describing the solution. The solution description is not sufficiently clear to understand the details of the regularization metrics. The details of regularization are essential when some actions are never seen in training. The reviewers also mentioned that the experiment analysis would benefit from more care.
This paper is not ready for publication, as the solution methods and experiments are not presented with sufficient detail. | train | [
"Syx57Rknir",
"Bye4_TyniS",
"ByxR76y2jr",
"BJxAN3y3sB",
"SJetIi12jB",
"rJlWHQoTYS",
"Hkg_dJ1RFr",
"rkxczGYx9H"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to sincerely thank all the reviewers for their constructive comments. We have revised our paper to incorporate them. The updates are summarized as follows:\n\n1. [Approach Section] Revised details and definitions on regularization metrics\nThe revision to method section (Section 3.4) mathematically d... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2020_rkx35lHKwB",
"Hkg_dJ1RFr",
"Hkg_dJ1RFr",
"rJlWHQoTYS",
"rkxczGYx9H",
"iclr_2020_rkx35lHKwB",
"iclr_2020_rkx35lHKwB",
"iclr_2020_rkx35lHKwB"
] |
iclr_2020_B1lTqgSFDH | Antifragile and Robust Heteroscedastic Bayesian Optimisation | Bayesian Optimisation is an important decision-making tool for high-stakes applications in drug discovery and materials design. An oft-overlooked modelling consideration however is the representation of input-dependent or heteroscedastic aleatoric uncertainty. The cost of misrepresenting this uncertainty as being homoscedastic could be high in drug discovery applications where neglecting heteroscedasticity in high throughput virtual screening could lead to a failed drug discovery program. In this paper, we propose a heteroscedastic Bayesian Optimisation scheme which both represents and optimises aleatoric noise in the suggestions. We consider cases such as drug discovery where we would like to minimise or be robust to aleatoric uncertainty but also applications such as materials discovery where it may be beneficial to maximise or be antifragile to aleatoric uncertainty. Our scheme features a heteroscedastic Gaussian Process (GP) as the surrogate model in conjunction with two acquisition heuristics. First, we extend the augmented expected improvement (AEI) heuristic to the heteroscedastic setting and second, we introduce a new acquisition function, aleatoric-penalised expected improvement (ANPEI) based on a simple scalarisation of the performance and noise objective. Both methods are capable of penalising or promoting aleatoric noise in the suggestions and yield improved performance relative to a naive implementation of homoscedastic Bayesian Optimisation on toy problems as well as a real-world optimisation problem. | reject | The reviewers initially gave scores of 1,1,3 citing primarily weak empirical results and a lack of theoretical justification. The experiments are presented on synthetic examples, which is a great start but the reviewers found that this doesn't give strong enough evidence that the methods developed in the paper would work well in practice. The authors did not submit an author response to the reviewers and as such the scores did not change during discussion. This paper would be significantly strengthened with the addition of experiments on actual problems e.g. related to drug discovery which is the motivation in the paper. | train | [
"Hkgy21W3ir",
"SylAl7jVKH",
"SJeQfDipYr",
"rJghy_26Kr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their helpful comments and feedback. We will endeavour to incorporate the suggestions into a more comprehensive and extended work to be submitted to another venue.",
"The paper considers the heterogeneous noise in Bayesian optimisation. The paper utilised the existing heterogeneous Gau... | [
-1,
3,
1,
1
] | [
-1,
5,
3,
5
] | [
"iclr_2020_B1lTqgSFDH",
"iclr_2020_B1lTqgSFDH",
"iclr_2020_B1lTqgSFDH",
"iclr_2020_B1lTqgSFDH"
] |
iclr_2020_BygacxrFwS | Fractional Graph Convolutional Networks (FGCN) for Semi-Supervised Learning | Due to high utility in many applications, from social networks to blockchain to power grids, deep learning on non-Euclidean objects such as graphs and manifolds continues to gain ever-increasing interest. Most currently available techniques are based on the idea of performing a convolution operation in the spectral domain with a suitably chosen nonlinear trainable filter and then approximating the filter with finite order polynomials. However, such polynomial approximation approaches tend to be both non-robust to changes in the graph structure and to capture primarily the global graph topology. In this paper we propose a new Fractional Generalized Graph Convolutional Networks (FGCN) method for semi-supervised learning, which casts L\'evy Flights into random walks on graphs and, as a result, allows us to more accurately account for the intrinsic graph topology and to substantially improve classification performance, especially for heterogeneous graphs. | reject | This paper proposes fractional graph convolutional networks for semi-supervised learning, using a classification function repurposed from previous work, as well as parallelization and weighted combinations of pooling functions. This leads to good results on several tasks.
Reviewers had concerns about the part played by each piece and the lack of comparison to recent related work, and asked for a better explanation of the rationale of the method and more experimental details. Authors provided explanations and details, and a more thorough set of comparisons to other work, showing better performance in some but not all cases.
However, concerns that the proposed innovations are too incremental remain.
Therefore, we cannot recommend acceptance. | train | [
"Bke3S1BTYS",
"ByeG3YPpFB",
"SyxdoagL5B",
"SyxTSA1JOH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"This paper presents a fractional generalized graph convolutional networks for semi-supervised learning. The authors design a new graph convolutional filter based on Levy Flights, and propose new feature propagation rules on graphs. Experimental results on multiple graph datasets are reported and discussed.\n\nPros... | [
6,
3,
8,
-1
] | [
4,
3,
3,
-1
] | [
"iclr_2020_BygacxrFwS",
"iclr_2020_BygacxrFwS",
"iclr_2020_BygacxrFwS",
"iclr_2020_BygacxrFwS"
] |
iclr_2020_rJl0ceBtDH | Semi-Supervised Boosting via Self Labelling | Attention to semi-supervised learning grows in machine learning as the price to expertly label data increases. Like most previous works in the area, we focus on improving an algorithm's ability to discover the inherent property of the entire dataset from a few expertly labelled samples. In this paper we introduce Boosting via Self Labelling (BSL), a solution to semi-supervised boosting when there is only limited access to labelled instances. Our goal is to learn a classifier that is trained on a data set that is generated by combining the generalization of different algorithms which have been trained with a limited amount of supervised training samples. Our method builds upon a combination of several different components. First, an inference aided ensemble algorithm developed on a set of weak classifiers will offer the initial noisy labels. Second, an agreement based estimation approach will return the average error rates of the noisy labels. Third and finally, a noise-resistant boosting algorithm will train over the noisy labels and their error rates to describe the underlying structure as closely as possible. We provide both analytical justifications and experimental results to back the performance of our model. Based on several benchmark datasets, our results demonstrate that BSL is able to outperform state-of-the-art semi-supervised methods consistently, achieving over 90% test accuracy with only 10% of the data being labelled. | reject | The paper presents a new semi-supervised boosting approach.
As the reviewers pointed out and the AC acknowledges, the paper is not ready for publication in several respects: (a) limited novelty/contribution, (b) reproducibility issues, and (c) arguable assumptions.
Hence, I recommend rejection. | train | [
"S1lx1ydnsS",
"SJl5VtlVsS",
"HyguTu9msB",
"SylvK_qmiH",
"HJx8lu9QjS",
"BJlqncT3KH",
"HJlClhzCtS",
"rkluCySRYr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read the other reviews and authors' responses. They do not change my view of the paper.",
"Thank the authors for clarifying. But even when taking the clarification into account, I'd still be shocked if any educated researcher can reproduce the experiments given the details in the paper/comments. I definit... | [
-1,
-1,
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"HyguTu9msB",
"SylvK_qmiH",
"BJlqncT3KH",
"HJlClhzCtS",
"rkluCySRYr",
"iclr_2020_rJl0ceBtDH",
"iclr_2020_rJl0ceBtDH",
"iclr_2020_rJl0ceBtDH"
] |
iclr_2020_BkxA5lBFvH | Hope For The Best But Prepare For The Worst: Cautious Adaptation In RL Agents | We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure? This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation. While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier. These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than na\"{i}ve adaptation as well as learning from scratch. Building on this intuition, we propose risk-averse domain adaptation (RADA). RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics. Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model. We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes. We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning. | reject | The work this paper presents is interesting, but it is not quite ready yet for publication at ICLR. Specifically, the motivation of particular choices could be better, such as summing over quantiles, as indicated by Reviewer 1. The inherent trade-off between safety and speed of adaptation and how this relates to the proposed method could also use a clearer exposition. | train | [
"BkePIn1g5S",
"r1e59N5hjH",
"SJebd45noB",
"SklAMNc3jr",
"HJltqtIeKS",
"B1xrQKZ3Fr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThis paper proposes to adapt RL agents from some set of training environments (which, in the current instantiation, vary in some simple respect) to a new domain. They build on a framework for model-based RL called PETS. \n\nThe approach goes as follows: \n\n2-step process\n * train probabilistic model-based RL a... | [
3,
-1,
-1,
-1,
3,
3
] | [
5,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_BkxA5lBFvH",
"iclr_2020_BkxA5lBFvH",
"B1xrQKZ3Fr",
"BkePIn1g5S",
"iclr_2020_BkxA5lBFvH",
"iclr_2020_BkxA5lBFvH"
] |
iclr_2020_ByxJjlHKwr | Learning Latent State Spaces for Planning through Reward Prediction | Model-based reinforcement learning methods typically learn models for high-dimensional state spaces by aiming to reconstruct and predict the original observations. However, drawing inspiration from model-free reinforcement learning, we propose learning a latent dynamics model directly from rewards. In this work, we introduce a model-based planning framework which learns a latent reward prediction model and then plans in the latent state-space. The latent representation is learned exclusively from multi-step reward prediction which we show to be the only necessary information for successful planning. With this framework, we are able to benefit from the concise model-free representation, while still enjoying the data-efficiency of model-based algorithms. We demonstrate our framework in multi-pendulum and multi-cheetah environments where several pendulums or cheetahs are shown to the agent but only one of them produces rewards. In these environments, it is important for the agent to construct a concise latent representation to filter out irrelevant observations. We find that our method can successfully learn an accurate latent reward prediction model in the presence of the irrelevant information while existing model-based methods fail. Planning in the learned latent state-space shows strong performance and high sample efficiency over model-free and model-based baselines. | reject | The authors propose a model-based RL algorithm, consisting of learning a deterministic multi-step reward prediction model and a vanilla CEM-based MPC actor. In contrast to prior work, the model does not attempt to learn from observations, nor is a value function learned. The approach is tested on tasks from the MuJoCo control suite. The paper is below the acceptance threshold. It is a variation on previous work from Hafner et al. Furthermore, I think the approach is fundamentally limited: all the learning derives from the immediate, dense reward signal, whereas the main challenges in RL are found in sparse-reward settings that require planning over long horizons, where value functions or similar methods to assign credit over long time windows are absolutely essential. | train | [
"SyeB0VlnsS",
"BJx_fxx2jS",
"rJl-kTyhoS",
"rJl62sy2jH",
"BJgKdT6JFS",
"HJlQrTscYr",
"SJxkccvy5H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewers for their helpful comments and feedback. We were truly appreciative to see that the problem we are addressing is well-received with several comments on how to proceed further in this domain. \n\nWe have addressed the general concern of including Deepmdp results in figure 4 (1-c... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2020_ByxJjlHKwr",
"BJgKdT6JFS",
"HJlQrTscYr",
"SJxkccvy5H",
"iclr_2020_ByxJjlHKwr",
"iclr_2020_ByxJjlHKwr",
"iclr_2020_ByxJjlHKwr"
] |
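The abstract above trains the latent space purely through multi-step reward prediction. A minimal sketch of that objective, with placeholder encoder/dynamics/reward components standing in for whatever networks the paper actually uses, might look like this:

```python
import numpy as np

def multi_step_reward_loss(encoder, dynamics, reward_head, obs0, actions, rewards):
    """Squared error of predicted rewards along an H-step latent rollout.

    encoder(obs) -> latent z; dynamics(z, a) -> next latent; reward_head(z) -> scalar.
    The latent is trained only to predict rewards and never to reconstruct observations.
    """
    z = encoder(obs0)
    loss = 0.0
    for a, r in zip(actions, rewards):
        z = dynamics(z, a)               # roll the latent model forward
        loss += (reward_head(z) - r) ** 2
    return loss / len(actions)

# Toy usage with linear placeholder components (illustration only).
rng = np.random.default_rng(1)
W_enc, W_dyn, w_r = rng.normal(size=(4, 8)), rng.normal(size=(4, 5)), rng.normal(size=4)
enc = lambda o: W_enc @ o
dyn = lambda z, a: W_dyn @ np.concatenate([z, a])
rew = lambda z: float(w_r @ z)
print(multi_step_reward_loss(enc, dyn, rew, rng.normal(size=8),
                             rng.normal(size=(10, 1)), rng.normal(size=10)))
```

A CEM-based MPC actor would then plan directly in this latent space using the same dynamics and reward head.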
iclr_2020_ByxloeHFPS | PROVABLY BENEFITS OF DEEP HIERARCHICAL RL | Modern complex sequential decision-making problems often require both low-level policy and high-level planning. Deep hierarchical reinforcement learning (Deep HRL) admits multi-layer abstractions which naturally model the policy in a hierarchical manner, and it is believed that deep HRL can reduce the sample complexity compared to the standard RL frameworks. We initiate the study of rigorously characterizing the complexity of Deep HRL. We present a model-based optimistic algorithm which demonstrates that the complexity of learning a near-optimal policy for deep HRL scales with the sum of number of states at each abstraction layer whereas standard RL scales with the product of number of states at each abstraction layer. Our algorithm achieves this goal by using the fact that distinct high-level states have similar low-level structures, which allows an efficient information exploitation and thus experiences from different high-level state-action pairs can be generalized to unseen state-actions. Overall, our result shows an exponential improvement using Deep HRL compared to the standard RL framework. | reject | This paper pursues an ambitious goal to provide a theoretical analysis of HRL in terms of regret bounds. However, the exposition of the ideas has severe clarity issues and the assumptions about HMDPs used are overly simplistic to have an impact in RL research.
Finally, there is agreement between the reviewers and AC that the novelty of the proposed ideas is a weak factor and that the paper needs substantial revision. | test | [
"BJxd6v70YS",
"Syl87c5noB",
"HkealYchiH",
"rklhDPqhiS",
"rkg7VVswKB",
"SJgUpKLeqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new kind of episodic finite MDPs called \"deep hierarchical MDP\" (hMDP). An L-layer hMDP can be *roughly* thought of as L episodic finite MDPs stacked together. A variant of UCRL2 [JOA10] is proposed to solve these hMDPs and some results from its regret analysis are provided. \n\nPros:\n\n1.... | [
1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_ByxloeHFPS",
"rkg7VVswKB",
"BJxd6v70YS",
"SJgUpKLeqH",
"iclr_2020_ByxloeHFPS",
"iclr_2020_ByxloeHFPS"
] |
iclr_2020_rkeeoeHYvr | AdvCodec: Towards A Unified Framework for Adversarial Text Generation | Machine learning (ML) especially deep neural networks (DNNs) have been widely applied to real-world applications. However, recent studies show that DNNs are vulnerable to carefully crafted \emph{adversarial examples} which only deviate from the original data by a small magnitude of perturbation.
While there has been great interest on generating imperceptible adversarial examples in continuous data domain (e.g. image and audio) to explore the model vulnerabilities, generating \emph{adversarial text} in the discrete domain is still challenging.
The main contribution of this paper is to propose a general targeted attack framework AdvCodec for adversarial text generation which addresses the challenge of discrete input space and can be easily adapted to general natural language processing (NLP) tasks.
In particular, we propose a tree based autoencoder to encode discrete text data into continuous vector space, upon which we optimize the adversarial perturbation. With the tree based decoder, it is possible to ensure the grammar correctness of the generated text; and the tree based encoder enables flexibility of making manipulations on different levels of text, such as sentence (\advcodecsent) and word (\advcodecword) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary \emph{targeted attack}. To demonstrate the effectiveness of the proposed method, we consider two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results show that \advcodec has successfully attacked both tasks. In particular, our attack causes a BERT-based sentiment classifier accuracy to drop from 0.703 to 0.006, and a BERT-based QA model's F1 score to drop from 88.62 to 33.21 (with best targeted attack F1 score as 46.54). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models. | reject | This paper proposes a method for generating text examples that are adversarial against a known text model, based on modifying the internal representations of a tree-structured autoencoder.
I side with the two more confident reviewers, and argue that this paper doesn't offer sufficient evidence that this method is useful in the proposed setting. I'm particularly swayed by R1, who raises some fairly basic concerns about the value of adversarial example work of this kind, where the generated examples look unnatural in most cases, and where label preservation is not guaranteed. I'm also concerned by the fact, which came up repeatedly in the reviews, that the authors claimed that using a tree-structured decoder encourages the model to generate grammatical sentences—I see no reason why this should be the case in the setting described here, and the paper doesn't seem to offer evidence to back this up. | val | [
"rygOacSKjH",
"HJeWfvBFjH",
"BJeL0LHtiB",
"Bkg4HIBtsH",
"HJglbUStoB",
"Skla6rHKsB",
"S1lpcqm6YS",
"HyeN5T9AtS",
"Byg5OXlgcr",
"BJlwnxYpYr",
"BJgacDp3FB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"General Responses\nWe thank the reviewers for their valuable comments and suggestions. Based on the review comments, we have revised Section 3 and Section 4 to make the presentation clearer. We also added 3 sections in the appendix and conducted additional experiments following the reviews’ suggestions.\n\nSpecifi... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
-1,
-1
] | [
"iclr_2020_rkeeoeHYvr",
"S1lpcqm6YS",
"S1lpcqm6YS",
"HyeN5T9AtS",
"Byg5OXlgcr",
"Byg5OXlgcr",
"iclr_2020_rkeeoeHYvr",
"iclr_2020_rkeeoeHYvr",
"iclr_2020_rkeeoeHYvr",
"BJgacDp3FB",
"iclr_2020_rkeeoeHYvr"
] |
iclr_2020_Bye-sxHFwB | A Gradient-Based Approach to Neural Networks Structure Learning | Designing the architecture of deep neural networks (DNNs) requires human expertise and is a cumbersome task. One approach to automatize this task has been considering DNN architecture parameters such as the number of layers, the number of neurons per layer, or the activation function of each layer as hyper-parameters, and using an external method for optimizing it. Here we propose a novel neural network model, called Farfalle Neural Network, in which important architecture features such as the number of neurons in each layer and the wiring among the neurons are automatically learned during the training process. We show that the proposed model can replace a stack of dense layers, which is used as a part of many DNN architectures. It can achieve higher accuracy using significantly fewer parameters. | reject | This paper proposes a neural network architecture that represents each neuron with input and output embeddings. Experiments on CIFAR show that the proposed method outperforms baseline models with a fully connected layer.
I like the main idea of the paper. However, I agree with R1 and R2 that experiments presented in the paper are not enough to convince readers of the benefit of the proposed method. In particular, I would like to see a more comprehensive set of results across a suite of datasets. It would be even better, although not necessary, if the authors apply this method on top of different base architectures in multiple domains. At the very least, the authors should run an experiment to compare the proposed approach with a feed forward network on a simple/toy classification dataset. I understand that these experiments require a lot of computational resources. The authors do not need to reach SotA, but they do need to provide more empirical evidence that the method is useful in practice.
I also would like to see more discussions with regards to the computational cost of the proposed method. How much slower/faster is training/inference compared to a fully connected network?
The writing of the paper can also be improved. There are quite a few typos throughout the paper, even in the abstract.
I recommend rejecting this paper for ICLR, but would encourage the authors to polish it and run a few more suggested experiments to strengthen the paper. | test | [
"rJgEXQv6tS",
"rJx_NPdojB",
"BJlJTjp9sH",
"SJev98McsB",
"HklEgucTtH",
"rylhQa4CKB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new neural network architecture, in which all neurons (called \"floating neurons\") are essentially endowed with \"input\" and \"output\" embedding vectors, the product of which defines the weight of the connection between any two neurons. The authors discuss two network architectures emplo... | [
6,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_Bye-sxHFwB",
"HklEgucTtH",
"rJgEXQv6tS",
"rylhQa4CKB",
"iclr_2020_Bye-sxHFwB",
"iclr_2020_Bye-sxHFwB"
] |
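The record above describes neurons that carry input and output embeddings, with connection strengths given by products of those embeddings. One plausible reading of that idea is sketched below; the shapes, activation, and naming are assumptions for illustration rather than the paper's architecture.

```python
import numpy as np

def embedding_defined_layer(x, out_emb_prev, in_emb_next, activation=np.tanh):
    """Compute activations when weights are dot products of neuron embeddings.

    Each neuron in the previous group has an "output" embedding and each neuron
    in the next group an "input" embedding; the effective weight matrix is
    W = in_emb_next @ out_emb_prev.T, so wiring is learned through embeddings
    rather than stored as an explicit dense weight matrix.
    """
    W = in_emb_next @ out_emb_prev.T      # (n_next, n_prev) implicit weights
    return activation(W @ x)

rng = np.random.default_rng(2)
x = rng.normal(size=64)                   # activations of 64 neurons
out_emb = rng.normal(size=(64, 16))       # 16-dimensional output embeddings
in_emb = rng.normal(size=(32, 16))        # 32 downstream neurons with input embeddings
print(embedding_defined_layer(x, out_emb, in_emb).shape)   # -> (32,)
```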
iclr_2020_HkxZigSYwS | Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence | Many applications require quickly and repeatedly solving a certain type of optimization problem, each time with new (but similar) data. However, state of the art general-purpose optimization methods may converge too slowly for real-time use. This shortcoming is addressed by “learning to optimize” (L2O) schemes, which construct neural networks from parameterized forms of the update operations of general-purpose methods. Inferences by each network form solution estimates, and networks are trained to optimize these estimates for a particular distribution of data. This results in task-specific algorithms (e.g., LISTA, ALISTA, and D-LADMM) that can converge order(s) of magnitude faster than general-purpose counterparts. We provide the first general L2O convergence theory by wrapping all L2O schemes for convex optimization within a single framework. Existing L2O schemes form special cases, and we give a practical guide for applying our L2O framework to other problems. Using safeguarding, our theory proves, as the number of network layers increases, the distance between inferences and the solution set goes to zero, i.e., each cluster point is a solution. Our numerical examples demonstrate the efficacy of our approach for both existing and new L2O methods. | reject | This paper gave a general L2O convergence theory called Learned Safeguarded KM (LSKM). The reviewers found flaws both in theory and in experiments. While all the reviewers have read the authors' rebuttal and gave detailed replies, they all agree to reject this paper. I agree also. | train | [
"rygYsus3jB",
"BJenzHsnoB",
"SJxz1go2iB",
"HJldWinqsB",
"rJeTAw1ciS",
"Skg73b6OoB",
"S1e_EmxLjH",
"H1xdZV-RKS",
"SylIK-A0FS",
"B1l7OLzF5B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks again for your feedback. We address each comment in turn.\n\n1) We acknowledge your point and will update the plots to use mean relative error, as suggested. \n\n2) Please see our recently posted response to Reviewer 2. In short, \"This is a THEORY paper whose contributions justify themselves mathematically... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
1
] | [
"rJeTAw1ciS",
"H1xdZV-RKS",
"HJldWinqsB",
"Skg73b6OoB",
"S1e_EmxLjH",
"B1l7OLzF5B",
"SylIK-A0FS",
"iclr_2020_HkxZigSYwS",
"iclr_2020_HkxZigSYwS",
"iclr_2020_HkxZigSYwS"
] |
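The safeguarding idea in the abstract above (keep the fast learned update only while it behaves, otherwise fall back to a provably convergent step) can be illustrated generically. The acceptance test, the KM operator, and the "learned" update below are assumptions chosen for a runnable toy example, not the LSKM scheme itself.

```python
import numpy as np

def safeguarded_iteration(x0, learned_update, km_operator, num_iters=50):
    """Run L2O-style updates wrapped by a fixed-point-residual safeguard.

    learned_update(x): one inference step of a learned optimizer (assumed given).
    km_operator(x):    a classical averaged/KM operator whose fixed points solve
                       the underlying convex problem.
    The learned step is accepted only if it does not increase ||T(x) - x||;
    otherwise the safe KM step is taken instead.
    """
    x = x0
    for _ in range(num_iters):
        candidate = learned_update(x)
        if np.linalg.norm(km_operator(candidate) - candidate) <= np.linalg.norm(km_operator(x) - x):
            x = candidate
        else:
            x = km_operator(x)
    return x

# Toy usage: gradient step for 0.5*||Ax - b||^2 as the safe operator, plus a
# deliberately unreliable "learned" update.
rng = np.random.default_rng(3)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
step = 1.0 / np.linalg.norm(A, 2) ** 2
T = lambda x: x - step * A.T @ (A @ x - b)
noisy_l2o = lambda x: x + 0.5 * rng.normal(size=5)
print(np.linalg.norm(A @ safeguarded_iteration(np.zeros(5), noisy_l2o, T) - b))
```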
iclr_2020_SkxMjxHYPS | Filter redistribution templates for iteration-less convolutional model reduction | Automatic neural network discovery methods face an enormous challenge caused by the size of the search space. A common practice is to split this space at different levels and to explore only a part of it. Neural architecture search methods look for how to combine a subset of layers, which are the most promising, to create an architecture while keeping a predefined number of filters in each layer. On the other hand, pruning techniques take a well-known architecture and look for the appropriate number of filters per layer. In both cases the exploration is made iteratively, training models several times during the search. Inspired by the advantages of the two previous approaches, we propose a fast option to find models with improved characteristics. We apply a small set of templates, which are considered promising, to redistribute the number of filters in an already existing neural network. When compared to the initial base models, we found that the resulting architectures, trained from scratch, surpass the original accuracy even after being reduced to fit the same amount of resources. | reject | This paper examines how different distributions of the layer-wise number of CNN filters, as partitioned into a set of fixed templates, impact the performance of various baseline deep architectures. Testing is conducted from the viewpoint of balancing accuracy with various resource metrics such as number of parameters, memory footprint, etc.
In the end, reviewer scores were partitioned as two accepts and two rejects. However, the actual comments indicate that both nominal accept reviewers expressed borderline opinions regarding this work (e.g., one preferred a score of 4 or 5 if available, while the other explicitly stated that the paper was borderline acceptance-worthy). Consequently in aggregate there was no strong support for acceptance and non-dismissable sentiment towards rejection.
For example, consistent with reviewer comments, a primary concern with this paper is that the novelty and technical contribution is rather limited, and hence, to warrant acceptance the empirical component should be especially compelling. However, all the experiments are limited to cifar10/cifar100 data, with the exception of a couple extra tests on tiny ImageNet added after the rebuttal. But these latter experiments are not so convincing since the base architecture has the best accuracy on VGG, and only on a single MobileNet test do we actually see clear-cut improvement. Moreover, these new results appear to be based on just a single trial per data set (this important detail is unclear), and judging from Figure 2 of the revision, MobileNet results on cifar data can have very high variance blurring the distinction between methods. It is therefore hard to draw firm conclusions at this point, and these two additional tiny ImageNet tests notwithstanding, we don't really know how to differentiate phenomena that are intrinsic to cifar data from other potentially relevant factors.
Overall then, my view is that far more testing with different data types is warranted to strengthen the conclusions of this paper and compensate for the modest technical contribution. Note also that training with all of these different filter templates is likely no less computationally expensive than some state-of-the-art pruning or related compression methods, and therefore it would be worth comparing head-to-head with such approaches. This is especially true given that in many scenarios, test-time computational resources are more critical than marginal differences in training time, etc. | train | [
"BkxV6MbAYH",
"SJeRVvEU_S",
"B1gK8nS3iH",
"rkxUzpNnjr",
"BJxvypEhjH",
"SkeCnhVhsr",
"B1gb9nE3sr",
"B1x3Sh43sS",
"BklbdaJLqH",
"SJxf1Z_85B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper changes the distribution of number of filters (called “filter distribution template”, or \"template\") at each layer in modern deep Conv models (e.g., VGG, Inception, ResNet) and discover that the model with unconventional (e.g., reverse base, quadratic) template sometimes outperform conventional one (w... | [
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2020_SkxMjxHYPS",
"iclr_2020_SkxMjxHYPS",
"SkeCnhVhsr",
"iclr_2020_SkxMjxHYPS",
"SJxf1Z_85B",
"SJeRVvEU_S",
"BkxV6MbAYH",
"BklbdaJLqH",
"iclr_2020_SkxMjxHYPS",
"iclr_2020_SkxMjxHYPS"
] |
iclr_2020_H1lMogrKDH | LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS | This paper demonstrates that a computational neural network model using ion channel-based conductances to transmit information can solve standard computer vision datasets at near state-of-the-art performance. Although not fully biologically accurate, this model incorporates fundamental biophysical principles underlying the control of membrane potential and the processing of information by Ohmic ion channels. The key computational step employs Conductance-Weighted Averaging (CWA) in place of the traditional affine transformation, representing a fundamentally different computational principle.
Importantly, CWA-based networks are self-normalizing and range-limited. We also demonstrate for the first time that a network with excitatory and inhibitory neurons and nonnegative synapse strengths can successfully solve computer vision problems. Although CWA models do not yet surpass the current state-of-the-art in deep learning, the results are competitive on CIFAR-10. There remain avenues for improving these networks, e.g. by more closely modeling ion channel function and connectivity patterns of excitatory and inhibitory neurons found in the brain. | reject | The paper studies non-spiking Hodgkin-Huxley models and shows that under a few simplifying assumptions the model can be trained using conventional backpropagation to yield accuracies almost comparable to state-of-the-art neural networks. Overall, the reviewers found the paper well-written, and the idea somewhat interesting, but criticized the experimental evaluation and potential low impact and interest to the community. While the method itself is sound, the overall assessment of the paper is somewhat below what's expected from papers accepted to ICLR, and I’m thus recommending rejection.
"HJePAd92jH",
"SkxFHOq3iB",
"Syx2FO9nor",
"BJl5fg_6FS",
"BkgeFBvRYS",
"HJgyLhgI9r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your encouraging comments. Regarding the implications of this work, the primary implication is that neuroscience models can be compared with deep learning results on deep learning benchmarks and with appropriate modelling, the results can be competitive, as one would expect given that the brain perfo... | [
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
3,
3,
5
] | [
"BJl5fg_6FS",
"HJgyLhgI9r",
"BkgeFBvRYS",
"iclr_2020_H1lMogrKDH",
"iclr_2020_H1lMogrKDH",
"iclr_2020_H1lMogrKDH"
] |
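Conductance-Weighted Averaging, as summarized in the abstract above, replaces the affine transformation with a weighted average whose weights are nonnegative conductances. The sketch below shows why such a unit is self-normalizing and range-limited; the exact functional form, constants, and excitatory/inhibitory split are illustrative assumptions rather than the paper's equations.

```python
import numpy as np

def cwa_unit(x, g_weights, reversal, eps=1e-8):
    """Conductance-Weighted Averaging in place of an affine transformation.

    x:         nonnegative presynaptic activations.
    g_weights: nonnegative synaptic strengths (per-input conductances).
    reversal:  reversal potential of each input, e.g. +1 for excitatory and
               -1 for inhibitory neurons.
    The output is a conductance-weighted average of the reversal potentials, so
    it is automatically bounded between min(reversal) and max(reversal).
    """
    g = g_weights * x
    return float(np.sum(g * reversal) / (np.sum(g) + eps))

rng = np.random.default_rng(4)
x = rng.uniform(size=100)
g_w = rng.uniform(size=100)                                # nonnegative synapse strengths
rev = np.where(rng.uniform(size=100) < 0.8, 1.0, -1.0)     # 80% excitatory inputs
print(cwa_unit(x, g_w, rev))                               # always lies in [-1, 1]
```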
iclr_2020_BJe4oxHYPB | Winning the Lottery with Continuous Sparsification | The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance than their original counterparts. The proposed algorithm to search for such sub-networks (winning tickets), Iterative Magnitude Pruning (IMP), consistently finds sub-networks with 90-95% less parameters which indeed train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.
In this paper, we propose a new algorithm to search for winning tickets, Continuous Sparsification, which continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. We show empirically that our method is capable of finding tickets that outperform the ones learned by Iterative Magnitude Pruning, while at the same time providing up to 5 times faster search, when measured in number of training epochs. | reject | This paper proposes a new algorithm called Continuous Sparsification (CS) to search for winning tickets (in the context of the Lottery Ticket Hypothesis from Frankle & Carbin (2019)), as an alternative to the Iterative Magnitude Pruning (IMP) algorithm proposed therein. CS continuously removes parameters from a network during training, and learns the sub-network's structure with gradient-based methods instead of relying on pruning strategies. The paper shows empirically that CS finds lottery tickets that outperform the ones learned by IMP with up to 5 times faster search, when measured in number of training epochs.
While this paper presents a novel contribution to pruning and to finding winning lottery tickets and is very well written, there are some concerns raised by the reviewers regarding the current evaluation. The paper presents no concrete data on the comparative costs of performing CS and IMP even though the core claim is that CS is more efficient. The paper does not disclose enough detail to compute these costs, and it seems like CS is more expensive than IMP for standard workflows. Moreover, the current presentation of the data through "pareto curves" is misleadingly favorable to CS. The reviewers suggest including more experiments on ImageNet and a more thorough evaluation as a pruning technique beyond the lottery ticket hypothesis. We recommend that the authors address the reviewers' detailed comments in an eventual resubmission.
| train | [
"Bklcxb3i5r",
"S1gZUJAdsH",
"B1xtv2p_oH",
"rJgsviadjB",
"HkgZhjp_jH",
"ryevJia_jS",
"BJl-_9auoH",
"S1xheijnFr",
"SJlzXqz19r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nTo the authors of paper 2504: I have posted a private comment for the reviewer/AC discussion period based on your author responses and your revised paper. I want you to be able to see my full response, but I can't post additional public comments to the paper. As such, I'm editing my review with exactly what I se... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_BJe4oxHYPB",
"Bklcxb3i5r",
"iclr_2020_BJe4oxHYPB",
"Bklcxb3i5r",
"Bklcxb3i5r",
"SJlzXqz19r",
"S1xheijnFr",
"iclr_2020_BJe4oxHYPB",
"iclr_2020_BJe4oxHYPB"
] |
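The "continuously removes parameters during training" part of the abstract above is commonly realized by multiplying each weight with a soft, temperature-annealed gate. The snippet below sketches that mechanism; the annealing schedule and threshold are assumed values, and the actual method additionally trains the gate parameters under a sparsity penalty, which is omitted here.

```python
import numpy as np
from scipy.special import expit

def soft_mask(s, beta):
    """Continuous relaxation of a binary pruning mask: sigmoid(beta * s)."""
    return expit(beta * s)

rng = np.random.default_rng(5)
w = rng.normal(size=1000)            # dense weights of some layer
s = rng.normal(size=1000)            # learnable per-weight mask parameters
for beta in (1.0, 10.0, 100.0):      # temperature annealed upward during training
    masked_w = w * soft_mask(s, beta)
    frac_off = float(np.mean(soft_mask(s, beta) < 0.01))
    print(f"beta={beta:6.1f}  approx. fraction of weights switched off: {frac_off:.2f}")
```

As beta grows, the mask hardens toward 0/1 values, so the sub-network structure is obtained by gradient descent rather than by repeated prune-and-rewind rounds.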
iclr_2020_r1lEjlHKPH | Better Knowledge Retention through Metric Learning | In a continual learning setting, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories. While deep neural nets have achieved resounding success in the classical setting, they are known to forget about knowledge acquired in prior episodes of learning if the examples encountered in the current episode of learning are drastically different from those encountered in prior episodes. This makes deep neural nets ill-suited to continual learning. In this paper, we propose a new model that can both leverage the expressive power of deep neural nets and is resilient to forgetting when new categories are introduced. We demonstrate an improvement in terms of accuracy on original classes compared to a vanilla deep neural net. | reject | Catastrophic forgetting in neural networks is a real problem, and this paper suggests a mechanism for avoiding this using a k-nearest neighbor mechanism in the final layer. The reason is that the layers below the last layer should not change significantly when very different data is introduced.
While the idea is interesting, none of the reviewers is entirely convinced about the execution and the empirical tests, which were partially inconclusive. The reviewers had a number of questions, which were only partially satisfactorily answered. While some of the reviewers had less familiarity with the specific research topic, the seemingly most knowledgeable reviewer does not think the paper is ready for publication.
On balance, I think the paper cannot be accepted in its current state. The idea is interesting, but needs more work. | train | [
"HJlvCi32KS",
"HygFUMZKYS",
"rylvJlj2sH",
"rkxA4junjS",
"r1eKEaEnoH",
"B1lOLa42jr",
"B1x-F64hsS",
"BkglOaNhsH",
"HkeBUpcTKr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper applies metric learning to reduce catastrophic forgetting on neural networks. By improving the expressiveness of the final layer, the authors claim that lower layers do not change weights as much, leading to better results in continual learning. They provide large-scale experiments on different datasets... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_r1lEjlHKPH",
"iclr_2020_r1lEjlHKPH",
"rkxA4junjS",
"r1eKEaEnoH",
"HygFUMZKYS",
"r1eKEaEnoH",
"HkeBUpcTKr",
"HJlvCi32KS",
"iclr_2020_r1lEjlHKPH"
] |
iclr_2020_S1eSoeSYwr | Deep Evidential Uncertainty | Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust and efficient measures of uncertainty are crucial. While it is possible to train regression networks to output the parameters of a probability distribution by maximizing a Gaussian likelihood function, the resulting model remains oblivious to the underlying confidence of its predictions. In this paper, we propose a novel method for training deterministic NNs to not only estimate the desired target but also the associated evidence in support of that target. We accomplish this by placing evidential priors over our original Gaussian likelihood function and training our NN to infer the hyperparameters of our evidential distribution. We impose priors during training such that the model is penalized when its predicted evidence is not aligned with the correct output. Thus the model estimates not only the probabilistic mean and variance of our target but also the underlying uncertainty associated with each of those parameters. We observe that our evidential regression method learns well-calibrated measures of uncertainty on various benchmarks, scales to complex computer vision tasks, and is robust to adversarial input perturbations.
| reject | This paper presents a method for providing uncertainty for deep learning regressors through assigning a notion of evidence to the predictions. This is done by putting priors on the parameters of the Gaussian outputs of the model and estimating these via an empirical Bayes-like optimization. The reviewers in general found the methodology sensible although incremental in light of Sensoy et al. and Malinin & Gales but found the experiments thorough. A comment on the paper pointed out that the approach was very similar to something presented in the thesis of Malinin (it seems unfair to expect the authors to have been aware of this, but the thesis should be cited and not just the paper which is a different contribution). In discussion, one reviewer raised their score from weak reject to weak accept but the highest scoring reviewer explicitly was not willing to champion the paper and raise their score to accept. Thus the recommendation here is to reject. Taking the reviewer feedback into account, incorporating the proposed changes and adding more careful treatment of related work would make this a much stronger submission to a future conference. | train | [
"ryxl-cgTYB",
"rke4jm93jB",
"BJeSI6NnjB",
"HklTztaioB",
"rkxhKY_3sB",
"rkgnfCIhjr",
"Byxa12f3oH",
"r1lApDpoiS",
"rklwFlL9sr",
"Bkeg_7hvjB",
"H1x799HvoH",
"HyliHcHDoH",
"r1xdJ8HPir",
"HklhT8o3KH",
"HJlBIefWcB",
"S1xHbOpZ_r",
"HkxWlmF5wH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"public"
] | [
"This paper proposed deep evidential regression, a method for training neural networks to not only estimate the output but also the associated evidence in support of that output. The main idea follows the evidential deep learning work proposed in (Sensoy et al., 2018) extending it from the classification regime to ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"iclr_2020_S1eSoeSYwr",
"rkxhKY_3sB",
"Byxa12f3oH",
"iclr_2020_S1eSoeSYwr",
"rkgnfCIhjr",
"r1lApDpoiS",
"H1x799HvoH",
"rklwFlL9sr",
"HyliHcHDoH",
"S1xHbOpZ_r",
"HklhT8o3KH",
"ryxl-cgTYB",
"HJlBIefWcB",
"iclr_2020_S1eSoeSYwr",
"iclr_2020_S1eSoeSYwr",
"iclr_2020_S1eSoeSYwr",
"iclr_2020... |
iclr_2020_rygUoeHKvB | Deep exploration by novelty-pursuit with maximum state entropy | Efficient exploration is essential to reinforcement learning in huge state space. Recent approaches to address this issue include the intrinsically motivated goal exploration process (IMGEP) and the maximum state entropy exploration (MSEE). In this paper, we disclose that goal-conditioned exploration behaviors in IMGEP can also maximize the state entropy, which bridges the IMGEP and the MSEE. From this connection, we propose a maximum entropy criterion for goal selection in goal-conditioned exploration, which results in the new exploration method novelty-pursuit. Novelty-pursuit performs the exploration in two stages: first, it selects a goal for the goal-conditioned exploration policy to reach the boundary of the explored region; then, it takes random actions to explore the non-explored region. We demonstrate the effectiveness of the proposed method in environments from simple maze environments, Mujoco tasks, to the long-horizon video game of SuperMarioBros. Experiment results show that the proposed method outperforms the state-of-the-art approaches that use curiosity-driven exploration. | reject | There is insufficient support to recommend accepting this paper. The reviewers unanimously recommended rejection, and did not change their recommendation after the author response period. The technical depth of the paper was criticized, as was the experimental evaluation. The review comments should help the authors strenghen this work. | train | [
"r1l1D2jtjH",
"BylzmJ2For",
"BJlqkTsFsS",
"BJezCosFsH",
"HygSxD3hYH",
"S1xkntLaFB",
"BJxjEFuqqr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your detail suggestions. \n\nQ1、Q2、Q3、Q6b、Q7、Q10: Writing styles \n\nThanks for your suggestion. We have revised our writing styles according to your guidance.\n\nQ4: Definition and notation\n\n$\\gamma$ = 0 is excluded since it beyond a standard reinforcement learning (i.e., the current decision co... | [
-1,
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"S1xkntLaFB",
"iclr_2020_rygUoeHKvB",
"HygSxD3hYH",
"BJxjEFuqqr",
"iclr_2020_rygUoeHKvB",
"iclr_2020_rygUoeHKvB",
"iclr_2020_rygUoeHKvB"
] |
iclr_2020_HygPjlrYvB | Learning from Positive and Unlabeled Data with Adversarial Training | Positive-unlabeled (PU) learning learns a binary classifier using only positive and unlabeled examples without labeled negative examples. This paper shows that the GAN (Generative Adversarial Networks) style of adversarial training is quite suitable for PU learning. GAN learns a generator to generate data (e.g., images) to fool a discriminator which tries to determine whether the generated data belong to a (positive) training class. PU learning is similar and can be naturally casted as trying to identify (not generate) likely positive data from the unlabeled set also to fool a discriminator that determines whether the identified likely positive data from the unlabeled set (U) are indeed positive (P). A direct adaptation of GAN for PU learning does not produce a strong classifier. This paper proposes a more effective method called Predictive Adversarial Networks (PAN) using a new objective function based on KL-divergence, which performs much better.~Empirical evaluation using both image and text data shows the effectiveness of PAN. | reject | Thanks for your feedback to the reviewers, which helped us a lot to better understand your paper.
Through the discussion, the overall evaluation of this paper was significantly improved.
However, given the very high competition at ICLR2020, this submission is still below the bar unfortunately.
We hope that the discussion with the reviewers will help you improve your paper for potential future publication. | val | [
"Bke8e-RTFS",
"ryxIKrdsiS",
"SJg7FV1OoS",
"H1enVb4Kor",
"SJeZrS1_sr",
"r1lbxrkdjH",
"SylVX4J_jH",
"SylW40kRFH",
"HkxCbL-e9H"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"<Paper summary>\nThe authors proposed a novel method for positive-unlabeled learning. In the proposed method, adversarial training is adopted to extract positive samples from unlabeled data. In the experiments, the proposed method achieves better performance compared with state-of-the-art methods. \n\n<Review summ... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_HygPjlrYvB",
"H1enVb4Kor",
"SylW40kRFH",
"r1lbxrkdjH",
"Bke8e-RTFS",
"Bke8e-RTFS",
"HkxCbL-e9H",
"iclr_2020_HygPjlrYvB",
"iclr_2020_HygPjlrYvB"
] |
iclr_2020_B1xoserKPH | Analyzing Privacy Loss in Updates of Natural Language Models | To continuously improve quality and reflect changes in data, machine learning-based services have to regularly re-train and update their core models. In the setting of language models, we show that a comparative analysis of model snapshots before and after an update can reveal a surprising amount of detailed information about the changes in the data used for training before and after the update.
We discuss the privacy implications of our findings, propose mitigation strategies and evaluate their effect. | reject | This paper reports empirical implications of privacy ‘leaks’ in language models. Reviewers generally agree that the results look promising and interesting, but the paper isn’t fully developed yet. A few reviewers pointed out that framing the paper to better indicate the broader implications of the observed symptoms would greatly improve it. Another suggested better placing this work in the context of other related work. Overall, this paper could use another cycle of polishing/enhancing the results.
| test | [
"BkxEloKniH",
"B1xn_vNhjr",
"B1gdvQXmoH",
"Syl2wVmXjH",
"ryg4Q4m7or",
"H1xFTQQmiS",
"S1lzB4m7jS",
"H1gshz42KB",
"BJg-Ic3hFr",
"SyejLWDpKr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"By well-trained, I mean models that are empirically competitive on LM benchmarks. The point you make about larger/higher-capacity models memorizing large amounts of data is true and I'd also theorize that these models will exacerbate the problem that you observe. I didn't consider that initially.\n\nI appreciate t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"Syl2wVmXjH",
"B1gdvQXmoH",
"iclr_2020_B1xoserKPH",
"H1gshz42KB",
"BJg-Ic3hFr",
"SyejLWDpKr",
"BJg-Ic3hFr",
"iclr_2020_B1xoserKPH",
"iclr_2020_B1xoserKPH",
"iclr_2020_B1xoserKPH"
] |
iclr_2020_rJx2slSKDS | Latent Variables on Spheres for Sampling and Inference | Variational inference is a fundamental problem in Variational AutoEncoder (VAE). The optimization with lower bound of marginal log-likelihood results in the distribution of latent variables approximate to a given prior probability, which is the dilemma of employing VAE to solve real-world problems. By virtue of high-dimensional geometry, we propose a very simple algorithm completely different from existing ones to alleviate the variational inference in VAE. We analyze the unique characteristics of random variables on spheres in high dimensions and prove that Wasserstein distance between two arbitrary data sets randomly drawn from a sphere are nearly identical when the dimension is sufficiently large. Based on our theory, a novel algorithm for distribution-robust sampling is devised. Moreover, we reform the latent space of VAE by constraining latent variables on the sphere, thus freeing VAE from the approximate optimization of posterior probability via variational inference. The new algorithm is named Spherical AutoEncoder (SAE). Extensive experiments by sampling and inference tasks validate our theoretical analysis and the superiority of SAE. | reject | This paper proposes to improve VAE/GAN by performing variational inference with a constraint that the latent variables lie on a sphere. The reviewers find some technical issues with the paper (R3's comment regarding theorem 3). They also found that the method is not motivated well, and the paper is not convincing. Based on this feedback, I recommend to reject the paper. | train | [
"Hketwk83jH",
"BJl_-qJEiH",
"B1xPFFJ4jH",
"Byek4o59or",
"HylUhjk4iB",
"HJga0cRjYr",
"H1e_Hdw6tr",
"BJeMRX6atB",
"r1lf20oUFB",
"S1gC86mHYr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your response.\n\nI went through the rebuttal and the revised version of the paper, and most of my original concerns remain unaddressed:\n\n- The positioning of the paper with respect to VAE, variational inference is confusing and even misleading.\n\n- Attributing generation issues in VAE to the fact... | [
-1,
-1,
-1,
-1,
-1,
6,
1,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
-1,
-1
] | [
"BJl_-qJEiH",
"H1e_Hdw6tr",
"BJeMRX6atB",
"iclr_2020_rJx2slSKDS",
"HJga0cRjYr",
"iclr_2020_rJx2slSKDS",
"iclr_2020_rJx2slSKDS",
"iclr_2020_rJx2slSKDS",
"S1gC86mHYr",
"iclr_2020_rJx2slSKDS"
] |
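The Spherical AutoEncoder described above constrains latent codes to a sphere so that prior samples and encoded data occupy the same support. The two helpers below sketch that constraint; the radius and dimensions are arbitrary illustration choices.

```python
import numpy as np

def project_to_sphere(z, radius=1.0):
    """Normalize latent codes so they lie on a sphere of the given radius."""
    return radius * z / np.linalg.norm(z, axis=-1, keepdims=True)

def sample_sphere_prior(num, dim, radius=1.0, rng=np.random.default_rng(6)):
    """Sample uniformly on the sphere by normalizing Gaussian draws."""
    return project_to_sphere(rng.normal(size=(num, dim)), radius)

codes = project_to_sphere(np.random.default_rng(7).normal(size=(5, 128)))
prior = sample_sphere_prior(5, 128)
print(np.linalg.norm(codes, axis=1), np.linalg.norm(prior, axis=1))  # all norms equal 1.0
```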
iclr_2020_BklhsgSFvB | Learning to Transfer via Modelling Multi-level Task Dependency | Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, multi-task learning framework can improve the performance significantly. However, most of the existing works are under the assumption that the predefined tasks are related to each other. Thus, their applications on real-world are limited, because rare real-world problems are closely related. Besides, the understanding of relationships among tasks has been ignored by most of the current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency, which constructed attention based dependency relationships among different tasks. At the same time, the dependency relationship can be used to guide what knowledge should be transferred, thus the performance of our model also be improved. To show the effectiveness of our model and the importance of considering multi-level dependency relationship, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods. | reject | In this work, the authors address a multi-task learning setting and propose to enhance the estimation of task dependency with an attention mechanism capturing sample-dependant measure of task relatedness. All reviewers and AC agree that the current manuscript lacks clarity and convincing empirical evaluations that clearly show the benefits of the proposed approach w.r.t. state-of-the-art methods. Specifically, the reviewers raised several important concerns that were viewed by AC as critical issues:
(1) the empirical evaluations need to be significantly strengthened to show the benefits of the proposed methods over SOTA -- see R2’s request to empirically compare with the related recent work [Taskonomy, 2018] and R4’s request to compare with the work [End-to-end multi-task learning with attention, 2018]. R4 also suggested to include an ablation study to assess the benefits of the attention mechanism. Pleased to report that the authors addressed the ablation study in their rebuttal and confirmed that the proposed attention mechanism plays an important role in the performance of the proposed method.
(2) All reviewers see an issue with the presentation clarity of the conceptual and technical contributions -- see R4’s and R2’s detailed comments and questions regarding technical contributions; see R3’s and R4’s comments that the distinction between the general task dependency and the data-driven dependency is either not significant or is not clearly articulated; finding better examples to illustrate the difference (instead of reiterating the current ones) would strengthen the clarity and conceptual contributions.
The general consensus among the reviewers and AC is that, in its current state, the manuscript is not ready for publication. It needs more clarification, empirical studies, and polish to achieve the desired goal.
| train | [
"SJxJYVNg9S",
"ByxJhPKOqS",
"rklspDjooH",
"Hyl4LLisjr",
"Bkg2ISjisr",
"ryxHumoisB",
"SyluxUeL5r",
"r1xqvc3uqr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The submission argues for the modeling the relationships between different tasks and incorporating such relationships when training multi-task frameworks. Though the basic concept (usefulness of modeling and incorporating the relationships among tasks) is valid, the submission has a number of critical issues, name... | [
1,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
5,
4,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_BklhsgSFvB",
"iclr_2020_BklhsgSFvB",
"SJxJYVNg9S",
"SyluxUeL5r",
"ByxJhPKOqS",
"r1xqvc3uqr",
"iclr_2020_BklhsgSFvB",
"iclr_2020_BklhsgSFvB"
] |
iclr_2020_r1lnigSFDr | Improving the Gating Mechanism of Recurrent Neural Networks | In this work, we revisit the gating mechanisms widely used in various recurrent and feedforward networks such as LSTMs, GRUs, or highway networks. These gates are meant to control information flow, allowing gradients to better propagate back in time for recurrent models. However, to propagate gradients over very long temporal windows, they need to operate close to their saturation regime. We propose two independent and synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyper-parameters, and are aimed at improving learnability of the gates when they are close to saturation. Our proposals are theoretically justified, and we show a generic framework that encompasses other recently proposed gating mechanisms such as chrono-initialization and master gates . We perform systematic analyses and ablation studies on the proposed improvements and evaluate our method on a wide range of applications including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning. Empirically, our proposed gating mechanisms robustly increase the performance of recurrent models such as LSTMs, especially on tasks requiring long temporal dependencies. | reject | This submission proposes a new gating mechanism to improve gradient information propagation during back-propagation when training recurrent neural networks.
Strengths:
-The problem is interesting and important.
-The proposed method is novel.
Weaknesses:
-The justification and motivation of the UGI mechanism was not clear and/or convincing.
-The experimental validation is sometimes hard to interpret and the proposed improvements of the gating mechanism are not well-reflected in the quantitative results.
-The submission was hard to read and some images were initially illegible.
The authors improved several of the weaknesses but not to the desired level.
AC agrees with the majority recommendation to reject. | train | [
"r1la3UHzoB",
"rkewrIrMsS",
"BJxBlUSfsr",
"rye_5HHzsr",
"S1llO2HCKH",
"rJxC9rZ2FH",
"rkl3__O0tr",
"HkgP_s1RYr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewer for pointing out potentially confusing aspects of the submission, which we address below.\n\n>>> a. In Figure 3(a), where are the other baselines? Are they performing too badly so that they can not show up in the figure? It needs more explanation.\n b. In Figure 3(b), actually a lot of... | [
-1,
-1,
-1,
-1,
6,
3,
3,
-1
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
-1
] | [
"rkl3__O0tr",
"rJxC9rZ2FH",
"S1llO2HCKH",
"iclr_2020_r1lnigSFDr",
"iclr_2020_r1lnigSFDr",
"iclr_2020_r1lnigSFDr",
"iclr_2020_r1lnigSFDr",
"iclr_2020_r1lnigSFDr"
] |
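One of the two modifications referenced in the abstract above is usually read as a gate-bias initialization that spreads initial gate activations over (0, 1) instead of clustering them at a single value. The sketch below shows that reading; it omits the paper's second (refine-gate) modification, and all constants are assumptions.

```python
import numpy as np

def uniform_gate_bias_init(hidden_size, rng=np.random.default_rng(8)):
    """Set gate biases to logit(u), u ~ Uniform(0, 1), per hidden unit.

    Some gates then start near saturation (long memory) and others near zero
    (short memory), which helps gradients reach the saturated regime.
    """
    u = rng.uniform(low=1e-3, high=1 - 1e-3, size=hidden_size)
    return np.log(u) - np.log1p(-u)

bias = uniform_gate_bias_init(256)
gates_at_init = 1.0 / (1.0 + np.exp(-bias))
print(gates_at_init.min(), gates_at_init.mean(), gates_at_init.max())
```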
iclr_2020_Hkg0olStDr | Multi-Step Decentralized Domain Adaptation | Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios. In this paper, we highlight two significant bottlenecks for uDA, namely excessive centralization and poor support for distributed domain datasets. Our proposed framework, MDDA, is powered by a novel collaborator selection algorithm and an effective distributed adversarial training method, and allows for uDA methods to work in a decentralized and privacy-preserving way.
| reject | This paper proposes a solution to the decentralized privacy preserving domain adaptation problem. In other words, how to adapt to a target domain without explicit data access to other existing domains. In this scenario the authors propose MDDA which consists of both a collaborator selection algorithm based on minimal Wasserstein distance as well as a technique for adapting through sharing discriminator gradients across domains.
The reviewers had split scores for this work, with two recommending weak accept and two recommending weak reject. However, both reviewers who recommended weak accept explicitly mentioned that their recommendation was borderline (an option not available for ICLR 2020). The main issues raised by the reviewers were the lack of algorithmic novelty and the lack of comparison to prior privacy preserving work. The authors agreed that their goal was not to introduce a new domain adaptation algorithm, but rather to propose a generic solution to extend existing algorithms to the case of privacy preserving and decentralized DA. The authors also provided extensive revisions in response to the reviewers' comments. Though the reviewers were convinced on some points (like privacy preserving arguments), there still remained key outstanding issues that were significant enough to cause the reviewers not to update their recommendations.
Therefore, this paper is not recommended for acceptance in its current form. We encourage the authors to build off the revisions completed during the rebuttal phase and any outstanding comments from the reviewers. | train | [
"ByxBx9sFOS",
"rylZ6nz59B",
"HJx6LU_nor",
"Hkg7iGi9iS",
"Bkl3MaIFsr",
"BylUbevFiB",
"ByefrXPFiS",
"H1gdhxwFjr",
"S1lSfA8Fsr",
"SyehsyHxKr",
"rkgK7eQYcH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper focuses on the problem of domain adaptation among multiple domains when some domains are not available on the same machine. The paper builds a decentralized algorithm based on previous domain adaptation methods.\n\nPros: \n1. The problem is novel and practical. Previous domain adaptation assumes that sou... | [
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_Hkg0olStDr",
"iclr_2020_Hkg0olStDr",
"iclr_2020_Hkg0olStDr",
"ByefrXPFiS",
"SyehsyHxKr",
"rkgK7eQYcH",
"rylZ6nz59B",
"ByxBx9sFOS",
"Bkl3MaIFsr",
"iclr_2020_Hkg0olStDr",
"iclr_2020_Hkg0olStDr"
] |
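The meta-review above notes that MDDA's collaborator selection picks the source domain with the smallest Wasserstein distance to the target. A simplified stand-in for that step, operating on 1-D per-domain score summaries rather than whatever statistics the full pipeline uses, is sketched below.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def select_collaborator(target_scores, source_scores_by_domain):
    """Return the name of the source domain closest to the target distribution."""
    distances = {name: wasserstein_distance(target_scores, scores)
                 for name, scores in source_scores_by_domain.items()}
    return min(distances, key=distances.get), distances

rng = np.random.default_rng(9)
target = rng.normal(loc=0.0, size=500)
sources = {"domain_A": rng.normal(loc=0.2, size=500),
           "domain_B": rng.normal(loc=2.0, size=500)}
print(select_collaborator(target, sources)[0])   # expected: "domain_A"
```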
iclr_2020_S1ly2grtvB | IS THE LABEL TRUSTFUL: TRAINING BETTER DEEP LEARNING MODEL VIA UNCERTAINTY MINING NET | In this work, we consider a new problem of training deep neural network on partially labeled data with label noise. As far as we know,
there have been very few efforts to tackle such problems.
We present a novel end-to-end deep generative pipeline for improving classifier performance when dealing with such data problems. We call it
Uncertainty Mining Net (UMN).
During the training stage, we utilize all the available data (labeled and unlabeled) to train the classifier via a semi-supervised generative framework.
During training, UMN estimates the uncertainty of the labels to focus on clean data for learning. More precisely, UMN applies a sample-wise label uncertainty estimation scheme.
Extensive experiments and comparisons against state-of-the-art methods on several popular benchmark datasets demonstrate that UMN can reduce the effects of label noise and significantly improve classifier performance. | reject | The paper presents an interesting idea but all reviewers pointed out problems with the writing (eg clarity of the motivation) and with the motivation of the experiments and link to the contest. The rebuttal helped, but it is clear that the paper requires more work before being acceptable to ICLR. | train | [
"Hkx78mZ3tS",
"rkgCOgohjB",
"BygADgi3jS",
"BJgqHgs3oS",
"rJgqWIYnjS",
"SJlrcMYhiH",
"SyeQxtH3tr",
"Hye15ApRYr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Update after rebuttal:\nThe rebuttal addressed a few of my concerns, but there is still a major issue. Namely, UMN is claimed to work on sample-wise label noise, but there are no experiments to support this (note that this is different from non-uniform class-dependent label noise). As fixing this would require lar... | [
3,
-1,
-1,
-1,
-1,
-1,
1,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_S1ly2grtvB",
"BygADgi3jS",
"BJgqHgs3oS",
"SyeQxtH3tr",
"Hkx78mZ3tS",
"Hye15ApRYr",
"iclr_2020_S1ly2grtvB",
"iclr_2020_S1ly2grtvB"
] |
iclr_2020_HkgxheBFDS | Undersensitivity in Neural Reading Comprehension | Neural reading comprehension models have recently achieved impressive generalisation results, yet still perform poorly when given adversarially selected input. Most prior work has studied semantically invariant text perturbations which cause a model’s prediction to change when it should not. In this work we focus on the complementary problem: excessive prediction undersensitivity where input text is meaningfully changed, and the model’s prediction does not change when it should. We formulate a noisy adversarial attack which searches among semantic variations of comprehension questions for which a model still erroneously produces the same answer as the original question – and with an even higher probability. We show that – despite comprising unanswerable questions – SQuAD2.0 and NewsQA models are vulnerable to this attack and commit a substantial fraction of errors on adversarially generated questions. This indicates that current models—even where they can correctly predict the answer—rely on spurious surface patterns and are not necessarily aware of all information provided in a given comprehension question. Developing this further, we experiment with both data augmentation and adversarial training as defence strategies: both are able to substantially decrease a model’s vulnerability to undersensitivity attacks on held out evaluation data. Finally, we demonstrate that adversarially robust models generalise better in a biased data setting with a train/evaluation distribution mismatch; they are less prone to overly rely on predictive cues only present in the training set and outperform a conventional model in the biased data setting by up to 11% F1. | reject | The paper investigates the sensitivity of a QA model to perturbations in the input, by replacing content words, such as named entities and nouns, in questions to make the question not answerable by the document. Experimental analysis demonstrates that, while the original QA performance is not hurt, the models become significantly less vulnerable to such attacks. Reviewers all agree that the paper includes a thorough analysis; at the same time, they all suggested extensions to the paper, such as comparison to earlier work and experimental results, which the authors made in the revision. However, reviewers also question the novelty of the approach, given data augmentation methods. Hence, I suggest rejecting the paper.
"HyeC8utKiS",
"Hyg5owFYjH",
"ByxhXUFtoS",
"BkeX04NtKr",
"HJl-ZkURYB",
"SkgpGiD0tB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review.\n\nYes, one could expect that the model changes its prediction probabilities as entities in the question are exchanged, however models trained on both SQuAD2.0 and NewsQA are expected to detect when a question is unanswerable, as this is explicitly annotated and a requirement to complete... | [
-1,
-1,
-1,
6,
3,
6
] | [
-1,
-1,
-1,
4,
5,
3
] | [
"BkeX04NtKr",
"HJl-ZkURYB",
"SkgpGiD0tB",
"iclr_2020_HkgxheBFDS",
"iclr_2020_HkgxheBFDS",
"iclr_2020_HkgxheBFDS"
] |
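The undersensitivity attack described above searches among meaning-changing question perturbations for ones the model answers identically, with equal or higher probability. The toy search below illustrates the acceptance criterion; the perturbation generator and the dummy "model" (which only checks for a spurious surface cue) are assumptions for illustration, not the paper's attack implementation.

```python
def undersensitivity_attack(question, answer_prob_fn, candidate_perturbations):
    """Return a perturbed question that keeps the original answer at least as probable."""
    base_prob = answer_prob_fn(question)
    attacks = [q for q in candidate_perturbations(question)
               if answer_prob_fn(q) >= base_prob]
    return max(attacks, key=answer_prob_fn) if attacks else None

# Dummy model that relies on a surface pattern ("born") and ignores the entity.
dummy_model = lambda q: 0.9 if "born" in q else 0.1
perturb = lambda q: [q.replace("Einstein", name) for name in ("Bohr", "Curie")]
print(undersensitivity_attack("When was Einstein born in Germany?", dummy_model, perturb))
```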
iclr_2020_H1x-3xSKDr | Batch Normalization is a Cause of Adversarial Vulnerability | Batch normalization (BN) is often used in an attempt to stabilize and accelerate training in deep neural networks. In many cases it indeed decreases the number of parameter updates required to achieve low training error. However, it also reduces robustness to small adversarial input perturbations and common corruptions by double-digit percentages, as we show on five standard datasets. Furthermore, we find that substituting weight decay for BN is sufficient to nullify a relationship between adversarial vulnerability and the input dimension. A recent mean-field analysis found that BN induces gradient explosion when used on multiple layers, but this cannot fully explain the vulnerability we observe, given that it occurs already for a single BN layer. We argue that the actual cause is the tilting of the decision boundary with respect to the nearest-centroid classifier along input dimensions of low variance. As a result, the constant introduced for numerical stability in the BN step acts as an important hyperparameter that can be tuned to recover some robustness at the cost of standard test accuracy. We explain this mechanism explicitly on a linear "toy" model and show in experiments that it still holds for nonlinear "real-world" models. | reject | This article studies the effects of BN on robustness. The article presents a series of experiments on various datasets with noise, PGD adversarial attacks, and various corruption benchmarks that show a drop in robustness when using BN. It is suggested that a main cause of vulnerability is the tilting angle of the decision boundary, which is illustrated in a toy example.
The reviewers found the contribution interesting and agreed that the effect will impact many DNNs. However, they did not find the arguments for the tilting explanation convincing enough, and suggested that more theory and experimental illustration of this explanation would be important. In the rebuttal the authors maintain that the main contribution is to link BN and adversarial vulnerability and consider their explanation reasonable. In the initial discussion the reviewers also mentioned that the experiments were not convincing enough and that the phenomenon could be an effect of gradient masking, and that more experiments with other attack strategies would be important to clarify this. In response, the revision included various experiments, including some with various initial learning schedules. The revision clarified some of these issues. However, the reviewers still found that the reason behind the effect requires more explanation. In summary, this article makes an important observation that is already generating a vivid discussion and will likely have an impact, but the reviewers were not convinced by the explanations provided for these observations.
| test | [
"rJxofaHa_r",
"H1xR93zcoB",
"SJeMYKKyiS",
"S1xIO7xciH",
"B1xtz9k9sB",
"SyxyABk9ir",
"rkgmZfkciB",
"Sygo0J1ciH",
"rkx6F6AFoS",
"rJg7I5MztH",
"S1lNkzJ2YB",
"HkxOtjiG5r",
"HygzLHdjuS",
"rygmCcRcOH",
"BylPxYLmuH",
"rkloiiNGuH",
"HkeHazWxdr",
"SygA8UybOr",
"SklNaNu2vr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"public",
"author",
"author",
"public",
"public"
] | [
"This paper identifies an important weakness of batch normalization: it increases adversarial vulnerability. It is very well written and the claims are theoretically sound. In the experiments, the authors demonstrated a significant difference in robustness between networks with or without batch normalization layers... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_H1x-3xSKDr",
"iclr_2020_H1x-3xSKDr",
"rJxofaHa_r",
"HkxOtjiG5r",
"SyxyABk9ir",
"S1lNkzJ2YB",
"Sygo0J1ciH",
"rJg7I5MztH",
"SJeMYKKyiS",
"iclr_2020_H1x-3xSKDr",
"iclr_2020_H1x-3xSKDr",
"iclr_2020_H1x-3xSKDr",
"rygmCcRcOH",
"iclr_2020_H1x-3xSKDr",
"rkloiiNGuH",
"SygA8UybOr",
... |
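The role of the numerical-stability constant highlighted in the BN abstract above can be seen in a two-dimensional toy example: along a low-variance input direction, normalization with a small epsilon amplifies small perturbations, while a larger epsilon damps them. The data, scales, and epsilon values below are arbitrary illustration choices, not the paper's experimental setup.

```python
import numpy as np

def bn_apply(x, mean, var, eps):
    """Apply batch-norm normalization (no affine) with fixed training statistics."""
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(10)
# Training data: first dimension has unit variance, second is almost constant.
train = np.stack([rng.normal(scale=1.0, size=1000),
                  rng.normal(scale=1e-3, size=1000)], axis=1)
mean, var = train.mean(axis=0), train.var(axis=0)
x_test = train[0]
delta = np.array([1e-2, 1e-2])               # equally small perturbation in both dimensions
for eps in (1e-5, 1e-2, 1e-1):
    change = np.abs(bn_apply(x_test + delta, mean, var, eps) - bn_apply(x_test, mean, var, eps))
    print(f"eps={eps:g}  normalized change per dimension: {np.round(change, 3)}")
```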