paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2020_S1gmrxHFvB | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half. | accept-poster | This paper tackles the problem of learning under data shift, i.e. when the training and testing distributions are different. The authors propose an approach to improve robustness and uncertainty of image classifiers in this situation. The technique uses synthetic samples created by mixing multiple augmented images, in addition to a Jensen-Shannon Divergence consistency loss. Its evaluation is entirely based on experimental evidence.
The method is simple, easy to implement, and effective. Though this is a purely empirical paper, the experiments are extensive and convincing.
In the end, the reviewers raised no objections to this paper. I therefore recommend acceptance. | train | [
"HygBn2yTYS",
"HkxqnYVhsH",
"S1eRGbrFoB",
"S1x-rOioor",
"HygIjPjiiH",
"r1lzMvXior",
"BkgkPVejjS",
"H1eh3Az5ir",
"HJlCqR9FoS",
"ryeT2_OKsB",
"B1gcvuztjr",
"SkgAtOSPsr",
"rJgjmOrvor",
"Skg8FuaQoB",
"SJlJ-Yvmor",
"SJlwxsHKKr",
"BkloWXh9KB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a novel method called augMix, which creates synthetic samples by mixing multiple augmented images. Coupled with a Jensen-Shannon Divergence consistency loss, the proposed method has been experimentally, using CIFAR10, CIFAR100, and ImageNet, shown to be able to improve over some augmentation met... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_S1gmrxHFvB",
"HygIjPjiiH",
"B1gcvuztjr",
"B1gcvuztjr",
"BkgkPVejjS",
"BkgkPVejjS",
"H1eh3Az5ir",
"HJlCqR9FoS",
"SJlJ-Yvmor",
"rJgjmOrvor",
"Skg8FuaQoB",
"iclr_2020_S1gmrxHFvB",
"SJlwxsHKKr",
"BkloWXh9KB",
"HygBn2yTYS",
"iclr_2020_S1gmrxHFvB",
"iclr_2020_S1gmrxHFvB"
] |
iclr_2020_BylQSxHFwr | AtomNAS: Fine-Grained End-to-End Neural Architecture Search | Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search space allows a mix of operations by composing different types of atomic blocks, while the search space in previous methods only allows homogeneous operations. Based on this search space, we propose a resource-aware architecture search framework which automatically assigns the computational resources (e.g., output channel numbers) for each operation by jointly considering the performance and the computational cost. In addition, to accelerate the search process, we propose a dynamic network shrinkage technique which prunes the atomic blocks with negligible influence on outputs on the fly. Instead of a search-and-retrain two-stage paradigm, our method simultaneously searches and trains the target architecture.
Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with a small searching cost.
We open our entire codebase at: https://github.com/meijieru/AtomNAS. | accept-poster | Reviewer #1 noted that he wishes to change his review to weak accept post rebuttal, but did not change his score in the system. Presuming his score is weak accept, all reviewers are unanimous in favor of acceptance. I have reviewed the paper and find that the results appear clear, but the magnitude of the improvement is modest. I concur with the weak-accept recommendation. | train | [
"BJeTrhF2jB",
"SyeaqNe3jr",
"S1x9Czlnjr",
"BkxRfHg3jS",
"SketHivcFH",
"S1xUnP03KS",
"B1xOjiLpKB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate the invaluable comments from the reviewers. Below is our response to the common concerns and questions from all reviewers.\n\n- Code Release\n\nWe have released the whole codebase including search, which could be accessed with the following links:\nhttps://anonymous.4open.science/r/ced78872-1992-43b9... | [
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
3,
5,
1
] | [
"iclr_2020_BylQSxHFwr",
"SketHivcFH",
"S1xUnP03KS",
"B1xOjiLpKB",
"iclr_2020_BylQSxHFwr",
"iclr_2020_BylQSxHFwr",
"iclr_2020_BylQSxHFwr"
] |
iclr_2020_B1l4SgHKDH | Residual Energy-Based Models for Text Generation | Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation. The dominant parametric approach is based on locally normalized models which predict one word at a time. While these work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process. In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence level. In order to make training tractable, we first work in the residual of a pretrained locally normalized language model and second we train using noise contrastive estimation. Furthermore, since the EBM works at the sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa. Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity compared to locally normalized baselines. Moreover, generation via importance sampling is very efficient and of higher quality than the baseline models according to human evaluation. | accept-poster | This paper proposes a Residual Energy-based Model for text generation.
After rebuttal and discussion, the reviewers all converged on a vote to accept, citing the novelty and appeal of the approach.
Authors are encouraged to revise to address reviewer comments. | train | [
"HJggab7ccS",
"Hyl9u-SX5H",
"rylLgGNojS",
"BylUcWNiiS",
"rJe1DgVjjr",
"Syg4BwvptB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This work is an interesting extension of Gutmann and Hyvarinen (2010), where the parametric model is the combination of a noise model (language model) and an energy function (residual energy), so the difference of parametric model and the noise model cancels out the noise model. Therefore optimizing (3) under some... | [
6,
6,
-1,
-1,
-1,
6
] | [
3,
5,
-1,
-1,
-1,
4
] | [
"iclr_2020_B1l4SgHKDH",
"iclr_2020_B1l4SgHKDH",
"HJggab7ccS",
"Hyl9u-SX5H",
"Syg4BwvptB",
"iclr_2020_B1l4SgHKDH"
] |
iclr_2020_rkevSgrtPr | A closer look at the approximation capabilities of neural networks | The universal approximation theorem, in one of its most general versions, says that if we consider only continuous activation functions σ, then a standard feedforward neural network with one hidden layer is able to approximate any continuous multivariate function f to any given approximation threshold ε, if and only if σ is non-polynomial. In this paper, we give a direct algebraic proof of the theorem. Furthermore we shall explicitly quantify the number of hidden units required for approximation. Specifically, if X in R^n is compact, then a neural network with n input units, m output units, and a single hidden layer with {n+d choose d} hidden units (independent of m and ε), can uniformly approximate any polynomial function f:X -> R^m whose total degree is at most d for each of its m coordinate functions. In the general case that f is any continuous function, we show there exists some N in O(ε^{-n}) (independent of m), such that N hidden units would suffice to approximate f. We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights. We highlight several consequences: (i) For any δ > 0, the UAP still holds if we restrict all non-bias weights w in the last layer to satisfy |w| < δ. (ii) There exists some λ>0 (depending only on f and σ), such that the UAP still holds if we restrict all non-bias weights w in the first layer to satisfy |w|>λ. (iii) If the non-bias weights in the first layer are *fixed* and randomly chosen from a suitable range, then the UAP holds with probability 1. | accept-poster | This is a nice paper on the classical problem of universal approximation, but giving a direct proof with good approximation rates, and providing many refinements and ties to the literature.
If possible, I urge the authors to revise the paper further for camera ready; there are various technical oversights (e.g., 1/lambda should appear in the approximation rates in theorem 3.1), and the proof of theorem 3.1 is an uninterrupted 2.5 page block (splitting it into lemmas would make it cleaner, and also those lemmas could be useful to other authors). | val | [
"HJxGV3eP2r",
"S1gn0Z1isS",
"BJgppgyiir",
"BylcVlJssH",
"H1xtdl4CYr",
"S1eH2aMfqB",
"B1gx-ia29r"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"UPDATE TO MY EARLIER REVIEW\n============================\n\nSince this paper presets new findings that will be of significant interest to much of ICLR's audience, and the paper is is well-written, I am changing my rating to \"Accept\". Since Reviewer #1 did not submit a review and Reviewer #2 indicated that (s)he... | [
8,
-1,
-1,
-1,
6,
6,
6
] | [
3,
-1,
-1,
-1,
1,
3,
3
] | [
"iclr_2020_rkevSgrtPr",
"H1xtdl4CYr",
"B1gx-ia29r",
"S1eH2aMfqB",
"iclr_2020_rkevSgrtPr",
"iclr_2020_rkevSgrtPr",
"iclr_2020_rkevSgrtPr"
] |
iclr_2020_rygjHxrYDB | Deep Audio Priors Emerge From Harmonic Convolutional Networks | Convolutional neural networks (CNNs) excel in image recognition and generation. Among many efforts to explain their effectiveness, experiments show that CNNs carry strong inductive biases that capture natural image priors. Do deep networks also have inductive biases for audio signals? In this paper, we empirically show that current network architectures for audio processing do not show strong evidence in capturing such priors. We propose Harmonic Convolution, an operation that helps deep networks distill priors in audio signals by explicitly utilizing the harmonic structure within. This is done by engineering the kernel to be supported by sets of harmonic series, instead of local neighborhoods for convolutional kernels. We show that networks using Harmonic Convolution can reliably model audio priors and achieve high performance in unsupervised audio restoration tasks. With Harmonic Convolution, they also achieve better generalization performance for sound source separation. | accept-poster | This paper introduces a new convolution-like operation, called a Harmonic Convolution (a weighted combination of dilated convolutions with different dilation factors/anchors), which operates on the STFT of an audio signal. Experiments are carried out on audio denoising and sound separation tasks and seem convincing, but could have been more so: (i) with different types of noise for the denoising task, and (ii) with comparisons against more methods for sound separation. Apart from those two concerns, the authors seem to have addressed most of the reviewers' complaints.
| train | [
"rJeNYCa6KS",
"H1l2JDmroS",
"Hyg0_EvhsH",
"ByxcMLXSsS",
"r1xOpSmroS",
"HJxtYHmSjH",
"HJeBEdKvtr",
"BJl815FntH",
"Skgs9l3vtr",
"H1ea5LjwFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The paper considers the effectiveness of standard convolutional blocks for modelling learning tasks with audio signals. The effectiveness of a neural network architecture is assessed by evaluating its ability to map a random vector to a signal corrupted with an additive noise. Figure 1 illustrates this process wit... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1
] | [
1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1
] | [
"iclr_2020_rygjHxrYDB",
"iclr_2020_rygjHxrYDB",
"HJxtYHmSjH",
"HJeBEdKvtr",
"BJl815FntH",
"rJeNYCa6KS",
"iclr_2020_rygjHxrYDB",
"iclr_2020_rygjHxrYDB",
"H1ea5LjwFr",
"iclr_2020_rygjHxrYDB"
] |
iclr_2020_ByglLlHFDS | Expected Information Maximization: Using the I-Projection for Mixture Density Estimation | Modelling highly multi-modal data is a challenging problem in machine learning. Most algorithms are based on maximizing the likelihood, which corresponds to the M(oment)-projection of the data distribution to the model distribution.
The M-projection forces the model to average over modes it cannot represent. In contrast, the I(nformation)-projection ignores such modes in the data and concentrates on the modes the model can represent. Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes. Despite this advantage, the I-projection is rarely used in practice due to the lack of algorithms that can efficiently optimize it based on data. In this work, we present a new algorithm called Expected Information Maximization (EIM) for computing the I-projection solely based on samples for general latent variable models, where we focus on Gaussian mixtures models and Gaussian mixtures of experts. Our approach applies a variational upper bound to the I-projection objective which decomposes the original objective into single objectives for each mixture component as well as for the coefficients, allowing an efficient optimization. Similar to GANs, our approach employs discriminators but uses a more stable optimization procedure, using a tight upper bound. We show that our algorithm is much more effective in computing the I-projection than recent GAN approaches and we illustrate the effectiveness of our approach for modelling multi-modal behavior on two pedestrian and traffic prediction datasets. | accept-poster | The paper proposes a new algorithm called Expected Information Maximization (EIM) for learning latent variable models while computing the I-projection solely based on samples. The reviewers had several questions, which the authors sufficiently answered. The reviewers agree that the paper should be accepted. The authors should carefully read the reviewer questions and comments and use them to improve their final manuscript. | test | [
"S1xHvissjr",
"BkxpEGGQor",
"rJlXs-G7sH",
"BJevwbfXoB",
"B1er3Km0KH",
"rkxrfrIfcB",
"SyxDIDyp5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I have read the authors' answers and appreciate the time spent writing the rebuttal. I will maintain my initial assessment.",
"We thank the reviewers for their time and valuable feedback. Besides fixing small typos, ambiguities and unclearities we elaborated on the relation and differences to previously existing... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"rJlXs-G7sH",
"B1er3Km0KH",
"rkxrfrIfcB",
"SyxDIDyp5S",
"iclr_2020_ByglLlHFDS",
"iclr_2020_ByglLlHFDS",
"iclr_2020_ByglLlHFDS"
] |
iclr_2020_ryxWIgBFPS | A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms | We propose to use a meta-learning objective that maximizes the speed of transfer on a modified distribution to learn how to modularize acquired knowledge. In particular, we focus on how to factor a joint distribution into appropriate conditionals, consistent with the causal directions. We explain when this can work, using the assumption that the changes in distributions are localized (e.g. to one of the marginals, for example due to an intervention on one of the variables). We prove that under this assumption of localized changes in causal mechanisms, the correct causal graph will tend to have only a few of its parameters with non-zero gradient, i.e. that need to be adapted (those of the modified variables). We argue and observe experimentally that this leads to faster adaptation, and use this property to define a meta-learning surrogate score which, in addition to a continuous parametrization of graphs, would favour correct causal graphs. Finally, motivated by the AI agent point of view (e.g. of a robot discovering its environment autonomously), we consider how the same objective can discover the causal variables themselves, as a transformation of observed low-level variables with no causal meaning. Experiments in the two-variable case validate the proposed ideas and theoretical results. | accept-poster | This paper proposes to discover causal mechanisms through meta-learning, and suggests an approach for doing so. The reviewers raised concerns about the key hypothesis (that the right causal model implies higher expected online likelihood) not being sufficiently backed up through theory or through experiments on real data. The authors pointed to a recent paper that builds upon this work and tests on a more realistic problem setting. 
However, the newer paper measures not the online likelihood of adaptation, but just the training error during adaptation, suggesting that the approach in this paper may be worse. Despite the concerns, the reviewers generally agreed that the paper included novel and interesting ideas, and addressed a number of the reviewers' other concerns about the clarity, references, and experiments. Hence, it makes a worthwhile contribution to ICLR. | train | [
"r1xwKpbhoH",
"BJeG83E5jH",
"BJxSWlOYor",
"HkgdEnCOoS",
"H1gIz2Ruir",
"rJlLJh0OiH",
"BkxGniA_sH",
"SyxcGsCusB",
"rkl6smfDtB",
"r1lca6O85S",
"SygYFAKF9S"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the clarifications and updated paper, and I applaud the inclusion of a negative result in the Appendix for the p(B|A) scenario. My score remains unchanged.",
"1) It is true that the online likelihood itself is averaged during meta-training and not the gradient of the online likelihood. Nonetheless,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"HkgdEnCOoS",
"BJxSWlOYor",
"rJlLJh0OiH",
"rkl6smfDtB",
"r1lca6O85S",
"BkxGniA_sH",
"SygYFAKF9S",
"iclr_2020_ryxWIgBFPS",
"iclr_2020_ryxWIgBFPS",
"iclr_2020_ryxWIgBFPS",
"iclr_2020_ryxWIgBFPS"
] |
iclr_2020_rJxGLlBtwH | On the interaction between supervision and self-play in emergent communication | A promising approach for teaching artificial agents to use natural language involves using human-in-the-loop training. However, recent work suggests that current machine learning methods are too data inefficient to be trained in this way from scratch. In this paper, we investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency: imitating human language data via supervised learning, and maximizing reward in a simulated multi-agent environment via self-play (as done in emergent communication), and introduce the term supervised self-play (S2P) for algorithms using both of these signals. We find that first training agents via supervised learning on human data followed by self-play outperforms the converse, suggesting that it is not beneficial to emerge languages from scratch. We then empirically investigate various S2P schedules that begin with supervised learning in two environments: a Lewis signaling game with symbolic inputs, and an image-based referential game with natural language descriptions. Lastly, we introduce population based approaches to S2P, which further improves the performance over single-agent methods. | accept-poster | This paper investigates how two means of learning natural language - supervised learning from labeled data and reward-maximizing self-play - can be combined. The paper empirically investigates this question, showing in two grounded visual language games that supervision followed by self-play works better than the reverse.
The reviewers found this paper interesting and well executed, though not especially novel. The last point is a reasonable criticism, but in this case I think it is a little beside the point. In any case, since all the reviewers are in agreement, I recommend acceptance. | train | [
"ryxFUWSwoB",
"r1g_7brwsB",
"H1xXheHvoH",
"r1gUKlHviB",
"SyltwyrDsS",
"SkeTV81RKr",
"HJgVjdDRFB",
"BJxoajDCtS",
"S1eg12kTcS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thanks for your interest in our work!\n\n1) By assigning each object a unique word, we simulate a perfectly compositional language. One population of speaker and listener are trained on a fixed order of words. Different populations are trained on different permutations of the language. So there is no varying permu... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1
] | [
"S1eg12kTcS",
"SkeTV81RKr",
"HJgVjdDRFB",
"HJgVjdDRFB",
"BJxoajDCtS",
"iclr_2020_rJxGLlBtwH",
"iclr_2020_rJxGLlBtwH",
"iclr_2020_rJxGLlBtwH",
"iclr_2020_rJxGLlBtwH"
] |
iclr_2020_SJem8lSFwB | Dynamic Model Pruning with Feedback | Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by allowing (i) dynamic allocation of the sparsity pattern and (ii) incorporating feedback signal to reactivate prematurely pruned weights we obtain a performant sparse model in one single training pass (retraining is not needed, but can further improve the performance). We evaluate the method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models and further that their performance surpasses all previously proposed pruning schemes (that come without feedback mechanisms). | accept-poster | The paper proposes a new, simple method for sparsifying deep neural networks.
It uses a temporary, pruned model to improve the pruning masks via SGD, eventually
applying the SGD steps to the dense model.
The paper is well written and shows SOTA results compared to prior work.
The reviewers unanimously recommend accepting this work, based on the simplicity of
the proposed method and its experimental results.
I recommend accepting this paper; it seems to make a simple yet effective
contribution to compressing large-scale models.
| train | [
"S1gYqRc9oS",
"H1xetvcdjH",
"SklAJuquiS",
"rJefXD9dsB",
"SJl7CI5_jB",
"HyxogpQvtB",
"S1la1ROTYB",
"BJenAkApFH",
"H1ePQhnstS",
"rkebs7-rtB",
"rkeUA7QH_S",
"S1e6DmR2OS",
"HkgY5-Of_S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"public"
] | [
"Thank you for addressing my comments. There is a minor typo in the revised version page 7, section 6, second sentence \"grantees\" -> \"guarantees\".\n\nI will leave the score unchanged and vote for accepting this work.",
"Thank you for your review. We have fixed the typo (1.) in our revision. We hope we can cla... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
1,
3,
-1,
-1,
-1,
-1,
-1
] | [
"SklAJuquiS",
"S1la1ROTYB",
"HyxogpQvtB",
"BJenAkApFH",
"iclr_2020_SJem8lSFwB",
"iclr_2020_SJem8lSFwB",
"iclr_2020_SJem8lSFwB",
"iclr_2020_SJem8lSFwB",
"rkebs7-rtB",
"S1e6DmR2OS",
"HkgY5-Of_S",
"rkeUA7QH_S",
"iclr_2020_SJem8lSFwB"
] |
iclr_2020_SJxE8erKDH | Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings | Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately.
The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis. | accept-poster | This paper addresses the problem of many-to-many cross-domain mapping tasks with a double variational auto-encoder architecture, making use of the normalizing flow-based priors.
Reviewers and the AC unanimously agree that it is a well-written paper with a solid approach to a complicated real problem, supported by good experimental results. There are still some concerns about confusing notation, and about adding a human study to further validate the approach, which should be addressed in a future version.
I recommend acceptance. | train | [
"HkxBYSQ0KB",
"SkesJ76W5H",
"rJed5iPm5S",
"HJgp5ud3or",
"Bkl77Lg2ir",
"H1xfnrHPsr",
"BygTZBrwsB",
"HkxD0NSDir",
"BkeOsFKksH",
"Skei1JsY9H",
"BJgSA2X89r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"Summary:\nThis paper addresses the problem of many-to-many cross domain mapping tasks (such as captioning or text-to-image synthesis). It proposes a double variational auto-encoder architecture mapping data to a factored latent representation with both shared and domain-specific components. The proposed model make... | [
8,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_SJxE8erKDH",
"iclr_2020_SJxE8erKDH",
"iclr_2020_SJxE8erKDH",
"Bkl77Lg2ir",
"H1xfnrHPsr",
"rJed5iPm5S",
"HkxBYSQ0KB",
"SkesJ76W5H",
"Skei1JsY9H",
"BJgSA2X89r",
"iclr_2020_SJxE8erKDH"
] |
iclr_2020_S1gEIerYwH | Transferring Optimality Across Data Distributions via Homotopy Methods | Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis, including complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available.
In this work, we propose a novel homotopy-based numerical method that can be used to transfer knowledge regarding the localization of an optimum across different task distributions in deep learning applications. We validate the proposed methodology with some empirical evaluations in the regression and classification scenarios, where it shows that superior numerical performance can be achieved in popular deep learning benchmarks, i.e. FashionMNIST, CIFAR-10, and draw connections with the widely used fine-tuning heuristic. In addition, we give more insights on the properties of a general homotopy method when used in combination with Stochastic Gradient Descent by conducting a general local theoretical analysis in a simplified setting. | accept-poster | This paper presents a theoretically motivated method based on homotopy continuation for transfer learning and demonstrates encouraging results on FashionMNIST and CIFAR-10. The authors draw a connection between this approach and the widely used fine-tuning heuristic. Reviewers find principled approaches to transfer learning in deep neural networks an important direction, and find the contributions of this paper an encouraging step in that direction. Alongside with the reviewers, I think homotopy continuation is a great numerical tool with a lot of untapped potentials for ML applications, and I am happy to see an instantiation of this approach for transfer learning. Reviewers had some concerns about experimental evaluations (reporting test performance in addition to training), and the writing of the draft. The authors addressed these in the revised version by including test performance in the appendix and rewriting the first parts of the paper. Two out of three reviewers recommend accept. I also find the homotopy analysis interesting and alongside with majority of reviewers, recommend accept. 
However, please iterate at least once more over the writing: simplify long sentences and make sure the writing and flow are clear for the camera-ready version. | val | [
"Byl6u9ZhtH",
"r1xes1UjoH",
"H1ebq869iH",
"rygvEN6qir",
"Hyx8jGXXjS",
"BylA50cmqB",
"HJl5vl0pKB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Authors propose a very general framework of Homotopy to the deep learning set up and explores a few relevant theoretical issues.\n\nThough the proposed idea is interesting, the depth and breadth of authors' presentation are simply lacking. The entire paper lacks focus and I suggest authors consider focusing on 1-2... | [
3,
-1,
-1,
-1,
-1,
6,
8
] | [
1,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_S1gEIerYwH",
"H1ebq869iH",
"BylA50cmqB",
"HJl5vl0pKB",
"Byl6u9ZhtH",
"iclr_2020_S1gEIerYwH",
"iclr_2020_S1gEIerYwH"
] |
iclr_2020_rygwLgrYPB | Regularizing activations in neural networks via distribution matching with the Wasserstein metric | Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance. We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution. PER randomly projects activations onto one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both L1 and L2 regularization losses. Besides, PER can capture the interaction between hidden units by projection vector drawn from a unit sphere. By doing so, PER minimizes the upper bound of the Wasserstein distance of order one between an empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the probability distribution space. We evaluate the proposed method on the image classification task and the word-level language modeling task.
| accept-poster | This paper presents an interesting and novel idea that is likely to be of interest to the community. The most negative reviewer did not acknowledge the author response. The AC recommends acceptance. | train | [
"ryl86MI0tS",
"rke4QlgvsS",
"HyeFyggvoH",
"HygKZgeDjH",
"S1ejPeevsS",
"Hkl5pWjptr",
"BJl1kZ14cr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This submission belongs to the general field of neural networks and sub-field of activation regularisation. In particular, this submission proposes a novel approach for activation regularisation whereby a distribution of activations within minibatch are regularised to have standard normal distribution. The approac... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_rygwLgrYPB",
"Hkl5pWjptr",
"ryl86MI0tS",
"BJl1kZ14cr",
"iclr_2020_rygwLgrYPB",
"iclr_2020_rygwLgrYPB",
"iclr_2020_rygwLgrYPB"
] |
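The regularizer described in the record above (PER) matches the distribution of activations to a standard normal through random one-dimensional projections. A rough sketch of that idea is a Monte Carlo sliced Wasserstein-1 distance to N(0, I); this is an illustration only — the paper's actual loss uses a closed-form projected error function rather than the sample-based comparison assumed below.

```python
import numpy as np

def sliced_w1_to_standard_normal(h, n_projections=64, seed=0):
    """Monte Carlo sliced Wasserstein-1 distance between activations
    h (shape: batch x dim) and the standard normal N(0, I).

    Each iteration draws a projection vector from the unit sphere,
    projects the activations to 1-D, and compares the sorted projections
    with a sorted reference sample from N(0, 1) (the 1-D W1 distance)."""
    rng = np.random.default_rng(seed)
    n, d = h.shape
    total = 0.0
    for _ in range(n_projections):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)                 # uniform direction on the sphere
        proj = np.sort(h @ v)                  # empirical 1-D distribution
        ref = np.sort(rng.normal(size=n))      # reference sample from the target
        total += np.mean(np.abs(proj - ref))   # W1 between two sorted samples
    return total / n_projections
```

Activations already distributed as N(0, I) yield a distance near zero, while shifted or scaled activations are penalized; minimizing such a term alongside the task loss is the distribution-matching idea the abstract describes.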
iclr_2020_ByxaUgrFvH | Mutual Information Gradient Estimation for Representation Learning | Mutual Information (MI) plays an important role in representation learning. However, MI is unfortunately intractable in continuous and high-dimensional settings. Recent advances establish tractable and scalable MI estimators to discover useful representation. However, most of the existing methods are not capable of providing an accurate estimation of MI with low-variance when the MI is large. We argue that directly estimating the gradients of MI is more appealing for representation learning than estimating MI in itself. To this end, we propose the Mutual Information Gradient Estimator (MIGE) for representation learning based on the score estimation of implicit distributions. MIGE exhibits a tight and smooth gradient estimation of MI in the high-dimensional and large-MI settings. We expand the applications of MIGE in both unsupervised learning of deep representations based on InfoMax and the Information Bottleneck method. Experimental results have indicated significant performance improvement in learning useful representation. | accept-poster | This paper proposes the Mutual Information Gradient Estimator (MIGE) for estimating the gradient of the mutual information (MI), instead of calculating it directly. To build a tractable approximation to the gradient of MI, the authors make use of Stein's estimator followed by a random projection. The authors empirically evaluate the performance on representation learning tasks and show benefits over prior MI estimation methods.
The reviewers agree that the problem is important and challenging, and that the proposed approach is novel and principled. While there were some concerns about the empirical evaluation, most of the issues were addressed during the discussion phase. I will hence recommend acceptance of this paper. We ask the authors to update the manuscript as discussed. | train | [
"HyeJ_8BnoB",
"rJxEhSHhoH",
"r1lRPLxjjr",
"rylHOHJsiS",
"H1ezeV1siH",
"BylD9OC9oS",
"B1glaFo5jS",
"B1xOHQzqiH",
"HyliJBZciS",
"r1gDQ2YKiB",
"HkliG6Ktor",
"HygathtKor",
"rkePFjKYor",
"SyeM9cFYjB",
"BylvZbsnFB",
"B1gYH_Xuqr",
"BklSOaCu9S",
"SJx1qthYqB"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the recommendation. \nWe will add the statement related to [1] to the appendix in the revision.\n",
"Thank you for your response\n\n>> Q1Q2\nEven some empirical statistics would suffice. I find it misleading to plot only one realization of the gradient estimate. How about plotting 10000 realizations and ca... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"rylHOHJsiS",
"r1lRPLxjjr",
"HygathtKor",
"BylD9OC9oS",
"B1glaFo5jS",
"HyliJBZciS",
"rkePFjKYor",
"SyeM9cFYjB",
"SyeM9cFYjB",
"B1gYH_Xuqr",
"SJx1qthYqB",
"r1gDQ2YKiB",
"BklSOaCu9S",
"BylvZbsnFB",
"iclr_2020_ByxaUgrFvH",
"iclr_2020_ByxaUgrFvH",
"iclr_2020_ByxaUgrFvH",
"iclr_2020_Byx... |
iclr_2020_ByeMPlHKPH | Lite Transformer with Long-Short Range Attention | Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires enormous amount of computations to achieve high performance, which makes it not suitable for mobile applications that are tightly constrained by the hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in the local context modeling (by convolution) while another group specializes in the long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms transformer on WMT'14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of transformer base model by 2.5x with 0.3 BLEU score degradation. Combining with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2x. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. Code has been made available at https://github.com/mit-han-lab/lite-transformer. | accept-poster | This paper presents an efficient architecture of Transformer to facilitate implementations on mobile settings. The core idea is to decompose the self-attention layers to focus on local and global information separately. 
In the experiments on machine translation, it is shown to outperform baseline Transformer as well as the Evolved Transformer obtained by a costly architecture search.
While all reviewers acknowledged the practical impact of the results in terms of engineering, the main concerns with the initial paper were the clarity of the mobile setting and the scientific contributions. Through the discussion, the reviewers were fairly satisfied with the authors’ response and are now all positive about acceptance. Although we are still curious how it works on other tasks (as the title says “mobile applications”), I think the paper provides enough insights valuable to the community, so I’d like to recommend acceptance.
| train | [
"ryl7mTePFB",
"SJgrvtiYjH",
"HylA7KiFjH",
"rJgULPjYjS",
"SJezRdjtoB",
"rkxtlq35FB",
"rkebus9xqB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new technique (LSRA) improving Transformer for constrained scenarios (e.g., mobile settings). It combines two attention modules to provide both global and local information separately for a translation task. In this manner, the authors place the attention and the convolutional module side by ... | [
6,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_ByeMPlHKPH",
"iclr_2020_ByeMPlHKPH",
"rkebus9xqB",
"ryl7mTePFB",
"rkxtlq35FB",
"iclr_2020_ByeMPlHKPH",
"iclr_2020_ByeMPlHKPH"
] |
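The Long-Short Range Attention primitive summarized in the record above splits the feature channels into a convolutional branch (local context) and an attention branch (global context) whose outputs are concatenated. The sketch below is a minimal single-head, numpy-only illustration with random placeholder projection weights — an assumption for brevity, not the paper's trained module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lsra_block(x, conv_kernel, seed=0):
    """Sketch of Long-Short Range Attention (LSRA): half of the channels
    are mixed by a 1-D convolution along the sequence (local modeling),
    the other half by scaled dot-product self-attention (long-range
    modeling); the two halves are concatenated.  The q/k/v projections
    are random placeholders, not trained weights."""
    t, d = x.shape                              # (sequence length, model dim)
    h = d // 2
    local, glob = x[:, :h], x[:, h:]
    # local branch: depth-wise convolution over the time axis
    k = len(conv_kernel)
    pad = np.pad(local, ((k // 2, k - 1 - k // 2), (0, 0)))
    conv = np.stack([(pad[i:i + k] * conv_kernel[:, None]).sum(axis=0)
                     for i in range(t)])
    # global branch: single-head self-attention
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(size=(h, h)) / np.sqrt(h) for _ in range(3))
    q, key, v = glob @ Wq, glob @ Wk, glob @ Wv
    attn = softmax(q @ key.T / np.sqrt(h)) @ v
    return np.concatenate([conv, attn], axis=1)
```

The design point the abstract makes is that the convolutional half is far cheaper than attention at the same width, which is what buys the MAC savings in the constrained-resource setting.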
iclr_2020_H1lNPxHKDH | A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case | We give a tight characterization of the (vectorized Euclidean) norm of weights required to realize a function f : R^d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), extending the univariate characterization of Savarese et al. (2019) to the multivariate case. | accept-poster | The article studies the set of functions expressed by a network with bounded parameters in the limit of large width, relating the required norm to the norm of a transform of the target function, and extending previous work that addressed the univariate case. The article contains a number of observations and consequences. The reviewers were quite positive about this article. | train | [
"H1gbPLWqoH",
"BkxnrM-5oH",
"rJxPr1-9jr",
"BkgBuY4sFB",
"Hkx2sBPpKH",
"ryx--vwTYr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for their careful reading of the manuscript, and have uploaded a revision based on their feedback. Changes addressing the reviewers comments are indicated by blue text. The main change is an expanded discussion in Section 5.1 regarding the order of smoothness in our Sobolev norm bounds, ... | [
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
4,
1,
3
] | [
"iclr_2020_H1lNPxHKDH",
"BkgBuY4sFB",
"Hkx2sBPpKH",
"iclr_2020_H1lNPxHKDH",
"iclr_2020_H1lNPxHKDH",
"iclr_2020_H1lNPxHKDH"
] |
iclr_2020_Bke_DertPB | Adversarial Lipschitz Regularization | Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality. However, Wasserstein GANs require the critic to be 1-Lipschitz, which is often enforced implicitly by penalizing the norm of its gradient, or by globally restricting its Lipschitz constant via weight normalization techniques. Training with a regularization term penalizing the violation of the Lipschitz constraint explicitly, instead of through the norm of the gradient, was found to be practically infeasible in most situations. Inspired by Virtual Adversarial Training, we propose a method called Adversarial Lipschitz Regularization, and show that using an explicit Lipschitz penalty is indeed viable and leads to competitive performance when applied to Wasserstein GANs, highlighting an important connection between Lipschitz regularization and adversarial training. | accept-poster | This paper introduces an adversarial approach to enforcing a Lipschitz constraint on neural networks. The idea is intuitively appealing, and the paper is clear and well written. It's not clear from the experiments if this method outperforms competing approaches, but it is at least comparable, which means this is at the very least another useful tool in the toolbox. There was a lot of back-and-forth with the reviewers, mostly over the experiments and some other minor points. The reviewers feel like their concerns have all been addressed, and now agree on acceptance.
| train | [
"H1xTs6z6qH",
"Hkl6hcj55H",
"r1epF44hsB",
"H1lqnfeior",
"Skx1dpejsS",
"BJl7kngssr",
"S1leVwxijH",
"HkeAUFKOsB",
"HkllGwGusr",
"B1gDLnGOsH",
"rJgbWoMOjH",
"SJlkWrqe5S"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Final Edit:\n\nI have reviewer the final version of the paper and have decided to increase my score to a weak accept. I maintain some concerns around the empirical evaluation in the paper (collating results from multiple sources with different experimental procedures). But my major concerns have been addressed by ... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_Bke_DertPB",
"iclr_2020_Bke_DertPB",
"H1lqnfeior",
"iclr_2020_Bke_DertPB",
"SJlkWrqe5S",
"Hkl6hcj55H",
"H1xTs6z6qH",
"rJgbWoMOjH",
"H1xTs6z6qH",
"SJlkWrqe5S",
"Hkl6hcj55H",
"iclr_2020_Bke_DertPB"
] |
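The "explicit Lipschitz penalty" the record above refers to directly penalizes violations of |f(x) − f(y)| ≤ k·‖x − y‖ on perturbation pairs. In ALR the perturbation is found adversarially (in the spirit of Virtual Adversarial Training); the sketch below substitutes random perturbations, which is a simplifying assumption.

```python
import numpy as np

def lipschitz_penalty(f, x, k=1.0, eps=0.1, n_dirs=8, seed=0):
    """Penalty for violating the k-Lipschitz constraint
    |f(x) - f(x + r)| <= k * ||r||, estimated at points x with random
    perturbations r of norm eps.  (ALR would instead search for the
    perturbation that maximally violates the constraint.)"""
    rng = np.random.default_rng(seed)
    penalty = 0.0
    for _ in range(n_dirs):
        r = rng.normal(size=x.shape)
        r = eps * r / np.linalg.norm(r, axis=-1, keepdims=True)
        ratio = np.abs(f(x + r) - f(x)) / eps        # difference quotient
        penalty += np.mean(np.maximum(ratio - k, 0.0) ** 2)
    return penalty / n_dirs
```

A function whose local variation stays under k incurs zero penalty; adding such a term to the critic loss is the explicit alternative to gradient-norm penalties that the abstract argues is viable.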
iclr_2020_rklnDgHtDS | Compositional Language Continual Learning | Motivated by the human's ability to continually learn and gain knowledge over time, several research efforts have been pushing the limits of machines to constantly learn while alleviating catastrophic forgetting. Most of the existing methods have been focusing on continual learning of label prediction tasks, which have fixed input and output sizes. In this paper, we propose a new scenario of continual learning which handles sequence-to-sequence tasks common in language learning. We further propose an approach to use label prediction continual learning algorithm for sequence-to-sequence continual learning by leveraging compositionality. Experimental results show that the proposed method has significant improvement over state-of-the-art methods. It enables knowledge transfer and prevents catastrophic forgetting, resulting in more than 85% accuracy up to 100 stages, compared with less than 50% accuracy for baselines in instruction learning task. It also shows significant improvement in machine translation task. This is the first work to combine continual learning and compositionality for language learning, and we hope this work will make machines more helpful in various tasks. | accept-poster | The paper addresses the task of continual learning in NLP for seq2seq style tasks. The key idea of the proposed method is to enable the network to represent syntactic and semantic knowledge separately, which allows the neural network to leverage compositionality for knowledge transfer and also solves the problem of catastrophic forgetting. The paper has been improved substantially after the reviewers' comments and also obtains good results on benchmark tasks. The only concern is that the evaluation is on artificial datasets. In future, the authors should try to include more evaluation on real datasets (however, this is also limited by availability of such datasets). As of now, I'm recommending an Acceptance. 
| train | [
"S1xahryRFr",
"HkgYv3uhjB",
"rklSQhOnjB",
"HkxxasO2jr",
"rygs4o_3jS",
"ryeyDTniFH",
"rke6W0h6YH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper is about continual learning on NLP applications like natural language instruction learning or machine translation. The authors propose to exploit \"compositionality\" to separate semantics and syntax so as to facilitate the problem of interest.\n\nIn summary, the current manuscript is clearly not ready f... | [
3,
-1,
-1,
-1,
-1,
6,
8
] | [
1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rklnDgHtDS",
"iclr_2020_rklnDgHtDS",
"rke6W0h6YH",
"ryeyDTniFH",
"S1xahryRFr",
"iclr_2020_rklnDgHtDS",
"iclr_2020_rklnDgHtDS"
] |
iclr_2020_rkxawlHKDr | End to End Trainable Active Contours via Differentiable Rendering | We present an image segmentation method that iteratively evolves a polygon. At each iteration, the vertices of the polygon are displaced based on the local value of a 2D shift map that is inferred from the input image via an encoder-decoder architecture. The main training loss that is used is the difference between the polygon shape and the ground truth segmentation mask. The network employs a neural renderer to create the polygon from its vertices, making the process fully differentiable. We demonstrate that our method outperforms the state of the art segmentation networks and deep active contour solutions in a variety of benchmarks, including medical imaging and aerial images. | accept-poster | The submission presents a differentiable take on classic active contour methods, which used to be popular in computer vision. The method is sensible and the results are strong. After the revision, all reviewers recommend accepting the paper. | train | [
"Syl8NXg6tH",
"H1xhlGTQqH",
"S1gg0BrviH",
"r1xFi7sFFS",
"HkxNP0b7oH",
"SJxyGCWQsB",
"ryx4aT-Xir",
"SJxs76WmjH",
"SygWvXz19S",
"B1lq6PWaYr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"This paper investigates an image segmentation technique that learns to evolve an active contour, constraining the segmentation prediction to be a polygon (with a predetermined number of vertices). The advantage of active contour methods is that some shapes (such as buildings) can naturally be represented as close... | [
8,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rkxawlHKDr",
"iclr_2020_rkxawlHKDr",
"r1xFi7sFFS",
"iclr_2020_rkxawlHKDr",
"r1xFi7sFFS",
"Syl8NXg6tH",
"H1xhlGTQqH",
"iclr_2020_rkxawlHKDr",
"B1lq6PWaYr",
"iclr_2020_rkxawlHKDr"
] |
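The iterative polygon evolution described in the record above can be sketched mechanically: each vertex is repeatedly displaced by the value of a 2-D shift map sampled at its current location. In the paper the shift map is inferred by an encoder-decoder and the polygon is rasterized by a neural renderer for the loss; here the map is simply a given array and the sampling is nearest-neighbour — both simplifying assumptions.

```python
import numpy as np

def evolve_polygon(vertices, shift_map, n_iters=10, step=1.0):
    """Displace polygon vertices by a 2-D shift map, iteratively.

    vertices:  (n_vertices, 2) array of (row, col) positions
    shift_map: (H, W, 2) array giving a displacement at each pixel
    Sampling is nearest-neighbour for simplicity (the paper's pipeline
    is fully differentiable end to end)."""
    v = vertices.astype(float).copy()
    H, W, _ = shift_map.shape
    for _ in range(n_iters):
        r = np.clip(np.round(v[:, 0]).astype(int), 0, H - 1)
        c = np.clip(np.round(v[:, 1]).astype(int), 0, W - 1)
        v += step * shift_map[r, c]
    return v
```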
iclr_2020_BJxkOlSYDH | Provable Filter Pruning for Efficient Neural Networks | We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score to each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability.
In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving filters in layers that are essential to the network's performance. Our experimental evaluations on popular architectures and data sets show that our algorithm consistently generates sparser and more efficient models than those constructed by existing filter pruning approaches. | accept-poster | This paper presents a sampling-based approach for generating compact CNNs by pruning redundant filters. One advantage of the proposed method is a bound for the final pruning error.
One of the major concerns during review was the experimental design. The original paper lacked results on real-world datasets like ImageNet. Furthermore, the presentation was a little misleading. The authors addressed most of these problems in the revision.
Model compression and pruning is a very important field for real-world applications, hence I choose to accept the paper.
| train | [
"rkejpAh_iB",
"HyeztAh_jB",
"BJg5eChdsr",
"SkgCaS9hsB",
"HylTqAh_iS",
"B1gqAUOxqr",
"S1e1rr3e9S",
"B1ehHpGjqS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your consideration of our paper and for your thoughtful comments. We would like to first clarify that the Lottery Ticket Hypothesis [1] does not claim that the predictive accuracy of the pruned network is necessarily better than the unpruned network (pg. 2, para. 2 of [1]). In fact, from the figures ... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"B1gqAUOxqr",
"B1ehHpGjqS",
"iclr_2020_BJxkOlSYDH",
"iclr_2020_BJxkOlSYDH",
"S1e1rr3e9S",
"iclr_2020_BJxkOlSYDH",
"iclr_2020_BJxkOlSYDH",
"iclr_2020_BJxkOlSYDH"
] |
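The sampling-based pruning scheme in the record above assigns each filter a saliency score from a small probe batch and samples filters with probability proportional to it. The saliency below (mean absolute activation) is a generic placeholder, not the paper's provable sensitivity-based score, and the keep-count is fixed rather than derived from the paper's bounds.

```python
import numpy as np

def sample_filters_to_keep(activations, n_keep, seed=0):
    """activations: (batch, n_filters, h, w) feature maps from a small
    probe batch.  Returns (sorted indices of kept filters, sampling
    distribution), sampling without replacement in proportion to a
    stand-in saliency (mean absolute activation per filter)."""
    rng = np.random.default_rng(seed)
    saliency = np.abs(activations).mean(axis=(0, 2, 3))   # one score per filter
    probs = saliency / saliency.sum()                     # importance distribution
    keep = rng.choice(len(probs), size=n_keep, replace=False, p=probs)
    return np.sort(keep), probs
```

Filters that barely affect the output get low probability and are pruned with high likelihood, which is the data-informed selection step the abstract describes.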
iclr_2020_rkgfdeBYvH | Effect of Activation Functions on the Training of Overparametrized Neural Nets | It is well-known that overparametrized neural networks trained using gradient based methods quickly achieve small training error with appropriate hyperparameter settings. Recent papers have proved this statement theoretically for highly overparametrized networks under reasonable assumptions. These results either assume that the activation function is ReLU or they depend on the minimum eigenvalue of a certain Gram matrix. In the latter case, existing works only prove that this minimum eigenvalue is non-zero and do not provide quantitative bounds which require that this eigenvalue be large. Empirically, a number of alternative activation functions have been proposed which tend to perform better than ReLU at least in some settings but no clear understanding has emerged. This state of affairs underscores the importance of theoretically understanding the impact of activation functions on training. In the present paper, we provide theoretical results about the effect of activation function on the training of highly overparametrized 2-layer neural networks. A crucial property that governs the performance of an activation is whether or not it is smooth:
• For non-smooth activations such as ReLU, SELU, ELU, which are not smooth because there is a point where either the first order or second order derivative is discontinuous, all eigenvalues of the associated Gram matrix are large under minimal assumptions on the data.
• For smooth activations such as tanh, swish, polynomial, which have derivatives of all orders at all points, the situation is more complex: if the subspace spanned by the data has small dimension then the minimum eigenvalue of the Gram matrix can be small leading to slow training. But if the dimension is large and the data satisfies another mild condition, then the eigenvalues are large. If we allow deep networks, then the small data dimension is not a limitation provided that the depth is sufficient.
We discuss a number of extensions and applications of these results. | accept-poster | The article studies the role of the activation function in learning of 2 layer overparaemtrized networks, presenting results on the minimum eigenvalues of the Gram matrix that appears in this type of analysis and which controls the rate of convergence. The article makes numerous observations contributing to the development of principles for the design of activation functions and a better understanding of an active area of investigation as is convergence in overparametrized nets. The reviewers were generally positive about this article. | train | [
"rkl1-KdEsH",
"H1gD4vdVsH",
"r1g6d9LCFS",
"HyeTpizhqr"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer, \n\nThank you very much for your constructive review and for your time and effort. \nWe have uploaded a revised version.\n\nResponses to suggestions/comments:\n\n1. We have changed the statement of Theorem 4.7 to be the informal statement of Cor. J.4.2 (Cor. I.4.2 in the revised version). We have dr... | [
-1,
-1,
8,
6
] | [
-1,
-1,
5,
3
] | [
"r1g6d9LCFS",
"HyeTpizhqr",
"iclr_2020_rkgfdeBYvH",
"iclr_2020_rkgfdeBYvH"
] |
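The minimum eigenvalue discussed in the record above can be estimated numerically for the ReLU case. The Gram matrix used here, G_ij = E_{w~N(0,I)}[1{w·x_i>0} 1{w·x_j>0}] (x_i·x_j), follows Du et al.-style analyses; assuming this is the intended object, a Monte Carlo sketch is:

```python
import numpy as np

def relu_gram_min_eig(X, n_samples=20000, seed=0):
    """Monte Carlo estimate of the minimum eigenvalue of
    G_ij = E_{w~N(0,I)}[ 1{w.x_i>0} 1{w.x_j>0} ] * (x_i . x_j),
    the Gram matrix appearing in NTK-style convergence analyses
    of 2-layer ReLU networks."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(n_samples, d))
    A = (W @ X.T > 0).astype(float)      # indicator of active units per sample
    coact = A.T @ A / n_samples          # E[ 1{w.x_i>0} 1{w.x_j>0} ]
    G = coact * (X @ X.T)
    return float(np.linalg.eigvalsh(G).min())
```

Well-separated points give a bounded-away-from-zero minimum eigenvalue (and hence fast training in these analyses), while degenerate data collapses it, matching the abstract's distinction.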
iclr_2020_rJe4_xSFDB | Lipschitz constant estimation of Neural Networks via sparse polynomial optimization | We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bound on the Lipschitz constant of neural networks. The underlying optimization problems boil down to either linear (LP) or semidefinite (SDP) programming. We show how to use the sparse connectivity of a network, to significantly reduce the complexity of computation. This is specially useful for convolutional as well as pruned neural networks. We conduct experiments on networks with random weights as well as networks trained on MNIST, showing that in the particular case of the ℓ∞-Lipschitz constant, our approach yields superior estimates as compared to other baselines available in the literature.
| accept-poster | This paper improves upper bound estimates on Lipschitz constants for neural networks by converting the problem into a polynomial optimization problem. The proposed method also exploits sparse connections in the network to decompose the original large optimization problem into smaller ones that are more computationally tractable. The bounds achieved by the method improve upon those found from a quadratic program formulation. The method is tested on networks with random weights and networks trained on MNIST and provides better estimates than the baselines.
The reviews and the author discussion covered several topics. The reviewers found the paper to be well written. The reviewers liked that tighter bounds on the Lipschitz constants can be found in a computationally efficient manner. They also liked that the method was applied to a real-world dataset, though they noted that the sizes of the networks analyzed here are smaller than the ones in common use. The reviewers pointed out several ways that the paper could be improved. The authors adopted these suggestions including additional comparisons, computation time plots, error bars, and relevant references to related work. The reviewers found the discussion and revised paper addressed most of their concerns.
This paper improves on existing methods for analyzing neural network architectures and it should be accepted. | train | [
"BklFCPgRKH",
"ryePC-_y5r",
"ryly93ZjjS",
"r1e5jsZsoS",
"SygL05bjiB",
"B1emwTnzsH",
"B1grEtzMsr",
"Skxd_pz5Yr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper presents a general approach for upper bounding the Lipschitz constant of a neural network by relaxing the problem to a polynomial optimization problem. And the authors extend the method to fully make use of the sparse connections in the network so that the problem can be decomposed into a series of much... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_rJe4_xSFDB",
"iclr_2020_rJe4_xSFDB",
"Skxd_pz5Yr",
"BklFCPgRKH",
"ryePC-_y5r",
"B1grEtzMsr",
"iclr_2020_rJe4_xSFDB",
"iclr_2020_rJe4_xSFDB"
] |
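As context for the bounds the record above tightens: the cheapest valid upper bound on a feed-forward ReLU network's Lipschitz constant is the product of the layers' operator norms, since ReLU is 1-Lipschitz. The ℓ2 version is sketched below as a baseline — the paper's LP/SDP relaxations (and its ℓ∞ setting, which would use the corresponding induced norm) give tighter estimates than this.

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Product of spectral norms of the weight matrices: a valid (but
    usually loose) l2-Lipschitz upper bound for a ReLU network.
    LiPopt-style methods compute tighter bounds than this baseline."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)   # ord=2 on a matrix = largest singular value
    return bound
```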
iclr_2020_rylrdxHFDr | State Alignment-based Imitation Learning | Consider an imitation learning problem that the imitator and the expert have different dynamics models. Most of existing imitation learning methods fail because they focus on the imitation of actions. We propose a novel state alignment-based imitation learning method to train the imitator by following the state sequences in the expert demonstrations as much as possible. The alignment of states comes from both local and global perspectives. We combine them into a reinforcement learning framework by a regularized policy update objective. We show the superiority of our method on standard imitation learning settings as well as the challenging settings in which the expert and the imitator have different dynamics models. | accept-poster | This paper seeks to adapt behavioural cloning to the case where demonstrator and learner have different dynamics (e.g. human demonstrator), by designing a state-based objective. The reviewers agreed the paper makes an important and interesting contribution, but were somewhat divided about whether the experiments were sufficiently impactful. They furthermore had additional concerns regarding the clarity of the paper and presentation of the method. Through discussion, it seems that these were sufficiently addressed that the consensus has moved towards agreeing that the paper sufficiently proves the concept to warrant publication (with one reviewer dissenting).
I recommend acceptance, with the view that the authors should put a substantial amount of work into improving the presentation of the paper based on the feedback that has emerged from the discussion before the camera ready is submitted (if accepted). | train | [
"SJer6JonjB",
"SylTOhF3or",
"HJeDsmmIqS",
"ryxKWFghsH",
"Skg4aNy2jB",
"r1gjKT42Kr",
"HJgcSiqjsB",
"Bkl7NsAqsH",
"B1x5zqAcjH",
"rkgo-346tS",
"B1x0yM-ssB",
"SJl5qjdYiB",
"BJgKvjdFsB",
"SJlgjNxPiS",
"SJe4LmxwoB"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"We have uploaded a revision with modified part marked in red. When the whole section is newly created (Appendix D), we mark the title as red.",
"In practice, this algorithm is computationally expensive due to the need to compute a big Jacobian matrix for each expert transitions (Eq 5 in their paper). While it ha... | [
-1,
-1,
6,
-1,
-1,
3,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
5,
-1,
-1,
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rylrdxHFDr",
"ryxKWFghsH",
"iclr_2020_rylrdxHFDr",
"SJlgjNxPiS",
"B1x0yM-ssB",
"iclr_2020_rylrdxHFDr",
"Bkl7NsAqsH",
"B1x5zqAcjH",
"r1gjKT42Kr",
"iclr_2020_rylrdxHFDr",
"SJl5qjdYiB",
"BJgKvjdFsB",
"rkgo-346tS",
"SJe4LmxwoB",
"HJeDsmmIqS"
] |
iclr_2020_rkl8dlHYvB | Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories | We address the problem of learning to discover 3D parts for objects in unseen categories. Being able to learn the geometry prior of parts and transfer this prior to unseen categories poses fundamental challenges for data-driven shape segmentation approaches. Formulated as a contextual bandit problem, we propose a learning-based iterative grouping framework which learns a grouping policy to progressively merge small part proposals into bigger ones in a bottom-up fashion. At the core of our approach is to restrict the local context for extracting part-level features, which encourages generalizability to novel categories. On a recently proposed large-scale fine-grained 3D part dataset, PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples. Quantitative comparisons against four strong shape segmentation baselines show that we achieve state-of-the-art performance. | accept-poster | This paper presents and evaluates a technique for unsupervised object part discovery in 3D -- i.e., grouping points of a point cloud into coherent parts for an object that has not been seen before. The paper received 3 reviews from experts working in this area. R1 recommended Weak Accept and identified some specific technical questions for the authors to address in the response (which the authors provided, and R1 seemed satisfied). R2 recommends Weak Reject, indicating an overall positive view of the paper, but felt the experimental results were somewhat weak and posed several specific questions to the authors. The authors' response convincingly addressed these questions. R3 recommends Accept, but suggests some additional qualitative examples and ablation studies. The author response again addresses these.
Overall, the reviews indicate that this is a good paper with some specific questions and concerns that can be addressed; the AC thus recommends a (Weak) Accept based on the reviews and author responses. | train | [
"BkxB6Uchir",
"HyeB3Y2IsS",
"Byl4piU2jS",
"H1eFDFn8sH",
"S1xe6dhLiS",
"ByeojHyTYB",
"Hyx-1l2lqr",
"B1gRgSUX5B"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the updates! \n\nWe also believe that the two points will strengthen the contribution of the proposed method, and include them in Appendix C.1. Thank you for the valuable suggestion!\n\nSorry for the confusion. What we want to point out is that the capacity of the network also matters the performance of... | [
-1,
-1,
-1,
-1,
-1,
8,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"Byl4piU2jS",
"B1gRgSUX5B",
"H1eFDFn8sH",
"Hyx-1l2lqr",
"ByeojHyTYB",
"iclr_2020_rkl8dlHYvB",
"iclr_2020_rkl8dlHYvB",
"iclr_2020_rkl8dlHYvB"
] |
iclr_2020_HJl8_eHYvS | Discriminative Particle Filter Reinforcement Learning for Complex Partial Observations | Deep reinforcement learning is successful in decision making for sophisticated games, such as Atari, Go, etc.
However, real-world decision making often requires reasoning with partial information extracted from complex visual observations. This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for complex partial observations. DPFRL encodes a differentiable particle filter in the neural network policy for explicit reasoning with partial observations over time. The particle filter maintains a belief using learned discriminative update, which is trained end-to-end for decision making. We show that using the discriminative update instead of standard generative models results in significantly improved performance, especially for tasks with complex visual observations, because they circumvent the difficulty of modeling complex observations that are irrelevant to decision making.
In addition, to extract features from the particle belief, we propose a new type of belief feature based on the moment generating function. DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark introduced in this paper. Further, DPFRL performs well for visual navigation with real-world data in the Habitat environment. | accept-poster | The authors introduce an RL algorithm / architecture for partially observable
environments.
At the heart of it is a filtering algorithm based on a differentiable version of
sequential Monte Carlo inference.
The inferred particles are fed into a policy head and the whole architecture is
trained by RL.
The proposed method was evaluated on multiple environments, and ablations
establish that all moving parts are necessary for the observed performance.
All reviewers agree that this is an interesting contribution for addressing the
important problem of acting in POMDPs.
I think this paper is well above acceptance threshold. However, I have a few points that I
would quibble with:
1) I don't see how the proposed resampling is fully differentiable; as far as I
understand it, no credit is assigned to the discrete decision which particle to
reuse. Adding a uniform component to the resampling distribution does not
make it fully differentiable, see eg [Filtering Variational Objectives. Maddison
et al]. I think the authors might use a form of straight-through gradient approximation.
2) Just stating that unsupervised losses might incentivise the filter to learn
the wrong things, and just going back to plain RL loss is not in itself a novel
contribution; in extremely sparse reward settings, this will not be
satisfactory. | train | [
"B1eqKjQ3sS",
"SkepmDSCYH",
"SJgVcy72sH",
"HJx3PRVusB",
"HJeL4A4dsB",
"B1gmGR4OiS",
"HJeKIBfg9H",
"S1l7IWx99B"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the encouragement!\n\nWe will further study the learned representation to provide intuitions for designing better representation learning algorithms.",
"Update: my concerns have been addressed and I have updated the score to 8\n****\n\nThis paper introduces 3 neat ideas for training dee... | [
-1,
8,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
1,
4
] | [
"SJgVcy72sH",
"iclr_2020_HJl8_eHYvS",
"HJx3PRVusB",
"SkepmDSCYH",
"HJeKIBfg9H",
"S1l7IWx99B",
"iclr_2020_HJl8_eHYvS",
"iclr_2020_HJl8_eHYvS"
] |
iclr_2020_Sye_OgHFwH | Unrestricted Adversarial Examples via Semantic Manipulation | Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable against adversarial examples which are carefully crafted samples with a small magnitude of the perturbation. Such adversarial perturbations are usually restricted by bounding their Lp norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce "unrestricted" perturbations that manipulate semantically meaningful image-based visual descriptors - color and texture - in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing and adversarially trained models. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large magnitude perturbations when compared to other attacks. | accept-poster | In this paper, the authors present adversarial attacks by semantic manipulations, i.e., manipulating specific detectors that result in imperceptible changes in the picture, such as changing texture and color, but without affecting their naturalness. Moreover, these tasks are done on two large scale datasets (ImageNet and MSCOCO) and two visual tasks (classification and captioning). Finally, they also test their adversarial examples against a couple of defense mechanisms and evaluate their transferability. Overall, all reviewers agreed this is an interesting work and well executed, complete with experiments and analyses. I agree with the reviewers in their assessment.
I think this is an interesting study that moves us beyond restricted pixel perturbations, and overall it would be interesting to see what other detectors could be used to generate these types of semantic manipulations. I recommend acceptance of this paper.
| test | [
"rkgkAfhijS",
"BJgw06isjH",
"B1xcI6ojsB",
"HJgAOwI-FB",
"BJlg1WSTtS",
"rklbmtp6YH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments and interest in our work.\n\nQ1: Some readers may wonder how the \"averaged-case\" corruption robustness behaves for both cAdv and sAdv, e.g. considering random colorization. Would it be worse than the robustness of Gaussian noise?\n\nA1: Thanks for the interesting question and it’s in... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"HJgAOwI-FB",
"BJlg1WSTtS",
"rklbmtp6YH",
"iclr_2020_Sye_OgHFwH",
"iclr_2020_Sye_OgHFwH",
"iclr_2020_Sye_OgHFwH"
] |
iclr_2020_H1lK_lBtvS | Classification-Based Anomaly Detection for General Data | Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method, GOAD, to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains. | accept-poster | The paper presents a method that unifies classification-based approaches for outlier detection and (one-class) anomaly detection. The paper also extends the applicability to non-image data.
In the end, all the reviewers agreed that the paper makes a valuable contribution and I'm happy to recommend acceptance. | train | [
"SJx_brSjKH",
"BJgsGImnsr",
"HJeld8X2iB",
"SJgKg873oS",
"HkgZFGqjKH",
"HJgOSSS6Fr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"UPDATE:\nI acknowledge that I‘ve read the author responses as well as the other reviews.\n\nI appreciate the clarifications, additional experiments, and overall improvements made to the paper. I updated my score to 6 Weak Accept. \n\n\n####################\n\nThis paper proposes a deep method for anomaly detection... | [
6,
-1,
-1,
-1,
8,
8
] | [
5,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_H1lK_lBtvS",
"HkgZFGqjKH",
"SJx_brSjKH",
"HJgOSSS6Fr",
"iclr_2020_H1lK_lBtvS",
"iclr_2020_H1lK_lBtvS"
] |
iclr_2020_HJgpugrKPS | Scale-Equivariant Steerable Networks | The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on the MNIST-scale dataset and on the STL-10 dataset in the supervised learning setting. | accept-poster | This work presents a theory for building scale-equivariant CNNs with steerable filters. The proposed method is compared with some of the related techniques. SOTA is achieved on the MNIST-scale dataset and gains on STL-10 are demonstrated. The reviewers had some concerns related to the method, clarity, and comparison with related works. The authors have successfully addressed most of these concerns. Overall, the reviewers are positive about this work and appreciate the generality of the presented theory and its good empirical performance. All the reviewers recommend accept. | train | [
"HJeiPFvjjS",
"HylDlFPosB",
"H1xLPrvooS",
"B1xNrEPsjH",
"SyeNvGwsoS",
"r1g06sdZir",
"BJgeyM2iYH",
"H1g-eN-3KS",
"SkgxHUDycS"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q: (3.1) In Figure 2, how many layers does the network have which was used to construct the middle plot? \nA: In Figure 2 we represent the equivariance error of a network of just one layer. We added this information to Section 6.1 for better understanding.\n\nQ: (3.2) It would have been useful to include a study o... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"HylDlFPosB",
"SkgxHUDycS",
"r1g06sdZir",
"H1g-eN-3KS",
"BJgeyM2iYH",
"iclr_2020_HJgpugrKPS",
"iclr_2020_HJgpugrKPS",
"iclr_2020_HJgpugrKPS",
"iclr_2020_HJgpugrKPS"
] |
iclr_2020_SkxxtgHKPS | On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning | Generalization error (also known as the out-of-sample error) measures how well the hypothesis learned from training data generalizes to previously unseen data. Proving tight generalization error bounds is a central question in statistical learning theory. In this paper, we obtain generalization error bounds for learning general non-convex objectives, which has attracted significant attention in recent years. We develop a new framework, termed Bayes-Stability, for proving algorithm-dependent generalization error bounds. The new framework combines ideas from both the PAC-Bayesian theory and the notion of algorithmic stability. Applying the Bayes-Stability method, we obtain new data-dependent generalization bounds for stochastic gradient Langevin dynamics (SGLD) and several other noisy gradient methods (e.g., with momentum, mini-batch and acceleration, Entropy-SGD). Our result recovers (and is typically tighter than) a recent result in Mou et al. (2018) and improves upon the results in Pensia et al. (2018). Our experiments demonstrate that our data-dependent bounds can distinguish randomly labelled data from normal data, which provides an explanation to the intriguing phenomena observed in Zhang et al. (2017a). We also study the setting where the total loss is the sum of a bounded loss and an additional ℓ2 regularization term. We obtain new generalization bounds for the continuous Langevin dynamic in this setting by developing a new Log-Sobolev inequality for the parameter distribution at any time. Our new bounds are more desirable when the noise level of the process is not very small, and do not become vacuous even when T tends to infinity. | accept-poster | The authors provide bounds on the expected generalization error for noisy gradient methods (such as SGLD).
They do so using the information-theoretic framework initiated by Russo and Zou, where the expected generalization error is controlled by the mutual information between the weights and the training data. The work builds on the approach pioneered by Pensia, Jog, and Loh, who proposed to bound the mutual information for noisy gradient methods in a stepwise fashion.
The main innovation of this work is that they do not implicitly condition on the minibatch sequence when bounding the mutual information. Instead, this uncertainty manifests as a mixture of Gaussians. Essentially, they avoid the looseness implied by an application of Jensen's inequality that they have shown was unnecessary.
I think this is an interesting contribution and worth publishing. It contributes to a rapidly progressing literature on generalization bounds for SGLD that are becoming increasingly tight.
I have one strong request that I will make of the authors, and I'll be quite disappointed if it is not executed faithfully.
1. The stepsize constraint and its violation in the experimental work is currently buried in the appendix. This fact must be brought into the main paper and made transparent to readers, otherwise it will pervert empirical comparisons and mask progress.
2. In fact, I would like the authors to re-run their experiments in a way that guarantees that the bounds are applicable. One approach is outlined by the authors: the Lipschitz constant can be replaced by a max_i bound on the running squared gradient norms, and then gradient clipping can be used to guarantee that the step-size constraint is met. The authors might compare step sizes, allowing them to use less severe gradient clipping. The point of this exercise is to verify that the learning dynamics don't change when the bound conditions are met. If they change, it may upset the empirical phenomena they are trying to study. If this change does upset the empirical findings, then the authors should present both, and clearly explain that the bound is not strictly speaking known to be valid in one of the cases. It will be a good open problem.
| train | [
"SkeOi8jAYB",
"HyxQ4HuatB",
"Hkxq3xWniB",
"rklasCehjr",
"SklyGyPqir",
"Ske3vo89or",
"H1gewIr9iS",
"H1gq7BHqoS",
"SJeMhEH5jr",
"BkeVXPeqsr",
"Bkl0toHSoB",
"SkleD9Q3tH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper aims at developing a better understanding of generalization error for increasingly prevalent non-convex learning problems. For many such problems, the existing generalization bounds in the statistical learning theory literature are not very informative. To address these issues, the paper explores algori... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_SkxxtgHKPS",
"iclr_2020_SkxxtgHKPS",
"SkeOi8jAYB",
"SklyGyPqir",
"SJeMhEH5jr",
"SJeMhEH5jr",
"BkeVXPeqsr",
"SkleD9Q3tH",
"SkleD9Q3tH",
"Bkl0toHSoB",
"HyxQ4HuatB",
"iclr_2020_SkxxtgHKPS"
] |
iclr_2020_S1lxKlSKPH | Consistency Regularization for Generative Adversarial Networks | Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization—a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012. | accept-poster | The paper proposes a simple and effective way to stabilize training by adding a consistency term to the discriminator. Given the stochastic augmentation procedure $T(x)$, the loss is just a penalty on $D$. The main unsolved question is why it helps to make the discriminator "smoother" in the consistency case for a standard GAN (since typically, no constraints are enforced). Nevertheless, at the moment this is a working heuristic that gives a new SOTA, and that is the main strength. The reviewers all agree to accept, and so do I. | train | [
"BJgYim5oiS",
"SyxNxScisH",
"B1geRB5ooB",
"SJxo5bcioB",
"BkxG8pKiiS",
"H1eKjX745B",
"SJg_aihN9S",
"rkli38WvcH",
"BJg7Y9MR5H"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thank you for all the valuable comments.\n\nQ1: Related work [1]\n\nThank you for pointing out the related work. We cited this paper in our revision. \n\nQ2: Regularizing with features vs output\n\nIn our method, we penalize sensitivity of the last layer (which is one dimensional) of the discriminator. It is actua... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
-1
] | [
-1,
-1,
-1,
-1,
-1,
1,
5,
3,
-1
] | [
"SJg_aihN9S",
"SJg_aihN9S",
"H1eKjX745B",
"rkli38WvcH",
"BJg7Y9MR5H",
"iclr_2020_S1lxKlSKPH",
"iclr_2020_S1lxKlSKPH",
"iclr_2020_S1lxKlSKPH",
"iclr_2020_S1lxKlSKPH"
] |
iclr_2020_rJleKgrKwS | Differentiable learning of numerical rules in knowledge graphs | Rules over a knowledge graph (KG) capture interpretable patterns in data and can be used for KG cleaning and completion. Inspired by the TensorLog differentiable logic framework, which compiles rule inference into a sequence of differentiable operations, recently a method called Neural LP has been proposed for learning the parameters as well as the structure of rules. However, it is limited with respect to the treatment of numerical features like age, weight or scientific measurements. We address this limitation by extending Neural LP to learn rules with numerical values, e.g., ”People younger than 18 typically live with their parents“. We demonstrate how dynamic programming and cumulative sum operations can be exploited to ensure efficiency of such extension. Our novel approach allows us to extract more expressive rules with aggregates, which are of higher quality and yield more accurate predictions compared to rules learned by the state-of-the-art methods, as shown by our experiments on synthetic and real-world datasets. | accept-poster | This paper presents a number of improvements on existing approaches to neural logic programming. The reviews are generally positive: two weak accepts, one weak reject. Reviewer 2 seems wholly in favour of acceptance at the end of discussion, and did not clarify why they were sticking to their score of weak accept. The main reason Reviewer
1 sticks to 6 rather than 8 is that the work extends existing work rather than offering a "fundamental contribution", but otherwise is very positive. I personally feel that
a) most work extends existing work
b) there is room in our conferences for such well executed extensions (standing on the shoulders of giants etc).
Reviewer 3 is somewhat unconvinced by the nature of the evaluation. While I understand their reservations, they state that they would not be offended by the paper being accepted in spite of their reservations.
Overall, I find that the review group leans more in favour of acceptance, and am happy to recommend acceptance for the paper as it makes progress in an interesting area at the intersection of differentiable programming and logic-based programming. | train | [
"rJx2GYj3oB",
"BylaDvshjB",
"Skl_X6F3ir",
"BJgJZYS9ir",
"r1xPX_S5jS",
"r1e3Nvr5sH",
"S1eJGUy6Yr",
"BJekrJ9atS",
"SJx_7NveqH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for answering -- about 2, I mean the extracted rules still do not support less trivial operations such as aggregations (sum, mean, ..) and math operations (such as the sum). There is some work investigating how to learn these using neural architectures, such as https://arxiv.org/abs/1808.00508 . It would... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"r1e3Nvr5sH",
"r1xPX_S5jS",
"BJgJZYS9ir",
"S1eJGUy6Yr",
"BJekrJ9atS",
"SJx_7NveqH",
"iclr_2020_rJleKgrKwS",
"iclr_2020_rJleKgrKwS",
"iclr_2020_rJleKgrKwS"
] |
iclr_2020_BJgMFxrYPB | Learning to Move with Affordance Maps | The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that {\em learns} a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance. | accept-poster | This paper presents a framework for navigation that leverages learning spatial affordance maps (i.e., what parts of a scene are navigable) via a self-supervision approach in order to deal with environments with dynamics and hazards. They evaluate on procedurally generated VizDoom levels and find improvements over frontier and RL baseline agents.
Reviewers all agreed on the quality of the paper and strength of the results. Authors were highly responsive to constructive criticism and the engagement/discussion appears to have improved the paper overall. After seeing the rebuttal and revisions, I believe this paper will be a useful contribution to the field and I’m happy to recommend accept.
| train | [
"H1eF2cVy5r",
"SylGeCPhoB",
"SylkMA1nsH",
"S1lp6KRsir",
"BkxdBkAoiS",
"ryguP1umjH",
"ByeZGyumiH",
"Hkeuc0v7jB",
"HJxqDRDXiB",
"SkgvG0dpFr",
"BygsJ2jatS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an interesting, and to the best of my knowledge novel, pipeline for learning a semantic map of the environment with respect to navigability, and simultaneously uses it for further exploring the environment.\n\nThe pipeline can be summarized as follows: Navigate somewhere using some heuristic. Wh... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_BJgMFxrYPB",
"BkxdBkAoiS",
"Hkeuc0v7jB",
"BkxdBkAoiS",
"ryguP1umjH",
"H1eF2cVy5r",
"BygsJ2jatS",
"SkgvG0dpFr",
"SkgvG0dpFr",
"iclr_2020_BJgMFxrYPB",
"iclr_2020_BJgMFxrYPB"
] |
iclr_2020_HklQYxBKwS | Neural tangent kernels, transportation mappings, and universal approximation | This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization. Concretely, the paper has two main contributions: a generic scheme to approximate functions with the NTK by sampling from transport mappings between the initial weights and their desired values, and the construction of transport mappings via Fourier transforms. Regarding the first contribution, the proof scheme provides another perspective on how the NTK regime arises from rescaling: redundancy in the weights due to resampling allows individual weights to be scaled down. Regarding the second contribution, the most notable transport mapping asserts that roughly 1/δ^{10d} nodes are sufficient to approximate continuous functions, where δ depends on the continuity properties of the target function. By contrast, nearly the same proof yields a bound of 1/δ^{2d} for shallow ReLU networks; this gap suggests a tantalizing direction for future work, separating shallow ReLU networks and their linearization.
| accept-poster | The paper considers representational aspects of neural tangent kernels (NTKs). More precisely, recent literature on overparametrized neural networks has identified NTKs as a way to characterize the behavior of gradient descent on wide neural networks as fitting these types of kernels. This paper focuses on the representational aspect: namely that functions of appropriate "complexity" can be written as an NTK with parameters close to initialization (comparably close to what results on gradient descent get).
The reviewers agree this content is of general interest to the community and with the proposed revisions there is general agreement that the paper has merits to recommend acceptance. | train | [
"Hyge3AC3FB",
"B1edT2t3sr",
"HklU-TY2iB",
"r1eVwpYnjH",
"HkgFVpK2oS",
"H1edMNKV9r"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies approximation properties (in L2 over some data distribution P) of two-layer ReLU networks in the NTK setting, that is, where weights remain close to initialization and the model behaves like a kernel method given by its linearization around initialization.\n\nThe authors obtain a variety of resul... | [
6,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_HklQYxBKwS",
"iclr_2020_HklQYxBKwS",
"H1edMNKV9r",
"HkgFVpK2oS",
"Hyge3AC3FB",
"iclr_2020_HklQYxBKwS"
] |
iclr_2020_SJxrKgStDH | SCALOR: Generative World Models with Scalable Object Representations | Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video. With the proposed spatially parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects compared to the previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the first unsupervised object representation model shown to work for natural scenes containing several tens of moving objects. | accept-poster | After the author response and paper revision, the reviewers all came to appreciate this paper and unanimously recommended it be accepted. The paper makes a nice contribution to generative modelling of object-oriented representations with large numbers of objects. The authors adequately addressed the main reviewer concerns with their detailed rebuttal and revision. | train | [
"ryeAO_hsYH",
"rJx4wc3TKB",
"H1x79aD2jB",
"BJxxoMOnjB",
"ryluTWunir",
"SyxDkz_2iB",
"r1l07GO2iS",
"ByxZ6fOhsr",
"r1lOKCwhjS",
"Byl5Um4m5B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"UPDATE: My original main concern was the lack of baseline, but during the rebuttal period the authors have conducted the request comparison and addressed my questions satisfactorily. Therefore, I would recommend the paper be accepted.\n\n---\nSummary: This paper proposes a generative model and inference algorithm ... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_SJxrKgStDH",
"iclr_2020_SJxrKgStDH",
"iclr_2020_SJxrKgStDH",
"ryeAO_hsYH",
"rJx4wc3TKB",
"rJx4wc3TKB",
"rJx4wc3TKB",
"ryeAO_hsYH",
"Byl5Um4m5B",
"iclr_2020_SJxrKgStDH"
] |
iclr_2020_SyevYxHtDB | Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks | High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of 85× with minimal impact on the utility for benign users. | accept-poster | The paper proposed an optimization-based defense against model stealing attacks. A criticism of the paper is that the method is computationally expensive, and was not demonstrated on more complex problems like ImageNet. While this criticism is valid, other reviewers seem less concerned by this because the SOTA in this area is currently focused on smaller problems. After considering the rebuttal, there is enough reviewer support for this paper to be accepted. | test | [
"BJx6LZr0tB",
"B1gudgK3jB",
"B1eAb_gqiH",
"S1gADrHuir",
"H1lmmrBdsS",
"S1luN1BuoH",
"Byg7Ei4OiH",
"r1ef7Qn-jr",
"SygLUkwTFr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed an effective defense against model stealing attacks. \n\nMerits:\n1) In general, this paper is well written and easy to follow.\n2) The approach is a significant supplement to existing defense against model stealing attacks. \n3) Extensive experiments. \n\nHowever, I still have concerns about t... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_SyevYxHtDB",
"B1eAb_gqiH",
"H1lmmrBdsS",
"H1lmmrBdsS",
"r1ef7Qn-jr",
"BJx6LZr0tB",
"SygLUkwTFr",
"iclr_2020_SyevYxHtDB",
"iclr_2020_SyevYxHtDB"
] |
iclr_2020_rJxycxHKDS | Domain Adaptive Multibranch Networks | We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive at a common feature representation effective for recognition. To this end, we introduce a deep learning framework where each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others.
This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared.
As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously. | accept-poster | Although some criticism of the experiments remains, I suggest accepting this paper. | train | [
"HygvQy4otr",
"BygqFzoYjr",
"B1emJzstsS",
"SJl_mWjtiH",
"H1lBZPAc_B",
"B1x4SgJRFS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"After Discussion Period:\n\nI stick to my original score. My issues are largely resolved.\n\n----\nThe submission is using adaptive computation graphs for domain adaptation. Multi-flow network is the main architectural element proposed in the submission. And, it is composed of parallel blocks of computations which... | [
6,
-1,
-1,
-1,
3,
8
] | [
5,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_rJxycxHKDS",
"H1lBZPAc_B",
"HygvQy4otr",
"B1x4SgJRFS",
"iclr_2020_rJxycxHKDS",
"iclr_2020_rJxycxHKDS"
] |
iclr_2020_B1eB5xSFvr | DiffTaichi: Differentiable Programming for Physical Simulation | We present DiffTaichi, a new differentiable programming language tailored for building high-performance differentiable physical simulators. Based on an imperative programming language, DiffTaichi generates gradients of simulation steps using source code transformations that preserve arithmetic intensity and parallelism. A light-weight tape is used to record the whole simulation program structure and replay the gradient kernels in a reversed order, for end-to-end backpropagation.
We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators. For example, a differentiable elastic object simulator written in our language is 4.2x shorter than the hand-engineered CUDA version yet runs as fast, and is 188x faster than the TensorFlow implementation.
Using our differentiable programs, neural network controllers are typically optimized within only tens of iterations. | accept-poster | The paper provides a language for optimizing through physical simulations. The reviewers had a number of concerns related to paper organization and insufficient comparisons to related work (jax). During the discussion phase, the authors significantly updated their paper and ran additional experiments, leading to a much stronger paper. | train | [
"HJgrncv9KB",
"Hklp2A4Ssr",
"Syln2pNHsB",
"rkgQGp4rjB",
"S1g3334HoS",
"ryeQ4y6htH",
"H1loP67jqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces DiffSim, a programming language for high-performance differentiable physics simulations. The paper demonstrates 10 different simulations with controller optimization. It shows that the proposed language is easier to use and faster than the other alternatives, such as CUDA and TensorFlow. At t... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_B1eB5xSFvr",
"HJgrncv9KB",
"ryeQ4y6htH",
"H1loP67jqH",
"iclr_2020_B1eB5xSFvr",
"iclr_2020_B1eB5xSFvr",
"iclr_2020_B1eB5xSFvr"
] |
iclr_2020_BJxI5gHKDr | Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning | Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only few independently trained networks in terms of test performance. | accept-poster | The paper points out pitfalls of existing metrics for in-domain uncertainty quantification, and also studies different strategies for ensembling techniques.
The authors also satisfactorily addressed the reviewers' questions during the rebuttal phase. In the end, all the reviewers agreed that this is a valuable contribution and paper deserves to be accepted.
Nice work! | train | [
"rJg0iIgAFH",
"BkxCKUsFsH",
"Skli45uFsB",
"BJebZqOFjS",
"SJlB-P_tjB",
"rkglRIOFsS",
"HkgUlEdFsB",
"rkgh0fP3FS",
"S1lvoIaRtH",
"ryxCquU9OH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"The authors response on 13th nov regarding the main concerns I have are valid. They make sense. I thank the authors for the detailed explanations. I went back and did another thorough read of the work. As it stands, I am OK to change my review to weak accept. \n\n---------------------------------\n\nThe authors ev... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1
] | [
"iclr_2020_BJxI5gHKDr",
"ryxCquU9OH",
"rkgh0fP3FS",
"rkgh0fP3FS",
"rJg0iIgAFH",
"rJg0iIgAFH",
"S1lvoIaRtH",
"iclr_2020_BJxI5gHKDr",
"iclr_2020_BJxI5gHKDr",
"iclr_2020_BJxI5gHKDr"
] |
iclr_2020_HkxjqxBYDB | Episodic Reinforcement Learning with Associative Memory | Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching on previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stored the experiences as unrelated items. To improve sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning effective strategies. We build a graph on top of states in memory based on state transitions and develop a reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on navigation domain and Atari games show our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models. | accept-poster | The submission tackles the problem of data efficiency in RL by building a graph on top of the replay memory and propagate values based on this representation of states and transitions. The method is evaluated on Atari games and is shown to outperform other episodic RL methods.
The reviews were mixed initially but have been brought up by the revisions to the paper and the authors' rebuttal. In particular, there was a concern about theoretical support and the authors added a proof of convergence. They have also added additional experiments and explanations. Given the positive reviews and discussion, the recommendation is to accept this paper. | train | [
"HkgePooptS",
"rJeP9zTptr",
"S1l5LRX2sH",
"SkxXK_4osH",
"Hyeklt4siB",
"S1eHzYEjsB",
"B1eG6-ZzcB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes to combine DQN with a nonparametric estimate of the optimal Q function based on the graph of all observed transitions in the buffer. Specifically, they use the nonparametric estimate as a regularizer in the DQN loss. They show that this regularizer facilitates learning, and compare to other nonp... | [
6,
3,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_HkxjqxBYDB",
"iclr_2020_HkxjqxBYDB",
"iclr_2020_HkxjqxBYDB",
"B1eG6-ZzcB",
"rJeP9zTptr",
"HkgePooptS",
"iclr_2020_HkxjqxBYDB"
] |
iclr_2020_ByeWogStDS | Sub-policy Adaptation for Hierarchical Reinforcement Learning | Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available at sites.google.com/view/hippo-rl. | accept-poster | This paper considers hierarchical reinforcement learning, and specifically the case where the learning and use of lower-level skills should not be decoupled. To this end the paper proposes Hierarchical Proximal Policy Optimization (HiPPO) to jointly learn the different layers of the hierarchy. This is compared against other hierarchical RL schemes on several Mujoco domains.
The reviewers raised three main issues with this paper. The first concerns an excluded baseline, which was included in the rebuttal. The other issues involve the motivation for the paper (in that there exist other methods that try and learn different levels of hierarchy together) and justification for some design choices. These were addressed to some extent in the rebuttal, but I believe this to still be an interesting contribution to the literature, and should be accepted.
| train | [
"HyeJo7g3oH",
"ryggqqioir",
"Ske2AqmssH",
"SkejVVXjor",
"SkllWnFOKS",
"BylQQdyTYr"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"1. We are glad the reviewer agrees with our statement that many of the recent end-to-end hierarchical methods (FuN, HIRO, etc.) are limited to goal-reaching problems. We agree with the reviewer that Option-Critic does not fall into this category, and we hope it's clear in our updated paper.\n\n2. HiPPO actually do... | [
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
-1,
-1,
-1,
1,
3
] | [
"ryggqqioir",
"SkejVVXjor",
"SkllWnFOKS",
"BylQQdyTYr",
"iclr_2020_ByeWogStDS",
"iclr_2020_ByeWogStDS"
] |
iclr_2020_rylmoxrFDH | Critical initialisation in continuous approximations of binary neural networks | The training of stochastic neural network models with binary (±1) weights and activations via continuous surrogate networks is investigated. We derive new surrogates using a novel derivation based on writing the stochastic neural network as a Markov chain. This derivation also encompasses existing variants of the surrogates presented in the literature. Following this, we theoretically study the surrogates at initialisation. We derive, using mean field theory, a set of scalar equations describing how input signals propagate through the randomly initialised networks. The equations reveal whether so-called critical initialisations exist for each surrogate network, where the network can be trained to arbitrary depth. Moreover, we predict theoretically and confirm numerically, that common weight initialisation schemes used in standard continuous networks, when applied to the mean values of the stochastic binary weights, yield poor training performance. This study shows that, contrary to common intuition, the means of the stochastic binary weights should be initialised close to ±1, for deeper networks to be trainable. | accept-poster | The authors study neural networks with binary weights or activations, and the so-called "differentiable surrogates" used to train them.
They present an analysis that unifies previously proposed surrogates and they study critical initialization of weights to facilitate trainability.
The reviewers agree that the main topic of the paper is important (in particular initialization heuristics of neural networks), however they found the presentation of the content lacking in clarity as well as in clearly emphasizing the main contributions.
The authors improved the readability of the manuscript in the rebuttal.
This paper seems to be at acceptance threshold and 2 of 3 reviewers indicated low confidence.
Not being familiar with this line of work, I recommend acceptance following the average review score. | test | [
"BJxzjHA9or",
"r1xtybOgir",
"ryeDYCQKjr",
"B1lIUoQKoH",
"SylTKmmYjB",
"HkeufHm1cB",
"SkxNKaup9B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for revisiting the paper. While it has indeed improved, I believe my original rating still applies and I've decided to retain it.",
"The paper provides an in-depth exploration of stochastic binary networks, continuous surrogates, and their training dynamics with some potentially actionable insights on ... | [
-1,
6,
-1,
-1,
-1,
3,
6
] | [
-1,
1,
-1,
-1,
-1,
5,
1
] | [
"B1lIUoQKoH",
"iclr_2020_rylmoxrFDH",
"HkeufHm1cB",
"SkxNKaup9B",
"r1xtybOgir",
"iclr_2020_rylmoxrFDH",
"iclr_2020_rylmoxrFDH"
] |
iclr_2020_ryloogSKDS | Deep Orientation Uncertainty Learning based on a Bingham Loss | Reasoning about uncertain orientations is one of the core problems in many perception tasks such as object pose estimation or motion estimation. In these scenarios, poor illumination conditions, sensor limitations, or appearance invariance may result in highly uncertain estimates. In this work, we propose a novel learning-based representation for orientation uncertainty. By characterizing uncertainty over unit quaternions with the Bingham distribution, we formulate a loss that naturally captures the antipodal symmetry of the representation. We discuss the interpretability of the learned distribution parameters and demonstrate the feasibility of our approach on several challenging real-world pose estimation tasks involving uncertain orientations. | accept-poster | This paper considers the problem of reasoning about uncertain poses of objects in images. The reviewers agree that this is an interesting direction, and that the paper has interesting technical merit. | train | [
"SylTkGQ3iB",
"HkeGKStvsr",
"BygMvSFPoS",
"HylxNBtvir",
"ryeBokZ4qB",
"B1lYHtmY9H",
"HJx94DXzKS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We would like to thank all reviewers again for their thoughtful feedback. We have uploaded another revised version of the paper. The changes focus on the aspects raised in the reviewers’ questions:\n\n1. We address the main points of Reviewer 2 by including and discuss the training of mixture density networks as w... | [
-1,
-1,
-1,
-1,
6,
6,
-1
] | [
-1,
-1,
-1,
-1,
1,
3,
-1
] | [
"iclr_2020_ryloogSKDS",
"HJx94DXzKS",
"ryeBokZ4qB",
"B1lYHtmY9H",
"iclr_2020_ryloogSKDS",
"iclr_2020_ryloogSKDS",
"iclr_2020_ryloogSKDS"
] |
iclr_2020_r1g6ogrtDr | Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data | Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear (e.g. an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10. | accept-poster | The paper proposes an attention mechanism for equivariant neural networks towards the goal of attending to co-occurring features. It instantiates the approach with rotation and reflection transformations, and reports results on rotated MNIST and CIFAR-10. All reviewers have found the idea of using self-attention on top of equivariant feature maps technically novel and sound. There were some concerns about readability which the authors should try to address in the final version. | train | [
"ryguPqudYH",
"Bygx8wipFS",
"SyxhYn43or",
"SJxppfLdjH",
"HyeUqyfDjH",
"SJlj21lIoS",
"S1eO4fn4ir",
"HkeuQBi4iS",
"HyezazUxjS",
"SkeC8XEloS",
"HJl2HyEgoB",
"SJlMDm7eoS",
"rJgHp0GgjH",
"H1eJfdke5r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"[Post-rebuttal update]\n\nHaving read the rebuttals and seen the new draft, the authors have answered a lot of my concerns. I am still unsatisfied about the experimental contribution, but I guess producing a paper full of theory and good experiments is a tall ask. Having also read through the concerns of the other... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_r1g6ogrtDr",
"iclr_2020_r1g6ogrtDr",
"iclr_2020_r1g6ogrtDr",
"HyeUqyfDjH",
"SJlj21lIoS",
"S1eO4fn4ir",
"HkeuQBi4iS",
"iclr_2020_r1g6ogrtDr",
"H1eJfdke5r",
"HJl2HyEgoB",
"Bygx8wipFS",
"rJgHp0GgjH",
"ryguPqudYH",
"iclr_2020_r1g6ogrtDr"
] |
iclr_2020_Hyx0slrFvH | Mixed Precision DNNs: All you need is a good parametrization | Efficient deep neural network (DNN) inference on mobile or embedded devices typically involves quantization of the network parameters and activations. In particular, mixed precision networks achieve better performance than networks with homogeneous bitwidth for the same size constraint. Since choosing the optimal bitwidths is not straight forward, training methods, which can learn them, are desirable. Differentiable quantization with straight-through gradients allows to learn the quantizer's parameters using gradient methods. We show that a suited parametrization of the quantizer is the key to achieve a stable training and a good final performance. Specifically, we propose to parametrize the quantizer with the step size and dynamic range. The bitwidth can then be inferred from them. Other parametrizations, which explicitly use the bitwidth, consistently perform worse. We confirm our findings with experiments on CIFAR-10 and ImageNet and we obtain mixed precision DNNs with learned quantization parameters, achieving state-of-the-art performance. | accept-poster | The reviewers uniformly vote to accept this paper. Please take comments into account when revising for the camera ready. I was also very impressed by the authors' responsiveness to reviewer comments, putting in additional work after submission. | train | [
"ryl5_mkRKr",
"Hkg9XIQnsB",
"Syee59qBjH",
"r1ePi95Bor",
"ByxD5fONsr",
"HJebt9uEoS",
"BJxSefdmoH",
"BJeWPJ87oB",
"SkltZl2zsH",
"rJehYBRCFH",
"rJxa1xZHcr",
"Skx62lU2Fr",
"SklDkN4otS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper considers the problem of training mixed-precision models. \nSince quantization involves non-differentiable operations, this paper discusses how to use the straight-through estimator to estimate the gradients, and how different parameterizations of the quantized DNN affect the optimization process. The ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
-1,
-1
] | [
"iclr_2020_Hyx0slrFvH",
"iclr_2020_Hyx0slrFvH",
"BJeWPJ87oB",
"Syee59qBjH",
"rJxa1xZHcr",
"rJehYBRCFH",
"SkltZl2zsH",
"iclr_2020_Hyx0slrFvH",
"ryl5_mkRKr",
"iclr_2020_Hyx0slrFvH",
"iclr_2020_Hyx0slrFvH",
"SklDkN4otS",
"iclr_2020_Hyx0slrFvH"
] |
iclr_2020_rkg1ngrFPr | Information Geometry of Orthogonal Initializations and Training | Recently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward propagated activations and the backpropagated gradients are near ℓ2 isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, and we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, we observe a surprising yet robust behavior of highly isometric initializations --- even though such networks have a lower FIM condition number \emph{at initialization}, and therefore by analogy to convex functions should be easier to optimize, experimentally they prove to be much harder to train with stochastic gradient descent. We conjecture the FIM condition number plays a non-trivial role in the optimization. | accept-poster | I've gone over this paper carefully and think it's above the bar for ICLR.
The paper proves a relationship between the eigenvalues of the Fisher information matrix and the singular values of the network Jacobian. The main step is bounding the eigenvalues of the full Fisher matrix in terms of the eigenvalues and singular values of individual blocks using Gersgorin disks. The analysis seems correct and (to the best of my knowledge) novel, and relationships between the Jacobian and FIM are interesting insofar as they give different ways of looking at linearized approximations. The Gersgorin disk analysis seems like it may give loose bounds, but the analysis still matches up well with the experiments.
The paper is not quite as strong when it comes to relating the analysis to optimization. The maximum eigenvalue of the FIM by itself doesn't tell us much about the difficulty of optimization. E.g., if the top FIM eigenvalue is increased, but the distance the weights need to travel is proportionately decreased (as seems plausible when the Jacobian scale is changed), then one could make just as fast progress with a smaller learning rate. So in this light, it's not too surprising that the analysis fails to capture the optimization dynamics once the learning rates are tuned. But despite this limitation, the contribution still seems worthwhile.
The writing can still be improved.
The claim about stability of the linearization explaining the training dynamics appears fairly speculative, and not closely related to the analysis and experiments. I recommend removing it, or at least removing it from the abstract.
| train | [
"SygON0qAYB",
"rylzvaO2oB",
"rylq7Ph2FS",
"Hkgcx64hoH",
"HkgLkpNnsH",
"B1e_934noS",
"rkl0v34nor",
"B1ePSnNhjr",
"rygBQhV2sS",
"rJeBe342sr",
"ryxt7P3nYS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper analyses the training behavior of wide networks and argues orthogonal initialization helps the training. They suggest projections to the manifold of orthogonal weights during training and provide analysis. Their main result seems to be a bound on the eigen-values of the Fisher information matrix for wid... | [
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_rkg1ngrFPr",
"B1e_934noS",
"iclr_2020_rkg1ngrFPr",
"HkgLkpNnsH",
"ryxt7P3nYS",
"rylq7Ph2FS",
"B1ePSnNhjr",
"rygBQhV2sS",
"rJeBe342sr",
"SygON0qAYB",
"iclr_2020_rkg1ngrFPr"
] |
iclr_2020_rJxe3xSYDS | Extreme Classification via Adversarial Softmax Approximation | Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which often is prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large scale data sets that show a reduction of the training time by an order of magnitude relative to several competitive baselines.
| accept-poster | The paper proposes a fast training method for extreme classification problems where number of classes is very large. The method improves the negative sampling (method which uses uniform distribution to sample the negatives) by using an adversarial auxiliary model to sample negatives in a non-uniform manner. This has logarithmic computational cost and minimizes the variance in the gradients. There were some concerns about missing empirical comparisons with methods that use sampled-softmax approach for extreme classification. While these comparisons will certainly add further value to the paper, the improvement over widely used method of negative sampling and a formal analysis of improvement from hard negatives is a valuable contribution in itself that will be of interest to the community. Authors should include the experiments on small datasets to quantify the approximation gap due to negative sampling compared to full softmax, as promised. | val | [
"H1gf7y-RtB",
"rye6PDeniH",
"SyxHbQehjr",
"Syxtigl2sr",
"H1lmZ7TpKr",
"ByeL_F-q9B"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on efficient and fast training in the extreme classification setting where the number of classes C is very large. In this setting, naively using softmax based loss function incurs a prohibitively large cost as the cost of computing the loss value for each example scales linearly with C. One way ... | [
6,
-1,
-1,
-1,
3,
8
] | [
4,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_rJxe3xSYDS",
"H1lmZ7TpKr",
"H1gf7y-RtB",
"ByeL_F-q9B",
"iclr_2020_rJxe3xSYDS",
"iclr_2020_rJxe3xSYDS"
] |
iclr_2020_HJx-3grYDB | Learning Nearly Decomposable Value Functions Via Communication Minimization | Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization learning emerges as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods have been focusing on learning fully decentralized value functions, which are not efficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable Q-functions (NDQ) via communication minimization, with which agents act on their own most of the time but occasionally send messages to other agents in order for effective coordination. This framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers. These regularizers are maximizing mutual information between agents' action selection and communication messages while minimizing the entropy of messages between agents. We show how to optimize these regularizers in a way that is easily integrated with existing value function factorization methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit micromanagement benchmark, our framework significantly outperforms baseline methods and allows us to cut off more than 80% of communication without sacrificing the performance. The videos of our experiments are available at https://sites.google.com/view/ndq. | accept-poster | The paper extends recent value function factorization methods for the case where limited agent communication is allowed. The work is interesting and well motivated. The reviewers brought up a number of mostly minor issues, such as unclear terms and missing implementation details. As far as I can see, the reviewers have addressed these issues successfully in their updated version. 
Hence, my recommendation is accept. | train | [
"BkxAqrOhsr",
"rJeQuBzPjH",
"HJlk7y82ir",
"rylDKVfwiS",
"HJgCxDMviS",
"SJgHHSfPsr",
"HJgpxkoRFH",
"SygnB7pe5S",
"HyxgLm1q9B"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback. \n\nFor point 2:\nIn our formulation, we use the action selection variable $A_j$ of agent $j$, which is deterministic only when the histories of *all* agents are given. However, in the definition of the mutual information $I(A_j; M_{ij} | \\mathrm{T}_j, M_{(-i)j})$, the history of agen... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
1
] | [
"HJlk7y82ir",
"HJgpxkoRFH",
"rylDKVfwiS",
"HyxgLm1q9B",
"iclr_2020_HJx-3grYDB",
"SygnB7pe5S",
"iclr_2020_HJx-3grYDB",
"iclr_2020_HJx-3grYDB",
"iclr_2020_HJx-3grYDB"
] |
iclr_2020_rylb3eBtwr | Robust Subspace Recovery Layer for Unsupervised Anomaly Detection | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a ``manifold" close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall. | accept-poster | Three reviewers have assessed this paper and they have scored it as 6/6/6/6 after rebuttal. Nonetheless, the reviewers have raised a number of criticisms and the authors are encouraged to resolve them for the camera-ready submission. | train | [
"HyxIlEaNFH",
"ryl4G2X15S",
"rJe8kNCjsr",
"r1l1nslHiH",
"BylbIigSiB",
"rJxzJoxBsH",
"rylGn9lSir",
"S1gfPcgSiB",
"HkxvUjYsKS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes to use the robust subspace recovery layer (RSR) in the autoencoder model for unsupervised anomaly detection.\nThis paper is well written overall. Presentation is clear and it is easy to follow.\nThe proposed approach is a simple combination of existing approaches.\nAlthough its theoretical anal... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_rylb3eBtwr",
"iclr_2020_rylb3eBtwr",
"iclr_2020_rylb3eBtwr",
"HyxIlEaNFH",
"HkxvUjYsKS",
"ryl4G2X15S",
"ryl4G2X15S",
"ryl4G2X15S",
"iclr_2020_rylb3eBtwr"
] |
iclr_2020_ryxB2lBtvH | Learning to Coordinate Manipulation Skills via Skill Behavior Diversification | When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together. Similarly, a robot with multiple end-effectors can perform complex tasks by coordinating sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a modular framework that first individually trains sub-skills of each end-effector with skill behavior diversification, and then learns to coordinate end-effectors using diverse behaviors of the skills. We demonstrate that our proposed framework is able to efficiently coordinate skills to solve challenging collaborative control tasks such as picking up a long bar, placing a block inside a container while pushing the container with two robot arms, and pushing a box with two ant agents. Videos and code are available at https://clvrai.com/coordination | accept-poster | This paper deals with multi-agent hierarchical reinforcement learning. A discrete set of pre-specified low-level skills are modulated by a conditioning vector and trained in a fashion reminiscent of Diversity Is All You Need, and then combined via a meta-policy which coordinates multiple agents in pursuit of a goal. The idea is that fine control over primitive skills is beneficial for achieving coordinated high-level behaviour.
The paper improved considerably in its completeness and in the addition of baselines, notably DIAYN without discrete, mutually exclusive skills. Reviewers agreed that the problem is interesting and the method, despite involving a degree of hand-crafting, showed promise for informing future directions.
On the basis that this work addresses an interesting problem setting with a compelling set of experiments, I recommend acceptance. | train | [
"H1eyGV_aYB",
"HygPavW2iB",
"Byenowbnjr",
"SJggRL-3iS",
"SJljR-WnjS",
"S1lSlx-noB",
"HJxR4YcnYS",
"r1eDu0Dk5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper provides a specific way of incorporating temporal abstraction into the multi-agent reinforcement learning (MARL) setting. Specifically, this method first discovers diversified skills for every single agent and then train a meta-policy to choose among skills for all agents. \n\nOverall, this paper is wel... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_ryxB2lBtvH",
"Byenowbnjr",
"iclr_2020_ryxB2lBtvH",
"H1eyGV_aYB",
"HJxR4YcnYS",
"r1eDu0Dk5S",
"iclr_2020_ryxB2lBtvH",
"iclr_2020_ryxB2lBtvH"
] |
iclr_2020_SJx9ngStPH | NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search | One-shot neural architecture search (NAS) has played a crucial role in making
NAS methods computationally feasible in practice. Nevertheless, there is still a
lack of understanding on how these weight-sharing algorithms exactly work due
to the many factors controlling the dynamics of the process. In order to allow
a scientific study of these components, we introduce a general framework for
one-shot NAS that can be instantiated to many recently-introduced variants and
introduce a general benchmarking framework that draws on the recent large-scale
tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot
NAS methods. To showcase the framework, we compare several state-of-the-art
one-shot NAS methods, examine how sensitive they are to their hyperparameters
and how they can be improved by tuning their hyperparameters, and compare their
performance to that of blackbox optimizers for NAS-Bench-101. | accept-poster | The authors present a new benchmark for architecture search. Reviews were somewhat mixed, but also with mixed confidence scores. I recommend acceptance as poster - and encourage the authors to also cite https://openreview.net/forum?id=HJxyZkBKDr | train | [
"BJgfr4RFoS",
"BkeNfNCFjB",
"HyxD2ZRFjH",
"BJgaF-RFsr",
"SyxkV1RtsH",
"SkljhB5ndB",
"BkeRiU01qB",
"Byxt3MX9qH"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"-- References --\n[1] Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, Frank Hutter, NAS-Bench-101: Towards Reproducible Neural Architecture Search, ICML 2019\n[2] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean, Efficient Neural Architecture Search via Parameter Sharing, ICML ... | [
-1,
-1,
-1,
-1,
-1,
1,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"BkeNfNCFjB",
"SkljhB5ndB",
"BkeRiU01qB",
"Byxt3MX9qH",
"iclr_2020_SJx9ngStPH",
"iclr_2020_SJx9ngStPH",
"iclr_2020_SJx9ngStPH",
"iclr_2020_SJx9ngStPH"
] |
iclr_2020_BJlahxHYDS | Conservative Uncertainty Estimation By Fitting Prior Networks | Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice. | accept-poster | The paper provides theoretical justification for a previously proposed method for uncertainty estimation based on sampling from a prior distribution (Osband et al., Burda et al.).
The reviewers initially raised concerns about significance, clarity and experimental evaluation, but the author rebuttal addressed most of these concerns.
In the end, all the reviewers agreed that the paper deserves to be accepted. | train | [
"B1xA7Po2tS",
"HkgK3JURYB",
"HylZ0QwnjH",
"HkxA9Xw2iB",
"SkxSCMv2sr",
"SJxttMvhiH",
"rylzpBMioS",
"BJeSRf3qoH",
"BkxsBgKdoH",
"Byl7Pu4QiB",
"rkeEsDE7jB",
"HJeO7lNQsS",
"Hkl9YrEXjr",
"SJxrqE4Xor",
"B1gsx74XoS",
"B1xsPiVxsH",
"HJgnXst2tr",
"HJecBlyaur",
"Bklx07aldH",
"rJlP-g5JuH"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Overview:\nThis paper introduces a new method for uncertainty estimation which utilizes randomly initialized networks. Essentially, instead of training a single predictor that outputs means and uncertainty estimates together, authors propose to have two separate models: one that outputs means, and one that outputs... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2020_BJlahxHYDS",
"iclr_2020_BJlahxHYDS",
"HkgK3JURYB",
"B1xA7Po2tS",
"BJeSRf3qoH",
"iclr_2020_BJlahxHYDS",
"BJeSRf3qoH",
"B1gsx74XoS",
"iclr_2020_BJlahxHYDS",
"B1xsPiVxsH",
"HkgK3JURYB",
"iclr_2020_BJlahxHYDS",
"B1xA7Po2tS",
"HJgnXst2tr",
"HJgnXst2tr",
"iclr_2020_BJlahxHYDS",
... |
iclr_2020_rkgg6xBYDH | Understanding Generalization in Recurrent Neural Networks | In this work, we develop the theory for analyzing the generalization performance of recurrent neural networks. We first present a new generalization bound for recurrent neural networks based on matrix 1-norm and Fisher-Rao norm. The definition of Fisher-Rao norm relies on a structural lemma about the gradient of RNNs. This new generalization bound assumes that the covariance matrix of the input data is positive definite, which might limit its use in practice. To address this issue, we propose to add random noise to the input data and prove a generalization bound for training with random noise, which is an extension of the former one. Compared with existing results, our generalization bounds have no explicit dependency on the size of networks. We also discover that Fisher-Rao norm for RNNs can be interpreted as a measure of gradient, and incorporating this gradient measure not only can tighten the bound, but allows us to build a relationship between generalization and trainability. Based on the bound, we theoretically analyze the effect of covariance of features on generalization of RNNs and discuss how weight decay and gradient clipping in the training can help improve generalization. | accept-poster | This paper presents a generalization bound for RNNs based on matrix 1-norm and Fisher-Rao norm. As the initial bound relies on non-singularity of the input covariance, which may not always hold in practice, the authors present additional analysis by noise injection to ensure the covariance is positive definite. Through the resulting bound, the paper discusses how weight decay and gradient clipping in training can help generalization.
Reviewers raised some concerns, including the rigor of the reported experimental results, the claims about generalization in the IMDB experiment, the claim of no explicit dependence on network size, and the relationship of small eigenvalues in the input covariance to high-frequency features. The authors responded to these points and revised their draft to address most of them (in particular, they added a new appendix section with additional experimental results). Reviewers were largely satisfied with the responses and the revision, and they all recommend acceptance.
| val | [
"SyeQvOGKiS",
"Byl1NFbKjr",
"Hygpk4fFoH",
"H1lNcs-Kjr",
"rJxdMvq3KB",
"rJeKFZ8aFH",
"H1lnk5Sp5H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate the insightful and constructive comments by all reviewers. We have revised the manuscript as suggested by the reviewers, and summarize the major changes as follows:\n\n> Update all misleading sentences pointed out by the reviewers.\n\n> Replot Figure 1 with error bars.\n\n> Add a new Appendix D which... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
1,
1
] | [
"iclr_2020_rkgg6xBYDH",
"rJxdMvq3KB",
"H1lnk5Sp5H",
"rJeKFZ8aFH",
"iclr_2020_rkgg6xBYDH",
"iclr_2020_rkgg6xBYDH",
"iclr_2020_rkgg6xBYDH"
] |
iclr_2020_HyebplHYwB | The Shape of Data: Intrinsic Distance for Data Distributions | The ability to represent and compare machine learning models is crucial in order to quantify subtle model changes, evaluate generative models, and gather insights on neural network architectures. Existing techniques for comparing data distributions focus on global data properties such as mean and covariance; in that sense, they are extrinsic and uni-scale. We develop a first-of-its-kind intrinsic and multi-scale method for characterizing and comparing data manifolds, using a lower-bound of the spectral variant of the Gromov-Wasserstein inter-manifold distance, which compares all data moments. In a thorough experimental study, we demonstrate that our method effectively discerns the structure of data manifolds even on unaligned data of different dimensionalities; moreover, we showcase its efficacy in evaluating the quality of generative models. | accept-poster | This paper introduces a way to measure dataset similarities. Reviewers all agree that this method is novel and interesting. A few questions initially raised by the reviewers, regarding models with and without likelihoods, the geometric exposition, and guarantees around GW, were promptly answered by the authors, raising all scores to weak accept.
| test | [
"HJxW01GvYB",
"SJxlZk-hoH",
"HJe2GlmisH",
"H1gtFkmiiS",
"rkg6ARzssr",
"HklyYEYltS",
"SylsznxZcB",
"ryeqp63PuH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"The paper propose a novel way to measure similarity between datasets, which e.g. is useful to determine if samples from a generative model resembles a test dataset. The approach is presented as an efficient numerical algorithm for working with diffusion on data manifolds.\n\nI am very much in doubt about this pape... | [
6,
-1,
-1,
-1,
-1,
6,
6,
-1
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
-1
] | [
"iclr_2020_HyebplHYwB",
"H1gtFkmiiS",
"HklyYEYltS",
"HJxW01GvYB",
"SylsznxZcB",
"iclr_2020_HyebplHYwB",
"iclr_2020_HyebplHYwB",
"iclr_2020_HyebplHYwB"
] |
iclr_2020_S1erpeBFPB | How to 0wn the NAS in Your Spare Time | New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers top-performing architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side-channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g. embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and Tensorflow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems. 
| accept-poster | This paper proposes using Flush+Reload to infer the deep network architecture of another program, when the two programs are running on the same machine (as in cloud computing or similar).
There is some disagreement about this paper; the approach is thoughtful and well executed, but one reviewer had concerns about its applicability and realism. Upon reading the author's rebuttal I believe these to be largely addressed, or at least as realistically as one can in a single paper. Therefore I recommend acceptance. | test | [
"Hygvy9O3ir",
"B1ekcK5tir",
"HJlPSt5YoH",
"H1xcou5YoS",
"HygKud9KoS",
"SkxmSd5KiS",
"rkxi9slCKr",
"rJgGcD4RtS",
"SkgYKSmAtS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank our reviewers again for taking the time to read, evaluate our work, and provide constructive feedback. We have uploaded a revised version of our paper, with edits to address the concerns raised. Here, we summarize our responses and updates below:\n\n[Reviewer 1]\nQ1. We provide our answer to the concerns ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
1
] | [
"iclr_2020_S1erpeBFPB",
"HJlPSt5YoH",
"SkgYKSmAtS",
"rkxi9slCKr",
"SkxmSd5KiS",
"rJgGcD4RtS",
"iclr_2020_S1erpeBFPB",
"iclr_2020_S1erpeBFPB",
"iclr_2020_S1erpeBFPB"
] |
iclr_2020_B1xSperKvH | Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation | Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes) which can potentially lead to higher energy-efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. These type of converted SNNs require a large number of time steps to achieve competitive accuracy which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step the neuron generated an output spike. The SNNs trained with our hybrid conversion-and-STDB training perform at 10×−25× fewer number of time steps and achieve similar accuracy compared to purely converted SNNs. The proposed training methodology converges in less than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. 
We perform experiments on CIFAR-10, CIFAR-100 and ImageNet datasets for both VGG and ResNet architectures. We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10× faster compared to converted SNNs with similar accuracy. | accept-poster | After the rebuttal, all reviewers rated this paper as a weak accept.
The reviewer leaning towards rejection was satisfied with the author response and ended up raising their rating to a weak accept. The AC recommends acceptance. | train | [
"SklebdhWoB",
"BkgpDWL5sB",
"H1xRH-89sr",
"B1gEUxI5iB",
"Hke4VJI5iS",
"S1gQnUOTFS",
"Bye9F-4CFS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper examines combining two approaches of obtaining a trained spikingneural network (SNN). The first approach of previous work is converting the weights of a trained artificial neural network (ANN) with a given architecture, to the weights and thresholds of a SNN, and the second approach uses a surrogate gra... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_B1xSperKvH",
"S1gQnUOTFS",
"Bye9F-4CFS",
"Hke4VJI5iS",
"SklebdhWoB",
"iclr_2020_B1xSperKvH",
"iclr_2020_B1xSperKvH"
] |
iclr_2020_HJxdTxHYvB | BREAKING CERTIFIED DEFENSES: SEMANTIC ADVERSARIAL EXAMPLES WITH SPOOFED ROBUSTNESS CERTIFICATES | Defenses against adversarial attacks can be classified into certified and non-certified. Certifiable defenses make networks robust within a certain ℓp-bounded radius, so that it is impossible for the adversary to make adversarial examples in the certificate bound. We present an attack that maintains the imperceptibility property of adversarial examples while being outside of the certified radius. Furthermore, the proposed "Shadow Attack" can fool certifiably robust networks by producing an imperceptible adversarial example that gets misclassified and produces a strong ``spoofed'' certificate. | accept-poster | This work presents a "shadow attack" that fools certifiably robust networks by searching for imperceptible adversarial examples outside of the certified radius. The reviewers are generally positive about the novelty and contribution of the work.
"HyxJPqchoB",
"r1lSZ9qnir",
"rJe2nF5nsB",
"SJllHP3yjB",
"ryg2OYFAFS",
"rJeb2tXkqB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your constructive feedback. We have modified the paper to clarify some of the terms per your suggestion. Please find our detailed response below:\n\n[R1: In Table 1, for ImageNet, Shadow Attack does not always generate adversarial examples that have on average larger certified radii than the natural par... | [
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
3,
1,
1
] | [
"ryg2OYFAFS",
"rJeb2tXkqB",
"SJllHP3yjB",
"iclr_2020_HJxdTxHYvB",
"iclr_2020_HJxdTxHYvB",
"iclr_2020_HJxdTxHYvB"
] |
iclr_2020_Skxd6gSYDS | Query-efficient Meta Attack to Deep Neural Networks | Black-box attack methods aim to infer suitable attack patterns to targeted DNN models by only using output feedback of the models and the corresponding input queries. However, due to lack of prior and inefficiency in leveraging the query and feedback information, existing methods are mostly query-intensive for obtaining effective attack patterns. In this work, we propose a meta attack approach that is capable of attacking a targeted model with much fewer queries. Its high query-efficiency stems from effective utilization of meta learning approaches in learning generalizable prior abstraction from the previously observed attack patterns and exploiting such prior to help infer attack patterns from only a few queries and outputs. Extensive experiments on MNIST, CIFAR10 and tiny-Imagenet demonstrate that our meta-attack method can remarkably reduce the number of model queries without sacrificing the attack performance. Besides, the obtained meta attacker is not restricted to a particular model but can be used easily with a fast adaptive ability to attack a variety of models. Our code will be released to the public. | accept-poster | This paper proposes a meta attack approach based on meta learning approaches to learn generalizable prior from the previously observed attack patterns. The proposed approach is able to attack targeted models with much fewer queries. After author response, all reviewers are very positive about the paper. Thus I recommend accept. | train | [
"HkghA12YsS",
"Hyl8uEoTKr",
"S1xaPlJ3jS",
"rkxMuTp4cr",
"BkegNlL5iS",
"ryeEXlnKoS",
"SJgSky2KoS",
"H1gbr0itoB",
"SkgrvaoKsH",
"SyxMU8GpYr",
"BJli9wVUuH",
"SJgwk1z2YS",
"Sklxkpz__S"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"public",
"author"
] | [
"Thank you for your thoughtful review. We have conducted three additional experiments to answer your questions; our response is as below. \n\nQuestion 1: The authors miss some important details about training meta attackers like how many images you choose to get the image and its gradient pairs. Are the images from... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1
] | [
"SyxMU8GpYr",
"iclr_2020_Skxd6gSYDS",
"SJgSky2KoS",
"iclr_2020_Skxd6gSYDS",
"SkgrvaoKsH",
"SyxMU8GpYr",
"Hyl8uEoTKr",
"rkxMuTp4cr",
"rkxMuTp4cr",
"iclr_2020_Skxd6gSYDS",
"iclr_2020_Skxd6gSYDS",
"Sklxkpz__S",
"BJli9wVUuH"
] |
iclr_2020_HyeYTgrFPB | Massively Multilingual Sparse Word Representations | In this paper, we introduce Mamus for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units which get reutilized across languages, providing it a competitive edge both in terms of speed and evaluation performance. We demonstrate that our proposed algorithm behaves competitively to strong baselines through a series of rigorous experiments performed towards downstream applications spanning over dependency parsing, document classification and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggests that the proposed sparse word representations convey an increased interpretability as opposed to alternative approaches. Finally, we are releasing our multilingual sparse word representations for the 27 typologically diverse set of languages that we conducted our various experiments on. | accept-poster | This paper describes a new method for creating word embeddings that can operate on corpora from more than one language. The algorithm is simple, but rivals more complex approaches.
The reviewers were happy with this paper. They were also impressed that the authors ran the requested multi-lingual BERT experiments, even though they did not show positive results. One reviewer did think that non-contextual word embeddings were of less interest to the NLP community, but thought your arguments for the computational efficiency were convincing. | train | [
"S1eGTLLnjB",
"SJeylM2cjS",
"HJlcVa2doH",
"B1l1Gwp_ir",
"Byx08zpuiS",
"rJe0V3HRKH",
"B1lY96o1cH",
"SklpIWaPqS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks very much for adding these two experiments: for adding multilingual BERT to the paper and discussing the relation to BiSparse in the comments above (and presumably later adding it to the paper). I think the conclusion that you actually outperform BiSparse really helps strengthen the paper, and I greatly ap... | [
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"HJlcVa2doH",
"SklpIWaPqS",
"SklpIWaPqS",
"rJe0V3HRKH",
"B1lY96o1cH",
"iclr_2020_HyeYTgrFPB",
"iclr_2020_HyeYTgrFPB",
"iclr_2020_HyeYTgrFPB"
] |
iclr_2020_Hyg96gBKPS | Monotonic Multihead Attention | Simultaneous machine translation models start generating a target sequence before they have encoded or read the source sequence. Recent approaches for this task either apply a fixed policy to a Transformer, or use learnable monotonic attention on a weaker recurrent-neural-network-based structure. In this paper, we propose a new attention mechanism, Monotonic Multihead Attention (MMA), which introduces the monotonic attention mechanism into multihead attention. We also introduce two novel interpretable approaches for latency control that are specifically designed for multiple attention heads. We apply MMA to the simultaneous machine translation task and demonstrate better latency-quality tradeoffs compared to MILk, the previous state-of-the-art approach.
| accept-poster | This paper extends previous models for monotonic attention to the multi-head attention used in Transformers, yielding "Monotonic Multi-head Attention." The proposed method achieves better latency-quality tradeoffs in simultaneous MT tasks in two language pairs.
The proposed method is a relatively straightforward extension of the previous Hard and Infinite Lookback monotonic attention models. However, all reviewers seem to agree that this paper is a meaningful contribution to the task of simultaneous MT, and the revised version of the paper (along with the authors' comments) addressed most of the raised concerns.
Therefore, I propose acceptance of this paper. | train | [
"rkeK-Biu9r",
"BJeSYds2ir",
"SJxMKAc3jB",
"H1eys4ajjH",
"HygsSKziiS",
"H1lRkHlqsH",
"ryeQS4x5iB",
"SJlFL7lqsB",
"HJxiJmeciB",
"r1xlQQxcor",
"SJg74-IRYB",
"HkxUOzOkqH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an approach for simultaneous neural machine translation. While prior works deal with recurrent models, the authors adopt previous approaches for Transformer. Specifically, for decoder-encoder attention they introduce Monotonic Multihead Attention (MMA) designed to deal with several attention hea... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_Hyg96gBKPS",
"ryeQS4x5iB",
"r1xlQQxcor",
"HygsSKziiS",
"HJxiJmeciB",
"SJg74-IRYB",
"HkxUOzOkqH",
"rkeK-Biu9r",
"rkeK-Biu9r",
"rkeK-Biu9r",
"iclr_2020_Hyg96gBKPS",
"iclr_2020_Hyg96gBKPS"
] |
iclr_2020_BkeoaeHKDS | Gradients as Features for Deep Representation Learning | We address the challenging problem of deep representation learning -- the efficient adaption of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of the model parameters with respect to a task-specific loss given an input sample. Our key innovation is the design of a linear model that incorporates both gradient and activation of the pre-trained network. We demonstrate that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients. Our method is evaluated across a number of representation-learning tasks on several datasets and using different network architectures. Strong results are obtained in all settings, and are well-aligned with our theoretical insights. | accept-poster | The paper makes a reasonable contribution to extracting useful features from a pre-trained neural network. The approach is conceptually simple and sufficient evidence is provided of its effectiveness. In addition to the connection to tangent kernels there also appears to be a relationship to holographic feature representations of deep networks. The authors did do a reasonable job of providing additional ablation studies, but the paper would be improved if a clearer study were added to investigate applying the technique to different layers. All of the reviewer comments appear worthwhile, but AnonReviewer2 in particular provides important guidance for improving the paper. | train | [
"S1xP7SAaFS",
"rJxTJEiojr",
"SJlx-Qijjr",
"r1l1mGjojH",
"SkgVRxoisH",
"r1l25t7-9S",
"B1gcXmQr5H"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper proposes to use the gradients of specific layers of convolutional networks as features in a linearized model for transfer learning and fast adaptation. The method is theoretically backed by an appeal to the recently proposed neural tangent kernel and seems like it could be practically useful.\n... | [
6,
-1,
-1,
-1,
-1,
3,
8
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_BkeoaeHKDS",
"SJlx-Qijjr",
"S1xP7SAaFS",
"r1l25t7-9S",
"B1gcXmQr5H",
"iclr_2020_BkeoaeHKDS",
"iclr_2020_BkeoaeHKDS"
] |
iclr_2020_ryxyCeHtPB | Pay Attention to Features, Transfer Learn Faster CNNs | Deep convolutional neural networks are now widely deployed in vision applications, but a limited size of training data can restrict their task performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pretrained on large datasets. Blindly transferring all learned features from the source dataset, however, brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS), which not only adjusts the strength of transfer learning regularization but also dynamically determines the important features to transfer. By deploying AFDS on ResNet-101, we achieved a state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. With a 10x MACs reduction budget, a ResNet-101 equipped with AFDS transfer learned from ImageNet to Stanford Dogs 120, can achieve an accuracy 11.07% higher than its best competitor. | accept-poster | This paper presents an attention-based approach to transfer faster CNNs, which tackles the problem of jointly transferring source knowledge and pruning target CNNs.
Reviewers are unanimously positive about the paper: it is well written, and its reasonable approach yields strong empirical performance under the resource constraint.
AC feels that the paper studies an important problem, making transfer learning faster for CNNs; however, the proposed model is a relatively straightforward combination of fine-tuning and filter pruning, each of which has very extensive prior work. AC also has critical comments for improving this paper:
- The Attentive Feature Distillation (AFD) module is very similar to DELTA (Li et al. ICLR 2019) and L2T (Jang et al. ICML 2019), significantly weakening the novelty. The empirical evaluation should consider DELTA as a baseline, e.g., AFS+DELTA.
I accept this paper, assuming that all comments will be well addressed in the revision. | train | [
"BJxy8nSUiS",
"r1e9TnrIsH",
"BJlAoqSIsH",
"Ske1UQAX9B",
"BkeROrlU5B",
"BklJwjhd5B",
"Skg8EJfGuH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thank you for your comments. We would like to answer your questions:\n1. Consider a convolution operation with a “k * k” kernel, which takes input features with “Ci” channels, and computes feature maps of shape “Co * Ho * Wo”. To evaluate the convolution thus requires “k^2 * Ci * Co * Ho * Wo” multiply-accumulate ... | [
-1,
-1,
-1,
6,
6,
8,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1
] | [
"BkeROrlU5B",
"Ske1UQAX9B",
"BklJwjhd5B",
"iclr_2020_ryxyCeHtPB",
"iclr_2020_ryxyCeHtPB",
"iclr_2020_ryxyCeHtPB",
"iclr_2020_ryxyCeHtPB"
] |
iclr_2020_SJgwNerKvB | Continual learning with hypernetworks | Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets. | accept-spotlight | This paper proposes to use hypernetworks to prevent catastrophic forgetting. Overall, the paper is well-written, well-motivated, and the idea is novel. Experimentally, the proposed approach achieves SOTA on various (well-chosen) standard CL benchmarks (notably P-MNIST for CL, Split MNIST) and also does reasonably well on the Split CIFAR-10/100 benchmark. The authors are encouraged to investigate alternative penalties in the rehearsal objective, and also to add comparisons with methods like HAT and PackNet.
"HyeR6lUYiB",
"r1gp9gUKsH",
"rkludeIKiS",
"rkgcEg8tjH",
"S1gdMgUtjB",
"rkebr3IhKS",
"rygAU3qTtB",
"H1ehZ8mCtH"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are very grateful to all three reviewers for the time taken in carefully assessing our work and for the overall positive feedback, that we found very encouraging.\n\nWe have added new results including a study of the PermutedMNIST-100 CL2/3 benchmark and improved the clarity of the manuscript. We also provide n... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2020_SJgwNerKvB",
"rkludeIKiS",
"rkebr3IhKS",
"rygAU3qTtB",
"H1ehZ8mCtH",
"iclr_2020_SJgwNerKvB",
"iclr_2020_SJgwNerKvB",
"iclr_2020_SJgwNerKvB"
] |
iclr_2020_BkxUvnEYDH | Program Guided Agent | Developing agents that can learn to follow natural language instructions has been an emerging research direction. While being accessible and flexible, natural language instructions can sometimes be ambiguous even to humans. To address this, we propose to utilize programs, structured in a formal language, as a precise and expressive way to specify tasks. We then devise a modular framework that learns to perform a task specified by a program – as different circumstances give rise to diverse ways to accomplish the task, our framework can perceive which circumstance it is currently under, and instruct a multitask policy accordingly to fulfill each subtask of the overall task. Experimental results on a 2D Minecraft environment not only demonstrate that the proposed framework learns to reliably accomplish program instructions and achieves zero-shot generalization to more complex instructions but also verify the efficiency of the proposed modulation mechanism for learning the multitask policy. We also conduct an analysis comparing various models which learn from programs and natural language instructions in an end-to-end fashion. | accept-spotlight | This paper provides a fascinating hybridization approach to incorporating programs as priors over policies which are then refined using deep RL. The reviewers were, at the end of the discussion, all in favour of acceptance (with the majority strongly in favour). An excellent paper I hope to see included in the conference. | train | [
"rylluKUCtS",
"SJgwVEYmcH",
"Hkx71T7AYS",
"rkggHuCoir",
"HylsHJ0oir",
"SJlqIu6jjr",
"rkge8UTjoH",
"HyeFzOOjiS",
"ryxUH8E5iS",
"rkgDRUVcjS",
"B1gyl0MsjH",
"SyevQUEqjr",
"rkg-m_45sS",
"Byxg1tN5iH",
"ByxHNFE5ir",
"BJlssu49jB",
"HJxmHONqsH",
"HkljewEqjS",
"HkxtwIVqsH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper presents a reinforcement learning agent that learns to execute tasks specified in a form of programs with an architecture consisting of three modules. The (fixed) interpreter module interprets the program, by issuing queries to a (pre-trained) vision module and giving goals to a policy module that execu... | [
6,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_BkxUvnEYDH",
"iclr_2020_BkxUvnEYDH",
"iclr_2020_BkxUvnEYDH",
"SJlqIu6jjr",
"rkge8UTjoH",
"HkljewEqjS",
"rkgDRUVcjS",
"B1gyl0MsjH",
"Hkx71T7AYS",
"SJgwVEYmcH",
"rkg-m_45sS",
"Hkx71T7AYS",
"rylluKUCtS",
"rylluKUCtS",
"iclr_2020_BkxUvnEYDH",
"rylluKUCtS",
"rylluKUCtS",
"SJg... |
iclr_2020_BygPO2VKPH | Sparse Coding with Gated Learned ISTA | In this paper, we study the learned iterative shrinkage thresholding algorithm (LISTA) for solving sparse coding problems. Following assumptions made by prior works, we first discover that the code components in its estimations may be lower than expected, i.e., require gains, and to address this problem, a gated mechanism amenable to theoretical analysis is then introduced. Specific design of the gates is inspired by convergence analyses of the mechanism and hence its effectiveness can be formally guaranteed. In addition to the gain gates, we further introduce overshoot gates for compensating insufficient step size in LISTA. Extensive empirical results confirm our theoretical findings and verify the effectiveness of our method. | accept-spotlight | The paper extends LISTA by introducing gain gates and overshoot gates, which respectively address underestimation of code components and compensation of the small step size of LISTA. The authors theoretically analyze these extensions and back up the effectiveness of their proposed algorithm with encouraging empirical results. All reviewers are highly positive about the contributions of this paper, and appreciate the rigorous theory, which is further supported by convincing experiments. All three reviewers recommended acceptance.
| val | [
"SJlzRVohKH",
"BkeidSdTFr",
"SklsjVbtoH",
"Byxyxc7njH",
"rkgpyBlYjB",
"SkgYZ9sOsH",
"S1eZe6Sa9H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"1. Summary\nThe authors propose extensions to LISTA with the goal of addressing underestimation (by introducing “gain gates”) and including momentum (by introducing “overshoot gates”). The authors provide theoretical analysis for each step of their LISTA augmentations, showing that it improves convergence rate. Th... | [
8,
8,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_BygPO2VKPH",
"iclr_2020_BygPO2VKPH",
"BkeidSdTFr",
"iclr_2020_BygPO2VKPH",
"S1eZe6Sa9H",
"SJlzRVohKH",
"iclr_2020_BygPO2VKPH"
] |
iclr_2020_S1ldO2EFPr | Graph Neural Networks Exponentially Lose Expressive Power for Node Classification | Graph Neural Networks (graph NNs) are a promising deep learning approach for analyzing graph-structured data. However, it is known that they do not improve (or sometimes worsen) their predictive performance as we pile up many layers and add non-linearity. To tackle this problem, we investigate the expressive power of graph NNs via their asymptotic behaviors as the layer size tends to infinity.
Our strategy is to generalize the forward propagation of a Graph Convolutional Network (GCN), which is a popular graph NN variant, as a specific dynamical system. In the case of a GCN, we show that when its weights satisfy the conditions determined by the spectra of the (augmented) normalized Laplacian, its output exponentially approaches the set of signals that carry information of the connected components and node degrees only for distinguishing nodes.
Our theory enables us to relate the expressive power of GCNs with the topological information of the underlying graphs inherent in the graph spectra. To demonstrate this, we characterize the asymptotic behavior of GCNs on the Erd\H{o}s -- R\'{e}nyi graph.
We show that when the Erd\H{o}s -- R\'{e}nyi graph is sufficiently dense and large, a broad range of GCNs on it suffers from the ``information loss'' in the limit of infinite layers with high probability.
Based on the theory, we provide a principled guideline for weight normalization of graph NNs. We experimentally confirm that the proposed weight scaling enhances the predictive performance of GCNs in real data. Code is available at https://github.com/delta2323/gnn-asymptotics. | accept-spotlight | The paper provides a theoretical analysis of graph neural networks, as the number of layers goes to infinity. For the graph convolutional network, they relate the expressive power of the network with the graph spectra. In particular for Erdos-Renyi graphs, they show that very deep graphs lose information, and propose a new weight normalization scheme based on this insight.
The authors responded well to reviewer comments. It is nice to see that the open review nature has also resulted in a new connection. Unfortunately one of the reviewers did not engage further in the discussion with respect to the author rebuttals.
Overall, the paper provides a nice theoretical analysis of a widely used graph neural network architecture, and characterises its behaviour on a popular class of graphs. The fact that the theory provides a new approach for weight normalization is a bonus. | train | [
"S1xNLXBcsB",
"rkewGZSciB",
"SkewKvFFsr",
"BJldPZVKKr",
"Byx-2eBCYr",
"rygvUd4WqH",
"SklIn_C3uB",
"BJgz6B0YOr",
"BJe64S_LOH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"First of all, thank you for your valuable feedback and comments. We appreciate that you highly evaluate our work.\n\n\n> The analysis is mostly for dense graphs. However, most of the real-world networks are large-scale sparse networks.\n\nFirst, we think that Erdos-Renyi graphs Theorem 3 can support are not that d... | [
-1,
-1,
-1,
8,
6,
8,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
4,
1,
-1,
-1,
-1
] | [
"BJldPZVKKr",
"rygvUd4WqH",
"Byx-2eBCYr",
"iclr_2020_S1ldO2EFPr",
"iclr_2020_S1ldO2EFPr",
"iclr_2020_S1ldO2EFPr",
"BJgz6B0YOr",
"BJe64S_LOH",
"iclr_2020_S1ldO2EFPr"
] |
iclr_2020_rJljdh4KDH | Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells | Unsupervised text encoding models have recently fueled substantial progress in NLP. The key idea is to use neural networks to convert words in texts to vector space representations (embeddings) based on word positions in a sentence and their contexts, which are suitable for end-to-end training of downstream tasks. We see a strikingly similar situation in spatial analysis, which focuses on incorporating both absolute positions and spatial contexts of geographic objects such as POIs into models. A general-purpose representation model for space is valuable for a multitude of tasks. However, no such general model exists to date beyond simply applying discretization or feed-forward nets to coordinates, and little effort has been put into jointly modeling distributions with vastly different characteristics, which commonly emerges from GIS data. Meanwhile, Nobel Prize-winning Neuroscience research shows that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding and is critical for recognizing places and for path-integration. Therefore, we propose a representation learning model called Space2Vec to encode the absolute positions and spatial relationships of places. We conduct experiments on two real-world geographic data for two different tasks: 1) predicting types of POIs given their positions and context, 2) image classification leveraging their geo-locations. Results show that because of its multi-scale representations, Space2Vec outperforms well-established ML approaches such as RBF kernels, multi-layer feed-forward nets, and tile embedding approaches for location modeling and image classification tasks. Detailed analysis shows that all baselines can at most well handle distribution at one scale but show poor performances in other scales. 
In contrast, Space2Vec's multi-scale representation can handle distributions at different scales. | accept-spotlight | This paper proposes to follow inspiration from NLP methods that use position embeddings and adapt them to spatial analysis, which also makes use of both absolute and contextual information, and presents a representation learning approach called Space2Vec to capture absolute positions and spatial relationships of places. Experiments show promising results on real data compared to a number of existing approaches.
Reviewers recognized the promise of this approach and suggested a few additional experiments, such as using this spatial encoding in other tasks like image classification, as well as clarifications and further explanations of many important points. The authors performed these experiments and incorporated the results into their revisions, further strengthening the submission. They also provided more analyses and explanations about the granularity of locality and the motivation for their approach, which answered the main concerns of the reviewers.
Overall, the revised paper is solid and we recommend acceptance. | train | [
"ryehO-3hjr",
"rkeq3ii3iH",
"HygnxKu3sr",
"SyxmYOC6KB",
"Hye56-ChYr",
"H1gqafgOsr",
"SJepwbl_oS",
"BJgWWbxuiS",
"BJeBLjydjB",
"H1lSQ12vcS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"We will make this point clear in the final version, and experiment with more inductive learning tasks in our future work.",
"Thank you for responding back to the comments. I am satisfied with most of author's responses to my comments as well as the comments of other reviewers. \n\nI will recommend authors to add... | [
-1,
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
1,
-1,
-1,
-1,
-1,
5
] | [
"rkeq3ii3iH",
"H1gqafgOsr",
"BJgWWbxuiS",
"iclr_2020_rJljdh4KDH",
"iclr_2020_rJljdh4KDH",
"Hye56-ChYr",
"SyxmYOC6KB",
"SyxmYOC6KB",
"H1lSQ12vcS",
"iclr_2020_rJljdh4KDH"
] |
iclr_2020_r1lfF2NYvH | InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization | This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress of unsupervised representation learning, in this paper we proposed a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we further propose InfoGraph*, an extension of InfoGraph for semisupervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models. 
| accept-spotlight | This paper proposes a graph embedding method for the whole graph under both unsupervised and semi-supervised settings. It can extract a fixed-length graph-level representation with good generalization capability. All reviewers provided a unanimous rating of weak accept. The reviewers praise that the paper is well written and is of value to different fields dealing with graph learning. There was some discussion of the novelty of the approach, which was better clarified after the response from the authors. Overall, this paper presents a new effort in the active topic of graph representation learning, with potentially large impact on multiple fields. Therefore, the ACs recommend it to be an oral paper. | val | [
"r1xii9kciB",
"HJljp73E9S",
"H1e5U7WSiS",
"BJx1x7ZSjB",
"BkgknzZBor",
"rkxgDv7ZoB",
"rkl8SAaQ5r",
"BJlXq_au9H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I am OK with authors' comments.\nI think that the paper deserves to be published.\nThe authors improved the presentation and addressed my comments.\nTherefore, I can increase the grade.",
"The paper presents an unsupervised method for graph embedding. The authors seek to obtain graph representations by maximizin... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"BJx1x7ZSjB",
"iclr_2020_r1lfF2NYvH",
"BJlXq_au9H",
"HJljp73E9S",
"rkl8SAaQ5r",
"iclr_2020_r1lfF2NYvH",
"iclr_2020_r1lfF2NYvH",
"iclr_2020_r1lfF2NYvH"
] |
iclr_2020_B1e9Y2NYvS | On Robustness of Neural Ordinary Differential Equations | Neural ordinary differential equations (ODEs) have been attracting increasing attention in various research domains recently. There have been some works studying optimization issues and approximation capabilities of neural ODEs, but their robustness is still yet unclear. In this work, we fill this important gap by exploring robustness properties of neural ODEs both empirically and theoretically. We first present an empirical study on the robustness of the neural ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and subsequently investigating the changes of the corresponding outputs. In contrast to conventional convolutional neural networks (CNNs), we find that the ODENets are more robust against both random Gaussian perturbations and adversarial attack examples. We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting. Our work suggests that, due to their intrinsic robustness, it is promising to use neural ODEs as a basic block for building robust deep network models. To further enhance the robustness of vanilla neural ODEs, we propose the time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via the time-invariant property and the imposition of a steady-state constraint. We show that the TisODE method outperforms vanilla neural ODEs and also can work in conjunction with other state-of-the-art architectural methods to build more robust deep networks. | accept-spotlight | This paper studies the robustness of neural ODEs and proposes a new variant. The results suggest that neural ODEs can be used as a building block to build robust deep networks. The reviewers agree that this is a good paper for ICLR, and based on their recommendation I suggest accepting this paper.
| train | [
"HJeAZTd-KB",
"SJlv3r9qor",
"B1gOP8htjB",
"HJgh99NFsB",
"rJesf2NKoH",
"r1xJXjNFsr",
"B1l41dEFjH",
"SJg8R8Vtsr",
"r1gjSrVtiB",
"SygdqthatS",
"Bkx6D1fx9H",
"Bkeg6dcctr",
"BJeFf1hStH",
"BJeptFoNKH",
"r1xirtM4tB",
"SJliq3S4KB",
"SyxqthOXKS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"public",
"public"
] | [
"This paper studied the robustness of neural ODE-based networks (ODENets) to various types of perturbations on the input images. The authors observed that ODENets are more robust to both Gaussian perturbation and adversarial attacks, which the authors explained as non-intersecting of the integral curves for differe... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_B1e9Y2NYvS",
"B1gOP8htjB",
"B1l41dEFjH",
"HJeAZTd-KB",
"HJeAZTd-KB",
"HJeAZTd-KB",
"HJeAZTd-KB",
"SygdqthatS",
"Bkx6D1fx9H",
"iclr_2020_B1e9Y2NYvS",
"iclr_2020_B1e9Y2NYvS",
"SJliq3S4KB",
"BJeptFoNKH",
"iclr_2020_B1e9Y2NYvS",
"SyxqthOXKS",
"r1xirtM4tB",
"iclr_2020_B1e9Y2NYv... |
iclr_2020_H1xscnEKDr | Defending Against Physically Realizable Attacks on Image Classification | We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. | accept-spotlight | This paper studies the problem of defending deep neural network approaches for image classification from physically realizable attacks. It first demonstrates that adversarial training with PGD attacks and randomized smoothing exhibit limited effectiveness against three of the highest profile physical attacks. Then, it proposes a new abstract adversarial model, where an adversary places a small adversarially crafted rectangle in an image, and develops two approaches for efficiently computing the resulting adversarial examples. Empirical results show the effectiveness. Overall, a good paper. The rebuttal is convincing. | test | [
"SyxgyvUOiB",
"Skg9iIyx9r",
"HyeCkjdDjB",
"Bkgzaykrsr",
"HkldRKjXiH",
"SylewKS7jS",
"HklAJYBXor",
"SkxtNPBXjH",
"B1e2oPHmjr",
"ryg4bBBmjS",
"Byglyo0otB",
"SkgpWdyEqH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for addressing my remaining comments. I will keep my score the same.",
"[Note: I gave a 3 for thoroughness even though I only read the paper once, because I believe that I carefully considered the paper while reading it.]\n\nThis paper argues that threat models such as L-inf are limited when considerin... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"SylewKS7jS",
"iclr_2020_H1xscnEKDr",
"HklAJYBXor",
"HkldRKjXiH",
"SkxtNPBXjH",
"Byglyo0otB",
"Skg9iIyx9r",
"SkgpWdyEqH",
"SkgpWdyEqH",
"iclr_2020_H1xscnEKDr",
"iclr_2020_H1xscnEKDr",
"iclr_2020_H1xscnEKDr"
] |
iclr_2020_rklEj2EFvB | Estimating Gradients for Discrete Random Variables by Sampling without Replacement | We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and we reduce its variance using a built-in control variate which is obtained without additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only estimator that is consistently among the best estimators in both high and low entropy settings. | accept-spotlight | The authors derive a novel, unbiased gradient estimator for discrete random variables based on sampling without replacement. They relate their estimator to existing multi-sample estimators and motivate why we would expect reduced variance. Finally, they evaluate their estimator across several tasks and show that it performs well in all of them.
The reviewers agree that the revised paper is well-written and well-executed. There was some concern about the effectiveness of the estimator; however, the authors clarified that "it is the only estimator that performs well across different settings (high and low entropy). Therefore it is more robust and a strict improvement to any of these estimators which only have good performance in either high or low entropy settings." Reviewer 2 was still not convinced about the strength of the analysis of the estimator, and indeed, quantifying the variance reduction theoretically would be an improvement.
Overall, the paper is a nice addition to the set of tools for computing gradients of expectations of discrete random variables. I recommend acceptance.
| train | [
"H1l1_PuhKB",
"B1l84u7oFS",
"BJxVhyjhsr",
"SyghB-OhjB",
"rkxH-4Nujr",
"r1xlixQ_sr",
"rJgjVlXOiS",
"S1xd3JmusB",
"rylJnHURKr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary: In this paper, an unbiased estimator for expectations over discrete random variables is developed based on a sampling-without-replacement strategy. The proposed estimator is shown to be a Rao-Blackwellization of three existing unbiased estimators with guaranteed reduction in estimation variance. The conne... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_rklEj2EFvB",
"iclr_2020_rklEj2EFvB",
"SyghB-OhjB",
"r1xlixQ_sr",
"iclr_2020_rklEj2EFvB",
"B1l84u7oFS",
"H1l1_PuhKB",
"rylJnHURKr",
"iclr_2020_rklEj2EFvB"
] |
iclr_2020_HyeSin4FPB | Learning to Control PDEs with Differentiable Physics | Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations. | accept-spotlight | The paper proposes a method to control dynamical systems described by partial differential equations (PDEs). The method uses a hierarchical predictor-corrector scheme that divides the problem into smaller and simpler temporal subproblems. They illustrate the performance of their method on the 1D Burgers' equation and a 2D incompressible flow.
The reviewers are all positive about this paper and find it well-written and potentially impactful. Hence, I recommend acceptance of this paper. | train | [
"HkescpwnsH",
"BygUDiBKjH",
"HkxXJKruoB",
"HyxMq2l_sS",
"SJgXD3xOjH",
"B1xgThldsr",
"rJgsfMWpYH",
"Sye8KBAe9B",
"rkeA9bdX5r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your response, which addressed many of my concerns. I've read the updated version, and I think the paper looks much more clear with these included. I keep my rating as 6.",
"Dear reviewers,\n\nThank you very much for your efforts in reviewing this paper.\n\nThe authors have provided their rebuttal.... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"SJgXD3xOjH",
"iclr_2020_HyeSin4FPB",
"HyxMq2l_sS",
"Sye8KBAe9B",
"rkeA9bdX5r",
"rJgsfMWpYH",
"iclr_2020_HyeSin4FPB",
"iclr_2020_HyeSin4FPB",
"iclr_2020_HyeSin4FPB"
] |
iclr_2020_HygOjhEYDH | Intensity-Free Learning of Temporal Point Processes | Temporal point processes are the dominant paradigm for modeling sequences of events happening at irregular intervals. The standard way of learning in such models is by estimating the conditional intensity function. However, parameterizing the intensity function usually incurs several trade-offs. We show how to overcome the limitations of intensity-based approaches by directly modeling the conditional distribution of inter-event times. We draw on the literature on normalizing flows to design models that are flexible and efficient. We additionally propose a simple mixture model that matches the flexibility of flow-based models, but also permits sampling and computing moments in closed form. The proposed models achieve state-of-the-art performance in standard prediction tasks and are suitable for novel applications, such as learning sequence embeddings and imputing missing data. | accept-spotlight | This submission proposes a new paradigm for modelling temporal point processes by using deep learning to learn to mix log-normal distributions in order to directly model the conditional distribution of event time intervals themselves.
Strengths of the paper:
- Introduces a new modelling paradigm that can lead to further research in this direction, for an important problem.
- Extensive experimentation validates the approach quantitatively.
- Easy to read.
Weaknesses:
- Several reviewers wanted more details on how the mixing parameter K was tuned. This was adequately addressed during the discussion period.
The reviewer consensus was to accept this submission.
| train | [
"rkl-2Ar2jS",
"ryldL4EooH",
"BkxIC5-QiB",
"rkeeYYZmjH",
"ryx58dWmjr",
"SJgwtqv3YH",
"SyxzOArRFB",
"BJlCvHrG9H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As you correctly pointed out, we focused on event time prediction in this paper. We agree with you that mark prediction is an interesting research problem in its own right, and that the conditional independence assumption might be too restrictive in some cases. \n\nIn theory, our model could be altered in several ... | [
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"ryldL4EooH",
"ryx58dWmjr",
"SJgwtqv3YH",
"SyxzOArRFB",
"BJlCvHrG9H",
"iclr_2020_HygOjhEYDH",
"iclr_2020_HygOjhEYDH",
"iclr_2020_HygOjhEYDH"
] |
iclr_2020_HJeTo2VFwH | A Signal Propagation Perspective for Pruning Neural Networks at Initialization | Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures. | accept-spotlight | This is a strong submission, and I recommend acceptance. The idea is an elegant one: sparsify a network at initialization using a distribution that achieves approximate orthogonality of the Jacobian for each layer. This is well motivated by dynamical isometry theory, and should imply good performance of the pruned network to the extent that the training dynamics are explainable in terms of a linearization around the initial weights. The paper is very well written, and all design decisions are clearly motivated. 
The experiments are careful, and cleanly demonstrate the effectiveness of the technique. The one shortcoming is that the experiments don't use state-of-the-art modern architectures, even though that ought to have been easy to try. The architectures differ in ways that could impact the results, so it's not clear to what extent the same principles describe SOTA neural nets. Still, this is overall a very strong submission, and will be of interest to a lot of researchers at the conference.
| train | [
"BJgihtQOsH",
"ryxp6O7usH",
"BJlBI_m_sH",
"HJlRgxh3KS",
"BkltFEKDKS",
"SJlF8AaTtB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nWe thank the reviewer (R2) for providing the feedback. We address the comments below.\n\n\n# Generalization performance\n\nAs researchers we study many aspects (besides generalization) that help us better understand deep learning models. Therefore, we disagree with R2's assessment that we (in studying network pr... | [
-1,
-1,
-1,
8,
3,
6
] | [
-1,
-1,
-1,
5,
4,
4
] | [
"BkltFEKDKS",
"HJlRgxh3KS",
"SJlF8AaTtB",
"iclr_2020_HJeTo2VFwH",
"iclr_2020_HJeTo2VFwH",
"iclr_2020_HJeTo2VFwH"
] |
iclr_2020_BJlRs34Fvr | Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets | Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in building deeper and more powerful DNNs, we identify a surprising \emph{security weakness} of skip connections in this paper. Use of skip connections \textit{allows easier generation of highly transferable adversarial examples}. Specifically, in ResNet-like (with skip connections) neural networks, gradients can backpropagate through either skip connections or residual modules. We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability. Our method is termed \emph{Skip Gradient Method} (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques, and obtain high improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures. | accept-spotlight | This paper makes the observation that, by adjusting the ratio of gradients from skip connections and residual connections in ResNet-family networks in a projected gradient descent attack (that is, upweighting the contribution of the skip connection gradient), one can obtain more transferable adversarial examples. 
This is evaluated empirically in the single-model black box transfer setting, against a wide range of models, both with and without countermeasures.
Reviewers praised the novelty and simplicity of the method, the breadth of empirical results, and the review of related work. Concerns were raised regarding a lack of variance reporting, strength of the baselines vs. numbers reported in the literature, and the lack of consideration paid to the threat model under which an adversary employs an ensemble of source models, as well as the framing given by the original title and abstract. All of these appear to have been satisfactorily addressed, in a fine example of what ICLR's review & revision process can yield. It is therefore my pleasure to recommend acceptance. | train | [
"rJeTPQqKir",
"BJxG0SRl9B",
"rJepEV7KiB",
"ryebMH7KsB",
"BJe2TMXKiH",
"SkgLpWdiYH",
"BklvMFCmqS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the significant additional experiments, responses, and presentation revisions. These address my previous concerns, and I believe greatly strengthen the reliability and significance of the overall message. I believe this paper contains several interesting empirical results, which will be of great inte... | [
-1,
8,
-1,
-1,
-1,
6,
6
] | [
-1,
5,
-1,
-1,
-1,
3,
4
] | [
"ryebMH7KsB",
"iclr_2020_BJlRs34Fvr",
"SkgLpWdiYH",
"BJxG0SRl9B",
"BklvMFCmqS",
"iclr_2020_BJlRs34Fvr",
"iclr_2020_BJlRs34Fvr"
] |
iclr_2020_H1ebhnEYDH | White Noise Analysis of Neural Networks | A white noise analysis of modern deep neural networks is presented to unveil their biases at the whole network level or the single neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology namely classification images and spike triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We emphasize CNNs since they are currently the state-of-the-art methods in computer vision and are a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and, when used for classification, lead to a more than two-fold performance over the chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike triggered averaging to derive the filters of CNNs and explore how the behavior of a network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neurosciences to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience. | accept-spotlight | All the reviewers found the paper to contain an interesting idea with insightful experiments. The rebuttal further improved the reviewers' confidence. The paper is accepted. | train | [
"rkl8oJhKsS",
"r1eaIOsFoB",
"rylmiRMHor",
"S1xzZE9DoB",
"HkgkErQSiB",
"Hyl3w6GSjH",
"S1xgFFGBiB",
"rJg3yGj6FB",
"HyxVi6e0tB",
"r1loJKssqB",
"BJlIMrw3cS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the response and for adding the result on ImageNet. I will keep my score as weak accept and support the acceptance of this work. \n\nI think this paper has an interesting perspective and many solid experiments to justify the existence of bias. This paper could be a good contribution to ICLR'20. ",
"Ba... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
5
] | [
"S1xzZE9DoB",
"Hyl3w6GSjH",
"rJg3yGj6FB",
"HkgkErQSiB",
"BJlIMrw3cS",
"HyxVi6e0tB",
"r1loJKssqB",
"iclr_2020_H1ebhnEYDH",
"iclr_2020_H1ebhnEYDH",
"iclr_2020_H1ebhnEYDH",
"iclr_2020_H1ebhnEYDH"
] |
iclr_2020_Byl8hhNYPS | Neural Machine Translation with Universal Visual Representation | Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over the monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with similar topics to the source sentence will be retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then is encoded as image representations by a pre-trained ResNet. An attention layer with a gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to the multimodal NMT. Experiments on four widely used translation datasets, including the WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines. | accept-spotlight | This paper proposes incorporating visual representations, learned in a monolingual setting with image annotations, into machine translation. Their approach obviates the need to have bilingual sentences aligned with image annotations, a very restricted resource. An attention layer allows the transformer to incorporate a topic-image lookup table. Their approach achieves significant improvements over strong baselines. The reviewers and the authors engaged in substantive discussions. This is a strong paper which should be included in ICLR.
| train | [
"ryxpUP6RKr",
"B1gYdNvCFS",
"rJe9E2ITYr",
"BylD4R7NiB",
"rJgabTX4sr",
"r1eQFhX4ir",
"rJezfnmEoB",
"HJxJBYhb_S",
"rygYkNlb_H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Summary: This paper uses visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs for multimodal NMT. Their approach enables visual information to be integrated into large-scale text-only NMT. Experiments on four widely us... | [
6,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_Byl8hhNYPS",
"iclr_2020_Byl8hhNYPS",
"iclr_2020_Byl8hhNYPS",
"iclr_2020_Byl8hhNYPS",
"rJe9E2ITYr",
"B1gYdNvCFS",
"ryxpUP6RKr",
"rygYkNlb_H",
"iclr_2020_Byl8hhNYPS"
] |
iclr_2020_BJeKh3VYDH | Tranquil Clouds: Neural Networks for Learning Temporally Coherent Features in Point Clouds | Point clouds, as a form of Lagrangian representation, allow for powerful and flexible applications in a large number of computational disciplines. We propose a novel deep-learning method to learn stable and temporally coherent feature spaces for points clouds that change over time. We identify a set of inherent problems with these approaches: without knowledge of the time dimension, the inferred solutions can exhibit strong flickering, and easy solutions to suppress this flickering can result in undesirable local minima that manifest themselves as halo structures. We propose a novel temporal loss function that takes into account higher time derivatives of the point positions, and encourages mingling, i.e., to prevent the aforementioned halos. We combine these techniques in a super-resolution method with a truncation approach to flexibly adapt the size of the generated positions. We show that our method works for large, deforming point sets from different sources to demonstrate the flexibility of our approach. | accept-spotlight | This paper provides an improved method for deep learning on point clouds. Reviewers are unanimous that this paper is acceptable, and the AC concurs. | train | [
"HJl_VsdKsB",
"SJefiq_tjB",
"H1eNK5dFoS",
"H1emRfD0tr",
"SJlE85gmcS",
"BkeSwCWwqB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for the positive assessment and feedback. Below are our responses to the concerns mentioned in your review. We will upload a revised version of our submission that extends the evaluation with respect to inputs with dense correspondences.\n\nRegarding the qualitative results we provide:\n\n- Each of ou... | [
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
1,
4,
3
] | [
"H1emRfD0tr",
"SJlE85gmcS",
"BkeSwCWwqB",
"iclr_2020_BJeKh3VYDH",
"iclr_2020_BJeKh3VYDH",
"iclr_2020_BJeKh3VYDH"
] |
iclr_2020_BJlS634tPr | PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search | Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-net and searching for an optimal architecture. In this paper, we present a novel approach, namely Partially-Connected DARTS, by sampling a small part of super-net to reduce the redundancy in exploring the network space, thereby performing a more efficient search without comprising the performance. In particular, we perform operation search in a subset of channels while bypassing the held out part in a shortcut. This strategy may suffer from an undesired inconsistency on selecting the edges of super-net caused by sampling different channels. We solve it by introducing edge normalization, which adds a new set of edge-level hyper-parameters to reduce uncertainty in search. Thanks to the reduced memory cost, PC-DARTS can be trained with a larger batch size and, consequently, enjoy both faster speed and higher training stability. Experiment results demonstrate the effectiveness of the proposed method. Specifically, we achieve an error rate of 2.57% on CIFAR10 within merely 0.1 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.2% on ImageNet (under the mobile setting) within 3.8 GPU-days for search. Our code has been made available at https://www.dropbox.com/sh/on9lg3rpx1r6dkf/AABG5mt0sMHjnEJyoRnLEYW4a?dl=0. | accept-spotlight | This paper proposes an improvement to the popular DARTS approach, speeding it up by performing the search in a subset of channels. The improvements are robust, and code is available for reproducibility.
The rebuttal cleared up initial concerns, and after the (private) discussion among reviewers, all reviewers now give accepting scores. Because the improvements seem somewhat incremental and only applied to DARTS, R3 argued against an oral, and even the most positive reviewer agreed that a poster format would be best for presentation.
I therefore strongly recommend acceptance, as a poster.
"ryg_OAC6YB",
"rkekqSV3jB",
"SyeDxHX3ir",
"rJeYXSTGjS",
"HJltjB6zsH",
"H1l9yraGor",
"H1gXOxY3FS",
"rJeA7lKatB"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"---\nrevised score. rebuttal clears my concerns.\n---\nSummary:\n\nThe paper proposes a partially connected differential architecture search (PC-DARTS) technique, that uses a variant of channel dropout for each node's output feature maps, and a weighted summation of concatenating all previous nodes. Searched archi... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_BJlS634tPr",
"SyeDxHX3ir",
"HJltjB6zsH",
"rJeA7lKatB",
"ryg_OAC6YB",
"H1gXOxY3FS",
"iclr_2020_BJlS634tPr",
"iclr_2020_BJlS634tPr"
] |
iclr_2020_rkxZyaNtwB | Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach | Motivated by applications to machine learning and imaging science, we study a class of online and stochastic optimization problems with loss functions that are not Lipschitz continuous; in particular, the loss functions encountered by the optimizer could exhibit gradient singularities or be singular themselves. Drawing on tools and techniques from Riemannian geometry, we examine a Riemann–Lipschitz (RL) continuity condition which is tailored to the singularity landscape of the problem’s loss functions. In this way, we are able to tackle cases beyond the Lipschitz framework provided by a global norm, and we derive optimal regret bounds and last iterate convergence results through the use of regularized learning methods (such as online mirror descent). These results are subsequently validated in a class of stochastic Poisson inverse problems that arise in imaging science. | accept-spotlight | This is a mostly theoretical paper concerning online and stochastic optimization for convex loss functions that are not Lipschitz continuous. The authors propose a method for replacing the Lipschitz continuity condition with a more general Riemann-Lipschitz continuity condition, under which they are able to provide regret bounds for the online mirror descent algorithm, as well as extending to the stochastic setting. They follow up by evaluating their algorithm on Poisson inverse problems.
The reviewers all agree that this is a well-written paper that makes a clear contribution. To the best of our knowledge, the theory and derivations are correct, and the authors were highly responsive to reviewers’ (minor) comments. I’m therefore happy to recommend acceptance. | train | [
"SkeKfUUoqS",
"Hyg2Myvror",
"rJgdta8HsB",
"rJe1H2UBoS",
"HylZuqdiFB",
"SklTXJI75B"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper establishes optimal regret bounds of the order O(\\sqrt{T}) for Follow The Regularised Leader (FTRL) and Online Mirror Descent (OMD) for convex loss functions and potentials (a.k.a. Riemannian regularizers) that are, respectively, Lipschitz continuous and strongly convex with respect to a given Riemannia... | [
8,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_rkxZyaNtwB",
"HylZuqdiFB",
"SklTXJI75B",
"SkeKfUUoqS",
"iclr_2020_rkxZyaNtwB",
"iclr_2020_rkxZyaNtwB"
] |
iclr_2020_Skgvy64tvr | Enhancing Adversarial Defense by k-Winners-Take-All | We propose a simple change to existing neural network structures for better defending against gradient-based adversarial attacks. Instead of using popular activation functions (such as ReLU), we advocate the use of k-Winners-Take-All (k-WTA) activation, a C0 discontinuous function that purposely invalidates the neural network model’s gradient at densely distributed input data points. The proposed k-WTA activation can be readily used in nearly all existing networks and training methods with no significant overhead. Our proposal is theoretically rationalized. We analyze why the discontinuities in k-WTA networks can largely prevent gradient-based search of adversarial examples and why they at the same time remain innocuous to the network training. This understanding is also empirically backed. We test k-WTA activation on various network structures optimized by a training method, be it adversarial training or not. In all cases, the robustness of k-WTA networks outperforms that of traditional networks under white-box attacks. | accept-spotlight | This paper presents a new non-linearity function which specifically affects regions of the model which are densely valued. The non-linearity is simple: it retains only the k highest units from the input, while truncating the rest to zero. This also makes the models more robust to adversarial attacks which depend on the gradients. The non-linearity function is shown to have better adversarial robustness on CIFAR-10 and SVHN datasets. The paper also presents a theoretical analysis for why the non-linearity is a good function.
The authors have already incorporated major suggestions by the reviewers and the paper can make a significant impact on the community. Thus, I recommend its acceptance.
"Skgu1ZFjtr",
"SJlDkeB5jr",
"S1ebavLGsS",
"BJxMS1JVsS",
"Bke5-OLMiH",
"HyghukfWoB",
"S1gbs2K9FH",
"ryg2N7D7cr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose using k-winner take all (k-WTA) activation functions to prevent white box adversarial attacks. A k-WTA activation functions outputs the k highest activations in a layer while setting all other activations to zero. The reasoning given by the authors is that k-WTA activation functions have many d... | [
8,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_Skgvy64tvr",
"iclr_2020_Skgvy64tvr",
"S1gbs2K9FH",
"ryg2N7D7cr",
"S1ebavLGsS",
"Skgu1ZFjtr",
"iclr_2020_Skgvy64tvr",
"iclr_2020_Skgvy64tvr"
] |
iclr_2020_Hke-WTVtwr | Encoding word order in complex embeddings | Sequential word order is important when processing text. Currently, neural networks (NNs) address this by modeling word position using position embeddings. The problem is that position embeddings capture the position of individual words, but not the ordered relationship (e.g., adjacency or precedence) between individual word positions. We present a novel and principled solution for modeling both the global absolute positions of words and their order relationships. Our solution generalizes word embeddings, previously defined as independent vectors, to continuous word functions over a variable (position). The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions. Hence, word representations in different positions can correlate with each other in a continuous function. The general solution of these functions can be extended to complex-valued variants. We extend CNN, RNN and Transformer NNs to complex-valued versions to incorporate our complex embedding (we make all code available). Experiments on text classification, machine translation and language modeling show gains over both classical word embeddings and position-enriched word embeddings. To our knowledge, this is the first work in NLP to link imaginary numbers in complex-valued representations to concrete meanings (i.e., word order). | accept-spotlight | This paper describes a new language model that captures both the position of words, and their order relationships. This redefines word embeddings (previously thought of as fixed and independent vectors) to be functions of position. This idea is implemented in several models (CNN, RNN and Transformer NNs) to show improvements on multiple tasks and datasets.
One reviewer asked for additional experiments, which the authors provided, and which still supported their methodology. In the end, the reviewers agreed this paper should be accepted. | train | [
"r1lxJDIcKB",
"SygmcOJ0YS",
"S1gUZCQoor",
"r1g91PmjsS",
"S1e32fPEcS",
"BJx1LnGojH",
"Hkx-jjWooS",
"rylyLsbsjB",
"SyxEj9-siS",
"Skg0qtWsiS",
"H1xGaaQcoH",
"H1xeZgm5oH",
"B1x_HUU8cH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\n\nThe authors present a \"natural\" way of encoding position information into word embeddings and present extensive empirical evidence to support their method. I believe that paper meets the bar for acceptance.\n\n### Details\n\nThe paper \"Encoding word order in complex embeddings\" presents a method... | [
6,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_Hke-WTVtwr",
"iclr_2020_Hke-WTVtwr",
"rylyLsbsjB",
"H1xeZgm5oH",
"iclr_2020_Hke-WTVtwr",
"SyxEj9-siS",
"r1lxJDIcKB",
"SygmcOJ0YS",
"S1e32fPEcS",
"B1x_HUU8cH",
"H1xeZgm5oH",
"S1e32fPEcS",
"iclr_2020_Hke-WTVtwr"
] |
iclr_2020_B1x1ma4tDr | DDSP: Differentiable Digital Signal Processing | Most generative models of audio directly generate samples in one of two domains: time or frequency. While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived. A third approach (vocoders/synthesizers) successfully incorporates strong domain knowledge of signal processing and perception, but has been less actively researched due to limited expressivity and difficulty integrating with modern auto-differentiation-based machine learning methods. In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods. Focusing on audio synthesis, we achieve high-fidelity generation without the need for large autoregressive models or adversarial losses, demonstrating that DDSP enables utilizing strong inductive biases without losing the expressive power of neural networks. Further, we show that combining interpretable modules permits manipulation of each separate model component, with applications such as independent control of pitch and loudness, realistic extrapolation to pitches not seen during training, blind dereverberation of room acoustics, transfer of extracted room acoustics to new environments, and transformation of timbre between disparate sources. In short, DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning. The library is available at https://github.com/magenta/ddsp and we encourage further contributions from the community and domain experts.
| accept-spotlight | This paper proposes a novel differentiable digital signal processing library for audio synthesis. The application is novel and interesting. All the reviewers agree to accept it. The authors are encouraged to consider the reviewers' suggestions when revising the paper. | train | [
"rkgTv1y5iB",
"S1e0NJJcsH",
"SygxGy1cor",
"SkxG6ARYiH",
"SkeFR6AFor",
"H1gnICCFjr",
"HJxrEA0KiS",
"SkxtJ6CKiH",
"Hyx1NHgeir",
"rJlmneOVqr",
"rJemdEkV9H",
"SyeL1QN2qS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and helpful comments. We have replied to your main question below.\n\n> “how susceptible do you think the system is robust with respect to f0 and loudness encoders? Have you experimented with situations where the f0 and the loudness encoders might fail (such as more non-periodic and noisy... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"rJemdEkV9H",
"SygxGy1cor",
"rJlmneOVqr",
"SyeL1QN2qS",
"Hyx1NHgeir",
"HJxrEA0KiS",
"SkeFR6AFor",
"iclr_2020_B1x1ma4tDr",
"iclr_2020_B1x1ma4tDr",
"iclr_2020_B1x1ma4tDr",
"iclr_2020_B1x1ma4tDr",
"iclr_2020_B1x1ma4tDr"
] |
iclr_2020_SJl5Np4tPr | Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation | Few-shot classification aims to recognize novel categories with only few labeled images in each class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those from a few labeled images (support examples) using a learned metric function. While promising performance has been demonstrated, these methods often fail to generalize to unseen domains due to large discrepancy of the feature distribution across domains. In this work, we address the problem of few-shot classification under domain shifts for metric-based methods. Our core idea is to use feature-wise transformation layers for augmenting the image features using affine transforms to simulate various feature distributions under different domains in the training stage. To capture variations of the feature distributions under different domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models, and provides consistent improvements on the few-shot classification performance under domain shift. | accept-spotlight | This submission addresses the problem of few-shot classification. The proposed solution centers around metric-based models with a core argument that prior work may lead to learned embeddings which are overfit to the few labeled examples available during learning. 
Thus, when measuring cross-domain performance, the specialization of the original classifier to the initial domain will be apparent through degraded test time (new domain) performance. The authors therefore, study the problem of domain generalization in the few-shot learning scenario. The main algorithmic contribution is the introduction of a feature-wise transformation layer.
All reviewers suggest accepting this paper. Reviewer 3 says this problem statement is especially novel. Reviewers 1 and 2 had concerns over the lack of comparisons with recent state-of-the-art methods. The authors responded with some additional results during the rebuttal phase, which should be included in the final draft.
Overall the AC recommends acceptance, based on the positive comments and the fact that this paper addresses a sufficiently new problem statement.
| train | [
"ryeE2xQ3oB",
"SJeVLl7hoH",
"Hkl8tRz2oB",
"BylXv0G3jB",
"HyeXr3GniH",
"SkeKjHmiYB",
"r1x2xr675r",
"Hke1yO9d9B"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n——---\n>>> Comments: There is no direct comparison to the state-of-the-art methods.\n\n> Response: We include the results of several state-of-the-art methods [Qiao et al., 2018; Oreshkin et al., 2018; Lifchitz et al., 2019; Lee et al., 2019; Rusu et al., 2019] on the mini-ImageNet dataset in Table 7 in the appen... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"SkeKjHmiYB",
"SkeKjHmiYB",
"r1x2xr675r",
"r1x2xr675r",
"Hke1yO9d9B",
"iclr_2020_SJl5Np4tPr",
"iclr_2020_SJl5Np4tPr",
"iclr_2020_SJl5Np4tPr"
] |
iclr_2020_HklRwaEKwB | Ridge Regression: Structure, Cross-Validation, and Sketching | We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise.
We study the bias of K-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data. | accept-spotlight | The paper studies theoretical properties of ridge regression, and in particular how to correct for the bias of the estimator.
The reviewers appreciated the contribution and the fact that you updated the manuscript to make it clearer.
I however advise the authors to think about the best way to maximize impact for the ICLR audience, perhaps by providing relevant examples from the ML literature. | train | [
"BkgrXByTYB",
"Hkg5V-bIor",
"HJecJWZ8iH",
"rygXNl-qtr"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"This paper deals with 3 theoretical properties of ridge regression. First, it proves that the ridge regression estimator is equivalent to a specific representation which is useful as for instance it can be used to derive the training error of the ridge estimator. Second, it provides a bias correction mechanism for... | [
6,
-1,
-1,
8
] | [
4,
-1,
-1,
1
] | [
"iclr_2020_HklRwaEKwB",
"rygXNl-qtr",
"BkgrXByTYB",
"iclr_2020_HklRwaEKwB"
] |
iclr_2020_SJgndT4KwB | Finite Depth and Width Corrections to the Neural Tangent Kernel | We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The standard deviation is exponential in the ratio of network depth to width. Thus, even in the limit of infinite overparameterization, the NTK is not deterministic if depth and width simultaneously tend to infinity. Moreover, we prove that for such deep and wide networks, the NTK has a non-trivial evolution during training by showing that the mean of its first SGD update is also exponential in the ratio of network depth to width. This is sharp contrast to the regime where depth is fixed and network width is very large. Our results suggest that, unlike relatively shallow and wide networks, deep and wide ReLU networks are capable of learning data-dependent features even in the so-called lazy training regime. | accept-spotlight | This paper aims to study the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The purpose is to understand the regime where the width and depth go to infinity together with a fixed ratio. The paper does not have a lot of numerical experiments to test the mathematical conclusions. In the discussion the reviewers concurred that the paper is interesting and has nice results but raised important points regarding the fact that only the diagonal elements are studied. This I think is the major limitation of this paper. Another issue raised was lack of experimental work validating the theory. Despite the limitations discussed above, overall I think this is an interesting and important area as it sheds light on how to move beyond the NTK regime. I also think studying this limit is very important to better understanding of neural network training. I recommend acceptance to ICLR. | train | [
"rJxfij46tS",
"ByxE3AX8jS",
"BygGwAQIoH",
"rygnyAXLsH",
"rJx2scAsYr",
"BJlnf573Yr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the finite depth and width corrections to the neural tangent kernel (NTK) in fully-connected ReLU networks. It gives sharp upper and lower bounds on the variance of NTK(x, x), which reveals an exponential dependence on a quantity beta=d/n, where d is depth, and n is hidden width. This implies th... | [
6,
-1,
-1,
-1,
8,
8
] | [
5,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_SJgndT4KwB",
"rJx2scAsYr",
"BJlnf573Yr",
"rJxfij46tS",
"iclr_2020_SJgndT4KwB",
"iclr_2020_SJgndT4KwB"
] |
iclr_2020_BklEFpEYwS | Meta-Learning without Memorization | The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings. | accept-spotlight | The paper introduces the concept of overfitting in meta learning and proposes some solutions to address this problem. 
Overall, this is a good paper. It would be good if the authors could relate this work to meta learning approaches, which are based on hierarchical (Bayesian) modeling for learning a task embedding.
[1] Hausman et al. (ICLR 2018): Learning an Embedding Space for Transferable Robot Skills
https://openreview.net/pdf?id=rk07ZXZRb
[2] Saemundsson et al. (UAI 2018): Meta Reinforcement Learning with Latent Variable Gaussian Processes
http://auai.org/uai2018/proceedings/papers/235.pdf
| train | [
"HkgyWzXhFH",
"Bylzova-cH",
"rygYKn-hYr",
"B1ey3QpBjB",
"HJxkr-aroH",
"r1glYM6HiS",
"rkeazV6rjS",
"BklKNearor"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"\nSummary:\n\nIn this paper, the authors propose a new method to alleviate the effect of meta over-fitting. The designed method is based on the information-theoretic meta-regularization objective. Experiments demonstrate the effectiveness of the proposed model.\n\nStrong Points:\n\n+ The authors aim to alleviate t... | [
6,
8,
8,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_BklEFpEYwS",
"iclr_2020_BklEFpEYwS",
"iclr_2020_BklEFpEYwS",
"rygYKn-hYr",
"HkgyWzXhFH",
"HJxkr-aroH",
"B1ey3QpBjB",
"Bylzova-cH"
] |
iclr_2020_BJgy96EYvr | Influence-Based Multi-Agent Exploration | Intrinsically motivated reinforcement learning aims to address the exploration challenge for sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature. We aim to take a step towards solving this problem. We present two exploration methods: exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), by exploiting the role of interaction in coordinated behaviors of agents. EITI uses mutual information to capture the interdependence between the transition dynamics of agents. EDTI uses a novel intrinsic reward, called Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on expected returns of other agents. By optimizing EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies to optimize the team performance. We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning. The resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our methods in a variety of multi-agent scenarios. | accept-spotlight | The paper presents a new take on exploration in multi-agent reinforcement learning settings, and presents two approaches, one motivated by information theoretic, the other by decision theoretic influence on other agents. Reviewers consider the proposed approach "pretty elegant, and in a sense seem fundamental", the experimental section "thorough", and expect the work to "encourage future work to explore more problems in this area". Several questions were raised, especially regarding related work, comparison to single agent exploration approaches, and several clarifying questions. 
These were largely addressed by the authors, resulting in a strong submission with valuable contributions. | train | [
"rJg6aatjYH",
"BkxVHrpKYS",
"HklmzDWjir",
"Byx8jl-soH",
"ryez7xbooH",
"SklRxeWsoB",
"Hke-bHAmtr",
"BJeHprdQFS",
"HJx2JQyg9S",
"rJehSPzj_H",
"HJlVKGuwdH",
"BkeXH5T7dB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"public"
] | [
"Update: I thank the authors for their response and I will maintain my score, my main hesitation being the overall clarity and readability of the paper. \n\nSummary: \nThis paper proposes the use of two intrinsic rewards for exploration in MARL settings. The first one is an information-theoretic influence (EITI) b... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1
] | [
"iclr_2020_BJgy96EYvr",
"iclr_2020_BJgy96EYvr",
"iclr_2020_BJgy96EYvr",
"BkxVHrpKYS",
"rJg6aatjYH",
"HJx2JQyg9S",
"iclr_2020_BJgy96EYvr",
"rJehSPzj_H",
"iclr_2020_BJgy96EYvr",
"HJlVKGuwdH",
"BkeXH5T7dB",
"iclr_2020_BJgy96EYvr"
] |
iclr_2020_SJeqs6EFvB | HOPPITY: LEARNING GRAPH TRANSFORMATIONS TO DETECT AND FIX BUGS IN PROGRAMS | We present a learning-based approach to detect and fix a broad range of bugs in Javascript programs. We frame the problem in terms of learning a sequence of graph transformations: given a buggy program modeled by a graph structure, our model makes a sequence of predictions including the position of bug nodes and corresponding graph edits to produce a fix. Unlike previous works that use deep neural networks, our approach targets bugs that are more complex and semantic in nature (i.e.~bugs that require adding or deleting statements to fix). We have realized our approach in a tool called HOPPITY. By training on 290,715 Javascript code change commits on Github, HOPPITY correctly detects and fixes bugs in 9,490 out of 36,361 programs in an end-to-end fashion. Given the bug location and type of the fix, HOPPITY also outperforms the baseline approach by a wide margin. | accept-spotlight | This paper presents a learning-based approach to detect and fix bugs in JavaScript programs. By modeling the bug detection and fix as a sequence of graph transformations, the proposed method achieved promising experimental results on a large JavaScript dataset crawled from GitHub.
All the reviewers agree to accept the paper for its reasonable and interesting approach to solving the bug detection and repair problem. The main concerns are about the experimental design, which has been addressed by the authors in the revision.
Based on the novelty and solid experiments of the proposed method, I agree with the other reviewers to accept the paper.
| train | [
"Syg633Qs_H",
"rJe5NQzhir",
"ByxOVrGhsH",
"S1lk2NM2iH",
"Hkx6IVzhsH",
"Hyg7Kmz3sr",
"HkemRjoaKS",
"Hkl8GXDRYr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a graph tranformation-based code repair tool. By representing source code as a graph a network is asked to take a series of simple graph edit operations to edit the code. The authors show that their method better predicts edits from existing code.\n\nOverall, I find the problem interesting and ... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_SJeqs6EFvB",
"Hkl8GXDRYr",
"iclr_2020_SJeqs6EFvB",
"Hkx6IVzhsH",
"Syg633Qs_H",
"HkemRjoaKS",
"iclr_2020_SJeqs6EFvB",
"iclr_2020_SJeqs6EFvB"
] |
iclr_2020_BJge3TNKwH | Sliced Cramer Synaptic Consolidation for Preserving Deeply Learned Representations | Deep neural networks suffer from the inability to preserve the learned data representation (i.e., catastrophic forgetting) in domains where the input data distribution is non-stationary, and it changes during training. Various selective synaptic plasticity approaches have been recently proposed to preserve network parameters, which are crucial for previously learned tasks while learning new tasks. We explore such selective synaptic plasticity approaches through a unifying lens of memory replay and show the close relationship between methods like Elastic Weight Consolidation (EWC) and Memory-Aware-Synapses (MAS). We then propose a fundamentally different class of preservation methods that aim at preserving the distribution of internal neural representations for previous tasks while learning a new one. We propose the sliced Cram\'{e}r distance as a suitable choice for such preservation and evaluate our Sliced Cramer Preservation (SCP) algorithm through extensive empirical investigations on various network architectures in both supervised and unsupervised learning settings. We show that SCP consistently utilizes the learning capacity of the network better than online-EWC and MAS methods on various incremental learning tasks. | accept-spotlight | The paper addresses an important problem (preventing catastrophic forgetting in continual learning) through a novel approach based on the sliced Kramer distance. The paper provides a novel and interesting conceptual contribution and is well written. Experiments could have been more extensive but this is very nice work and deserves publication. | train | [
"SkxaLppiiS",
"Byg4_nTisr",
"BylVcusk5r",
"SJlTfBCMqH"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for a positive evaluation of our work and for setting time aside to carefully read our paper. We appreciate your valuable time spent on serving the community. Below please find our responses. \n\n\n1) \"I think the experimental section relies heavily on MNIST (e.g. permuted MNIST, auto-encoding... | [
-1,
-1,
8,
6
] | [
-1,
-1,
5,
4
] | [
"BylVcusk5r",
"SJlTfBCMqH",
"iclr_2020_BJge3TNKwH",
"iclr_2020_BJge3TNKwH"
] |
iclr_2020_rJeB36NKvB | How much Position Information Do Convolutional Neural Networks Encode? | In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. Information concerning absolute position is inherently useful, and it is reasonable to assume that deep CNNs may implicitly learn to encode this information if there is a means to do so. In this paper, we test this hypothesis revealing the surprising degree of absolute position information that is encoded in commonly used neural networks. A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs. | accept-spotlight | This paper analyzes the weights associated with filters in CNNs and finds that they encode positional information (i.e. near the edges of the image). A detailed discussion and analysis is performed, which shows where this positional information comes from.
The reviewers were happy with your paper and found it to be quite interesting. The reviewers felt your paper addressed an important (and surprising!) issue not previously recognized in CNNs. | train | [
"SklQXVNkcB",
"ryxckH_joB",
"r1gWFEOosS",
"rJlN7fOioB",
"BklLFSzRtB",
"r1xDPX_yqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studied the problem of the encoded position information in convolution neural networks. The hypothesis is that CNN can implicitly learn to encode the position information. The author tests the hypothesis with lots of experiments to show how and where the position information is encoded.\n\nClarity:\nThi... | [
8,
-1,
-1,
-1,
8,
8
] | [
4,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_rJeB36NKvB",
"r1xDPX_yqH",
"SklQXVNkcB",
"BklLFSzRtB",
"iclr_2020_rJeB36NKvB",
"iclr_2020_rJeB36NKvB"
] |
iclr_2020_HJenn6VFvB | Hamiltonian Generative Networks | The Hamiltonian formalism plays a central role in classical and quantum physics. Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time. These properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks. In this paper, we introduce the Hamiltonian Generative Network (HGN), the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations (such as images) without restrictive domain assumptions. Once trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics. We demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow (NHF), that uses Hamiltonian dynamics to model expressive densities. Hence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning. More results and video evaluations are available at: http://tiny.cc/hgn | accept-spotlight | The paper introduces a novel way of learning Hamiltonian dynamics with a generative network. The Hamiltonian generative network (HGN) learns the dynamics directly from data by embedding observations in a latent space, which is then transformed into a phase space describing the system's initial (abstract) position and momentum. 
Using a second network, the Hamiltonian network, the position and momentum are reduced to a scalar, interpreted as the Hamiltonian of the system, which can then be used to do rollouts in the phase space using techniques known from, e.g., Hamiltonian Monte Carlo sampling. An important ingredient of the paper is the fact that no access to the derivatives of the Hamiltonian is needed.
The reviewers agree that this paper is a good contribution, and I recommend acceptance. | train | [
"SkeI9242oB",
"ryeymd72iH",
"S1gCHOIjsB",
"SJg75vcciB",
"BygY8jg5jH",
"rkgfNbOdjS",
"Bkx4mW_djS",
"SJgmLg_doH",
"S1g6AkuOiB",
"Sye2f6xAur",
"HJgFZBKTKB",
"BkgU-6KRKS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your suggestion. We have updated the text to say \"expressive\" instead of \"more expressive\".",
"Dear authors, thank you very much for your response and the updated paper. Both nicely address my questions.\n\nYou are right that I was a bit too quick to conclude that you argue that NHF is more exp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"ryeymd72iH",
"S1g6AkuOiB",
"SJgmLg_doH",
"rkgfNbOdjS",
"iclr_2020_HJenn6VFvB",
"Bkx4mW_djS",
"Sye2f6xAur",
"HJgFZBKTKB",
"BkgU-6KRKS",
"iclr_2020_HJenn6VFvB",
"iclr_2020_HJenn6VFvB",
"iclr_2020_HJenn6VFvB"
] |
iclr_2020_SkeyppEFvS | CoPhy: Counterfactual Learning of Physical Dynamics | Understanding causes and effects in mechanical systems is an essential component of reasoning in the physical world. This work poses a new problem of counterfactual learning of object mechanics from visual input. We develop the CoPhy benchmark to assess the capacity of the state-of-the-art models for causal physical reasoning in a synthetic 3D environment and propose a model for learning the physical dynamics in a counterfactual setting. Having observed a mechanical experiment that involves, for example, a falling tower of blocks, a set of bouncing balls or colliding objects, we learn to predict how its outcome is affected by an arbitrary intervention on its initial conditions, such as displacing one of the objects in the scene. The alternative future is predicted given the altered past and a latent representation of the confounders learned by the model in an end-to-end fashion with no supervision. We compare against feedforward video prediction baselines and show how observing alternative experiences allows the network to capture latent physical properties of the environment, which results in significantly more accurate predictions at the level of super human performance. | accept-spotlight | The reviewers are unanimous in their opinion that this paper offers a novel approach to learning naïve physics. I concur. | train | [
"ryloApppFB",
"BkgCHpSosB",
"rke4DbQqsS",
"Skesm4MLoB",
"H1lOZEz8ir",
"Byl-77z8sr",
"BJeBRMzUsB",
"B1gVN4rCtS",
"BkMT-8Rtr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Update: after revision, I have decided to keep the score unchanged.\n\nOriginal comments:\n\nIn this paper, the authors proposed a new method to learn physical dynamics based on counterfactual reasoning. \n\n1. As also summarized by the paper, over recent years, there has been increasing interest in the research c... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_SkeyppEFvS",
"rke4DbQqsS",
"Skesm4MLoB",
"H1lOZEz8ir",
"B1gVN4rCtS",
"BkMT-8Rtr",
"ryloApppFB",
"iclr_2020_SkeyppEFvS",
"iclr_2020_SkeyppEFvS"
] |
iclr_2020_BJg866NFvB | Estimating counterfactual treatment outcomes over time through adversarially balanced representations | Identifying when to give treatments to patients and how to select among multiple treatments over time are important medical problems with a few existing solutions. In this paper, we introduce the Counterfactual Recurrent Network (CRN), a novel sequence-to-sequence model that leverages the increasingly available patient observational data to estimate treatment effects over time and answer such medical questions. To handle the bias from time-varying confounders, covariates affecting the treatment assignment policy in the observational data, CRN uses domain adversarial training to build balancing representations of the patient history. At each timestep, CRN constructs a treatment invariant representation which removes the association between patient history and treatment assignments and thus can be reliably used for making counterfactual predictions. On a simulated model of tumour growth, with varying degree of time-dependent confounding, we show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment than current state-of-the-art methods. | accept-spotlight | Reviewers uniformly suggest acceptance. Please look carefully at reviewer comments and address in the camera-ready. Great work! | train | [
"H1e1Ni-3FB",
"HJxc4a1UiB",
"HkeIisXGjS",
"rkxAY_Xfir",
"rygpMUQfsr",
"ByxPsr7zsB",
"rylO65p7Kr",
"S1lBUXOaKr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces Counterfactual Recurrent Network (CRN) that is able to estimate the effects of various treatments from longitudinal data. The claim is that the model can decide (i) treatment plan; (ii) optimal time of treatment; and (iii) when to stop treatment. The proposed method attempts to learn time-inva... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_BJg866NFvB",
"iclr_2020_BJg866NFvB",
"rkxAY_Xfir",
"H1e1Ni-3FB",
"rylO65p7Kr",
"S1lBUXOaKr",
"iclr_2020_BJg866NFvB",
"iclr_2020_BJg866NFvB"
] |
iclr_2020_Skep6TVYDB | Gradientless Descent: High-Dimensional Zeroth-Order Optimization | Zeroth-order optimization is the process of minimizing an objective f(x), given oracle access to evaluations at adaptively chosen inputs x. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable. We analyze our algorithms from a novel geometric perspective, and for {\it any monotone transform} of a smooth and strongly convex objective with latent dimension k≥n, our novel analysis shows convergence within an ϵ-ball of the optimum in O(kQlog(n)log(R/ϵ)) evaluations, where the input dimension is n, R is the diameter of the input space and Q is the condition number. Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations. We further leverage our geometric perspective to show that our analysis is optimal. Both monotone invariance and the ability to utilize a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on synthetic and MuJoCo benchmarks.
 | accept-spotlight | The paper considers an interesting algorithm for zeroth-order optimization and contains strong theory. All the reviewers agree to accept. | train | [
"B1geUE_Hcr",
"B1xE7meqsH",
"H1lvhGxcsr",
"rkett7l5oB",
"rkeEw7x5sB",
"r1gG4uu3FB",
"H1ePXMhaYS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nUpdate after rebuttal: I found the rebuttal convincing and I liked the fact that concerns regarding empirical justification were addressed. Consequently, I increase my score from \"Weak Reject\" to \"Weak Accept\".\n--------------------------\nThis paper focuses on derivative-free, or zero-th order, optimization... | [
6,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_Skep6TVYDB",
"B1geUE_Hcr",
"iclr_2020_Skep6TVYDB",
"H1ePXMhaYS",
"r1gG4uu3FB",
"iclr_2020_Skep6TVYDB",
"iclr_2020_Skep6TVYDB"
] |
iclr_2020_Hkekl0NFPr | Conditional Learning of Fair Representations | We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate both in theory and on two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification.
 | accept-spotlight | This paper provides a new algorithm for learning fair representations under two different fairness criteria--accuracy parity and equalized odds. The reviewers agree that the paper provides novel techniques, although the experiments may appear to be a bit weak. Overall, this paper gives new contributions to the fair representation learning literature.
The authors should consider citing and discussing the relationship with the following work:
A Reductions Approach to Fair Classification, ICML 2018
"rJe6njy0KH",
"S1e9hdUBcS",
"ryg3OvGsjS",
"SylzfAdqsH",
"Ske9dDXMir",
"B1gt08mMjB",
"BkeAVwQGoB",
"SJl4zO7MoH",
"r1gh9fvAYB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper focuses on learning representations which can simultaneously achieve equalized odds and accuracy parity without impacting demographic parity. The authors show both theoretically and empirically that the proposed algorithm show better utility-fairness tradeoff on balanced datasets. This is indeed a usefu... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2020_Hkekl0NFPr",
"iclr_2020_Hkekl0NFPr",
"SylzfAdqsH",
"B1gt08mMjB",
"rJe6njy0KH",
"S1e9hdUBcS",
"r1gh9fvAYB",
"iclr_2020_Hkekl0NFPr",
"iclr_2020_Hkekl0NFPr"
] |
iclr_2020_ByxxgCEYDS | Inductive Matrix Completion Based on Graph Neural Networks | We propose an inductive matrix completion model without using side information. By factorizing the (rating) matrix into the product of low-dimensional latent embeddings of rows (users) and columns (items), a majority of existing matrix completion methods are transductive, since the learned embeddings cannot generalize to unseen rows/columns or to new matrices. To make matrix completion inductive, most previous works use content (side information), such as user's age or movie's genre, to make predictions. However, high-quality content is not always available, and can be hard to extract. Under the extreme setting where not any side information is available other than the matrix to complete, can we still learn an inductive matrix completion model? In this paper, we propose an Inductive Graph-based Matrix Completion (IGMC) model to address this problem. IGMC trains a graph neural network (GNN) based purely on 1-hop subgraphs around (user, item) pairs generated from the rating matrix and maps these subgraphs to their corresponding ratings. It achieves highly competitive performance with state-of-the-art transductive baselines. In addition, IGMC is inductive -- it can generalize to users/items unseen during the training (given that their interactions exist), and can even transfer to new tasks. Our transfer learning experiments show that a model trained out of the MovieLens dataset can be directly used to predict Douban movie ratings with surprisingly good performance. Our work demonstrates that: 1) it is possible to train inductive matrix completion models without using side information while achieving similar or better performances than state-of-the-art transductive methods; 2) local graph patterns around a (user, item) pair are effective predictors of the rating this user gives to the item; and 3) Long-range dependencies might not be necessary for modeling recommender systems. 
| accept-spotlight | This paper proposes a novel technique for matrix completion, using graphical neighborhood structure to side-step the need for any side-information.
Post-rebuttal, the reviewers converged on a unanimous decision to accept. The authors are encouraged to review to address reviewer comments. | train | [
"Bkxs1aUDsB",
"SJlwehUDjS",
"H1xK-etpYB",
"rkggvmj8ir",
"rkxNCMiUor",
"Byxx9MoIjB",
"HkevAXKCYB",
"BklY_ODfqS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the very helpful comments again! We really appreciate them.",
"Thanks for the detailed response - I've increased my score in response to your additions.",
"This paper presents a method for inductive matrix completion that does not rely on side information to make predictions. The approach is as f... | [
-1,
-1,
6,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
5,
4
] | [
"SJlwehUDjS",
"rkggvmj8ir",
"iclr_2020_ByxxgCEYDS",
"H1xK-etpYB",
"HkevAXKCYB",
"BklY_ODfqS",
"iclr_2020_ByxxgCEYDS",
"iclr_2020_ByxxgCEYDS"
] |
iclr_2020_Hkx7xRVYDr | Duration-of-Stay Storage Assignment under Uncertainty | Storage assignment, the act of choosing what goods are placed in what locations in a warehouse, is a central problem of supply chain logistics. Past literature has shown that the optimal method to assign pallets is to arrange them in increasing duration of stay in the warehouse (the Duration-of-Stay, or DoS, method), but the methodology requires perfect prior knowledge of DoS for each pallet, which is unknown and uncertain under realistic conditions. Attempts to predict DoS have largely been unfruitful due to the multi-valued nature of the problem (every shipment contains multiple identical pallets with different DoS) and data sparsity induced by a lack of matching historical conditions. In this paper, we introduce a new framework for storage assignment that provides a solution to the DoS prediction problem through a distributional reformulation and a novel neural network, ParallelNet. Through collaboration with a world-leading cold storage company, we show that the system is able to predict DoS with a MAPE of 29%, a decrease of ~30% compared to a CNN-LSTM model, and suffers less performance decay into the future. The framework is then integrated into a first-of-its-kind Storage Assignment system, which is being deployed in warehouses across the United States, with initial results showing up to 21% in labor savings. We also release the first publicly available set of warehousing records to facilitate research into this central problem. | accept-spotlight | Thanks to the authors for the submission and the active discussion. The paper applies deep learning to the duration-of-stay estimation problem in the warehouse storage application. The authors provide a problem formulation and describe the pipeline of their solution, including dataset preparation and loss function design. The reviewers agree that this is a good application paper that showcases how deep learning can be useful for a real-world problem.
The release of the dataset can also be a nice contribution. A major debate during the discussion was whether this paper is within the scope of ICLR, given that it is mostly a straightforward application of existing techniques. After several rounds of discussion, reviewers think that this should fit under the category "applications in vision, ... , computational biology, and others." Overall, this paper can be a good example of applying deep learning to real-world problems.
| train | [
"SyxjqEicsB",
"BJliJBi5oS",
"HJlLZ399oS",
"BygCtgOqjB",
"S1xwzqPz9S",
"ryl-kZsKjS",
"rkgEoBqFoB",
"rkeHtQxstB",
"ryl5kFWUsH",
"SkgGIvZLoS",
"H1xhrLZUjH",
"HJl5RrmNoB",
"rJl8j4X4jB",
"ryl4cL8zir",
"Hygi6_NgoH",
"SylFyZjyjS",
"ryx2oPcyiH",
"B1g8RZ5koB",
"SyecrTwlFS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_re... | [
"We once again thank Reviewer 3 for the extremely helpful comments.\n\n1. In response to the comment on formalizing the problem, we completely rewrote the introduction to do a formal introduction of the problem and the theorem that proved DoS optimality (in a slightly different form to the original paper for easibi... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"rkeHtQxstB",
"BygCtgOqjB",
"ryl-kZsKjS",
"H1xhrLZUjH",
"iclr_2020_Hkx7xRVYDr",
"rkeHtQxstB",
"ryl4cL8zir",
"iclr_2020_Hkx7xRVYDr",
"rJl8j4X4jB",
"rkeHtQxstB",
"S1xwzqPz9S",
"rkeHtQxstB",
"ryx2oPcyiH",
"Hygi6_NgoH",
"B1g8RZ5koB",
"S1xwzqPz9S",
"SyecrTwlFS",
"rkeHtQxstB",
"iclr_20... |
iclr_2020_HklSeREtPB | Emergence of functional and structural properties of the head direction system by optimization of recurrent neural networks | Recent work suggests goal-driven training of neural networks can be used to model neural activity in the brain. While response properties of neurons in artificial neural networks bear similarities to those in the brain, the network architectures are often constrained to be different. Here we ask if a neural network can recover both neural representations and, if the architecture is unconstrained and optimized, also the anatomical properties of neural circuits. We demonstrate this in a system where the connectivity and the functional organization have been characterized, namely, the head direction circuit of the rodent and fruit fly. We trained recurrent neural networks (RNNs) to estimate head direction through integration of angular velocity. We found that the two distinct classes of neurons observed in the head direction system, the Compass neurons and the Shifter neurons, emerged naturally in artificial neural networks as a result of training. Furthermore, connectivity analysis and in-silico neurophysiology revealed structural and mechanistic similarities between artificial networks and the head direction system. Overall, our results show that optimization of RNNs in a goal-driven task can recapitulate the structure and function of biological circuits, suggesting that artificial neural networks can be used to study the brain at the level of both neural activity and anatomical organization. | accept-spotlight | This paper studies properties that emerge in an RNN trained to report head direction, showing that several properties in natural neural circuits performing that function are detected.
All reviewers agree that this is quite an interesting paper. While there are some reservations as to the value of letting a property of interest emerge as opposed to simply hand-coding it in, this approach is seen as powerful and valuable by many people, in that it suggests a higher plausibility that the emerging properties are actually useful when optimizing for that function -- a claim which hand-coding would not make possible. Reviewers have also provided valuable suggestions and requests for clarifications, and authors have responded by improving the presentation and providing more insights.
Overall, this is a solid contribution that will be of interest to the part of the ICLR audience that is interested in biological systems. | train | [
"rylbc4Y2iH",
"H1xvhXY3sr",
"S1lNm7KnsS",
"Hkl8SfFhjr",
"ByeXr3JAKH",
"BkghycwRYr",
"Hygbnylk5B"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your positive assessment and suggestions. We address your concerns/suggestions below.\n\n1. Difference between our approach and the traditional approach using hand-crafted connectivity\n\nThe reviewer touched on an important and more general point, namely the value of using optimization-based RNNs as... | [
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"ByeXr3JAKH",
"BkghycwRYr",
"Hygbnylk5B",
"iclr_2020_HklSeREtPB",
"iclr_2020_HklSeREtPB",
"iclr_2020_HklSeREtPB",
"iclr_2020_HklSeREtPB"
] |
iclr_2020_SyxrxR4KPS | Deep neuroethology of a virtual rodent | Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience. | accept-spotlight | This paper is somewhat unorthodox in what it sets out to do: use neuroscience methods to understand a trained deep network controlling an embodied agent. This is exciting, but the actual training of the virtual rodent and the performance it exhibits is also impressive in its own right. All reviewers liked the paper. The question that recurred among all reviewers was what was actually learned in this analysis. The authors responded to this convincingly by listing a number of interesting findings.
I think this paper represents an interesting new direction that many will be interested in. | train | [
"r1g-jDM6Fr",
"Hylxk8RLir",
"HJeksHCUsB",
"BJlFmHCIsS",
"HJlaRNA8iB",
"rJl1PmhsYB",
"rkgMsbnQ9B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n=============================== Update after rebuttal ======================================================\n\nI thank the authors for their rebuttal and the revisions. I'm not entirely satisfied with the authors' response to my request for more architectural exploration, but I understand that there wasn't real... | [
6,
-1,
-1,
-1,
-1,
8,
6
] | [
5,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_SyxrxR4KPS",
"rkgMsbnQ9B",
"r1g-jDM6Fr",
"rJl1PmhsYB",
"iclr_2020_SyxrxR4KPS",
"iclr_2020_SyxrxR4KPS",
"iclr_2020_SyxrxR4KPS"
] |
iclr_2020_S1glGANtDr | Doubly Robust Bias Reduction in Infinite Horizon Off-Policy Estimation | Infinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods. | accept-spotlight | The paper proposes a doubly robust off-policy evaluation method that uses both the stationary density ratio and a learned value function to reduce bias.
The reviewers unanimously recommend acceptance of this paper. | train | [
"rylbBYHcjS",
"SyxNOqS5oH",
"HkxbciuDjH",
"ryxVg9B9oH",
"Hkl5PWypYH",
"Hye5X3aYoB",
"S1lZXXAdsS",
"BklNSppdjS",
"ByloMnuDoB",
"SJlKt2OPsS",
"SJlAGjuDjS",
"Syga485aKS",
"rkelhjsaYB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank Reviewer #3 for the follow-up comments. Below are responses to your specific questions:\n\n- 1.Variance analysis\n\nIn general, if $R_{\\mathrm{res}}$ and $R_{\\mathrm{VAL}}$ are negatively correlated for the joint distribution of $\\mu_0$ and $d_{\\pi_0}$, we can reduce the variance. We leave it as futur... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"BklNSppdjS",
"Hye5X3aYoB",
"Syga485aKS",
"S1lZXXAdsS",
"iclr_2020_S1glGANtDr",
"ByloMnuDoB",
"HkxbciuDjH",
"SJlAGjuDjS",
"Hkl5PWypYH",
"iclr_2020_S1glGANtDr",
"rkelhjsaYB",
"iclr_2020_S1glGANtDr",
"iclr_2020_S1glGANtDr"
] |
iclr_2020_H1ldzA4tPr | Learning Compositional Koopman Operators for Model-Based Control | Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis. The Koopman operator theory lays the foundation for identifying the nonlinear-to-linear coordinate transformations with data-driven methods. Recently, researchers have proposed to use deep neural networks as a more expressive class of basis functions for calculating the Koopman operators. These approaches, however, assume a fixed dimensional state space; they are therefore not applicable to scenarios with a variable number of objects. In this paper, we propose to learn compositional Koopman operators, using graph neural networks to encode the state into object-centric embeddings and using a block-wise linear transition matrix to regularize the shared structure across objects. The learned dynamics can quickly adapt to new environments of unknown physical parameters and produce control signals to achieve a specified goal. Our experiments on manipulating ropes and controlling soft robots show that the proposed method has better efficiency and generalization ability than existing baselines. | accept-spotlight | This paper proposes using object-centered graph neural network embeddings of a dynamical system as approximate Koopman embeddings, and then learning the linear transition matrix to model the dynamics of the system according to the Koopman operator theory. The authors propose adding an inductive bias (a block diagonal structure of the transition matrix with shared components) to limit the number of parameters necessary to learn, which improves the computational efficiency and generalisation of the proposed approach. The authors also propose adding an additional input component that allows for external control of the dynamics of the system. 
The reviewers initially had concerns about the experimental section, since the approach was only tested on toy domains. The reviewers also asked for more baselines. The authors were able to answer some of the questions raised during the discussion period, and by the end of it, all reviewers agreed that this is a solid and novel piece of work that deserves to be accepted. For this reason, I recommend acceptance. | test | [
"HJlw9pVtir",
"S1Mdmj9hoS",
"r1eClp0acr",
"H1lxiTWjir",
"HklsJA4tsS",
"Bke42TNtor",
"r1gn76NYoB",
"BkghAoVYiS",
"SklAzVndYH",
"H1gWd_J4cS",
"H1lOXuJacS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your thoughtful and constructive comments. \n\n1. Real-world experiments\n\nWe agree that showing real-world experiments would be beneficial. As a first step, we are starting with synthetic environments, which allow us to systematically evaluate and ablate on our model to fully understand its capabil... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"H1lOXuJacS",
"H1lxiTWjir",
"iclr_2020_H1ldzA4tPr",
"r1gn76NYoB",
"SklAzVndYH",
"H1gWd_J4cS",
"r1eClp0acr",
"iclr_2020_H1ldzA4tPr",
"iclr_2020_H1ldzA4tPr",
"iclr_2020_H1ldzA4tPr",
"iclr_2020_H1ldzA4tPr"
] |