paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2018_SkF2D7g0b | Exploring the Space of Black-box Attacks on Deep Neural Networks | Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses. | rejected-papers | The paper explores an increasingly important question, especially in showing the attack on existing APIs. The update to the paper has also improved it, but the paper is still not yet as impactful as it could be and needs much more comprehensive analysis to correctly appreciate its benefits and role. | train | [
"Syh_3H0VM",
"Hk96V1clf",
"rJGGOrcxz",
"B10Nn-jlf",
"Hkx9uTl4G",
"BkWKdLPGM",
"H1ddlLvzf",
"BJ0xxIwMM",
"H1FQ2HPfG",
"ry7lNQexM",
"ryDHmbY1G",
"SkYyvAX1G"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"Thank you for your revised review. Regarding the higher value of distortion for SPSA, we would like to refer you to the second column of Table 2 titled 'Attack success'. The numbers in parentheses in this column provide the average distortion value for each type of attack. Since the earlier table (Table 1) of resu... | [
-1,
5,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Hk96V1clf",
"iclr_2018_SkF2D7g0b",
"iclr_2018_SkF2D7g0b",
"iclr_2018_SkF2D7g0b",
"BJ0xxIwMM",
"iclr_2018_SkF2D7g0b",
"Hk96V1clf",
"rJGGOrcxz",
"B10Nn-jlf",
"ryDHmbY1G",
"SkYyvAX1G",
"iclr_2018_SkF2D7g0b"
] |
iclr_2018_r1RF3ExCb | Transformation Autoregressive Networks | The fundamental task of general density estimation has been of keen interest to machine learning. Recent advances in density estimation have either: a) proposed using a flexible model to estimate the conditional factors of the chain rule; or b) used flexible, non-linear transformations of variables of a simple base distribution. Instead, this work jointly leverages transformations of variables and autoregressive conditional models, and proposes novel methods for both. We provide a deeper understanding of our models, showing a considerable improvement with our methods through a comprehensive study over both real world and synthetic data. Moreover, we illustrate the use of our models in outlier detection and image modeling tasks. | rejected-papers | This paper looks at building new density estimation methods and new methods for transformations and autoregressive models. The comparison requested by reviewers improves the paper. These models have seen a wide range of applications and have been highly successful, so the added benefits shown here and their potential impact need to be expanded further. | train | [
"S1FCACYeG",
"By_sZWcgz",
"HkZ8Gb9eG",
"B1naFJL7z",
"SkEr0orXM",
"r1E9brOZf",
"r1A1-Bd-f",
"H1Ubxrd-f",
"Bkw504ubz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose to combine nonlinear bijective transformations and flexible density models for density estimation. In terms of bijective change of variables transformations, they propose linear triangular transformations and recurrent transformations. They also propose to use as base transformation an autoregr... | [
5,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1RF3ExCb",
"iclr_2018_r1RF3ExCb",
"iclr_2018_r1RF3ExCb",
"SkEr0orXM",
"r1E9brOZf",
"S1FCACYeG",
"By_sZWcgz",
"HkZ8Gb9eG",
"iclr_2018_r1RF3ExCb"
] |
iclr_2018_rkONG0xAW | Recursive Binary Neural Network Learning Model with 2-bit/weight Storage Requirement | This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for embedded and mobile devices having a limited amount of on-chip data storage such as hundreds of kilo-Bytes. The main idea of the proposed model is to recursively recycle data storage of weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on a chip, achieving better classification accuracy. Such efficient use of on-chip storage reduces off-chip storage accesses, improving energy-efficiency and speed of training. We verified the proposed training model with deep and convolutional neural network classifiers on the MNIST and voice activity detection benchmarks. For the deep neural network, our model achieves a data storage requirement of as low as 2 bits/weight, whereas the conventional binary neural network learning models require data storage of 8 to 32 bits/weight. With the same amount of data storage, our model can train a bigger network having more weights, achieving 1% less test error than the conventional binary neural network learning model. To achieve a similar classification error, the conventional binary neural network model requires 4× more data storage for weights than our proposed model. For the convolutional neural network classifier, the proposed model achieves 2.4% less test error for the same on-chip storage or 6× storage savings to achieve similar accuracy.
 | rejected-papers | This is an interesting paper and addresses an important problem of neural networks with memory constraints. New experiments have been added that strengthen the paper, but the full impact of the paper is not yet realised, needing further exploration of models of current practice, a wider set of experiments and analysis, and additional clarifying discussion. | train | [
"BkYwge9ef",
"SkMJBHOez",
"H11OyNqgM",
"HyA3vW57z",
"H1suSbq7z",
"HyJRxZ9mf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"There could be an interesting idea here, but the limitations and applicability of the proposed approach are not clear yet. More analysis should be done to clarify its potential. Besides, the paper seriously needs to be reworked. The text in general, but also the notation, should be improved.\n\nIn my opinion, the ... | [
6,
7,
5,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_rkONG0xAW",
"iclr_2018_rkONG0xAW",
"iclr_2018_rkONG0xAW",
"SkMJBHOez",
"BkYwge9ef",
"H11OyNqgM"
] |
iclr_2018_SJIA6ZWC- | Stochastic Hyperparameter Optimization through Hypernetworks | Machine learning models are usually tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of both weights and hyperparameters. Our method trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our method converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters. | rejected-papers | The paper is interesting, and the update to the paper and additional experiments have already improved it in many ways, but the paper still does not have as much impact as it could; this could be addressed by further strengthening the comparisons and demonstrating usefulness in many situations of current practice. | train | [
"Bk_UdcKxf",
"ryb9D_Bxf",
"r1dLqgZWM",
"BJH3BITQG",
"rJ6qV8pXf",
"ryTNBU6mM",
"Sygmdl-WG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"*Summary*\n\nThe paper proposes to use hyper-networks [Ha et al. 2016] for the tuning of hyper-parameters, along the lines of [Brock et al. 2017]. The core idea is to have a side neural network sufficiently expressive to learn the (large-scale, matrix-valued) mapping from a given configuration of hyper-parameters ... | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
4,
3,
1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJIA6ZWC-",
"iclr_2018_SJIA6ZWC-",
"iclr_2018_SJIA6ZWC-",
"ryb9D_Bxf",
"r1dLqgZWM",
"Bk_UdcKxf",
"Bk_UdcKxf"
] |
iclr_2018_ByW5yxgA- | Multiscale Hidden Markov Models For Covariance Prediction | This paper presents a novel variant of hierarchical hidden Markov models (HMMs), the multiscale hidden Markov model (MSHMM), and an associated spectral estimation and prediction scheme that is consistent, finds global optima, and is computationally efficient. Our MSHMM is a generative model of multiple HMMs evolving at different rates where the observation is a result of the additive emissions of the HMMs. While estimation is relatively straightforward, prediction for the MSHMM poses a unique challenge, which we address in this paper. Further, we show that spectral estimation of the MSHMM outperforms standard methods of predicting the asset covariance of stock prices, a widely addressed problem that is multiscale, non-stationary, and requires processing huge amounts of data. | rejected-papers | The paper addresses an interesting problem, but the reviewers found that the paper is not as strong as it could be: improving the range of evaluated data would significantly improve the convincingness of the experiments, as would clearly addressing any alternatives, their limitations, and their use as baselines. | val | [
"HyUR-6Oez",
"Bk-DjW5ef",
"r1hXsPf-G",
"SJ7sJ_pmz",
"BkXd1O6mM",
"HJYNyupmG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper focuses on a very particular HMM structure which involves multiple, independent HMMs. Each HMM emits an unobserved output with an explicit duration period. This explicit duration modelling captures multiple scale of temporal resolution. The actual observations are a weighted linear combination of the emi... | [
5,
6,
6,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_ByW5yxgA-",
"iclr_2018_ByW5yxgA-",
"iclr_2018_ByW5yxgA-",
"HyUR-6Oez",
"Bk-DjW5ef",
"r1hXsPf-G"
] |
iclr_2018_r1uOhfb0W | Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning | An ensemble of neural networks is known to be more robust and accurate than an individual network, however usually with linearly-increased cost in both training and testing.
In this work, we propose a two-stage method to learn Sparse Structured Ensembles (SSEs) for neural networks.
In the first stage, we run SG-MCMC with group sparse priors to draw an ensemble of samples from the posterior distribution of network parameters. In the second stage, we apply weight-pruning to each sampled network and then perform retraining over the remaining connections.
In this way of learning SSEs with SG-MCMC and pruning, we not only achieve high prediction accuracy since SG-MCMC enhances exploration of the model-parameter space, but also reduce memory and computation cost significantly in both training and testing of NN ensembles.
This is thoroughly evaluated in the experiments of learning SSE ensembles of both FNNs and LSTMs.
For example, in LSTM based language modeling (LM), we obtain 21\% relative reduction in LM perplexity by learning an SSE of 4 large LSTM models, which has only 30\% of model parameters and 70\% of computations in total, as compared to the baseline large LSTM LM.
To the best of our knowledge, this work represents the first methodology and empirical study of integrating SG-MCMC, group sparse prior and network pruning together for learning NN ensembles. | rejected-papers | This paper is interesting since it goes toward showing the role of model averaging. The clarifications made improve the paper, but the impact of the paper is still not realised: the common confusion on the retraining can be re-examined, the methodology and evaluation clarified, and the wider literature more deeply contextualised. | train | [
"B1A7YkceM",
"BJt3Bg5gM",
"Hy6mmeCgf",
"S14r4n5fz",
"B1OxK2cMG",
"HkmrOnqzG",
"BJJWN39zf",
"BkgBDh9GG",
"HysnNFwA-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The authors propose a procedure to generate an ensemble of sparse structured models. To do this, the authors propose to (1) sample models using SG-MCMC with group sparse prior, (2) prune hidden units with small weights, (3) and retrain weights by optimizing each pruned model. The ensemble is applied to MNIST class... | [
4,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1uOhfb0W",
"iclr_2018_r1uOhfb0W",
"iclr_2018_r1uOhfb0W",
"iclr_2018_r1uOhfb0W",
"HysnNFwA-",
"B1A7YkceM",
"Hy6mmeCgf",
"BJt3Bg5gM",
"iclr_2018_r1uOhfb0W"
] |
iclr_2018_HJJ0w--0W | Long-term Forecasting using Tensor-Train RNNs | We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well as on real-world climate and traffic data. | rejected-papers | This paper addresses the increasingly studied problem of predictions over long-term horizons. Despite this, and the important updates from the authors, the paper is not yet ready, and improvements identified include more control over the fair comparisons and improved clarity of exposition. | train | [
"B1BulASgf",
"SJfyCxYgG",
"HJv0cb5xG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes Tensor-Train RNN and Tensor-Train LSTM (TT-RNN/TLSTM), a RNN/LSTM architecture whose hidden unit at time t h_t is computed from the tensor-vector product between a tensor of weights and a concatenation of hidden units from the previous L time steps. The motivation is to incorporate previous hidd... | [
4,
5,
6
] | [
4,
3,
4
] | [
"iclr_2018_HJJ0w--0W",
"iclr_2018_HJJ0w--0W",
"iclr_2018_HJJ0w--0W"
] |
iclr_2018_Skx5txzb0W | A Boo(n) for Evaluating Architecture Performance | We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems. | rejected-papers | The subject of model evaluation will always be a contentious one, and the reviewers were not yet fully convinced by the discussion. The points you bring up at the end of your response already point to directions for improvement, as well as a greater degree of precision and control. | val | [
"H1otcvggM",
"BknlT5Bez",
"rynGrnpeM",
"rkvaNxw-G",
"H1Nhrlvbz",
"SyJ-SgPWG",
"HyRyZFL-G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose a new measure to capture the inherent randomness of the performance of a neural net under different random initialisations and/or data inputs. Just reporting the best performance among many random realisations is clearly flawed yet still widely adopted. Instead, the authors propose to compute t... | [
4,
6,
4,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Skx5txzb0W",
"iclr_2018_Skx5txzb0W",
"iclr_2018_Skx5txzb0W",
"H1otcvggM",
"rynGrnpeM",
"BknlT5Bez",
"iclr_2018_Skx5txzb0W"
] |
iclr_2018_SkwAEQbAb | A novel method to determine the number of latent dimensions with SVD | Determining the number of latent dimensions is a ubiquitous problem in machine
learning. In this study, we introduce a novel method that relies on SVD to discover
the number of latent dimensions. The general principle behind the method is to
compare the curve of singular values of the SVD decomposition of a data set with
the randomized data set curve. The inferred number of latent dimensions corresponds
to the crossing point of the two curves. To evaluate our methodology, we
compare it with competing methods such as Kaiser's eigenvalue-greater-than-one
rule (K1), Parallel Analysis (PA), and Velicer's MAP test (Minimum Average Partial).
We also compare our method with the Silhouette Width (SW) technique which is
used in different clustering methods to determine the optimal number of clusters.
The results on synthetic data show that Parallel Analysis and our method have
similar results and are more accurate than the other methods, and that our method gives
slightly better results than Parallel Analysis for the sparse data sets. | rejected-papers | The paper addresses the important question of determining the intrinsic dimensionality, but there remain several issues, which make the paper not ready at this point: unclear exposition, lack of contextualisation of existing work and seemingly limited insights. The reviewers have provided many suggestions which we hope will be useful to improve the paper. | train | [
"ByPKCgNgG",
"r1N3gmtlz",
"HJAPXrtgM",
"B1hFqIXgM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"The manuscript proposes to estimate the number of components in SVD by comparing the eigenvalues to those obtained on bootstrapped version of the input.\n\nThe paper has numerous flaws and is clearly below acceptance threshold for any scientific forum. Some of the more obvious issues, each alone sufficient for rej... | [
1,
2,
3,
-1
] | [
4,
5,
4,
-1
] | [
"iclr_2018_SkwAEQbAb",
"iclr_2018_SkwAEQbAb",
"iclr_2018_SkwAEQbAb",
"iclr_2018_SkwAEQbAb"
] |
iclr_2018_rJ5C67-C- | Hyperedge2vec: Distributed Representations for Hyperedges | Data structured in the form of overlapping or non-overlapping sets is found in a variety of domains, sometimes explicitly but often subtly. For example, teams, which are of prime importance in social science studies, are \enquote{sets of individuals}; \enquote{item sets} in pattern mining are sets; and for various types of analysis in language studies a sentence can be considered as a \enquote{set or bag of words}. Although building models and inference algorithms for structured data has been an important task in the fields of machine learning and statistics, research on \enquote{set-like} data still remains less explored. Relationships between pairs of elements can be modeled as edges in a graph. However, for modeling relationships that involve all members of a set, a hyperedge is a more natural representation. In this work, we focus on the problem of embedding hyperedges in a hypergraph (a network of overlapping sets) to a low dimensional vector space. We propose a probabilistic deep-learning based method as well as a tensor-based algebraic model, both of which capture the hypergraph structure in a principled manner without losing set-level information. Our central focus is to highlight the connection between hypergraphs (topology), tensors (algebra) and probabilistic models. We present a number of interesting baselines, some of which adapt existing node-level embedding models to the hyperedge level, as well as sequence based language techniques which are adapted for set structured hypergraph topology. The performance is evaluated with a network of social groups and a network of word phrases. Our experiments show that accuracy-wise our methods perform similarly to those of baselines which are not designed for hypergraphs. Moreover, our tensor based method is quite efficient as compared to the deep-learning based auto-encoder method.
We therefore argue that we have proposed more general methods which are suited for hypergraphs (and therefore also for graphs) while maintaining accuracy and efficiency. | rejected-papers | While there are some interesting and novel aspects in this paper, none of the reviewers recommends acceptance. | train | [
"H1kAEtYlz",
"rJvDxGceG",
"S1teFU6gG",
"ryF8mLfNM",
"SyHasMG4M",
"HyhIPu67f",
"Sk1QH_aQM",
"Skqb4upmf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper studies different methods for defining hypergraph embeddings, i.e. defining vectorial representations of the set of hyperedges of a given hypergraph. It should be noted that the framework does not allow to compute a vectorial representation of a set of nodes not already given as an hyperedge. A set of me... | [
5,
5,
5,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJ5C67-C-",
"iclr_2018_rJ5C67-C-",
"iclr_2018_rJ5C67-C-",
"SyHasMG4M",
"HyhIPu67f",
"H1kAEtYlz",
"rJvDxGceG",
"S1teFU6gG"
] |
iclr_2019_B1gabhRcYX | BA-Net: Dense Bundle Adjustment Networks | This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method. | accepted-oral-papers | The first reviewer summarizes the contribution well: This paper combines [a CNN that computes both a multi-scale feature pyramid and a depth prediction, which is expressed as a linear combination of "depth bases"]. This is used to [define a dense re-projection error over the images, akin to that of dense or semi-dense methods]. [Then, this error is optimized with respect to the camera parameters and depth linear combination coefficients using Levenberg-Marquardt (LM). By unrolling 5 iterations of LM and expressing the dampening parameter lambda as the output of a MLP, the optimization process is made differentiable, allowing back-propagation and thus learning of the networks' parameters.]
Strengths:
While combining deep learning methods with bundle adjustment is not new, reviewers generally agree that the particular way in which that is achieved in this paper is novel and interesting. The authors accounted for reviewer feedback during the review cycle and improved the manuscript, leading to an increased rating.
Weaknesses:
Weaknesses were addressed during the rebuttal, including better evaluation of their predicted lambda and comparison with CodeSLAM.
Contention:
This paper was not particularly contentious, there was a score upgrade due to the efforts of the authors during the rebuttal period.
Consensus:
This paper addresses an interesting area of research at the intersection of geometric computer vision and deep learning and should be of considerable interest to many within the ICLR community. The discussion of the paper highlighted some important nuances of terminology regarding the characterization of different methods. This paper was also rated the highest in my batch. As such, I recommend this paper for an oral presentation. | test | [
"r1x8O_Sw3X",
"SylPHRPDnQ",
"BkgvvbtzkN",
"H1ljP2vqAQ",
"r1xqEgFcCX",
"H1gAMXd90X",
"rkxhFe_qAm",
"HkeWLII90Q",
"SJx-VMJcnm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"edit: the authors added several experiments (better evaluation of the predicted lambda, comparison with CodeSLAM), which address my concerns. I think the paper is much more convincing now. I am happy to increase my rating to clear accept.\n\nI also agree with the introduction of the Chi vector, and with the use of... | [
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_B1gabhRcYX",
"iclr_2019_B1gabhRcYX",
"rkxhFe_qAm",
"SJx-VMJcnm",
"r1x8O_Sw3X",
"r1x8O_Sw3X",
"SylPHRPDnQ",
"iclr_2019_B1gabhRcYX",
"iclr_2019_B1gabhRcYX"
] |
iclr_2019_B1l08oAct7 | Deterministic Variational Inference for Robust Bayesian Neural Networks | Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution to deal with uncertainty when learning from finite data. Among approaches to realize probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. With wide recognition of potential advantages, why is it that variational Bayes has seen very limited practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as controlling the variance of Monte Carlo gradient estimates. We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On the application of heteroscedastic regression we demonstrate good predictive performance over alternative approaches. | accepted-oral-papers | The manuscript proposes deterministic approximations for Bayesian neural networks as an alternative to the standard Monte-Carlo approach. The results suggest that the deterministic approximation can be more accurate than previous methods. Some explicit contributions include efficient moment estimates and empirical Bayes procedures.
The reviewers and ACs note weakness in the breadth and complexity of models evaluated, particularly with regards to ablation studies. This issue seems to have been addressed to the reviewer's satisfaction by the rebuttal. The updated manuscript also improves references to related prior work.
Overall, reviewers and AC agree that the general problem statement is timely and interesting, and well executed. We recommend acceptance. | train | [
"H1eOIrXYhm",
"HyeV1yHgAm",
"HJxcV9EgRX",
"rJex4YNeCQ",
"H1g0a1ir2Q",
"rJexO5ZynQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a new approach to perform deterministic variational inference for feed-forward BNN with specific nonlinear activation functions by approximating layerwise moments. Under certain conditions, the authors show that the proposed method achieves better performance than existing Monte Carlo variation... | [
7,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
3,
5
] | [
"iclr_2019_B1l08oAct7",
"rJexO5ZynQ",
"H1g0a1ir2Q",
"H1eOIrXYhm",
"iclr_2019_B1l08oAct7",
"iclr_2019_B1l08oAct7"
] |
iclr_2019_B1l6qiR5F7 | Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks | Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference. | accepted-oral-papers | This paper presents a substantially new way of introducing a syntax-oriented inductive bias into sentence-level models for NLP without explicitly injecting linguistic knowledge. This is a major topic of research in representation learning for NLP, so to see something genuinely original work well is significant. All three reviewers were impressed by the breadth of the experiments and by the results, and this will clearly be among the more ambitious papers presented at this conference.
In preparing a final version of this paper, though, I'd urge the authors to put serious further effort into the writing and presentation. All three reviewers had concerns about confusing or misleading passages, including the title and the discussion of the performance of tree-structured models so far. | train | [
"B1gIbtAKRm",
"BkgiNwT7h7",
"B1xh_mvdRX",
"HkgTokYQCX",
"Bkgp-SFxRQ",
"Skxnyrtx0m",
"SyeNaVYxCm",
"Bygp1Apv2m",
"H1ewsJDcjm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Regarding “LSTM’s performance consistently lags behind that of tree-based models”. \nOn sentence embedding tasks (e.g SNLI) and sequential labeling tasks (e.g sentiment analysis), TreeLSTM has shown better performance compared to vanilla LSTM. We’ve also updated the abstract according to the reviews.\n\nRegarding ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
9,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"B1xh_mvdRX",
"iclr_2019_B1l6qiR5F7",
"BkgiNwT7h7",
"SyeNaVYxCm",
"H1ewsJDcjm",
"BkgiNwT7h7",
"Bygp1Apv2m",
"iclr_2019_B1l6qiR5F7",
"iclr_2019_B1l6qiR5F7"
] |
iclr_2019_B1xsqj09Fm | Large Scale GAN Training for High Fidelity Natural Image Synthesis | Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick", allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Frechet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65. | accepted-oral-papers | The paper proposes a set of tricks leading to a new SOTA for sampling high resolution images. It is clearly written and the presented contribution will be of high interest for practitioners. | train | [
"SJl68_Hx37",
"SkgkCbBm0Q",
"Syxd9-HXAQ",
"r1gI_-SQAm",
"BJeJx-H7RQ",
"rJx99xSXAX",
"S1gaWerP2X",
"HklmZ1xqhm",
"SkgcCLXypQ",
"Hkgd30pT27",
"BJgFGkiT2Q",
"rJgBuz5a3Q",
"rJlaYkcTnX",
"BklSXtmL2X",
"Sklp_OFLjm",
"SyesNhmUjm",
"Hke0IlKSim",
"S1xw0OrXqm",
"SJlWU-HGcm",
"rkxcudXfq7"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"author",
"public",
"author",
"public",
"author",
"public",
"public",
"author",
"public",
"author",
"public",
"author",
"public... | [
"This paper present extensions of the Self-Attention Generative Adversarial Network approach SAGAN, leading to impressive images generations conditioned on imagenet classes. \nThe key components of the approach are :\n- increasing the batch size by a factor 8\n- augmenting the width of the networks by 50% \nThese f... | [
9,
-1,
-1,
-1,
-1,
-1,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_B1xsqj09Fm",
"SJl68_Hx37",
"r1gI_-SQAm",
"S1gaWerP2X",
"HklmZ1xqhm",
"iclr_2019_B1xsqj09Fm",
"iclr_2019_B1xsqj09Fm",
"iclr_2019_B1xsqj09Fm",
"iclr_2019_B1xsqj09Fm",
"iclr_2019_B1xsqj09Fm",
"rJgBuz5a3Q",
"rJlaYkcTnX",
"iclr_2019_B1xsqj09Fm",
"iclr_2019_B1xsqj09Fm",
"SyesNhmUjm"... |
iclr_2019_Bklr3j0cKX | Learning deep representations by mutual information estimation and maximization | This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals. | accepted-oral-papers | This paper proposes a new unsupervised learning approach based on maximizing the mutual information between the input and the representation. The results are strong across several image datasets. Essentially all of the reviewers' concerns were directly addressed in revisions of the paper, including additional experiments. The only weakness is that only image datasets were experimented with; however, the image-based experiments and comparisons are extensive. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation. | train | [
"rkgBfaIahQ",
"SkxJeJTX07",
"SJxEJLX2CQ",
"B1gEEtgAjm",
"ryxmvC2mC7",
"rJekflpQCX",
"SJxOvyaQCm",
"B1lMGywi6Q",
"SJxqzmcVp7",
"BkxA0Kt3nQ",
"rygYR8UMoQ",
"rklhZYMJom"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"This paper proposes Deep InfoMax (DIM), for learning representations by maximizing the mutual information between the input and a deep representation. By structuring the network and objectives to encode input locality or priors on the representation, DIM learns features that are useful for downstream tasks without... | [
7,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1
] | [
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2019_Bklr3j0cKX",
"rkgBfaIahQ",
"B1gEEtgAjm",
"iclr_2019_Bklr3j0cKX",
"iclr_2019_Bklr3j0cKX",
"B1gEEtgAjm",
"BkxA0Kt3nQ",
"rkgBfaIahQ",
"rkgBfaIahQ",
"iclr_2019_Bklr3j0cKX",
"rklhZYMJom",
"iclr_2019_Bklr3j0cKX"
] |
iclr_2019_ByeZ5jC5YQ | KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks | Feature selection is a pervasive problem. The discovery of relevant features can be as important for performing a particular task (such as to avoid overfitting in prediction) as it can be for understanding the underlying processes governing the true label (such as discovering relevant genetic factors for a disease). Machine learning driven feature selection can enable discovery from large, high-dimensional, non-linear observational datasets by creating a subset of features for experts to focus on. In order to use expert time most efficiently, we need a principled methodology capable of controlling the False Discovery Rate. In this work, we build on the promising Knockoff framework by developing a flexible knockoff generation model. We adapt the Generative Adversarial Networks framework to allow us to generate knockoffs with no assumptions on the feature distribution. Our model consists of 4 networks, a generator, a discriminator, a stability network and a power network. We demonstrate the capability of our model to perform feature selection, showing that it performs as well as the originally proposed knockoff generation model in the Gaussian setting and that it outperforms the original model in non-Gaussian settings, including on a real-world dataset. | accepted-oral-papers | The paper presents a novel strategy for statistically motivated feature selection i.e. aimed at controlling the false discovery rate. This is achieved by extending knockoffs to complex predictive models and complex distributions via (multiple) generative adversarial networks.
The reviewers and ACs noted weaknesses in the original submission that seem to have been fixed after the rebuttal period -- primarily related to missing experimental details. There was also some concern (as is common with inferential papers) that the claims are difficult to evaluate on real data, as the ground truth is unknown. To this end, the authors provide empirical results with simulated data that address this issue. There is also some concern that more complex predictive models are not evaluated.
Overall the reviewers and AC have a positive opinion of this paper and recommend acceptance. | train | [
"S1lT0N1cTQ",
"Hkx5lH19T7",
"B1xSOmkcpQ",
"B1gBqxk9pm",
"H1eAulfwpm",
"HklHlOUPnQ",
"HkeTawVrhX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nA5: This was indeed an oversight and we will correct the text. We will change “trivial” to “well-known” and hopefully that will make clearer our point. The asterisk will be removed as we do not feel it helps provide any clarity.\n\nA6: The citations found in Table 1 are in fact citations to relevant PubMed liter... | [
-1,
-1,
-1,
-1,
6,
10,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"HklHlOUPnQ",
"HklHlOUPnQ",
"HkeTawVrhX",
"H1eAulfwpm",
"iclr_2019_ByeZ5jC5YQ",
"iclr_2019_ByeZ5jC5YQ",
"iclr_2019_ByeZ5jC5YQ"
] |
iclr_2019_Byg3y3C9Km | Learning Protein Structure with a Differentiable Simulator | The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles. In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end-differentiable model of atomic protein structure given amino acid sequence information. We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures. | accepted-oral-papers | This paper presents a differentiable simulator for protein structure prediction that can be trained end-to-end. It makes several contributions to this research area. Particularly training a differentiable sampling simulator could be of interest to a wider community.
The main criticism concerns clarity for the machine learning community and the empirical comparison with state-of-the-art methods. The authors' feedback addressed a few confusions in the description, and I recommend that the authors further polish the text for better readability. R4 argues that a good comparison with the state-of-the-art method in this field would be difficult and that the comparison with an RNN baseline is rigorously carried out.
After discussion, all reviewers agree that this paper deserves publication at ICLR. | train | [
"SJlUZjqEim",
"B1lRoQMLTX",
"HylvOjIjCQ",
"r1etop_q0X",
"SkxK8JG507",
"HJlD-yM90X",
"rJeaJ0bc0X",
"Hyxj9pbc0Q",
"Skx-IPBvTQ",
"Hyl_nFNX6m",
"Hkgk3b343m",
"ryeM3Sm4im",
"ByeS8UUb5X",
"Ske1rVBZcX",
"B1eOXxVb9X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Post-rebuttal revision: The authors have adressed my concerns sufficiently. The paper still has issues with presentation, and weak comparisons to earlier methods. However, the field is currently rapidly developing, and comparing to earlier works is often difficult. I believe the Langevin-based prediction is a sign... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Byg3y3C9Km",
"iclr_2019_Byg3y3C9Km",
"r1etop_q0X",
"SkxK8JG507",
"SJlUZjqEim",
"Hkgk3b343m",
"B1lRoQMLTX",
"Skx-IPBvTQ",
"iclr_2019_Byg3y3C9Km",
"Hkgk3b343m",
"iclr_2019_Byg3y3C9Km",
"ByeS8UUb5X",
"Ske1rVBZcX",
"B1eOXxVb9X",
"iclr_2019_Byg3y3C9Km"
] |
iclr_2019_Bygh9j09KX | ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness | Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on 'Stylized-ImageNet', a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation. | accepted-oral-papers | This paper proposes a hypothesis about the kinds of visual information for which popular neural networks are most selective. It then proposes a series of empirical experiments on synthetically modified training sets to test this and related hypotheses. The main conclusions of the paper are contained in the title, and the presentation was consistently rated as very clear. As such, it is both interesting to a relatively wide audience and accessible.
Although the paper is comparatively limited in theoretical or algorithmic contribution, the empirical results and experimental design are of sufficient quality to inform design choices of future neural networks, and to better understand the reasons for their current behavior.
The reviewers were unanimous in their appreciation of the contributions, and all recommended that the paper be accepted.
| train | [
"BklS3JIeR7",
"r1lJ3Mke0X",
"S1eErXNKC7",
"rJlAnqlH2X",
"HylJK7nYAm",
"HJxSI4x527",
"BkeaMFAK3m",
"r1xmGpubRX",
"HygXM-Yb07",
"SygjL7t-0m",
"S1g75tNxTQ",
"B1x2vGlKhm",
"BJghHSvNh7",
"Bkea9yCbiQ"
] | [
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"Thanks for your interest in our work!\n\nAs mentioned in the paper, we used the PyTorch implementation from [1]. The degree of stylization (parameter \"alpha\" in the implementation) was kept at the default value of 1.0; it might be interesting to explore whether a lower coefficient still nudges a model towards a ... | [
-1,
-1,
-1,
8,
-1,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"r1lJ3Mke0X",
"iclr_2019_Bygh9j09KX",
"iclr_2019_Bygh9j09KX",
"iclr_2019_Bygh9j09KX",
"SygjL7t-0m",
"iclr_2019_Bygh9j09KX",
"iclr_2019_Bygh9j09KX",
"HJxSI4x527",
"BkeaMFAK3m",
"rJlAnqlH2X",
"HJxSI4x527",
"BJghHSvNh7",
"Bkea9yCbiQ",
"iclr_2019_Bygh9j09KX"
] |
iclr_2019_H1xSNiRcF7 | Smoothing the Geometry of Probabilistic Box Embeddings | There is growing interest in geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures, with natural applications to transitive relational data such as entailment graphs. Recent work has extended these ideas beyond deterministic hierarchies to probabilistically calibrated models, which enable learning from uncertain supervision and inferring soft-inclusions among concepts, while maintaining the geometric inductive bias of hierarchical embedding models. We build on the Box Lattice model of Vilnis et al. (2018), which showed promising results in modeling soft-inclusions through an overlapping hierarchy of sets, parameterized as high-dimensional hyperrectangles (boxes). However, the hard edges of the boxes present difficulties for standard gradient based optimization; that work employed a special surrogate function for the disjoint case, but we find this method to be fragile. In this work, we present a novel hierarchical embedding model, inspired by a relaxation of box embeddings into parameterized density functions using Gaussian convolutions over the boxes. Our approach provides an alternative surrogate to the original lattice measure that improves the robustness of optimization in the disjoint case, while also preserving the desirable properties with respect to the original lattice. We demonstrate increased or matching performance on WordNet hypernymy prediction, Flickr caption entailment, and a MovieLens-based market basket dataset. We show especially marked improvements in the case of sparse data, where many conditional probabilities should be low, and thus boxes should be nearly disjoint. | accepted-oral-papers | The manuscript presents a promising new algorithm for learning geometrically-inspired embeddings for learning hierarchies, partial orders, and lattice structures. 
The manuscript builds on the box lattice model, extending prior work by relaxing the box embeddings via Gaussian convolutions. This is shown to be particularly effective for non-overlapping boxes, where the previous method fails.
The primary weakness identified by reviewers was the writing, which was thought to lack some context and may be difficult for the non-domain expert to approach. This can be improved by including an additional general introduction. Otherwise, the manuscript was well written.
Overall, reviewers and AC agree that the general problem statement is timely and interesting, and well executed. In our opinion, this paper is a clear accept. | val | [
"H1xPEJOVsm",
"Bye2jf25Rm",
"rJeo2-39Rm",
"SylyqW25Cm",
"BkejJghqAQ",
"rJglCnOshm",
"SylZ79N5hm",
"HylK0EWqhX",
"r1lircRdnQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Post-rebuttal revision: All my concerns were adressed by the authors. This is a great paper and should be accepted.\n\n------\n\nThe paper presents smoothing probabilistic box embeddings with softplus functions, which make the optimization landscape continuous, while also presenting the theoretical background of t... | [
7,
-1,
-1,
-1,
-1,
8,
8,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"iclr_2019_H1xSNiRcF7",
"H1xPEJOVsm",
"SylyqW25Cm",
"SylZ79N5hm",
"rJglCnOshm",
"iclr_2019_H1xSNiRcF7",
"iclr_2019_H1xSNiRcF7",
"r1lircRdnQ",
"iclr_2019_H1xSNiRcF7"
] |
iclr_2019_HJx54i05tX | On Random Deep Weight-Tied Autoencoders: Exact Asymptotic Analysis, Phase Transitions, and Implications to Training | We study the behavior of weight-tied multilayer vanilla autoencoders under the assumption of random weights. Via an exact characterization in the limit of large dimensions, our analysis reveals interesting phase transition phenomena when the depth becomes large. This, in particular, provides quantitative answers and insights to three questions that were not yet fully understood in the literature. Firstly, we provide a precise answer on how the random deep weight-tied autoencoder model performs “approximate inference” as posed by Scellier et al. (2018), and its connection to reversibility considered by several theoretical studies. Secondly, we show that deep autoencoders display a higher degree of sensitivity to perturbations in the parameters, distinct from the shallow counterparts. Thirdly, we obtain insights on pitfalls in training initialization practice, and demonstrate experimentally that it is possible to train a deep autoencoder, even with the tanh activation and a depth as large as 200 layers, without resorting to techniques such as layer-wise pre-training or batch normalization. Our analysis is not specific to any depths or any Lipschitz activations, and our analytical techniques may have broader applicability. | accepted-oral-papers | This paper analyzes random autoencoders in the infinite-dimension limit under the assumption that the weights are tied in the encoder and decoder. In this limit, the paper is able to show that the random autoencoder transformation performs approximate inference on the data. The paper is able to obtain principled initialization strategies for training deep autoencoders using this analysis, showing the usefulness of their analysis. 
Even though the paper has limitations, such as studying only random models and characterizing them only in the limit, all the reviewers agree that the analysis is novel and gives insights on an interesting problem. | train | [
"rJeRWADXyE",
"B1lULFW9hm",
"S1lmapk71V",
"SJe6Z_6dAm",
"B1lLKO6dAX",
"rJehwu6dRm",
"BkeCavpORQ",
"rklt1LauAQ",
"SygkNeB92Q",
"Skeq2IQ7h7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reply. We are happy to know that.",
"This work applies infinite width limit random network framework (a.k.a. Mean field analysis) to study deep autoencoders when weights are tied between encoder and decoder. Random network analysis allows to have exact analysis of asymptotic behaviour where th... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
9,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"S1lmapk71V",
"iclr_2019_HJx54i05tX",
"BkeCavpORQ",
"B1lULFW9hm",
"Skeq2IQ7h7",
"Skeq2IQ7h7",
"B1lULFW9hm",
"SygkNeB92Q",
"iclr_2019_HJx54i05tX",
"iclr_2019_HJx54i05tX"
] |
iclr_2019_HkNDsiC9KQ | Meta-Learning Update Rules for Unsupervised Representation Learning | A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task. | accepted-oral-papers | The reviewers all agree that the idea is interesting, the writing clear and the experiments sufficient.
To improve the paper, the authors should consider better discussing their meta-objective and some of the algorithmic choices. | train | [
"SJeJvkj5hX",
"rJgRNO1Kp7",
"HJeD-O1Yam",
"r1eK0P1F67",
"Bkgckkbah7",
"r1eIOmju3m"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work brings a novel meta-learning approach that learns unsupervised learning rules for learning representations across different modalities, datasets, input permutation, and neural network architectures. The meta-objectives consist of few shot learning scores from several supervised tasks. The idea of using m... | [
8,
-1,
-1,
-1,
8,
8
] | [
3,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_HkNDsiC9KQ",
"r1eIOmju3m",
"SJeJvkj5hX",
"Bkgckkbah7",
"iclr_2019_HkNDsiC9KQ",
"iclr_2019_HkNDsiC9KQ"
] |
iclr_2019_HygBZnRctX | Transferring Knowledge across Learning Processes | In complex transfer learning scenarios new tasks might not be tightly linked to previous tasks. Approaches that transfer information contained only in the final parameters of a source model will therefore struggle. Instead, transfer learning at a higher level of abstraction is needed. We propose Leap, a framework that achieves this by transferring knowledge across learning processes. We associate each task with a manifold on which the training process travels from initialization to final parameters and construct a meta-learning objective that minimizes the expected length of this path. Our framework leverages only information obtained during training and can be computed on the fly at negligible cost. We demonstrate that our framework outperforms competing methods, both in meta-learning and transfer learning, on a set of computer vision tasks. Finally, we demonstrate that Leap can transfer knowledge across learning processes in demanding reinforcement learning environments (Atari) that involve millions of gradient steps. | accepted-oral-papers | This paper proposes an approach for learning to transfer knowledge across multiple tasks. It develops a principled approach for an important problem in meta-learning (short horizon bias). Nearly all of the reviewers' concerns were addressed throughout the discussion phase. The main weakness is that the experimental settings are somewhat non-standard (i.e. the Omniglot protocol in the paper is not at all standard). I would encourage the authors to mention the discrepancies from more standard protocols in the paper, to inform the reader. The results are strong nonetheless, evaluating in settings where typical meta-learning algorithms would struggle. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation. | val | [
"Hye_eUxDk4",
"BkemR3cMkE",
"rkezle-C0X",
"HJgKnIJY27",
"BkextEnBCQ",
"Byxo0m2rRm",
"Bkevhwv927",
"Skgx-yRE0Q",
"SyxtTCp4Am",
"SJgQTudgAX",
"BkeSD__xR7",
"HJeAEd_eR7",
"SylzzddgRm",
"HyeuqwdeCm",
"HkeqwPulCQ",
"H1xWiy_q2Q"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Dear reviewer,\n \nThank you for taking the time to consider our rebuttal and revised manuscript.\n \nYou raise good points and we will address these in a final version of the paper; we have added a sentence following the stabilizer describing how it affects the meta gradient, and to answer your question about the... | [
-1,
-1,
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"rkezle-C0X",
"H1xWiy_q2Q",
"HJeAEd_eR7",
"iclr_2019_HygBZnRctX",
"iclr_2019_HygBZnRctX",
"Skgx-yRE0Q",
"iclr_2019_HygBZnRctX",
"SyxtTCp4Am",
"HyeuqwdeCm",
"iclr_2019_HygBZnRctX",
"H1xWiy_q2Q",
"SylzzddgRm",
"HJgKnIJY27",
"HkeqwPulCQ",
"Bkevhwv927",
"iclr_2019_HygBZnRctX"
] |
iclr_2019_HylzTiC5Km | GENERATING HIGH FIDELITY IMAGES WITH SUBSCALE PIXEL NETWORKS AND MULTIDIMENSIONAL UPSCALING | The unconditional generation of high fidelity images is a longstanding benchmark for testing the performance of image decoders. Autoregressive image models have been able to generate small images unconditionally, but the extension of these methods to large images where fidelity can be more readily assessed has remained an open problem. Among the major challenges are the capacity to encode the vast previous context and the sheer difficulty of learning a distribution that preserves both global semantic coherence and exactness of detail. To address the former challenge, we propose the Subscale Pixel Network (SPN), a conditional decoder architecture that generates an image as a sequence of image slices of equal size. The SPN compactly captures image-wide spatial dependencies and requires a fraction of the memory and the computation. To address the latter challenge, we propose to use multidimensional upscaling to grow an image in both size and depth via intermediate stages corresponding to distinct SPNs. We evaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of ImageNet from size 32 to 128. We achieve state-of-the-art likelihood results in multiple settings, set up new benchmark results in previously unexplored settings and are able to generate very high fidelity large scale samples on the basis of both datasets. | accepted-oral-papers | All reviewers recommend acceptance, with two reviewers in agreement that the results represent a significant advance for autoregressive generative models. The AC concurs.
| val | [
"Bkx8AurWeN",
"rJgX0gv814",
"SJeStxgqRm",
"HJgrrcCO2X",
"H1gOyezcaX",
"H1llqKX56m",
"r1xv8KGcp7",
"H1lNzuZ9a7",
"Bkgl4P-cpm",
"B1eI3IJ5pQ",
"rkeURDYKpQ",
"S1gjDP-927",
"HkxN569T2X",
"Bke6VkQAY7",
"SJgqbZTaYQ"
] | [
"public",
"public",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"https://arxiv.org/abs/1109.4389 seems to be another relevant reference for AR models using multiple scales",
"Dear authors:\n\nThank you for your really interesting and impressive ideas. The idea is really amazing and experimental results are sound. Generating 256x256 imagenet images in the auto-regressive manne... | [
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
10,
7,
-1,
-1
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1
] | [
"Bke6VkQAY7",
"iclr_2019_HylzTiC5Km",
"iclr_2019_HylzTiC5Km",
"iclr_2019_HylzTiC5Km",
"HJgrrcCO2X",
"r1xv8KGcp7",
"rkeURDYKpQ",
"S1gjDP-927",
"S1gjDP-927",
"HkxN569T2X",
"S1gjDP-927",
"iclr_2019_HylzTiC5Km",
"iclr_2019_HylzTiC5Km",
"SJgqbZTaYQ",
"iclr_2019_HylzTiC5Km"
] |
iclr_2019_S1x4ghC9tQ | Temporal Difference Variational Auto-Encoder | To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning. | accepted-oral-papers | The reviewers agree that this is a novel paper with a convincing evaluation. | train | [
"BJxUrnv_AQ",
"SyxSfnP_0Q",
"BJgAOovOC7",
"rkeaQHUJam",
"rJeR1S-ThQ",
"BkgnEnawnm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and comments. We clarified our intuitive derivation of the loss in section A. It is indeed difficult to compare the jumpy TD-VAE model to other models, as there is little work that studies such models. We updated the appendix to explain how a model similar to jumpy TD-VAE provides an appr... | [
-1,
-1,
-1,
8,
9,
7
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"BkgnEnawnm",
"rJeR1S-ThQ",
"rkeaQHUJam",
"iclr_2019_S1x4ghC9tQ",
"iclr_2019_S1x4ghC9tQ",
"iclr_2019_S1x4ghC9tQ"
] |
iclr_2019_S1xq3oR5tQ | A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs | The vertebrate visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex (V1), typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and for the first time we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple downstream cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortical networks. This result predicts that the retinas of small vertebrates (e.g. salamander, frog) should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. 
These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system. | accepted-oral-papers | The paper advocates neuroscience-based V1 models to adapt CNNs. The results of the simulations are convincing from a neuroscience-perspective. The reviewers equivocally recommend publication. | train | [
"r1eiULeC3Q",
"H1gQhLTipX",
"rkls0Uasp7",
"BJx4_OpjpQ",
"B1xQQuaiam",
"rkxrOwpjpQ",
"HJgMwJe5hm",
"Hyg8wO1rhQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"EDIT: On the basis of revisions made to the paper, which significantly augment the results, the authors note: \"the call for papers explicitly mentions applications in neuroscience as within the scope of the conference\" which clarifies my other concern. For both of these reasons, I have changed my prior rating.\n... | [
8,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2019_S1xq3oR5tQ",
"r1eiULeC3Q",
"H1gQhLTipX",
"B1xQQuaiam",
"Hyg8wO1rhQ",
"HJgMwJe5hm",
"iclr_2019_S1xq3oR5tQ",
"iclr_2019_S1xq3oR5tQ"
] |
iclr_2019_SkVhlh09tX | Pay Less Attention with Lightweight and Dynamic Convolutions | Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU. | accepted-oral-papers | Very solid work, recognized by all reviewers as worthy of acceptance. Additional readers also commented and there is interest in the open source implementation that the authors promise to provide. | train | [
"SyexV29PJN",
"H1glXkLRhm",
"SkgRkkYAAQ",
"BkxEwCe6pQ",
"BJxVasJp6m",
"B1lA_ikT6m",
"r1gUWo16Tm",
"Bkx6Dmk6Tm",
"SygyHzyaTX",
"BJlW-fk667",
"BklnHDDcT7",
"S1xedHZ_pX",
"rygh1-1upQ",
"Byg-bBXZaX",
"H1gJoj8C3X",
"Byx0nlaKnX",
"Hkgbd38ChQ"
] | [
"public",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I found your work very interesting, but there are some recent works that are closely related to your work, which take a sentence as input and generate convolutional kernels that are further applied on the sentence, but with a different granularity. I think those works are definitely worth comparing to.\n\nmissing ... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1
] | [
"iclr_2019_SkVhlh09tX",
"iclr_2019_SkVhlh09tX",
"Bkx6Dmk6Tm",
"Byg-bBXZaX",
"Byx0nlaKnX",
"H1glXkLRhm",
"H1gJoj8C3X",
"rygh1-1upQ",
"S1xedHZ_pX",
"BklnHDDcT7",
"iclr_2019_SkVhlh09tX",
"iclr_2019_SkVhlh09tX",
"iclr_2019_SkVhlh09tX",
"iclr_2019_SkVhlh09tX",
"iclr_2019_SkVhlh09tX",
"iclr_... |
iclr_2019_r1lYRjC9F7 | Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset | Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales. Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments. Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude (~0.1 ms to ~100 s), a process we call Wave2Midi2Wave. This large advance in the state of the art is enabled by our release of the new MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms. The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music. | accepted-oral-papers | All reviewers agree that the presented audio data augmentation is very interesting, well presented, and clearly advancing the state of the art in the field. The authors’ rebuttal clarified the remaining questions by the reviewers. All reviewers recommend strong acceptance (oral presentation) at ICLR. I would like to recommend this paper for oral presentation due to a number of reasons including the importance of the problem addressed (data augmentation is the only way forward in cases where we do not have enough of training data), the novelty and innovativeness of the model, and the clarity of the paper. The work will be of interest to the widest audience beyond ICLR. | test | [
"SklV7Ix9aX",
"rJllkIgcTQ",
"H1eFiHgqTQ",
"rklS4Hl5am",
"BJl9uwaQ67",
"B1efz6dgpX",
"S1gnFxZjnX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and comments.\n\n* Eq (1) this is really the joint distribution between audio and notes, not the marginal of audio\n\nThank you for catching the mistake. We have updated the equation to include the marginalizing integral through the expectation over notes: P(audio) = E_{notes} [ P(audio|n... | [
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
5,
2,
4
] | [
"S1gnFxZjnX",
"B1efz6dgpX",
"BJl9uwaQ67",
"iclr_2019_r1lYRjC9F7",
"iclr_2019_r1lYRjC9F7",
"iclr_2019_r1lYRjC9F7",
"iclr_2019_r1lYRjC9F7"
] |
iclr_2019_r1xlvi0qYm | Learning to Remember More with Less Memorization | Memory-augmented neural networks consisting of a neural controller and an external memory have shown potentials in long-term sequential learning. Current RAM-like memory models maintain memory accessing every timesteps, thus they do not effectively leverage the short-term memory held in the controller. We hypothesize that this scheme of writing is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes the bound. The proposed solution dubbed Uniform Writing is proved to be optimal under the assumption of equal timestep contributions. To relax this assumption, we introduce modifications to the original solution, resulting in a solution termed Cached Uniform Writing. This method aims to balance between maximizing memorization and forgetting via overwriting mechanisms. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, claiming the state-of-the-arts in various sequential modeling tasks. | accepted-oral-papers | Well-written paper that motivates through theoretical analysis new memory writing methods in memory augmented neural networks. Extensive experimental analysis support and demonstrate the advantages of the new solutions over other recurrent architectures.
Reviewers suggested extension and clarification of the analysis presented in the paper, for example, for different memory sizes. The paper was revised accordingly. Another important suggestion was considering ACT as a baseline. Authors explained clearly why it wasn't considered as a baseline, and updated the paper to include references and explanations in the paper as well. | train | [
"HyxFDRZuCX",
"B1lqiDbOAQ",
"Byl6cSbdC7",
"SygH6Dzjpm",
"BJxv6enuhm",
"SJeG8ByF6m",
"Bklj1SJKa7",
"rJeuIPUjnm"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"Thanks for your responses and paper revisions. I still agree this is a nicely conducted piece of research and will retain my score of 8.\n\nRe. (1) & (2) I see, in most cases when people perform the copy task they train on programatically generated sequences that essentially cannot be overfit to (e.g. because the ... | [
-1,
-1,
-1,
7,
7,
-1,
-1,
8
] | [
-1,
-1,
-1,
3,
4,
-1,
-1,
4
] | [
"SJeG8ByF6m",
"iclr_2019_r1xlvi0qYm",
"SygH6Dzjpm",
"iclr_2019_r1xlvi0qYm",
"iclr_2019_r1xlvi0qYm",
"rJeuIPUjnm",
"BJxv6enuhm",
"iclr_2019_r1xlvi0qYm"
] |
iclr_2019_rJEjjoR9K7 | Learning Robust Representations by Projecting Superficial Statistics Out | Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to the texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's.
We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training. | accepted-oral-papers | The paper presents a new approach for domain generalization whereby the original supervised model is trained with an explicit objective to ignore so called superficial statistics present in the training set but which may not be present in future test sets. The paper proposes using a differentiable variant of gray-level co-occurrence matrix to capture the textural information and then experiments with two techniques for learning feature invariance. All reviewers agree the approach is novel, unique, and potentially high impact to the community.
The main issues center around reproducibility as well as the intended scope of problems this approach addresses. The authors have offered to include further discussions in the final version to address these points. Doing so will strengthen the paper and aid the community in building upon this work. | train | [
"S1xWkt7QCX",
"BJet3_XXRQ",
"S1gEYuQXC7",
"SJxqbOQmRX",
"rJxbWynhhm",
"H1ehlWduhm",
"HJee7cfwh7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the strong positive assessment of our work. We’re glad that you appreciated the originality of our approach, the value of our new datasets, and the quality of our exposition. We will continue to improve the draft in the camera-ready version.",
"Thanks for a detailed review. We are grateful both for... | [
-1,
-1,
-1,
-1,
7,
7,
9
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"HJee7cfwh7",
"H1ehlWduhm",
"rJxbWynhhm",
"iclr_2019_rJEjjoR9K7",
"iclr_2019_rJEjjoR9K7",
"iclr_2019_rJEjjoR9K7",
"iclr_2019_rJEjjoR9K7"
] |
iclr_2019_rJVorjCcKQ | Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware | As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy for outsourced ML computations. A pragmatic solution comes from Trusted Execution Environments (TEEs), which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in TEEs by efficiently partitioning DNN computations between trusted and untrusted devices. Building upon an efficient outsourcing scheme for matrix multiplication, we propose Slalom, a framework that securely delegates execution of all linear layers in a DNN from a TEE (e.g., Intel SGX or Sanctum) to a faster, yet untrusted, co-located processor. We evaluate Slalom by running DNNs in an Intel SGX enclave, which selectively delegates work to an untrusted GPU. For canonical DNNs (VGG16, MobileNet and ResNet variants) we obtain 6x to 20x increases in throughput for verifiable inference, and 4x to 11x for verifiable and private inference. | accepted-oral-papers | The authors propose a new method of securely evaluating neural networks.
The reviewers were unanimous in their vote to accept. The paper is very well written, the idea is relatively simple, and so it is likely that this would make a nice presentation. | train | [
"Hkgpl17URQ",
"BJl-sAfICQ",
"Hyx47Cf80Q",
"SylJsazLRm",
"rkgtvmXc2m",
"r1eyl4tHhQ",
"HJg2YxyGoQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In response to the below reviews, we have made the following main changes to our paper:\n\n- As suggested by the second reviewer, we have moved some of the content from the Appendix back to the main body. These include the microbenchmark results, as well as a discussion of the challenges in extending Slalom to DNN... | [
-1,
-1,
-1,
-1,
7,
7,
9
] | [
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"iclr_2019_rJVorjCcKQ",
"HJg2YxyGoQ",
"r1eyl4tHhQ",
"rkgtvmXc2m",
"iclr_2019_rJVorjCcKQ",
"iclr_2019_rJVorjCcKQ",
"iclr_2019_rJVorjCcKQ"
] |
iclr_2019_rJgMlhRctm | The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogical to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the searching over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval. | accepted-oral-papers | Strong paper in an interesting new direction.
More work should be done in this area. | train | [
"r1g6tF8F3X",
"BylnXIUC0Q",
"HJeHGTF5nX",
"Bkx7KxOjpX",
"r1xJIbv5A7",
"ryxKPxw9Rm",
"rJxoZ-_ipQ",
"SJgbnnCgRX",
"rJl4slOsa7",
"SyxhAx_jpm",
"rJx2mlOjTQ",
"Sklo1V_znQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"To achieve the state-of-the-art on the CLEVR and the variations of this, the authors propose a method to use object-based visual representations and a differentiable quasi-symbolic executor. Since the semantic parser for a question input is not differentiable, they use REINFORCE algorithm and a technique to reduce... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_rJgMlhRctm",
"r1xJIbv5A7",
"iclr_2019_rJgMlhRctm",
"r1g6tF8F3X",
"SJgbnnCgRX",
"iclr_2019_rJgMlhRctm",
"iclr_2019_rJgMlhRctm",
"Bkx7KxOjpX",
"r1g6tF8F3X",
"Sklo1V_znQ",
"HJeHGTF5nX",
"iclr_2019_rJgMlhRctm"
] |
iclr_2019_rJl-b3RcF7 | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy. | accepted-oral-papers | The authors posit and investigate a hypothesis -- the “lottery ticket hypothesis” -- which aims to explain why overparameterized neural networks are easier to train than their sparse counterparts. Under this hypothesis, randomly initialized dense networks are easier to train because they contain a larger number of “winning tickets”.
This paper received very favorable reviews, though there were some notable points of concern. The reviewers and the AC appreciated the detailed and careful experimentation and analysis. However, there were a couple of points of concern raised by the reviewers: 1) the lack of experiments conducted on large-scale tasks and models, and 2) the lack of a clear application of the idea beyond what has been proposed previously.
Overall, this is a very interesting paper with convincing experimental validation and as such the AC is happy to accept the work. | train | [
"S1xmvZRayE",
"Hkelbn3I14",
"r1l-QxArJ4",
"ryggsG-VkV",
"HJeDy85anQ",
"HygUFDOTAX",
"Bkg5UpU52m",
"ryemP68v2m",
"SkloQowaAm",
"BygUAeW5R7",
"BJeHlbWcCX",
"SylwJEW9Cm",
"H1gQ-4Z5CQ",
"ryg2lfbcRQ",
"SygdQnlqRm",
"r1xZ0Ag5Cm",
"BylPD1gKT7"
] | [
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"\n\nWe have an update with several further experiments that examine the relationship between SNIP and our paper.\n\nWe have simplified our pruning mechanism to prune weights globally (instead of per-layer) with otherwise the same pruning technique. For our three main networks (MNIST, Resnet-18, and VGG-19), we fin... | [
-1,
-1,
-1,
-1,
5,
-1,
9,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Hkelbn3I14",
"r1l-QxArJ4",
"ryggsG-VkV",
"iclr_2019_rJl-b3RcF7",
"iclr_2019_rJl-b3RcF7",
"ryg2lfbcRQ",
"iclr_2019_rJl-b3RcF7",
"iclr_2019_rJl-b3RcF7",
"SylwJEW9Cm",
"HJeDy85anQ",
"HJeDy85anQ",
"ryemP68v2m",
"ryemP68v2m",
"Bkg5UpU52m",
"iclr_2019_rJl-b3RcF7",
"SygdQnlqRm",
"HJeDy85an... |
iclr_2019_rJxgknCcK7 | FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models | A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be used if the transformation is specified by an ordinary differential equation. In this paper, we use Hutchinson’s trace estimator to give a scalable unbiased estimate of the log-density. The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures. We demonstrate our approach on high-dimensional density estimation, image generation, and variational inference, achieving the state-of-the-art among exact likelihood methods with efficient sampling. | accepted-oral-papers | This paper proposes the use of recently propose neural ODEs in a flow-based generative model.
As the paper shows, a big advantage of a neural ODE in a generative flow is that an unbiased estimator of the log-determinant of the mapping is straightforward to construct. Another advantage, compared to earlier published flows, is that all variables can be updated in parallel, as the method does not require "chopping up" the variables into blocks. The paper shows significant improvements on several benchmarks, and seems to be a promising avenue for further research.
A disadvantage of the method is that the authors were unable to show that the method could produce results that were similar to (or better than) the SOTA on the more challenging benchmark of CIFAR-10. Another downside is its computational cost. Since neural ODEs are relatively new, however, these problems might be resolved with further refinements to the method. | train | [
"Sylsscjbe4",
"ryxeNAsF14",
"r1xI56j2R7",
"HklkaJG92m",
"Hylm9LQd3X",
"H1xNi4_oT7",
"rJglGJOi6m",
"rJlBrCwsp7",
"rklhnnPj6X",
"ryeYYxV9n7",
"Byg3817r5Q"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"public"
] | [
"Thank for you pointing this out. We will update the camera-ready version with the correct results if our paper is accepted. ",
"It looks like the authors are not reporting the most up-to-date likelihoods using TANs (as per the Table 1 in official ICML paper http://proceedings.mlr.press/v80/oliva18a.html ). Hence... | [
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
7,
-1
] | [
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
4,
-1
] | [
"ryxeNAsF14",
"iclr_2019_rJxgknCcK7",
"rklhnnPj6X",
"iclr_2019_rJxgknCcK7",
"iclr_2019_rJxgknCcK7",
"Byg3817r5Q",
"Hylm9LQd3X",
"HklkaJG92m",
"ryeYYxV9n7",
"iclr_2019_rJxgknCcK7",
"iclr_2019_rJxgknCcK7"
] |
iclr_2019_ryGs6iA5Km | How Powerful are Graph Neural Networks? | Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance. | accepted-oral-papers | Graph neural networks are an increasingly popular topic of research in machine learning, and this paper does a good job of studying the representational power of some newly proposed variants. The framing of the problem in terms of the WL test, and the proposal of the GIN architecture is a valuable contribution. Through the reviews and subsequent discussion, it looks like the issues surrounding Theorem 3 have been resolved, and therefore all of the reviewers now agree that this paper should be accepted. 
There may be some interesting followup work based on studying depth, as pointed out by reviewer 1, but this may not be an issue in GIN and is regardless a topic for future research. | train | [
"rkl2Q1Qi6X",
"B1xYlDERRX",
"SJeYuLH41V",
"HJgMSgUqhQ",
"BygALwN0CX",
"B1et5yXJ14",
"H1gRfJQJy4",
"rkxt80KARX",
"rJxY7atRCX",
"BkgrFw3iRQ",
"S1egpyLCAX",
"H1xW3wVA0X",
"rkeW9FDnnQ",
"S1ljyieA0m",
"BJgIGNTP27",
"BkxHNhu607",
"rJx9PavpA7",
"BJeRhEhiRX",
"ryeaD73iRX",
"SJxgqg2N0m"... | [
"public",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
... | [
"I do not think that Equation (4.1) is as powerful as the 1-WL. Consider the two labeled graphs \n\nr -- g\n| |\ng -- r\n\nand \n\nr -- g\n| |\nr -- g\n\nwith node color \"g\" and \"r\". Clearly, the 1-WL can distinguish between these two graphs. Howeover, when using (4.1) with an 1-hot encoding of the labels... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-... | [
"iclr_2019_ryGs6iA5Km",
"S1ljyieA0m",
"S1ljyieA0m",
"iclr_2019_ryGs6iA5Km",
"rJx9PavpA7",
"rJxY7atRCX",
"rkxt80KARX",
"BygALwN0CX",
"H1xW3wVA0X",
"BJeRhEhiRX",
"BkgrFw3iRQ",
"BkxHNhu607",
"iclr_2019_ryGs6iA5Km",
"H1xLhQUEA7",
"iclr_2019_ryGs6iA5Km",
"Byg6WVL4Rm",
"B1l2k48VCQ",
"rye... |
iclr_2019_B1G5ViAqFm | Convolutional Neural Networks on Non-uniform Geometrical Signals Using Euclidean Spectral Transformation | Convolutional Neural Networks (CNN) have been successful in processing data signals that are uniformly sampled in the spatial domain (e.g., images). However, most data signals do not natively exist on a grid, and in the process of being sampled onto a uniform physical grid suffer significant aliasing error and information loss. Moreover, signals can exist in different topological structures as, for example, points, lines, surfaces and volumes. It has been challenging to analyze signals with mixed topologies (for example, point cloud with surface mesh). To this end, we develop mathematical formulations for Non-Uniform Fourier Transforms (NUFT) to directly, and optimally, sample nonuniform data signals of different topologies defined on a simplex mesh into the spectral domain with no spatial sampling error. The spectral transform is performed in the Euclidean space, which removes the translation ambiguity from works on the graph spectrum. Our representation has four distinct advantages: (1) the process causes no spatial sampling error during initial sampling, (2) the generality of this approach provides a unified framework for using CNNs to analyze signals of mixed topologies, (3) it allows us to leverage state-of-the-art backbone CNN architectures for effective learning without having to design a particular architecture for a particular data structure in an ad-hoc fashion, and (4) the representation allows weighted meshes where each element has a different weight (i.e., texture) indicating local properties. We achieve good results on-par with state-of-the-art for 3D shape retrieval task, and new state-of-the-art for point cloud to surface reconstruction task. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The paper tackles an interesting and challenging problem with a novel approach.
- The method gives improved performance for the surface reconstruction task.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
The paper
- lacks clarity in some areas
- doesn't sufficiently explain the trade-offs between performing all computations in the spectral domain vs the spatial domain.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
Reviewers had a divergent set of concerns. After the rebuttal, the remaining concerns were:
- the significance of the performance improvements. The AC believes that the quantitative and qualitative results in Table 3 and Figures 5 and 6 show significant improvements with respect to two recent methods.
- a feeling that the proposed method could have been more efficient if more computations were done in the spectral domain. This is a fair point but should be considered as suggestions for improvement and future work rather than grounds for rejection in the AC's view.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers did not reach a consensus. The final decision is aligned with the more positive reviewer, AR1, because AR1 was more confident in his/her review and because of the additional reasons stated in the previous section.
| test | [
"HyxCDlbK0Q",
"r1gpWlWFC7",
"H1xYOJWKAQ",
"HJlzBRwC2m",
"BJxMFSeonm",
"SJlRpk09hQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your review and feedback, and we hope to be able to address your concerns below.\n\nOur paper addresses the issue of handling irregular domains, with possibly mixed topologies in the context of deep learning, and propose an optimal spectral sampling scheme for constructing a volumetric representat... | [
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"SJlRpk09hQ",
"BJxMFSeonm",
"HJlzBRwC2m",
"iclr_2019_B1G5ViAqFm",
"iclr_2019_B1G5ViAqFm",
"iclr_2019_B1G5ViAqFm"
] |
iclr_2019_B1G9doA9F7 | Augmented Cyclic Adversarial Learning for Low Resource Domain Adaptation | Training a model to perform a task typically requires a large amount of data from the domains in which the task will be applied.
However, it is often the case that data are abundant in some domains but scarce in others. Domain adaptation deals with the challenge of adapting a model trained on a data-rich source domain to perform well in a data-poor target domain. In general, this requires learning plausible mappings between domains. CycleGAN is a powerful framework that efficiently learns to map inputs from one domain to another using adversarial training and a cycle-consistency constraint. However, the conventional approach of enforcing cycle-consistency via reconstruction may be overly restrictive in cases where one or more domains have limited training data. In this paper, we propose an augmented cyclic adversarial learning model that enforces the cycle-consistency constraint via an external task-specific model, which encourages the preservation of task-relevant content as opposed to exact reconstruction. We explore digit classification in supervised, semi-supervised, and unsupervised low-resource settings, as well as the high-resource unsupervised setting. In the low-resource supervised setting, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, which outperforms unsupervised domain adaptation methods that require a high-resource unlabeled target domain. Moreover, using only a few unsupervised target data, our approach can still outperform many high-resource unsupervised models. Our model also outperforms existing methods on USPS to MNIST and synthetic digits to SVHN for high-resource unsupervised adaptation. In the speech domain, we similarly adopt a speech recognition model from each domain as the task-specific model. Our approach improves the absolute performance of speech recognition by 2% for female speakers in the TIMIT dataset, where the majority of training samples are from male voices.
| accepted-poster-papers | The authors propose a method for low-resource domain adaptation, where the number of examples available in the target domain is limited. The proposed method modifies the basic approach in a CycleGAN by augmenting it with a “content” (task-specific) loss instead of the standard reconstruction error. The authors also demonstrate experimentally that it is important to enforce the loss in both directions (target → source and source → target). Experiments are conducted in both supervised and unsupervised settings.
The main concern expressed by the reviewers relates to the novelty of the approach, since it is a relatively straightforward extension of CycleGAN/CyCADA, but in the view of a majority of reviewers the work serves as a useful contribution: a practical method for developing systems in low-resource conditions where it is feasible to label a few new instances. Although the reviewers were not unanimous in their recommendations, on balance, in the view of the AC, the work is a worthwhile contribution with clear and detailed experiments in the revised version.
| train | [
"rkgVnNv52Q",
"HJxLgpUNJE",
"HJxHpfhH2m",
"SJgRrjSf14",
"Byez806lJE",
"Syl3KFjyJN",
"BylYeKiykE",
"HyeIY_i1y4",
"rkxI_hF3RX",
"HklKo3NW6Q",
"B1xgX-Y0am",
"BJxNk5FwCQ",
"H1gOeZtCpQ",
"ByxZHgYCam",
"SylfAxKCpQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors propose an extension of cycle-consistent adversarial adaptation methods in order to tackle domain adaptation in settings where a limited amount of supervised target data is available (though they also validate their model in the standard unsupervised setting as well). The method appears to be a natural... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_B1G9doA9F7",
"H1gOeZtCpQ",
"iclr_2019_B1G9doA9F7",
"Byez806lJE",
"HyeIY_i1y4",
"ByxZHgYCam",
"rkxI_hF3RX",
"rkxI_hF3RX",
"BJxNk5FwCQ",
"iclr_2019_B1G9doA9F7",
"HJxHpfhH2m",
"B1xgX-Y0am",
"rkgVnNv52Q",
"iclr_2019_B1G9doA9F7",
"HklKo3NW6Q"
] |
iclr_2019_B1GAUs0cKQ | Variance Networks: When Expectation Does Not Meet Your Expectations | Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layer. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. It means that each object is represented by a zero-mean distribution in the space of the activations. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than more flexible conventional parameterizations where the mean is being learned. | accepted-poster-papers | The authors describe a very counterintuitive type of layer: one with zero-mean Gaussian weights. They show that various Bayesian deep learning algorithms tend to converge to layers of this variety. This work represents a step forward in our understanding of Bayesian deep learning methods and may potentially shed light on how to improve those methods. | val | [
"SyeR8Podp7",
"ByloevouTQ",
"Hkx_V8sdam",
"SkxrImCK2Q",
"r1e0vhrKhX",
"HygV2aqO3Q"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and your questions!\n\n> (1) My main concern is verification. Most of the comparisons are between variance layer (zero-mean) and conventional binary dropout, while the main argument of the paper is that it’s better to set Gaussian posterior’s mean to zero. So in all the experiments the pa... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"HygV2aqO3Q",
"r1e0vhrKhX",
"SkxrImCK2Q",
"iclr_2019_B1GAUs0cKQ",
"iclr_2019_B1GAUs0cKQ",
"iclr_2019_B1GAUs0cKQ"
] |
iclr_2019_B1GMDsR5tm | Initialized Equilibrium Propagation for Backprop-Free Training | Deep neural networks are almost universally trained with reverse-mode automatic differentiation (a.k.a. backpropagation). Biological networks, on the other hand, appear to lack any mechanism for sending gradients back to their input neurons, and thus cannot be learning in this way. In response to this, Scellier & Bengio (2017) proposed Equilibrium Propagation - a method for gradient-based training of neural networks which uses only local learning rules and, crucially, does not rely on neurons having a mechanism for back-propagating an error gradient. Equilibrium propagation, however, has a major practical limitation: inference involves doing an iterative optimization of neural activations to find a fixed-point, and the number of steps required to closely approximate this fixed point scales poorly with the depth of the network. In response to this problem, we propose Initialized Equilibrium Propagation, which trains a feedforward network to initialize the iterative inference procedure for Equilibrium propagation. This feed-forward network learns to approximate the state of the fixed-point using a local learning rule. After training, we can simply use this initializing network for inference, resulting in a learned feedforward network. Our experiments show that this network appears to work as well or better than the original version of Equilibrium propagation. This shows how we might go about training deep networks without using backpropagation. | accepted-poster-papers | The paper investigates a novel initialisation method to improve Equilibrium Propagation. The results are convincing, though the reviewers retained some minor concerns.
One remaining issue is the biological plausibility of the approach. Nonetheless, publication is recommended. | train | [
"rJxVRXrulV",
"ryeI_q3v3Q",
"Hyl4OzIIJN",
"Syx_vW0HkE",
"rkeWkpvBk4",
"r1gcB3JH14",
"H1l2Nbf50Q",
"B1gzOC6Wa7",
"HkxVPjhFAQ",
"rkenvhdtA7",
"rkxaD5dY07",
"rye_pK_YAm",
"H1xeru_K07",
"r1eOy7xKhQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Have bumped score to 7, in anticipation of the final improvements from this thread being included in the camera ready.",
"This paper presents an improvement on the local/derivative-free learning algorithm equilibrium propagation. Specifically, it trains a feedforward network to initialize the iterative optimizat... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"Hyl4OzIIJN",
"iclr_2019_B1GMDsR5tm",
"Syx_vW0HkE",
"rkeWkpvBk4",
"r1gcB3JH14",
"rkenvhdtA7",
"HkxVPjhFAQ",
"iclr_2019_B1GMDsR5tm",
"rye_pK_YAm",
"ryeI_q3v3Q",
"r1eOy7xKhQ",
"B1gzOC6Wa7",
"iclr_2019_B1GMDsR5tm",
"iclr_2019_B1GMDsR5tm"
] |
iclr_2019_B1MXz20cYQ | Explaining Image Classifiers by Counterfactual Generation | When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren't. We can sample plausible image in-fills by conditioning a generative model on the rest of the image. We then optimize to find the image regions that most change the classifier's decision after in-fill. Our approach contrasts with ad-hoc in-filling approaches, such as blurring or injecting noise, which generate inputs far from the data distribution, and ignore informative relationships between different parts of the image. Our method produces more compact and relevant saliency maps, with fewer artifacts compared to previous methods. | accepted-poster-papers | Important problem (explainable AI); sensible approach, one of the first to propose a method for the counter-factual question (if this part of the input were different, what would the network have predicted). Initially there were some concerns by the reviewers but after the author response and reviewer discussion, all three recommend acceptance (not all of them updated their final scores in the system). | train | [
"rJxUUoj8g4",
"S1edEowWgN",
"rJxRJWdayV",
"H1x2bX-p1V",
"B1eKk_WRaX",
"HkguvdW0a7",
"HJlz9u-RaQ",
"H1g195-CT7",
"BklNCIGhh7",
"SklSwVYo37",
"SkephArKjX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As we mentioned above, Fan et al. is orthogonal to our work. We highly recommend you to reread our manuscript to understand the scope of our work.",
"Fan et al. is used in saliency prediction and seems to achieve good accuracy as reported in other papers:\nhttps://openreview.net/forum?id=BJxbYoC9FQ\n\n",
"Good... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"S1edEowWgN",
"HkguvdW0a7",
"H1x2bX-p1V",
"B1eKk_WRaX",
"iclr_2019_B1MXz20cYQ",
"BklNCIGhh7",
"SklSwVYo37",
"SkephArKjX",
"iclr_2019_B1MXz20cYQ",
"iclr_2019_B1MXz20cYQ",
"iclr_2019_B1MXz20cYQ"
] |
iclr_2019_B1VZqjAcYX | SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY | Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task. | accepted-poster-papers | This paper proposes a criterion (SNIP) to prune neural networks before training. The pro is that SNIP can find the architecturally important parameters in the network without full training. The con is that SNIP is only evaluated on small datasets (MNIST, CIFAR-10, Tiny-ImageNet), and it is uncertain whether the same heuristic works on large-scale datasets. Small datasets can always achieve high pruning ratios, so evaluation on ImageNet is quite important for pruning work. The reviewers have a consensus on accept. The authors are recommended to compare with previous work [1][2] to make the paper more convincing.
[1] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. NIPS, 2015.
[2] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. NIPS, 2016. | train | [
"HkgD860RJN",
"BJlFsBZA14",
"HkeMgJ6LyN",
"r1ecBMxwTm",
"rygEepnLJV",
"Skx0ADeVkE",
"rye-Cr95Am",
"Bygrtyec3X",
"HylRy09FCX",
"SygWrMoHRX",
"HkeeUt2G0X",
"rylC4nnfCQ",
"rJgNZnhzCX",
"S1gwW9hGRX",
"HkxRgu-z0X",
"BylnY5EPTX",
"SJlJlW6g6m",
"Hkx-JZ3vh7",
"SJxr7w6B3Q",
"H1e1b9pz3Q"... | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"official_reviewer",
"author",
"public",
... | [
"\nWe believe that the comparison is misleading since [1] and SNIP focus on different (orthogonal) aspects of network pruning, and we elaborate this below.\n- SNIP focuses on finding a subnetwork at single-shot with a mini-batch of data, and shows that the subnetwork can be trained in the standard way. There are no... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"BJlFsBZA14",
"rygEepnLJV",
"rygEepnLJV",
"iclr_2019_B1VZqjAcYX",
"Skx0ADeVkE",
"rJgNZnhzCX",
"HylRy09FCX",
"iclr_2019_B1VZqjAcYX",
"SygWrMoHRX",
"S1gwW9hGRX",
"Hkx-JZ3vh7",
"r1ecBMxwTm",
"r1ecBMxwTm",
"BylnY5EPTX",
"iclr_2019_B1VZqjAcYX",
"SJlJlW6g6m",
"Bygrtyec3X",
"iclr_2019_B1V... |
iclr_2019_B1e0X3C9tQ | Diagnosing and Enhancing VAE Models | Although variational autoencoders (VAEs) represent a widely influential deep generative model, many aspects of the underlying energy function remain poorly understood. In particular, it is commonly believed that Gaussian encoder/decoder assumptions reduce the effectiveness of VAEs in generating realistic samples. In this regard, we rigorously analyze the VAE objective, differentiating situations where this belief is and is not actually true. We then leverage the corresponding insights to develop a simple VAE enhancement that requires no additional hyperparameters or sensitive tuning. Quantitatively, this proposal produces crisp samples and stable FID scores that are actually competitive with a variety of GAN models, all while retaining desirable attributes of the original VAE architecture. The code for our model is available at \url{https://github.com/daib13/TwoStageVAE}. | accepted-poster-papers | The reviewers acknowledge the value of the careful analysis of Gaussian encoder/decoder VAE presented in the paper. The proposed algorithm shows impressive FID scores that are comparable to those obtained by state of the art GANs. The paper will be a valuable addition to the ICLR program.
| val | [
"Bkg4rgm93Q",
"rygm8MFA0X",
"BJxjvk7nCm",
"H1xarJXnCm",
"S1exXym2Cm",
"Bye1byCKCX",
"HJxVsagYnm",
"Sye7L4k1AQ",
"ryg--PcTTQ",
"SyxF01pF6X",
"Bkletw5FT7",
"ByeNgv9KTQ",
"H1xC7LAvaQ",
"S1low-VDpX",
"BJxGPNr4TX",
"B1xK2VH4T7",
"ByebcVS4TX",
"BJlRhQrETQ",
"BJ-97B4pQ",
"B1eNhzHVTX",... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposed a two-stage VAE method to generate high-quality samples and avoid blurriness. It is accomplished by utilizing a VAE structure on the observation and latent variable separately. The paper exploited a collection of interesting properties of VAE and point out the problem existed in the generative ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_B1e0X3C9tQ",
"iclr_2019_B1e0X3C9tQ",
"Bye1byCKCX",
"Bye1byCKCX",
"Bye1byCKCX",
"BJ-97B4pQ",
"iclr_2019_B1e0X3C9tQ",
"ryg--PcTTQ",
"SyxF01pF6X",
"Bkletw5FT7",
"S1low-VDpX",
"H1xC7LAvaQ",
"iclr_2019_B1e0X3C9tQ",
"iclr_2019_B1e0X3C9tQ",
"HJxVsagYnm",
"HJxVsagYnm",
"HJxVsagYnm... |
iclr_2019_B1exrnCcF7 | Disjoint Mapping Network for Cross-modal Matching of Voices and Faces | We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Different from the existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, DIMNet learns a shared representation for different modalities by mapping them individually to their common covariates. These shared representations can then be used to find the correspondences between the modalities. We show empirically that DIMNet is able to achieve better performance than the current state-of-the-art methods, with the additional benefits of being conceptually simpler and less data-intensive. | accepted-poster-papers | All reviewers agree that the proposed method is interesting and well presented. The authors' rebuttal addressed all outstanding issues. Two reviewers recommend clear accept and the third recommends borderline accept. I agree with this recommendation and believe that the paper will be of interest to the audience attending ICLR. I recommend accepting this work for a poster presentation at ICLR. | val | [
"r1gPRr2jT7",
"r1lCSghsam",
"ByxLt0iipQ",
"r1e5d52hhQ",
"Hkxxm_Qsh7",
"HkxvRZesnQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely appreciate the review for the recognition of our novelty and many valuable suggestions.\n\nOur main contribution mainly lies in proposing a cross modal matching framework called DIMNet, which learns a shared representation for different modalities by mapping them individually to their common covariate... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"Hkxxm_Qsh7",
"HkxvRZesnQ",
"r1e5d52hhQ",
"iclr_2019_B1exrnCcF7",
"iclr_2019_B1exrnCcF7",
"iclr_2019_B1exrnCcF7"
] |
iclr_2019_B1ffQnRcKX | Automatically Composing Representation Transformations as a Means for Generalization | A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not. | accepted-poster-papers |
pros:
- the paper is well-written and presents a nice framing of the composition problem
- good comparison to prior work
- very important research direction
cons:
- from an architectural standpoint, the paper is somewhat incremental over Routing Networks [Rosenbaum et al.]
- as Reviewers 2 and 3 point out, the experiments are a bit weak, relying on heuristics such as a window over 3 symbols in the multi-lingual arithmetic case, and a pre-determined set of operations (scaling, translation, rotation, identity) in the MNIST case.
As the authors state, there are three core ideas in this paper (my paraphrase):
(1) training on a set of compositional problems (with the right architecture/training procedure) can encourage the model to learn modules which can be composed to solve new problems, enabling better generalization.
(2) treating the problem of selecting functions for composition as a sequential decision-making problem in an MDP
(3) jointly learning the parameters of the functions and the (meta-level) composition policy.
As discussed during the review period, these three ideas are already present in the Routing Networks (RN) architecture of Rosenbaum et al. However, CRL offers insights and algorithmic improvements over RN in several ways:
(1) CRL uses a curriculum learning strategy. This seems to be key in achieving good results and makes a lot of sense for naturally compositional problems.
(2) The focus in RN was on using the architecture to solve multi-task problems in object recognition. The solutions learned in image domains, while "compositional", are less clearly interpretable. In this paper (CRL), the focus is more squarely on interpretable compositional tasks like arithmetic, and extrapolation is explored.
(3) The RN architecture does support recursion (and there are some experiments in this mode) but it was not the main focus. In this paper (CRL) recursion is given a clear, prominent role.
I appreciate the authors' engagement in the discussion period. My feeling is that the paper offers nice improvements, a useful framing of the problem, a clear recursive formulation, and a more central focus on naturally compositional problems. I am recommending the paper for acceptance but suggest that the authors remove or revise their contributions (3) and (4) on pg. 2 in light of the discussion on routing nets.
Routing Networks, Adaptive Selection of Non-Linear Functions for Multi-task Learning, ICLR 2018 | train | [
"r1gbHlap3X",
"BJl1SmLj3m",
"SygCi7pr14",
"BJeiNE7TaQ",
"H1ei6m7TaX",
"Byg3F7Qap7",
"B1eifeX9n7",
"r1x-kZyT2X",
"Hkx25xk6hQ",
"SJg_qTBs3X",
"HJlzO5a1nm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"public",
"public"
] | [
"Summary: This paper is about trying to learn a function from typed input-output data so that it can generalize to test data with an input-output type that it hasn't seen during training. It should be able to use \"analogy\" (if we want to translate from French to Spanish but don't know how to do so directly, we sh... | [
7,
9,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1
] | [
2,
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_B1ffQnRcKX",
"iclr_2019_B1ffQnRcKX",
"Byg3F7Qap7",
"Hkx25xk6hQ",
"B1eifeX9n7",
"r1gbHlap3X",
"iclr_2019_B1ffQnRcKX",
"SJg_qTBs3X",
"HJlzO5a1nm",
"HJlzO5a1nm",
"iclr_2019_B1ffQnRcKX"
] |
iclr_2019_B1fpDsAqt7 | Visual Reasoning by Progressive Module Networks | Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a functional program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate improved performances in all tasks by learning progressively. By evaluating the reasoning process using human judges, we show that our model is more interpretable than an attention-based baseline.
| accepted-poster-papers | Important problem (modular & interpretable approaches for VQA and visual reasoning); well-written manuscript, sensible approach. Paper was reviewed by three experts. Initially there were some concerns but after the author response and reviewer discussion, all three unanimously recommend acceptance. | train | [
"SkeIsnWJlN",
"HygakjZyl4",
"SJgyyPTv3m",
"SkeHNjEqRQ",
"BkgO1UGq07",
"Hygs-DpF0X",
"S1enb1hOCm",
"BylkHTN_RX",
"rygVoa6DRQ",
"H1xozmAE0m",
"SyeoH17BTQ",
"SJlY63MSpX",
"Sklfd2fSpX",
"H1gUmqkh3Q",
"r1eaptK5hm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We have added the GT captions experiment in the 'plug-and-play architecture' paragraph in Section 4.1.\n\nThank you again for your great suggestion!",
"Thanks for the response! It is interesting that the GT captions can help improve the VQA performance, please incorporate the results and update the manuscripts a... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"HygakjZyl4",
"SJlY63MSpX",
"iclr_2019_B1fpDsAqt7",
"BkgO1UGq07",
"Hygs-DpF0X",
"S1enb1hOCm",
"BylkHTN_RX",
"rygVoa6DRQ",
"SyeoH17BTQ",
"iclr_2019_B1fpDsAqt7",
"SJgyyPTv3m",
"r1eaptK5hm",
"H1gUmqkh3Q",
"iclr_2019_B1fpDsAqt7",
"iclr_2019_B1fpDsAqt7"
] |
iclr_2019_B1g30j0qF7 | Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes | There is a previously identified equivalence between wide fully connected neural networks (FCNs) and Gaussian processes (GPs). This equivalence enables, for instance, test set predictions that would have resulted from a fully Bayesian, infinitely wide trained FCN to be computed without ever instantiating the FCN, but by instead evaluating the corresponding GP. In this work, we derive an analogous equivalence for multi-layer convolutional neural networks (CNNs) both with and without pooling layers, and achieve state of the art results on CIFAR10 for GPs without trainable kernels. We also introduce a Monte Carlo method to estimate the GP corresponding to a given neural network architecture, even in cases where the analytic form has too many terms to be computationally feasible.
Surprisingly, in the absence of pooling layers, the GPs corresponding to CNNs with and without weight sharing are identical. As a consequence, translation equivariance, beneficial in finite channel CNNs trained with stochastic gradient descent (SGD), is guaranteed to play no role in the Bayesian treatment of the infinite channel limit - a qualitative difference between the two regimes that is not present in the FCN case. We confirm experimentally, that while in some scenarios the performance of SGD-trained finite CNNs approaches that of the corresponding GPs as the channel count increases, with careful tuning SGD-trained CNNs can significantly outperform their corresponding GPs, suggesting advantages from SGD training compared to fully Bayesian parameter estimation. | accepted-poster-papers | There has been a recent focus on proving the convergence of Bayesian fully connected networks to GPs. This work takes these ideas one step further, by proving the equivalence in the convolutional case.
All reviewers and the AC are in agreement that this is interesting and impactful work. The nature of the topic is such that experimental evaluations and theoretical proofs are difficult to carry out in a convincing manner; however, the authors have done a good job, especially after carefully taking into account the reviewers’ comments.
| val | [
"SklzQ5QfgN",
"Skluw57MeN",
"SygaK6RU14",
"SJgkvoCrJE",
"BkxQiXRYhX",
"BJgxlnn4yV",
"BJejjc3VkV",
"SkgZdQ0t37",
"rJx7Qcjq07",
"HJgaDeh9R7",
"S1lesx2c0X",
"Hkes7gncAm",
"r1eI1gn5Cm",
"ryeVtyn907",
"Bkg_Xk290X",
"BygZgkn5RQ",
"BJl_qRsqCQ",
"BygYYYicRX",
"H1xWncjq0Q",
"ryl9t5scRQ"... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"------------------------------------------------------------------------------------\n>>> - Demonstrate through some sample figures that GP-CNN with pooling achieves invariance while GP-CNN with out pooling fail to capture it.\n\nThank you for the suggestion, we are working on and are planning to include covarian... | [
-1,
-1,
7,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
3,
-1,
2,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"SygaK6RU14",
"SygaK6RU14",
"iclr_2019_B1g30j0qF7",
"rke_hxhHhX",
"iclr_2019_B1g30j0qF7",
"BkxQiXRYhX",
"SkgZdQ0t37",
"iclr_2019_B1g30j0qF7",
"rke_hxhHhX",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"SkgZdQ0t37",
"rke_hxhHhX",
... |
iclr_2019_B1gTShAct7 | Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference | Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. | accepted-poster-papers | Pros:
- novel method for continual learning
- clear, well written
- good results
- no need for identified tasks
- detailed rebuttal, new results in revision
Cons:
- experiments could be on more realistic/challenging domains
The reviewers agree that the paper should be accepted. | train | [
"rye41gla2Q",
"HkxQpF3zk4",
"B1xXbsgkJ4",
"SkeOiAJ9RQ",
"rJl0zU-cCQ",
"SJlIlAJqCm",
"Bkxpap1cAm",
"H1l7x3R_h7",
"B1eZ4nh_nm",
"ByxHXBo8h7",
"Hyxl43qlnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The transfer/ interference perspective of lifelong learning is well motivated, and combining the meta-learning literature with the continual learning literature (applying reptile twice), even if seems obvious, wasn't explored before. In addition, this paper shows that a lot of gain can be obtained if one uses more... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1
] | [
"iclr_2019_B1gTShAct7",
"SJlIlAJqCm",
"rJl0zU-cCQ",
"H1l7x3R_h7",
"B1eZ4nh_nm",
"Bkxpap1cAm",
"rye41gla2Q",
"iclr_2019_B1gTShAct7",
"iclr_2019_B1gTShAct7",
"Hyxl43qlnQ",
"iclr_2019_B1gTShAct7"
] |
iclr_2019_B1gstsCqt7 | Sparse Dictionary Learning by Dynamical Neural Networks | A dynamical neural network consists of a set of interconnected neurons that interact over time continuously. It can exhibit computational properties in the sense that the dynamical system’s evolution and/or limit points in the associated state space can correspond to numerical solutions to certain mathematical optimization or learning problems. Such a computational system is particularly attractive in that it can be mapped to a massively parallel computer architecture for power and throughput efficiency, especially if each neuron can rely solely on local information (i.e., local memory). Deriving gradients from the dynamical network’s various states while conforming to this last constraint, however, is challenging. We show that by combining ideas of top-down feedback and contrastive learning, a dynamical network for solving the l1-minimizing dictionary learning problem can be constructed, and the true gradients for learning are provably computable by individual neurons. Using spiking neurons to construct our dynamical network, we present a learning process, its rigorous mathematical analysis, and numerical results on several dictionary learning problems. | accepted-poster-papers | While there has been lots of previous work on training dictionaries for sparse coding, this work tackles the problem of doing son in a purely local way. While previous work suggests that the exact computation of gradient addressed in the paper is not necessarily critical, as noted by reviewers, all reviewers agree that the work still makes important contributions through both its theoretical analyses and presented experiments. Authors are encouraged to work on improving clarity further and delineating their contribution more precisely with respect to previous results. | train | [
"S1xN4BLApX",
"Hyl2hXI067",
"rJgVjEUATX",
"rklBey5vTQ",
"SJe04G-g67",
"ryxykzEyam"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Figure 2 serves to illustrate our theoretical results and shows how the algorithm is run in practice. We revised the caption of Figure 2, providing a more detailed and clear description.\n\nWe indeed cited and discussed the early \"similarity matching\" work (Hu et al. 2014) in our original submission. In our upda... | [
-1,
-1,
-1,
6,
9,
8
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"ryxykzEyam",
"iclr_2019_B1gstsCqt7",
"rklBey5vTQ",
"iclr_2019_B1gstsCqt7",
"iclr_2019_B1gstsCqt7",
"iclr_2019_B1gstsCqt7"
] |
iclr_2019_B1lKS2AqtX | Eidetic 3D LSTM: A Model for Video Prediction and Beyond | Spatiotemporal predictive learning, though long considered to be a promising self-supervised feature learning method, seldom shows its effectiveness beyond future video prediction. The reason is that it is difficult to learn good representations for both short-term frame dependency and long-term high-level relations. We present a new model, Eidetic 3D LSTM (E3D-LSTM), that integrates 3D convolutions into RNNs. The encapsulated 3D-Conv makes local perceptrons of RNNs motion-aware and enables the memory cell to store better short-term features. For long-term relations, we make the present memory state interact with its historical records via a gate-controlled self-attention module. We describe this memory transition mechanism eidetic as it is able to effectively recall the stored memories across multiple time stamps even after long periods of disturbance. We first evaluate the E3D-LSTM network on widely-used future video prediction datasets and achieve the state-of-the-art performance. Then we show that the E3D-LSTM network also performs well on the early activity recognition to infer what is happening or what will happen after observing only limited frames of video. This task aligns well with video prediction in modeling action intentions and tendency. | accepted-poster-papers | Strengths: Strong results on future frame video prediction using a 3D convolutional network. Use of future video prediction to jointly learn auxiliary tasks shown to to increase performance. Good ablation study.
Weaknesses: Comparisons with older action recognition methods. Some concerns about novelty: the main contribution is the E3D-LSTM architecture, which R1 characterized as an LSTM with an extra gate and attention mechanism.
Contention: Authors point to novelty in 3D convolutions inside the RNN.
Consensus: All reviewers give a final score of 7; well-done experiments helped address concerns around novelty. Easy to recommend acceptance given the agreement.
| val | [
"HJeJHrJc27",
"BJgyWXsuh7",
"rJl_P_YA0m",
"S1gVVTLVhX",
"SyxImDCiA7",
"ryeBsAat0Q",
"ByVd51AFRm",
"HyxkqaatRm",
"r1e-K3aF0Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"AFTER REBUTTAL:\n\nThis is an overall good work, and I do think proves its point. The results on the TaxiBJ dataset (not TatxtBJ, please correct the name in the paper) are compelling, and the concerns regarding some of the text explainations have been corrected.\n\n-----\n\nThe proposed model uses a 3D-CNN with a ... | [
7,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_B1lKS2AqtX",
"iclr_2019_B1lKS2AqtX",
"ryeBsAat0Q",
"iclr_2019_B1lKS2AqtX",
"ByVd51AFRm",
"BJgyWXsuh7",
"S1gVVTLVhX",
"HJeJHrJc27",
"iclr_2019_B1lKS2AqtX"
] |
iclr_2019_B1lnzn0ctQ | ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA | Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model. | accepted-poster-papers | This is a well executed paper that makes clear contributions to the understanding of unrolled iterative optimization and soft thresholding for sparse signal recovery with neural networks. | train | [
"HJen7j87JN",
"HklQ7zppRQ",
"B1xca8C4nX",
"rkxXkC3DTQ",
"r1xH4AhDpX",
"HyltGkJFiQ",
"ryxQcC8jp7",
"rJey90hw6m",
"r1lP83nDpX",
"SyxX7z5uhQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"[Opening is okay]\n\nPoints 1 and 2: There is no word \"tree\" or \"graph\", no \"beta\" or \"$\\beta$\", in our paper. We are confused and think they may refer to another paper. Could you kindly clarify?\n\n3: This is great suggestion. The matrix W is the solution of a convex quadratic program subject to linear c... | [
-1,
-1,
7,
-1,
-1,
9,
-1,
-1,
-1,
10
] | [
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
5
] | [
"HklQ7zppRQ",
"rkxXkC3DTQ",
"iclr_2019_B1lnzn0ctQ",
"B1xca8C4nX",
"B1xca8C4nX",
"iclr_2019_B1lnzn0ctQ",
"HyltGkJFiQ",
"HyltGkJFiQ",
"SyxX7z5uhQ",
"iclr_2019_B1lnzn0ctQ"
] |
iclr_2019_B1lz-3Rct7 | Three Mechanisms of Weight Decay Regularization | Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of L2 regularization.
Literal weight decay has been shown to outperform L2 regularization for optimizers for which they differ.
We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization.
Our results provide insight into how to improve the regularization of neural networks. | accepted-poster-papers | Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.
| train | [
"SyxswZfEkV",
"B1eBMzYupm",
"B1g5xlXqnm",
"rygRc8IopQ",
"SJgG4O8op7",
"HklYSxUoTX",
"rygtlWdRnm",
"Skldcfu0hm",
"rJx5uduA2m",
"rJx3XFFv2Q",
"B1eS7cTdhm",
"rJlhD5wRnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"The authors have taken my comment into account in the new revision of the paper and adequately addressed issues pointed out by other reviewers. So, I keep my rating unchanged.",
"Q1: Agreed\n\nQ2: You are right about weight decay on gamma only affecting the complexity of the model due to the last layer which can... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
-1
] | [
"Skldcfu0hm",
"rJx5uduA2m",
"iclr_2019_B1lz-3Rct7",
"HklYSxUoTX",
"B1eBMzYupm",
"iclr_2019_B1lz-3Rct7",
"B1g5xlXqnm",
"B1eS7cTdhm",
"rJx3XFFv2Q",
"iclr_2019_B1lz-3Rct7",
"iclr_2019_B1lz-3Rct7",
"iclr_2019_B1lz-3Rct7"
] |
iclr_2019_B1xJAsA5F7 | Learning Multimodal Graph-to-Graph Translation for Molecule Optimization | We view molecule optimization as a graph-to-graph translation problem. The goal is to learn to map from one molecular graph to another with better properties based on an available corpus of paired molecules. Since molecules can be optimized in different ways, there are multiple viable translations for each input graph. A key challenge is therefore to model diverse translation outputs. Our primary contributions include a junction tree encoder-decoder for learning diverse graph translations along with a novel adversarial training method for aligning distributions of molecules. Diverse output distributions in our model are explicitly realized by low-dimensional latent vectors that modulate the translation process. We evaluate our model on multiple molecule optimization tasks and show that our model outperforms previous state-of-the-art baselines by a significant margin.
| accepted-poster-papers | The revisions made by the authors convinced the reviewers to all recommend accepting this paper. Therefore, I am recommending acceptance as well. I believe the revisions were important to make since I concur with several points in the initial reviews about additional baselines. It is all too easy to add confusion to the literature by not including enough experiments. | test | [
"HylNHt4h0X",
"r1gHailcAQ",
"SyeFxjBc37",
"B1lysCy5CX",
"r1xXOkQO07",
"Syl_q2s_67",
"SyghQTiupQ",
"H1xuZ76rTm",
"Skl19Q6Hpm",
"SkgWrRs_Tm",
"Hkl9bvO52Q",
"HkgKBKlq2m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your insightful comments again! They are very helpful!",
"Thank you for updating the paper. I've updated the score as well.",
"Update:\nThe score has been updated to reflect the authors' great efforts in improving the manuscript. This reviewer would suggest to accept the paper now.\n\n\nOld Revie... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"r1gHailcAQ",
"B1lysCy5CX",
"iclr_2019_B1xJAsA5F7",
"r1xXOkQO07",
"H1xuZ76rTm",
"HkgKBKlq2m",
"HkgKBKlq2m",
"SyeFxjBc37",
"SyeFxjBc37",
"Hkl9bvO52Q",
"iclr_2019_B1xJAsA5F7",
"iclr_2019_B1xJAsA5F7"
] |
iclr_2019_B1xVTjCqKQ | A Data-Driven and Distributed Approach to Sparse Signal Representation and Recovery | In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery. First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them. Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems. In this paper, we address these two challenges by presenting a novel framework based on deep learning. For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood (ML) formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures. For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds-up the signal recovery process. We demonstrate the significant improvement our method obtains over competing methods through a series of experiments. | accepted-poster-papers | This paper studies deep convolutional architectures to perform compressive sensing of natural images, demonstrating improved empirical performance with an efficient pipeline.
Reviewers reached a consensus that this is an interesting contribution that advances data-driven methods for compressed sensing, despite some doubts about the experimental setup and the scope of the theoretical insights. We thus recommend acceptance as poster. | test | [
"SygHVHWiyV",
"Byl6y31r37",
"BkgimTEqRm",
"HklY22V9RQ",
"B1gQwnV9R7",
"Bylk_jNcC7",
"S1lGNoV90m",
"HkgjfHOahX",
"SJgsRkfb3m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I think the authors have addressed all my comments and I recommend acceptance. ",
"Quality & Clarity:\nThis is a nice paper with clear explanations and justifications. The experiments seem a little shakey.\n\nOriginality & Significance:\nI'm personally not familiar enough to say the theoretical work is original,... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"Bylk_jNcC7",
"iclr_2019_B1xVTjCqKQ",
"SJgsRkfb3m",
"Byl6y31r37",
"Byl6y31r37",
"HkgjfHOahX",
"HkgjfHOahX",
"iclr_2019_B1xVTjCqKQ",
"iclr_2019_B1xVTjCqKQ"
] |
iclr_2019_B1xWcj0qYm | On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data | Empirical risk minimization (ERM), with proper loss function and regularization, is the common practice of supervised classification. In this paper, we study training arbitrary (from linear to deep) binary classifier from only unlabeled (U) data by ERM. We prove that it is impossible to estimate the risk of an arbitrary binary classifier in an unbiased manner given a single set of U data, but it becomes possible given two sets of U data with different class priors. These two facts answer a fundamental question---what the minimal supervision is for training any binary classifier from only U data. Following these findings, we propose an ERM-based learning method from two sets of U data, and then prove it is consistent. Experiments demonstrate the proposed method could train deep models and outperform state-of-the-art methods for learning from two sets of U data. | accepted-poster-papers | This paper studies the task of learning a binary classifier from only unlabeled data. They first provide a negative result, i.e., they show it is impossible to learn an unbiased estimator from a set of unlabeled data. Then they provide an empirical risk minimization method which works when given two sets of unlabeled data, as well as the class priors.
The four submitted reviews were unanimous in their vote to accept. The results are impactful, and might make for an interesting oral presentation. | train | [
"ryxRqB96R7",
"HJgRrkQw07",
"BJe2DbrUR7",
"r1enCpTGCQ",
"S1gL7qwf07",
"rkxn-AIbC7",
"SyeJk0I-A7",
"rJlyh6UW07",
"H1leKaUW0Q",
"rJgzB6LbAX",
"r1gZR2ahaQ",
"rkx7Fs-ham",
"BJgWwjZ3T7",
"rJeZEjbhTX",
"rJepWiW3pm",
"rkeeoqWha7",
"ryxIBivipQ",
"BkgahCMcpQ",
"H1xTSAEqnm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors have responded to my questions, and I have no other comment to make.",
"Thank you for your many insightful clarifications and expanding your experiments. I look forward to seeing more work in the future!",
"We would like to thank all reviewers for their helpful comments! We have now updated our sub... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"S1gL7qwf07",
"rJgzB6LbAX",
"iclr_2019_B1xWcj0qYm",
"r1gZR2ahaQ",
"ryxIBivipQ",
"rJgzB6LbAX",
"rJgzB6LbAX",
"rJgzB6LbAX",
"rJgzB6LbAX",
"BkgahCMcpQ",
"iclr_2019_B1xWcj0qYm",
"rkeeoqWha7",
"rkeeoqWha7",
"rkeeoqWha7",
"rkeeoqWha7",
"H1xTSAEqnm",
"iclr_2019_B1xWcj0qYm",
"iclr_2019_B1x... |
iclr_2019_B1xY-hRctX | Neural Logic Machines | We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs exploit the power of both neural networks---as function approximators, and logic programming---as a symbolic processor for objects with properties, relations, logic connectives, and quantifiers. After being trained on small-scale tasks (such as sorting short arrays), NLMs can recover lifted rules, and generalize to large-scale tasks (such as sorting longer arrays). In our experiments, NLMs achieve perfect generalization in a number of tasks, from relational reasoning tasks on the family tree and general graphs, to decision making tasks including sorting arrays, finding shortest paths, and playing the blocks world. Most of these tasks are hard to accomplish for neural networks or inductive logic programming alone. | accepted-poster-papers |
pros:
- The paper presents an interesting forward chaining model which makes use of meta-level expansions and reductions on predicate arguments in a neat way to reduce complexity. As Reviewer 3 points out, there are a number of other papers from the neuro-symbolic community that learn relations (logic tensor networks is one good reference there). However, using these meta-rules you can mix predicates of different arities in a principled way in the construction of the rules, which is something I haven't seen.
- The paper is reasonably well written (see cons for specific issues)
- There is quite a broad evaluation across a number of different tasks. I appreciated that you integrated this into an RL setting for tasks like blocks world.
- The results are good on small datasets and generalize well
cons:
- (scalability) As both Reviewers 1 and 3 point out, there are scalability issues as a function of the predicate arity in computing the set of permutations for the output predicate computation.
- (interpretability) As Reviewer 2 notes, unlike del-ILP, it is not obvious how symbolic rules can be extracted. This is an important point to address up front in the text.
- (clarity) The paper is confusing or ambiguous in places:
-Initially I read the 1,2,3 sequence at the top of 3 to be a deduction (and was confused) rather than three applications of the meta-rules. Maybe instead of calling that section "primitive logic rules" you can call them "logical meta-rules".
-Another confusion, also mentioned by reviewer 3 is that you are assuming that free variables (e.g. the "x" in the expression "Clear(x)") are implicitly considered universally quantified in your examples but you don't say this anywhere. If I have the fact "Clear(x)" as an input fact, then presumably you will interpret this as "for all x Clear(x)" and provide an input tensor to the first layer which will have all 1.0's along the "Clear" relation dimension, right?
-It seems that you are making the assumption that you will never need to apply a predicate to the same object in multiple arguments? If not, I don't see why you say that the shape of the tensor will be m x (m-1) instead of m^2. You need to be able to do this to get reflexivity for example: "a <= a".
-I think you are implicitly making the closed world assumption (CWA) and should say so.
-On pg. 4 you say "The facts are tensors that encode relations among multiple objectives, as described in Sec. 2.2.". What do you mean by "objectives"? I would say the facts are tensors that encode relations among multiple objects.
-On pg. 5 you say "We finish this subsection, continuing with the blocks world to illustrate the forward
propagation in NLM". I see no mention of blocks world in this paragraph. It just seems like a description of what happens at one block, generically.
-In many places you say that this model can compute deduction on first-order predicate calculus (FOPC) but it seems to me that you are limited to horn logic (rule logic) in which there is at most one positive literal per clause (i.e. rules of the form: b1 AND b2 AND ... AND bn => h). From what I can tell you cannot handle deduction on clauses such as b1 AND b2 => h1 OR (h2 AND h3).
-There is not enough description of the exact setup for each experiment. For example in blocks world, how do you choose predicates for each layer? How many exactly for each experiment? You make it seem on p3 that you can handle recursive predicates but this seems to not have been worked out completely in the appendix. You should make this clear.
-In figure 1 you list Move as if it's a predicate like On but it's a very different thing. On is a predicate describing a relation in one state; Move is an action which updates a state by changing the values of predicates. They should not be presented in the same way.
-You use "min" and "max" for "and" and "or" respectively. Other approaches have found that using the product t-norm t-norm(x,y) = x * y helps with gradient propagation. del-ILP discusses this in more detail on p 19. Did you try these variations?
-I think it would be helpful to somewhere explicitly describe the actual MLP model you use for deduction including layer sizes and activation functions.
-p. 5. typo: "Such a parameter sharing mechanism is crucial to the generalization ability of NLM to
problems ov varying sizes." ("ov" -> "of")
-p. 6. sec 3.1 typo: "For ∂ILP, the set of pre-conditions of the symbols is used direclty as input of the system." ("direclty" -> "directly")
I think this is a valuable contribution and novel in the particulars of the architecture (eg. expand/reduce) and am recommending acceptance. But I would like to see a real effort made to sharpen the writing and make the exposition crystal clear. Please in particular pay attention to Reviewer 3's comments.
| train | [
"r1ee4DycyV",
"HJxOd0tDJE",
"S1e1Do49Cm",
"rklvumwmCQ",
"r1e_izwX0m",
"S1l4CCI66Q",
"rJxii0IT6Q",
"H1g19nL6Tm",
"rylWbydT3Q",
"rkgpGkN52Q",
"r1gMP1TKnQ"
] | [
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your pointers to the related papers. We will discuss them in the next version of our paper.",
"... although it is not a differentiable model or even a neural model, the idea of learning to sort infinite arrays from short examples has been explored in the \"Generalized Planning\" literature, for exampl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
5
] | [
"HJxOd0tDJE",
"iclr_2019_B1xY-hRctX",
"r1e_izwX0m",
"rkgpGkN52Q",
"rylWbydT3Q",
"rJxii0IT6Q",
"r1gMP1TKnQ",
"iclr_2019_B1xY-hRctX",
"iclr_2019_B1xY-hRctX",
"iclr_2019_B1xY-hRctX",
"iclr_2019_B1xY-hRctX"
] |
iclr_2019_B1xf9jAqFQ | Neural Speed Reading with Structural-Jump-LSTM | Recurrent neural networks (RNNs) can model natural language by sequentially ''reading'' input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as ''neural speed reading'', either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word.
A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that
Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text. | accepted-poster-papers | The authors obtain nice speed improvements by learning to skip and jump over input words when processing text with an LSTM. At some points the reviewers considered the work incremental since similar ideas have already been explored, but at the end two of the reviewers ended up endorsing the paper with strong support. | test | [
"H1liRmli2Q",
"BygoH5N51N",
"S1eAQNnPnm",
"rygL7FSlRQ",
"r1xklFrl07",
"S1gJqdreRm",
"SylYFwwu2m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a Structural-Jump-LSTM model to speed up machine reading, which is an extension of the previous speed reading models, such as LSTM-Jump, Skim-LSTM and LSTM-Shuffle. The major difference, as claimed by the authors, is that the proposed model has two agents instead of one. One agent decides whethe... | [
7,
-1,
7,
-1,
-1,
-1,
5
] | [
5,
-1,
4,
-1,
-1,
-1,
4
] | [
"iclr_2019_B1xf9jAqFQ",
"S1gJqdreRm",
"iclr_2019_B1xf9jAqFQ",
"S1eAQNnPnm",
"SylYFwwu2m",
"H1liRmli2Q",
"iclr_2019_B1xf9jAqFQ"
] |
iclr_2019_B1xhQhRcK7 | Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures | This paper addresses the problem of evaluating learning systems in safety critical domains such as autonomous driving, where failures can have catastrophic consequences. We focus on two problems: searching for scenarios when learned agents fail and assessing their probability of failure. The standard method for agent evaluation in reinforcement learning, Vanilla Monte Carlo, can miss failures entirely, leading to the deployment of unsafe agents. We demonstrate this is an issue for current agents, where even matching the compute used for training is sometimes insufficient for evaluation. To address this shortcoming, we draw upon the rare event probability estimation literature and propose an adversarial evaluation approach. Our approach focuses evaluation on adversarially chosen situations, while still providing unbiased estimates of failure probabilities. The key difficulty is in identifying these adversarial situations -- since failures are rare there is little signal to drive optimization. To solve this we propose a continuation approach that learns failure modes in related but less robust agents. Our approach also allows reuse of data already collected for training the agent. We demonstrate the efficacy of adversarial evaluation on two standard domains: humanoid control and simulated driving. Experimental results show that our methods can find catastrophic failures and estimate failures rates of agents multiple orders of magnitude faster than standard evaluation schemes, in minutes to hours rather than days. | accepted-poster-papers |
* Strengths
The paper addresses a timely topic, and reviewers generally agreed that the approach is reasonable and the experiments are convincing. Reviewers raised a number of specific concerns (which could be addressed in a revised version or future work), described below.
* Weaknesses
Some reviewers were concerned the baselines are weak. Several reviewers were concerned that relying on failures observed during training could create issues by narrowing the proposal distribution (Reviewer 3 characterizes this in a particularly precise manner). In addition, there was a general feeling that more steps are needed before the method can be used in practice (but this could be said of most research).
* Recommendation
All reviewers agreed that the paper should be accepted, although there was also consensus that the paper would benefit from stronger baselines and more close attention to issues that could be caused by an overly narrow proposal distribution. The authors should consider addressing or commenting on these issues in the final version. | train | [
"r1ggQ1XjCX",
"BJex-sbiRm",
"H1l_8M3gRm",
"H1lea0SWpQ",
"Byxi9c4fpm",
"r1lgl9Iz6X",
"r1xLqdIM6X",
"BJxm6c4GaQ",
"BJl92-Odhm",
"B1lWFTj03X",
"H1guoXzK37",
"ByeinoLypQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thanks for clarifying your concerns.\n\nWe understand the high-level question raised here to be: “when should practitioners deploying a system in the real world test this system with the FPP rather than VMC”? In short, the answer is *always*. \n\nFirst, for risk estimation, by mixing the FPP and VMC estimates, we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
-1
] | [
"BJex-sbiRm",
"r1xLqdIM6X",
"iclr_2019_B1xhQhRcK7",
"BJl92-Odhm",
"B1lWFTj03X",
"H1guoXzK37",
"H1guoXzK37",
"B1lWFTj03X",
"iclr_2019_B1xhQhRcK7",
"iclr_2019_B1xhQhRcK7",
"iclr_2019_B1xhQhRcK7",
"BJl92-Odhm"
] |
iclr_2019_BJG0voC9YQ | Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search | Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for model-based policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, i.e. actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such Stochastic Value Gradient can be interpreted as counterfactual methods. | accepted-poster-papers | see my comment to the authors below | train | [
"Ske2pWsvg4",
"Skl3eOkblN",
"SklAR5DLpm",
"rJxsxWflC7",
"ryxd2ezlRQ",
"H1l_SlGxCX",
"H1xnN1GxAm",
"Bye_P5EZT7",
"B1lQbh_c37"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the area chair for pointing out the references, we will add them to our\nmanuscript. As stated in the response to the reviewers, we agree that our\nexperiments test our algorithm only in the idealized setting of known transition\nand reward kernels and unknown initial state. We will change the wording in ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
3,
3
] | [
"Skl3eOkblN",
"iclr_2019_BJG0voC9YQ",
"iclr_2019_BJG0voC9YQ",
"B1lQbh_c37",
"Bye_P5EZT7",
"SklAR5DLpm",
"iclr_2019_BJG0voC9YQ",
"iclr_2019_BJG0voC9YQ",
"iclr_2019_BJG0voC9YQ"
] |
iclr_2019_BJe-DsC5Fm | signSGD via Zeroth-Order Oracle | In this paper, we design and analyze a new zeroth-order (ZO) stochastic optimization algorithm, ZO-signSGD, which enjoys dual advantages of gradient-free operations and signSGD. The latter requires only the sign information of gradient estimates but is able to achieve a comparable or even better convergence speed than SGD-type algorithms. Our study shows that ZO signSGD requires d times more iterations than signSGD, leading to a convergence rate of O(d/T) under mild conditions, where d is the number of optimization variables, and T is the number of iterations. In addition, we analyze the effects of different types of gradient estimators on the convergence of ZO-signSGD, and propose two variants of ZO-signSGD that at least achieve O(d/T) convergence rate. On the application side we explore the connection between ZO-signSGD and black-box adversarial attacks in robust deep learning. Our empirical evaluations on image classification datasets MNIST and CIFAR-10 demonstrate the superior performance of ZO-signSGD on the generation of adversarial examples from black-box neural networks. | accepted-poster-papers | This is a solid paper that proposes and analyzes a sound approach to zeroth-order optimization, covering variants of a simple base algorithm. After resolving some issues during the response period, the reviewers concluded with a unanimous recommendation of acceptance. Some concerns regarding the necessity for such algorithms persisted, but the connection to adversarial examples provides an interesting motivation. | train | [
"S1lxUJxU6X",
"Skemq-kZkV",
"rkxd0tneyN",
"HyxK1V69n7",
"ryex1VPEC7",
"r1xqTty5CQ",
"SJgTdFJc0Q",
"r1e7SFJcCQ",
"SJewbwwVC7",
"rJxHBMwE07",
"SJxAFZDECm",
"SklDel3Vam",
"Hyl5D-vj3X",
"r1xwxNwtnQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for pointing out the concurrent ICLR submission, which focused on the first-order Byzantine setting. The authors agreed that the extra unimodal symmetric assumption can improve the theoretical convergence bound. And indeed we showed that in the zeroth-order setting, this conclusion holds (Corollary 2). M... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"SklDel3Vam",
"rkxd0tneyN",
"ryex1VPEC7",
"iclr_2019_BJe-DsC5Fm",
"HyxK1V69n7",
"SJgTdFJc0Q",
"r1e7SFJcCQ",
"ryex1VPEC7",
"r1xwxNwtnQ",
"Hyl5D-vj3X",
"iclr_2019_BJe-DsC5Fm",
"iclr_2019_BJe-DsC5Fm",
"iclr_2019_BJe-DsC5Fm",
"iclr_2019_BJe-DsC5Fm"
] |
iclr_2019_BJe0Gn0cY7 | Preventing Posterior Collapse with delta-VAEs | Due to the phenomenon of “posterior collapse,” current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires altering the training objective. We develop an alternative that utilizes the most powerful generative models as decoders, optimize the variational lower bound, and ensures that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate our method’s efficacy at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state of the art log-likelihood on CIFAR-10 and ImageNet 32 × 32. | accepted-poster-papers | Strengths: The proposed method is relatively principled. The paper also demonstrates a new ability: training VAEs with autoregressive decoders that have meaningful latents. The paper is clear and easy to read.
Weaknesses: I wasn't entirely convinced by the causal/anticausal formulation, and it's a bit unfortunate that the decoder couldn't have been copied without modification from another paper.
Points of contention:
It's not clear how general the proposed approach is, or how important the causal/anti-causal idea was, although the authors added an ablation study to check this last question.
Consensus: All reviewers rated the paper above the bar, and the objections of the two 6's seem to have been satisfactorily addressed by the rebuttal and paper update. | train | [
"SyegyQ0LCm",
"S1eyafA807",
"BJxg9M0LCQ",
"BJgVLMCIAm",
"SJxpwu6A2m",
"Byl_786uhQ",
"r1ixv5dnX",
"HJlopG40cX",
"BJl4aRZTq7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We thank all the reviewers for their valuable feedback. All three reviewers agree that the paper is clear and well-written. R1 and R2 highlighted the convincing results of learning useful representations with autoregressive decoders and noted our extensive experiments. R3 was concerned about experiments demonstrat... | [
-1,
-1,
-1,
-1,
6,
7,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1
] | [
"iclr_2019_BJe0Gn0cY7",
"r1ixv5dnX",
"Byl_786uhQ",
"SJxpwu6A2m",
"iclr_2019_BJe0Gn0cY7",
"iclr_2019_BJe0Gn0cY7",
"iclr_2019_BJe0Gn0cY7",
"BJl4aRZTq7",
"iclr_2019_BJe0Gn0cY7"
] |
iclr_2019_BJe1E2R5KX | Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees | Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves the state-of-the-art performance when only 1M or fewer samples are permitted on a range of continuous control benchmark tasks. | accepted-poster-papers | This paper proposes model-based reinforcement learning algorithms that have theoretical guarantees. These methods are shown to good results on Mujuco benchmark tasks. All of the reviewers have given a reasonable score to the paper, and the paper can be accepted. | train | [
"Hkx29sJxC7",
"rJxCLWG_Rm",
"rJgrRkSDTX",
"rygz4okeAQ",
"rJlACiJXa7",
"r1xWIcFhTm",
"H1g1w5th6X",
"r1gl8iJX6X",
"H1e69okXam",
"HJeBddsh37",
"SyxTcTgq3Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We’ve added a paragraph below Theorem 3.1 and Appendix G, which contains a finite sample complexity results. We can obtain an approximate local maximum in $O(1/\\epsilon)$ iterations with sample complexity (in the number of trajectories) that is linear in the number of parameters and accuracy $\\epsilon$ and is lo... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"SyxTcTgq3Q",
"H1g1w5th6X",
"iclr_2019_BJe1E2R5KX",
"iclr_2019_BJe1E2R5KX",
"HJeBddsh37",
"rJgrRkSDTX",
"rJgrRkSDTX",
"SyxTcTgq3Q",
"HJeBddsh37",
"iclr_2019_BJe1E2R5KX",
"iclr_2019_BJe1E2R5KX"
] |
iclr_2019_BJeOioA9Y7 | Knowledge Flow: Improve Upon Your Teachers | A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper, we develop knowledge flow which moves ‘knowledge’ from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too. Upon training with knowledge flow the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other ‘knowledge exchange’ methods.
| accepted-poster-papers | The authors have taken inspiration from recent publications that demonstrate transfer learning over sequential RL tasks and have proposed a method that trains individual learners from experts using layerwise connections, gradually forcing the features to distill into the student with a hard-coded annealing of coefficients. The authors have done thorough experiments and the value of the approach seems clear, especially compared against progressive nets and pathnets. The paper is well-written and interesting, and the approach is novel. The reviewers have discussed the paper in detail and agree, with the AC, that it should be accepted. | train | [
"Bkl3Lcd80m",
"rklBpc_LC7",
"r1liuBG50X",
"r1lPW7iI07",
"BkgiFx9nnQ",
"BJlDEqbKA7",
"ByeCKHTOA7",
"SkgAysOIRm",
"rkgVy1xs2m",
"Byx-vLP5hQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Updated: Changed section numbers to fit latest revision.\n---------------------------------------------------------------------------\nWe thank the reviewer for time and feedback.\n\nRe 1: Use teachers with different architectures from the student. \nIn additional experiments, following the suggestion of the revie... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
4
] | [
"Byx-vLP5hQ",
"rkgVy1xs2m",
"BJlDEqbKA7",
"SkgAysOIRm",
"iclr_2019_BJeOioA9Y7",
"rklBpc_LC7",
"r1lPW7iI07",
"BkgiFx9nnQ",
"iclr_2019_BJeOioA9Y7",
"iclr_2019_BJeOioA9Y7"
] |
iclr_2019_BJeWUs05KQ | Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information | The use of imitation learning to learn a single policy for a complex task that has multiple modes or hierarchical structure can be challenging. In fact, previous work has shown that when the modes are known, learning separate policies for each mode or sub-task can greatly improve the performance of imitation learning. In this work, we discover the interaction between sub-tasks from their resulting state-action trajectory sequences using a directed graphical model. We propose a new algorithm based on the generative adversarial imitation learning framework which automatically learns sub-task policies from unsegmented demonstrations. Our approach maximizes the directed information flow in the graphical model between sub-task latent variables and their generated trajectories. We also show how our approach connects with the existing Options framework, which is commonly used to learn hierarchical policies. | accepted-poster-papers | This paper proposes an approach for imitation learning from unsegmented demonstrations. The paper addresses an important problem and is well-motivated. Many of the concerns about the experiments have been addressed with follow-up comments. We strongly encourage the authors to integrate the new results and additional literature to the final version. With these changes, the reviewers agree that the paper exceeds the bar for acceptance. Thus, I recommend acceptance. | val | [
"S1gRVWWc0m",
"HkehIf6w0Q",
"rJxs7IUPAQ",
"rklVVJy5pX",
"SJx_MRAKaX",
"ryeTN6CYpm",
"H1l3BLbJpm",
"rJel6IwA3Q",
"Ske4Ltaws7"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"For completeness, here is the table of results on the FetchPickandPlace-v1 environment with results of the VAE baseline included:\n\nDirected Info GAIL + L2 loss: Mean = -9.47, Std dev. = 4.84\nGAIL + L2 loss: Mean = -12. 05, Std dev. = 4.94\nDirected-Info GAIL: Mean = -11.74, Std dev. = 5.87\nGAIL: Mean = -13.29,... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rJxs7IUPAQ",
"iclr_2019_BJeWUs05KQ",
"SJx_MRAKaX",
"H1l3BLbJpm",
"rJel6IwA3Q",
"Ske4Ltaws7",
"iclr_2019_BJeWUs05KQ",
"iclr_2019_BJeWUs05KQ",
"iclr_2019_BJeWUs05KQ"
] |
iclr_2019_BJej72AqF7 | A Max-Affine Spline Perspective of Recurrent Neural Networks | We develop a framework for understanding and improving recurrent neural networks (RNNs) using max-affine spline operators (MASOs). We prove that RNNs using piecewise affine and convex nonlinearities can be written as a simple piecewise affine spline operator. The resulting representation provides several new perspectives for analyzing RNNs, three of which we study in this paper. First, we show that an RNN internally partitions the input space during training and that it builds up the partition through time. Second, we show that the affine slope parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input. Third, by carefully examining the MASO RNN affine mapping, we prove that using a random initial hidden state corresponds to an explicit L2 regularization of the affine parameters, which can mollify exploding gradients and improve generalization. Extensive experiments on several datasets of various modalities demonstrate and validate each of the above conclusions. In particular, using a random initial hidden states elevates simple RNNs to near state-of-the-art performers on these datasets. | accepted-poster-papers | While the reformulation of RNNs is not practical as it is missing sigmoids and tanhs that are common in LSTMs it does provide an interesting analysis of traditional RNNs and a technique that's novel for many in the ICLR community.
| train | [
"SkePFUZVam",
"HyeMiQWEaQ",
"S1eTiBWVpm",
"B1eby2B5n7",
"B1e0FnM93X",
"r1e_41DDhQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their careful reading and constructive suggestions. We agree that the MASO framework sheds new light on the inner workings of RNNs. We have made significant simplifications and revisions to the mathematical notation, particularly in Sections 1.1, 1.2, and 2, that should address most of yo... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"r1e_41DDhQ",
"B1eby2B5n7",
"B1e0FnM93X",
"iclr_2019_BJej72AqF7",
"iclr_2019_BJej72AqF7",
"iclr_2019_BJej72AqF7"
] |
iclr_2019_BJemQ209FQ | Learning to Navigate the Web | Learning in environments with large state and action spaces, and sparse rewards, can hinder a Reinforcement Learning (RL) agent’s learning through trial-and-error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where input vocabulary and number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. We approach the aforementioned problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when the expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction following tasks and trains the agent more effectively. We train DQN, a deep reinforcement learning agent, with the Q-value function approximated with a novel QWeb neural network architecture on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstration, achieving 100% success rate on several difficult environments.
| accepted-poster-papers | All reviewers (including those with substantial expertise in RL) were solid in their praise for this paper, which also tackles an interesting application that is much less well studied but deserves attention.
| train | [
"SylnSTaghX",
"Bklaz-qtRm",
"SJeSplqYR7",
"HJlCjlqKAX",
"BJexNkcF0Q",
"ryejZrK9h7",
"HkxIVs6PsQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"UPDATE:\n\nThank you to the authors for a comprehensive response. I have increased my score based on these changes. I apologize for the misunderstanding about ArXiV papers and indeed the authors are correct on that point. Thank you as well for reporting the learning speeds. As you mentioned, they confirm our i... | [
8,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_BJemQ209FQ",
"HkxIVs6PsQ",
"HJlCjlqKAX",
"SylnSTaghX",
"ryejZrK9h7",
"iclr_2019_BJemQ209FQ",
"iclr_2019_BJemQ209FQ"
] |
iclr_2019_BJfIVjAcKm | Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability | We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models - weight sparsity and so-called ReLU stability - that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4-13x speedup in verification times. An important feature of our methodology is its "universality," in the sense that it can be used with a broad range of training procedures and verification approaches.
| accepted-poster-papers | This paper introduced a concept called ReLU stability to motivate regularization and enable fast verification. Most of the analysis was presented empirically on two simple datasets and with low-performing models. I feel theoretical analysis and more comprehensive and realistic empirical studies would make the paper stronger. In general, the contribution of this paper is original and interesting.
| train | [
"rkgwBQQKCQ",
"HJlXjAodT7",
"rkxpILquA7",
"Bke0ohlmoQ",
"SklV2h91A7",
"BJlht5PcaX",
"Hkgl-jv5p7",
"rkefO5yq67",
"H1e9xkx5TQ",
"ryl5Ywpd6Q",
"ByxrBih_aX",
"H1lQYCj_6Q",
"H1gHm0i_pm",
"Skxfe0sdpX",
"Skg1L6iuTm",
"SylfYwH5hX",
"rkeN2o7FhQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank all reviewers and commenters for their suggestions on improving the manuscript. We have revised our submission based on the feedback we received, and uploaded our revision.",
"We thank the reviewer for their helpful comments. We are glad you found the paper pleasant to read!\n\nWe agree th... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2019_BJfIVjAcKm",
"SylfYwH5hX",
"SklV2h91A7",
"iclr_2019_BJfIVjAcKm",
"Hkgl-jv5p7",
"rkefO5yq67",
"H1e9xkx5TQ",
"ryl5Ywpd6Q",
"rkefO5yq67",
"ByxrBih_aX",
"H1gHm0i_pm",
"rkeN2o7FhQ",
"Skxfe0sdpX",
"Skg1L6iuTm",
"Bke0ohlmoQ",
"iclr_2019_BJfIVjAcKm",
"iclr_2019_BJfIVjAcKm"
] |
iclr_2019_BJfOXnActQ | Learning to Learn with Conditional Class Dependencies | Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning. Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies. We propose a meta-learning framework, Conditional class-Aware Meta-Learning (CAML), that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies. This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space. Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive results on the miniImageNet benchmark. | accepted-poster-papers | The reviewers think that incorporating class conditional dependencies into the metric space of a few-shot learner is a sufficiently good idea to merit acceptance. The performance isn’t necessarily better than the state-of-the-art approaches like LEO, but it is nonetheless competitive. One reviewer suggests incorporating a pre-training strategy to strengthen your results. In terms of experimental details, one reviewer pointed out that the embedding network architecture is quite a bit more powerful than the base learner and would like some additional justification for this. They would also like more detail on the computing the MAML gradients in the context of this method. Beyond this, please ensure that you have incorporated all of the clarifications that were required during the discussion phase. | train | [
"r1xVbbV9RQ",
"H1x2TeE90m",
"B1gPog4c0Q",
"Hyg7wl45AQ",
"HJxo10uFh7",
"ByxxsLKVn7",
"H1lyuO0foX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the very detailed and constructive comments.\n\n1. The motivation\n1.1 How the metric space is trained?\nThe metric space is trained in a pre-training step and it is not updated while training the base-learner. The embeddings obtained from the metric space is different from other popular pre-training... | [
-1,
-1,
-1,
-1,
6,
8,
4
] | [
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"H1lyuO0foX",
"ByxxsLKVn7",
"HJxo10uFh7",
"iclr_2019_BJfOXnActQ",
"iclr_2019_BJfOXnActQ",
"iclr_2019_BJfOXnActQ",
"iclr_2019_BJfOXnActQ"
] |
iclr_2019_BJfYvo09Y7 | Hierarchical Visuomotor Control of Humanoids | We aim to build complex humanoid agents that integrate perception, motor control, and memory. In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision. We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies. The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment. Supplementary video link: https://youtu.be/fBoir7PNxPk | accepted-poster-papers | A hierarchical method is presented for developing humanoid motion control,
using low-level control fragments, egocentric visual input, and recurrent high-level control.
It is likely the first demonstration of 3D humanoids learning to do memory-enabled tasks using only
proprioception and head-based ego-centric vision. The use of control fragments, as opposed
to mocap-clip-based skills, allows for finer-grained repurposing of pieces of motion, while
still allowing for mocap-based learning.
Weaknesses: It is largely a mashup of previously known results (R2). Caveat: this can be said for all research
at some sufficient level of abstraction. The motions are jerky when transitions happen between control fragments (R2, R3).
There are some concerns about the comparisons against other methods; the authors note
that the alternatives are either not directly comparable, i.e., solving a different problem, or are implicitly
contained in some of the comparisons that are performed in the paper.
Overall, the reviewers and AC are in broad agreement regarding the strengths and weaknesses of the paper.
The AC believes that the work will be of broad interest. Demonstrating memory-enabled, vision-driven,
mocap-imitating skills is a broad step forward. The paper also provides a further datapoint as
to which combinations of methods work well, and some of the specific features required to make them work.
The paper could acknowledge motion quality artifacts, as noted by the reviewers and
in the online discussion. We suggest including [Peng et al. 2017] as some of the most relevant related HRL humanoid control work, as per the reviews & discussion.
| test | [
"r1gObOE-Am",
"HJg6uB8Pam",
"HkeENL21C7",
"S1xdTHWs6m",
"BketCjOKp7",
"HJxPPo_FTm",
"rJlfZ-ut6m",
"Sygyex8D67",
"SJxBek8vTX",
"r1lL-jzbTm",
"Hke4Tu33nX",
"BkgV2s7537",
"r1e-UMBoiQ",
"BkltQstIi7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The authors claim that this method improves upon the earlier work by substantially decreasing the amount of manual curation needed, however I still cannot see any real difference in the level of manual work required. This method as well as the earlier work (Peng et al. 2017 and Peng et al 2018) use existing pre-cl... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
-1,
-1
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"BketCjOKp7",
"iclr_2019_BJfYvo09Y7",
"iclr_2019_BJfYvo09Y7",
"iclr_2019_BJfYvo09Y7",
"HJxPPo_FTm",
"HJg6uB8Pam",
"r1e-UMBoiQ",
"BkgV2s7537",
"Hke4Tu33nX",
"Hke4Tu33nX",
"iclr_2019_BJfYvo09Y7",
"iclr_2019_BJfYvo09Y7",
"BkltQstIi7",
"iclr_2019_BJfYvo09Y7"
] |
iclr_2019_BJg4Z3RqF7 | Unsupervised Adversarial Image Reconstruction | We address the problem of recovering an underlying signal from lossy, inaccurate observations in an unsupervised setting. Typically, we consider situations where there is little to no background knowledge on the structure of the underlying signal, no access to signal-measurement pairs, nor even unpaired signal-measurement data. The only available information is provided by the observations and the measurement process statistics. We cast the problem as finding the \textit{maximum a posteriori} estimate of the signal given each measurement, and propose a general framework for the reconstruction problem. We use a formulation of generative adversarial networks, where the generator takes as input a corrupted observation in order to produce realistic reconstructions, and add a penalty term tying the reconstruction to the associated observation. We evaluate our reconstructions on several image datasets with different types of corruptions. The proposed approach yields better results than alternative baselines, and comparable performance with model variants trained with additional supervision. | accepted-poster-papers | This paper proposes a GAN-based method to recover images from a noisy version of it. The paper builds upon existing works on AmbientGAN and CS-GAN. By combining the two approaches, the work finds a new method that performs better than existing approaches.
The paper clearly has new and interesting ideas which have been executed well. Two of the reviewers have voted in favour of acceptance, with one of the reviewers providing an extensive and detailed review. The third reviewer, however, has some doubts which were not completely resolved after the rebuttal.
Upon reading the work myself, I am convinced that this will be interesting to the community. However, I recommend that the authors take the comments of Reviewer 2 into account and do whatever it takes to resolve the issues pointed out by the reviewer.
During the review process, another related work was found to be very similar to the approach discussed in this work. This work should be cited in the paper, as a prior work that the authors were unaware of.
https://arxiv.org/abs/1812.04744
Please also discuss any new insights this work offers on top of this existing work.
Given that the above suggestions are taken into account, I recommend accepting this paper.
| test | [
"HkgUbQ_WgN",
"BJguc0OE0Q",
"BJlcvR_VRm",
"H1xoXRON0m",
"H1xo8T_4RQ",
"SklY001CnQ",
"BylSgJYp2Q",
"B1lziIvI3m"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have released the code used in this paper : https://github.com/UNIR-Anonymous/UNIR",
"Thank you for your feedback. We have taken note of your comments and have been actively working to take them into account.\nYou raised two main questions , one concerning the measurement process and the second one concerning... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2019_BJg4Z3RqF7",
"SklY001CnQ",
"BylSgJYp2Q",
"B1lziIvI3m",
"iclr_2019_BJg4Z3RqF7",
"iclr_2019_BJg4Z3RqF7",
"iclr_2019_BJg4Z3RqF7",
"iclr_2019_BJg4Z3RqF7"
] |
iclr_2019_BJg9DoR9t7 | Max-MIG: an Information Theoretic Approach for Joint Learning from Crowds | Eliciting labels from crowds is a potential way to obtain large labeled data. Despite a variety of methods developed for learning from crowds, a key challenge remains unsolved: \emph{learning from crowds without knowing the information structure among the crowds a priori, when some people of the crowds make highly correlated mistakes and some of them label effortlessly (e.g. randomly)}. We propose an information theoretic approach, Max-MIG, for joint learning from crowds, with a common assumption: the crowdsourced labels and the data are independent conditioning on the ground truth. Max-MIG simultaneously aggregates the crowdsourced labels and learns an accurate data classifier. Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth. To the best of our knowledge, this is the first algorithm that solves the aforementioned challenge of learning from crowds. In addition to the theoretical validation, we also empirically show that our algorithm achieves the new state-of-the-art results in most settings, including the real-world data, and is the first algorithm that is robust to various information structures. Codes are available at https://github.com/Newbeeer/Max-MIG .
| accepted-poster-papers | This paper proposes an interesting approach to leveraging crowd-sourced labels, along with an ML model learned from the data itself.
The reviewers were unanimous in their vote to accept. | train | [
"rygf8BMF37",
"BklLgntDhX",
"B1g5uv1N07",
"B1xLvvU0pQ",
"B1e5ewz9pQ",
"Syxqx4gdp7",
"SJg2zPgOpQ",
"SkgHhklOTQ",
"HylQebguTX",
"HygvOxIC3Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Update after feedback: I would like to thank the authors for their detailed answers, it would be great to see some revisions in the paper also though (except new experimental results).\nEspecially thank you for providing details of a training procedure which I was missing in the initial draft. I hope to see them i... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_BJg9DoR9t7",
"iclr_2019_BJg9DoR9t7",
"rygf8BMF37",
"B1e5ewz9pQ",
"SJg2zPgOpQ",
"rygf8BMF37",
"BklLgntDhX",
"iclr_2019_BJg9DoR9t7",
"HygvOxIC3Q",
"iclr_2019_BJg9DoR9t7"
] |
iclr_2019_BJgK6iA5KX | AutoLoss: Learning Discrete Schedule for Alternate Optimization | Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters. Appropriately scheduling the optimization of a task objective or a set of parameters is usually crucial to the quality of convergence. In this paper, we present AutoLoss, a meta-learning framework that automatically learns and determines the optimization schedule. AutoLoss provides a generic way to represent and learn the discrete optimization schedule from metadata, allows for a dynamic and data-driven schedule in ML problems that involve alternating updates of different parameters or from different loss objectives.
We apply AutoLoss on four ML tasks: d-ary quadratic regression, classification using a multi-layer perceptron (MLP), image generation using GANs, and multi-task neural machine translation (NMT). We show that the AutoLoss controller is able to capture the distribution of better optimization schedules that result in higher quality of convergence on all four tasks. The trained AutoLoss controller is generalizable -- it can guide and improve the learning of a new task model with different specifications, or on different datasets. | accepted-poster-papers | The paper suggests using meta-learning to tune the optimization schedule of alternative optimization problems. All of the reviewers agree that the paper is worthy of publication at ICLR. The authors have engaged with the reviewers and improved the paper since the submission. I asked the authors to address the rest of the comments in the camera ready version. | train | [
"BJgG8g-Kx4",
"SkxBNIaEeV",
"Bke9KJljRX",
"B1lSm_S9AX",
"BkgdU-e16m",
"SJl_vpqYRQ",
"rkgn_XcxRm",
"ByxRxkQFpX",
"BJlq-BGFa7",
"rylM2EzFpX",
"ByeGCYgKTX",
"BklC6-auT7",
"rked778JaQ",
"HklV1vNp2m"
] | [
"author",
"public",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for pointing us to your work [1], which studies the similar topic concurrently with us. Both works focus on designing methods to introducing dynamicas into objectives/loss functions. Specifically, [1] tries to directly cast the objective function as a a learnable neural network (learned by measuring the ... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"SkxBNIaEeV",
"iclr_2019_BJgK6iA5KX",
"SJl_vpqYRQ",
"rkgn_XcxRm",
"iclr_2019_BJgK6iA5KX",
"ByeGCYgKTX",
"rylM2EzFpX",
"iclr_2019_BJgK6iA5KX",
"rked778JaQ",
"rked778JaQ",
"BkgdU-e16m",
"HklV1vNp2m",
"iclr_2019_BJgK6iA5KX",
"iclr_2019_BJgK6iA5KX"
] |
iclr_2019_BJgLg3R9KQ | Learning what and where to attend | Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived "top-down" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than "bottom-up" saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers. | accepted-poster-papers | This paper presents a large-scale annotation of human-derived attention maps for ImageNet dataset. This annotation can be used for training more accurate and more interpretable attention models (deep neural networks) for object recognition. All reviewers and AC agree that this work is clearly of interest to ICLR and that extensive empirical evaluations show clear advantages of the proposed approach in terms of improved classification accuracy. In the initial review, R3 put this paper below the acceptance bar requesting major revision of the manuscript and addressing three important weaknesses: (1) no analysis on interpretability; (2) no details about statistical analysis; (3) design choices of the experiments are not motivated. 
Pleased to report that based on the author response, the reviewer was convinced that the most crucial concerns have been addressed in the revision. R3 subsequently increased the assigned score to 6. As a result, the paper is no longer in the borderline bucket.
The specific recommendation for the authors is therefore to further revise the paper, taking into account a better split of the material between the main paper and its appendix. The additional experiments conducted during the rebuttal (on interpretability) would be better included in the main text, as well as the explanation regarding the statistical analysis.
| train | [
"r1eVYyoh3Q",
"H1gDHhddR7",
"Sklu7PO_C7",
"Bke5rI__CX",
"ryg0WL__07",
"HyeS0BduCm",
"HJxxoO1ram",
"S1xOmfJSTQ",
"rylRhZkSam",
"S1gAO-ySTQ",
"S1gU756h3X",
"HygHbsciim"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nSUMMARY\n\nThis paper argues that most recent gains in visual recognition are due to the use of visual attention mechanisms in deep convolutional networks (DCNs). According to the authors; the networks learn where to focus through a weak form of supervision based on image class labels. This paper introduces a da... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_BJgLg3R9KQ",
"HyeS0BduCm",
"HygHbsciim",
"r1eVYyoh3Q",
"S1gU756h3X",
"iclr_2019_BJgLg3R9KQ",
"HygHbsciim",
"r1eVYyoh3Q",
"S1gU756h3X",
"iclr_2019_BJgLg3R9KQ",
"iclr_2019_BJgLg3R9KQ",
"iclr_2019_BJgLg3R9KQ"
] |
iclr_2019_BJgRDjR9tQ | ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS | Robust estimation under Huber's ϵ-contamination model has become an important topic in statistics and theoretical computer science. Rate-optimal procedures such as Tukey's median and other estimators based on statistical depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GAN, we show that these depth functions that lead to rate-optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning. This connection opens the door of computing robust estimators using tools developed for training GANs. In particular, we show that a JS-GAN that uses a neural network discriminator with at least one hidden layer is able to achieve the minimax rate of robust mean estimation under Huber's ϵ-contamination model. Interestingly, the hidden layers of the neural net structure in the discriminator class are shown to be necessary for robust estimation. | accepted-poster-papers |
* Strengths
This paper presents a very interesting connection between GANs and robust estimation in the presence of corrupted training data. The conceptual ideas are novel and can likely be extended in many further directions. I would not be surprised if this opens up a new line of research.
* Weaknesses
The paper is poorly written. Due to disagreement among the reviewers and my interest in the topic, I read the paper in detail myself. I think it would be difficult for a non-expert to understand the key ideas and I strongly encourage the authors to carefully revise the paper to reach a broader audience and highlight the key insights. Additionally, the experiments are only on toy data.
* Discussion
One of the reviewers was concerned about the lack of efficiency guarantees for the proposed algorithm (indeed, the algorithm requires training GANs which are currently beyond the reach of theory and finicky in practice). That reviewer points to the fact that most papers in the robustness literature are concerned with computational efficiency and is concerned that ignoring this sidesteps one of the key challenges. The reviewer is also concerned about the restriction to parametric or nearly-parametric families (e.g. Gaussians and elliptical distributions). Other reviewers were more positive and did not see these as major issues.
* Decision
In my opinion, the lack of efficiency guarantees is not a huge issue, as the primary contribution of the paper is pointing out a non-obvious conceptual connection between two literatures. The restriction to parametric families is more concerning, but it seems possible this could be removed with further developments. The main reason for accepting the paper (despite concerns about the writing) is the importance of the conceptual connection. I think this connection is likely to lead to a new line of research and would like to get it out there as soon as possible.
* Comments
Despite the accept decision, I again urge the authors to improve the quality of exposition to ensure that a large audience can appreciate the ideas. | test | [
"rkx90sfIJ4",
"rklmCB5rk4",
"rkef55Kc0Q",
"SyeWL0ScAm",
"B1xPE0H507",
"rkx0ascORX",
"SJgijaKR6Q",
"S1lJt6YR6X",
"ryeV9nKRpQ",
"ByluFd5mpm",
"rkxfaWqihQ",
"B1e-NIOq3Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the question. \n\nYes, the statement is a bit confusing. In the formulation $\\min_Q\\max_{\\tilde{Q}}$, notice that $\\min_Q$ is before $\\max_{\\tilde{Q}}$. Thus, the class that we maximize over $\\tilde{Q}$ is allowed to depend on $Q$. To be specific, for example in location estimation (Propositio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"rklmCB5rk4",
"iclr_2019_BJgRDjR9tQ",
"SJgijaKR6Q",
"rkx0ascORX",
"rkx0ascORX",
"S1lJt6YR6X",
"ByluFd5mpm",
"rkxfaWqihQ",
"B1e-NIOq3Q",
"iclr_2019_BJgRDjR9tQ",
"iclr_2019_BJgRDjR9tQ",
"iclr_2019_BJgRDjR9tQ"
] |
iclr_2019_BJg_roAcK7 | INVASE: Instance-wise Variable Selection using Neural Networks | The advent of big data brings with it data with more and more dimensions and thus a growing need to be able to efficiently select which features to use for a variety of problems. While global feature selection has been a well-studied problem for quite some time, only recently has the paradigm of instance-wise feature selection been developed. In this paper, we propose a new instance-wise feature selection method, which we term INVASE. INVASE consists of 3 neural networks, a selector network, a predictor network and a baseline network which are used to train the selector network using the actor-critic methodology. Using this methodology, INVASE is capable of flexibly discovering feature subsets of a different size for each instance, which is a key limitation of existing state-of-the-art methods. We demonstrate through a mixture of synthetic and real data experiments that INVASE significantly outperforms state-of-the-art benchmarks. | accepted-poster-papers | This manuscript proposes a new algorithm for instance-wise feature selection. To this end, the selection is achieved by combining three neural networks trained via an actor-critic methodology. The manuscript highlights that beyond prior work, this strategy enables the selection of a different number of features for each example. Encouraging results are provided on simulated data in comparison to related work, and on real data.
The reviewers and AC note issues with the evaluation of the proposed method. In particular, evaluation on computer vision and natural language processing datasets may have further highlighted the performance of the proposed method. Further, while technically innovative, the approach is closely related to prior work (L2X) -- limiting the novelty.
| val | [
"SkgNuvW707",
"BkehMR6shm",
"Byl06-RnTQ",
"H1lgbw8K3X",
"S1xeFW7oT7",
"SylXL-XspQ",
"r1eVBWms6Q",
"Hke-ey7oTQ",
"rkxFiB2Tom"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"I would like to thank the authors for clarifying my concerns in details, especially for the first point. I think this is a straightforward idea that relaxes the need for a predefined k in L2X and has good performance. I have updated my score accordingly.",
"This paper proposes an instance-wise feature selection ... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"Hke-ey7oTQ",
"iclr_2019_BJg_roAcK7",
"r1eVBWms6Q",
"iclr_2019_BJg_roAcK7",
"rkxFiB2Tom",
"H1lgbw8K3X",
"H1lgbw8K3X",
"BkehMR6shm",
"iclr_2019_BJg_roAcK7"
] |
iclr_2019_BJgklhAcK7 | Meta-Learning with Latent Embedding Optimization | Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space. | accepted-poster-papers | This work builds on MAML by (1) switching from a single underlying set of parameters to a distribution in a latent lower-dimensional space, and (2) conditioning the initial parameter of each subproblem on the input data.
All reviewers agree that the solid experimental results are impressive, with careful ablation studies to show how conditional parameter generation and optimization in the lower-dimensional space both contribute to the performance. While there were some initial concerns on clarity and experimental details, we feel the revised version has addressed those in a satisfying way. | train | [
"BJx4Oa-Be4",
"BJl60tXjJN",
"SkeNDEQoJN",
"H1gS9SOFk4",
"HJlZ5J93hm",
"rkgrGgvgRX",
"BygPCp8lRQ",
"H1ec3GPg0m",
"ryxI0Wwx0X",
"S1xuqePgAQ",
"SJler0LgAQ",
"ryeVsW1Ha7",
"r1gYWGRjnQ",
"ryeYsYpe2m",
"SyxZkTAacm",
"B1gQ7PdK5Q"
] | [
"public",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Hi, \n\nMay I ask what happens if the relation net is removed? How much will it affect the performance?",
"Thanks for your constructive comments! We are happy to address any remaining concerns.",
"Thank you for helping us improve the paper! We are in the process of open-sourcing our code and embeddings.",
"T... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
-1,
-1
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1
] | [
"rkgrGgvgRX",
"rkgrGgvgRX",
"H1gS9SOFk4",
"BygPCp8lRQ",
"iclr_2019_BJgklhAcK7",
"r1gYWGRjnQ",
"HJlZ5J93hm",
"iclr_2019_BJgklhAcK7",
"ryeYsYpe2m",
"ryeVsW1Ha7",
"BygPCp8lRQ",
"r1gYWGRjnQ",
"iclr_2019_BJgklhAcK7",
"iclr_2019_BJgklhAcK7",
"B1gQ7PdK5Q",
"iclr_2019_BJgklhAcK7"
] |
iclr_2019_BJgqqsAct7 | Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach | Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be ``compressed to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network. | accepted-poster-papers | The paper combines PAC-Bayes bound with network compression to derive a generalization bound for large-scale neural nets such as ImageNet. The approach is novel and interesting and the paper is well-written. The authors provided detailed replies and improvements in response to reviewers questions, and all reviewers agree this is a very nice contribution. | train | [
"HklR-CdX67",
"BygfLTdX67",
"BJl6K3uXa7",
"BklTXaUjnm",
"H1gYrD3Sn7",
"SkxzLGFojm",
"ryll1c-6oQ",
"HJxGoDyTjm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Thank you for your careful reading and detailed questions and comments. .\n\n0. We have added a remark following Theorem 2.1 noting that this form is relatively complicated, explaining the reason we use it, and providing references to a unified treatment of the different PAC-Bayes bounds. In particular, Laviolett... | [
-1,
-1,
-1,
6,
6,
8,
-1,
-1
] | [
-1,
-1,
-1,
4,
5,
4,
-1,
-1
] | [
"BklTXaUjnm",
"SkxzLGFojm",
"H1gYrD3Sn7",
"iclr_2019_BJgqqsAct7",
"iclr_2019_BJgqqsAct7",
"iclr_2019_BJgqqsAct7",
"HJxGoDyTjm",
"iclr_2019_BJgqqsAct7"
] |
iclr_2019_BJl6AjC5F7 | Learning to Represent Edits | We introduce the problem of learning distributed representations of edits. By combining a
"neural editor" with an "edit encoder", our models learn to represent the salient
information of an edit and can be used to apply edits to new inputs.
We experiment on natural language and source code edit data. Our evaluation yields
promising results that suggest that our neural network models learn to capture
the structure and semantics of edits. We hope that this interesting task and
data source will inspire other researchers to work further on this problem. | accepted-poster-papers | This paper investigates learning to represent edit operations for two domains: text and source code. The primary contributions of the paper are in the specific task formulation and the new dataset (for source code edits). The technical novelty is relatively weak.
Pros:
The paper introduces a new dataset for source code edits.
Cons:
Reviewers raised various concerns about human evaluation and many other experimental details, most of which the rebuttal has successfully addressed. As a result, R3 updated their score from 4 to 6.
Verdict:
Possible weak accept. None of the remaining issues after the rebuttal is a serious deal breaker (e.g., task simplification by assuming knowledge of when and where the edit must be applied, which simplifies the real-world application of automatic edits). However, the overall impact and novelty of the paper are relatively weak. | val | [
"B1l0SBlOJE",
"B1xPkXivJV",
"H1eeIsRo3Q",
"Sye616mI0m",
"HJgMTr4LAm",
"HkebczdFCm",
"HklnK9E8RX",
"rkeXF8NUCQ",
"H1gbD8dQ6Q",
"Hyg1fVtXaQ",
"rJeUUtum6X",
"r1g6GvdXp7",
"HylSf_0a27",
"HkeCSi493X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for clarifying! We certainly agree that this needs to be stated as prominently as possible and we will make changes to state this more prominently and clearly in the next version of the paper.",
"Thank you for the updates!\n\nIn agreement with R3's concerns, I do think it's important to state (prominently... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"B1xPkXivJV",
"rkeXF8NUCQ",
"iclr_2019_BJl6AjC5F7",
"iclr_2019_BJl6AjC5F7",
"rJeUUtum6X",
"HJgMTr4LAm",
"r1g6GvdXp7",
"Hyg1fVtXaQ",
"iclr_2019_BJl6AjC5F7",
"HkeCSi493X",
"H1eeIsRo3Q",
"HylSf_0a27",
"iclr_2019_BJl6AjC5F7",
"iclr_2019_BJl6AjC5F7"
] |
iclr_2019_BJl6TjRcY7 | Neural Probabilistic Motor Primitives for Humanoid Control | We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA ) summarizing our results. | accepted-poster-papers | Strengths: One-shot physics-based imitation at a scale and with efficiency not seen before.
Clear video, paper, and related work.
Weaknesses described include: the description of a secondary contribution (LFPC)
takes up too much space (R1,4); results are not compelling (R1,4); prior art in graphics and robotics (R2,6);
concerns about the potential limitations of the linearization used by LFPC.
The original reviews are negative overall (6,3,4). The authors have posted detailed replies.
R1 has posted a followup, standing by their score. We have not heard more from R2 and R3.
The AC has read the paper, watched the video, and read all the reviews.
Based on expertise in this area, the AC endorses the author's responses to R1 and R2.
Being able to compare LFPC to more standard behavior cloning is a valuable data point for the community;
there is value in testing simple and efficient models first.
The AC identifies the following recent (Nov 2018) paper as being the closest work, which is not identified by the authors or the reviewers. The approach being proposed in the submitted paper demonstrates equal-or-better scalability,
learning efficiency, and motion quality, and includes examples of learned high-level behaviors.
An elaboration on HL/LL control: the DeepLoco work also learns mocap-based LL control with learned HL behaviors,
although with a more dedicated structure.
Physics-based motion capture imitation with deep reinforcement learning
https://dl.acm.org/citation.cfm?id=3274506
Overall, the AC recommends this paper to be accepted as a paper of interest to ICLR.
This does partially discount R3 and R1, who may not have worked as directly on these specific problems before.
The AC is rating the confidence as "not sure" to flag this for the program committee chairs, in light of the fact that this discounts the R1 and R3 reviews.
The AC is quite certain in terms of the technical contributions of the paper.
| train | [
"HyxwKQnF1N",
"ryevUebMRm",
"SygmMXpJAm",
"Sylz2k9o6Q",
"SJlewPYtpX",
"HJlsxPYYam",
"HkllTLYYaQ",
"HJxBuUYY6m",
"S1eS0BTpnX",
"S1gTWJZ6hX",
"HJeCcTtms7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are reaching the end of the discussion period.\nThere remain mixed opinions on the paper.\nAny further thoughts from R2 and R3? Stating pros + cons and summarizing any change in opinion would be very useful.\nThe main contribution is centred around one-shot imitation as well as reuse of low-level motor behavior... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7",
"HJeCcTtms7",
"S1gTWJZ6hX",
"S1eS0BTpnX",
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7",
"iclr_2019_BJl6TjRcY7"
] |
iclr_2019_BJlgNh0qKQ | Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder | Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings). To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing. As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters. Our method (Differentiable Perturb-and-Parse) relies on differentiable dynamic programming over stochastically perturbed edge scores. We demonstrate effectiveness of our approach with experiments on English, French and Swedish. | accepted-poster-papers | This paper proposes a method for unsupervised learning that uses a latent variable generative model for semi-supervised dependency parsing. The key learning method consists of making perturbations to the logits going into a parsing algorithm, to make it possible to sample within the variational auto-encoder framework. Significant gains are found through semi-supervised learning.
The largest reviewer concern was that the baselines were potentially not strong enough, as significantly better numbers have been reported in previous work, which may have the effect of over-stating the perceived utility.
Overall though it seems that the reviewers appreciated the novel solution to an important problem, and in general would like to see the paper accepted. | val | [
"r1gciHq_pQ",
"SJlN1H5up7",
"ByxR94q_p7",
"SylgfKWC27",
"HJlknmEq2X",
"Bkgmgia43Q",
"rJgRXtqO3Q",
"HJebIHYwh7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Thank you for your comments and for finding the method novel and interesting.\n\nWe would like first to clarify that we are not making claiming that our method is appropriate in the high resource scenario (i.e. full in-domain English PTB parsing). However, large datasets are available only for a few languages, so... | [
-1,
-1,
-1,
8,
7,
5,
-1,
-1
] | [
-1,
-1,
-1,
4,
3,
3,
-1,
-1
] | [
"Bkgmgia43Q",
"HJlknmEq2X",
"SylgfKWC27",
"iclr_2019_BJlgNh0qKQ",
"iclr_2019_BJlgNh0qKQ",
"iclr_2019_BJlgNh0qKQ",
"HJebIHYwh7",
"iclr_2019_BJlgNh0qKQ"
] |
iclr_2019_BJluy2RcFm | Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs | We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods. | accepted-poster-papers | AR1 is concerned about whether higher-order interactions are modeled explicitly and if pi-SGD convergence conditions can be easily satisfied. AR2 is concerned that basic JP has been conceptually discussed in the literature and \pi-SGD is not novel because it was realized by Hamilton et al. (2017) and Moore & Neville (2017). However, the authors provide some theoretical analysis for this setting in contrast to prior works. AR1 is also concerned that the effect of higher-order information has not been 'disentangled' experimentally from order invariance. AR4 is concerned about poor performance of higher order Janossy pooling compared to k =1 case and asks about the number of hyper-parameters. The authors showed a harder task of computing the variance of a sequence of numbers in response.
On balance, despite justified concerns of AR2 about novelty and AR1 about experimental verification, the work appears to tackle an interesting topic. Reviewers find the problem interesting and see some hope in the proposed solutions. The AC therefore recommends this paper be accepted at ICLR. The authors are asked to update the manuscript to honestly reflect the weaknesses expressed by the reviewers, e.g. the issue of whether the effects of 'higher-order information' have been 'disentangled' from order invariance. | train | [
"B1ey5ZDp2X",
"Skgp6BtKAm",
"BklEWNrmC7",
"SJxGsKnlA7",
"r1x2XvP107",
"B1gpO5L10X",
"B1eC2L8J0m",
"B1gYh_tiTm",
"HJlOsS79pm",
"HklWWHbbT7",
"S1xNEC4q2X"
] | [
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors presented a new pooling method called Janossy Pooling (JP), which is designed to better capture high-order information by addressing two limitations of existing works - fixed pooling function and fixed-size inputs. The studied problem is important and the motivation is clear, where the i... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_BJluy2RcFm",
"BklEWNrmC7",
"SJxGsKnlA7",
"iclr_2019_BJluy2RcFm",
"B1ey5ZDp2X",
"S1xNEC4q2X",
"HklWWHbbT7",
"HJlOsS79pm",
"S1xNEC4q2X",
"iclr_2019_BJluy2RcFm",
"iclr_2019_BJluy2RcFm"
] |
iclr_2019_BJlxm30cKm | An Empirical Study of Example Forgetting during Deep Neural Network Learning | Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a ``forgetting event'' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance. | accepted-poster-papers | This paper is an analysis of the phenomenon of example forgetting in deep neural net training. The empirical study is the first of its kind and features convincing experiments with architectures that achieve near state-of-the-art results. It shows that a portion of the training set can be seen as support examples. The reviewers noted weaknesses such as in the measurement of the forgetting itself and the training regimen. However, they agreed that their concerns were addressed by the rebuttal. They also noted that the paper is not forthcoming with insights, but found enough value in the systematic empirical study it provides. | test | [
"S1gAKB7QC7",
"rklya7Xm0Q",
"rkl2xZmXCQ",
"SyxkXUThiQ",
"SkgKResg0X",
"rygcHkH53m",
"S1lWe2ceRQ",
"ryeU8m5xRQ",
"Skx4CMsha7",
"BJgqy28cTQ",
"ryxipBQcpX",
"B1eemyXqp7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"Thanks for your review and suggestions, your suggested additional experiments have strengthened the paper and we will acknowledge them in the paper, if accepted. Applying some of our results towards solving catastrophic forgetting is one of the promising directions we hope to investigate in the future. One of the ... | [
-1,
-1,
-1,
7,
-1,
8,
-1,
-1,
-1,
9,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
5,
-1,
-1
] | [
"SkgKResg0X",
"BJgqy28cTQ",
"S1lWe2ceRQ",
"iclr_2019_BJlxm30cKm",
"ryxipBQcpX",
"iclr_2019_BJlxm30cKm",
"ryeU8m5xRQ",
"Skx4CMsha7",
"B1eemyXqp7",
"iclr_2019_BJlxm30cKm",
"SyxkXUThiQ",
"rygcHkH53m"
] |
iclr_2019_BJx0sjC5FX | RNNs implicitly implement tensor-product representations | Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies). Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations (TPRs; Smolensky, 1990), which additively combine tensor products of vectors representing roles (e.g., sequence positions) and vectors representing fillers (e.g., particular words). To test this hypothesis, we introduce Tensor Product Decomposition Networks (TPDNs), which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated by TPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations. | accepted-poster-papers | AR1 asks that the paper be made more standalone and easier to read. As this comment comes from a reviewer who is very experienced in tensor models, it is highly recommended that the authors make further efforts to make the paper easier to follow. 
AR2 is concerned about the manually crafted role schemes and alignment discrepancy of results between these schemes and RNNs. To this end, the authors hypothesized further reasons as to why this discrepancy occurs. AC encourages authors to make further efforts to clarify this point without overstating the ability of tensors to model RNNs (it would be interesting to see where these schemes and RNN differ). Lastly, AR3 seeks more clarifications on contributions.
While the paper is not ground breaking, it offers some starting point on relating tensors and RNNs. Thus, AC recommends an accept. Kindly note that tensor outer products have been used heavily in computer vision, i.e.:
- Higher-Order Occurrence Pooling for Bags-of-Words: Visual Concept Detection by Koniusz et al. (e.g. section 3 considers a bi-modal outer tensor product for combining multiple sources: one source can be considered a filter, another as a role (similar to Smolensky et al., 1990), e.g. a spatial grid number refining the local role of a visual word. This is further extended to multi-modal cases (multiple filter or role modes etc.) )
- Multilinear image analysis for facial recognition (e.g. so called tensor-faces) by Vasilescu et al.
- Multilinear independent components analysis by Vasilescu et al.
- Tensor decompositions for learning latent variable models by Anandkumar et al.
Kindly make connections to these works in your final draft (and to more prior works).
| train | [
"H1gY4uvYJE",
"SklIQYXqCQ",
"HyxC1Km907",
"SkeR9OQ50X",
"rJl1E7lHTX",
"r1eg-zgrT7",
"rkgLvpJraX",
"S1x9a0e2hX",
"S1l7y8esh7",
"HJgWEWbPnQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have created an anonymized webpage with interactive demos to accompany this paper. The page can be found here:\nhttps://tpdn-iclr.github.io/tpdn-demo/tpr_demo.html",
"Thank you again for the suggestions. We have uploaded a new version of the paper that incorporates the changes discussed in our response.",
"... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2019_BJx0sjC5FX",
"rJl1E7lHTX",
"r1eg-zgrT7",
"rkgLvpJraX",
"HJgWEWbPnQ",
"S1l7y8esh7",
"S1x9a0e2hX",
"iclr_2019_BJx0sjC5FX",
"iclr_2019_BJx0sjC5FX",
"iclr_2019_BJx0sjC5FX"
] |
iclr_2019_BJxgz2R9t7 | Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach | Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method. | accepted-poster-papers | This paper introduces a new graph neural network architecture designed to learn to solve Circuit SAT problems, a fundamental problem in computer science. The key innovation is the ability to to use the DAG structure as an input, as opposed to typical undirected (factor graph style) representations of SAT problems. The reviewers appreciated the novelty of the approach as well as the empirical results provided that demonstrate the effectiveness of the approach. Writing is clear. While the comparison with NeuroSAT is interesting and useful, there is no comparison with existing SAT solvers which are not based on learning methods. So it is not clear how big the gap with state-of-the-art is. Overall, I recommend acceptance, as the results are promising and this could inspire other researchers working on neural-symbolic approaches to search and optimization problems. | train | [
"BkekmsMohm",
"HklvKK14aX",
"Skx8EY1Nam",
"rJgGgF1V6Q",
"SJg8puyN6m",
"HyeLzuJVTX",
"rJxGBgb527",
"HyevmThdhX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a graph neural network architecture that is designed to use the DAG structure in the input to learn to solve Circuit SAT problems. Unlike graph neural nets for undirected graphs, the proposed network propagates information according to the edge directions, using a deep sets representation to agg... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_BJxgz2R9t7",
"HyevmThdhX",
"rJxGBgb527",
"BkekmsMohm",
"BkekmsMohm",
"iclr_2019_BJxgz2R9t7",
"iclr_2019_BJxgz2R9t7",
"iclr_2019_BJxgz2R9t7"
] |
iclr_2019_BJxh2j0qYm | Dynamic Channel Pruning: Feature Boosting and Suppression | Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss. | accepted-poster-papers | The authors propose a dynamic inference technique for accelerating neural network prediction with minimal accuracy loss. The method is simple and effective. The paper is clear and easy to follow. However, the real speedup on CPU/GPU is not demonstrated beyond the theoretical FLOPs reduction. Reviewers are also concerned that the idea of dynamic channel pruning is not novel. The evaluation is on fairly old networks. | train | [
"BJlgEOGIl4",
"SJgBir11l4",
"H1g7fwG3JE",
"SkgSXSGny4",
"B1l4hUboJE",
"B1e6h0GUy4",
"Bkgib72i2X",
"rJxv5KuXRm",
"SJevI4GXRQ",
"Bkx2UnWm0m",
"rye68ibXCm",
"rklHZy28TX",
"S1euMaPJ67",
"SJlk4ADkaQ",
"B1eUMSd16X"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"We tested VGG-16 and ResNet-18 with FBS against their respective baselines, the experiments were repeated 1000 times and we recorded the average wall-clock time results for each model.\n\nThe VGG-16 baseline observed on average 520.80 ms for each inference. FBS was applied to VGG-16 and reduced the amount of comp... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
-1,
-1,
-1,
6,
-1,
-1,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1,
-1,
3,
-1,
-1,
4
] | [
"SJgBir11l4",
"SkgSXSGny4",
"rJxv5KuXRm",
"B1l4hUboJE",
"iclr_2019_BJxh2j0qYm",
"SJevI4GXRQ",
"iclr_2019_BJxh2j0qYm",
"iclr_2019_BJxh2j0qYm",
"Bkgib72i2X",
"B1eUMSd16X",
"rklHZy28TX",
"iclr_2019_BJxh2j0qYm",
"Bkgib72i2X",
"Bkgib72i2X",
"iclr_2019_BJxh2j0qYm"
] |
iclr_2019_BJxhijAcY7 | signSGD with Majority Vote is Communication Efficient and Fault Tolerant | Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines. As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable. The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults. We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD. Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote. This algorithm uses 32x less communication per iteration than full-precision, distributed SGD. Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct. Aggregating sign gradients by majority vote means that no individual worker has too much power. We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially. The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate. On the practical side, we built our distributed training system in Pytorch. Benchmarking against the state of the art collective communications library (NCCL), our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines. | accepted-poster-papers | The reviewers noted that the paper has undergone many revisions and raised concerns about the content. They encourage further improving the experimental section and strengthening the message of the paper. | test | [
"SkgZHKcaJE",
"B1eUDnU037",
"HkgS6925hX",
"r1lMfYylTQ",
"r1lxIcJxpm",
"SJg-hh75C7",
"SJg5q27qAX",
"r1g4OhQqRQ",
"Byx4HczqC7",
"S1eaXqJgpX",
"SygCGdygTQ",
"BJezmXgjt7",
"r1eEYTEq2m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Dear AC and AnonReviewer1,\n\nThe reviewers’ scores show a consensus to accept. Still, AnonReviewer1 raises important points that we want to address here.\n\n1. QSGD precision. We agree, thanks for pointing it out. We are running experiments on 2 and 4bit QSGD and will add these to the paper.\n\n2. Bulyan. We disa... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_BJxhijAcY7",
"iclr_2019_BJxhijAcY7",
"iclr_2019_BJxhijAcY7",
"B1eUDnU037",
"HkgS6925hX",
"r1eEYTEq2m",
"HkgS6925hX",
"B1eUDnU037",
"iclr_2019_BJxhijAcY7",
"r1eEYTEq2m",
"iclr_2019_BJxhijAcY7",
"iclr_2019_BJxhijAcY7",
"iclr_2019_BJxhijAcY7"
] |
iclr_2019_BJxssoA5KX | Bounce and Learn: Modeling Scene Dynamics with Real-World Bounces | We introduce an approach to model surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer
two underlying physical properties that govern bouncing - restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules -- a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories.
To achieve our results, we introduce the Bounce Dataset comprising 5K RGB-D videos of bouncing trajectories of a foam ball to probe surfaces of varying shapes and materials in everyday scenes including homes and offices.
Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional information from simple physics simulations. We show on our newly collected dataset that our model out-performs baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene. | accepted-poster-papers | This paper proposes a novel dataset of bouncing balls and a way to learn the dynamics of the balls when colliding. The reviewers found the paper well-written, tackling an interesting and hard problem in a novel way. The main concern (that I share with one of the reviewers) is about the fact that the paper proposes both a new dataset/environment *and* a solution for the problem. This made it difficult for the authors to provide baselines to compare to. The ensuing back and forth had the authors relax some of the assumptions from the environment and made it possible to evaluate with interaction nets.
The main weakness of the paper is the relatively contrived setup that the authors have come up with. I will summarize some of the discussion that happened as a result of this point: it is relatively difficult to see how the setup that the authors have created and studied (esp. knowing the groundtruth impact locations and the timing of the impact) can generalize outside of the proposed approach. There is some concern that the comparison with interaction nets was not entirely fair.
I would recommend the authors redo the comparisons with interaction nets in a careful way, with the right ablations, and understand if the methods have access to the same input data (e.g. are interaction nets provided with the bounce location?).
Despite the relatively high average score, I think of this paper as quite borderline, specifically because of the issues related to the setup being too niche. Nonetheless, the work does have a lot of scientific value to it, in addition to a new simulation environment/dataset that other researchers can then use. Assuming the baselines are done in a way that is trustworthy, the ablation experiments and discussion will be something interesting to the ICLR community. | train | [
"ryeBDwsMs7",
"SJloGnWRnQ",
"Hyl5ady1JV",
"rkgYYukJ1N",
"S1lJ8Ylo0Q",
"SJe2hHXj6Q",
"H1gfIEmspm",
"r1eXfEXiaX",
"ByekAQQo6Q",
"HkegBTQcnm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Paper summary:\nThe paper proposes to predict bouncing behavior from visual data. The model has two main components: (1) Physics Interface Module, which predicts the output trajectory from a given incoming trajectory and the physical properties of the contact surface. (2) Visual Interface Module, which predicts th... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_BJxssoA5KX",
"iclr_2019_BJxssoA5KX",
"rkgYYukJ1N",
"S1lJ8Ylo0Q",
"r1eXfEXiaX",
"ryeBDwsMs7",
"HkegBTQcnm",
"ByekAQQo6Q",
"SJloGnWRnQ",
"iclr_2019_BJxssoA5KX"
] |
iclr_2019_BJxvEh0cFQ | K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning | We introduce a novel method that enables parameter-efficient transfer and multi-task learning with deep neural networks. The basic approach is to learn a model patch - a small set of parameters - that will specialize to each task, instead of fine-tuning the last layer or the entire network. For instance, we show that learning a set of scales and biases is sufficient to convert a pretrained network to perform well on qualitatively different problems (e.g. converting a Single Shot MultiBox Detection (SSD) model into a 1000-class image classification model while reusing 98% of parameters of the SSD feature extractor). Similarly, we show that re-learning existing low-parameter layers (such as depth-wise convolutions) while keeping the rest of the network frozen also improves transfer-learning accuracy significantly. Our approach allows both simultaneous (multi-task) as well as sequential transfer learning. In several multi-task learning problems, despite using much fewer parameters than traditional logits-only fine-tuning, we match single-task performance.
 | accepted-poster-papers | Reviewers largely agree that the proposed method for finetuning deep neural networks is interesting, and the empirical results clearly show the benefits over finetuning only the last layer. I recommend acceptance. | train | [
"S1eTI1PuAX",
"BJgEyp8gAQ",
"SJgssRVq3X",
"SJeD6ZMT6Q",
"B1lPdZz6a7",
"H1lFC1M6p7",
"BygmSOC2hm",
"S1e7BbGqj7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to the authors for their reply. I am satisfied with the current state of the paper and tend to keep my score.",
"Several changes have been made to my comments, thanks for pointing out the mistakes. ",
"This paper explored the means of tuning the neural network models using less parameters. The authors e... | [
-1,
-1,
6,
-1,
-1,
-1,
7,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
5,
4
] | [
"SJeD6ZMT6Q",
"B1lPdZz6a7",
"iclr_2019_BJxvEh0cFQ",
"S1e7BbGqj7",
"SJgssRVq3X",
"BygmSOC2hm",
"iclr_2019_BJxvEh0cFQ",
"iclr_2019_BJxvEh0cFQ"
] |
iclr_2019_BJzbG20cFQ | Towards Metamerism via Foveated Style Transfer | The problem of visual metamerism is defined as finding a family of perceptually
indistinguishable, yet physically different images. In this paper, we propose our
NeuroFovea metamer model, a foveated generative model that is based on a mixture
of peripheral representations and style transfer forward-pass algorithms. Our
gradient-descent free model is parametrized by a foveated VGG19 encoder-decoder
which allows us to encode images in high dimensional space and interpolate
between the content and texture information with adaptive instance normalization
anywhere in the visual field. Our contributions include: 1) A framework for
computing metamers that resembles a noisy communication system via a foveated
feed-forward encoder-decoder network – We observe that metamerism arises as a
byproduct of noisy perturbations that partially lie in the perceptual null space; 2)
A perceptual optimization scheme as a solution to the hyperparametric nature of
our metamer model that requires tuning of the image-texture tradeoff coefficients
everywhere in the visual field which are a consequence of internal noise; 3) An
ABX psychophysical evaluation of our metamers where we also find that the rate
of growth of the receptive fields in our model match V1 for reference metamers
and V2 between synthesized samples. Our model also renders metamers in roughly
a second, presenting a ×1000 speed-up compared to the previous work, which now
allows for tractable data-driven metamer experiments. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The problem is well-motivated and related work is thoroughly discussed
- The evaluation is compelling and extensive.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- Very dense. Clarity could be improved in some sections.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
No major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| train | [
"S1gispPmRX",
"r1e9ZnwmA7",
"ryx7JhwmRX",
"BygPHovmC7",
"HyeGxiD7C7",
"rJeH99PXA7",
"B1li_tDmAm",
"HJedMX8Na7",
"rJxupYl0hm",
"Byxt733Fhm"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We’d like thank all reviewers for the feedback and assessment of our paper. We hope to have individually addressed all your concerns. We have uploaded a modified version of our paper where we have addresses such concerns, re-arranged figures, and fixed minor typos and corrections. These include:\n\nMoving Figure 1... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2019_BJzbG20cFQ",
"ryx7JhwmRX",
"BygPHovmC7",
"Byxt733Fhm",
"rJeH99PXA7",
"rJxupYl0hm",
"HJedMX8Na7",
"iclr_2019_BJzbG20cFQ",
"iclr_2019_BJzbG20cFQ",
"iclr_2019_BJzbG20cFQ"
] |
iclr_2019_BkG5SjR5YQ | Post Selection Inference with Incomplete Maximum Mean Discrepancy Estimator | Measuring divergence between two distributions is essential in machine learning and statistics and has various applications including binary classification, change point detection, and two-sample test. Furthermore, in the era of big data, designing divergence measure that is interpretable and can handle high-dimensional and complex data becomes extremely important. In this paper, we propose a post selection inference (PSI) framework for divergence measure, which can select a set of statistically significant features that discriminate two distributions. Specifically, we employ an additive variant of maximum mean discrepancy (MMD) for features and introduce a general hypothesis test for PSI. A novel MMD estimator using the incomplete U-statistics, which has an asymptotically normal distribution (under mild assumptions) and gives high detection power in PSI, is also proposed and analyzed theoretically. Through synthetic and real-world feature selection experiments, we show that the proposed framework can successfully detect statistically significant features. Last, we propose a sample selection framework for analyzing different members in the Generative Adversarial Networks (GANs) family. | accepted-poster-papers | The submission evaluates maximum mean discrepancy estimators for post selection inference.
It combines two contributions: (i) it proposes an incomplete u-statistic estimator for MMD, (ii) it evaluates this and existing estimators in a post selection inference setting.
The method extends the post selection inference approach of (Lee et al. 2016) to the current u-statistic approach for MMD. The top-k selection problem is phrased as a linear constraint reducing it to the problem of Lee et al. The approach is illustrated on toy examples and a GAN application.
The main criticism of the paper concerns its novelty. R1 feels that it is largely just the combination of two known approaches (although it appears that the incomplete estimator is key), while R3 was significantly more impressed. Both are senior experts in the topic.
On the balance, the reviewers were more positive than negative. R2 felt that the authors comments helped to address their concerns, while R3 gave detailed arguments in favor of the submission and championed the paper. The paper provides an additional interesting framework for evaluation of estimators, and considers their application in a broader context of post-selection inference. | train | [
"H1lpAZgWAQ",
"S1xJY1jeC7",
"HJeK_589a7",
"S1lGXvh8T7",
"ryeN2LnIaX",
"Syl46BhIpX",
"BkxV1bD02Q",
"HkgHB2j23X",
"SyghNyEchQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We really appreciate your feedback. We have already fixed typos. ",
"A few additional typos to fix:\n-Section 1: 2nd paragraph: 'i.e. higher order' -> 'i.e., higher order',\n-Section 2: 1st paragraph: 'larges score' -> 'largest score',\n-Section 3.3.: last paragraph: 'see theoretical analysis section' -> 'see Se... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"S1xJY1jeC7",
"HJeK_589a7",
"S1lGXvh8T7",
"SyghNyEchQ",
"HkgHB2j23X",
"BkxV1bD02Q",
"iclr_2019_BkG5SjR5YQ",
"iclr_2019_BkG5SjR5YQ",
"iclr_2019_BkG5SjR5YQ"
] |
iclr_2019_BkG8sjR5Km | Emergent Coordination Through Competition | We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to co-operative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded by game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines. | accepted-poster-papers | The paper studies population-based training for MARL with co-play, in MuJoCo (continuous control) soccer. It shows that (long term) cooperative behaviors can emerge from simple rewards, shaped but not towards cooperation.
The paper is overall well written and includes a thorough study/ablation. The weaknesses are the lack of strong comparisons (or at least easy to grasp baselines) on a new task, and the lack of some of the experimental details (about reward shaping, about hyperparameters).
The reviewers reached an agreement that this paper should be published at ICLR. | train | [
"rygaShhcn7",
"BJlGC9UhpX",
"HkgfkJPnaX",
"Sylf0sIn6X",
"BJl-oTGeaX",
"Skx1dt70hX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new multiagent research environment---a simplified version of 2x2 RoboSoccer using the MuJoCo physics engine with spherical players that can rotate laterally, move forwards / backwards, and jump.\n\nThe paper deploys a fine-tuned version of population-based sampling on top of a stochastic v... | [
6,
-1,
-1,
-1,
7,
7
] | [
3,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_BkG8sjR5Km",
"BJl-oTGeaX",
"rygaShhcn7",
"Skx1dt70hX",
"iclr_2019_BkG8sjR5Km",
"iclr_2019_BkG8sjR5Km"
] |
iclr_2019_BkMiWhR5K7 | Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors | We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art. The code for reproducing our work is available at https://git.io/fAjOJ. | accepted-poster-papers | This paper is on the problem of adversarial example generation in the setting where the predictor is only accessible via function evaluations with no gradients available. The associated problem can be cast as a blackbox optimization problem wherein finite difference and related gradient estimation techniques can be used. This setting appears to be pervasive. The reviewers agree that the paper is well written and the proposed bandit optimization-based algorithm provides a nice framework in which to integrate priors, resulting in impressive empirical improvements. | train | [
"BJx1HlJJpQ",
"S1eCfrzsA7",
"H1gTyuSq0Q",
"BkglK_U5C7",
"SJg2WDBcR7",
"rkgBbCf90X",
"B1xIAu-qhm",
"BJg_v6iOCQ",
"SJeCGji_A7",
"SyglTvjdR7",
"rklvTWsO07",
"rke9oWZuam",
"rJgK3ebOpX",
"B1ead6lupQ",
"B1lDhBt52X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper formulates the black-box adversarial attack as a gradient estimation\nproblem, and provide some theoretical analysis to show the optimality of an\nexisting gradient estimation method (Neural Evolution Strategies) for black-box\nattacks.\n\nThis paper also proposes two additional methods to reduce the nu... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2019_BkMiWhR5K7",
"BkglK_U5C7",
"SJg2WDBcR7",
"SJg2WDBcR7",
"rkgBbCf90X",
"B1ead6lupQ",
"iclr_2019_BkMiWhR5K7",
"B1xIAu-qhm",
"B1lDhBt52X",
"BJx1HlJJpQ",
"iclr_2019_BkMiWhR5K7",
"B1xIAu-qhm",
"B1lDhBt52X",
"BJx1HlJJpQ",
"iclr_2019_BkMiWhR5K7"
] |
iclr_2019_BkN5UoAqF7 | Sample Efficient Imitation Learning for Continuous Control | The goal of imitation learning (IL) is to enable a learner to imitate expert behavior given expert demonstrations. Recently, generative adversarial imitation learning (GAIL) has shown significant progress on IL for complex continuous tasks. However, GAIL and its extensions require a large number of environment interactions during training. In real-world environments, the more an IL method requires the learner to interact with the environment for better imitation, the more training time it requires, and the more damage it causes to the environments and the learner itself. We believe that IL algorithms could be more applicable to real-world problems if the number of interactions could be reduced.
In this paper, we propose a model-free IL algorithm for continuous control. Our algorithm is made up mainly of three changes to the existing adversarial imitation learning (AIL) methods – (a) adopting the off-policy actor-critic (Off-PAC) algorithm to optimize the learner policy, (b) estimating the state-action value using off-policy samples without learning reward functions, and (c) representing the stochastic policy function so that its outputs are bounded. Experimental results show that our algorithm achieves competitive results with GAIL while significantly reducing the environment interactions. | accepted-poster-papers | The paper proposes a simple method for improving the sample efficiency of GAIL, essentially a way of turning inverse reinforcement learning into classification. As reviewers noted, the method is based on a simple idea with potentially broad applicability.
Concerns were raised about the multiple components of the system and what each contributed, about missing pointers to the literature, and about the lack of a baseline wherein GAIL is initialized with behaviour cloning, a setting only suggested but not tried in previous works. The authors did, however, attempt this setting and found it to hurt, not help, performance. I find this surprising and would urge the authors to validate that this isn't merely an uninteresting artifact of the setup; however, I commend the authors for trying it and don't believe that a surprising result in this regard is a barrier to publication.
As several reviewers did not provide feedback on revisions addressing their concerns, this Area Chair was left to determine to a large degree whether or not reviewer concerns were in fact addressed. I thank AnonReviewer4 for revisiting their review towards the end of the period, and concur with them that many of the concerns raised by reviewers have indeed been adequately dealt with. | train | [
"rye5EKn1pm",
"B1lQQqme6X",
"BkgMjKRznX",
"Sklwhthhnm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed an imitation learning algorithm that achieves competitive results with GAIL, while requiring significantly fewer interactions with the environment.\n\nI like the method proposed in this paper. It seems similar to ideas in this concurrent submission: https://openreview.net/forum?id=B1excoAqKQ\n\... | [
7,
5,
5,
5
] | [
5,
4,
5,
5
] | [
"iclr_2019_BkN5UoAqF7",
"iclr_2019_BkN5UoAqF7",
"iclr_2019_BkN5UoAqF7",
"iclr_2019_BkN5UoAqF7"
] |
iclr_2019_Bke4KsA5FX | Generative Code Modeling with Graphs | Generative models for source code are an interesting structured prediction problem, requiring reasoning about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines. | accepted-poster-papers | This paper presents an interesting method for code generation using a graph-based generative approach. Empirical evaluation shows that the method outperforms relevant baselines (PHOG).
There is consensus among reviewers that the method is novel and worth acceptance to ICLR. | train | [
"S1x0nNKgCm",
"rJgMmnfxC7",
"SyxeYOdu3Q",
"HJl5-iMxAX",
"Byen_drhpX",
"Ske3KQBnT7",
"rkxpzWG36m",
"H1eHUfG36m",
"HJeYe4p9p7",
"Skly0QT9pm",
"S1gsoQ69Tm",
"S1ggSfa567",
"S1gbGzT5aX",
"rygKSlpcp7",
"rkeutuqqnm",
"Hyl3dbvq2X",
"SJeXntTY37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"The new figure 2 is indeed much clearer. Thanks!",
"Looking forward to revisions",
"The paper proposes a code completion task that given the rest of a program, predicts the content of an expression. This task has similarity to code completion tasks in the code editor of an IDE. The paper proposes an interestin... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1
] | [
"S1gbGzT5aX",
"HJl5-iMxAX",
"iclr_2019_Bke4KsA5FX",
"SyxeYOdu3Q",
"H1eHUfG36m",
"rkxpzWG36m",
"HJeYe4p9p7",
"Skly0QT9pm",
"SyxeYOdu3Q",
"SyxeYOdu3Q",
"rygKSlpcp7",
"Hyl3dbvq2X",
"rkeutuqqnm",
"iclr_2019_Bke4KsA5FX",
"iclr_2019_Bke4KsA5FX",
"iclr_2019_Bke4KsA5FX",
"iclr_2019_Bke4KsA5F... |
iclr_2019_BkeStsCcKQ | Critical Learning Periods in Deep Networks | Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of "Information Plasticity". Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
| accepted-poster-papers | Irrespective of their taste for comparisons of neural networks to biological organisms, all reviewers agree that the empirical observations in this paper are quite interesting and well presented. While some reviewers note that the paper is not making theoretical contributions, the empirical results in themselves are intriguing enough to be of interest to ICLR audiences. | train | [
"rJxHyW8rRX",
"rJlxyjwX07",
"Bkgba5DmCQ",
"H1ebz8D7Cm",
"ryguIWkf6Q",
"SJg8cOOka7",
"HJgH4ifFi7",
"SJlyF5rTnQ",
"BJeHWCM6hQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We are thankful to the reviewer for their positive assessment of our paper. In fact, we share the same sentiment, as we articulate in the Conclusion, that one should resist the temptation to build too much on structural correspondences between such diverse systems. By showing these data we mostly wanted to emphasi... | [
-1,
-1,
-1,
-1,
9,
8,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1,
-1
] | [
"ryguIWkf6Q",
"Bkgba5DmCQ",
"SJg8cOOka7",
"HJgH4ifFi7",
"iclr_2019_BkeStsCcKQ",
"iclr_2019_BkeStsCcKQ",
"iclr_2019_BkeStsCcKQ",
"BJeHWCM6hQ",
"iclr_2019_BkeStsCcKQ"
] |
iclr_2019_BkeU5j0ctQ | CEM-RL: Combining evolutionary and gradient-based methods for policy search | Deep neuroevolution and deep reinforcement learning (deep RL) algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstable and highly sensitive to hyper-parameter setting. So far, these families of methods have mostly been compared as competing tools. However, an emerging approach consists in combining them so as to get the best of both worlds. Two previously existing combinations use either an ad hoc evolutionary algorithm or a goal exploration process together with the Deep Deterministic Policy Gradient (DDPG) algorithm, a sample efficient off-policy deep RL algorithm. In this paper, we propose a different combination scheme using the simple cross-entropy
method (CEM) and Twin Delayed Deep Deterministic policy gradient (TD3), another off-policy deep RL algorithm which improves over DDPG. We evaluate the resulting method, CEM-RL, on a set of benchmarks classically used in deep RL.
We show that CEM-RL benefits from several advantages over its competitors and offers a satisfactory trade-off between performance and sample efficiency. | accepted-poster-papers | This paper combines two different types of existing optimization methods, CEM/CMA-ES and DDPG/TD3, for policy optimization. The approach resembles ERL but demonstrates good better performance on a variety of continuous control benchmarks. Although I feel the novelty of the paper is limited, the provided promising results may justify the acceptance of the paper. | test | [
"H1g8kU290X",
"Ske_YvI527",
"Ske3D7Jqh7",
"SJeaaoUYAX",
"rJgh-YoWAQ",
"SyefCujZRm",
"rkxM5djb0Q",
"r1lJmuibAQ",
"BJgACvi-CQ",
"Syev33W527"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The rebuttal provided by the authors is convincing.",
"The contributions of this paper are in the domain of policy search, where the authors combine evolutionary and gradient-based methods. Particularly, they propose a combination approach based on cross-entropy method (CEM) and TD3 as an alternative to existing... | [
-1,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Ske_YvI527",
"iclr_2019_BkeU5j0ctQ",
"iclr_2019_BkeU5j0ctQ",
"SyefCujZRm",
"Ske3D7Jqh7",
"rkxM5djb0Q",
"Syev33W527",
"Ske_YvI527",
"iclr_2019_BkeU5j0ctQ",
"iclr_2019_BkeU5j0ctQ"
] |
iclr_2019_BkedznAqKQ | LanczosNet: Multi-Scale Deep Graph Convolutional Networks | We propose Lanczos network (LanczosNet) which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution.
Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximated computation of matrix power but also design learnable spectral filters.
Being fully differentiable, LanczosNet facilitates both graph kernel learning as well as learning node embeddings.
We show the connection between our LanczosNet and graph based manifold learning, especially diffusion maps.
We benchmark our model against 8 recent deep graph networks on citation datasets and QM8 quantum chemistry dataset.
Experimental results show that our model achieves the state-of-the-art performance in most tasks. | accepted-poster-papers | The reviewers unanimously agreed that the paper was a significant advance in the field of machine learning on graph-structured inputs. They commented particularly on the quality of the research idea, and its depth of development. The results shared by the researchers are compelling, and they also report optimal hyperparameters, a welcome practice when describing experiments and results.
A small drawback the reviewers highlighted is the breadth of the content in the paper, which gave the impression of a slight lack of focus. Overall, the paper is a clear advance, and I recommend it for acceptance. | train | [
"SJghoMfW0Q",
"SklJrmGZA7",
"BJeBk7f-0Q",
"r1eh_ffWAm",
"S1lEn5RRhQ",
"ryxJEZ4Rhm",
"r1llrOIv2Q"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments! We have not tried Arnoldi algorithm since we only deal with undirected graphs in the current applications which have symmetric graph Laplacians. Unlike Lanczos algorithm which has error bounds and monotonic convergence properties, Arnoldi algorithm is not well understood since eigenvalues ... | [
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"S1lEn5RRhQ",
"r1llrOIv2Q",
"ryxJEZ4Rhm",
"iclr_2019_BkedznAqKQ",
"iclr_2019_BkedznAqKQ",
"iclr_2019_BkedznAqKQ",
"iclr_2019_BkedznAqKQ"
] |
iclr_2019_BkfbpsAcF7 | Excessive Invariance Causes Adversarial Vulnerability | Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities. | accepted-poster-papers | This paper studies the roots of the existence of adversarial examples from a new perspective. This perspective is quite interesting and thought-provoking. However, some of the contributions rely on fairly restrictive assumptions and/or are not properly evaluated.
Still, overall, this paper should be a valuable addition to the program. | val | [
"rkl8B7OVJ4",
"HklfNANAh7",
"ByeDB22aRQ",
"Bkeye2iT0X",
"r1e1SFU_2m",
"SyeLf8916Q",
"Byx23FoQaQ",
"ByeqLhs7TX",
"BklM_OiQ6m",
"HJeo0yhma7",
"B1xjee27aX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"We were glad to see your positive feedback.\n\nIndeed we agree some open questions (summarized below in point (II)) remain. Yet, we hope that our efforts to prove the underlying principles of our objective sparks future analysis how/when our optimality assumptions (discussed below in point (I)) can be achieved and... | [
-1,
6,
-1,
-1,
6,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
2,
4,
-1,
-1,
-1,
-1,
-1
] | [
"ByeDB22aRQ",
"iclr_2019_BkfbpsAcF7",
"Bkeye2iT0X",
"B1xjee27aX",
"iclr_2019_BkfbpsAcF7",
"iclr_2019_BkfbpsAcF7",
"iclr_2019_BkfbpsAcF7",
"SyeLf8916Q",
"r1e1SFU_2m",
"HklfNANAh7",
"HJeo0yhma7"
] |
iclr_2019_Bkg2viA5FQ | Hindsight policy gradients | A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency. | accepted-poster-papers | The paper generalizes the concept of "hindsight", i.e. the recycling of data from trajectories in a goal-based system based on the goal state actually achieved, to policy gradient methods.
This was an interesting paper in that it scored quite highly despite all three reviewers mentioning incrementality or a relative lack of novelty. Although the authors naturally took some exception to this, AC personally believes that properly executed, contributions that seem quite straightforward in hindsight (pun partly intended) can be valuable in moving the field forward: a clean and didactic presentation of theory backed by well-designed and extensive empirical investigation (both of which are adjectives used by reviewers to describe the empirical work in this paper) can be as valuable as, or more so than, a poorly executed but higher-novelty work. To quote AnonReviewer3, "HPG is almost certainly going to end up being a widely used addition to the RL toolbox".
Feedback from reviewers prompted extensive discussion and a direct comparison with Hindsight Experience Replay which reviewers agreed added significant value to the manuscript, earning it a post-rebuttal unanimous rating of 7. It is therefore my pleasure to recommend acceptance. | test | [
"Hyx3e4Pc3X",
"BJxW_kWc3Q",
"B1enK-g7Am",
"SkgYmj3gCQ",
"HklK6CgeCm",
"rygTDRlx0X",
"B1l7mFA06m",
"BJlHmDRC6Q",
"HJeizrR0pQ",
"HJxEna5ETQ",
"SyefN6lMcQ",
"BkxBfwLbcQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"The authors present HPG, which applies the hindsight formulation already applied to off-policy RL algorithms (hindsight experience replay, HER, Andrychowicz et al., 2017) to policy gradients.\nBecause the idea is not new, and formulating HPG from PG is so straightforward (simply tie the dynamical model over goals)... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2019_Bkg2viA5FQ",
"iclr_2019_Bkg2viA5FQ",
"iclr_2019_Bkg2viA5FQ",
"HJeizrR0pQ",
"BJlHmDRC6Q",
"B1l7mFA06m",
"HJxEna5ETQ",
"Hyx3e4Pc3X",
"BJxW_kWc3Q",
"iclr_2019_Bkg2viA5FQ",
"BkxBfwLbcQ",
"iclr_2019_Bkg2viA5FQ"
] |
iclr_2019_Bkg3g2R9FX | Adaptive Gradient Methods with Dynamic Bound of Learning Rate | Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound . | accepted-poster-papers | The paper was found to be well-written and conveys an interesting idea. However, the AC notices a large body of clarifications that were provided to the reviewers (regarding the theory, experiments, and setting in general) that need to be well addressed in the paper. | train | [
"Bke-32cM1N",
"ryl12OxuAX",
"BylLNcbdAX",
"S1eizWWuCQ",
"BJgABOx_C7",
"S1lEtdgdAQ",
"SJef8P3thQ",
"BkeFPweF3m",
"rkg0-SM-3m",
"rkg7oagJn7",
"Skx8lvomim",
"H1glD5ZAcm",
"Byg-RFWA5X",
"Hklkd3A25X",
"SklPiVVncm",
"r1lJqtvjcm",
"HkxJNxD55X",
"S1goTjL5qX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"I thank the reviewers for their response, and I keep my score.",
"\n[About details and extra experiments you asked for]\n\n>>> Am I correct in saying that with t=100 (i.e., the 100th iteration), the \\eta s constrain the learning rates to be in a tight bound around 0.1? If beta=0.9, then \\eta_l(1) = 0.1 - 0.1 /... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"BylLNcbdAX",
"S1lEtdgdAQ",
"rkg0-SM-3m",
"SJef8P3thQ",
"BkeFPweF3m",
"BJgABOx_C7",
"iclr_2019_Bkg3g2R9FX",
"iclr_2019_Bkg3g2R9FX",
"iclr_2019_Bkg3g2R9FX",
"Skx8lvomim",
"iclr_2019_Bkg3g2R9FX",
"Byg-RFWA5X",
"Hklkd3A25X",
"iclr_2019_Bkg3g2R9FX",
"r1lJqtvjcm",
"iclr_2019_Bkg3g2R9FX",
... |
iclr_2019_Bkg6RiCqY7 | Decoupled Weight Decay Regularization | L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is \emph{not} the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it ``weight decay'' in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by \emph{decoupling} the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at \url{https://github.com/loshchil/AdamW-and-SGDW} | accepted-poster-papers | Evaluating this paper is somewhat awkward because it has already been through multiple reviewing cycles, and in the meantime, the trick has already become widely adopted and inspired interesting follow-up work. Much of the paper is devoted to reviewing this follow-up work. I think it's clearly time for this to be made part of the published literature, so I recommend acceptance. (And all reviewers are in agreement that the paper ought to be accepted.)
The paper proposes, in the context of Adam, to apply literal weight decay in place of L2 regularization. An impressively thorough set of experiments are given to demonstrate the improved generalization performance, as well as a decoupling of the hyperparameters.
Previous versions of the paper suffered from a lack of theoretical justification for the proposed method. Ordinarily, in such cases, one would worry that the improved results could be due to some sort of experimental confound. But AdamW has been validated by so many other groups on a range of domains that the improvement is well established. And other researchers have offered possible explanations for the improvement.
| train | [
"Bkx6qDs50m",
"rJxk4OZ5AQ",
"HJlCOfb90X",
"HylQ0bbqR7",
"rJl_LZZcA7",
"B1xxyZZqCm",
"rkeDkABcnm",
"rJlYWZMYhm",
"rkgKJ4AXhX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your positive evaluation! We have fixed the typo and updated the paper. ",
"1) This completely clears up my concern.\n\n2) It seems that we largely share the same opinion here. After some more reflection, I think that this proposition does bring some good to the paper by attempting to for... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rJxk4OZ5AQ",
"rJl_LZZcA7",
"iclr_2019_Bkg6RiCqY7",
"rkgKJ4AXhX",
"rJlYWZMYhm",
"rkeDkABcnm",
"iclr_2019_Bkg6RiCqY7",
"iclr_2019_Bkg6RiCqY7",
"iclr_2019_Bkg6RiCqY7"
] |
iclr_2019_Bkg8jjC9KQ | Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile | Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence. We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets). | accepted-poster-papers | This paper investigates the usage of the extragradient step for solving saddle-point problems with non-monotone stochastic variational inequalities, motivated by GANs. The authors propose an assumption weaker/different than the pseudo-monotonicity of the variational inequality for their convergence analysis (that they call "coherence").
Interestingly, they are able to show the (asymptotic) last iterate convergence for the extragradient algorithm in this case (in contrast to standard results which normally require averaging of the iterates for the stochastic *and* monotone variational inequality such as the cited work by Gidel et al.). The authors also describe an interesting difference between the gradient method without the extragradient step (mirror descent) vs. with (that they called optimistic mirror descent).
R2 thought the coherence condition was too related to the notion of pseudo-monotonicity, for which one could easily extend previously known convergence results for stochastic variational inequalities. The AC thinks that this point was well answered by the authors' rebuttal and in their revision: the conditions are sufficiently different, and while there is still much to do to analyze problems beyond variational inequalities or to work under more realistic assumptions, this paper makes some non-trivial and interesting steps in this direction. The AC thus sides with expert reviewer R1 and recommends acceptance. | test | [
"r1x7Pw_4yE",
"HJxro3I4kV",
"rkxGKGDY6m",
"r1lVGZvtTm",
"SkxF31vKpm",
"HkxnI7tn3Q",
"H1xUfVr9hX",
"SyeTm7oIhm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Many thanks for the extra round of feedback and the encouraging remarks! We reply to the points you raised below:\n\n1. Regarding the example of a coherent problem with a general convex solution set.\n\nAgain, for simplicity, focus on the optimization case, i.e., the minimization of a function f:X->R (X convex). I... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"HJxro3I4kV",
"r1lVGZvtTm",
"SyeTm7oIhm",
"H1xUfVr9hX",
"HkxnI7tn3Q",
"iclr_2019_Bkg8jjC9KQ",
"iclr_2019_Bkg8jjC9KQ",
"iclr_2019_Bkg8jjC9KQ"
] |