| paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2020_rJgjGxrFPS | A Simple and Scalable Shape Representation for 3D Reconstruction | Deep learning applied to the reconstruction of 3D shapes has seen growing interest. A popular approach to 3D reconstruction and generation in recent years has been the CNN decoder-encoder model often applied in voxel space. However, this often scales very poorly with the resolution, limiting the effectiveness of these models. Several sophisticated alternatives for decoding to 3D shapes have been proposed, typically relying on alternative deep learning architectures. We show however in this work that standard benchmarks in 3D reconstruction can be tackled with a surprisingly simple approach: a linear decoder obtained by principal component analysis on the signed distance transform of the surface. This approach allows easily scaling to larger resolutions. We show in multiple experiments it is competitive with state of the art methods and also allows the decoder to be fine-tuned on the target task using a loss designed for SDF transforms, obtaining further gains. | reject | This paper proposes to use PCA to replace the conventional decoder for 3D shape reconstruction. It shows performance competitive with state-of-the-art methods. While reviewer #3 is overall positive about this work, both reviewers #1 and #2 rated it a weak rejection. Reviewer #1 is concerned that important details are missing and that the discussion of results is insufficient. Reviewer #3 has questions on the clarity of the presentation and the comparison with SOTA methods. The authors provided responses to the questions, but the reviewers did not change their ratings. The ACs agree that this work has merits. However, given the various concerns raised by the reviewers, this paper cannot be accepted in its current state. | val | [
"B1lQSgTfcS",
"r1lKDzEtjS",
"ryxABb4KiS",
"SkgVc1NFsH",
"HkgKmkEKir",
"H1laIOJ6Fr",
"rJe5U-qh9r"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank the authors for the response. I am still in favor of the idea -- applying simple, old-school method into a new problem, and I also agree with R1 and R2 that the paper is currently lack of details and experimental results. I will keep my score, but would not fight for the acceptance if R1 and R2 insist.\n----... | [
6,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_rJgjGxrFPS",
"iclr_2020_rJgjGxrFPS",
"H1laIOJ6Fr",
"B1lQSgTfcS",
"rJe5U-qh9r",
"iclr_2020_rJgjGxrFPS",
"iclr_2020_rJgjGxrFPS"
] |
iclr_2020_HyljzgHtwS | Regularly varying representation for sentence embedding | The dominant approaches to sentence representation in natural language rely on learning embeddings on massive corpuses. The obtained embeddings have desirable properties such as compositionality and distance preservation (sentences with similar meanings have similar representations). In this paper, we develop a novel method for learning an embedding enjoying a dilation invariance property. We propose two algorithms: Orthrus, a classification algorithm, constrains the distribution of the embedded variable to be regularly varying, i.e. multivariate heavy-tailed, and uses Extreme Value Theory (EVT) to tackle the classification task on two separate regions: the tail and the bulk. Hydra, a text generation algorithm for dataset augmentation, leverages the invariance property of the embedding learnt by Orthrus to generate coherent sentences with a controllable attribute, e.g. positive or negative sentiment. Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework.
| reject | Three reviewers recommend rejection. After a good rebuttal, the first reviewer is more positive about the paper yet still feels the paper is not ready for publication. The authors are encouraged to strengthen their work and resubmit to a future venue. | train | [
"rJg0xTz3iS",
"SklA3vGnjB",
"rJlc-qfhor",
"HyeJgFbiFS",
"S1evAPSTtS",
"rye-_wpCFS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank AnonReviewer3 for articles [1,2,3]. Though our framework is different, connections with these refs are worthy of attention and we now cite these papers in the introduction.\n\n• “In order to show that the EVT indeed helps empirically in the way that an adversarial classifier enforces the inf-norm of vecto... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"HyeJgFbiFS",
"rye-_wpCFS",
"S1evAPSTtS",
"iclr_2020_HyljzgHtwS",
"iclr_2020_HyljzgHtwS",
"iclr_2020_HyljzgHtwS"
] |
iclr_2020_S1ghzlHFPS | Informed Temporal Modeling via Logical Specification of Factorial LSTMs | Consider a world in which events occur that involve various entities. Learning how to predict future events from patterns of past events becomes more difficult as we consider more types of events. Many of the patterns detected in the dataset by an ordinary LSTM will be spurious since the number of potential pairwise correlations, for example, grows quadratically with the number of events. We propose a type of factorial LSTM architecture where different blocks of LSTM cells are responsible for capturing different aspects of the world state. We use Datalog rules to specify how to derive the LSTM structure from a database of facts about the entities in the world. This is analogous to how a probabilistic relational model (Getoor & Taskar, 2007) specifies a recipe for deriving a graphical model structure from a database. In both cases, the goal is to obtain useful inductive biases by encoding informed independence assumptions into the model. We specifically consider the neural Hawkes process, which uses an LSTM to modulate the rate of instantaneous events in continuous time. In both synthetic and real-world domains, we show that we obtain better generalization by using appropriate factorial designs specified by simple Datalog programs.
| reject | While reviewers find this paper interesting, they raised a number of concerns, including the novelty, writing, experiments, references, and a clear statement of the benefit. Unfortunately, the excellent questions and insightful comments left by the reviewers went unanswered by the authors. | test | [
"r1e_z2XijB",
"BJxQnlqy9S",
"r1lm7jx-9H",
"Skx3yBR8qH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks very much to the reviewers -- these are high-quality reviews. We appreciate the time you spent on the paper and the thoughtful feedback. \n\nOur presentation was written too quickly, and more careful writing would have answered some of your main concerns. In the model, we do have ways to handle parameter ... | [
-1,
3,
3,
1
] | [
-1,
5,
3,
5
] | [
"iclr_2020_S1ghzlHFPS",
"iclr_2020_S1ghzlHFPS",
"iclr_2020_S1ghzlHFPS",
"iclr_2020_S1ghzlHFPS"
] |
iclr_2020_rJg3zxBYwH | Learning Likelihoods with Conditional Normalizing Flows | Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula. Such behavior is desirable in multivariate structured prediction tasks, where handcrafted per-pixel loss-based methods inadequately capture strong correlations between output dimensions. We present a study of conditional normalizing flows (CNFs), a class of NFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x). CNFs are efficient in sampling and inference, they can be trained with a likelihood-based objective, and CNFs, being generative flows, do not suffer from mode collapse or training instabilities. We provide an effective method to train continuous CNFs for binary problems and in particular, we apply these CNFs to super-resolution and vessel segmentation tasks demonstrating competitive performance on standard benchmark datasets in terms of likelihood and conventional metrics. | reject | The authors propose a conditional normalizing flow approach to learning likelihoods. While reviewers appreciated the paper, in its present form it lacked a clear champion, and there were still some remaining concerns about novelty and clarity of presentation. The authors are encouraged to continue with this work and to account for reviewer comments in future revisions. Following up on the author response, a reviewer adds:
"Thanks for your clarification. I still disagree that the conditional flow architecture proposed should be considered as a novel contribution. The reason why I mentioned [1] or [2] was not because they follow the exact setting (coupling based conditional flow model) discussed in this paper. I wanted to highlight that the idea to use conditioning variables as an input to the transforming network (whether it is an autoregressive density function, autoregressive transforming network, or coupling layers) is quite universal (as we all know many of the existing codes implementing flow-based models includes additional keyword arguments 'context' to model conditioning). I'm not sure why the fact that the proposed framework is conditioning on high-dimensional variables makes a contribution. There seems to be no particular challenge in doing that and novel design choices to circumvent that (i.e., we can just use existing architectures with minor modifications).
I agree that the binary dequantization should be considered as a contribution, but as significant as to change my decision to accept. Thanks for the clarification on experiments. Considering this, I raise my rating to weak reject...
Another previous work I forgot to mention in the initial review is "Structured output learning with the conditional generative flow", Lu and Huang 2019, ICML 2019 invertible neural network workshop. This paper discusses the conditional flow based on a similar idea, and attacks high-dimensional structured output prediction. I think this should be cited in the paper."
| train | [
"rJeN7RdTYr",
"HylwT6ZjoS",
"Skgiqp-ooB",
"rkgaw6Wijr",
"BJxfq8jg5S",
"BklyFUW-9r",
"H1l055o6qB",
"SJxUsi1j9H",
"S1eCD0Vv9H",
"Skx3ofIe5r",
"r1lfgEbJ9r",
"B1ggTdXDKS",
"S1xLH2ZGYr",
"BkegD4JfKS",
"Hklnft3zdS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"The paper proposes the conditional normalizing flow for structured prediction. The idea is to use conditioning variables as additional inputs to the flow parameter forming networks. The model was demonstrated on image superresolution and vessel segmentation.\n\nI find the contribution of this paper minimal. The i... | [
3,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rJg3zxBYwH",
"BJxfq8jg5S",
"BklyFUW-9r",
"rJeN7RdTYr",
"iclr_2020_rJg3zxBYwH",
"iclr_2020_rJg3zxBYwH",
"SJxUsi1j9H",
"S1eCD0Vv9H",
"Skx3ofIe5r",
"r1lfgEbJ9r",
"B1ggTdXDKS",
"S1xLH2ZGYr",
"iclr_2020_rJg3zxBYwH",
"Hklnft3zdS",
"iclr_2020_rJg3zxBYwH"
] |
iclr_2020_Syg6fxrKDB | A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem | We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct the optimal solution to TSP by adding the nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and give the prior probability of selecting each vertex every step. The prior probability provides a heuristic for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, as it is the feedback information by fusing the prior with the scouting procedure. Experimental results on TSP up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods. | reject | The paper is a contribution to the recently emerging literature on learning-based approaches to combinatorial optimization. The authors propose to pre-train a policy network to imitate SOTA solvers for TSPs. At test time, this policy is then improved, in an AlphaGo-like manner, with MCTS, using beam-search rollouts to estimate bootstrap values. The main concerns raised by the reviewers are the lack of novelty (the proposed algorithm is a straightforward application of graph NNs to MCTS) as well as the experimental results. Although comparing well to other learning-based methods, the algorithm is far away from the performance of SOTA solvers. Although well written, the paper is below the acceptance threshold. The methodological novelty is low. The reported results are an order of magnitude away from SOTA solvers, while previous work has already reported the general feasibility of learned solvers for TSPs. Furthermore, the overall contribution is somewhat unclear as the policy relies on pre-training with solutions from existing solvers. | train | [
"Syga-qC2FS",
"H1eL7g195S",
"BJlWnNN2jH",
"rkeV3rQ3sH",
"BJebXVm3oH",
"HyxFhSiYsH",
"SJenfBotor",
"ryxI8_Q1qS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors introduce a new Monte Carlo Tree Search-based (MCTS) algorithm for computing approximate solutions to the Traveling Salesman Problem (TSP). Yet since the TSP is NP-complete, a learned heuristic is used to guide the search process. For this learned heuristic, the authors propose a Graph N... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
1
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_Syg6fxrKDB",
"iclr_2020_Syg6fxrKDB",
"rkeV3rQ3sH",
"H1eL7g195S",
"ryxI8_Q1qS",
"Syga-qC2FS",
"Syga-qC2FS",
"iclr_2020_Syg6fxrKDB"
] |
iclr_2020_HJxRMlrtPH | Verification of Generative-Model-Based Visual Transformations | Generative networks are promising models for specifying visual transformations. Unfortunately, certification of generative models is challenging as one needs to capture sufficient non-convexity so to produce precise bounds on the output. Existing verification methods either fail to scale to generative networks or do not capture enough non-convexity. In this work, we present a new verifier, called ApproxLine, that can certify non-trivial properties of generative networks. ApproxLine performs both deterministic and probabilistic abstract interpretation and captures infinite sets of outputs of generative networks. We show that ApproxLine can verify interesting interpolations in the network's latent space. | reject | The goal of verification of properties of generative models is very interesting and the contributions of this work seem to make some progress in this context. However, the current state of the paper (particularly, its presentation) makes it difficult to recommend its acceptance. | train | [
"H1xHn7EwKH",
"rkxG6PQ3jS",
"BkeIpRLOoS",
"SygWvCLdoS",
"HkeEJ68djH",
"SJxQ8hL_oB",
"Bkx7AjUOir",
"rJen8KnatS",
"rkgJpdFCKr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary\n\nThis work aims to provide warranties on the outputs of generative models by providing bounds on robustness (over adversarial attacks for instance, or other transformation in this case). The specific case of restricting the inputs to a line segment allows performing verification of robustness exactly (Ex... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_HJxRMlrtPH",
"HkeEJ68djH",
"H1xHn7EwKH",
"rJen8KnatS",
"rJen8KnatS",
"rkgJpdFCKr",
"iclr_2020_HJxRMlrtPH",
"iclr_2020_HJxRMlrtPH",
"iclr_2020_HJxRMlrtPH"
] |
iclr_2020_HyeJmlrFvH | Provably Communication-efficient Data-parallel SGD via Nonuniform Quantization | As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf. | reject | This paper proposes a communication-efficient data-parallel SGD with quantization. The method bridges the gap between theory and practice. The QSGD method has theoretical guarantees while QSGDinf doesn't, but the latter gives better result. This paper proves stronger results for QSGD using a different quantization scheme which matches the performance of QSGDinf.
The reviewers find issues with the approach and have pointed some of them out. During the discussion period, we did discuss if reviewers would like to raise their scores. Unfortunately, they still have unresolved issues (see R1's comment).
R1 made another comment recently that they were unable to add to their review:
"The proposed algorithm and the theoretical analysis does not include momentum. However, in the experiments, it is clearly stated that momentum (with a factor of 0.9) is used. Thus, it is unclear whether the experiments really validate the theoretical guarantees. And, it is also unclear how momentum is added for both NUQSGD and EF-SGD, since momentum is not mentioned in Algorithm 1 in this paper, or the paper of QSGD, or the paper of EF-SignSGD. (There is a version of SignSGD with momentum *without* error feedback, called SIGNUM)."
With the current score, the paper does not make the cut for ICLR, but I encourage the authors to revise the paper based on reviewers' feedback. For now, I recommend to reject this paper. | train | [
"BkxWwBzniS",
"rygJKdFDjr",
"HJeK8dYwir",
"rkgOQdKDjS",
"SklTsSGviH",
"SJxLkbwatH",
"r1xsybtTFB",
"Bkxbah715H"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We will be posting a new version of the paper momentarily. This note summarizes the changes:\n\n1. We now report results comparing NUQSGD with error-corrected methods, notably EF-SIGNSGD, on ImageNet. We find that our techniques are superior. In particular, we had to perform significant hyperparameter tuning to ev... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2020_HyeJmlrFvH",
"SJxLkbwatH",
"r1xsybtTFB",
"Bkxbah715H",
"iclr_2020_HyeJmlrFvH",
"iclr_2020_HyeJmlrFvH",
"iclr_2020_HyeJmlrFvH",
"iclr_2020_HyeJmlrFvH"
] |
iclr_2020_Bkx1mxSKvB | Disentangling Trainability and Generalization in Deep Learning | A fundamental goal in deep learning is the characterization of trainability and generalization of neural networks as a function of their architecture and hyperparameters. In this paper, we discuss these challenging issues in the context of wide neural networks at large depths where we will see that the situation simplifies considerably. To do this, we leverage recent advances that have separately shown: (1) that in the wide network limit, random networks before training are Gaussian Processes governed by a kernel known as the Neural Network Gaussian Process (NNGP) kernel, (2) that at large depths the spectrum of the NNGP kernel simplifies considerably and becomes ``weakly data-dependent'', and (3) that gradient descent training of wide neural networks is described by a kernel called the Neural Tangent Kernel (NTK) that is related to the NNGP. Here we show that, by combining these results, in the large depth limit the spectrum of the NTK simplifies in much the same way as that of the NNGP kernel. By analyzing this spectrum, we arrive at a precise characterization of trainability and generalization across a range of architectures including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We find that there are large regions of hyperparameter space where networks will train but will fail to generalize, in contrast with several recent results. By comparing CNNs with- and without-global average pooling, we show that CNNs without average pooling have very nearly identical learning dynamics to FCNs while CNNs with pooling contain a correction that alters their generalization performance. We perform a thorough empirical investigation of these theoretical results and find excellent agreement on real datasets. | reject | The paper investigates the trainability and generalization of deep networks as a function of hyperparameters/architecture, while focusing on wide nets of large depth; it aims to characterize regions of hyperparameter space where networks generalize well vs where they do not; empirical observations are demonstrated to support theoretical results. However, all reviewers agree that, while the topic of the paper is important and interesting, more work is required to improve the readability and clarify the exposition to support the proposed theoretical results.
| train | [
"rJgSrGqM5S",
"HyxVdSLniB",
"H1x-k_Lhir",
"HJxQmJ8niB",
"HkxBD7UnsS",
"r1x-LlrnFH",
"SkgIDMbRYS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the spectra of neural tangent kernels (NTKs) at large depth -- first let width go to infinity, and then let depth go to infinity. At infinite depth the kernel has the form a*identity+b*(all-one matrix), and the paper studies how the large-depth NTK converges to the limit in three cases: chaotic,... | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_Bkx1mxSKvB",
"rJgSrGqM5S",
"r1x-LlrnFH",
"rJgSrGqM5S",
"SkgIDMbRYS",
"iclr_2020_Bkx1mxSKvB",
"iclr_2020_Bkx1mxSKvB"
] |
iclr_2020_BygkQeHKwB | Walking on the Edge: Fast, Low-Distortion Adversarial Examples | Adversarial examples of deep neural networks are receiving ever increasing attention because they help in understanding and reducing the sensitivity to their input. This is natural given the increasing applications of deep neural networks in our everyday lives. When white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation.
In this work, we argue that speed is important as well, especially when considering that fast attacks are required by adversarial training. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection BP that improves upon existing methods by a large margin. Our key idea is that the classification boundary is a manifold in the image space: we therefore quickly reach the boundary and then optimize distortion on this manifold. | reject | In this paper the authors highlight the role of time in adversarial training and study various speed-distortion trade-offs. They introduce an attack called boundary projection BP which relies on utilizing the classification boundary. The reviewers agree that searching on the class boundary manifold is interesting and promising, but raise important concerns about evaluations on state-of-the-art data sets. Some of the reviewers also express concern about the quality of presentation and lack of detail. While the authors have addressed some of these issues in the response, the reviewers continue to have some concerns. Overall I agree with the assessment of the reviewers and do not recommend acceptance at this time. | val | [
"HJxpocuhir",
"H1xcvIbwsH",
"BJeE0SbvsS",
"r1eb-VbvsS",
"rJxIPmZPjH",
"BJgdRMs6tB",
"S1eWEHyecr",
"SJxJXx2Bcr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is a response to all reviewers, meant to summarize the main points and the updates we have made to the paper.\n\n1. We would like to thank all reviewers for their in-depth feedback. As a result of this discussion, we are improving a lot our paper. For each point below, we discuss the corresponding updates we ... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
1,
4,
3
] | [
"iclr_2020_BygkQeHKwB",
"S1eWEHyecr",
"S1eWEHyecr",
"BJgdRMs6tB",
"SJxJXx2Bcr",
"iclr_2020_BygkQeHKwB",
"iclr_2020_BygkQeHKwB",
"iclr_2020_BygkQeHKwB"
] |
iclr_2020_BJxlmeBKwS | FRICATIVE PHONEME DETECTION WITH ZERO DELAY | People with high-frequency hearing loss rely on hearing aids that employ frequency lowering algorithms. These algorithms shift some of the sounds from the high frequency band to the lower frequency band where the sounds become more perceptible for the people with the condition. Fricative phonemes have an important part of their content concentrated in high frequency bands. It is important that the frequency lowering algorithm is activated exactly for the duration of a fricative phoneme, and kept off at all other times. Therefore, timely (with zero delay) and accurate fricative phoneme detection is a key problem for high quality hearing aids. In this paper we present a deep learning based fricative phoneme detection algorithm that has zero detection delay and achieves state-of-the-art fricative phoneme detection accuracy on the TIMIT Speech Corpus. All reported results are reproducible and come with easy to use code that could serve as a baseline for future research.
| reject | The reviewers appreciate the importance of the problem, and one reviewer particularly appreciated the gains in performance. However, two reviewers raised concerns about limited novelty and missing comparisons to prior work. While the rebuttal helped address these concerns, the novelty is still limited. The authors are encouraged to revise the presentation to clarify the novelty. | train | [
"B1x0i7iTFB",
"S1esJzqhFr",
"SJl7IornsB",
"SygAaFH2iB",
"rJxfNtr3iS",
"rke_KL2hFS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"====================================== Updated Review =====================================\nI would like to thank the authors for providing more experiments and details regarding their work.\nHowever, after reading the authors rebuttal, I still think that there is more work to do in terms of comparison to prior w... | [
3,
3,
-1,
-1,
-1,
6
] | [
5,
5,
-1,
-1,
-1,
1
] | [
"iclr_2020_BJxlmeBKwS",
"iclr_2020_BJxlmeBKwS",
"S1esJzqhFr",
"rke_KL2hFS",
"B1x0i7iTFB",
"iclr_2020_BJxlmeBKwS"
] |
iclr_2020_HJgb7lSFwS | Distance-based Composable Representations with Neural Networks | We introduce a new deep learning technique that builds individual and class representations based on distance estimates to randomly generated contextual dimensions for different modalities. Recent works have demonstrated advantages to creating representations from probability distributions over their contexts rather than single points in a low-dimensional Euclidean vector space. These methods, however, rely on pre-existing features and are limited to textual information. In this work, we obtain generic template representations that are vectors containing the average distance of a class to randomly generated contextual information. These representations have the benefit of being both interpretable and composable. They are initially learned by estimating the Wasserstein distance for different data subsets with deep neural networks. Individual samples or instances can then be compared to the generic class representations, which we call templates, to determine their similarity and thus class membership. We show that this technique, which we call WDVec, delivers good results for multi-label image classification. Additionally, we illustrate the benefit of templates and their composability by performing retrieval with complex queries where we modify the information content in the representations. Our method can be used in conjunction with any existing neural network and create theoretically infinitely large feature maps. | reject | The paper proposes an approach for learning class-level and individual-level (token-level) representations based on Wasserstein distances between data subsets. The idea is appealing and seems to have applicability to multiple tasks. The reviewers voiced significant concerns with the unclear writing of the paper and with the limited experiments. The authors have improved the paper, but to my mind it still needs a good amount of work on both of these aspects. The choice of wording in many places is imprecise. The tasks are non-standard ones so they don't have existing published numbers to compare against; in such a situation I would expect to see more baselines, such as alternative class/instance representations that would show the benefit specifically of the Wasserstein distance-based approach. I cannot tell from the paper in its current form whether or when I would want to use the proposed approach. In short, despite a very interesting initial idea, I believe the paper is too preliminary for publication. | train | [
"rkxVztJjsB",
"rJg7R9qcsS",
"Hkejtt59jB",
"rJxIZt9qsS",
"Skx0fuGctH",
"HygNqe2ntB",
"rkxEKlP0tr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to get them this week while the authors still have the opportunity to re... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2020_HJgb7lSFwS",
"Skx0fuGctH",
"HygNqe2ntB",
"rkxEKlP0tr",
"iclr_2020_HJgb7lSFwS",
"iclr_2020_HJgb7lSFwS",
"iclr_2020_HJgb7lSFwS"
] |
iclr_2020_HyezmlBKwr | Test-Time Training for Out-of-Distribution Generalization | We introduce a general approach, called test-time training, for improving the performance of predictive models when test and training data come from different distributions. Test-time training turns a single unlabeled test instance into a self-supervised learning problem, on which we update the model parameters before making a prediction on the test sample. We show that this simple idea leads to surprising improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts. Theoretical investigations on a convex model reveal helpful intuitions for when we can expect our approach to help. | reject | The paper is on a new approach to transductive learning. Reviewers were a bit on the fence. Their most important objection is that the performance improvements that the authors report almost entirely come from the "online" version, which basically gets to see the test distribution. That contribution is nevertheless, in itself, potentially interesting, but I was surprised not to see comparisons with simple transductive learning from semi-supervised learning, learning with cache, or domain adaptation, e.g., using knowledge of the target distribution to reweight the training sample, or [0], on using an adversary to select a distribution consistent with sample statistics. I encourage the authors to add more baselines, analyze differences with existing approaches, and, if their approach is superior to existing approaches, resubmit elsewhere.
[0] http://papers.nips.cc/paper/5458-robust-classification-under-sample-selection-bias.pdf | train | [
"BklX60fKtH",
"B1gWRWfMcH",
"HkltDr4oiS",
"S1ldGH4ioS",
"SJgihEVsjr",
"SkgD9svKFS",
"SkxK93PFtr",
"rJxhI4ktFB",
"Bkx_qyUOtr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"public"
] | [
"The authors propose a method for adapting model parameters by doing self-supervised training on each individual test example. They show striking improvements in out-of-domain performance across a variety of image classification tasks while preserving in-domain performance; the latter is a marked difference from ot... | [
6,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1
] | [
"iclr_2020_HyezmlBKwr",
"iclr_2020_HyezmlBKwr",
"B1gWRWfMcH",
"SkgD9svKFS",
"BklX60fKtH",
"iclr_2020_HyezmlBKwr",
"rJxhI4ktFB",
"Bkx_qyUOtr",
"iclr_2020_HyezmlBKwr"
] |
iclr_2020_BkgM7xHYwH | Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory | Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective to solve tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memorization using linear autoencoders for sequences. We show how a recently proposed recurrent architecture, the Linear Memory Network, composed of a nonlinear feedforward layer and a separate linear recurrence, can be used to solve hard memorization tasks. We propose an initialization schema that sets the weights of a recurrent architecture to approximate a linear autoencoder of the input sequences, which can be found with a closed-form solution. The initialization schema can be easily adapted to any recurrent architecture.
We argue that this approach is superior to a random orthogonal initialization due to the autoencoder, which allows the memorization of long sequences even before training. The empirical analysis shows that our approach achieves competitive results against alternative orthogonal models, and the LSTM, on sequential MNIST, permuted MNIST and TIMIT. | reject | The paper explores an initialization scheme for the recently introduced linear memory network (LMN) (Bacciu et al., 2019) that is better than random initialization, and the approach is tested on various MNIST and TIMIT data sets with positive results.
Reviewer 3 raised concerns about the breadth of experiments and novelty. Reviewer 2 recognized that the model performs well on its MNIST baselines and had concerns about applicability to larger settings. Reviewer 1 acknowledges a very well written paper, but again raises concerns about the thoroughness of the experiments. The authors responded to all three reviewers, noting that the tasks were chosen to match existing work and that the approach is complementary to LSTMs to solve different tasks. Overall the reviewers did not re-adjust their ratings.
There remain questions on scalability and generality, which make the paper not yet ready for acceptance. We hope that the reviews support the authors' further research. | train | [
"rJxEiykssr",
"HJgBe0A5sH",
"HJlKoTAcir",
"H1llbY2sdH",
"B1gi6tditH",
"BJxS8t0k9r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n>>> 1. The authors claimed the proposed method could help with exploding gradient in training the linear memories. It would be helpful to include some experiments indicating that this was the case (for the baseline) and that this method does indeed help with this problem.\n>>> 4. In general, it seems that the e... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"H1llbY2sdH",
"B1gi6tditH",
"BJxS8t0k9r",
"iclr_2020_BkgM7xHYwH",
"iclr_2020_BkgM7xHYwH",
"iclr_2020_BkgM7xHYwH"
] |
iclr_2020_r1lQQeHYPr | Embodied Multimodal Multitask Learning | Visually-grounded embodied language learning models have recently shown to be effective at learning multiple multimodal tasks such as following navigational instructions and answering questions. In this paper, we address two key limitations of these models, (a) the inability to transfer the grounded knowledge across different tasks and (b) the inability to transfer to new words and concepts not seen during training using only a few examples. We propose a multitask model which facilitates knowledge transfer across tasks by disentangling the knowledge of words and visual attributes in the intermediate representations. We create scenarios and datasets to quantify cross-task knowledge transfer and show that the proposed model outperforms a range of baselines in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable which allows for transfer to instructions containing new concepts. | reject | This paper offers a new approach to cross-modal embodied learning that aims to overcome limited vocabulary and other issues. Reviews are mixed. I concur with the two reviewers who say the work is interesting but the contribution is not sufficiently clear for acceptance at this time. | train | [
"HJxnPOdRYr",
"B1g45ZScor",
"BJehkWrcjS",
"Hkl2Ber9jS",
"HyxBKMA6KB",
"HJgz_vSntH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I thank the authors for their detailed response and appreciate their hard work in bringing us this paper. \n\nI think that my main point is that this work relies too much on the extra information/constraints in the synthetic env. E.g., 1. since the vocab size is small, thus the feature map could be designed 'equal... | [
3,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_r1lQQeHYPr",
"HJgz_vSntH",
"HyxBKMA6KB",
"HJxnPOdRYr",
"iclr_2020_r1lQQeHYPr",
"iclr_2020_r1lQQeHYPr"
] |
iclr_2020_BJl7mxBYvB | Robust Reinforcement Learning via Adversarial Training with Langevin Dynamics | We re-think the Two-Player Reinforcement Learning (RL) as an instance of a distribution sampling problem in infinite dimensions. Using the powerful Stochastic Gradient Langevin Dynamics, we propose a new two-player RL algorithm, which is a sampling variant of the two-player policy gradient method. Our new algorithm consistently outperforms existing baselines, in terms of generalization across differing training and testing conditions, on several MuJoCo environments. | reject | The authors address the problem of robust reinforcement learning. They propose an adversarial perspective on robustness. Improving the robustness can now be seen as two agents playing a competitive game, which means that in many cases the first agent needs to play a mixed strategy. The authors propose an algorithm for optimizing such mixed strategies.
Although the reviewers are convinced of the relevance of the work (as a first approach of Bayesian learning to reach mixed Nash equilibria, which is useful not only for robustness but for any problem that can be formulated as a zero-sum game requiring a mixed strategy), they are not completely convinced by the work in its current state. Three of the reviewers commented on the experiments not being rigorous and convincing enough in their current form, and thus were not (yet!) able to recommend acceptance to ICLR. | train | [
"HkxECxAUiB",
"B1xt4BU9Fr",
"SkxOFmx2iB",
"Hkxwl7e3jS",
"B1e-fXEoiS",
"BkgcrZNojH",
"SJxLet99sB",
"r1lU1m95oS",
"B1eUcyvaFH",
"SyxZyNppYB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe paper considers the task of learning robust policies. Specifically, they focus on the noisy-robust (NR) variant of the action robust framework proposed in [1]. As noted in [1], as strong duality does not hold in the NR variant it can not be solved using deterministic policies (pure strategies).\nThe ... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_BJl7mxBYvB",
"iclr_2020_BJl7mxBYvB",
"B1e-fXEoiS",
"BkgcrZNojH",
"BkgcrZNojH",
"SJxLet99sB",
"iclr_2020_BJl7mxBYvB",
"iclr_2020_BJl7mxBYvB",
"iclr_2020_BJl7mxBYvB",
"iclr_2020_BJl7mxBYvB"
] |
iclr_2020_r1gEXgBYDH | Defensive Tensorization: Randomized Tensor Parametrization for Robust Neural Networks | As deep neural networks become widely adopted for solving most problems in computer vision and audio-understanding, there are rising concerns about their potential vulnerability. In particular, they are very sensitive to adversarial attacks, which manipulate the input to alter models' predictions. Despite large bodies of work to address this issue, the problem remains open. In this paper, we propose defensive tensorization, a novel adversarial defense technique that leverages a latent high order factorization of the network. Randomization is applied in the latent subspace, therefore resulting in dense reconstructed weights, without the sparsity or perturbations typically induced by the randomization.
Our approach can be easily integrated with any arbitrary neural architecture and combined with techniques like adversarial training. We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks. We further validate the generalizability of our approach across domains and low-precision architectures by considering an audio classification task and binary networks. In all cases, we demonstrate superior performance compared to prior works in the target scenario. | reject | Three reviewers have assessed this submission and were moderately positive about it . However, the reviewers have also raised a number of concerns. Initially, they complained about substandard experimentation which has been resolved to some degree after rebuttal (rev. believe more can be done in terms of unifying them, investigating backbones, attack methods, and experimental settings in light of recent papers).
A somewhat bigger criticism concerns the theoretical part:
1. Rev. remained unclear why using tensor decomposition techniques is a sound approach for designing robust networks.
2. AC and rev. also noted during discussions that using low-rank constraints (and other mechanisms), i.e. encouraging smoothness (one important mechanism among many in robustness to attacks), has been extensively investigated in the literature, yet the proposed idea makes little if any theoretical connection to such important theoretical tools.
Some references (not exhaustive) that may help authors further study the above aspects are:
Certified Adversarial Robustness via Randomized Smoothing, Cohen et al.
Local Gradients Smoothing: Defense against localized adversarial attacks, Naseer et al.
Limitations of the Lipschitz constant as adefense against adversarial examples, Huster et al.
Learning Low-Rank Representations, Huster et al.
On balance, AC feels that despite the enthusiasm, this paper is not yet ready for publication in ICLR as the key theory behind the proposed idea is missing. Thus, this submission falls marginally short of acceptance in ICLR 2020. However, the authors are encouraged to build up a compelling theory and resubmit to another venue (currently the paper feels like a solid workshop idea that needs to be investigated further). | train | [
"HJluYfNooB",
"ByeaXG4ssB",
"rJxdCW4ojr",
"BJlqtyEosB",
"Bkgn8aQooH",
"BylqtktpFS",
"Hyl2dMt0YB",
"r1l0cn0k9H",
"r1gR0Ei4ur",
"BJxKtFdnDH"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"\nWe are glad to see that all reviewers recognised the novelty of our approach and that there is a consensus for acceptance. We are grateful to all reviewers’ comments, which we believe will help in greatly improving the quality of the paper. In this rebuttal, we carefully addressed all the comments and ran additi... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
-1,
-1
] | [
"iclr_2020_r1gEXgBYDH",
"rJxdCW4ojr",
"r1l0cn0k9H",
"Hyl2dMt0YB",
"BylqtktpFS",
"iclr_2020_r1gEXgBYDH",
"iclr_2020_r1gEXgBYDH",
"iclr_2020_r1gEXgBYDH",
"BJxKtFdnDH",
"iclr_2020_r1gEXgBYDH"
] |
iclr_2020_Bkf4XgrKvS | Unsupervised Learning of Graph Hierarchical Abstractions with Differentiable Coarsening and Optimal Transport | Hierarchical abstractions are a methodology for solving large-scale graph problems in various disciplines. Coarsening is one such approach: it generates a pyramid of graphs whereby the one in the next level is a structural summary of the prior one. With a long history in scientific computing, many coarsening strategies were developed based on mathematically driven heuristics. Recently, resurgent interests exist in deep learning to design hierarchical methods learnable through differentiable parameterization. These approaches are paired with downstream tasks for supervised learning. In this work, we propose an unsupervised approach, coined \textsc{OTCoarsening}, with the use of optimal transport. Both the coarsening matrix and the transport cost matrix are parameterized, so that an optimal coarsening strategy can be learned and tailored for a given set of graphs. We demonstrate that the proposed approach produces meaningful coarse graphs and yields competitive performance compared with supervised methods for graph classification. | reject | This paper presents a differentiable coarsening approach for graph neural network. It provides the empirical demonstration that the proposed approach is competitive to existing pooling approaches. However, although the paper shows an interesting observation, there are remaining novelty as well as clarity concerns. In particular, the contribution of the proposed work over the graph kernels based on other forms of coarsening such as the early work of Shervashidze et al. as well as higher-order WL (pointed out by Reviewer1) remains unclear. We believe the paper currently lacks comparisons and discussions, and will benefit from additional rounds of future revisions.
| train | [
"ryldwtXVKB",
"Sye4VJGKsH",
"S1gJM1MYiS",
"B1xmARZKsS",
"Bylkq0ZYjH",
"S1lgrCbKiS",
"SyeMI_gOjS",
"SkxoB6Fbsr",
"B1xsVn_aKH",
"HygY5iIScS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to summarize a given graph based on the algebraic multigrid and optimal transport, which can be further used for the downstream ML tasks such as graph classification.\nAlthough the problem of graph summarization is a relevant task, there are a number of unclear points in this paper lis... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_Bkf4XgrKvS",
"iclr_2020_Bkf4XgrKvS",
"SkxoB6Fbsr",
"HygY5iIScS",
"B1xsVn_aKH",
"ryldwtXVKB",
"SkxoB6Fbsr",
"iclr_2020_Bkf4XgrKvS",
"iclr_2020_Bkf4XgrKvS",
"iclr_2020_Bkf4XgrKvS"
] |
iclr_2020_SklSQgHFDS | Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration | Exploration in sparse reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly these signals are added as bonus rewards, which results in a mixture policy that neither conducts exploration nor task fulfillment resolutely.
In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning. Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific. It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation. We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite. The results show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives. A video of our experimental results can be found at https://gofile.io/?c=HpEwTd. | reject | The paper presents a method for intrinsically motivated exploration using successor features by interleaving the exploration task with intrinsic rewards and extrinsic task original external rewards. In addition, the paper proposes "successor feature control" (distance between consecutive successor features) as an intrinsic reward. The proposed method is interesting and it can potentially address the limitation of existing exploration methods based on intrinsic motivation. In experimental results, the method is evaluated on navigation tasks using Vizdoom and DeepMind Lab, as well as continuous control tasks of Cartpole in the DeepMind control suite, with promising results.
On the negative side, there are some domain-specific properties (e.g., moderate map size with relatively simple structures, different rooms having visually distinct patterns, bottleneck states generally leading to better rewards, etc.) that make the proposed method work well. In addition, off-policy learning of the successor features could be a potential technical issue. Finally, the proposed method is not evaluated against stronger baselines on harder exploration tasks (such as Atari Montezuma's revenge, etc.), thus the addition of such results would make the paper more convincing. In the current form, the paper seems to need more work to be acceptable for ICLR. | val | [
"BkesHtI_iB",
"HyewqIr_iH",
"r1ecSRLOiB",
"S1lM84BuoH",
"rylhp2IOjr",
"B1l-41IuoH",
"Hkx7NQrOsS",
"B1xGsCVdjH",
"Syga7o0atr",
"BJev5lEbcS",
"BkxkKNOh9H",
"BkeuWsu0qS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the detailed comments and the effort spent on reviewing our paper. Below we address the concerns raised by the reviewer point by point.\n\n$\\bullet$ Differences to [1],[2]\n\nResponse:\nWe thank the reviewers for the references. Below we discuss their differences from our paper. \n[1] p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"Syga7o0atr",
"Syga7o0atr",
"BJev5lEbcS",
"BkeuWsu0qS",
"BkxkKNOh9H",
"BkeuWsu0qS",
"BkeuWsu0qS",
"iclr_2020_SklSQgHFDS",
"iclr_2020_SklSQgHFDS",
"iclr_2020_SklSQgHFDS",
"iclr_2020_SklSQgHFDS",
"iclr_2020_SklSQgHFDS"
] |
iclr_2020_Hye87grYDH | Sparse Transformer: Concentrated Attention Through Explicit Selection | Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Sparse Transformer. Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Sparse Transformer in model performance.
Sparse Transformer reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation and IWSLT 2014 German-to-English translation. In addition, we conduct qualitative analysis to account for Sparse Transformer's superior performance. | reject | The paper proposes a variant of Sparse Transformer where only top K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer.
While the proposed idea is simple, easy to implement, and it does not add additional computational or memory cost, the reviewers raised several concerns in the discussion phase, including: several baselines missing from the tables; incomplete experimental details; incorrect/misleading selection of best performing model in tables of results (e.g. In Table 1, the authors boldface their results on En-De (29.4) and De-En (35.6) but in fact, the best performance on these is achieved by competing models, respectively 29.7 and 35.7. The caption claims their model "achieves the state-of-the-art performances in En-Vi and De-En" but this is not true for De-En (albeit by 0.1). In Table 3, they boldface their result of 1.05 but the best result is 1.02; the text says their model beats the Transf-XL "with an advantage" (of 0.01) but do not point out that the advantage of Adaptive-span over their model is 3 times as large (0.03)).
This prevents me from recommending acceptance of this paper in its current form. I strongly encourage the authors to address these concerns in a future submission. | train | [
"rkezjkjniS",
"Bkx1jJ53ir",
"HylfgT9njB",
"BkeSP792or",
"HJgIfWo2oB",
"BylJAgnjjB",
"BylYWr7msB",
"B1x0T9sPtS",
"BkehHdjsFS",
"SkxkjiCatr",
"SyenHj3h9H",
"Skg1vbiHtH",
"HyxM3_4yKS",
"BJlZADEp_H"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Thank you for your valuable comments. We have empirically addressed your concerns about the optimal choice of k and the comparisons to the previous sparse attention methods in the updates. \n\nAs you said, our approach is simple, so the Explicit Sparse Transformer is significantly faster in both inference and trai... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
-1,
-1,
-1,
-1
] | [
"BkehHdjsFS",
"iclr_2020_Hye87grYDH",
"B1x0T9sPtS",
"SkxkjiCatr",
"BylJAgnjjB",
"B1x0T9sPtS",
"iclr_2020_Hye87grYDH",
"iclr_2020_Hye87grYDH",
"iclr_2020_Hye87grYDH",
"iclr_2020_Hye87grYDH",
"Skg1vbiHtH",
"iclr_2020_Hye87grYDH",
"BJlZADEp_H",
"iclr_2020_Hye87grYDH"
] |
iclr_2020_HklvmlrKPB | Improving Sequential Latent Variable Models with Autoregressive Flows | We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models. | reject | The paper scores low on novelty. The experiments and model analysis are not very strong. | train | [
"HylbHoHSFS",
"r1eoA5cssB",
"rkepiqqsir",
"r1eNXq5ioH",
"ryeqOccosH",
"HyejGEhEFS",
"Skld8grnYB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nSummary \nThe paper proposes to combine the video modeling approaches based on autoregressive flows (e.g. Kumar’19) with amortized variational inference (e.g. Denton’18), wherein an autoregressive latent variable model optimized with variational inference is extended with an autoregressive flow that further tran... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2020_HklvmlrKPB",
"HyejGEhEFS",
"HylbHoHSFS",
"iclr_2020_HklvmlrKPB",
"Skld8grnYB",
"iclr_2020_HklvmlrKPB",
"iclr_2020_HklvmlrKPB"
] |
iclr_2020_SyxD7lrFPH | Frequency Pooling: Shift-Equivalent and Anti-Aliasing Down Sampling | Convolutional layer utilizes the shift-equivalent prior of images which makes it a great success for image processing. However, commonly used down sampling methods in convolutional neural networks (CNNs), such as max-pooling, average-pooling, and strided-convolution, are not shift-equivalent. This destroys the shift-equivalent property of CNNs and degrades their performance. In this paper, we propose a novel pooling method which is \emph{strict shift equivalent and anti-aliasing} in theory. This is achieved by (inverse) Discrete Fourier Transform and we call our method frequency pooling. Experiments on image classifications show that frequency pooling improves accuracy and robustness w.r.t shifts of CNNs. | reject | This submission has been assessed by three reviewers and scored 3/6/1. The reviewers also have not increased their scores after the rebuttal. Two reviewers pointed to poor experimental results that do not fully support what is claimed in the contributions and conclusions. Theoretical support for the reconstruction criterion was considered weak. Finally, the paper is pointed out to be a special case of (Zhang 2019). While the paper has some merits, all reviewers had a large number of unresolved criticisms. Thus, this paper cannot be accepted by ICLR2020.
| train | [
"BJxED1cGsB",
"ryxwPzW3jr",
"SJgjcJWnjH",
"B1eWm06isr",
"H1e6lQ9Mjr",
"r1xXF0YMoS",
"Hyl3QZqMjS",
"r1lTmBYHtB",
"Skx-jEBr5B",
"Hyl73mFK9H"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We think your suggestions are very meaningful. We respond to them one by one:\n\n1. We will explain anti-aliasing in our updated paper. Roughly, anti-aliasing is helpful for signal reconstruction. However, we can’t provide a strict treatment of how anti-aliasing relates to classification. But we have intuitions: f... | [
-1, -1, -1, -1, -1, -1, -1, 1, 6, 3] | [-1, -1, -1, -1, -1, -1, -1, 3, 1, 3] | ["Hyl73mFK9H", "iclr_2020_SyxD7lrFPH", "B1eWm06isr", "H1e6lQ9Mjr", "r1lTmBYHtB", "iclr_2020_SyxD7lrFPH", "Skx-jEBr5B", "iclr_2020_SyxD7lrFPH", "iclr_2020_SyxD7lrFPH", "iclr_2020_SyxD7lrFPH"]
iclr_2020_rkedXgrKDH | Trajectory growth through random deep ReLU networks | This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing results, providing an alternative, simpler method for lower bounding expected trajectory growth through random networks, for a more general class of weights distributions, including sparsely connected networks. We illustrate this approach by deriving bounds for sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets. We prove that trajectory growth can remain exponential in depth with these new distributions, including their sparse variants, with the sparsity parameter appearing in the base of the exponent. | reject | This article studies the length of one-dimensional trajectories as they are mapped through the layers of a ReLU network, simplifying proof methods and generalising previous results on networks with random weights to cover different classes of weight distributions including sparse ones. It is observed that the behaviour is similar for different distributions, suggesting a type of universality. The reviewers found that the paper is well written and appreciated the clear description of the places where the proofs deviate from previous works. However, they found that the results, although adding interesting observations in the sparse setting, are qualitatively very close to previous works and possibly not substantial enough for publication in ICLR. The revision includes some experiments with trained networks and updates the title to better reflect the contribution. However, the reviewers did not find this convincing enough. The article would benefit from a deeper theory clarifying the observations that have been made so far, and more extensive experiments connecting to practice. | test | [
"SJgm51BhiH",
"HyxQGXj4oH",
"HkeLXYbXsS",
"S1e66dZQsS",
"r1eOLBb7iB",
"BJemDG-XiH",
"Skv8-W7ir",
"HkxWMcyQtr",
"r1xMfg2LKS",
"rJen4nfhqB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have now uploaded a further revised submission with an extra two figures in Appendix C.2 which show the results of some experiments on trained networks, and a pointer to these figures at the end of Section 4 on page 9. \n\nThe results indicate, firstly, that even in trained networks, trajectory growth through... | [
-1, -1, -1, -1, -1, -1, -1, 3, 6, 3] | [-1, -1, -1, -1, -1, -1, -1, 1, 1, 3] | ["iclr_2020_rkedXgrKDH", "iclr_2020_rkedXgrKDH", "HkxWMcyQtr", "HkxWMcyQtr", "r1xMfg2LKS", "rJen4nfhqB", "rJen4nfhqB", "iclr_2020_rkedXgrKDH", "iclr_2020_rkedXgrKDH", "iclr_2020_rkedXgrKDH"]
iclr_2020_BklOXeBFDS | Transfer Active Learning For Graph Neural Networks | Graph neural networks have been proved very effective for a variety of prediction tasks on graphs such as node classification. Generally, a large number of labeled data are required to train these networks. However, in reality it could be very expensive to obtain a large number of labeled data on large-scale graphs. In this paper, we studied active learning for graph neural networks, i.e., how to effectively label the nodes on a graph for training graph neural networks. We formulated the problem as a sequential decision process, which sequentially label informative nodes, and trained a policy network to maximize the performance of graph neural networks for a specific task. Moreover, we also studied how to learn a universal policy for labeling nodes on graphs with multiple training graphs and then transfer the learned policy to unseen graphs. Experimental results on both settings of a single graph and multiple training graphs (transfer learning setting) prove the effectiveness of our proposed approaches over many competitive baselines. | reject | Paper proposes a method for active learning on graphs. Reviewers found the presentation of the method confusing and somewhat lacking novelty in light of existing works (some of which were not compared to). After the rebuttal and revisions, reviewers minds were not changed from rejection. | val | [
"SJejFIWTFH",
"S1gVscRKir",
"Bylru5RYjB",
"S1xwX5RYoH",
"SJev7G8sYS",
"BklvSUYRFH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the author response.\nOriginal review:\n\nThis paper presents a method for active learning on graphs, including a novel setting of transferring an active learning policy to unseen graphs. The problems tackled here are important and the method is shown to improve over previous work in some cases. On... | [
3, -1, -1, -1, 3, 3] | [3, -1, -1, -1, 4, 4] | ["iclr_2020_BklOXeBFDS", "SJejFIWTFH", "SJev7G8sYS", "BklvSUYRFH", "iclr_2020_BklOXeBFDS", "iclr_2020_BklOXeBFDS"]
iclr_2020_r1xF7lSYDS | Transferable Recognition-Aware Image Processing | Recent progress in image recognition has stimulated the deployment of vision systems (e.g. image search engines) at an unprecedented scale. As a result, visual data are now often consumed not only by humans but also by machines. Meanwhile, existing image processing methods only optimize for better human perception, whereas the resulting images may not be accurately recognized by machines. This can be undesirable, e.g., the images can be improperly handled by search engines or recommendation systems. In this work, we propose simple approaches to improve machine interpretability of processed images: optimizing the recognition loss directly on the image processing network or through an intermediate transforming model, a process which we show can also be done in an unsupervised manner. Interestingly, the processing model's ability to enhance the recognition performance can transfer when evaluated on different recognition models, even if they are of different architectures, trained on different object categories or even different recognition tasks. This makes the solutions applicable even when we do not have the knowledge about future downstream recognition models, e.g., if we are to upload the processed images to the Internet. We conduct comprehensive experiments on three image processing tasks with two downstream recognition tasks, and confirm our method brings substantial accuracy improvement on both the same recognition model and when transferring to a different one, with minimal or no loss in the image processing quality. | reject | This paper presents several models for recognition-aware image enhancement. The authors propose to enhance the image quality in the presence of image degradation (e.g., low-resolution, noise, compression artifacts) as well as to improve the recognition accuracy in a joint model. While acknowledging that the paper is addressing an interesting direction, the reviewers and AC note the following potential weaknesses: presentation clarity, limited technical contributions, insufficient empirical evidence. AC can confirm all the reviewers have read the rebuttal and have contributed to the discussion. All the reviewers and AC agree that the rebuttal was informative, and the authors have partially addressed some of the concerns (e.g. additional experiments). R2 has raised the score from reject to weak reject. However, at this stage AC suggest the manuscript is below the acceptance bar and needs a major revision before submitting for another round of reviews. We hope the reviews are useful for improving and revising the paper. | val | [
"BklAQWtjOB",
"rJlzwMAEiH",
"ByeL8DRVor",
"HJe7nBR4sH",
"HyxXILCEor",
"SygheU04sB",
"rJeSsN0VoH",
"Hkl_dbR4oB",
"ryxXHWAViS",
"r1xh8HK_Fr",
"SJgmkJdkqB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Claims: \n\nThe paper presents a concept of \"recognition-aware (RA) image processing\": when one enhances image in a some way, not only human judjement should be taken into account, but also performance of various computer vision application using that image.\n\nAs an example of processing tasks, authors take sup... | [
8, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3] | [4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3] | ["iclr_2020_r1xF7lSYDS", "SJgmkJdkqB", "BklAQWtjOB", "r1xh8HK_Fr", "r1xh8HK_Fr", "r1xh8HK_Fr", "r1xh8HK_Fr", "SJgmkJdkqB", "SJgmkJdkqB", "iclr_2020_r1xF7lSYDS", "iclr_2020_r1xF7lSYDS"]
iclr_2020_rylqmxBKvH | Unsupervised Spatiotemporal Data Inpainting | We tackle the problem of inpainting occluded area in spatiotemporal sequences, such as cloud occluded satellite observations, in an unsupervised manner. We place ourselves in the setting where there is neither access to paired nor unpaired training data. We consider several cases in which the underlying information of the observed sequence in certain areas is lost through an observation operator. In this case, the only available information is provided by the observation of the sequence, the nature of the measurement process and its associated statistics. We propose an unsupervised-learning framework to retrieve the most probable sequence using a generative adversarial network. We demonstrate the capacity of our model to exhibit strong reconstruction capacity on several video datasets such as satellite sequences or natural videos.
| reject | This paper studies the problem of unsupervised inpainting occluded areas in spatiotemporal sequences and propose a GAN-based framework which is able to complete the occluded areas given the stochastic model of the occlusion process. The reviewers agree that the problem is interesting, the paper is well written, and that the proposed approach is reasonable. However, after the discussion phase the critical point raised by AnonReviewer1 remains: in principle, when applying different corruptions in each step, the model is able to see the entire video over the duration of the training. This coupled with the strong assumptions on the mask distribution makes it questionable whether the approach should be considered unsupervised. Given that the results of the supervised methods significantly outperform the unsupervised ones, this issue needs to be carefully addressed to provide a clear and convincing selling point. Hence, I will recommend rejection and encourage the authors to address the remaining issues (the answers in the rebuttal are a good starting point). | train | [
"HkgHEBLhjB",
"rkgs_3rnir",
"SJeTm2SnoB",
"r1xzroShsS",
"ryl7gsB3oH",
"HJlXPY_SFH",
"rJeZL4PtYr",
"rkl9Dco75S"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to all the reviewers for their comments and suggestions. We tried to take all of them into account, we reorganized the paper accordingly and hope to provide all the required precisions.\n\nWe address below some general comments/questions raised by the reviewers and then give detailed answers for each review... | [
-1, -1, -1, -1, -1, 3, 3, 6] | [-1, -1, -1, -1, -1, 4, 3, 1] | ["iclr_2020_rylqmxBKvH", "rkl9Dco75S", "HJlXPY_SFH", "rJeZL4PtYr", "rJeZL4PtYr", "iclr_2020_rylqmxBKvH", "iclr_2020_rylqmxBKvH", "iclr_2020_rylqmxBKvH"]
iclr_2020_Hygi7xStvS | Lossless Data Compression with Transformer | Transformers have replaced long-short term memory and other recurrent neural networks variants in sequence modeling. It achieves state-of-the-art performance on a wide range of tasks related to natural language processing, including language modeling, machine translation, and sentence representation. Lossless compression is another problem that can benefit from better sequence models. It is closely related to the problem of online learning of language models. But, despite this ressemblance, it is an area where purely neural network based methods have not yet reached the compression ratio of state-of-the-art algorithms. In this paper, we propose a Transformer based lossless compression method that match the best compression ratio for text. Our approach is purely based on neural networks and does not rely on hand-crafted features as other lossless compression algorithms. We also provide a thorough study of the impact of the different components of the Transformer and its training on the compression ratio. | reject | The paper proposes to use transformers to do lossless data compression. The idea is simple and straightforward (with adding n-gram inputs). The initial submission considered one dataset, a new dataset was added in the rebuttal. Still, there is no runtime in the experiments (and Transformers can take a lot of time to train). Since this is more an experimental paper, this is crucial (and the improvements reports are very small and it is difficult to judge if there are significant).
Overall, there was a positive discussion between the authors and the reviewers. The reviewers commented that concerns have been addressed, but did not change the evaluation which is unanimous reject. | train | [
"rkxDdksnjB",
"S1lcZ1ohsH",
"BJxPqa9noS",
"BkeZpB56FH",
"r1eFrDP19S",
"HklOX_jl5r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their feedback.\n\nIn this paper, we show that a method purely based on neural networks, without hand designed features, can obtain SoTA results compression results on benchmarks such as enwik8. Existing work showed a significant gap between methods purely based on neural networks, compar... | [
-1, -1, -1, 3, 1, 3] | [-1, -1, -1, 4, 5, 3] | ["r1eFrDP19S", "BkeZpB56FH", "HklOX_jl5r", "iclr_2020_Hygi7xStvS", "iclr_2020_Hygi7xStvS", "iclr_2020_Hygi7xStvS"]
iclr_2020_BJlnmgrFvS | BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning | The field of Deep Reinforcement Learning (DRL) has recently seen a surge in research in batch reinforcement learning, which aims for sample-efficient learning from a given data set without additional interactions with the environment. In the batch DRL setting, commonly employed off-policy DRL algorithms can perform poorly and sometimes even fail to learn altogether. In this paper we propose anew algorithm, Best-Action Imitation Learning (BAIL), which unlike many off-policy DRL algorithms does not involve maximizing Q functions over the action space. Striving for simplicity as well as performance, BAIL first selects from the batch the actions it believes to be high-performing actions for their corresponding states; it then uses those state-action pairs to train a policy network using imitation learning. Although BAIL is simple, we demonstrate that BAIL achieves state of the art performance on the Mujoco benchmark, typically outperforming BatchConstrained deep Q-Learning (BCQ) by a wide margin. | reject | The authors propose a novel algorithm for batch RL with offline data. The method is simple and outperforms a recently proposed algorithm, BCQ, on Mujoco benchmark tasks.
The main points that have not been addressed after the author rebuttal are:
* Lack of rigor and incorrectness of theoretical statements. Furthermore, there is little analysis of the method beyond the performance results.
* Non-standard assumptions/choices in the algorithm without justification (e.g., concatenating episodes).
* Numerous sloppy statements / assumptions that are not justified.
* No comparison to BEAR, making it challenging to evaluate their state-of-the-art claims.
The reviewers also point out several limitations of the proposed method. Adding a brief discussion of these limitations would strengthen the paper.
The method is interesting and simple, so I believe that the paper has the potential to be a strong submission if the authors incorporate the reviewers suggestions in a future submission. However, at this time, the paper falls below the acceptance bar. | train | [
"S1gQk7josH",
"rkxaR_9sor",
"Ske2Jgqssr",
"S1gZB2I2YB",
"SJlbdX86tr",
"SJelB0Pj5H"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your thorough review. We appreciate that you \"like the simplicity of the approach and the fact that it is much easier to understand than existing works like BCQ\". \n\nFor your second point you write: \"Experimental results are a little unsettling. The primary reason is that in all of the plots, BCQ... | [
-1, -1, -1, 3, 1, 3] | [-1, -1, -1, 4, 3, 4] | ["S1gZB2I2YB", "SJlbdX86tr", "SJelB0Pj5H", "iclr_2020_BJlnmgrFvS", "iclr_2020_BJlnmgrFvS", "iclr_2020_BJlnmgrFvS"]
iclr_2020_H1l2mxHKvr | Few-Shot Few-Shot Learning and the role of Spatial Attention | Few-shot learning is often motivated by the ability of humans to learn new tasks from few examples. However, standard few-shot classification benchmarks assume that the representation is learned on a limited amount of base class data, ignoring the amount of prior knowledge that a human may have accumulated before learning new tasks. At the same time, even if a powerful representation is available, it may happen in some domain that base class data are limited or non-existent. This motivates us to study a problem where the representation is obtained from a classifier pre-trained on a large-scale dataset of a different domain, assuming no access to its training process, while the base class data are limited to few examples per class and their role is to adapt the representation to the domain at hand rather than learn from scratch. We adapt the representation in two stages, namely on the few base class data if available and on the even fewer data of new tasks. In doing so, we obtain from the pre-trained classifier a spatial attention map that allows focusing on objects and suppressing background clutter. This is important in the new problem, because when base class data are few, the network cannot learn where to focus implicitly. We also show that a pre-trained network may be easily adapted to novel classes, without meta-learning. | reject | This paper tackles the interesting problem of meta-learning in problem spaces where training "tasks" are scarce. Two criticisms that seems to shared across reviewers are that (i) it is debatable how "novel" the space of meta learning with "few" tasks is, especially since there aren't established standard for how many training tasks should be available, and (ii) the paper could use more comparisons with baseline methods and ablations to understand the contributions. As an AC, I down-weight criticism (i) because I don't feel the paper has to be creating a new problem definition; it's acceptable to make advances within an existing space. However, criticism (ii) seems to remain. After conferring with reviewers it seems that the rebuttal was not strong enough to significantly alter the reviewer's opinions on this issue, and so the paper does not have enough support to justify acceptance. The paper certainly addresses interesting issues, and I look forward to seeing a revised/improved version at another venue. | test | [
"rkgOJZaaKH",
"S1xtjozaYr",
"H1epAP5jFS",
"Byec1WQ3oH",
"BklTklXnjr",
"HyeIMy7noS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"A new task is suggested, similarly to FSL the test is done in an episodic manner of k-shot 5-way, but the number of samples for base classes is also limited. The model is potentially pre-trained on a large scale dataset from another domain. The suggested method is applying spatial attention according to entropy cr... | [
3, 3, 1, -1, -1, -1] | [4, 4, 4, -1, -1, -1] | ["iclr_2020_H1l2mxHKvr", "iclr_2020_H1l2mxHKvr", "iclr_2020_H1l2mxHKvr", "H1epAP5jFS", "S1xtjozaYr", "rkgOJZaaKH"]
iclr_2020_ByeaXeBFvH | Hydra: Preserving Ensemble Diversity for Model Distillation | Ensembles of models have been empirically shown to improve predictive performance and to yield robust measures of uncertainty. However, they are expensive in computation and memory. Therefore, recent research has focused on distilling ensembles into a single compact model, reducing the computational and memory burden of the ensemble while trying to preserve its predictive behavior. Most existing distillation formulations summarize the ensemble by capturing its average predictions. As a result, the diversity of the ensemble predictions, stemming from each individual member, is lost. Thus the distilled model cannot provide a measure of uncertainty comparable to that of the original ensemble. To retain more faithfully the diversity of the ensemble, we propose a distillation method based on a single multi-headed neural network, which we refer to as Hydra. The shared body network learns a joint feature representation that enables each head to capture the predictive behavior of each ensemble member. We demonstrate that with a slight increase in parameter count, Hydra improves distillation performance on classification and regression settings while capturing the uncertainty behaviour of the original ensemble over both in-domain and out-of-distribution tasks. | reject | This work introduces a simple and effective method for ensemble distillation. The method is a simple extension of earlier “prior networks”: it differs in which, instead of fitting a single network to mimic a distribution produced by the ensemble, this work suggests to use multi-head (one head per individual ensemble member) in order to better capture the ensemble diversity. This paper experimentally shows that multi-head architecture performs well on MNIST and CIFAR-10 (they added CIFAR-100 in the revised version) in terms of accuracy and uncertainty.
While the method is effective and the experiments on CIFAR-100 (a harder task) improved the paper, the reviewers (myself included) pointed out in the discussion phase that the limited novelty remains a major weakness. The proposed method seems like a trivial extension of the prior work, and does not provide much additional insight. To remedy this shortcoming, I suggest the authors provide extensive experimental supports including various datasets and ablation studies.
Another concern mentioned in the discussion is the fact that these small improvements are in spite of the fact that the proposed method ends up using many more parameters than the baselines. Including and comparing different model sizes in a full fledged experimental evaluation would better convey the trade-offs of the proposed approach.
| train | [
"ryexo70hKr",
"Bkxv-Md3jr",
"Hyen1zO2ir",
"BklDgbd3oS",
"r1egFx_3ir",
"rJxxmgd3jB",
"HJgwwEGEFH",
"ByxD8uBpFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overview:\nThis work introduces a new method for ensemble distillation. The problem of making better ensemble distillation methods seems relevant as ensembles are still one of the best ways to estimate uncertainty in practice (although see concerns below). The method itself is a simple extension of earlier “prior ... | [
6, -1, -1, -1, -1, -1, 3, 1] | [3, -1, -1, -1, -1, -1, 3, 4] | ["iclr_2020_ByeaXeBFvH", "Hyen1zO2ir", "HJgwwEGEFH", "ryexo70hKr", "ByxD8uBpFr", "iclr_2020_ByeaXeBFvH", "iclr_2020_ByeaXeBFvH", "iclr_2020_ByeaXeBFvH"]
iclr_2020_rJlTXxSFPr | A Quality-Diversity Controllable GAN for Text Generation | Text generation is a critical and difficult natural language processing task. Maximum likelihood estimate (MLE) based models have been arguably suffered from exposure bias in the inference stage and thus varieties of language generative adversarial networks (GANs) bypassing this problem have emerged. However, recent study has demonstrated that MLE models can constantly outperform GANs models over quality-diversity space under several metrics. In this paper, we propose a quality-diversity controllable language GAN. | reject | This paper provides a method (loss function) for training GAN model for generation of discrete text token generation. The aim of this loss method to control the trade off between quality vs diversity while generating the text data.
The paper is generally well written, but the experimental section is not overly good: Interpretation of the results is missing; error bars are missing. | train | [
"B1gG2j12sr",
"ryeTZj1niH",
"SygLjcZ3oS",
"B1xIAMv8Yr",
"H1e4ESkYKB",
"BygMPlX3Kr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your detailed review.\n=== Theoretical analysis ===\n1、D_G^* has a term of empirical distribution, whether that term becomes real distribution when N goes to infinity is an open question, it will be answered in our future work.\n2、Although generalized JSD is 0 when \\pi = 0 and \\pi = 1, generalized JSD... | [
-1, -1, -1, 1, 1, 3] | [-1, -1, -1, 4, 4, 4] | ["H1e4ESkYKB", "BygMPlX3Kr", "B1xIAMv8Yr", "iclr_2020_rJlTXxSFPr", "iclr_2020_rJlTXxSFPr", "iclr_2020_rJlTXxSFPr"]
iclr_2020_rJxRmlStDB | Self-Induced Curriculum Learning in Neural Machine Translation | Self-supervised neural machine translation (SS-NMT) learns how to extract/select suitable training data from comparable (rather than parallel) corpora and how to translate, in a way that the two tasks support each other in a virtuous circle. SS-NMT has been shown to be competitive with state-of-the-art unsupervised NMT. In this study we provide an in-depth analysis of the sampling choices the SS-NMT model takes during training. We show that, without it having been told to do so, the model selects samples of increasing (i) complexity and (ii) task-relevance in combination with (iii) a denoising curriculum. We observe that the dynamics of the mutual-supervision of both system internal representation types is vital for the extraction and hence translation performance. We show that in terms of the human Gunning-Fog Readability index (GF), SS-NMT starts by extracting and learning from Wikipedia data suitable for high school (GF=10--11) and quickly moves towards content suitable for first year undergraduate students (GF=13). | reject | This paper presents a method for curriculum learning based on extracting parallel sentences from comparable corpora (wikipedia), and continuously retraining the model based on these examples. Two reviewers pointed out that the initial version of the paper lacked references and baselines from methods of mining parallel sentences from comparable corpora such as Wikipedia. The authors have responded at length and included some of the requested baseline results. This changed one reviewer's score but has not tipped the balance strongly enough for considering this for publication. | train | [
"HylrgUNXiS",
"H1gMbCH3sS",
"BJg9vTBYiS",
"rJeew15WjH",
"Syez1CFWjS",
"SkxqBn8OYH",
"Bygx092pFB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper describes a method for training self-supervised neural machine translation systems from a document-aligned comparable corpus (Wikipedia in en, fr, de and es).\n\nThe proposed training method consists of two concurrent processes: a pseudo-parallel sentence pair extraction process, where average word embe... | [
6, -1, -1, -1, -1, 3, 1] | [5, -1, -1, -1, -1, 5, 5] | ["iclr_2020_rJxRmlStDB", "BJg9vTBYiS", "HylrgUNXiS", "SkxqBn8OYH", "Bygx092pFB", "iclr_2020_rJxRmlStDB", "iclr_2020_rJxRmlStDB"]
iclr_2020_SkxlElBYDS | Continual Learning via Principal Components Projection | Continual learning in neural networks (NN) often suffers from catastrophic forgetting. That is, when learning a sequence of tasks on an NN, the learning of a new task will cause weight changes that may destroy the learned knowledge embedded in the weights for previous tasks. Without solving this problem, it is difficult to use an NN to perform continual or lifelong learning. Although researchers have attempted to solve the problem in many ways, it remains to be challenging. In this paper, we propose a new approach, called principal components projection (PCP). The idea is that in learning a new task, if we can ensure that the gradient updates will only occur in the orthogonal directions to the input vectors of the previous tasks, then the weight updates for learning the new task will not affect the previous tasks. We propose to compute the principal components of the input vectors and use them to transform the input and to project the gradient updates for learning each new task. PCP does not need to store any sampled data from previous tasks or to generate pseudo data of previous tasks and use them to help learn a new task. Empirical evaluation shows that the proposed method PCP markedly outperforms the state-of-the-art baseline methods. | reject | There is no author response for this paper. The paper addresses the issue of catastrophic forgetting in continual learning. The authors build upon the idea from [Zheng,2019], namely finding gradient updates in the space perpendicular to the input vectors of the previous tasks resulting in less forgetting, and propose an improvement, namely to use principal component analysis to enable learning new tasks without restricting their solution space as in [Zheng,2019].
While the reviewers acknowledge the importance to study continual learning, they raised several concerns that were viewed by the AC as critical issues: (1) convincing experimental evaluation -- an analysis that clearly shows how and when the proposed method can solve the issue that [Zheng,2019] faces with (task similarity/dissimilarity scenario) would substantially strengthen the evaluation and would allow to assess the scope and contributions of this work; also see R3’s detailed concerns and questions on empirical evaluation, R2’s suggestion to follow the standard protocols, and R1’s suggestion to use PackNet and HAT as baselines for comparison; (2) lack of presentation clarity -- see R2’s concerns how to improve, and R1’s suggestions on how to better position the paper.
A general consensus among reviewers and AC suggests, in its current state the manuscript is not ready for a publication. It needs clarifications, more empirical studies and polish to achieve the desired goal.
| train | [
"r1lB7pU2FS",
"S1gezXh3tB",
"SyxXQhYAYr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The method proposes a method for continual learning. The method is an extension of recent work, called orthogonal weights modification (OWM) [Zheng,2019]. This method aims to find gradient updates which are perpendicular to the input vectors of previous tasks (resulting in less forgetting). However, the authors ar... | [
3, 3, 3] | [4, 4, 3] | ["iclr_2020_SkxlElBYDS", "iclr_2020_SkxlElBYDS", "iclr_2020_SkxlElBYDS"]
iclr_2020_SkglVlSFPS | Uncertainty - sensitive learning and planning with ensembles | We propose a reinforcement learning framework for discrete environments in which an agent optimizes its behavior on two timescales. For the short one, it uses tree search methods to perform tactical decisions. The long strategic level is handled with an ensemble of value functions learned using TD-like backups. Combining these two techniques brings synergies. The planning module performs \textit{what-if} analysis allowing to avoid short-term pitfalls and boost backups of the value function. Notably, our method performs well in environments with sparse rewards where standard TD(1) backups fail. On the other hand, the value functions compensate for inherent short-sightedness of planning. Importantly, we use ensembles to measure the epistemic uncertainty of value functions. This serves two purposes: a) it stabilizes planning, b) it guides exploration.
We evaluate our methods on discrete environments with sparse rewards: the Deep sea chain environment, toy Montezuma's Revenge, and Sokoban. In all the cases, we obtain speed-up of learning and boost to the final performance. | reject | The authors study planning problems with sparse rewards.
They propose a tree search algorithm together with an ensemble of value
functions to guide exploration in this setting.
The value predictions from the ensemble are combined in a risk sensitive way,
therefore biasing the search towards states with high uncertainty in value
prediction.
The approach is applied to several grid-world environments.
The reviewers mostly criticized the presentation of the material, in particular
that the paper provided insufficient details on the proposed
method. Furthermore, the comparison to model-free RL methods was deemed somewhat
lacking, as the proposed algorithm has access to the ground truth model.
The authors improved the manuscript in the rebuttal.
Based on the reviews and my own reading I think that the paper in it's current
form is below acceptance threshold. However, with further improved presentation
and baselines for the experiments, this has potential to be an important contribution. | train | [
"H1lRg1qtcS",
"Hyl3nCQnoH",
"HJxM0T73oS",
"SJlz4TX3jH",
"HkgcGAHAFH",
"HJlP92j69H",
"SJlvczd5ur",
"Byea19BhDS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"The authors propose to combine planning methods like MCTS with an ensemble of value functions to a) estimate the value of leaf nodes of the search tree and b) use the ensemble estimate of uncertainty to guide exploration during MCTS search. \nThe MCTS rollouts are also used as optimization targets for the value fu... | [
3, -1, -1, -1, 6, 1, -1, -1] | [3, -1, -1, -1, 3, 5, -1, -1] | ["iclr_2020_SkglVlSFPS", "HkgcGAHAFH", "H1lRg1qtcS", "HJlP92j69H", "iclr_2020_SkglVlSFPS", "iclr_2020_SkglVlSFPS", "Byea19BhDS", "iclr_2020_SkglVlSFPS"]
iclr_2020_r1l-VeSKwS | SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing | Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples which are manipulated instances targeting to mislead DNNs to make incorrect predictions. Currently, most such adversarial examples try to guarantee “subtle perturbation" by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate “unrestricted adversarial examples". Such semantic based perturbation is more practical compared with the Lp bounded perturbation. In particular, we propose an algorithm SemanticAdv which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various “adversarial" targets. We conduct extensive experiments to show that the semantic based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve high targeted attack success rate against real-world black-box services such as Azure face verification service based on transferability. To further demonstrate the applicability of SemanticAdv beyond face recognition domain, we also generate semantic perturbations on street-view images. Such adversarial examples with controlled semantic manipulation can shed light on further understanding about vulnerabilities of DNNs as well as potential defensive approaches. | reject | I had a little bit of difficulty with my recommendation here, but in the end I don't feel confident in recommending this paper for acceptance, with my concerns largely boiling down to the lack of clear description of the overall motivation.
Standard adversarial attacks are meant to be *imperceptible* changes that do not change the underlying semantics of the input to the human eye. In other words, the goal of the current work, generating "semantically meaningful" perturbations goes against the standard definition of adversarial attacks. This left me with two questions:
1. Under the definition of semantic adversarial attacks, what is to prevent someone from swapping out the current image with an entirely different image? From what I saw in the evaluation measures utilized in the paper, such a method would be judged as having performed a successful attack, and given no constraints there is nothing stopping this.
2. In what situation would such an attack method would be practically useful?
Even the reviewers who reviewed the paper favorably were not able to provide answers to these questions, and I was not able to resolve this from my reading of the paper as well. I do understand that there is a challenge on this by Google. In my opinion, even this contest is somewhat ill-defined, but it also features extensive human evaluation to evaluate the validity of the perturbations, which is not featured in the experimental evaluation here.
While I think this work is potentially interesting, it seems that there are too many open questions that are not resolved yet to recommend acceptance at this time, but I would encourage the authors to tighten up the argumentation/evaluation in this regard and revise the paper to be better accordingly! | train | [
"rJxyQVhiiB",
"rJlh8NU5sB",
"rkg5HFI5sS",
"rkxZTuIcoH",
"Skx6qOI9iH",
"ryx3grU9iS",
"rke1SHLqiB",
"SkxDlf-ttr",
"rkxCcRlYtH",
"H1lWBP4oFH",
"S1e7sXjM5H",
"Hke-61rAFH",
"SJeZgu5juS",
"ByeBgbotOr"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"\nWe thank all reviewers for their valuable comments and suggestions. We appreciate the reviewers recognizing our work interesting (R1, R2, R3), technically sound with concrete experiment results (R2, R3), broadening the study of adversarial examples and encouraging a good deal of follow-up research (R3). Based o... | [
-1, -1, -1, -1, -1, -1, -1, 3, 6, 6, -1, -1, -1, -1] | [-1, -1, -1, -1, -1, -1, -1, 3, 4, 3, -1, -1, -1, -1] | ["iclr_2020_r1l-VeSKwS", "H1lWBP4oFH", "rkxZTuIcoH", "Skx6qOI9iH", "SkxDlf-ttr", "rJlh8NU5sB", "rkxCcRlYtH", "iclr_2020_r1l-VeSKwS", "iclr_2020_r1l-VeSKwS", "iclr_2020_r1l-VeSKwS", "Hke-61rAFH", "iclr_2020_r1l-VeSKwS", "ByeBgbotOr", "iclr_2020_r1l-VeSKwS"]
iclr_2020_HkxZVlHYvH | Ergodic Inference: Accelerate Convergence by Optimisation | Statistical inference methods are fundamentally important in machine learning. Most state-of-the-art inference algorithms are
variants of Markov chain Monte Carlo (MCMC) or variational inference (VI). However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias.
In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation. The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers attractive balance between approximation bias and computational efficiency. We show that our method produces promising results on popular benchmarks when compared to recent hybrid methods of MCMC and VI. | reject | This paper presents a way of adapting an HMC-based posterior inference algorithm. It's based on two approximations: replacing the entropy of the final state with the entropy of the initial state, and differentiating through the MH acceptance step. Experiments show it is able to sample from some toy distributions and achieves slightly higher log-likelihood on binarized MNIST than competing approaches.
The paper is well-written, and the experiments seem pretty reasonable.
I don't find the motivations for the aforementioned approximations very convincing. It's claimed that encouraging entropy of P_0 has a similar effect to encouraging entropy of P_T, but it seems easy to come up with situations where the algorithm could "cheat" by finding a high-entropy P_0 which leads straight downhill to an atypically high-density region. Similarly, there was some reviewer discussion about whether it's OK to differentiate through the indicator function; while we differentiate through nondifferentiable functions all the time, it makes no sense to differentiate through a discontinuous function. (This is a big part of why adaptive HMC is hard.)
This paper has some promising ideas, but overall the reviewers and I don't think this is quite ready.
| train | [
"HJgOa5gnoS",
"SJlVrce2ir",
"Bkx2W9xhir",
"BJxNl7DdOH",
"Bkgsjnv_Yr",
"HJlkXwkAtS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your valuable feedback. We would like to address concerns and questions in the review as following:\n\n1)\"First, for Equation 4, the explanation behind \"replacing\" H(P_{T}) with ELBO w.r.t. P_{0} is confusing...\": \n\nYes, this statement could be confusing, we will rephrase it in a better word if th... | [
-1, -1, -1, 3, 8, 3] | [-1, -1, -1, 3, 5, 4] | ["BJxNl7DdOH", "Bkx2W9xhir", "HJlkXwkAtS", "iclr_2020_HkxZVlHYvH", "iclr_2020_HkxZVlHYvH", "iclr_2020_HkxZVlHYvH"]
iclr_2020_B1xfElrKPr | Enhancing the Transformer with explicit relational encoding for math problem solving | We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relation structure.
Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently-introduced Mathematics Dataset containing 56 categories of free-form math word-problems.
The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of regular attention.
The TP-Transformer's attention maps give better insights into how it is capable of solving the Mathematics Dataset's challenging problems.
Pretrained models and code will be made available after publication. | reject | This paper proposes a change in the attention mechanism of Transformers yielding the so-called "Tensor-Product Transformer" (TP-Transformer). The main idea is to capture filler-role relationships by incorporating a Hadamard product of each value vector representation (after attention) with a relation vector, for every attention head at every layer. The resulting model achieves SOTA on the Mathematics Dataset. Attention maps are shown in the analysis to give insights into how TP-Transformer is capable of solving the Mathematics Dataset's challenging problems.
While the modified attention mechanism is interesting and the analysis is insightful (and improved with the addition of an experiment in NMT after the rebuttal), the reviewers expressed some concerns in the discussion stage:
1. The comparison to baseline is not fair (not to mention the 8.24% claim in conclusion). The proposed approach adds 5 million parameters to a normal transformer (table 1, 5M is a lot!), but in terms of interpolation, it only improves 3% (extrapolation improves 0.5%) at 700k steps. The rebuttal claimed that it is fair as long as the hidden size is comparable, but I don't think that's a fair argument. I suspect that increasing the feedforward hidden size (d_ff) of a normal transformer to match parameters (and add #training steps to match #train steps) might change the conclusion.
2. The new experiment on WMT further convinces me that the theoretical motivation does not hold in practice. Even with the added few million more parameters, it only improved BLEU by 0.05 (we usually consider >0.5 as significant or non-random). This might be because the feedforward and non-linearity can disambiguate as well.
I also found the name TP-Transformer a bit misleading, since what is proposed and tested here is the Hadamard product (i.e. only the diagonal part of the tensor product).
I recommend resubmitting an improved version of this paper with stronger empirical evidence of outperformance of regular Transformers with comparable number of parameters. | train | [
"Hyx0sLGk5S",
"rJezC1lDoH",
"BylEoglDsS",
"r1lwBglviS",
"r1gnfxeDsH",
"rkx9a33nKS",
"HJeOaw0CKr",
"rkxFHLa2dH",
"Syx3-8jBOH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Motivated by the fact that the attention mechanism in transformers is symmetric which might not be able to disambiguate different orders, this work proposes to use a subject vector (in addition to query, key states) for each attention head, and multiply it elementwise with the context vector for each head before m... | [
3, -1, -1, -1, -1, 6, 6, -1, -1] | [5, -1, -1, -1, -1, 5, 1, -1, -1] | ["iclr_2020_B1xfElrKPr", "iclr_2020_B1xfElrKPr", "rkx9a33nKS", "HJeOaw0CKr", "Hyx0sLGk5S", "iclr_2020_B1xfElrKPr", "iclr_2020_B1xfElrKPr", "Syx3-8jBOH", "iclr_2020_B1xfElrKPr"]
iclr_2020_r1xQNlBYPS | Multichannel Generative Language Models | A channel corresponds to a viewpoint or transformation of an underlying meaning. A pair of parallel sentences in English and French express the same underlying meaning but through two separate channels corresponding to their languages. In this work, we present Multichannel Generative Language Models (MGLM), which models the joint distribution over multiple channels, and all its decompositions using a single neural network. MGLM can be trained by feeding it k way parallel-data, bilingual data, or monolingual data across pre-determined channels. MGLM is capable of both conditional generation and unconditional sampling. For conditional generation, the model is given a fully observed channel, and generates the k-1 channels in parallel. In the case of machine translation, this is akin to giving it one source, and the model generates k-1 targets. MGLM can also do partial conditional sampling, where the channels are seeded with prespecified words, and the model is asked to infill the rest. Finally, we can sample from MGLM unconditionally over all k channels. Our experiments on the Multi30K dataset containing English, French, Czech, and German languages suggest that the multitask training with the joint objective leads to improvements in bilingual translations. We provide a quantitative analysis of the quality-diversity trade-offs for different variants of the multichannel model for conditional generation, and a measurement of self-consistency during unconditional generation. We provide qualitative examples for parallel greedy decoding across languages and sampling from the joint distribution of the 4 languages. | reject | This paper presents a multi-view generative model which is applied to multilingual text generation. Although all reviewers find the overall approach is important and some results are interesting, the main concern is about the novelty. At the technical level, the proposed method is the extension of the original two-view KERMIT to multiviews, which I have to say incremental. At a higher level, multi-lingual language generation itself is not a very novel idea, and the contribution of the proposed method should be better positioned comparing to related studies. (for example, Dong et al, ACL 2015 as suggested by R#3). Also, some reviewers pointed out the problems in presentation and unconvincing experimental setup. I support the reviewers’ opinions and would like to recommend rejection this time.
I recommend authors to take in the reviewers’ comments and polish the work for the next chance. | train | [
"rJeGc5YniH",
"r1xTOcthsH",
"Hye_Pqt3oS",
"S1gYvmxpFB",
"H1g0adr3Kr",
"SJlT6ASJcS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the time to review our paper. We address your questions below:\n\n1. Indeed, we agree that Multi30k is not a typical large scale machine translation dataset. However, we chose the Multi30k dataset because it provided us with multiple high quality channels that expresses the same underlying mea... | [
-1, -1, -1, 3, 3, 1] | [-1, -1, -1, 4, 5, 3] | ["H1g0adr3Kr", "S1gYvmxpFB", "SJlT6ASJcS", "iclr_2020_r1xQNlBYPS", "iclr_2020_r1xQNlBYPS", "iclr_2020_r1xQNlBYPS"]
iclr_2020_r1e7NgrYvH | DO-AutoEncoder: Learning and Intervening Bivariate Causal Mechanisms in Images | Some fundamental limitations of deep learning have been exposed such as lacking generalizability and being vunerable to adversarial attack. Instead, researchers realize that causation is much more stable than association relationship in data. In this paper, we propose a new framework called do-calculus AutoEncoder(DO-AE) for deep representation learning that fully capture bivariate causal relationship in the images which allows us to intervene in images generation process. DO-AE consists of two key ingredients: causal relationship mining in images and intervention-enabling deep causal structured representation learning. The goal here is to learn deep representations that correspond to the concepts in the physical world as well as their causal structure. To verify the proposed method, we create a dataset named PHY2D, which contains abstract graphic description in accordance with the laws of physics. Our experiments demonstrate our method is able to correctly identify the bivariate causal relationship between concepts in images and the representation learned enables a do-calculus manipulation to images, which generates artificial images that might possibly break the physical law depending on where we intervene the causal system. | reject | The idea of integrating causality into an auto-encoder is interesting and very timely. While the reviewers find this paper to contain some interesting ideas, the technical contributions and mathematical rigor, scope of the method, and the presentation of results would need to be significantly improved in order for this work to reach the quality bar of ICLR. | val | [
"HyxLcHJnsB",
"BJxLxHJ2sr",
"ByePp4yniH",
"HyeKwG52tH",
"SyxvfpcntB",
"SkgN4O6TFB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank reviewer #3 for his review. \n\n-We will enhance the writing ability to make the narrative of the paper more professional and rigorous.\n\n-There are a number of variables and complicated causal graph in natural images, we want to use the artificial data to conduct a preliminary exploration. Our next work... | [
-1, -1, -1, 1, 1, 6] | [-1, -1, -1, 4, 3, 1] | ["SyxvfpcntB", "HyeKwG52tH", "SkgN4O6TFB", "iclr_2020_r1e7NgrYvH", "iclr_2020_r1e7NgrYvH", "iclr_2020_r1e7NgrYvH"]
iclr_2020_S1lBVgHYvr | Towards Certified Defense for Unrestricted Adversarial Attacks | Certified defenses against adversarial examples are very important in safety-critical applications of machine learning. However, existing certified defense strategies only safeguard against perturbation-based adversarial attacks, where the attacker is only allowed to modify normal data points by adding small perturbations. In this paper, we provide certified defenses under the more general threat model of unrestricted adversarial attacks. We allow the attacker to generate arbitrary inputs to fool the classifier, and assume the attacker knows everything except the classifiers' parameters and the training dataset used to learn it. Lack of knowledge about the classifiers parameters prevents an attacker from generating adversarial examples successfully. Our defense draws inspiration from differential privacy, and is based on intentionally adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters. We prove concrete bounds on the minimum number of queries required for any attacker to generate a successful adversarial attack. For a simple linear classifiers we prove that the bound is asymptotically optimal up to a constant by exhibiting an attack algorithm that achieves this lower bound. We empirically show the success of our defense strategy against strong black box attack algorithms. | reject | This paper proposes a certified defense under the more general threat model beyond additive perturbation. The proposed defense method is based on adding noise to the classifier's outputs to limit the attacker's knowledge about the parameters, which is similar to differential privacy mechanism. The authors proved the query complexity for any attacker to generate a successful adversarial attack. The main objection of this work is (1) the assumption of the attacker and the definition of the query complexity (to recover the optimal classifier rather than generating an adversarial example successfully) is uncommon, (2) the claim is misleading, and (3) the experimental evaluation is not sufficient (only two attacks are evaluated). The authors only provided a brief response to address the reviewers’ comments/questions without submitting a revision. Unfortunately none of the reviewer is in support of this paper even after author response.
| train | [
"r1guL36CFS",
"BJxBz-t7cH",
"BJeYzwt2iB",
"HJemqGSatB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Although this paper's title contains \"certified defense\" and \"unrestricted adversarial attack\", what I believe this paper is doing is analyzing the query complexity of query-based black-box attacks under simple linear models such as logistic regressions (or kernelized versions). The authors considered a binar... | [
1, 3, -1, 3] | [5, 5, -1, 4] | ["iclr_2020_S1lBVgHYvr", "iclr_2020_S1lBVgHYvr", "iclr_2020_S1lBVgHYvr", "iclr_2020_S1lBVgHYvr"]
iclr_2020_r1erNxBtwr | Demystifying Graph Neural Network Via Graph Filter Assessment | Graph Neural Networks (GNNs) have received tremendous attention recently due to their power in handling graph data for different downstream tasks across different application domains. The key of GNN is its graph convolutional filters, and recently various kinds of filters are designed. However, there still lacks in-depth analysis on (1) Whether there exists a best filter that can perform best on all graph data; (2) Which graph properties will influence the optimal choice of graph filter; (3) How to design appropriate filter adaptive to the graph data. In this paper, we focus on addressing the above three questions. We first propose a novel assessment tool to evaluate the effectiveness of graph convolutional filters for a given graph. Using the assessment tool, we find out that there is no single filter as a `silver bullet' that perform the best on all possible graphs. In addition, different graph structure properties will influence the optimal graph convolutional filter's design choice. Based on these findings, we develop Adaptive Filter Graph Neural Network (AFGNN), a simple but powerful model that can adaptively learn task-specific filter. For a given graph, it leverages graph filter assessment as regularization and learns to combine from a set of base filters. Experiments on both synthetic and real-world benchmark datasets demonstrate that our proposed model can indeed learn an appropriate filter and perform well on graph tasks. | reject | The paper investigates graph convolutional filters, and proposes an adaptation of the Fisher score to assess the quality of a convolutional filter. Formally, the defined Graph Filter Discriminant Score assesses how the filter improves the Fisher score attached to a pair of classes (considering the nodes in each class, and their embedding through the filter and the graph structure, as propositional samples), taking into account the class imbalance.
An analysis is conducted on synthetic graphs to assess how the hyper-parameters (order, normalization strategy) of the filter rule the GFD score depending on the graph and class features. As could have been expected, there is no single killer filter.
A finite set of filters, called base filters, being defined by varying the above hyper-parameters, the search space is that of a linear combination of the base filters in each layer. Three losses are considered: with and without graph filter discriminant score, and alternatively optimizing the cross-entropy loss and the GFD; this last option is the best one in the experiments.
As noted by the reviewers and other public comments, the idea of incorporating LDA ideas into GNN is nice and elegant. The reservations of the reviewers are mostly related to the experimental validation: of course getting the best score on each dataset is not expected; but the set of considered problems is too limited and their diversity is limited too (as demonstrated by the very nice Fig. 5).
The area chair thus encourages the authors to pursue this very promising line of research and hopes to see a revised version backed up with more experimental evidence. | train | [
"S1liHkwIiB",
"BkeIi3y3iH",
"BklViGPUsB",
"BJxgrTJ3iH",
"Bkg0BVP8oB",
"rJxjpfv8or",
"rygcY3OCFB",
"H1gL4Eq0tr",
"SklzDh7NcB",
"S1xLBztAuH",
"SJlmRFw2OB",
"BkxQrdIyOH",
"rJgE7xBJur",
"rklZkk_ZuH"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author"
] | [
"Thank you so much for the positive feedback! We really appreciate your support for our paper as well as your constructive suggestions. We have improved our paper based on your advice (we marked the modifications related to your suggestions with blue text, and highlighted the previous version with strikethrough): \... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
1,
3,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
1,
3,
-1,
-1,
-1,
-1,
-1
] | [
"rygcY3OCFB",
"Bkg0BVP8oB",
"H1gL4Eq0tr",
"BkeIi3y3iH",
"SklzDh7NcB",
"BklViGPUsB",
"iclr_2020_r1erNxBtwr",
"iclr_2020_r1erNxBtwr",
"iclr_2020_r1erNxBtwr",
"SJlmRFw2OB",
"iclr_2020_r1erNxBtwr",
"iclr_2020_r1erNxBtwr",
"iclr_2020_r1erNxBtwr",
"rJgE7xBJur"
] |
iclr_2020_rJg8NertPr | Top-down training for neural networks | Vanishing gradients pose a challenge when training deep neural networks, resulting in the top layers (closer to the output) in the network learning faster when compared with lower layers closer to the input. Interpreting the top layers as a classifier and the lower layers a feature extractor, one can hypothesize that unwanted network convergence may occur when the classifier has overfit with respect to the feature extractor. This can lead to the feature extractor being under-trained, possibly failing to learn much about the patterns in the input data. To address this we propose a good classifier hypothesis: given a fixed classifier that partitions the space well, the feature extractor can be further trained to fit that classifier and learn the data patterns well. This alleviates the problem of under-training the feature extractor and enables the network to learn patterns in the data with small partial derivatives. We verify this hypothesis empirically and propose a novel top-down training method. We train all layers jointly, obtaining a good classifier from the top layers, which are then frozen. Following re-initialization, we retrain the bottom layers with respect to the frozen classifier. Applying this approach to a set of speech recognition experiments using the Wall Street Journal and noisy CHiME-4 datasets we observe substantial accuracy gains. When combined with dropout, our method enables connectionist temporal classification (CTC) models to outperform joint CTC-attention models, which have more capacity and flexibility. | reject | The paper proposes a top-down approach to train deep neural networks -- freezing top layers after supervised pre-training, then re-initializing and retraining the bottom layers. As mentioned by all the reviewers, the novelty is on the low side. The paper is purely experimental (no theory), and the experimental section is currently too weak. In particular:
- Experiments on different domains should be performed.
- Different models should be evaluated.
- Ablation experiments should be performed to understand better under which conditions the proposed approach works.
- For speech recognition, WER should be reported - even if it is without a LM - such that one can compare with existing work.
| train | [
"ByeuDjgq2H",
"ryguYP6aKH",
"ByguumWsiH",
"rkxTnMZjor",
"ByeJnb-ojr",
"rylPEZ-jjH",
"Syxr1WFpFB",
"BkxCrGtNqr",
"SJxOk6SN9S",
"BJeeoj3XFB",
"Hkgv1no7FS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"This paper studies the common experimental finding that low level features trained end-to-end in a deep model converge (get \"locked in place\") earlier than higher level features, which may result in problematic undertraining. The focus of the study is not on skip connections, but really on getting adequate train... | [
3,
3,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1
] | [
5,
3,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rJg8NertPr",
"iclr_2020_rJg8NertPr",
"Syxr1WFpFB",
"rylPEZ-jjH",
"rylPEZ-jjH",
"ryguYP6aKH",
"iclr_2020_rJg8NertPr",
"SJxOk6SN9S",
"iclr_2020_rJg8NertPr",
"Hkgv1no7FS",
"iclr_2020_rJg8NertPr"
] |
iclr_2020_H1eLVxrKwS | Removing input features via a generative model to explain their attributions to classifier's decisions | Interpretability methods often measure the contribution of an input feature to an image classifier's decisions by heuristically removing it via e.g. blurring, adding noise, or graying out, which often produce unrealistic, out-of-distribution samples. Instead, we propose to integrate a generative inpainter into three representative attribution map methods as a mechanism for removing input features. Compared to the original counterparts, our methods (1) generate more plausible counterfactual samples under the true data generating process; (2) are more robust to hyperparameter settings; and (3) localize objects more accurately. Our findings were consistent across both ImageNet and Places365 datasets and two different pairs of classifiers and inpainters. | reject | Perturbation-based methods often produce artefacts that make the perturbed samples less realistic. This paper proposes to correct this through the use of an inpainter. The authors claim that this results in more plausible perturbed samples and produces methods that are more robust to hyperparameter settings.
Reviewers found the work intuitive and well-motivated, well-written, and the experiments comprehensive.
However, they also had concerns about minimal novelty and unfair experimental comparisons, as well as inconclusive results. The authors' response has not sufficiently addressed these concerns.
Therefore, we recommend rejection. | train | [
"rkeFo6TjiH",
"H1xQDCssjH",
"r1e3h7zMjS",
"BJxLlbxfor",
"BJgd9leMsH",
"HklU8yTbjH",
"HylJ807-oB",
"rJeKlJ0liS",
"S1e1xRnRFS",
"SkxCsZgccr",
"ByeyNBBAqr"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you so much for taking the time to respond to us!\n\nWe would greatly appreciate it if you could elaborate on your reasons for \"contributions are not enough for ICLR\".\nWe really wish to improve the manuscript further in light of your comments.\n\nre: results\n- If you worry about the insignificant result ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"H1xQDCssjH",
"HylJ807-oB",
"S1e1xRnRFS",
"ByeyNBBAqr",
"ByeyNBBAqr",
"SkxCsZgccr",
"SkxCsZgccr",
"iclr_2020_H1eLVxrKwS",
"iclr_2020_H1eLVxrKwS",
"iclr_2020_H1eLVxrKwS",
"iclr_2020_H1eLVxrKwS"
] |
iclr_2020_BJxDNxSFDH | Few-Shot Regression via Learning Sparsifying Basis Functions | Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a few-shot meta-learning system that focuses exclusively on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions. This enables a few labeled samples to approximate the function. We design a Basis Function Learner network to encode basis functions for a task distribution, and a Weights Generator network to generate the weight vector for a novel task. We show that our model outperforms the current state of the art meta-learning methods in various regression tasks. | reject | All reviewers agree that this paper is not ready for publication. | train | [
"r1x-IvpijH",
"SkemNPaiiH",
"BJl_xPaojB",
"S1eefcvhtH",
"B1eXuCxRYS",
"r1ggNMGw5B",
"HklCW2U3Pr",
"HylDbJr2DS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your review and comments. We will do our best to address your comments/questions below.\n\nWe apologize if our method is not clearly explained enough. Yes indeed as you pointed in Eq (5). Both the weights of the Basis Function Learner, \\theta and Weights Generator \\psi are optimized jointly end-to-... | [
-1,
-1,
-1,
3,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
4,
3,
3,
-1,
-1
] | [
"r1ggNMGw5B",
"S1eefcvhtH",
"B1eXuCxRYS",
"iclr_2020_BJxDNxSFDH",
"iclr_2020_BJxDNxSFDH",
"iclr_2020_BJxDNxSFDH",
"HylDbJr2DS",
"iclr_2020_BJxDNxSFDH"
] |
iclr_2020_S1xO4xHFvB | Atomic Compression Networks | Compressed forms of deep neural networks are essential in deploying large-scale computational models on resource-constrained devices. Contrary to analogous domains where large-scale systems are built as a hierarchical repetition of small-scale units, the current practice in Machine Learning largely relies on models with non-repetitive components. In the spirit of molecular composition with repeating atoms, we advance the state-of-the-art in model compression by proposing Atomic Compression Networks (ACNs), a novel architecture that is constructed by recursive repetition of a small set of neurons. In other words, the same neurons with the same weights are stochastically re-positioned in subsequent layers of the network. Empirical evidence suggests that ACNs achieve compression rates of up to three orders of magnitude compared to fine-tuned fully-connected neural networks (88× to 1116× reduction) with only a fractional deterioration of classification accuracy (0.15% to 5.33%). Moreover, our method can yield sub-linear model complexities and permits learning deep ACNs with fewer parameters than a logistic regression, with no decline in classification accuracy. | reject | This paper proposes a very general idea called Atomic Compression Networks (ACNs) for constructing neural networks. The idea looks simple and effective. However, the reason why it works is not well explained. The experiments are not sufficient to convince the reviewers. | train | [
"rke2IBwPtS",
"H1eZ4aN2jS",
"ryguvqJjiB",
"Bklvp51joB",
"SkxlZokior",
"ByxlTSdttS",
"SJgIDBwJ9B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper explores the use of replicating neurons across and within layers to compress fully connected neural networks. The idea is simple, and is evaluated on a number of datasets and compared with fully connected, single layer, and several compression schemes. \n\nStrengths: a lot of nice experiments with clear... | [
6,
-1,
-1,
-1,
-1,
1,
1
] | [
1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_S1xO4xHFvB",
"iclr_2020_S1xO4xHFvB",
"SJgIDBwJ9B",
"ByxlTSdttS",
"rke2IBwPtS",
"iclr_2020_S1xO4xHFvB",
"iclr_2020_S1xO4xHFvB"
] |
iclr_2020_HJluEeHKwH | The Differentiable Cross-Entropy Method | We study the Cross-Entropy Method (CEM) for the non-convex optimization of a continuous and parameterized objective function and introduce a differentiable variant (DCEM) that enables us to differentiate the output of CEM with respect to the objective function's parameters. In the machine learning setting this brings CEM inside of the end-to-end learning pipeline in cases this has otherwise been impossible. We show applications in a synthetic energy-based structured prediction task and in non-convex continuous control. In the control setting we show on the simulated cheetah and walker tasks that we can embed their optimal action sequences with DCEM and then use policy optimization to fine-tune components of the controller as a step towards combining model-based and model-free RL. | reject | This paper proposes a differentiable version of CEM, allowing CEM to be used as an operator within end-to-end training settings. The reviewers all like the idea -- it is simple and should be of interest to the community. Unfortunately, the reviewers also are in consensus that the experiments are not sufficiently convincing. We encourage the authors to expand the empirical analysis, based on the reviewer's specific comments, and resubmit the paper to a future venue. | train | [
"B1xaWlkCFS",
"S1xI86l9oB",
"HJlhapg5sS",
"HylCoagqjH",
"rkgcuTgqjS",
"SJeQE6lqir",
"B1ea-6e5jS",
"rJgo2xHotB",
"HyxhrAGRtB",
"rJeB1qzxdB",
"Sylm1GWeuS",
"SkxansyeOB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"After reading authors' response, I am sticking to my original decision. Authors addressed most of the issues I raised and I am happy with their response; however, I still believe the paper should not be accepted since it is not adding enough value. The problem is important and impactful. However, the algorithmic i... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
1,
-1,
-1,
-1
] | [
"iclr_2020_HJluEeHKwH",
"HyxhrAGRtB",
"HylCoagqjH",
"rJgo2xHotB",
"B1xaWlkCFS",
"B1ea-6e5jS",
"iclr_2020_HJluEeHKwH",
"iclr_2020_HJluEeHKwH",
"iclr_2020_HJluEeHKwH",
"Sylm1GWeuS",
"SkxansyeOB",
"iclr_2020_HJluEeHKwH"
] |
iclr_2020_BygY4grYDr | The divergences minimized by non-saturating GAN training | Interpreting generative adversarial network (GAN) training as approximate divergence minimization has been theoretically insightful, has spurred discussion, and has led to theoretically and practically interesting extensions such as f-GANs and Wasserstein GANs. For both classic GANs and f-GANs, there is an original variant of training and a "non-saturating" variant which uses an alternative form of generator gradient. The original variant is theoretically easier to study, but for GANs the alternative variant performs better in practice. The non-saturating scheme is often regarded as a simple modification to deal with optimization issues, but we show that in fact the non-saturating scheme for GANs is effectively optimizing a reverse KL-like f-divergence. We also develop a number of theoretical tools to help compare and classify f-divergences. We hope these results may help to clarify some of the theoretical discussion surrounding the divergence minimization view of GAN training. | reject | As the reviewers point out, the core contribution is potentially important, but the current execution of the paper makes it difficult to gauge this importance. In light of this, this paper does not seem ready for appearance in a conference like ICLR. | test | [
"SkeyqBiynS",
"BJezD3zRtH",
"Bkx1JiIFjB",
"r1lFVP8tiB",
"Syg7S7Utjr",
"SJgXA_V2FB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors study the ‘non-saturating’ variant of training for GANs and show that it is equivalent to a regular training procedure minimizing a “softened” reverse KL divergence as opposed to Jensen-Shannon divergence. They show a connection between the two training procedures for more general f-divergence losses. ... | [
3,
3,
-1,
-1,
-1,
3
] | [
3,
4,
-1,
-1,
-1,
3
] | [
"iclr_2020_BygY4grYDr",
"iclr_2020_BygY4grYDr",
"SJgXA_V2FB",
"BJezD3zRtH",
"BJezD3zRtH",
"iclr_2020_BygY4grYDr"
] |
iclr_2020_SJeFNlHtPS | Hidden incentives for self-induced distributional shift | Decisions made by machine learning systems have increasing influence on the world. Yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users' perceptions and preferences, or even drive them away, causing a shift in the distribution of users. Generally speaking, it is possible for an algorithm to change the distribution of its own inputs. We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon. A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline. Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives. We design a simple environment as a "unit test" for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS. We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy. | reject | The paper shows how meta-learning contains hidden incentives for distributional shift and how a technique called context swapping can help deal with this. Overall, distributional shift is an important problem, but the contributions made by this paper to deal with this, such as the introduction of unit-tests and context-swapping, is not sufficiently clear. Therefore, my recommendation is a reject. | val | [
"rJxhKoSaFB",
"SkgTmWnooB",
"HkgaesWFsr",
"SJx3qIZtoB",
"HJg5_UWFsB",
"rkxvzVbKoS",
"H1e8RBdx9r",
"S1xlW-zf9S"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors study the phenomena of self-introduced distributional shift. They define the term along with the term hidden incentives for distributional shift. The latter describes factors that motivate the learner to change the distribution in order to achieve a higher performance. The authors study both phenomena ... | [
6,
-1,
-1,
-1,
-1,
-1,
1,
1
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_SJeFNlHtPS",
"SJx3qIZtoB",
"H1e8RBdx9r",
"HJg5_UWFsB",
"S1xlW-zf9S",
"rJxhKoSaFB",
"iclr_2020_SJeFNlHtPS",
"iclr_2020_SJeFNlHtPS"
] |
iclr_2020_B1ecVlrtDr | Symmetric-APL Activations: Training Insights and Robustness to Adversarial Attacks | Deep neural networks with learnable activation functions have shown superior performance over deep neural networks with fixed activation functions for many different problems. The adaptability of learnable activation functions adds expressive power to the model, which results in better performance. Here, we propose a new learnable activation function based on Adaptive Piecewise Linear units (APL), which 1) gives equal expressive power to both the positive and negative halves of the input space and 2) is able to approximate any zero-centered continuous non-linearity in a closed interval. We investigate how the shape of the Symmetric-APL function changes during training and perform ablation studies to gain insight into the reason behind these changes. We hypothesize that these activation functions go through two distinct stages: 1) adding gradient information and 2) adding expressive power. Finally, we show that the use of Symmetric-APL activations can significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and open-box adversarial attacks show that commonly-used architectures, namely Lenet, Network-in-Network, and ResNet-18, can be up to 51% more resistant to adversarial fooling by only using the proposed activation functions instead of ReLUs. | reject | This work presents a learnable activation function based on adaptive piecewise linear (APL) units. Specifically, it extends APL to the symmetric form. The authors argue that S-APL activations can lead to networks that are more robust to adversarial attacks. They present an empirical evaluation to prove the latter claim. However, the significance of these empirical results was not clear due to the non-standard threat models used in the black-box setting and the weak attacks used in the open-box setting. The authors revised the submission and addressed some of the concerns the reviewers had. This effort was greatly appreciated by the reviewers. However, the issues related to the significance of the robustness results remained unclear even after the revision. In particular, as pointed out by R4, some of the revisions seem to be incomplete (Table 4). Also, the concern R4 had initially raised about non-standard black-box attacks was not addressed. Finally, some experimental details are still missing. While the revision is indeed a great step, the adversarial experiments would need to be clearer and to use a more standard setup to be convincing.
| train | [
"Hye1Aka8qB",
"ryeYA91hjB",
"HygGNcyhjS",
"BJliPOk2sS",
"rklJl_k3oB",
"SyxadDtqFS",
"H1xhh7JecS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a learnable piece-wise linear activation unit whose hinges are placed symmetrically. It gives a proof on the universality of the proposed unit on a certain condition. The superiority of the method is empirically shown. The change of the activation during training is analyzed and insight on the ... | [
6,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_B1ecVlrtDr",
"SyxadDtqFS",
"H1xhh7JecS",
"Hye1Aka8qB",
"iclr_2020_B1ecVlrtDr",
"iclr_2020_B1ecVlrtDr",
"iclr_2020_B1ecVlrtDr"
] |
iclr_2020_BkgqExrYvS | PopSGD: Decentralized Stochastic Gradient Descent in the Population Model | The population model is a standard way to represent large-scale decentralized distributed systems, in which agents with limited computational power interact in randomly chosen pairs, in order to collectively solve global computational tasks. In contrast with synchronous gossip models, nodes are anonymous, lack a common notion of time, and have no control over their scheduling. In this paper, we examine whether large-scale distributed optimization can be performed in this extremely restrictive setting.
We introduce and analyze a natural decentralized variant of stochastic gradient descent (SGD), called PopSGD, in which every node maintains a local parameter, and is able to compute stochastic gradients with respect to this parameter. Every pair-wise node interaction performs a stochastic gradient step at each agent, followed by averaging of the two models. We prove that, under standard assumptions, SGD can converge even in this extremely loose, decentralized setting, for both convex and non-convex objectives. Moreover, surprisingly, in the former case, the algorithm can achieve linear speedup in the number of nodes n. Our analysis leverages a new technical connection between decentralized SGD and randomized load balancing, which enables us to tightly bound the concentration of node parameters. We validate our analysis through experiments, showing that PopSGD can achieve convergence and speedup for large-scale distributed learning tasks in a supercomputing environment. | reject | This manuscript studies scaling distributed stochastic gradient descent to a large number of nodes. Specifically, it proposes to use algorithms based on population analysis (relevant for large numbers of distributed nodes) to implement distributed training of deep neural networks.
In reviews and discussions, the reviewers and AC note missing or inadequate comparisons to previous work on asynchronous SGD, and possible lack of novelty compared to previous work. The reviewers also mentioned the incomplete empirical comparison to closely related work. On the writing, reviewers mentioned that the conciseness of the manuscript could be improved.
| train | [
"rklkIa9njr",
"Syxg2t52sH",
"r1xiS8nisB",
"SJla4SGIsB",
"ByloJUGLiH",
"SyesdPGIir",
"BJxfsLM8or",
"SklpaUAaKH",
"BkeHxzS0Kr"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We again thank the reviewers for their feedback. We summarize our revision, and our claimed contributions.\n\n- We have significantly re-written the introduction and related work, specifically for clarity with respect to the work of (Lian et al.) and (Assran et al.) We have added Table 1 (Appendix), which summariz... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_BkgqExrYvS",
"r1xiS8nisB",
"BJxfsLM8or",
"iclr_2020_BkgqExrYvS",
"iclr_2020_BkgqExrYvS",
"SklpaUAaKH",
"BkeHxzS0Kr",
"iclr_2020_BkgqExrYvS",
"iclr_2020_BkgqExrYvS"
] |
iclr_2020_B1x3EgHtwB | ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks | In this paper, we introduce a novel approach to training a given compact network. To this end, we build upon over-parameterization, which typically improves both optimization and generalization in neural network training, while being unnecessary at inference time. We propose to expand each linear layer of the compact network into multiple linear layers, without adding any nonlinearity. As such, the resulting expanded network can benefit from over-parameterization during training but can be compressed back to the compact one algebraically at inference. As evidenced by our experiments, this consistently outperforms training the compact network from scratch and knowledge distillation using a teacher. In this context, we introduce several expansion strategies, together with an initialization scheme, and demonstrate the benefits of our ExpandNets on several tasks, including image classification, object detection, and semantic segmentation. | reject | The paper develops linear over-parameterization methods to improve training of small neural network models. This is compared to training from scratch and other knowledge distillation methods.
Reviewer 1 found the paper to be clear with good analysis, and raised concerns on generality and extensiveness of experimental work. Reviewer 2 raised concerns about the correctness of the approach and laid out several other possibilities. The authors conducted several other experiments and responded to all the feedback from the reviewers, although there was no final consensus on the scores.
The review process has made this a better paper and it is of interest to the community. The paper demonstrates all the features of a good paper, but due to a large number of strong papers, was not accepted at this time. | test | [
"Hkef7hN4qH",
"Syl-HS8sjS",
"rkgS998soH",
"SyxeqmIjiB",
"r1lZzyIoir",
"H1lptzIjjB",
"SyxThYwDYS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes linear over-parameterization methods to improve training of small neural network models. The idea is simple -- each linear transformation in a network is overparameterized by a series of linear transformation which is algebraically equivalent to the original linear transformation. Number of exp... | [
6,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_B1x3EgHtwB",
"SyxThYwDYS",
"SyxThYwDYS",
"Hkef7hN4qH",
"iclr_2020_B1x3EgHtwB",
"Hkef7hN4qH",
"iclr_2020_B1x3EgHtwB"
] |
iclr_2020_Hkx3ElHYwS | GQ-Net: Training Quantization-Friendly Deep Networks | Network quantization is a model compression and acceleration technique that has become essential to neural network deployment. Most quantization methods perform fine-tuning on a pretrained network, but this sometimes results in a large loss in accuracy compared to the original network. We introduce a new technique to train quantization-friendly networks, which can be directly converted to an accurate quantized network without the need for additional fine-tuning. Our technique allows quantizing the weights and activations of all network layers down to 4 bits, achieving high efficiency and facilitating deployment in practical settings. Compared to other fully quantized networks operating at 4 bits, we show substantial improvements in accuracy, for example 66.68% top-1 accuracy on ImageNet using ResNet-18, compared to the previous state-of-the-art accuracy of 61.52% (Louizos et al., 2019) and a full precision reference accuracy of 69.76%. We performed a thorough set of experiments to test the efficacy of our method and also conducted ablation studies on different aspects of the method and techniques to improve training stability and accuracy. Our codebase and trained models are available on GitHub. | reject | The paper proposes a new quantization-friendly network training algorithm called GQ (or DQ) net. The paper is well-written, and the proposed idea is interesting. Empirical results are also good. However, the major performance improvement comes from the combination of different incremental improvements. Some of these additional steps do seem orthogonal to the proposed idea. Also, it is not clear how robust the method is to the various hyperparameters / schedules. For example, it seems that some of the suggested training options conflict with each other. More in-depth discussion and analysis on the setting of the regularization parameter and the schedule for the loss term blending parameters would be useful. | val | [
"Skl9Y2InoS",
"BJgS_zI3sr",
"Bygh9Wl2jB",
"rklRvKlniH",
"BkejVFl2oB",
"H1lrw8ghoH",
"rkx1gIx2ir",
"Hkgpu1e2iB",
"SkeVhAmjYH",
"Hye6gdYCYr",
"HJlVa_8WcB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the detailed reply of the authors. Overall, the new results make me feel more positive about this work, but I decided to keep my rating the same. This is primarily for two reasons; While I can understand that some of the additional steps can be understood as part of the main method (eg detaching the g... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"SkeVhAmjYH",
"Hye6gdYCYr",
"SkeVhAmjYH",
"Hye6gdYCYr",
"Hye6gdYCYr",
"HJlVa_8WcB",
"HJlVa_8WcB",
"SkeVhAmjYH",
"iclr_2020_Hkx3ElHYwS",
"iclr_2020_Hkx3ElHYwS",
"iclr_2020_Hkx3ElHYwS"
] |
iclr_2020_BkxREeHKPS | On the Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks | Variational Bayesian Inference is a popular methodology for approximating posterior distributions in Bayesian neural networks. Recent work developing this class of methods has explored ever richer parameterizations of the approximate posterior in the hope of improving performance. In contrast, here we share a curious experimental finding that suggests instead restricting the variational distribution to a more compact parameterization. For a variety of deep Bayesian neural networks trained using Gaussian mean-field variational inference, we find that the posterior standard deviations consistently exhibits strong low-rank structure after convergence. This means that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance. What's more, we find that such factorized parameterizations are easier to train since they improve the signal-to-noise ratio of stochastic gradient estimates of the variational lower bound, resulting in faster convergence. | reject | This paper proposes to reduce the number of variational parameters for mean-field VI. A low-rank approximation is used for this purpose. Results on a few small problems are reported.
As R3 has pointed out, the main reason to reject this paper is the lack of comparison of uncertainty estimates. I also agree that recent Adam-like optimizers do use preconditioning that can be interpreted as variances, so it is not clear why reducing this will give better results.
I agree with R2's comments about the missing "point estimate" baseline. Also, the reason why ranks 1, 2, and 3 give better accuracies is unclear, and I think the reasons provided by the authors are speculative.
I do believe that reducing the parameterization is a reasonable idea and could be useful. But it is not clear if the proposal of this paper is the right one. Due to this reason, I recommend to reject this paper. However, I highly encourage the authors to improve their paper taking these points into account. | train | [
"rylP-SK3jr",
"r1lC94K3iS",
"SJlwMNK2iB",
"B1xP-y9tFr",
"HJxXIP82YB",
"S1xrRMiJcr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"[R3.1]\nWhile the observation is somewhat interesting, currently it is only verified in a narrow range of network architectures, and it's unclear if the observation and the proposed method will still be useful on network architectures used in real-world applications. As such, I believe this work would be more suit... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"B1xP-y9tFr",
"HJxXIP82YB",
"S1xrRMiJcr",
"iclr_2020_BkxREeHKPS",
"iclr_2020_BkxREeHKPS",
"iclr_2020_BkxREeHKPS"
] |
iclr_2020_HJeANgBYwr | Towards Scalable Imitation Learning for Multi-Agent Systems with Graph Neural Networks | We propose an implementation of GNN that predicts and imitates the motion behaviors from observed swarm trajectory data. The network's ability to capture interaction dynamics in swarms is demonstrated through transfer learning. We finally discuss the inherent availability and challenges in the scalability of GNN, and propose a method to improve it with layer-wise tuning and mixing of data enabled by padding. | reject | This paper proposes a graph neural network based approach for scaling up imitation learning (e.g., of swarm behaviors). Reviewers noted key limitations in the discussion of related work, size of the proposed contribution in terms of model novelty, and evaluation / comparison to strong baselines. Reviewers appreciated the author replies, which resolved some concerns, but agree that the paper is overall not ready for publication. | val | [
"rklr5kJniH",
"SJgFgy13oB",
"Byg-cACsiS",
"BJlkO_dstB",
"B1x2QZWRFB",
"HJeeFJLRYB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We agree the introduction and explanation of our model are partly based on the standard practices of GNN. However, we point out that the emphasis and significance of our paper is the scalability of GNN based networks, and the introduction of our model is to show a working model. We dedicated half our paper to the ... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"BJlkO_dstB",
"B1x2QZWRFB",
"HJeeFJLRYB",
"iclr_2020_HJeANgBYwr",
"iclr_2020_HJeANgBYwr",
"iclr_2020_HJeANgBYwr"
] |
iclr_2020_HJgySxSKvB | Deep Relational Factorization Machines | Factorization Machines (FMs) is an important supervised learning approach due to its unique ability to capture feature interactions when dealing with high-dimensional sparse data. However, FMs assume each sample is independently observed and hence are incapable of exploiting the interactions among samples. On the contrary, Graph Neural Networks (GNNs) have become increasingly popular due to their strength at capturing the dependencies among samples. But unfortunately, they cannot efficiently handle high-dimensional sparse data, which is quite common in modern machine learning tasks. In this work, to leverage their complementary advantages and yet overcome their issues, we propose a novel approach, namely Deep Relational Factorization Machines, which can capture both the feature interaction and the sample interaction. In particular, we disclose the relationship between the feature interaction and the graph, which opens a brand new avenue to deal with high-dimensional features. Finally, we demonstrate the effectiveness of the proposed approach with experiments on several real-world datasets. | reject | This paper proposes to combine FMs and GNNs. All reviewers voted reject, as the paper lacks experiments (e.g. ablation studies) and novelty. The writing can be significantly improved - some information is missing. The authors did not respond to the reviewers' questions and concerns. For this reason, I recommend rejection.
| train | [
"r1x93hOWtH",
"r1eyeua6tB",
"Skev-GNHqH",
"HJleRvQaFH",
"B1gblq9cur"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"In this paper, the authors propose generalize the FM to consider both interaction between features and interaction between samples. For the interaction between features, the authors propose to use graph convolution to capture high-order feature interactions. Moreover, the authors construct a graph on the instances... | [
1,
1,
3,
-1,
-1
] | [
3,
4,
1,
-1,
-1
] | [
"iclr_2020_HJgySxSKvB",
"iclr_2020_HJgySxSKvB",
"iclr_2020_HJgySxSKvB",
"B1gblq9cur",
"iclr_2020_HJgySxSKvB"
] |
iclr_2020_HkxJHlrFvr | Angular Visual Hardness | The mechanisms behind human visual systems and convolutional neural networks (CNNs) are vastly different. Hence, it is expected that they have different notions of ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significant. We term this function as angular visual hardness (AVH) and in a CNN, it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-art models tend to improve on classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training tasks.
 | reject | This paper proposes a new measure for CNNs and shows its correlation to human visual hardness. The topic of this paper is interesting, and it sparked many interesting discussions among reviewers. After reviewing each other's comments, the reviewers decided to recommend rejection due to a few severe concerns that are yet to be addressed. In particular, Reviewers 1 and 2 both raised concerns about potentially misleading and perhaps confusing statements around the correlation between HSF and accuracy. A concrete step was suggested by a reviewer - reporting the correlation between accuracy and HSF. A few other points were raised around its conflict/agreement with prior work [RRSS19], or self-contradictory statements as pointed out by Reviewers 1 and 2 (see Reviewer 2's comment). We hope the authors will use this helpful feedback to improve the paper for a future submission.
| train | [
"B1l6DMEhsS",
"HylNHnQ2sH",
"SJxZePXFjS",
"S1xahZ6uoS",
"r1geIMa_sB",
"H1x6Jm6_or",
"H1lWTzTdsH",
"HJeVWRoCKH",
"SkgCP9CRFH",
"BJeifxHT5r"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely thank all the reviewers for providing constructive suggestions and helping us improve the paper! We list the major changes we have done according to the recommendations in the following:\n\n1. We made a revision to our introduction, especially the first and second paragraphs to better motivate our wor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2020_HkxJHlrFvr",
"BJeifxHT5r",
"SkgCP9CRFH",
"HJeVWRoCKH",
"S1xahZ6uoS",
"H1lWTzTdsH",
"r1geIMa_sB",
"iclr_2020_HkxJHlrFvr",
"iclr_2020_HkxJHlrFvr",
"iclr_2020_HkxJHlrFvr"
] |
iclr_2020_rkxgHerKvH | DEEP GRAPH SPECTRAL EVOLUTION NETWORKS FOR GRAPH TOPOLOGICAL TRANSFORMATION | Characterizing the underlying mechanism of graph topological evolution from a source graph to a target graph has attracted fast increasing attention in the deep graph learning domain. However, there is a lack of expressive and efficient models that can handle global and local evolution patterns between source and target graphs. On the other hand, graph topological evolution has been investigated in the graph signal processing domain historically, but it involves intensive labor to manually determine suitable prescribed spectral models and prohibitive difficulty to fit their potential combinations and compositions. To address these challenges, this paper proposes the deep Graph Spectral Evolution Network (GSEN) for modeling the graph topology evolution problem by the composition of newly-developed generalized graph kernels. GSEN can effectively fit a wide range of existing graph kernels and their combinations and compositions, with theoretical guarantees and experimental verification. GSEN has outstanding efficiency in terms of time complexity (O(n)) and parameter complexity (O(1)), where n is the number of nodes of the graph. Extensive experiments on multiple synthetic and real-world datasets have demonstrated outstanding performance. | reject | The reviewers kept their scores after the author response period, pointing to continued concerns with the methodology, the need for more exposition in parts, and the inability to verify the theoretical results. As such, my recommendation is to improve the clarity around the methodological and theoretical contributions in a revision. | train | [
"rkete7wIjr",
"rJxus2Q-jH",
"r1gYsF--iB",
"SygNO0FTtS",
"r1g_0Jxl5S"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer #2, following Item 4 in your comments, we have added more analysis on the performance evaluation of our method and comparison methods on all four real-world datasets. This new evaluation is based on the R2 score, which is a widely-used metric for prediction performance evaluation. Please see them in ... | [
-1,
-1,
-1,
6,
3
] | [
-1,
-1,
-1,
3,
4
] | [
"rJxus2Q-jH",
"r1g_0Jxl5S",
"SygNO0FTtS",
"iclr_2020_rkxgHerKvH",
"iclr_2020_rkxgHerKvH"
] |
iclr_2020_HyxgBerKwB | GraphQA: Protein Model Quality Assessment using Graph Convolutional Network | Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.
Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible.
Alternatively, protein folding can be modeled using computational methods, which however are not guaranteed to always produce optimal results.
GraphQA is a graph-based method to estimate the quality of protein models, that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance and computational efficiency.
In this work, we demonstrate significant improvements of the state-of-the-art for both hand-engineered and representation-learning approaches, as well as carefully evaluating the individual contributions of GraphQA. | reject | This paper introduces an approach for estimating the quality of protein models. The proposed method consists in using graph convolutional networks (GCNs) to learn a representation of protein models and predict both a local and a global quality score. Experiments show that the proposed approach performs better than methods based on 1D and 3D CNNs.
Overall, this is a borderline paper. The improvement over the state of the art for this specific application is noticeable. However, a major drawback is the lack of methodological novelty, the proposed solution being a direct application of GCNs. It does not bring new insights into representation learning. The contribution would therefore be of interest to a limited audience, in light of which I recommend rejecting this paper.
"rJx90K-qor",
"SkgUfoWcir",
"Byxl9tb9jS",
"SkgPfYWcsS",
"ByelYd-qiH",
"SJxKk_b9or",
"rJeIpEYCKr",
"SyeIDP2AKr",
"HJli5s0ycr",
"B1gxZdZFdH"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"\n_______________________________________________________________________\n6) “QA application is a bit of a niche problem in bioinformatics.”\n\nWe try to address this concern, shared with reviewer 1, from four different viewpoints.\n\na) Most applications that are now commonly used as a benchmark in machine learn... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
3,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
-1
] | [
"Byxl9tb9jS",
"rJeIpEYCKr",
"SkgPfYWcsS",
"SyeIDP2AKr",
"HJli5s0ycr",
"iclr_2020_HyxgBerKwB",
"iclr_2020_HyxgBerKwB",
"iclr_2020_HyxgBerKwB",
"iclr_2020_HyxgBerKwB",
"iclr_2020_HyxgBerKwB"
] |
iclr_2020_SJeWHlSYDB | SPREAD DIVERGENCE | For distributions p and q with different supports, the divergence D(p‖q) may not exist. We define a spread divergence D̃(p‖q) on modified p and q and describe sufficient conditions for the existence of such a divergence. We demonstrate how to maximize the discriminatory power of a given divergence by parameterizing and learning the spread. We also give examples of using a spread divergence to train and improve implicit generative models, including linear models (Independent Components Analysis) and non-linear models (Deep Generative Networks). | reject | This paper studies spread divergence between distributions, which may exist in settings where the divergence between said distributions does not. The reviewers feel this work does not have sufficient technical novelty to merit acceptance at this time. | val | [
"rkeamfp9iB",
"H1xiH26qiB",
"BJlMtz3csB",
"SJlHMmjiKH",
"B1ex_oAT9H",
"BJe9q6g09r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for valuable reviews. We believe that we could fully address the concerns with the following arguments.\n\n1. \"The issues about JS:\nIn the case of two distributions that have disjoint support, the JS divergence is always a finite constant. We think this is ill-defined since it is not a vali... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
5,
5,
4
] | [
"SJlHMmjiKH",
"BJe9q6g09r",
"B1ex_oAT9H",
"iclr_2020_SJeWHlSYDB",
"iclr_2020_SJeWHlSYDB",
"iclr_2020_SJeWHlSYDB"
] |
iclr_2020_BJgZBxBYPB | Learning Underlying Physical Properties From Observations For Trajectory Prediction | In this work we present an approach that combines deep learning with the laws of Newtonian physics for accurate trajectory predictions in physical games. Our model learns to estimate the physical properties and forces that generated the given observations, learns the relationships between available player actions and the estimated physical properties, and uses these extracted forces for predictions. We show the advantages of using physical laws together with deep learning by evaluating it against two baseline models that automatically discover features from the data without such knowledge. We evaluate our model's abilities to extract physical properties and to generalize to unseen trajectories in two games with a shooting mechanism. We also evaluate our model's capabilities to transfer learned knowledge from a 2D game for predictions in a 3D game with similar physics. We show that by using physical laws together with deep learning we achieve better human-interpretability of the learned physical properties, transfer of knowledge to a game with similar physics, and very accurate predictions for previously unseen data. | reject | This paper aims to estimate the parameters of a projectile physical equation from a small number of trajectory observations in two computer games. The authors demonstrate that their method works, and that the learnt model generalises from one game to another. However, the reviewers had concerns about the simplicity of the tasks, the longer-term value of the proposed method to the research community, and the writing of the paper. During the discussion period, the authors were able to address some of these questions; however, many other points were left unanswered, and the authors did not modify the paper to reflect the reviewers' feedback. Hence, in its current state this paper appears more suitable for a workshop than a conference, and I recommend rejection. | train | [
"HyeH-orssB",
"Sylmf3musH",
"H1gA0k4usB",
"BJeAPiQOoS",
"ryxaWOnSKH",
"HkxD3tnBKH",
"S1ecWacacH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for this reponse, but it addresses only a single one of my questions, and it is not really an answer to my concerns. No other question was addressed.\n\nI will keep my rating.",
"We would like to thank the reviewer for their remarks and comments. We would like to provide an answer to the questions rais... | [
-1,
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"H1gA0k4usB",
"HkxD3tnBKH",
"ryxaWOnSKH",
"S1ecWacacH",
"iclr_2020_BJgZBxBYPB",
"iclr_2020_BJgZBxBYPB",
"iclr_2020_BJgZBxBYPB"
] |
iclr_2020_SkxzSgStPS | Exploration via Flow-Based Intrinsic Rewards | Exploration bonuses derived from the novelty of observations in an environment have become a popular approach to motivate exploration for reinforcement learning (RL) agents in the past few years. Recent methods such as curiosity-driven exploration usually estimate the novelty of new observations by the prediction errors of their system dynamics models. In this paper, we introduce the concept of optical flow estimation from the field of computer vision to the RL domain and utilize the errors from optical flow estimation to evaluate the novelty of new observations. We introduce a flow-based intrinsic curiosity module (FICM) capable of learning the motion features and understanding the observations in a more comprehensive and efficient fashion. We evaluate our method and compare it with a number of baselines on several benchmark environments, including Atari games, Super Mario Bros., and ViZDoom. Our results show that the proposed method is superior to the baselines in certain environments, especially for those featuring sophisticated moving patterns or with high-dimensional observation spaces. | reject | This paper proposes a method for improving exploration by implementing intrinsic rewards based on optical flow prediction error. The approach was evaluated on several Atari games, Super Mario, and VizDoom.
There are several strengths to this work, including the fact that it comes with open source code, and several reviewers agree it’s an interesting approach. R1 thought it was well-written and quite easy to follow. I also commend the authors for being so responsive with comments and for adding the new experiments that were asked for.
The main issue that reviewers pointed out, and which I am also concerned about, is how these particular games were chosen. R3 points out that these 5 Atari games are not known for being hard exploration games. Authors did conduct further experiments on 6 Atari games suggested by the reviewer, but the results didn’t show significant improvement over baselines.
I appreciate the authors’ argument that every method has “its niche”, but the environments chosen must still be properly motivated. I would have preferred to see results on all Atari games, along with detailed and quantitative analysis into why FICM fails on specific tasks. For instance, they state in the rebuttal that “The selection criteria of our environments is determined by the relevance of motions of the foreground and background components (including the controllable agent and the uncontrollable objects) to the performance (i.e., obtainable scores) of the agent.” But it doesn’t seem like this was assessed in any quantitative way. Without this understanding, it’d be difficult for an outsider to know which tasks are appropriate to use with this approach. I urge the authors to focus on expanding and quantifying the work they depict in Figure 8, which, although it begins to illuminate why FICM works for some games and not others, is still only a qualitative snapshot of 2 games. I still think this is a very interesting approach and look forward to future versions of this paper. | train | [
"SJe_-TWosH",
"Skg5I2SDjB",
"rygAnxNSoH",
"H1lI1PwNoH",
"rkx6Fww4iH",
"B1eO3uGrjr",
"SyefqHL4oS",
"BJgr1UU4or",
"ryxnSPL4sH",
"SkleInpWiS",
"HJgBCOT4Kr",
"S1xxeBsnFB",
"SJxmXHNhtS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThe authors appreciate the perspective shared by the reviewer. To address the second concern from the reviewer, we performed further experiments on the suggested six established hard exploration environments with original sparse reward settings, as in [1]. We compared our proposed method with forward dynamics (R... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"Skg5I2SDjB",
"SyefqHL4oS",
"H1lI1PwNoH",
"SJxmXHNhtS",
"SJxmXHNhtS",
"SJxmXHNhtS",
"S1xxeBsnFB",
"S1xxeBsnFB",
"S1xxeBsnFB",
"HJgBCOT4Kr",
"iclr_2020_SkxzSgStPS",
"iclr_2020_SkxzSgStPS",
"iclr_2020_SkxzSgStPS"
] |
iclr_2020_BygMreSYPB | Learning Latent Dynamics for Partially-Observed Chaotic Systems | This paper addresses the data-driven identification of latent representations of partially-observed dynamical systems, i.e. dynamical systems whose some components are never observed, with an emphasis on forecasting applications and long-term asymptotic patterns. Whereas state-of-the-art data-driven approaches rely on delay embeddings and linear decompositions of the underlying operators, we introduce a framework based on the data-driven identification of an augmented state-space model using a neural-network-based representation. For a given training dataset, it amounts to jointly reconstructing the latent states and learning an ODE (Ordinary Differential Equation) representation in this space. Through numerical experiments, we demonstrate the relevance of the proposed framework w.r.t. state-of-the-art approaches in terms of short-term forecasting errors and long-term behaviour. We further discuss how the proposed framework relates to Koopman operator theory and Takens' embedding theorem. | reject | This paper presents an ODE-based latent variable model, argues that extra unobserved dimensions are necessary in general, and that deterministic encodings are also insufficient in general. Instead, they optimize the latent representation during training. They include small-scale experiments showing that their framework beats alternatives.
In my mind, the argument about fixed mappings being inadequate is a fair one, but it misses the fact that the variational inference framework already has several ways to address this shortcoming:
1) The recognition network outputs a distribution over latent values, which in itself does not address this issue, but provides regularization benefits.
2) The recognition network is just a strategy for speeding up inference. There's no reason you can't just do variational inference or MCMC for inference instead (which is similar to your approach), or do semi-amortized variational inference.
Basically, this paper could have been somewhat convincing as a general exploration of approximate inference strategies in the latent ODE model. Instead, it provides a lot of philosophical arguments and a small amount of empirical evidence that a particular encoder is insufficient when doing MAP inference. It also seems like a problem that hyperparameters were copied from Chen et al 2018, but are used in a MAP setting instead of a VAE setting. Finally, it's not clear how hyperparameters such as the size of the latent dimensions were chosen. | train | [
"rJl7V-W6KH",
"SJxNsrX2or",
"rylaafctjB",
"H1g0MS5YsH",
"SJgTW8cYjr",
"S1gg5RtYjr",
"Bkx9VkctjS",
"rJegRjtFsr",
"HkxEhStFor",
"Syxv3Mo2YS",
"H1lvkgDO9B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Update: I raised the score from 1 to 3 to acknowledge the authors' consideration for the 2000-2010 literature on learning dynamical systems from partial observations. Unfortunately, the writing is still confusing, some of the claims in the introduction and rebuttal are inexact ([5] does not embed the observations ... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_BygMreSYPB",
"rylaafctjB",
"rJl7V-W6KH",
"rJl7V-W6KH",
"rJl7V-W6KH",
"Syxv3Mo2YS",
"rJegRjtFsr",
"H1lvkgDO9B",
"iclr_2020_BygMreSYPB",
"iclr_2020_BygMreSYPB",
"iclr_2020_BygMreSYPB"
] |
iclr_2020_BJx4rerFwB | wMAN: WEAKLY-SUPERVISED MOMENT ALIGNMENT NETWORK FOR TEXT-BASED VIDEO SEGMENT RETRIEVAL | Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with video-sentence pairs. Thus, an inherent challenge is automatically inferring the latent correspondence between visual and language representations. To facilitate this alignment, we propose our Weakly-supervised Moment Alignment Network (wMAN) which exploits a multi-level co-attention mechanism to learn richer multimodal representations. The aforementioned mechanism is comprised of a Frame-By-Word interaction module as well as a novel Word-Conditioned Visual Graph (WCVG). Our approach also incorporates a novel application of positional encodings, commonly used in Transformers, to learn visual-semantic representations that contain contextual information of their relative positions in the temporal sequence through iterative message-passing. Comprehensive experiments on the DiDeMo and Charades-STA datasets demonstrate the effectiveness of our learned representations: our combined wMAN model not only outperforms the state-of-the-art weakly-supervised method by a significant margin but also does better than strongly-supervised state-of-the-art methods on some metrics. | reject | This paper proposes a method for aligning an input text with the frames in a video that correspond to what the text describes in a weakly supervised way. The main technical contribution of the paper is the use of co-attention at different abstraction levels.
Among the four reviewers, one reviewer advocates for the paper while the others find this paper to be a borderline reject. Reviewer3, who was initially positive about the paper, expressed during the discussion period that he/she wanted to downgrade his/her rating to weak reject after reading the other reviewers' comments and concerns. The main concern of the reviewers is that the contribution of the paper is incremental, particularly since the idea of co-attention has been used in many different areas and contexts. The authors responded to this in the rebuttal that the proposed approach incorporates different components such as Positional Encodings, is different from prior work, and experimentally performs better than other co-attention usages such as LCGN. Although the AC understands the authors' response, the majority of the reviewers are still not fully convinced about the contribution and their opinion stays opposed to the paper. | train | [
"BJeFV9pBjr",
"rygs-0TroB",
"Syg5866BoB",
"Hkeyen6rsH",
"HylzGuproS",
"ByxSBaCJqr",
"Byx0x0jDcB",
"Skeukb_ncH",
"rJxsIwXpcB"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review. We address your concerns below.\n\n1) This is addressed in the general response.\n\n2) We will update the next version of the paper with the necessary clarifications to the caption and modifications.\n\n3) We have updated the submission with an ablation study of how the number of message... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"rJxsIwXpcB",
"ByxSBaCJqr",
"Byx0x0jDcB",
"Skeukb_ncH",
"iclr_2020_BJx4rerFwB",
"iclr_2020_BJx4rerFwB",
"iclr_2020_BJx4rerFwB",
"iclr_2020_BJx4rerFwB",
"iclr_2020_BJx4rerFwB"
] |
iclr_2020_rJerHlrYwH | Data-Efficient Image Recognition with Contrastive Predictive Coding | Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge. We hypothesize that data-efficient recognition is enabled by representations which make the variability in natural signals more predictable, as suggested by recent perceptual evidence. We therefore revisit and improve Contrastive Predictive Coding, a recently-proposed unsupervised learning framework, and arrive at a representation which enables generalization from small amounts of labeled data. When provided with only 1% of ImageNet labels (i.e. 13 per class), this model retains a strong classification performance, 73% Top-5 accuracy, outperforming supervised networks by 28% (a 65% relative improvement) and state-of-the-art semi-supervised methods by 14%. We also find this representation to serve as a useful substrate for object detection on the PASCAL-VOC 2007 dataset, approaching the performance of representations trained with a fully annotated ImageNet dataset. | reject | The paper tackles the key question of achieving high prediction performances with few labels. The proposed approach builds upon Contrastive Predictive Coding (van den Oord et al. 2018). The contribution lies in i) refining CPC along several axes including model capacity, directional predictions, patch-based augmentation; ii) showing that the refined representation learned by the called CPC.v2 supports an efficient classification in a few-label regime, and can be transferred to another dataset; iii) showing that the auxiliary losses involved in the CPC are not necessarily predictive of the eventual performance of the network.
This paper generated a hot discussion. Reviewers were not convinced that the paper's contributions are sufficiently innovative to deserve publication at ICLR. The authors argued that novelty does not have to lie in equations, and that the new ideas and evidence presented are worthwhile.
The area chair thinks that the paper raises profound questions (e.g., what auxiliary losses are most conducive to learning a good representation; how to divide the computational efforts among the preliminary phase of representation learning and the later phase of classifier learning), but given the number of options and details involved, these results may support several interpretations besides the authors'.
The authors might also want to leave the claim about the generality of the CPC++ principles (e.g., regarding audio) for further work - or to bring additional evidence backing up this claim.
In conclusion, this paper contains brilliant ideas and I hope to see them published with a strengthened analysis of its components. | train | [
"ryxqDXApcr",
"SylzQECKor",
"SygZlNAYoS",
"BJeIT7RKiS",
"rJlQ5mRtjB",
"Byl00MAtoS",
"Sye7g-moFH",
"HJxj6VX0tr",
"BJg06Ig69B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to use Contrastive Predictive Coding (CPC), an unsupervised learning approach, to learn representations for further image classification. The authors show that using CPC for representation learning allows to achieve better results than other self-supervised methods. Moreover, CPC is shown to be ... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
5,
1,
5
] | [
"iclr_2020_rJerHlrYwH",
"Sye7g-moFH",
"HJxj6VX0tr",
"BJg06Ig69B",
"ryxqDXApcr",
"iclr_2020_rJerHlrYwH",
"iclr_2020_rJerHlrYwH",
"iclr_2020_rJerHlrYwH",
"iclr_2020_rJerHlrYwH"
] |
iclr_2020_S1gLBgBtDH | SLM Lab: A Comprehensive Benchmark and Modular Software Framework for Reproducible Deep Reinforcement Learning | We introduce SLM Lab, a software framework for reproducible reinforcement learning (RL) research. SLM Lab implements a number of popular RL algorithms, provides synchronous and asynchronous parallel experiment execution, hyperparameter search, and result analysis. RL algorithms in SLM Lab are implemented in a modular way such that differences in algorithm performance can be confidently ascribed to differences between algorithms, not between implementations. In this work we present the design choices behind SLM Lab and use it to produce a comprehensive single-codebase RL algorithm benchmark. In addition, as a consequence of SLM Lab's modular design, we introduce and evaluate a discrete-action variant of the Soft Actor-Critic algorithm (Haarnoja et al., 2018) and a hybrid synchronous/asynchronous training method for RL agents. | reject | A new software framework for Deep RL is introduced. This is a useful work for the community, but it is not a research work. I agree with Reviewer4 that somehow it is not the right venue: other papers need to have technical contributions, SOTA, and here - it is difficult but it is another type of work - accurate technical implementation and commenting. I do not feel right having it as a paper at ICLR. | train | [
"H1xO_s5CKr",
"Sygd4Ol9sB",
"Hklbldgcor",
"r1lT9u6FjB",
"SyeSd7ZAKr",
"SyxZDeUO5r"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new RL library called « SLM Lab ». Its most relevant features for RL research are: (1) modularity to help re-use existing components (thus reducing the risk of subtle implementation differences when comparing algorithms), (2) implementations of most popular algorithms like DQN & variants, A3C... | [
3,
-1,
-1,
-1,
8,
3
] | [
3,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_S1gLBgBtDH",
"SyeSd7ZAKr",
"H1xO_s5CKr",
"SyxZDeUO5r",
"iclr_2020_S1gLBgBtDH",
"iclr_2020_S1gLBgBtDH"
] |
iclr_2020_HJl8SgHtwr | VIMPNN: A physics informed neural network for estimating potential energies of out-of-equilibrium systems | Simulation of molecular and crystal systems enables insight into interesting chemical properties that benefit processes ranging from drug discovery to material synthesis. However, these simulations can be computationally expensive and time consuming despite the approximations through Density Functional Theory (DFT). We propose the Valence Interaction Message Passing Neural Network (VIMPNN) to approximate DFT's ground-state energy calculations. VIMPNN integrates physics prior knowledge such as the existence of different interatomic bonds to estimate more accurate energies. Furthermore, while many previous machine learning methods consider only stable systems, our proposed method is demonstrated on unstable systems at different atomic distances. VIMPNN predictions can be used to determine the stable configurations of systems, i.e. stable distance for atoms -- a necessary step for the future simulation of crystal growth for example. Our method is extensively evaluated on an augmented version of the QM9 dataset that includes unstable molecules, as well as a new dataset of infinite- and finite-size crystals, and is compared with the Message Passing Neural Network (MPNN). VIMPNN has comparable accuracy with DFT, while allowing for 5 orders of magnitude in computational speed up compared to DFT simulations, and produces more accurate and informative potential energy curves than MPNN for estimating stable configurations. | reject | The paper considers the problem of estimating the electronic structure's ground state energy of a given atomic system by means of supervised machine learning, as a fast alternative to conventional explicit methods (DFT). For this purpose, it modifies the neural message-passing architecture to account for further physical properties, and it extends the empirical validation to also include unstable molecules.
Reviewers acknowledged the valuable experimental setup of this work and the significance of the results in the application domain, but were generally skeptical about the novelty of the machine learning model under study. Ultimately, and given that the main focus of this conference is on Machine Learning methodology, this AC believes this work could be more suitable in a more specialized venue in computational/quantum chemistry. | train | [
"BJxU7QjKiH",
"rJgfufsYiB",
"rkevAhctoB",
"rkeUP-b8oB",
"Hket-APeFB",
"S1lomgQcYS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their comments. We have responded to the feedback:\n\n\"In 4.2 it is explained how different ways of incorporating [...] I would suggest systematically evaluating the different options and including the results in an appendix. \"\n\nWe agree that the paper would benefit from adding the ex... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
1,
3,
1
] | [
"Hket-APeFB",
"S1lomgQcYS",
"rkeUP-b8oB",
"iclr_2020_HJl8SgHtwr",
"iclr_2020_HJl8SgHtwr",
"iclr_2020_HJl8SgHtwr"
] |
iclr_2020_ryxOBgBFPH | Preventing Imitation Learning with Adversarial Policy Ensembles | Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy propriety. Policies, such as human, or policies on deployed robots, can all be cloned without consent from the owners. How can we protect our proprietary policies from cloning by an external observer? To answer this question we introduce a new reinforcement learning framework, where we train an ensemble of optimal policies, whose demonstrations are guaranteed to be useless for an external observer. We formulate this idea by a constrained optimization problem, where the objective is to improve proprietary policies, and at the same time deteriorate the virtual policy of an eventual external observer. We design a tractable algorithm to solve this new optimization problem by modifying the standard policy gradient algorithm. It appears such problem formulation admits plausible interpretations of confidentiality, adversarial behaviour, which enables a broader perspective of this work. We demonstrate explicitly the existence of such 'non-clonable' ensembles, providing a solution to the above optimization problem, which is calculated by our modified policy gradient algorithm. To our knowledge, this is the first work regarding the protection and privacy of policies in Reinforcement Learning. | reject | Although the reviewers appreciated the novelty of this work, they unanimously recommended rejection. The current version of the paper exhibits weak presentation quality and lacks sufficient technical depth. The experimental evaluation was not found to be sufficiently convincing by any of the reviewers. The submitted comments should help the authors improve their paper. | train | [
"SkeyZ3HTYB",
"Hyer3HfnsB",
"rkl_PHfhjB",
"SkgsaNM3oS",
"S1xnK7fniH",
"HJeehMRpYS",
"BkxgCNX0tr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses the problem of poisoning behavioral cloning using an optimized ensemble of demonstrators. The goals is allow the ensemble to still achieve an expected return above a certain threshold while minimizing the return of a policy trained via behavioral cloning. \n\nThis is a very exciting and novel ... | [
3,
-1,
-1,
-1,
-1,
1,
3
] | [
5,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_ryxOBgBFPH",
"iclr_2020_ryxOBgBFPH",
"SkeyZ3HTYB",
"BkxgCNX0tr",
"HJeehMRpYS",
"iclr_2020_ryxOBgBFPH",
"iclr_2020_ryxOBgBFPH"
] |
iclr_2020_HJeYSxHFDS | Gauge Equivariant Spherical CNNs | Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images. Over the last few years, a number of spherical convolution methods have been proposed, based on generalized spherical FFTs, graph convolutions, and other ideas. However, none of these methods is simultaneously equivariant to 3D rotations, able to detect anisotropic patterns, computationally efficient, agnostic to the type of sample grid used, and able to deal with signals defined on only a part of the sphere. To address these limitations, we introduce the Gauge Equivariant Spherical CNN. Our method is based on the recently proposed theory of Gauge Equivariant CNNs, which is in principle applicable to signals on any manifold, and which can be computed on any set of local charts covering all of the manifold or only part of it. In this paper we show how this method can be implemented efficiently for the sphere, and show that the resulting method is fast, numerically accurate, and achieves good results on the widely used benchmark problems of climate pattern segmentation and omnidirectional semantic segmentation. | reject | The paper extends Gauge invariant CNNs to Gauge invariant spherical CNNs. The authors significantly improved both theory and experiments during the rebuttal and the paper is well presented. However, the topic is somewhat niche, and the bar for ICLR this year was very high, so unfortunately this paper did not make it. We encourage the authors to resubmit the work including the new results obtained during the rebuttal period. | val | [
"BJeVkETtiH",
"SygRPmTYsH",
"S1ltX7TYsS",
"BJl9iMTKsS",
"SJgLAnS0YB",
"HylO6pcycB",
"SJeURqxxcB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. It is true that the present paper does not introduce a new framework like the paper by Cohen et al. However, although that paper contained a detailed continuous mathematical theory, as well as a discretized implementation of the idea for the icosahedron, it lacked an explanation of how... | [
-1,
-1,
-1,
-1,
8,
8,
3
] | [
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"SJeURqxxcB",
"HylO6pcycB",
"SJgLAnS0YB",
"iclr_2020_HJeYSxHFDS",
"iclr_2020_HJeYSxHFDS",
"iclr_2020_HJeYSxHFDS",
"iclr_2020_HJeYSxHFDS"
] |
iclr_2020_BkeYSlrYwH | Collaborative Inter-agent Knowledge Distillation for Reinforcement Learning | Reinforcement Learning (RL) has demonstrated promising results across several sequential decision-making tasks. However, reinforcement learning struggles to learn efficiently, thus limiting its pervasive application to several challenging problems. A typical RL agent learns solely from its own trial-and-error experiences, requiring many experiences to learn a successful policy. To alleviate this problem, we propose collaborative inter-agent knowledge distillation (CIKD). CIKD is a learning framework that uses an ensemble of RL agents to execute different policies in the environment while sharing knowledge amongst agents in the ensemble. Our experiments demonstrate that CIKD improves upon state-of-the-art RL methods in sample efficiency and performance on several challenging MuJoCo benchmark tasks. Additionally, we present an in-depth investigation on how CIKD leads to performance improvements.
| reject | The paper introduces an ensemble of RL agents that share knowledge amongst themselves. Because there are no theoretical results, the experiments have to carry the paper. The reviewers had rather different views on the significance of these experiments and whether they are sufficient to convincingly validate the learning framework introduced. Overall, because of the high bar for ICLR acceptance, this paper falls just below the threshold.
| test | [
"rJl2BS_niH",
"HJgbNT4nor",
"ryg6qbs5iS",
"r1g2F6OqiS",
"rJlz4hu4sr",
"rygrTiuEiB",
"Bkg-TbEmor",
"Hkx1u0nj_H",
"HygygT3M9H",
"BkgE-WYp5H",
"rylFv_5hvB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"[Update] Question/Comment #8 Response: \nWe have attached the new experimental results in the appendix.",
"We would like to thank all the reviewers for their helpful comments. We have provided responses to all of the reviewers, and have updated our paper in response to the reviews. In particular, we have improve... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
-1,
-1
] | [
"rJlz4hu4sr",
"iclr_2020_BkeYSlrYwH",
"Hkx1u0nj_H",
"HygygT3M9H",
"rygrTiuEiB",
"Bkg-TbEmor",
"iclr_2020_BkeYSlrYwH",
"iclr_2020_BkeYSlrYwH",
"iclr_2020_BkeYSlrYwH",
"rylFv_5hvB",
"iclr_2020_BkeYSlrYwH"
] |
iclr_2020_Hkg9HgBYwH | Encoding Musical Style with Transformer Autoencoders | We consider the problem of learning high-level controls over the global structure of sequence generation, particularly in the context of symbolic music generation with complex language models. In this work, we present the Transformer autoencoder, which aggregates encodings of the input data across time to obtain a global representation of style from a given performance. We show it is possible to combine this global embedding with other temporally distributed embeddings, enabling improved control over the separate aspects of performance style and melody. Empirically, we demonstrate the effectiveness of our method on a variety of music generation tasks on the MAESTRO dataset and an internal, 10,000+ hour dataset of piano performances, where we achieve improvements in terms of log-likelihood and mean listening scores as compared to relevant baselines. | reject | Main content:
Blind review #3 summarizes it well:
This paper presents a technique for encoding the high level “style” of pieces of symbolic music. The music is represented as a variant of the MIDI format. The main strategy is to condition a Music Transformer architecture on this global “style embedding”. Additionally, the Music Transformer model is also conditioned on a combination of both “style” and “melody” embeddings to try and generate music “similar” to the conditioning melody but in the style of the performance embedding.
--
Discussion:
The reviewers questioned the novelty. Blind review #2 wrote: "Overall, I think the paper presents an interesting application and parts of it are well written, however I have concerns with the technical presentation in parts of the paper and some of the methodology. Firstly, I think the algorithmic novelty in the paper is fairly limited. The performance conditioning vector is generated by an additional encoding transformer, compared to the Music Transformer paper (Huang et. al. 2019b). However, the limited algorithmic novelty is not the main concern. The authors also mention an internal dataset of music audio and transcriptions, which can be a major contribution to the music information retrieval (MIR) community. However it is not clear if this dataset will be publicly released or is only for internal experiments."
However, after revision, the same reviewer has upgraded the review to a weak accept, as the authors wrote "We emphasize that our goal is to provide users with more fine-grained control over the outputs generated by a seq2seq language model. Despite its simplicity, our method is able to learn a global representation of style for a Transformer, which to the best of our knowledge is a novel contribution for music generation. Additionally, we can synthesize an arbitrary melody into the style of another performance, and we demonstrate the effectiveness of our results both quantitatively (metrics) and qualitatively (interpolations, samples, and user listening studies)."
--
Recommendation and justification:
This paper is borderline for the reasons above, and due to the large number of strong papers, is not accepted at this time. As one comment, this work might actually be more suitable for a more specialized conference like ISMIR, as its novel contribution is more to music applications than to fundamental machine learning approaches. | train | [
"Hkx-_FQnjS",
"BygAqcDQsS",
"Byecdcvmsr",
"HyeZH9PXjr",
"Hygtfbj6FH",
"rJlAcEsTtS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Authors, \n\nThank you for all the changes to the draft. I think the paper is much improved due to all the changes. I need some time to go through all the changes in detail and reconsider my rating for the paper. ",
"In addition to the common concerns as written above, we address Reviewer #3's specific conc... | [
-1,
-1,
-1,
-1,
3,
6
] | [
-1,
-1,
-1,
-1,
4,
1
] | [
"BygAqcDQsS",
"Hygtfbj6FH",
"rJlAcEsTtS",
"iclr_2020_Hkg9HgBYwH",
"iclr_2020_Hkg9HgBYwH",
"iclr_2020_Hkg9HgBYwH"
] |
iclr_2020_SygcSlHFvS | On Understanding Knowledge Graph Representation | Many methods have been developed to represent knowledge graph data, which implicitly exploit low-rank latent structure in the data to encode known information and enable unknown facts to be inferred. To predict whether a relationship holds between entities, their embeddings are typically compared in the latent space following a relation-specific mapping. Whilst link prediction has steadily improved, the latent structure, and hence why such models capture semantic information, remains unexplained. We build on recent theoretical interpretation of word embeddings as a basis to consider an explicit structure for representations of relations between entities. For identifiable relation types, we are able to predict properties and justify the relative performance of leading knowledge graph representation methods, including their often overlooked ability to make independent predictions. | reject | The paper proposes a set of conditions that enable a mapping from word embeddings to relation embeddings in knowledge graphs. Then, using recent results about pointwise mutual information word embeddings, the paper provides insights to the latent space of relations, enabling a categorization of relations of entities in a knowledge graph. Empirical experiments on recent knowledge graph models (TransE, DistMult, TuckER and MuRE) are interpreted in light of the predictions coming from the proposed set of conditions.
The authors responded to reviewer comments well, providing significant updates during the discussion period. Unfortunately, the reviewers did not engage further after their original reviews, and so it is hard to tell whether they agreed that the changes resolved all their questions.
Overall, the paper provides much needed analysis for understanding of the latent space of relations on knowledge graphs. Unfortunately, the original submission did not clearly present the ideas, and it is unclear whether the updated version addresses all the concerns. The paper in its current state is therefore not yet suitable for publication at ICLR. | train | [
"B1guFCaotH",
"rJxbP-CqoH",
"rJlY-jGmiH",
"ryxf-Kf7sH",
"Bygk9dGQoB",
"HJx2gzzmjS",
"S1e-n1f7sS",
"B1lsoiXVKH",
"rkgCydwkcr",
"HJeKIG6LYr",
"Hklleo57tr",
"Ske42A1AdS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"This paper proposes to provide a detailed study on the explainability of link prediction (LP) models by utilizing a recent interpretation of word embeddings. More specifically, the authors categorize the relations in KG into three categories (R, S, C) using the correlation between the semantic relation between two... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1
] | [
"iclr_2020_SygcSlHFvS",
"iclr_2020_SygcSlHFvS",
"B1lsoiXVKH",
"B1guFCaotH",
"B1guFCaotH",
"rkgCydwkcr",
"rkgCydwkcr",
"iclr_2020_SygcSlHFvS",
"iclr_2020_SygcSlHFvS",
"Hklleo57tr",
"Ske42A1AdS",
"iclr_2020_SygcSlHFvS"
] |
iclr_2020_r1nSxrKPH | Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks | Solving long-horizon sequential decision making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise to enhance the capabilities of RL agents via operation on different levels of temporal abstraction. Despite the success of recent works in dealing with inherent nonstationarity and sample complexity, it remains difficult to generalize to unseen environments and to transfer different layers of the policy to other agents. In this paper, we propose a novel HRL architecture, Hierarchical Decompositional Reinforcement Learning (HiDe), which allows decomposition of the hierarchical layers into independent subtasks, yet allows for joint training of all layers in end-to-end manner. The main insight is to combine a control policy on a lower level with an image-based planning policy on a higher level. We evaluate our method on various complex continuous control tasks for navigation, demonstrating that generalization across environments and transfer of higher level policies can be achieved. See videos https://sites.google.com/view/hide-rl | reject | The submission proposes a complex, hierarchical architecture for continuous control RL that combines Hindsight Experience Replay, vision-based planning with privileged information, and low-level control policy learning. The authors demonstrate that the approach can achieve transfer of the different control levels between different bodies in a single environment.
The reviewers were initially all negative, but 2 were persuaded towards weak acceptance by the improvements to the paper and the authors' rebuttal. The discussion focused on remaining limitations: the use of a single maze environment for evaluation, as well as whether the baselines were fair (HAC in particular). After reading the paper, I believe that these limitations are substantial. In particular, this is not a general approach and its relevance is severely limited unless the authors demonstrate that it will work as well in a more general control setting, which is in their future work already.
Thus I recommend rejection at this time. | val | [
"BJxSfTH8KH",
"HJg9A-usKH",
"SyxS5cVnjH",
"S1xc8U4hoS",
"SJlFt6EnoB",
"Bylf9sVnsS",
"S1lqIISnjr",
"S1lUPaEnsB",
"ByxorF4hsr",
"Hyly6HEniB",
"Bye3eya2KB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a neat framework for creating HRL framework that will be able to generalize its application to slightly different environment layout. This is done via an image-based top-down from as input to the high level. An intermediate layer is used to help create more fine-grained goal specification for a ... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_r1nSxrKPH",
"iclr_2020_r1nSxrKPH",
"BJxSfTH8KH",
"Bye3eya2KB",
"iclr_2020_r1nSxrKPH",
"SyxS5cVnjH",
"Bylf9sVnsS",
"iclr_2020_r1nSxrKPH",
"iclr_2020_r1nSxrKPH",
"HJg9A-usKH",
"iclr_2020_r1nSxrKPH"
] |
iclr_2020_Byl3HxBFwH | Efficient Deep Representation Learning by Adaptive Latent Space Sampling | Supervised deep learning requires a large amount of training samples with annotations (e.g. label class for classification task, pixel- or voxel-wised label map for segmentation tasks), which are expensive and time-consuming to obtain. During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded of equal importance. However, some of the samples may become less informative during training, as the magnitude of the gradient start to vanish for these samples. In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation. To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process. The adaptive selection or sampling is performed based on a hardness-aware strategy in the latent space constructed by a generative model. To evaluate the proposed training framework, we perform experiments on three different datasets, including MNIST and CIFAR-10 for image classification task and a medical image dataset IVUS for biophysical simulation task. On all three datasets, the proposed framework outperforms a random sampling method, which demonstrates the effectiveness of our framework. | reject | VAE-based sample selection for training NNs. A well-written experimental paper that is demonstrated through a number of experiments, all of which are minimal and from which generalization is not per se expected. The absence of an underlying theory, and the absence of rigorous experimentation makes me request to extend either or, better, both. | train | [
"SJeEOpVOiH",
"BygRX1NCtS",
"r1xpqsNuoS",
"r1x78YE_or",
"BygrWUEOoB",
"rJxqsKr6YH",
"SJldn3zH9H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the detailed comments. It has resolved my concerns. I think the paper is very interesting and insightful. We should encourage such work that explores how a method works. Although it is not practical for large-scale experiments yet, it may do with some extensions in future work. Therefore, I have raised ... | [
-1,
8,
-1,
-1,
-1,
3,
6
] | [
-1,
5,
-1,
-1,
-1,
3,
5
] | [
"r1x78YE_or",
"iclr_2020_Byl3HxBFwH",
"rJxqsKr6YH",
"BygRX1NCtS",
"SJldn3zH9H",
"iclr_2020_Byl3HxBFwH",
"iclr_2020_Byl3HxBFwH"
] |
iclr_2020_H1e3HlSFDr | Variational Constrained Reinforcement Learning with Application to Planning at Roundabout | Planning at roundabout is crucial for autonomous driving in urban and rural environments. Reinforcement learning is promising not only in dealing with complicated environments but also in taking safety constraints into account as a constrained Markov Decision Process. However, the safety constraints should be explicitly mathematically formulated, while this is challenging for planning at roundabout due to unpredicted dynamic behavior of the obstacles. Therefore, to discriminate the obstacles' states as either safe or unsafe is desired, which is known as situation awareness modeling. In this paper, we combine variational learning and constrained reinforcement learning to simultaneously learn a Conditional Representation Model (CRM) to encode the states into safe and unsafe distributions respectively as well as to learn the corresponding safe policy. Our approach is evaluated using the Simulation of Urban Mobility (SUMO) traffic simulator and it can generalize to various traffic flows. | reject | This paper proposes to add constraints to the RL problem within a variational method. The hope is to specify safe vs non-safe states. The reviewers were not convinced that this paper makes the cut for ICLR. Moreover, there was no rebuttal from the authors, so it didn't give the reviewers a chance to reconsider their opinion. Based on the current ratings, I recommend to reject this paper. | test | [
"B1lF6gJTtH",
"HJl-v-KAYH",
"SklUtlxBqB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presented a CRM model which is VAE with separate priors for save and unsafe modes and utilize it in RL for roundabout planning task. \n\nPros:\n1. The motivation of the work is clear and solves an important task\n2. The approach is sensible \n\nCons:\n1. Experimental evaluation is very weak. It is only... | [
1,
1,
1
] | [
3,
1,
4
] | [
"iclr_2020_H1e3HlSFDr",
"iclr_2020_H1e3HlSFDr",
"iclr_2020_H1e3HlSFDr"
] |
iclr_2020_SylpBgrKPH | MissDeepCausal: causal inference from incomplete data using deep latent variable models | Inferring causal effects of a treatment, intervention or policy from observational data is central to many applications. However, state-of-the-art methods for causal inference seldom consider the possibility that covariates have missing values, which is ubiquitous in many real-world analyses. Missing data greatly complicate causal inference procedures as they require an adapted unconfoundedness hypothesis which can be difficult to justify in practice. We circumvent this issue by considering latent confounders whose distribution is learned through variational autoencoders adapted to missing values. They can be used either as a pre-processing step prior to causal inference but we also suggest to embed them in a multiple imputation strategy to take into account the variability due to missing values. Numerical experiments demonstrate the effectiveness of the proposed methodology especially for non-linear models compared to competitors. | reject | This paper addresses the problem of causal inference from incomplete data. The main idea is to use a latent confounders through a VAE. A multiple imputation strategy is then used to account for missing values. Reviewers have mixed responses to this paper. Initially, the scores were 8,6,3. After discussion the reviewer who rated is 8 reduced their score to 6, but at the same time the score of 3 went up to 6. The reviewers agree that the problem tackled in the paper is difficult, and also acknowledge that the rebuttal of the paper was reasonable and honest. The authors added a simulation study which shows good results.
The main argument towards rejection is that the paper does not beat the state of the art. I do think that this is still ok if the paper brings useful insights for the community even though it does not beat the state of the art. For now, with the current score, the paper does not make the cut. For this reason, I recommend to reject the paper, but I encourage the authors to resubmit this to another venue after improving the paper. | train | [
"SJxGMoVaYH",
"HkeSD_qeoB",
"BylnDUKyqB",
"rJegLivtjr",
"B1eiqH-QoS",
"SyepcEk7iS",
"rkxbxm1msr",
"r1l-nW1QoH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper introduces MissDeepCausal method to address the problem of treatment effect estimation with incomplete covariates matrix (missing values at random -- MAR). It makes use of Variational AutoEncoders (VAE) to learn the latent confounders from incomplete covariates. This also helps encoding complex non-line... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_SylpBgrKPH",
"iclr_2020_SylpBgrKPH",
"iclr_2020_SylpBgrKPH",
"B1eiqH-QoS",
"r1l-nW1QoH",
"SJxGMoVaYH",
"BylnDUKyqB",
"HkeSD_qeoB"
] |
iclr_2020_ByxCrerKvS | Set Functions for Time Series | Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency.
Our method SeFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, extremely parallelizable, and scales well to very large datasets and online monitoring scenarios.
We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime. | reject | The paper investigates a new approach to classification of irregularly sampled and unaligned multi-modal time series via set function mapping. Experiment results on health care datasets are reported to demonstrate the effectiveness of the proposed approach.
The idea of extending set functions to address missing values in time series is interesting and novel. The paper does a good job at motivating the methods and describing the proposed solution. The authors did a good job at addressing the concerns of the reviewers.
During the discussion, some reviewers are still concerned about the empirical results, which do not match well with published results (even though the authors provided an explanation for it). In addition, the proposed method is only tested on the health care datasets, but the improvement is limited. Therefore it would be worthwhile investigating other time series datasets, and most importantly answering the important question of which datasets/applications the proposed method works well on.
The paper is one step away from being a strong publication. We hope the reviews can help improve the paper for a strong publication in the future. | train | [
"BJeBhVSAYS",
"Hygu7HYM9B",
"rJe5FdAqjS",
"ryx5adhtjr",
"ryxt2xmwjB",
"HkxFegXPjS",
"rkeqAk7wir",
"H1xKiyXvsr",
"H1l8MVt9FS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\nThe work is focused on classification of irregularly sampled and unaligned multi-modal time series. Prior work has primarily focused on imputation methods, either end-to-end or otherwise. This paper approaches the problem as a set function mapping between the time-series tuples to the class label. The pr... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_ByxCrerKvS",
"iclr_2020_ByxCrerKvS",
"ryx5adhtjr",
"H1xKiyXvsr",
"iclr_2020_ByxCrerKvS",
"BJeBhVSAYS",
"Hygu7HYM9B",
"H1l8MVt9FS",
"iclr_2020_ByxCrerKvS"
] |
iclr_2020_BJxAHgSYDB | Learning to Rank Learning Curves | Many automated machine learning methods, such as those for hyperparameter and neural architecture optimization, are computationally expensive because they involve training many different model configurations. In this work, we present a new method that saves computational budget by terminating poor configurations early on in the training. In contrast to existing methods, we consider this task as a ranking and transfer learning problem. We qualitatively show that by optimizing a pairwise ranking loss and leveraging learning curves from other data sets, our model is able to effectively rank learning curves without having to observe many or very long learning curves. We further demonstrate that our method can be used to accelerate a neural architecture search by a factor of up to 100 without a significant performance degradation of the discovered architecture. In further experiments we analyze the quality of ranking, the influence of different model components as well as the predictive behavior of the model. | reject | Authors propose a new way of early stopping for neural architecture search. In contrast to making keep or kill decisions based on extrapolating the learning curves then making decisions between alternatives, this work learns a model on pairwise comparisons between learning curves directly. Reviewers were concerned with over-claiming of novelty since the original version of this paper overlooked significant hyperparameter tuning works. In a revision, additional experiments were performed using some of the suggested methods but reviewers remained skeptical that the empirical experiments provided enough justification that this work was ready for prime time. | train | [
"S1xmXOVhYH",
"ryx1Vnf5jB",
"BJedSBmDoB",
"B1lInjy7jH",
"rklD0QAzjH",
"rylmAApfsS",
"rJlhVTqTKS",
"HylXcz8GcB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThe paper proposes a new method to rank learning curves of neural networks which can be used to speed up neural architecture search.\nCompared to previous work, the learning curve model not only takes hyperparameter configurations into account, but by training it on offline generated data, it is able to model le... | [
6,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_BJxAHgSYDB",
"rylmAApfsS",
"rklD0QAzjH",
"S1xmXOVhYH",
"rJlhVTqTKS",
"HylXcz8GcB",
"iclr_2020_BJxAHgSYDB",
"iclr_2020_BJxAHgSYDB"
] |
iclr_2020_HJlyLgrFvB | All Simulations Are Not Equal: Simulation Reweighing for Imperfect Information Games | Imperfect information games are challenging benchmarks for artificial intelligence systems. To reason and plan under uncertainty is key towards general AI. Traditionally, large amounts of simulations are used in imperfect information games, and they sometimes perform sub-optimally due to large state and action spaces. In this work, we propose a simulation reweighing mechanism using neural networks. It performs backwards verification to public previous actions and assigns proper belief weights to the simulations from the information set of the current observation, using an incomplete state solver network (ISSN). We use simulation reweighing in the playing phase of the game contract bridge, and show that it outperforms previous state-of-the-art Monte Carlo simulation based methods, and achieves better play per decision. | reject | A method is introduced to estimate the hidden state in imperfect-information multiplayer games, in particular Bridge. This is interesting, but the paper falls short in various ways. Several reviewers complained about the readability of the paper, and also about the quality and presentation of the interesting results.
It seems that this paper represents an interesting idea, but is not yet ready for publication. | train | [
"BJeVPJj3ir",
"ByxET2cDsr",
"rJgyHy5_jr",
"BJxPbAYOjS",
"BJx2XMoiFS",
"BJeru3k2Yr"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks the reviewer for the insightful feedbacks.\n\nWe sincerely apologize for the grammar errors and have updated a revision to correct them. We have also cited the mentioned work in the new revision. For the experiments we use trick losses compared with optimal play assuming perfect information as the evaluatio... | [
-1,
1,
-1,
-1,
3,
3
] | [
-1,
5,
-1,
-1,
1,
3
] | [
"iclr_2020_HJlyLgrFvB",
"iclr_2020_HJlyLgrFvB",
"BJx2XMoiFS",
"BJeru3k2Yr",
"iclr_2020_HJlyLgrFvB",
"iclr_2020_HJlyLgrFvB"
] |
iclr_2020_SkgWIxSFvr | FLAT MANIFOLD VAES | Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space. Often, the prior distribution is specified by the user to be very simple, effectively shifting the burden of a learning algorithm to the estimation of a highly non-linear likelihood function. This poses a problem for the calculation of a popular distance function, the geodesic between data points in the latent space, as this is often solved iteratively via numerical methods. These are less effective if the problem at hand is not well captured by first or second-order approximations. In this work, we propose less complex likelihood functions by allowing complex distributions and explicitly penalising the curvature of the decoder. This results in geodesics which are approximated well by the Euclidean distance in latent space, decreasing the runtime by a factor of 1,000 with little loss in accuracy.
| reject | The paper proposes to regularize the decoder of the VAE to have a flat pull-back metric, with the goal of making Euclidean distances in the latent space correspond to geodesic distances. This, in turn, results in faster geodesic distance computation. I share the concern of R2 that this regularization towards a flat metric could result in "biased" geodesic distances in regions where data is scarce. I suggest the authors discuss in the next version of the paper if there are situations where this regularization might have drawbacks and if possible, conduct experiments (perhaps on toy data) to either rule out or highlight these points, particularly about scarce data regions. | train | [
"S1guDwFhir",
"rJlmU9O3iB",
"HJlXO5WwYr",
"rkekuz-3sB",
"HkeKVmyjiS",
"rJg9E72qoH",
"HJlnqgn9sB",
"BylaTCjqjr",
"rkekX8OYiS",
"HJxzlWLFjS",
"ryl1AlLKiS",
"r1xw1pSKjS",
"rJxI645i_r",
"SJeN-ZNl5r"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"To be clear, my argument is not based on the mentioned \"Only Bayes...\" paper; I point to this reference as it is the clearest exposition of the problem that I have seen. The fundamental problem is trivial: if you regularize towards smooth manifolds, then distances along the manifold will, by definition, be short... | [
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"rJlmU9O3iB",
"rkekuz-3sB",
"iclr_2020_SkgWIxSFvr",
"HkeKVmyjiS",
"rJg9E72qoH",
"HJlnqgn9sB",
"BylaTCjqjr",
"ryl1AlLKiS",
"SJeN-ZNl5r",
"HJlXO5WwYr",
"HJlXO5WwYr",
"rJxI645i_r",
"iclr_2020_SkgWIxSFvr",
"iclr_2020_SkgWIxSFvr"
] |
iclr_2020_rklfIeSFwS | CNAS: Channel-Level Neural Architecture Search | There is growing interest in automating designing good neural network architectures. The NAS methods proposed recently have significantly reduced architecture search cost by sharing parameters, but there is still a challenging problem of designing search space. We consider that the search space is typically defined with its shape and a set of operations, and propose a channel-level architecture search (CNAS) method using only a fixed type of operation. The resulting architecture is sparse in terms of channels and has a different topology at each cell. The experimental results for CIFAR-10 and ImageNet show that a fine-granular and sparse model searched by CNAS achieves very competitive performance with dense models searched by the existing methods. | reject | This paper proposes a channel pruning approach based on one-shot neural architecture search (NAS). As agreed by all reviewers, it has limited novelty, and the method can be viewed as a straightforward combination of NAS and pruning. Experimental results are not convincing. The proposed method is not better than SOTA on the accuracy or number of parameters. The setup is not fair, as the proposed method uses autoaugment while the other baselines do not. The authors should also compare with related methods such as Bayesnas, and other pruning techniques. Finally, the paper is poorly written, and many related works are missing. | val | [
"HJgFd7gpKB",
"rJxkVRfTFB",
"ryx2wOgP9H",
"SJlMlAIp5B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a channel pruning approach based one-shot neural architecture search (NAS). Unlike other NAS works that mostly search for operations/connections and topologies, this paper focuses on pruning channels for a fixed network.\n\nIn general, the idea of channel pruning has been extensively studied in... | [
3,
1,
3,
3
] | [
4,
4,
3,
3
] | [
"iclr_2020_rklfIeSFwS",
"iclr_2020_rklfIeSFwS",
"iclr_2020_rklfIeSFwS",
"iclr_2020_rklfIeSFwS"
] |
iclr_2020_H1lQIgrFDS | ℓ1 Adversarial Robustness Certificates: a Randomized Smoothing Approach | Robustness is an important property to guarantee the security of machine learning models. It has recently been demonstrated that strong robustness certificates can be obtained on ensemble classifiers generated by input randomization. However, tight robustness certificates are only known for symmetric norms including ℓ0 and ℓ2, while for asymmetric norms like ℓ1, the existing techniques do not apply. By converting the likelihood ratio into a one-dimensional mixed random variable, we derive the first tight ℓ1 robustness certificate under isotropic Laplace distributions. Empirically, the deep networks smoothed by Laplace distributions yield the state-of-the-art certified robustness in ℓ1 norm on CIFAR-10 and ImageNet. | reject | After reading the author's response, all the reviewers agree that this paper is an incremental work. The presentation needs to be polished before publication. | train | [
"rkgZIzOmqS",
"rJgvdv2isB",
"rkgObP2isS",
"HyxgtI3soS",
"H1xWW82jiH",
"SklWazkntr",
"rJxnt1oTtB",
"HkeDpYAGYH",
"rJgZs-Waur",
"SJl2_f2QuH",
"rJlGE2gQur",
"ryxTo1HWOB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"author",
"public"
] | [
"The paper provides a random smoothing technique for L1 perturbation and proves the tightness results for binary classification case. Overall, there are some new results in this paper -- establishing a new certificate bounds for L1 perturbation model. However, I have several concerns about whether this contribution... | [
6,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_H1lQIgrFDS",
"SklWazkntr",
"rJxnt1oTtB",
"rkgZIzOmqS",
"iclr_2020_H1lQIgrFDS",
"iclr_2020_H1lQIgrFDS",
"iclr_2020_H1lQIgrFDS",
"rJgZs-Waur",
"iclr_2020_H1lQIgrFDS",
"rJlGE2gQur",
"ryxTo1HWOB",
"iclr_2020_H1lQIgrFDS"
] |
iclr_2020_SygSLlStwS | Consistent Meta-Reinforcement Learning via Model Identification and Experience Relabeling | Reinforcement learning algorithms can acquire policies for complex tasks automatically, however the number of samples required to learn a diverse set of skills can be prohibitively large. While meta-reinforcement learning has enabled agents to leverage prior experience to adapt quickly to new tasks, the performance of these methods depends crucially on how close the new task is to the previously experienced tasks. Current approaches are either not able to extrapolate well, or can do so at the expense of requiring extremely large amounts of data due to on-policy training. In this work, we present model identification and experience relabeling (MIER), a meta-reinforcement learning algorithm that is both efficient and extrapolates well when faced with out-of-distribution tasks at test time based on a simple insight: we recognize that dynamics models can be adapted efficiently and consistently with off-policy data, even if policies and value functions cannot. These dynamics models can then be used to continue training policies for out-of-distribution tasks without using meta-reinforcement learning at all, by generating synthetic experience for the new task. | reject | The authors propose an algorithm for meta-rl which reduces the problem to one of model identification. The main idea is to meta-train a fast-adapting model of the environment and a shared policy, both conditioned on task-specific context variables. At meta-testing, only the model is adapted using environment data, while the policy simply requires simulated experience. Finally, the authors show experimentally that this procedure better generalizes to out-of-distribution tasks than similar methods.
The reviewers agree that the paper has a few significant shortcomings. It's unclear how hyper-parameters are selected in the experimental section; the algorithm does not allow for continual adaptation; all policy learning is done through data relabelled by the model.
Overall, the problem the paper addresses is very important, but we do not deem the paper publishable in its current form. | train | [
"HJlNvGfaFr",
"BkxcUP53jH",
"Bkl6149noS",
"Hyx0Nmujsr",
"rJe75CPjjB",
"Sylec8rosr",
"HkxKVIHjiH",
"HJlJS_WoiS",
"HkeZ_CytiS",
"HJgRKlV0OH",
"S1lwHLn2tB"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\n1. The paper proposes an algorithm capable of off-policy meta-training (Similar to PEARL) as well as off-policy policy adaptation (By relabelling previous data using the adapted model and reward function). \n\n2. The basic idea is to meta-learn a model that can adapt to different MDPs using a small a... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_SygSLlStwS",
"Bkl6149noS",
"HkeZ_CytiS",
"rJe75CPjjB",
"HkeZ_CytiS",
"S1lwHLn2tB",
"HJgRKlV0OH",
"HkeZ_CytiS",
"HJlNvGfaFr",
"iclr_2020_SygSLlStwS",
"iclr_2020_SygSLlStwS"
] |
iclr_2020_SygBIxSFDS | An Empirical and Comparative Analysis of Data Valuation with Scalable Algorithms | This paper focuses on valuating training data for supervised learning tasks and studies the Shapley value, a data value notion originating in cooperative game theory. The Shapley value defines a unique value distribution scheme that satisfies a set of appealing properties desired by a data value notion. However, the Shapley value requires exponential complexity to calculate exactly. Existing approximation algorithms, although achieving great improvement over the exact algorithm, rely on retraining models multiple times, thus remaining limited when applied to larger-scale learning tasks and real-world datasets.
In this work, we develop a simple and efficient algorithm to estimate the Shapley value with complexity independent of the model size. The key idea is to approximate the model via a K-nearest neighbor (KNN) classifier, which has a locality structure that can lead to efficient Shapley value calculation. We evaluate the utility of the values produced by the KNN proxies in various settings, including label noise correction, watermark detection, data summarization, active data acquisition, and domain adaptation. Extensive experiments demonstrate that our algorithm achieves at least comparable utility to the values produced by existing algorithms while offering significant efficiency improvements. Moreover, we theoretically analyze the Shapley value and justify its advantage over the leave-one-out error as a data value measure. | reject | There is insufficient support to recommend accepting this paper. The authors provided detailed responses to the reviewer comments, but the reviewers did not raise their evaluation of the significance and novelty of the contributions as a result. The feedback provided should help the authors improve their paper. | train | [
"rJgIEi38iB",
"Bye_QnZ2sB",
"Hye3O9ZhoH",
"SJePyo38iH",
"Byxxvhn8iH",
"BJx713hLsr",
"rJg_8sn8or",
"Bkg6LTc5sS",
"rJgr1KZ9jr",
"ryxX9xEpFB",
"H1ejzNb0YS",
"HJle4GUAKr"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for the comments.\n\nQ: The authors need to better motivate the advantages of using Shapley value as a data valuation metric. It is not completely clear to me why Shapley value is a good data valuation metric, compared with other options. The authors argue that it is both fair a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"HJle4GUAKr",
"Bkg6LTc5sS",
"SJePyo38iH",
"ryxX9xEpFB",
"BJx713hLsr",
"H1ejzNb0YS",
"rJgIEi38iB",
"rJgr1KZ9jr",
"Byxxvhn8iH",
"iclr_2020_SygBIxSFDS",
"iclr_2020_SygBIxSFDS",
"iclr_2020_SygBIxSFDS"
] |
iclr_2020_HJe88xBKPr | Mixed Precision Training With 8-bit Floating Point | Reduced precision computation is one of the key areas addressing the widening 'compute gap', driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges, because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, and Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point, to improve error propagation. We also examine the impact of quantization noise on generalization, and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to the full precision baseline. | reject | This paper proposes a method to train DNNs using 8-bit floating point numbers, by using an enhanced loss scaling method and a stochastic rounding method. However, the proposed method lacks novelty, and both the paper presentation and experiments need to be improved throughout. | train | [
"ryxbyEtnsB",
"Hyx3iPknoH",
"S1erqEBnor",
"SJldvfVooB",
"SkxWA0Y9oB",
"rkectR8csS",
"S1g1u7I5sH",
"SkxP4oloYB",
"Skxg0CjUtH",
"SyxwlXOjcH"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your quick response.\n\nI think the method part of the paper still needs much improvement to clarify the novelty and contribution. Also, the experiments in the paper are not enough to demonstrate the generalizability of the proposed methods across different models and datasets. In its current form, I... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"Hyx3iPknoH",
"SJldvfVooB",
"iclr_2020_HJe88xBKPr",
"rkectR8csS",
"SkxP4oloYB",
"Skxg0CjUtH",
"SyxwlXOjcH",
"iclr_2020_HJe88xBKPr",
"iclr_2020_HJe88xBKPr",
"iclr_2020_HJe88xBKPr"
] |
iclr_2020_BJlPLlrFvH | Variable Complexity in the Univariate and Multivariate Structural Causal Model | We show that by comparing the individual complexities of univariate cause and effect in the Structural Causal Model, one can identify the cause and the effect, without considering their interaction at all. The entropy of each variable is ineffective in measuring the complexity, and we propose to capture it by an autoencoder that operates on the list of sorted samples. Comparing the reconstruction errors of the two autoencoders, one for each variable, is shown to perform well on the accepted benchmarks of the field.
In the multivariate case, where one can ensure that the complexities of the cause and effect are balanced, we propose a new method that mimics the disentangled structure of the causal model. We extend the results of~\cite{Zhang:2009:IPC:1795114.1795190} to the multidimensional case, showing that such modeling is only likely in the direction of causality. Furthermore, the learned model is shown theoretically to perform the separation into the causal component and the residual (noise) component. Our multidimensional method obtains significantly higher accuracy than the methods in the literature. | reject | The author response and revisions to the manuscript motivated two reviewers to increase their scores to weak accept. While these revisions increased the quality of the work, the overall assessment is just shy of the threshold for inclusion. | train | [
"rJgH6p3pFH",
"B1gq-oWoir",
"SyxF49zXjS",
"rJxRDaBYjS",
"S1e_TnBtjr",
"SJlqghHKiB",
"SJg4mjrYsH",
"Skx3i1gMqr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Update:\n\nThe authors have successfully justified my concerns. Therefore, I have increased my score to 6.\n\nOriginal comments:\n\nIn this paper, the authors consider learning causal directions from observational data from both univariate case and multi-dimensional case. In the univariate case, the authors propos... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
3
] | [
4,
-1,
1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_BJlPLlrFvH",
"SJlqghHKiB",
"iclr_2020_BJlPLlrFvH",
"rJgH6p3pFH",
"Skx3i1gMqr",
"SyxF49zXjS",
"iclr_2020_BJlPLlrFvH",
"iclr_2020_BJlPLlrFvH"
] |
iclr_2020_H1lOUeSFvB | Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions | We propose a novel method to optimally incorporate surrogate gradient information. Our approach, unlike previous work, needs no information about the quality of the surrogate gradients and is always guaranteed to find a descent direction that is better than the surrogate gradient. This allows to iteratively use the previous gradient estimate as surrogate gradient for the current search point. We theoretically prove that this yields fast convergence to the true gradient for linear functions and show under simplifying assumptions that it significantly improves gradient estimates for general functions. Finally, we evaluate our approach empirically on MNIST and reinforcement learning tasks and show that it considerably improves the gradient estimation of ES at no extra computational cost. | reject | The authors propose a novel approach to using surrogate gradient information in ES. Unlike previous approaches, their method always finds a descent direction that is better than the surrogate gradient. This allows them to use previous gradient estimates as the surrogate gradient. They prove results for the linear case and under simplifying assumptions that it extends beyond the linear case. Finally, they evaluate on MNIST and RL tasks and show improvements over ES.
After the revisions, reviewers were concerned about:
* The strong (and potentially unrealistic) assumptions for the theorems. They felt that these assumptions trivialized the theorems.
* Limited experiments demonstrating advantages in situations where other more effective methods could be used. The performance on the RL tasks shows small gains compared to a vanilla ES approach. Thus, the usefulness of the approach is not clearly demonstrated.
I think that the paper has the potential to be a strong submission if the authors can extend their experiments to more complex problems and demonstrate gains. At this time however, I recommend rejection. | train | [
"r1lg_TEnoH",
"SJxrQTVhjB",
"SygPkpE2iB",
"SygTYhEnoS",
"SkgtRHvaFB",
"Ske0aL1AKB",
"SyxXZ-_0Fr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We agree that Theorem one follows very easily from that assumption. Therefore, we renamed it to Proposition 1.\n\nWe shortly discuss how reasonable the assumption is that the numerical approximation is equal to the true directional derivative. The assumption can be violated because of two reasons: 1) third order a... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"SkgtRHvaFB",
"Ske0aL1AKB",
"SyxXZ-_0Fr",
"iclr_2020_H1lOUeSFvB",
"iclr_2020_H1lOUeSFvB",
"iclr_2020_H1lOUeSFvB",
"iclr_2020_H1lOUeSFvB"
] |
iclr_2020_S1lF8xHYwS | Unsupervised Domain Adaptation through Self-Supervision | This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method. | reject | Thanks for your detailed replies to the reviewers, which helped us a lot to clarify several issues.
Although the paper discusses an interesting topic and contains a potentially interesting idea, its novelty is limited.
Given the high level of competition at ICLR 2020, this paper is unfortunately still below the bar.
"HJxNH7jusH",
"BJxK3M2OsH",
"BkeNnpsdoS",
"rkgvKBsujH",
"Bygt1CBfsH",
"rkxxcB1Rtr",
"rklFudcVqB",
"HkxybvFNqH",
"SJx5Q7Ua5B",
"SyxF2CBaqB",
"H1eZ5xsSKr",
"B1xColtrYr",
"r1x1BXafOr",
"SJxUhmL6vr"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Thank you for your thoughtful review. We have added qualitative comparisons in Appendix G of our latest revision (page 16).\n",
"For the ICCV 2019 paper, please see our reply to reviewer 1 for a thorough comparison of the differences, both algorithmic and conceptual.\n\nFor the domain generalization paper [Carlu... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rklFudcVqB",
"Bygt1CBfsH",
"HkxybvFNqH",
"rkxxcB1Rtr",
"iclr_2020_S1lF8xHYwS",
"iclr_2020_S1lF8xHYwS",
"iclr_2020_S1lF8xHYwS",
"iclr_2020_S1lF8xHYwS",
"SyxF2CBaqB",
"H1eZ5xsSKr",
"B1xColtrYr",
"iclr_2020_S1lF8xHYwS",
"SJxUhmL6vr",
"iclr_2020_S1lF8xHYwS"
] |
iclr_2020_HklFUlBKPB | Identifying Weights and Architectures of Unknown ReLU Networks | The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks. | reject | This article studies the identifiability of architecture and weights of a ReLU network from the values of the computed functions, and presents an algorithm to do this. This is a very interesting problem with diverse implications. The reviewers raised concerns about the completeness of various parts of the proposed algorithm and the complexity analysis, some of which were addressed in the author's response. Another concern raised was that the experiments were limited to small networks, with a proof of concept on more realistic networks missing. The revision added experiments with MNIST. Other concerns (which in my opinion could be studied separately) include possible limitations of the approach to networks with no shared weights nor pooling. The reviewers agree that the article concerns an interesting topic that has not been studied in much detail yet. Still, the article would benefit from a more transparent presentation of the algorithm and theoretical analysis, as well as more extensive experiments. | train | [
"rJl2PVMsoS",
"HJguCCkijB",
"S1efi0yioS",
"BkgCE01ioH",
"SygDMCksoB",
"SyxqKXM6KS",
"SJeEalATKB",
"BJxjoEJUcB",
"HJx7Jukc9H",
"BJx_nit4Fr",
"Hkl8me6QtH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"I have read your rebuttal, and most of my questions are well addressed. I maintain my original rating on this paper.",
"Thank you for the careful review and feedback. To respond to the questions raised:\n\n- Detail on algorithmic primitives. We have clarified the text. The algorithm PointsOnLine is able to perf... | [
-1,
-1,
-1,
-1,
-1,
3,
1,
6,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
1,
-1,
-1
] | [
"SygDMCksoB",
"SyxqKXM6KS",
"SJeEalATKB",
"BJxjoEJUcB",
"HJx7Jukc9H",
"iclr_2020_HklFUlBKPB",
"iclr_2020_HklFUlBKPB",
"iclr_2020_HklFUlBKPB",
"iclr_2020_HklFUlBKPB",
"Hkl8me6QtH",
"iclr_2020_HklFUlBKPB"
] |
iclr_2020_S1et8gBKwH | Semi-supervised Pose Estimation with Geometric Latent Representations | Pose estimation is the task of finding the orientation of an object within an image with respect to a fixed frame of reference. Current classification and regression approaches to the task require large quantities of labelled data for their purposes. The amount of labelled data for pose estimation is relatively limited. With this in mind, we propose the use of Conditional Variational Autoencoders (CVAEs) \cite{Kingma2014a} with circular latent representations to estimate the corresponding 2D rotations of an object. The method is capable of training with datasets that have an arbitrary amount of labelled images, providing relatively similar performance in cases in which 10-20% of the image labels are missing. | reject | This paper addresses the problem of rotation estimation in 2D images. The method attempts to reduce the labeling need by learning in a semi-supervised fashion. The approach learns a VAE where the latent code is factored into the latent vector and the object rotation.
All reviewers agreed that this paper is not ready for acceptance. The reviewers did see promise in the direction of this work. However, there were a few main concerns. First, the focus on 2D instead of 3D orientation. The general consensus was that 3D would be the more pertinent use case and that extension of the proposed approach from 2D to 3D is likely non-trivial. The second issue is the minimal technical novelty. The reviewers argue that the proposed solution is a combination of existing techniques applied to a new problem area.
Since the work does not have sufficient technical novelty to compare against other disentanglement works and is being applied to a less relevant experimental setting, the AC does not recommend acceptance.
| train | [
"Bklwm1hYsr",
"S1eXpCiYsr",
"Hyg_tCjKjS",
"HkgKc3DJqS",
"B1gUsOVx5B",
"Byxpp1v0cS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer, I would like to thank you for the time and effort spent in analyzing this paper and for the specific suggestions made to improve this work. We will answer the concerns presented in the review and indicate the future actions to improve the paper.\n1) The central empirical result stated is that using ... | [
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
5,
1,
4
] | [
"HkgKc3DJqS",
"B1gUsOVx5B",
"Byxpp1v0cS",
"iclr_2020_S1et8gBKwH",
"iclr_2020_S1et8gBKwH",
"iclr_2020_S1et8gBKwH"
] |
iclr_2020_HkxcUxrFPS | Improving Visual Relation Detection using Depth Maps | State of the art visual relation detection methods mostly rely on object information extracted from RGB images such as predicted class probabilities, 2D bounding boxes and feature maps. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps not only to detect spatial relations, such as \textit{standing behind}, but also non-spatial relations, such as \textit{holding}. Since 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different feature extraction strategies from depth maps and show their critical role in relation detection.
Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can be significantly improved by utilizing depth map information. | reject | The paper proposes to improve visual relation prediction by using depth maps. Since existing RGB images do not contain depth information, the authors use a monocular depth estimation method to predict depth maps. The authors show that, using depth maps, they are able to improve prediction of relations between ground truth object bounding boxes and labels.
The paper got relatively low scores (with 3 initial weak rejects). After the revision and suggested improvements, one of the reviewers updated their score so the paper now has 2 weak rejects and 1 weak accept.
The paper had the following weaknesses:
1. The paper has limited technical novelty as it combines off the shelf components. The components also used different backbones (ResNet at some places, VGGNet at others) that were directly from prior work. Was there any attempt to have an unified architecture? As the main novelty of the work is not in the model aspect, the paper needs to have stronger experiments and analysis.
2. More analysis on the quality of the depth estimation is needed. Ideally, the work should provide some insight into whether some of the errors are due to bad depth estimation. The depth estimation method used is from 2016; there are newer depth estimation methods now. Would having better depth estimation give improved results? Experiments that illustrate that the method works well with predicted bounding boxes instead of ground truth bounding boxes would also strengthen the paper.
3. There was the question of whether the related Yang et al. 2018 workshop paper should be included as basis for comparison. In the AC's opinion, Yang et al. 2018 is not concurrent work and should be treated as prior work. However, it is not clear whether it is feasible to compare against that work. The authors should attempt to do so and if infeasible, clearly articulate why that is the case.
4. As pointed out by R3, once there is a depth map available, it is also possible to compare against 3D methods (such as those that operate on point clouds)
Overall the paper had a nice insight by proposing the simple but effective idea of using depth information to help with visual relation prediction. Still the work is somewhat borderline in quality. In the AC's opinion, the main contribution and insight of the paper is of limited interest to the ICLR community, and it would be more appreciated in a computer vision conference. The authors are encouraged to improve the paper with stronger experiments and analysis, incorporate various suggestions from the reviewers, and resubmit to a vision conference.
| val | [
"SkenUk7MFB",
"ryeuECq3sr",
"SJxUbAq3jS",
"r1e-k092oB",
"H1grl0q2jB",
"ryeR2T53oS",
"ByxwGiq2oB",
"r1xH7953sH",
"B1epn-3FsS",
"SylMkK2CYH",
"SylGDVKEqS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n\n********* Post Rebuttal *********\n\nI appreciate the authors' effort in providing thorough responses and revised manuscript. \n\nI agree with the authors that \"the finding not being surprising\" is not a ground for rejection. I tried to word my final decision carefully but it seems it has still caused confus... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_HkxcUxrFPS",
"iclr_2020_HkxcUxrFPS",
"SkenUk7MFB",
"SkenUk7MFB",
"SkenUk7MFB",
"SkenUk7MFB",
"SylMkK2CYH",
"SylGDVKEqS",
"SkenUk7MFB",
"iclr_2020_HkxcUxrFPS",
"iclr_2020_HkxcUxrFPS"
] |
iclr_2020_Bkg5LgrYwS | Imitation Learning of Robot Policies using Language, Vision and Motion | In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn can be used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to influence a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability. | reject | The present paper addresses the problem of imitation learning in multi-modal settings, combining vision, language and motion. The proposed approach learns an abstract task representation, and the goal is to use this as a basis for generalization. This paper was subject to considerable discussion, and the authors clarified several issues that reviewers raised during the rebuttal phase. Overall, the empirical study presented in the paper remains limited, for example in terms of ablations (which components of the proposed model have what effect on performance) and placement in the context of prior work. As a result, the depth of insights is not yet sufficient for publication. | train | [
"SJev8vVhsB",
"r1xpSgRjiS",
"H1gellRsjr",
"HJe_hkAsjS",
"SyxZV1AojB",
"rJlPACaijH",
"BJg1cbsTtB",
"BylouRApKH",
"HJecYd1RFS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the time to address these comments.\n\nThe closed loop control experiments are very helpful. I think it would be worth conducting a similar experiment in environments where the trajectory itself (not just the goal) needs to change over time, to avoid dynamics obstacles. This would show that ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"SyxZV1AojB",
"BylouRApKH",
"HJe_hkAsjS",
"HJecYd1RFS",
"BJg1cbsTtB",
"iclr_2020_Bkg5LgrYwS",
"iclr_2020_Bkg5LgrYwS",
"iclr_2020_Bkg5LgrYwS",
"iclr_2020_Bkg5LgrYwS"
] |
iclr_2020_BkljIlHtvS | Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning | Meta-learning methods, most notably Model-Agnostic Meta-Learning (Finn et al, 2017) or MAML, have achieved great success in adapting to new tasks quickly, after having been trained on similar tasks.
The mechanism behind their success, however, is poorly understood.
We begin this work with an experimental analysis of MAML, finding that deep models are crucial for its success, even given sets of simple tasks where a linear model would suffice on any individual task.
Furthermore, on image-recognition tasks, we find that the early layers of MAML-trained models learn task-invariant features, while later layers are used for adaptation, providing further evidence that these models require greater capacity than is strictly necessary for their individual tasks.
Following our findings, we propose a method which enables better use of model capacity at inference time by separating the adaptation aspect of meta-learning into parameters that are only used for adaptation but are not part of the forward model.
We find that our approach enables more effective meta-learning in smaller models, which are suitably sized for the individual tasks.
| reject | This paper presents a number of experiments involving the Model-Agnostic Meta-Learning (MAML) framework, both for the purpose of understanding its behavior and motivating specific enhancements. With respect to the former, the paper argues that deeper networks allow earlier layers to learn generic modeling features that can be adapted via later layers in a task-specific way. The paper then suggests that this implicit decomposition can be explicitly formulated via the use of meta-optimizers for handling adaptations, allowing for simpler networks that may not require generic modeling-specific layers.
At the end of the rebuttal and discussion phases, two reviewers chose rejection while one preferred acceptance. In this regard, as AC I did not find clear evidence that warranted overriding the reviewer majority, and consistent with some of the evaluations, I believe that there are several points whereby this paper could be improved.
More specifically, my feeling is that some of the conclusions of this paper would either already be expected by members of the community, or else would require further empirical support to draw more firm conclusions. For example, the fact that earlier layers encode more generic features that are not adapted for each task is not at all surprising (such low-level features are natural to be shared). Moreover, when the linear model from Section 3.2 is replaced by a deep linear network, clearly the model capacity is not changed, but the effective number of parameters which determine the gradient update will be significantly expanded in a seemingly non-trivial way. This is then likely to be of some benefit.
Consequently, one could naturally view the extra parameters as forming an implicit meta-optimizer, and it is not so remarkable that other trainable meta-optimizers might work well. Indeed cited references such as (Park & Oliva, 2019) have already applied explicit meta-optimizers to MAML and few-shot learning tasks. And based on Table 2, the proposed factorized meta-optimizer does not appear to show any clear advantage over the meta-curvature method from (Park & Oliva, 2019). Overall, either by using deeper networks or an explicit trainable meta-optimizer, there are going to be more adaptable parameters to exploit and so the expectation is that there will be room for improvement. Even so, I am not against the message of this paper. Rather it is just that for an empirically-based submission with close ties to existing work, the bar is generally a bit higher in terms of the quality and scope of the experiments.
As a final (lesser) point, the paper argues that meta-optimizers allow for the decomposition of modeling and adaptation as mentioned above; however, I did not see exactly where this claim was precisely corroborated empirically. For example, one useful test could be to recreate Figure 2 but with the meta-optimizer in place and a shallower network architecture. The expectation then might be that general features are no longer necessary. | test | [
"SJg63_1CYB",
"rygbTkp-5r",
"H1gp7puhjH",
"B1guno_3jB",
"HygiFsd3oH",
"S1lBNiOhiH",
"S1eNiqdnsH",
"HylCzquhoH",
"Byl46dOhjS",
"rygbXAUK9B",
"B1ecO9yx5H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public"
] | [
"This paper investigated the effect of depth on the meta-learning model. \nThe paper mainly studies through experimental means and does not have mathematical analysis to demonstrate. In this way of analysis, a large number of experiments are necessary. In addition to ensuring a large number of experiments, it is ... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1
] | [
"iclr_2020_BkljIlHtvS",
"iclr_2020_BkljIlHtvS",
"B1ecO9yx5H",
"HygiFsd3oH",
"rygbXAUK9B",
"S1eNiqdnsH",
"rygbTkp-5r",
"Byl46dOhjS",
"SJg63_1CYB",
"iclr_2020_BkljIlHtvS",
"iclr_2020_BkljIlHtvS"
] |
iclr_2020_B1liIlBKvS | Selfish Emergent Communication | Current literature in machine learning holds that unaligned, self-interested agents do not learn to use an emergent communication channel. We introduce a new sender-receiver game to study emergent communication for this spectrum of partially-competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially-competitive scenarios, and we discover three things that are tied to improving it. First, that selfish communication is proportional to cooperation, and it naturally occurs for situations that are more cooperative than competitive. Second, that stability and performance are improved by using LOLA (Foerster et al, 2018), especially in more competitive scenarios. And third, that discrete protocols lend themselves better to learning cooperative communication than continuous ones. | reject | There has been a long discussion on the paper, especially between the authors and the 2nd reviewer. While the authors' comments and paper modifications have improved the paper, the overall opinion on this paper is that it is below par in its current form. The main issue is that the significance of the results is insufficiently clear. While the sender-receiver game introduced is interesting, a more thorough investigation would improve the paper a lot (for example, by looking if theoretical statements can be made). | train | [
"HkleHtCd9S",
"SJgddF_hoS",
"HJliF6cniH",
"SklHlnc2ir",
"BJgC_BOnir",
"r1eSxdPjsr",
"SkgaB38jsH",
"H1lAfjIosH",
"rylU7YLiiB",
"Hklb-YSPir",
"BJlPGfe9iH",
"SkgUaRgFoS",
"Bklu1pgFiB",
"SygY0oetiB",
"SyxjGoHDoS",
"Skla0QSwsB",
"S1eiWjvIir",
"HJlic5wLoB",
"r1gZEZSwoB",
"BkezmBvUoB"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
... | [
"This paper looks at the question of emergent communication amongst self-interested learning agents. The paper finds that \"selfish\" (ie. self-interested) agents can learn to communicate using a cheap talk channel as long as the objective is partially cooperative. \nThe paper makes states that this is is a novel f... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_B1liIlBKvS",
"Hklb-YSPir",
"iclr_2020_B1liIlBKvS",
"Skla0QSwsB",
"H1lAfjIosH",
"SkgaB38jsH",
"SkgUaRgFoS",
"Bklu1pgFiB",
"SygY0oetiB",
"rJl-EdvIir",
"H1eW6F5J9S",
"r1gZEZSwoB",
"SyxjGoHDoS",
"SyxjGoHDoS",
"BygopOPLjr",
"H1gm5SDUiH",
"r1xb36iCFS",
"r1xb36iCFS",
"BkezmBv... |
iclr_2020_Byl28eBtwH | Learning Cluster Structured Sparsity by Reweighting | Recently, the paradigm of unfolding iterative algorithms into finite-length feed-forward neural networks has achieved great success in the area of sparse recovery. Benefiting from available training data, the learned networks have achieved state-of-the-art performance in terms of both speed and accuracy. However, the structure behind sparsity, which imposes constraints on the support of sparse signals, is often essential prior knowledge but is seldom considered in existing networks. In this paper, we aim at bridging this gap. Specifically, exploiting the iterative reweighted ℓ1 minimization (IRL1) algorithm, we propose to learn the cluster structured sparsity (CSS) by reweighting adaptively. In particular, we first unfold the Reweighted Iterative Shrinkage Algorithm (RwISTA) into an end-to-end trainable deep architecture termed RW-LISTA. Then, instead of element-wise reweighting, global and local reweighting schemes are proposed for cluster structured sparse learning. Numerical experiments further show the superiority of our algorithm against both classical algorithms and learning-based networks on different tasks. | reject | The paper is recommended for rejection based on the majority of the reviews. | train | [
"HygwxY8niB",
"rkgDrRBhoB",
"SylaFTH2sH",
"rylJ1Qhx9H",
"HyxE0iR-cS",
"SyxMwM1B5B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q1: \"Motivation for CSS is that there is structure in the recovered signal — however no comparison of the recovered structure is made. While it is true, that is the signal is perfectly recovered, it would follow the structure from this data was obtained, however no such guarantees can be made for non-zero errors.... | [
-1,
-1,
-1,
1,
6,
3
] | [
-1,
-1,
-1,
5,
1,
3
] | [
"SyxMwM1B5B",
"HyxE0iR-cS",
"rylJ1Qhx9H",
"iclr_2020_Byl28eBtwH",
"iclr_2020_Byl28eBtwH",
"iclr_2020_Byl28eBtwH"
] |
iclr_2020_BJxnIxSKDr | Mint: Matrix-Interleaving for Multi-Task Learning | Deep learning enables training of large and flexible function approximators from scratch at the cost of large amounts of data. Applications of neural networks often consider learning in the context of a single task. However, in many scenarios what we hope to learn is not just a single task, but a model that can be used to solve multiple different tasks. Such multi-task learning settings have the potential to improve data efficiency and generalization by sharing data and representations across tasks. However, in some challenging multi-task learning settings, particularly in reinforcement learning, it is very difficult to learn a single model that can solve all the tasks while realizing data efficiency and performance benefits. Learning each of the tasks independently from scratch can actually perform better in such settings, but it does not benefit from the representation sharing that multi-task learning can potentially provide. In this work, we develop an approach that endows a single model with the ability to represent both extremes: joint training and independent training. To this end, we introduce matrix-interleaving (Mint), a modification to standard neural network models that projects the activations for each task into a different learned subspace, represented by a per-task and per-layer matrix. By learning these matrices jointly with the other model parameters, the optimizer itself can decide how much to share representations between tasks. On three challenging multi-task supervised learning and reinforcement learning problems with varying degrees of shared task structure, we find that this model consistently matches or outperforms joint training and independent training, combining the best elements of both. | reject | Reviewers put this paper in the lower half and question the theoretical motivation and the experimental design. On the other hand, this seems like an alternative general framework for solving large-scale multi-task learning problems. In the future, I would encourage the authors to evaluate on multi-task benchmarks such as SuperGLUE, decaNLP and C4. Note: It seems there's more similarities with Ruder et al. (2019) [0] than the paper suggests.
[0] https://arxiv.org/abs/1705.08142 | train | [
"r1eSE7CioB",
"HkgASG0sjB",
"rkep9-0siB",
"HyxLMz0oir",
"HJxlJG0ior",
"BylK3m7RtH",
"rygCTmYRtr",
"rJxF5WxU9S",
"rJlCxtZ59S"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review! We have uploaded a revised version of the paper to address your feedback and concerns.\n\n1) Questions regarding tensor factorization-based approaches to multi-task learning.\nThank you for pointing out this related literature. We have added a discussion and cited all of these methods in... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
1,
5,
1
] | [
"BylK3m7RtH",
"rygCTmYRtr",
"rJlCxtZ59S",
"iclr_2020_BJxnIxSKDr",
"rJxF5WxU9S",
"iclr_2020_BJxnIxSKDr",
"iclr_2020_BJxnIxSKDr",
"iclr_2020_BJxnIxSKDr",
"iclr_2020_BJxnIxSKDr"
] |
iclr_2020_Bkl2UlrFwr | Iterative Deep Graph Learning for Graph Neural Networks | In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning graph structure and graph embedding simultaneously. We first cast the graph structure learning problem as a similarity metric learning problem and leverage an adapted graph regularization for controlling the smoothness, connectivity and sparsity of the generated graph. We further propose a novel iterative method for searching for a hidden graph structure that augments the initial graph structure. Our iterative method dynamically stops when the learned graph structure is sufficiently close to the ground truth graph. Our extensive experiments demonstrate that the proposed IDGL model can consistently outperform or match state-of-the-art baselines in terms of both classification accuracy and computational time. The proposed approach can cope with both transductive training and inductive training. | reject | The submission proposes a method for learning a graph structure and node embeddings through an iterative process. Smoothness and sparsity are both optimized in this approach. The iterative method has a stopping mechanism based on distance from a ground truth.
The concerns of the reviewers were about scalability and novelty. Since other methods have used the same costs for optimization, as well as other aspects of this approach, there is little contribution other than the iterative process. The improvement over LDS, the most similar approach, is relatively minor.
Although the paper is promising, more work is required to establish the contributions of the method. Recommendation is for rejection.
| test | [
"Bkx0HaPUor",
"HJxUglOIoB",
"SkxbT1dLsB",
"rJeFkAvIir",
"SJgD3hvUoB",
"S1xl73mpYB",
"rJlSbRu6tH",
"Bkxq-J6pFr",
"HJgORtYk_r"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewer for giving valuable feedback! However, there are some points of misunderstanding that we address in this rebuttal. \n\nWe emphasize at the outset that the main contribution of this work is the iterative learning of graph structures and graph node embeddings, which iteratively learn a better g... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
-1
] | [
"rJlSbRu6tH",
"Bkxq-J6pFr",
"Bkxq-J6pFr",
"S1xl73mpYB",
"iclr_2020_Bkl2UlrFwr",
"iclr_2020_Bkl2UlrFwr",
"iclr_2020_Bkl2UlrFwr",
"iclr_2020_Bkl2UlrFwr",
"iclr_2020_Bkl2UlrFwr"
] |
iclr_2020_HygTUxHKwH | Qgraph-bounded Q-learning: Stabilizing Model-Free Off-Policy Deep Reinforcement Learning | In state of the art model-free off-policy deep reinforcement learning (RL), a replay memory is used to store past experience and derive all network updates. Even if both state and action spaces are continuous, the replay memory only holds a finite number of transitions. We represent these transitions in a data graph and link its structure to soft divergence. By selecting a subgraph with a favorable structure, we construct a simple Markov Decision Process (MDP) for which exact Q-values can be computed efficiently as more data comes in - resulting in a Qgraph. We show that the Q-value for each transition in the simplified MDP is a lower bound of the Q-value for the same transition in the original continuous Q-learning problem. By using these lower bounds in TD learning, our method is less prone to soft divergence and exhibits increased sample efficiency while being more robust to hyperparameters. Qgraphs also retain information from transitions that have already been overwritten in the replay memory, which can decrease the algorithm's sensitivity to the replay memory capacity.
| reject | This paper proposes a method to reduce the instability issues of off-policy deep reinforcement learning. The proposed solution constructs a simple MDP from the experience in the agent's replay memory. This graph is used to compute a lower bound for the values from the original problem. Incorporating this bound can make the learning system less prone to soft divergence.
The reviewers appreciated the motivation of the paper and the direction of this research. However, the reviewers were not convinced that the formulation was sufficiently complete. There were concerns that the method makes additional assumptions about the data distribution (the presence of state aggregation and the absence of repeated states in continuous spaces). Reviewers found related work was missing. The reviewers also found multiple aspects of the presentation unclear even after the author response.
This paper is not ready for publication as the generality of the proposed method was not sufficiently clear to the reviewers after the author response. | train | [
"SkgJmBIIoH",
"rJe54QLUiS",
"ryeeYMIUiH",
"rJl9BeUUiB",
"HJgus3ATFH",
"rygpYvGTKr",
"HJg27xDRtH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for those comments. \nAs you suggested, we will of course re-iterate the paper to find typos and grammatical errors; add more details on related work, include more explicit pros/cons; explicitly add a list of our contributions (graph-perspective on the replay memory; thereby insights into different class... | [
-1,
-1,
-1,
-1,
6,
3,
1
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"HJg27xDRtH",
"rJl9BeUUiB",
"HJgus3ATFH",
"rygpYvGTKr",
"iclr_2020_HygTUxHKwH",
"iclr_2020_HygTUxHKwH",
"iclr_2020_HygTUxHKwH"
] |
iclr_2020_SyeRIgBYDB | Semi-Implicit Back Propagation | Neural networks have attracted great attention for a long time, and many researchers are devoted to improving the effectiveness of neural network training algorithms. Though stochastic gradient descent (SGD) and other explicit gradient-based methods are widely adopted, there are still many challenges such as gradient vanishing and small step sizes, which lead to slow convergence and instability of SGD algorithms. Motivated by error back propagation (BP) and proximal methods, we propose a semi-implicit back propagation method for neural network training. Similar to BP, the differences on the neurons are propagated in a backward fashion and the parameters are updated with a proximal mapping. The implicit update for both hidden neurons and parameters allows choosing large step sizes in the training algorithm. Finally, we also show that any fixed point of convergent sequences produced by this algorithm is a stationary point of the objective loss function. The experiments on both MNIST and CIFAR-10 demonstrate that the proposed semi-implicit BP algorithm leads to better performance in terms of both loss decrease and training/validation accuracy, compared to SGD and a similar algorithm, ProxBP. | reject | The reviewers unequivocally reject the paper, which is mostly experimental and the results of which are limited. The authors did not respond to the reviewers' comments. | val | [
"SyeIeeDnKH",
"rkg4bK5jYr",
"BJgOydfpFH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a novel algorithm for computing update directions for neural network's weights.\nThe algorithm consists of the modified backpropagation procedure where a layer's error is computed using implicitly-updated weights.\n\nThe proposed idea is interesting, but its presentation and evaluation could b... | [
1,
1,
3
] | [
4,
4,
4
] | [
"iclr_2020_SyeRIgBYDB",
"iclr_2020_SyeRIgBYDB",
"iclr_2020_SyeRIgBYDB"
] |
iclr_2020_HkgR8erKwB | PAC-Bayesian Neural Network Bounds | Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to `effortlessly' extract desired representations from many large-scale datasets. However, generalization bounds for this setting are still missing.
In this paper, we present a new PAC-Bayesian generalization bound for the negative log-likelihood loss which utilizes the \emph{Herbst Argument} for the log-Sobolev inequality to bound the moment generating function of the learner's risk. | reject | This paper proposes PAC-Bayesian bounds for the negative log-likelihood loss function. A few reviewers raised concerns around 1) distinguishing their contributions better from prior work (e.g., Alquier), and 2) confounders in their experiments. Both reviewers agreed that the paper, as it is written, does not provide sufficient evidence of significance. In addition, the experiments shown in the paper vary two things - # parameters (and therefore expressiveness and potential generalizability) and depth at each setting. As pointed out, this isn’t right - in order to capture the effect, one has to control for all confounders carefully. Another concern raised was around Theorem 2 - that it contains the data distribution on the right hand side, which isn’t all that useful for calculating generalization bounds (we don’t have access to the distribution). We highly encourage the authors to take another cycle of edits to better distinguish their work from others before future submissions.
| val | [
"Skxxg4nDqH",
"r1em1aSnsS",
"rkxF8d2tsB",
"r1llUSjHoS",
"rkg-AVjBiS",
"BJggBmjHsH",
"Bye8oCvAKr",
"rkgITCHrqS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper suggests a PAC-Bayesian bound for negative log-likelihood loss function. Many PAC-Bayesian bounds are provided for bounded loss functions but as authors point out, Alquier et al. (2016) and Germain et al. (2016) extend them to unbounded loss functions. I have two major concerns regarding this paper:\n\... | [
3,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_HkgR8erKwB",
"rkxF8d2tsB",
"BJggBmjHsH",
"Bye8oCvAKr",
"rkgITCHrqS",
"Skxxg4nDqH",
"iclr_2020_HkgR8erKwB",
"iclr_2020_HkgR8erKwB"
] |
iclr_2020_SkeJPertPS | Collaborative Training of Balanced Random Forests for Open Set Domain Adaptation | In this paper, we introduce a collaborative training algorithm of balanced random forests for domain adaptation tasks which can avoid the overfitting problem. In real scenarios, most domain adaptation algorithms face the challenges from noisy, insufficient training data. Moreover in open set categorization, unknown or misaligned source and target categories adds difficulty. In such cases, conventional methods suffer from overfitting and fail to successfully transfer the knowledge of the source to the target domain. To address these issues, the following two techniques are proposed. First, we introduce the optimized decision tree construction method, in which the data at each node are split into equal sizes while maximizing the information gain. Compared to the conventional random forests, it generates larger and more balanced decision trees due to the even-split constraint, which contributes to enhanced discrimination power and reduced overfitting. Second, to tackle the domain misalignment problem, we propose the domain alignment loss which penalizes uneven splits of the source and target domain data. By collaboratively optimizing the information gain of the labeled source data as well as the entropy of unlabeled target data distributions, the proposed CoBRF algorithm achieves significantly better performance than the state-of-the-art methods. The proposed algorithm is extensively evaluated in various experimental setups in challenging domain adaptation tasks with noisy and small training data as well as open set domain adaptation problems, for two backbone networks of AlexNet and ResNet-50. | reject | This paper proposes new target objectives for training random forests for better cross-domain generalizability.
As reviewers mentioned, I think the idea of using random forests for domain adaptation is novel and interesting, while the proposed method has potential especially in the noisy settings. However, I think the paper can be much improved and is not ready to publish due to the following reviewers' comments:
- This paper is not well-written and has too many unclear parts in the experiments and method section. The results are not guaranteed to be reproducible given the content of the paper. Also, the organization of the paper could be improved.
- The open-set domain adaptation setting requires more elaboration. More carefully designed experiments should be presented.
- It remains unclear how the feature extractors can be trained or fine-tuned in the DNN + tree architecture. Applying trees to high-dimensional features sacrifices the interpretability of the tree models, hampering the practical value of the approach.
Hence, I recommend rejection. | train | [
"B1gHvfE_oH",
"r1gYn-4diB",
"S1ecq-Ndir",
"H1gbR6xTtB",
"ryg6fy_ptB",
"S1l_sCFXcS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for helpful comments on our paper.\n\n1. We use noisy labels only for the ‘source’ domain in the experiments since we design this experiment to validate the robustness of the proposed algorithm. \nWe randomly change the original label to create a noisy setting. The specified portion of changed noise data... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"H1gbR6xTtB",
"ryg6fy_ptB",
"S1l_sCFXcS",
"iclr_2020_SkeJPertPS",
"iclr_2020_SkeJPertPS",
"iclr_2020_SkeJPertPS"
] |
iclr_2020_HJxkvlBtwH | Certifying Neural Network Audio Classifiers | We present the first end-to-end verifier of audio classifiers. Compared to existing methods, our approach enables analysis of both the entire audio processing stage and recurrent neural network architectures (e.g., LSTM). The audio processing is verified using novel convex relaxations tailored to feature extraction operations used in audio (e.g., Fast Fourier Transform) while recurrent architectures are certified via a novel binary relaxation for the recurrent unit update. We show the verifier scales to large networks while computing significantly tighter bounds than existing methods for common audio classification benchmarks: on the challenging Google Speech Commands dataset we certify 95% more inputs than the interval approximation (the only prior scalable method), for a perturbation of -90dB. | reject | The paper developed a log abstract transformer, a square abstract transformer, and a sigmoid-tanh abstract transformer to certify the robustness of neural network models for audio. The work is interesting but the scope is limited. It presented a neural network certification method for one particular type of audio classifier that uses MFCC as input features and LSTM as the neural network layers. This may thus be of limited interest to general readers.
The paper aims to present an end-to-end solution for audio classifiers. Investigating only one particular type of audio classifier is far from sufficient. As the reviewers pointed out, there is a large literature of work on systems using raw waveform inputs. Also, many state-of-the-art systems are HMM/DNN and attention-based encoder-decoder models. In terms of neural network models, ResNet-based models, transformer models, etc. are also important. A more thorough investigation/comparison would greatly enlarge the scope of this paper.
"rJlkTjrTFB",
"SklqlcwuiS",
"r1gpOKwOiB",
"Hye17tPOsB",
"SJeB2PvusB",
"S1lGzu0hYH",
"BJeL0admqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents an end-to-end neural network verifier that is specially designed for audio signal processing to certify the robustness of a system when facing noise perturbation. The approach is based on abstract transformers to deal with non-linearity in the audio signal processing pipeline and LSTM acoustic... | [
6,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_HJxkvlBtwH",
"S1lGzu0hYH",
"rJlkTjrTFB",
"BJeL0admqr",
"iclr_2020_HJxkvlBtwH",
"iclr_2020_HJxkvlBtwH",
"iclr_2020_HJxkvlBtwH"
] |
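
The rows above share a fixed layout: pipe-separated fields followed by parallel lists of comment writers, ratings, confidences, and reply targets, where -1 appears to be a placeholder for entries that carry no reviewer score (author and public comments). The Python sketch below is illustrative only and is not part of the dump; the `record` dict, its key names, and the `reviewer_scores` helper are hypothetical conveniences introduced here to show how one such row could be aggregated, with the values copied from the iclr_2020_HJxkvlBtwH record above.

```python
# Illustrative sketch (not part of the dataset): aggregating one row's ratings.
# Assumption: -1 marks unrated entries attached to author/public comments.
from statistics import mean

# Values copied from the iclr_2020_HJxkvlBtwH row above; the dict keys are
# labels chosen for this sketch, not names taken from the dump itself.
record = {
    "paper": "iclr_2020_HJxkvlBtwH",
    "decision": "reject",
    "writers": ["official_reviewer", "author", "author", "author",
                "author", "official_reviewer", "official_reviewer"],
    "ratings": [6, -1, -1, -1, -1, 1, 3],
    "confidences": [3, -1, -1, -1, -1, 3, 3],
}

def reviewer_scores(rec):
    """Return (rating, confidence) pairs for official reviewers only,
    dropping the -1 placeholders that mark unrated entries."""
    return [
        (r, c)
        for w, r, c in zip(rec["writers"], rec["ratings"], rec["confidences"])
        if w == "official_reviewer" and r != -1
    ]

pairs = reviewer_scores(record)
ratings = [r for r, _ in pairs]
print(record["paper"], record["decision"],
      "mean reviewer rating:", round(mean(ratings), 2))  # -> 3.33
```

Filtering on both the writer type and the -1 placeholder keeps the aggregation robust even if a score were missing for an official reviewer entry.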