paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_BkgBvsC9FQ | DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder | Variational autoencoders (VAEs) have shown a promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., single-modal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses. | accepted-poster-papers | This paper tackles the task of end-to-end systems for dialogue generation and proposes a novel, improved GAN for dialogue modeling, which adopts a conditional Wasserstein Auto-Encoder to learn high-level representations of responses. In experiments, the proposed approach is compared to several state-of-the-art baselines on two dialog datasets, and improvements are shown both in terms of objective measures and human evaluation, providing strong support for the proposed approach.
Two reviewers suggest similarities with a recent ICML paper on ARAE and request including reference to it and also request examples demonstrating differences, which are included in the latest version of the paper. | train | [
"rJextSEHx4",
"ryenU7v_am",
"BJlpvEDuTQ",
"BkxsExnM6X",
"HyghGy8-TQ",
"r1xEp0mqnm",
"ryxFd-TK2Q"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate all reviewers for constructive feedback and comments for the improvement of the paper.\nWe have revised our paper according to the comments and replied to all reviewers.\nBefore the final decision, we will explain any of your questions.",
"We thank the reviewer for taking the time to read our paper... | [
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2019_BkgBvsC9FQ",
"r1xEp0mqnm",
"HyghGy8-TQ",
"ryxFd-TK2Q",
"iclr_2019_BkgBvsC9FQ",
"iclr_2019_BkgBvsC9FQ",
"iclr_2019_BkgBvsC9FQ"
] |
iclr_2019_BkgPajAcY7 | No Training Required: Exploring Random Encoders for Sentence Classification | We explore various methods for computing sentence representations from pre-trained word embeddings without any training, i.e., using nothing but random parameterizations. Our aim is to put sentence embeddings on more solid footing by 1) looking at how much modern sentence embeddings gain over random methods---as it turns out, surprisingly little; and by 2) providing the field with more appropriate baselines going forward---which are, as it turns out, quite strong. We also make important observations about proper experimental protocol for sentence classification evaluation, together with recommendations for future research. | accepted-poster-papers | This paper provides a new family of untrained/randomly initialized sentence encoder baselines for a standard suite of NLP evaluation tasks, and shows that it does surprisingly well—very close to widely-used methods for some of the tasks. All three reviewers acknowledge that this is a substantial contribution, and none see any major errors or fatal flaws.
One reviewer had initially argued the experiments and discussion are not as thorough as would be typical for a strong paper. In particular, the results are focused on a single set of word embeddings and a narrow class of architectures. I'm sympathetic to this concern, but since there don't seem to be any outstanding concerns about the correctness of the paper, and since the other reviewers see the contribution as quite important, I recommend acceptance. [Update: This reviewer has since revised their review to make it more positive.]
(As a nit, I'd ask the authors to ensure that the final version of the paper fits within the margins.) | train | [
"r1glPhYWTX",
"H1e7fT0K0X",
"HyxhXnAY0X",
"rygDbIX90m",
"Syx4KoAYCQ",
"rJxtPTCtCX",
"r1eGVpCKRm",
"HygtcSqw6m",
"SyeP-oxUa7",
"SyxAjuu9h7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes that randomly encoding a sentence using a set of pretrained word embeddings is almost as good as using a trained encoder with the same embeddings. This is shown through a variety of tasks where certain tasks perform well with a random encoder and certain ones don't.\n\nThe paper is well written... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_BkgPajAcY7",
"r1glPhYWTX",
"SyeP-oxUa7",
"HygtcSqw6m",
"iclr_2019_BkgPajAcY7",
"SyxAjuu9h7",
"H1e7fT0K0X",
"iclr_2019_BkgPajAcY7",
"iclr_2019_BkgPajAcY7",
"iclr_2019_BkgPajAcY7"
] |
iclr_2019_BkgWHnR5tm | Neural Graph Evolution: Towards Efficient Automatic Robot Design | Despite the recent successes in robotic locomotion control, the design of robot relies heavily on human engineering. Automatic robot design has been a long studied subject, but the recent progress has been slowed due to the large combinatorial search space and the difficulty in evaluating the found candidates. To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space. We propose Neural Graph Evolution (NGE), which performs selection on current candidates and evolves new ones iteratively. Different from previous approaches, NGE uses graph neural networks to parameterize the control policies, which reduces evaluation cost on new candidates with the help of skill transfer from previously evaluated designs. In addition, NGE applies Graph Mutation with Uncertainty (GM-UC) by incorporating model uncertainty, which reduces the search space by balancing exploration and exploitation. We show that NGE significantly outperforms previous methods by an order of magnitude. As shown in experiments, NGE is the first algorithm that can automatically discover kinematically preferred robotic graph structures, such as a fish with two symmetrical flat side-fins and a tail, or a cheetah with athletic front and back legs. Instead of using thousands of cores for weeks, NGE efficiently solves searching problem within a day on a single 64 CPU-core Amazon EC2
machine.
| accepted-poster-papers | Lean in favor
Strengths: The paper tackles the difficult problem of automatic robot design. The approach uses graph neural
networks to parameterize the control policies, which allows for weight sharing / transfer to new policies even
as the topology changes. Understanding how to efficiently explore through non-differentiable changes to the body
is an important problem (AC). The authors will release the code and environments, which will be useful in an area where there are
currently no good baselines (AC).
Weaknesses: There are concerns (particularly R2, R1) over the lack of a strong baseline, and with the results
being demonstrated on a limited number of environments (R1) (fish, 2D walker). In response, the authors clarified the nomenclature and
description of a number of the baselines, and added others. AC: there is no submitted video (a search for "video" in the PDF text
produces no hits); this is seen by the AC as a real limitation from the perspective of evaluation.
AC agrees with some of the reviewer remarks that some of the original stated claims are too strong.
AC: the simplified fluid model of Mujoco (http://mujoco.org/book/computation.html#gePassive) is
unable to model the fluid state, in particular the induced fluid vortices that are responsible for a
good portion of fish locomotion, i.e., "Passive and active flow control by swimming fishes and
mammals" and other papers. Acknowledging this kind of limitation will make the paper stronger, not weaker;
the ML community can learn from much existing work at the interface of biology and fluid mechanics.
There remain points of contention, i.e., the sufficiency of the baselines. However, the reviewers R2 and R3 have
not responded to the detailed replies from the authors, including additional baselines (totaling 5 at present)
and pointing out that baselines such as CMA-ES (R2) operate in a continuous space and therefore do not translate in any obvious way
to the given problem at hand.
On balance, with the additional baselines and related clarifications, the AC feels that this paper makes a
useful and valid contribution to the field, and will help establish a benchmark in an important area.
The authors are strongly encouraged to further state caveats and limitations, and to emphasize why some
candidate baseline methods are not readily applicable.
| train | [
"Bkl1xB-3JE",
"B1xqGNaoyV",
"SJgfTweYk4",
"r1l6OwV-AQ",
"r1lAJc6xRm",
"rkeEvt6xA7",
"BkeBMdpg07",
"S1x7SdplAX",
"SygmVm7XpQ",
"HJg1qgpZTm",
"rJxUPOqhhX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We respect the reviewer's opinion and thanks again for the response. But still, we disagree with the claim that the experiment part is weak.\n\nIn terms of the quality of baselines, we already include 5 comparing baselines including previous state-of-the-art. And NGE has the best performance and efficiency by a la... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"SJgfTweYk4",
"S1x7SdplAX",
"BkeBMdpg07",
"iclr_2019_BkgWHnR5tm",
"iclr_2019_BkgWHnR5tm",
"SygmVm7XpQ",
"rJxUPOqhhX",
"HJg1qgpZTm",
"iclr_2019_BkgWHnR5tm",
"iclr_2019_BkgWHnR5tm",
"iclr_2019_BkgWHnR5tm"
] |
iclr_2019_BkgtDsCcKQ | Function Space Particle Optimization for Bayesian Neural Networks | While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to the high-dimensional and over-parameterized nature. To address this issue, several highly flexible and scalable variational inference procedures based on the idea of particle optimization have been proposed. These methods directly optimize a set of particles to approximate the target posterior. However, their application to BNNs often yields sub-optimal performance, as such methods have a particular failure mode on over-parameterized models. In this paper, we propose to solve this issue by performing particle optimization directly in the space of regression functions. We demonstrate through extensive experiments that our method successfully overcomes this issue, and outperforms strong baselines in a variety of tasks including prediction, defense against adversarial examples, and reinforcement learning. | accepted-poster-papers | Reviewers are in consensus and recommended acceptance after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready. | train | [
"SJgtpDLf1E",
"HJlcUPIfkV",
"ByeUSNVqCX",
"Hyx6dqfunQ",
"HyloUTy5nQ",
"HJlsKYptRX",
"SylRwHjQAm",
"SJlJ9HiQCQ",
"Hyxehfi70Q",
"H1lfE7o7Cm",
"BJeUUXsQ0X",
"SklEAZjXR7",
"rJepoAru2m"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thanks for acknowledging our contribution and revising the rating.",
"Thank you for acknowledging our contributions, revising the ratings and the nice comments. We appreciate that. Below, we briefly address the extra questions.\n\nQ1: Regarding the impact of B' on performance, and values we have used in the expe... | [
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"ByeUSNVqCX",
"HJlsKYptRX",
"SJlJ9HiQCQ",
"iclr_2019_BkgtDsCcKQ",
"iclr_2019_BkgtDsCcKQ",
"BJeUUXsQ0X",
"Hyx6dqfunQ",
"SylRwHjQAm",
"rJepoAru2m",
"HyloUTy5nQ",
"H1lfE7o7Cm",
"iclr_2019_BkgtDsCcKQ",
"iclr_2019_BkgtDsCcKQ"
] |
iclr_2019_BkgzniCqY7 | Structured Adversarial Attack: Towards General Implementation and Better Interpretability | When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbation by sliding a mask through images aiming for extracting key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp-norm distortion (p∈ {1,2,∞}) as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results on MNIST, CIFAR-10 and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through adversarial saliency map (Papernot et al., 2016b) and class activation map (Zhou et al., 2016). | accepted-poster-papers | This paper contributes a novel approach to evaluating the robustness of DNNs based on structured sparsity to exploit the underlying structure of the image and introduces a method to solve it. The proposed approach is well evaluated and the authors answered the main concerns of the reviewers. | train | [
"ByxVIQD2yE",
"SJgf67ndCX",
"BygM1K4c37",
"HkeSkuLdRQ",
"S1x4tLIOR7",
"SyerbE8uCX",
"r1gjPN8uRm",
"ByxX-z6E67",
"B1xz04MWaX",
"rkl0Xux-aX",
"HJl2S2JA3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"Thanks for incorporating feedback, the additional related work section is helpful and provided better context for this work.",
"Thank you for revising your paper, the new version seems to be more clear to me in terms if the positioning of your work. I have bumped up the numerical score to 6 in my review.",
"Th... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
7
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
2,
-1,
3
] | [
"SyerbE8uCX",
"HkeSkuLdRQ",
"iclr_2019_BkgzniCqY7",
"iclr_2019_BkgzniCqY7",
"ByxX-z6E67",
"B1xz04MWaX",
"HJl2S2JA3X",
"rkl0Xux-aX",
"iclr_2019_BkgzniCqY7",
"BygM1K4c37",
"iclr_2019_BkgzniCqY7"
] |
iclr_2019_Bkl-43C9FQ | Spherical CNNs on Unstructured Grids | We present an efficient convolution kernel for Convolutional Neural Networks (CNNs) on unstructured grids using parameterized differential operators while focusing on spherical signals such as panorama images or planetary signals.
To this end, we replace conventional convolution kernels with linear combinations of differential operators that are weighted by learnable parameters. Differential operators can be efficiently estimated on unstructured grids using one-ring neighbors, and learnable parameters can be optimized through standard back-propagation. As a result, we obtain extremely efficient neural networks that match or outperform state-of-the-art network architectures in terms of performance but with a significantly lower number of network parameters. We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. Overall, we present (1) a novel CNN approach on unstructured grids using parameterized differential operators for spherical signals, and (2) we show that our unique kernel parameterization allows our model to achieve the same or higher accuracy with significantly fewer network parameters. | accepted-poster-papers | The paper presents a simple and effective convolution kernel for CNNs on spherical data (convolution by a linear combination of differential operators). The proposed method is efficient in the number of parameters and achieves strong classification and segmentation performance in several benchmarks. The paper is generally well written but the authors should clarify the details and address reviewer comments (for example, clarity/notations of equations) in the revision.
| test | [
"r1lhQZeq37",
"SJeO8PlFAX",
"SygAhSlF0Q",
"rJgo4QetAQ",
"H1gjxtiv6X",
"rkxqEcLqsX",
"HklBQtGu9m",
"B1xhqVz_qQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This article introduces a simple yet efficient method that enables deep learning on spherical data (or 3D mesh projected onto a spherical surface), with much less parameters than the popular approaches, and also a good alternative to the regular correlation based models.\n\nInstead of running patches of spherical ... | [
7,
-1,
-1,
-1,
6,
7,
-1,
-1
] | [
3,
-1,
-1,
-1,
3,
5,
-1,
-1
] | [
"iclr_2019_Bkl-43C9FQ",
"rkxqEcLqsX",
"r1lhQZeq37",
"H1gjxtiv6X",
"iclr_2019_Bkl-43C9FQ",
"iclr_2019_Bkl-43C9FQ",
"B1xhqVz_qQ",
"iclr_2019_Bkl-43C9FQ"
] |
iclr_2019_BklCusRct7 | Optimal Transport Maps For Distribution Preserving Operations on Latent Spaces of Generative Models | Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. However, the latent space operations commonly used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. Previous works have attempted to reduce this mismatch with heuristic modification to the operations or by changing the latent distribution and re-training models. In this paper, we propose a framework for modifying the latent space operations such that the distribution mismatch is fully eliminated. Our approach is based on optimal transport maps, which adapt the latent space operations such that they fully match the prior distribution, while minimally modifying the original operation. Our matched operations are readily obtained for the commonly used operations and distributions and require no adjustment to the training procedure. | accepted-poster-papers | This is a well-written paper that shows how to use optimal transport to perform smooth interpolation, between two random vectors sampled from the prior distribution of the latent space of a deep generative model. By encouraging the marginal of the interpolated vector to match the prior distribution, these interpolated distribution-preserving random vectors in the latent space are shown to result in better image interpolation quality for GANs. 
The problem is of interest to the community and the resulting solutions are simple to implement.
As pointed out by Reviewer 1, the paper could be made clearly more convincing by showing that these distribution preservation operations also help perform interpolation in the latent space of VAEs, and the AC strongly encourages the authors to add these results if possible.
The AC appreciates that the authors have added experiments to satisfactorily address his/her concern:
"Suppose z_1,z_2 are independent, and drawn from N(\mu,\Sigma), then t z_1 + (1-t)z_2 ~ N(\mu, (t^2+(1-t)^2)\Sigma). If one lets y | z_1, z_2 ~ N(t z_1 + (1-t)z_2, (1-t^2-(1-t)^2)\Sigma) as the latent space interpolation, then marginally we have y ~ N(\mu, \Sigma). This is an extremely simple and fast procedure to make sure that the latent space interpolation y is highly related to the linear interpolation t z_1 + (1-t)z_2 but also satisfies y ~ N(\mu, \Sigma)."
The AC strongly encourages the authors to add these new results into their revision, and highlight "smooth interpolation" as an important characteristic in addition to "distribution preserving." A potential suggestion is changing "Distribution Preserving Operations" in the title to "Distribution Preserving Smooth Operations."
| val | [
"HklrDZXoCX",
"SylZU8FPnQ",
"B1e-bBgiCX",
"HJlwcxljR7",
"SklDZf7C2Q",
"rJe1NJN6hQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback.\n\nWe argue that just because latent space operations do not help with GAN training, it does not mean they are not useful. Just as the reviewer suggests, they provide insights into how the trained generator works. For example, interpolations are one of the most intuitive way... | [
-1,
5,
-1,
-1,
7,
5
] | [
-1,
3,
-1,
-1,
5,
3
] | [
"SylZU8FPnQ",
"iclr_2019_BklCusRct7",
"rJe1NJN6hQ",
"SklDZf7C2Q",
"iclr_2019_BklCusRct7",
"iclr_2019_BklCusRct7"
] |
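The Gaussian interpolation identity quoted in the BklCusRct7 meta-review above can be checked numerically. The sketch below is illustrative only (the dimension, seed, and variable names are my own, not part of the dataset): for independent z1, z2 ~ N(mu, Sigma) and y | z1, z2 ~ N(t*z1 + (1-t)*z2, (1 - t^2 - (1-t)^2)*Sigma), the marginal of y is again N(mu, Sigma), so the interpolation is distribution preserving.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t = 2, 200_000, 0.3
mu = np.array([1.0, -2.0])
A = np.array([[1.0, 0.5], [0.0, 0.8]])   # Sigma = A @ A.T
Sigma = A @ A.T

# Independent prior samples z1, z2 ~ N(mu, Sigma)
z1 = mu + rng.standard_normal((n, d)) @ A.T
z2 = mu + rng.standard_normal((n, d)) @ A.T

# Linear interpolation has shrunken covariance (t^2 + (1-t)^2) * Sigma ...
mean = t * z1 + (1 - t) * z2
# ... so add back exactly the missing variance, c * Sigma with c > 0 for t in (0, 1)
c = 1.0 - t**2 - (1 - t)**2
y = mean + np.sqrt(c) * (rng.standard_normal((n, d)) @ A.T)

# Empirical moments of y should match the prior N(mu, Sigma)
print(np.round(y.mean(axis=0), 2))
print(np.round(np.cov(y.T), 2))
```

Note that y stays close to the plain linear interpolation (its conditional mean) while its marginal matches the prior, which is the property the AC highlights.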
iclr_2019_BklHpjCqKm | Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning | Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system.
Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.
The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time. | accepted-poster-papers | The paper looks at a novel form of physics-constrained system identification for a multi-link robot,
although it could also be applied more generally. The contribution is in many ways simple; this is seen
in a good light (R1, R3) or more modestly (R2). R3 notes surprise that this hasn't been done before.
Results are demonstrated on a simulated 2-dof robot and a real Barrett WAM arm, outperforming a pure
neural network modeling approach, PID control, and an analytic model.
Some aspects of the writing needed to be addressed, i.e., PDE vs ODE notations.
The point of biggest concern is related to positioning the work relative to other system-identification
literature, where there has been an abundance of work in the robotics and control literature.
There is no final consensus on this point for R3; R3 did not receive the email notification of the author's detailed reply,
and notes that the authors have clarified some aspects, but still has concerns, and did not have time to
provide further feedback on short notice.
In balance, the AC believes that this kind of constrained learning of models is underexplored, and
notes that the reviewers (who have considerable shared expertise in robotics-related work) believe
that this is a step in the right direction and that it is surprising this type of approach has not
been investigated yet. The authors have further reconciled their work with earlier sys-ID work, and
can further describe how their work is situated with respect to prior art in sys-ID (as they do in
their discussion comments). The AC recommends that: (a) the abstract explicitly mention "system
identification" as a relevant context for the work in this paper, given that the ML audience should
be (or can be) made aware of this terminology; and (b) push more of the math related to the
development of the necessary derivatives to an appendix, given that the particular use of the
derivations seems to be more in support of obtaining the performance necessary for online use,
rather than something that cannot be accomplished with autodiff.
| train | [
"HkejIJNp14",
"HJe9uxpo1N",
"H1lsBkAKJE",
"HJg1lShYk4",
"SygM8NcFRm",
"BJgEjeIxR7",
"SJe1dTrlCm",
"S1xsD_SxA7",
"BJlOB02JRm",
"SJxMlEq-pm",
"H1exdKrp27",
"S1xd1wOo2X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Once we can update the paper, we will make this statement clearer and include the modelling as flexible joint, i.e., as two joint coupled by a massless spring. Furthermore, we will also include that this is not possible with the Barrett WAM as one cannot sense the motor positions. \n\nThanks for bringing this to o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"HJe9uxpo1N",
"HJg1lShYk4",
"HJg1lShYk4",
"iclr_2019_BklHpjCqKm",
"BJgEjeIxR7",
"H1exdKrp27",
"S1xd1wOo2X",
"SJxMlEq-pm",
"iclr_2019_BklHpjCqKm",
"iclr_2019_BklHpjCqKm",
"iclr_2019_BklHpjCqKm",
"iclr_2019_BklHpjCqKm"
] |
iclr_2019_BklMjsRqY7 | Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks | Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence. The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices. This imposes an upper-bound on the reduction of complexity of multiply-accumulate units. We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training. Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation. We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet. In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline. We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds. Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems. | accepted-poster-papers | The authors present a theoretical and practical study on low-precision training of neural networks. They introduce the notion of variance retention ratio (VRR) that determines the accumulation bit-width for
precise tailoring of computation hardware. Empirically, the authors show that their theoretical result extends to practical implementation in three standard benchmarks.
One criticism of the paper concerned certain hyperparameters that a reviewer found to be chosen rather arbitrarily, but I think the authors do a reasonable job of rebutting it.
Overall, there is consensus that the paper presents an interesting framework and does both practical and empirical analysis, and it should be accepted. | train | [
"SJeEW_UM37",
"HygrtjgXAQ",
"SyesPogQ0m",
"B1lLrixm0m",
"SyxIBxbtam",
"ByxaQebt6Q",
"BJgdyxbYpQ",
"r1xM6yWFpX",
"HJgqZjSA2Q",
"H1lrdaUqnm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"There has been a lot of work on limited precision training and inference for deep learning hardware, but in most of this work, the accumulators for the multiply-and-add (FMA) operations that occur for inner products are chosen conservatively or treated as having unlimited precision. The authors address this with ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_BklMjsRqY7",
"SJeEW_UM37",
"H1lrdaUqnm",
"HJgqZjSA2Q",
"ByxaQebt6Q",
"SJeEW_UM37",
"H1lrdaUqnm",
"HJgqZjSA2Q",
"iclr_2019_BklMjsRqY7",
"iclr_2019_BklMjsRqY7"
] |
iclr_2019_Bklfsi0cKm | Deep Convolutional Networks as shallow Gaussian Processes | We show that the output of a (residual) CNN with an appropriate prior over the weights and biases is a GP in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GP with a comparable number of parameters. | accepted-poster-papers | This paper builds on a promising line of literature developing connections between Gaussian processes and deep neural networks. Viewing one model under the lens of (the infinite limit of) another can lead to neat new insights and algorithms. In this case the authors develop a connection between convolutional networks and Gaussian processes with a particular kind of kernel. The reviews were quite mixed with one champion and two just below borderline.
The reviewers all believed the paper had contributions which would be interesting to the community (such as R1: "the paper presents a novel efficient way to compute the convolutional kernel, which I believe has merits on its own" and R2: "I really like the idea of authors that kernels based on convolutional networks might be more practical compared to the ones based on fully connected networks"). All the reviewers found the contribution of the covariance function to be novel and exciting.
Some cited weaknesses of the paper were the lack of analysis of the model's uncertainty (arguably the reason for adopting a Bayesian treatment), limited novelty in appealing to the central limit theorem to arrive at the connection, and the scalability of the model.
In the review process it also became apparent that there was another paper with a substantially similar contribution. The decision for this paper was calibrated accordingly with that work.
Weighing the strengths and weaknesses of the paper and taking into account a reviewer willing to champion the work it seems there is enough novel contribution and interest in the work to justify acceptance.
The authors provided responses to the reviewer concerns, including calibration plots and timing experiments, during the discussion period, and it would be appreciated if these could be incorporated into the camera-ready version. | train | [
"BJxop09FRQ",
"ryeUF0qtRX",
"HJl0JC9F0X",
"BklBFxQkT7",
"BkxW0nej3X",
"HJxvPo8t3m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you very much for your comments.\n\nTo check the quality of the uncertainties produced by our method, we used GPflow to perform the multi-class classification problem, on the full dataset, with a RobustMax likelihood. To ensure tractability in the harder case with a non-conjugate likelihood, we were forced t... | [
-1,
-1,
-1,
5,
8,
5
] | [
-1,
-1,
-1,
5,
3,
4
] | [
"HJxvPo8t3m",
"BkxW0nej3X",
"BklBFxQkT7",
"iclr_2019_Bklfsi0cKm",
"iclr_2019_Bklfsi0cKm",
"iclr_2019_Bklfsi0cKm"
] |
iclr_2019_BklhAj09K7 | Unsupervised Domain Adaptation for Distance Metric Learning | Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain. However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space. This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated. Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one. To handle both within and cross domain verifications, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligned with a transformed source space. Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain. In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.
 | accepted-poster-papers | This paper proposes a new solution for tackling domain adaptation across disjoint label spaces. Two of the reviewers agree that the main technical approach is interesting and novel. The final reviewer asked for clarification of the problem setting, which the authors have provided in their rebuttal. We encourage the authors to include this in the final version. However, there is also a consensus that more experimental evaluation would improve the manuscript and that complete experimental details are needed for reliable reproduction. | train | [
"SJei8ZU81E",
"BklPeLVUyN",
"HJx8mmk7Am",
"Sy86MymAm",
"rylttGymA7",
"BJecfeN52Q",
"HylVtcW53Q",
"ByemonjVnm"
] | [
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Hi Hui-Po, \n\nThanks for your comment.\n\nAs you mentioned, the conventional domain adaptation problems assume the same \"task\" between the source and the target domains and this allows to transfer discriminative knowledge (e.g., classifier) learned from the source domain to the target domain. On the other hand,... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"BklPeLVUyN",
"iclr_2019_BklhAj09K7",
"HylVtcW53Q",
"ByemonjVnm",
"BJecfeN52Q",
"iclr_2019_BklhAj09K7",
"iclr_2019_BklhAj09K7",
"iclr_2019_BklhAj09K7"
] |
iclr_2019_BkloRs0qK7 | A comprehensive, application-oriented study of catastrophic forgetting in DNNs | We present a large-scale empirical study of catastrophic forgetting (CF) in modern Deep Neural Network (DNN) models that perform sequential (or: incremental) learning.
A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.
As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks (SLTs) in close alignment to previous works on CF.
Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions. We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models. | accepted-poster-papers | This paper has two main contributions. The first is that it proposes a specific framework for measuring catastrophic forgetting in deep neural networks that incorporates three application-oriented constraints: (1) a low memory footprint, which implies that data from prior tasks cannot be retained; (2) causality, meaning that data from future tasks cannot be used in any way, including hyperparameter optimization and model selection; and (3) update complexity for new tasks that is moderate and also independent of the number of previously learned tasks, which precludes replay strategies. The second contribution is an extensive study of catastrophic forgetting, using different sequential learning tasks derived from 9 different datasets and examining 7 different models. The key conclusions from the study are that (1) permutation-based tasks are comparatively easy and should not be relied on to measure catastrophic forgetting; (2) with the application-oriented constraints in effect, all of the examined models suffer from catastrophic forgetting (a result that is contrary to a number of other recent papers); (3) elastic weight consolidation provides some protection against catastrophic forgetting for simple sequential learning tasks, but fails for more complex tasks; and (4) IMM is effective, but only if causality is violated in the selection of the IMM balancing parameter. The reviewer scores place this paper close to the decision boundary. The most negative reviewer (R2) had concerns about the novelty of the framework and its application-oriented constraints. 
The authors contend that recent papers on catastrophic forgetting fail to apply these quite natural constraints, leading to the deceptive conclusion that catastrophic forgetting may not be as big a problem as it once was. The AC read a number of the papers mentioned by the authors and agrees with them: these constraints have been, at least at times, ignored in the literature, and they shouldn't be ignored. The other two reviewers appreciated the scope and rigor of the empirical study. On balance, the AC thinks this is an important contribution and that it should appear at ICLR. | train | [
"B1ljgG7inm",
"H1lXXQf5n7",
"ryereK-TnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the updates and rebuttals from the authors. \n\nI now think including the results for HAT may not be essential for the current version of the paper. I now understand better about the main point of the paper - providing a different setting for evaluating algorithms for combatting CF, and it seems the wid... | [
6,
7,
5
] | [
5,
3,
4
] | [
"iclr_2019_BkloRs0qK7",
"iclr_2019_BkloRs0qK7",
"iclr_2019_BkloRs0qK7"
] |
iclr_2019_BkltNhC9FX | Posterior Attention Models for Sequence to Sequence Learning | Modern neural architectures critically rely on attention for mapping structured inputs to sequences. In this paper we show that prevalent attention architectures do not adequately model the dependence among the attention and output tokens across a predicted sequence.
We present an alternative architecture called Posterior Attention Models that, after a principled factorization of the full joint distribution of the attention and output variables, proposes two major changes. First, the position where attention is marginalized is changed from the input to the output. Second, the attention propagated to the next decoding stage is a posterior attention distribution conditioned on the output. Empirically on five translation and two morphological inflection tasks the proposed posterior attention models yield better BLEU score and alignment accuracy than existing attention models. | accepted-poster-papers | The reviewers of this paper agreed that it has done a stellar job of presenting a novel and principled approach to attention as a latent variable, providing a new and sound set of inference techniques to this end. This builds on top of a discussion of the limitations of existing deterministic approaches to attention, and frames the contribution well in relation to other recurrent and stochastic approaches to attention. While there are a few issues with clarity surrounding some aspects of the proposed method, which the authors are encouraged to fine-tune in their final version, paying careful attention to the review comments, this paper is more or less ready for publication with a few tweaks. It makes a clear, significant, and well-evaluated contribution to the field of attention models in sequence to sequence architectures, and will be of great interest to many attendees at ICLR. | train | [
"HkeNhpWNyE",
"H1lZgtGmyN",
"HkeaY7QqR7",
"BJe8NreK2m",
"B1l0ox750X",
"ByebxyX9C7",
"r1lWSVVj0Q",
"r1xb9bQ9AX",
"H1lBWBQcCQ",
"SylcXSSt6m",
"HyezVpjwpm",
"rJlU-8qJTX",
"B1xBIw_93X",
"HklfYhUe2m"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the suggestion. We will take this into account and contextualize better in the next draft.",
"I thank the authors for improving the clarity of the model derivation and updating the paper to mention related work and alternative derivations. I agree that the author's formulation provides novel and inter... | [
-1,
-1,
-1,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"H1lZgtGmyN",
"B1l0ox750X",
"HklfYhUe2m",
"iclr_2019_BkltNhC9FX",
"B1xBIw_93X",
"HyezVpjwpm",
"ByebxyX9C7",
"BJe8NreK2m",
"SylcXSSt6m",
"iclr_2019_BkltNhC9FX",
"rJlU-8qJTX",
"iclr_2019_BkltNhC9FX",
"iclr_2019_BkltNhC9FX",
"iclr_2019_BkltNhC9FX"
] |
iclr_2019_Bkx0RjA9tX | Generative Question Answering: Learning to Answer the Whole Question | Discriminative question answering models can overfit to superficial biases in datasets, because their loss function saturates when any clue makes the answer likely. We introduce generative models of the joint distribution of questions and answers, which are trained to explain the whole question, not just to answer it. Our question answering (QA) model is implemented by learning a prior over answers, and a conditional language model to generate the question given the answer—allowing scalable and interpretable many-hop reasoning as the question is generated word-by-word. Our model achieves competitive performance with specialised discriminative models on the SQUAD and CLEVR benchmarks, indicating that it is a more general architecture for language understanding and reasoning than previous work. The model greatly improves generalisation both from biased training data and to adversarial testing data, achieving a new state-of-the-art on ADVERSARIAL SQUAD. We will release our code. | accepted-poster-papers | All reviewers recommend accept.
Discussion can be consulted below. | train | [
"ByeMpnB_CQ",
"rJg7OxYc3X",
"HJe5i_MDRX",
"rkehP_GP0m",
"SklmEufv0Q",
"rJxuRDGPCQ",
"rkllvsQsTX",
"HyxAZ3Q82m",
"ryeIrWQch7",
"H1xkn3XCjm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"When I saw this description, I thought you were comparing against Clark and Gardner 2018 (https://arxiv.org/abs/1710.10723; DocQA). I hadn't seen Weaver before, and I was surprised there there hasn't been a comparison between Weaver and DocQA (so I'm not actually sure which is better). DocQA only requires traini... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
8,
6,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1
] | [
"rJxuRDGPCQ",
"iclr_2019_Bkx0RjA9tX",
"HyxAZ3Q82m",
"ryeIrWQch7",
"rJg7OxYc3X",
"iclr_2019_Bkx0RjA9tX",
"iclr_2019_Bkx0RjA9tX",
"iclr_2019_Bkx0RjA9tX",
"iclr_2019_Bkx0RjA9tX",
"iclr_2019_Bkx0RjA9tX"
] |
iclr_2019_BkxWJnC9tX | Diversity and Depth in Per-Example Routing Models | Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works. Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions. Both architectural diversity and routing depth can increase the representational power of a routing network. In this work, we address both of these deficiencies. We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth. In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup. However, when scaling up routing depth, we find that modern routing techniques struggle with optimization. We conclude by discussing both the positive and negative results, and suggest directions for future research. | accepted-poster-papers |
pros:
- good, clear writing
- interesting analysis
- very important research area
- nice results on multi-task omniglot
cons:
- somewhat limited experimental evaluation
I think the reviewers all agree that the work is interesting and the paper well written. There is still a need for more thorough experiments (which it sounds like the authors are undertaking). I recommend acceptance.
| train | [
"ryxf_zfv6X",
"rkegxvXQ14",
"S1eyvl60C7",
"SJeR4arqRQ",
"Byg9-aB50Q",
"B1exyaBqAX",
"rygVK_xR2m",
"S1lhnTr5hX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper \"Diversity and Depth in Per-Example Routing Models\" extends previous work on routing networks by adding diversity to the type of architectural unit available for the router at each decision and by scaling to deeper networks. They evaluate their approach on Omniglot, where they achieve state of the art ... | [
7,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2019_BkxWJnC9tX",
"S1eyvl60C7",
"SJeR4arqRQ",
"ryxf_zfv6X",
"rygVK_xR2m",
"S1lhnTr5hX",
"iclr_2019_BkxWJnC9tX",
"iclr_2019_BkxWJnC9tX"
] |
iclr_2019_Bkxbrn0cYX | Selfless Sequential Learning | Sequential learning, also called lifelong learning, studies the problem of learning tasks in a sequence with access restricted to only the data of the current task. In this paper we look at a scenario with fixed model capacity, and postulate that the learning process should not be selfish, i.e. it should account for future tasks to be added and thus leave enough capacity for them. To achieve Selfless Sequential Learning we study different regularization strategies and activation functions. We find that
imposing sparsity at the level of the representation (i.e. neuron activations) is more beneficial for sequential learning than encouraging parameter sparsity. In particular, we propose a novel regularizer, that encourages representation sparsity by means of neural inhibition. It results in few active neurons which in turn leaves more free neurons to be utilized by upcoming tasks. As neural inhibition over an entire layer can be too drastic, especially for complex tasks requiring strong representations,
our regularizer only inhibits other neurons in a local neighbourhood, inspired by lateral inhibition processes in the brain. We combine our novel regularizer with state-of-the-art lifelong learning methods that penalize changes to important previously learned parts of the network. We show that our new regularizer leads to increased sparsity which translates in consistent performance improvement on diverse datasets. | accepted-poster-papers | Two of the reviewers raised their scores during the discussion phase noting that the revised version was clearer and addressed some of their concerns. As a result, all the reviewers ultimately recommended acceptance. They particularly enjoyed the insights that the authors shared from their experiments and appreciated that the experiments were quite thorough. All the reviewers mentioned that the work seemed somewhat incremental, but given the results, insights and empirical evaluation decided that it would still be a valuable contribution to the conference. One reviewer added feedback about how to improve the writing and clarity of the paper for the camera ready version. | train | [
"rJeJwe0d3m",
"Bkgrm1dmkV",
"SJeKGqFth7",
"SJgEC5nlkV",
"SygxmvXCAQ",
"rJeWYmDAnm",
"Hkl7uQV9C7",
"rke9cE2NAX",
"B1xX3QhE0m",
"SklPsoiEAm"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"[REVISION]\nThe work is thorough and some of my minor concerns have been addressed, so I am increasing my score to 6. I cannot go beyond because of the incremental nature of the work, and the very limited applicability of the used continual learning setup from this paper.\n\n[OLD REVIEW]\nThe paper proposes a nove... | [
6,
-1,
6,
-1,
-1,
7,
-1,
-1,
-1,
-1
] | [
5,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Bkxbrn0cYX",
"SJeKGqFth7",
"iclr_2019_Bkxbrn0cYX",
"rJeWYmDAnm",
"Hkl7uQV9C7",
"iclr_2019_Bkxbrn0cYX",
"SJeKGqFth7",
"rJeJwe0d3m",
"SJeKGqFth7",
"rJeWYmDAnm"
] |
iclr_2019_BkzeUiRcY7 | M^3RL: Mind-aware Multi-agent Management Reinforcement Learning | Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation. | accepted-poster-papers | The paper addresses a variant of multi-agent reinforcement learning that aligns well with real-world applications - it considers the case where agents may have individual, diverging preferences. 
The proposed approach trains a "manager" agent which coordinates the self-interested worker agents by assigning them appropriate tasks and rewarding successful task completion (through contract generation). The approach is empirically validated on two grid-world domains: resource collection and crafting. The reviewers point out that this formulation is closely related to the principal-agent problem known in the economics literature, and see a key contribution of the paper in bringing this type of problem into the deep RL space.
The reviewers noted several potential weaknesses: they asked the authors to clarify the relation to prior work, especially the principal-agent work done in other areas, as well as connections to real-world applications. In this context, they also noted that the significance of the contribution was unclear. Several modeling choices were questioned, including the choice of using rule-based agents for the empirical results presented in the main paper, and the need for using deep learning for contract generation. They asked the authors to provide additional details regarding scalability and sample complexity of the approach.
The authors carefully addressed the reviewer concerns, and the reviewers have indicated that they are satisfied with the response and updates to the paper. The consensus is to accept the paper.
The AC is particularly pleased to see that the authors plan to open source their code so that experiments can be replicated, and encourages them to do so in a timely manner. The AC also notes that the figures in the paper are very small, and often not readable in print - please increase figure and font sizes in the camera ready version to ensure the paper is legible when printed. | test | [
"Hye-Lo29hm",
"SyggSiwZp7",
"HylgEnCY0X",
"B1xIv7jKCQ",
"ryl9TkfK0m",
"rJlQL4SGC7",
"S1lrcUBMA7",
"H1lU9Hrf0X",
"HkgbGMxEnm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\nThis paper studies the problem of generating contracts by a principal to incentive agents to optimally accomplish multiagent tasks. The setup of the environment is that the agents have certain skills and preferences for activities, which the principal must learn to act optimally. The paper takes a combined appro... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2019_BkzeUiRcY7",
"iclr_2019_BkzeUiRcY7",
"S1lrcUBMA7",
"ryl9TkfK0m",
"iclr_2019_BkzeUiRcY7",
"HkgbGMxEnm",
"SyggSiwZp7",
"Hye-Lo29hm",
"iclr_2019_BkzeUiRcY7"
] |
iclr_2019_ByGuynAct7 | The Deep Weight Prior | Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution. In this work, we propose a new type of prior distributions for convolutional neural networks, deep weight prior (DWP), that exploit generative models to encourage a specific structure of trained convolutional filters e.g., spatial correlations of weights. We define DWP in the form of an implicit distribution and propose a method for variational inference with such type of implicit priors. In experiments, we show that DWP improves the performance of Bayesian neural networks when training data are limited, and initialization of weights with samples from DWP accelerates training of conventional convolutional neural networks.
 | accepted-poster-papers | This paper proposes factorized prior distributions for CNN weights by using explicit and implicit parameterization for the prior. The paper suggests a few tractable methods to learn the prior and the model jointly. The paper, overall, is interesting.
The reviewers have had some disagreement regarding the effectiveness of the method. The factorized prior may not be the most informative prior, and using extra machinery to estimate it might deteriorate the performance. On the other hand, estimating a more informative prior might be difficult. It is extremely important to discuss this trade-off in the paper. I strongly recommend that the authors discuss the pros and cons of using priors that are weakly informative vs. strongly informative.
The idea of using a hierarchical model has been around, e.g., see the paper on "Hierarchical Variational Models" and, more recently, "Semi-Implicit Variational Inference". Please include a discussion of such existing work in the related work, and explain why your proposed method is better than these existing methods.
Conditioned on these two discussions being added to the paper, we can accept it.
| train | [
"S1l_qeRt3X",
"HkxI42NoCX",
"HJxe-nQiR7",
"HklzhvZs07",
"SketciAt0X",
"r1gW7uAYRQ",
"H1e5OxjKpQ",
"SkxoNliK6m",
"rkl9WeiYa7",
"HkebAXKun7",
"rylsLCoBnX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers modeling convolutional neural network by a Bayes method. The prior for the weights is considered in which the weights from various layers, input and output channels are assumed to be independent. A varational method is considered to approximate the posterior distribution of the weights of CNN... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_ByGuynAct7",
"HJxe-nQiR7",
"HklzhvZs07",
"SketciAt0X",
"H1e5OxjKpQ",
"SkxoNliK6m",
"S1l_qeRt3X",
"HkebAXKun7",
"rylsLCoBnX",
"iclr_2019_ByGuynAct7",
"iclr_2019_ByGuynAct7"
] |
iclr_2019_ByME42AqK7 | Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution | Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: (1) the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption; (2)most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance. | accepted-poster-papers | The paper proposes an evolutionary architecture search method which uses weight inheritance through network morphism to avoid training candidate models from scratch. The method can optimise multiple objectives (e.g. accuracy and inference time), which is relevant for practical applications, and the results are promising and competitive with the state of the art. 
All reviewers are generally positive about the paper. Reviewers’ feedback on improving presentation and adding experiments with a larger number of objectives has been addressed in the new revision.
I strongly encourage the authors to add experiments on the full ImageNet dataset (not just 64x64) and/or language modelling -- the two benchmarks widely used in the neural architecture search field. | train | [
"BygMkWst37",
"HkgjDn7V07",
"ryxN-nQVRX",
"B1x65oXNRX",
"BkgSVo7VRm",
"SJgMKr5h3X",
"rJeAPV55hQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n- Summary\nThis paper proposes a multi-objective evolutionary algorithm for the neural architecture search. Specifically, this paper employs a Lamarckian inheritance mechanism based on network morphism operations for speeding up the architecture search. The proposed method is evaluated on CIFAR-10 and ImageNet (... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_ByME42AqK7",
"iclr_2019_ByME42AqK7",
"BygMkWst37",
"rJeAPV55hQ",
"SJgMKr5h3X",
"iclr_2019_ByME42AqK7",
"iclr_2019_ByME42AqK7"
] |
iclr_2019_ByMHvs0cFQ | Quaternion Recurrent Neural Networks | Recurrent neural networks (RNNs) are powerful architectures to model sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or image recognition involve multi-dimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), alongside with a quaternion long-short term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performances than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce by a maximum factor of 3.3x the number of free parameters needed, compared to real-valued RNNs and LSTMs to reach better results, leading to a more compact representation of the relevant information. | accepted-poster-papers | The authors derive and experiment with quaternion-based recurrent neural networks, and demonstrate their effectiveness on speech recognition tasks (TIMIT and WSJ), showing that the proposed models can achieve the same accuracy with fewer parameters than conventional models. The reviewers were unanimous in recommending that the paper be accepted. | train | [
"SkxejkS5hX",
"HJgC3kCNh7",
"SkxSXlFmT7",
"ryxPHgY7p7",
"H1evr1F76m",
"HJlaJJFQTm",
"SJg4KAOmTQ",
"B1ebqu4y6X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Quality: sufficient though there are issues. Work done in automatic speech recognition on numerous variants of recurrent models, such as interleaved TDNN and LSTM (Peddinti 2017), is completely ignored [addressed in the revision]. The description of derivatives needs to mention the linear relationship between inpu... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
8
] | [
5,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_ByMHvs0cFQ",
"iclr_2019_ByMHvs0cFQ",
"HJgC3kCNh7",
"HJgC3kCNh7",
"HJlaJJFQTm",
"SkxejkS5hX",
"B1ebqu4y6X",
"iclr_2019_ByMHvs0cFQ"
] |
iclr_2019_ByMVTsR5KQ | Adversarial Audio Synthesis | Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to audio generation. In this paper we introduce WaveGAN, a first attempt at applying GANs to unsupervised synthesis of raw-waveform audio. WaveGAN is capable of synthesizing one second slices of audio waveforms with global coherence, suitable for sound effect generation. Our experiments demonstrate that—without labels—WaveGAN learns to produce intelligible words when trained on a small-vocabulary speech dataset, and can also synthesize audio from other domains such as drums, bird vocalizations, and piano. We compare WaveGAN to a method which applies GANs designed for image generation on image-like audio feature representations, finding both approaches to be promising. | accepted-poster-papers | This paper proposes a GAN model to synthesize raw-waveform audio by adapting the popular DC-GAN architecture to handle audio signals. Experimental results are reported on several datasets, including speech and instruments.
Unfortunately this paper received two low-quality reviews, with little signal. The only substantial review was mildly positive, highlighting the clarity, accessibility and reproducibility of the work, and expressing concerns about the relative lack of novelty. The AC shares this assessment. The paper claims to be the first successful GAN application operating directly on waveforms. Whereas this is certainly an important contribution, it is less clear to the AC whether this contribution belongs to a venue such as ICLR, as opposed to ICASSP or ISMIR. This is a borderline paper, and the decision is ultimately relative to other submissions with similar scores. In this context, given the mainstream popularity of GANs for image modeling, the AC feels this paper can help spark significant further research in adversarial training for audio modeling, and therefore recommends acceptance. I also encourage the authors to address the issues raised by R1. | train | [
"r1lL9-zDTX",
"S1lL5vpRhm",
"ByxwkkouTX",
"r1x7zyjOam",
"r1gFfA7GaX",
"SJeGDaQMaQ",
"Skgki1VfT7",
"BJxnO07zTQ",
"HkeNkp5xTQ",
"HJgpLMhn37"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your clarifications. Here are my thoughts on modifying the score.\n\n“““ the algorithmic contribution is limited. ”””\n\nIf as said in the response, the concrete methodological contributions are phase shuffle (Section 3.3) and the learned post processing filters (Appendix B). At first reading of the pap... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"r1gFfA7GaX",
"iclr_2019_ByMVTsR5KQ",
"r1lL9-zDTX",
"ByxwkkouTX",
"HkeNkp5xTQ",
"iclr_2019_ByMVTsR5KQ",
"HJgpLMhn37",
"S1lL5vpRhm",
"iclr_2019_ByMVTsR5KQ",
"iclr_2019_ByMVTsR5KQ"
] |
iclr_2019_Bye5SiAqKX | Preconditioner on Matrix Lie Group for SGD | We study two types of preconditioners and preconditioned stochastic gradient descent (SGD) methods in a unified framework. We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of Fisher information matrix. Both preconditioners can be derived from one framework, and efficiently estimated on any matrix Lie groups designated by the user using natural or relative gradient descent minimizing certain preconditioner estimation criteria. Many existing preconditioners and methods, e.g., RMSProp, Adam, KFAC, equilibrated SGD, batch normalization, etc., are special cases of or closely related to either the Newton type or the Fisher type ones. Experimental results on relatively large scale machine learning problems are reported for performance study. | accepted-poster-papers | The method presented here adapts an SGD preconditioner by minimizing particular cost functions which are minimized by the inverse Hessian or inverse Fisher matrix. These cost functions are minimized using natural (or relative) gradient on the Lie group, as previously introduced by Amari. This can be extended to learn a Kronecker-factored preconditioner similar to K-FAC, except that the preconditioner is constrained to be upper triangular, which allows the relative gradient to be computed using backsubstitution rather than inversion. Experiments show modest speedups compared to SGD on ImageNet and language modeling.
There's a wide divergence in reviewer scores. We can disregard the extremely short review by R2. R1 and R3 each did very careful reviews (R3 even tried out the algorithm), but gave scores of 5 and 8. They agree on most of the particulars, but just emphasized different factors. Because of this, I took a careful look, and indeed I think the paper has significant strengths and weaknesses.
The main strength is the novelty of the approach. Combining relative gradient with upper triangular preconditioners is clever, and allows for a K-FAC-like algorithm which avoids matrix inversion. I haven't seen anything similar, and this method seems potentially useful. R3 reports that (s)he tried out the algorithm and found it to work well. Contrary to R1, I think the paper does use Lie groups in a meaningful way.
Unfortunately, the writing is below the standards of an ICLR paper. The title is misleading, since the method isn't learning a preconditioner "on" the Lie group. The abstract and introduction don't give a clear idea of what the paper is about. While some motivation for the algorithms is given, it's expressed very tersely, and in a way that will only make sense to someone who knows the mathematical toolbox well enough to appreciate why the algorithm makes sense. As the reviewers point out, important details (such as hyperparameter tuning schemes) are left out of the experiments section.
The experiments are also somewhat problematic, as pointed out by R1. The paper compares only to SGD and Adam, even though many other second-order optimizers have been proposed (and often with code available). It's unclear how well the baselines were tuned, and at the end of the day, the performance gain is rather limited. The experiments measure only iterations, not wall clock time.
On the plus side, the experiments include ImageNet, which is ambitious by the standards of an algorithmic paper, and as mentioned above, R3 got good results from the method.
On the whole, I would favor acceptance because of the novelty and potential usefulness of the approach. This would be a pretty solid submission if the writing were improved. (While the authors feel constrained by the 8 page limit, I'd recommend going beyond this for clarity.) However, I emphasize that it is very important to clean up the writing.
| train | [
"rJexSy6s2Q",
"r1lQAt9vT7",
"Hyxv3jbM6X",
"Hyg6oT30h7",
"HklSR4lZpm",
"B1gF-Z1Za7",
"B1xklEYyaQ",
"S1e9NsdXsm"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a preconditioned SGD method where the preconditioner is adapted by performing some type of gradient descent on some secondary objective \"c\". The preconditioner lives in one of a restricted class of invertible matrices (e.g. symmetric, diagonal, Kronecker-factored) constituting a Lie group (w... | [
5,
-1,
-1,
8,
-1,
-1,
-1,
7
] | [
5,
-1,
-1,
5,
-1,
-1,
-1,
3
] | [
"iclr_2019_Bye5SiAqKX",
"iclr_2019_Bye5SiAqKX",
"rJexSy6s2Q",
"iclr_2019_Bye5SiAqKX",
"Hyg6oT30h7",
"S1e9NsdXsm",
"iclr_2019_Bye5SiAqKX",
"iclr_2019_Bye5SiAqKX"
] |
iclr_2019_ByeMB3Act7 | Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks | Neural language models have been widely used in various NLP tasks, including machine translation, next word prediction and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is to compute top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm by exploiting the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging as traditional clustering methods are discrete and non-differentiable, and thus unable to be used with back-propagation in the training process. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in various tasks such as beam search in machine translation or next words prediction. For example, for machine translation task on German to English dataset with around 25K vocabulary, we can achieve 20.4 times speed up with 98.9% precision@1 and 99.3% precision@5 with the original softmax layer prediction, while state-of-the-art (Zhang et al., 2018) only achieves 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 for the same task. | accepted-poster-papers | This paper introduces an approach for improving the scalability of neural network models with large output spaces, where naive soft-max inference scales linearly with the vocabulary size. The proposed approach is based on a clustering step combined with per-cluster, smaller soft-maxes. 
It retains differentiability with the Gumbel softmax trick. The experimental results are impressive. There are some minor flaws; however, there is consensus among the reviewers that the paper should be published.
| test | [
"HkeASf47AX",
"HyetbGEm0m",
"SJl5TbNQAQ",
"Hyx4lWEXRX",
"BylZTkmc3X",
"ByxDcLsR2X",
"HyehHMVFnm",
"S1gF35P_nQ",
"B1eYqyEt3m"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We want to thank the reviewer for the useful suggestions!!\n\n-- about larger vocabulary experiment:\n\nWe have added an experiment with a much larger dataset --- Wikitext103 with vocabulary size of 80k. The result of prediction time speedup versus accuracy is shown in Figure 9 in the new version. As you can see f... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
-1
] | [
"HyehHMVFnm",
"ByxDcLsR2X",
"S1gF35P_nQ",
"iclr_2019_ByeMB3Act7",
"B1eYqyEt3m",
"iclr_2019_ByeMB3Act7",
"iclr_2019_ByeMB3Act7",
"iclr_2019_ByeMB3Act7",
"iclr_2019_ByeMB3Act7"
] |
iclr_2019_ByeSdsC9Km | Adaptive Posterior Learning: few-shot learning with a surprise-based memory module | The ability to generalize quickly from few observations is crucial for intelligent systems. In this paper we introduce APL, an algorithm that approximates probability distributions by remembering the most surprising observations it has encountered. These past observations are recalled from an external memory module and processed by a decoder network that can combine information from different memory slots to generalize beyond direct recall. We show this algorithm can perform as well as state of the art baselines on few-shot classification benchmarks with a smaller memory footprint. In addition, its memory compression allows it to scale to thousands of unknown labels. Finally, we introduce a meta-learning reasoning task which is more challenging than direct classification. In this setting, APL is able to generalize with fewer than one example per class via deductive reasoning. | accepted-poster-papers | All reviewers recommend acceptance. The problem is an interesting one. The method is interesting.
Authors were responsive in the reviewing process.
Good work. I recommend acceptance :) | test | [
"Hkx8p7qAA7",
"BJxh2pdP2X",
"ryehOhG2T7",
"BylMPhznTQ",
"rJebKiM2pX",
"rJe9NiGnpm",
"HJgNL9M2TX",
"SyTTVGKnQ",
"r1lzbPiPnX"
] | [
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"First, it is a very interesting idea.\nI wonder if the Memory store can be updated? If a point stored in the Memory store, it will be deleted in the later iteration or stored in the memory forever.\nBesides, is the Memory store with/without the upper limit?\n\nThanks.",
"Summary: the authors propose a new algori... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_ByeSdsC9Km",
"iclr_2019_ByeSdsC9Km",
"BJxh2pdP2X",
"BJxh2pdP2X",
"r1lzbPiPnX",
"SyTTVGKnQ",
"iclr_2019_ByeSdsC9Km",
"iclr_2019_ByeSdsC9Km",
"iclr_2019_ByeSdsC9Km"
] |
iclr_2019_ByetGn0cYX | Probabilistic Planning with Sequential Monte Carlo methods | In this work, we propose a novel formulation of planning which views it as a probabilistic inference problem over future optimal trajectories. This enables us to use sampling methods, and thus, tackle planning in continuous domains using a fixed computational budget. We design a new algorithm, Sequential Monte Carlo Planning, by leveraging classical methods in Sequential Monte Carlo and Bayesian smoothing in the context of control as inference. Furthermore, we show that Sequential Monte Carlo Planning can capture multimodal policies and can quickly learn continuous control tasks. | accepted-poster-papers | This paper presents a new approach for posing control as inference that leverages Sequential Monte Carlo and Bayesian smoothing. There is significant interest from the reviewers in this method, as well as an active discussion about this paper, particularly with respect to the optimism-bias issue. The paper is borderline, and the authors are encouraged to address the clarifications and changes requested by the reviewers.
| test | [
"rJgIdijLl4",
"B1xd7Ks8gE",
"HJe9yYuVlN",
"SJgSEm1yg4",
"BJebbLfA14",
"Sklapa-0kN",
"S1xzTol01E",
"rJl3_Zc2y4",
"HkeszuKYkE",
"BJlqzpD_14",
"Bkgqc0IOk4",
"SylcRv_DhQ",
"ByxFxKEqRQ",
"rylLxlrqR7",
"ryeUz24c0m",
"H1laAFN9CQ",
"B1x5aFNqCX",
"rJerZg4c0m",
"HJejH5YDnm",
"SJgI4p6O2m"... | [
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Our final results with CEM+value function show no improved performance overall over vanilla CEM. This seems mainly due to the fact that the CEM policy and the SAC value function do not match and our value/Q losses diverge.",
"Thank you for the more detailed answer, we think we finally understood the source of ou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"BJebbLfA14",
"SJgSEm1yg4",
"iclr_2019_ByetGn0cYX",
"S1xzTol01E",
"rJl3_Zc2y4",
"HJejH5YDnm",
"rJl3_Zc2y4",
"BJlqzpD_14",
"rylLxlrqR7",
"Bkgqc0IOk4",
"ByxFxKEqRQ",
"iclr_2019_ByetGn0cYX",
"SylcRv_DhQ",
"SJgI4p6O2m",
"HJejH5YDnm",
"SylcRv_DhQ",
"SylcRv_DhQ",
"iclr_2019_ByetGn0cYX",
... |
iclr_2019_Byey7n05FQ | Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control | We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world. | accepted-poster-papers | The paper makes novel explorations into how MPC and approximate-DP / value-function approaches, with value-fn ensembles to model value-fn uncertainty, can be effectively combined. The novelty lies in exploring their combination. The experiments are solid. The paper is clearly written.
Open issues include overall novelty, and delineating the setting in which this method is appropriate.
The reviewers and AC are in agreement on what is in the paper. The open question is whether the combination of the ideas is interesting.
After further reviewing the paper and results, the AC believes that the overall combination of ideas and related evaluations makes a useful and promising contribution. As evidenced in some of the reviewer discussion, there is often a considerable schism in the community regarding what is considered fair to introduce in terms of prior knowledge, and blurred definitions regarding planning and control. The AC discounted some of the concerns of R2 that related more to discrete-action settings and theoretical considerations; these often fail to translate to difficult problems in continuous-action settings. The AC believes that R3 nicely articulates the issues of the paper that can be (and should be) addressed in the writing, i.e., to describe and motivate the settings that the proposed framework targets, as articulated in the reviews and ensuing discussion.
| train | [
"rJlrQ6CZxE",
"SJxZOqMtnQ",
"HklU137bCQ",
"r1grAPeCa7",
"Syl5JztsaX",
"rJxWryti67",
"Byen-JFsa7",
"SkeXEnOipQ",
"rJewsWF937",
"BJgIdkoD2Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers,\n\nThank you once again for taking time to review our paper. Please let us know if we can answer any additional questions about the work, or if the answers to any of your questions require further discussion. If the responses were satisfactory, we kindly request that the reviewers adjust the rating... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_Byey7n05FQ",
"iclr_2019_Byey7n05FQ",
"iclr_2019_Byey7n05FQ",
"iclr_2019_Byey7n05FQ",
"BJgIdkoD2Q",
"Byen-JFsa7",
"SJxZOqMtnQ",
"rJewsWF937",
"iclr_2019_Byey7n05FQ",
"iclr_2019_Byey7n05FQ"
] |
iclr_2019_Byf5-30qFX | DHER: Hindsight Experience Replay for Dynamic Goals | Dealing with sparse rewards is one of the most important challenges in reinforcement learning (RL), especially when a goal is dynamic (e.g., to grasp a moving object). Hindsight experience replay (HER) has been shown an effective solution to handling sparse rewards with fixed goals. However, it does not account for dynamic goals in its vanilla form and, as a result, even degrades the performance of existing off-policy RL algorithms when the goal is changing over time.
In this paper, we present Dynamic Hindsight Experience Replay (DHER), a novel approach for tasks with dynamic goals in the presence of sparse rewards. DHER automatically assembles successful experiences from two relevant failures and can be used to enhance an arbitrary off-policy RL algorithm when the tasks' goals are dynamic. We evaluate DHER on tasks of robotic manipulation and moving object tracking, and transfer the policies from simulation to physical robots. Extensive comparison and ablation studies demonstrate the superiority of our approach, showing that DHER is a crucial ingredient to enable RL to solve tasks with dynamic goals in manipulation and grid world domains. | accepted-poster-papers | This work proposes a method for extending hindsight experience replay to the setting where the goal is not fixed, but dynamic or moving. It proceeds by amending failed episodes by searching replay memory for compatible trajectories from which to construct a trajectory that can be productively learned from.
Reviewers were generally positive on the novelty and importance of the contribution. While noting its limitations, it was still felt that the key ideas could be useful and influential. The tasks considered are modifications of OpenAI robotics environments, adapted to the dynamic goal setting, as well as a 2D planar "snake" game. There were concerns about the strength of the baselines employed but reviewers seemed happy with the state of these post-revision. There were also concerns regarding clarity of presentation, particularly from AnonReviewer2, but significant progress was made on this front following discussions and revision.
Despite remaining concerns over clarity I am convinced that this is an interesting problem setting worth studying and that the proposed method makes significant progress. The method has limitations with respect to the sorts of environments where we can reasonably expect it to work (where other aspects of the environment are relatively stable both within and across episodes), but there is lots of work in the literature, particularly where robotics is concerned, that focuses on exactly these kinds of environments. This submission is therefore highly relevant to current practice and by reviewers' accounts, generally well-executed in its post-revision form. I therefore recommend acceptance. | train | [
"r1e8PbJV0X",
"B1lgRLesoX",
"H1xtaCNcnX",
"Sygu7-xF0Q",
"rJlLKVtS07",
"H1lI5-_BAm",
"S1lN5tUBCQ",
"SJekV4-SC7",
"HJxjXfLXRQ",
"B1eDAUxO6m",
"ByxsM6u76X",
"BJxeUuD76X",
"BklxR_v7p7",
"BkljCMD7a7",
"r1eb4ZDQ67",
"rJgHzRb5hX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thanks for your interest. The paper belongs to relevant topics in ICLR, for example, reinforcement learning or applications in robotics, or any other field. Please see https://iclr.cc/Conferences/2019/CallForPapers . \nTake HER (Andrychowicz et al., 2017) as an example, it was published in NIPS 2017. \n\nFor DHER... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"HJxjXfLXRQ",
"iclr_2019_Byf5-30qFX",
"iclr_2019_Byf5-30qFX",
"H1lI5-_BAm",
"H1lI5-_BAm",
"S1lN5tUBCQ",
"SJekV4-SC7",
"BJxeUuD76X",
"iclr_2019_Byf5-30qFX",
"ByxsM6u76X",
"r1eb4ZDQ67",
"B1lgRLesoX",
"B1lgRLesoX",
"rJgHzRb5hX",
"H1xtaCNcnX",
"iclr_2019_Byf5-30qFX"
] |
iclr_2019_ByftGnR9KX | FlowQA: Grasping Flow in History for Conversational Machine Comprehension | Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy. | accepted-poster-papers | Interesting and novel approach of modeling context (mainly external documents with information about the conversation content) for the conversational question answering task, demonstrating significant improvements on the newly released conversational QA datasets.
The first version of the paper was weaker on motivation and lacked a clear presentation of the approach, as mentioned by the reviewers, but the paper was updated as explained in the responses to the reviewers.
The ablation studies are useful in demonstrating the effectiveness of the proposed Flow approach.
A question still remains after the reviews (this was not raised by the reviewers): How does the approach perform in comparison to the state of the art for the single question and answer tasks? If each question was asked in isolation, would it still be the best?
| train | [
"rJx8PrR4CX",
"SkglhHAEAm",
"SJeOXH0VAm",
"SkepZq6VR7",
"SJx9ptTE07",
"HJeLfagi3Q",
"SyxpAA4c2Q",
"SylB4hN5hQ"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This comment is moved to be below the main response.",
"Re: Some questions on the experiments\n\n1) Computational efficiency compared to single-turn MC: Without our alternating parallel processing structure, training time will be multiplied by the number of QA pairs in a dialog. After implementing this mechanis... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"SyxpAA4c2Q",
"SJeOXH0VAm",
"SyxpAA4c2Q",
"SylB4hN5hQ",
"HJeLfagi3Q",
"iclr_2019_ByftGnR9KX",
"iclr_2019_ByftGnR9KX",
"iclr_2019_ByftGnR9KX"
] |
iclr_2019_ByfyHh05tQ | Learning to Design RNA | Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation. Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints. Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA. LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure. By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks. Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process. Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitudes faster in reaching the previous state-of-the-art performance. In an ablation study, we analyze the importance of our method's different components.
| accepted-poster-papers | After a healthy discussion between reviewers and authors, the reviewers' consensus is to recommend acceptance to ICLR. The authors thoroughly addressed reviewer concerns, and all reviewers noted the quality of the paper, methodological innovations and SotA results. | train | [
"BklQ-Y9PkV",
"rkeF2D9PJV",
"Syl-dLf4yN",
"rJxME7y-yE",
"HJgr7SfxnX",
"BkAgSNekN",
"Bygfptj6A7",
"BkxvU5i6Cm",
"r1e-JqspC7",
"H1eWdO9lTQ",
"HylyHJLcCQ",
"SJgLJQJ8Am",
"ByguVuyUCQ",
"B1ewpv1ICm",
"BklaXw1LCm",
"B1ecw8JLAX",
"SkgXlBJLRX",
"rkxet6Pa27"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thanks, we did not think of the interpretation \"Neural (Architecture Search)\". Now being very aware of the two different possible interpretations of NAS, we will be sure to use a wording that avoids the confusion. Thanks again!",
"Thanks for these references. We already cited the first two in Section 5\n(Joint... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"Syl-dLf4yN",
"rJxME7y-yE",
"BkxvU5i6Cm",
"BkAgSNekN",
"iclr_2019_ByfyHh05tQ",
"Bygfptj6A7",
"HylyHJLcCQ",
"H1eWdO9lTQ",
"HylyHJLcCQ",
"iclr_2019_ByfyHh05tQ",
"SkgXlBJLRX",
"iclr_2019_ByfyHh05tQ",
"H1eWdO9lTQ",
"rkxet6Pa27",
"HJgr7SfxnX",
"HJgr7SfxnX",
"HJgr7SfxnX",
"iclr_2019_Byfy... |
iclr_2019_Byg0DsCqYQ | Robust Conditional Generative Adversarial Networks | Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision. The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise. The regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address the issue. Our model augments the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold even in the presence of intense noise. We prove that RoCGAN share similar theoretical properties as GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains including images from natural scenes and faces. | accepted-poster-papers | The proposed method suggests a way to do robust conditional image generation with GANs. The premise is to make the image to image translation model resilient to noise by leveraging structure in the output space, with an unsupervised "pathway".
In general, the qualitative results seem reasonable on a number of datasets, including those suggested by reviewers. The method appears simple, novel and easy to try. The main concerns seem to be that the idea is maybe too simple, but I'm not particularly bothered by that. The authors showed it working well on a variety of tasks (synthetic and natural), provide SSIM numbers that look compelling (despite SSIM's shortcomings) and otherwise give compelling arguments for the technical soundness of the approach.
Thus, I recommend acceptance. | train | [
"Bkln1phBJV",
"rJgrCJ8rkE",
"B1x_dKfthX",
"SygbgUr527",
"HkehuKRZJ4",
"S1gdUGDQAm",
"HklhCxV2a7",
"rJev0VNha7",
"BJgoRH1vaQ",
"SJxl7bVhpQ",
"Hkl9noywp7",
"HkewAY1Pam",
"SyeeXquTh7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Dear reviewer,\n\nWe are thankful to the reviewer for their vote for acceptance after our revision.\n\nWe are happy to answer any further question, to address any potential remaining concern. We thank the reviewer for the feedback which has helped improve the quality of our manuscript.",
"Dear reviewer,\n\nthank... | [
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"HklhCxV2a7",
"S1gdUGDQAm",
"iclr_2019_Byg0DsCqYQ",
"iclr_2019_Byg0DsCqYQ",
"Hkl9noywp7",
"HkewAY1Pam",
"B1x_dKfthX",
"iclr_2019_Byg0DsCqYQ",
"iclr_2019_Byg0DsCqYQ",
"B1x_dKfthX",
"SygbgUr527",
"SyeeXquTh7",
"iclr_2019_Byg0DsCqYQ"
] |
iclr_2019_Byg5QhR5FQ | Top-Down Neural Model For Formulae | We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true. The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner. The results of this network are then processed by two recurrent neural networks. One of the interesting aspects of our model is how propositional atoms are treated. For example, the model is insensitive to their names, it only matters whether they are the same or distinct. | accepted-poster-papers | This paper presents a method for building representations of logical formulae not by propagating information upwards from leaves to root and making decisions (e.g. as to whether one formula entails another) based on the root representation, but rather by propagating information down from root to leaves.
It is a somewhat curious approach, and it is interesting to see that it works so well, especially on the "massive" train/test split of Evans et al. (2018). This paper certainly piques my interest, and I was disappointed to see a complete absence of discussion from reviewers during the rebuttal period despite author responses. The reviewer scores are all middle-of-the-road scores lightly leaning towards accepting, so the paper is rather borderline. It would have been most helpful to hear what the reviewers thought of the rebuttal and revisions made to the paper.
Having read through the paper myself, and through the reviews and rebuttal, I am hesitantly casting an extra vote in favour of acceptance: the sort of work discussed in this paper is important and under-represented in the conference, and the results are convincing. I, however, share the concerns outlined by the reviewers in their first (and only) set of comments, and invite the authors to take particular heed of the points made by AnonReviewer3, although all make excellent points. There needs to be some further analysis and explanation of these results. If not in this paper, then at least in follow-up work. For now, I will recommend with medium confidence that the paper be accepted.
"B1lJNSp7JV",
"BJexSD_PnQ",
"SJlJykpRAQ",
"H1gP6yjq0m",
"rkxMYQi9A7",
"HyeZl-j507",
"H1xTZE693X",
"HkeTjz3wnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I've read the new version of the paper and the comments of other reviewers and I've decided to increase my score.",
"In this paper the authors propose a neural model that, given a logical formula as input, predicts whether the formula is a tautology or not. Showing that a formula is a tautology is important beca... | [
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
2,
-1,
-1,
-1,
-1,
3,
4
] | [
"SJlJykpRAQ",
"iclr_2019_Byg5QhR5FQ",
"BJexSD_PnQ",
"H1xTZE693X",
"BJexSD_PnQ",
"HkeTjz3wnQ",
"iclr_2019_Byg5QhR5FQ",
"iclr_2019_Byg5QhR5FQ"
] |
iclr_2019_BygANhA9tQ | Cost-Sensitive Robustness against Adversarial Examples | Several recent works have developed methods for training classifiers that are certifiably robust against norm-bounded adversarial perturbations. These methods assume that all the adversarial transformations are equally important, which is seldom the case in real-world applications. We advocate for cost-sensitive robustness as the criterion for measuring the classifier's performance for tasks where some adversarial transformations are more important than others. We encode the potential harm of each adversarial transformation in a cost matrix, and propose a general objective function to adapt the robust training method of Wong & Kolter (2018) to optimize for cost-sensitive robustness. Our experiments on simple MNIST and CIFAR10 models with a variety of cost matrices show that the proposed approach can produce models with substantially reduced cost-sensitive robust error, while maintaining classification accuracy. | accepted-poster-papers | This paper studies the notion of certified cost-sensitive robustness against adversarial examples, by building from the recent [Wong & Kolter'18]. Its main contribution is to adapt the robust classification objective to a 'cost-sensitive' objective, that weights labelling errors according to their potential damage.
This paper received mixed reviews, with a clear champion and two skeptical reviewers. On the one hand, they all highlighted the clarity of the presentation and the relevance of the topic as strengths; on the other hand, they noted the relatively little novelty of the paper relative to [W & K'18]. Reviewers also acknowledged the diligence of the authors during the response phase. The AC mostly agrees with these assessments, and taking them all into consideration, he/she concludes that the potential practical benefits of cost-sensitive certified robustness outweigh the limited scientific novelty. Therefore, he/she recommends acceptance as a poster. | train | [
"HJg7nLR0n7",
"Byx0aQAY0X",
"ByxW5QAYCX",
"B1xUE7Rt0m",
"HJxSIK7VRX",
"rJgLQKrMRX",
"ryxEPIvKhm",
"ByeXwZZzCm",
"HJlIr-JG0X",
"SygPPR_l0X",
"B1e-WxMi6Q",
"HklCJkMs6X",
"SklzwA-oaQ",
"H1llJV-z6X",
"B1lNPh9c3Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors define the notion of cost-sensitive robustness, which measures the seriousness of adversarial attack with a cost matrix. The authors then plug the costs of adversarial attack into the objective of optimization to get a model that is (cost-sensitively) robust against adversarial attacks.\n\nThe initiati... | [
5,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_BygANhA9tQ",
"ByeXwZZzCm",
"HJxSIK7VRX",
"rJgLQKrMRX",
"SklzwA-oaQ",
"H1llJV-z6X",
"iclr_2019_BygANhA9tQ",
"HJlIr-JG0X",
"SygPPR_l0X",
"HklCJkMs6X",
"HJg7nLR0n7",
"ryxEPIvKhm",
"B1lNPh9c3Q",
"HJg7nLR0n7",
"iclr_2019_BygANhA9tQ"
] |
iclr_2019_BygfghAcYX | The role of over-parametrization in generalization of neural networks | Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes (within the range reported in the experiments), and could partly explain the improvement in generalization with over-parametrization. We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks. | accepted-poster-papers | I agree with the reviewers that this is a strong contribution and provides new insights, even if it doesn't quite close the problem.
p.s.: It seems that centering the weight matrices at initialization is a key idea. The authors note that Dziugaite and Roy used bounds that were based on the distance to initialization, but that their reported numerical generalization bounds also increase with the increasing network size. Looking back at that work, they look at networks where the size increases by a very large factor (going from e.g. 400,000 parameters roughly to over 1.2 million, so a factor of 2.5), while at the same time the bound increases by a much smaller factor. The type of increase also seems much less severe than those pictured in Figures 3/5. Since Dziugaite and Roy's bounds involved optimization, perhaps the increase there is merely apparent. | train | [
"Sygwq4iZC7",
"rklmVdS_AX",
"BJeOrBBOCm",
"SkgLfnBiTm",
"SkezHFcITm",
"SJeStEOcTm",
"HklzMNu9TX",
"rkeQQzTK3m"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"The authors aim to shed light on the role of over-parametrization in generalization error. They do so for the special case of 2 layer fully connected ReLU networks, a \"simple\" setting where one still sees empirically that the test error decreasing as over-parametrization increases.\n\nBased on empirical observat... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
7
] | [
3,
-1,
-1,
-1,
5,
-1,
-1,
3
] | [
"iclr_2019_BygfghAcYX",
"iclr_2019_BygfghAcYX",
"Sygwq4iZC7",
"SkezHFcITm",
"iclr_2019_BygfghAcYX",
"rkeQQzTK3m",
"SkezHFcITm",
"iclr_2019_BygfghAcYX"
] |
iclr_2019_BygqBiRcFQ | Diffusion Scattering Transforms on Graphs | Stability is a key aspect of data analysis. In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision. Scattering transforms construct deep convolutional representations which are certified stable to input deformations. This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain.
In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps. The resulting representation is stable to metric perturbations of the domain while being able to capture ''high-frequency'' information, akin to the Euclidean Scattering. | accepted-poster-papers | The paper gives an extension of scattering transform to non-Euclidean domains by introducing scattering transforms on graphs using diffusion wavelet representations, and presents a stability analysis of such a representation under deformation of the underlying graph metric defined in terms of graph diffusion.
The reviewers' concerns are primarily about which type of graphs is the primary consideration (small-world social networks or point-cloud samples of submanifolds) and about the experimental studies. Technical developments such as the deformation in the proposed graph metric are motivated by submanifold scenarios in computer vision, and whether the development is well suited to the social networks used in the experiments still needs further investigation.
The authors provide satisfactory answers to the reviewers’ questions. The reviewers unanimously accept the paper for ICLR publication. | test | [
"Ske6Coy537",
"HJllyFuFRm",
"H1e0vqhd07",
"SJlDUcn_R7",
"B1glN93_0Q",
"HkgFlmEW07",
"Byl0YSBwaX",
"HkgA_NSvaQ",
"rJggb9NDaX",
"HJx45ONA2X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper introduces an adaptation of the Scattering transform to signals defined on graphs\nby relying on multi-scale diffusion wavelets, and studies a notion of stability of this representation\nwith respect to changes in the graph structure with an appropriate diffusion metric.\n\nThe notion of stability in con... | [
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
9
] | [
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5
] | [
"iclr_2019_BygqBiRcFQ",
"iclr_2019_BygqBiRcFQ",
"SJlDUcn_R7",
"B1glN93_0Q",
"HkgFlmEW07",
"iclr_2019_BygqBiRcFQ",
"HkgA_NSvaQ",
"Ske6Coy537",
"HJx45ONA2X",
"iclr_2019_BygqBiRcFQ"
] |
iclr_2019_Byl8BnRcYm | Capsule Graph Neural Network | The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.
Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weaknesses in existing GNN-based graph embedding algorithms. By extracting node features in the form of capsules, a routing mechanism can be utilized to capture important information at the graph level. As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs of various sizes, which also enables the model to focus on critical parts of the graphs.
Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that captures macroscopic properties of the whole graph in a data-driven manner. It outperforms other SOTA techniques on several graph classification tasks by virtue of this new mechanism. | accepted-poster-papers | AR1 asks for a clear experimental evaluation showing that capsules and dynamic routing help in the GCN setting. After rebuttal, AR1 seems satisfied that routing in CapsGNN might help generate 'more representative graph embeddings from different aspects'. AC strongly encourages the authors to improve the discussion on these 'different aspects' as currently it feels vague. AR2 is initially concerned about experimental evaluations and whether the attention mechanism works as expected, though he/she is happy with the revised experiments. AR3 would like to see all biological datasets included in experiments. He/she is also concerned about the lack of ability to preserve fine structures by CapsGNN. The authors leave this aspect of their approach for future work.
On balance, all reviewers felt this paper is a borderline paper. After going through all questions and responses, AC sees that many requests about aspects of the proposed method have not been clarified by the authors. However, reviewers note that the authors provided more evaluations/visualisations etc. The reviewers expressed hope (numerous times) that this initial attempt to introduce capsules into GCN will result in future developments and improvements. While AC thinks this is an overoptimistic view, AC will give the authors the benefit of the doubt and will advocate a weak accept.
The authors are asked to incorporate all modifications requested by the reviewers. Moreover, 'Graph capsule convolutional neural networks' is not a mere ArXiV work. It is an ICML workshop paper. Kindly check all ArXiV references and update with the actual conference venues. | train | [
"ByxrZRED3m",
"S1gMPXrcn7",
"rJlwCtK5nm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper fuses Capsule Networks with Graph Neural Networks. The idea seems technically correct and is well-written. With 13 pages the paper seems really long. Moreover, the experimental part seems to be too short. So, the theoretical and experimental part is not well-balanced.\n\nMinor concerns/ notes to the auth... | [
6,
6,
6
] | [
4,
4,
4
] | [
"iclr_2019_Byl8BnRcYm",
"iclr_2019_Byl8BnRcYm",
"iclr_2019_Byl8BnRcYm"
] |
iclr_2019_BylBr3C9K7 | Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking | Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices, while at the same time being required to operate in real-time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving techniques, our framework trains DNNs that provide higher accuracies under the same or lower energy budgets. | accepted-poster-papers | All of the reviewers agree that this is a well-written paper with the novel perspective of minimizing energy consumption in neural networks, as opposed to maximizing sparsity, which does not always correlate with energy cost. There are a number of promised clarifications and additional results that have emerged from the discussion that should be put into the final draft. Namely, describing the overhead of converting from sparse to dense representations, adding the Imagenet sparsity results, and adding the time taken to run the projection step. | val | [
"ByeTwU9O3m",
"SkgP-w3aRQ",
"r1xncHxi2X",
"Sylap5jTAX",
"ryxef2id07",
"H1g_J-vO6m",
"rJlG0gwdpQ",
"H1em0nLOTm",
"rygIrxvOpm",
"ByxoQev_T7",
"BJgr8CIOpX",
"ByeuQTS92Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a method for neural network training under a hard energy constraint (i.e. the method guarantees the energy consumption to be upper bounded). Based on a systolic array hardware architecture the authors model the energy consumption of transferring the weights and activations into different levels ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_BylBr3C9K7",
"ByxoQev_T7",
"iclr_2019_BylBr3C9K7",
"H1em0nLOTm",
"BJgr8CIOpX",
"r1xncHxi2X",
"r1xncHxi2X",
"r1xncHxi2X",
"ByeTwU9O3m",
"ByeTwU9O3m",
"ByeuQTS92Q",
"iclr_2019_BylBr3C9K7"
] |
iclr_2019_BylE1205Fm | Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer | We study the problem of learning to map, in an unsupervised way, between domains A and B, such that the samples \vb∈B contain all the information that exists in samples \va∈A and some additional information. For example, ignoring occlusions, B can be people with glasses, A people without, and the glasses would be the added information. When mapping a sample \va from the first domain to the other domain, the missing information is replicated from an independent reference sample \vb∈B. Thus, in the above example, we can create, for every person without glasses, a version with the glasses observed in any face image.
Our solution employs a single two-pathway encoder and a single decoder for both domains. The common part of the two domains and the separate part are encoded as two vectors, and the separate part is fixed at zero for domain A. The loss terms are minimal and involve reconstruction losses for the two domains and a domain confusion term. Our analysis shows that under mild assumptions, this architecture, which is much simpler than the guided-translation methods in the literature, is enough to ensure disentanglement between the two domains. We present convincing results in a few visual domains, such as no-glasses to glasses, adding facial hair based on a reference image, etc. | accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
The proposed method performed well on 3 visual content transfer problems.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The paper is hard to follow at times
- The problem being addressed is technically interesting but not well-motivated. That is, the question "why is this of interest to the ICLR community" was not well-answered.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
There were no major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| test | [
"Hyeo8WP92Q",
"Hkg9D83rTm",
"S1xt9pjBpX",
"rylZpTorpQ",
"rygvXKzET7",
"rJxq6Tr627",
"BylwDt7yaQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"This paper proposes an unsupervised style transfer method uses two-pathway encoder and a decoder for both domains. The loss function can be written using reconstruction losses and the confusion term. Experimental results are very promising comparing to state of the art methods. \n\nThe methodology presented in thi... | [
6,
-1,
-1,
-1,
6,
6,
-1
] | [
2,
-1,
-1,
-1,
3,
1,
-1
] | [
"iclr_2019_BylE1205Fm",
"iclr_2019_BylE1205Fm",
"rygvXKzET7",
"S1xt9pjBpX",
"iclr_2019_BylE1205Fm",
"iclr_2019_BylE1205Fm",
"Hyeo8WP92Q"
] |
iclr_2019_BylIciRcYQ | SGD Converges to Global Minimum in Deep Learning via Star-convex Path | Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks. However, there is still a lack of understanding on how and why SGD can train these complex networks towards a global minimum. In this study, we establish the convergence of SGD to a global minimum for nonconvex optimization problems that are commonly encountered in neural network training. Our argument exploits the following two important properties: 1) the training loss can achieve zero value (approximately), which has been widely observed in deep learning; 2) SGD follows a star-convex path, which is verified by various experiments in this paper. In such a context, our analysis shows that SGD, although it has long been considered a randomized algorithm, converges in an intrinsically deterministic manner to a global minimum. | accepted-poster-papers | The proposed notion of star convexity is interesting and the empirical work done to provide evidence that it is indeed present in real-world neural network training is appreciated. The reviewers raise a number of concerns. The authors were able to convince some of the reviewers with new experiments under MSE loss and experiments showing how robust the method was to the reference point. The most serious concerns relate to novelty and the assumption that individual functions share a global minimum with respect to which the path of iterates generated by SGD satisfies the star convexity property. I'm inclined to accept the authors' rebuttal, although it would have been nicer had the reviewer re-engaged. Overall, the paper is on the borderline. | val | [
"H1eq2Hdo07",
"B1xsEN2d37",
"H1g6MIMIAm",
"Hyg7evSKnQ",
"HkgubIm-Am",
"B1equOGWCm",
"SJxGnBGW07",
"HyxZqj0O27"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"While our submission is under review, a few related but very different studies [1,2,3] were posted on Arxiv very recently. We would like to briefly clarify our difference from these results here, which we will further include into the future version of this paper. \n\nThe studies [1,2,3] proved the optimization co... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
5
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
5
] | [
"iclr_2019_BylIciRcYQ",
"iclr_2019_BylIciRcYQ",
"SJxGnBGW07",
"iclr_2019_BylIciRcYQ",
"HyxZqj0O27",
"B1xsEN2d37",
"Hyg7evSKnQ",
"iclr_2019_BylIciRcYQ"
] |
iclr_2019_BylQV305YQ | Toward Understanding the Impact of Staleness in Distributed Machine Learning | Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development in large-scale ML, the effect of staleness on the learning efficiency is inconclusive, mainly because it is challenging to control or monitor the staleness in complex distributed environments. In this work, we study the convergence behaviors of a wide array of ML models and algorithms under delayed updates. Our extensive experiments reveal the rich diversity of the effects of staleness on the convergence of ML algorithms and offer insights into seemingly contradictory reports in the literature. The empirical findings also inspire a new convergence analysis of SGD in non-convex optimization under staleness, matching the best-known convergence rate of O(1/\sqrt{T}). | accepted-poster-papers | The reviewers that provided extensive and technically well-justified reviews agreed that the paper is of high quality. The authors are encouraged to make sure all concerns of these reviewers are properly addressed in the paper. | test | [
"B1e7cN_6RX",
"rJlVH62mo7",
"rJgnEvDxCX",
"SkgQs-Yg07",
"ByeqAlqeC7",
"ryeUVuDg0Q",
"B1e0R75T2m",
"SJxI4taJ2X"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"LSTM is indeed an interesting piece to add. We have added new results on LSTMs in Appendix A.8 -- we vary the number of layers of LSTMs (see Figure 13) and types of SGD algorithms (see Figure 14), and have observed that (1) staleness impacts deeper network variants more than shallower counterparts, which is consis... | [
-1,
7,
-1,
-1,
-1,
-1,
4,
9
] | [
-1,
5,
-1,
-1,
-1,
-1,
5,
4
] | [
"SJxI4taJ2X",
"iclr_2019_BylQV305YQ",
"iclr_2019_BylQV305YQ",
"SJxI4taJ2X",
"rJlVH62mo7",
"B1e0R75T2m",
"iclr_2019_BylQV305YQ",
"iclr_2019_BylQV305YQ"
] |
iclr_2019_ByldlhAqYQ | Transfer Learning for Sequences via Learning to Collocate | Transfer learning aims to solve data sparsity in a specific domain by applying information from another domain. Given a sequence (e.g. a natural language sentence), transfer learning, usually enabled by a recurrent neural network (RNN), represents the sequential information transfer. An RNN uses a chain of repeating cells to model sequence data. However, previous studies of neural-network-based transfer learning simply transfer information across whole layers, which is infeasible for seq2seq and sequence labeling. Meanwhile, such layer-wise transfer learning mechanisms also lose the fine-grained cell-level information from the source domain.
In this paper, we propose the aligned recurrent transfer, ART, to achieve cell-level information transfer. ART operates in a recurrent manner in which different cells share the same parameters. Besides transferring the corresponding information at the same position, ART transfers information from all collocated words in the source domain. This strategy enables ART to capture word collocations across domains in a more flexible way. We conducted extensive experiments on both sequence labeling tasks (POS tagging, NER) and sentence classification (sentiment analysis). ART outperforms the state-of-the-art over all experiments.
| accepted-poster-papers | This paper presents a method for transferring source information via the hidden states of recurrent networks. The transfer happens via an attention mechanism that operates between the target and the source. Results on two tasks are strong.
I found this paper similar in spirit to Hypernetworks (David Ha, Andrew Dai, Quoc V Le, ICLR 2016) since there too there is a dynamic weight generation for network given another network, although this method did not use an attention mechanism.
However, reviewers thought that there is merit in this paper (though they pointed the authors to other related work) and that the empirical results are solid.
| train | [
"rJgs-aD6Am",
"S1g0uXesAX",
"r1xwngaY3Q",
"SklbRJ0_37",
"ByeMKYAtCX",
"rkxdELsKRm",
"H1ltRziK0X",
"rJeHRZoYC7",
"Hye7v16q3m"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"For your detailed writing advices, we have rewritten the two sentences accordingly.\n\n1.\tWe rewrote the sentence \n“ART discriminates between information of the corresponding position and that of all positions with collocated words.” \nto \n“For each word in the target domain, ART learns to incorporate two types... | [
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
3
] | [
"ByeMKYAtCX",
"ByeMKYAtCX",
"iclr_2019_ByldlhAqYQ",
"iclr_2019_ByldlhAqYQ",
"H1ltRziK0X",
"r1xwngaY3Q",
"SklbRJ0_37",
"Hye7v16q3m",
"iclr_2019_ByldlhAqYQ"
] |
iclr_2019_ByleB2CcKm | Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure | Clustering methods and latent variable models are often used as tools for pattern mining and discovery of latent structure in time-series data. In this work, we consider the problem of learning procedural abstractions from possibly high-dimensional observational sequences, such as video demonstrations. Given a dataset of time-series, the goal is to identify the latent sequence of steps common to them and label each time-series with the temporal extent of these procedural steps. We introduce a hierarchical Bayesian model called Prism that models the realization of a common procedure across multiple time-series, and can recover procedural abstractions with supervision. We also bring to light two characteristics ignored by traditional evaluation criteria when evaluating latent temporal labelings (temporal clusterings) -- segment structure, and repeated structure -- and develop new metrics tailored to their evaluation. We demonstrate that our metrics improve interpretability and ease of analysis for evaluation on benchmark time-series datasets. Results on benchmark and video datasets indicate that Prism outperforms standard sequence models as well as state-of-the-art techniques in identifying procedural abstractions. | accepted-poster-papers | While the reviews of this paper were somewhat mixed (7,6,4), I ended up favoring acceptance because of the thorough author responses, and the novelty of what is being examined.
The reviewer with a score of 4 argues that this work is not a good fit for ICLR, but although tailoring new metrics may not be a commonly explored area, I don't believe it falls outside the range of ICLR's interests, and it therefore also makes the work more unique. | test | [
"ryxNNJIs2m",
"rkgO_O9zxE",
"HyxQ798507",
"Ske7htL5AX",
"B1x2lF85Rm",
"SyxHYuUcAQ",
"HygJ8OLcCQ",
"ryxCFI8907",
"Hyx08Tcp2Q",
"BkgfMLD42X"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This is a hybrid paper, making contributions on two related fronts:\n1. the paper proposes a performance metric for sequence labeling, capturing salient qualities missed by other metrics, and\n2. the paper also proposes a new sequence labeling method based on inference in a hierarchical Bayesian model, focused on ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_ByleB2CcKm",
"ryxNNJIs2m",
"Ske7htL5AX",
"B1x2lF85Rm",
"ryxNNJIs2m",
"HygJ8OLcCQ",
"BkgfMLD42X",
"Hyx08Tcp2Q",
"iclr_2019_ByleB2CcKm",
"iclr_2019_ByleB2CcKm"
] |
iclr_2019_Bylmkh05KX | Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching | We consider the problem of training speech recognition systems without using any labeled data, under the assumption that the learner can only access the input utterances and a phoneme language model estimated from a non-overlapping corpus. We propose a fully unsupervised learning algorithm that alternates between solving two sub-problems: (i) learning a phoneme classifier for a given set of phoneme segmentation boundaries, and (ii) refining the phoneme boundaries based on a given classifier. To solve the first sub-problem, we introduce a novel unsupervised cost function named Segmental Empirical Output Distribution Matching, which generalizes the work in (Liu et al., 2017) to segmental structures. For the second sub-problem, we develop an approximate MAP approach to refining the boundaries obtained from Wang et al. (2017). Experimental results on the TIMIT dataset demonstrate the success of this fully unsupervised phoneme recognition system, which achieves a phone error rate (PER) of 41.6%. Although it is still far from state-of-the-art supervised systems, we show that with oracle boundaries and a matching language model, the PER could be improved to 32.5%. This performance approaches that of a supervised system with the same model architecture, demonstrating the great potential of the proposed method. | accepted-poster-papers | This paper is about unsupervised learning for ASR, by matching the acoustic distribution, learned unsupervisedly, with a prior phone-LM distribution. Overall, the results look good on TIMIT. Reviewers agree that this is a well-written paper and that it has interesting results.
Strengths
- Novel formulation for unsupervised ASR, and a non-trivial extension to previously proposed unsupervised classification to segmental level.
- Well written, with strong results. Improved results and analysis based on review feedback.
Weaknesses
- Results are on TIMIT -- a small phone recognition task.
- Unclear how it extends to large vocabulary ASR tasks, and tasks that have large scale training data, and RNNs that may learn implicit LMs. The authors propose to deal with this in future work.
Overall, the reviewers agree that this is an excellent contribution with strong results. Therefore, it is recommended that the paper be accepted. | test | [
"B1leQl1lRQ",
"Hyl_IuId37",
"rylDvvMp67",
"H1ekM-ah67",
"HJeZfyTnp7",
"HyxpXfphpQ",
"Skgk33o6hm",
"BkxJWYi6n7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I want to thank the authors for addressing my concerns. I understand that their focus was not exactly the same as in previous work, but want to thank the authors for nevertheless adding the additional motivations and extra analysis. I believe that this will help situate this work better within this area, and als... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"HyxpXfphpQ",
"iclr_2019_Bylmkh05KX",
"iclr_2019_Bylmkh05KX",
"BkxJWYi6n7",
"Skgk33o6hm",
"Hyl_IuId37",
"iclr_2019_Bylmkh05KX",
"iclr_2019_Bylmkh05KX"
] |
iclr_2019_Bylnx209YX | Adversarial Attacks on Graph Neural Networks via Meta Learning | Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers. | accepted-poster-papers | The paper proposes a method for investigating the robustness of graph neural nets on the node classification problem; training-time attacks perturbing the graph structure are generated using a meta-learning approach. Reviewers agree that the contribution is novel and that the empirical results support the validity of the approach.
| train | [
"SkeoiDVcAX",
"HJe1c7y_27",
"rJlQtgaW0m",
"B1xHHl6Z0m",
"BJxOYyT-0m",
"Skli62kQ6Q",
"BkltS1uMpQ",
"H1lFhSrF3X",
"HkeXZtLM2m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"The authors have made efforts in addressing my concerns and have improved their paper. ",
"This paper studies the problem of learning a better poisoned graph parameters that can maximize the loss of a graph neural network. The proposed using meta-learning to compute the second-order derivatives to get the meta-g... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"BJxOYyT-0m",
"iclr_2019_Bylnx209YX",
"HkeXZtLM2m",
"HJe1c7y_27",
"H1lFhSrF3X",
"BkltS1uMpQ",
"iclr_2019_Bylnx209YX",
"iclr_2019_Bylnx209YX",
"iclr_2019_Bylnx209YX"
] |
iclr_2019_ByloIiCqYQ | Maximal Divergence Sequential Autoencoder for Binary Software Vulnerability Detection | Due to the sharp increase in the severity of the threat imposed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security. However, most of the work in binary code vulnerability detection has relied on handcrafted features which are manually chosen by a select few, knowledgeable domain experts. In this paper, we attempt to alleviate this severe binary vulnerability detection bottleneck by leveraging recent advances in deep learning representations and propose the Maximal Divergence Sequential Auto-Encoder. In particular, latent codes representing vulnerable and non-vulnerable binaries are encouraged to be maximally divergent, while still being able to maintain crucial information from the original binaries. We conducted extensive experiments to compare and contrast our proposed methods with the baselines, and the results show that our proposed methods outperform the baselines in all performance measures of interest. | accepted-poster-papers |
* Strengths
This paper applies deep learning to the domain of cybersecurity, which is non-traditional relative to more common domains such as vision and speech. I see this as a strength. Additionally, the paper curates a dataset that may be of broader interest.
* Weaknesses
While the empirical results are good, there appears to be limited conceptual novelty. However, this is fine for a paper that is providing a new task in an interesting application domain.
* Discussion
Some reviewers were concerned about whether the dataset is a substantial contribution, as it is created based on existing publicly available data. However, these concerns were addressed by the author responses and all reviewers now agree with accepting the paper. | train | [
"H1eNrWoxyV",
"BJga1_cxyN",
"S1l--zJuTm",
"SklIUa9mAm",
"SkgU5i5m0Q",
"r1e4s7lT6Q",
"SkeGRelT6X",
"Hke2AkxTpX",
"ByxCw4yT67",
"ryeuP2Ikam",
"Skl_mb8yTX",
"SkevBv5dhm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Many thanks for your decision. We appreciate this.",
"Based on the response and the revision, I would like to increase my score to 6.",
"This paper proposes a variational autoencoder-based architecture for binary code embedding. For evaluation, they construct a dataset by compiling source code in the NDSS18 da... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2
] | [
"BJga1_cxyN",
"r1e4s7lT6Q",
"iclr_2019_ByloIiCqYQ",
"SkgU5i5m0Q",
"ByxCw4yT67",
"S1l--zJuTm",
"Skl_mb8yTX",
"SkevBv5dhm",
"ryeuP2Ikam",
"iclr_2019_ByloIiCqYQ",
"iclr_2019_ByloIiCqYQ",
"iclr_2019_ByloIiCqYQ"
] |
iclr_2019_ByloJ20qtm | Neural Program Repair by Jointly Learning to Localize and Repair | Due to its potential to improve programmer productivity and software quality, automated program repair has been an active topic of research. Newer techniques harness neural networks to learn directly from examples of buggy programs and their fixes. In this work, we consider a recently identified class of bugs called variable-misuse bugs. The state-of-the-art solution for variable misuse enumerates potential fixes for all possible bug locations in a program, before selecting the best prediction. We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs. We present multi-headed pointer networks for this purpose, with one head each for localization and repair. The experimental results show that the joint model significantly outperforms an enumerative solution that uses a pointer based model for repair alone. | accepted-poster-papers | This paper provides an approach to jointly localize and repair VarMisuse bugs, where a wrong variable from the context has been used. The proposed work provides an end-to-end training pipeline for jointly localizing and repairing, as opposed to independent predictions in existing work. The reviewers felt that the manuscript was very well-written and clear, with fairly strong results on a number of datasets.
The reviewers and AC note the following potential weaknesses: (1) reviewer 4 brings up related approaches from automated program repair (APR) that are much more general than VarMisuse bugs, and notes that the paper lacks citations and comparisons to them, (2) the baselines compared against are fairly weak, and some recent approaches like DeepBugs and Sk_p are ignored, (3) the approach is trained and evaluated only on synthetic bugs, which look very different from realistic ones, and (4) the contributions were found to be restricted in novelty, as the approach just uses a pointer-based LSTM for locating and fixing bugs.
The authors provided detailed comments and a revision to address and clarify these concerns. They added an evaluation on realistic bugs, along with differences from DeepBugs and Sk_p, and differences between neural and automated program repair. They also added more detailed comparisons, including separating the localization vs. repair aspects by comparing against enumeration. During the discussion, the reviewers disagreed on the "weakness" of the baseline, as reviewers 1 and 4 felt it is a reasonable baseline since it builds upon the Allamanis paper. They found, to differing degrees, that the results on realistic bugs are much more convincing than the synthetic bug evaluation. Finally, all reviewers agree that the novelty of this work is limited.
Although the reviewers disagree on the strength of the baselines (a recent paper) and the evaluation benchmarks, they agreed that the results are quite strong. The paper, however, addressed many of the concerns in the response/revision, and thus, the reviewers agree that it meets the bar for acceptance. | val | [
"Byg7TN9daQ",
"rkxYwwjqTQ",
"BJe25LDiRX",
"B1ghy9f5Rm",
"H1gNmih4Am",
"SJg8Qn6gAQ",
"r1eodmKxRQ",
"BJx4h1YgCX",
"H1ecT0_gR7",
"H1gAbwMlAX",
"HJgADrtPT7",
"S1lIQfpZ6X",
"SyeyTXFyTQ",
"HkgAKQg93Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This paper considers the problem of VarMisuse, a kind of software bug where a variable has been misused. Existing approaches to the problem create a complex model, followed by enumerating all possible variable replacements at all possible positions, in order to identify where the bug may exist. This can be problem... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2019_ByloJ20qtm",
"iclr_2019_ByloJ20qtm",
"SJg8Qn6gAQ",
"H1gNmih4Am",
"BJx4h1YgCX",
"H1gAbwMlAX",
"iclr_2019_ByloJ20qtm",
"rkxYwwjqTQ",
"Byg7TN9daQ",
"HJgADrtPT7",
"S1lIQfpZ6X",
"SyeyTXFyTQ",
"HkgAKQg93Q",
"iclr_2019_ByloJ20qtm"
] |
iclr_2019_Byx83s09Km | Information-Directed Exploration for Deep Reinforcement Learning | Efficient exploration remains a major challenge for reinforcement learning. One reason is that the variability of the returns often depends on the current state and action, and is therefore heteroscedastic. Classical exploration strategies such as upper confidence bound algorithms and Thompson sampling fail to appropriately account for heteroscedasticity, even in the bandit setting. Motivated by recent findings that address this issue in bandits, we propose to use Information-Directed Sampling (IDS) for exploration in reinforcement learning. As our main contribution, we build on recent advances in distributional reinforcement learning and propose a novel, tractable approximation of IDS for deep Q-learning. The resulting exploration strategy explicitly accounts for both parametric uncertainty and heteroscedastic observation noise. We evaluate our method on Atari games and demonstrate a significant improvement over alternative approaches. | accepted-poster-papers | The paper introduces a method for using information directed sampling, by taking advantage of recent advances in computing parametric uncertainty and variance estimates for returns. These estimates are used to estimate the information gain, based on a formula from (Kirschner & Krause, 2018) for the bandit setting. This paper takes these ideas and puts them together in a reasonably easy-to-use and understandable way for the reinforcement learning setting, which is both nontrivial and useful. The work then demonstrates some successes in Atari. Though it is of course laudable that the paper runs on 57 Atari games, it would make the paper even stronger if a simpler setting (some toy domain) was investigated to more systematically understand this approach and some choices in the approach. | val | [
"SJxIO823n7",
"B1xRtDI937",
"BkxudvgP27",
"B1lbK9KWCX",
"rylLQ5Y-A7",
"rkxMFKYbAQ",
"SJg6C-K8jm",
"rylspvI-9m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"Combining the parametric uncertainty of bootstrapped DQN with the return uncertainty of C51, the authors propose a deep RL algorithm that can explore in the presence of heteroscedasticity. The motivation is quite well written, going through IDS and the approximations in a way that didn't presume prior familiarity.... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_Byx83s09Km",
"iclr_2019_Byx83s09Km",
"iclr_2019_Byx83s09Km",
"BkxudvgP27",
"B1xRtDI937",
"SJxIO823n7",
"rylspvI-9m",
"iclr_2019_Byx83s09Km"
] |
iclr_2019_ByxBFsRqYm | Attention, Learn to Solve Routing Problems! | The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development. However, to push this idea towards practical implementation, we need better models and better ways of training. We contribute in both directions: we propose a model based on attention layers with benefits over the Pointer Network and we show how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which we find is more efficient than using a value function. We significantly improve over recent learned heuristics for the Travelling Salesman Problem (TSP), getting close to optimal results for problems up to 100 nodes. With the same hyperparameters, we learn strong heuristics for two variants of the Vehicle Routing Problem (VRP), the Orienteering Problem (OP) and (a stochastic variant of) the Prize Collecting TSP (PCTSP), outperforming a wide range of baselines and getting results close to highly optimized and specialized algorithms. | accepted-poster-papers | The paper presents a new deep learning approach for combinatorial optimization
problems based on the Transformer architecture. The paper is well written
and several experiments are provided. A reviewer asked for more intuition behind
the proposed approach, and the authors have responded accordingly. Reviewers are
also concerned about scalability and the theoretical basis of the method.
Overall, all reviewers were positives in their scores, and I recommend accepting the paper. | train | [
"rkl7nk8QkN",
"Bke7cL5jnX",
"rylWKa1hCX",
"HkgKKfy56m",
"Bye8WGJcaX",
"SkePlW15aQ",
"rJeXVEsdnm",
"BkeH3bYNn7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Wonderful - I have updated my score accordingly.",
"This paper proposes an alternative deep learning model for use in combinatorial optimization. The attention model is inspired by the Transformer architecture of Vaswani et al. (2017). Given a distribution over problem instances (e.g. TSP), the REINFORCE update ... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
5,
-1,
-1,
-1,
-1,
5,
5
] | [
"SkePlW15aQ",
"iclr_2019_ByxBFsRqYm",
"HkgKKfy56m",
"BkeH3bYNn7",
"rJeXVEsdnm",
"Bke7cL5jnX",
"iclr_2019_ByxBFsRqYm",
"iclr_2019_ByxBFsRqYm"
] |
iclr_2019_ByxGSsR9FQ | L2-Nonexpansive Neural Networks | This paper proposes a class of well-conditioned neural networks in which a unit amount of change in the inputs causes at most a unit amount of change in the outputs or any of the internal layers. We develop the known methodology of controlling Lipschitz constants to realize its full potential in maximizing robustness, with a new regularization scheme for linear layers, new ways to adapt nonlinearities and a new loss function. With MNIST and CIFAR-10 classifiers, we demonstrate a number of advantages. Without needing any adversarial training, the proposed classifiers exceed the state of the art in robustness against white-box L2-bounded adversarial attacks. They generalize better than ordinary networks from noisy data with partially random labels. Their outputs are quantitatively meaningful and indicate levels of confidence and generalization, among other desirable properties. | accepted-poster-papers |
* Strengths
This paper studies adversarial robustness to perturbations that are bounded in the L2 norm. It is motivated by a theoretical sufficient condition (non-expansiveness) but rather than trying to formally verify robustness, it uses this condition as inspiration, modifying standard network architectures in several ways to encourage non-expansiveness while mostly preserving computational efficiency and accuracy. This “theory-inspired practically-focused” hybrid is a rare perspective in this area and could fruitfully inspire further improvements. Finally, the paper came under substantial scrutiny during the review period (there are 65 comments on the page) and the authors have convincingly answered a number of technical criticisms.
* Weaknesses
One reviewer and some commenters were concerned that the L2 norm is not a realistic norm to measure adversarial attacks in. There were also concerns that the empirical level of robustness of the network was too weak to be meaningful. In addition, while some parts of the experiments were thorough and some parts of the paper were well-presented, the quality was not uniform throughout. Finally, while the proposed changes improve adversarial robustness, they also decrease the accuracy of the network on clean examples (this is to be expected but may be an issue in practice).
* Discussion
There was substantial disagreement on whether to accept the paper. On the one hand, there has been limited progress on robustness to adversarial examples (even under simple norms such as the L2 norm) and most methods that do work are based on formal verification and therefore quite computationally expensive. On the other hand, simple norms such as the L2 norm are somewhat contrived and mainly chosen for convenience (although doing well in the L2 norm is a necessary condition for being robust to more general attacks). Moreover, the empirical results are currently too weak to confer meaningful robustness even under the L2 norm.
* Decision
While I agree with the reviewers and commenters who are skeptical of the L2 norm model (and would very much like to see approaches that consider more realistic threat models), I decided to accept the paper for two reasons: first, doing well in L2 is a necessary condition for doing well in more general models, and the ideas and approach here are simple enough that they might provide inspiration in these more general models as well. Additionally, this was one of the strongest adversarial defense papers at ICLR this year in terms of credibility of the claims (certainly the strongest in my pile) and contains several useful ideas as well as novel empirical findings (such as the increased success of attacks up to 1 million iterations).
"SJehfLMzlE",
"BkeP1htbxN",
"H1ezI47-gV",
"BJg5F5RxgV",
"HygKL1jgg4",
"HyeXrBUggN",
"Syx6Z5egeN",
"BJgb-yNJgN",
"r1lcLW2TkN",
"HJlmeQpHkN",
"HJx15w00RQ",
"Syg_IW86hm",
"ryxW1EcRRQ",
"Bkly6cc60Q",
"HJgJxX5TRX",
"r1e3-5Hh0m",
"B1xLfvQnRm",
"HkxISreiCm",
"Bklz239KCX",
"H1eKGidtnm"... | [
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"public",
"public",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"public",
"official_reviewer",
"public",
"author",
"author"... | [
"Regarding the specific points:\n-- We chose the best available baselines, i.e. classifiers from Madry et al. (2017), which happen to be trained with L_inf adversary. There seem to be no available models that are trained with L2 adversary, and the only paper that talked about such models, i.e. https://arxiv.org/pdf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-... | [
"BkeP1htbxN",
"H1ezI47-gV",
"BJg5F5RxgV",
"HygKL1jgg4",
"HyeXrBUggN",
"Syx6Z5egeN",
"BJgb-yNJgN",
"S1xf3ovhpm",
"Bklz239KCX",
"ryeApYUEpX",
"ryxW1EcRRQ",
"iclr_2019_ByxGSsR9FQ",
"B1gzYu84p7",
"HJgJxX5TRX",
"r1e3-5Hh0m",
"B1xLfvQnRm",
"HkxISreiCm",
"Bklz239KCX",
"iclr_2019_ByxGSsR... |
iclr_2019_ByxPYjC5KQ | Improving Generalization and Stability of Generative Adversarial Networks | Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions. However, generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large scale datasets verify our theoretical analysis.
| accepted-poster-papers | The paper received unanimous accept ratings from the reviewers (7,7,6), and is hence proposed as a definite accept. | train | [
"SklRxguMJE",
"Bkgv8HHx3Q",
"S1xy_MQnCQ",
"S1xjW_viAX",
"rJex_oBqCX",
"rkxFtYUtCQ",
"rJe8HnEFC7",
"ByeSlr4gR7",
"HJgkLVVlCX",
"BJgwGQVxRQ",
"HJeB6kCJpQ",
"Skgb5H01TQ",
"H1gwUJ0yTQ",
"rJg7SvH5hQ",
"S1lCpY4dn7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for increasing the rating. We have performed another experiment to compare the generalization of GANs. The detail of the experiment is as follows:\n\n1. Experiment setup \nDataset: We stack 3 MNIST images into an RGB image, resulting in a dataset of 1000 major modes. \nNetwork architecture: Generator and... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"S1xy_MQnCQ",
"iclr_2019_ByxPYjC5KQ",
"rkxFtYUtCQ",
"rJex_oBqCX",
"iclr_2019_ByxPYjC5KQ",
"rJe8HnEFC7",
"BJgwGQVxRQ",
"rJg7SvH5hQ",
"S1lCpY4dn7",
"Bkgv8HHx3Q",
"Bkgv8HHx3Q",
"rJg7SvH5hQ",
"S1lCpY4dn7",
"iclr_2019_ByxPYjC5KQ",
"iclr_2019_ByxPYjC5KQ"
] |
iclr_2019_ByxZX20qFQ | Adaptive Input Representations for Neural Language Modeling | We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train as the popular character input CNN while having a lower number of parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result, and on the Billion Word benchmark, we achieve 23.02 perplexity. | accepted-poster-papers | There is a clear consensus among the reviews to accept this submission, thus I am recommending acceptance. The paper makes a clear, if modest, contribution to language modeling that is likely to be valuable to many other researchers.
"SJgnkDkMCm",
"ryloqL1G0Q",
"Hke6aByfA7",
"S1ePmKlW0Q",
"S1gSrFe-R7",
"SkxTKpJwa7",
"H1xamJD0hQ",
"H1lJ9-Pdn7",
"HJxeCIut2Q"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The primary goal of the projections is to project all embeddings into the model dimension d so that we can have variable sized embeddings. Our goal was not to make the model model expressive. Compared to the rest of the model, these projections add very little overhead compared to the rest of the model. Doing with... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"H1lJ9-Pdn7",
"HJxeCIut2Q",
"H1xamJD0hQ",
"iclr_2019_ByxZX20qFQ",
"SkxTKpJwa7",
"iclr_2019_ByxZX20qFQ",
"iclr_2019_ByxZX20qFQ",
"iclr_2019_ByxZX20qFQ",
"iclr_2019_ByxZX20qFQ"
] |
iclr_2019_ByxkijC5FQ | Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology | While many approaches to make neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data. Measures for characterizing and monitoring structural properties, however, have not been developed. In this work, we propose neural persistence, a complexity measure for neural network architectures based on topological data analysis on weighted stratified graphs. To demonstrate the usefulness of our approach, we show that neural persistence reflects best practices developed in the deep learning community such as dropout and batch normalization. Moreover, we derive a neural persistence-based stopping criterion that shortens the training process while achieving comparable accuracies as early stopping based on validation loss. | accepted-poster-papers | The paper presents a topological complexity measure of neural networks based on the persistence 0-homology of the weights in each layer. Some lower and upper bounds on the p-norm of the persistence diagram are derived, which lead to a normalized persistence metric. The main discovery of such a topological complexity measure is that it leads to a stability-based early stopping criterion without statistical cross-validation, as well as distinct characterizations of random initialization, batch normalization, and dropout. Experiments are conducted with simple networks on the MNIST, Fashion-MNIST, CIFAR10, and IMDB datasets.
The main concerns from the reviewers are that the experimental studies are still preliminary and the understanding of the observed interesting phenomena is premature. The authors made comprehensive responses to the raised questions with new experiments, and some reviewers raised their ratings.
The reviewers all agree that the paper presents a novel study of neural networks from an algebraic topology perspective, with interesting results that have not been seen before. The paper is thus suggested as a borderline lean accept.
| train | [
"Syl--iQzJE",
"HygcYhVqnm",
"ByxpGah6R7",
"BJgk_yOORX",
"B1gkIaPO0X",
"rket7Pu_Rm",
"HJxw_vuu0m",
"B1xgWZA02m",
"r1x6sUQ6nm",
"S1eODZ5u5Q",
"BJe77QJ_qX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for this very positive change! We aim to update the discussion concerning Fig. 11 in a revision of the paper.",
"The authors, motivated by work in topological graph analysis, introduce a new broadly applicable complexity measure they call neural persistence--essentially a sum over norms of persistence... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
4,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
-1,
-1
] | [
"ByxpGah6R7",
"iclr_2019_ByxkijC5FQ",
"BJgk_yOORX",
"HygcYhVqnm",
"r1x6sUQ6nm",
"B1xgWZA02m",
"BJe77QJ_qX",
"iclr_2019_ByxkijC5FQ",
"iclr_2019_ByxkijC5FQ",
"BJe77QJ_qX",
"iclr_2019_ByxkijC5FQ"
] |
iclr_2019_Byxpfh0cFm | Efficient Augmentation via Data Subsampling | Data augmentation is commonly used to encode invariances in learning methods. However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set. The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply. In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset. We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation. | accepted-poster-papers | The paper proposes several subsampling policies that achieve a clear reduction in the size of the augmented data while maintaining the accuracy of standard data augmentation. The paper in general is clearly written and easy to follow, and provides sufficiently convincing experimental results to support the claim. After reading the authors' response and revision, the reviewers have reached a general consensus that the paper is above the acceptance bar.
"HylD3z4ch7",
"SkgnhejSC7",
"SJgIeO84CQ",
"r1xc5LLE0m",
"Hkldxs7qnX",
"SyxMijTQhX"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The authors study the problem of identifying subsampling strategies for data augmentation, primarily for encoding invariances in learning methods. The problem seems relevant with applications to learning invariances as well as close connections with the covariate shift problem. \n\nContributions: The key ... | [
6,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_Byxpfh0cFm",
"HylD3z4ch7",
"SyxMijTQhX",
"Hkldxs7qnX",
"iclr_2019_Byxpfh0cFm",
"iclr_2019_Byxpfh0cFm"
] |
iclr_2019_ByzcS3AcYX | Neural TTS Stylization with Adversarial and Collaborative Games | The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a result, the proposed model delivers a highly controllable generator, and a disentangled representation. Benefiting from the separate modeling of style and content, our model can generate human-fidelity speech that satisfies the desired style conditions. Our model achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice). | accepted-poster-papers | The paper proposes using GANs for disentangling style information from speech content, thereby improving style transfer in TTS. The reviews and responses for this paper have been especially thorough! The authors significantly improved the paper during the review process, as pointed out by the reviewers.
Inclusion of additional baselines, evaluations, and ablation analyses improved the overall quality of the paper and helped alleviate the concerns raised by the reviewers. Therefore, it is recommended that the paper be accepted for publication.
"HyeiSBJQT7",
"SJg099o2RX",
"BJl11ws2AX",
"B1lxfzs30Q",
"r1gu2kHsAQ",
"r1esueVMp7",
"BygbTKVoR7",
"ryl7IwesAX",
"rkeebN1oR7",
"ByejWaRc0Q",
"SJeKU5Ic27",
"H1gefjatCm",
"HJgZD2nYRQ",
"Hklxr33Y0m",
"rJe7lY3KCQ",
"BJebIsvXpm",
"BJxRuqPQam",
"Byg1SllT37"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_rev... | [
"This paper proposes to use GAN to disentangle style information from speech content. The presentation of the core idea is clear but IMO there are some key missing details and experiments.\n\n* The paper mentions '....the model could simply learn to copy the waveform information from xaud to the output and ignore s... | [
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_ByzcS3AcYX",
"BJl11ws2AX",
"B1lxfzs30Q",
"rJe7lY3KCQ",
"BygbTKVoR7",
"iclr_2019_ByzcS3AcYX",
"rkeebN1oR7",
"ByejWaRc0Q",
"HJgZD2nYRQ",
"Hklxr33Y0m",
"iclr_2019_ByzcS3AcYX",
"BJxRuqPQam",
"Hklxr33Y0m",
"r1esueVMp7",
"HyeiSBJQT7",
"Byg1SllT37",
"SJeKU5Ic27",
"iclr_2019_Byz... |
iclr_2019_H1MW72AcK7 | Optimal Control Via Neural Networks: A Convex Approach | Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability.
In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers can be achieved via solving a convex model predictive control problem. Experimental results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular we show that in the MuJoCo locomotion tasks, we could achieve over 10% higher performance using 5 times less time compared with a state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.
| accepted-poster-papers | The paper makes progress on a problem that is still largely unexplored, presents promising results, and builds bridges with
prior work on optimal control. It designs input convex recurrent neural networks to capture temporal behavior of
dynamical systems; this then allows optimal controllers to be computed by solving a convex model predictive control problem.
There were initial critiques regarding some of the claims. These have now been clarified.
Also, there is in the end a compromise between the (necessary) approximations of the input-convex model and the true dynamics, and being able to compute an optimal result.
Overall, all reviewers and the AC are in agreement to see this paper accepted.
There was extensive and productive interaction between the reviewers and authors.
It makes contributions that will be of interest to many, and builds interesting bridges with known control methods. | val | [
"Hke3H7li1N",
"B1ewimxjkV",
"BkgQZvtByN",
"S1ljLTYjnm",
"SJeInZNZR7",
"r1gu-tJJRQ",
"r1lZ9YkkC7",
"r1lKWukkCQ",
"B1e8JrJ1Am",
"S1eqHIJyCm",
"HklIprkJRm",
"H1eRNOBqn7",
"rJxXxUZ92X",
"ryxdDdm3F7"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewer again for carefully reading our manuscript and providing so many valuable feedbacks. We address reviewer’s concern as follows:\n\n1) One limitation that is now present in the revised version of the paper that was not present in the original submission is that these dynamics models do *not* su... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
-1
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1
] | [
"BkgQZvtByN",
"Hke3H7li1N",
"S1eqHIJyCm",
"iclr_2019_H1MW72AcK7",
"iclr_2019_H1MW72AcK7",
"rJxXxUZ92X",
"iclr_2019_H1MW72AcK7",
"H1eRNOBqn7",
"S1ljLTYjnm",
"HklIprkJRm",
"B1e8JrJ1Am",
"iclr_2019_H1MW72AcK7",
"iclr_2019_H1MW72AcK7",
"iclr_2019_H1MW72AcK7"
] |
iclr_2019_H1MgjoR9tQ | CBOW Is Not All You Need: Combining CBOW with the Compositional Matrix Space Model | Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a
learning algorithm for the Compositional Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW-model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%. | accepted-poster-papers | This paper presents CMOW—an unsupervised sentence representation learning method that treats sentences as the product of their word matrices. This method is not entirely novel, as the authors acknowledge, but it has not been successfully applied to downstream tasks before. This paper presents methods for successfully training it, and shows results on the SentEval benchmark suite for sentence representations and an associated set of analysis tasks.
All three reviewers agree that the results are unimpressive: CMOW is no better than the faster CBOW baseline on most tasks, and the combination of the two is only marginally better than CBOW. However, CMOW does show some real advantages on the analysis tasks. No reviewer has any major correctness concerns that I can see.
As I see it, this paper is borderline, but narrowly worth accepting: As a methods paper, it presents weak results, and it's not likely that many practitioners will leap to use the method. However, the method is so appealingly simple and well known that there is some value in seeing this as an analysis paper that thoroughly evaluates it. Because it is so simple, it will likely be of interest to researchers beyond just the NLP domain in which it is tested (as CBOW-style models have been), so ICLR seems like an appropriate venue. It seems like it's in the community's best interest to see a method like this be evaluated, and since this paper appears to offer a thorough and sound evaluation, I recommend acceptance. | train | [
"Hyld2_TykV",
"H1e3m8gvRm",
"Byg69SmWhQ",
"H1eTBddx0X",
"SJe4Q3-x07",
"BkxNis-xC7",
"rJl-95bx0X",
"Bkgah1-i37",
"rkxCqDVshm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer,\n\nDue to its brevity, your comment leaves a lot of room for interpretation, and we are not sure if we understood your concerns correctly. Nevertheless, we would like to address them in the following.\n\nTo our understanding, your main concerns with our paper now is that\n\n1. We study a problem tha... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"H1e3m8gvRm",
"rkxCqDVshm",
"iclr_2019_H1MgjoR9tQ",
"SJe4Q3-x07",
"Byg69SmWhQ",
"Bkgah1-i37",
"rkxCqDVshm",
"iclr_2019_H1MgjoR9tQ",
"iclr_2019_H1MgjoR9tQ"
] |
iclr_2019_H1eSS3CcKX | Stochastic Optimization of Sorting Networks via Continuous Relaxations | Sorting input objects is an important step in many machine learning pipelines. However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization. In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax. This relaxation permits straight-through optimization of any computational graph involving a sorting operation. Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations. We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm. | accepted-poster-papers | This paper proposes a general-purpose continuous relaxation of the output of the sorting operator. This enables end-to-end training and more efficient stochastic optimization over the combinatorially large space of permutations.
In the submitted versions, two of the reviewers had difficulty in understanding the writing. After the rebuttal and the revised version, one of the reviewers is satisfied. I personally went through the paper and found that it could be tricky to read certain parts of the paper. For example, I am personally very familiar with the Placket-Luce model but the writing in Section 2.1 does not do a good job in explaining the model (particularly Eq 1 is not very easy to read, same with Eq. 3 for the key identity used in the paper).
I encourage authors to improve writing and make it a bit more intuitive to read.
Overall, this is a good paper and I recommend to accept it.
| train | [
"BkgtftDpJN",
"rkgvVs-q27",
"SJgx2mx2p7",
"SJefvGxnaX",
"SyeSE1x2aX",
"r1xmEny2TQ",
"SyxXUGsc3Q",
"Byeq6TEfn7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"reviewed rebuttal; still support strong accept",
"After responses: I now understand the paper, and I believe it is a good contribution. \n\n================================================\n\nAt a high level, the paper considers how to sort a number of items without explicitly necessarily learning their actual m... | [
-1,
7,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"r1xmEny2TQ",
"iclr_2019_H1eSS3CcKX",
"iclr_2019_H1eSS3CcKX",
"rkgvVs-q27",
"Byeq6TEfn7",
"SyxXUGsc3Q",
"iclr_2019_H1eSS3CcKX",
"iclr_2019_H1eSS3CcKX"
] |
iclr_2019_H1ebTsActm | Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality | Deep learning has shown high performances in various types of tasks from visual recognition to natural language processing,
which indicates superior flexibility and adaptivity of deep learning.
To understand this phenomenon theoretically, we develop a new approximation and estimation error analysis of
deep learning with the ReLU activation for functions in a Besov space and its variant with mixed smoothness.
The Besov space is a considerably general function space including the Hölder space and Sobolev space, and especially can capture spatial inhomogeneity of smoothness. Through the analysis in the Besov space, it is shown that deep learning can achieve the minimax optimal rate and outperform any non-adaptive (linear) estimator such as kernel ridge regression,
which shows that deep learning has higher adaptivity to the spatial inhomogeneity of the target function than other estimators such as linear ones. In addition to this, it is shown that deep learning can avoid the curse of dimensionality if the target function is in a mixed smooth Besov space. We also show that the dependency of the convergence rate on the dimensionality is tight due to its minimax optimality. These results support high adaptivity of deep learning and its superior ability as a feature extractor.
| accepted-poster-papers | The paper extends the results in Yarotsky (2017) from Sobolev spaces to Besov spaces, stating that once the target function lies in certain Besov spaces, there exist deep neural networks with ReLU activation that approximate the target at the minimax optimal rate. Such adaptive networks can be found by empirical risk minimization, though it is not yet known whether they can be found by SGD etc. This gap is the key weakness of applying approximation theory to the study of constructive deep neural networks of certain approximation spaces, which lacks algorithmic guarantees. The gap is hoped to be filled in future studies.
Despite the incompleteness of approximation theory, this paper is still a good solid work. Based on the fact that the majority of reviewers suggest accept (6,8,6), with some concerns on the clarity, the paper is proposed as probable accept. | train | [
"SJl71edYT7",
"Bylje1dKaQ",
"SJxcoCDYpX",
"ryeP_CPYTm",
"rJg7UAwKp7",
"SygSECvtaX",
"rJgL04k-aQ",
"BJeVd9HTnQ",
"S1eqfHtq3m",
"r1gX-pN9hQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your careful reading. We have uploaded a revised version.\nThe main difference from the original one is as follows:\n\n1. Some additional text explanations are added for the definition of m-Besov space.\n2. We added a few remarks for the approximation error bound in Proposition 1 and Theorem 1.\n3. W... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
2
] | [
"iclr_2019_H1ebTsActm",
"rJgL04k-aQ",
"r1gX-pN9hQ",
"S1eqfHtq3m",
"BJeVd9HTnQ",
"BJeVd9HTnQ",
"iclr_2019_H1ebTsActm",
"iclr_2019_H1ebTsActm",
"iclr_2019_H1ebTsActm",
"iclr_2019_H1ebTsActm"
] |
iclr_2019_H1edIiA9KQ | Generating Multiple Objects at Spatially Distinct Locations | Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions. Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descriptions. However, fine-grained control of the image layout, i.e. where in the image specific objects should be located, is still difficult to achieve. This is especially true for images that should contain multiple distinct objects at different spatial locations. We introduce a new approach which allows us to control the location of arbitrarily many objects within an image by adding an object pathway to both the generator and the discriminator. Our approach does not need a detailed semantic layout but only bounding boxes and the respective labels of the desired objects are needed. The object pathway focuses solely on the individual objects and is iteratively applied at the locations specified by the bounding boxes. The global pathway focuses on the image background and the general image layout. We perform experiments on the Multi-MNIST, CLEVR, and the more complex MS-COCO data set. Our experiments show that through the use of the object pathway we can control object locations within images and can model complex scenes with multiple objects at various locations. We further show that the object pathway focuses on the individual objects and learns features relevant for these, while the global pathway focuses on global image characteristics and the image background. | accepted-poster-papers | The submission proposes a model to generate images where one can control the fine-grained locations of objects. This is achieved by adding an "object pathway" to the GAN architecture. 
Experiments on a number of baselines are performed, including a number of reviewer-suggested metrics that were added post-rebuttal.
The method needs bounding boxes of the objects to be placed (and labels). The proposed method is simple and likely novel and I like the evaluating done with Yolov3 to get a sense of the object detection performance on the generated images. I find the results (qual & quant) and write-up compelling and I think that the method will be of practical relevance, especially in creative applications.
Because of this, I recommend acceptance. | val | [
"SJgxVtZn3X",
"HJlFhAnO3X",
"HyeGwusKC7",
"HJeYpRthTm",
"SygkaFlo37",
"SJg7xYB16Q",
"B1l3puS1am",
"B1ejRvrkam",
"B1lM28SJTm",
"S1laSvBJTm",
"Byx7gDB1aQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper proposed a model to generate location-controllable images built upon GANs. The experiments are conducted on several datasets. Although this problem seems interesting, here are several concerns I have:\n\n1.Novelty: the overall framework is still conditional GAN framework. The multiple -generators-discr... | [
6,
7,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_H1edIiA9KQ",
"iclr_2019_H1edIiA9KQ",
"iclr_2019_H1edIiA9KQ",
"iclr_2019_H1edIiA9KQ",
"iclr_2019_H1edIiA9KQ",
"SJgxVtZn3X",
"SJgxVtZn3X",
"SygkaFlo37",
"HJlFhAnO3X",
"HJlFhAnO3X",
"HJlFhAnO3X"
] |
iclr_2019_H1emus0qF7 | Near-Optimal Representation Learning for Hierarchical Reinforcement Learning | We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods. | accepted-poster-papers | Strong paper on hierarchical RL with very strong reviews from people expert in this subarea that I know well.
| train | [
"SJxwef4nkN",
"rJelT18IJV",
"rkgHckvwa7",
"HJgqekK7RX",
"H1lHRc8XCm",
"HJe-f_zfAm",
"BkgQo25K2X",
"Hyel2G2eR7",
"B1gHHboeAQ",
"ryxRvbij6Q",
"HJeNJMoia7",
"HJgkpZoj6X",
"H1xOtWssp7",
"SyeaG7S5hQ",
"SygyQWfZqm"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author"
] | [
"Thanks for the close reading of the paper and the helpful feedback. Answers to your concerns are below. Let us know if you have additional questions!\n\n\"If I understood correctly, ... Is this correct?\"\nYes, your understanding is correct. To us, this was a relatively straightforward way to translate the theo... | [
-1,
-1,
8,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
9,
-1
] | [
-1,
-1,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1
] | [
"rJelT18IJV",
"iclr_2019_H1emus0qF7",
"iclr_2019_H1emus0qF7",
"H1xOtWssp7",
"HJe-f_zfAm",
"Hyel2G2eR7",
"iclr_2019_H1emus0qF7",
"B1gHHboeAQ",
"HJeNJMoia7",
"rkgHckvwa7",
"BkgQo25K2X",
"SyeaG7S5hQ",
"ryxRvbij6Q",
"iclr_2019_H1emus0qF7",
"iclr_2019_H1emus0qF7"
] |
iclr_2019_H1eqjiCctX | Understanding Composition of Word Embeddings via Tensor Decomposition | Word embedding is a powerful tool in natural language processing. In this paper we consider the problem of word embedding composition \--- given vector representations of two words, compute a vector for the entire phrase. We give a generative model that can capture specific syntactic relations between words. Under our model, we prove that the correlations between three words (measured by their PMI) form a tensor that has an approximate low rank Tucker decomposition. The result of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings. We also complement our theoretical results with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method. | accepted-poster-papers | AR1 is concerned about lack of downstream applications which show that higher-order interactions are useful and asks why not to model higher-order interactions for all (a,b) pairs. AR2 notes that this submission is a further development of Arora et al. and is satisfied with the paper. AR3 is the most critical regarding lack of explanations, e.g. why linear addition of two word embeddings is bad and why the corrective term proposed here is a good idea. The authors suggest that linear addition is insufficient when final meaning differs from the individual meanings and show some quantitative results to back up their corrective term.
On balance, all reviewers find the theoretical contributions sufficient which warrants an accept. The authors are asked to honestly reflect all uncertain aspects of their work in the final draft to reflect legitimate concerns of reviewers. | train | [
"SJgh2tslAQ",
"Skeg0uolRm",
"HyeFqDog0X",
"Skg5uUslRX",
"rkeMbIWjn7",
"SJgejqZqhQ",
"Skg4J4xt27"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have uploaded a revision of the paper that incorporates suggestions of the reviewers and expands on experimental results. The largest changes are in Section 5 on the experimental verification, where we include the results of our experiments on verb-object phrases (previously we only showed results for adjective... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"iclr_2019_H1eqjiCctX",
"Skg4J4xt27",
"SJgejqZqhQ",
"rkeMbIWjn7",
"iclr_2019_H1eqjiCctX",
"iclr_2019_H1eqjiCctX",
"iclr_2019_H1eqjiCctX"
] |
iclr_2019_H1ersoRqtm | Structured Neural Summarization | Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks. | accepted-poster-papers | This paper examines ways of encoding structured input such as source code or parsed natural language into representations that are conducive for summarization. Specifically, the innovation is to not use only a sequence model, nor only a tree model, but both. Empirical evaluation is extensive, and it is exhaustively demonstrated that combining both models provides the best results.
The major perceived issue of the paper is the lack of methodological novelty, which the authors acknowledge. In addition, there are other existing graph-based architectures that have not been compared to.
However, given that the experimental results are informative and convincing, I think that the paper is a reasonable candidate to be accepted to the conference. | train | [
"S1e6m3190m",
"Sygyxl0IhX",
"HygQkgOY07",
"HkexklOFRX",
"rylTptBSR7",
"BkxA0BqW0m",
"rygnAxKlRQ",
"BkehgP1qaQ",
"B1g-t4TSTX",
"SJlxsmaBpm",
"ByxzV76S6X",
"BJgtlQaBTm",
"S1xZrF4J6m",
"HygI4PN52m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In light of the extensive new experiments and their conclusions, I indeed think that this paper is now much stronger. I have changed my original score from 4 to 7.",
"Note: I changed my original score from 4 to 7 based on the new experiments that answer many of the questions I had about the relative performance ... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"HygQkgOY07",
"iclr_2019_H1ersoRqtm",
"BkehgP1qaQ",
"BkehgP1qaQ",
"iclr_2019_H1ersoRqtm",
"rygnAxKlRQ",
"SJlxsmaBpm",
"B1g-t4TSTX",
"Sygyxl0IhX",
"HygI4PN52m",
"S1xZrF4J6m",
"iclr_2019_H1ersoRqtm",
"iclr_2019_H1ersoRqtm",
"iclr_2019_H1ersoRqtm"
] |
iclr_2019_H1ewdiR5tQ | Graph Wavelet Neural Network | We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform. Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost. Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution. The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed. | accepted-poster-papers | AR1 and AR3 have found this paper interesting in terms of replacing the spectral operations in GCN by wavelet operations. However, AR4 was more critical about the poor complexity of the proposed method compared to approximations in Hammond et al. AR4 was also right to find the proposed work similar to Chebyshev approximations in ChebNet and to highlight that the proposed approach is only marginally better than GCN. On balance, all reviewers find some merit in this work thus AC advocates an accept. The authors are asked to keep the contents of the final draft as agreed with AR4 (and other reviewers) during rebuttal without making any further theoretical changes/brushing over various new claims/ideas unsolicited by the reviewers (otherwise such changes would require passing the draft again through reviewers). | train | [
"rke3djBHJE",
"B1lZ8H4Ch7",
"HJxnVQ_Gk4",
"H1lVuGuMk4",
"ryg5qifzkV",
"BkeGhSLykN",
"r1gNgBRO3X",
"SJgbLEF0RQ",
"HyedjpicA7",
"S1gYRpj9A7",
"SJxDst0O0X",
"BJg_QmL8Rm",
"SJgloxU4CQ",
"rygjzdBlA7",
"H1gdhOBxAm",
"BkeYvdreR7",
"SJl0RLrgC7",
"B1eMDDBe0X",
"Bkeje8BeRm",
"HkgPJe2k6Q"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
"Thank you for your comment! This paper seems interesting, i.e., any feature generated by the network is\napproximately invariant to permutations and stable to graph manipulations. We will pay attention to it and add it as our related work if necessary.",
"This paper proposes to learn graph wavelet kernels throu... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"BkeGhSLykN",
"iclr_2019_H1ewdiR5tQ",
"SJgbLEF0RQ",
"ryg5qifzkV",
"HyedjpicA7",
"iclr_2019_H1ewdiR5tQ",
"iclr_2019_H1ewdiR5tQ",
"BJg_QmL8Rm",
"SJxDst0O0X",
"SJxDst0O0X",
"SJl0RLrgC7",
"SJgloxU4CQ",
"BkeYvdreR7",
"B1lZ8H4Ch7",
"r1gNgBRO3X",
"r1gNgBRO3X",
"HkgPJe2k6Q",
"HkgPJe2k6Q",
... |
iclr_2019_H1fU8iAqKX | A rotation-equivariant convolutional neural network model of primary visual cortex | Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework for identifying common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this rotation-equivariant CNN to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network outperforms a regular CNN with the same number of feature maps and reveals a number of common features, which are shared by many V1 neurons and are pooled sparsely to predict neural activity. Our findings are a first step towards a powerful new tool to study the nonlinear functional organization of visual cortex. | accepted-poster-papers | The overall consensus after an extended discussion of the paper is that this work should be accepted to ICLR. The back-and-forth between reviewers and authors was very productive, and resulted in substantial clarification of the work, and modification (trending positive) of the reviewer scores. | test | [
"HklYxH1ShX",
"B1xewvj5AQ",
"rylC6Sjq0Q",
"SJewNg6FC7",
"rygl-JTFCQ",
"SkeVCOsFCX",
"BylsquoYCQ",
"SklJEXPI0m",
"BJgDxJhrC7",
"ByeMgTF40X",
"H1egkTYEA7",
"SJlinIY4AX",
"S1epMlHQRX",
"HkxrJkvlAX",
"HkesEK4KaX",
"Bkl4DDIda7",
"rJgB5uLO6Q",
"SkeaZwLuaQ",
"H1gsFfq1Tm",
"Skx-rp2qnQ"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_r... | [
"This paper applies a rotation-equivariant convolutional neural network model to a dataset of neural responses from mouse primary visual cortex. This submission follows a series of recent papers using deep convolutional neural networks to model visual responses, either in the retina (Batty et al., 2016; McIntosh e... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
-1,
-1
] | [
"iclr_2019_H1fU8iAqKX",
"iclr_2019_H1fU8iAqKX",
"rygl-JTFCQ",
"ByeMgTF40X",
"BylsquoYCQ",
"SklJEXPI0m",
"HkxrJkvlAX",
"BJgDxJhrC7",
"HkesEK4KaX",
"HkxrJkvlAX",
"S1epMlHQRX",
"S1epMlHQRX",
"iclr_2019_H1fU8iAqKX",
"Bkl4DDIda7",
"SkeaZwLuaQ",
"HklYxH1ShX",
"SylAnad4om",
"Skx-rp2qnQ",
... |
iclr_2019_H1g0Z3A9Fm | Supervised Community Detection with Line Graph Neural Networks | Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio. By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective. We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting. We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases. In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. The GNNs achieved good performance on real-world datasets. In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima. | accepted-poster-papers | This paper introduces a new graph convolutional neural network, called LGNN, and applies it to solve the community detection problem. The reviewers think LGNN yields a nice and useful extension of graph CNN, especially in using the line graph of edge adjacencies and a non-backtracking operator. The empirical evaluation shows that the new method provides a useful tool for real datasets. 
The reviewers raised some issues in writing and references, for which the authors have provided clarifications and modified the paper accordingly. | train | [
"S1lor2mcAQ",
"H1g2iCOrC7",
"SkxI53uPpX",
"rylL0sdwT7",
"HkgV9quwam",
"SyxP9MFRhm",
"rklkDauTn7",
"r1x6e8mXhX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank again our three reviewers for their time and high-quality feedback. We have integrated their comments into an updated manuscript. The main changes include:\n\n-- ablation experiments of our GNN/LGNN architectures, in Sections 6.1 and 6.2\n-- fixed several typos.\n-- clarified assumptions of ... | [
-1,
-1,
-1,
-1,
-1,
6,
9,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2019_H1g0Z3A9Fm",
"HkgV9quwam",
"SyxP9MFRhm",
"rklkDauTn7",
"r1x6e8mXhX",
"iclr_2019_H1g0Z3A9Fm",
"iclr_2019_H1g0Z3A9Fm",
"iclr_2019_H1g0Z3A9Fm"
] |
iclr_2019_H1g2NhC5KQ | Multiple-Attribute Text Rewriting | The dominant approach to unsupervised "style transfer'' in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its "style''. In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations. We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation. Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space. Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes. | accepted-poster-papers | The paper shows how techniques introduced in the context of unsupervised machine translation can be used to build a style transfer method.
Pros:
- The approach is simple and questions assumptions made by previous style transfer methods (specifically, they show that we do not need to specifically enforce disentanglement).
- The evaluation is thorough and shows benefits of the proposed method
- Multi-attribute style transfer is introduced and benchmarks are created
- Given the success of unsupervised NMT, it makes a lot of sense to see if it can be applied to the style transfer problem
Cons:
- Technical novelty is limited
- Some findings may be somewhat trivial (e.g., we already know that offline classifiers are stronger than adversarial ones; see Elazar and Goldberg, EMNLP 2018).
| val | [
"HklLDIyapm",
"HygNdVy66m",
"r1eBLQ1pT7",
"Skxizm1aaQ",
"rylhcb16TX",
"BkekG-1TaQ",
"H1xk7NH9pX",
"SkgGgYEt6m",
"Hkgb4vEFpQ",
"B1la-jxRhQ",
"S1ll1KQ5hX",
"H1lXb9okiX",
"BkxKA_vRnX",
"rylWfjH0hQ"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public"
] | [
"Thank you for your question. As AnonReviewer3 mentioned, simply copying input sentences wouldn’t satisfy the auto-encoding part of equation (1), as noise has been added to sentences. However, it would indeed satisfy the back-translation loss.\n\nThe idea of denoising here is that by removing random words from a se... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1
] | [
"H1xk7NH9pX",
"rylWfjH0hQ",
"S1ll1KQ5hX",
"S1ll1KQ5hX",
"H1lXb9okiX",
"B1la-jxRhQ",
"SkgGgYEt6m",
"Hkgb4vEFpQ",
"iclr_2019_H1g2NhC5KQ",
"iclr_2019_H1g2NhC5KQ",
"iclr_2019_H1g2NhC5KQ",
"iclr_2019_H1g2NhC5KQ",
"rylWfjH0hQ",
"iclr_2019_H1g2NhC5KQ"
] |
iclr_2019_H1g4k309F7 | Wasserstein Barycenter Model Ensembling | In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein (W.) barycenters. Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings. Using W. barycenters to find the consensus between models allows us to balance confidence and semantics in finding the agreement between the models. We show applications of Wasserstein ensembling in attribute-based classification, multilabel learning and image captioning generation. These results show that the W. ensembling is a viable alternative to the basic geometric or arithmetic mean ensembling. | accepted-poster-papers | The paper proposes a novel way to ensemble multi-class or multi-label models
based on a Wasserstein barycenter approach. The approach is theoretically
justified and obtains good results. Reviewers were concerned with time
complexity, and the authors provided a clear breakdown of it.
Overall, all reviewers were positive in their scores, and I recommend accepting the paper. | train | [
"HJl9L6zjpX",
"HJgJphGop7",
"Syg7gifsam",
"Byxz65zja7",
"SyxbnDMoaX",
"rkgY3_xa2X",
"H1xL7X6i3X",
"HklQY2Q53X"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their positive feedback and their questions that we answer in the following:\n\n1) REVIEWER: Please define the acronyms before using them, for instance DNN (in first page, 4th line), KL (also first page), NLP, etc. \n\nAUTHORS: Thanks, we have implemented that in the revision. \n\n 2) R... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"HklQY2Q53X",
"H1xL7X6i3X",
"rkgY3_xa2X",
"rkgY3_xa2X",
"iclr_2019_H1g4k309F7",
"iclr_2019_H1g4k309F7",
"iclr_2019_H1g4k309F7",
"iclr_2019_H1g4k309F7"
] |
iclr_2019_H1g6osRcFQ | Policy Transfer with Strategy Optimization | Computer simulation provides an automatic and safe way for training robotic control
policies to achieve complex tasks such as locomotion. However, a policy
trained in simulation usually does not transfer directly to the real hardware due
to the differences between the two environments. Transfer learning using domain
randomization is a promising approach, but it usually assumes that the target environment
is close to the distribution of the training environments, thus relying
heavily on accurate system identification. In this paper, we present a different
approach that leverages domain randomization for transferring control policies to
unknown environments. The key idea is that, instead of learning a single policy in
the simulation, we simultaneously learn a family of policies that exhibit different
behaviors. When tested in the target environment, we directly search for the best
policy in the family based on the task performance, without the need to identify
the dynamic parameters. We evaluate our method on five simulated robotic control
problems with different discrepancies in the training and testing environment
and demonstrate that our method can overcome larger modeling errors compared
to training a robust policy or an adaptive policy. | accepted-poster-papers | The paper presents quite a simple idea to transfer a policy between domains by conditioning
the original learned policy on the physical parameters used in dynamics randomization. CMA-ES then
finds the best parameters in the target domain. Importantly, it is shown to work well
for examples where the dynamics randomization parameters do not span the parameters that are
actually changed, i.e., as is likely common in reality-gap problems.
A weakness is the size of the contribution beyond UPOSI (Yu et al. 2017), the closest work.
The authors now explicitly benchmark against this, with (generally) positive results.
AC: It would be ideal to see that the method does truly help span the reality gap, by seeing working sim2real transfer.
Overall, the reviewers and AC are in agreement that this is a good idea that is likely to have impact.
Its fundamental simplicity means that it can also readily be used as a benchmark in future sim2real work.
The AC recommends it be considered for oral presentation based on its simplicity, the importance of
the sim2real problem, and particularly if it can be demonstrated to work well on actual
sim2real transfer tasks (not yet shown in the current results).
| train | [
"HJlhOgWZp7",
"BJeoB_Dn07",
"Hkg6Xry5h7",
"BkxgDQGjC7",
"SJxgQshk0m",
"SyevB3PopX",
"SyeODTvoTm",
"SkgOw4vjam",
"SyefcQPoTQ",
"rkel69Xq3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper introduces a simple technique to transfer policies between domains by learning a policy that's parametrized by domain randomization parameters. During transfer CMA-ES is used to find the best parameters for the target domain.\n\nQuestions/remarks:\n- If I understand correctly, a rollout of a policy duri... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_H1g6osRcFQ",
"SyevB3PopX",
"iclr_2019_H1g6osRcFQ",
"SJxgQshk0m",
"iclr_2019_H1g6osRcFQ",
"HJlhOgWZp7",
"iclr_2019_H1g6osRcFQ",
"rkel69Xq3X",
"Hkg6Xry5h7",
"iclr_2019_H1g6osRcFQ"
] |
iclr_2019_H1gKYo09tX | code2seq: Generating Sequences from Structured Representations of Code | The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present code2seq: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding.
We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as general state-of-the-art NMT models. An interactive online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq. | accepted-poster-papers | Overall this paper presents a few improvements over the code2vec model of Alon et al., applying it to seq2seq tasks. The empirical results are very good, and there is fairly extensive experimentation.
This is a relatively crowded space, so there are a few natural baselines that were not compared to, but I don't think that comparison to every single baseline is warranted or necessary, and the authors have done an admirable job. One thing that still is quite puzzling is the strength of the "AST nodes only baseline", which the authors have given a few explanations for (using nodes helps focus on variables, and also there is an effect of combining together things that are close together in the AST tree). Still, this result doesn't seem to mesh with the overall story of the paper all that well, and again opens up some obvious questions such as whether a Transformer model trained on only AST nodes would have done similarly, and if not why not.
This paper is very much on the borderline, so if there is space in the conference I think it would be a reasonable addition, but there could also be an argument made that the paper would be stronger in a re-submission where the above questions are answered. | test | [
"rkgOIg_yaX",
"rkxpuJ9qn7",
"r1xh1lBZAQ",
"BkeUa0Vb0X",
"HJeGbC7Z0m",
"rylTcFveA7",
"H1gSlHAA6m",
"S1lPHIbk0m",
"r1gdd_xy0X",
"BJlBqinKT7",
"ByxS4OqH67",
"Hygxm-VB6Q",
"HygBjl9m6Q",
"BygxsiSMpX",
"ByeVmSMGTX",
"SJxc-Nzf6m",
"r1g84NzfTX",
"BJgTXkO53Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors present a method for generating sequences from code. To achieve this, they parse the code and produce a syntax tree. Then, they enumerate paths in the tree along leaf nodes. Each path is encoded via an bidirectional LSTM and a (sub)token-level LSTM decoder with attention over the paths is used to produ... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_H1gKYo09tX",
"iclr_2019_H1gKYo09tX",
"iclr_2019_H1gKYo09tX",
"rylTcFveA7",
"S1lPHIbk0m",
"H1gSlHAA6m",
"BJlBqinKT7",
"ByeVmSMGTX",
"iclr_2019_H1gKYo09tX",
"BygxsiSMpX",
"iclr_2019_H1gKYo09tX",
"HygBjl9m6Q",
"iclr_2019_H1gKYo09tX",
"rkgOIg_yaX",
"BJgTXkO53Q",
"rkxpuJ9qn7",
... |
iclr_2019_H1gL-2A9Ym | Predict then Propagate: Graph Neural Networks meet Personalized PageRank | Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online. | accepted-poster-papers | There were several ambivalent reviews for this submission and one favorable one. Although this is a difficult case, I am recommending accepting the paper.
There were two main questions in my mind.
1. Did the authors justify that the limited neighborhood problem they try to fix with their method is a real problem and that they fixed it? If so, accept.
Here I believe evidence has been presented, but the case remains undecided.
2. If they have not, is the method/experiments sufficiently useful to be interesting anyway?
This question I would lean towards answering in the affirmative.
I believe the paper as a whole is sufficiently interesting and executed sufficiently well to be accepted, although I was not convinced of the first point (1) above. One reviewer voting to reject did not find the conceptual contribution very valuable but still thought the paper was not severely flawed. I am partly down-weighting the conceptual criticism they made. I am more concerned with experimental issues. However, I did not see sufficiently severe issues raised by the reviewers to justify rejection.
Ultimately, I could go either way on this case, but I think some members of the community will benefit from reading this work enough that it should be accepted. | train | [
"rylzwtw3JV",
"ryeRGUehk4",
"ryl2um9KJV",
"SygzJkminQ",
"HkxoIiqdk4",
"B1xn8s4o6m",
"HJg43sI_6Q",
"S1lvvM8OTX",
"BkxnHzUdaQ",
"HkeSQM8Oa7",
"S1ga6ZIuT7",
"S1e-m_U5nX",
"Bkxy1bhPnX"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer,\n\nThank you for clarifying your review and reconsidering and upgrading your score!\n\nWe would like to point out that Laplacian feature propagation is just that very basic PPR-based baseline you wanted to see -- it uses PPR-like feature propagation in combination with logistic regression.\n\nSince ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"ryl2um9KJV",
"HkxoIiqdk4",
"HkeSQM8Oa7",
"iclr_2019_H1gL-2A9Ym",
"B1xn8s4o6m",
"HJg43sI_6Q",
"BkxnHzUdaQ",
"Bkxy1bhPnX",
"S1e-m_U5nX",
"SygzJkminQ",
"iclr_2019_H1gL-2A9Ym",
"iclr_2019_H1gL-2A9Ym",
"iclr_2019_H1gL-2A9Ym"
] |
iclr_2019_H1gMCsAqY7 | Slimmable Neural Networks | We present a simple and general method to train a single neural network executable at different widths (number of channels in a layer), permitting instant and adaptive accuracy-efficiency trade-offs at runtime. Instead of training individual networks with different width configurations, we train a shared network with switchable batch normalization. At runtime, the network can adjust its width on the fly according to on-device benchmarks and resource constraints, rather than downloading and offloading different models. Our trained networks, named slimmable neural networks, achieve similar (and in many cases better) ImageNet classification accuracy than individually trained models of MobileNet v1, MobileNet v2, ShuffleNet and ResNet-50 at different widths respectively. We also demonstrate better performance of slimmable models compared with individual ones across a wide range of applications including COCO bounding-box object detection, instance segmentation and person keypoint detection without tuning hyper-parameters. Lastly we visualize and discuss the learned features of slimmable networks. Code and models are available at: https://github.com/JiahuiYu/slimmable_networks | accepted-poster-papers | This paper proposes a method that creates neural networks that can run under different resource constraints. The reviewers have reached a consensus on acceptance. The pro is that the paper is novel, provides a practical approach to adjust the model for different computation resources, and achieves a performance improvement on object detection. One concern from reviewer2 and another public reviewer is the inconsistent performance impact on classification/detection (performance improvement on detection, but performance degradation on classification). Besides, the numbers reported in Table 1 should be confirmed: MobileNet v1 on Google Pixel 1 should have less than 120ms latency [1], not 296 ms.
[1] Table 4 of https://arxiv.org/pdf/1801.04381.pdf | train | [
"Skxg-ZFjkE",
"BJxHanYjJV",
"S1g9rv-fJN",
"SyeIKWl11N",
"Byg13FQjRm",
"HJgcCEmjCQ",
"rkgxjlo9Am",
"rylzSkDA6X",
"r1xT_6ICTX",
"HJe6pnUA6m",
"HyeH_nUAaX",
"rkgg7sI0TQ",
"rkxP-cICpm",
"ryl2OdD8p7",
"r1gelBGgTm",
"rklrgxdT37",
"BJez1UBa27",
"H1lytDzo2Q",
"Byg-61r4n7"
] | [
"public",
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Dear authors,\n\nThis is a very interesting work. And I think it is closely related to the mutual learning frameworks [1,2], where the core idea is also to jointly train several models for improving the performance of training each model separately. The main difference is with/without weight sharing, which is one ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
9,
7,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
-1
] | [
"iclr_2019_H1gMCsAqY7",
"Skxg-ZFjkE",
"SyeIKWl11N",
"iclr_2019_H1gMCsAqY7",
"HJgcCEmjCQ",
"iclr_2019_H1gMCsAqY7",
"r1xT_6ICTX",
"rklrgxdT37",
"H1lytDzo2Q",
"BJez1UBa27",
"r1gelBGgTm",
"ryl2OdD8p7",
"Byg-61r4n7",
"iclr_2019_H1gMCsAqY7",
"iclr_2019_H1gMCsAqY7",
"iclr_2019_H1gMCsAqY7",
... |
iclr_2019_H1gR5iR5FX | Analysing Mathematical Reasoning Abilities of Neural Models | Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar systems, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.
| accepted-poster-papers |
Pros:
- A useful and well-structured dataset which will be of use to the community
- Well-written and clear (though see Reviewer 2's comment concerning the clarity of the model description section)
- Good methodology
Cons:
- There is a question about why a new dataset is needed rather than a combination of previous datasets and also why these datasets couldn't be harvested from school texts directly. Presumably it would've been a lot more work but please address the issue in your rebuttal.
- Evaluation: Reviewer 3 is concerned that the evaluation should perhaps have included more mathematics-specific models (a couple of which are mentioned in the text). On the other hand, Reviewer 2 is concerned that the specific choices (e.g. "thinking steps") made for the general models are non-standard in seq-2-seq models. I haven't heard about the thinking step approach but perhaps it's out there somewhere. It would be helpful generally to have more discussion about the reasoning involved in these decisions.
I think this is a useful contribution to the community, well written and thoughtfully constructed. I am tentatively accepting this paper with the understanding that you will engage directly with the reviewers to address their concerns about the evaluation section. Please in particular use the rebuttal period to focus on the clarity of the model description and the motivation for the particular models chosen. Also consider adding additional experiments to allay the concerns of the reviewers. | train | [
"r1eF1-xjh7",
"SylO9VV507",
"S1gN_2Yl0m",
"HkxKS2FxR7",
"BJeVJhFxRX",
"H1goRfkKnm",
"SJeDlWsjj7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new synthetic dataset to evaluate the mathematical reasoning ability of sequence-to-sequence models. It consists of math problems in various categories such as algebra, arithmetic, calculus, etc. The dataset is designed carefully so that it is very unlikely there will be any duplicate between... | [
7,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_H1gR5iR5FX",
"BJeVJhFxRX",
"SJeDlWsjj7",
"H1goRfkKnm",
"r1eF1-xjh7",
"iclr_2019_H1gR5iR5FX",
"iclr_2019_H1gR5iR5FX"
] |
iclr_2019_H1gTEj09FX | RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks | Explicit encoding of group actions in deep features makes it possible for convolutional neural networks (CNNs) to handle global deformations of images, which is critical to success in many vision tasks. This paper proposes to decompose the convolutional filters over joint steerable bases across the space and the group geometry simultaneously, namely a rotation-equivariant CNN with decomposed convolutional filters (RotDCF). This decomposition facilitates computing the joint convolution, which is proved to be necessary for the group equivariance. It significantly reduces the model size and computational complexity while preserving performance, and truncation of the bases expansion serves implicitly to regularize the filters. On datasets involving in-plane and out-of-plane object rotations, RotDCF deep features demonstrate greater robustness and interpretability than regular CNNs. The stability of the equivariant representation to input variations is also proved theoretically. The RotDCF framework can be extended to groups other than rotations, providing a general approach which achieves both group equivariance and representation stability at a reduced model size. | accepted-poster-papers | This paper builds on the recent DCFNet (Decomposed Convolutional Filters) architecture to incorporate rotation equivariance while preserving stability. The core idea is to decompose the trainable filters into a steerable representation and learn over a subset of the coefficients of that representation.
Reviewers all agreed that this is a solid contribution that advances research into group equivariant CNNs, bringing efficiency gains and stability guarantees, albeit these appear to be incremental with respect to the techniques developed in the DCFNet work. In summary, the AC believes this to be a valuable contribution and therefore recommends acceptance. | train | [
"BklM-VfchQ",
"Skxq9HUa0X",
"HkxDLKH5R7",
"HyxEmhb7Am",
"ByejiBuhT7",
"ByxMFYg8aX",
"SkeOE1Hc37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"This work extends on [1] by constructing CNN filters using Fourier-Bessel (FB) bases for rotation equivariant networks. Additionally to [1] it extends the process with using SO(2) bases which allow to learn combination of rotated FB bases and ultimately achieve good performance with less parameters than standard C... | [
7,
-1,
-1,
-1,
7,
-1,
7
] | [
3,
-1,
-1,
-1,
2,
-1,
4
] | [
"iclr_2019_H1gTEj09FX",
"BklM-VfchQ",
"SkeOE1Hc37",
"iclr_2019_H1gTEj09FX",
"iclr_2019_H1gTEj09FX",
"iclr_2019_H1gTEj09FX",
"iclr_2019_H1gTEj09FX"
] |
iclr_2019_H1gfOiAqYm | Execution-Guided Neural Program Synthesis | Neural program synthesis from input-output examples has attracted an increasing interest from both the machine learning and the programming language community. Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax. Although such approaches achieve a reasonable performance on simple tasks such as FlashFill, on more complex tasks such as Karel, the state-of-the-art approach can only achieve an accuracy of around 77%. We observe that the main drawback of existing approaches is that the semantic information is greatly under-utilized. In this work, we propose two simple yet principled techniques to better leverage the semantic information, which are execution-guided synthesis and synthesizer ensemble. These techniques are general enough to be combined with any existing encoder-decoder-style neural program synthesizer. Applying our techniques to the Karel dataset, we can boost the accuracy from around 77% to more than 90%. | accepted-poster-papers | This paper presents a system which exploits semantic information of partial programs during program synthesis, and ensembling of synthesisers. The idea is general, and admirably simple. The explanation is clear, and the results are impressive. The reviewers, some after significant discussion, agree that this paper makes an important contribution and is one of the stronger papers in the conference. While some possible improvements to the method and experiment were discussed with the reviewers, it seems these are more suitable for future research, and that the paper is clearly publishable in its current form. | train | [
"HkeaqiiCCX",
"HJxHhAJt0Q",
"SJxSoQpu07",
"Bkea3ghuRQ",
"SkxQ9icORQ",
"Byg6or5chQ",
"SyxNtmfzRX",
"HylIAzMf07",
"SyxTLzMzAQ",
"B1e46-fMRX",
"HkegUbzfCX",
"B1lJT1zzRQ",
"SyldTAA6nm",
"Byg4unw53X",
"SJlrMs41p7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This is an important part of ICLR's review system, and of the scientific process as a whole, so your engagement is noted and appreciated.",
"Thank you for your explanation! Unfortunately, we do not have enough time to implement these ideas and report the results before the end of the rebuttal period. But we beli... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
-1
] | [
"SkxQ9icORQ",
"SJxSoQpu07",
"SyxTLzMzAQ",
"SkxQ9icORQ",
"B1e46-fMRX",
"iclr_2019_H1gfOiAqYm",
"iclr_2019_H1gfOiAqYm",
"SJlrMs41p7",
"SyldTAA6nm",
"HkegUbzfCX",
"Byg6or5chQ",
"Byg4unw53X",
"iclr_2019_H1gfOiAqYm",
"iclr_2019_H1gfOiAqYm",
"iclr_2019_H1gfOiAqYm"
] |
iclr_2019_H1goBoR9F7 | Dynamic Sparse Graph for Efficient Deep Learning | We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference. The great success of DNNs motivates the pursuing of lightweight models for the deployment onto embedded devices. However, most of the previous studies optimize for inference while neglecting training or even complicating it. Training is far more intractable, since (i) the neurons dominate the memory cost rather than the weights in inference; (ii) the dynamic activation makes previous sparse acceleration via one-off optimization on fixed weights invalid; (iii) batch normalization (BN) is critical for maintaining accuracy while its activation reorganization damages the sparsity. To address these issues, DSG activates only a small number of neurons with high selectivity at each iteration via a dimension-reduction search and obtains BN compatibility via a double-mask selection. Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks. | accepted-poster-papers | This paper proposes a novel approach for network pruning in both training and inference. This paper received a consensus of acceptance. Compared with previous work that focuses model compression on inference, this paper saves memory and accelerates both training and inference. It is activation, rather than weight, that dominates the training memory. Reviewer1 posed a valid concern about the efficient implementation on GPUs, and the authors agreed that practical speedup on GPU is difficult. It'll be great if the authors can give practical insights on how to achieve real speedup in the final draft. | train | [
"ByllM5Etn7",
"B1ggCsTnpX",
"BJWZDiT2TX",
"B1gbAKa267",
"B1l_FKTha7",
"rJegBoSsnX",
"Syeqkc-Dhm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"REVISED: I am fine accepting. The authors did make it a bit easier to read (although it is still very dense). I am also satisfied with related work and comparisons\nSummary: \nThis paper proposes to activate only a small number of neurons during both training and inference time, in order to speed up training and d... | [
7,
-1,
-1,
-1,
-1,
8,
7
] | [
2,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2019_H1goBoR9F7",
"BJWZDiT2TX",
"Syeqkc-Dhm",
"ByllM5Etn7",
"rJegBoSsnX",
"iclr_2019_H1goBoR9F7",
"iclr_2019_H1goBoR9F7"
] |
iclr_2019_H1gsz30cKX | Fixup Initialization: Residual Learning Without Normalization | Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rate, accelerate convergence and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation. | accepted-poster-papers | The paper explores the effect of normalization and initialization in residual networks, motivated by the need to avoid exploding and vanishing activations and gradients. Based on some theoretical analysis of stepsizes in SGD, the authors propose a sensible but effective way of initializing a network that greatly increases training stability. In a nutshell, the method comes down to initializing the residual layers such that a single step of SGD results in a change in activations that is invariant to the depth of the network. The experiments in the paper provide supporting evidence for the benefits; the authors were able to train networks up to 10,000 layers deep. The experiments have sufficient depth to support the claims. Overall, the method seems to be a simple but effective technique for learning very deep residual networks.
While some aspects of the network have been used in earlier work, such as initializing residual branches to output zeros, these earlier methods lacked the rescaling aspect, which seems crucial to the performance of this network.
The reviewers agree that the papers provides interesting ideas and significant theoretical and empirical contributions. The main concerns by the reviewers were addressed by the author responses. The AC finds that the remaining concerns raised by the reviewers are minor and insufficient for rejection of the paper.
| val | [
"HJlQe0YjA7",
"rJlrkyKsRm",
"Skgn80Is0X",
"r1gl9tKq07",
"BJl7K-FcCm",
"r1gTIYutnX",
"rJxL-c1V0m",
"BJe83JIMC7",
"SygoSH1b0m",
"HJxywPsqaQ",
"BJxPwIScaX",
"B1lgruiDa7",
"HJgkrvsDpm",
"rkgrdHowTm",
"BJeS4HjvpX",
"r1xnufswaX",
"B1xtAbjwaX",
"SJlbdxjva7",
"H1e_BZjmpQ",
"H1eV6OUGpQ"... | [
"author",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comments!\n\nOne of the authors here. I think you raised interesting questions in the first part, but am not sure what you mean exactly there. Am I correct that you would like to:\n(1) see the result of a standard ResNet (i.e. with batch normalization layers) if we initialize the last gamma in each... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"BJl7K-FcCm",
"Skgn80Is0X",
"HJxywPsqaQ",
"iclr_2019_H1gsz30cKX",
"B1lgruiDa7",
"iclr_2019_H1gsz30cKX",
"BJeS4HjvpX",
"SygoSH1b0m",
"iclr_2019_H1gsz30cKX",
"BJxPwIScaX",
"B1xtAbjwaX",
"H1eV6OUGpQ",
"H1e_BZjmpQ",
"Skl714KOnX",
"SyeHla6K37",
"r1gTIYutnX",
"r1gTIYutnX",
"r1gTIYutnX",
... |
iclr_2019_H1l7bnR5Ym | ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees | Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN). In this paper, we propose a novel probabilistic framework for GANs, ProbGAN, which iteratively learns a distribution over generators with a carefully crafted prior. Learning is efficiently triggered by a tailored stochastic gradient Hamiltonian Monte Carlo with a novel gradient approximation to perform Bayesian inference. Our theoretical analysis further reveals that our treatment is the first probabilistic framework that yields an equilibrium where generator distributions are faithful to the data distribution. Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments of GANs. | accepted-poster-papers | The paper proposes a new method that builds on the Bayesian modelling framework for GANs and is supported by a theoretical analysis and an empirical evaluation that shows very promising results. All reviewers agree that the method is interesting and the results are convincing, but that the model does not really fit in the standard Bayesian setting due to a data dependency of the priors. I would therefore encourage the authors to reflect this by adapting the title and making the differences more clear in the camera-ready version. | train | [
"rJeRLZfI14",
"ryxAXLZYhm",
"rJg-57eh07",
"Hkxok8co27",
"Skg9rHqqC7",
"Hyx9hN59A7",
"B1g1aeEapX",
"Hkgyn9ICTm",
"B1lq-8vRa7",
"Byg6_RX6Tm",
"SJxci3qu3m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Following is our response to your updated comments.\n\n=== Bayesian GAN prior in toy experiment ===\n\nWe agree that using a broad normal prior would be a better choice. We have rerun our experiment with the normal prior (with mean 0, std 1) and got the similar results as in the uniform prior case. We will certain... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"rJg-57eh07",
"iclr_2019_H1l7bnR5Ym",
"B1g1aeEapX",
"iclr_2019_H1l7bnR5Ym",
"Hkxok8co27",
"iclr_2019_H1l7bnR5Ym",
"ryxAXLZYhm",
"Hkxok8co27",
"iclr_2019_H1l7bnR5Ym",
"SJxci3qu3m",
"iclr_2019_H1l7bnR5Ym"
] |
iclr_2019_H1lJJnR5Ym | Exploration by random network distillation | We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level. This suggests that relatively simple methods that scale well can be sufficient to tackle challenging exploration problems. | accepted-poster-papers | Pros:
- novel, general idea for hard exploration domains
- multiple additional tricks
- ablations, control experiments
- well-written paper
- excellent results on Montezuma
Cons:
- low sample efficiency (2B+ frames)
- unresolved questions (non-episodic intrinsic rewards)
- could have done better apples-to-apples comparisons to baselines
The reviewers did not reach consensus on whether to accept or reject the paper. In particular, after multiple rounds of discussion, reviewer 1 remains adamant that the downsides of the paper outweigh its good points. However, given that the other three reviewers argue strongly and credibly for acceptance, I think the paper should be accepted. | test | [
"r1xxk91FRQ",
"HJe07lnMCm",
"Bkl1t7HL0Q",
"rylpqi3tnX",
"Skl-yi3rCQ",
"BygSUUyXAX",
"r1eafqNz0Q",
"r1gvw_EzAm",
"rJlaHIVM07",
"H1ek-LEfRm",
"rkgyTSVGRX",
"rkg65WVMRQ",
"SkxVd-4zCm",
"Hye5WR-Y6Q",
"H1e2w4Ak0X",
"HkgYozuoaX",
"S1lASKafpX",
"Bkgy-aa-67"
] | [
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Regarding Freeway, I feel like you've slightly dodged my point. In your other recent paper on exploration (\"Large-Scale Study of Curiosity-Driven Learning\"), you *have* benchmarked on Freeway in Table 2. Furthermore, the intrinsic rewards look slightly harmful, even with a coefficient of 0.01 for the intrinsic r... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
9,
10
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"r1gvw_EzAm",
"SkxVd-4zCm",
"rkg65WVMRQ",
"iclr_2019_H1lJJnR5Ym",
"rkgyTSVGRX",
"H1ek-LEfRm",
"H1e2w4Ak0X",
"Hye5WR-Y6Q",
"Bkgy-aa-67",
"S1lASKafpX",
"rylpqi3tnX",
"SkxVd-4zCm",
"HkgYozuoaX",
"iclr_2019_H1lJJnR5Ym",
"Hye5WR-Y6Q",
"iclr_2019_H1lJJnR5Ym",
"iclr_2019_H1lJJnR5Ym",
"icl... |
iclr_2019_H1lqZhRcFm | Unsupervised Learning of the Set of Local Maxima | This paper describes a new form of unsupervised learning, whose input is a set of unlabeled points that are assumed to be local maxima of an unknown value function v in an unknown subset of the vector space. Two functions are learned: (i) a set indicator c, which is a binary classifier, and (ii) a comparator function h that given two nearby samples, predicts which sample has the higher value of the unknown function v. Loss terms are used to ensure that all training samples \vx are local maxima of v, according to h and satisfy c(\vx)=1. Therefore, c and h provide training signals to each other: a point \vx′ in the vicinity of \vx satisfies c(\vx′)=−1 or is deemed by h to be lower in value than \vx. We present an algorithm, show an example where it is more efficient to use local maxima as an indicator function than to employ conventional classification, and derive a suitable generalization bound. Our experiments show that the method is able to outperform one-class classification algorithms in the task of anomaly detection and also provide an additional signal that is extracted in a completely unsupervised way.
| accepted-poster-papers | The paper proposes a new unsupervised learning scheme that utilizes local maxima as an indicator function.
The reviewers and the AC note the novelty of this paper and its good empirical justification. Hence, the AC decided to recommend acceptance.
However, the AC thinks the readability of the paper can be improved. | val | [
"BJle8y2LJ4",
"SJleu-jZyN",
"HygAnf8lyN",
"rJl3jlGJ6X",
"BJgPwCiuhQ",
"B1xnM_vyAX",
"ryxp2idFpQ",
"S1gaKy9P37",
"S1lMMR0W6Q",
"BJe4X2ykpm"
] | [
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author"
] | [
"Thank you very much for pointing us to the NIPS 2018 work by Golan and El-Yaniv, which we will happily include in our next version. \n\nWe completely agree with AnonReviewer2 that the two methods are different in their scope and orthogonal in their contributions and are working on combining both methods. It will t... | [
-1,
-1,
-1,
8,
8,
-1,
-1,
8,
-1,
-1
] | [
-1,
-1,
-1,
3,
4,
-1,
-1,
3,
-1,
-1
] | [
"HygAnf8lyN",
"HygAnf8lyN",
"iclr_2019_H1lqZhRcFm",
"iclr_2019_H1lqZhRcFm",
"iclr_2019_H1lqZhRcFm",
"ryxp2idFpQ",
"BJgPwCiuhQ",
"iclr_2019_H1lqZhRcFm",
"rJl3jlGJ6X",
"S1gaKy9P37"
] |
iclr_2019_H1x-x309tm | On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization | This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the ``Adam-type'', includes popular algorithms such as Adam, AMSGrad, and AdaGrad. Despite their popularity in training deep neural networks (DNNs), the convergence of these algorithms for solving non-convex problems remains an open question. In this paper, we develop an analysis framework and a set of mild sufficient conditions that guarantee the convergence of the Adam-type methods, with a convergence rate of order O(\log T/\sqrt{T}) for non-convex stochastic optimization. Our convergence analysis applies to a new algorithm called AdaFom (AdaGrad with First Order Momentum). We show that the conditions are essential, by identifying concrete examples in which violating the conditions makes an algorithm diverge. Besides providing one of the first comprehensive analyses of Adam-type methods in the non-convex setting, our results can also help the practitioners to easily monitor the progress of algorithms and determine their convergence behavior. | accepted-poster-papers | This paper analyzes the convergence properties of a family of 'Adam-type' optimization algorithms, such as Adam, AMSGrad and AdaGrad, in the non-convex setting. The paper provides one of the first comprehensive analyses of such algorithms in the non-convex setting. In addition, the results can help practitioners with monitoring convergence in experiments. Since Adam is a widely used method, the results have a potentially large impact.
The reviewers agree that the paper is well-written, provides interesting new insights, and that its results are of sufficient interest to the ICLR community to be worthy of publication. | train | [
"BJecL9ugTX",
"r1lS9eA6AQ",
"rJgatHaaCm",
"HJlleRKc07",
"HkeyuhPLAQ",
"SyeAXhDIAm",
"S1xgznw8RX",
"rygs0jvUAX",
"HJlR3oD8RX",
"B1esVdlCh7",
"SkgumbaYn7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThis paper presents a convergence analysis in the non-convex setting for a family of optimization algorithms, which the authors call the \"Adam-type\". This family incorporates popular existing methods like Adam, AdaGrad and AMSGrad. The analysis relies only on standard assumptions like Lipschitz smoot... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2019_H1x-x309tm",
"rJgatHaaCm",
"HJlleRKc07",
"rygs0jvUAX",
"iclr_2019_H1x-x309tm",
"SkgumbaYn7",
"B1esVdlCh7",
"HJlR3oD8RX",
"BJecL9ugTX",
"iclr_2019_H1x-x309tm",
"iclr_2019_H1x-x309tm"
] |
iclr_2019_H1xD9sR5Fm | Minimum Divergence vs. Maximum Margin: an Empirical Comparison on Seq2Seq Models | Sequence to sequence (seq2seq) models have become a popular framework for neural sequence prediction. While traditional seq2seq models are trained by Maximum Likelihood Estimation (MLE), much recent work has made various attempts to optimize evaluation scores directly to solve the mismatch between training and evaluation, since model predictions are usually evaluated by a task-specific evaluation metric like BLEU or ROUGE scores instead of perplexity. This paper puts this existing work into two categories, a) minimum divergence, and b) maximum margin. We introduce a new training criterion based on the analysis of existing work, and empirically compare models in the two categories. Our experimental results show that our new training criterion can usually work better than existing methods, on both the tasks of machine translation and sentence summarization. | accepted-poster-papers | The reviewers agree that the paper is worthy of publication at ICLR, hence I recommend acceptance.
Regarding section 4.3 of the submission and the claim that this paper presents the first insight for existing work from a divergence minimization perspective, as pointed out by R2, I went and checked the details of RAML and they have similar insights in their equations (5) and (8). Please make this clearer in the paper. Regarding evaluation using greedy search instead of beam search, please consider using beam search for reporting test performance as this is the standard setup in sequence prediction. Please take my comments and the reviews into account and prepare the final version. | train | [
"SJevAu1qhX",
"rJxph-Uo3X",
"ByeC4dDphX",
"BkgNtCr_0X",
"BJxp2tSERQ",
"Hylqq_B40Q",
"rkegpIsbA7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"In this paper the authors distinguish between two families of training objectives for seq2seq models, namely, divergence minimization objectives and max-margin objectives. They primarily focus on the divergence minimization family, and show that the MRT and RAML objectives can be related to minimizing the KL diver... | [
7,
7,
5,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2019_H1xD9sR5Fm",
"iclr_2019_H1xD9sR5Fm",
"iclr_2019_H1xD9sR5Fm",
"rJxph-Uo3X",
"ByeC4dDphX",
"SJevAu1qhX",
"rJxph-Uo3X"
] |
iclr_2019_H1xQVn09FX | GANSynth: Adversarial Neural Audio Synthesis | Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure and slow iterative sampling, while Generative Adversarial Networks (GANs), have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.
| accepted-poster-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- novel approach to audio synthesis
- strong qualitative and quantitative results
- extensive evaluation
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- small grammatical issues (mostly resolved in the revision).
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
No major points of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be accepted.
| train | [
"Skgsl-CCnm",
"r1xjgxknaX",
"rJxQcyy26m",
"S1xnyJyhpm",
"r1xhvRAoTX",
"rke0sgTc2X",
"BkgZIiNcn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an approach that uses GAN framework to generate audio through modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Experiments on NSynth dataset show that it gives better results then WaveNet. The most successful deep generative mode... | [
6,
-1,
-1,
-1,
-1,
7,
8
] | [
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_H1xQVn09FX",
"rke0sgTc2X",
"BkgZIiNcn7",
"Skgsl-CCnm",
"iclr_2019_H1xQVn09FX",
"iclr_2019_H1xQVn09FX",
"iclr_2019_H1xQVn09FX"
] |
iclr_2019_H1xaJn05FQ | Sliced Wasserstein Auto-Encoders | In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or having a likelihood function specified. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets. | accepted-poster-papers | The paper proposes to add the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution to the auto-encoder (AE) loss, resulting in a model named sliced-Wasserstein AE. The difference compared to the Wasserstein AE (WAE) lies in the use of the sliced-Wasserstein distance instead of GAN or MMD-based penalties.
The idea of the paper is interesting, and a theoretical and an empirical analysis supporting the approach are presented. As reviewer 1 noticed, „the advantage of using sliced Wasserstein distance is twofold: 1)parametric-free (compared to GANs); 2) almost hyperparameter-free (compared to the MMD with RBF kernels), except setting the number of random projection bases.“ However, the empirical evaluation in the paper and in the concurrent ICLR submission on Cramer-Wold AEs the authors refer to shows no clear practical advantage over the WAE, which achieves better results at least regarding the FID score. On the other hand, the Cramer-Wold AE is based on the ideas presented in this paper (which was previously available on arXiv), proving that the paper presents interesting ideas which are of value to the community. Therefore, the paper is a bit borderline, but I recommend accepting it. | train | [
"HyxabNG1xE",
"rklTDDbky4",
"H1lo8vWJyE",
"ByeqWvbJyN",
"BJg2A5pYn7",
"rkg8bhIpRQ",
"r1eh1O0KRm",
"Skx63wCtA7",
"ryeF5vAKAX",
"rklRqLxY2m",
"r1eZQY4dhm"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers, \n\nWe are certainly grateful for your time and careful evaluation of our work. We did our best to respond to the issues you raised, including extending our experimental results along with theoretical clarifications. \n\nWe would greatly appreciate if you could take a second look at our paper an... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
4,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_H1xaJn05FQ",
"r1eh1O0KRm",
"Skx63wCtA7",
"rkg8bhIpRQ",
"iclr_2019_H1xaJn05FQ",
"ryeF5vAKAX",
"r1eZQY4dhm",
"rklRqLxY2m",
"BJg2A5pYn7",
"iclr_2019_H1xaJn05FQ",
"iclr_2019_H1xaJn05FQ"
] |
iclr_2019_H1xipsA5K7 | Learning Two-layer Neural Networks with Symmetric Inputs | We give a new algorithm for learning a two-layer neural network under a very general class of input distributions. Assuming there is a ground-truth two-layer network
y = A \sigma(Wx) + \xi,
where A, W are weight matrices, \xi represents noise, and the number of neurons in the hidden layer is no larger than the input or output, our algorithm is guaranteed to recover the parameters A, W of the ground-truth network. The only requirement on the input x is that it is symmetric, which still allows highly complicated and structured input.
Our algorithm is based on the method-of-moments framework and extends several results in tensor decompositions. We use spectral algorithms to avoid the complicated non-convex optimization in learning neural networks. Experiments show that our algorithm can robustly learn the ground-truth neural network with a small number of samples for many symmetric input distributions. | accepted-poster-papers | Although the paper considers a somewhat limited problem of learning a neural network with a single hidden layer, it achieves a surprisingly strong result that such a network can be learned exactly (or well approximated under sampling) under weaker assumptions than recent work. The reviewers unanimously recommended the paper be accepted. The paper would be more impactful if the authors could clarify the barriers to extending the technique of pure neuron detection to deeper networks, as well as the barriers to incorporating bias to eliminate the symmetry assumption. | train | [
"SkgfHt_PRX",
"BygkLw_vAm",
"HJlQpUuwC7",
"H1eC-EI6n7",
"rJxkjtHThX",
"H1xu6gRmhm"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for your efforts in the review process. We really appreciate your valuable suggestions and detailed comments.\n\n-generalize technique to shifted input or bias term.\n\nOur current technique does not generalize to the case where the input is shifted or there is a bias term in the output. We think this... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"rJxkjtHThX",
"H1xu6gRmhm",
"H1eC-EI6n7",
"iclr_2019_H1xipsA5K7",
"iclr_2019_H1xipsA5K7",
"iclr_2019_H1xipsA5K7"
] |
iclr_2019_H1xsSjC9Ym | Learning to Understand Goal Specifications by Modelling Reward | Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity of the environment and the language scales. To overcome this limitation, we present a framework within which instruction-conditional RL agents are trained using rewards obtained not from the environment, but from reward models which are jointly trained from expert examples. As reward models improve, they learn to accurately reward agents for completing tasks for environment configurations---and for instructions---not present amongst the expert data. This framework effectively separates the representation of what instructions require from how they can be executed.
In a simple grid world, it enables an agent to learn a range of commands requiring interaction with blocks and understanding of spatial relations and underspecified abstract arrangements. We further show the method allows our agent to adapt to changes in the environment without requiring new expert examples. | accepted-poster-papers | Pros:
- The paper is well-written and clear and presented with helpful illustrations and videos.
- The training methodology seems sound (multiple random seeds etc.)
- The results are encouraging.
Cons:
- There was some concern generally about how this work is positioned relative to related work and the completeness of the related work. However, the authors have made this clearer in their rebuttal.
There was a considerable amount of discussion between the authors and all reviewers to pin down some unclear aspects of the paper. I believe in the end there was good convergence, and I thank both the authors and reviewers for their persistence and diligence in working through this. The final paper is much better, I think, and I recommend acceptance. | train | [
"H1gboYgu2Q",
"BkxYUMmnRX",
"BygfD88v3Q",
"BkgNO00FCm",
"SkeqSjhY0X",
"ryxWDrhFCQ",
"SkeGFQD2aX",
"SJlxrIwc6m",
"SklRX9tw67",
"HkgsOyrm67",
"BJlGrMrza7",
"rylCieSMp7",
"Skl_UbSMTX",
"r1emqB4Gp7",
"rkeb7SNMTm",
"HkextZ7MaQ",
"H1llub7G6m",
"Bkehr-QM67",
"rkgpQWQMam",
"H1lXVonl6m"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"The previous version of the paper was not clear enough in the motivation and uniqueness of the work. After a long and devoted discussion with the authors, we agreed on certain ways of improving the paper presentation, including connection to some related work. \n\nThe current paper is much better, so I would like ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1
] | [
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1
] | [
"iclr_2019_H1xsSjC9Ym",
"BkgNO00FCm",
"iclr_2019_H1xsSjC9Ym",
"r1emqB4Gp7",
"BygfD88v3Q",
"H1gboYgu2Q",
"SJlxrIwc6m",
"SklRX9tw67",
"HkgsOyrm67",
"rylCieSMp7",
"HkextZ7MaQ",
"H1llub7G6m",
"Bkehr-QM67",
"BygfD88v3Q",
"BygfD88v3Q",
"H1lXVonl6m",
"H1lXVonl6m",
"H1lXVonl6m",
"H1lXVon... |
iclr_2019_H1xwNhCcYm | Do Deep Generative Models Know What They Don't Know? | A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the density learned by flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. Moreover, we find evidence of this phenomenon when pairing several popular image data sets: FashionMNIST vs MNIST, CelebA vs SVHN, ImageNet vs CIFAR-10 / CIFAR-100 / SVHN. To investigate this curious behavior, we focus analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flows to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature.
Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior for out-of-distribution inputs is better understood. | accepted-poster-papers | This paper makes the intriguing observation that a density model trained on CIFAR10 has higher likelihood on SVHN than CIFAR10, i.e., it assigns higher probability to inputs that are out of the training distribution. This phenomenon is also shown to occur for several other dataset pairs. This finding is surprising and interesting, and the exposition is generally clear. The authors provide empirical and theoretical analysis, although based on rather strong assumptions. Overall, there's consensus among the reviewers that the paper would make a valuable contribution to the proceedings, and should therefore be accepted for publication. | train | [
"HJl0-fqLeV",
"BklXQj-R1V",
"rkxi_sW8kN",
"B1eudmtWkN",
"HJlpx-we1E",
"rJxBdmxxyV",
"HJxL_ysjC7",
"BJgniGai0m",
"HylTaY-jAm",
"SyedvzM9Am",
"rygFbvpYRX",
"Skgkfk6tRm",
"ByxeVRhY0m",
"BygHqa2F0X",
"SygxWa3Y0m",
"ByloTnnFRQ",
"HkgLWfveT7",
"r1eWc6qjnX",
"BJe__C5d3Q",
"HJx8SCQEn7"... | [
"author",
"public",
"author",
"author",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
... | [
"Thanks for your comment and question.\n\nPer the reviewers' requests for more evidence of the phenomenon on additional data sets, we wanted to bolster the 'motivating observations' section with experiments that better exhibit the curious out-of-distribution behavior. We found that the FashionMNIST-vs-MNIST pair i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1
] | [
"BklXQj-R1V",
"rygFbvpYRX",
"HJxL_ysjC7",
"HJlpx-we1E",
"rJxBdmxxyV",
"BJgniGai0m",
"r1eWc6qjnX",
"Skgkfk6tRm",
"SyedvzM9Am",
"ByxeVRhY0m",
"iclr_2019_H1xwNhCcYm",
"HJx8SCQEn7",
"BJe__C5d3Q",
"r1eWc6qjnX",
"ByloTnnFRQ",
"iclr_2019_H1xwNhCcYm",
"BJl6am8z37",
"iclr_2019_H1xwNhCcYm",
... |
iclr_2019_H1z-PsR5KX | Identifying and Controlling Important Neurons in Neural Machine Translation | Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons. | accepted-poster-papers | Strong points:
-- Interesting, fairly systematic and novel analyses of recurrent NMT models, revealing individual neurons responsible for specific types of information (e.g., verb tense or gender)
-- Interesting experiments showing how these neurons can be used to manipulate translations in specific ways (e.g., specifying the gender for a pronoun when the source sentence does not reveal it)
-- The paper is well written
Weak points:
-- Nothing serious (e.g., it may be interesting to test how stable these findings are across multiple runs).
There is a consensus among the reviewers that this is a strong paper and should be accepted.
| train | [
"BygWvPj7CQ",
"rkeX0uhx0Q",
"ByeWhune0X",
"BklYGw3lRQ",
"B1gWh83lCQ",
"Byl0raGq2X",
"rklAXqtrhX",
"r1l4JK8ijQ"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have uploaded a revised version incorporating the reviewers's helpful comments. This is the list of changes:\n\n1. Added appendix A.4 with results with different model checkpoints and a footnote referring to the appendix in Section 4. \n2. Added to the Conclusion a potential future work on controlling translati... | [
-1,
-1,
-1,
-1,
-1,
7,
10,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2019_H1z-PsR5KX",
"r1l4JK8ijQ",
"r1l4JK8ijQ",
"rklAXqtrhX",
"Byl0raGq2X",
"iclr_2019_H1z-PsR5KX",
"iclr_2019_H1z-PsR5KX",
"iclr_2019_H1z-PsR5KX"
] |
iclr_2019_H1zeHnA9KX | Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks | We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into ``superstates''. A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.
| accepted-poster-papers | This paper presents experiments showing that a linear mapping exists between the hidden states of RNNs trained to recognise (rather than model) formal languages, in the hope of at least partially elucidating the sort of representations this class of network architectures learns. This is important and timely work, fitting into a research programme begun by CL Giles in '92.
Despite its relatively low overall score, I am concurring with the assessment made by reviewer 1, whose expertise in the topic I am aware of and respect. But more importantly, I feel the review process has failed the authors here: reviewers 2 and 3 had as chief concern that there were issues with the clarity of some aspects of the paper. The authors made a substantial and bona fide attempt in their response to address the points of concern raised by these reviewers. This is precisely what the discussion period of ICLR is for, and one would expect that clarity issues can be successfully remedied during this period. I am disappointed to have seen little timely engagement from these reviewers, or willingness to explain why they stood by their assessment rather than revisiting it. As far as I am concerned, the authors have done an appropriate job of addressing these concerns, and given reviewer 1's support for the paper, I am happy to add mine as well. | train | [
"BygbNmtukV",
"B1gaaOqVk4",
"BkeSqOcN14",
"HJg-yN9V1N",
"BkewlgpRAQ",
"HylORAYn0Q",
"ByxyXEWcA7",
"H1lkffW5R7",
"SygggMZ90X",
"S1eeAxb9Am",
"B1xVWwh3hm",
"HJllpRIqn7",
"Bkl2Tb__nX",
"S1eW-odEn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"To summarize my understanding of the author's rebuttal, they're saying that the key result isn't that linear decoders achieve high accuracy in decoding the abstract DFA states, but is instead that the abstract DFAs that are recovered from the \"hierarchical clustering\" process bear some kind of resemblance to the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
-1
] | [
"HJg-yN9V1N",
"S1eW-odEn7",
"BkewlgpRAQ",
"HylORAYn0Q",
"ByxyXEWcA7",
"SygggMZ90X",
"Bkl2Tb__nX",
"HJllpRIqn7",
"HJllpRIqn7",
"B1xVWwh3hm",
"iclr_2019_H1zeHnA9KX",
"iclr_2019_H1zeHnA9KX",
"iclr_2019_H1zeHnA9KX",
"iclr_2019_H1zeHnA9KX"
] |
iclr_2019_H1ziPjC5Fm | Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks | Visual interpretation and explanation of deep models is critical towards wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation as well as explanation in which, given a pretrained model, we automatically identify internal features relevant for the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST, ILSVRC 12, Fashion 144k and an8Flower datasets show that our method produces detailed explanations with good coverage of relevant features of the classes of interest. | accepted-poster-papers | This was a difficult decision to converge to. R2 strongly champions this work, R1 is strongly critical, and R3 did not participate in the discussions (or take a stand). On the one hand, the AC can sympathize with R1's concerns -- insights developed on synthetic datasets may fail to generalize and, fundamentally, the burden is not on a reviewer to be able to provide to authors a realistic dataset for the paper to experiment on. Having said that, a carefully constructed synthetic dataset is often *exactly* what the community needs as the first step to studying a difficult problem. Moreover, it is better for proceedings to include works that generate vigorous discussions than the routine bland incremental works that typically dominate. 
Welcome to ICLR19. | val | [
"Byx6M0cC2Q",
"HJgSnw53nQ",
"Bkg5s3Op2m",
"rJxqomaeCX",
"rJeWC7TgAX",
"SJx10L6lRQ",
"Sye4lwalA7",
"B1eE1fpxCm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Pros:\n\nThis paper\n - Proposes a method for producing visual explanations for deep neural network outputs,\n - Improves quality of the guided backprop approach for strided layers by converting stride 2 layers to stride 1 and resampling inputs (improving on a longstanding difficulty with such approaches),\n - Sho... | [
8,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_H1ziPjC5Fm",
"iclr_2019_H1ziPjC5Fm",
"iclr_2019_H1ziPjC5Fm",
"HJgSnw53nQ",
"HJgSnw53nQ",
"Bkg5s3Op2m",
"Bkg5s3Op2m",
"Byx6M0cC2Q"
] |
iclr_2019_HJE6X305Fm | Don't let your Discriminator be fooled | Generative Adversarial Networks are one of the leading tools in generative modeling, image editing and content creation.
However, they are hard to train as they require a delicate balancing act between two deep networks fighting a never-ending duel. Some of the most promising adversarial models today minimize a Wasserstein objective. It is smoother and more stable to optimize. In this paper, we show that the Wasserstein distance is just one out of a large family of objective functions that yield these properties. By making the discriminator of a GAN robust to adversarial attacks we can turn any GAN objective into a smooth and stable loss. We experimentally show that any GAN objective, including Wasserstein GANs, benefits from adversarial robustness both quantitatively and qualitatively. The training additionally becomes more robust to suboptimal choices of hyperparameters, model architectures, or objective functions. | accepted-poster-papers | The paper provides a simple method for regularising and robustifying GAN training. Always appreciated contribution to GANs. :-) | train | [
"SJlbkwLcRm",
"rklFwIBK2Q",
"BkggGxgM6X",
"H1eRJegGaQ",
"HJlipkeGp7",
"SygU5RN927",
"Hkg6Ixk93Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the detailed feedback. Some of my issues are addressed in the feedback and it would be better to clarify them in the revised paper. Now I change my rating to 6. The reason why I cannot give 7 is the missing analysis of robust D leading to better G.",
"## Overview\n\nThis paper proposes a new way to st... | [
-1,
6,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
-1,
3,
3
] | [
"HJlipkeGp7",
"iclr_2019_HJE6X305Fm",
"SygU5RN927",
"Hkg6Ixk93Q",
"rklFwIBK2Q",
"iclr_2019_HJE6X305Fm",
"iclr_2019_HJE6X305Fm"
] |
iclr_2019_HJGciiR5Y7 | Latent Convolutional Models | We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks. | accepted-poster-papers | The reviewers are in general impressed by the results and like the idea, but they also express some uncertainty about how the proposed approach is actually set up. The authors have made a good attempt to address the reviewers' concerns. | train | [
"r1xLI0153X",
"B1xopr2HCQ",
"S1x8opbLaX",
"BJeIwpbITQ",
"Skls46b8TQ",
"B1et32fpnX",
"r1gXKJZ6nX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"[Summary]\n- This work proposes a new complex latent space described by convolutional manifold, and this manifold can map the image in a more robust manner (when some part of the image are to be restored).\n\n[Pros]\n- The results show that the latent variable mapped to the image well represents the image, and it ... | [
7,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2019_HJGciiR5Y7",
"S1x8opbLaX",
"r1xLI0153X",
"B1et32fpnX",
"r1gXKJZ6nX",
"iclr_2019_HJGciiR5Y7",
"iclr_2019_HJGciiR5Y7"
] |
iclr_2019_HJGkisCcKm | A Universal Music Translation Network | We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans. | accepted-poster-papers | The paper describes a method which, given a music waveform, generates another recording of the same music which should sound as if it was performed by different instruments. The model is an auto-encoder with a WaveNet-like domain-specific decoder and a shared encoder, trained with an adversarial "domain confusion loss". Even though the method is constructed mostly from existing components, the reviewers found the results interesting and convincing, and recommended the paper for acceptance. | test | [
"r1gmLjY3hX",
"HygwBMhu2m",
"SJxSg-L2C7",
"Skl9-YkXAm",
"HklwF6N107",
"HyxyWaNyRX",
"Bkx-JhEJAQ",
"HklhboVJRX",
"H1lDy33g6m",
"BJeNgRt7c7",
"r1ecG0vl57"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"\nThe paper proposes a multi-domain music translation method. The model presents a Wavenet auto-encoder setting with a single (domain independent) encoder and multiple (domain specific) decoders. From the model perspective, the paper builds up on several exciting ideas such as Wavenet and autoencoder based transl... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2019_HJGkisCcKm",
"iclr_2019_HJGkisCcKm",
"HklwF6N107",
"Bkx-JhEJAQ",
"HygwBMhu2m",
"r1gmLjY3hX",
"H1lDy33g6m",
"iclr_2019_HJGkisCcKm",
"iclr_2019_HJGkisCcKm",
"r1ecG0vl57",
"iclr_2019_HJGkisCcKm"
] |
iclr_2019_HJGven05Y7 | How to train your MAML | The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem.Model Agnostic Meta Learning or MAML is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very powerful, however, it has a variety of issues, such as being very sensitive to neural network architectures, often leading to instability during training, requiring arduous hyperparameter searches to stabilize training and achieve high generalization and being very computationally expensive at both training and inference times. In this paper, we propose various modifications to MAML that not only stabilize the system, but also substantially improve the generalization performance, convergence speed and computational overhead of MAML, which we call MAML++. | accepted-poster-papers | This paper proposes several improvements for the MAML algorithm that improve its stability and performance.
Strengths: The improvements are useful for future researchers building upon the MAML algorithm. The results demonstrate a significant improvement over MAML. The authors revised the paper to address concerns about overstatements.
Weaknesses: The paper does not present a major conceptual advance. It would also be very helpful to present a more careful ablation study of the six individual techniques.
Overall, the significance of the results outweighs the weaknesses. However, the authors are strongly encouraged to perform and include a more detailed ablation study in the final paper. I recommend accept. | train | [
"Byg5xz2rCX",
"Hkg2spUSRQ",
"H1eHVoEaaX",
"H1ewhME6TQ",
"BklkMRGpam",
"Skg14bX92X",
"rJg-0D-927",
"rJxOlkMPh7",
"HJgieSXZ5X",
"rJlF4Lf-97",
"HkgRQ8EAtQ",
"rke46mAatQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Thanks for your prompt response. I think your point about the _automation_ of things is correct. I will amend the paper to be more precise in that claim as per your request. Regarding the automation of additional parts of the system, I am currently working on that, but it felt like it exceeded the scope of this pa... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"Hkg2spUSRQ",
"H1ewhME6TQ",
"rJg-0D-927",
"rJxOlkMPh7",
"Skg14bX92X",
"iclr_2019_HJGven05Y7",
"iclr_2019_HJGven05Y7",
"iclr_2019_HJGven05Y7",
"rJlF4Lf-97",
"iclr_2019_HJGven05Y7",
"rke46mAatQ",
"iclr_2019_HJGven05Y7"
] |
iclr_2019_HJMC_iA5tm | Learning a SAT Solver from Single-Bit Supervision | We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs. | accepted-poster-papers | The submission proposes a machine learning approach to directly train a prediction system for whether a boolean sentence is satisfiable. The strengths of the paper seem to be largely in proposing an architecture for SAT problems and the analysis of the generalization performance of the resulting classifier on classes of problems not directly seen during training.
Although the resulting system cannot be claimed to be a state of the art system, and it does not have a correctness guarantee like DPLL based approaches, the paper is a nice re-introduction of SAT in a machine learning context using deep networks. It may be nice to mention e.g. (W. Ruml. Adaptive Tree Search. PhD thesis, Harvard University, 2002) which applied reinforcement learning techniques to SAT problems. The empirical validation on variable sized problems, etc. is a nice contribution showing interesting generalization properties of the proposed approach.
The reviewers were unanimous in their recommendation that the paper be accepted, and the review process attracted a number of additional comments showing the broader interest of the setting. | train | [
"r1lwc2f1C7",
"SklcSBbD07",
"rkecACeG07",
"r1lvKBRCT7",
"SJx8VBfCpm",
"S1e74C9XTm",
"ByeriC5Q6X",
"Ske_LQKQTX",
"rkeiH0Bmpm",
"B1xxGmSmpm",
"BJxsg-S767",
"ByxJhsEmam",
"B1x2lmDK3X",
"r1eoySkthX",
"H1x_vEau3m",
"r1x80rJJTQ",
"rJlAYEJyT7",
"rylt7MfEhm"
] | [
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public"
] | [
"Thank you for the suggestion. I think you may have overlooked the crucial note at the end of S5: \"Note: for the entire rest of the paper, \\emph{NeuroSAT} refers to the specific trained model that has only been trained on $\\SR(\\U(10, 40))$\". We need to rely on a note like this because we use the phrase \"Neuro... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1,
-1
] | [
"r1lvKBRCT7",
"rkecACeG07",
"iclr_2019_HJMC_iA5tm",
"SJx8VBfCpm",
"Ske_LQKQTX",
"rylt7MfEhm",
"rylt7MfEhm",
"BJxsg-S767",
"r1x80rJJTQ",
"H1x_vEau3m",
"r1eoySkthX",
"B1x2lmDK3X",
"iclr_2019_HJMC_iA5tm",
"iclr_2019_HJMC_iA5tm",
"iclr_2019_HJMC_iA5tm",
"B1x2lmDK3X",
"rylt7MfEhm",
"icl... |
iclr_2019_HJMCcjAcYX | Learning Representations of Sets through Optimized Permutations | Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.
| accepted-poster-papers | The paper proposes an architecture to learn over sets, by proposing a way to have permutations differentiable end-to-end, hence learnable by gradient descent. Reviewers pointed out the computational limitation (quadratic in the size of the set just to consider pairwise interactions, and cubic overall). One reviewer (with low confidence) thought the approach was not novel but didn't appreciate the integration of learning-to-permute with a differentiable setting, so I decided to down-weight their score. Overall, I found the paper borderline but would propose to accept it if possible. | train | [
"SJeO5obc3X",
"HyxNhFQNRQ",
"BJlqr2mmR7",
"B1xeI-0-R7",
"r1ecut0WRX",
"rJeFrla-AX",
"rkxAsPCgAX",
"HyeIZzxbAX",
"B1x7WnW22Q",
"B1xmilebCX",
"rygmYbjlAQ",
"Syl2O-tkR7",
"SJlGnzY3am",
"HJludzt2T7",
"SygvBfKnpX",
"H1eeEy2jaQ",
"SkgX7AqYa7",
"SJlJE61t6Q",
"Skemjcuvam",
"SylBN0Pv6m"... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
... | [
"Update: From the perspective of a \"broader ML\" audience, I cannot recommend acceptance of this paper. The paper does not provide even a clear and concrete problem statement due to which it is difficult for me to appreciate the results. This is the only paper out of all ICLR2019 papers that I have reviewed / read... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2019_HJMCcjAcYX",
"iclr_2019_HJMCcjAcYX",
"r1ecut0WRX",
"rJeFrla-AX",
"B1xeI-0-R7",
"rkxAsPCgAX",
"rygmYbjlAQ",
"iclr_2019_HJMCcjAcYX",
"iclr_2019_HJMCcjAcYX",
"Syl2O-tkR7",
"SylBN0Pv6m",
"H1eeEy2jaQ",
"H1e6T_sI3Q",
"SJeO5obc3X",
"B1x7WnW22Q",
"iclr_2019_HJMCcjAcYX",
"SJlJE61t6... |
iclr_2019_HJMHpjC9Ym | Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition | In this paper, we propose a novel Convolutional Neural Network (CNN) architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. This is achieved by using a multi-branch network, which has different computational complexity at different branches with different resolutions. Through frequent merging of features from branches at distinct scales, our model obtains multi-scale features while using less computation. The proposed approach demonstrates improvement of model efficiency and performance on both object recognition and speech recognition tasks, using popular architectures including ResNet, ResNeXt and SEResNeXt. For object recognition, our approach reduces computation by 1/3 while improving accuracy significantly (by over 1% point) compared to the baselines, and the computational savings can be as high as 1/2 without compromising the accuracy. Our model also surpasses state-of-the-art CNN acceleration approaches by a large margin in terms of accuracy and FLOPs. On the task of speech recognition, our proposed multi-scale CNNs save 30% FLOPs with slightly better word error rates, showing good generalization across domains. | accepted-poster-papers | This paper proposes a novel CNN architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. The reviewers generally arrived at a consensus to accept. | val | [
"Syll-AfcTQ",
"BJxO_af9pQ",
"Hye0Uaf96X",
"B1xmXpMqpm",
"BygYyRtc3Q",
"BJx2-Tcv3m",
"B1egeFYOom"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We updated the pdf to address the comments from the reviewers. (the revised parts are highlighted in blue.)",
"We thank the reviewer for the positive comments on our approach. We have included in Table 11 (Page 18) the results of bL-ResNet-50 and bL-ResNet-101 with alpha and beta both set to be 1. Not surprising... | [
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iclr_2019_HJMHpjC9Ym",
"B1egeFYOom",
"BJx2-Tcv3m",
"BygYyRtc3Q",
"iclr_2019_HJMHpjC9Ym",
"iclr_2019_HJMHpjC9Ym",
"iclr_2019_HJMHpjC9Ym"
] |
iclr_2019_HJe62s09tX | Unsupervised Hyper-alignment for Multilingual Word Embeddings | We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This paper extends this line of work to the problem of aligning multiple languages to a common space. A solution is to independently map all languages to a pivot language. Unfortunately, this degrades the quality of indirect word translation. We thus propose a novel formulation that ensures composable mappings, leading to better alignments. We evaluate our method by jointly aligning word vectors in eleven languages, showing consistent improvement with indirect mappings while maintaining competitive performance on direct word translation. | accepted-poster-papers | This paper provides a simple and intuitive method for learning multilingual word embeddings that makes it possible to softly encourage the model to align the spaces of non-English language pairs. The results are better than learning just pairwise embeddings with English.
The main remaining concern (in my mind) after the author response is that the method is less accurate empirically than Chen and Cardie (2018). I think however that given that these two works are largely contemporaneous, the methods are appreciably different, and the proposed method also has advantages with respect to speed, that the paper here is still a reasonable candidate for acceptance at ICLR.
However, I would like to request that in the final version the authors feature Chen and Cardie (2018) more prominently in the introduction and discuss the theoretical and empirical differences between the two methods. This will make sure that readers get the full picture of the two works and understand their relative differences and advantages/disadvantages. | train | [
"ryllvPPg0X",
"HketImInnX",
"Bke2ofhw37",
"SyesiPQrhQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers: could you please take a look at the author response? I think it is comprehensive, and could very well address some of the concerns expressed in the original reviews. I'd appreciate any additional feedback or discussion, which would help write the final review of the paper.",
"The authors present ... | [
-1,
5,
6,
7
] | [
-1,
3,
4,
3
] | [
"iclr_2019_HJe62s09tX",
"iclr_2019_HJe62s09tX",
"iclr_2019_HJe62s09tX",
"iclr_2019_HJe62s09tX"
] |
iclr_2019_HJeRkh05Km | Visual Semantic Navigation using Scene Priors | How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to efficiently search and navigate? For example, to search for mugs, we search cabinets near the coffee machine and for fruits we try the fridge. In this work, we focus on incorporating semantic priors in the task of semantic navigation. We propose to use Graph Convolutional Networks for incorporating the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict the actions. For evaluation, we use the AI2-THOR framework. Our experiments show how semantic knowledge improves the performance significantly. More importantly, we show improvement in generalization to unseen scenes and/or objects. | accepted-poster-papers | The authors propose an approach for visual navigation that leverages a semantic knowledge graph to ground and inform the policy of an RL agent. The agent uses a graphnet to learn relationships and support the navigation. The empirical protocol is sound and uses best practices, and the authors have added additional experiments during the revision period, in response to the reviewers' requests. However, there were some significant problems with the submission - there were no comparisons to other semantic navigation methods, the approach is somewhat convoluted and will not survive the test of time, and the authors did not conclusively show the value of their approach. The reviewers uniformly support the publication of this paper, but with a low confidence. | train | [
"BkgGikf51N",
"B1g19HJmhX",
"S1glgBmmJN",
"Hyxib3AJRm",
"Skldw5RkAX",
"Syxtu20kC7",
"Sklpi3C7h7",
"BkgRhp8Dj7",
"r1liIPRCjQ",
"Hkx_7tZ0jX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"I upgraded my score from 6 to 7.\n\nThe authors have responded satisfactorily to my questions. I still find the method a bit convoluted and do not think that it will stand the test of time. However, the paper is competently done and is a fine addition to the literature. I support acceptance.",
"This paper explor... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
7,
-1,
-1
] | [
-1,
3,
-1,
-1,
-1,
-1,
1,
4,
-1,
-1
] | [
"B1g19HJmhX",
"iclr_2019_HJeRkh05Km",
"Syxtu20kC7",
"B1g19HJmhX",
"Sklpi3C7h7",
"BkgRhp8Dj7",
"iclr_2019_HJeRkh05Km",
"iclr_2019_HJeRkh05Km",
"Hkx_7tZ0jX",
"iclr_2019_HJeRkh05Km"
] |
iclr_2019_HJeu43ActQ | NOODL: Provable Online Dictionary Learning and Sparse Coding | We consider the dictionary learning problem, where the aim is to model the given data as a linear combination of a few columns of a matrix known as a dictionary, where the sparse weights forming the linear combination are known as coefficients. Since the dictionary and coefficients, parameterizing the linear model, are unknown, the corresponding optimization is inherently non-convex. This was a major challenge until recently, when provable algorithms for dictionary learning were proposed. Yet, these provide guarantees only on the recovery of the dictionary, without explicit recovery guarantees on the coefficients. Moreover, any estimation error in the dictionary adversely impacts the ability to successfully localize and estimate the coefficients. This potentially limits the utility of existing provable dictionary learning methods in applications where coefficient recovery is of interest. To this end, we develop NOODL: a simple Neurally plausible alternating Optimization-based Online Dictionary Learning algorithm, which recovers both the dictionary and coefficients exactly at a geometric rate, when initialized appropriately. Our algorithm, NOODL, is also scalable and amenable for large scale distributed implementations in neural architectures, by which we mean that it only involves simple linear and non-linear operations. Finally, we corroborate these theoretical results via experimental evaluation of the proposed algorithm with the current state-of-the-art techniques. | accepted-poster-papers | Alternating minimization is surprisingly effective for low-rank matrix factorization and dictionary learning problems. Better theoretical characterization of these methods is well motivated. This paper fills a gap by providing simultaneous guarantees for support recovery as well as coefficient estimates, with linear convergence to the true factors, in the online learning setting. 
The reviewers are largely in agreement that the paper is well written and makes a valuable contribution. The authors are advised to address some of the review comments around relationship to prior work highlighting novelties. | train | [
"ByeGb44y0X",
"rJlnpeEk0m",
"SJxXh0Q1A7",
"SJgzc8XTnX",
"B1xtSy6t2Q",
"B1x81eSE2m"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We are grateful to the reviewer for the comments. In this revision, we have corrected the minor typos, added additional comparisons, and added a proof map for easier navigation of the results. Specific comments are addressed below. \n\n1. Regarding exact recovery guarantees — NOODL converges geometrically to the t... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
2,
2,
2
] | [
"B1x81eSE2m",
"B1xtSy6t2Q",
"SJgzc8XTnX",
"iclr_2019_HJeu43ActQ",
"iclr_2019_HJeu43ActQ",
"iclr_2019_HJeu43ActQ"
] |
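The NOODL abstract above describes an alternating scheme: estimate the sparse coefficients for a fixed dictionary, then take a gradient step on the dictionary, using only simple linear and thresholding operations. The toy sketch below illustrates that structure; it is not the authors' implementation, and the dimensions, step sizes, and threshold `tau` are illustrative assumptions.

```python
# Toy sketch of NOODL-style alternating updates: iterative hard thresholding
# (IHT) for the coefficients, then a gradient step on the dictionary with
# column renormalization. All hyperparameters here are invented for the demo.
import numpy as np

def hard_threshold(x, tau):
    """Zero out entries with magnitude below tau."""
    return np.where(np.abs(x) >= tau, x, 0.0)

def coefficient_step(A, Y, n_iters=50, eta=0.2, tau=0.1):
    """IHT-style estimate of sparse X solving Y ~ A X for fixed A."""
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        X = hard_threshold(X - eta * A.T @ (A @ X - Y), tau)
    return X

def dictionary_step(A, Y, X, eta_A=0.5):
    """One gradient step on the dictionary, columns renormalized."""
    A = A - eta_A * (A @ X - Y) @ X.T / Y.shape[1]
    return A / np.linalg.norm(A, axis=0, keepdims=True)

rng = np.random.default_rng(0)
A_true = rng.standard_normal((20, 10))
A_true /= np.linalg.norm(A_true, axis=0, keepdims=True)
X_true = np.zeros((10, 200))
X_true[:2, :] = rng.standard_normal((2, 200))  # 2-sparse columns
Y = A_true @ X_true

# "Initialized appropriately": start close to the true dictionary.
A = A_true + 0.05 * rng.standard_normal(A_true.shape)
A /= np.linalg.norm(A, axis=0, keepdims=True)
for _ in range(30):
    X = coefficient_step(A, Y)
    A = dictionary_step(A, Y, X)
```

The point of the sketch is the division of labor the paper's guarantees are about: both factors are refined jointly, rather than only the dictionary.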
iclr_2019_HJf9ZhC9FX | Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization | Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models---originally developed in the 1990's---and extend them to \emph{general} stochastic mirror descent (SMD) algorithms for \emph{general} loss functions and \emph{nonlinear} models.
In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models.
We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and \emph{implicit regularization} for over-parameterized linear models (in what is now being called the "interpolating regime"), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called "highly over-parameterized" nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning. | accepted-poster-papers | The authors give a characterization of stochastic mirror descent (SMD) as a conservation law (17) in terms of the Bregman divergence of the loss. The identity allows the authors to show that SMD converges to the optimal solution of a particular minimax filtering problem. In the special overparametrized linear case, when SMD is simply SGD, the result recovers a recent theorem due to Gunasekar et al. (2018). The consequences for the overparametrized nonlinear case are more speculative.
The main criticisms concern impact; however, I'm inclined to think that any new insight on this problem, especially one that imports results from other areas such as control, is useful to incorporate into the literature.
I will comment that the discussion of previous work is wholly inadequate. The authors essentially do not engage with previous work, and mostly make throwaway citations. This is a real pity; it would be nice to see better scholarship. | val | [
"HJgikArTR7",
"HyxHcqKsCX",
"Skg_Iqwq37",
"HJxIZOj5RX",
"ryxsZIo507",
"HJg0RVi5CQ",
"ryxWiLXCh7",
"S1gpoI__hQ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their additional comments and have noted that they increased their score. We are not in disagreement with regards to SMD, or the reviewer's clarifying remarks about it. Furthermore, as also mentioned to Reviewer 3, we cannot comment on whether the implicit regularization of SMD is \"surpr... | [
-1,
-1,
5,
-1,
-1,
-1,
7,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
4,
3
] | [
"HyxHcqKsCX",
"ryxsZIo507",
"iclr_2019_HJf9ZhC9FX",
"S1gpoI__hQ",
"Skg_Iqwq37",
"ryxWiLXCh7",
"iclr_2019_HJf9ZhC9FX",
"iclr_2019_HJf9ZhC9FX"
] |
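The SMD abstract above notes that with the squared-norm potential, stochastic mirror descent reduces to SGD, and that in the over-parameterized linear ("interpolating") regime it converges to an interpolating solution with an implicit-regularization property; for SGD started from zero this is the minimum-l2-norm interpolant. The sketch below is my own toy illustration of that special case, not code from the paper; the problem sizes and step size are arbitrary choices.

```python
# Toy sketch of stochastic mirror descent: grad_psi(w+) = grad_psi(w) - eta*g.
# With psi(w) = 1/2 ||w||^2 the mirror map is the identity, so SMD is SGD.
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 50                      # fewer data points than parameters
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)     # a consistent (interpolatable) system

def smd(X, y, grad_psi, grad_psi_inv, eta=0.01, epochs=500):
    """Stochastic mirror descent on the square loss of a linear model."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            g = (X[i] @ w - y[i]) * X[i]   # gradient of 1/2 (x_i.w - y_i)^2
            w = grad_psi_inv(grad_psi(w) - eta * g)
    return w

identity = lambda w: w                     # psi(w) = 1/2 ||w||^2  ->  SGD
w_sgd = smd(X, y, identity, identity)
w_min_norm = np.linalg.pinv(X) @ y         # minimum-l2-norm interpolant
```

Started at zero, every update stays in the row space of X, so the interpolating limit coincides with the minimum-norm solution; swapping in a different potential's `grad_psi`/`grad_psi_inv` pair changes which interpolant SMD selects.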
iclr_2019_HJfSEnRqKQ | Active Learning with Partial Feedback | While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes) and the form of an annotation (typically yes/no binary feedback). To annotate example corpora for multiclass classification, we might need to ask multiple yes/no questions, exploiting a label hierarchy if one is available. To address this more realistic setting, we propose active learning with partial feedback (ALPF), where the learner must actively choose both which example to label and which binary question to ask. At each step, the learner selects an example, asking if it belongs to a chosen (possibly composite) class. Each answer eliminates some classes, leaving the learner with a partial label. The learner may then either ask more questions about the same example (until an exact label is uncovered) or move on immediately, leaving the first example partially labeled. Active learning with partial labels requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels between rounds. Experiments on Tiny ImageNet demonstrate that our most effective method improves top-1 classification accuracy by 26% (relative) compared to i.i.d. baselines and standard active learners, given 30% of the annotation budget that would (naively) be required to annotate the dataset. Moreover, ALPF-learners fully annotate TinyImageNet at 42% lower cost. Surprisingly, we observe that accounting for per-example annotation costs can alter the conventional wisdom that active learners should solicit labels for hard examples. | accepted-poster-papers | This paper is on active deep learning in the setting where a label hierarchy is available for multiclass classification problems: a fairly natural and pervasive setting. 
The extension where the learner can ask for example labels as well as a series of questions to adequately descend the label hierarchy is an interesting twist on active learning. The paper is well written and develops several natural formulations which are then benchmarked on CIFAR10, CIFAR100, and Tiny ImageNet using a ResNet-18 architecture. The empirical results are carefully analyzed and appear to set interesting new baselines for active learning. | train | [
"rJg1hu9JCX",
"rJl_6OcJCm",
"H1eMytqJ0Q",
"SkxYxO9kRX",
"B1xh9IemTX",
"BklDVW9F2X",
"SkgbODq_3X"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their thoughtful feedback and clear recommendation to accept. We were glad to see that you found the paper to be well-articulated and easy to read. \n\nPer your feedback, we will bring up the related work (currently in section 4) and cite it throughout as each prior technical idea is intr... | [
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"B1xh9IemTX",
"BklDVW9F2X",
"SkgbODq_3X",
"iclr_2019_HJfSEnRqKQ",
"iclr_2019_HJfSEnRqKQ",
"iclr_2019_HJfSEnRqKQ",
"iclr_2019_HJfSEnRqKQ"
] |
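The ALPF abstract above describes each yes/no question as eliminating classes from an example's candidate set until either an exact label is uncovered or the example is left partially labeled. The bookkeeping can be sketched with plain set operations; the tiny label set, hierarchy, and oracle below are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of ALPF's partial-label bookkeeping: a "yes" to
# "does this example belong to composite class C?" intersects the candidate
# set with C, while a "no" subtracts C from it.
def answer(question, true_label):
    """Annotator's yes/no reply: is the true label in this composite class?"""
    return true_label in question

def update(candidates, question, reply):
    """A 'yes' keeps only classes in the composite; a 'no' removes them."""
    return candidates & question if reply else candidates - question

labels = {"cat", "dog", "car", "bus"}
hierarchy = [{"cat", "dog"}, {"car", "bus"}, {"cat"}, {"car"}]

candidates, true_label = set(labels), "dog"
for q in hierarchy:
    if len(candidates) == 1:
        break                      # exact label uncovered; stop asking
    candidates = update(candidates, q, answer(q, true_label))
# candidates is now {"dog"}: three questions sufficed, matching the idea
# that a hierarchy lets binary feedback uncover a multiclass label cheaply.
```

A real ALPF learner would additionally score (example, composite-class) pairs to decide which question to ask next, rather than walking a fixed hierarchy as this demo does.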