paper_id: string (length 19-21)
paper_title: string (length 8-170)
paper_abstract: string (length 8-5.01k)
paper_acceptance: string (18 distinct values)
meta_review: string (length 29-10k)
label: string (3 distinct values)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
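Read as a column schema, each row below is one record. The following is a minimal Python sketch of what a single record looks like under this schema; it assumes the per-review lists are index-aligned and that -1 in the rating/confidence lists marks unscored comments (both inferred from the rows below). All values are illustrative placeholders except the id and title, which come from the first row.

```python
# One hypothetical record matching the schema above.
# Only paper_id / paper_title are real (taken from the first row below);
# every other value is a placeholder with the right type and shape.
record = {
    "paper_id": "iclr_2019_rygo9iR9F7",      # string, length 19-21
    "paper_title": "Progressive Weight Pruning Of Deep Neural Networks Using ADMM",
    "paper_abstract": "...",                  # string, up to ~5.01k chars
    "paper_acceptance": "rejected-papers",    # one of 18 classes
    "meta_review": "...",                     # string, up to ~10k chars
    "label": "train",                         # one of 3 classes (train/val/test)
    "review_ids": ["BJlr4Trw3X", "r1l8ihSI27"],
    "review_writers": ["official_reviewer", "official_reviewer"],
    "review_contents": ["...", "..."],
    "review_ratings": [5, 4],                 # -1 marks unscored comments
    "review_confidences": [4, 4],
    "review_reply_tos": ["iclr_2019_rygo9iR9F7", "iclr_2019_rygo9iR9F7"],
}

# The six per-review lists are index-aligned: entry i of each list
# describes the same review or comment i.
list_fields = [k for k in record if k.startswith("review_")]
assert all(len(record[f]) == len(record["review_ids"]) for f in list_fields)
assert 19 <= len(record["paper_id"]) <= 21
print(len(list_fields))  # 6
```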
iclr_2019_rygo9iR9F7
Progressive Weight Pruning Of Deep Neural Networks Using ADMM
Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader application on edge computing devices. Extensive research has been conducted on DNN model compression and pruning. However, most previous work takes heuristic approaches. This work proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique for dealing with non-convex optimization problems with potentially combinatorial constraints. Motivated by dynamic programming, the proposed method reaches an extremely high pruning rate by using partial prunings with moderate pruning rates. It thereby resolves the accuracy degradation and long convergence times that arise when pursuing extremely high pruning ratios. It achieves up to a 34× pruning rate on the ImageNet dataset and a 167× pruning rate on the MNIST dataset, significantly higher than those reported in prior work. Under the same number of epochs, the proposed method also achieves faster convergence and higher compression rates. The code and pruned DNN models are released at the anonymous link bit.ly/2zxdlss.
rejected-papers
The paper proposes a progressive pruning technique that achieves a high pruning ratio. Reviewers have a consensus on rejection. Reviewer 1 pointed out that the experimental results are weak. Reviewer 2 is also concerned about the proposed method and experiments. Reviewer 3 is concerned that this paper is incremental work. Overall, this paper does not meet the standard of ICLR. Rejection is recommended.
train
[ "BJlr4Trw3X", "H1gBi7r507", "H1xnpMS5Am", "rklRmGH5Rm", "BJxW_kJ5h7", "r1l8ihSI27" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper focus on weight pruning for neural network compression. The proposed method is based on ADMM optimization method for neural network loss with constraint on the l_0 norm of weights, proposed in Zhang et al. 2018b. Two improvements, masked retraining and progressive pruning, are introduced. Masked retrain...
[ 5, -1, -1, -1, 5, 4 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2019_rygo9iR9F7", "r1l8ihSI27", "BJlr4Trw3X", "BJxW_kJ5h7", "iclr_2019_rygo9iR9F7", "iclr_2019_rygo9iR9F7" ]
iclr_2019_rygp3iRcF7
Area Attention
Existing attention mechanisms are mostly item-based in that a model is trained to attend to individual items in a collection (the memory), where each item has a predefined, fixed granularity, e.g., a character or a word. Intuitively, an area in the memory consisting of multiple items can be worth attending to as a whole. We propose area attention: a way to attend to an area of the memory, where each area contains a group of items that are either spatially adjacent when the memory has a 2-dimensional structure, such as images, or temporally adjacent for 1-dimensional memory, such as natural language sentences. Importantly, the size of an area, i.e., the number of items in an area or the level of aggregation, is dynamically determined via learning and can vary depending on the learned coherence of the adjacent items. By giving the model the option to attend to an area of items, instead of only individual items, a model can attend to information with varying granularity. Area attention can work along with multi-head attention for attending to multiple areas in the memory. We evaluate area attention on two tasks, neural machine translation (both character- and token-level) and image captioning, and improve upon strong (state-of-the-art) baselines in all cases. These improvements are obtained with a basic form of area attention that is parameter-free. In addition to proposing the novel concept of area attention, we contribute an efficient way of computing it by leveraging the technique of summed area tables.
rejected-papers
Although the idea is a straightforward extension of the usual (flat) attention mechanism (which is positive), it does show some improvement in a series of experiments done in this submission. The reviewers however found the experimental results to be rather weak and believe that there may be other problems in which the proposed attention mechanism could be better utilized, despite the authors' effort at improving the results further during the rebuttal period. This may be due to the less-than-desirable form the initial submission was in, and when a new version with perhaps a new set of more convincing experiments is reviewed elsewhere, it may be received with a more positive attitude from the reviewers.
train
[ "Sye91YISeV", "BklOdLLreV", "ryxwGrUrxE", "B1ga9sDNxN", "SkxW5QBK3m", "HygTAER-gE", "HygeYLjd27", "rylq26xWeV", "rkeQcjb9AX", "S1xGY8W5CQ", "BJeGDE-9AQ", "rygBF0ec0X", "H1gTcC3xnQ", "S1e0m61nom", "BJe3HncKoQ", "r1gjJpJ_oQ", "ryxUM6rPi7", "HJxsDJ2jtQ", "H1lSNXPF57", "BJlczK-Mcm"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "public", "author", "public", "public", "author", "public", "author", ...
[ "While accuracy gains are not always significant, the improvements from area attention are pretty consistently seen across most conditions. While the baseline models were well tuned by previous work, we didn’t particularly tune each model to better work with area attention. We believe some hyperparameter tuning wou...
[ -1, -1, -1, -1, 6, -1, 5, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "B1ga9sDNxN", "rylq26xWeV", "HygTAER-gE", "S1xGY8W5CQ", "iclr_2019_rygp3iRcF7", "BJeGDE-9AQ", "iclr_2019_rygp3iRcF7", "rkeQcjb9AX", "HygeYLjd27", "H1gTcC3xnQ", "SkxW5QBK3m", "iclr_2019_rygp3iRcF7", "iclr_2019_rygp3iRcF7", "BJe3HncKoQ", "r1gjJpJ_oQ", "ryxUM6rPi7", "B1eyP2OZc7", "icl...
iclr_2019_rygunsAqYQ
Implicit Maximum Likelihood Estimation
Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
rejected-papers
The manuscript proposes a novel estimation technique for generative models based on fast nearest neighbors and inspired by maximum likelihood estimation. Overall, the reviewers and AC agree that the general problem statement is timely and interesting, and the subject is of interest to the ICLR community. The reviewers and ACs note weaknesses in the evaluation of the proposed method. In particular, reviewers note that the Parzen-based log-likelihood estimate is known to be unreliable in high dimensions. This makes a quantitative evaluation of the results challenging, so other metrics should be evaluated. Reviewers also expressed concerns about the strength of the baselines compared. Additional concerns are raised with regard to scalability, which the authors address in the rebuttal.
test
[ "SyxxEX5MkN", "HJxKXwZYnm", "rylhASZchX", "S1xeMzFO3X", "BJljEH0Aim", "SJgIh9jniX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Sorry for the delay in posting the rebuttal. We've been a bit short on time due to various other deadlines, but below is a rebuttal of the key points that were raised. \n\nAnonReviewer1:\n\nTheorem1: Any distribution that can be translated and scaled arbitrarily is in a location-sacle family of distributions. This...
[ -1, 3, 4, 5, -1, -1 ]
[ -1, 4, 4, 4, -1, -1 ]
[ "iclr_2019_rygunsAqYQ", "iclr_2019_rygunsAqYQ", "iclr_2019_rygunsAqYQ", "iclr_2019_rygunsAqYQ", "SJgIh9jniX", "iclr_2019_rygunsAqYQ" ]
iclr_2019_rylIy3R9K7
Understand the dynamics of GANs via Primal-Dual Optimization
The generative adversarial network (GAN) is one of the best known unsupervised learning techniques these days due to its superior ability to learn data distributions. In spite of its great success in applications, GAN is known to be notoriously hard to train. The tremendous amount of time it takes to run the training algorithm and its sensitivity to hyper-parameter tuning have been haunting researchers in this area. To resolve these issues, we need to first understand how GANs work. Herein, we take a step in this direction by examining the dynamics of GANs. We relate a large class of GANs, including the Wasserstein GAN, to max-min optimization problems with the coupling term being linear over the discriminator. By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm for training GANs would in fact converge to a stationary solution with a sublinear rate. The same framework also applies to multi-task learning and distributionally robust learning problems. We verify our analysis on numerical examples with both synthetic and real data sets. We hope our analysis sheds light on future studies of the theoretical properties of relevant machine learning problems.
rejected-papers
The paper studies the convergence of a primal-dual algorithm on a special min-max problem in WGAN, where the maximization is with respect to linear variables (a linear discriminator) and the minimization is over non-convex generators. Experiments with both simulated and real-world data are conducted to show that the algorithm works for WGANs and multi-task learning. The major concern of the reviewers is that the linear discriminator assumption in WGAN is too restrictive compared to the general non-convex min-max saddle point problem in GANs. A linear discriminator implies that the maximization part of the min-max problem is concave, and it is thus not surprising that under this assumption the paper converts the original problem to a non-convex optimization instance and proves its first-order convergence with the descent lemma. This technique, however, can't be applied to the general non-convex saddle point problem in GANs. The experimental studies are also not strong enough. Therefore, the current version of the paper is rated as a borderline lean reject.
val
[ "SkgeCzEiT7", "rJxd8fVo6m", "Byeb4WVoa7", "rJxuNaMj6Q", "ryeU9NyT2Q", "HJeodzPtnm", "B1eiAIPInQ", "H1xMvWvMhQ", "Hyl428xe9X", "Skx2YTK0Km" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "Thanks sincerely for the positive feedback from the reviewer. We greatly appreciate the time and effort that have been made by the reviewer. \n\nThe general GANs can be formulated as primal-dual optimization problems with both the primal and the dual are nonconvex. Proving the convergence of any algorithm on these...
[ -1, -1, -1, -1, 4, 5, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 3, -1, -1, -1 ]
[ "B1eiAIPInQ", "HJeodzPtnm", "ryeU9NyT2Q", "H1xMvWvMhQ", "iclr_2019_rylIy3R9K7", "iclr_2019_rylIy3R9K7", "iclr_2019_rylIy3R9K7", "Hyl428xe9X", "Skx2YTK0Km", "iclr_2019_rylIy3R9K7" ]
iclr_2019_rylKB3A9Fm
Assessing Generalization in Deep Reinforcement Learning
Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but has been shown to be sensitive to system changes at test time. As a result, building deep RL agents that generalize has become an active research area. Our aim is to catalyze and streamline community-wide progress on this problem by providing the first benchmark and a common experimental protocol for investigating generalization in RL. Our benchmark contains a diverse set of environments and our evaluation methodology covers both in-distribution and out-of-distribution generalization. To provide a set of baselines for future research, we conduct a systematic evaluation of state-of-the-art algorithms, including those that specifically tackle the problem of generalization. The experimental results indicate that in-distribution generalization may be within the capacity of current algorithms, while out-of-distribution generalization is an exciting challenge for future work.
rejected-papers
The manuscript proposes benchmarks for studying generalization in reinforcement learning, primarily through the alteration of the environment parameters of standard tasks such as Mountain Car and Half Cheetah. In contrast with methodological innovations where a numerical argument can often be made for the new method's performance on well-understood tasks, a paper introducing a new benchmark must be held to a high standard in terms of the usefulness of the benchmark in studying the phenomenon under consideration. Reviewers commended the quality of writing and considered the experiments given the set of tasks to be thorough, but there were serious concerns from several reviewers regarding how well-motivated this benchmark is and restrictions viewed as artificial (no training at test-time), concerns which the updated manuscript has failed to address. I therefore recommend rejection at this stage, and urge the authors to carefully consider the desiderata for a generalization benchmark and why their current proposed set of tasks satisfies (or doesn't satisfy) those desiderata.
train
[ "HyecLpxACQ", "BJlEPEq56X", "HJg2zNq5aQ", "HygylNq5aQ", "rkGd7996m", "B1gkuMGYnX", "ryloTLBC2m", "B1lNZjECnQ" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The results of the OpenAI Retro contest are consistent with our conclusion that vanilla deep RL algorithms usually generalize better than EPOpt and RL^2. As a recap, the OpenAI Retro contest was a transfer learning challenge on Sonic the Hedgehog games. Given a set of training levels, teams were tasked with traini...
[ -1, -1, -1, -1, -1, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, 2, 5, 3 ]
[ "ryloTLBC2m", "ryloTLBC2m", "B1gkuMGYnX", "B1lNZjECnQ", "iclr_2019_rylKB3A9Fm", "iclr_2019_rylKB3A9Fm", "iclr_2019_rylKB3A9Fm", "iclr_2019_rylKB3A9Fm" ]
iclr_2019_rylV6i09tX
Interpreting Adversarial Robustness: A View from Decision Surface in Input Space
One popular hypothesis about neural network generalization is that flat local minima of the loss surface in parameter space lead to good generalization. However, we demonstrate that the loss surface in parameter space has no obvious relationship with generalization, especially under adversarial settings. Through visualizing decision surfaces in both parameter space and input space, we instead show that the geometry of the decision surface in input space correlates well with adversarial robustness. We then propose an adversarial robustness indicator, which can evaluate a neural network's intrinsic robustness without testing its accuracy under adversarial attacks. Guided by it, we further propose our robust training method. Without involving adversarial training, our method can enhance a network's intrinsic adversarial robustness against various adversarial attacks.
rejected-papers
This paper studies the relationship between flatness in parameter space and generalization. They show through visualization experiments on MNIST and CIFAR-10 that there is no obvious relationship between the two. However, the reviewers found the motivation for the visualization approach unconvincing and further found significant overlap between the proposed method and that of Ross & Doshi. Thus the paper should improve its framing, experimental insights and relation to prior work before being ready for publication.
train
[ "S1epoBY4lV", "S1gwEq9SkE", "H1gFxIzSJN", "S1xY1GzSk4", "SkgNELFfRQ", "rylpztufCm", "SkxL3EDvaQ", "SyxnXcj03m", "S1licXsy6X", "HJgm8ciA3X", "ryehFslyam", "HJejsv4An7" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thanks for the reviewer’s precious comments! We do believe that there is a lot to be improved in this paper!\n1)\tAbout the first concern of the reviewer, the reason of “equated generalization” is that previously upon the finish of this paper, there is no clear definition of the generalization in the adversarial s...
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 4, 5 ]
[ "SkxL3EDvaQ", "S1xY1GzSk4", "SyxnXcj03m", "HJgm8ciA3X", "rylpztufCm", "S1licXsy6X", "iclr_2019_rylV6i09tX", "HJejsv4An7", "ryehFslyam", "HJejsv4An7", "iclr_2019_rylV6i09tX", "iclr_2019_rylV6i09tX" ]
iclr_2019_rylWVnR5YQ
Context Dependent Modulation of Activation Function
We propose a modification to traditional Artificial Neural Networks (ANNs) that provides the ANNs with new aptitudes motivated by biological neurons. Biological neurons work far beyond linearly summing up synaptic inputs and then transforming the integrated information. A biological neuron changes firing modes according to peripheral factors (e.g., neuromodulators) as well as intrinsic ones. Our modification connects a new type of ANN node, which mimics the function of biological neuromodulators and is termed a modulator, to enable other traditional ANN nodes to adjust their activation sensitivities at run-time based on their input patterns. In this manner, we enable the slope of the activation function to be context dependent. This modification produces statistically significant improvements in comparison with traditional ANN nodes in the context of Convolutional Neural Networks and Long Short-Term Memory networks.
rejected-papers
The paper adds a new level of complexity to neural networks by modulating the activation functions of a layer as a function of the previous layer's activations. The method is evaluated on relatively simple vision and language tasks. The idea is nice, but it seems to be a special case of previously published work, and the results are not convincing. Four of five reviewers agree that the work would benefit from improving comparisons with existing approaches, but also from improving its theoretical framework in light of competing approaches.
train
[ "r1xhUmtURX", "rylOJXtUC7", "ByemMfYIRX", "SJeiT-t8Cm", "Skg599BQRX", "SJgMQuLd6Q", "H1l5sLRf6X", "HJxlr2rg6Q", "HJgd_ZWanX", "BJgfNNK92m" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I.1 We used a modulation to change the shape of the activation, it is still a single computation but the effect is multiplicative instead of additive and effects an entire layer rather than a single node. \nI.2 Our modulation was set on the convolution layer only before the activation layer. It adds a very small a...
[ -1, -1, -1, -1, -1, 4, 4, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, 5, 3, 4, 5, 4 ]
[ "BJgfNNK92m", "HJgd_ZWanX", "HJxlr2rg6Q", "H1l5sLRf6X", "SJgMQuLd6Q", "iclr_2019_rylWVnR5YQ", "iclr_2019_rylWVnR5YQ", "iclr_2019_rylWVnR5YQ", "iclr_2019_rylWVnR5YQ", "iclr_2019_rylWVnR5YQ" ]
iclr_2019_rylbWhC5Ym
HR-TD: A Regularized TD Method to Avoid Over-Generalization
Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular methods, one major drawback of temporal difference learning with neural networks and other function approximators is that it tends to over-generalize across temporally successive states, resulting in slow convergence and even instability. In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), that reduces over-generalization and thus leads to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators. HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvements in learning behavior and performance.
rejected-papers
All three reviewers raised the issues that (a) the problem tackled in the paper was insufficiently motivated, (b) the solution strategy was also not sufficiently motivated and (c) the experiments had serious methodological issues.
train
[ "rygarC_anX", "rygCKFqc2X", "rklhGtXS3m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers the problem of overgeneralization between adjacent states of the one-step temporal difference error, when using function approximation. The authors suggest an explicit regularization scheme based on the correlation between the respective features, which reduces to penalizing the Hadamard produc...
[ 4, 3, 2 ]
[ 4, 4, 5 ]
[ "iclr_2019_rylbWhC5Ym", "iclr_2019_rylbWhC5Ym", "iclr_2019_rylbWhC5Ym" ]
iclr_2019_rylhToC5YQ
Unsupervised Neural Multi-Document Abstractive Summarization of Reviews
Abstractive summarization has been studied using neural sequence transduction methods with datasets of large, paired document-summary examples. However, such datasets are rare and the models trained from them do not generalize to other domains. Recently, some progress has been made in learning sequence-to-sequence mappings with only unpaired examples. In our work, we consider the setting where there are only documents (product or business reviews) with no summaries provided, and propose an end-to-end, neural model architecture to perform unsupervised abstractive summarization. Our proposed model consists of an auto-encoder trained so that the mean of the representations of the input reviews decodes to a reasonable summary-review. We consider variants of the proposed architecture and perform an ablation study to show the importance of specific components. We show through metrics and human evaluation that the generated summaries are highly abstractive, fluent, relevant, and representative of the average sentiment of the input reviews.
rejected-papers
This paper introduces a method for unsupervised abstractive summarization of reviews. Strengths: (1) The direction (developing unsupervised multi-document summarization systems) is exciting. (2) There are interesting aspects to the model. Weaknesses: (1) The authors are clearly undecided how to position this work: either as introducing a generic document summarization framework or as an approach specific to summarization of reviews. If it is the former, the underlying assumptions, e.g., that the summary looks like a single document in a group, are problematic. If it is the latter, then comparisons to some more specialized methods are lacking (see comments of R1). (2) Evaluation, though improved since the first submitted version (when human evaluation was added), is still not great (see R1 / R3). The automatic metrics are not very convincing and do not seem to be very consistent with the results of human eval. I believe that instead of or along with human eval, the authors should create human-written summaries and evaluate against them. It has been done for extractive multi-document summarization and can be done here. Without this, it would be impossible to compare to this submission in future work. (3) It is not very clear that generating abstractive summaries of the form proposed in the paper is an effective way to summarize documents. Basically, a good summary should reflect the diversity of opinions rather than an average / most frequent opinion in the review collection. By generating the summary from a review LM, the authors make sure that there is no redundancy (e.g., alternative views) or contradictions. That's not really what one would want from a summary (see R3 and also the non-public discussion with R1). Overall, I'd definitely like to see this work published but my take is that it is not ready yet. R1 and R2 are relatively negative and generally in agreement. R3 is very positive.
I share excitement about the research direction with R3 but I believe that concerns of R1 and R2 are valid and need to be addressed before the paper gets published.
train
[ "r1gFz6AaAQ", "B1xGr10aCm", "B1gpdP5K37", "SJeQ9oU_6X", "SylKksg_pm", "B1eUccxupQ", "SJgmRtgdT7", "BygCDbac37", "BJgnUKPvhX", "rJgGTrfa5Q" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Regarding usefulness/practicality of abstractive summarization, we believe the most natural form of summary for humans is language, i.e. sentences/paragraphs. Certainly extracting common bi-grams could be done, is straightforward, and has been done in review-specific summarization systems in prior work, but is in ...
[ -1, -1, 4, -1, -1, -1, -1, 5, 9, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 4, -1 ]
[ "iclr_2019_rylhToC5YQ", "B1gpdP5K37", "iclr_2019_rylhToC5YQ", "BJgnUKPvhX", "iclr_2019_rylhToC5YQ", "B1gpdP5K37", "BygCDbac37", "iclr_2019_rylhToC5YQ", "iclr_2019_rylhToC5YQ", "iclr_2019_rylhToC5YQ" ]
iclr_2019_ryljV2A5KX
IB-GAN: Disentangled Representation Learning with Information Bottleneck GAN
We present a novel GAN architecture for disentangled representation learning. The new model architecture is inspired by Information Bottleneck (IB) theory and is thereby named IB-GAN. The IB-GAN objective is similar to that of InfoGAN but has a crucial difference: a capacity regularization for mutual information is adopted, thanks to which the generator of IB-GAN can harness a latent representation in a disentangled and interpretable manner. To facilitate the optimization of IB-GAN in practice, a new variational upper bound is derived. With experiments on the CelebA, 3DChairs, and dSprites datasets, we demonstrate that the visual quality of samples generated by IB-GAN is often better than that of β-VAEs. Moreover, IB-GAN achieves much higher disentanglement metric scores than β-VAEs or InfoGAN on the dSprites dataset.
rejected-papers
Strengths: This paper introduces a clever construction to build a more principled disentanglement objective for GANs than the InfoGAN. The paper is relatively clearly written. This method provides the possibility of combining the merits of GANs with the useful information-theoretic quantities that can be used to regularize VAEs. Weaknesses: The quantitative experiments are based entirely around the toy dSprites dataset, on which they perform comparably to other methods. Additionally, the qualitative results look pretty bad (in my subjective opinion). They may still be better than a naive VAE, but the authors could have demonstrated the ability of their model by comparing their models against other models both qualitatively and quantitatively on problems hard enough to make the VAEs fail. Points of contention: The quantitative baselines are taken from another paper which did zero hyperparameter search. However the authors provided an updated results table based on numbers from other papers in a comment. Consensus: Everyone agreed that the idea was good and the experiments were lacking. Some of the comments about experiments were addressed in the updated version but not all.
train
[ "HyxtBKBJgV", "rJlUMwlC1N", "Bke74c_cA7", "rkgsWcu9AX", "S1lLJvolTX", "H1xIo_iea7", "SygaMvd5AQ", "HkgL54gpRX", "HJeR0Du50X", "SklaxQY0nm" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "\nWe understand your concern that the hyperparameters for the baselines are not thoroughly explored in [1]. Per your suggestion, we will modify the final draft as follows.\n(1) We will update the baseline scores in Table 1 with the scores in the original papers [2,3,4], including Kim & Mnih’s model. We believe the...
[ -1, -1, -1, -1, 7, 7, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, 3, 4, -1, -1, -1, 4 ]
[ "rJlUMwlC1N", "iclr_2019_ryljV2A5KX", "SklaxQY0nm", "SklaxQY0nm", "iclr_2019_ryljV2A5KX", "iclr_2019_ryljV2A5KX", "H1xIo_iea7", "H1xIo_iea7", "S1lLJvolTX", "iclr_2019_ryljV2A5KX" ]
iclr_2019_rylxxhRctX
Coverage and Quality Driven Training of Generative Image Models
Generative modeling of natural images has been extensively studied in recent years, yielding remarkable progress. Current state-of-the-art methods are based either on maximum likelihood estimation or on adversarial training. Both methods have their own drawbacks, which are complementary in nature. The first leads to over-generalization, as the maximum likelihood criterion encourages models to cover the support of the training data by heavily penalizing small masses assigned to training data. Simplifying assumptions in such models limit their capacity and make them spill mass on unrealistic samples. The second leads to mode-dropping, since adversarial training encourages high-quality samples from the model but only indirectly enforces diversity among the samples. To overcome these drawbacks we make two contributions. First, we propose a model that extends variational autoencoders by using deterministic invertible transformation layers to map samples from the decoder to the image space. This induces correlations among the pixels given the latent variables, improving over the factorial decoders commonly used in variational autoencoders. Second, we propose a unified training approach that leverages coverage- and quality-based criteria. Our models obtain likelihood scores competitive with state-of-the-art likelihood-based models, while achieving sample quality typical of adversarially trained networks.
rejected-papers
The overall view of the reviewers is that the paper is not quite good enough as it stands. The reviewers also appreciate the contributions, so taking the comments into account and resubmitting elsewhere is encouraged.
train
[ "S1gg1Q_z1E", "r1gATcflyN", "Byg2J05T07", "Hygt-7q6Rm", "rJeQDLz6AX", "r1ePWfg3RX", "Syl5Qfg3AX", "S1gtLMQ50m", "HkesL2dBCm", "ryePJAu_Cm", "HJgraPppn7", "rJeShhuHRm", "B1gzU5E7Rm", "B1g-vw4QC7", "BylW3ZxQRX", "SJxDFZe7CX", "BJg_Of7MA7", "S1eKyTOnpQ", "B1eXzImmp7", "S1eUt8Qma7"...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", ...
[ "The authors would like to thank reviewer 3 for his thoughts, and fast answers.\n\nThe last paragraph of our previous response details the validity of Eq(8) under optimal discriminator assumption. It seemed necessary, as in the f-gan paper (https://arxiv.org/pdf/1606.00709.pdf), in section 2.2 Eq(4), a lower bound ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 5 ]
[ "r1gATcflyN", "Hygt-7q6Rm", "ryePJAu_Cm", "rJeQDLz6AX", "r1ePWfg3RX", "S1gtLMQ50m", "S1gtLMQ50m", "Sye3lkqxTX", "B1g-vw4QC7", "rJeShhuHRm", "iclr_2019_rylxxhRctX", "B1gzU5E7Rm", "SJxDFZe7CX", "BylW3ZxQRX", "BJg_Of7MA7", "BJg_Of7MA7", "iclr_2019_rylxxhRctX", "iclr_2019_rylxxhRctX", ...
iclr_2019_ryx3_iAcY7
Contextualized Role Interaction for Neural Machine Translation
Word inputs tend to be represented as single continuous vectors in deep neural networks. It is left to the subsequent layers of the network to extract relevant aspects of a word's meaning based on the context in which it appears. In this paper, we investigate whether word representations can be improved by explicitly incorporating the idea of latent roles. That is, we propose a role interaction layer (RIL) that consists of context-dependent (latent) role assignments and role-specific transformations. We evaluate the RIL on machine translation using two language pairs (En-De and En-Fi) and three datasets of varying size. We find that the proposed mechanism improves translation quality over strong baselines with limited amounts of data, but that the improvement diminishes as the size of data grows, indicating that powerful neural MT systems are capable of implicitly modeling role-word interaction by themselves. Our qualitative analysis reveals that the RIL extracts meaningful context-dependent roles and that it allows us to inspect more deeply the internal mechanisms of state-of-the-art neural machine translation systems.
rejected-papers
This paper proposes to improve MT with a specialized encoder component that models roles. It shows some improvements in low-resource scenarios. Overall, reviewers felt there were two issues with the paper: clarity of description of the contribution, and also the fact that the method itself was not seeing large empirical gains. On top of this, the method adds some additional complexity on top of the original model. Given that no reviewer was strongly in favor of the paper, I am not going to recommend acceptance at this time.
train
[ "rJeEvtKFAQ", "HJexH879hX", "B1lVhyY7R7", "BJlGWKO-pX", "HJgD1tOZTX", "r1x3n_uZam", "Hkxk2bKt2X", "rkxeKKs_h7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response.\n\nAccording to the authours' response, now I understand that the proposed method is mainly for the \"low-data regime.\"\nHowever, we cannot find this kind of descriptions in the submitted version.\nI believe that this paper should be re-organized to clarify the primal goal (or motivat...
[ -1, 4, -1, -1, -1, -1, 5, 4 ]
[ -1, 5, -1, -1, -1, -1, 4, 4 ]
[ "r1x3n_uZam", "iclr_2019_ryx3_iAcY7", "HJgD1tOZTX", "rkxeKKs_h7", "Hkxk2bKt2X", "HJexH879hX", "iclr_2019_ryx3_iAcY7", "iclr_2019_ryx3_iAcY7" ]
iclr_2019_ryxDUs05KQ
Difference-Seeking Generative Adversarial Network
We propose a novel algorithm, Difference-Seeking Generative Adversarial Network (DSGAN), developed from the traditional GAN. DSGAN considers the scenario in which training samples of the target distribution, p_t, are difficult to collect. Suppose there are two distributions p_d̄ and p_d such that the density of the target distribution can be expressed as the difference between the densities of p_d̄ and p_d. We show how to learn the target distribution p_t only via samples from p_d and p_d̄ (relatively easy to obtain). DSGAN has the flexibility to produce samples from various target distributions (e.g., the out-of-distribution). Two key applications, semi-supervised learning and adversarial training, are taken as examples to validate the effectiveness of DSGAN. We also provide theoretical analyses of the convergence of DSGAN.
rejected-papers
The paper presents a GAN for learning a target distribution that is defined as the difference between two other distributions. The reviewers and AC note that the limited novelty and unconvincing results of this paper do not meet the high standard of ICLR. The AC thinks the proposed method has potential and is interesting, but decided that the authors need more work before publication.
test
[ "rklu4Xex0m", "SyxWewllA7", "rJlhD4gxAm", "BkgHQNggRQ", "SkeKjN28aQ", "HJlU9T4ph7", "BkxL26einQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for all the comments.\n\nWe list all the modifications as following:\n\n1. We add Sec. 1.1 to describe our motivation and scenario.\n\n2. We reorganize Sec. 2.2 to clarify the main idea of DSGAN through several case studies.\n\n3. We move Sec. 3.1 in the original manuscript to appendix C in the new one.\n\n...
[ -1, -1, -1, -1, 5, 4, 3 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2019_ryxDUs05KQ", "BkxL26einQ", "HJlU9T4ph7", "SkeKjN28aQ", "iclr_2019_ryxDUs05KQ", "iclr_2019_ryxDUs05KQ", "iclr_2019_ryxDUs05KQ" ]
iclr_2019_ryxDjjCqtQ
Deconfounding Reinforcement Learning in Observational Settings
In this paper, we propose a general formulation to cope with a family of reinforcement learning tasks in observational settings, that is, learning good policies solely from the historical data produced by real environments with confounders (i.e., factors affecting both actions and rewards). Based on the proposed approach, we extend one representative reinforcement learning algorithm, the Actor-Critic method, to its deconfounding variant, which can also be straightforwardly applied to other algorithms. In addition, due to the lack of datasets in this direction, a benchmark is developed for deconfounding reinforcement learning algorithms by revising OpenAI Gym and MNIST. We demonstrate that the proposed algorithms are superior to traditional reinforcement learning algorithms in confounded environments. To the best of our knowledge, this is the first time that confounders have been taken into consideration for addressing full reinforcement learning problems.
rejected-papers
The paper studies RL based on data with confounders, where the confounders can affect both rewards and actions. The setting is relevant in many problems and can have much potential. This work is an interesting and useful attempt. However, reviewers raised many questions regarding the problem setup and its comparison to related areas like causal inference. While the author response provided further helpful details, the questions remained among the reviewers. Therefore, the paper is not recommended for acceptance in its current stage; more work is needed to better motivate the setting and clarify its relation to other areas. Furthermore, the paper should probably discuss its relation to (1) partially observable MDP; and (2) off-policy RL.
train
[ "Bkx0NwKtCX", "S1xZZDFFCX", "S1gEh7FYR7", "H1lrLmtKA7", "SJl8CZFK0Q", "SygU8Vhl0X", "HJxY5yhdC7", "Skeu0k3O0Q", "ryxNPyhd0X", "Bygl71nuC7", "Byg9XiEs3X", "BJelvuEjnm", "Hyx9C2Vqhm", "SyeZ0H9h2Q", "S1gO5nK3nm", "HkguruuhnQ", "H1lpVE_3nQ", "r1x1gx_hhm", "ByxwiAPhhQ", "Skluaf-shX"...
[ "author", "author", "author", "author", "author", "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "public", "public", "author", "public", "author...
[ "Re (1): Refer to Section 2.3 and Section 2.4, where we describe more methods of adjusting for confounders.\n \nRe (2): The kidney stone example is used throughout the paper, referring to Section 1, Section 2.1, Section 2.2, Section 3.1, Footnote 2, Appendix F.2, and Appendix H.3.\n \nRe (3): Z, sampled using Equat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "BJelvuEjnm", "BJelvuEjnm", "SygU8Vhl0X", "Hyx9C2Vqhm", "Byg9XiEs3X", "iclr_2019_ryxDjjCqtQ", "Hyx9C2Vqhm", "iclr_2019_ryxDjjCqtQ", "BJelvuEjnm", "Byg9XiEs3X", "iclr_2019_ryxDjjCqtQ", "iclr_2019_ryxDjjCqtQ", "iclr_2019_ryxDjjCqtQ", "S1gO5nK3nm", "HkguruuhnQ", "H1lpVE_3nQ", "r1x1gx_hh...
iclr_2019_ryxHii09KQ
In Your Pace: Learning the Right Example at the Right Time
Training neural networks is traditionally done by sequentially providing random mini-batches sampled uniformly from the entire dataset. In our work, we show that sampling mini-batches non-uniformly can both enhance the speed of learning and improve the final accuracy of the trained network. Specifically, we decompose the problem using the principles of curriculum learning: first, we sort the data by some difficulty measure; second, we sample mini-batches with a gradually increasing level of difficulty. We focus on CNNs trained on image recognition. Initially, we define the difficulty of a training image using transfer learning from some competitive "teacher" network trained on the Imagenet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and the CIFAR-100 datasets. We then suggest a bootstrap alternative to evaluate the difficulty of points using the same network without relying on a "teacher" network, thus increasing the applicability of our suggested method. We compare this approach to a related version of Self-Paced Learning, showing that our method benefits learning while SPL impairs it.
rejected-papers
This paper presents an interesting strategy of curriculum learning for training neural networks, where mini-batches of samples are formed with a gradually increasing level of difficulty. While reviewers acknowledge the importance of studying curriculum learning and the potential usefulness of the proposed approach for training neural networks, they raised several important concerns that place this paper below the acceptance bar: (1) empirical results are not convincing (R2, R3); comparisons on other (large-scale) datasets and with state-of-the-art methods would substantially strengthen the evaluation (R3); see also R2's concerns regarding the comprehensiveness of the study; (2) important references and baseline methods are missing – see R2's suggestions on how to improve; (3) limited technical novelty -- R1 has provided a very detailed review questioning the novelty of the proposed approach w.r.t. Weinshall et al., 2018. Another suggestion to further strengthen and extend the manuscript is to consider curriculum and anti-curriculum learning for increasing performance (R1). The authors provided an additional experiment on a subset of 7 classes from the ImageNet dataset, but this does not show the advantage of the proposed model in a large-scale learning setting. The AC decided that addressing (1)-(3) is indeed important for understanding the contribution of this work, and it is difficult to assess the scope of the contribution without addressing them.
train
[ "HJle-twYRQ", "rJxxa4IY3m", "BklIbzwd3X", "rkgoETj43m", "SylJfSAKsQ", "HJl8pfYo9Q", "r1xQui-t97" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "Following the reviews, we've added a section showing that curriculum by transfer achieves similar qualitative improvements to network generalization also when trained on a subset of the popular ImageNet dataset.\nWe've included a broader review of the relevant literature, emphasizing the difference between previou...
[ -1, 5, 4, 4, -1, -1, -1 ]
[ -1, 4, 4, 4, -1, -1, -1 ]
[ "iclr_2019_ryxHii09KQ", "iclr_2019_ryxHii09KQ", "iclr_2019_ryxHii09KQ", "iclr_2019_ryxHii09KQ", "iclr_2019_ryxHii09KQ", "r1xQui-t97", "iclr_2019_ryxHii09KQ" ]
iclr_2019_ryxLG2RcYX
Learning Abstract Models for Long-Horizon Exploration
In high-dimensional reinforcement learning settings with sparse rewards, performing effective exploration to even obtain any reward signal is an open challenge. While model-based approaches hold promise of better exploration via planning, it is extremely difficult to learn a reliable enough Markov Decision Process (MDP) in high dimensions (e.g., over 10^100 states). In this paper, we propose learning an abstract MDP over a much smaller number of states (e.g., 10^5), which we can plan over for effective exploration. We assume we have an abstraction function that maps concrete states (e.g., raw pixels) to abstract states (e.g., agent position, ignoring other objects). In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP). Concurrently, we learn a worker policy to travel between abstract states; the worker deals with the messiness of concrete states and presents a clean abstraction to the manager. On three of the hardest games from the Arcade Learning Environment (Montezuma's, Pitfall!, and Private Eye), our approach outperforms the previous state-of-the-art by over a factor of 2 in each game. In Pitfall!, our approach is the first to achieve superhuman performance without demonstrations.
rejected-papers
The paper presents a novel approach to exploration in long-horizon / sparse reward RL settings. The approach is based on the notion of abstract states, a space that is lower-dimensional than the original state space, and in which transition dynamics can be learned and exploration is planned. A distributed algorithm is proposed for managing exploration in the abstract space (done by the manager), and learning to navigate between abstract states (workers). Empirical results show strong performance on hard exploration Atari games. The paper addresses a key challenge in reinforcement learning - learning and planning in long horizon MDPs. It presents an original approach to this problem, and demonstrates that it can be leveraged to achieve strong empirical results. At the same time, the reviewers and AC note several potential weaknesses, the focus here is on the subset that substantially affected the final acceptance decision. First, the paper deviates from the majority of current state of the art deep RL approaches by leveraging prior knowledge in the form of the RAM state. The cause for concern is not so much the use of the RAM information, but the comparison to other prior approaches using "comparable amounts of prior knowledge" - an argument that was considered misleading by the reviewers and AC. The reviewers make detailed suggestions on how to address these concerns in a future revision. Despite initially diverging assessments, the final consensus between the reviewers and AC was that the stated concerns would require a thorough revision of the paper and that it should not be accepted in its current stage. On a separate note, a lot of the discussion between R1 and the authors centered on whether more comparisons / a larger number of seeds should be run. The authors argued that the requested comparisons would be too costly. A suggestion for a future revision of the paper would be to only run a large number (e.g., 10) of seeds for the first 150M steps of each experiment, and presenting these results separately from the long-running experiments. This should be a cost efficient way to shed light on a particularly important range, and would help validate claims about sample efficiency.
train
[ "SJl2_N-q2m", "B1ehFbba37", "Hye9J5a1k4", "BkgV4LeC0m", "B1eAD7KoAX", "BkeBNpfsCm", "HJeSXCh9Cm", "rJeSWZK9A7", "r1eQOgK90m", "ryl096Y5pm", "BklWPRFqpQ", "SygvApF5TQ", "rylm4WhLTQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper considers reinforcement learning tasks that have high-dimensional space, long-horizon time, sparse-rewards. In this setting, current reinforcement learning algorithms struggle to train agents so that they can achieve high rewards. To address this problem, the authors propose an abstract MDP algorithm. T...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 2, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2019_ryxLG2RcYX", "iclr_2019_ryxLG2RcYX", "iclr_2019_ryxLG2RcYX", "B1eAD7KoAX", "BkeBNpfsCm", "HJeSXCh9Cm", "rJeSWZK9A7", "r1eQOgK90m", "rylm4WhLTQ", "B1ehFbba37", "SJl2_N-q2m", "ryl096Y5pm", "iclr_2019_ryxLG2RcYX" ]
iclr_2019_ryxMX2R9YQ
CGNF: Conditional Graph Neural Fields
Graph convolutional networks have achieved tremendous success in the task of graph node classification. These models can learn better node representations by encoding the graph structure and node features. However, the correlation between node labels is not considered. In this paper, we propose a novel architecture for graph node classification, named conditional graph neural fields (CGNF). By integrating conditional random fields (CRF) into graph convolutional networks, we explicitly model a joint probability over the entire set of node labels, thus taking advantage of neighborhood label information in the node label prediction task. Our model has both the representation capacity of graph neural networks and the prediction power of CRFs. Experiments on several graph datasets demonstrate the effectiveness of CGNF.
rejected-papers
This paper introduces conditional graph neural fields, an approach that combines label compatibility scoring of conditional random fields with deep neural representations of nodes provided by graph convolutional networks. The intuition behind the proposed work is promising, and the results are strong. The reviewers and the AC note the following as the primary concerns of the paper: (1) The novelty of this work is limited, since a number of approaches have recently combined CRFs and neural networks, and it is unclear whether the application of those ideas to GCNs is sufficiently interesting, (2) the losses, especially EBM, and the use of greedy/beam-search inference were found to be quite simple, especially given these have been studied extensively in the literature, and (3) analysis and adequate discussion of the results is missing (only a single table of numbers is provided). Amongst other concerns, the reviewers identified issues with writing quality, lack of clear motivation for CRFs, and the selection of the benchmarks. Given the feedback, the authors responded with comments, and a revision that removes the use of EBM loss from the paper, which the reviewers appreciated. However, most of the concerns remain unaddressed. Reviewer 2 maintains that CRFs+NNs still need to be motivated better, since hidden representations already take the neighborhood into account, as demonstrated by the fact that CRF+NNs are not state-of-art in other applications. Reviewer 2 also points out the lack of a detailed analysis of the results. Reviewer 2 focuses on the simplicity of the loss and inference algorithms, which is also echoed by reviewer 1. Based on this discussion, the reviewers and the AC agree that the paper is not ready for acceptance.
train
[ "SJxmaRYzJ4", "HylhCmRp2Q", "Syx6o21R2Q", "S1eg0jkc0X", "rygYcsk5Cm", "B1x5Ls1qR7", "SyeJdNZYhm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thanks for the revision. But my main concerns (weakness in the learning algorithm and experiment results) are not addressed.\n\nw.r.t. motivation: I can fully understand why combining CRF and graph neural nets might be helpful, and indeed such combinations have been explored before in other domains like sequence ...
[ -1, 4, 5, -1, -1, -1, 5 ]
[ -1, 5, 4, -1, -1, -1, 5 ]
[ "S1eg0jkc0X", "iclr_2019_ryxMX2R9YQ", "iclr_2019_ryxMX2R9YQ", "SyeJdNZYhm", "HylhCmRp2Q", "Syx6o21R2Q", "iclr_2019_ryxMX2R9YQ" ]
iclr_2019_ryxOIsA5FQ
Stacking for Transfer Learning
In machine learning tasks, overfitting frequently crops up when the number of samples in the target domain is insufficient, since the generalization ability of the classifier is poor in this circumstance. To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner. The main idea of existing transfer learning algorithms is to reduce the difference between domains by sample selection or domain adaptation. However, no matter what transfer learning algorithm we use, the difference always exists, and the hybrid training of source and target data reduces the fitting capability of the learner on the target domain. Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur. To tackle this problem, we propose a two-phase transfer learning architecture based on ensemble learning, which uses existing transfer learning algorithms to train the weak learners in the first stage and uses the predictions on target data to train the final learner in the second stage. Under this architecture, the fitting capability and generalization capability can be guaranteed at the same time. We evaluated the proposed method on public datasets, which demonstrates its effectiveness and robustness.
rejected-papers
This work proposes a method for both instance and feature based transfer learning. The reviewers agree that the approach in current form lacks sufficient technical novelty for publication. The paper would benefit from experiments on larger datasets and with more analysis into the different aspects of the proposed model.
test
[ "BJx9irP76X", "rkeGG_6OhX", "SkeeJ9ku3Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed to solve the instance-based transfer learning and feature-based transfer learning by stacking with a two-phase training strategy. The source data and target data are hybrid together first to train weak learners, and then the ensembled super learner is utilized to get the final prediction. Detai...
[ 3, 4, 2 ]
[ 5, 5, 5 ]
[ "iclr_2019_ryxOIsA5FQ", "iclr_2019_ryxOIsA5FQ", "iclr_2019_ryxOIsA5FQ" ]
iclr_2019_ryxY73AcK7
Sorting out Lipschitz function approximation
Training neural networks subject to a Lipschitz constraint is useful for generalization bounds, provable adversarial robustness, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation function is 1-Lipschitz. The challenge is to do this while maintaining the expressive power. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.
rejected-papers
This paper presents an interesting and theoretically motivated approach to imposing Lipschitz constraints on functions learned by neural networks. R2 and R3 found the idea interesting, but R1 and R2 both point out several issues with the submitted version, including some problems with the proof--probably fixable--as well as a number of writing issues. The authors submitted a cleaned-up revised version, but upon checking revisions it appears the paper was almost completely re-written after the deadline. I do not think reviewers should be expected to comment a second time on such large changes, so I am okay with R1's decision to not review the updated version. Future reviewers of a more polished version of the paper will be in a better position to assess its merits in detail.
train
[ "BJx3P-AvpX", "HkgwSlAPpQ", "rkeWyGAwp7", "Hkx-jx0v6Q", "rJxFhEd63Q", "HkgdKwx93Q", "H1gaveTEn7", "BJgZYz8167", "B1evMmjA2X" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your detailed comments. We have uploaded a revised version of the paper which we believe addresses the majority of your concerns. You can find more detailed responses below. \n\nConcern 1: Is GroupSort leading to bad networks? (Integrate the topology of inputs)\n\nCould you clarify what you mean by “...
[ -1, -1, -1, -1, 7, 5, 4, -1, -1 ]
[ -1, -1, -1, -1, 3, 4, 4, -1, -1 ]
[ "HkgdKwx93Q", "iclr_2019_ryxY73AcK7", "H1gaveTEn7", "rJxFhEd63Q", "iclr_2019_ryxY73AcK7", "iclr_2019_ryxY73AcK7", "iclr_2019_ryxY73AcK7", "B1evMmjA2X", "iclr_2019_ryxY73AcK7" ]
iclr_2019_ryxaSsActQ
Dual Skew Divergence Loss for Neural Machine Translation
For neural sequence model training, maximum likelihood (ML) has been commonly adopted to optimize model parameters with respect to the corresponding objective. However, in the case of sequence prediction tasks like neural machine translation (NMT), training with the ML-based cross entropy loss would often lead to models that overgeneralize and plunge into local optima. In this paper, we propose an extended loss function called dual skew divergence (DSD), which aims to give a better tradeoff between generalization ability and error avoidance during NMT training. Our empirical study indicates that switching to DSD loss after the convergence of ML training helps the model skip the local optimum and stimulates a stable performance improvement. The evaluations on WMT 2014 English-German and English-French translation tasks demonstrate that the proposed loss indeed helps bring about better translation performance than several baselines.
rejected-papers
This paper proposes a new loss function that can be used in place of the standard maximum likelihood objective in training NMT models. This leads to a small improvement in training MT systems. There were some concerns about the paper though: one was that the method itself seemed somewhat heuristic without a clear mathematical explanation. The second was that the baselines seemed relatively dated, although one reviewer noted that this seemed like a bit of a lesser concern. Finally, the improvements afforded were relatively small. Given the high number of good papers submitted to ICLR this year, it seems that this one falls short of the acceptance threshold.
train
[ "SJxB8-YThQ", "rJlTXPy5n7", "HkgORetD3m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper describes a new loss function for training, that can be\nused as an alternative to maximum likelihood (cross entropy), or\nas a metric that is used to fine-tune a model that is initially\ntrained using ML.\n\nExperiments are reported on the WMT 2014 English-German and\nEnglish-French test sets.\n\nI thin...
[ 3, 6, 5 ]
[ 4, 4, 4 ]
[ "iclr_2019_ryxaSsActQ", "iclr_2019_ryxaSsActQ", "iclr_2019_ryxaSsActQ" ]
iclr_2019_ryxeB30cYX
Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training
Existing neural networks are vulnerable to "adversarial examples"---created by adding maliciously designed small perturbations to inputs to induce a misclassification by the networks. The most investigated defense strategy is adversarial training, which augments training data with adversarial examples. However, applying single-step adversaries in adversarial training does not support the robustness of the networks; instead, it can even cause the networks to overfit. In contrast to single-step training, multi-step training results in state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time. Therefore, we propose a method, Stochastic Quantized Activation (SQA), that solves the overfitting problem in single-step adversarial training and quickly achieves robustness comparable to the multi-step approach. SQA attenuates adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training. Throughout the experiments, our method demonstrates robustness against one of the strongest white-box attacks comparable to that of PGD training, but with much less computational cost. Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which is different from existing methods.
rejected-papers
While the paper contains interesting ideas, the reviewers suggest improving the clarity and experimental study of the paper. The work holds promises but is not ready for publication at ICLR.
train
[ "H1gel-SAn7", "S1lm07nnnQ", "B1xyY2gFhQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a model to improve adversarial training, by introducing random perturbations in the activations of one of the hidden layers. Experiments show that robustness to attacks can be improved, but seemingly at a significant cost to accuracy on non-adversarial input.\n\nI have not spent significant time...
[ 4, 5, 4 ]
[ 3, 5, 4 ]
[ "iclr_2019_ryxeB30cYX", "iclr_2019_ryxeB30cYX", "iclr_2019_ryxeB30cYX" ]
iclr_2019_ryxhB3CcK7
Probabilistic Neural-Symbolic Models for Interpretable Visual Question Answering
We propose a new class of probabilistic neural-symbolic models for visual question answering (VQA) that provide interpretable explanations of their decision making in the form of programs, given a small annotated set of human programs. The key idea of our approach is to learn a rich latent space which effectively propagates program annotations from known questions to novel questions. We do this by formalizing prior work on VQA, called module networks (Andreas, 2016) as discrete, structured, latent variable models on the joint distribution over questions and answers given images, and devise a procedure to train the model effectively. Our results on a dataset of compositional questions about SHAPES (Andreas, 2016) show that our model generates more interpretable programs and obtains better accuracy on VQA in the low-data regime than prior work.
rejected-papers
This paper proposes a latent variable approach to the neural module networks of Andreas et al., whereby the program determining the structure of a module network is a structured discrete latent variable. The authors explore inference mechanisms over such programs and evaluate them on SHAPES. This paper may seem acceptable on the basis of its scores, but R1 (in particular) and R3 did a shambolic job of reviewing: their reviews are extremely short, and offer no substance to justify their scores. R2 has admirably engaged in discussion and upped their score to 6, but continues to find the paper fairly borderline, as do I. Weighing the reviews by the confidence I have in the reviewers based on their engagement, I would have to concur with R2 that this paper is very borderline. I like the core idea, but agree that the presentation of the inference techniques for V-NMN is complex and could stand to be significantly improved. I appreciate that the authors have made some updates on the basis of R2's feedback, but unfortunately, due to the competitive nature of this year's ICLR and the number of acceptable papers, I cannot fully recommend acceptance at this time. As a complete side note, it is surprising not to see the Kingma & Welling (2013) VAE paper cited here, given the topic.
train
[ "rkgGCh_e1N", "ryl1ZpPly4", "BJxuJagkk4", "HkgE_Wx1yE", "rygqGjiRAm", "HJgdaqL5nQ", "B1xhUTsjRX", "BkeK5LLq07", "HkeMV8U907", "HygagHLqRX", "Hkl7MurqR7", "ryxReMqxCX", "SJxb4jA567", "BygPBqabpm", "Hkeq6bpbaQ", "SkxggDsxam", "ryeoWKdlpX", "BkevSwdl6Q", "HklnNrv53m", "S1x1C3I7i7"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_revi...
[ "I'd like to thank the authors for the comments. I agree with your points. I think that the updated version is a good work and I give it a positive score. As I pointed out, I can see that this work proposes a probabilistic neural symbolic models to the field of VQA. The probabilistic formulation is popular in many ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "ryl1ZpPly4", "HkgE_Wx1yE", "HkgE_Wx1yE", "rygqGjiRAm", "B1xhUTsjRX", "iclr_2019_ryxhB3CcK7", "HkeMV8U907", "SkxggDsxam", "SkxggDsxam", "iclr_2019_ryxhB3CcK7", "ryxReMqxCX", "iclr_2019_ryxhB3CcK7", "HklnNrv53m", "SkxggDsxam", "SkxggDsxam", "BkevSwdl6Q", "HklnNrv53m", "HJgdaqL5nQ", ...
iclr_2019_ryxhynC9KX
CNNSAT: Fast, Accurate Boolean Satisfiability using Convolutional Neural Networks
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied. State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables. To tackle this fundamental scalability challenge, we introduce CNNSAT, a fast and accurate statistical decision procedure for SAT based on convolutional neural networks. CNNSAT's effectiveness is due to a precise and compact representation of Boolean formulas. On both real and synthetic formulas, CNNSAT is highly accurate and orders of magnitude faster than the state-of-the-art solver Z3. We also describe how to extend CNNSAT to predict satisfying assignments when it predicts a formula to be satisfiable.
rejected-papers
The authors provide a convolutional neural network for predicting the satisfiability of SAT instances. The idea is interesting, and the main novelty in the paper is the use of convolutions in the architecture and a procedure to predict a witness when the formula is satisfiable. However, there are concerns about the suitability of convolutions for this problem because of the permutation invariance of SAT. Empirically, the resulting models are accurate (correctly predicting sat/unsat 90-99% of the time) while taking less time than some existing solvers. However, as pointed out by the reviewers, the empirical results are not sufficient to demonstrate the effectiveness of the approach. I want to thank the authors for the great work they did to address the concerns of the reviewers. The paper significantly improved over the reviewing period, and while it is not yet ready for publication, I want to encourage the authors to keep pushing the idea to further and improve the experimental results.
test
[ "Skxyjipi1N", "BkgBcAO_JV", "Syedaq__yN", "HkgK8jD53X", "H1g73lQD14", "ryxwUYbD1N", "r1gq0XWDyN", "Skx7_7WvkV", "BkxNCJMLJN", "HyxNIq0VyE", "rJlK9vC-14", "rJxd44Xt27", "Byg1KgFhAm", "HkxdYvRiAm", "Hkgd8D7jRX", "r1xyoXi9A7", "rJxXT0DOCQ", "SkgSKDHdA7", "r1gpxB7w0X", "Syl2A4mPR7"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "...
[ "Thanks for the reply. \n\n> Our wording may be unclear, which we will state more accurately: (1) CNNSAT's focus is on predicting SAT/UNSAT, not solving, and (2) it achieves very accurate and fast SAT/UNSAT prediction.\n\n-> Hmm, this sounds very different than what the authors said on November 28 in their comment ...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "Syedaq__yN", "ryxwUYbD1N", "H1g73lQD14", "iclr_2019_ryxhynC9KX", "BkxNCJMLJN", "Skx7_7WvkV", "HkxdYvRiAm", "HkxdYvRiAm", "HyxNIq0VyE", "rJxTi4mDR7", "rJxd44Xt27", "iclr_2019_ryxhynC9KX", "HkxdYvRiAm", "Hkgd8D7jRX", "r1xyoXi9A7", "r1gpxB7w0X", "SkgSKDHdA7", "Syl2A4mPR7", "SygnL1q...
iclr_2019_ryxjH3R5KQ
Single Shot Neural Architecture Search Via Direct Sparse Optimization
Recently, Neural Architecture Search (NAS) has aroused great interest in both academia and industry; however, it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithms or reinforcement learning as previous works do, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model-pruning view of the NAS problem. Specifically, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularization to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys the advantages of both differentiability and efficiency, and can therefore be directly applied to large datasets like ImageNet. In particular, on the CIFAR-10 dataset, DSO-NAS achieves an average test error of 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.
rejected-papers
This paper proposes Direct Sparse Optimization (DSO)-NAS to obtain neural architectures for specific problems at a reasonable computational cost. Regularization by sparsity is a neat idea, but a similar idea has been discussed in many pruning papers. The "model pruning formulation for neural architecture search based on sparse optimization" is claimed to be the main contribution, but it is debatable whether that contribution is strong: worse accuracy, more computation, and more #parameters than Mnas (less search time, but also worse search quality). The effect of each proposed technique is appropriately evaluated. However, the reviewers are concerned that the proposed method does not outperform the existing state-of-the-art methods in terms of classification accuracy. There are also some concerns about the search space of the proposed method. The claims of being "the first NAS algorithm to perform direct search on ImageNet" and "the first method to perform direct search without block structure sharing" are debatable. Given that the acceptance rate of ICLR should be <30%, I would say this paper is good but not outstanding.
test
[ "H1xML64ikN", "H1eM0qbj1V", "rylJA7G82Q", "HJeUxsq60Q", "S1eaY6jqRQ", "r1e9RpscAX", "H1l9VFs9AQ", "Syxfj_H90X", "S1xBci1FAm", "H1lLBiyK0Q", "HyewJo1YAQ", "B1g1aj1F07", "HJecGl3Z0m", "Bkxnj9SlTX", "SJx-Kv9JTQ", "H1gmII1EaX", "SkgtFcZbTm", "ryehyj0ka7", "rylgsNqchQ", "SkgrQMiFhQ"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "public", "public", "author", "author", "author", "official_reviewer", "official_r...
[ "We have added the MNasNet -92 (without SE) results to Table2 for comparison. Please note that, the difference on Imagenet is only 0.2% in terms of accuracy, which is quite minor compared to the error rate 25.2%. We indeed don't optimize latency intentionally in this work, however this should not be hard if we coul...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, -1 ]
[ "H1eM0qbj1V", "H1l9VFs9AQ", "iclr_2019_ryxjH3R5KQ", "S1xBci1FAm", "Syxfj_H90X", "H1l9VFs9AQ", "Syxfj_H90X", "B1g1aj1F07", "rylJA7G82Q", "SkgrQMiFhQ", "rylgsNqchQ", "HJecGl3Z0m", "iclr_2019_ryxjH3R5KQ", "ryehyj0ka7", "iclr_2019_ryxjH3R5KQ", "Syemi4AR3Q", "Bkxnj9SlTX", "SJx-Kv9JTQ", ...
iclr_2019_ryxsS3A5Km
Continual Learning via Explicit Structure Learning
Despite recent advances in deep learning, neural networks suffer catastrophic forgetting when tasks are learned sequentially. We propose a conceptually simple and general framework for continual learning, in which structure optimization is considered explicitly during learning. We implement this idea by separating structure learning from parameter learning. During structure learning, the model optimizes for the best structure for the current task: it learns when to reuse or modify structures from previous tasks, or to create new ones when necessary. The model parameters are then estimated with the optimal structure. Empirically, we found that our approach leads to sensible structures when learning multiple tasks continuously. Additionally, catastrophic forgetting is largely alleviated by the explicit learning of structures. Our method also outperforms all other baselines on the permuted MNIST and split CIFAR datasets in the continual learning setting.
rejected-papers
The paper presents a promising approach for continual learning with no access to data from previous tasks. For learning the current task, the authors propose to first find an optimal structure for the neural network model (choosing either to reuse or adapt previously learned layers, or to train new layers) and then to learn its parameters. While acknowledging the originality of the method and the importance of the problem it tries to address, all reviewers and the AC agreed that they would like to see more extensive empirical evaluations and comparisons to state-of-the-art models for continual learning, using more datasets and with an in-depth analysis of the results – see the detailed comments of all reviewers before and after the rebuttal. The authors have tried to address some of these concerns during the rebuttal, but an in-depth analysis of the results (evaluation in terms of accuracy, efficiency, and memory demand) on different datasets still remains a critical issue. Two other requests to further strengthen the manuscript: 1) an ablation study on the three choices for structural learning (R3), and especially on the importance of 'adaptation' (R3 and R1) – the authors have tried to address this verbally in their responses, but a proper ablation study would be desirable to strengthen the evaluation; 2) the readability and proofreading of the manuscript remain unsatisfactory after revision.
train
[ "Hygcmic3AX", "r1e_1DrcAX", "HygXZSH50m", "ByxWsETU07", "BkekeLd52X", "SJxiRZ7A6Q", "Skx9o-X0TQ", "ryxJKZm067", "rygRL-QRTX", "HkeheEjO3X", "B1e--EHYnQ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Are they fair comparisons (evaluation only in terms of accuracy)? Different methods expand the network different amount. Hence, they should be compared on this metric too.\n\nAs mentioned in the paper, we make sure that all methods use similar amount of parameters. In particular, we make sure that all other method...
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 4 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 5 ]
[ "BkekeLd52X", "ByxWsETU07", "rygRL-QRTX", "SJxiRZ7A6Q", "iclr_2019_ryxsS3A5Km", "HkeheEjO3X", "B1e--EHYnQ", "BkekeLd52X", "iclr_2019_ryxsS3A5Km", "iclr_2019_ryxsS3A5Km", "iclr_2019_ryxsS3A5Km" ]
iclr_2019_ryxyHnR5tX
Accelerated Sparse Recovery Under Structured Measurements
Extensive work on compressed sensing has yielded a rich collection of sparse recovery algorithms, each making different tradeoffs between recovery condition and computational efficiency. In this paper, we propose a unified framework for accelerating various existing sparse recovery algorithms without sacrificing recovery guarantees by exploiting structure in the measurement matrix. Unlike fast algorithms that are specific to particular choices of measurement matrices where the columns are Fourier or wavelet filters for example, the proposed approach works on a broad range of measurement matrices that satisfy a particular property. We precisely characterize this property, which quantifies how easy it is to accelerate sparse recovery for the measurement matrix in question. We also derive the time complexity of the accelerated algorithm, which is sublinear in the signal length in each iteration. Moreover, we present experimental results on real world data that demonstrate the effectiveness of the proposed approach in practice.
rejected-papers
The main idea of this paper is to use nearest-neighbor search to accelerate iterative-thresholding-based sparse recovery algorithms. All reviewers were underwhelmed by the somewhat straightforward combination of existing results in sparse recovery and nearest-neighbor search. While the proposed method seems effective in practice, the paper has the feel of not yet being a fully publishable unit. Several technical questions were asked, but no author feedback was provided to potentially lift this paper up.
train
[ "HyeWSuL527", "Hylg2MTYhQ", "SJxbH3C4h7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Clarity: Paper is generally well written; however, certain theoretical statements (e.g. Theorem 1) are not very precise.\n\nOriginality: Contribution seems to be incremental; the proposed method seems to be a straightforward concatenation of well-known existing results in sparse recovery and nearest-neighbor searc...
[ 4, 5, 5 ]
[ 5, 3, 4 ]
[ "iclr_2019_ryxyHnR5tX", "iclr_2019_ryxyHnR5tX", "iclr_2019_ryxyHnR5tX" ]
iclr_2019_ryzHXnR5Y7
Select Via Proxy: Efficient Data Selection For Training Deep Networks
At internet scale, applications collect a tremendous amount of data by logging user events, analyzing text, and collecting images. This data powers a variety of machine learning models for tasks such as image classification, language modeling, content recommendation, and advertising. However, training large models over all available data can be computationally expensive, creating a bottleneck in the development of new machine learning models. In this work, we develop a novel approach to efficiently select a subset of training data to achieve faster training with no loss in model predictive performance. In our approach, we first train a small proxy model quickly, which we then use to estimate the utility of individual training data points, and then select the most informative ones for training the large target model. Extensive experiments show that our approach leads to a 1.6x and 1.8x speed-up on CIFAR10 and SVHN by selecting 60% and 50% subsets of the data, while maintaining the predictive performance of the model trained on the entire dataset.
rejected-papers
The reviewers unanimously recommend rejecting this paper and, although I believe the submission is close to something that should be accepted, I concur with their recommendation. This paper should be improved and published elsewhere, but the improvements needed are too extensive to justify accepting it in this conference. I believe the authors are studying a very promising algorithm and it is irrelevant that the algorithm is a relatively obvious one. Ideally, the contribution would be a clear experimental investigation of the utility of this algorithm in realistic conditions. Unfortunately, the existing experiments are not quite there. I agree with reviewer 2 that the method is not particularly novel. However, I disagree that this is a problem, so it was not a factor in my decision. Novelty can be overrated, and it would be fine if the experiments were sufficiently insightful and comprehensive. I believe experiments that train for a single epoch on the reduced dataset are absolutely essential in order to understand the potential usefulness of the algorithm. Although it would of course be better, I do not think it is necessary to find datasets traditionally trained in a single pass. You can do single-epoch training on other datasets even though it will likely degrade the final validation error reached. This is the type of small-scale experiment the paper should include; additional apples-to-apples baselines just need to be added. Also, there are many large language modeling datasets where it is reasonable to make only a single pass over the training set. The goal should be to simulate, as closely as possible, the sort of conditions that would actually justify using the algorithm in practice. Another issue with the experimental protocol is that, when claiming a potential speedup, one must tune the baseline to reach a particular result in the fewest steps. Most baselines get tuned to produce the best final validation error given a fixed number of steps. But when studying training speed, we should fix a competitive goal error rate and then tune for speed. Careful attention to these experimental protocol issues would be important.
train
[ "HygQI0gny4", "r1gDA2CN14", "rkxIQEdqAm", "HylwSMOcCX", "Sylti-Oc0Q", "SkxFGD7c2X", "HkxQhmG5nQ", "rygxBMfw2m", "HylHT7Y0cX", "r1liOizs5m" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Great question! Our method is generic and could be applied to training procedures that only make a single pass over the data. Unfortunately, we did not have access to a large task that is solved in a single pass over the dataset. To the best of our knowledge, all of the popular benchmarked datasets make multiple p...
[ -1, -1, -1, -1, -1, 4, 4, 5, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, 2, 4, -1, -1 ]
[ "r1gDA2CN14", "iclr_2019_ryzHXnR5Y7", "rygxBMfw2m", "HkxQhmG5nQ", "SkxFGD7c2X", "iclr_2019_ryzHXnR5Y7", "iclr_2019_ryzHXnR5Y7", "iclr_2019_ryzHXnR5Y7", "r1liOizs5m", "iclr_2019_ryzHXnR5Y7" ]
iclr_2019_ryza73R9tQ
Machine Translation With Weakly Paired Bilingual Documents
Neural machine translation, which achieves near human-level performance in some languages, relies strongly on the availability of large amounts of parallel sentences, which hinders its applicability to low-resource language pairs. Recent works explore the possibility of unsupervised machine translation with monolingual data only, leading to much lower accuracy than the supervised approach. Observing that weakly paired bilingual documents are much easier to collect than bilingual sentences, e.g., from Wikipedia, news websites or books, in this paper we investigate the training of translation models with weakly paired bilingual documents. Our approach contains two components/steps. First, we provide a simple approach to mine implicitly bilingual sentence pairs from document pairs, which can then be used as supervised signals for training. Second, we leverage the topic consistency of two weakly paired documents and learn the sentence-to-sentence translation by constraining the word distribution-level alignments. We evaluate our proposed method on weakly paired documents from Wikipedia on four tasks, the widely used WMT16 German↔English and WMT13 Spanish↔English tasks, and obtain 24.1/30.3 and 28.0/27.6 BLEU points respectively, outperforming state-of-the-art unsupervised results by more than 5 BLEU points and reducing the gap between unsupervised and supervised translation by up to 50%.
rejected-papers
This paper proposes a new method to mine sentences from Wikipedia and use them to train an MT system, as well as a topic-based loss function. In particular, the first contribution, which is the main aspect of the proposal, is effective, outperforming methods for fully unsupervised learning. The main concern with the proposed method, or at least its description in the paper, is that it isn't framed appropriately with respect to previous work on mining parallel sentences from comparable corpora such as Wikipedia. Based on the interaction in the reviews, I feel that things are now framed a bit better, and there are additional baselines, but the explanation in the paper still isn't framed with respect to this previous work, and the baselines are not competitive, despite previous work reporting very nice results for these methods. I feel like this could be a very nice paper at some point if it is re-written with the appropriate references to previous work and with experimental results where the baselines are done appropriately. Thus, at this time I'm not recommending that the paper be accepted, but I encourage the authors to re-submit a revised version in the future.
train
[ "H1epCemnRm", "S1eeOBNcC7", "r1lZazqO3m", "rJxrJT2FA7", "B1xhXRDKCm", "BJxgL3mY0Q", "Syx1cFNOAQ", "ryxnh_NOCQ", "Hyl08hvHCQ", "r1xbcS_zCX", "r1lcsUNfAQ", "SyxSMI_G0X", "Hyl0GH_M0Q", "HJlNJaHMC7", "Hyl7Mj1a2X", "HkgVsbHch7", "SJxtPwIypm", "r1l5gBF4qm" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "Dear Reviewer 2 and Area Chair:\n\nThanks for the comments. We will definitely revise our paper to add more related work and comparisons. We also want to make discussions about the following points. \n\n1. For the experimental setting\n1). We fully agree that the method should be tested on the setting you mentione...
[ -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, -1, -1 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, -1, -1 ]
[ "iclr_2019_ryza73R9tQ", "Syx1cFNOAQ", "iclr_2019_ryza73R9tQ", "ryxnh_NOCQ", "SyxSMI_G0X", "iclr_2019_ryza73R9tQ", "HJlNJaHMC7", "r1lcsUNfAQ", "r1l5gBF4qm", "HkgVsbHch7", "Hyl7Mj1a2X", "SJxtPwIypm", "HkgVsbHch7", "r1lZazqO3m", "iclr_2019_ryza73R9tQ", "iclr_2019_ryza73R9tQ", "iclr_2019...
iclr_2019_ryzfcoR5YQ
Layerwise Recurrent Autoencoder for General Real-world Traffic Flow Forecasting
Accurate spatio-temporal traffic forecasting is a fundamental task with wide applications in city management, transportation, and finance. Many factors make this significant task challenging: (1) the maze-like road network makes the spatial dependency complex; (2) the traffic-time relationships bring non-linear temporal complications; (3) the difficulty of flow forecasting grows with the size of the road network. Prevalent state-of-the-art methods have mainly been evaluated on datasets covering relatively small districts and short time spans, e.g., data collected within a single city over a few months. To forecast traffic flow across a wide area and overcome the above challenges, we design and propose a forecasting model called the Layerwise Recurrent Autoencoder (LRA), in which a three-layer stacked autoencoder (SAE) architecture is used to obtain temporal traffic correlations and a recurrent neural network (RNN) model is used for prediction. A convolutional neural network (CNN) model is also employed to extract spatial traffic information within the transport topology for more accurate prediction. To the best of our knowledge, there is no general and effective method for traffic flow prediction over a large area covering a group of cities. Experiments are conducted on such large-scale real-world traffic datasets to show the superiority of our model, and a smaller dataset is used to demonstrate its generality. Evaluations show that our model outperforms the state-of-the-art baselines by 6% - 15%.
rejected-papers
The paper proposes an interesting neural architecture for traffic flow forecasting, which is tested on a number of datasets. Unfortunately, the lack of clarity as well as precision in writing appears to be a big issue for this paper, which prevents it from being accepted for publication in its current form. However, the reviewers did provide valuable feedback regarding writing, explanation, presentation and structure, that the paper would benefit from.
train
[ "Syxx2arKa7", "rJlQtpHta7", "rkxDV6Bt6m", "H1lJ6bITn7", "BkgiKcMihQ", "H1xTDOudh7", "SJeKCW7scm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thanks for your comments. Here's our reply:\n\nFor your major comments: \n1)\tThe two datasets are introduced in the Appendix C, even with a distribution of sensors of the larger one. Please check it in our paper.\n2)\tWe listed some key parameters in Appendix D, we believe with information of our paper, the reade...
[ -1, -1, -1, 4, 5, 3, -1 ]
[ -1, -1, -1, 3, 3, 4, -1 ]
[ "H1xTDOudh7", "BkgiKcMihQ", "H1lJ6bITn7", "iclr_2019_ryzfcoR5YQ", "iclr_2019_ryzfcoR5YQ", "iclr_2019_ryzfcoR5YQ", "iclr_2019_ryzfcoR5YQ" ]
iclr_2020_Syx4wnEtvH
Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large-batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which, by employing layerwise adaptive learning rates, trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large-batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables the use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes.
accept-poster
This paper presents a range of methods for overcoming the challenges of large-batch training with transformer models. While one reviewer still questions the utility of training with such large numbers of devices, there is certainly a segment of the community that focuses on large-batch training, and the ideas in this paper will hopefully find a range of uses.
train
[ "H1eSuSc9YS", "SJxDlO0mjB", "S1gvIwRQjB", "B1x_XvR7oH", "H1ggL7XhKS", "r1g_3CFatr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIn this paper, the authors made a study on large-batch training for the BERT, and successfully trained a BERT model in 76 minutes. The results look quite exciting, however, after looking into the details of the paper, I would say that this is just a kind of RED AI – the results were mostly achieved by putting to...
[ 3, -1, -1, -1, 8, 6 ]
[ 5, -1, -1, -1, 3, 3 ]
[ "iclr_2020_Syx4wnEtvH", "H1eSuSc9YS", "H1ggL7XhKS", "r1g_3CFatr", "iclr_2020_Syx4wnEtvH", "iclr_2020_Syx4wnEtvH" ]
iclr_2020_HkgsPhNYPS
SELF: Learning to Filter Noisy Labels with Self-Ensembling
Deep neural networks (DNNs) have been shown to over-fit a dataset when trained with noisy labels for long enough. To overcome this problem, we present a simple and effective method, self-ensemble label filtering (SELF), to progressively filter out the wrong labels during training. Our method improves task performance by gradually allowing supervision only from the potentially non-noisy (clean) labels and stopping learning on the filtered noisy labels. For the filtering, we form running averages of predictions over the entire training dataset using the network output at different training epochs. We show that these ensemble estimates yield more accurate identification of inconsistent predictions throughout training than the single estimates of the network at the most recent training epoch. While filtered samples are removed entirely from the supervised training loss, we dynamically leverage them via semi-supervised learning in the unsupervised loss. We demonstrate the positive effect of this approach on various image classification tasks under both symmetric and asymmetric label noise and at different noise ratios. It substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.
accept-poster
The authors addressed the issues raised by the reviewers; I suggest accepting this paper.
train
[ "HklMHtHhjB", "BkxwUHrnsS", "Sked0MN3sS", "BJejYQ4hiH", "rkeySME3iB", "S1lUabE3oH", "BJguQ66MsH", "SylY2HN4tr", "SyeJgpOAtS", "rkek1NdX5r" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for asking!\n\nWe implement JointOpt (Tanaka et al., 2018) based on the official implementation.\nFor SL (Wang et al., 2019) , D2L (Ma et al., 2018), we adopt the performance from the respective publication.\n\n", "Are the results in Tables 1 & 2 in the manner copy and paste for baselines? ", "--- Su...
[ -1, -1, -1, -1, -1, -1, -1, 6, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "BkxwUHrnsS", "rkeySME3iB", "SyeJgpOAtS", "SylY2HN4tr", "rkek1NdX5r", "BJguQ66MsH", "iclr_2020_HkgsPhNYPS", "iclr_2020_HkgsPhNYPS", "iclr_2020_HkgsPhNYPS", "iclr_2020_HkgsPhNYPS" ]
iclr_2020_HygnDhEtvr
Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation
Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual levels. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark.
accept-poster
The reviewers found this paper on improving NLG using a graph-to-sequence architecture interesting and the results impressive. While I would personally have preferred to see further evaluation of this model on another NLG task, I think it would be overstepping in my role as AC to go against the reviewer consensus. The paper is clearly acceptable.
train
[ "SJg13QwHFH", "HyeyXb4TFS", "ryx2nFruor", "BylQEYvvir", "HJxNOcwDsS", "HJxq5qwwsH", "BJgaV9vvsS", "rkemIYPPiH", "BklwQSsJtH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper focuses on improving the performance on the task of natural language generation. To this end, they propose a graph-to-sequence (Graph2Seq) model for the task of question generation which exploits the rich structure information in the text as well as use reinforcement learning based policy gradient appr...
[ 6, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_HygnDhEtvr", "iclr_2020_HygnDhEtvr", "iclr_2020_HygnDhEtvr", "HyeyXb4TFS", "SJg13QwHFH", "BklwQSsJtH", "SJg13QwHFH", "HyeyXb4TFS", "iclr_2020_HygnDhEtvr" ]
iclr_2020_rkgpv2VFvr
Sharing Knowledge in Multi-Task Deep Reinforcement Learning
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks that share common properties helps to generalize the knowledge across them, resulting in more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks, showing significant improvements over their single-task counterparts in terms of sample efficiency and performance.
accept-poster
This paper considers the benefits of deep multi-task RL with shared representations, deriving multi-task extensions of approximate value and policy iteration bounds. This shows both theoretically and empirically that shared representations across multiple tasks can outperform single-task learning. There were a number of minor concerns from the reviewers regarding the relation to prior work and details of the analysis, but these were clarified in the discussion. This paper adds important theoretical analysis to the literature, and so I recommend it be accepted.
val
[ "B1gVz2MTtB", "SJljDLm7iB", "B1lwMP7Xsr", "HkgxVEm7iH", "Sye2k-pntH", "HkeFR4ZTKS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper attempts to give theoretical support for using shared representations among multiple tasks. The architecture has already been proposed in another paper. The main contribution of the paper is the theory that it claims to support this architecture. However, I am dubious that the architecture achieves th...
[ 6, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 4, 4 ]
[ "iclr_2020_rkgpv2VFvr", "HkeFR4ZTKS", "Sye2k-pntH", "B1gVz2MTtB", "iclr_2020_rkgpv2VFvr", "iclr_2020_rkgpv2VFvr" ]
iclr_2020_H1eCw3EKvH
On the Weaknesses of Reinforcement Learning for Neural Machine Translation
Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, and show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.
accept-poster
In my opinion, the main strength of this work is the theoretical analysis and some observations that may be of great interest to the NLP community in terms of better analyzing the performance of RL (and "RL-like") methods as optimizers. The main weakness, as pointed out by R3, is the limited empirical analysis. I would urge the authors to take R3's advice and attempt insofar as possible to broaden the scope of the empirical analysis in the final version. I believe that this is important for the paper to be able to make its case convincingly. Nonetheless, I do think that the paper makes a significant contribution that will be of interest to the community, and should be presented at ICLR. Therefore, I would recommend for it to be accepted.
train
[ "HyJgzPCYr", "HkxVYKz2oH", "rklt7aJ3iS", "BJl8qb66KS", "H1lfPJNVor", "Hylc7k4Vor", "rJlqi074sS", "ryx8p2xW9B" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "In the context of neural machine translation, limitations of some reinforcement learning methods, in particular REINFORCE and contrastive minimum risk training (MRT), are analyzed. The authors argue that MRT doesn't optimize the expected reward. Moreover, they show that using REINFORCE, with either realistic or du...
[ 6, -1, -1, 3, -1, -1, -1, 8 ]
[ 4, -1, -1, 5, -1, -1, -1, 3 ]
[ "iclr_2020_H1eCw3EKvH", "rklt7aJ3iS", "iclr_2020_H1eCw3EKvH", "iclr_2020_H1eCw3EKvH", "BJl8qb66KS", "HyJgzPCYr", "ryx8p2xW9B", "iclr_2020_H1eCw3EKvH" ]
iclr_2020_BJxg_hVtwH
StructPool: Structured Graph Pooling via Conditional Random Fields
Learning high-level representations for graphs is of great importance for graph analysis tasks. In addition to graph convolution, graph pooling is an important but less explored research area. In particular, most existing graph pooling techniques do not consider the graph structural information explicitly. We argue that such information is important and develop a novel graph pooling technique, known as StructPool, in this work. We consider graph pooling as a node clustering problem, which requires the learning of a cluster assignment matrix. We propose to formulate it as a structured prediction problem and employ conditional random fields to capture the relationships among assignments of different nodes. We also generalize our method to incorporate graph topological information in designing the Gibbs energy function. Experimental results on multiple datasets demonstrate the effectiveness of our proposed StructPool.
accept-poster
The paper proposes an operation called StructPool for graph pooling by treating it as a node clustering problem (assigning a label from 1..k to each node) and then using a pairwise CRF structure to jointly infer these labels. The reviewers all think that this is a well-written paper, and the experimental results are adequate to back up the claim that StructPool offers advantages over other graph-pooling operations. Even though the idea of the presented method is simple and it adds to the computational burden of graph neural networks (albeit by a constant factor), I think this would make a valuable addition to the literature.
train
[ "H1lHXXRhtH", "Skl-RzrhiB", "Hkx9oVrwir", "HyxiLFjujr", "S1e-w-gKoH", "Byx_jhHLoS", "SylGq_LxqB", "SJgUtBvZqS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Strength:\n-- An interesting idea to use CRF idea to cluster the nodes on a graph for pooling purpose\n-- The paper is well written and easy to follow\n-- On a few datasets and task, the proposed method works pretty well\n\nWeakness:\n-- The computational complexity of the proposed algorithm is O(n^3), which is ...
[ 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_BJxg_hVtwH", "Hkx9oVrwir", "H1lHXXRhtH", "SylGq_LxqB", "iclr_2020_BJxg_hVtwH", "SJgUtBvZqS", "iclr_2020_BJxg_hVtwH", "iclr_2020_BJxg_hVtwH" ]
iclr_2020_rJgBd2NYPH
Learning deep graph matching with channel-independent embedding and Hungarian attention
Graph matching aims to establish node-wise correspondence between two graphs, which is a classic combinatorial problem and in general NP-complete. Only very recently have deep graph matching methods started to resort to deep networks to achieve unprecedented matching accuracy. Along this direction, this paper makes two complementary contributions which can also be reused as plugins in existing works: i) a novel node and edge embedding strategy which simulates the multi-head strategy in attention models and allows the information in each channel to be merged independently; in contrast, only node embedding is accounted for in previous works; ii) a general masking mechanism over the loss function is devised to improve the smoothness of objective learning for graph matching. Using the Hungarian algorithm, it dynamically constructs a structured and sparsely connected layer, taking into account the most contributing matching pairs as hard attention. Our approach performs competitively, and can also improve state-of-the-art methods as a plugin, with regard to matching accuracy on three public benchmarks.
accept-poster
This paper proposes a new graph matching approach. The main contribution is a Hungarian attention mechanism, which dynamically generates links in the computational graph. The resulting matching algorithm is tested on vision tasks. The main concern of reviewers is that the general matching algorithm is only tested on vision tasks. The authors partially addressed this problem by providing new experimental results with only geometric edge features. The other comments of Blind Review #2 concern some minor questions, which have also been answered by the authors. Overall, this paper proposes a promising graph matching approach and I tend to accept it.
train
[ "SyxEqUhKsS", "HylWQd3Yor", "HygmNN3tjH", "HkeJVOr0FH", "rkeFNhQucB", "S1xDXRoc9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his constructive suggestion and comments.\n\nAs suggested by the reviewer, we have tested Hungarian attention on the model in [2]. To this end, we perform Hungarian algorithm on the output of [2] and establish the attention link during training stage (on synthetic training data). The rest...
[ -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, 1, 5, 1 ]
[ "rkeFNhQucB", "HkeJVOr0FH", "S1xDXRoc9r", "iclr_2020_rJgBd2NYPH", "iclr_2020_rJgBd2NYPH", "iclr_2020_rJgBd2NYPH" ]
iclr_2020_r1evOhEKvH
Graph inference learning for semi-supervised classification
In this work, we address the semi-supervised classification of graph data, where the categories of unlabeled nodes are inferred from labeled nodes as well as graph structures. Recent works often solve this problem with advanced graph convolution in a conventional supervised manner, but the performance can be heavily affected when labeled data is scarce. Here we propose a Graph Inference Learning (GIL) framework to boost the performance of node classification by learning the inference of node labels on graph topology. To bridge the connection between two nodes, we formally define a structure relation by encapsulating node attributes, between-node paths, and local topological structures together, which allows inference to be conveniently deduced from one node to another. To learn the inference process, we further introduce meta-optimization on structure relations from training nodes to validation nodes, such that the learnt graph inference capability can be better adapted to test nodes. Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed and NELL) demonstrate the superiority of our GIL when compared with other state-of-the-art methods on the semi-supervised node classification task.
accept-poster
The authors propose a graph inference learning framework to address the issue of sparse labeled data in graphs. The authors use structural information and node attributes to define a structure relation, which is then used to infer unknown labels from known labels. The authors demonstrate the effectiveness of their approach on four benchmark datasets. The approach presented in the paper is sound and the empirical results are convincing. All reviewers have given a positive rating for this paper. Two reviewers had some initial concerns about the paper but after the rebuttal they acknowledged the answers given by the authors and adjusted their scores. R1 still has concerns about the motivation of the paper and I request the authors to adequately address this in their final version.
train
[ "SJxR53RhtS", "rkxlbSepYB", "BJxqaF2WqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to leverage the between-node-path information into the inference of conventional graph neural network methods. Specifically, the proposed method treats the nodes in training set as a reference corpus and, when infering the label of a specific node, makes this node \"attend\" to the reference co...
[ 6, 6, 6 ]
[ 4, 1, 1 ]
[ "iclr_2020_r1evOhEKvH", "iclr_2020_r1evOhEKvH", "iclr_2020_r1evOhEKvH" ]
iclr_2020_S1xKd24twB
SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards
Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. 
This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.
accept-poster
The authors present a simple alternative to adversarial imitation learning methods like GAIL that is potentially less brittle, and can skip learning a reward function, instead learning an imitation policy directly. Their method has a close relationship with behavioral cloning, but overcomes some of the disadvantages of BC by encouraging the agent via reward to return to demonstration states if it goes out of distribution. The reviewers agree that overcoming the difficulties of both BC and adversarial imitation is an important contribution. Additionally, the authors reasonably addressed the majority of the minor concerns that the reviewers had. Therefore, I recommend for this paper to be accepted.
train
[ "BJeffMm3jH", "Skxpgi9Lsr", "HklfkiqLiH", "HklbTc58oB", "rkl_u9hhYH", "HJgvIf2kcS", "H1xFrZXx9S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed response and for clarifying my doubts. I confirm my initial view and vote for acceptance. I also recommend the authors to update the paper according to the suggestions of all reviewers.", "Thank you for the thoughtful feedback. We agree with points 1, 2, 4, 5, and 9, and will update t...
[ -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "Skxpgi9Lsr", "rkl_u9hhYH", "HJgvIf2kcS", "H1xFrZXx9S", "iclr_2020_S1xKd24twB", "iclr_2020_S1xKd24twB", "iclr_2020_S1xKd24twB" ]
iclr_2020_r1eiu2VtwH
Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data
Nowadays, deep neural networks (DNNs) have become the main instrument for machine learning tasks within a wide range of domains, including vision, NLP, and speech. Meanwhile, in the important case of heterogeneous tabular data, the advantage of DNNs over shallow counterparts remains questionable. In particular, there is insufficient evidence that deep learning machinery allows constructing methods that outperform gradient boosting decision trees (GBDT), which are often the top choice for tabular problems. In this paper, we introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning architecture designed to work with any tabular data. In a nutshell, the proposed NODE architecture generalizes ensembles of oblivious decision trees, but benefits from both end-to-end gradient-based optimization and the power of multi-layer hierarchical representation learning. With an extensive experimental comparison to the leading GBDT packages on a large number of tabular datasets, we demonstrate the advantage of the proposed NODE architecture, which outperforms the competitors on most of the tasks. We open-source the PyTorch implementation of NODE and believe that it will become a universal framework for machine learning on tabular data.
accept-poster
This paper proposes Neural Oblivious Decision Ensembles, a formulation of ensembles of decision trees that is end-to-end differentiable and can use multi-layer representation learning. The reviewers are in agreement that this is a novel and useful tool, although there was some mild concern about the extent of the improvement over other methods. Post-discussion, I am recommending the paper be accepted.
train
[ "SJgwHPDEsB", "Sylvs8PVsH", "S1lGDwDNoB", "BkgbSpbaKH", "SJepxs90tS", "rkeiA3I19H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments, we address your concerns below.\n\n[add comparison to FCNN with DenseNet connections]\nWe agree and conduct an additional set of experiments focused on densely-connected FCNN models. We use the standard FCNN tuning procedure described in the submission. Numbers in the table below corre...
[ -1, -1, -1, 6, 8, 3 ]
[ -1, -1, -1, 1, 4, 1 ]
[ "SJepxs90tS", "rkeiA3I19H", "BkgbSpbaKH", "iclr_2020_r1eiu2VtwH", "iclr_2020_r1eiu2VtwH", "iclr_2020_r1eiu2VtwH" ]
iclr_2020_rJlnOhVYPS
Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification
Person re-identification (re-ID) aims at identifying the same persons' images across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transfer the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored. Such noisy pseudo labels substantially hinder the model's capability of further improving feature representations on the target domain. To mitigate the effects of noisy pseudo labels, we propose an unsupervised framework, Mutual Mean-Teaching (MMT), which softly refines the pseudo labels in the target domain and learns better target-domain features via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternating training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performance in person re-ID models. However, the conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.1% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks.
accept-poster
The paper proposes an unsupervised framework for domain adaptation in the context of person re-identification to reduce the effect of noisy labels. They use refined soft labels and propose a soft softmax-triplet loss to support learning with these soft labels. All reviewers have unanimously agreed to accept the paper and appreciated the comprehensive experiments on four datasets and ablation studies which give some insights about the proposed method. I agree with the assessment of the reviewers and recommend that this paper be accepted.
train
[ "Hkg7fUDxFS", "rkef5M0RtH", "BJeCW-u0tS", "B1eAUjAsiS", "rJgCG5Cosr", "SyeBy90isr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors' response addressed my concerns. After reading the reviews and the comments, I choose to stand with the other reviewers.\n\n===================\n\nThis paper uses mean-teacher to ease the noisy pseudo label of clustering methods for domain adaptive Person re-identification task. The authors also propos...
[ 6, 6, 8, -1, -1, -1 ]
[ 1, 3, 3, -1, -1, -1 ]
[ "iclr_2020_rJlnOhVYPS", "iclr_2020_rJlnOhVYPS", "iclr_2020_rJlnOhVYPS", "rkef5M0RtH", "Hkg7fUDxFS", "BJeCW-u0tS" ]
iclr_2020_BJl2_nVFPB
Automatically Discovering and Learning New Visual Categories with Ranking Statistics
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes. This setting is similar to semi-supervised learning, but significantly harder because there are no labelled examples for the new classes. The challenge, then, is to leverage the information contained in the labelled images in order to learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data. In this work we address this problem by combining three ideas: (1) we suggest that the common approach of bootstrapping an image representation using the labeled data only introduces an unwanted bias, and that this can be avoided by using self-supervised learning to train the representation from scratch on the union of labelled and unlabelled data; (2) we use rank statistics to transfer the model's knowledge of the labelled classes to the problem of clustering the unlabelled images; and, (3) we train the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of the data, improving both the supervised classification of the labelled data, and the clustering of the unlabelled data. We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
accept-poster
The paper defines a methodology to discover unknown classes in a semi-supervised learning setting, based on: i) defining a proper representation based on self-supervision on all samples; ii) defining equivalence classes on the unlabelled samples, based on ranking statistics; iii) training supervised heads aimed to predict the labels (when available) and the equivalence class indices (when unlabelled). All reviewers agree that the ranking-statistics-based heuristic is a quite innovative element of the paper. The extensive and careful experimental validation, with the ablation studies, establishes the merits of all ingredients. Therefore, I propose acceptance of this paper.
train
[ "B1gRFhVosS", "HJlL3v0HsB", "ryge9ORHor", "Syg_O_RHjS", "SygW1_ABsr", "S1lWqw86FS", "SJx82Wa0YH", "HJlZg2lpqH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In addition to our previous response to R1-Q4 and R3-Q1 w.r.t the case of unknown number of classes, we would like to inform the reviewers that the ImageNet experiment using the number of clusters computed from DTC is now completed.\nOur method reached final performance of 80.5% acc outperforming DTC (77.6%), MCL ...
[ -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2020_BJl2_nVFPB", "HJlZg2lpqH", "Syg_O_RHjS", "S1lWqw86FS", "SJx82Wa0YH", "iclr_2020_BJl2_nVFPB", "iclr_2020_BJl2_nVFPB", "iclr_2020_BJl2_nVFPB" ]
iclr_2020_Bkg0u3Etwr
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called \emph{Maxmin Q-learning}, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
accept-poster
The authors propose the use of an ensembling scheme to remove over-estimation bias in Q-Learning. The idea is simple but well-founded on theory and backed by experimental evidence. The authors also extensively clarified distinctions between their idea and similar ideas in the reinforcement learning literature in response to reviewer concerns.
train
[ "BkgA2mRnYS", "H1xnL1thiB", "r1eCjSUisH", "SJedTOmioS", "HklG0dbjjB", "H1x2Rbu9oH", "rJly_W_9jB", "r1xaf5nKjB", "SygBvWQYsB", "S1g6ow9Xsr", "Hke5OP97ir", "H1g3zD9QoB", "BJgkW77MYH", "rygrmXsCtS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new Q learning algorithm framework: maxmin Q-learning, to address the overestimation bias issue of Q learning. The main contributions of this paper are three folds: 1) It provides an inspiring example on overestimation/underestimation of Q learning. 2) Generalize Q learning by a new maxmin Q-...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_Bkg0u3Etwr", "rJly_W_9jB", "HklG0dbjjB", "SygBvWQYsB", "H1x2Rbu9oH", "r1xaf5nKjB", "SygBvWQYsB", "S1g6ow9Xsr", "Hke5OP97ir", "BJgkW77MYH", "BkgA2mRnYS", "rygrmXsCtS", "iclr_2020_Bkg0u3Etwr", "iclr_2020_Bkg0u3Etwr" ]
iclr_2020_HJezF3VYPB
Federated Adversarial Domain Adaptation
Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices, etc. Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift. Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data. In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. Our approach extends adversarial adaptation techniques to the constraints of the federated setting. In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer. Empirically, we perform extensive experiments on several image and text classification tasks and show promising results under unsupervised federated domain adaptation setting.
accept-poster
This paper studies an interesting new problem, federated domain adaptation, and proposes an approach based on dynamic attention, federated adversarial alignment, and representation disentanglement. Reviewers generally agree that the paper contributes a novel approach to an interesting problem with theoretical guarantees and empirical justification. While many concerns were raised by the reviewers, the authors managed an effective rebuttal with a major revision, which addressed the concerns convincingly. The AC believes that the updated version is acceptable. Hence I recommend acceptance.
train
[ "BJe8hCFWqB", "BylrOmZjsH", "r1e5G7-isB", "Bygb4cmcjS", "H1x1c9m9sH", "S1g6PH4Yir", "SJgnuHj6KB", "r1xvdmZJ5B", "SkgFvPsUtr", "rkefDBWBtH", "B1eQ0FpmYB", "B1x3jxPfYH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "This paper introduces an unsupervised federated domain adaptation (UFDA) problem and proposes a new model called Federated Adversarial Domain Adaptation (FADA) to transfer the knowledge learned from distributed source domains to an unlabeled target domain. This paper uses a dynamic attention mechanism by leveragin...
[ 6, -1, -1, -1, -1, -1, 6, 3, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 3, 1, -1, -1, -1, -1 ]
[ "iclr_2020_HJezF3VYPB", "BJe8hCFWqB", "BJe8hCFWqB", "r1xvdmZJ5B", "r1xvdmZJ5B", "SJgnuHj6KB", "iclr_2020_HJezF3VYPB", "iclr_2020_HJezF3VYPB", "rkefDBWBtH", "B1eQ0FpmYB", "B1x3jxPfYH", "iclr_2020_HJezF3VYPB" ]
iclr_2020_SJg7KhVKPH
Depth-Adaptive Transformer
State of the art sequence-to-sequence models for large scale tasks perform a fixed number of computations for each input sequence regardless of whether it is easy or hard to process. In this paper, we train Transformer models which can make output predictions at different stages of the network and we investigate different ways to predict how much computation is required for a particular sequence. Unlike dynamic computation in Universal Transformers, which applies the same set of layers iteratively, we apply different layers at every step to adjust both the amount of computation as well as the model capacity. On IWSLT German-English translation our approach matches the accuracy of a well tuned baseline Transformer while using less than a quarter of the decoder layers.
accept-poster
This paper presents an adaptive computation time method for reducing the average-case inference time of a transformer sequence-to-sequence model. The reviewers reached a rough consensus: This paper proposes a novel method for an important problem, and offers reasonably compelling evidence for that method. However, the experiments aren't *quite* sufficient to isolate the cause of the observed improvements, and the discussion of related work could be clearer. I acknowledge that this paper is borderline (and thank R3 for an extremely thorough discussion, both in public and privately), but I lean toward acceptance: The paper doesn't have any fatal flaws, and it brings some fresh ideas to an area where further work would be valuable.
train
[ "S1lBy4S0tS", "BJgkmBE_uB", "B1ldAbEpYS", "BJlbJx9nsH", "SJxwVwKnsH", "BkeX-3Dsor", "HJl-zQ95oB", "Hyx5LJqcjr", "rJgsZg95sH", "SJlaFkcciB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper studies using dynamic computation to alter the number of Transformer decoding layers each token uses to translate a given sentence. The paper considers using two variants of losses: aligned training - same layer wise prediction loss for all tokens, and mixed training: loss on the output of a random lay...
[ 6, 3, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SJg7KhVKPH", "iclr_2020_SJg7KhVKPH", "iclr_2020_SJg7KhVKPH", "SJxwVwKnsH", "BkeX-3Dsor", "rJgsZg95sH", "iclr_2020_SJg7KhVKPH", "S1lBy4S0tS", "BJgkmBE_uB", "B1ldAbEpYS" ]
iclr_2020_rylBK34FDS
DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
In seeking sparse and efficient neural network models, many previous works investigated enforcing L1 or L0 regularizers to encourage weight sparsity during training. The L0 regularizer measures the parameter sparsity directly and is invariant to the scaling of parameter values, but it cannot provide useful gradients and therefore requires complex optimization techniques. The L1 regularizer is differentiable almost everywhere and can be easily optimized with gradient descent. Yet it is not scale-invariant and causes the same shrinking rate for all parameters, which is inefficient in increasing sparsity. Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant. Our experiments show that enforcing DeepHoyer regularizers can produce even sparser neural network models than previous works, under the same accuracy level. We also show that DeepHoyer can be applied to both element-wise and structural pruning.
accept-poster
The authors propose a scale-invariant sparsity measure for deep networks. The experiments are extensive and convincing, according to reviewers. I recommend acceptance.
train
[ "S1gyncMZjH", "HyegG5GWsr", "HJxYKKfbsr", "Byx6q3uqtS", "HygbT3sqKH", "HkllzwGI5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your interest in our paper and your positive and constructive comments. Hopefully this reply can address all your concerns.\n\nFor the contribution, the main idea of this paper is to find a sparsity-inducing regularizer leveraging the desired properties of both the L0 regularizer (scale-invariant, minim...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 5, 3 ]
[ "HkllzwGI5B", "Byx6q3uqtS", "HygbT3sqKH", "iclr_2020_rylBK34FDS", "iclr_2020_rylBK34FDS", "iclr_2020_rylBK34FDS" ]
iclr_2020_H1loF2NFwr
Evaluating The Search Phase of Neural Architecture Search
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.
accept-poster
This is one of several recent parallel papers that pointed out issues with neural architecture search (NAS). It shows that several NAS algorithms do not perform better than random search and finds that their weight sharing mechanism leads to low correlation between the search performance and the final evaluation performance. Code is available to ensure reproducibility of the work. After the discussion period, all reviewers are mildly in favour of accepting the paper. My recommendation is therefore to accept the paper. The paper's results may in part appear to be old news by now, but they were not when the paper first appeared on arXiv (in parallel to Li & Talwalkar, so similarities to that work should not be held against this paper).
val
[ "rklNiq1rqr", "rJx-uYwaFS", "rye0KQb2sr", "SJewb7Z3jH", "rJeqdzzDoS", "HkxALZMvoH", "SJgLweGwjB", "ryx1A71lqB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This works studies the evaluation of search strategies for neural architecture search. It points out existing problems of the current evaluation scheme: (1) only compares the final result without testing the robustness under different random seeds; (2) lacking fair comparison with random baseline under different r...
[ 6, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_H1loF2NFwr", "iclr_2020_H1loF2NFwr", "iclr_2020_H1loF2NFwr", "rJx-uYwaFS", "rJx-uYwaFS", "ryx1A71lqB", "rklNiq1rqr", "iclr_2020_H1loF2NFwr" ]
iclr_2020_ryxnY3NYPS
Diverse Trajectory Forecasting with Determinantal Point Processes
The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse -- the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data.
accept-poster
The paper proposes an approach for forecasting diverse object trajectories using determinantal point processes (DPPs). The past trajectory is mapped to a latent code and a conditional VAE is used to generate the future trajectories. Instead of using the log-likelihood of the DPP, the proposed method optimizes the expected cardinality as a measure of diversity. While there are some concerns about the core method being incremental in novelty over some existing DPP-based methods, the context of the paper is different from these papers (i.e., diverse trajectories in continuous space), and reviewers have appreciated the empirical improvements over the baselines, in particular over DPP-NLL and DPP-MAP in latent space.
train
[ "r1eEQVNMiH", "BJghs-NzoB", "ByluB7NfiH", "B1evjEEMsH", "r1lM4l4MjH", "rkgSq-KstS", "rklqqtBhYB", "HJlvetnpFH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review and feedback. In the following, we will address your comments and concerns, especially the ones w.r.t. experiments.\n\n** Variable size of the output set **\n\nFor now, we only consider generating sets of variable size instead of a fixed or minimal size, because we want to be able to filt...
[ -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, 1, 4, 1 ]
[ "rkgSq-KstS", "rklqqtBhYB", "BJghs-NzoB", "r1eEQVNMiH", "HJlvetnpFH", "iclr_2020_ryxnY3NYPS", "iclr_2020_ryxnY3NYPS", "iclr_2020_ryxnY3NYPS" ]
iclr_2020_HygpthEtvr
ProxSGD: Training Structured Neural Networks under Regularization and Constraints
In this paper, we consider the problem of training neural networks (NNs). To promote an NN with specific structures, we explicitly take into consideration nonsmooth regularization (such as the L1-norm) and constraints (such as interval constraints). This is formulated as a constrained nonsmooth nonconvex optimization problem, and we propose a convergent proximal-type stochastic gradient descent (Prox-SGD) algorithm. We show that under properly selected learning rates, momentum eventually resembles the unknown real gradient and thus is crucial in analyzing the convergence. We establish that with probability 1, every limit point of the sequence generated by the proposed Prox-SGD is a stationary point. The Prox-SGD is then tailored to train a sparse neural network and a binary neural network, and the theoretical analysis is supported by extensive numerical tests.
accept-poster
This paper proposes a new gradient-based stochastic optimization algorithm by adapting theory for proximal algorithms to the non-convex setting. The majority of reviewers voted for accept. The authors are encouraged to revise with respect to reviewer comments.
train
[ "Skx2lGd6Kr", "HJxfvYuhor", "HyxySddhsB", "HkeLEhthiB", "HyelrcOnjH", "HJxQie4LcS", "Byx3w9ePqS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n[Summary]\nThis paper proposes Prox-SGD, a theoretical framework for stochastic optimization algorithms that (1) incorporates momentum and coordinate-wise scaling as in Adam, and (2) can handle constraint and (non-smooth) regularizers through the proximal operator. With proper choices of hyperparameters, the alg...
[ 3, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HygpthEtvr", "HJxQie4LcS", "Byx3w9ePqS", "iclr_2020_HygpthEtvr", "Skx2lGd6Kr", "iclr_2020_HygpthEtvr", "iclr_2020_HygpthEtvr" ]
iclr_2020_Skgxcn4YDS
LAMOL: LAnguage MOdeling for Lifelong Language Learning
Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at https://github.com/jojotenya/LAMOL.
accept-poster
This paper proposes a new method for lifelong learning of language using language modeling. Their training scheme is designed so as to prevent catastrophic forgetting. The reviewers found the motivation clear and that the proposed method outperforms prior related work. Reviewers raised concerns about the title and the lack of some baselines which the authors have addressed in the rebuttal and their revision.
train
[ "r1g0xR9CYH", "SklZ60w2iB", "rJeUujP3oB", "ByeNJtP3iS", "rJgTXYw2iH", "BJe_zR1aYB", "Byg6nlvaKB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nSummary:\n\nThe paper proposes to use the same language model to learn multiple tasks and also to generate pseudo-samples for these tasks which could be used for rehearsal while learning new tasks. The authors demonstrate that this idea works well compared to other SOTA lifelong learning methods for learning var...
[ 6, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_Skgxcn4YDS", "BJe_zR1aYB", "Byg6nlvaKB", "r1g0xR9CYH", "ByeNJtP3iS", "iclr_2020_Skgxcn4YDS", "iclr_2020_Skgxcn4YDS" ]
iclr_2020_ryeG924twB
Learning Expensive Coordination: An Event-Based Deep RL Approach
Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly. However, in many real-world cases, agents are self-interested, such as employees in a company or clubs in a league. Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination. The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time. In this work, we address this problem through an event-based deep RL approach. Our main contributions are threefold. (1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy. (2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and make accurate responses to their behaviors. (3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers. Experiments in resource collection, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.
accept-poster
This paper tackles the challenge of incentivising selfish agents towards a collaborative goal. In doing so, the authors propose several new modules. The reviewers commented on the experiments being extremely thorough. One reviewer commented on the lack of an ablation study of the 3 contributions, which was promptly provided by the authors. The proposed method is also supported by theoretical derivations. The contributions appear to be quite novel, significantly improving performance on the studied SMGs. One reviewer mentioned the clarity being compromised by too much material being in the appendix, which has been addressed by the authors moving some main pieces of content to the main text. Two reviewers commented on the relevance being lower because the problem is not widely studied in RL. I would disagree with the reviewers on this aspect; it is great to have a new problem brought to light with fresh and novel results, rather than having yet another paper work on Atari. I also think that the authors in their rebuttal made the practical relevance of their problem setting sufficiently clear with several practical examples.
train
[ "SyxzQLYqsB", "HJesYVt5iS", "HyxzyDt5oS", "BJeR2rY9oS", "B1xwwBtcoB", "S1ljILl2FH", "Skx4ZTo2YH", "BylAQBSCFS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your detailed and insightful comments. Here are our responses to your questions (we would like to start with Question 4): \n\nQ4 (part 1/2):” Whether our problem is widely studied and the contributions to the RL community.”\n\nWe would like to highlight that Stackelberg (leader-follower) Ga...
[ -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, 3, 1, 4 ]
[ "S1ljILl2FH", "iclr_2020_ryeG924twB", "SyxzQLYqsB", "Skx4ZTo2YH", "BylAQBSCFS", "iclr_2020_ryeG924twB", "iclr_2020_ryeG924twB", "iclr_2020_ryeG924twB" ]
iclr_2020_BylEqnVFDB
Curvature Graph Network
Graph-structured data is prevalent in many domains. Despite the widely celebrated success of deep neural networks, their power in graph-structured data is yet to be fully explored. We propose a novel network architecture that incorporates advanced graph structural features. In particular, we leverage discrete graph curvature, which measures how the neighborhoods of a pair of nodes are structurally related. The curvature of an edge (x, y) defines the distance taken to travel from neighbors of x to neighbors of y, compared with the length of edge (x, y). It is a much more descriptive feature compared to previously used features that only focus on node specific attributes or limited topological information such as degree. Our curvature graph convolution network outperforms state-of-the-art on various synthetic and real-world graphs, especially the larger and denser ones.
accept-poster
The paper presents a novel graph convolutional network by integrating the curvature information (based on the concept of Ricci curvature). The key idea is well motivated and the paper is clearly written. Experimental results show that the proposed curvature graph network methods outperform existing graph convolution algorithms. One potential limitation is the computational cost of computing the Ricci curvature, which is discussed in the appendix. Overall, the concept of using curvature in graph convolutional networks seems like a novel and promising idea, and I also recommend acceptance.
train
[ "rJeV_VaqoH", "rJlLYXT5oB", "rJxBnQT9jB", "r1lIrpy6YS", "SJl3FmwTtS", "SJxet-ByqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the constructive comments. Below we address your concerns one-by-one. \n\n1, The complexity of the method:\nCalculating the Ricci curvature exactly would require solving |E| LP problems, but one could use approximation methods and parallel computation, which is much faster. Also, the curv...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 1, 1 ]
[ "r1lIrpy6YS", "SJxet-ByqH", "SJl3FmwTtS", "iclr_2020_BylEqnVFDB", "iclr_2020_BylEqnVFDB", "iclr_2020_BylEqnVFDB" ]
iclr_2020_BJeB5hVtvB
Distance-Based Learning from Errors for Confidence Calibration
Deep neural networks (DNNs) are poorly calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. In DBLE, we first adapt prototypical learning to train classification models. It yields a representation space where the distance between a test sample and its ground-truth class center can calibrate the model's classification performance. At inference, however, these distances are not available due to the lack of ground-truth labels. To circumvent this, we infer the distance for every test sample by training a confidence model jointly with the classification model. We integrate this into training by merely learning from misclassified training samples, which we show to be highly beneficial for effective learning. On multiple datasets and DNN architectures, we demonstrate that DBLE outperforms alternative single-model confidence calibration approaches. DBLE also achieves performance comparable to that of computationally expensive ensemble approaches, with lower computational cost and a lower number of parameters.
accept-poster
All reviewers voted to accept this paper. The AC recommends acceptance.
train
[ "r1xCI1QosB", "BJeoLeQojr", "B1eWo1XisH", "r1llj41atS", "HkxX4zVCKB", "rkgRPJHRYr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the constructive comments! We address the two main concerns below.\n\n1. Why does DBLE use misclassified training samples instead of all training samples to train the confidence model?\n\nTo answer this question, we first describe our intuitions of training the confidence model with misclassified tr...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 3, 3, 1 ]
[ "rkgRPJHRYr", "r1llj41atS", "HkxX4zVCKB", "iclr_2020_BJeB5hVtvB", "iclr_2020_BJeB5hVtvB", "iclr_2020_BJeB5hVtvB" ]
iclr_2020_rkeIq2VYPr
Deep Learning of Determinantal Point Processes via Proper Spectral Sub-gradient
Determinantal point processes (DPPs) are an effective tool for delivering diversity in multiple machine learning and computer vision tasks. Under the deep learning framework, the DPP is typically optimized via approximation, which is not straightforward and conflicts with the diversity requirement. We note, however, that there have been no deep learning paradigms to optimize the DPP directly, since it involves matrix inversion, which may result in high computational instability. This fact greatly hinders the wide use of DPPs in specific objectives where the DPP serves as a term to measure feature diversity. In this paper, we devise a simple but effective algorithm to address this issue and optimize the DPP term directly, expressed with an L-ensemble in the spectral domain over the Gram matrix, which is more flexible than learning on parametric kernels. By further taking into account some geometric constraints, our algorithm seeks to generate valid sub-gradients of the DPP term in cases where the DPP Gram matrix is not invertible (no gradients exist in this case). In this sense, our algorithm can be easily incorporated into multiple deep learning tasks. Experiments show the effectiveness of our algorithm, indicating promising performance on practical learning problems.
accept-poster
Most reviewers seem in favour of accepting this paper, with the borderline-reject reviewer being satisfied with acceptance if the authors take special heed of their comments to improve the clarity of the paper when preparing the final version. From examination of the reviews, the paper achieves enough to warrant publication. Accept.
train
[ "Syl5iFwiKB", "SJeRWiM-iB", "BygR4_f-jB", "rJlRFHfWiS", "r1l215DtFH", "H1g40VNkcr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: the authors introduce a method to learn a deep-learning model whose loss function is augmented with a DPP-like regularization term to enforce diversity within the feature embeddings. \n\nDecision: I recommend that this paper be rejected. At a high level, this paper is experimentally focused, but I am not ...
[ 3, -1, -1, -1, 8, 6 ]
[ 5, -1, -1, -1, 4, 1 ]
[ "iclr_2020_rkeIq2VYPr", "Syl5iFwiKB", "H1g40VNkcr", "r1l215DtFH", "iclr_2020_rkeIq2VYPr", "iclr_2020_rkeIq2VYPr" ]
iclr_2020_r1ecqn4YwB
N-BEATS: Neural basis expansion analysis for interpretable time series forecasting
We focus on solving the univariate time series point forecasting problem using deep learning. We propose a deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers. The architecture has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train. We test the proposed architecture on several well-known datasets, including the M3, M4 and TOURISM competition datasets containing time series from diverse domains. We demonstrate state-of-the-art performance for two configurations of N-BEATS on all the datasets, improving forecast accuracy by 11% over a statistical benchmark and by 3% over last year's winner of the M4 competition, a domain-adjusted hand-crafted hybrid between neural network and statistical time series models. The first configuration of our model does not employ any time-series-specific components, and its performance on heterogeneous datasets strongly suggests that, contrary to received wisdom, deep learning primitives such as residual blocks are by themselves sufficient to solve a wide range of forecasting problems. Finally, we demonstrate how the proposed architecture can be augmented to provide outputs that are interpretable without considerable loss in accuracy.
accept-poster
The paper received positive recommendation from all reviewers. Accept.
train
[ "HkxhvW1H5S", "SJghIrYk9r", "B1xVpwzUiB", "Hke5AhiwjB", "Bkx2OPjKir", "Sye4p4tVir", "BJx_NbbQir", "Hyxr-nwyiB", "H1ei-PvyjS", "rkxJOVv1oH", "BkxaWAkZ9S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a DL architecture that achieves better performance on time series prediction. The proposed architecture is relatively straightforward and composes residual blocks. While the paper does achieve superior results, a lot of the text is devoted to comparing to prior work and arguing that DL approache...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_r1ecqn4YwB", "iclr_2020_r1ecqn4YwB", "Sye4p4tVir", "BkxaWAkZ9S", "HkxhvW1H5S", "H1ei-PvyjS", "HkxhvW1H5S", "BkxaWAkZ9S", "SJghIrYk9r", "HkxhvW1H5S", "iclr_2020_r1ecqn4YwB" ]
iclr_2020_rklp93EtwH
Automated Relational Meta-learning
In order to efficiently learn with a small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones. However, a critical challenge in meta-learning is task heterogeneity, which cannot be well handled by traditional globally shared meta-learning methods. In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks. In this paper, motivated by the way knowledge is organized in knowledge bases, we propose an automated relational meta-learning (ARML) framework that automatically extracts cross-task relations and constructs a meta-knowledge graph. When a new task arrives, it can quickly find the most relevant structure and tailor the learned structural knowledge to the meta-learner. As a result, the proposed framework not only addresses the challenge of task heterogeneity through a learned meta-knowledge graph, but also increases model interpretability. We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
accept-poster
This paper proposes to deal with task heterogeneity in meta-learning by extracting cross-task relations and constructing a meta-knowledge graph, which can then quickly adapt to new tasks. The authors present a comprehensive set of experiments, which show consistent performance gains over baseline methods, on a 2D regression task and a series of few-shot classification tasks. They further conducted some ablation studies and additional analyses/visualization to aid interpretation. Two of the reviewers were very positive, indicating that they found the paper well-written, motivated, novel, and thorough, assessments that I also share. The authors were very responsive to reviewer comments and implemented all actionable revisions, as far as I can see. The paper looks to be in great shape. I’m therefore recommending acceptance.
train
[ "rJenyvF6YB", "B1eHtPUsoS", "S1eiPzl_oH", "ryxUlOquiS", "S1xAuglusr", "rkgcfxldiH", "BJeu-JE2FB", "BkeCLhJAtr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "################################################################################\nSummary:\n\nThe paper provides an interesting direction in the meta-learning field. In particular, it proposes to enhance meta-learning performance by fully exploring relations across multiple tasks. To capture such information, the a...
[ 8, -1, -1, -1, -1, -1, 8, 3 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_rklp93EtwH", "iclr_2020_rklp93EtwH", "BJeu-JE2FB", "rJenyvF6YB", "rkgcfxldiH", "BkeCLhJAtr", "iclr_2020_rklp93EtwH", "iclr_2020_rklp93EtwH" ]
iclr_2020_Sylgsn4Fvr
To Relieve Your Headache of Training an MRF, Take AdVIL
We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results.
accept-poster
The paper proposes a black box algorithm for MRF training, utilizing a novel approach based on variational approximations of both the positive and negative phase terms of the log likelihood gradient (as R2 puts it, "a fairly creative combination of existing approaches"). Several technical and rhetorical points were raised by the reviewers, most of which seem to have been satisfactorily addressed, but all reviewers agreed that this was a good direction. The main weakness of the work is that the empirical work is very small scale, mainly due to the bottleneck imposed by an inner loop optimization of the variational distribution q(v, h). I believe it's important to note that most truly large scale results in the literature revolve around purely feedforward models that don't require expensive to compute approximations; that said, MNIST experiments would have been nice. Nevertheless, this work seems like a promising step on a difficult problem, and it seems that the ideas herein are worth disseminating, hopefully stimulating future work on rendering this procedure less expensive and more scalable.
train
[ "Bkx3SA9ooS", "Skxc1C5ojr", "S1xsOT5ioS", "HyemFh5ojS", "S1gGkTcjor", "B1gIVn5jiS", "BJegMJ0hKS", "Bkx48mSRKS", "BkxP9JsX9H" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nQ3: The limitations of the theory\n\nA3: Thanks for the suggestion. We revised Sec. 3.2 and Appendix E to address the comment. We agree that Lemma 1 assumes that the variational decoder should recover the distribution of the MRF *at convergence*. However, the assumption is weaker in the sense that it allows non-...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "Skxc1C5ojr", "BJegMJ0hKS", "S1gGkTcjor", "B1gIVn5jiS", "Bkx48mSRKS", "BkxP9JsX9H", "iclr_2020_Sylgsn4Fvr", "iclr_2020_Sylgsn4Fvr", "iclr_2020_Sylgsn4Fvr" ]
iclr_2020_H1lBj2VFPS
Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware
With the proliferation of specialized neural network processors that operate on low-precision integers, the performance of Deep Neural Network inference becomes increasingly dependent on the result of quantization. Despite plenty of prior work on the quantization of weights or activations for neural networks, there is still a wide gap between the software quantizers and the low-precision accelerator implementation, which degrades either the efficiency of networks or that of the hardware due to the lack of software and hardware coordination at the design phase. In this paper, we propose a learned linear symmetric quantizer for integer neural network processors, which not only quantizes neural parameters and activations to low-bit integers but also accelerates hardware inference by using batch normalization fusion and low-precision accumulators (e.g., 16-bit) and multipliers (e.g., 4-bit). We use a unified way to quantize weights and activations, and the results outperform many previous approaches for various networks such as AlexNet, ResNet, and lightweight models like MobileNet, while remaining friendly to the accelerator architecture. Additionally, we apply the method to object detection models and witness high performance and accuracy in YOLO-v2. Finally, we deploy the quantized models on our specialized integer-arithmetic-only DNN accelerator to show the effectiveness of the proposed quantizer. We show that even with linear symmetric quantization, the results can be better than those of asymmetric or non-linear methods in 4-bit networks. In evaluation, the proposed quantizer induces less than 0.4\% accuracy drop in ResNet18, ResNet34, and AlexNet when quantizing the whole network as required by the integer processors.
accept-poster
This paper considers the question of how to quantize deep neural networks for processors operating on low-precision integers. The authors propose a methodology and have evaluated it thoroughly. The reviewers all agree that this question is important in practice, though there was disagreement about how novel a contribution this paper is specifically, and about its clarity. The clarity questions were resolved in the rebuttal, so I lean toward accepting the paper.
test
[ "H1eYUpB8oH", "rJgfbir8jr", "ryxfBzICtS", "SyxkdJsFsr", "S1xjlcrUjB", "H1e9joSUiS", "ByxRzN_RKH", "HJezXLYC5B" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "New supplemental materials have been added to the appendix of the paper, and we will continue to update the paper before the rebuttal deadline.\n\nMain updates:\n1. The pseudo-code;\n2. The training time of LLSQ;\n3. The simulated gradient formulation of the scaling factors and the bit-shift quantization formulat...
[ -1, -1, 6, -1, -1, -1, 6, 3 ]
[ -1, -1, 3, -1, -1, -1, 1, 4 ]
[ "iclr_2020_H1lBj2VFPS", "ByxRzN_RKH", "iclr_2020_H1lBj2VFPS", "S1xjlcrUjB", "ryxfBzICtS", "HJezXLYC5B", "iclr_2020_H1lBj2VFPS", "iclr_2020_H1lBj2VFPS" ]
iclr_2020_B1xIj3VYvr
Weakly Supervised Clustering by Exploiting Unique Class Count
A weakly supervised learning based clustering framework is proposed in this paper. As the core of this framework, we introduce a novel multiple instance learning task based on a bag level label called unique class count (ucc), which is the number of unique classes among all instances inside the bag. In this task, no annotations on individual instances inside the bag are needed during training of the models. We mathematically prove that with a perfect ucc classifier, perfect clustering of individual instances inside the bags is possible even when no annotations on individual instances are given during training. We have constructed a neural network based ucc classifier and experimentally shown that the clustering performance of our framework with our weakly supervised ucc classifier is comparable to that of fully supervised learning models where labels for all instances are known. Furthermore, we have tested the applicability of our framework to a real world task of semantic segmentation of breast cancer metastases in histological lymph node sections and shown that the performance of our weakly supervised framework is comparable to the performance of a fully supervised Unet model.
accept-poster
The paper proposes a weakly supervised learning algorithm, motivated by its application to histopathology. Similar to the multiple instance learning scenario, labels are provided for bags of instances. However, instead of a single (binary) label per bag, the paper introduces a setting where the training algorithm is provided with the number of classes in the bag (but not which ones). Careful empirical experiments on semantic segmentation of histopathology data, as well as simulated labelling from MNIST and CIFAR, demonstrate the usefulness of the method. The proposed approach is similar in spirit to works such as learning from label proportions and UU learning (both of which solve classification tasks). http://www.jmlr.org/papers/volume10/quadrianto09a/quadrianto09a.pdf https://arxiv.org/abs/1808.10585 The reviews are widely spread, with a low confidence reviewer rating (1). However, it seems that the high confidence reviewers are also providing higher scores and better comments. The authors addressed many of the reviewer comments and sought clarification on certain points, but the reviewers did not engage further during the discussion period. This paper provides a novel weakly supervised learning setting, motivated by a real world semantic segmentation task, and presents an algorithm to learn from only the number of classes per bag, which is demonstrated to work in empirical experiments. It is a good addition to the ICLR program.
train
[ "H1ltFvBdoS", "B1lh_KrOjB", "H1xxfX07jr", "HyxEhVAmjB", "HkxzH40QsS", "SJx5FW0mir", "HJeO1rrZor", "Hyg1rHHZsS", "BygIZGO3YS", "r1e8JoT2tS", "SyedjL7pYH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n2) \"UCC model uses bags of sizes from 1 to 4. Assuming uniform distribution, 25% of the samples are fully labeled. The semi-supervised methods from Table 6 how many labeled samples do they use? Having that small bag samples the problem is quite easy. Moreover, during training, I guess that the same sample can g...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 1, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 4 ]
[ "HkxzH40QsS", "HyxEhVAmjB", "SyedjL7pYH", "BygIZGO3YS", "r1e8JoT2tS", "iclr_2020_B1xIj3VYvr", "r1e8JoT2tS", "BygIZGO3YS", "iclr_2020_B1xIj3VYvr", "iclr_2020_B1xIj3VYvr", "iclr_2020_B1xIj3VYvr" ]
iclr_2020_r1gdj2EKPB
Scalable and Order-robust Continual Learning with Additive Parameter Decomposition
While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-sized datasets, there are issues that remain to be tackled in order to apply them to real-world problem domains. First, a continual learning model should effectively handle catastrophic forgetting and be efficient to train even with a large number of tasks. Secondly, it needs to tackle the problem of order-sensitivity, where the performance of the tasks largely varies based on the order of the task arrival sequence, as it may cause serious problems where fairness plays a critical role (e.g. medical diagnosis). To tackle these practical challenges, we propose a novel continual learning method that is scalable as well as order-robust, which instead of learning a completely shared set of weights, represents the parameters for each task as a sum of task-shared and sparse task-adaptive parameters. With our Additive Parameter Decomposition (APD), the task-adaptive parameters for earlier tasks remain mostly unaffected, where we update them only to reflect the changes made to the task-shared parameters. This decomposition of parameters effectively prevents catastrophic forgetting and order-sensitivity, while being computation- and memory-efficient. Further, we can achieve even better scalability with APD using hierarchical knowledge consolidation, which clusters the task-adaptive parameters to obtain hierarchically shared parameters. We validate our network with APD, APD-Net, on multiple benchmark datasets against state-of-the-art continual learning methods, which it largely outperforms in accuracy, scalability, and order-robustness.
accept-poster
The submission addresses the problem of continual learning with large numbers of tasks and variable task ordering and proposes a parameter decomposition approach such that part of the parameters are task-adaptive and some are task-shared. The validation is on omniglot and other benchmarks. The reviews were mixed on this paper, but most reviewers were favorably impressed with the problem setup, the scalability of the method, and the results. The baselines were limited but acceptable. The recommendation is to accept this paper, but the authors are advised to address all the points in the reviews in their final revision.
train
[ "S1e4AYBAtB", "rkeFAoNnjS", "B1lHdSr3iS", "r1gT4B4hjH", "HyeTqEBhiS", "rJlE9aE2jS", "HJlJEwV2iB", "SylSxFg2or", "H1e8e81ntS", "r1eencGisH", "S1l6UcMisB", "Bkl9ey9For", "Ske630FYsH", "SkxCZAYKiH", "ryxv0-DuoS", "rkgNFWvdjr", "H1epx-DOsr", "Syg4508dsB", "HylrvBHXqH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: The paper addresses continual learning challenges such as catastrophic forgetting and task-order robustness by introducing a new hybrid algorithm that uses architecture growth as well as parameter regularization where parameters of each layer are decomposed into task-specific and task-private parameters. ...
[ 1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_r1gdj2EKPB", "r1eencGisH", "S1l6UcMisB", "r1eencGisH", "r1eencGisH", "SylSxFg2or", "r1eencGisH", "Syg4508dsB", "iclr_2020_r1gdj2EKPB", "H1epx-DOsr", "H1epx-DOsr", "iclr_2020_r1gdj2EKPB", "S1e4AYBAtB", "S1e4AYBAtB", "H1e8e81ntS", "H1e8e81ntS", "H1e8e81ntS", "HylrvBHXqH", ...
iclr_2020_Hklso24Kwr
Continual Learning with Adaptive Weights (CLAW)
Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.
accept-poster
The paper proposes a new variational-inference-based continual learning algorithm with strong performance. There was some disagreement in the reviews, with perhaps the one shared concern being the complexity of the proposed method. One reviewer brought up other potentially related work, but this was convincingly rebutted by the authors. Finally, one reviewer had an issue with the simplicity of the networks in the experiments, but the authors rightly pointed out that the architectures were simply designed to match those from the baselines. Continual learning has been an active area for quite some time and convincingly achieving SOTA in a new way is a strong contribution, and will be of interest to the community. Progress in a field is sometimes made by iteratively simplifying an initially complex solution, and this work lays a brick in that direction. For these reasons, I recommend acceptance.
train
[ "rkx_G-XciB", "BJe7P1mcjr", "SJg-iM7qsS", "HJgp2lKpYB", "Sygqf-m0Fr", "B1xod8Xe9S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the welcome feedback, which we will/do incorporate into the revised version. \n\n- The sequential task setting for continual learning has very little to add in practice, and to other branches of machine learning: it has little to say for domains where continual learning problems occur nat...
[ -1, -1, -1, 3, 8, 3 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "Sygqf-m0Fr", "B1xod8Xe9S", "HJgp2lKpYB", "iclr_2020_Hklso24Kwr", "iclr_2020_Hklso24Kwr", "iclr_2020_Hklso24Kwr" ]
iclr_2020_rJxAo2VYwr
Transferable Perturbations of Deep Feature Distributions
Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network. This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions. We achieve state-of-the-art targeted blackbox transfer-based attack results for undefended ImageNet models. Further, we place a priority on explainability and interpretability of the attacking process. Our methodology affords an analysis of how adversarial attacks change the intermediate feature distributions of CNNs, as well as a measure of layer-wise and class-wise feature distributional separability/entanglement. We also conceptualize a transition from task/data-specific to model-specific features within a CNN architecture that directly impacts the transferability of adversarial examples.
accept-poster
This paper considers black box adversarial attacks based on perturbations of the intermediate layers of a neural network classifier, obtained by training a binary classifier for each target class. Reviewers were happy with the novelty of the approach as well as the presentation, described the presentation as rigorous, and were pleased with the situation of this method relative to the literature. R3 had concerns about evaluation, success rate, and that the procedure was "cumbersome". Some of their concerns were addressed in rebuttal, but R3 remained steadfast that the method was too cumbersome to be practical. I agree with R1 & R2 that this approach is novel and interesting and disagree with R3 that it is too impractical. The paper could be stronger with the addition of adversarial training experiments (and I disagree with the authors that "there are currently no whitebox attacks that do well at attacking AT models", this is very much not the case), but I concur with R1 & R2 that this is interesting work that may stimulate further exploration, enough so to warrant acceptance.
train
[ "H1xHsPMsiB", "HJe0aM3sYB", "Syx1UcJjjS", "HJxWeUeror", "rye7hSeSsB", "H1xvrrxHsH", "BJeiCEeriH", "B1l4i1BCtB", "S1gWD_G9Yr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the reply and the upgraded rating. We’re glad that we were able to clear up any confusion about [1] with respect to our results. Some additional comments about the training process:\n\nWe want to emphasize that it is only necessary to train the auxiliary model $g_{l,c}$. Like other attack methods, we...
[ -1, 3, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, 1, 3 ]
[ "Syx1UcJjjS", "iclr_2020_rJxAo2VYwr", "HJxWeUeror", "rye7hSeSsB", "HJe0aM3sYB", "S1gWD_G9Yr", "B1l4i1BCtB", "iclr_2020_rJxAo2VYwr", "iclr_2020_rJxAo2VYwr" ]
iclr_2020_BJe1334YDH
A Learning-based Iterative Method for Solving Vehicle Routing Problems
This paper is concerned with solving combinatorial optimization problems, in particular, the capacitated vehicle routing problems (CVRP). Classical Operations Research (OR) algorithms such as LKH3 \citep{helsgaun2017extension} are inefficient and difficult to scale to larger-size problems. Machine learning based approaches have recently shown to be promising, partly because of their efficiency (once trained, they can perform solving within minutes or even seconds). However, there is still a considerable gap between the quality of a machine learned solution and what OR methods can offer (e.g., on CVRP-100, the best result of learned solutions is between 16.10-16.80, significantly worse than LKH3's 15.65). In this paper, we present ``Learn to Improve'' (L2I), the first learning based approach for CVRP that is efficient in solving speed and at the same time outperforms OR methods. Starting with a random initial solution, L2I learns to iteratively refine the solution with an improvement operator, selected by a reinforcement learning based controller. The improvement operator is selected from a pool of powerful operators that are customized for routing problems. By combining the strengths of the two worlds, our approach achieves the new state-of-the-art results on CVRP, e.g., an average cost of 15.57 on CVRP-100.
accept-poster
The paper proposes a combination of RL-based iterative improvement operators to progressively refine the solution for the capacitated vehicle routing problem. It has been shown to outperform both classical non-learning-based and SOTA learning-based methods. The idea is novel, the results are impressive, and the presentation is clear. The authors also addressed the concern about the lack of validation on larger tasks by including additional experiments in an appendix.
train
[ "SkeKKnRaKB", "HJxugmKhiH", "HJx9FfpjsS", "r1e8rfajir", "rkxcpMvnsB", "B1xJQA3iiS", "ByxhHphooS", "SJxSed6qdB", "B1x_eK-RYB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "EDIT: I have read the responses and increased my rating to Weak Accept.\n\nThe paper proposes an algorithm for the Capacitated Vehicle Routing problem that starts with a random solution and then iteratively improves it by using a learned policy to select an improvement operator to apply to the current solution. On...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BJe1334YDH", "rkxcpMvnsB", "SkeKKnRaKB", "SkeKKnRaKB", "ByxhHphooS", "SJxSed6qdB", "B1x_eK-RYB", "iclr_2020_BJe1334YDH", "iclr_2020_BJe1334YDH" ]
iclr_2020_SkxgnnNFvH
Poly-encoders: Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
accept-poster
The paper presents a new architecture that achieves the advantages of both Bi-encoder and Cross-encoder architectures. The proposed idea is reasonable and well-motivated, and the paper is clearly written. The experimental results on retrieval and dialog tasks are strong, achieving high accuracy while the computational efficiency is orders of magnitude smaller than Cross-encoder. All reviewers recommend acceptance of the paper and this AC concurs.
train
[ "ByeUgZT35r", "BylSsUmijH", "H1xLuTzbcr", "HJeESBliqS" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This work proposes a new transformer architecture for tasks that involve a query sequence and multiple candidate sequences. The proposed architecture, called poly-encoder, strikes a balance between a dual encoder which independently encodes the query and candidate and combines representations at the top, ...
[ 8, -1, 8, 6 ]
[ 4, -1, 1, 1 ]
[ "iclr_2020_SkxgnnNFvH", "ByeUgZT35r", "iclr_2020_SkxgnnNFvH", "iclr_2020_SkxgnnNFvH" ]
iclr_2020_rygfnn4twS
AutoQ: Automated Kernel-Wise Neural Network Quantization
Network quantization is one of the most hardware friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices. Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy. The quantization bitwidth or bit number (QBN) directly decides the inference accuracy, latency, energy and hardware overhead. To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs. However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large. The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results. It is difficult for even deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy. In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer. Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.
accept-poster
This paper proposes a network quantization method which is based on kernel-level quantization. The extension from layer-level to kernel-level is straightforward, and so the novelty is somewhat limited given its similarity with HAQ. Nevertheless, experimental results demonstrate its efficiency in real applications. The paper can be improved by clarifying some experimental details and by further discussing its relationship with HAQ.
train
[ "H1ebnpXOnH", "BkeC2ghiFB", "HyggGAw6YB", "HJlrlfxUoS", "Hke2bWfriB", "Skei0BMBoS", "HJxJ0SuRtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Summary\nThis paper proposes a network quantization method. Different from previous methods focusing on network-level or layer-lever quantization, this work pays attention to kernel-level quantization. Specifically, they use a hierarchical reinforcement learning framework to search in the search space related with...
[ 6, 3, 8, -1, -1, -1, 6 ]
[ 3, 1, 3, -1, -1, -1, 1 ]
[ "iclr_2020_rygfnn4twS", "iclr_2020_rygfnn4twS", "iclr_2020_rygfnn4twS", "BkeC2ghiFB", "HJxJ0SuRtH", "HyggGAw6YB", "iclr_2020_rygfnn4twS" ]
iclr_2020_BJxH22EKPS
Understanding Architectures Learnt by Cell-based Neural Architecture Search
Neural architecture search (NAS) searches architectures automatically for given tasks, e.g., image classification and language modeling. Improving the search efficiency and effectiveness has attracted increasing attention in recent years. However, few efforts have been devoted to understanding the generated architectures. In this paper, we first reveal that existing NAS algorithms (e.g., DARTS, ENAS) tend to favor architectures with wide and shallow cell structures. These favorable architectures consistently achieve fast convergence and are consequently selected by NAS algorithms. Our empirical and theoretical study further confirms that their fast convergence derives from their smooth loss landscape and accurate gradient information. Nonetheless, these architectures may not necessarily lead to better generalization performance compared with other candidate architectures in the same search space, and therefore further improvement is possible by revising existing NAS algorithms.
accept-poster
The paper reports interesting NAS patterns, supported by empirical and theoretical evidence that the pattern arises due to a smooth loss landscape. Reviewers generally agree that this paper would be of interest to NAS researchers. Some questions raised by reviewers were answered by the authors with a few extra experiments. We highly recommend that the authors carefully reflect on the reviewers' comments on both the pros and cons of the paper before the camera ready.
train
[ "Bke6KMzcqr", "BJlFXM9aKB", "BJeWrbUKoB", "rJex0levjB", "BJxWqexvir", "rJg7qeJmor", "Syg3VDyXjB", "B1xFOow8iH", "HJx8NdWaKB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper makes an interesting observation and tries to explain what causes it: architecture search methods tend to favor models that are easier to optimize, but not necessarily better at generalization. I lean towards accepting the paper but there is some clear room for improvement.\n\nThe paper shows that NAS m...
[ 8, 6, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BJxH22EKPS", "iclr_2020_BJxH22EKPS", "iclr_2020_BJxH22EKPS", "BJlFXM9aKB", "BJlFXM9aKB", "Bke6KMzcqr", "HJx8NdWaKB", "Syg3VDyXjB", "iclr_2020_BJxH22EKPS" ]
iclr_2020_r1xPh2VtPB
SVQN: Sequential Variational Soft Q-Learning Networks
Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions. Standard reinforcement learning algorithms for solving Markov Decision Processes (MDP) tasks are not applicable, as they cannot infer the unobserved states. In this paper, we propose a novel algorithm for POMDPs, named sequential variational soft Q-learning networks (SVQNs), which formalizes the inference of hidden states and maximum entropy reinforcement learning (MERL) under a unified graphical model and optimizes the two modules jointly. We further design a deep recurrent neural network to reduce the computational complexity of the algorithm. Experimental results show that SVQNs can utilize past information to help decision making for efficient inference, and outperforms other baselines on several challenging tasks. Our ablation study shows that SVQNs have the generalization ability over time and are robust to the disturbance of the observation.
accept-poster
The paper proposes a novel model-free solution to POMDPs, with a unified graphical model for hidden state inference and maximum entropy RL. The method is principled and provides good empirical results on a set of experiments that is relatively comprehensive. I would have liked to see more POMDP tasks instead of Atari, but the results are good. Overall this is good work.
train
[ "HJgfkkxDsS", "HyxupCkPsH", "HJgdqX8HKB", "SJls05-ptS" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank Reviwer #3 for the valuable comments and the appreciation of our novel contributions. ", "We thank the reviewer for the valuable comments. However, we humbly disagree on the primary concern that our paper is an \"obvious extension\" of DRQNs (Hausknecht & Stone, 2015). \n\nFirstly, it is not just an \"o...
[ -1, -1, 8, 3 ]
[ -1, -1, 3, 4 ]
[ "HJgdqX8HKB", "SJls05-ptS", "iclr_2020_r1xPh2VtPB", "iclr_2020_r1xPh2VtPB" ]
iclr_2020_rJld3hEYvS
Ranking Policy Gradient
Sample inefficiency is a long-lasting problem in reinforcement learning (RL). The state-of-the-art estimates the optimal action values while it usually involves an extensive search over the state-action space and unstable optimization. Towards sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles. These results lead to a general off-policy learning framework, which preserves the optimality, reduces variance, and improves the sample-efficiency. We conduct extensive experiments showing that when consolidated with the off-policy learning framework, RPG substantially reduces the sample complexity compared to the state-of-the-art.
accept-poster
The paper introduces a novel and effective approach to policy optimization. The overall contribution is sufficient to merit acceptance. Nevertheless, the authors should improve the presentation and experimental evaluation in line with the reviewer criticisms. The criticisms of AnonReviewer2 in particular should not be neglected. Regarding the theory, I agree with AnonReviewer3 that the UNOP assumption is too limiting. The paper would be much stronger if this assumption could be significantly weakened, or better justified.
train
[ "BJl5vuFnsr", "S1lKhPMvoH", "H1xuIvMvjH", "S1x5ePGwjS", "r1lM4XHstB", "SyxNjjojKB", "rJenP7R6FH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all the reviewers for their helpful comments. We have provided responses to all of the reviewers, and have updated our paper accordingly. In summary, we would like to highlight the following changes:\n1) Experiments: we added two additional baselines (SIL for all games and ACER in ablation s...
[ -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2020_rJld3hEYvS", "r1lM4XHstB", "SyxNjjojKB", "rJenP7R6FH", "iclr_2020_rJld3hEYvS", "iclr_2020_rJld3hEYvS", "iclr_2020_rJld3hEYvS" ]
iclr_2020_rkxoh24FPH
On Mutual Information Maximization for Representation Learning
Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data. This comes with several immediate problems: For example, MI is notoriously hard to estimate, and using it as an objective for representation learning may lead to highly entangled representations due to its invariance under arbitrary invertible transformations. Nevertheless, these methods have been repeatedly shown to excel in practice. In this paper we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators. Finally, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation for the success of the recently introduced methods.
accept-poster
This paper examines the role of mutual information (MI) estimation in representation learning. Through experiments, the authors show that large MI is not predictive of downstream performance, and that the empirical success of methods like InfoMax may be attributed more to the inductive bias in the choice of discriminator architectures than to accurate MI estimation. The work is well appreciated by the reviewers. It forms a strong contribution and may motivate subsequent works in the field.
train
[ "rkgq4K2Isr", "ryxdEd28jH", "SygNXD3IsB", "rJgugNuaKH", "rJxFYm3TtB", "HyxjcDWbcH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful comments and thorough review. While indeed our main focus was to start a discussion and point out widespread misconceptions with the current understanding of the methods motivated by information maximization, we believe that there are many exciting approaches to address some of these ...
[ -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, 4, 1, 3 ]
[ "rJgugNuaKH", "rJxFYm3TtB", "HyxjcDWbcH", "iclr_2020_rkxoh24FPH", "iclr_2020_rkxoh24FPH", "iclr_2020_rkxoh24FPH" ]
iclr_2020_HJli2hNKDH
Observational Overfitting in Reinforcement Learning
A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent may mistakenly correlate reward with certain spurious features from the observations generated by the Markov Decision Process (MDP). We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks from only modifying the observation space of an MDP. When an agent overfits to different observation spaces even if the underlying MDP dynamics is fixed, we term this observational overfitting. Our experiments expose intriguing properties especially with regards to implicit regularization, and also corroborate results from previous works in RL generalization and supervised learning (SL).
accept-poster
The paper proposes a way to analyze overfitting to non-relevant parts of the state space in RL and proposes a framework to measure this type of generalization error. All reviewers agree that the formulation is interesting and useful for practical RL.
train
[ "SJxnmgeCFr", "BygwRoiNsS", "SJe1ypjEoH", "BJg012sViB", "H1gcwoj4oB", "BJeA4jo4iH", "ryxi-2sVor", "HJlLkIImYr", "rygLTFZ4YH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Review:\n\nThis paper considers the problem of overfitting in RL through a specific model of overfitting, namely one where noise is introduced in the observation space, independently of the controllable dynamics. The paper provides both theoretical and empirical insights into the various manifestations of overfit...
[ 6, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HJli2hNKDH", "SJxnmgeCFr", "iclr_2020_HJli2hNKDH", "BygwRoiNsS", "HJlLkIImYr", "rygLTFZ4YH", "BJg012sViB", "iclr_2020_HJli2hNKDH", "iclr_2020_HJli2hNKDH" ]
iclr_2020_BkgWahEFvr
Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from an adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism by which this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images. Our method is generic and can be integrated with existing transformation-based defenses.
accept-poster
This paper investigates tradeoffs between preserving accuracy on clean samples and increasing robustness on adversarial samples by using transformations and majority votes. Observations on the distribution of the induced softmax show that existing methods could be improved by leveraging information from that distribution to correct predictions, as confirmed by experiments. The problem space is important and reviewers find the approach interesting. Authors have provided some necessary clarifications during rebuttal and additional experiments. While some reservations remain, this paper's premise and its experimental results appear sufficiently interesting to justify an acceptance recommendation.
train
[ "BJlcBGi3cr", "BJgKs4u3oB", "r1lyJ-rhir", "S1gaTa4FjB", "HJeL3YMVsB", "rJgFpRhmjS", "B1xHgy6Xjr", "S1x1d03Xor", "BJlV06hXor", "Skg35ThQiS", "BJgr54CbjB", "BJeARU7ZiB", "BkgUEOWkqH", "HJeltjbHcH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "\n\n===== Post Rebuttal ===== \n\nThe authors addressed almost all of my comments. I think the revised paper is of higher quality and clarity as a result. Figure 3 is the main remaining concern. However, given the listed strengths of the paper, I am happy to change my rating to weak accept .\n\n===== Summary ====...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_BkgWahEFvr", "r1lyJ-rhir", "BJlV06hXor", "HJeL3YMVsB", "iclr_2020_BkgWahEFvr", "S1x1d03Xor", "BkgUEOWkqH", "BJgr54CbjB", "Skg35ThQiS", "BJlcBGi3cr", "BJeARU7ZiB", "HJeltjbHcH", "iclr_2020_BkgWahEFvr", "iclr_2020_BkgWahEFvr" ]
iclr_2020_BkgXT24tDS
Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks
We propose Additive Powers-of-Two (APoT) quantization, an efficient non-uniform quantization scheme for the bell-shaped and long-tailed distribution of weights and activations in neural networks. By constraining all quantization levels to be sums of powers-of-two terms, APoT quantization enjoys high computational efficiency and a good match with the distribution of weights. A simple reparameterization of the clipping function is applied to generate a better-defined gradient for learning the clipping threshold. Moreover, weight normalization is presented to refine the distribution of weights to make the training more stable and consistent. Experimental results show that our proposed method outperforms state-of-the-art methods and is even competitive with the full-precision models, demonstrating the effectiveness of our proposed APoT quantization. For example, our 4-bit quantized ResNet-50 on ImageNet achieves 76.6% top-1 accuracy without bells and whistles; meanwhile, our model reduces computational cost by 22% compared with the uniformly quantized counterpart.
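A minimal sketch of the additive powers-of-two idea: quantization levels are built as sums of power-of-two terms, which clusters levels near zero to match bell-shaped weight distributions. The exponent sets below are illustrative, not the paper's exact configuration.

```python
import numpy as np

def apot_levels(exps_a=(-1, -3, -5), exps_b=(-2, -4, -6)):
    """Build non-uniform quantization levels as sums of two power-of-two
    terms (a simplified illustration of APoT). Each term is either 0 or
    2**e for an exponent in its (disjoint) candidate set."""
    terms_a = [0.0] + [2.0 ** e for e in exps_a]
    terms_b = [0.0] + [2.0 ** e for e in exps_b]
    return np.array(sorted({a + b for a in terms_a for b in terms_b}))

def quantize(x, levels):
    """Round each entry of x to the nearest quantization level."""
    x = np.asarray(x, dtype=float)
    idx = np.abs(x[..., None] - levels[None, :]).argmin(axis=-1)
    return levels[idx]

levels = apot_levels()
w = np.array([0.0, 0.3, 0.52, 0.8])
print(quantize(w, levels))
```

Because every level is a short sum of shifts, the multiply in a dot product can be replaced by shift-and-add on dedicated hardware, which is the source of the claimed efficiency.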
accept-poster
This paper presents a quantization scheme with the advantage of high computational efficiency. The experimental results show that the proposed scheme outperforms SOTA methods and is competitive with the full-precision models. The reviewers initially raised some concerns including baseline ResNet performance, detailed comparison of the quantization size, and comparison with ResNet50. Authors addressed these concerns in the rebuttal and revised the draft to accommodate the requested items. The reviewers appreciated the revision and find it highly improved. Their overall recommendation is toward accept, which I also support.
train
[ "rkxQ9BYvKr", "HkeFobjzjH", "BkeTJZjGjr", "S1xzIxoGiS", "HkgrAksfiH", "SJe9wpdScS", "SJgDOp8L5S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe authors propose to compress Neural Networks (NNs) by quantizing their weights to sums of powers-of-two. This allows both to take the non-uniform distribution of the weights into account and to perform fast inference on dedicated hardware. \n\nStrengths of the paper:\n- The problem is clearly stated (...
[ 6, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_BkgXT24tDS", "rkxQ9BYvKr", "SJe9wpdScS", "SJgDOp8L5S", "iclr_2020_BkgXT24tDS", "iclr_2020_BkgXT24tDS", "iclr_2020_BkgXT24tDS" ]
iclr_2020_rJx4p3NYDB
Lazy-CFR: fast and near-optimal regret minimization for extensive games with imperfect information
Counterfactual regret minimization (CFR) methods are effective for solving two-player zero-sum extensive games with imperfect information, achieving state-of-the-art results. However, vanilla CFR has to traverse the whole game tree in each round, which is time-consuming in large-scale games. In this paper, we present Lazy-CFR, a CFR algorithm that adopts a lazy update strategy to avoid traversing the whole game tree in each round. We prove that the regret of Lazy-CFR is almost the same as the regret of vanilla CFR, while Lazy-CFR only needs to visit a small portion of the game tree. Thus, Lazy-CFR is provably faster than CFR. Empirical results consistently show that Lazy-CFR is significantly faster than vanilla CFR.
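The per-infoset update that CFR runs in every round, and that Lazy-CFR defers at rarely reached infosets, is regret matching. A self-contained sketch on a two-action toy (the alternating opponent below is only for illustration):

```python
import numpy as np

def regret_matching(cum_regret):
    """Regret matching: play each action with probability proportional to
    its positive cumulative regret (uniform if none is positive). This is
    the per-infoset update CFR runs every round; Lazy-CFR simply defers it
    at infosets the opponent rarely reaches."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    if total <= 0:
        return np.full_like(pos, 1.0 / len(pos))
    return pos / total

def update_regret(cum_regret, action_utils, strategy):
    """Accumulate counterfactual regret: each action's utility minus the
    expected utility under the current strategy."""
    ev = float(action_utils @ strategy)
    return cum_regret + (action_utils - ev)

# Toy two-action game against a deterministically alternating opponent.
cum = np.zeros(2)
for t in range(1000):
    strat = regret_matching(cum)
    utils = np.array([1.0, -1.0]) if t % 2 == 0 else np.array([-1.0, 1.0])
    cum = update_regret(cum, utils, strat)
print(regret_matching(cum))
```

Against the symmetric alternating opponent, the strategy settles at (0.5, 0.5), mirroring the regret-minimization guarantee the paper's analysis builds on.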
accept-poster
The paper proposes a regret-based approach to speed up counterfactual regret minimization. The reviewers find the proposed approach interesting. However, the method requires large memory. Additional experimental comparisons, including those pointed out by reviewers and public comments, would help improve the paper.
train
[ "Syx6tQccKB", "ryxVsIO2sH", "HklwCIXuoB", "SJeGev7Osr", "rkgWs8XOoH", "HkxAPUXujH", "Hyg1iJyMiS", "HJly7LIcYS", "HyeDKG10KH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an improvement to Counterfactual Regret Minimization, avoiding traversing the whole tree on each iteration. The idea is not to change the strategy in those infosets, where the reach probability of opponents is low. The strategy in such infosets is only updated once in several iterations, when th...
[ 8, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_rJx4p3NYDB", "HklwCIXuoB", "Syx6tQccKB", "HJly7LIcYS", "HyeDKG10KH", "Hyg1iJyMiS", "iclr_2020_rJx4p3NYDB", "iclr_2020_rJx4p3NYDB", "iclr_2020_rJx4p3NYDB" ]
iclr_2020_BJeS62EtwH
Knowledge Consistency between Neural Networks and Beyond
This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition for knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components, which represent the consistent knowledge, from raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights to explain the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost performance.
accept-poster
This paper presents a method for extracting "knowledge consistency" between neural networks and understanding their representations. Reviewers and the AC are positive about the paper, in terms of its insightful findings and practical uses, and also gave constructive suggestions to improve it. In particular, I think the paper can gain much attention from the ICLR audience. Hence, I recommend acceptance.
test
[ "BylN-zSe5H", "BJefPfcRYB", "Bkx0vFRwoB", "rygiUdRDoB", "SkePfHCPjB", "r1xeVM5WiB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The submission proposes a method for extracting \"knowledge consistency\" in neural networks and using that toward analyzing different aspects of them, eg understanding the representations, explaining knowledge distillation, and analyzing network compression. What's defined as consistent knowledge is essentially t...
[ 8, 6, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, 3 ]
[ "iclr_2020_BJeS62EtwH", "iclr_2020_BJeS62EtwH", "BylN-zSe5H", "BJefPfcRYB", "r1xeVM5WiB", "iclr_2020_BJeS62EtwH" ]
iclr_2020_Hyg9anEFPS
Image-guided Neural Object Rendering
We propose a learned image-guided rendering technique that combines the benefits of image-based rendering and GAN-based image synthesis. The goal of our method is to generate photo-realistic re-renderings of reconstructed objects for virtual and augmented reality applications (e.g., virtual showrooms, virtual tours and sightseeing, the digital inspection of historical artifacts). A core component of our work is the handling of view-dependent effects. Specifically, we directly train an object-specific deep neural network to synthesize the view-dependent appearance of an object. As input data we use an RGB video of the object. This video is used to reconstruct a proxy geometry of the object via multi-view stereo. Based on this 3D proxy, the appearance of a captured view can be warped into a new target view as in classical image-based rendering. This warping assumes diffuse surfaces; in the case of view-dependent effects, such as specular highlights, it leads to artifacts. To this end, we propose EffectsNet, a deep neural network that predicts view-dependent effects. Based on these estimations, we are able to convert observed images to diffuse images. These diffuse images can be projected into other views. In the target view, our pipeline reinserts the new view-dependent effects. To composite multiple reprojected images into a final output, we learn a composition network that outputs photo-realistic results. Using this image-guided approach, the network does not have to allocate capacity on "remembering" object appearance; instead, it learns how to combine the appearance of captured images. We demonstrate the effectiveness of our approach both qualitatively and quantitatively on synthetic as well as on real data.
accept-poster
The paper presents a new variation of neural (re) rendering of objects, that uses a set of two deep ConvNets to model non-Lambertian effects associated with an object. The paper has received mostly positive reviews. The reviewers agree that the contribution is well-described, valid and valuable. The method is validated against strong baselines including Hedman et al., though Reviewer4 rightfully points out that the comparison might have been more thorough. One additional concern not raised by the reviewers is the lack of comparison with [Thies et al. 2019], which is briefly mentioned but not discussed. The authors are encouraged to provide a corresponding comparison (as well as additional comparisons with Hedman et al) and discuss pros and cons w.r.t. [Thies et al] in the final version.
train
[ "Bkl-0CW2jB", "Byln3WvatS", "rJeAFcPpFr", "BkewAg0n9r", "rJgQmJS6qB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers,\nThank you for your detailed reviews!\nAll reviewers are positive regarding our proposed method.\n\nSome concerns regarding the experiments were raised that we want to clarify in a minor revision. Specifically, we will improve the presentation of the comparison to DeepBlending of Hedman et al. to h...
[ -1, 6, 8, 3, 6 ]
[ -1, 3, 4, 4, 3 ]
[ "iclr_2020_Hyg9anEFPS", "iclr_2020_Hyg9anEFPS", "iclr_2020_Hyg9anEFPS", "iclr_2020_Hyg9anEFPS", "iclr_2020_Hyg9anEFPS" ]
iclr_2020_HkgTTh4FDH
Implicit Bias of Gradient Descent based Adversarial Training on Separable Data
Adversarial training is a principled approach for training robust neural networks. Despite tremendous successes in practice, its theoretical properties still remain largely unexplored. In this paper, we provide new theoretical insights into gradient descent based adversarial training by studying its computational properties, specifically its implicit bias. We take the binary classification task on linearly separable data as an illustrative example, where the loss asymptotically attains its infimum as the parameter diverges to infinity along certain directions. Specifically, we show that for any fixed iteration T, when the adversarial perturbation during training has a properly bounded L2 norm, the classifier learned by gradient descent based adversarial training converges in direction to the maximum L2 norm margin classifier at the rate of O(1/T), significantly faster than the rate O(1/\log T) of training with clean data. In addition, when the adversarial perturbation during training has bounded Lq norm, the resulting classifier converges in direction to a maximum mixed-norm margin classifier, which has a natural interpretation of robustness, as being the maximum L2 norm margin classifier under worst-case bounded Lq norm perturbation of the data. Our findings provide theoretical backing for adversarial training, showing that it indeed promotes robustness against adversarial perturbation.
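The claimed implicit bias can be checked numerically on a toy separable problem. The sketch below runs gradient descent on the logistic loss with a worst-case L2-bounded perturbation, a simplification of the paper's adversarial-training setup; the two-point dataset is chosen so the maximum L2-margin direction is known in closed form.

```python
import numpy as np

# Two separable points; the max L2-margin direction is (1, -1)/sqrt(2).
X = np.array([[2.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, -1.0])

w, lr, eps = np.zeros(2), 0.1, 0.1
for _ in range(2000):
    n = np.linalg.norm(w)
    # worst-case L2-bounded perturbation of each point against w
    X_adv = X - eps * y[:, None] * (w / n) if n > 0 else X
    margins = y * (X_adv @ w)
    # gradient of sum_i log(1 + exp(-margins_i))
    coef = -y / (1.0 + np.exp(margins))
    w -= lr * (coef[:, None] * X_adv).sum(axis=0)

direction = w / np.linalg.norm(w)
print(direction)
```

By symmetry the iterate stays on the max-margin ray here; the paper's contribution is the O(1/T) rate of this directional convergence in general, versus O(1/log T) for clean training.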
accept-poster
This paper provides theoretical guarantees for adversarial training. While the reviews raise a variety of criticisms (e.g., the results are under a variety of assumptions), overall the paper constitutes valuable progress on an emerging problem.
val
[ "S1gAAUrhoH", "Syeqq1njiB", "SylDeb3ioS", "rygmDM2oiH", "ryePl_e5Kr", "H1eIh4uhYB", "SkeJiVQ0Fr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response.\n\nI have no doubts that deriving the results required - as the authors indicate in their response - significant efforts and the development of novel techniques. My main reservation is that, while this work advances the state-of-the-art in the theory of implicit bias of gradient descent...
[ -1, -1, -1, -1, 3, 8, 6 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "rygmDM2oiH", "SkeJiVQ0Fr", "H1eIh4uhYB", "ryePl_e5Kr", "iclr_2020_HkgTTh4FDH", "iclr_2020_HkgTTh4FDH", "iclr_2020_HkgTTh4FDH" ]
iclr_2020_rkeJRhNYDH
TabFact: A Large-scale Dataset for Table-based Fact Verification
The problem of verifying whether a textual hypothesis holds based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are mainly restricted to dealing with unstructured evidence (e.g., natural language sentences and documents, news, etc), while verification under structured evidence, such as tables, graphs, and databases, remains unexplored. This paper specifically aims to study the fact verification given semi-structured data as evidence. To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED. TabFact is challenging since it involves both soft linguistic reasoning and hard symbolic reasoning. To address these reasoning challenges, we design two different models: Table-BERT and Latent Program Algorithm (LPA). Table-BERT leverages the state-of-the-art pre-trained language model to encode the linearized tables and statements into continuous vectors for verification. LPA parses statements into LISP-like programs and executes them against the tables to obtain the returned binary value for verification. Both methods achieve similar accuracy but still lag far behind human performance. We also perform a comprehensive analysis to demonstrate great future opportunities.
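The LPA side of the paper executes LISP-like programs against tables. Below is a toy executor with a handful of illustrative operators; the paper's operator set is much richer, and the `"T"` token denoting the input table is our own convention.

```python
def execute(prog, table):
    """Evaluate a tiny LISP-like program (nested tuples) against a table
    represented as a list of dict rows. Only a few illustrative ops are
    implemented."""
    if prog == "T":                 # symbol for the input table
        return table
    if not isinstance(prog, tuple): # literal (string, number)
        return prog
    op, *args = prog
    vals = [execute(a, table) for a in args]
    if op == "filter_eq":           # rows of a table whose column == value
        tbl, col, v = vals
        return [row for row in tbl if row[col] == v]
    if op == "count":               # number of rows in a table
        return len(vals[0])
    if op == "eq":                  # equality check -> verification label
        return vals[0] == vals[1]
    raise ValueError(f"unknown op: {op}")

table = [{"team": "A", "wins": 3},
         {"team": "B", "wins": 3},
         {"team": "C", "wins": 1}]

# Statement "two teams have 3 wins" -> ENTAILED iff the program is True.
prog = ("eq", ("count", ("filter_eq", "T", "wins", 3)), 2)
print(execute(prog, table))   # True
```

LPA's actual challenge is searching over such latent programs for a statement; execution itself, as above, is the easy part.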
accept-poster
This paper presents a new dataset for fact verification in text from tables. The task is to identify whether a given claim is supported by the information presented in the table. The authors have also presented two baseline models, one based on BERT and one based on symbolic reasoning, which achieve reasonable performance on the dataset but remain far behind human performance. The paper is well-written and the arguments and experiments presented in the paper are sound. After reviewer comments, the authors have incorporated major changes in the paper. I recommend an Accept for the paper in its current form.
train
[ "H1gN3TMEcH", "Bye06G7Btr", "r1lp3ZLT5S", "H1egcXQwjr", "HJeRMMXvjB", "B1lMDfXvoS", "Bke0vXQPor", "HyekwrQPjr", "H1enN81DYH", "BkxSySZStS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public" ]
[ "This work proposes the problem of fact verification with semi-structured data source such as tables. Specifically, the authors created a new dataset TabFact and evaluated two baseline models with different variations. They applied two criteria and different rewards for workers to collect two subsets of different l...
[ 6, 6, 8, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 5, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rkeJRhNYDH", "iclr_2020_rkeJRhNYDH", "iclr_2020_rkeJRhNYDH", "Bye06G7Btr", "r1lp3ZLT5S", "H1gN3TMEcH", "Bye06G7Btr", "iclr_2020_rkeJRhNYDH", "BkxSySZStS", "iclr_2020_rkeJRhNYDH" ]
iclr_2020_S1exA2NtDB
ES-MAML: Simple Hessian-Free Meta Learning
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES). Existing algorithms for MAML are based on policy gradients, and incur significant difficulties when attempting to estimate second derivatives using backpropagation on stochastic policies. We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
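The core primitive is the ES gradient estimator, which needs only zeroth-order queries of the objective and therefore sidesteps backpropagation through stochastic policies entirely. A minimal antithetic-sampling sketch on a toy quadratic (all hyperparameters are illustrative):

```python
import numpy as np

def es_grad(f, theta, sigma=0.1, n_pairs=100, rng=None):
    """Antithetic Evolution Strategies estimate of the gradient of the
    Gaussian-smoothed objective E[f(theta + sigma * eps)]. No derivatives
    of f are needed, which is what lets ES-MAML avoid estimating second
    derivatives of stochastic policies."""
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.normal(size=theta.shape)
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return grad / (2 * sigma * n_pairs)

f = lambda th: -np.sum(th ** 2)          # maximize => optimum at 0
theta = np.array([1.0, -2.0])
for _ in range(200):
    theta = theta + 0.1 * es_grad(f, theta, rng=np.random.default_rng(0))
print(theta)
```

In MAML terms, `f` would be the post-adaptation return of a policy, so the adaptation operator can even be non-smooth (e.g., involve hill-climbing), which the paper exploits.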
accept-poster
This paper introduces an evolution strategy for solving the MAML problem. Following up on some other evolutionary methods as alternatives for RL algorithms, this ES-MAML algorithm appears to be quite stable and efficient. The idea makes sense, and the experiments appear strong. The scores of the reviews showed a lot of variance: 1,6,8. Therefore, I asked a 4th reviewer for a tie-breaking review, and he/she gave another 8. The rejecting reviewer mostly took objection to the fact that learning rates / step sizes were not tuned consistently, which can easily change the relative ranking of different ES algorithms. Here, I agree with the authors' rebuttal: the fact that even a simple ES algorithm performs well is very promising, and further tuning would only strengthen that result. Nevertheless, it would be useful to assess the algorithm's sensitivity w.r.t. its learning rate / step size. In summary, I agree with the tie breaking review and recommend acceptance as a poster.
train
[ "Ske4fI5znS", "BkxZFz57jS", "SJeVl7qmjB", "SklZbRhWjB", "BkgG1XqXsr", "HJl0czc7jB", "rJxpDp3boB", "Bkx9mRhZsB", "Bygg6qV1sB", "ryekKugcOr", "rygqNADaKr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Note: I was asked to write a last-minute review for this paper since the overall rating of the other reviews are not consistent. Therefore, the review is rather brief and I will comment also on concerns raised by the other reviewers.\n\nThe paper introduces a new MAML algorithm based on evolutionary strategies (ES...
[ 8, -1, -1, -1, -1, -1, -1, -1, 8, 1, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2020_S1exA2NtDB", "Bygg6qV1sB", "BkgG1XqXsr", "rJxpDp3boB", "rygqNADaKr", "BkxZFz57jS", "ryekKugcOr", "SklZbRhWjB", "iclr_2020_S1exA2NtDB", "iclr_2020_S1exA2NtDB", "iclr_2020_S1exA2NtDB" ]
iclr_2020_rkxxA24FDr
Neural Stored-program Memory
Neural networks powered by external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine. A wide range of experiments demonstrate that the resulting machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks.
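The stored-program idea can be caricatured as a key-value memory whose values are whole weight matrices, blended by attention over the keys. A toy sketch, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class StoredProgramMemory:
    """Toy key-value memory whose values are weight matrices. A query
    attends over program keys, and the soft-blended weights are then used
    by the controller -- a cartoon of the stored-program-memory idea."""
    def __init__(self, keys, programs):
        self.keys = keys            # (n_programs, d_key)
        self.programs = programs    # (n_programs, d_out, d_in)

    def retrieve(self, query):
        attn = softmax(self.keys @ query)
        # Blend the stored weight matrices by attention weight.
        return np.tensordot(attn, self.programs, axes=1)

keys = np.eye(2) * 10.0                         # two well-separated keys
programs = np.stack([np.eye(2), -np.eye(2)])    # "identity" vs "negate"
nsm = StoredProgramMemory(keys, programs)

W = nsm.retrieve(np.array([1.0, 0.0]))          # nearly selects program 0
x = np.array([3.0, -1.0])
print(W @ x)
```

Because the query changes over time, the controller effectively switches which "program" (weight matrix) it runs at each step, which is the analogy to a stored-program computer.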
accept-poster
This paper presents the neural stored-program memory, which is a key-value memory that is used to store weights for another neural network, analogous to having programs in computers. They provide an extensive set of experiments in various domains to show the benefit of the proposed method, including synthetic tasks and few-shot learning experiments. This is an interesting paper proposing a new idea. We discussed this submission extensively, and based on our discussion I recommend accepting it. A few final comments from reviewers for the authors: - Please try to make the paper a bit more self-contained so that it is more useful to a general audience. This can be done by either making more space in the main text (e.g., reducing the size of Figure 1, reducing space between sections, table captions and text, etc.) or adding more details in the Appendix. Importantly, your formatting is a bit off. Please use the correct style file; it will give you more space. All reviewers agree that the paper is missing some important details that would improve it. - Please cite the original fast weight paper by Malsburg (1981). - Regarding fast weights using outer products, this was actually first done in the 1993 paper rather than the 2016 and 2017 papers.
train
[ "B1eL9UG5FH", "SJexxPC2YB", "S1ev-7v2ir", "HkxjnCvjiH", "SklQWQ2cjH", "HyxLvL7IiH", "HkgKNSQ8or", "SylTpEXLir", "BkempfXIsB", "ryxDlvQLjS", "Skx4JPuvYH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors discuss an interesting idea for memory augmented neural networks: storing a subset of the weights of the network in a key-value memory. The actual weights used are chosen dynamically. This is similar to having programs/subprograms in a classical computer. To the best of our knowledge, this idea was not...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rkxxA24FDr", "iclr_2020_rkxxA24FDr", "BkempfXIsB", "SklQWQ2cjH", "HyxLvL7IiH", "Skx4JPuvYH", "SylTpEXLir", "B1eL9UG5FH", "SJexxPC2YB", "iclr_2020_rkxxA24FDr", "iclr_2020_rkxxA24FDr" ]
iclr_2020_H1gzR2VKDH
Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation
Video prediction models combined with planning algorithms have shown promise in enabling robots to learn to perform many vision-based tasks through only self-supervision, reaching novel goals in cluttered scenes with unseen objects. However, due to the compounding uncertainty in long horizon video prediction and the poor scalability of sampling-based planning optimizers, one significant limitation of these approaches is their limited ability to plan over long horizons to reach distant goals. To that end, we propose a framework for subgoal generation and planning, hierarchical visual foresight (HVF), which generates subgoal images conditioned on a goal image and uses them for planning. The subgoal images are directly optimized to decompose the task into easy-to-plan segments, and as a result, we observe that the method naturally identifies semantically meaningful states as subgoals. Across three out of four simulated vision-based manipulation tasks, we find that our method achieves more than 20% absolute performance improvement over planning without subgoals and model-free RL approaches. Further, our experiments illustrate that our approach extends to real, cluttered visual scenes.
accept-poster
This paper proposes a method that uses subgoals for planning when using video prediction. The reviewers thought that the paper was clearly written and interesting. The reviewer questions and concerns were mostly addressed during the discussion phase, and the reviewers are in agreement that the paper should be accepted.
test
[ "BklFhY24jH", "SkehQY24jr", "Bygq9J7zKr", "rylPuR2RKB", "rJgTs1NJuB" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "We thank reviewer 1 for their thoughtful comments. We address each comment below:\n\n(1): Thank you for pointing this out - we have switched to only using absolute percentages in the new revision.\n\n(2): We agree that the ability to explore the environment well is a critical assumption of this line of work, and t...
[ -1, -1, 6, 6, -1 ]
[ -1, -1, 4, 4, -1 ]
[ "Bygq9J7zKr", "rylPuR2RKB", "iclr_2020_H1gzR2VKDH", "iclr_2020_H1gzR2VKDH", "iclr_2020_H1gzR2VKDH" ]
iclr_2020_Syx7A3NFvH
Multi-agent Reinforcement Learning for Networked System Control
This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Based on experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control, an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.
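The spatial discount factor admits a one-line implementation: agent i's effective reward mixes in agent j's reward scaled by alpha raised to their graph distance. A sketch with a hypothetical 3-agent line network:

```python
import numpy as np

def spatially_discounted_reward(rewards, dist, alpha):
    """Each agent i optimizes a neighborhood reward in which agent j's
    reward is scaled by alpha ** dist[i][j] -- the spatial discount factor
    used to stabilize decentralized training (alpha=0: fully local
    objective, alpha=1: fully global/cooperative objective)."""
    rewards = np.asarray(rewards, dtype=float)
    return (alpha ** dist) @ rewards

# 3 agents on a line: 0 -- 1 -- 2 (entries are hop distances)
dist = np.array([[0, 1, 2],
                 [1, 0, 1],
                 [2, 1, 0]])
r = [1.0, 0.0, -1.0]
print(spatially_discounted_reward(r, dist, alpha=0.5))
```

Intermediate alpha values interpolate between the two extremes, which is how the paper trades off non-stationarity against cooperation.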
accept-poster
The paper focuses on multi-agent reinforcement learning applications in network systems control settings. A key consideration is the spatial layout of such systems, and the authors propose a problem formulation designed to leverage structural assumptions (e.g., locality). The authors derive a novel approach / communication protocol for these settings, and demonstrate strong performance and novel insights in realistic applications. Reviewers particularly commended the realistic applications explored here. Clarifying questions about the setting, experiments, and results were addressed in the rebuttal, and the resulting paper is judged to provide valuable novel insights.
train
[ "SJeiQyMjir", "BJl_sSWojr", "B1gRxUMWjS", "rklSreJbsB", "r1xN8zWesH", "SkloMOu3FB", "Skg3YJBCFH", "HJlJ342CYB" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for reading our response. We agree with the reviewer that the trade-off on message size vs performance should be considered. We will explicitly compare the message sizes across different protocols in the paper. As stated in our response, including fingerprint would lead to <= 10% increase in ...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 1, 4, 3 ]
[ "BJl_sSWojr", "B1gRxUMWjS", "SkloMOu3FB", "Skg3YJBCFH", "HJlJ342CYB", "iclr_2020_Syx7A3NFvH", "iclr_2020_Syx7A3NFvH", "iclr_2020_Syx7A3NFvH" ]
iclr_2020_HJgBA2VYwH
FSPool: Learning Set Representations with Featurewise Sort Pooling
Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem. We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This can be used to construct a permutation-equivariant auto-encoder that avoids this responsibility problem. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions and representations. Replacing the pooling function in existing set encoders with FSPool improves accuracy and convergence speed on a variety of datasets.
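The pooling operation itself is simple: sort each feature independently across set elements, then apply a learned weighted sum over the sorted positions. A fixed-size sketch (the actual FSPool uses a continuous relaxation of the weights to handle variable set sizes):

```python
import numpy as np

def fspool(X, W):
    """Featurewise sort pooling: sort each feature column across the set
    dimension (descending), then take a learned weighted sum over sorted
    positions. The output is invariant to the order of set elements.
    X: (n, d) set of n feature vectors;  W: (n, d) learned weights."""
    Xs = -np.sort(-X, axis=0)       # sort each feature descending
    return (Xs * W).sum(axis=0)     # (d,) pooled representation

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W = rng.normal(size=(5, 3))

perm = rng.permutation(5)
out1, out2 = fspool(X, W), fspool(X[perm], W)
print(np.allclose(out1, out2))      # permutation invariance
```

In the paper's auto-encoder, the decoder applies the recorded sorting permutation in reverse ("unpooling"), which is what sidesteps the responsibility problem.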
accept-poster
Overall, this paper got strong scores from the reviewers (2 accepts and 1 weak accept). The paper proposes to address the responsibility problem, enabling encoding and decoding sets without worrying about permutations. This is achieved using permutation-equivariant set autoencoders and an 'inverse' operation that undoes the sorting in the decoder. The reviewers all agreed that the paper makes a meaningful contribution and should be accepted. Some concerns regarding clarity of exposition were initially raised but were addressed during the rebuttal period. I recommend that the paper be accepted.
train
[ "SkeQ-NpFjH", "HylilvEkcH", "rJxESlkpqS", "B1g9WmmzjB", "r1xpRNxfsH", "SygraGgMsB", "SJxsPzeMoH", "HygRLhcx5H" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "We have just uploaded a revision of our paper with the following changes:\n\n\n# R4\n- We added an appendix (Appendix A) with a more formal treatment of the responsibility problem.\n- We fixed all the minor things that were pointed out.\n- We added a sentence to the caption of Figure 1 that explains why the 90 deg...
[ -1, 6, 8, -1, -1, -1, -1, 8 ]
[ -1, 3, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2020_HJgBA2VYwH", "iclr_2020_HJgBA2VYwH", "iclr_2020_HJgBA2VYwH", "r1xpRNxfsH", "rJxESlkpqS", "HygRLhcx5H", "HylilvEkcH", "iclr_2020_HJgBA2VYwH" ]
iclr_2020_H1xPR3NtPB
Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.
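Tree extraction of this kind typically reduces to computing a "syntactic distance" between each pair of adjacent words and splitting the sentence greedily at the largest one. A sketch with given distances (in the paper they are derived from the pre-trained LM's intermediate representations; the numbers below are hypothetical):

```python
def build_tree(words, dists):
    """Greedy top-down constituency-tree induction from syntactic
    distances between adjacent words: split the span at the largest
    distance and recurse on both halves."""
    if len(words) == 1:
        return words[0]
    # dists[i] is the distance between words[i] and words[i+1]
    k = max(range(len(dists)), key=lambda i: dists[i])
    left = build_tree(words[:k + 1], dists[:k])
    right = build_tree(words[k + 1:], dists[k + 1:])
    return (left, right)

words = ["the", "cat", "sat", "down"]
dists = [0.2, 0.9, 0.3]     # biggest break between "cat" and "sat"
print(build_tree(words, dists))
```

With these distances the induced tree is (("the", "cat"), ("sat", "down")), i.e., the subject and predicate form separate constituents.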
accept-poster
This paper presents results of looking inside pre-trained language models to capture and extract syntactic constituency. Reviewers initially had neutral to positive comments, and after the author rebuttal, which addressed some of the major questions and concerns, their scores were raised to reflect their satisfaction with the response and the revised paper. Reviewer discussions followed in which they again expressed that they became more positive that the paper makes novel and interesting contributions. I thank the authors for submitting this paper to ICLR and look forward to seeing it at the conference.
val
[ "rkgaaBeRYB", "HkxE0JaLOB", "rylWSLwKjB", "Hye7eUvKsH", "HJlYmBPtir", "SJgDxBDFoH", "SJevPNvKjr", "SklyT7wtir", "SJxacNTpFB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "I am satisfied with the author's response and I am increasing the score to 6 after the rebuttal.\n\n===\n\nThis paper introduces a new and simple method of probing whether syntax information under the form of constituency trees is present in recent pre-trained language models (e.g. BERT, RoBERTa, XLNet and GPT2) w...
[ 6, 8, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_H1xPR3NtPB", "iclr_2020_H1xPR3NtPB", "HkxE0JaLOB", "HkxE0JaLOB", "rkgaaBeRYB", "rkgaaBeRYB", "rkgaaBeRYB", "SJxacNTpFB", "iclr_2020_H1xPR3NtPB" ]
iclr_2020_rkeuAhVKvB
Dynamically Pruned Message Passing Networks for Large-scale Knowledge Graph Reasoning
We propose Dynamically Pruned Message Passing Networks (DPMPN) for large-scale knowledge graph reasoning. In contrast to existing models, embedding-based or path-based, we learn an input-dependent subgraph to explicitly model a sequential reasoning process. Each subgraph is dynamically constructed, expanding itself selectively under a flow-style attention mechanism. In this way, we can not only construct graphical explanations to interpret predictions, but also prune message passing in Graph Neural Networks (GNNs) to scale with the size of graphs. We take inspiration from the consciousness prior proposed by Bengio to design a two-GNN framework that encodes a global input-invariant graph-structured representation and learns a local input-dependent one, coordinated by an attention module. Experiments show the reasoning capability of our model: it provides a clear graphical explanation while predicting results accurately, outperforming most state-of-the-art methods in knowledge base completion tasks.
accept-poster
The paper presents a graph neural network model inspired by the consciousness prior of Bengio (2017) and implements it by means of two GNN models: the inattentive and the attentive GNN, respectively IGNN and AGNN. The reviewers think: (1) the idea of learning an input-dependent subgraph using GNNs seems new; (2) the proposed way to reduce the complexity by restricting the attention horizon sounds interesting.
train
[ "S1gGWFHRtS", "S1lVXlppKB", "rJgWQFPPjH", "Hkl3sqUvjS", "HygZ7hBwoH", "rylJm6TTFr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "# Update after the rebuttal.\nThanks for reflecting some of my comments in the revision. The presentation seems to be improved, and the additional ablation study seems to address my concerns. \n\n# Summary\nThis paper proposes a new neural network architecture for sequential reasoning task. The idea is to have two...
[ 6, 6, -1, -1, -1, 8 ]
[ 1, 3, -1, -1, -1, 1 ]
[ "iclr_2020_rkeuAhVKvB", "iclr_2020_rkeuAhVKvB", "rylJm6TTFr", "S1lVXlppKB", "S1gGWFHRtS", "iclr_2020_rkeuAhVKvB" ]
iclr_2020_ByxtC2VtPB
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly roots from the locally non-linear behavior near input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, as it introduces globally linear behavior in-between training examples. However, in previous work, the mixup-trained models only passively defend against adversarial attacks in inference by directly classifying the inputs, so the induced global linearity is not well exploited. Namely, given the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixes up the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness of the models trained by mixup and its variants.
accept-poster
This paper proposed a mixup inference (MI) method, for mixup-trained models, to better defend adversarial attacks. The idea is novel and is proved to be effective on CIFAR-10 and CIFAR-100. All reviewers and the AC agree to accept the paper.
train
[ "Hye3IBo9oH", "Hylb_3L5uS", "Bkloiyi5iS", "HklUuXwqor", "SkxDmETLoB", "B1xw_i0lor", "SJljWoRgjS", "ByxZpqCgjr", "SkgiTp66YB", "B1g-lU5W9B", "HJgt1UVGcr", "rygsYPl9tB", "ryxtdQAuYS", "SkeFK-RuYS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "official_reviewer", "public" ]
[ "Thank you again for your kind suggestions, which really help a lot to improve the original version of the paper. We deeply appreciate it!", "Notes: \n\n-Paper claims that adversarial examples stem from locally non-linear behavior. However, wasn't this the exact opposite of the conclusion of \"Explaining and Har...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 5, 5, -1, -1, -1, -1 ]
[ "Bkloiyi5iS", "iclr_2020_ByxtC2VtPB", "B1xw_i0lor", "iclr_2020_ByxtC2VtPB", "SkgiTp66YB", "Hylb_3L5uS", "SkgiTp66YB", "B1g-lU5W9B", "iclr_2020_ByxtC2VtPB", "iclr_2020_ByxtC2VtPB", "ryxtdQAuYS", "SkeFK-RuYS", "SkeFK-RuYS", "iclr_2020_ByxtC2VtPB" ]
iclr_2020_HJgK0h4Ywr
Theory and Evaluation Metrics for Learning Disentangled Representations
We make two theoretical contributions to disentanglement learning by (a) defining precise semantics of disentangled representations, and (b) establishing robust metrics for evaluation. First, we characterize the concept “disentangled representations” used in supervised and unsupervised methods along three dimensions–informativeness, separability and interpretability–which can be expressed and quantified explicitly using information-theoretic constructs. This helps explain the behaviors of several well-known disentanglement learning models. We then propose robust metrics for measuring informativeness, separability and interpretability. Through a comprehensive suite of experiments, we show that our metrics correctly characterize the representations learned by different methods and are consistent with qualitative (visual) results. Thus, the metrics allow disentanglement learning methods to be compared on a fair ground. We also empirically uncovered new interesting properties of VAE-based methods and interpreted them with our formulation. These findings are promising and hopefully will encourage the design of more theoretically driven models for learning disentangled representations.
accept-poster
This manuscript proposes and evaluates new metrics for measuring the quality of disentangled representations for both supervised and unsupervised settings. The contributions include conceptual definitions and empirical evaluation. In reviews and discussion, the reviewers and AC note missing or inadequate empirical evaluation with many available methods for learning disentangled representations. On the writing, reviewers mentioned that the conciseness of the manuscript could be improved. The reviewers also mentioned incomplete references and discussion of prior work, which should be improved.
test
[ "SklLJPZFoB", "r1lCmuWFsr", "HklWpvWtjB", "Syg_EIAutH", "Bkg0XRg6KB", "ryevsg7M5H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed comments and suggestions. We would like to address your concerns as follows:\n\n1) >> My main concern with the paper, …\nThe main reason for focusing on FactorVAEs and BetaVAEs is that the two models clearly highlight the difference between metrics, and these justify the need for better...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 1, 5 ]
[ "ryevsg7M5H", "Syg_EIAutH", "Bkg0XRg6KB", "iclr_2020_HJgK0h4Ywr", "iclr_2020_HJgK0h4Ywr", "iclr_2020_HJgK0h4Ywr" ]
iclr_2020_SygcCnNKwr
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
accept-poster
Main content: Blind review #1 summarizes it well: This paper first introduces a method for quantifying to what extent a dataset split exhibits compound (or, alternatively, atom) divergence, where in particular atoms refer to basic structures used by examples in the datasets, and compounds result from compositional rule application to these atoms. The paper then proposes to evaluate learners on datasets with maximal compound divergence (but minimal atom divergence) between the train and test portions, as a way of testing whether a model exhibits compositional generalization, and suggests a greedy algorithm for forming datasets with this property. In particular, the authors introduce a large automatically generated semantic parsing dataset, which allows for the construction of datasets with these train/test split divergence properties. Finally, the authors evaluate three sequence-to-sequence style semantic parsers on the constructed datasets, and they find that they all generalize very poorly on datasets with maximal compound divergence, and that furthermore the compound divergence appears to be anticorrelated with accuracy. -- Discussion: Blind review #1 is the most knowledgeable in this area and wrote "This is an interesting and ambitious paper tackling an important problem. It is worth noting that the claim that it is the compound divergence that controls the difficulty of generalization (rather than something else, like length) is a substantive one, and the authors do provide evidence of this." -- Recommendation and justification: This paper deserves to be accepted because it tackles an important problem that is overlooked in current work that is evaluated on datasets of questionable meaningfulness. It adds insight by focusing on the qualities of datasets that enable testing how well learning algorithms do on compositional generalization, which is crucial to intelligence.
train
[ "rJlaCw8tsS", "HkexIvIKoB", "rJloMP8KjH", "rkg8FIIYjS", "H1lWSY0fKH", "BJgDGC9RFB", "Skgd0wo0tS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for the careful reading of the paper and the valuable comments. We have revised the paper as suggested by the reviewers, and summarize the major changes as follows: \n\n* Reworded the paragraph in Section 2.1 discussing subgraph weighting to make it more precise, and added an Appendix L.4 wi...
[ -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, 1, 3, 4 ]
[ "iclr_2020_SygcCnNKwr", "H1lWSY0fKH", "BJgDGC9RFB", "Skgd0wo0tS", "iclr_2020_SygcCnNKwr", "iclr_2020_SygcCnNKwr", "iclr_2020_SygcCnNKwr" ]
iclr_2020_Byg9A24tvB
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness
Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models. Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning. We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training. This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness. Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes. We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.
accept-poster
This paper proposes an alternative loss function, the Max-Mahalanobis center loss, that is claimed to improve adversarial robustness. In terms of quality, the reviewers commented on the convincing experiments and theoretical results, and were happy to see the sample density analysis. In terms of clarity, the reviewers commented that the paper is well-written. The problem of adversarial robustness is relevant to the ICLR community, and the proposed approach is a novel and significant contribution in this area. The authors have also convincingly answered the reviewers' questions and even provided new theoretical and experimental results in their final upload.
train
[ "rkg0glK2sr", "SJxKz6B2ir", "HJg7WGbhYB", "rJgxo3S2sr", "BygFbsH3oS", "SklLIKB3sH", "HklhmVi9sS", "Sye48n16FB", "BJlhtpUciS", "ByeZZ-gc5B", "r1xg2e3YsS", "Bkla0uKUjB", "SJgdKWYIjB", "HJeCDtiksr", "rkxNi5sysB", "BJgACOjJiS", "SklH2OoksS", "HJxtbjQUcS" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Thanks for the detailed clarification. I am satisfied with the author's rebuttal. \n\nI will rate this paper as \"accept\" in normal cases, but since we should apply a rigorous judgement for 10-page paper, I am sitting between \"weak accept\" and \"accept\".", "Thank you for the suggestion, we have uploaded a ne...
[ -1, -1, 6, -1, -1, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "SJgdKWYIjB", "rJgxo3S2sr", "iclr_2020_Byg9A24tvB", "BygFbsH3oS", "SklLIKB3sH", "rkxNi5sysB", "Sye48n16FB", "iclr_2020_Byg9A24tvB", "r1xg2e3YsS", "iclr_2020_Byg9A24tvB", "SklH2OoksS", "iclr_2020_Byg9A24tvB", "HJxtbjQUcS", "Sye48n16FB", "HJg7WGbhYB", "HJxtbjQUcS", "ByeZZ-gc5B", "icl...
iclr_2020_H1lj0nNFwB
The Implicit Bias of Depth: How Incremental Learning Drives Generalization
A leading hypothesis for the surprising generalization of neural networks is that the dynamics of gradient descent bias the model towards simple solutions, by searching through the solution space in an incremental order of complexity. We formally define the notion of incremental learning dynamics and derive the conditions on depth and initialization for which this phenomenon arises in deep linear models. Our main theoretical contribution is a dynamical depth separation result, proving that while shallow models can exhibit incremental learning dynamics, they require the initialization to be exponentially small for these dynamics to present themselves. However, once the model becomes deeper, the dependence becomes polynomial and incremental learning can arise in more natural settings. We complement our theoretical findings by experimenting with deep matrix sensing, quadratic neural networks and with binary classification using diagonal and convolutional linear networks, showing all of these models exhibit incremental learning.
accept-poster
The paper studies the role of depth in incremental learning, defined as a favorable learning regime in which one searches through the hypothesis space in increasing order of complexity. Specifically, it establishes a dynamical depth separation result, whereby shallow models require exponentially smaller initializations than deep ones in order to operate in the incremental learning regime. Despite some concerns shared amongst reviewers about the significance of these results for explaining realistic deep models (which exhibit nonlinear behavior as well as interactions between neurons) and some remarks about the precision of some claims, the overall consensus, also shared by the AC, is that this paper puts forward an interesting phenomenon that will likely spark future research in this important direction. The AC thus recommends acceptance.
train
[ "Bye65cRntB", "BylL_KMviH", "rkg7aURHjB", "rJgPoUVror", "S1gQUUESoS", "Syg2-S4rsr", "BylBwTjpFS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "*Contributions*\nThis paper deals with the theoretical study of the gradient dynamics in deep neural networks. More precisely, this paper define a notion of incremental learning for a particular learning dynamics and study how the depth of the network influence it. Then, the authors show two cases where it applies...
[ 6, -1, 6, -1, -1, -1, 6 ]
[ 4, -1, 1, -1, -1, -1, 3 ]
[ "iclr_2020_H1lj0nNFwB", "rkg7aURHjB", "iclr_2020_H1lj0nNFwB", "iclr_2020_H1lj0nNFwB", "Bye65cRntB", "BylBwTjpFS", "iclr_2020_H1lj0nNFwB" ]
iclr_2020_Hye1kTVFDS
The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget
In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which estimates, for each example, the value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically whether to access the privileged input or not. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.
accept-poster
Existing implementations of the information bottleneck need access to privileged information, which goes against the idea of compression. The authors propose the variational bandwidth bottleneck, which estimates the value of the privileged information and then stochastically decides whether to access this information or not. They provide a suitable approximation and show that their method improves generalization in RL while reducing access to expensive information. This paper received only two reviews. However, both the reviews were favourable. During discussions with the AC the reviewers acknowledged that most of their concerns were addressed. R2 is still concerned that VBB does not result in improvement in terms of sample efficiency. I request the authors to adequately address this in the final version. Having said that, the paper does make other interesting contributions, hence I recommend that this paper should be accepted.
train
[ "HklauJG3jr", "SJe_pvITtH", "B1xJyr2osB", "SyxBif3ooH", "SklhHucooB", "Hyln8Yk5sr", "HylyE_2FiH", "HkloHmR-sB", "r1gG7rR-ir", "BJxTAM0bsS", "H1xDZmAWsH", "H1lnUmn7cH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ ">We did not observe any change in sample efficiency b/w InfoBot as well as the proposed method.\nWe note that in the current work goal of the proposed method is not to obtain better sample efficiency, but to generalize better with respect to the privileged input. \n\nThanks for responding to this. The observation ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "B1xJyr2osB", "iclr_2020_Hye1kTVFDS", "SyxBif3ooH", "SJe_pvITtH", "SJe_pvITtH", "r1gG7rR-ir", "SJe_pvITtH", "H1xDZmAWsH", "H1lnUmn7cH", "SJe_pvITtH", "BJxTAM0bsS", "iclr_2020_Hye1kTVFDS" ]
iclr_2020_rylJkpEtwS
Learning the Arrow of Time for Problems in Reinforcement Learning
We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment. Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov (Decision) Process. We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal. Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator (here, a deep neural network). Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto (1998).
accept-poster
This paper develops the notion of the arrow of time in MDPs and explores how this might be useful in RL. All the reviewers found the paper thought-provoking and well-written, and they believe the work could have significant impact. The paper does not fit the typical mold: it presents some ideas and uses illustrative experiments to suggest the potential utility of the arrow without nailing down a final algorithm or making a precise performance claim. Overall it is a solid paper, and the reviewers all agreed on acceptance. There are certainly weaknesses in the work, and there is a bit of work to do to get this paper ready. R2 had a nice suggestion of a baseline based on simply learning a transition model (it's described in the updated review); please include it. The description of the experimental methodology is a bit of a mess. Most of the experiments in the paper do not clearly indicate how many runs were conducted, how error bars were computed, or what they represent. It is likely that only a handful of runs were used, which is surprising given the size of some of the domains used. In many cases the figure caption does not even indicate which domain the data came from. All of this is dangerously close to criteria for rejection; please do better. Reachability is also known as empowerment, and it would be good to discuss this connection. In general the paper was a bit light on connections outlining how information theory has been used in RL. I suggest you start here (http://www2.hawaii.edu/~sstill/StillPrecup2011.pdf) to improve this aspect. Finally, the paper has a very large appendix (~14 pages) with many more experiments and theory. I am still not convinced that the balance is quite right. This is probably a journal or long arxiv paper. Maybe this paper should be thought of as a nectar version of a longer standalone arxiv paper. 
Finally, relying on the effectiveness of random exploration is no small thing, and there is a long history in RL of ideas that would work well given that it is easy to gather data that accurately summarizes the dynamics of the world (e.g., proto-value functions). Many ideas are effective given this assumption. The paper should clearly and honestly discuss this assumption, and provide some arguments for why there is hope.
val
[ "BJxZ6kgRKH", "r1xX5ySjsS", "BJekgkSosr", "B1gD8hJsjB", "HJliisGqoH", "SklKfM-qsH", "r1eILSe9jB", "B1ej2GptoB", "ryxBTF2YsS", "r1gvg0wPjr", "HJeZ0pAfjS", "SklzyGobsS", "Bkx6un9-iH", "S1gdSn5-iB", "HkguuWdWsr", "rJlLak_ZiB", "Byg771dWjB", "BygOxqxSYS", "H1xTTMZW5H" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper draws on a wide range of ideas, and proposes novel perspectives on how these ideas might apply in RL. In particular, the concept of reachability, reversability and dissipation are explored, with respect to properties of the underlying MDP that can be exploited.\n\nI found many of the ideas thought-provo...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_rylJkpEtwS", "BJxZ6kgRKH", "B1gD8hJsjB", "HkguuWdWsr", "r1eILSe9jB", "BJxZ6kgRKH", "ryxBTF2YsS", "iclr_2020_rylJkpEtwS", "H1xTTMZW5H", "HJeZ0pAfjS", "SklzyGobsS", "rJlLak_ZiB", "S1gdSn5-iB", "BJxZ6kgRKH", "BygOxqxSYS", "Byg771dWjB", "H1xTTMZW5H", "iclr_2020_rylJkpEtwS", ...
iclr_2020_ryxgJTEYDr
Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives
Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior. Often, this is addressed in the context of hierarchical reinforcement learning, where the aim is to decompose a policy into lower-level primitives or options, and a higher-level meta-policy that triggers the appropriate behaviors for a given situation. However, the meta-policy must still produce appropriate decisions in all states. In this work, we propose a policy design that decomposes into primitives, similarly to hierarchical reinforcement learning, but without a high-level meta-policy. Instead, each primitive can decide for itself whether it wishes to act in the current state. We use an information-theoretic mechanism for enabling this decentralized decision: each primitive chooses how much information it needs about the current state to make a decision, and the primitive that requests the most information about the current state acts in the world. The primitives are regularized to use as little information as possible, which leads to natural competition and specialization. We experimentally demonstrate that this policy architecture improves over both flat and hierarchical policies in terms of generalization.
accept-poster
In contrast to many current hierarchical reinforcement learning approaches, the authors present a decentralized method that learns low level policies that decide for themselves whether to act in the current state, rather than having a centralized higher level meta policy that chooses between low level policies. The reviewers primarily had minor concerns about clarity, reward scaling, and several other issues that were clarified by the authors. The only outstanding concern is that of whether transfer/pretraining is required for the experiments to work or not. While this is an interesting question that I would encourage authors to address as much as possible, it does not seem like a dealbreaker in light of the reviewers' agreement on the core contribution. Thus, I recommend this paper for acceptance.
test
[ "HkgFlSj0Kr", "ryxcJ3XpYr", "BygcEBXqor", "BkgrR-YtoB", "ryg4i-Ktsr", "HJl6plZKoB", "HJeN66SzoH", "rkeqzTHMjr", "BkgHDsBzir", "rke4PHfTFB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\tThis paper takes a different approach for tackling the hierarchical RL problem. Their approach is to decompose the policy into a bunch of primitives. Each primitive acts according to its own interpretation of the state. All the primitives are competing with each other on a given state to take an action. It turns...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_ryxgJTEYDr", "iclr_2020_ryxgJTEYDr", "rkeqzTHMjr", "rkeqzTHMjr", "HJeN66SzoH", "iclr_2020_ryxgJTEYDr", "HkgFlSj0Kr", "ryxcJ3XpYr", "rke4PHfTFB", "iclr_2020_ryxgJTEYDr" ]
iclr_2020_H1lZJpVFvr
Robust Local Features for Improving the Generalization of Adversarial Training
Adversarial training has been demonstrated as one of the most effective methods for training robust models to defend against adversarial examples. However, adversarially trained models often lack adversarially robust generalization on unseen testing data. Recent works show that adversarially trained models are more biased towards global structure features. Instead, in this work, we investigate the relationship between the generalization of adversarial training and robust local features, as robust local features generalize well to unseen shape variations. To learn the robust local features, we develop a Random Block Shuffle (RBS) transformation to break up the global structure features of normal adversarial examples. We then propose a new approach called Robust Local Features for Adversarial Training (RLFAT), which first learns the robust local features by adversarial training on the RBS-transformed adversarial examples, and then transfers the robust local features into the training on normal adversarial examples. To demonstrate the generality of our argument, we implement RLFAT in currently state-of-the-art adversarial training frameworks. Extensive experiments on STL-10, CIFAR-10 and CIFAR-100 show that RLFAT significantly improves both the adversarially robust generalization and the standard generalization of adversarial training. Additionally, we demonstrate that our models capture more local features of the object in the images, aligning better with human perception.
accept-poster
Earlier work suggests that adversarial examples exploit local features and that more robust models rely on global features. The authors propose to exploit this insight by performing data augmentation in adversarial training, cutting and reshuffling image blocks. They demonstrate the idea empirically and witness interesting gains. I think the technique is an interesting contribution, both empirically and as a tool.
train
[ "Byg-szbhtH", "rJlBQbyjir", "B1egLD4EiH", "HklhB5cqjS", "Hkxict-tsr", "H1gjudE4oS", "B1gR0K4ViH", "SyxxxKE4oH", "rkxiT9QatH", "H1lZuixYcS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors proposed a new approach to improve the robustness of CNNs against adversarial examples.\nThe recent studies show that CNNs capture local features, which can be easily affected by the adversarial perturbations.\nThus, in the paper, the authors proposed to train CNNs so that they can captu...
[ 6, -1, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_H1lZJpVFvr", "B1gR0K4ViH", "iclr_2020_H1lZJpVFvr", "Hkxict-tsr", "SyxxxKE4oH", "H1lZuixYcS", "rkxiT9QatH", "Byg-szbhtH", "iclr_2020_H1lZJpVFvr", "iclr_2020_H1lZJpVFvr" ]
iclr_2020_rJgQkT4twH
Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification
Semmelhack et al. (2014) have achieved high classification accuracy in distinguishing swim bouts of zebrafish using a Support Vector Machine (SVM). Convolutional Neural Networks (CNNs) have reached superior performance in various image recognition tasks over SVMs, but these powerful networks remain a black box. Reaching better transparency helps to build trust in their classifications and makes learned features interpretable to experts. Using a recently developed technique called Deep Taylor Decomposition, we generated heatmaps to highlight input regions of high relevance for predictions. We find that our CNN makes predictions by analyzing the steadiness of the tail's trunk, which markedly differs from the manually extracted features used by Semmelhack et al. (2014). We further uncovered that the network paid attention to experimental artifacts. Removing these artifacts ensured the validity of predictions. After correction, our best CNN beats the SVM by 6.12%, achieving a classification accuracy of 96.32%. Our work thus demonstrates the utility of AI explainability for CNNs.
accept-poster
This paper presents a case study of training a video classifier and subsequently analyzing the features to reduce reliance on spurious artifacts. The supervised learning task is zebrafish bout classification, which is relevant for biological experiments. The paper analyzed the image support for the learned neural net features using a previously developed technique called Deep Taylor Decomposition. This analysis showed that the CNNs, when applied to the raw video, were relying on artifacts of the data collection process, which spuriously increased classification accuracies by a "clever Hans" mechanism. By identifying and removing these artifacts, a retrained CNN classifier was able to outperform an older SVM classifier. More importantly, the analysis of the network features enabled the researchers to isolate which parts of the zebrafish motion were relevant for the classification. The reviewers found the paper to be well-written and the experiments to be well-designed. The reviewers suggested some changes to the phrasing in the document, which the authors adopted. In response to the reviewers, the authors also clarified their use of ImageNet for pre-training and examined alternative approaches for building saliency maps. This paper should be published as the reviewers found the paper to be a good case study of how model interpretability can be useful in practice.
train
[ "S1edFKl_jS", "HJlYwYedsS", "ryxPpdeusr", "Skl0dCZk5r", "r1gLu9xO5S", "SkxFdq2Y9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your kind review and your accurate assessment. That is exactly what we wanted to demonstrate.", "Thank you for your detailed review and for pointing out issues that need further clarification. We hope our revisions will address your concerns.\n\nWe revised problematic wording in the manuscript. Ter...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 1, 3, 4 ]
[ "Skl0dCZk5r", "r1gLu9xO5S", "SkxFdq2Y9r", "iclr_2020_rJgQkT4twH", "iclr_2020_rJgQkT4twH", "iclr_2020_rJgQkT4twH" ]
iclr_2020_HkxBJT4YvB
Learning Disentangled Representations for CounterFactual Regression
We consider the challenge of estimating treatment effects from observational data; and point out that, in general, only some factors based on the observed covariates X contribute to selection of the treatment T, and only some to determining the outcomes Y. We model this by considering three underlying sources of {X, T, Y} and show that explicitly modeling these sources offers great insight to guide designing models that better handle selection bias. This paper is an attempt to conceptualize this line of thought and provide a path to explore it further. In this work, we propose an algorithm to (1) identify disentangled representations of the above-mentioned underlying factors from any given observational dataset D and (2) leverage this knowledge to reduce, as well as account for, the negative impact of selection bias on estimating the treatment effects from D. Our empirical results show that the proposed method achieves state-of-the-art performance in both individual and population based evaluation measures.
accept-poster
The paper proposes a new way of estimating treatment effects from observational data. The text is clear and experiments support the proposed model.
train
[ "S1x-5u6ioS", "HJlXmF45jB", "rJxIX7EWjS", "HkgJTUZxiS", "H1lE8vZloH", "SkeKSvde9r", "SJx2t271cB", "H1g5l-raqB", "B1e4pr4NqS" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you for your thought-provoking suggestion regarding incorporation of the Counterfactual Risk Minimization (CRM) principle [1] into this framework. \n\nIn contextual bandits, the goal is often learning an optimal policy that minimizes the regret. There are 2 strategies to do this: \n(i) estimating the outcome...
[ -1, -1, 8, -1, -1, 8, 3, -1, -1 ]
[ -1, -1, 3, -1, -1, 5, 1, -1, -1 ]
[ "SkeKSvde9r", "rJxIX7EWjS", "iclr_2020_HkxBJT4YvB", "H1g5l-raqB", "SJx2t271cB", "iclr_2020_HkxBJT4YvB", "iclr_2020_HkxBJT4YvB", "B1e4pr4NqS", "iclr_2020_HkxBJT4YvB" ]
iclr_2020_SkeIyaVtwB
Exploration in Reinforcement Learning with Deep Covering Options
While many option discovery methods have been proposed to accelerate exploration in reinforcement learning, they are often heuristic. Recently, covering options was proposed to discover a set of options that provably reduce the upper bound of the environment's cover time, a measure of the difficulty of exploration. Covering options are computed using the eigenvectors of the graph Laplacian, but they are constrained to tabular tasks and are not applicable to tasks with large or continuous state-spaces. We introduce deep covering options, an online method that extends covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration. We evaluate our method in several challenging sparse-reward domains and we show that our approach identifies less explored regions of the state-space and successfully generates options to visit these regions, substantially improving both the exploration and the total accumulated reward.
accept-poster
This paper considers option discovery in hierarchical reinforcement learning. It extends the idea of covering options, which uses the Laplacian of the state space to discover a set of options that reduce the upper bound of the environment's cover time, to continuous and large state spaces. An online method is also included, and evaluated on several domains. The reviewers had major questions on a number of aspects of the paper, including the novelty of the work, which seemed limited, the quantitative results in the ATARI environments, and problems with comparisons to other exploration methods. These were all appropriately dealt with in the rebuttals, leaving this paper worthy of acceptance.
train
[ "Byx79-ee9r", "rygIzO_noH", "H1ew7UD2sH", "HJlGfUk0KH", "r1x9SnV2iB", "ryxH9Ld9sS", "BkxTsrSOoH", "r1lDKBHusH", "SJxcXSBuoS", "BylqKGBdir", "BygSuR-qFB" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an algorithm to extend the recently proposed method of “covering options” from a tabular setting to continuous state spaces (or large discrete state spaces). The proposed algorithm approximately computes the second eigenfunction of the normalized laplacian of the state space, uses it to identify...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_SkeIyaVtwB", "SJxcXSBuoS", "r1x9SnV2iB", "iclr_2020_SkeIyaVtwB", "ryxH9Ld9sS", "BkxTsrSOoH", "r1lDKBHusH", "HJlGfUk0KH", "Byx79-ee9r", "BygSuR-qFB", "iclr_2020_SkeIyaVtwB" ]
iclr_2020_HkldyTNYwH
AE-OT: A NEW GENERATIVE MODEL BASED ON EXTENDED SEMI-DISCRETE OPTIMAL TRANSPORT
Generative adversarial networks (GANs) have attracted huge attention due to their capability to generate visually realistic images. However, most of the existing models suffer from the mode collapse or mode mixture problems. In this work, we give a theoretic explanation of both problems by Figalli’s regularity theory of optimal transportation maps. Basically, the generator computes the transportation maps between the white noise distributions and the data distributions, which are in general discontinuous. However, DNNs can only represent continuous maps. This intrinsic conflict induces mode collapse and mode mixture. In order to tackle both problems, we explicitly separate the manifold embedding and the optimal transportation; the first part is carried out using an autoencoder to map the images onto the latent space; the second part is accomplished using a GPU-based convex optimization to find the discontinuous transportation maps. Composing the extended OT map and the decoder, we can finally generate new images from the white noise. This AE-OT model avoids representing discontinuous maps by DNNs, therefore effectively preventing mode collapse and mode mixture.
accept-poster
The authors present a different perspective on the mode collapse and mode mixture problems in GAN based on some recent theoretical results. This is an interesting work. However, two reviewers have raised some concerns about the results and hence given a low rating of the paper. After reading the reviews and the rebuttal carefully I feel that the authors have addressed all the concerns of the reviewers. In particular, at least for one reviewer I felt that there was a slight misunderstanding on the reviewer's part which was clarified in the rebuttal. The concerns of R1 about a simpler baseline have also been addressed by the authors with the help of additional experiments. I am convinced that the original concerns of the reviewers are addressed. Hence, I recommend that this paper be accepted. Having said that, I strongly recommend that in the final version, the authors should be a bit more clear in motivating the problem. In particular, please make it clear that you are only dealing with the generator and do not have an adversarial component in the training. Also, as suggested by R3 add more intuitive descriptions to make the paper accessible to a wider audience.
train
[ "S1ew3XnmsS", "BkxxhH_wjS", "SkgkludPiH", "SJx3_vOPiH", "Syeh0W3Xor", "SylGfl3XiH", "ryetcUV7iH", "SylWjepycS", "r1e5aB-z9r" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\n----------------------------\nQ1: My concern is whether the proposed method is overkill because the singular point detection can\nbe very tricky and relies on heavy linear programming.\n\nAnswer: The detection of singularities is direct and simple, for the convex polyhedron of the Brenier\npotential, just comp...
[ -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "SylWjepycS", "ryetcUV7iH", "SJx3_vOPiH", "BkxxhH_wjS", "r1e5aB-z9r", "iclr_2020_HkldyTNYwH", "iclr_2020_HkldyTNYwH", "iclr_2020_HkldyTNYwH", "iclr_2020_HkldyTNYwH" ]
iclr_2020_rkecJ6VFvr
Logic and the 2-Simplicial Transformer
We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
accept-poster
This paper extends the Transformer, implementing higher-dimensional attention that generalizes the dot-product attention. The AC agrees with Reviewer3's comment that generalizing attention from 2nd- to 3rd-order relations is an important upgrade, that the mathematical context is insightful, and that this could lead to further developments. The readability of the paper still remains an issue, and it needs to be addressed in the final version of the paper.
train
[ "HJlXRk2oiH", "r1gWsS3qiH", "SylaGS2cir", "rJeOck6FjB", "Skx3fovoH", "r1lDo-iviB", "r1xqnC9DoB", "S1lQd95vsS", "rJg7BzyAYS", "r1erdj7CFB", "Skes8yd0FS" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Having read the other reviews and the responses to them, I still feel that this is important work. From my perspective, the details of current-generation AI tasks and questions of which might benefit from the proposed generalization of attention are not nearly as important as a more fundamental aspect of this work...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "r1lDo-iviB", "iclr_2020_rkecJ6VFvr", "rJeOck6FjB", "r1xqnC9DoB", "iclr_2020_rkecJ6VFvr", "Skes8yd0FS", "r1erdj7CFB", "rJg7BzyAYS", "iclr_2020_rkecJ6VFvr", "iclr_2020_rkecJ6VFvr", "iclr_2020_rkecJ6VFvr" ]
iclr_2020_SJg5J6NtDr
Watch, Try, Learn: Meta-Learning from Demonstrations and Rewards
Imitation learning allows agents to learn complex behaviors from demonstrations. However, learning a complex vision-based task may require an impractical number of demonstrations. Meta-imitation learning is a promising approach towards enabling agents to learn a new task from one or a few demonstrations by leveraging experience from learning similar tasks. In the presence of task ambiguity or unobserved dynamics, demonstrations alone may not provide enough information; an agent must also try the task to successfully infer a policy. In this work, we propose a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback. In comparison to meta-imitation, this approach enables the agent to effectively and efficiently improve itself autonomously beyond the demonstration data. In comparison to meta-reinforcement learning, we can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration. Our experiments show that our method significantly outperforms prior approaches on a set of challenging, vision-based control tasks.
accept-poster
The paper proposes a meta-learning approach that learns from demonstrations and subsequent RL tasks. The reviewers found this work interesting and promising. There were some concerns regarding the clarity of presentation, which seem to have been addressed in the revised version. Therefore, I recommend acceptance for this paper.
train
[ "HkxkRDmBsr", "B1xEKDrsir", "Syg18_mrir", "Bkxx84qesH", "SJxKT8BYtS", "Syxmdc1CKB", "rJe17q5Z9H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the thoughtful comments, and have updated the paper to improve the presentation:\n\n>>> The context should be better introduced [...]\n\nWe have updated Section 3.4 to explain in more detail how the policies condition on demonstration/trial data by extracting context vectors.\n\n>>> It is...
[ -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "rJe17q5Z9H", "Syxmdc1CKB", "SJxKT8BYtS", "Syxmdc1CKB", "iclr_2020_SJg5J6NtDr", "iclr_2020_SJg5J6NtDr", "iclr_2020_SJg5J6NtDr" ]
iclr_2020_rJl31TNYPr
Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as one single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. We perform evaluation using the Berkeley Deep Drive dataset and find that on average when 3 frames are attacked, our attack can have a nearly 100% success rate while attacks that blindly target object detection only have up to 25%.
accept-poster
The authors agree after reading the rebuttal that attacks on MOT are novel. While the datasets used are small, and the attacks are generated in digital simulation rather than the physical world, this paper still demonstrates an interesting attack on a realistic system.
train
[ "Hke0M2aaKr", "SylVQsFaKB", "Bkg-jxS8oS", "BJehZyBLsB", "rylxh-BIsB", "Syepl_jkqB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper addresses adversarial attacks against visual perception pipelines in autonomous driving. Both subprocesses in the visual perception pipeline, object detection and multiple object tracking (MOT), are considered. The paper proposes a novel approach in adversarial attacks, the tracking hijacking, which can ...
[ 6, 6, -1, -1, -1, 8 ]
[ 3, 5, -1, -1, -1, 4 ]
[ "iclr_2020_rJl31TNYPr", "iclr_2020_rJl31TNYPr", "Hke0M2aaKr", "Syepl_jkqB", "SylVQsFaKB", "iclr_2020_rJl31TNYPr" ]
iclr_2020_HJgExaVtwr
DivideMix: Learning with Noisy Labels as Semi-supervised Learning
Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix .
accept-poster
This paper proposes an algorithm for learning with noisy labels by adopting an idea from a recent semi-supervised learning algorithm. As the two problems of learning with noisy labels and semi-supervised learning are closely related, such results are not surprising, as pointed out by reviewers. However, the reported thorough experimental results are strong, and I think this paper can be useful for practitioners and follow-up works. Hence, I recommend acceptance.
train
[ "ByxNpGZ2jB", "ryxKwK0jiH", "HyxSBF0ssB", "rJe2zKAosB", "S1gKsV9EYS", "HkeBKNcaKr", "SyxNO3Ne5S", "B1l5KLT-uS", "SJgK7RakdS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The authors have responded to my questions, and I have no other comments to make.", "We thank Reviewer 2 for the insightful comments. We believe that our work can inspire new research ideas to explore the intersection between the areas of learning with noisy labels and semi-supervised learning.", "We appreciat...
[ -1, -1, -1, -1, 6, 6, 6, -1, -1 ]
[ -1, -1, -1, -1, 5, 4, 4, -1, -1 ]
[ "rJe2zKAosB", "S1gKsV9EYS", "HkeBKNcaKr", "SyxNO3Ne5S", "iclr_2020_HJgExaVtwr", "iclr_2020_HJgExaVtwr", "iclr_2020_HJgExaVtwr", "SJgK7RakdS", "iclr_2020_HJgExaVtwr" ]