paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2020_BkeaEyBYDB | Improving Federated Learning Personalization via Model Agnostic Meta Learning | Federated Learning (FL) refers to learning a high quality global model based on decentralized data storage, without ever copying the raw data. A natural scenario arises with data created on mobile phones by the activity of their users. Given the typical data heterogeneity in such situations, it is natural to ask how can the global model be personalized for every such device, individually. In this work, we point out that the setting of Model Agnostic Meta Learning (MAML), where one optimizes for a fast, gradient-based, few-shot adaptation to a heterogeneous distribution of tasks, has a number of similarities with the objective of personalization for FL. We present FL as a natural source of practical applications for MAML algorithms, and make the following observations. 1) The popular FL algorithm, Federated Averaging, can be interpreted as a meta learning algorithm. 2) Careful fine-tuning can yield a global model with higher accuracy, which is at the same time easier to personalize. However, solely optimizing for the global model accuracy yields a weaker personalization result. 3) A model trained using a standard datacenter optimization method is much harder to personalize, compared to one trained using Federated Averaging, supporting the first claim. These results raise new questions for FL, MAML, and broader ML research. | reject | The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations. | train | [
"r1xmrgpU9H",
"Syxuj0h8iB",
"rklkJ0hIjB",
"ByxphT2UjB",
"B1eIla2IoH",
"Byxju1vRFr",
"HkxMzGThcS",
"rkeipu_fOS",
"H1ehS-3-_B",
"r1eGfIbZdB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"Update: I thank authors for the rebuttal. I agree that direction of exploring personalization in FL is interesting. With a stronger methodological contribution, this could become a good paper.\n\n----------------------------------------------------------------------------------------------------------------\nThe m... | [
1,
-1,
-1,
-1,
-1,
1,
3,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1
] | [
"iclr_2020_BkeaEyBYDB",
"iclr_2020_BkeaEyBYDB",
"HkxMzGThcS",
"r1xmrgpU9H",
"Byxju1vRFr",
"iclr_2020_BkeaEyBYDB",
"iclr_2020_BkeaEyBYDB",
"H1ehS-3-_B",
"r1eGfIbZdB",
"iclr_2020_BkeaEyBYDB"
] |
iclr_2020_BJg641BKPH | Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems | Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. Most studies especially focused on the regression problems with the squared loss function, except for a few, and the importance of the positivity of the neural tangent kernel has been pointed out. However, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and further investigation of this problem structure is possible. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel and provide a refined convergence analysis of the gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width in comparison to related studies. Consequently, our theory significantly enlarges a class of over-parameterized networks with provable generalization ability, with respect to the network width, while most studies require much higher over-parameterization. | reject | This article studies gradient optimization for classification problems with shallow networks with smooth activations, obtaining convergence and generalisation results under a separability assumption on the data. The results are obtained under much less stringent requirements on the width of the network than other related recent works. However, with results on convergence and generalisation having been established in other previous works, the reviewers found the contribution incremental. The responses clarified some of the distinctive challenges with the logistic loss compared with the squared loss that has been considered in other works, and provided examples for the separability assumption. Overall, the article makes important contributions in the case of classification problems. However, with many recent works addressing challenging problems in a similar direction, the bar has been set quite high. As pointed out by some of the reviewers, the contribution could gain substantially in relevance and make a more convincing case by addressing extensions to non smooth activations and deep models. | train | [
"B1eXd8M2jH",
"r1eeznMtiB",
"rJgcQjGYir",
"HygV-iztiH",
"Syez85fKoB",
"rylWgpohKS",
"S1lRoqAaKB",
"SygjVAKaYS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We have also added the following comments to the revised version:\n5. Different proof techniques for the squared loss and the logistic loss functions.\n6. Difference with another separation assumption in [1-5] (in review #2) made for regression problems.",
"Dear reviewers,\nWe have updated the paper. The main ch... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
1,
4,
4
] | [
"r1eeznMtiB",
"iclr_2020_BJg641BKPH",
"rylWgpohKS",
"SygjVAKaYS",
"S1lRoqAaKB",
"iclr_2020_BJg641BKPH",
"iclr_2020_BJg641BKPH",
"iclr_2020_BJg641BKPH"
] |
iclr_2020_HylA41Btwr | CP-GAN: Towards a Better Global Landscape of GANs | GANs have been very popular in data generation and unsupervised learning, but our understanding of GAN training is still very limited. One major reason is that GANs are often formulated as non-convex-concave min-max optimization. As a result, most recent studies focused on the analysis in the local region around the equilibrium. In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics. We find that the original GAN has exponentially many bad strict local minima which are perceived as mode-collapse, and the training dynamics (with linear discriminators) cannot escape mode collapse. To address these issues, we propose a simple modification to the original GAN, by coupling the generated samples and the true samples. We prove that the new formulation has no bad basins, and its training dynamics (with linear discriminators) has a Lyapunov function that leads to global convergence. Our experiments on standard datasets show that this simple loss outperforms the original GAN and WGAN-GP. | reject | The paper is proposed a rejection based on majority reviews. | train | [
"ByezNv2hjB",
"Bke2tkj2jB",
"BJeGUR9njH",
"rkl8N05hiH",
"SJgnsRq3iH",
"rygZ-NwDKB",
"SygX86JaYS",
"HJleOiRaFr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers,\n\nThank you for your effort and time. \n\nWe mainly added the following parts in the revised paper to address the comments: \n 1) Add Appendix A (with 4 figures), to explain why learning n-points can be viewed as the \"macro-learning\" part of learning n-modes. \n 2) Add Appendix B on some rel... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"iclr_2020_HylA41Btwr",
"SygX86JaYS",
"HJleOiRaFr",
"HJleOiRaFr",
"rygZ-NwDKB",
"iclr_2020_HylA41Btwr",
"iclr_2020_HylA41Btwr",
"iclr_2020_HylA41Btwr"
] |
iclr_2020_ryl1r1BYDS | Multiagent Reinforcement Learning in Games with an Iterated Dominance Solution | Multiagent reinforcement learning (MARL) attempts to optimize policies of intelligent agents interacting in the same environment. However, it may fail to converge to a Nash equilibrium in some games. We study independent MARL under the more demanding solution concept of iterated elimination of strictly dominated strategies. In dominance solvable games, if players iteratively eliminate strictly dominated strategies until no further strategies can be eliminated, we obtain a single strategy profile. We show that convergence to the iterated dominance solution is guaranteed for several reinforcement learning algorithms (for multiple independent learners). We illustrate an application of our results by studying mechanism design for principal-agent problems, where a principal wishes to incentivize agents to exert costly effort in a joint project when it can only observe whether the project succeeded, but not whether agents actually exerted effort. We show that MARL converges to the desired outcome if the rewards are designed so that exerting effort is the iterated dominance solution, but fails if it is merely a Nash equilibrium. | reject | The paper proves that reinforcement learning (using two different algorithms) converges to iterated dominance solutions for a class of multi-player games (dominance solvable games).
There was a lively discussion around the paper. However, two of the reviewers remain unconvinced of the novelty of the approach, pointing to [1] and [2], with [1] only pertaining to supermodular games. The exact contribution over such existing results is currently not addressed in the manuscript. There were also concerns about the scaling and applicability of the results, as dominance solvable games are limited.
[1] http://www.parisschoolofeconomics.eu/docs/guesnerie-roger/milgromroberts90.pdf
[2] Friedman, James W., and Claudio Mezzetti. "Learning in games by random sampling." Journal of Economic Theory 98.1 (2001): 55-84. | train | [
"BkloUdRjjS",
"S1gushOisH",
"H1x1FlvssB",
"SygfZk8oiH",
"r1emt9Q5jr",
"SyxVGcXqiH",
"ByeQsYX9iS",
"rylLgtQ5oS",
"SyxMP-S9tB",
"H1x2QH4-cB",
"HJeqKrkBcS",
"H1luZksD9r"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Milgrom and Roberts's Theorem 8 statement says that:\n\"Let {x(t)} be an adaptive dynamic process and let x = inf(S) andX = sup (S). Then for every *supermodular* game T...\"\ni.e. the theorem relates only for supermodular games. \n\nGiven this theorem and a *supermodular game*, one could indeed run through the i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"S1gushOisH",
"H1x1FlvssB",
"SygfZk8oiH",
"rylLgtQ5oS",
"H1luZksD9r",
"H1x2QH4-cB",
"HJeqKrkBcS",
"SyxMP-S9tB",
"iclr_2020_ryl1r1BYDS",
"iclr_2020_ryl1r1BYDS",
"iclr_2020_ryl1r1BYDS",
"iclr_2020_ryl1r1BYDS"
] |
iclr_2020_SygkSkSFDB | On the expected running time of nonconvex optimization with early stopping | This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold. We derive conditions that guarantee this stopping rule is well-defined and analyze the expected number of iterations and gradient evaluations needed to meet this criteria. The guarantee accounts for the distance between the training and validation sets, measured with the Wasserstein distance. We develop the approach for stochastic gradient descent (SGD), allowing for biased update directions subject to a Lyapunov condition. We apply the approach to obtain new bounds on the expected running time of several algorithms, including Decentralized SGD (DSGD), a variant of decentralized SGD, known as \textit{Stacked SGD}, and the stochastic variance reduced gradient (SVRG) algorithm. Finally, we consider the generalization properties of the iterate returned by early stopping. | reject | The authors made no response to reviewers. Based on current reviews, the paper is suggested a rejection as majority. | train | [
"rJgTmIO6Kr",
"r1gFw9KcFH",
"B1ghnfO49B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors consider stochastic optimization in the setting where a validation function is used to guide the termination of the algorithm. In more details, the algorithm terminates if the gradient of the validation function at an iterate is smaller than a threshold. In this framework, the authors co... | [
3,
6,
3
] | [
4,
5,
3
] | [
"iclr_2020_SygkSkSFDB",
"iclr_2020_SygkSkSFDB",
"iclr_2020_SygkSkSFDB"
] |
iclr_2020_BJxeHyrKPB | RATE-DISTORTION OPTIMIZATION GUIDED AUTOENCODER FOR GENERATIVE APPROACH | In the generative model approach of machine learning, it is essential to acquire an accurate probabilistic model and compress the dimension of data for easy treatment. However, in the conventional deep-autoencoder based generative model such as VAE, the probability of the real space cannot be obtained correctly from that of in the latent space, because the scaling between both spaces is not controlled. This has also been an obstacle to quantifying the impact of the variation of latent variables on data. In this paper, we propose a method to learn parametric probability distribution and autoencoder simultaneously based on Rate-Distortion Optimization to support scaling control. It is proved theoretically and experimentally that (i) the probability distribution of the latent space obtained by this model is proportional to the probability distribution of the real space because Jacobian between two spaces is constant: (ii) our model behaves as non-linear PCA, which enables to evaluate the influence of latent variables on data. Furthermore, to verify the usefulness on the practical application, we evaluate its performance in unsupervised anomaly detection and outperform current state-of-the-art methods. | reject | Agreement by the reviewers: although the idea is good, the paper is very hard to read and not accurately enough formulated to merit publication.
This can be repaired, and the authors should try again after a thorough revision and rewrite. | train | [
"BkeYHLbnsr",
"SylLp4W3jB",
"rJeBEtbnsB",
"r1lHzzZ3iH",
"rkg7tMbnsS",
"BklKgdWnsS",
"r1xu3tgSYH",
"HJewScvRYH",
"SylJiIjpqr"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your time and valuable comments. \nFrom your comments, we found that our work would be closely related to a practical method of isometric embedding of Riemannian manifold. \nBecause our background is not only deep autoencoders but also image compression, we have overlooked that there is a gap between... | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"SylJiIjpqr",
"SylJiIjpqr",
"HJewScvRYH",
"r1xu3tgSYH",
"r1xu3tgSYH",
"HJewScvRYH",
"iclr_2020_BJxeHyrKPB",
"iclr_2020_BJxeHyrKPB",
"iclr_2020_BJxeHyrKPB"
] |
iclr_2020_SJe-HkBKDS | Amharic Text Normalization with Sequence-to-Sequence Models | All areas of language and speech technology, directly or indirectly, require handling of real text. In addition to ordinary words and names, the real text contains non-standard words (NSWs), including numbers, abbreviations, dates, currency, amounts, and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary letter-to-sound rules. It is desirable to normalize text by replacing such non-standard words with a consistently formatted and contextually appropriate variant in several NLP applications. To address this challenge, in this paper, we model the problem as character-level sequence-to-sequence learning where we map a sequence of input characters to a sequence of output words. It consists of two neural networks, the encoder network, and the decoder network. The encoder maps the input characters to a fixed dimensional vector and the decoder generates the output words. We have achieved an accuracy of 94.8 % which is promising given the resource we use. | reject | The paper proposes a text normalisation model for Amharic text. The model uses word classification, followed by a character-based GRU attentive encoder-decoder model. The paper is very short and does not present reproducible experiments. It also does not conform to the style guidelines of the conference. There has been no discussion of this paper beyond the initial reviews, all of which reject it with a score of 1. It is not ready to publish and the authors should consider a more NLP focussed venue for future research of this kind.
| train | [
"Skg1Jt4msB",
"BkxPOKMe9r",
"HkgF8KJT9r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes a method for word normalization of Amharic text using a word classification system followed by a character-based GRU attentive encoder-decoder model.\n\nThe paper is very short and lacks many important details, such as where the data is collected from, how it is processed and split into trainin... | [
1,
1,
1
] | [
3,
4,
3
] | [
"iclr_2020_SJe-HkBKDS",
"iclr_2020_SJe-HkBKDS",
"iclr_2020_SJe-HkBKDS"
] |
iclr_2020_Skgfr1rYDH | SoftAdam: Unifying SGD and Adam for better stochastic gradient descent | Stochastic gradient descent (SGD) and Adam are commonly used to optimize deep neural networks, but choosing one usually means making tradeoffs between speed, accuracy and stability. Here we present an intuition for why the tradeoffs exist as well as a method for unifying the two in a continuous way. This makes it possible to control the way models are trained in much greater detail. We show that for default parameters, the new algorithm equals or outperforms SGD and Adam across a range of models for image classification tasks and outperforms SGD for language modeling tasks. | reject | The reviewers all agreed that the proposed modification was minor. I encourage the authors to pursue this direction, as they mentioned in their rebuttal, before resubmitting to another conference.
"rkl8RuKjir",
"HJlSAwFoir",
"ByxbUvYoiS",
"rkx2eAItFH",
"ByeabQxpFS",
"S1ghLD70FH",
"Bylf0S_7KB",
"Skek3-qxYB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you very much for the detailed feedback. Overall, we want to note that the algorithm significantly outperforms Adam and even outperforms SGD in computer vision tasks. The changes here have a significant effect on the generalization performance and constitute novel research.\n\nBelow are responses to your poi... | [
-1,
-1,
-1,
3,
1,
3,
-1,
-1
] | [
-1,
-1,
-1,
5,
4,
4,
-1,
-1
] | [
"S1ghLD70FH",
"ByeabQxpFS",
"rkx2eAItFH",
"iclr_2020_Skgfr1rYDH",
"iclr_2020_Skgfr1rYDH",
"iclr_2020_Skgfr1rYDH",
"Skek3-qxYB",
"iclr_2020_Skgfr1rYDH"
] |
iclr_2020_SkeBBJrFPH | Characterize and Transfer Attention in Graph Neural Networks | Does attention matter and, if so, when and how? Our study on both inductive and transductive learning suggests that datasets have a strong influence on the effects of attention in graph neural networks. Independent of learning setting, task and attention variant, attention mostly degenerate to simple averaging for all three citation networks, whereas they behave strikingly different in the protein-protein interaction networks and molecular graphs: nodes attend to different neighbors per head and get more focused in deeper layers. Consequently, attention distributions become telltale features of the datasets themselves. We further explore the possibility of transferring attention for graph sparsification and show that, when applicable, attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Finally, we point out several possible directions for further study and transfer of attention. | reject | This paper suggests that datasets have a strong influence on the effects of attention in graph neural networks and explores the possibility of transferring attention for graph sparsification, suggesting that attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs.
Unfortunately, I cannot recommend acceptance for this paper in its present form. Some concerns raised by the reviewers are: the analysis lacks theoretical insights and does not seem to be very useful in practice; the proposed method for graph sparsification lacks novelty; and the experiments are not thorough enough to validate its usefulness. I encourage the authors to address these concerns in an eventual resubmission.
| train | [
"HkllKbpYiH",
"rygxXNFFsB",
"SJxqeVFuoH",
"HJgwHMK_iS",
"Byei-cu_sr",
"HkeK27OOiS",
"SJli7sDOjB",
"rklO62vCYH",
"HJlML8uhtr",
"HkeCr9okcS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the suggestion and I've made an update.",
"Thanks for the clarification. This is now clear. For a reader that is more used to the $\\sum_{i\\in\\mathcal{V}}\\sum_{j\\in\\mathcal{N}(i)}$ notation being a sum over all edges this is a bit unintuitive. It would be good to add some explanation of thi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"rygxXNFFsB",
"HJgwHMK_iS",
"HkeCr9okcS",
"HkeK27OOiS",
"rklO62vCYH",
"SJli7sDOjB",
"HJlML8uhtr",
"iclr_2020_SkeBBJrFPH",
"iclr_2020_SkeBBJrFPH",
"iclr_2020_SkeBBJrFPH"
] |
iclr_2020_HJlrS1rYwH | Policy Tree Network | Decision-time planning policies with implicit dynamics models have been shown to work in discrete action spaces with Q learning. However, decision-time planning with implicit dynamics models in continuous action space has proven to be a difficult problem. Recent work in Reinforcement Learning has allowed for implicit model based approaches to be extended to Policy Gradient methods. In this work we propose Policy Tree Network (PTN). Policy Tree Network lies at the intersection of Model-Based Reinforcement Learning and Model-Free Reinforcement Learning. Policy Tree Network is a novel approach which, for the first time, demonstrates how to leverage an implicit model to perform decision-time planning with Policy Gradient methods in continuous action spaces. This work is empirically justified on 8 standard MuJoCo environments so that it can easily be compared with similar work done in this area. Additionally, we offer a lower bound on the worst case change in the mean of the policy when tree planning is used and theoretically justify our design choices. | reject | The consensus amongst the reviewers is that the paper discusses an interesting idea and shows significant promise, but that the presentation of the initial submission was not of a publishable standard. While some of the issues were clarified during discussion, the reviewers agree that the paper lacks polish and is therefore not ready. While I think Reviewer #3 is overly strict in sticking to a 1, as it is the nature of ICLR to allow papers to be improved through the discussion, in the absence of any of the reviewers being ready to champion the paper, I cannot recommend acceptance. I however have no doubt that with further work on the presentation of what sounds like a potentially fascinating contribution to the field, the paper will stand a chance at acceptance at a future conference. | train | [
"rkehqpXRtS",
"SJgqbzEnor",
"BJez1bJniS",
"HkeAHNrisH",
"H1g3FeccoB",
"H1l2rC3KoH",
"BkeuMY2KjH",
"HyxdZrntjH",
"rke9o42Ysr",
"Hyg-1926Yr",
"S1e8BP_aYB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a modification to Policy Prediction Networks (PPN) in which the learned transition-, reward- and value function models are used at test-time in a planning procedure. \nA second contribution is the \"pi-Q-backup\" which uses the geometric mean of both the policy and the value function as maximisa... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_HJlrS1rYwH",
"HyxdZrntjH",
"H1l2rC3KoH",
"H1g3FeccoB",
"BkeuMY2KjH",
"rke9o42Ysr",
"S1e8BP_aYB",
"Hyg-1926Yr",
"rkehqpXRtS",
"iclr_2020_HJlrS1rYwH",
"iclr_2020_HJlrS1rYwH"
] |
iclr_2020_BkgUB1SYPS | Interpretable Network Structure for Modeling Contextual Dependency | Neural language models have achieved great success in many NLP tasks, to a large extent, due to the ability to capture contextual dependencies among terms in a text. While many efforts have been devoted to empirically explain the connection between the network hyperparameters and the ability to represent the contextual dependency, the theoretical analysis is relatively insufficient. Inspired by the recent research on the use of tensor space to explain the neural network architecture, we explore the interpretable mechanism for neural language models. Specifically, we define the concept of separation rank in the language modeling process, in order to theoretically measure the degree of contextual dependencies in a sentence. Then, we show that the lower bound of such a separation rank can reveal the quantitative relation between the network structure (e.g. depth/width) and the modeling ability for the contextual dependency. Especially, increasing the depth of the neural network can be more effective to improve the ability of modeling contextual dependency. Therefore, it is important to design an adaptive network to compute the adaptive depth in a task. Inspired by Adaptive Computation Time (ACT), we design an adaptive recurrent network based on the separation rank to model contextual dependency. Experiments on various NLP tasks have verified the proposed theoretical analysis. We also test our adaptive recurrent neural network in the sentence classification task, and the experiments show that it can achieve better results than the traditional bidirectional LSTM. | reject | This paper presents a theoretical interpretation of separation rank as a measure of a recurrent network's ability to capture contextual dependencies in text, and introduces a novel bidirectional NLP variant and tests it on several NLP tasks to verify the analysis.
Reviewer 3 found that the paper does not provide a clear description of the method and that a focus on a single message would have worked better. Reviewer 2 identified several shortcomings in the paper relating to lack of clarity, limited details on the method, reliance on a 'false dichotomy', and failure to report performance. Reviewer 1 found the goals of the work to be interesting, but felt that the paper was not clear, that the proofs were not rigorous enough, and that the experiments lacked clarity. The authors responded to all the comments. The reviewers felt that their comments were still valid and did not adjust their ratings.
Overall, the paper is not yet ready in its current form. We hope that the authors will find valuable feedback for their ongoing research. | train | [
"SyxGrIxnoS",
"BkxKaEe2iB",
"rJeDeLg2or",
"Hylc0Nt8Yr",
"Bygw1CsiFB",
"BJg5uXxkjB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your helpful advices. We have provided our responses below.\n1. This work builds upon the previous work (namely tensor space language model, TSLM) which has been published and the detailed introduction of the TSLM will be added in Supplementary Appendices in the revised version.\n2. In this work, we con... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
3,
3,
1
] | [
"Hylc0Nt8Yr",
"BJg5uXxkjB",
"Bygw1CsiFB",
"iclr_2020_BkgUB1SYPS",
"iclr_2020_BkgUB1SYPS",
"iclr_2020_BkgUB1SYPS"
] |
iclr_2020_HJl8SkBYPr | Consistency-Based Semi-Supervised Active Learning: Towards Minimizing Labeling Budget | Active learning (AL) aims to integrate data labeling and model training in a unified way, and to minimize the labeling budget by prioritizing the selection of high value data that can best improve model performance. Readily-available unlabeled data are used to evaluate selection mechanisms, but are not used for model training in conventional pool-based AL. To minimize the labeling budget, we unify unlabeled sample selection and model training based on two principles. First, we exploit both labeled and unlabeled data using semi-supervised learning (SSL) to distill information from unlabeled data that improves representation learning and sample selection. Second, we propose a simple yet effective selection metric that is coherent with the training objective such that the selected samples are effective at improving model performance. Our experimental results demonstrate superior performance with our proposed principles for limited labeled data compared to alternative AL and SSL combinations. In addition, we study the AL phenomena of `cold start', which is becoming an increasingly more important factor to enable optimal unification of data labeling, model training and labeling budget minimization. We propose a measure that is found to be empirically correlated with the AL target loss. This measure can be used to assist in determining the proper start size. | reject | The authors leverage advances in semi-supervised learning and data augmentation to propose a method for active learning. The AL method is based on the principle that a model should consistently label across perturbation/augmentations of examples, and thus propose to choose samples for active learning based on how much the estimated label distribution changes based on different perturbations of a given example. The method is intuitive and the experiments provide some evidence of efficacy. However, during discussion there was a lingering question of novelty that eventually swayed the group to reject this paper. | test | [
"SkxcrNYsiS",
"ByeD3J4SoB",
"SygZ4p7rsS",
"HJl9sGNHjr",
"Syl2VCwWiH",
"rkghWIQitS",
"HyeG10GntB",
"BklkAy1pqH",
"rylu0rBd5S",
"HkgdYboDqS",
"SygeC5GPcB"
] | [
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"Thanks again for all the valuable comments. We have updated our manuscript. \nThe current version includes the following changes.\n\n1. We improved our writing (fixed typos and grammar issues etc.). \n2. We revised some confusing statements.\n3. We revised the manuscript according to Q2, Q3 and Q5 of the Reviewer ... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2020_HJl8SkBYPr",
"rkghWIQitS",
"HyeG10GntB",
"Syl2VCwWiH",
"iclr_2020_HJl8SkBYPr",
"iclr_2020_HJl8SkBYPr",
"iclr_2020_HJl8SkBYPr",
"rylu0rBd5S",
"HkgdYboDqS",
"SygeC5GPcB",
"iclr_2020_HJl8SkBYPr"
] |
iclr_2020_BJxvH1BtDS | Three-Head Neural Network Architecture for AlphaZero Learning | The search-based reinforcement learning algorithm AlphaZero has been used as a general method for
mastering two-player games Go, chess and Shogi. One crucial ingredient in AlphaZero (and its predecessor AlphaGo Zero) is the two-head network architecture that outputs two estimates --- policy and value --- for one input game state. The merit of such an architecture is that letting policy and value learning share the same representation substantially improved generalization of the neural net.
A three-head network architecture has recently been proposed that can learn a third action-value head on a fixed dataset, the same as that used for the two-head net. Also, using the action-value head in Monte Carlo tree search (MCTS) improved the search efficiency.
However, the effectiveness of the three-head network has not been investigated in an AlphaZero-style learning paradigm.
In this paper, using the game of Hex as a test domain, we conduct an empirical study of the three-head network architecture in AlphaZero learning. We show that the architecture is also advantageous in zero-style iterative learning. Specifically, we find that the three-head network can induce the following benefits: (1) learning can become faster as search takes advantage of the additional action-value head; (2) better prediction results than the two-head architecture can be achieved when using additional action-value learning as an auxiliary task. | reject | The authors provide an empirical study of the recent 3-head architecture applied to AlphaZero-style learning. They thoroughly evaluate this approach using the game Hex as a test domain.
Initially, reviewers were concerned about how well the hyperparameters were tuned for the different methods. The authors did a commendable job addressing the reviewers' concerns in their revision. However, the reviewers agreed that, with the additional results showing that the gap between the two-head architecture and the three-head architecture narrowed, the focus of the paper has changed substantially from the initial version. They suggest that a substantial rewrite of the paper would make the most sense before publication.
As a result, at this time, I'm going to recommend rejection, but I encourage the authors to incorporate the reviewers' feedback. I believe this paper has the potential to be a strong submission in the future.
| train | [
"S1eKL42R_H",
"HJgHu7bCKH",
"rkeEfpDoiH",
"S1lIoZdsoB",
"S1xCxJdoiS",
"r1xuzKwGqB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper applies three-head neural network (3HNN) architecture in AlphaZero learning paradigm. This architecture was proposed in [1] and the paper builds upon their work. In AlphaGo and AlphaZero 2HNN is used, which predicts policy and value for a given state. 3HNN also predicts action-value Q function. In [1], t... | [
6,
3,
-1,
-1,
-1,
6
] | [
1,
5,
-1,
-1,
-1,
3
] | [
"iclr_2020_BJxvH1BtDS",
"iclr_2020_BJxvH1BtDS",
"HJgHu7bCKH",
"S1eKL42R_H",
"r1xuzKwGqB",
"iclr_2020_BJxvH1BtDS"
] |
iclr_2020_BylPSkHKvB | Natural- to formal-language generation using Tensor Product Representations | Generating formal-language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires to explicitly capture discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, creating a new state of the art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoList dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning. | reject | The paper proposed a new seq2seq method to implement natural language to formal language translation. Fixed length Tensor Product Representations are used as the intermediate representation between encoder and decoder. Experiments are conducted on MathQA and AlgoList datasets and show the effectiveness of the methods. Intensive discussions happened between the authors and reviewers. Despite of the various concerns raised by the reviewers, a main problem pointed by both reviewer#3 and reviewer#4 is that there is a gap between the theory and the implementation in this paper. The other reviewer (#2) likes the paper but is less confident and tend to agree with the other two reviewers. | test | [
"rJxLze2soB",
"HylvvoJ2oB",
"rJlElS6ioH",
"BkxiUOcisB",
"HJl6Riadir",
"SygeZ0TuiH",
"SkeBMX0dsB",
"rkgaigA_sH",
"rkxqciuCtH",
"SygoAfzI9r",
"SyecPABa9S"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your continued interest in our model. Your comment starts by saying that the output of the encoder ($H_s$ in your notation) has the form $\\sum_i a_i \\otimes r_i \\otimes p_i$ , an order-3 tensor. But actually the output of the encoder is an order-2 tensor with the form $\\sum_k f_k \\otimes r_k$ ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
3
] | [
"BkxiUOcisB",
"rJlElS6ioH",
"rJxLze2soB",
"HJl6Riadir",
"SyecPABa9S",
"SygoAfzI9r",
"rkxqciuCtH",
"SygeZ0TuiH",
"iclr_2020_BylPSkHKvB",
"iclr_2020_BylPSkHKvB",
"iclr_2020_BylPSkHKvB"
] |
iclr_2020_SyedHyBFwS | Relative Pixel Prediction For Autoregressive Image Generation | In natural images, transitions between adjacent pixels tend to be smooth and gradual, a fact that has long been exploited in image compression models based on predictive coding. In contrast, existing neural autoregressive image generation models predict the absolute pixel intensities at each position, which is a more challenging problem. In this paper, we propose to predict pixels relatively, by predicting new pixels relative to previously generated pixels (or pixels from the conditioning context, when available). We show that this form of prediction fare favorably to its absolute counterpart when used independently, but their coordination under an unified probabilistic model yields optimal performance, as the model learns to predict sharp transitions using the absolute predictor, while generating smooth transitions using the relative predictor.
Experiments on multiple benchmarks for unconditional image generation, image colorization, and super-resolution indicate that our presented mechanism leads to improvements in terms of likelihood compared to the absolute prediction counterparts. | reject | All reviewers rated this submission as a weak reject and there was no author response.
The AC recommends rejection. | train | [
"rJgWhRKyjH",
"H1xVFeinYr",
"SyxKniRnYS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper bases its methodology on well known developments in image analysis/synthesis about similarity of pixel values in adjacent locations. Many techniques have been used for modelling this similarity, including predictive models, cliques and graphs. The paper uses a simple autoregressive model for generating p... | [
3,
3,
3
] | [
5,
4,
3
] | [
"iclr_2020_SyedHyBFwS",
"iclr_2020_SyedHyBFwS",
"iclr_2020_SyedHyBFwS"
] |
iclr_2020_SylurJHFPS | The Detection of Distributional Discrepancy for Text Generation | The text generated by neural language models is not as good as the real text. This means that their distributions are different. Generative Adversarial Nets (GAN) are used to alleviate it. However, some researchers argue that GAN variants do not work at all. When both sample quality (such as Bleu) and sample diversity (such as self-Bleu) are taken into account, the GAN variants even are worse than a well-adjusted language model. But, Bleu and self-Bleu can not precisely measure this distributional discrepancy. In fact, how to measure the distributional discrepancy between real text and generated text is still an open problem. In this paper, we theoretically propose two metric functions to measure the distributional difference between real text and generated text. Besides that, a method is put forward to estimate them. First, we evaluate language model with these two functions and find the difference is huge. Then, we try several methods to use the detected discrepancy signal to improve the generator. However the difference becomes even bigger than before. Experimenting on two existing language GANs, the distributional discrepancy between real text and generated text increases with more adversarial learning rounds. It demonstrates both of these language GANs fail. | reject | The authors propose a novel metric to detect distributional discrepancy for text generation models and argue that these can be used to explain the failure of GANs for language generation tasks. The reviewers found significant deficiencies with the paper, including:
1) Numerous grammatical errors and typos that make it difficult to read the paper.
2) Mischaracterization of prior work on neural language models, and failure to compare with standard distributional discrepancy measures studied in prior work (KL, total variation, Wasserstein, etc.). Further, the necessity of the complicated procedure derived by the authors is not well-justified.
3) Failure to run experiments on standard benchmarks for image generation (which are much better-studied applications of GANs) and confirm the superiority of the proposed metrics relative to standard baselines.
The reviewers agreed on the rejection decision, and the authors did not participate in the rebuttal phase.
I therefore recommend rejection. | val | [
"H1e5DF0aKH",
"SJlBb0_QYB",
"HyxQWDWtYH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper argues that text generated by existing neural language models are not as good as real text and proposes two metric functions to measure the distributional difference between real text and generated text. The proposed metrics are tried on language GANs but fail to produce any improvement.\n\nMajor issues... | [
1,
3,
1
] | [
4,
5,
4
] | [
"iclr_2020_SylurJHFPS",
"iclr_2020_SylurJHFPS",
"iclr_2020_SylurJHFPS"
] |
iclr_2020_SyxKrySYPr | Stabilizing Transformers for Reinforcement Learning | Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art results in domains such as language modeling and machine translation. Harnessing the transformer's ability to process long time horizons of information could provide a similar performance boost in partially-observable reinforcement learning (RL) domains, but the large-scale transformers used in NLP have yet to be successfully applied to the RL setting. In this work we demonstrate that the standard transformer architecture is difficult to optimize, which was previously observed in the supervised learning setting but becomes especially pronounced with RL objectives. We propose architectural modifications that substantially improve the stability and learning speed of the original Transformer and XL variant. The proposed architecture, the Gated Transformer-XL (GTrXL), surpasses LSTMs on challenging memory environments and achieves state-of-the-art results on the multi-task DMLab-30 benchmark suite, exceeding the performance of an external memory architecture. We show that the GTrXL, trained using the same losses, has stability and performance that consistently matches or exceeds a competitive LSTM baseline, including on more reactive tasks where memory is less critical. GTrXL offers an easy-to-train, simple-to-implement but substantially more expressive architectural alternative to the standard multi-layer LSTM ubiquitously used for RL agents in partially-observable environments. | reject | This paper proposes architectural modifications to transformers, which are promising for sequential tasks requiring memory but can be unstable to optimize, and applies the resulting method to the RL setting, evaluated in the DMLab-30 benchmark.
While I thought the approach was interesting and the results promising, the reviewers unanimously felt that the experimental evaluation could be more thorough, and were concerned with the motivation behind some of the proposed changes.
| train | [
"BJerZG59YS",
"S1xS4AVsjB",
"rkeZiLPDiS",
"r1gutMDPjS",
"HJxtWzwviH",
"Hkxirbwvjr",
"SJxOvgPPsH",
"S1x2q1vDjS",
"HkeMR-1QiB",
"rklAUbhRFr",
"H1l3qFXvKr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"This paper is motivated by the unstable performance of Transformer in reinforcement learning, and tried several variants of Transformer to see whether some of them can stabilize the Transformer. The experimental results look good, however, I have problems in understanding the motivation, the intuition of the propo... | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1
] | [
"iclr_2020_SyxKrySYPr",
"iclr_2020_SyxKrySYPr",
"iclr_2020_SyxKrySYPr",
"HJxtWzwviH",
"rklAUbhRFr",
"SJxOvgPPsH",
"BJerZG59YS",
"HkeMR-1QiB",
"iclr_2020_SyxKrySYPr",
"iclr_2020_SyxKrySYPr",
"iclr_2020_SyxKrySYPr"
] |
iclr_2020_Skg5r1BFvB | Continuous Control with Contexts, Provably | A fundamental challenge in artificially intelligence is to build an agent that generalizes and adapts to unseen environments. A common strategy is to build a decoder that takes a context of the unseen new environment and generates a policy. The current paper studies how to build a decoder for the fundamental continuous control environment, linear quadratic regulator (LQR), which can model a wide range of real world physical environments. We present a simple algorithm for this problem, which uses upper confidence bound (UCB) to refine the estimate of the decoder and balance the exploration-exploitation trade-off. Theoretically, our algorithm enjoys a O~(T) regret bound in the online setting where T is the number of environments the agent played. This also implies after playing O~(1/ϵ2) environments, the agent is able to transfer the learned knowledge to obtain an ϵ-suboptimal policy for an unseen environment. To our knowledge, this is first provably efficient algorithm to build a decoder in the continuous control setting. While our main focus is theoretical, we also present experiments that demonstrate the effectiveness of our algorithm. | reject | This work considers the popular LQR objective but with [A,B] unknown and dynamically changing. At each time a context [C,D] is observed and it is assumed there exist a linear map Theta from [C,D] to [A,B]. The particular problem statement is novel, but is heavily influenced by other MDP settings and the also follows very closely to previous works. The algorithm seems computationally intractable (a problem shared by previous work this work builds on) and so in experiments a gross approximation is used.
Reviewers found the work very stylized and felt that it did not adequately review related work. For example, little attention is paid to switching linear systems, and the recent LQR advances are relegated to a list of references with no discussion. The reviewers also questioned how the theory relates to the traditional setting of LQR regret, say, if [C,D] were the identity at all times so that Theta = [A,B].
This paper received 3 reviews (a third was added late in the process), and my own opinion influenced the decision. While the problem statement is interesting, the paper fails to put itself in context with existing work, and there are some questions about the algorithmic methods.
"B1lQPS9pYH",
"BkeJLdEMnS",
"HkeH_BO3sS",
"H1xficTBsr",
"B1luvxAijS",
"rJg0mNiYsS",
"r1edA96SjH",
"HJelb3w15r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"\n# Summary\n- The paper proposes a UCB-inspired algorithm for a contextual LQR problem. The problem itself is introduced in this paper and is similar in spirit to CMDPs, with the difference that instead of learning a mapping from context to transition matrix, a mapping from context to matrices [A, B] figuring in ... | [
3,
1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_Skg5r1BFvB",
"iclr_2020_Skg5r1BFvB",
"iclr_2020_Skg5r1BFvB",
"HJelb3w15r",
"rJg0mNiYsS",
"r1edA96SjH",
"B1lQPS9pYH",
"iclr_2020_Skg5r1BFvB"
] |
iclr_2020_rJecSyHtDS | Learning to Recognize the Unseen Visual Predicates | Visual relationship recognition models are limited in the ability to generalize from finite seen predicates to unseen ones. We propose a new problem setting named predicate zero-shot learning (PZSL): learning to recognize the predicates without training data. It is unlike the previous zero-shot learning problem on visual relationship recognition which learns to recognize the unseen relationship triplets (<subject, predicate, object>) but requires all components (subject, predicate, and object) to be seen in the training set. For the PZSL problem, however, the models are expected to recognize the diverse even unseen predicates, which is meaningful for many downstream high-level tasks, like visual question answering, to handle complex scenes and open questions. The PZSL is a very challenging task since the predicates are very abstract and follow an extreme long-tail distribution. To address the PZSL problem, we present a model that performs compatibility learning leveraging the linguistic priors from the corpus and knowledge base. An unbalanced sampled-softmax is further developed to tackle the extreme long-tail distribution of predicates. Finally, the experiments are conducted to analyze the problem and verify the effectiveness of our methods. The dataset and source code will be released for further study. | reject | The paper proposes a new problem setting of predicate zero-shot learning for visual relation recognition for the setting when some of the predicates are missing, and a model that is able to address it.
All reviewers agreed that the problem setting is interesting and important, but had reservations about the proposed model. In particular, the reviewers were concerned that it is too simple of a step from existing methods. One reviewer also pointed towards potential comparisons with other zero-shot methods.
Following that discussion, I recommend rejection at this time but highly encourage the authors to take the feedback into account and resubmit to another venue. | test | [
"ryeMPURijr",
"BJg4MLRsjS",
"BJg-INCsjr",
"HJgsbSRisH",
"H1lPSWAooH",
"B1eKItmTFr",
"BJeeZWtaFr",
"rkxZmVqCYS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for tending towards accepting our work and mentioning that our work has “a few strong contributions”. The “minor edits” are done in the revised version. \n \n*** Response to the things that could be strengthened or addressed further ***\n \nQ 1. There could be more meaningful comparison to ot... | [
-1,
-1,
-1,
-1,
-1,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"B1eKItmTFr",
"BJeeZWtaFr",
"rkxZmVqCYS",
"rkxZmVqCYS",
"iclr_2020_rJecSyHtDS",
"iclr_2020_rJecSyHtDS",
"iclr_2020_rJecSyHtDS",
"iclr_2020_rJecSyHtDS"
] |
iclr_2020_SklsBJHKDS | Model Inversion Networks for Model-Based Optimization | In this work, we aim to solve data-driven optimization problems, where the goal is to find an input that maximizes an unknown score function given access to a dataset of input, score pairs. Inputs may lie on extremely thin manifolds in high-dimensional spaces, making the optimization prone to falling-off the manifold. Further, evaluating the unknown function may be expensive, so the algorithm should be able to exploit static, offline data. We propose model inversion networks (MINs) as an approach to solve such problems. Unlike prior work, MINs scale to extremely high-dimensional input spaces and can efficiently leverage offline logged datasets for optimization in both contextual and non-contextual settings. We show that MINs can also be extended to the active setting, commonly studied in prior work, via a simple, novel and effective scheme for active data collection. Our experiments show that MINs act as powerful optimizers on a range of contextual/non-contextual, static/active problems including optimization over images and protein designs and learning from logged bandit feedback. | reject | This paper proposes Model Inversion Networks (MINs) to solve model optimization problems high-dimensional spaces. The paper received three reviews from experts working in this area. In a short review, R1 recommends Reject based on limited novelty compared to an ICDM 2019 paper. R2 recommends Weak Reject, identifying several strengths of the paper but also a number of concerns including unclear or missing technical explanations and need for some additional experiments (ablation studies). R3 recommends Weak Accept, giving the opinion that the idea the paper proposes is worthy of publication, but also identifying a number of weaknesses including a "rushed" experimental section that is missing details, need for additional quantitative experimental results, and some "ad hoc" parts of the formulation. The authors prepared responses that address many of these concerns, including a convincing argument that there is significant difference and novelty compared to the ICDM 2019. However, even if excluding R1's review, the reviews of R2 and R3 are borderline; the ACs read the paper and while they feel the work has significant merit, they agree with R2 and R3 that the paper needs additional work and another round of peer review to fully address R2 and R3's concerns.
| train | [
"HkeR_fSZoB",
"Syg3bE52jr",
"SkgTIpFcsS",
"Hkg9vv6koB",
"HyxTb_1TYr",
"H1xX0KDRFB",
"SyeB72uRKS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the constructive feedback. We have updated the paper (changes in red) to address the clarity concerns. Concisely, we have addressed clarity issues (in red) along these directions:\n1. Added interpretation of the function $g$ in reweighting (Section 3.3)\n2. Described the procedure for creating the au... | [
-1,
-1,
-1,
-1,
6,
3,
1
] | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"H1xX0KDRFB",
"iclr_2020_SklsBJHKDS",
"HyxTb_1TYr",
"SyeB72uRKS",
"iclr_2020_SklsBJHKDS",
"iclr_2020_SklsBJHKDS",
"iclr_2020_SklsBJHKDS"
] |
iclr_2020_HklsHyBKDr | On Predictive Information Sub-optimality of RNNs | Certain biological neurons demonstrate a remarkable capability to optimally compress the history of sensory inputs while being maximally informative about the future. In this work, we investigate if the same can be said of artificial neurons in recurrent neural networks (RNNs) trained with maximum likelihood. In experiments on two datasets, restorative Brownian motion and a hand-drawn sketch dataset, we find that RNNs are sub-optimal in the information plane. Instead of optimally compressing past information, they extract additional information that is not relevant for predicting the future. Overcoming this limitation may require alternative training procedures and architectures, or objectives beyond maximum likelihood estimation. | reject | Nice start but unfortunately not ripe. The issues remarked by the reviewers were only partly addressed, and an improved version of the paper should be submitted at a future venue. | val | [
"SklLK06ijH",
"B1l--0aioB",
"r1x0Ja6isB",
"rkxNUnoftB",
"Hyxmmk9kcS",
"r1l3FOGR9B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and thoughtful feedback!\n\n--It is not very clear to me how the authors trained stochastic RNNs deterministically during training.\n\n We compared two different setups. In the first setup, we trained deterministic RNNs, and then added noise post-hoc at test-time, i.e. the model used ... | [
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
4,
3,
3
] | [
"rkxNUnoftB",
"Hyxmmk9kcS",
"r1l3FOGR9B",
"iclr_2020_HklsHyBKDr",
"iclr_2020_HklsHyBKDr",
"iclr_2020_HklsHyBKDr"
] |
iclr_2020_rkliHyrFDB | Information Theoretic Model Predictive Q-Learning | Model-free Reinforcement Learning (RL) algorithms work well in sequential decision-making problems when experience can be collected cheaply and model-based RL is effective when system dynamics can be modeled accurately. However, both of these assumptions can be violated in real world problems such as robotics, where querying the system can be prohibitively expensive and real-world dynamics can be difficult to model accurately. Although sim-to-real approaches such as domain randomization attempt to mitigate the effects of biased simulation, they can still suffer from optimization challenges such as local minima and hand-designed distributions for randomization, making it difficult to learn an accurate global value function or policy that directly transfers to the real world. In contrast to RL, Model Predictive Control (MPC) algorithms use a simulator to optimize a simple policy class online, constructing a closed-loop controller that can effectively contend with real-world dynamics. MPC performance is usually limited by factors such as model bias and the limited horizon of optimization. In this work, we present a novel theoretical connection between information theoretic MPC and entropy regularized RL and develop a Q-learning algorithm that can leverage biased models. We validate the proposed algorithm on sim-to-sim control tasks to demonstrate the improvements over optimal control and reinforcement learning from scratch. Our approach paves the way for deploying reinforcement learning algorithms on real-robots in a systematic manner. | reject | The authors develop a novel connection between information theoretic MPC and entropy regularized RL. Using this connection, they develop Q learning algorithm that can work with biased models. They evaluate their proposed algorithm on several control tasks and demonstrate performance over the baseline methods.
Unfortunately, reviewers were not convinced that the technical contribution of this work was sufficient. They felt that this was a fairly straightforward extension of MPPI. Furthermore, I would have expected a comparison to POLO. As the authors note, their approach is more theoretically principled, so it would be nice to see them outperforming POLO as a validation of their framework.
Given the large number of high-quality submissions this year, I recommend rejection at this time. | train | [
"Hye5ppju5H",
"B1gjSMattS",
"HyxcBAg2sH",
"r1g_AgcooH",
"ryeVLDmYiS",
"S1gNUGQKjr",
"HJeCDWEtsS",
"H1xwr1V9iB",
"rygnPvmFoS",
"r1gSAHXFjB",
"HyeB2rXYoB",
"HJgk1myAFr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"\n\nIn this paper, the authors proposed the algorithm to introduce model-free reinforcement learning (RL) to model predictive control~(MPC), which is a representative algorithm in model-based RL, to overcome the finite-horizon issue in the existing MPC. The authors evaluated the algorithm on three environments and... | [
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2020_rkliHyrFDB",
"iclr_2020_rkliHyrFDB",
"r1g_AgcooH",
"rygnPvmFoS",
"B1gjSMattS",
"HJgk1myAFr",
"iclr_2020_rkliHyrFDB",
"iclr_2020_rkliHyrFDB",
"ryeVLDmYiS",
"HyeB2rXYoB",
"Hye5ppju5H",
"iclr_2020_rkliHyrFDB"
] |
iclr_2020_HJg3HyStwB | Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions | Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier. Recently, various kinds of adversarial attack methods have been proposed, most of which focus on adding small perturbations to input images. Despite the success of existing approaches, the way to generate realistic adversarial images with small perturbations remains a challenging problem. In this paper, we aim to address this problem by proposing a novel adversarial method, which generates adversarial examples by imposing not only perturbations but also spatial distortions on input images, including scaling, rotation, shear, and translation. As humans are less susceptible to small spatial distortions, the proposed approach can produce visually more realistic attacks with smaller perturbations, able to deceive classifiers without affecting human predictions. We learn our method by amortized techniques with neural networks and generate adversarial examples efficiently by a forward pass of the networks. Extensive experiments on attacking different types of non-robustified classifiers and robust classifiers with defence show that our method has state-of-the-art performance in comparison with advanced attack parallels. | reject | The method proposed and explored here is to introduce small spatial distortions, with the goal of making them undetectable by humans but affecting the classification of the images. As reviewers point out, very similar methods have been tested before. The methods are also only tested on a few low-resolution datasets.
The reviewers are unanimous in their judgement that the method is not novel enough, and the authors' rebuttals have not convinced the reviewers or me otherwise.
"HklsdMjcor",
"rJxLyGs9sH",
"SylTVgs5sr",
"r1xnXDVnFH",
"r1e3Y-QTFH",
"BkgXRYNaYB",
"SklVActsdS",
"B1gf_h28Or",
"H1er8xkSOS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public"
] | [
"Our response is as follows:\n\n1. Please allow us to re-emphasize the main novelty of our method: We focus on generating adversarial examples that look realistic to humans but also attack the classifier well; We achieve this goal by proposing a generator that conducts both spatial distortions and perturbations. Im... | [
-1,
-1,
-1,
3,
1,
3,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1
] | [
"r1xnXDVnFH",
"r1e3Y-QTFH",
"BkgXRYNaYB",
"iclr_2020_HJg3HyStwB",
"iclr_2020_HJg3HyStwB",
"iclr_2020_HJg3HyStwB",
"iclr_2020_HJg3HyStwB",
"H1er8xkSOS",
"iclr_2020_HJg3HyStwB"
] |
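The HJg3HyStwB record above describes composing small spatial distortions (scaling, rotation, shear, translation) with pixel perturbations to build adversarial examples. As a rough illustration only, the numpy sketch below composes those four transforms into a single 2x3 affine matrix; the parameter ranges and the function name are hypothetical, and the paper's generator network and perturbation step are not reproduced here.

```python
import numpy as np

def small_affine(scale=1.0, angle_deg=0.0, shear=0.0, tx=0.0, ty=0.0):
    """Compose scaling, rotation, shear and translation into one 2x3 affine map."""
    a = np.deg2rad(angle_deg)
    rotation = np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a),  np.cos(a)]])
    shear_m = np.array([[1.0, shear],
                        [0.0, 1.0]])
    linear = scale * rotation @ shear_m
    return np.hstack([linear, np.array([[tx], [ty]])])

# an (assumed) small, hard-to-notice distortion; pixel coordinates map as p' = A[:, :2] @ p + A[:, 2]
A = small_affine(scale=1.02, angle_deg=1.5, shear=0.01, tx=0.5, ty=-0.5)
print(A)
```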
iclr_2020_Skl3SkSKDr | Generating valid Euclidean distance matrices | Generating point clouds, e.g., molecular structures, in arbitrary rotations, translations, and enumerations remains a challenging task. Meanwhile, neural networks utilizing symmetry invariant layers have been shown to be able to optimize their training objective in a data-efficient way. In this spirit, we present an architecture which allows producing valid Euclidean distance matrices, which by construction are already invariant under rotation and translation of the described object. Motivated by the goal of generating molecular structures in Cartesian space, we use this architecture to construct a Wasserstein GAN utilizing a permutation invariant critic network. This makes it possible to generate molecular structures in a one-shot fashion by producing Euclidean distance matrices which have a three-dimensional embedding. | reject | This paper proposes a parametrisation of Euclidean distance matrices amenable to use within a differentiable generative model. The resulting model is used in a WGAN architecture and demonstrated empirically in the generation of molecular structures.
Reviewers were positive about the motivation from a specific application area (generation of molecular structures). However, they raised some concerns about the actual significance of the approach. The AC shares these concerns; the methodology essentially amounts to constraining the output of a neural network to be symmetric and positive semidefinite, which is in turn equivalent to producing a non-negative diagonal matrix (corresponding to the eigenvalues). As a result, the AC recommends rejection, and encourages the authors to include simple baselines in the next iteration. | val | [
"BkeaLRpdoB",
"SylhY66_iB",
"HyxknhaOiB",
"Skx9wBpHFB",
"rkeVnp86YH",
"BkeMvX5TFH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your kind review and positive feedback!\n- We added to the explanation of the chosen loss function, especially with respect to the atom types via a cross entropy term and added a short explanation of the drift term. Furthermore an intuitive description of attaining high and/or low values in the criti... | [
-1,
-1,
-1,
8,
3,
8
] | [
-1,
-1,
-1,
3,
3,
1
] | [
"Skx9wBpHFB",
"rkeVnp86YH",
"BkeMvX5TFH",
"iclr_2020_Skl3SkSKDr",
"iclr_2020_Skl3SkSKDr",
"iclr_2020_Skl3SkSKDr"
] |
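The Skl3SkSKDr record above rests on a standard fact: a symmetric positive semidefinite Gram matrix determines a valid Euclidean distance matrix, which the meta-review summarizes as constraining the network output's eigenvalues to be non-negative. The numpy sketch below is a generic illustration of that construction with a fixed low-rank factor, not the paper's architecture; the function and variable names are hypothetical.

```python
import numpy as np

def edm_from_unconstrained(raw, rank=3):
    """Map an unconstrained square matrix to a valid Euclidean distance matrix.

    A Gram matrix G = B B^T (with B of shape [n, rank]) is symmetric PSD by
    construction, and D_ij = G_ii + G_jj - 2 G_ij is then a valid EDM whose
    points embed in `rank` dimensions.
    """
    b = raw[:, :rank]                      # use the first `rank` columns of the raw output as point factors
    gram = b @ b.T                         # symmetric PSD Gram matrix
    diag = np.diag(gram)
    dist = diag[:, None] + diag[None, :] - 2.0 * gram
    return np.maximum(dist, 0.0)           # clip tiny negative values from round-off

# toy usage: a random "network output" for 5 points with a 3-D embedding
rng = np.random.default_rng(0)
d = edm_from_unconstrained(rng.normal(size=(5, 5)), rank=3)
assert np.allclose(d, d.T) and np.all(np.diag(d) < 1e-12)
```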
iclr_2020_S1x1IkHtPr | A Generative Model for Molecular Distance Geometry | Computing equilibrium states for many-body systems, such as molecules, is a long-standing challenge. In the absence of methods for generating statistically independent samples, great computational effort is invested in simulating these systems using, for example, Markov chain Monte Carlo. We present a probabilistic model that generates such samples for molecules from their graph representations. Our model learns a low-dimensional manifold that preserves the geometry of local atomic neighborhoods through a principled learning representation that is based on Euclidean distance geometry. We create a new dataset for molecular conformation generation with which we show experimentally that our generative model achieves state-of-the-art accuracy. Finally, we show how to use our model as a proposal distribution in an importance sampling scheme to compute molecular properties. | reject | The paper presents a solution to generating molecule with three dimensional structure by learning a low-dimensional manifold that preserves the geometry of local atomic neighborhoods based on Euclidean distance geometry.
The application is interesting and the proposed solution is reasonable. The authors did a good job at addressing most concerns raised in the reviews and updating the draft.
Two main concerns were left unresolved: one is the lack of novelty in the proposed model, and the other is that some arguments in the paper are not fully supported. The paper could benefit from one more round of revision before being ready for publication.
| train | [
"HyxVxQy0FH",
"H1gRN1LPiB",
"HJgLgvHvjH",
"rJe4IsrvsB",
"H1eLIqBvsr",
"rkgKMOSwjB",
"S1l4hHBvjB",
"SkeK8VrDsB",
"rkeWJZSPjB",
"SkgzPOBuYB",
"HyxOGi_D5H",
"HJxAaKHQ_B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Summary:\nThe authors propose a generative model designed for molecules, which is essentially a conditional variational auto-encoder. The model learns to generate, when conditioning on a molecule graph, the distribution of distances between each of the atoms and its second and third neighbour. Finally, using these... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
-1
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
-1
] | [
"iclr_2020_S1x1IkHtPr",
"iclr_2020_S1x1IkHtPr",
"SkgzPOBuYB",
"H1eLIqBvsr",
"rkgKMOSwjB",
"HJgLgvHvjH",
"SkeK8VrDsB",
"HyxVxQy0FH",
"HyxOGi_D5H",
"iclr_2020_S1x1IkHtPr",
"iclr_2020_S1x1IkHtPr",
"iclr_2020_S1x1IkHtPr"
] |
iclr_2020_B1lgUkBFwr | Unsupervised domain adaptation with imputation | Motivated by practical applications, we consider unsupervised domain adaptation for classification problems, in the presence of missing data in the target domain. More precisely, we focus on the case where there is a domain shift between source and target domains, while some components of the target data are systematically absent. We propose a way to impute non-stochastic missing data for a classification task by leveraging supervision from a complete source domain through domain adaptation. We introduce a single model performing joint domain adaptation, imputation and classification which is shown to perform well under various representative divergence families (H-divergence, Optimal Transport). We perform experiments on two families of datasets: a classical digit classification benchmark commonly used in domain adaptation papers and real world digital advertising datasets, on which we evaluate our model’s classification performance in an unsupervised setting. We analyze its behavior showing the benefit of explicitly imputing non-stochastic missing data jointly with domain adaptation. | reject | This paper addresses the problem of performing unsupervised domain adaptation when some target domain data is missing is a potentially non-stochastic way. The proposed solution consists of applying a version of domain adversarial learning for adaptation together with an MSE based imputation loss learned using complete source data. The method is evaluated on both the standard digit recognition datasets and a real-world advertising dataset.
The reviewers had mixed recommendations for this work, with two recommending weak reject and one recommending acceptance. The key positive point from R3 who recommended acceptance was that this work addresses a new problem statement which may be of practical importance. The other two reviewers expressed concerns over the contribution of the work and the validity of the problem setting. Namely, both R2 and R4 had significant confusion over the problem specification and/or under what conditions the proposed setting is valid.
It is a difficult decision for this paper as there is a core disagreement between the reviewers. All reviewers seem to agree that the proposed solution is a combination of prior methods in a new way to address the specific problem setting of this work. However, the reviewers differ precisely on whether they consider the proposed problem setting to be valid and justified. Due to this discrepancy, the AC does not recommend acceptance at this time. If the core contribution is to be an application of existing techniques to a new problem statement, then that should be clarified and motivated further.
| train | [
"Ske7FfJTFB",
"HJlx9WDWcS",
"HkxWJa9KsH",
"r1gWD5cFiH",
"B1xhZRqYjS",
"BJxzTp9tsB",
"HJeBF25tjH",
"ryxFdPLRYH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The submission describes an approach for unsupervised domain adaptation in a setting where some parts of the target data are missing.\n\nBoth UDA approaches as well as data completion approaches have a sizable research history, as laid out in the related work section (Section 5). The novelty here comes from the pr... | [
8,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
5,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_B1lgUkBFwr",
"iclr_2020_B1lgUkBFwr",
"ryxFdPLRYH",
"Ske7FfJTFB",
"HJlx9WDWcS",
"HJlx9WDWcS",
"ryxFdPLRYH",
"iclr_2020_B1lgUkBFwr"
] |
iclr_2020_SyeZIkrKwS | DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks | The convolution operator is the core of convolutional neural networks (CNNs) and accounts for most of the computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels. To address this issue, we propose a novel dynamic convolution method named DyNet in this paper, which can adaptively generate convolution kernels based on image contents. To demonstrate the effectiveness, we apply DyNet on multiple state-of-the-art CNNs. The experiment results show that DyNet can reduce the computation cost remarkably, while maintaining the performance nearly unchanged. Specifically, for ShuffleNetV2 (1.0), MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 40.0%, 56.7%, 68.2% and 72.4% FLOPs respectively while the Top-1 accuracy on ImageNet only changes by +1.0%, -0.27%, -0.6% and -0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87x, 1.32x and 1.48x on the CPU platform respectively. To verify the scalability, we also apply DyNet to a segmentation task; the results show that DyNet can reduce 69.3% of the FLOPs while maintaining the Mean IoU. | reject | The paper proposed the use of dynamic convolutional kernels, formed as a linear combination of static kernels and fused after training, as a way to reduce inference computation cost. The authors evaluated the proposed method on a variety of models and showed good FLOPs reduction while maintaining accuracy.
The main concern for this paper is the limited novelty. As pointed out by all the reviewers, there have been many works that use dynamic convolutions. The most similar ones are SENet and soft conditional computation. Although the authors claim that soft conditional computation "focus on using more parameters to make models to be more expressive while we focus on reducing redundant calculations", the methods are essentially the same; moreover, the abstract of soft conditional computation states that "CondConv improves the performance and inference cost trade-off".
"rylQ2j7njS",
"BkecUvXhjH",
"HyeZFOZ3jB",
"B1lC6hMCYS",
"rkltsVmsiH",
"H1gKEIgwsr",
"rye239JPsr",
"HyxAifCmsr",
"SkeG0bSxoB",
"r1gRUl0TYB",
"SJxn_9yy9H",
"BJx7isQ6tB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
">>> Response to “The proposed add-on requires many parameters but the number of parameters is not shown in this paper”:\nThe proposed add-on indeed requires many parameters, we have added the number of parameters in Table4, Table5 and Table6 to illustrate this point.\n Table 4\n+------... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
5,
-1
] | [
"H1gKEIgwsr",
"HyxAifCmsr",
"rkltsVmsiH",
"iclr_2020_SyeZIkrKwS",
"rye239JPsr",
"r1gRUl0TYB",
"B1lC6hMCYS",
"SJxn_9yy9H",
"BJx7isQ6tB",
"iclr_2020_SyeZIkrKwS",
"iclr_2020_SyeZIkrKwS",
"iclr_2020_SyeZIkrKwS"
] |
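The DyNet record above relies on the linearity of convolution: a per-input kernel formed as a coefficient-weighted sum of static kernels can be fused into one kernel, so only a single convolution needs to run at inference. The 1-D numpy sketch below checks exactly that identity on toy data; the coefficients are fixed placeholders here, whereas in the paper they are predicted from the image content, and none of the names below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)                      # a toy 1-D input signal
kernels = rng.normal(size=(4, 5))            # K = 4 static kernels of width 5
alpha = np.array([0.1, 0.5, 0.3, 0.1])       # toy attention coefficients (input-dependent in the paper)

# K separate convolutions, then a weighted sum of the responses
slow = sum(a * np.convolve(x, w, mode="same") for a, w in zip(alpha, kernels))

# fuse first: one convolution with the coefficient-weighted kernel
fused_kernel = (alpha[:, None] * kernels).sum(axis=0)
fast = np.convolve(x, fused_kernel, mode="same")

assert np.allclose(slow, fast)               # linearity makes the fused form exact
```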
iclr_2020_B1gX8JrYPr | Connecting the Dots Between MLE and RL for Sequence Prediction | Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time.
Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives.
In this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters.
The unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design.
The new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. Experiments on machine translation, text summarization, and game imitation learning demonstrate superiority of the proposed approach. | reject | The authors construct a weighted objective that subsumes many of the existing approaches for sequence prediction, such as MLE, RAML, and entropy regularized policy optimization. By dynamically tuning the weights in the objective, they show improved performance across several tasks.
Although there were no major issues with the paper, reviewers generally felt that the technical contribution is fairly incremental and the empirical improvements are limited. Given the large number of high-quality submissions this year, I am recommending rejection for this submission. | train | [
"HJgYDw2osr",
"rJgImv3sor",
"Bkev-vnsoB",
"S1xxM8j8Fr",
"HylY7KLAKB",
"S1xROF9CFr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments! We’d like to clarify that this paper aims to reformulate the various algorithms and distill them into a single common formulation. The common formulation is governed by the reward function and two weight hyperparameters, and thus defines a *family* of sequence prediction algorithms. Changi... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
4,
5,
4
] | [
"S1xxM8j8Fr",
"HylY7KLAKB",
"S1xROF9CFr",
"iclr_2020_B1gX8JrYPr",
"iclr_2020_B1gX8JrYPr",
"iclr_2020_B1gX8JrYPr"
] |
iclr_2020_HyeEIyBtvr | BETANAS: Balanced Training and selective drop for Neural Architecture Search | Automatic neural architecture search techniques are becoming increasingly important in the machine learning area. In particular, weight sharing methods have shown remarkable potential for searching good network architectures with few computational resources. However, existing weight sharing methods mainly suffer from limitations in their search strategies: these methods either uniformly train all network paths to convergence, which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or selectively train branches with different frequencies, which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with a balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths. The experimental results show that our proposed method can achieve a leading performance of 79.0% on ImageNet under mobile settings, which outperforms other state-of-the-art methods in both accuracy and efficiency. | reject | This paper proposes a neural architecture search method that uses balanced sampling of architectures from the one-shot model and drops operators whose importance falls below a certain weight.
The reviewers agreed that the paper's approach is intuitive, but the main points of criticism were:
- Lack of good baselines
- Potentially unfair comparison, not using the same training pipeline
- Lack of available code and thus of reproducibility. (The authors promised code in response, which is much appreciated. If the open-sourcing process has completed in time for the next version of the paper, I encourage the authors to include an anonymized version of the code in the submission to avoid this criticism.)
The reviewers appreciated the authors' rebuttal, but it did not suffice for them to change their ratings.
I agree with the reviewers that this work may be a solid contribution, but that additional evaluation is needed to demonstrate this. I therefore recommend rejection and encourage resubmission to a different venue after addressing the issues pointed out by the reviewers. | train | [
"rkxrlpTisS",
"BkxkWIvUiS",
"rygU3SP8jH",
"BkeEGBDIiH",
"rkg_5UjRYr",
"BkxCrcjmcH",
"SJxVImW65r"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We're sorry to say that the code and models could not be open sourced until the approaval from our organization. We have filed the application for releasing code and models with the legal affairs department. This procedure might cost several weeks. Our training curves and experiments settings for all reported mode... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"rygU3SP8jH",
"rkg_5UjRYr",
"BkxCrcjmcH",
"SJxVImW65r",
"iclr_2020_HyeEIyBtvr",
"iclr_2020_HyeEIyBtvr",
"iclr_2020_HyeEIyBtvr"
] |
iclr_2020_S1gN8yrYwB | AUGMENTED POLICY GRADIENT METHODS FOR EFFICIENT REINFORCEMENT LEARNING | We propose a new mixture of model-based and model-free reinforcement learning (RL) algorithms that combines the strengths of both RL methods. Our goal is to reduce the sample complexity of model-free approaches by utilizing fictitious trajectory rollouts performed on a learned dynamics model to improve the data efficiency of policy gradient methods while maintaining the same asymptotic behaviour. We suggest using a special type of uncertainty quantification by a stochastic dynamics model in which the next state prediction is randomly drawn from the distribution predicted by the dynamics model. As a result, the negative effect of exploiting erroneously optimistic regions in the dynamics model is addressed by next state predictions based on an uncertainty-aware ensemble of dynamics models. The influence of the ensemble of dynamics models on the policy update is controlled by adjusting the number of virtually performed rollouts in the next iteration according to the ratio of the real and virtual total reward. Our approach, which we call Model-Based Policy Gradient Enrichment (MBPGE), is tested on a collection of benchmark tests including simulated robotic locomotion. We compare our approach to plain model-free algorithms and a model-based one. Our evaluation shows that MBPGE leads to higher learning rates in an early training stage and an improved asymptotic behaviour. | reject | The authors propose a hybrid model-free/model-based policy gradient method that attempts to reduce sample complexity without degrading asymptotic performance. They evaluate their approach on a collection of benchmark tests.
The reviewers raised concerns about limited novelty of the proposed approach and flaws in the evaluation. The authors need to compare to more baselines and ensure that the baseline algorithms are performing as previously reported. Even then, the reported improvements were small.
Given the issues raised by the reviewers, this paper is not ready for publication at ICLR. | train | [
"SJeE8rx6YH",
"S1l3VHlRKS",
"BklIkSDXqH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Summary \n\nThis paper described a model-based RL method which uses learned dynamics models to augment the replay buffer used when training a PPO agent. \nSpecifically, the agent learns an ensemble of dynamics models, and then performs a PPO updates using a mixture of trajectories sampled from the true environ... | [
1,
1,
1
] | [
4,
5,
5
] | [
"iclr_2020_S1gN8yrYwB",
"iclr_2020_S1gN8yrYwB",
"iclr_2020_S1gN8yrYwB"
] |
iclr_2020_SklVI1HKvH | Sample-Based Point Cloud Decoder Networks | Point clouds are a flexible and ubiquitous way to represent 3D objects with arbitrary resolution and precision. Previous work has shown that adapting encoder networks to match the semantics of their input point clouds can significantly improve their effectiveness over naive feedforward alternatives. However, the vast majority of work on point-cloud decoders are still based on fully-connected networks that map shape representations to a fixed number of output points. In this work, we investigate decoder architectures that more closely match the semantics of variable sized point clouds. Specifically, we study sample-based point-cloud decoders that map a shape representation to a point feature distribution, allowing an arbitrary number of sampled features to be transformed into individual output points. We develop three sample-based decoder architectures and compare their performance to each other and show their improved effectiveness over feedforward architectures. In addition, we investigate the learned distributions to gain insight into the output transformation. Our work is available as an extensible software platform to reproduce these results and serve as a baseline for future work. | reject | The reviewers have raised several important concerns about the paper that the authors decided not to address. | train | [
"HkgILpeiFS",
"SkeaYr229H",
"Hkx9xqR35B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper is about decoder networks for 3D point cloud data, i.e., given a latent vector that encodes shape information, the decoder network outputs a 3D point cloud. Previous approaches like PU-Net, or TopNet have used an MLP with a fixed number of output points for this task. The approach in this paper takes ins... | [
3,
3,
1
] | [
4,
3,
4
] | [
"iclr_2020_SklVI1HKvH",
"iclr_2020_SklVI1HKvH",
"iclr_2020_SklVI1HKvH"
] |
iclr_2020_rylrI1HtPr | Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery | Single Image Super Resolution (SISR) has significantly improved with Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), often achieving an order of magnitude better pixelwise accuracies (distortions) and state-of-the-art perceptual accuracy. Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem, perceptual accuracy tends to correlate inversely with pixelwise accuracy, which is especially detrimental to SISR, where preservation of original content is an objective. GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss, but these features are typically derived from biased pre-trained networks. Similarly, measurements of perceptual quality such as the human Mean Opinion Score (MOS) and no-reference measures have issues with pre-trained bias. The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurrence Matrix (GLCM), which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process. In this work, the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on the spatial collocation of pixel values. We compare the GLCM-based loss against scenarios where (1) no intermediate guiding loss function and (2) the VGG feature loss are used. Experimental validation is carried out on X-ray images of rock samples, characterised by a significant number of high-frequency texture features. We find the GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores. | reject | This paper proposes to use the grey level co-occurrence matrix method (GLCM) as both a performance evaluation metric and an auxiliary loss function for single image super resolution. Experiments are conducted on X-ray images of rock samples. Three reviewers provide comments. Two reviewers rated reject while one rated weak reject. The major concerns include the lack of a clear and detailed description, low novelty, limited experiments on only one database, unconvincing improvement over prior work, etc. The authors agree that the limited experiment on one database does not demonstrate the generalization capability of the proposed method. The AC agrees with the reviewers’ comments and recommends rejection.
"HyguxlXIoS",
"HygS97ittS",
"SJlcwtH2tB",
"Hyl0aS9aYr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Regarding the comments:\n\n> We agree that the method we propose should be further tested on other image types, such as medical images and natural images. These experiments are in progress\n\n> The novelty of this work is the summation of its parts and the performance it can obtain. As we state in the previous poi... | [
-1,
1,
1,
3
] | [
-1,
1,
3,
5
] | [
"Hyl0aS9aYr",
"iclr_2020_rylrI1HtPr",
"iclr_2020_rylrI1HtPr",
"iclr_2020_rylrI1HtPr"
] |
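The rylrI1HtPr record above uses the GLCM both to score texture quality and as a guiding loss. The GLCM itself is a standard statistic: normalized counts of co-occurring quantized grey levels at a fixed pixel offset. The numpy sketch below computes it and takes a simple L1 gap between two patches' GLCMs as a texture discrepancy; this is a generic illustration of the statistic, not the paper's loss, which would additionally need a differentiable formulation for training.

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Grey-level co-occurrence matrix of a [0, 1] image for a (dy, dx) offset with dy, dx >= 0."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # quantize grey levels
    ref = q[: q.shape[0] - dy, : q.shape[1] - dx]            # reference pixels
    nbr = q[dy:, dx:]                                        # neighbours at the chosen offset
    counts = np.zeros((levels, levels))
    np.add.at(counts, (ref.ravel(), nbr.ravel()), 1)         # count co-occurring grey-level pairs
    return counts / counts.sum()                             # normalize to a joint distribution

rng = np.random.default_rng(0)
hr = rng.random((64, 64))          # stand-in for a high-resolution patch ...
sr = rng.random((64, 64))          # ... and a super-resolved patch
texture_gap = np.abs(glcm(hr) - glcm(sr)).sum()   # simple L1 GLCM discrepancy
print(texture_gap)
```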
iclr_2020_rJx8I1rFwr | Meta-Learning by Hallucinating Useful Examples | Learning to hallucinate additional examples has recently been shown as a promising direction to address few-shot learning tasks, which aim to learn novel concepts from very few examples. The hallucination process, however, is still far from generating effective samples for learning. In this work, we investigate two important requirements for the hallucinator --- (i) precision: the generated examples should lead to good classifier performance, and (ii) collaboration: both the hallucinator and the classification component need to be trained jointly. By integrating these requirements as novel loss functions into a general meta-learning with hallucination framework, our model-agnostic PrecisE Collaborative hAlluciNator (PECAN) facilitates data hallucination to improve the performance of new classification tasks. Extensive experiments demonstrate state-of-the-art performance on competitive miniImageNet and ImageNet based few-shot benchmarks in various scenarios. | reject | This paper describes a new approach to meta-learning with generating new useful examples.
The reviewers liked the paper but overall felt that the paper is not ready for publication as it stands.
Rejection is recommended. | val | [
"BylL9a5hsS",
"Hklm7p93jS",
"rJggDiqhoB",
"HkeTeIBiiB",
"ByxBNMRKjr",
"HkeCv6fj_B",
"rJl3L37nYB",
"SJlzWIKX5r",
"Hklpg8yk9r"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"1. … imposing that Performance(Query | Augmented data) = Performance(Query | Support):\n\nWe are not imposing that Performance(Query | Augmented data) = Performance(Query | Support). Instead, we target Performance(Query | Augmented data) = Performance(Query | Larger set of real data). This is explained in the para... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
-1
] | [
"Hklpg8yk9r",
"SJlzWIKX5r",
"iclr_2020_rJx8I1rFwr",
"rJl3L37nYB",
"HkeCv6fj_B",
"iclr_2020_rJx8I1rFwr",
"iclr_2020_rJx8I1rFwr",
"iclr_2020_rJx8I1rFwr",
"iclr_2020_rJx8I1rFwr"
] |
iclr_2020_HyxPIyrFvH | When Robustness Doesn’t Promote Robustness: Synthetic vs. Natural Distribution Shifts on ImageNet | We conduct a large experimental comparison of various robustness metrics for image classification. The main question of our study is to what extent current synthetic robustness interventions (lp-adversarial examples, noise corruptions, etc.) promote robustness under natural distribution shifts occurring in real data. To this end, we evaluate 147 ImageNet models under 199 different evaluation settings. We find that no current robustness intervention improves robustness on natural distribution shifts beyond a baseline given by standard models without a robustness intervention. The only exception is the use of larger training datasets, which provides a small increase in robustness on one natural distribution shift. Our results indicate that robustness improvements on real data may require new methodology and more evaluations on natural distribution shifts. | reject | The authors show that models trained to satisfy adversarial robustness properties do not possess robustness to naturally occurring distribution shifts. The majority of the reviewers agree that this is not a surprising result, especially given the particular natural distribution shifts chosen by the authors (for instance, it would be better if the authors compared to natural distribution shifts that look similar to the adversarial corruptions). Moreover, this is a survey study and no novel algorithms are presented, so the paper cannot be accepted on that merit either.
"BkgJfUqRYH",
"Hke8aQMpKS",
"BkeiMMzBsB",
"SkgTxwWSoS",
"HyepRLZrsH",
"Bkl69MZHoB",
"SkxmTz-rir",
"rJgUJQbSoS",
"HkxRhsD5YS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studied an interesting question, whether the gain of robustness from synthetic distribution shifts can be transferred/generalized to the robustness under natural distribution shifts. It was shown that in the context of natural distribution shifts, no current robustness intervention can really outperform... | [
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_HyxPIyrFvH",
"iclr_2020_HyxPIyrFvH",
"HkxRhsD5YS",
"Hke8aQMpKS",
"Hke8aQMpKS",
"BkgJfUqRYH",
"Bkl69MZHoB",
"Bkl69MZHoB",
"iclr_2020_HyxPIyrFvH"
] |
iclr_2020_BylKL1SKvr | Towards Understanding the Transferability of Deep Representations | Deep neural networks trained on a wide range of datasets demonstrate impressive transferability. Deep features appear general in that they are applicable to many datasets and tasks. This property is in prevalent use in real-world applications. A neural network pretrained on large datasets, such as ImageNet, can significantly boost generalization and accelerate training if fine-tuned to a smaller target dataset. Despite its pervasiveness, little effort has been devoted to uncovering the reasons for transferability in deep feature representations. This paper tries to understand transferability from the perspectives of improved generalization, optimization and the feasibility of transferability. We demonstrate that 1) Transferred models tend to find flatter minima, since their weight matrices stay close to the original flat region of pretrained parameters when transferred to a similar target dataset; 2) Transferred representations make the loss landscape more favorable with improved Lipschitzness, which accelerates and stabilizes training substantially. The improvement is largely attributable to the fact that the principal component of the gradient is suppressed in the pretrained parameters, thus stabilizing the magnitude of the gradient in back-propagation. 3) The feasibility of transferability is related to the similarity of both inputs and labels. A surprising discovery is that the feasibility is also impacted by the training stages, in that the transferability first increases during training and then declines. We further provide a theoretical analysis to verify our observations. | reject | This paper studies the transfer of representations learned by deep neural networks across various datasets and tasks when the network is pre-trained on some dataset and subsequently fine-tuned on the target dataset. The authors theoretically analyse two-layer fully connected networks and provide an extensive empirical evaluation arguing that the loss landscape of appropriately pre-trained networks is easier to optimise (improved Lipschitzness).
Understanding the transferability of representations is an important problem and the reviewers appreciated some aspects of the extensive empirical evaluation and the initial theoretical investigation. However, we feel that the manuscript needs a major revision and that there is not enough empirical evidence to support the stated conclusions. As a result, I will recommend rejecting this paper in the current form.
Nevertheless, as the problem is extremely important I encourage the authors to improve the clarity and provide more convincing arguments towards the stated conclusions by addressing the issues raised during the discussion phase. | train | [
"r1eGgJ6h5r",
"BygS4-hBoS",
"S1e6J-3rjr",
"r1gpWRjBoH",
"H1lX2ToBiH",
"BJgi1poSor",
"Bkx40G0oFr",
"H1gSNMbatr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I thank the authors for the clarifications and the modifications to the paper. However, I still lean towards rejection. While the authors provided detailed explanations on some of my points here, most of these are still not in the paper, so the reader would still probably be confused. There are new figures in the ... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_BylKL1SKvr",
"Bkx40G0oFr",
"r1eGgJ6h5r",
"H1gSNMbatr",
"Bkx40G0oFr",
"r1eGgJ6h5r",
"iclr_2020_BylKL1SKvr",
"iclr_2020_BylKL1SKvr"
] |
iclr_2020_SkeYUkStPr | Deep Lifetime Clustering | The goal of lifetime clustering is to develop an inductive model that maps subjects into K clusters according to their underlying (unobserved) lifetime distribution. We introduce a neural-network based lifetime clustering model that can find cluster assignments by directly maximizing the divergence between the empirical lifetime distributions of the clusters. Accordingly, we define a novel clustering loss function over the lifetime distributions (of entire clusters) based on a tight upper bound of the two-sample Kuiper test p-value. The resultant model is robust to the modeling issues associated with the unobservability of termination signals, and does not assume proportional hazards. Our results in real and synthetic datasets show significantly better lifetime clusters (as evaluated by C-index, Brier Score, Logrank score and adjusted Rand index) as compared to competing approaches. | reject | The authors propose a clustering algorithm for users in a system based on their lifetime distribution. The reviewers acknowledge the novelty of the proposed clustering algorithm, but one concern left unresolved is how the results of the analysis can be of use in the real world examples used. | train | [
"rkeJx_dhjr",
"H1gI2HOCYB",
"rklkla9osS",
"SJlq3yvEjH",
"rJg3hAU4sH",
"SJlhmaL4jr",
"Sygn43LNir",
"Hkeo3cU4iS",
"B1l4mKqqFS",
"S1lAJysb9r"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nWe have fixed the minor error in Table 1, thanks. \n\nWe have stated the assumption about summary statistics in Section 3.1. Additionally, we have added sentences in both the Friendster and MIMIC experiments to remind the reader. \nWe have added a sentence (last line of page 2) stating that all the users of a cl... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"rklkla9osS",
"iclr_2020_SkeYUkStPr",
"SJlhmaL4jr",
"S1lAJysb9r",
"S1lAJysb9r",
"H1gI2HOCYB",
"H1gI2HOCYB",
"B1l4mKqqFS",
"iclr_2020_SkeYUkStPr",
"iclr_2020_SkeYUkStPr"
] |
iclr_2020_rkeYL1SFvH | WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia | We present an approach based on multilingual sentence embeddings to automatically extract parallel sentences from the content of Wikipedia articles in 85 languages, including several dialects or low-resource languages. We do not limit the extraction process to alignments with English, but systematically consider all possible language pairs. In total, we are able to extract 135M parallel sentences for 1620 different language pairs, out of which only 34M are aligned with English. This corpus of parallel sentences is freely available (URL anonymized)
To get an indication of the quality of the extracted bitexts, we train neural MT baseline systems on the mined data only, for 1886 language pairs, and evaluate them on the TED corpus, achieving strong BLEU scores for many language pairs. The WikiMatrix bitexts seem to be particularly interesting for training MT systems between distant languages without the need to pivot through English. | reject | The authors present an approach to large scale bitext extraction from Wikipedia. This builds heavily on previous work, with the novelty limited to somewhat minor contributions: efficient approximate K-nearest-neighbor search and language-agnostic parameters such as cutoffs. These techniques have not been validated on other data sets and it is unclear how well they generalise. The major contribution of the paper is the corpus created, consisting of 85 languages, 1620 language pairs and 135M parallel sentences, of which most do not include English. This corpus is very valuable and already in use in the field, but IMO ICLR is not the right venue for this kind of publication. There were four reviews, all broadly in agreement, and some discussion with the authors.
| train | [
"HygOmHZm2H",
"ryecO-JoYS",
"rJlWFce3sB",
"H1e-3TuojH",
"SJgJ9AOsiH",
"ByemxAuijr",
"B1eiO93RYB",
"SklmxWok9S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a multi-lingual multi-way pseudo-parallel text corpus automatically extracted from Wikipedia.\n\nThe authors use a variety of pre-existing techniques applied at large scale with substantial engineering effort to extract a large number of sentence pairs in 1620 language pairs from 85 languages.\n... | [
3,
6,
-1,
-1,
-1,
-1,
8,
3
] | [
5,
1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_rkeYL1SFvH",
"iclr_2020_rkeYL1SFvH",
"ByemxAuijr",
"SklmxWok9S",
"B1eiO93RYB",
"ryecO-JoYS",
"iclr_2020_rkeYL1SFvH",
"iclr_2020_rkeYL1SFvH"
] |
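The WikiMatrix record above mines parallel sentences by comparing multilingual sentence embeddings across languages. The numpy sketch below shows a margin-based (ratio) scoring commonly used in this line of bitext mining, with random vectors standing in for real embeddings; the exact scoring rule, the neighbourhood size k, and the 1.04 threshold are assumptions for illustration rather than details taken from the paper.

```python
import numpy as np

def margin_scores(src, tgt, k=4):
    """Ratio-margin scores between two sets of L2-normalized sentence embeddings."""
    sim = src @ tgt.T                                       # cosine similarities
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)     # avg similarity of each source to its k nearest targets
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)     # avg similarity of each target to its k nearest sources
    return sim / (0.5 * (knn_src[:, None] + knn_tgt[None, :]))

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 32)); src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt = rng.normal(size=(120, 32)); tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
scores = margin_scores(src, tgt)
mined = [(i, int(scores[i].argmax())) for i in range(len(src)) if scores[i].max() > 1.04]
print(len(mined))
```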
iclr_2020_B1xcLJrYwH | Lean Images for Geo-Localization | Most computer vision tasks use textured images. In this paper we consider the geo-localization task - finding the pose of a camera in a large 3D scene from a single lean image, i.e. an image with no texture. We aim to experimentally explore whether texture and correlation between nearby images are necessary in a CNN-based solution for this task. Our results may give insight to the role of geometry (as opposed to textures) in a CNN-based geo-localization solution. Lean images are projections of a simple 3D model of a city. They contain solely information that relates to the geometry of the scene viewed (edges, faces, or relative depth). We find that the network is capable of estimating the camera pose from lean images for a relatively large number of locations (order of hundreds of thousands of images). The main contributions of this paper are: (i) demonstrating the power of CNNs for recovering camera pose using lean images; and (ii) providing insight into the role of geometry in the CNN learning process; | reject | The submission studies the problem of geolocalizing a city based on geometric information encoded in so called "lean" images. The reviewers were unanimous in their opinion that the submission does not meet the threshold for publication at ICLR. Concerns included quality of writing, novelty with respect to existing literature (in particular see Review #2), and limited validation on one geographic area. No rebuttal was provided. | train | [
"SklaSE3uKB",
"BJxZfI20KB",
"S1gCdj5T9H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper evaluates the performances of Deep Learning for geo-localization tasks in a world of images without textures (\"lean images\"). More exactly, the lean images are images rendered from a 3D model of a city. made of the depth and/or the buildings' edges and/or the buildings' faces. For the purpose of the e... | [
3,
3,
3
] | [
4,
5,
3
] | [
"iclr_2020_B1xcLJrYwH",
"iclr_2020_B1xcLJrYwH",
"iclr_2020_B1xcLJrYwH"
] |
iclr_2020_rkg98yBFDr | Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax | Deep Infomax (DIM) is an unsupervised representation learning framework that maximizes the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax (SDIM), which introduces supervised probabilistic constraints on the encoder outputs. The supervised probabilistic constraints are equivalent to a generative classifier on high-level data representations, where class-conditional log-likelihoods of samples can be evaluated. Unlike other works building generative classifiers with conditional generative models, SDIMs scale to complex datasets, and can achieve comparable performance with discriminative counterparts. With SDIM, we could perform classification with rejection.
Instead of always reporting a class label, SDIM only makes predictions when test samples' largest logits surpass some pre-chosen thresholds; otherwise the samples are deemed to be out of the data distribution and are rejected. Our experiments show that SDIM with a rejection policy can effectively reject illegal inputs including out-of-distribution samples and adversarial examples. | reject | This paper combines a well-known, recently proposed unsupervised representation learning technique with a class-conditional negative log likelihood and a squared hinge loss on the class-wise conditional likelihoods, and proposes to use the resulting conditional density model for generative classification.
The reviewers generally remarked on presentation issues. R1 asked about the contribution of various loss terms, a matter I feel is underexplored in this work, and the authors mainly replied with a qualitative description of loss behaviour in the joint system, which I don't believe was the question. R1 also asked about the choice of thresholds and the issues of fairness of comparison regarding model capacity, neither of which seemed adequately addressed. R3 remarked on the clarity being lacking, and also that "Generative modeling of representations is novel, afaik." (It is not; see, for example, the VQ-VAE line of work where PixelCNN priors are fit on top of representations, and layer-wise pre-training works of the mid 2000s, where generative models were frequently fit on greedily trained feature representations, sometimes in conjunction with a joint generative model of class labels). R2's review was very brief, and with a self-reported low confidence, but their concerns were addressed in a subsequent update.
There are three weaknesses which are my grounds for recommending rejection. First, this paper does a poor job of situating itself in the wider body of literature on classification with rejection, which dates to at least the 1970s (see Bartlett & Wengkamp, 2006 and the references therein). Second, the empirical work makes little comparison to other methods in the literature; baselines on clean data are self-generated, and the paper compares to no other adversarial defense proposals. In a minor drawback, ImageNet results are also missing; given that one of the purported advantages of the method is scalability, a large scale benchmark would have strengthened this claim. Third, no ablation study is undertaken that might give us insight into the role of each term of the loss. Given that this is a straightforward combination of well-understood techniques, a fully empirical paper ought to deliver more insight into the combination than this manuscript has. | train | [
"SJlnF6GvsS",
"HklRHtZ_jH",
"BJgFY6lvoB",
"Syeg3q4AtH",
"rkxXYzgb9r",
"SylJmNTn5H"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. \n\n1. Your suggestion of a structure map was exactly what we wanted to do, but failed due to the 8-page submission limit. We've added a structure map of SDIM framework in the revised paper. Please check. \n\n2. Table 2 is the classification accuracies on the clean test datasets. Si... | [
-1,
-1,
-1,
3,
8,
3
] | [
-1,
-1,
-1,
4,
1,
3
] | [
"Syeg3q4AtH",
"SylJmNTn5H",
"rkxXYzgb9r",
"iclr_2020_rkg98yBFDr",
"iclr_2020_rkg98yBFDr",
"iclr_2020_rkg98yBFDr"
] |
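The rkg98yBFDr record above describes classification with rejection: predict the class with the largest class-conditional log-likelihood ("logit") of the encoded sample, and reject the input when even that value falls below a pre-chosen threshold. The sketch below illustrates the decision rule with toy 2-D Gaussians in place of the learned per-class densities; the means, covariances and threshold are arbitrary stand-ins, not values from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# toy class-conditional Gaussians over 2-D "encoder outputs"
means = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}

def predict_with_rejection(z, threshold=-6.0):
    """Return the best class, or None when the best log-likelihood is below the threshold."""
    logits = {c: multivariate_normal(mean=m, cov=np.eye(2)).logpdf(z) for c, m in means.items()}
    c_best = max(logits, key=logits.get)
    return c_best if logits[c_best] >= threshold else None   # None = rejected as an illegal input

print(predict_with_rejection(np.array([0.1, -0.2])))   # in-distribution -> class 0
print(predict_with_rejection(np.array([20.0, 20.0])))  # far from both classes -> rejected (None)
```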
iclr_2020_ryxsUySFwr | Neural Network Out-of-Distribution Detection for Regression Tasks | Neural network out-of-distribution (OOD) detection aims to identify when a model is unable to generalize to new inputs, either due to covariate shift or anomalous data. Most existing OOD methods only apply to classification tasks, as they assume a discrete set of possible predictions. In this paper, we propose a method for neural network OOD detection that can be applied to regression problems. We demonstrate that the hidden features for in-distribution data can be described by a highly concentrated, low dimensional distribution. Therefore, we can model these in-distribution features with an extremely simple generative model, such as a Gaussian mixture model (GMM) with 4 or fewer components. We demonstrate on several real-world benchmark data sets that GMM-based feature detection achieves state-of-the-art OOD detection results on several regression tasks. Moreover, this approach is simple to implement and computationally efficient. | reject | The paper investigates out-of-distribution detection for regression tasks.
The reviewers raised several concerns about novelty of the method relative to existing methods, motivation & theoretical justification and clarity of the presentation (in particular, the discussion around regression vs classification).
I encourage the authors to revise the draft based on the reviewers’ feedback and resubmit to a different venue.
| train | [
"BklMk3EojH",
"HylFsPxioS",
"S1xquvlooH",
"S1ll8DxijB",
"SyxnlDgojH",
"SygZYijTFH",
"ryl4xqCRKH",
"B1xBgmDPcH",
"BJxg_FwwqB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for answering my concerns.\n\nOf course information is most likely discarded when mapping from R^d->R, but this does not tell us much about the intrinsic dimensionality of the last layers output before this is happening. And this is the space in which OOD detection is carried out if I understand correctl... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"S1xquvlooH",
"SygZYijTFH",
"ryl4xqCRKH",
"B1xBgmDPcH",
"BJxg_FwwqB",
"iclr_2020_ryxsUySFwr",
"iclr_2020_ryxsUySFwr",
"iclr_2020_ryxsUySFwr",
"iclr_2020_ryxsUySFwr"
] |
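The ryxsUySFwr record above reports that in-distribution hidden features of a regression network are concentrated enough to be modelled by a GMM with four or fewer components, so a low feature likelihood can flag out-of-distribution inputs. The scikit-learn sketch below illustrates that detection recipe with random vectors standing in for network features; the feature extractor, the component count, and the 5th-percentile threshold are placeholders rather than the paper's choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
feat_train = rng.normal(loc=0.0, scale=1.0, size=(2000, 16))    # stand-in for in-distribution hidden features
feat_test_in = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
feat_test_out = rng.normal(loc=4.0, scale=1.0, size=(200, 16))  # shifted "OOD" features

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(feat_train)
threshold = np.percentile(gmm.score_samples(feat_train), 5)     # flag the lowest 5% of train likelihoods

is_ood_in = gmm.score_samples(feat_test_in) < threshold
is_ood_out = gmm.score_samples(feat_test_out) < threshold
print(is_ood_in.mean(), is_ood_out.mean())                      # roughly 0.05 vs. 1.0 on this toy data
```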
iclr_2020_Syl38yrFwr | Near-Zero-Cost Differentially Private Deep Learning with Teacher Ensembles | Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice. One approach to study these concerns is through the lens of differential privacy. In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous. A particular instance of this approach is through a ``teacher-student'' model, wherein the teacher, who owns the sensitive data, provides the student with useful, but noisy, information, hopefully allowing the student model to perform well on a given task without access to particular features of the sensitive data. Because stronger privacy guarantees generally involve more significant noising on the part of the teacher, deploying existing frameworks fundamentally involves a trade-off between utility and privacy guarantee. One of the most important techniques used in previous work involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure. In this work, we propose a novel voting mechanism, which we call an Immutable Noisy ArgMax, that, under certain conditions, can bear very large random noising from the teacher without affecting the useful information transferred to the student. Our mechanisms improve over the state-of-the-art methods on all measures, and scale to larger tasks with both higher utility and stronger privacy (ϵ≈0). | reject | This paper presents a differentially private mechanism, called Noisy ArgMax, for privately aggregating predictions from several teacher models. There is a consensus in the discussion that the technique of adding a large constant to the largest vote breaks differential privacy. Given this technical flaw, the paper cannot be accepted. | train | [
"B1gTzqlMjB",
"rylLJjxMoH",
"Hkli6KxzoH",
"HkxrciCwtB",
"H1eUlfF_tH",
"SyeKgE2JqS",
"BJgZqAHoPS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Thank you for reviewing and comments. The following are the reply to some of your comments.\n\nIn fact, we mainly follow each step as in PATE. In PATE, for each student sample x, it will create an individual mechanism, and then multiple student data samples lead to multiple individual mechanisms. The difference he... | [
-1,
-1,
-1,
1,
1,
1,
-1
] | [
-1,
-1,
-1,
5,
3,
1,
-1
] | [
"H1eUlfF_tH",
"HkxrciCwtB",
"SyeKgE2JqS",
"iclr_2020_Syl38yrFwr",
"iclr_2020_Syl38yrFwr",
"iclr_2020_Syl38yrFwr",
"iclr_2020_Syl38yrFwr"
] |
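The Syl38yrFwr record above is built around private aggregation of teacher votes; the meta-review's objection targets the step of adding a large constant to the largest vote count before noising. For orientation, the sketch below shows only the standard noisy-argmax aggregation (Laplace noise added to every class count), which is the baseline mechanism this family of methods modifies; the noise scale here is an arbitrary placeholder rather than a calibrated privacy parameter, and the code does not implement the paper's Immutable Noisy ArgMax.

```python
import numpy as np

def noisy_argmax(teacher_predictions, num_classes, noise_scale=20.0, rng=None):
    """Standard noisy-argmax aggregation: Laplace noise on the per-class vote counts."""
    if rng is None:
        rng = np.random.default_rng()
    votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    votes += rng.laplace(scale=noise_scale, size=num_classes)
    return int(votes.argmax())

rng = np.random.default_rng(0)
teacher_preds = rng.integers(0, 10, size=250)     # stand-in for 250 teachers voting on one student query
label = noisy_argmax(teacher_preds, num_classes=10, rng=rng)
print(label)
```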
iclr_2020_HyenUkrtDB | Detecting Noisy Training Data with Loss Curves | This paper introduces a new method to discover mislabeled training samples and to mitigate their impact on the training process of deep networks. At the heart of our algorithm lies the Area Under the Loss (AUL) statistic, which can be easily computed for each sample in the training set. We show that the AUL can use training dynamics to differentiate between (clean) samples that benefit from generalization and (mislabeled) samples that need to be “memorized”. We demonstrate that the estimated AUL score conditioned on clean vs. noisy is approximately Gaussian distributed and can be well estimated with a simple Gaussian Mixture Model (GMM). The resulting GMM provides us with mixing coefficients that reveal the percentage of mislabeled samples in a data set as well as probability estimates that each individual training sample is mislabeled. We show that these probability estimates can be used to down-weight suspicious training samples and successfully alleviate the damaging impact of label noise. We demonstrate on the CIFAR10/100 datasets that our proposed approach is significantly more accurate and consistent across model architectures than all prior work. | reject | The paper proposes a new, stable metric, called Area Under Loss curve (AUL) to recognize mislabeled samples in a dataset due to the different behavior of their loss function over time. The paper build on earlier observations (e.g. by Shen & Sanghavi) to propose this new metric as a concrete solution to the mislabeling problem.
Although the reviewers remarked that this is an interesting approach for a relevant problem, they expressed several concerns regarding this paper. Two of them are whether the hardness of a sample would also result in high AUL scores, and whether the results hold up under realistic mislabelings rather than artificial label swapping / replacing. The authors did anecdotally suggest that neither of these effects has a major impact on the results. Still, I think a precise analysis of these effects would be critically important to have in the paper, especially since there might be a complex interaction between the 'hardness' of samples and mislabelings (an MNIST 1 that looks like a 7 might be mislabeled sooner than a 1 that doesn't look like a 7).
Because of these issues, I cannot recommend acceptance of the paper in its current state. However, based on the identified relevance of the problem tackled and the identified potential for significant impact I do think this could be a great paper in a next iteration. | val | [
"BJerJ9yDoH",
"HylRqYJvoH",
"SyeJTO1vjB",
"B1g0BwJdKS",
"S1eybeV2YH",
"r1e5gNqptr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"W.R.T. additional benchmarks and datasets: We have since performed additional experiments on TinyImagenet. A ResNet-32 trained on 95000 training samples (5000 withheld for validation) receives 49.4% test error on the clean dataset. With 40% label noise this number drops to 65.5%. AUL reweighting achieves 55.7% err... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
3,
3,
4
] | [
"B1g0BwJdKS",
"S1eybeV2YH",
"r1e5gNqptr",
"iclr_2020_HyenUkrtDB",
"iclr_2020_HyenUkrtDB",
"iclr_2020_HyenUkrtDB"
] |
iclr_2020_B1xpI1BFDS | Semi-Supervised Few-Shot Learning with a Controlled Degree of Task-Adaptive Conditioning | Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data. In preparing (or meta-training) a few-shot learner, however, massive labeled data are necessary. In the real world, unfortunately, labeled data are expensive and/or scarce. In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled. Our method employs explicit task-conditioning in which unlabeled sample clustering for the current task takes place in a new projection space different from the embedding feature space. The conditioned clustering space is linearly constructed so as to quickly close the gap between the class centroids for the current task and the independent per-class reference vectors meta-trained across tasks. In a more general setting, our method introduces a concept of controlling the degree of task-conditioning for meta-learning: the amount of task-conditioning varies with the number of repetitive updates for the clustering space. During each update, the soft labels of the unlabeled samples estimated in the conditioned clustering space are used to update the class averages in the original embedded space, which in turn are used to reconstruct the clustering space. Extensive simulation results based on the miniImageNet and tieredImageNet datasets show state-of-the-art semi-supervised few-shot classification performance of the proposed method. Simulation results also indicate that the proposed task-adaptive clustering shows graceful degradation with a growing number of distractor samples, i.e., unlabeled samples coming from outside the candidate classes. | reject | This paper proposes an approach to semi-supervised few-shot learning. In a discussion after the rebuttal phase, the reviewers were somewhat split on this paper, appreciating the advantages of the algorithm such as increased robustness to distractors and the ability to adapt with additional iterations, but were concerned that the contributions over Ren et al were not significant. Overall, the contributions of this paper don't quite warrant publication at ICLR. | train | [
"BJlxo1StjH",
"r1gwDySYoS",
"SyxxSkSYsr",
"S1gq60EFjB",
"SJltUIRoYB",
"Bkxb6KTpYH",
"BkgW8i5RKB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We agree that the proposed method is indeed a combination of TapNet and soft-clustering. However, we maintain that the proposed method is not an easily anticipated extension/combination of prior work. Note that the proposed iterative updates of projection space provide an increasing degree of task-conditioning, a ... | [
-1,
-1,
-1,
-1,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"SJltUIRoYB",
"Bkxb6KTpYH",
"Bkxb6KTpYH",
"BkgW8i5RKB",
"iclr_2020_B1xpI1BFDS",
"iclr_2020_B1xpI1BFDS",
"iclr_2020_B1xpI1BFDS"
] |
iclr_2020_rygePJHYPH | Towards trustworthy predictions from deep neural networks with fast adversarial calibration | To facilitate a wide-spread acceptance of AI systems guiding decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and yield well-calibrated (and thus trustworthy) predictions for both in-domain samples as well as under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks as well as alternative non-Bayesian approaches such as ensemble approaches and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions for a wide range of perturbations. We comprehensively evaluate previously proposed approaches on different data modalities, a large range of data sets, network architectures and perturbation strategies and observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions for both in-domain and out-of domain samples. | reject | This paper proposes an algorithm to produce well-calibrated uncertainty estimates. The work accomplishes this by introducing two loss terms: entropy-encouraging loss and an adversarial calibration loss to encourage predictive smoothness in response to adversarial input perturbations.
All reviewers recommended weak reject for this work, with a major issue being the presentation of the work. Each reviewer provided specific examples of areas in which the paper text, figures, equations, etc. were unclear or missing details. Though the authors have put significant effort into responding to the specific reviewer comments, the reviewers have determined that the manuscript would benefit from further revision for clarity.
Therefore, we do not recommend acceptance of this work at this time and instead encourage the authors to further iterate on the manuscript and consider resubmission to a future venue.
| train | [
"H1xalm-RKr",
"HyeT9fvFoS",
"SJlh3Z5FoS",
"H1lvk4vYoB",
"SkxbtWOKsS",
"HklDuBPFsS",
"Hyggr7vFoH",
"r1leOMl3tH",
"BklNA9L6tS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new loss function for training deep neural networks, which show good performance with respect to well-calibrated, trustworthy probabilities for samples after a domain shift. The authors conduct experiments with multiple datasets and multiple forms of perturbations, where the proposed method... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_rygePJHYPH",
"BklNA9L6tS",
"iclr_2020_rygePJHYPH",
"BklNA9L6tS",
"H1xalm-RKr",
"r1leOMl3tH",
"BklNA9L6tS",
"iclr_2020_rygePJHYPH",
"iclr_2020_rygePJHYPH"
] |
iclr_2020_Hkexw1BtDr | Deep Auto-Deferring Policy for Combinatorial Optimization | Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields. Recently, deep reinforcement learning (DRL) frameworks have gained considerable attention as a new approach: they can automatically learn the design of a good solver without using any sophisticated knowledge or hand-crafted heuristic specialized for the target problem. However, the number of stages (until reaching the final solution) required by existing DRL solvers is proportional to the size of the input graph, which hurts their scalability to large-scale instances. In this paper, we seek to resolve this issue by proposing a novel design of DRL's policy, coined auto-deferring policy (ADP), automatically stretching or shrinking its decision process. Specifically, it decides whether to finalize the value of each vertex at the current stage or defer to determine it at later stages. We apply the proposed ADP framework to the maximum independent set (MIS) problem, a prototype of NP-complete problems, under various scenarios. Our experimental results demonstrate significant improvement of ADP over the current state-of-the-art DRL scheme in terms of computational efficiency and approximation quality. The reported performance of our generic DRL scheme is also comparable with that of the state-of-the-art solvers specialized for MIS, e.g., ADP outperforms them for some graphs with millions of vertices. | reject | This paper proposes a new way to formulate the design of the deep reinforcement learning that automatically shrinks or expands decision processes.
The paper is borderline; all reviewers appreciate the paper and give thorough reviews. However, they are not completely convinced that it is ready for publication.
Rejection is recommended. This can become a nice paper for the next conference by taking the feedback into account. | train | [
"HkxCFnOTFS",
"BylIEntsjS",
"rJl15Er9or",
"H1lMS4B5oS",
"S1lJkEr9jH",
"Skl3tmS9iB",
"rklWm7H5oB",
"Hkghsm4cFr",
"H1g_kS26YH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces auto-deferring policies (ADPs) for deep reinforcement learning (RL). ADPs automatically stretching or shrinking their decision process, in particular, deciding whether to finalize the value of each vertex at the current stage or defer to determine it at later stages. ADPs are evaluated on maxi... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_Hkexw1BtDr",
"iclr_2020_Hkexw1BtDr",
"H1lMS4B5oS",
"Hkghsm4cFr",
"HkxCFnOTFS",
"rklWm7H5oB",
"H1g_kS26YH",
"iclr_2020_Hkexw1BtDr",
"iclr_2020_Hkexw1BtDr"
] |
iclr_2020_B1xZD1rtPr | The Dual Information Bottleneck | The Information-Bottleneck (IB) framework suggests a general characterization of optimal representations in learning, and deep learning in particular. It is based on the optimal trade off between the representation complexity and accuracy, both of which are quantified by mutual information. The problem is solved by alternating projections between the encoder and decoder of the representation, which can be performed locally at each representation level. The framework, however, has practical drawbacks, in that mutual information is notoriously difficult to handle at high dimension, and only has closed form solutions in special cases. Further, because it aims to extract representations which are minimal sufficient statistics of the data with respect to the desired label, it does not necessarily optimize the actual prediction of unseen labels. Here we present a formal dual problem to the IB which has several interesting properties. By switching the order in the KL-divergence between the representation decoder and data, the optimal decoder becomes the geometric rather than the arithmetic mean of the input points. While providing a good approximation to the original IB, it also preserves the form of exponential families, and optimizes the mutual information on the predicted label rather than the desired one. We also analyze the critical points of the dualIB and discuss their importance for the quality of this approach. | reject | Main content:
Blind review #1 summarizes it well:
This paper introduces a variant of the Information Bottleneck (IB) framework, which consists in permuting the conditional probabilities of y given x and y given \hat{x} in a Kullback-Leibler divergence involved in the IB optimization criterion.
Interestingly, this change only results in changing an arithmetic mean into a geometric mean in the algorithmic resolution.
Good properties of the exponential families (existence of non-trivial minimal sufficient statistics) are preserved, and an analysis of the new critical points/information plane induced is carried out.
--
Discussion:
The reviews generally agree on the elegant mathematical result, but are critical of the fact that the paper lacks any empirical component whatsoever.
--
Recommendation and justification:
The paper would be good for ICLR if it had any decent empirical component at all; it is a shame that none was presented as this does not seem very difficult. | train | [
"ByeFBnz3sH",
"S1lGehG3iS",
"rJgaJbzhsH",
"rkewlAW2jr",
"HyxXpukrjH",
"Byl0gW8RFH",
"S1xNKglJcH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the helpful comments. We were delighted to see that the reviewer agrees with our understanding of the contribution and innovations that the formalism suggest in comparison to the known $\\rm{IB}$. \n\nWe will now relate to specific points raised be the reviewer:\n\n- Motivation: We refer... | [
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Byl0gW8RFH",
"HyxXpukrjH",
"S1xNKglJcH",
"iclr_2020_B1xZD1rtPr",
"iclr_2020_B1xZD1rtPr",
"iclr_2020_B1xZD1rtPr",
"iclr_2020_B1xZD1rtPr"
] |
iclr_2020_SkezP1HYvS | Diagonal Graph Convolutional Networks with Adaptive Neighborhood Aggregation | Graph convolutional networks (GCNs) and their variants have generalized deep learning methods into non-Euclidean graph data, bringing a substantial improvement on many graph mining tasks. In this paper, we revisit the mathematical foundation of GCNs and study how to extend their representation capacity. We discover that their performance can be improved with an adaptive neighborhood aggregation step. The core idea is to adaptively scale the output signal for each node and automatically train a suitable nonlinear encoder for the input signal. In this work, we present a new method named Diagonal Graph Convolutional Networks (DiagGCN) based on this idea. Importantly, one of the adaptive aggregation techniques—the permutations of diagonal matrices—used in DiagGCN offers a flexible framework to design GCNs and in fact, some of the most expressive GCNs, e.g., the graph attention network, can be reformulated as a particular instance of our model. Standard experiments on open graph benchmarks show that our proposed framework can consistently improve the graph classification accuracy when compared to state-of-the-art baselines. | reject | All three reviewers are consistently negative on this paper. Thus a reject is recommended. | train | [
"S1gSMbtpuH",
"rkxOTkZg5S",
"H1lFVxr0Fr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a so-called diagonal GCN method with adaptive neighborhood aggregation rules., namely, each node will be associated with an individual importance factor to re-scale the output signal of the graph convolutional operator, and also an adaptive encoder function is adopted. This is achieved by add... | [
3,
3,
3
] | [
4,
4,
1
] | [
"iclr_2020_SkezP1HYvS",
"iclr_2020_SkezP1HYvS",
"iclr_2020_SkezP1HYvS"
] |
iclr_2020_BkeGPJrtwB | Fairness with Wasserstein Adversarial Networks | Quantifying, enforcing and implementing fairness emerged as a major topic in machine learning. We investigate these questions in the context of deep learning. Our main algorithmic and theoretical tool is the computational estimation of similarities between probability, "à la Wasserstein", using adversarial networks. This idea is flexible enough to investigate different fairness constrained learning tasks, which we model by specifying properties of the underlying data generative process. The first setting considers bias in the generative model which should be filtered out. The second model is related to the presence of nuisance variables in the observations producing an unwanted bias for the learning task. For both models, we devise a learning algorithm based on approximation of Wasserstein distances using adversarial networks. We provide formal arguments describing the fairness enforcing properties of these algorithms in relation with the underlying fairness generative processes. Finally, we perform experiments, both on synthetic and real world data, to demonstrate empirically the superiority of our approach compared to state of the art fairness algorithms as well as concurrent GAN type adversarial architectures based on Jensen divergence. | reject | This paper presents an approach to enforce statistical fairness notions using adversarial networks. The reviewers point out several issues of the paper, including 1) their approach does not provably enforce criteria such as demographic parity, 2) lack of novelty and 3) poor presentation. | train | [
"HygO3_32YH",
"BJerxTNAYr",
"rkgtvbuH9S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors proposed a fairness-aware learning method.\nIn particular, the authors considered two kinds of fairness problem and designed two regularizers accordingly.\nEssentially, both of these two strategies learn classifiers and calibrate the distributions conditioned on protected variables joint... | [
1,
1,
1
] | [
3,
3,
4
] | [
"iclr_2020_BkeGPJrtwB",
"iclr_2020_BkeGPJrtwB",
"iclr_2020_BkeGPJrtwB"
] |
iclr_2020_SJefPkSFPr | Regulatory Focus: Promotion and Prevention Inclinations in Policy Search | The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths. In this work, we propose a family of estimates based on the order statistics over the path ensemble, which allows one to flexibly drive the learning process in a promotion focus or prevention focus. On top of this formulation, we systematically study the impacts of different regulatory focuses. Our findings reveal that regulatory focus, when chosen appropriately, can result in significant benefits. In particular, for the environments with sparse rewards, promotion focus would lead to more efficient exploration of the policy space; while for those where individual actions can have critical impacts, prevention focus is preferable. On various benchmarks, including MuJoCo continuous control, Terrain locomotion, Atari games, and sparse-reward environments, the proposed schemes consistently demonstrate improvement over mainstream methods, not only accelerating the learning process but also obtaining substantial performance gains. | reject | The authors take inspiration from regulatory fit theory and propose a new parameter for policy gradient algorithms in RL that can manage the "regulatory focus" of an agent. They hypothesize that this can affect performance in a problem-specific way, especially when trading off between broad exploration and risk. The reviewers expressed concerns about the usefulness of the proposed algorithm in practice and a lack of thorough empirical comparisons or theoretical results. Unfortunately, the authors did not provide a rebuttal, so no further discussion of these issues was possible; thus, I recommend to reject. | test | [
"SkxnCx6Ytr",
"H1xOY14FtB",
"r1eAZ3hCFB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the problem of advantage estimation for actor-critic RL algorithms. The key observation is that the advantage can be computed using 1-step returns, 2-step returns, etc. The paper suggests that, instead of choosing a fixed n, we should aggregate these advantageous together. If the maximum is taken... | [
3,
3,
3
] | [
4,
3,
3
] | [
"iclr_2020_SJefPkSFPr",
"iclr_2020_SJefPkSFPr",
"iclr_2020_SJefPkSFPr"
] |
iclr_2020_S1gmvyHFDS | Provenance detection through learning transformation-resilient watermarking | Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are hard to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by doing some post-processing (cropping images, shifting pitch in the audio etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a synthetic signal, even if the signal has been through several post-processing transformations, and improve upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the ℓ2 norm), we can even get formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples in the anonymous code submission link. | reject | This paper offers an interesting and potentially useful approach to robust watermarking. The reviewers are divided on the significance of the method. The most senior and experienced reviewer was the most negative. On balance, my assessment of this paper is borderline; given the number of more highly ranked papers in my pile, that means I have to assign "reject". | train | [
"rJe59wI7sS",
"H1gouI8QoH",
"BygcMIUmjr",
"ryeRqiZaKH",
"rklOwGk0YH",
"SyeSWv57cB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their review and detailed feedback. We will attempt to address each of the highlighted concerns.\n\n1. Re: how the detector and watermarking mechanisms are jointly obtained: We apologize for the confusion around this. Indeed, in our scheme the watermark generation and watermark detection ... | [
-1,
-1,
-1,
1,
6,
8
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"ryeRqiZaKH",
"SyeSWv57cB",
"rklOwGk0YH",
"iclr_2020_S1gmvyHFDS",
"iclr_2020_S1gmvyHFDS",
"iclr_2020_S1gmvyHFDS"
] |
iclr_2020_B1eXvyHKwS | THE EFFECT OF ADVERSARIAL TRAINING: A THEORETICAL CHARACTERIZATION | It has been widely shown that adversarial training (Madry et al., 2018) is effective in defending adversarial attack empirically. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for linear classification problem for a full range of adversarial radius ε. Specifically, we show that if the data themselves are "ε-strongly linearly-separable", adversarial
training with radius smaller than ε converges to the hard margin solution of SVM with a faster rate than standard training. If the data themselves are not "ε-strongly linearly-separable", we show that adversarial training with radius ε is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius ε has low confidence in each data point. Experiments corroborate our theoretical finding well. | reject | This paper studies adversarial training in the linear classification setting, and shows a rate of convergence for adversarial training of o(1/log T) to the hard margin SVM solution under a set of assumptions.
While 2 reviewers agree that the problem and the central result are somewhat interesting (though R3 is uncertain of the applicability to deep learning, I agree that useful insights can often be gleaned from studying the linear case), reviewers were critical of the degree of clarity and rigour in the writing, including notation, symbol reuse, repetitions/redundancies, and clarity surrounding the assumptions made.
No updates to the paper were made and reviewers did not feel their concerns were addressed by the rebuttals. I therefore recommend rejection, but would encourage the authors to continue refining their paper in order to showcase their results more clearly and didactically. | test | [
"r1gP-GPSqr",
"Bkeuqk7NtS",
"B1lw6TI3iS",
"rkgmGXzusS",
"rkxcy7zdoS",
"Syevazz_or",
"BygvfpMCFS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The aim of this paper is to provide a theoretical analysis of adversarial training under the linear classification setting. The main result states that, under many technical assumptions, adversarial training using gradient descent may converge to the hard margin SVM classifier with a fast rate. Here \"fast\" is n... | [
1,
1,
-1,
-1,
-1,
-1,
1
] | [
3,
1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_B1eXvyHKwS",
"iclr_2020_B1eXvyHKwS",
"rkxcy7zdoS",
"Bkeuqk7NtS",
"BygvfpMCFS",
"r1gP-GPSqr",
"iclr_2020_B1eXvyHKwS"
] |
iclr_2020_ryx4PJrtvS | A Copula approach for hyperparameter transfer learning | Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods. | reject | This paper tackles the problem of transferring learning between tasks when performing Bayesian hyperparameter optimization. In this setting, tasks can correspond to different datasets or different metrics. The proposed approach uses Gaussian copulas to synchronize the different scales of the considered tasks and uses Thompson Sampling from the resulting Gaussian Copula Process for selecting next hyperparameters.
The main weakness of the paper resides in the concerns raised about the experiments. First, the results are hard to interpret, which makes the reported performance difficult to assess. Moreover, the considered baselines may not be appropriate (they may be trivial). This might be due to a misunderstanding of the paper, which would align with the third major concern, namely the lack of clarity. These points could be addressed in a future version of the work, but it would need to be reviewed again and would therefore be too late for the current camera-ready.
Hence, I recommend rejecting this paper. | train | [
"HkeNJPwAFS",
"HJg70jIujS",
"rkxI0KIuoS",
"SJg7RcUdjH",
"r1gHWqIOjB",
"BkecWwptYr",
"rJgxW_iQ5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper tackles the problem of black-box hyperparameter optimization when multiple related optimization tasks are available simultaneously, performing transfer learning between tasks. Different tasks correspond to different datasets and/or metrics. Gaussian copulas are used to synchronize the different scales o... | [
3,
-1,
-1,
-1,
-1,
1,
6
] | [
3,
-1,
-1,
-1,
-1,
5,
1
] | [
"iclr_2020_ryx4PJrtvS",
"BkecWwptYr",
"iclr_2020_ryx4PJrtvS",
"HkeNJPwAFS",
"rJgxW_iQ5S",
"iclr_2020_ryx4PJrtvS",
"iclr_2020_ryx4PJrtvS"
] |
iclr_2020_BJe4PyrFvB | Imagining the Latent Space of a Variational Auto-Encoders | Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space. This allows us to ''imagine'' the information captured in the latent space. We argue that this is necessary to make a VAE into a truly generative model. We use our GAN to visualise the latent space of a standard VAE and of a β-VAE. | reject | The paper proposes a new method for improving the generative properties of the VAE model. The reviewers unanimously agree that this paper is not ready to be published, particularly being concerned about the unclear objective and potentially misleading claims of the paper. Multiple reviewers pointed out incorrect claims and statements without theoretical or empirical justification. The reviewers also mention that the paper does not provide new insights about the VAE model, as the MDL interpretation of the VAE is not new. | train | [
"Syg8gRa6tr",
"Skg_RY4tdB",
"SJeDpXb3jH",
"ByeO-pZ7jr",
"BJlR2h-Qjr",
"HylIK2ZQjr",
"B1eCVhZXjS",
"Bkxzxw96Kr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a new method for improving generative properties of VAE model. The idea is to train VAE in two stages: at first, train the vanilla VAE, then at the second stage freeze the encoder part and train the decoder part as a GAN generator with an additional regularizer which encourages cycle consistency... | [
3,
3,
-1,
-1,
-1,
-1,
-1,
1
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_BJe4PyrFvB",
"iclr_2020_BJe4PyrFvB",
"BJlR2h-Qjr",
"Skg_RY4tdB",
"Bkxzxw96Kr",
"Syg8gRa6tr",
"iclr_2020_BJe4PyrFvB",
"iclr_2020_BJe4PyrFvB"
] |
iclr_2020_SJeHwJSYvH | Learning De-biased Representations with Biased Representations | Many machine learning algorithms are trained and evaluated by splitting data from a single source into training and test sets. While such focus on in-distribution learning scenarios has led interesting advances, it has not been able to tell if models are relying on dataset biases as shortcuts for successful prediction (e.g., using snow cues for recognising snowmobiles). Such biased models fail to generalise when the bias shifts to a different class. The cross-bias generalisation problem has been addressed by de-biasing training data through augmentation or re-sampling, which are often prohibitive due to the data collection cost (e.g., collecting images of snowmobile on a desert) and the difficulty of quantifying or expressing biases in the first place. In this work, we propose a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design. This tactic is feasible in many scenarios where it is much easier to define a set of biased representations than to define and quantify bias. Our experiments and analyses show that our method discourages models from taking bias shortcuts, resulting in improved performances on de-biased test data. | reject | This paper provides and analyzes an interesting approach to "de-biasing" a predictor from its training set. The work is valuable, however unfortunately just below the borderline for this year. I urge the authors to continue their investigations, for instance further addressing the reviewer comments below (some of which are marked as coming after the end of the feedback period). | train | [
"S1g5ZEg6KB",
"B1gpPXAK5B",
"HJl8kkyhjH",
"H1gotx12iH",
"HklOfkk3or",
"Bke_IxkhiS",
"rJlz_k13or",
"BJeTKL-siS",
"ByewI_D15B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### Summary \n\nThe paper proposes a method for regularizing neural networks to mitigate certain known biases from the representations learned by CNNs. The authors look at the setting in which the distribution of biases in the train and test set remains the same, but the distribution of targets given biases chang... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_SJeHwJSYvH",
"iclr_2020_SJeHwJSYvH",
"B1gpPXAK5B",
"Bke_IxkhiS",
"HJl8kkyhjH",
"S1g5ZEg6KB",
"ByewI_D15B",
"iclr_2020_SJeHwJSYvH",
"iclr_2020_SJeHwJSYvH"
] |
iclr_2020_B1xwv1StvS | Few-shot Learning by Focusing on Differences | Few-shot classification may involve differentiating data that belongs to a different level of labels granularity. Compounded by the fact that the number of available labeled examples are scarce in the novel classification set, relying solely on the loss function to implicitly guide the classifier to separate data based on its label might not be enough; few-shot classifier needs to be very biased to perform well. In this paper, we propose a model that incorporates a simple prior: focusing on differences by building a dissimilar set of class representations. The model treats a class representation as a vector and removes its component that is shared among closely related class representatives. It does so through the combination of learned attention and vector orthogonalization. Our model works well on our newly introduced dataset, Hierarchical-CIFAR, that contains different level of labels granularity. It also substantially improved the performance on fine-grained classification dataset, CUB; whereas staying competitive on standard benchmarks such as mini-Imagenet, Omniglot, and few-shot dataset derived from CIFAR. | reject | Main content:
[Blind review #3] The authors propose a metric-based model for few-shot learning. The goal of the proposed technique is to incorporate a prior that better highlights the dissimilarity between closely related class prototypes. Thus, the proposed paper is related to prototypical neural networks (use of a prototype to represent a class) but differs from them by using inner-product scoring as a similarity measure instead of the Euclidean distance. There is also a close similarity between the proposed method and matching networks.
[Blind review #2] The stated contributions of the paper are: (1) a method for performing few-shot learning and (2) an approach for building harder few-shot learning datasets from existing datasets. The authors describe a model for creating a task-aware embedding for different novel sets (for different image classification settings) using a nonlinear self-attention-like mechanism applied to the centroid of the global embeddings for each class. The resulting embeddings are used per class with an additional attention layer applied on the embeddings from the other classes to identify closely-related classes and consider the part of the embedding orthogonal to the attention-weighted-average of these closely-related classes. They compare the accuracy of their model vs others in the 1-shot and 5-shot setting on various datasets, including a derived dataset from CIFAR which they call Hierarchical-CIFAR.
--
Discussion:
All reviews agree on a weak reject.
--
Recommendation and justification:
While the ideas appear to be on a good track, the paper itself is poorly written - as one review put it, it reads more like notes to themselves than a well-written document for the ICLR audience. | train | [
"SylXjPS3ir",
"SJgTwPrnjB",
"H1eexvS3sH",
"ryx1J6kTFB",
"S1esrDQ0YS",
"S1l8TWFRFS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for reviewing our paper.\n\n\nSimilarity with Prototypical Networks and Matching Network\n---\nOur proposed method is similar to the prototypical networks (Snell et al., 2017) -- and subsequently Mensink et al. (2013) -- in its use of mean representation of class (or prototypes). The similarity stops there,... | [
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"ryx1J6kTFB",
"S1esrDQ0YS",
"S1l8TWFRFS",
"iclr_2020_B1xwv1StvS",
"iclr_2020_B1xwv1StvS",
"iclr_2020_B1xwv1StvS"
] |
iclr_2020_SJe_D1SYvr | Partial Simulation for Imitation Learning | Model-based imitation learning methods require full knowledge of the transition kernel for policy evaluation. In this work, we introduce the Expert Induced Markov Decision Process (eMDP) model as a formulation of solving imitation problems using Reinforcement Learning (RL), when only partial knowledge about the transition kernel is available. The idea of eMDP is to replace the unknown transition kernel with a synthetic kernel that: a) simulate the transition of state components for which the transition kernel is known (s_r), and b) extract from demonstrations the state components for which the kernel is unknown (s_u). The next state is then stitched from the two components: s={s_r,s_u}. We describe in detail the recipe for building an eMDP and analyze the errors caused by its synthetic kernel. Our experiments include imitation tasks in multiplayer games, where the agent has to imitate one expert in the presence of other experts for whom we cannot provide a transition model. We show that combining a policy gradient algorithm with our model achieves superior performance compared to the simulation-free alternative. | reject | The paper introduces the concept of an Expert Induced MDP (eMDP) to address imitation learning settings where environment dynamics are part known / part unknown. Based on the formulation a model-based imitation learning approach is derived and the authors obtain theoretical guarantees. Empirical validation focuses on comparison to behavior cloning. Reviewers raised concerns about the size of the contribution. For example, it is unclear to what degree the assumptions made here would hold in practical settings. | train | [
"r1xJuNzPYH",
"SygLpncfjS",
"Syeg935GoS",
"H1eDzhcfiB",
"rkgvO5taFB",
"SkeQKHjaYH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"* Paper summary.\nThe paper considers an IL problem where partial knowledge about the transition probability of the MDP is available. To use of this knowledge, the paper proposes an expert induced MDP (eMDP) model where the unknown part of transition probability is modeled as-is from demonstrations. Based on eMDP,... | [
3,
-1,
-1,
-1,
6,
1
] | [
4,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_SJe_D1SYvr",
"r1xJuNzPYH",
"rkgvO5taFB",
"SkeQKHjaYH",
"iclr_2020_SJe_D1SYvr",
"iclr_2020_SJe_D1SYvr"
] |
iclr_2020_ryguP1BFwr | Walking the Tightrope: An Investigation of the Convolutional Autoencoder Bottleneck | In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck.
Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox.
Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning.
Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE.
We demonstrate that increased height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks.
The number of channels in the bottleneck, on the other hand, is secondary in importance.
Furthermore, we show empirically that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input.
Copying does not occur, despite training the CAE for 1,000 epochs on a tiny (~ 600 images) dataset.
We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs. | reject | The paper investigates the effect of convolutional information bottlenecks on generalization. The paper concludes that the width and height of the bottleneck can greatly influence generalization, whereas the number of channels has a smaller effect. The paper also shows evidence against a common belief that CAEs with a sufficiently large bottleneck will learn an identity map.
During the rebuttal period, there was a long discussion mainly about the sufficiency of the experimental setup and the trustworthiness of the claims made in the paper. A paper that empirically investigates an existing method or belief should include extensive experiments of high quality to enable general conclusions. I'm thus recommending rejection, but encourage the authors to improve the experiments and resubmit. | train | [
"Byg17qZy9S",
"HyxVI542sr",
"HJeMC9mnoS",
"HJeaaef2jH",
"H1xTS8cptB",
"HygvX-XsiH",
"HyeYdRRvsS",
"BJlG9CCwsr",
"BJezspRDjH",
"SJgULT0Dir",
"S1li_V7ecS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors evaluate convolutional autoencoders (CAE) by varying the size (width & height) and depth of the bottleneck layer on three datasets and compare test and training performance. They furthermore evaluate the quality of the bottleneck activations for linear classification. The authors also investigate the b... | [
6,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_ryguP1BFwr",
"HJeMC9mnoS",
"HJeaaef2jH",
"HygvX-XsiH",
"iclr_2020_ryguP1BFwr",
"HyeYdRRvsS",
"H1xTS8cptB",
"H1xTS8cptB",
"Byg17qZy9S",
"S1li_V7ecS",
"iclr_2020_ryguP1BFwr"
] |
iclr_2020_rkxKwJrKPS | QXplore: Q-Learning Exploration by Maximizing Temporal Difference Error | A major challenge in reinforcement learning is exploration, especially when reward landscapes are sparse. Several recent methods provide an intrinsic motivation to explore by directly encouraging agents to seek novel states. A potential disadvantage of pure state novelty-seeking behavior is that unknown states are treated equally regardless of their potential for future reward. In this paper, we propose an exploration objective using the temporal difference error experienced on extrinsic rewards as a secondary reward signal for exploration in deep reinforcement learning. Our objective yields novelty-seeking in the absence of extrinsic reward, while accelerating exploration of reward-relevant states in sparse (but nonzero) reward landscapes. This objective draws inspiration from dopaminergic pathways in the brain that influence animal behavior. We implement the objective with an adversarial Q-learning method in which Q and Qx are the action-value functions for extrinsic and secondary rewards, respectively. Secondary reward is given by the absolute value of the TD-error of Q. Training is off-policy, based on a replay buffer containing a mix of trajectories sampled using Q and Qx. We characterize performance on a set of continuous control benchmark tasks, and demonstrate comparable or faster convergence on all tasks when compared with other state-of-the-art exploration methods. | reject | There is insufficient support to recommend accepting this paper. Although the authors provided detailed responses, none of the reviewers changed their recommendation from reject. One of the main criticisms, even after revision, concerned the quality of the experimental evaluation. The reviewers criticized the lack of important baselines, and remained unsure about adequate hyperparameter tuning in the revision. The technical exposition lacked a sober discussion of limitations. The paper would be greatly strengthened by the addition of a theoretical justification of the proposed approach. In the end, the submitted reviews should be able to help the authors strengthen this paper. | train | [
"rkl5dt-ycH",
"rJetimXYiH",
"HyrY1o2oH",
"HylANqQFjr",
"Byg2jD7FjS",
"BJeRhNmYjH",
"SygTs5emKH",
"H1xbZ_dpYr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a novel exploration method wherein an additional Q-function/policy is learned that treats abs(TD-error) of the standard Q-function as its reward function. Both policies are executed in parallel and experience is shared between them for off-policy learning. They demonstrate their method's superi... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_rkxKwJrKPS",
"iclr_2020_rkxKwJrKPS",
"rJetimXYiH",
"SygTs5emKH",
"H1xbZ_dpYr",
"rkl5dt-ycH",
"iclr_2020_rkxKwJrKPS",
"iclr_2020_rkxKwJrKPS"
] |
iclr_2020_HylKvyHYwS | Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment | We propose a novel framework for avoiding the misclassification of data by using a framework of learning with rejection and adversarial examples. Recent developments in machine learning have opened new opportunities for industrial innovations such as self-driving cars. However, many machine learning models are vulnerable to adversarial attacks and industrial practitioners are concerned about accidents arising from misclassification. To avoid critical misclassifications, we define a sample that is likely to be mislabeled as a suspicious sample. Our main idea is to apply a framework of learning with rejection and adversarial examples to assist in the decision making for such suspicious samples. We propose two frameworks, learning with rejection under adversarial attacks and learning with protection. Learning with rejection under adversarial attacks is a naive extension of the learning with rejection framework for handling adversarial examples. Learning with protection is a practical application of learning with rejection under adversarial attacks. This algorithm transforms the original multi-class classification problem into a binary classification for a specific class, and we reject suspicious samples to protect a specific label. We demonstrate the effectiveness of the proposed method in experiments. | reject | The paper addresses the setting of learning with rejection while incorporating the ideas from learning with adversarial examples to tackle adversarial attacks. While the reviewers acknowledged the importance to study learning with rejection in this setting, they raised several concerns: (1) lack of technical contribution -- see R1’s and R2’s related references, see R3’s suggestion on designing c(x); (2) insufficient empirical evidence -- see R3’s comment about the sensitivity experiment on the strength of the attack, see R1’s suggestion to compare with a baseline that learns the rejection function such as SelectiveNet; (3) clarity of presentation -- see R2’s suggestions how to improve clarity.
Among these, (3) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (2) make it very difficult to assess the benefits of the proposed approach, and were viewed by AC as critical issues.
The AC can confirm that all three reviewers have read the author responses and have revised the final ratings. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
| train | [
"Sye1YXKatH",
"SylCPVKnFH",
"S1xcdBLAtB",
"BkeLyHI3oH",
"rkgA_QI3sr",
"rJelEpUniH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper proposes a framework for learning with rejection using ideas from adversarial examples. The essential idea is, while predicting on a point x, we can reject classifying the point if it has an adversarial example very close to it. So, the algorithm can be simply summarized as,\n1. Learn a classifier functi... | [
3,
3,
3,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
-1,
-1
] | [
"iclr_2020_HylKvyHYwS",
"iclr_2020_HylKvyHYwS",
"iclr_2020_HylKvyHYwS",
"S1xcdBLAtB",
"Sye1YXKatH",
"SylCPVKnFH"
] |
iclr_2020_H1gcw1HYPr | AlignNet: Self-supervised Alignment Module | The natural world consists of objects that we perceive as persistent in space and time, even though these objects appear, disappear and reappear in our field of view as we move. This can be attributed to our notion of object persistence -- our knowledge that objects typically continue to exist, even if we can no longer see them -- and our ability to track objects. Drawing inspiration from the psychology literature on `sticky indices', we propose the AlignNet, a model that learns to assign unique indices to new objects when they first appear and reassign the index to subsequent instances of that object. By introducing a persistent object-based memory, the AlignNet may be used to keep track of objects across time, even if they disappear and reappear later. We implement the AlignNet as a graph network applied to a bipartite graph, in which the input nodes are objects from two sets that we wish to align. The network is trained to predict the edges which connect two instances of the same object across sets. The model is also capable of identifying when there are no matches and dealing with these cases. We perform experiments to show the model's ability to deal with the appearance, disappearance and reappearance of objects. Additionally, we demonstrate how a persistent object-based memory can help solve question-answering problems in a partially observable environment. | reject | This paper proposes a network architecture which labels object with an identifier that it is trained to retain across subsequent instances of that same object.
After discussion, the reviewers agree that the approach is interesting, well-motivated and written, and novel. However, there was unanimous concern about the experimental evaluation, so the paper does not appear to be ready for publication just yet, and I am recommending rejection. | train | [
"r1x9jiPKiS",
"SJx1DjvYjr",
"BygXLcPKiB",
"ByxlfiUntH",
"B1gLTsN2FB",
"B1x_gbizcr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors would like to thank reviewer three for their feedback and for appreciating our contributions. We agree that experiments on real-world datasets would be interesting, however the motivation for using a symbolic dataset was that we were better able to focus on the alignment problem, rather than the proble... | [
-1,
-1,
-1,
6,
3,
1
] | [
-1,
-1,
-1,
5,
3,
4
] | [
"ByxlfiUntH",
"B1gLTsN2FB",
"B1x_gbizcr",
"iclr_2020_H1gcw1HYPr",
"iclr_2020_H1gcw1HYPr",
"iclr_2020_H1gcw1HYPr"
] |
iclr_2020_HyeqPJHYvH | Stochastic Latent Residual Video Prediction | Video prediction is a challenging task: models have to account for the inherent uncertainty of the future. Most works in the literature are based on stochastic image-autoregressive recurrent networks, raising several performance and applicability issues. An alternative is to use fully latent temporal models which untie frame synthesis and dynamics. However, no such model for video prediction has been proposed in the literature yet, due to design and training difficulties. In this paper, we overcome these difficulties by introducing a novel stochastic temporal model. It is based on residual updates of a latent state, motivated by discretization schemes of differential equations. This first-order principle naturally models video dynamics as it allows our simpler, lightweight, interpretable, latent model to outperform prior state-of-the-art methods on challenging datasets. | reject | The paper proposes a method for learning a latent dynamics model for videos. The main idea is to learn a latent representation and model the dynamics of the latent features via residual connection motivated by ODE. The architectural choice of residual connection itself is not new as many prior works have employed "skip connections" in hidden representations but the notion of connecting this with ODE and factoring time as input into the residual function seems a new idea. The experimental results show the promise of the proposed method on moving MNIST, KTH, and BAIR datasets. The experiments on different frame rates are also nice. In terms of weakness, the evaluation is performed on relatively simple domains (e.g., moving MNIST and KTH) with static backgrounds and the improvement on BAIR dataset (which is not considered as a difficult benchmark) in terms of FVD is not as clear. For the BAIR dataset, it's unclear how the proposed method will handle the interactions between the robot arm and background objects due to the modeling assumption (i.e., static background). In this sense, content swap results on BAIR dataset look quite anecdotal, and the significance is limited. For improvement, I would suggest adding evaluations on other challenging domains, such as Human 3.6M (where human motions are much more uncertain compared to KTH) and other Robot datasets with more complex robot-object interactions. Overall, the paper proposes an interesting architecture with promising results on relatively simple datasets, but the advantage over existing SOTA methods on challenging benchmarks is unclear yet.
| train | [
"r1gsleMpKS",
"HJxrNhLnjS",
"rygBpsU2oB",
"BJePei82sr",
"r1xmnYInjH",
"S1efBB1AYS",
"r1xt2GQpFH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Contributions: this submission proposes a video pixel generation framework with the goal to decouple visual appearance and dynamics. The latent dynamics are modeled with a latent residual dynamics model. Empirical evaluations on moving MNIST show that the proposed residual dynamics model outperform MLP or GRU. On ... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_HyeqPJHYvH",
"r1gsleMpKS",
"r1xt2GQpFH",
"S1efBB1AYS",
"iclr_2020_HyeqPJHYvH",
"iclr_2020_HyeqPJHYvH",
"iclr_2020_HyeqPJHYvH"
] |
iclr_2020_SkloDJSFPH | Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling | We propose a generic confidence-based approximation that can be plugged in and simplify an auto-regressive generation process with a proved convergence. We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner using an efficient predictor. Given the past samples and future priors, the mother AR model can post-process the priors while the accompanied confidence predictor decides whether the current sample needs a resampling or not. Thanks to the i.i.d. assumption, the post-processing can update each sample in a parallel way, which remarkably accelerates the mother model. Our experiments on different data domains including sequences and images show that the proposed method can successfully capture the complex structures of the data and generate the meaningful future samples with lower computational cost while preserving the sequential relationship of the data. | reject | The paper presents a technique for approximately sampling from autoregressive models using something like a proposal distribution and a critic. The idea is to chunk the output into blocks and, for each block, predict each element in the block independently from a proposal network, ask a critic network whether the block looks sensible and, if not, resample the block using the autoregressive model itself.
The idea in the paper is interesting, but the paper would benefit from
- a better relation to existing methods
- a better experimental section, which details the hyper-parameters of the algorithm (and how they were chosen) and which provides error bars on all plots (and tables) | train | [
"H1l8W_ndtS",
"ByxpVK-jtB",
"ryxp5HFpFr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a technique for approximately sampling from autoregressive models using something like a a proposal distribution and a critic. The idea is to chunk the output into blocks and, for each block, predict each element in the block independently from a proposal network, ask a critic network whether th... | [
6,
3,
6
] | [
4,
1,
4
] | [
"iclr_2020_SkloDJSFPH",
"iclr_2020_SkloDJSFPH",
"iclr_2020_SkloDJSFPH"
] |
iclr_2020_BJlowyHYPr | CloudLSTM: A Recurrent Neural Model for Spatiotemporal Point-cloud Stream Forecasting | This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms.
We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting. Our results, obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models. | reject | The paper presents an approach to forecasting over temporal streams of permutation-invariant data such as point clouds. The approach is based on an operator (DConv) that is related to continuous convolution operators such as X-Conv and others. The reviews are split. After the authors' responses, concerns remain and two ratings remain "3". The AC agrees with the concerns and recommends against accepting the paper. | train | [
"rJxT5QF55S",
"HkgqkZFBjB",
"H1goK1W3jS",
"H1gC3MZcsr",
"H1eRY1JvjS",
"rJgfGGKrsB",
"S1eohztSjH",
"r1l5J0OSsS",
"BkgVBUphtH",
"ByeeBVfN9S"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"=========== Update after rebuttal\n\nThanks for the clarifications and the update. I recommend acceptance of the paper and updated to 8.\n\nLast comment: please still improve the appearance of Figure 4 by using a more diverse set of marker shapes as well as overlay and offset tricks -- see https://www.cs.ubc.ca/~s... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_BJlowyHYPr",
"ByeeBVfN9S",
"H1gC3MZcsr",
"rJgfGGKrsB",
"iclr_2020_BJlowyHYPr",
"BkgVBUphtH",
"BkgVBUphtH",
"rJxT5QF55S",
"iclr_2020_BJlowyHYPr",
"iclr_2020_BJlowyHYPr"
] |
iclr_2020_Bye3P1BYwr | Deep End-to-end Unsupervised Anomaly Detection | This paper proposes a novel method to detect anomalies in large datasets under a fully unsupervised setting. The key idea behind our algorithm is to learn the representation underlying normal data. To this end, we leverage the latest clustering
technique suitable for handling high dimensional data. This hypothesis provides a reliable starting point for normal data selection. We train an autoencoder from the normal data subset, and iterate between hypothesizing normal candidate subset
based on clustering and representation learning. The reconstruction error from the learned autoencoder serves as a scoring function to assess the normality of the data. Experimental results on several public benchmark datasets show that the proposed method outperforms state-of-the-art unsupervised techniques and is comparable to semi-supervised techniques in most cases. | reject | The authors propose an approach for anomaly detection in the setting where the training data includes both normal and anomalous data. Their approach is a fairly straightforward extension of existing ideas, in which they iterate between clustering the data into normal vs. anomalous and learning an autoencoder representation of normal data that is then used to score normality of new data. The results are promising, but the experiments are fairly limited. The authors argue that their experimental settings follow those of prior work, but I think that for such an incremental contribution, more empirical work should be done, regardless of the limitations of particular prior work. | train | [
"SyxMGkB2or",
"SklXH3l2jB",
"rylzfReniH",
"rylFrTg3sH",
"rkxuJ6x3sS",
"SJlQMrZjtS",
"B1g4zUaotB",
"SJx1fc7aKr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to know. The authors still have one more day to respond/revise further.... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2020_Bye3P1BYwr",
"iclr_2020_Bye3P1BYwr",
"SJlQMrZjtS",
"B1g4zUaotB",
"SJx1fc7aKr",
"iclr_2020_Bye3P1BYwr",
"iclr_2020_Bye3P1BYwr",
"iclr_2020_Bye3P1BYwr"
] |
iclr_2020_S1gTwJSKvr | OPTIMAL BINARY QUANTIZATION FOR DEEP NEURAL NETWORKS | Quantizing weights and activations of deep neural networks results in significant improvement in inference efficiency at the cost of lower accuracy. A source of the accuracy gap between full precision and quantized models is the quantization error.
In this work, we focus on binary quantization, in which values are mapped to -1 and 1. We introduce several novel quantization algorithms: optimal 2-bits, optimal ternary, and greedy. Our quantization algorithms can be implemented efficiently on the hardware using bitwise operations. We present proofs to show that our proposed methods are optimal, and also provide empirical error analysis. We conduct experiments on the ImageNet dataset and show a reduced accuracy gap when using the proposed optimal quantization algorithms. | reject | This paper proposes to quantize the weights of neural networks so as to minimize the L_2 loss between the quantized values and the full-precision ones. The paper has limited novelty, as many of the solutions presented in the paper have already been discovered in the literature. During the discussion, the reviewers agree that it is an incremental contribution. Parts of the paper can also be clarified, particularly on the optimality of the solution, assumptions used in the approximation, and some of the experimental results. Experimental results can also be made more convincing by adding comparison with the more recent quantization methods. | train | [
"SkgXuMtDnB",
"SkxFoCbjjH",
"HkxLCbMcsS",
"rJeVGrbcir",
"Syg3lE-qir",
"rylLGm-9iB",
"BkxkiR7o_r",
"SklAmT2cKS",
"SygD-H4pKH"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to quantize the weights of neural networks that can minimize the L_2 loss between the quantized values and the full-precision ones. The authors propose solutions for optimal 1-bit/ternary/2-bit quantization, as well as a greedy algorithm to approximate the optimal k-bit quantization. Experimen... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"iclr_2020_S1gTwJSKvr",
"HkxLCbMcsS",
"Syg3lE-qir",
"BkxkiR7o_r",
"SklAmT2cKS",
"SygD-H4pKH",
"iclr_2020_S1gTwJSKvr",
"iclr_2020_S1gTwJSKvr",
"iclr_2020_S1gTwJSKvr"
] |
iclr_2020_BkgCv1HYvB | Generating Multi-Sentence Abstractive Summaries of Interleaved Texts | In multi-participant postings, as in online chat conversations, several conversations or topic threads may take place concurrently. This leads to difficulties for readers reviewing the postings in not only following discussions but also in quickly identifying their essence. A two-step process, disentanglement of interleaved posts followed by summarization of each thread, addresses the issue, but disentanglement errors are propagated to the summarization step, degrading the overall performance. To address this, we propose an end-to-end trainable encoder-decoder network for summarizing interleaved posts. The interleaved posts are encoded hierarchically, i.e., word-to-word (words in a post) followed by post-to-post (posts in a channel). The decoder also generates summaries hierarchically, thread-to-thread (generate thread representations) followed by word-to-word (i.e., generate summary words). Additionally, we propose a hierarchical attention mechanism for interleaved text. Overall, our end-to-end trainable hierarchical framework enhances performance over a sequence to sequence framework by 8-10% on multiple synthetic interleaved texts datasets. | reject | This paper proposes an end-to-end approach for abstractive summarization of on-line discussions. The approach is contrary to the previous work that first disentangles discussions, and the summarizes them, and aims to tackle transfer of disentanglement errors in the pipeline. The proposed method is a hierarchical encoder - hierarchical decoder architecture. Experimental results on two corpora demonstrate the benefits of the proposed approach. The reviewers are concerned about the synthetic nature of the datasets, limited novelty given the previous work, lack of clear explanation of whether disentanglement is actually needed for summarization, and simpler baselines in comparison to the state-of-the-art. Hence, I recommend rejecting the paper. | train | [
"S1gRaXDjoB",
"ryxsnA9wiB",
"Hyx2id9wjr",
"ryxIU45wjB",
"Bye6JV5cYr",
"HklrJnbpFr",
"BklB71JS5H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to let the reviewer know that there was a mix up while running the disentangled text experiments: an incorrectly sampled corpus was used. Therefore, we reran the disentangled experiments with the correct 150k samples of the hard Pubmed corpus and updated the results (see Table 4). The results indica... | [
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"Hyx2id9wjr",
"Bye6JV5cYr",
"HklrJnbpFr",
"BklB71JS5H",
"iclr_2020_BkgCv1HYvB",
"iclr_2020_BkgCv1HYvB",
"iclr_2020_BkgCv1HYvB"
] |
iclr_2020_HJx7uJStPH | Music Source Separation in the Waveform Domain | Source separation for music is the task of isolating contributions, or stems, from different instruments recorded individually and arranged together to form a song. Such components include voice, bass, drums and any other accompaniments. While end-to-end models that directly generate the waveform are state-of-the-art in many audio synthesis problems, the best multi-instrument source separation models generate masks on the magnitude spectrum and achieve performances far above current end-to-end, waveform-to-waveform models. We present an in-depth analysis of a new architecture, which we will refer to as Demucs, based on a (transposed) convolutional autoencoder, with a bidirectional LSTM at the bottleneck layer and skip-connections as in U-Networks (Ronneberger et al., 2015). Compared to the state-of-the-art waveform-to-waveform model, Wave-U-Net (Stoller et al., 2018), the main features of our approach in addition to the bi-LSTM are the use of transposed convolution layers instead of upsampling-convolution blocks, the use of gated linear units, exponentially growing the number of channels with depth, and a new careful initialization of the weights. Results on the MusDB dataset show that our architecture achieves a signal-to-distortion ratio (SDR) nearly 2.2 points higher than the best waveform-to-waveform competitor (from 3.2 to 5.4 SDR). This makes our model match the state-of-the-art performances on this dataset, bridging the performance gap between models that operate on the spectrogram and end-to-end approaches. | reject | The paper proposes a waveform-to-waveform music source separation system. Experimental justification shows the proposed model achieved the best SDR among all the existing waveform-to-waveform models, and obtained similar performance to spectrogram based ones. The paper is clearly written and the experimental evaluation and ablation study are thorough. But the main concern is the limited novelty: it is an improvement over the existing Wave-U-Net, it added some changes to the existing model architecture for better modeling the waveform data and compared masking vs. synthesis for music source separation. | train | [
"Hyld4J2niB",
"HJgSX4chjB",
"HJgR9oY2jr",
"S1lrCvKhsB",
"SJlSfDunjB",
"rkl1c1t3sr",
"SJg58JKhsH",
"BJllrDO3sr",
"H1g-lHiCKB",
"SJxIHvP15H",
"Hkl_LuUQcS"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"There is significantly more interference with Demucs, you can find the SIR on page 15 of the paper, in the appendix. SIR for Demucs is 10.39, 11.47 for Tasnet.\nWe don't pretend that Demucs is better than Tasnet, but we support that the two have different approaches and capabilities, one being better at separation... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"HJgSX4chjB",
"HJgR9oY2jr",
"S1lrCvKhsB",
"BJllrDO3sr",
"iclr_2020_HJx7uJStPH",
"H1g-lHiCKB",
"Hkl_LuUQcS",
"SJxIHvP15H",
"iclr_2020_HJx7uJStPH",
"iclr_2020_HJx7uJStPH",
"iclr_2020_HJx7uJStPH"
] |
iclr_2020_SygEukHYvB | CEB Improves Model Robustness | We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness. CEB is an easy strategy to implement and works in tandem with data augmentation procedures. We report results of a large-scale adversarial robustness study on CIFAR-10, as well as the IMAGENET-C Common Corruptions Benchmark. | reject | This paper proposes CEB, Conditional Entropy Bottleneck, as a way to improve the robustness of a model against adversarial attacks and noisy data. The model is tested empirically using several experiments and various datasets.
We appreciate the authors for submitting the paper to ICLR and providing detailed responses to the reviewers' comments and concerns. After the initial reviews and rebuttal, we had extensive discussions to judge whether the contributions are clear and sufficient for publication. In particular, we discussed the overlap with a previous (arXiv) paper and decided that the overlap should not be considered because it is not published at a conference or journal. Plus the paper makes additional contributions.
However, reviewers in the end did not think the paper showed sufficient explanation and proof of why and how this model works, and whether this approach improves upon other state-of-the-art adversarial defense approaches.
Again, thank you for submitting to ICLR, and I hope to see an improved version in a future publication. | train | [
"rJxEZfH3iH",
"BJluZv0jiH",
"ryx9RUAiiH",
"HJx49U0sjB",
"B1e4B8Assr",
"r1gbwXWiFr",
"B1xkJ3EhtB",
"SJlrME_6Yr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The reference [A2] in the review and reply above is:\n\n[A2] Zhang et al., Defending against Whitebox Adversarial Attacks via Randomized Discretization, In AISTATS, 2019.",
"Thank you for your review. We have addressed your minor comments in the updated text. We agree that the early figures are difficult to read... | [
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"HJx49U0sjB",
"r1gbwXWiFr",
"B1xkJ3EhtB",
"SJlrME_6Yr",
"iclr_2020_SygEukHYvB",
"iclr_2020_SygEukHYvB",
"iclr_2020_SygEukHYvB",
"iclr_2020_SygEukHYvB"
] |
iclr_2020_Hke4_JrYDr | Global-Local Network for Learning Depth with Very Sparse Supervision | Natural intelligent agents learn to perceive the three dimensional structure of the world without training on large datasets and are unlikely to have the precise equations of projective geometry hard-wired in the brain. Such skill would also be valuable to artificial systems in order to avoid the expensive collection of labeled datasets, as well as tedious tuning required by methods based on multi-view geometry. Inspired by natural agents, who interact with the environment via visual and haptic feedback, this paper explores a new approach to learning depth from images and very sparse depth measurements, just a few pixels per image. To learn from such extremely sparse supervision, we introduce an appropriate inductive bias by designing a specialized global-local network architecture. Experiments on several datasets show that the proposed model can learn monocular dense depth estimation when trained with very sparse ground truth, even a single pixel per image. Moreover, we find that the global parameters extracted by the network are predictive of the metric agent motion. | reject | This paper proposes a deep network architecture for learning to predict depth from images with sparsely depth-labeled pixels.
This paper was subject to some discussion, since the reviewers felt that the approach was interesting and the problem well-motivated. Some of the concerns about the experimental evaluation (especially from R1) were resolved by the authors' rebuttal, but ultimately the reviewers felt the paper was not yet ready for publication. | train | [
"Skgsjr36Kr",
"H1gVMVLVoB",
"rkeMz0AHsB",
"Sye0k_k8sH",
"HyemLN24jr",
"H1l1jzoOtB",
"S1ldszjk9B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a novel global-local network, which can be trained with extremely sparse ground truth, to predict dense depth. Though widely applied on the task of segmentation, the use of only uncalibrated input and extremely sparse label for depth estimation is novel. By incorporating optical flow and decoup... | [
6,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_Hke4_JrYDr",
"H1l1jzoOtB",
"S1ldszjk9B",
"iclr_2020_Hke4_JrYDr",
"Skgsjr36Kr",
"iclr_2020_Hke4_JrYDr",
"iclr_2020_Hke4_JrYDr"
] |
iclr_2020_rJeU_1SFvr | LOGAN: Latent Optimisation for Generative Adversarial Networks | Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynamics by enhancing interactions between the discriminator and the generator. We develop supporting theoretical analysis from the perspectives of differentiable games and stochastic approximation. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance for the ImageNet (128 x 128) dataset. Our model achieves an Inception Score (IS) of 148 and an Frechet Inception Distance (FID) of 3.4, an improvement of 17% and 32% in IS and FID respectively, compared with the baseline BigGAN-deep model with the same architecture and number of parameters. | reject | The authors propose to overcome challenges in GAN training through latent optimization, i.e. updating the latent code, motivated by natural gradients. The authors show improvement over previous methods. The work is well-motivated, but in my opinion, further experiments and comparisons need to be made before the work can be ready for publication.
The authors write that "Unfortunately, SGA is expensive to scale because computing the second-order derivatives with respect to all parameters is expensive" and further "Crucially, latent optimization approximates SGA using only second-order derivatives with respect to the latent z and parameters of the discriminator and generator separately. The second-order terms involving parameters of both the discriminator and the generator – which are extremely expensive to compute – are not used. For latent z’s with dimensions typically used in GANs (e.g., 128–256, orders of magnitude less than the number of parameters), these can be computed efficiently. In short, latent optimization efficiently couples the gradients of the discriminator and generator, as prescribed by SGA, but using the much lower-dimensional latent source z which makes the adjustment scalable."
However, this is not true. Computing the Hessian vector product is not that expensive. In fact, it can be computed at a cost comparable to gradient evaluations using automatic differentiation (Pearlmutter (1994)). In frameworks such as PyTorch, this can be done efficiently using double backpropagation, so only twice the cost. Based on the above, one of the main claims of improvement over existing methods, which is furthermore not investigated experimentally, is false.
It is unacceptable that the authors do not compare with SGA: both in terms of quality and computational cost since that is the premise of the paper. The authors also miss recent works that successfully ran methods with Hessian-vector products: https://arxiv.org/abs/1905.12103 https://arxiv.org/abs/1910.05852 | train | [
"ByembX1QjB",
"rkef_7JQjH",
"HJx7zgxTFr",
"BJxDhRyCtS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments. \n\nWe will open source the CIFAR10 model and training code for the camera-ready version. In the meantime, we would be glad to answer your questions about the implementation, as well as to add any modelling details that might be missing. We appreciate your effort in attempting to repli... | [
-1,
-1,
6,
6
] | [
-1,
-1,
4,
4
] | [
"BJxDhRyCtS",
"HJx7zgxTFr",
"iclr_2020_rJeU_1SFvr",
"iclr_2020_rJeU_1SFvr"
] |
iclr_2020_r1gIdySFPH | Skew-Fit: State-Covering Self-Supervised Reinforcement Learning | Autonomous agents that must exhibit flexible and broad capabilities will need to be equipped with large repertoires of skills. Defining each skill with a manually-designed reward function limits this repertoire and imposes a manual engineering burden. Self-supervised agents that set their own goals can automate this process, but designing appropriate goal-setting objectives can be difficult, and often involves heuristic design decisions. In this paper, we propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing the entropy of the goal distribution together with goal reaching performance, where goals correspond to full state observations. To instantiate this principle, we present an algorithm called Skew-Fit for learning a maximum-entropy goal distribution. Skew-Fit enables self-supervised agents to autonomously choose and practice reaching diverse goals. We show that, under certain regularity conditions, our method converges to a uniform distribution over the set of valid states, even when we do not know this set beforehand. Our experiments show that it can learn a variety of manipulation tasks from images, including opening a door with a real robot, entirely from scratch and without any manually-designed reward function. | reject | This paper tackles the problem of exploration in RL. In order to maximize coverage of the state space, the authors introduce an approach where the agent attempts to reach some self-set goals. They empirically show that agents using this method uniformly visit all valid states under certain conditions. They also show that these agents are able to learn behaviours without providing a manually-defined reward function.
The drawback of this work is the combined lack of theoretical justification and limited (marginal) algorithmic novelty given other existing goal-directed techniques. Although they highlight the performance of the proposed approach, the current experiments do not convey a good enough understanding of why this approach works where other existing goal-directed techniques do not, which would be expected from a purely empirical paper. This dampens the contribution, hence I recommend rejecting this paper. | train | [
"SJxzk8sitS",
"HJlTYAfnoS",
"rJlYx0G3jB",
"Bkl-PnTtir",
"SJgBwsTYsr",
"S1lF1vROjS",
"BygXShuZsS",
"rkgaGhdWjr",
"rJgUssdWjr",
"rJgogsu-jB",
"HyeleOROFS",
"BylJ3YZntB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces SKEW-FIT, an exploration approach that maximizes the entropy of a distribution of goals such that the agent maximizes state coverage. \n\nThe paper is well-written and provides an interesting combination of reinforcement learning with imagined goals (RIG) and entropy maximization. The approach... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_r1gIdySFPH",
"rJgogsu-jB",
"Bkl-PnTtir",
"SJgBwsTYsr",
"S1lF1vROjS",
"BygXShuZsS",
"rkgaGhdWjr",
"HyeleOROFS",
"SJxzk8sitS",
"BylJ3YZntB",
"iclr_2020_r1gIdySFPH",
"iclr_2020_r1gIdySFPH"
] |
iclr_2020_r1eUukrtwH | The Variational InfoMax AutoEncoder | We propose the Variational InfoMax AutoEncoder (VIMAE), an autoencoder based on a new learning principle for unsupervised models: the Capacity-Constrained InfoMax, which allows the learning of a disentangled representation while maintaining optimal generative performance. The variational capacity of an autoencoder is defined and we investigate its role. We associate the two main properties of a Variational AutoEncoder (VAE), generation quality and disentangled representation, to two different information concepts, respectively Mutual Information and network capacity. We deduce that a small capacity autoencoder tends to learn a more robust and disentangled representation than a high capacity one. This observation is confirmed by the computational experiments. | reject | This paper describes a new generative model based on the information theoretic principles for better representation learning. The approach is theoretically related to the InfoVAE and beta-VAE work, and is contrasted to vanilla VAEs. The reviewers have expressed strong concerns about the novelty of this work. Some of the very closely related baselines (e.g. Zhao et al., Chen et al., Alemi et a) are not compared against, and the contributions of this work over the baselines are not clearly discussed. Furthermore, the experimental section could be made stronger with more quantitative metrics. For these reasons I recommend rejection. | train | [
"r1lWnRs7YH",
"S1xGmqNnjB",
"HygxY7V3oH",
"r1gwWRm3iB",
"SkxE4Bxr9r",
"rkgKQMmd9r"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper develops an information-theoretic training scheme for Variational Auto-Encoders (VAEs). This scheme is tailored for addressing the well-known disentanglement problem of VAEs where an over-capacity encoder sometimes manages to both maximize data fit and shrink the KL divergence between the approximate pos... | [
3,
-1,
-1,
-1,
6,
1
] | [
4,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_r1eUukrtwH",
"r1lWnRs7YH",
"SkxE4Bxr9r",
"rkgKQMmd9r",
"iclr_2020_r1eUukrtwH",
"iclr_2020_r1eUukrtwH"
] |
iclr_2020_ByxduJBtPB | When Covariate-shifted Data Augmentation Increases Test Error And How to Fix It | Empirically, data augmentation sometimes improves and sometimes hurts test error, even when only adding points with labels from the true conditional distribution that the hypothesis class is expressive enough to fit. In this paper, we provide precise conditions under which data augmentation hurts test accuracy for minimum norm estimators in linear regression. To mitigate the failure modes of augmentation, we introduce X-regularization, which uses unlabeled data to regularize the parameters towards the non-augmented estimate. We prove that our new estimator never hurts test error and exhibits significant improvements over adversarial data augmentation on CIFAR-10. | reject | This paper describes situations whereby data augmentation (particularly drawn from a true distribution) can lead to increased generalization error even when the model being optimized is appropriately formulated. The authors propose "X-regularization" which requires that models trained on standard and augmented data produce similar predictions on unlabeled data. The paper includes a few experiments on a toy staircase regression problem as well as some ResNet experiments on CIFAR-10. This paper received 2 recommendations for rejection, and one weak accept recommendation. After the rebuttal phase, the reviewer who recommended weak acceptance indicated their willingness to let the paper be rejected in light of the other reviews. The reviewer highlighted: "I think the authors could still to better to relate their theory to practice, and expand on the discussion/presentation of X-regularization." The main open issue is that the theoretical contributions of the paper are not sufficiently linked to the proposed algorithm. | train | [
"rJxfPtsnsH",
"Bkxjoce2jr",
"HygoNcl2ir",
"B1xv0Fx3sr",
"rJexjFlhor",
"r1eQCCNUYH",
"Sken8X2g5r",
"BkeSIRjNqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your response.",
"We thank R3 for their feedback, questions and suggestions. We have significantly improved our presentation of both the theoretical results and the contributions and positioning. We request R3 to refer to the general comment above and evaluate the revised paper. \n\nAddressing the maj... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
1
] | [
"HygoNcl2ir",
"r1eQCCNUYH",
"Sken8X2g5r",
"BkeSIRjNqr",
"iclr_2020_ByxduJBtPB",
"iclr_2020_ByxduJBtPB",
"iclr_2020_ByxduJBtPB",
"iclr_2020_ByxduJBtPB"
] |
iclr_2020_B1xtd1HtPS | Quaternion Equivariant Capsule Networks for 3D Point Clouds | We present a 3D capsule architecture for processing of point clouds that is equivariant with respect to the SO(3) rotation group, translation and permutation of the unordered input sets. The network operates on a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end equivariance through a novel 3D quaternion group capsule layer, including an equivariant dynamic routing procedure. The capsule layer enables us to disentangle geometry from pose, paving the way for more informative descriptions and a structured latent space. In the process, we theoretically connect the process of dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties, enabling robust pose estimation between capsule layers. Due to the sparse equivariant quaternion capsules, our architecture allows joint object classification and orientation estimation, which we validate empirically on common benchmark datasets.
| reject | This paper presents a capsule network to handle 3d point clouds which is equivariant to SO(3) rotations. It also provides the theoretical analysis to connect the dynamic routing approach to the Generalized Weiszfeld Iterations. The equivariant property of the method is demonstrated on classification and orientation estimation tasks of 3D shapes.
While the technical contribution of the method is sound, the main concern raised by the reviewers was the lack of detail in the presentation of the methodology and results. Although the authors have made substantial efforts to update the paper, some reviewers were still not convinced and thus the scores remained the same. The paper was right on the borderline, but because of the limited capacity, I regret that I have to recommend rejection.
Invariances and equivariances are indeed important topics in representation learning, for which the capsule network is known as one of the promising approaches but is still not well investigated compared to other standard architectures. I encourage the authors to resubmit the paper, taking into account the reviewers' comments. | train | [
"SJlKIDS2sr",
"rygxhMlYjH",
"rJg8bGeYiH",
"SyeRw-lYsH",
"rkgmo9psKS",
"SklKk4uy9B",
"rJgTqxck5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your detailed comments and the responses all make sense to me - very happy to remain supportive of the paper.",
"We thank the reviewer for the specific comments and acknowledging the improved results we achieve in the paper. Please find our responses below.\n\n1. “The activation”: It is true that t... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"SyeRw-lYsH",
"rkgmo9psKS",
"SklKk4uy9B",
"rJgTqxck5S",
"iclr_2020_B1xtd1HtPS",
"iclr_2020_B1xtd1HtPS",
"iclr_2020_B1xtd1HtPS"
] |
iclr_2020_BkeYdyHYPS | Evo-NAS: Evolutionary-Neural Hybrid Agent for Architecture Search | Neural Architecture Search has shown potential to automate the design of neural networks. Deep Reinforcement Learning based agents can learn complex architectural patterns, as well as explore a vast and compositional search space. On the other hand, evolutionary algorithms offer higher sample efficiency, which is critical for such a resource intensive application. In order to capture the best of both worlds, we propose a class of Evolutionary-Neural hybrid agents (Evo-NAS). We show that the Evo-NAS agent outperforms both neural and evolutionary agents when applied to architecture search for a suite of text and image classification benchmarks. On a high-complexity architecture search space for image classification, the Evo-NAS agent surpasses the accuracy achieved by commonly used agents with only 1/3 of the search cost. | reject | Thanks to the authors for the revision and discussion. This paper provides a neural architecture search (NAS) method, called Evolutionary-Neural hybrid agents (Evo-NAS), which combines NN-based NAS and Aging EVO. While the authors' response addressed some of the reviewers' comments, during discussion period there is a new concern that the idea proposed here highly overlaps with the method of RENAS, which stands for Reinforced Evolutionary Neural Architecture Search. Reviewers acknowledge that this might discount the novelty of the paper. Overall, there is not sufficient support for acceptance. | train | [
"HJxgtV33ir",
"Bke0AninsB",
"BJxpRqs3oB",
"HyeRIIo3iH",
"Bylvkp8jiS",
"HJgAFLz7oS",
"HyemFkD7cH",
"S1gae4gzsB",
"BklitU6Zor",
"BklcQEo-oH",
"S1ej9ZoWor",
"SJehrtaloS",
"BJe6-Up1iS",
"HkgQaG51iH",
"Byg4gXUTKB",
"BkxX0Fu0YS",
"SylXPYgV_B",
"BJgPzQU5PH"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thanks for the swift response. RENAS is the only Evolutionary-RL architecture search hybrid that we are aware of, and the approach it took does lead to good performance. However, we believe that the differences between RENAS and Evo-NAS are fundamental.\n\nAs we wrote in point 3 of our \"Comparison with RENAS\" co... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1
] | [
"Bke0AninsB",
"BJxpRqs3oB",
"HyeRIIo3iH",
"HJgAFLz7oS",
"iclr_2020_BkeYdyHYPS",
"S1gae4gzsB",
"iclr_2020_BkeYdyHYPS",
"BklcQEo-oH",
"BkxX0Fu0YS",
"HyemFkD7cH",
"Byg4gXUTKB",
"BJe6-Up1iS",
"iclr_2020_BkeYdyHYPS",
"SylXPYgV_B",
"iclr_2020_BkeYdyHYPS",
"iclr_2020_BkeYdyHYPS",
"BJgPzQU5P... |
iclr_2020_Bke9u1HFwB | Do recent advancements in model-based deep reinforcement learning really improve data efficiency? | Reinforcement learning (RL) has seen great advancements in the past few years. Nevertheless, the consensus among the RL community is that currently used model-free methods, despite all their benefits, suffer from extreme data inefficiency. To circumvent this problem, novel model-based approaches were introduced that often claim to be much more efficient than their model-free counterparts. In this paper, however, we demonstrate that the state-of-the-art model-free Rainbow DQN algorithm can be trained using a much smaller number of samples than it is commonly reported. By simply allowing the algorithm to execute network updates more frequently we manage to reach similar or better results than existing model-based techniques, at a fraction of complexity and computational costs. Furthermore, based on the outcomes of the study, we argue that the agent similar to the modified Rainbow DQN that is presented in this paper should be used as a baseline for any future work aimed at improving sample efficiency of deep reinforcement learning. | reject | The paper makes broad claims, but the depth of the experiments is very limited to a narrow combination of algorithms. | train | [
"SklzXhN3sS",
"BygXZnm2sr",
"r1ll5A73sB",
"H1xRhf42sr",
"ByechtJ6tB",
"Sygk5-d0Kr",
"H1eNYZKRtS",
"HJevJKu2_r",
"H1xyE8kNdH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Unfortunately, we were not able to finish revising the paper in the given time and include additional evaluations. Thus, we'll proceed with the initial version of the manuscript.\n",
"We want to thank all reviewers for all the time spent on analyzing our papers and for the constructive feedback. It is very much ... | [
-1,
-1,
-1,
-1,
3,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
5,
3,
3,
-1,
-1
] | [
"iclr_2020_Bke9u1HFwB",
"iclr_2020_Bke9u1HFwB",
"H1eNYZKRtS",
"ByechtJ6tB",
"iclr_2020_Bke9u1HFwB",
"iclr_2020_Bke9u1HFwB",
"iclr_2020_Bke9u1HFwB",
"H1xyE8kNdH",
"iclr_2020_Bke9u1HFwB"
] |
iclr_2020_SyecdJSKvr | Learning from Label Proportions with Consistency Regularization | The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances. The weak labels only contain the label proportions of each bag. The LLP problem is important for many practical applications that only allow label proportions to be collected because of data privacy or annotation costs, and has recently received lots of research attention. Most existing works focus on extending supervised learning models to solve the LLP problem, but the weak learning nature makes it hard to further improve LLP performance with a supervised angle. In this paper, we take a different angle from semi-supervised learning.
In particular, we propose a novel model inspired by consistency regularization, a popular concept in semi-supervised learning that encourages the model to produce a decision boundary that better describes the data manifold. With the introduction of consistency regularization, we further extend our study to non-uniform bag-generation and validation-based parameter-selection procedures that better match practical needs. Experiments not only justify that LLP with consistency regularization achieves superior performance, but also demonstrate the practical usability of the proposed procedures. | reject | After reading the authors' rebuttal, the reviewers still hold that the main contribution is just a simple combination of already known losses. In addition, the paper needs to pay more attention to clarity. | test | [
"rJgxABc9tr",
"B1xT2IkssB",
"r1ef2Q1iiB",
"HkxPdCi9oB",
"H1ez6JiqsH",
"HJlG4z0RtS",
"rJx0ZWZNqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary of the paper: Learning from label proportions (LLP) is an area in machine learning that tries to learn a classifier that predicts labels of instances, with only bag-level aggregated labels given at the training stage. Instead of proposing a loss specialized for this problem, this paper proposes a regular... | [
3,
-1,
-1,
-1,
-1,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_SyecdJSKvr",
"iclr_2020_SyecdJSKvr",
"HJlG4z0RtS",
"rJgxABc9tr",
"rJx0ZWZNqr",
"iclr_2020_SyecdJSKvr",
"iclr_2020_SyecdJSKvr"
] |
iclr_2020_Hkls_yBKDB | Neural Program Synthesis By Self-Learning | Neural inductive program synthesis is the task of generating instructions that can produce desired outputs from given inputs. In this paper, we focus on the generation of a chunk of assembly code that can be executed to match a state change inside the CPU. We develop a neural program synthesis algorithm, AutoAssemblet, learned via self-learning reinforcement learning that explores the large code space efficiently. Policy networks and value networks are learned to reduce the breadth and depth of the Monte Carlo Tree Search, resulting in better synthesis performance. We also propose an effective multi-entropy policy sampling technique to alleviate online update correlations. We apply AutoAssemblet to basic programming tasks and show significantly higher success rates compared to several competing baselines. | reject | The authors consider the problem of program induction from input-output pairs.
They propose an approach based on a combination of imitation learning from an auto-curriculum for policy and value functions and AlphaGo-style tree search. It is applied to inducing assembly programs and compared to ablation baselines.
This paper is below the acceptance threshold, based on the reviews and my own reading.
The main points of concern are a lack of novelty (the proposed approach is similar to previously published approaches in program synthesis), missing references to prior work, and a lack of baselines for the experiments. | val | [
"rylmwZrCFB",
"S1lSZTEhor",
"ryg03qV3iB",
"SJeVLtN2jB",
"HkldANPAYS",
"Hkg42vUycH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"[Summary] \nThis paper addresses the problem of synthesizing programs (x86 assembly code) from input/output (I/O) pairs. To this end, the paper proposes a framework (AutoAssemblet) that first learns a policy network and a value network using imitation learning (IL) and reinforcement learning (RL) and then leverage... | [
3,
-1,
-1,
-1,
1,
3
] | [
4,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_Hkls_yBKDB",
"rylmwZrCFB",
"HkldANPAYS",
"Hkg42vUycH",
"iclr_2020_Hkls_yBKDB",
"iclr_2020_Hkls_yBKDB"
] |
iclr_2020_Bye2uJHYwr | Weighted Empirical Risk Minimization: Transfer Learning based on Importance Sampling | We consider statistical learning problems, when the distribution P′ of the training observations Z1′,…,Zn′ differs from the distribution P involved in the risk one seeks to minimize (referred to as the \textit{test distribution}) but is still defined on the same measurable space as P and dominates it. In the unrealistic case where the likelihood ratio Φ(z)=dP/dP′(z) is known, one may straightforwardly extends the Empirical Risk Minimization (ERM) approach to this specific \textit{transfer learning} setup using the same idea as that behind Importance Sampling, by minimizing a weighted version of the empirical risk functional computed from the 'biased' training data Zi′ with weights Φ(Zi′). Although the \textit{importance function} Φ(z) is generally unknown in practice, we show that, in various situations frequently encountered in practice, it takes a simple form and can be directly estimated from the Zi′'s and some auxiliary information on the statistical population P. By means of linearization techniques, we then prove that the generalization capacity of the approach aforementioned is preserved when plugging the resulting estimates of the Φ(Zi′)'s into the weighted empirical risk. Beyond these theoretical guarantees, numerical results provide strong empirical evidence of the relevance of the approach promoted in this article. | reject | This paper aims to address transfer learning by importance weighted ERM that estimates a density ratio from the given sample and some auxiliary information on the population. Several learning bounds were proven to promote the use of importance weighted ERM.
Reviewers and the AC feel that the novelty of this paper is modest given the rich relevant literature, and that the practical use of this paper may be limited. The discussion of related theoretical work, such as generalization bounds for PU learning, could be expanded significantly. The presentation can be largely improved, especially in the experimental section. The rebuttal is somewhat subjective and does not convincingly address the concerns.
Hence I recommend rejection. | train | [
"B1gYfQuTYH",
"HyeE_uUstB",
"Bkxejum3or",
"B1xNiKQ3iH",
"S1l8QYXniH",
"BJeVudmnjB",
"SkgpaoVCKr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper targets the transfer learning problem. It wants to construct the unbiased estimator of the true risk for the target domain based on data from the source domain. To have the unbiased estimator, samples in the source domain are weighted based on some auxiliary information of both the source domain data di... | [
3,
3,
-1,
-1,
-1,
-1,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2020_Bye2uJHYwr",
"iclr_2020_Bye2uJHYwr",
"iclr_2020_Bye2uJHYwr",
"HyeE_uUstB",
"B1gYfQuTYH",
"SkgpaoVCKr",
"iclr_2020_Bye2uJHYwr"
] |
iclr_2020_ByxhOyHYwH | Fast Task Adaptation for Few-Shot Learning | Few-shot classification is a challenging task due to the scarcity of training examples for each class. The key lies in generalization of prior knowledge learned from large-scale base classes and fast adaptation of the classifier to novel classes. In this paper, we introduce a two-stage framework. In the first stage, we attempt to learn task-agnostic feature on base data with a novel Metric-Softmax loss. The Metric-Softmax loss is trained against the whole label set and learns more discriminative feature than episodic training. Besides, the Metric-Softmax classifier can be applied to base and novel classes in a consistent manner, which is critical for the generalizability of the learned feature. In the second stage, we design a task-adaptive transformation which adapts the classifier to each few-shot setting very fast within a few tuning epochs. Compared with existing fine-tuning scheme, the scarce examples of novel classes are exploited more effectively. Experiments show that our approach outperforms current state-of-the-arts by a large margin on the commonly used mini-ImageNet and CUB-200-2011 benchmarks. | reject | This paper develops a new few-shot image classification algorithm by using a metric-softmax loss for non-episodic training and a linear transformation to modify the model towards few-shot training data for task-agnostic adaptation.
Reviewers acknowledge that some of the results in the paper are impressive, especially in domain-shift settings as well as with a fine-tuning approach. However, they also raise very detailed and constructive concerns about 1) lack of novelty, 2) improper claims of contribution, and 3) an evaluation protocol inconsistent with the de facto ones in existing work. The authors' rebuttal failed to convince the reviewers with regard to a majority of the critiques.
Hence I recommend rejection. | train | [
"HylyzIxXtB",
"BklJfCltiS",
"BJlI_uHOsS",
"HyeZeYzdor",
"HyekWIfdjS",
"r1ewpj31cB",
"HylaQFF8cH",
"H1ebjwJUtH",
"Hkx-s9HBtr",
"B1eyZUMEYS",
"H1g2MBMEKr",
"Byx6mHd7FH",
"HJgM7jD7FS",
"S1gOz-xXKS",
"Syxy3okmKH",
"BJlP3iMzKS",
"SylrfmMzYB",
"rkgj2klftS",
"HJgeCaL-Fr",
"rJgj3NQ-FH"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"author",
"public",
"public",
"author",
"author",
"public",
"public",
"public",
"author",
"public... | [
"Authors propose a new method for adaptation in a few-shot learning setting. Their method comprises two different steps; first they propose a new metric-softmax loss, which aims at improving the transferability of features pre-trained on base data to novel data. They achieve this via redefining the probability scor... | [
8,
-1,
-1,
-1,
-1,
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_ByxhOyHYwH",
"r1ewpj31cB",
"HylaQFF8cH",
"HylyzIxXtB",
"iclr_2020_ByxhOyHYwH",
"iclr_2020_ByxhOyHYwH",
"iclr_2020_ByxhOyHYwH",
"Hkx-s9HBtr",
"iclr_2020_ByxhOyHYwH",
"HJgM7jD7FS",
"iclr_2020_ByxhOyHYwH",
"S1gOz-xXKS",
"iclr_2020_ByxhOyHYwH",
"BJlP3iMzKS",
"SylrfmMzYB",
"rkgj2... |
iclr_2020_rkgTdkrtPH | NoiGAN: NOISE AWARE KNOWLEDGE GRAPH EMBEDDING WITH GAN | Knowledge graph has gained increasing attention in recent years for its successful applications of numerous tasks. Despite the rapid growth of knowledge construction, knowledge graphs still suffer from severe incompletion and inevitably involve various kinds of errors. Several attempts have been made to complete knowledge graph as well as to detect noise. However, none of them considers unifying these two tasks even though they are inter-dependent and can mutually boost the performance of each other. In this paper, we proposed to jointly combine these two tasks with a unified Generative Adversarial Networks (GAN) framework to learn noise-aware knowledge graph embedding. Extensive experiments have demonstrated that our approach is superior to existing state-of-the-art algorithms both in regard to knowledge graph completion and error detection. | reject | This paper proposes a noise-aware knowledge graph embedding (NoiGAN) by combining KG completion and noise detection through the GANs framework. The reviewers find that the idea is interesting, but the comparison to SOTA is largely missing. The paper can be improved by addressing the reviewer comments. | train | [
"ryxPtR_2sB",
"BklSeTd3jS",
"rkxEN3dhsB",
"HklhuRpxiH",
"S1l96w7HKH",
"ryl29S9x5H"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the constructive reviews. We addressed the questions and concerns of the reviewer accordingly in the following.\n\n(1) Thanks to the reviewer for pointing out the problem of data leakage in FB15K and WN18. We have conducted the experiments on the FB15K-237 and WN18RR instead. Please find ... | [
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
3,
5,
5
] | [
"S1l96w7HKH",
"ryl29S9x5H",
"HklhuRpxiH",
"iclr_2020_rkgTdkrtPH",
"iclr_2020_rkgTdkrtPH",
"iclr_2020_rkgTdkrtPH"
] |
iclr_2020_rygT_JHtDr | Scalable Deep Neural Networks via Low-Rank Matrix Factorization | Compressing deep neural networks (DNNs) is important for real-world applications operating on resource-constrained devices. However, it is difficult to change the model size once the training is completed, which needs re-training to configure models suitable for different devices. In this paper, we propose a novel method that enables DNNs to flexibly change their size after training. We factorize the weight matrices of the DNNs via singular value decomposition (SVD) and change their ranks according to the target size. In contrast with existing methods, we introduce simple criteria that characterize the importance of each basis and layer, which enables to effectively compress the error and complexity of models as little as possible. In experiments on multiple image-classification tasks, our method exhibits favorable performance compared with other methods. | reject | The proposed paper presents low-rank compression method for DNNs. This topic has been around for a while, so the contribution is limited. Lebedev et. al paper in ICLR 2015 used CP-factorization to compress neural networks for Imagenet classification; in 2019, the idea has to be really novel in order to be presented on CIFAR datasets. The latency is not analyzed.
So, I agree with the reviewers. | train | [
"SkgnNjL2sB",
"BJlF7FLnoB",
"HyxHmUIhoB",
"ByxS6g82oS",
"Skx9EjdpKH",
"r1eS584AYH",
"BJgsbqjy5r",
"SJx6uf6sFr",
"rkxGJainur"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your thoughtful comments.\n\nWe will investigate or compare with those methods you suggested and also consider to do experiments on larger datasets as our future works. \nWe have revised some notations according to your suggestions.\n\n>In their deduction of full-rank-low-rank model joint training, .... | [
-1,
-1,
-1,
-1,
3,
1,
1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
5,
5,
-1,
-1
] | [
"Skx9EjdpKH",
"r1eS584AYH",
"BJgsbqjy5r",
"iclr_2020_rygT_JHtDr",
"iclr_2020_rygT_JHtDr",
"iclr_2020_rygT_JHtDr",
"iclr_2020_rygT_JHtDr",
"rkxGJainur",
"iclr_2020_rygT_JHtDr"
] |
iclr_2020_ByeadyrtPB | Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders | Probabilistic models with hierarchical-latent-variable structures provide state-of-the-art results amongst non-autoregressive, unsupervised density-based models. However, the most common approach to training such models based on Variational Autoencoders often fails to leverage deep-latent hierarchies; successful approaches require complex inference and optimisation schemes. Optimal Transport is an alternative, non-likelihood-based framework for training generative models with appealing theoretical properties, in principle allowing easier training convergence between distributions. In this work we propose a novel approach to training models with deep-latent hierarchies based on Optimal Transport, without the need for highly bespoke models and inference networks. We show that our method enables the generative model to fully leverage its deep-latent hierarchy, and that in-so-doing, it is more effective than the original Wasserstein Autoencoder with Maximum Mean Discrepancy divergence. | reject | The paper received 6, 3, 1. The main criticism is the lack of quantitative evaluation/comparison. The rebuttal did not convince the last reviewer who strongly argues for a comparison. The authors are encouraged to add additional results and resubmit to a future venue. | train | [
"rJgzs5lwtr",
"rJlkhDzwjB",
"rkl9gazPjB",
"SyxkyrGvsH",
"BJx5f3iAKr",
"BJllJxEH5B"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper aims to develop a deep generative model, which -unlike VAEs or GANs- comprises a hierarchy of latent variables rather than a direct map from the stochastic latent manifold to the observation space. To this end, the paper builds a training objective based on nesting the Wasserstein distance between the da... | [
1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
3,
1
] | [
"iclr_2020_ByeadyrtPB",
"BJx5f3iAKr",
"rJgzs5lwtr",
"BJllJxEH5B",
"iclr_2020_ByeadyrtPB",
"iclr_2020_ByeadyrtPB"
] |
iclr_2020_r1eCukHYDH | Manifold Learning and Alignment with Generative Adversarial Networks | We present a generative adversarial network (GAN) that conducts manifold learning and alignment (MLA): A task to learn the multi-manifold structure underlying data and to align those manifolds without any correspondence information. Our main idea is to exploit the powerful abstraction ability of encoder architecture. Specifically, we define multiple generators to model multiple manifolds, but in a particular way that their inverse maps can be commonly represented by a single smooth encoder. Then, the abstraction ability of the encoder enforces semantic similarities between the generators and gives a plausibly aligned embedding in the latent space. In experiments with MNIST, 3D-Chair, and UT-Zap50k datasets, we demonstrate the superiority of our model in learning the manifolds by FID scores and in aligning the manifolds by disentanglement scores. Furthermore, by virtue of the abstractive modeling, we show that our model can generate data from an untrained manifold, which is unique to our model. | reject | This work proposes a GAN architecture that aims to align the latent representations of the generator with different interpretable degrees of freedom of the underlying data (e.g., size, pose).
Reviewers found this paper well-motivated and the proposed method to be technically sound. However, they cast some doubts about the novelty of the approach, specifically with respect to DMWGAN and MADGAN. The AC shares these concerns and concludes that this paper will greatly benefit from an additional reviewing cycle that addresses the remaining concerns.
| val | [
"BJlU_xM-5S",
"HygfSsOnor",
"SkxgAi42sH",
"HkxhczaGor",
"rJlf6PbKjS",
"rJgkJHaGoH",
"B1xkYmpzjH",
"S1lO7zYNtr",
"B1evrRiTKS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"EDIT: Updated score to weak Accept in lieu of author's response. See below for more details.\n\nThe authors propose a GAN architecture that aims to align the latent representations of the GAN with different interpretable degrees of freedom of the underlying data (e.g., size, pose). While the text, motivation, an... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_r1eCukHYDH",
"HkxhczaGor",
"B1xkYmpzjH",
"BJlU_xM-5S",
"iclr_2020_r1eCukHYDH",
"S1lO7zYNtr",
"B1evrRiTKS",
"iclr_2020_r1eCukHYDH",
"iclr_2020_r1eCukHYDH"
] |
iclr_2020_rJgCOySYwH | Function Feature Learning of Neural Networks | We present a Function Feature Learning (FFL) method that can measure the similarity of non-convex neural networks. The function feature representation provides crucial insights into the understanding of the relations between different local solutions of identical neural networks. Unlike existing methods that use neuron activation vectors over a given dataset as neural network representation, FFL aligns weights of neural networks and projects them into a common function feature space by introducing a chain alignment rule. We investigate the function feature representation on Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), finding that identical neural networks trained with different random initializations on different learning tasks by the Stochastic Gradient Descent (SGD) algorithm can be projected into different fixed points. This finding demonstrates the strong connection between different local solutions of identical neural networks and the equivalence of projected local solutions. With FFL, we also find that the semantics are often presented in a bottom-up way. Besides, FFL provides more insights into the structure of local solutions. Experiments on CIFAR-100, NameData, and tiny ImageNet datasets validate the effectiveness of the proposed method. | reject | This paper tackles an important problem: understanding if different NN solutions are similar or different. In the current form, however, the main motivation for the approach, and what the empirical results tell us, remains unclear. I read the paper after the updates and after reading reviews and author responses, and still had difficulty understanding the goals and outcomes of the experiments (such as what exactly is being reported as test accuracy and what is meant by: "High test accuracy means that assumptions are reasonable."). We highly recommend that the authors revisit the description of the motivation and approach based on comments from reviewers; further explain what is reported as test accuracy in the experiments; and more clearly highlight the insights obtain from the experiments. | train | [
"S1x3MHlHiH",
"rkezXwxHiS",
"HyxIbqgriS",
"HygT-omAtB",
"SJgHjqzIqr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the kind review and suggestions. We have revised the paper according to the suggestions and would like to answer the reviewer’s questions as follows. \n\nQ1: What are the findings of the paper? The authors make the assumption that weights across the same layer (say layer #2) are somehow a... | [
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
4,
3
] | [
"SJgHjqzIqr",
"HygT-omAtB",
"iclr_2020_rJgCOySYwH",
"iclr_2020_rJgCOySYwH",
"iclr_2020_rJgCOySYwH"
] |
iclr_2020_H1lkYkrKDB | UNIVERSAL MODAL EMBEDDING OF DYNAMICS IN VIDEOS AND ITS APPLICATIONS | Extracting underlying dynamics of objects in image sequences is one of the challenging problems in computer vision. On the other hand, dynamic mode decomposition (DMD) has recently attracted attention as a way of obtaining modal representations of nonlinear dynamics from (general multivariate time-series) data without explicit prior knowledge about the dynamics. In this paper, we propose a convolutional autoencoder based DMD (CAE-DMD) that is an extended DMD (EDMD) approach to extract underlying dynamics in videos. To this end, we develop a modified CAE model by incorporating DMD on the encoder, which gives a more meaningful compressed representation of input image sequences. On the reconstruction side, a decoder is used to minimize the reconstruction error after applying the DMD, which as a result gives an accurate reconstruction of the inputs. We empirically investigated the performance of CAE-DMD in two applications: background/foreground extraction and video classification, on publicly available datasets. | reject | The paper focuses on extracting the underlying dynamics of objects in video frames, for background/foreground extraction and video classification. Generally speaking, the presentation of the paper should be improved. Novelty should be clarified by contrasting the proposed approach with existing literature. All reviewers also agree that the experimental section is too weak in its current form. | train | [
| train | [
"Syl2HTO2tB",
"rJxRE-91cS",
"SyeTbYISqH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of extracting the underlying dynamics of objects in video frames. The paper focuses on two major applications: background/foreground extraction and video classification. The paper proposes a method that first obtains latent vectors from a video sequence by training a neural network ... | [
3,
3,
6
] | [
3,
3,
1
] | [
"iclr_2020_H1lkYkrKDB",
"iclr_2020_H1lkYkrKDB",
"iclr_2020_H1lkYkrKDB"
] |
iclr_2020_SygeY1SYvr | Are Few-shot Learning Benchmarks Too Simple ? | We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot is invariably “characters” and the class semantics of miniImageNet, “object category”. Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at metaevaluation time. Our results suggest that those benchmarks are not adapted for supervised few-shot classification since the supervision itself is not necessary during meta-evaluation. The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark. Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset. Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method (Hsu et al., 2018). | reject | The paper is interested in assessing the difficulty of popular few-shot classification benchmarks (Omniglot and miniImageNet). A clustering-based meta-learning method is proposed (called Centroid Network), on which a metric is built (gap between the performance of Prototypical Networks and Centroid Networks). As noted by several reviewers, the proposed metric (critical for the paper) is however not motivated enough, nor convincing enough - after discussion, the logic in the metric reasoning seems to remain flawed.
| val | [
"rkeIbjh3tr",
"HJlHfPO5or",
"r1g4kUu5or",
"HJlkjPX5iS",
"rJlFdCZFor",
"S1gSkRWKiS",
"rkeGKT-toS",
"rklIxTWFiS",
"SyxI703nYr",
"HklF3-A6tS"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new method for learning to cluster without labels at meta-evaluation time and show that this method does as well as supervised methods on benchmarks with consistent class semantics. The authors propose a new metric for measuring the simplicity of a few-shot learning benchmark and demonstrat... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_SygeY1SYvr",
"r1g4kUu5or",
"HJlkjPX5iS",
"rkeGKT-toS",
"rkeIbjh3tr",
"SyxI703nYr",
"rklIxTWFiS",
"HklF3-A6tS",
"iclr_2020_SygeY1SYvr",
"iclr_2020_SygeY1SYvr"
] |
iclr_2020_H1ggKyrYwB | On Incorporating Semantic Prior Knowlegde in Deep Learning Through Embedding-Space Constraints | The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels. While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances. We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances. We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions. Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective. Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations. %The resulting model encodes relations that better generalize across instances. In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer. We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used. It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone. | reject | The paper proposes a technique for incorporating prior knowledge as relations between training instances.
The reviewers had a mixed set of concerns, a common one being an insufficient comparison with, and discussion of, related work. Some reviewers also found the clarity lacking, but were satisfied with the revision. One reviewer found it problematic that the approach is claimed to be general yet is only tested and validated on the VQA dataset.
Following the discussion, I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit to another venue. | train | [
"SyxXGwWjKB",
"BJg3HDxnoS",
"rJlYRymUsB",
"SkxKo1QIjB",
"HylXQyX8jB",
"SyeasshEor",
"B1l7jf5rFS",
"rkgQW97pKS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper argues for encoding external knowledge in the (linguistic) embedding layer of a multimodal neural network, as a set of hard constraints. The domain that the method is applied to is VQA, with various relations on the questions translated into hard constraints on the embedding space. A technique which invo... | [
6,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_H1ggKyrYwB",
"SkxKo1QIjB",
"B1l7jf5rFS",
"SyxXGwWjKB",
"rkgQW97pKS",
"rkgQW97pKS",
"iclr_2020_H1ggKyrYwB",
"iclr_2020_H1ggKyrYwB"
] |
iclr_2020_r1xxKJBKvr | PassNet: Learning pass probability surfaces from single-location labels. An architecture for visually-interpretable soccer analytics | We propose a fully convolutional network architecture that is able to estimate a full surface of pass probabilities from single-location labels derived from high frequency spatio-temporal data of professional soccer matches. The network is able to perform remarkably well from low-level inputs by learning a feature hierarchy that produces predictions at different sampling levels that are merged together to preserve both coarse and fine detail. Our approach presents an extreme case of weakly supervised learning where there is just a single pixel correspondence between ground-truth outcomes and the predicted probability map. By providing not just an accurate evaluation of observed events but also a visual interpretation of the results of other potential actions, our approach opens the door for spatio-temporal decision-making analysis, an as-yet little-explored area in sports. Our proposed deep learning architecture can be easily adapted to solve many other related problems in sports analytics; we demonstrate this by extending the network to learn to estimate pass-selection likelihood. | reject | The paper proposes PassNet, which is an architecture that produces a 2D map of probability of successful completion of a soccer pass. The architecture has some similarities with UNet and has downsampling and upsampling modules with a set of skip-connections between them.
The reviewers raised several issues:
* Novelty compared to UNet
* Lack of ablation studies
* Uncertainty about what probabilities mean and issues regarding output interpretation.
The authors have tried to address these concerns in their rebuttal and provided additional experiments. They also argue that the application area (sports analytics) of the paper is novel. Even though the application area is interesting and might lead to new problems, this paper did not receive enough support from the reviewers to justify acceptance. | test | [
"B1eBEXSAtS",
"HJeEEZUtoS",
"B1eW1o4FiB",
"BJxH_iEYor",
"rJxZUjVFoS",
"S1gufiEFjS",
"HJeJc8nvtr",
"Sye0HBDpFB"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Contribution:\nThis paper proposes PassNet an architecture designed for soccer pass analytics. PassNet approach is similar to UNet, having a downsampling and upsampling modules with a set of skip-connection between the two modules. To train their model, authors apply the log-loss at the location of the passing eve... | [
3,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr",
"iclr_2020_r1xxKJBKvr"
] |
iclr_2020_B1eZYkHYPS | Shifted Randomized Singular Value Decomposition | We extend the randomized singular value decomposition (SVD) algorithm (Halko et al., 2011) to estimate the SVD of a shifted data matrix without explicitly constructing the matrix in the memory. With no loss in the accuracy of the original algorithm, the extended algorithm provides for a more efficient way of matrix factorization. The algorithm facilitates the low-rank approximation and principal component analysis (PCA) of off-center data matrices. When applied to different types of data matrices, our experimental results confirm the advantages of the extensions made to the original algorithm. | reject | The proposed algorithm is found to be a straightforward extension of the previous work, which is not sufficient to warrant publication in ICLR2020. | train | [
"rJlncd0aKS",
"B1eRnOfRYH",
"HylY8zcy5r"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper adapts the approach by Halko to get a SVD using\na low rank concept to the case where the matrix is implicit shifted.\nHonestly - there is nothing wrong with this paper except the level\nof contribution. I consider this work to be widely irrelevant. You\ncan report this on arxiv if you like but I do no... | [
1,
1,
3
] | [
5,
3,
1
] | [
"iclr_2020_B1eZYkHYPS",
"iclr_2020_B1eZYkHYPS",
"iclr_2020_B1eZYkHYPS"
] |
iclr_2020_HkeMYJHYvS | High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection | This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false-alarms and misdetections when used in real-world applications. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulties, for instance, region pixels with an appearance similar to the boundaries; or boundary pixels with insignificant edge strengths. Therefore, the training process needs to account for different levels of learning complexity in different regions to overcome false alarms. In this work, we devise a curriculum-learning-based training process for object boundary detection. This multi-stage training process first trains the network at simpler pixels (with sufficient edge strengths) and then at harder pixels in the later stages of the curriculum. We also propose a novel system for object boundary detection that relies on a fully convolutional neural network (FCN) and wavelet decomposition of image frequencies. This system uses high-frequency bands from the wavelet pyramid and augments them to conv features from different layers of FCN. Our ablation studies with contourMNIST dataset, a simulated digit contours from MNIST, demonstrate that this explicit high-frequency augmentation helps the model to converge faster. Our model trained by the proposed curriculum scheme outperforms a state-of-the-art object boundary detection method by a significant margin on a challenging aerial image dataset.
 | reject | This paper received all negative reviews, and the scores were kept after the rebuttal. The authors are encouraged to submit their work to a computer vision conference where this kind of work may be more appreciated. Furthermore, including stronger baselines such as Acuna et al. is recommended. | train | [
"rkgtgQXRYB",
"Bkx3Oy1VqB",
"r1grXbxL5H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The suggest two improvements to boundary detection models: (1) a curriculum learning approach, and (2) augmenting CNNs with features derived from a wavelet transform. For (1), they train half of the epochs with a target boundary that is the intersection between a Canny edge filter and the dilated groundtr... | [
1,
1,
3
] | [
4,
5,
3
] | [
"iclr_2020_HkeMYJHYvS",
"iclr_2020_HkeMYJHYvS",
"iclr_2020_HkeMYJHYvS"
] |
iclr_2020_BklmtJBKDB | Conditional Flow Variational Autoencoders for Structured Sequence Prediction | Prediction of future states of the environment and interacting agents is a key competence required for autonomous agents to operate successfully in the real world. Prior work for structured sequence prediction based on latent variable models imposes a uni-modal standard Gaussian prior on the latent variables. This induces a strong model bias which makes it challenging to fully capture the multi-modality of the distribution of the future states. In this work, we introduce Conditional Flow Variational Autoencoders (CF-VAE) using our novel conditional normalizing flow based prior to capture complex multi-modal conditional distributions for effective structured sequence prediction. Moreover, we propose two novel regularization schemes which stabilize training, deal with posterior collapse, and yield a better match to the data distribution. Our experiments on three multi-modal structured sequence prediction datasets -- MNIST Sequences, Stanford Drone and HighD -- show that the proposed method obtains state-of-the-art results across different evaluation metrics. | reject | The novelty of the proposed work is very weak; the idea has been explored in various forms in previous work. | train | [
"ryx8qR0Bir",
"Bkl4EC0HiH",
"H1xASy1Ujr",
"rklY0RRroB",
"SyxVJJ0iKH",
"B1gBElBTtr",
"BygWcVYatB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the comments and address them here in detail.\n\n* ‘In general I like the idea, and the presentation seems solid to a large degree.’ - Thank you.\n\n\n* ‘the statements p(y|x) = p(y|x, z) p(z | x) and p(y|x) = p(y|z) p(z|x)’ - We thank you for pointing these out these typos. To clarify, t... | [
-1,
-1,
-1,
-1,
6,
6,
1
] | [
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"BygWcVYatB",
"BygWcVYatB",
"SyxVJJ0iKH",
"B1gBElBTtr",
"iclr_2020_BklmtJBKDB",
"iclr_2020_BklmtJBKDB",
"iclr_2020_BklmtJBKDB"
] |
iclr_2020_S1gEFkrtvH | BasisVAE: Orthogonal Latent Space for Deep Disentangled Representation | The variational autoencoder, one of the generative models, defines the latent space for the data representation, and uses variational inference to infer the posterior probability. Several methods have been devised to disentangle the latent space for controlling the generative model easily. However, due to the excessive constraints, the more disentangled the latent space is, the lower quality the generative model has. A disentangled generative model would allocate a single feature of the generated data to the only single latent variable. In this paper, we propose a method to decompose the latent space into basis, and reconstruct it by linear combination of the latent bases. The proposed model called BasisVAE consists of the encoder that extracts the features of data and estimates the coefficients for linear combination of the latent bases, and the decoder that reconstructs the data with the combined latent bases. In this method, a single latent basis is subject to change in a single generative factor, and relatively invariant to the changes in other factors. It maintains the performance while relaxing the constraint for disentanglement on a basis, as we no longer need to decompose latent space on a standard basis. Experiments on the well-known benchmark datasets of MNIST, 3DFaces and CelebA demonstrate the efficacy of the proposed method, compared to other state-of-the-art methods. The proposed model not only defines the latent space to be separated by the generative factors, but also shows the better quality of the generated and reconstructed images. The disentangled representation is verified with the generated images and the simple classifier trained on the output of the encoder. | reject | The paper proposes a new way to learn a disentangled representation by embedding the latent representation z into an explicit learnt orthogonal basis M. While the paper proposes an interesting new approach to disentangling, the reviewers agreed that it would benefit from further work in order to be accepted. In particular, after an extensive discussion it was still not clear whether the assumptions of Theorem 1 applied to VAEs, and whether Theorem 1 was necessary at all. In terms of experimental results, the discussions revealed that the method used supervision during training, while the baselines in the paper are all unsupervised. The authors are encouraged to add supervised baselines in the next iteration of the manuscript. For these reasons I recommend rejection. | train | [
"B1xdl-vCFB",
"rkeK3Tkijr",
"r1g81ERqsH",
"B1g_Pk6ciB",
"B1gMzgvqoB",
"HJgw2OGcjH",
"B1lk1AaDor",
"SygFi36wir",
"HJlaLopwsr",
"BJg9KSpwoS",
"ryxC77nvsr",
"HJloCRuPsH",
"SkxON3_Pir",
"rJxBtFuDjH",
"HylrrLuDsr",
"SygjGNOvor",
"H1gu7TPPir",
"r1llLeEDir",
"HyghbSTIsr",
"HJgjySaUjH"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_rev... | [
"[updated rating due to supervision of $c_i$, which was not made clear enough and would require other baseline models]\n\nThis paper proposes a modification of the usual parameterization of the encoder in VAEs, to more allow representing an embedding $z$ through an explicit basis $M_B$, which will be pushed to be o... | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1
] | [
"iclr_2020_S1gEFkrtvH",
"HJgjySaUjH",
"B1g_Pk6ciB",
"B1gMzgvqoB",
"HJgw2OGcjH",
"HyghbSTIsr",
"SygFi36wir",
"HJlaLopwsr",
"BJg9KSpwoS",
"ryxC77nvsr",
"HJloCRuPsH",
"SkxON3_Pir",
"rJxBtFuDjH",
"HylrrLuDsr",
"SygjGNOvor",
"H1gu7TPPir",
"r1llLeEDir",
"ryxnaET8or",
"H1gCgZOrKB",
"B... |
iclr_2020_BkgStySKPB | Contrastive Multiview Coding | Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We hypothesize that a powerful representation is one that models view-invariant factors. Based on this hypothesis, we investigate a contrastive coding scheme, in which a representation is learned that aims to maximize mutual information between different views but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. The resulting learned representations perform above the state of the art for downstream tasks such as object classification, compared to formulations based on predictive learning or single view reconstruction, and improve as more views are added. On the Imagenet linear readoff benchmark, we achieve 68.4% top-1 accuracy. | reject | This paper proposes to use contrastive predictive coding for self-supervised learning. The proposed approach is shown empirically to be more effective than existing self-supervised learning algorithms. While the reviewers found the experimental results encouraging, there were some questions about the contribution as a whole, in particular the lack of theoretical justification. | train | [
"SkePXK2PsS",
"SkeYethwsB",
"S1eC_7mmsS",
"r1lX1jDjtH",
"SkgXlgdptr",
"ByxjhmgRYB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nDear Reviewer 2,\n\nThank you very much for your review. We would like to explain more about our intuition here.\n\n“However, multi-views may provide redundancy information. What is the core information that affect the representation quality?”\n\nOur hypothesis is that each view has two parts of information: (... | [
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
4,
1,
3
] | [
"SkgXlgdptr",
"ByxjhmgRYB",
"r1lX1jDjtH",
"iclr_2020_BkgStySKPB",
"iclr_2020_BkgStySKPB",
"iclr_2020_BkgStySKPB"
] |
iclr_2020_rkg8FJBYDS | Variational Diffusion Autoencoders with Random Walk Sampling | Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair q(z|x)/p(x|z) can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match.
Conversely, diffusion maps (DM) automatically \textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism.
In this paper, we propose \textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the \textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the \textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold.
Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models. | reject | This paper proposes to train latent-variable models (VAEs) based on diffusion maps on the data-manifold. While this is an interesting idea, there are substantial problems with the current draft regarding clarity, novelty and scalability. In its current form, it is unlikely that the proposed model will have a substantial impact on the community. | train | [
"HJekY6E3iS",
"HkefWPY7or",
"r1eW2LtQsr",
"Hyle6WtmoH",
"BkgOaj6RKr",
"HyeY1E_y9B",
"ryefctiqqH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you again for your insightful reviews. We have made a few changes to the manuscript. Namely, four items:\n\n#1: We made changes to manuscript to reflect many of the items suggested by R1. We also proofread the manuscript and fixed several grammatical errors pointed out by R2. Thank you both for your thorough... | [
-1,
-1,
-1,
-1,
8,
3,
1
] | [
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"iclr_2020_rkg8FJBYDS",
"BkgOaj6RKr",
"HyeY1E_y9B",
"ryefctiqqH",
"iclr_2020_rkg8FJBYDS",
"iclr_2020_rkg8FJBYDS",
"iclr_2020_rkg8FJBYDS"
] |
iclr_2020_HygDF1rYDB | Explaining Time Series by Counterfactuals | We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our method can be applied to arbitrarily complex time series models. We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals. | reject | The paper proposes a definition of and an algorithm for computing the importance
of features in time series classification / regression.
The importance is defined as a finite difference version of standard sensitivity
analysis, where the distribution over finite perturbations is given by a
learned time series model.
The approach is tested on simulated and real-world data sets.
The reviewers note a lack of novelty in the paper and deem the contribution
somewhat incremental, although exposition and experiments have improved compared
to previous versions of the manuscript.
I recommend rejecting this paper in its current form, taking into account the reviews and my own
reading, mostly due to a lack of novelty.
Furthermore, the authors call their method a "counterfactual" approach.
I don't agree with this terminology.
No attempt is made to justify it by linking it to the relevant causal literature
on counterfactuals.
The authors do indeed motivate their algorithm by considering how the classifier
output would change "had an observation been different" (a counterfactual), but
mathematically, in their model this is the same as asking "what changes if the observation is
different" (interventional query). | train | [
"Bke9MQTKsr",
"H1xZ7kxqsr",
"HJgflT0FiB",
"SJeAAjm19r",
"BklV1x0J5H",
"ryejqDjqtS",
"BkerW5tNqH",
"HyxTgDBx9H"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We would like to thank the reviewer for taking the time to provide thoughtful and constructive feedback on our paper. We addressed all your comments and believe it made our paper better in the process. \n\n1. Definition:\nThank you very much for spotting the discrepancy in our two definitions of importance. There ... | [
-1,
-1,
-1,
6,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
3,
5,
4,
-1,
-1
] | [
"SJeAAjm19r",
"BklV1x0J5H",
"ryejqDjqtS",
"iclr_2020_HygDF1rYDB",
"iclr_2020_HygDF1rYDB",
"iclr_2020_HygDF1rYDB",
"HyxTgDBx9H",
"iclr_2020_HygDF1rYDB"
] |
iclr_2020_rygvFyrKwH | Adversarial Robustness as a Prior for Learned Representations | An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations. | reject | The paper proposes recasting robust optimization as regularizer for learning representations by neural networks, resulting e.g. in more semantically meaningful representations.
The reviewers found that the claimed contributions were well supported by the experimental evidence. The reviewers noted a few minor points regarding clarity that seem to have been addressed. The problems addressed are very relevant to the ICLR community (representation learning and adversarial robustness).
However, the reviewers were not convinced by the novelty of the paper. A big part of the discussion focused on prior work by the authors that is to be published at NeurIPS. That paper was not referenced in the manuscript but does reduce the novelty of the present submission. In contrast to the current submission, that paper focuses on manipulating the learned representations to solve image generation tasks, whereas the current paper focuses on the underlying properties of the representation. Since the underlying phenomenon had been described in the earlier paper and the current submission does not introduce a new approach / algorithm, the paper was deemed to lack the novelty required for acceptance to ICLR.
| train | [
"SkefKjwjiB",
"HyxZF9njsS",
"rke78YtiiS",
"HyerEUFsoH",
"Bklg_o8TFS",
"HkeXjWKiiS",
"rkljGH8aKB",
"r1lFLcnDjH",
"B1x0RYdPor",
"r1xY0VPwjS",
"Bke8kM9Ljr",
"SJeuKWcIjr",
"rJeVkb58sH",
"ryerslqLsr",
"rylBUSKFYH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"I've updated my review (See \"Update 2\"). \n\n** Response edited ",
"I see. I presumed that you set the response to private by mistake. I personally don't think there is anything wrong with revealing this information to the public, but I will discuss it with the area chair during the post rebuttal discussion pe... | [
-1,
-1,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"rkljGH8aKB",
"HyerEUFsoH",
"HkeXjWKiiS",
"SkefKjwjiB",
"iclr_2020_rygvFyrKwH",
"rJeVkb58sH",
"iclr_2020_rygvFyrKwH",
"B1x0RYdPor",
"r1xY0VPwjS",
"ryerslqLsr",
"iclr_2020_rygvFyrKwH",
"rylBUSKFYH",
"Bklg_o8TFS",
"rkljGH8aKB",
"iclr_2020_rygvFyrKwH"
] |
iclr_2020_ryevtyHtPr | Do Deep Neural Networks for Segmentation Understand Insideness? | Image segmentation aims at grouping pixels that belong to the same object or region. At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the "insideness" problem. Many Deep Neural Networks (DNNs) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness? How do architectural choices affect the learning of these representations? In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. determining the inside of closed (Jordan) curves. We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve. Yet, only recurrent networks could learn these general solutions when the training enforced a specific "routine" capable of breaking down the long-range relationships. Our results highlights the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness. | reject | This paper investigates a notion of recognizing insideness (i.e., whether a pixel is inside a closed curve/shape in the image) with deep networks. It's an interesting problem, and the authors provide analysis on the limitations of existing architectures (e.g., feedforward and recurrent networks) and present a trick to handle the long-range relationships. While the topic is interesting, the constructed datasets are quite artificial and it's unclear how this study can lead to practically useful results (e.g., improvement in semantic segmentation, etc.). | train | [
"rJgEyzYwir",
"SJgAmWFwoS",
"rkggwfYwsr",
"SJeRkVKDoB",
"BJgVUXYPor",
"BJxYNXtDsS",
"r1lxfXKwsB",
"HyeXzhdjYS",
"HkgS6FcY9H",
"r1ldz9K69r"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. “Usefulness of learning insideness to improve segmentation”\n \nWe agree with R#5 that segmentation in natural images may involve different cues than insideness. This was commented in the introduction: \"[in semantic segmentation benchmarks], insideness is not necessary since a solution can rely only on o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"r1ldz9K69r",
"r1ldz9K69r",
"r1ldz9K69r",
"HyeXzhdjYS",
"HkgS6FcY9H",
"HkgS6FcY9H",
"HkgS6FcY9H",
"iclr_2020_ryevtyHtPr",
"iclr_2020_ryevtyHtPr",
"iclr_2020_ryevtyHtPr"
] |
iclr_2020_HJg_tkBtwS | Model-Agnostic Feature Selection with Additional Mutual Information | Answering questions about data can require understanding what parts of an input X influence the response Y. Such an understanding can be built by testing relationships between variables through a machine learning model. For example, conditional randomization tests help determine whether a variable relates to the response given the rest of the variables. However, randomization tests require users to specify test statistics. We formalize a class of proper test statistics that are guaranteed to select a feature when it provides information about the response even when the rest of the features are known. We show that f-divergences provide a broad class of proper test statistics. In the class of f-divergences, the KL-divergence yields an easy-to-compute proper test statistic that relates to the AMI. Questions of feature importance can be asked at the level of an individual sample. We show that estimators from the same AMI test can also be used to find important features in a particular instance. We provide an example to show that perfect predictive models are insufficient for instance-wise feature selection. We evaluate our method on several simulation experiments, on a genomic dataset, a clinical dataset for hospital readmission, and on a subset of classes in ImageNet. Our method outperforms several baselines in various simulated datasets, is able to identify biologically significant genes, can select the most important predictors of a hospital readmission event, and is able to identify distinguishing features in an image-classification task. | reject | The paper presents an approach to feature selection. Reviews were mixed and questioned whether the paper has enough substance and novelty, whether the theoretical contributions are correct, whether sufficient experimental details are provided, and whether the paper adequately compares to the relevant literature. | val | [
"rkgGvZb3KS",
"BkxJ_TjqoB",
"SyxvyrNtiH",
"S1l9qV4YjH",
"B1gDv4VYiB",
"S1x8sQEKiB",
"H1xZGoOhFS",
"r1eV9y2k5r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a practical improvement of the conditional randomization test (CRT) of (Candes et al., 2018).\nIn the study of (Candes et al., 2018), the choice of the test statistic as well as how one estimates conditional distributions were kept open.\nThe authors proposed \"proper test statistic\" as a prom... | [
6,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_HJg_tkBtwS",
"B1gDv4VYiB",
"rkgGvZb3KS",
"H1xZGoOhFS",
"H1xZGoOhFS",
"r1eV9y2k5r",
"iclr_2020_HJg_tkBtwS",
"iclr_2020_HJg_tkBtwS"
] |
iclr_2020_S1g_t1StDB | Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following | Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality. These properties make it a natural fit to guide the training of interactive agents as it could ease recurrent challenges in Reinforcement Learning such as sample complexity, generalization, or multi-tasking. Yet, it remains an open-problem to relate language and RL in even simple instruction following scenarios. Current methods rely on expert demonstrations, auxiliary losses, or inductive biases in neural architectures. In this paper, we propose an orthogonal approach called Textual Hindsight Experience Replay (THER) that extends the Hindsight Experience Replay approach to the language setting. Whenever the agent does not fulfill its instruction, THER learn to output a new directive that matches the agent trajectory, and it relabels the episode with a positive reward. To do so, THER learns to map a state into an instruction by using past successful trajectories, which removes the need to have external expert interventions to relabel episodes as in vanilla HER. We observe that this simple idea also initiates a learning synergy between language acquisition and policy learning on instruction following tasks in the BabyAI environment. | reject | Two reviewers are borderline and one recommends rejection. The main criticism is the simplicity of language, scalability to a more complex problem, and questions about experiments. Due to the lack of stronger support, the paper cannot be accepted at this point. The authors are encouraged to address the reviewer's comments and resubmit to a future conference. | train | [
"rJgNUxNioS",
"S1g3tu7sor",
"SkxjbO7ijr",
"rkl5CIQsiH",
"HkgvFvmior",
"rklMdumoor",
"BkgdyumisB",
"r1ld3LQsor",
"Hyx1kN-K_r",
"BJlpIE-6tr",
"ByxCLQLRFS"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We want to thank the reviewers again for their comments and questions. \n\nFollowing the reviewer feedback, we made the following updates to the paper:\n - Minor changes in the introduction following Reviewer 2 comments\n - Explicit a validation procedure in Section 3 and Appendix to avoid ill-trained goal instru... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2020_S1g_t1StDB",
"rklMdumoor",
"HkgvFvmior",
"r1ld3LQsor",
"BJlpIE-6tr",
"Hyx1kN-K_r",
"BJlpIE-6tr",
"ByxCLQLRFS",
"iclr_2020_S1g_t1StDB",
"iclr_2020_S1g_t1StDB",
"iclr_2020_S1g_t1StDB"
] |